Here I am at the University of Minnesota, and I find out that I need to download a huge 3.4 GB file. I don’t need it now, but I know I’ll need it eventually. What do I do? I ssh to my server at home and start up wget. But I know from experience that the download will die if I close the terminal, and since I have class, that will happen quite soon. And this download will take hours.
wget is smart enough, though, to offer a background option that decouples it from the terminal process that started it.
Startup:
  -V,  --version           display the version of Wget and exit.
  -h,  --help              print this help.
  -b,  --background        go to background after startup.
  -e,  --execute=COMMAND   execute a `.wgetrc'-style command.
This lets me freely drop the connection, and the download will still continue at home without me. But what about watching its progress? Ever heard of tail?
Usage: tail [OPTION]... [FILE]...
Print the last 10 lines of each FILE to standard output.
There’s another option, -f, that follows the file as data is appended to it: after the initial 10 lines, each new line is printed to the screen as soon as it arrives. And when you started the background wget, you were also told the name of its log file: “wget-log”.
So just run tail -f wget-log
and you’ll see the progress of your supermassive download, fully decoupled from your terminal session! It’s fantastic.
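If you want to see the whole pattern end to end without waiting on a real 3.4 GB download, here’s a minimal sketch: a short background loop stands in for wget -b, and job.log stands in for wget-log (both names are made up for the demo). tail’s --pid option (GNU coreutils) makes it exit once the background job finishes, so the demo ends on its own.

```shell
# A short background job plays the role of "wget -b"; it appends to
# job.log the way wget appends progress lines to wget-log.
(for i in 1 2 3; do echo "chunk $i downloaded"; sleep 0.2; done) > job.log &
JOB_PID=$!

# Follow the log as it grows; --pid tells tail to quit when the job ends.
tail -f --pid="$JOB_PID" job.log
```

With a real download you’d skip the --pid flag and just Ctrl-C out of tail whenever you’ve seen enough; the download itself keeps going.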
Actually, everything has a background option of sorts…
nohup
It, too, decouples a process from the terminal: technically, it makes the program ignore the hangup signal (SIGHUP) that is sent when the terminal/SSH session ends. For me, nohup was critical for streaming music over Icecast.
Alternatively, have a look at the program “screen”: http://linux.die.net/man/1/screen