So this is an example of how doing a little research before you start coding some automation really pays off.
Of course I can write my own FTP crawler in Tcl!
But why bother, when it's easier to copy a whole FTP server recursively in just two lines?
First, install curlftpfs (a Debian-based Linux distro is assumed):
sudo apt-get install curlftpfs
Let's say we want to make a copy of all the Project Gutenberg books, for safekeeping and archiving purposes.
What curlftpfs does is create a virtual filesystem with a mount point to the FTP site. This means we can access it as if we were accessing any other file on our PC. The true and great power of Linux is thus shown in the following commands:
sudo curlftpfs ftp://aleph.gutenberg.org/ /mnt/gutenbergftp
sudo cp -Ruv /mnt/gutenbergftp /home/lostone/Gutenberg
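One thing to keep in mind: the mount point has to exist before you mount, and since curlftpfs is FUSE-based, you detach it with fusermount when you're done. A minimal sketch, using the same mount point path as above:

sudo mkdir -p /mnt/gutenbergftp      # the mount point must exist before mounting
sudo fusermount -u /mnt/gutenbergftp # unmount once the copy is finished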
Something similar can be done in Tcl using the Tcl VFS.
The Tcl VFS (Virtual FileSystem) is a very powerful thing which lets you mount many remote filesystems as if they were local:
- http
- ftp
- tar files
- webdav
- zip files
- And many others with extra extensions, like ssh, LZW, delta virtual filesystems, etc.
Think of the following FTP example:
package require vfs::urltype
vfs::urltype::Mount ftp
file copy ftp://foo.bar.com/pub/Readme .
file copy ftp://user:password@foo.bar.com/private.txt .
You can find out more at: https://wiki.tcl.tk/12832 and https://wiki.tcl.tk/2466
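Putting this together, the same Gutenberg copy could be done in pure Tcl with a vfs::ftp mount. A minimal sketch, assuming the tclvfs package is installed (the virtual mount path and local target directory are just examples):

package require vfs::ftp
# mount the remote FTP server at a virtual local path
vfs::ftp::Mount ftp://aleph.gutenberg.org/ /gutenbergftp
# file copy works recursively on directories, virtual or not
file copy /gutenbergftp /home/lostone/Gutenberg
# detach the virtual mount when done
vfs::unmount /gutenbergftp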
Cron jobs
Of course, you can also set up a cron job to update the data once in a while. But test everything out before writing any cron job, OK?
First mount the FTP server you want to write the backups to (the Gutenberg server is read-only, so this has to be a server you have write access to; the URL below is just a placeholder):
curlftpfs ftp://user:password@my.backup.server/ /mnt/backup
Then set this to run in cron: cp -R -n /i/wanna/backup/this /mnt/backup/ > /dev/null
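For example, a crontab entry (added with crontab -e) that runs the copy every night at 2 AM might look like this; a minimal sketch, assuming /mnt/backup is already mounted and the schedule is just an example:

# m h dom mon dow  command
0 2 * * * cp -R -n /i/wanna/backup/this /mnt/backup/ > /dev/null 2>&1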