rsync -avz user@remote-host:/path/to/dir .
It's a public website.
Try wget with --reject jpg,png --accept html.

To clarify, that should recursively go through every link and fetch the data. The fact it is a "website" is not quite so important. I'm not bothered about copying a website.
What I wanted was to just get the files from
Adding to what Alain De Vos mentions, the wget manpage also mentions that you can use the -R flag to filter out the stuff you don't need... And wget is available for FreeBSD as ftp/wget...

I always use fetch to retrieve files and forget about wget. It would be nice if fetch could do recursive retrieval.
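For that kind of retrieval, a wget invocation along those lines, recursive with the accept/reject filtering mentioned above, might look like this (the URL and depth are placeholders, not taken from the thread):

wget --recursive --level=2 --no-parent --accept html --reject jpg,png https://example.org/path/to/dir/

--no-parent keeps wget from wandering up the directory tree; -R, -A and -np are the short forms of --reject, --accept and --no-parent.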
fetch(1) doesn't do recursive retrieval on its own; read up on the manpage. You can, however, write a .sh script that implements recursive retrieval using fetch(1). For recursive retrieval, use wget.
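A minimal sketch of such a script, assuming fetch's -o - writes the page to standard output (per fetch(1)) and that the index page exposes plain href="..." links; the URL is a placeholder and the sed line is a crude link extractor, not a real HTML parser:

#!/bin/sh
# One-level "recursive" retrieval with fetch(1); a sketch, not a full wget replacement.
url="https://example.org/files/"   # placeholder starting URL

# Grab the index page on stdout and pull out href targets.
fetch -q -o - "$url" |
    sed -n 's/.*href="\([^"]*\)".*/\1/p' |
    while read -r link; do
        case "$link" in
            ../*|/*|*://*) continue ;;   # skip parent, absolute-path and external links
        esac
        fetch "${url}${link}"            # resolve relative links against the index URL
    done

Going more than one level deep would mean repeating the same extract-and-fetch step on each retrieved HTML page, which is exactly the bookkeeping wget already does for you.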