HTTrack is an offline browser utility that allows you to download a website from the Internet to a local directory, recursively building all directories and getting HTML, images, and other files from the server onto your computer. HTTrack preserves the original site's relative link structure. Simply open a page of the "mirrored" website in your browser, and you can browse the site from link to link, as if you were viewing it online. It can also update an existing mirrored site and resume interrupted downloads. It is fully configurable and has an integrated help system.
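The key to browsing a mirror offline is mapping each downloaded URL to a local file path so that the rewritten links resolve on disk. This is a minimal illustrative sketch of one such mapping scheme, not HTTrack's actual code; the function name and path conventions are assumptions for the example:

```python
import os
from urllib.parse import urljoin, urlparse

def local_path(base_url, href):
    """Map a link found on a page to a relative local file path,
    so mirrored pages can reference each other offline.
    (Hypothetical helper; HTTrack's real scheme is more elaborate.)"""
    absolute = urljoin(base_url, href)   # resolve relative links against the page URL
    parsed = urlparse(absolute)
    path = parsed.path or "/"
    if path.endswith("/"):
        path += "index.html"             # directory URLs get an index file
    return os.path.join(parsed.netloc, path.lstrip("/"))

print(local_path("http://example.com/docs/", "intro.html"))
# example.com/docs/intro.html
```

A mirroring tool repeats this for every link it discovers, fetching each target and rewriting the page's links to the resulting local paths.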
Reviewing 3.47-27 (Mar 11, 2014)
Reviewing 3.47-23 (Aug 21, 2013)
It works, but the interface settings need some restyling; they are quite poor. It wouldn't take much work to turn it into a powerful instrument for the average user.
Reviewing 3.47-22 (Aug 13, 2013)
It's a good app. It does the same job as lots of the crawlers and bots out there.
Most people will want to use this and similar apps to scrape the
web for media. A lot of similar apps will respect robots.txt, and
certain apps will allow you to configure your bot to ignore metadata,
tags, robots.txt, etc. If you don't understand website
strategies or HTML code (h**p://www.w3schools.com/), I would suggest
reading the help files and doing as much research as possible before
using crawlers, sniffers, or bots: h**p://www.httrack.com/html/abuse.html
5* for being highly configurable and a favorite.
If ya wanna sniff media, there's some OK stuff in this package: h**p://www.nirsoft.net/network_tools.html ... (~_^)
All "singin' n dancin'" with a fancy UI. But it'll cost ya.
Reviewing 3.47-15 (Jun 2, 2013)
It generally works, if you manage to understand its hundred-odd options (the default settings are not usable for me). Also, the user interface is from the stone age of computing; I remember this application looking the same 10 or 12 years ago.