Hello, I would like to make a browsable copy of a website on my hard drive in HTML/HTM, including all of its subdirectory structure, its files (text, graphics, sound), its links (as well as links pointing to other servers), and most importantly, its search engine, so that when I type something into the “search index” box, it finds the appropriate page from that website stored on my hard drive. The website I’m interested in is: http://memory-alpha.org/en/wiki/Portal:Main What would be the best tool/program for this task? Thanks in advance.
Older versions of Internet Explorer had an option to make websites available offline. With that, you could get most of the links and such to work, depending on how many levels of pages you chose to copy to the hard drive. I just found out that IE 7 no longer does this. If you do a search for viewing websites offline, you can find software that can do it. Here is a link to some freeware that just might be what you are looking for: Wysigot Light
Well, if I can't get the search engine it will be no good, as that's what makes that site so cool and manageable. Will I be able to get the search engine with any of the following tools?

HTTrack (free): http://www.httrack.com/
Offline Explorer Pro: http://www.metaproducts.com/mp/Offline_Explorer_Pro.htm
Offline Commander: http://www.zylox.com/
wget (free): http://www.gnu.org/software/wget/#downloading
Teleport Pro / Ultra (commercial): http://www.tenmax.com/teleport/pro/
Wysigot Light: http://www.snapfiles.com/reviews/Wysigot_Light/wysigot.html
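For what it's worth, here is the wget command I was planning to try, pieced together from the wget manual, so treat it as a sketch rather than a tested recipe:

    # Mirror the site recursively, fetch the images/CSS each page needs,
    # and rewrite links so they work from the local copy:
    wget --mirror --convert-links --page-requisites --no-parent \
         http://memory-alpha.org/en/wiki/Portal:Main

From what I read, following links that point to other servers additionally needs --span-hosts (-H), ideally with --domains to limit it, or the crawl can wander off across half the web.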
Well, you could save the site page by page, but the search engine is designed to search files on a server on the internet, not folders on your personal computer. The search box just sends your query to the server, which runs it against the site's database, so a mirrored copy of the pages won't include any of that machinery. Making it work locally would require a healthy amount of recoding, which isn't really worth it and (from what I know) isn't practically feasible.
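That said, if all you need once the site is on your disk is a crude stand-in for the search box, you can search the downloaded files directly. A rough sketch, assuming the mirror ended up in a memory-alpha.org/ folder (the search term is just an example):

    # List every local page whose contents mention the term, case-insensitively:
    grep -r -i -l "Jean-Luc Picard" memory-alpha.org/

It's nothing like the wiki's real search (no ranking, and it matches raw HTML markup as well as the visible text), but it will find the pages.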