Spider a website with wget

This command might be useful if you want to auto-generate the Boost module cache files on a Drupal site:

wget -r -l4 --spider -D thesite.com http://www.thesite.com

Let's analyse the options...

-r indicates it's recursive (so "follow the links" and look for more than one page)

-l indicates the number of levels we want to recurse. If you are on the first page and you follow a link, you are at level 1. If you follow a link on that page, you are at level 2, and so on.
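For example, this hypothetical variant only follows links two levels deep from the start page, which is handy when you just want a quick pass over the main sections:

wget -r -l2 --spider -D thesite.com http://www.thesite.com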

--spider indicates not to download anything (we just want to go through the pages, that's all)
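--spider can also be used on its own, without recursion; for instance, the following just checks whether a single page responds (the path here is made up) and saves nothing:

wget --spider http://www.thesite.com/some-page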

-D indicates the list (separated by commas) of domains we consider acceptable to "spider" (that is, if a link points to "hello.com", we won't follow it)
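If your pages also pull content from a second domain you control (say, a hypothetical static.thesite.com serving images and CSS), you can list both:

wget -r -l4 --spider -D thesite.com,static.thesite.com http://www.thesite.com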

This will create a hierarchy of directories under the one where you run the command, but it's mostly a record of where wget has been; it doesn't store any page content (as per the "--spider" option). If your site takes a while to deliver pages, you might want to set the timeout to something like 20 seconds. Although the wget documentation seems to say that the default is 900 seconds, for some reason it tends to give up earlier in my case. You might also want to "fake" your user agent in case you have a website that reacts to mobile phones (in this case we simulate an iPhone):

 wget -r -l4 --spider --delete-after --user-agent="iOS 4_3 - iPhone - Safari 533.17.9" --timeout=20 -D m.thesite.com http://www.thesite.com

The --delete-after option tells wget to delete each file right after downloading it. Apparently this matters if you're going through a proxy: the page will be stored in the proxy cache and, next time you request it, it will come from there (as far as I understand it). In my case, it is not necessary.
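If the point of all this is to keep the Boost cache warm, a natural next step is to run the command from cron. A minimal sketch, assuming a nightly run at 3am and wget living in /usr/bin (the schedule and path are just examples, adjust to your setup; -q keeps cron from mailing you the whole log):

0 3 * * * /usr/bin/wget -q -r -l4 --spider --delete-after --timeout=20 -D thesite.com http://www.thesite.com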

Comments

In reply to a comment by YW


You're right, thanks for that. Wget has changed a lot since the writing of this article. You should be able to find what you're looking for by searching a little for "wget --page-requisites", but I'm not sure it enables you to spider a complete website anymore. Let us know!
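If you just want a local copy of a complete page or site rather than a pure spider run, something along these lines might be a starting point (a sketch only, not tested against any particular site):

wget --mirror --page-requisites --convert-links -D thesite.com http://www.thesite.com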

This looks like an interesting solution: http://www.linuxforums.org/forum/applications/145133-get-complete-webpa…