

NoStopRobot - a robot doesn't stop, but remembers where it has been




  use NoStopRobot;

  my $ua = NoStopRobot->new(...);



This module implements a user agent which remembers where it has been, and when, so that the user can avoid visiting a site too frequently; it does not actually implement the wait itself.


The robot logic implemented here is somewhat more aggressive than that implemented in LWP::RobotUA. We never actually sleep in any of the functions here. This means that if a request is initiated, it will complete with robot checks and redirects all in one go.

Instead, the user should implement waits outside the module using the `host_wait()' method. The key benefit of this is that it is possible to check which request can be run first and reorder requests to run as fast as desired, whilst maintaining a good load spread between different sites.
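The pattern described above can be sketched as a scheduling loop: pick the queued request whose host can be visited soonest, sleep only as long as needed, then dispatch. This is a sketch, not the module's documented usage; it assumes NoStopRobot's constructor takes the same positional arguments as LWP::RobotUA, and that `host_wait()' takes a netloc string as it does there:

```perl
use strict;
use warnings;
use HTTP::Request;
use NoStopRobot;   # assumed installed; not a core or CPAN-standard name

my $ua = NoStopRobot->new('my-bot/0.1', 'me@example.org');

my @queue = map { HTTP::Request->new(GET => $_) }
            'http://example.org/a', 'http://example.net/b';

while (@queue) {
    # Pick the request whose host we may visit soonest (assumed
    # signature: host_wait($netloc) returns seconds from now).
    my ($best) = sort {
        $ua->host_wait($a->uri->host_port) <=> $ua->host_wait($b->uri->host_port)
    } @queue;

    # The module never sleeps for us, so we sleep here, outside it.
    my $wait = $ua->host_wait($best->uri->host_port);
    sleep $wait if $wait && $wait > 0;

    my $res = $ua->simple_request($best);
    @queue = grep { $_ != $best } @queue;
}
```

Because the wait happens in the caller's loop, requests to different hosts can be interleaved so that no single host is hammered while the overall queue drains quickly.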

Secondly (and as a direct consequence), if multiple requests to different sites end up redirecting to the same site, the wait-time logic will not guard against this. This is reasonable, since each request can be considered a separate request to a separate site.


Because LWP::RobotUA collapses completely when called with URLs other than HTTP, this module is implemented on top of LWP::UserAgent (via LWP::Auth_UA) rather than as a subclass of LWP::RobotUA.

robot_check - given a URL, carries out all the actions needed to check whether a request to that URL would be allowed by the robot rules, but doesn't actually send a request to the URL itself. Thus, if the function host_wait is subsequently called, it will accurately reflect the time before a request can be made to that URL.
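The two methods are designed to combine as follows; this is a sketch, and the exact argument forms (a URL string for robot_check, a netloc for host_wait) are assumptions based on the descriptions above:

```perl
# Fetch/parse robots.txt for this URL's host without requesting the URL...
$ua->robot_check('http://example.org/page.html');

# ...so host_wait() now reflects the true delay for that host.
my $seconds = $ua->host_wait('example.org:80');
sleep $seconds if $seconds && $seconds > 0;
```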


simple_request carries out one HTTP request. It performs robot checks to ensure that the request is permitted; however, in contrast to LWP::RobotUA, it never sleeps. It merely records which sites it visits.
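simple_request() is the standard LWP::UserAgent entry point for a single request without redirect handling; here it additionally records the visit. A minimal usage sketch (assuming $ua is a NoStopRobot instance as constructed in the SYNOPSIS):

```perl
use HTTP::Request;

my $req = HTTP::Request->new(GET => 'http://example.org/robots.txt');
my $res = $ua->simple_request($req);   # checks rules, records visit, never sleeps
print $res->code, "\n";
```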

N.B. there is one theoretical hole in this logic: if multiple requests are redirected to the same site, it is possible for us to check the rules for each original site and yet hit the redirect target more often than its wait time would normally allow.

This function is like host_wait, but with two differences. Firstly, it should be called with a URL (string or object). Secondly, it works for any URL (actually URI), but returns undef for URLs from which a netloc cannot be derived.

Returns the number of seconds (from now) you must wait before you can make a new request to this host.

Sets a regular expression matching links for which the robot agent should not wait. Typically these would be local pages, or servers in the same organisation as the one carrying out the link checking.
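The method heading for this entry was lost in extraction, so the method name below (no_wait) is purely hypothetical; only the idea, passing a regular expression that matches our own servers, comes from the text:

```perl
# Hypothetical method name -- skip wait-time accounting for links
# anywhere under our own organisation's domain.
$ua->no_wait(qr{^https?://[^/]*\.example\.org/});
```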

$ua = LWP::RobotUA->new($agent_name, $from, [$rules])
Your robot's name and the mail address of the human responsible for the robot (i.e. you) are required by the constructor.

Optionally it allows you to specify the WWW::RobotRules object to use.

Set the minimum delay between requests to the same server. The default is 1 minute.

Get/set a value indicating whether the UA should sleep() if requests arrive too fast (before $ua->delay minutes have passed). The default is TRUE. If this value is FALSE, then an internal SERVICE_UNAVAILABLE response will be generated. It will have a Retry-After header that indicates when it is OK to send another request to this server.
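With use_sleep set to FALSE, the caller handles throttling itself by inspecting the synthetic 503 response. A sketch against the real LWP::RobotUA API (the URL is a placeholder):

```perl
use LWP::RobotUA;

my $ua = LWP::RobotUA->new('my-bot/0.1', 'me@example.org');
$ua->delay(1);        # minutes between requests to the same server
$ua->use_sleep(0);    # return 503 + Retry-After instead of sleeping

my $res = $ua->get('http://example.org/');
if ($res->code == 503 && defined(my $after = $res->header('Retry-After'))) {
    # Reschedule this request rather than blocking the whole agent.
    warn "throttled; retry in $after seconds\n";
}
```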

Set/get which WWW::RobotRules object to use.

Returns the number of documents fetched from this server host. Yes I know, this method should probably have been named num_visits() or something like that. :-(

Returns a string that describes the state of the UA. Mainly useful for debugging.