Commit 122541b

Issue 21469: Mitigate risk of false positives with robotparser.
* Repair the broken link to norobots-rfc.txt.
* HTTP response codes >= 500 are treated as a failed read rather than as "not found". Not found means we can assume the entire site is allowed; a 5xx server error tells us nothing.
* A successful read() or parse() updates the mtime (which is defined to be "the time the robots.txt file was last fetched").
* The can_fetch() method returns False unless we've had a read() with a 2xx or 4xx response. This avoids false positives in the case where a user calls can_fetch() before calling read() (a usage sketch follows below).
* I don't see any easy way to test this patch without hitting internet resources that might change, or without mock objects that wouldn't provide much reassurance.
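A minimal usage sketch of the behaviour described above (the URL is a hypothetical placeholder; the exact results of read() depend on the site's robots.txt and its response code):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")  # hypothetical URL

# Before read(), nothing has been fetched, so can_fetch() now refuses
# every URL instead of returning a false positive.
print(rp.can_fetch("MyBot", "http://www.example.com/page.html"))  # False
print(rp.mtime())                                                 # 0

rp.read()  # a 2xx or 4xx response unblocks can_fetch(); a 5xx leaves it blocked
print(rp.mtime() > 0)  # True only if robots.txt was actually fetched and parsed
print(rp.can_fetch("MyBot", "http://www.example.com/page.html"))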
1 parent 73308d6 commit 122541b

1 file changed

Lines changed: 9 additions & 2 deletions

File tree

Lib/urllib/robotparser.py

@@ -7,7 +7,7 @@
     2) PSF license for Python 2.2
 
     The robots.txt Exclusion Protocol is implemented as specified in
-    http://info.webcrawler.com/mak/projects/robots/norobots-rfc.html
+    http://www.robotstxt.org/norobots-rfc.txt
 """
 
 import urllib.parse, urllib.request
@@ -57,7 +57,7 @@ def read(self):
         except urllib.error.HTTPError as err:
             if err.code in (401, 403):
                 self.disallow_all = True
-            elif err.code >= 400:
+            elif err.code >= 400 and err.code < 500:
                 self.allow_all = True
         else:
             raw = f.read()
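For context, the error handling in read() after this change looks roughly like the following (paraphrased from the hunk above and the surrounding code; not the complete method):

    def read(self):
        try:
            f = urllib.request.urlopen(self.url)
        except urllib.error.HTTPError as err:
            if err.code in (401, 403):
                # Access denied: assume nothing on the site may be fetched.
                self.disallow_all = True
            elif err.code >= 400 and err.code < 500:
                # 4xx "not found" family: no robots.txt, everything is allowed.
                self.allow_all = True
            # A 5xx server error falls through: neither flag is set, the mtime
            # stays at 0, and can_fetch() keeps refusing requests.
        else:
            raw = f.read()
            self.parse(raw.decode("utf-8").splitlines())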
@@ -85,6 +85,7 @@ def parse(self, lines):
         state = 0
         entry = Entry()
 
+        self.modified()
         for line in lines:
             if not line:
                 if state == 1:
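The effect of the added self.modified() call, shown in a small sketch that feeds parse() directly so no network access is needed (the rules here are made up for illustration):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
print(rp.mtime())  # 0: the "last fetched" time has not been set yet

# With this commit, parse() stamps the mtime just as a successful read() does.
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])
print(rp.mtime() > 0)  # True
print(rp.can_fetch("MyBot", "http://www.example.com/private/x"))   # False
print(rp.can_fetch("MyBot", "http://www.example.com/index.html"))  # True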
@@ -129,6 +130,12 @@ def can_fetch(self, useragent, url):
             return False
         if self.allow_all:
             return True
+        # Until the robots.txt file has been read or found not
+        # to exist, we must assume that no url is allowable.
+        # This prevents false positives when a user erroneously
+        # calls can_fetch() before calling read().
+        if not self.last_checked:
+            return False
         # search for given user agent matches
         # the first match counts
         parsed_url = urllib.parse.urlparse(urllib.parse.unquote(url))
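One detail worth noting in the hunk above: the new last_checked guard sits after the disallow_all/allow_all checks, so a read() that ended in a 4xx response (which sets allow_all) still short-circuits to True even though no rules were ever parsed. A small sketch, emulating a 4xx read by setting the flag directly (an internal attribute, touched here only for illustration):

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
print(rp.can_fetch("MyBot", "http://www.example.com/a"))  # False: nothing fetched yet

rp.allow_all = True  # what read() does after e.g. a 404 response
print(rp.can_fetch("MyBot", "http://www.example.com/a"))  # True: allow_all wins before the guard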
