Question

I have been using the requests library to scrape this website. I haven't made many requests to it, maybe 25 within 10 minutes. All of a sudden, the website gives me a 404 error.

My question is: I read somewhere that getting a URL with a browser is different from getting a URL with something like requests, because a requests fetch does not send cookies and other things that a browser would. Is there an option in requests to emulate a browser so the server doesn't think I'm a bot? Or is this not an issue?

Answers

Basically, at least one thing you can do is send a User-Agent header:

import requests

# Pretend to be a desktop Firefox browser
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:20.0) Gecko/20100101 Firefox/20.0'}

response = requests.get(url, headers=headers)
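Since the question also mentions cookies: requests can keep cookies across requests with a Session, which behaves more like a browser in that respect. A minimal sketch, where the URL is a placeholder and the User-Agent string is truncated:

import requests

session = requests.Session()
session.headers.update({'User-Agent': 'Mozilla/5.0 ...'})  # use a full User-Agent string
response = session.get('https://example.com')  # placeholder URL
# Cookies set by the server are stored on the session and sent automatically
# on subsequent requests, much like a browser would do
response = session.get('https://example.com/another-page')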

Besides requests, you can simulate a real user by using selenium, which drives a real browser; in that case it is harder to distinguish your automated user from other users (though see edit1 below). Selenium can also make use of a "headless" browser.
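For illustration, a minimal sketch of driving headless Chrome with Selenium 4; the URL is a placeholder, and you need a Chrome install that Selenium can manage:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless')  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
try:
    driver.get('https://example.com')  # placeholder URL
    html = driver.page_source          # HTML after JavaScript has executed
finally:
    driver.quit()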

Also, check whether the website you are scraping provides an API. If there is no API, or you are not using it, make sure the site actually allows automated web crawling like this: study the Terms of Use. There is probably a reason why they block you after too many requests in a given period of time.
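If the site does allow it, a simple way to stay under a rate limit is to space out your requests. A minimal sketch; the URLs and the 5-second delay are placeholder assumptions, not known limits for any particular site:

import time
import requests

headers = {'User-Agent': 'Mozilla/5.0 ...'}  # truncated; use a full User-Agent string
urls = ['https://example.com/page1', 'https://example.com/page2']  # placeholders

for url in urls:
    response = requests.get(url, headers=headers)
    print(url, response.status_code)
    time.sleep(5)  # assumed delay; adjust to whatever the site tolerates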

Also see:

  • Sending "User-agent" using Requests library in Python
  • Headless Selenium Testing with Python and PhantomJS

edit1: selenium controls the browser through a webdriver, and the browser then exposes navigator.webdriver = true to the pages it loads (it is a JavaScript property, not a header), which can make Selenium easier to detect than requests.
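You can observe this flag yourself from Selenium; a minimal sketch with a placeholder URL:

from selenium import webdriver

driver = webdriver.Chrome()
driver.get('https://example.com')  # placeholder URL
# Pages can read this same property to detect automation
print(driver.execute_script('return navigator.webdriver'))  # typically True
driver.quit()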
