May 25th, 2020
Will Your Web Scraper Perform Under These Conditions? Here Is a Checklist

Web crawling operations are like navigating a ship through a storm. Apart from your code, almost nothing is under your control. Being well prepared is half the battle.

Here is a set of conditions you are likely to face, and will need to be ready for, before you set sail.

  1. The target web servers will go down.
  1. The target website will time out on your fetches.
  1. You will find connections hanging and eating resources because your crawler has hit an unusually large file (see the fetch sketch after this list).
  1. The web server will block your crawler because it can't identify it as a browser (see the headers-and-proxy sketch below).
  1. The website might ban your IP address.
  1. The website might restrict access at the speeds you want, so some of your requests will fail. It might also temporarily restrict all access.
  1. The website might throw a CAPTCHA challenge.
  1. The robots.txt file will be different from what you expect (see the robots.txt sketch below).
  1. The website might change its markup, breaking your scraping code such as CSS selectors or XPaths.
  1. It might link to external websites, so your crawler veers way off course.
  1. The images and documents you want may sit on a CDN, and your external-domain restrictions might mean you won't crawl them (see the domain-filter sketch below).
  1. Your crawler will hang because of the load and unexpected behaviors.
  1. Your crawler will suffer memory overloads.
  1. You will have problems handling large amounts of data. For example, you might be storing all your files in a single folder and, after a few weeks, have millions of them, making them a nightmare to manage (see the storage sketch below).
  1. You will run out of resources as you ask more from your crawler over time: CPU, memory, network bandwidth, and even storage space.
  1. Some websites are just too large, and without a crawl policy, you will be stuck collecting data you don't want.
  1. Your crawler keeps breaking, but you have no idea where among the thousands of links it is fetching, because you have not built a sufficiently informative logger (see the logging sketch below).
  1. Your crawler has been getting gibberish or no data at all in certain sections for weeks, and you didn't even notice.
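
A few minimal sketches for some of the items above follow. They are illustrations under stated assumptions, not drop-in solutions. First, the fetch sketch: a Python helper (using the `requests` library) that puts a timeout on every request, retries with exponential backoff when the server is down or throttling, and abandons responses that grow past a size cap. The retry count, timeout values, and `MAX_BYTES` limit are assumptions you should tune for your own crawl.

```python
import time

import requests

MAX_BYTES = 10 * 1024 * 1024  # assumed cap on response size; tune for your crawl
TIMEOUT = (5, 30)             # (connect, read) timeouts in seconds


class RetryableError(Exception):
    """Raised for conditions worth retrying (server down, overloaded, throttled)."""


def fetch(url, retries=3, backoff=2.0):
    """Fetch a URL with timeouts, bounded retries, and a response size cap."""
    for attempt in range(retries):
        try:
            with requests.get(url, timeout=TIMEOUT, stream=True) as resp:
                if resp.status_code in (429, 500, 502, 503, 504):
                    # The server is down, overloaded, or throttling us -- retry later.
                    raise RetryableError(f"retryable status {resp.status_code}")
                resp.raise_for_status()  # permanent errors (403, 404, ...) bubble up
                body = b""
                for chunk in resp.iter_content(chunk_size=65536):
                    body += chunk
                    if len(body) > MAX_BYTES:
                        # Give up on unusually large files instead of hanging on them.
                        raise ValueError(f"{url} exceeded {MAX_BYTES} bytes")
                return body
        except (requests.ConnectionError, requests.Timeout, RetryableError):
            if attempt == retries - 1:
                raise
            time.sleep(backoff * 2 ** attempt)  # exponential backoff between attempts
```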
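
The headers-and-proxy sketch: send realistic browser headers and route requests through a rotating proxy. The User-Agent strings are examples, and `proxy.example.com` with its credentials is a placeholder for whatever rotating proxy endpoint you actually use.

```python
import random

import requests

# Example browser User-Agent strings; rotate them so the server does not see
# the default python-requests signature on every request.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/17.0 Safari/605.1.15",
]

# Placeholder host and credentials -- substitute your rotating proxy endpoint.
PROXIES = {
    "http": "http://user:password@proxy.example.com:8080",
    "https": "http://user:password@proxy.example.com:8080",
}


def browser_like_get(url):
    headers = {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "en-US,en;q=0.9",
    }
    return requests.get(url, headers=headers, proxies=PROXIES, timeout=30)
```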
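
The robots.txt sketch: read the file before you crawl rather than assuming what it allows, using Python's standard `urllib.robotparser`. The user-agent string and URLs are placeholders.

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
rp.set_url("https://example.com/robots.txt")
rp.read()

# Check each URL against the rules the site actually publishes.
if rp.can_fetch("MyCrawler/1.0", "https://example.com/products/page-1.html"):
    print("allowed to fetch")

# Respect an explicit crawl delay if one is declared (None otherwise).
delay = rp.crawl_delay("MyCrawler/1.0")
print("crawl delay:", delay)
```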
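
The domain-filter sketch: keep page links on the target site so the crawler doesn't veer off course, while still allowing asset downloads from the CDN hosts you expect. The host names here are placeholders for your target site and its CDNs.

```python
from urllib.parse import urlparse

# Placeholder host names -- replace with your target site and its CDNs.
ALLOWED_HOSTS = {"example.com", "www.example.com"}
ASSET_HOSTS = {"cdn.example.net", "images.example-cdn.com"}


def should_follow(url, is_asset=False):
    """Follow page links only on the target site; allow assets from known CDN hosts."""
    host = urlparse(url).netloc.lower()
    if is_asset:
        return host in ALLOWED_HOSTS or host in ASSET_HOSTS
    return host in ALLOWED_HOSTS
```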
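
The storage sketch: shard downloaded files into subfolders keyed by a hash of the URL, so no single directory ends up holding millions of entries. The root folder name and the two-level layout are arbitrary choices.

```python
import hashlib
import os


def storage_path(url, root="crawl_data"):
    """Return a path under root, sharded into 256 x 256 subfolders by URL hash."""
    digest = hashlib.sha1(url.encode("utf-8")).hexdigest()
    folder = os.path.join(root, digest[:2], digest[2:4])
    os.makedirs(folder, exist_ok=True)
    return os.path.join(folder, digest + ".html")


# e.g. crawl_data/9a/3f/9a3f....html
print(storage_path("https://example.com/products/page-1.html"))
```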
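
Finally, the logging sketch: log every page with its URL, and flag records where the expected fields came back empty, which is usually the first sign of selector drift or gibberish responses. The required field names are placeholders for whatever your scrape must produce.

```python
import logging

logging.basicConfig(
    filename="crawler.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)
log = logging.getLogger("crawler")

REQUIRED_FIELDS = ("title", "price")  # placeholder fields your scrape must produce


def validate(url, record):
    """Log each parsed page by URL and warn when expected fields come back empty."""
    missing = [field for field in REQUIRED_FIELDS if not record.get(field)]
    if missing:
        log.warning("possible selector drift on %s; empty fields: %s", url, missing)
        return False
    log.info("parsed %s ok", url)
    return True
```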

There are more, but these are the starting points for putting handlers, loggers, and alerting mechanisms in place. You might have to use a rotating proxy service to overcome many of the IP blocks and other access-related problems above. We built TeraCrawler, a cloud-based crawling service, with these problems in mind; it handles all of these issues automatically behind the scenes and removes more or less 99% of the headaches of large-scale web crawling. TeraCrawler also uses our rotating proxy service under the hood to crawl almost any kind of website without getting IP banned.
