May 26th, 2020
Systematic Web Scraping

It helps to think of a web crawler as a system rather than as a piece of code.

This shift in perspective is important, and it will be forced on any developer who attempts web scraping at scale.

It is also one of the best ways to learn to think in systems.

We can see the whole crawling process as a workflow with multiple possible points of failure. In fact, any place where the scraper depends on an external resource is a place where it can, and eventually will, fail. So 90% of the developer's time is spent fixing these inevitable issues, bit by bit.
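To make that concrete, here is a minimal sketch of a single fetch-and-parse step (using requests and BeautifulSoup, which the article does not specify), with comments marking the external dependencies where it can fail:

```python
# A minimal sketch, not production code: each comment marks a point where
# the step depends on something external and can break.
import requests
from bs4 import BeautifulSoup

def fetch_and_parse(url):
    # Failure point 1: the network and the target server (timeouts, DNS
    # errors, connection resets, redirects to block pages).
    response = requests.get(url, timeout=10)

    # Failure point 2: the server's response itself (429/403 from IP blocks,
    # CAPTCHA interstitials, 5xx errors).
    response.raise_for_status()

    # Failure point 3: the page structure (markup changes silently break
    # selectors).
    soup = BeautifulSoup(response.text, "html.parser")
    title = soup.select_one("h1")
    return title.get_text(strip=True) if title else None
```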

At Proxies API, we went through the drudgery of not thinking about web scraping systematically until one day we took a step back and identified the central problem: the code was never the problem; the whole thing didn't work as a system. We finally settled on our own set of rules for building crawlers that work systematically.

Here are the rules that the system has to obey:

  1. Handle fetching issues (timeouts, redirects, headers, browser spoofing, CAPTCHAs, and IP blocks).
  2. Where the crawler doesn't have a solution for an issue (for example, CAPTCHAs), it should at least handle and log it.
  3. The system should be able to "step over" any issue rather than stumble and let one failure bring everything down with it.
  4. The system should immediately alert the developer about an issue.
  5. The system should help the developer diagnose the latest issue quickly, with as much context as possible, so it is easily reproducible.
  6. The system should be as generic as possible at the code level and should push individual website logic to an external database as much as possible.
  7. The system should have enough levers to control the speed and scale of the crawl.
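As a rough illustration of rules like these (a sketch only; the function names, the alert hook, and the per-site config structure are assumptions, not the actual Proxies API code), the skeleton below steps over individual failures, logs them with context, alerts the developer, pulls per-site settings from an external source, and exposes delay and retry levers:

```python
import logging
import time
import requests

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("crawler")

def alert_developer(message):
    # Placeholder: in a real system this might email, Slack, or page someone.
    log.error("ALERT: %s", message)

def crawl(urls, site_config):
    """Crawl a list of URLs, obeying per-site settings loaded from an
    external source (database, config file) rather than hard-coded logic."""
    results = []
    for url in urls:
        for attempt in range(site_config["max_retries"]):
            try:
                response = requests.get(
                    url,
                    headers=site_config["headers"],
                    timeout=site_config["timeout"],
                )
                response.raise_for_status()
                results.append((url, response.text))
                break
            except requests.RequestException as exc:
                # Rule: log the failure with enough context to reproduce it.
                log.warning("Fetch failed (%s, attempt %d): %s", url, attempt + 1, exc)
        else:
            # Rule: step over the failure instead of bringing the crawl down,
            # but alert the developer immediately.
            alert_developer(f"Giving up on {url} after {site_config['max_retries']} attempts")
        # Rule: a lever to control crawl speed.
        time.sleep(site_config["delay_seconds"])
    return results

# Example per-site configuration, the kind of thing pushed out of the code
# and into an external database.
example_config = {
    "max_retries": 3,
    "timeout": 10,
    "delay_seconds": 2,
    "headers": {"User-Agent": "Mozilla/5.0 (compatible; MyCrawler/1.0)"},
}
```

The point of the sketch is the shape, not the details: every rule shows up as a small, boring piece of plumbing around the fetch call rather than as cleverness inside it.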
