
Web crawling and web scraping are essential for public data gathering. E-commerce businesses use web scrapers to collect fresh data from various websites. This information is later used to improve business and marketing strategies.
Getting blacklisted while scraping data is a common issue for those who don’t know how to crawl a website without getting blocked. We’ve gathered a list of actions that will help you avoid getting blacklisted while scraping and crawling websites.
- 1. Check robots exclusion protocol
- 2. Use a proxy server
- 3. Rotate IP addresses
- 4. Use real user agents
- 5. Set your fingerprint right
- 6. Beware of honeypot traps
- 7. Use CAPTCHA solving services
- 8. Change the crawling pattern
- 9. Reduce the scraping speed
- 10. Crawl during off-peak hours
- 11. Avoid image scraping
- 12. Avoid JavaScript
- 13. Use a headless browser
- Conclusion
Check robots exclusion protocol
Before crawling or scraping any website, make sure your target allows data gathering from its pages. Inspect the robots exclusion protocol (robots.txt) file and respect the rules of the website.
Even when the web page allows crawling, be respectful, and don’t harm the page. Follow the rules outlined in the robots exclusion protocol, crawl during off-peak hours, limit requests coming from one IP address, and set a delay between them.
However, even if the website allows web scraping, you may still get blocked, so it’s important to follow other steps, too.
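As a minimal sketch, Python’s standard library can parse robots.txt and tell you whether a given path may be fetched. The target URL and user agent string below are placeholders:

```python
# Check robots.txt before crawling; the URL and user agent are placeholders
from urllib.robotparser import RobotFileParser

robots = RobotFileParser("https://example.com/robots.txt")
robots.read()

url = "https://example.com/products"
if robots.can_fetch("MyCrawler/1.0", url):
    print("Allowed to fetch:", url)
else:
    print("Disallowed by robots.txt:", url)

# Honor the site's crawl delay if one is declared
delay = robots.crawl_delay("MyCrawler/1.0")
if delay:
    print(f"Site requests a {delay}s delay between requests")
```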

Use a proxy server
Web crawling would hardly be possible without proxies. Pick a reliable proxy service provider and choose between datacenter and residential IP proxies, depending on your task.
Using an intermediary between your device and the target website reduces IP address blocks, ensures anonymity, and allows you to access websites that might be unavailable in your region. For example, if you’re based in Germany, you may need to use a US proxy in order to access web content in the United States.
For the best results, choose a proxy provider with a large pool of IPs and a wide set of locations.
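With the popular Python `requests` library, routing traffic through a proxy takes one extra argument. A minimal sketch, where the proxy endpoint and credentials are placeholders for whatever your provider supplies:

```python
import requests

# Placeholder endpoint; substitute the address your proxy provider gives you
proxies = {
    "http": "http://username:password@proxy.example.com:8080",
    "https": "http://username:password@proxy.example.com:8080",
}

response = requests.get("https://example.com", proxies=proxies, timeout=10)
print(response.status_code)
```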

Rotate IP addresses
When you’re using a proxy pool, it’s essential that you rotate your IP addresses.
If you send too many requests from the same IP address, the target website will soon identify you as a threat and block your IP address. Proxy rotation makes you look like a number of different internet users and reduces your chances of getting blocked.
All Oxylabs Residential Proxies are rotating IPs, but if you’re using Datacenter Proxies, you should use a proxy rotator service. We also rotate IPv4 and IPv6 proxies. If you’re interested in the differences between IPv4 and IPv6, check out the article my colleague Iveta wrote.
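If your provider doesn’t rotate IPs for you, a simple client-side approach is to pick a different proxy from a pool for every request. A minimal sketch with placeholder proxy addresses:

```python
import random
import requests

# Placeholder pool; in practice this would come from your proxy provider
PROXY_POOL = [
    "http://user:pass@proxy1.example.com:8080",
    "http://user:pass@proxy2.example.com:8080",
    "http://user:pass@proxy3.example.com:8080",
]

def fetch(url):
    proxy = random.choice(PROXY_POOL)  # a different exit IP per request
    return requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=10)

for page in ["https://example.com/page/1", "https://example.com/page/2"]:
    print(fetch(page).status_code)
```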

Use real user agents
Most servers that host websites can analyze the headers of the HTTP requests that crawling bots make. One of these HTTP request headers, called the user agent, contains various information ranging from the operating system and software to the application type and its version.
Servers can easily detect suspicious user agents. Real user agents contain popular HTTP request configurations that are submitted by organic visitors. To avoid getting blocked, make sure to customize your user agent to look like an organic one.
Since every request made by a web browser contains a user agent, you should switch user agents frequently; a long series of requests that all carry the same user agent is easy to flag.
It’s also important to use up-to-date versions of the most common user agents. If you’re making requests with a 5-year-old user agent from a Firefox version that is no longer supported, it raises a lot of red flags. You can find public databases on the internet that show which user agents are currently the most popular. We also have our own regularly updated database; get in touch with us if you need access to it.
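A minimal sketch of user-agent rotation with `requests`; the strings below are examples of common browser user agents and should be kept current in a real crawler:

```python
import random
import requests

# Example user agents; keep your own list up to date
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:121.0) Gecko/20100101 Firefox/121.0",
]

headers = {"User-Agent": random.choice(USER_AGENTS)}
response = requests.get("https://example.com", headers=headers, timeout=10)
print(response.request.headers["User-Agent"])  # the agent actually sent
```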
Set your fingerprint right
Anti-scraping mechanisms are getting more sophisticated and some websites use Transmission Control Protocol (TCP) or IP fingerprinting to detect bots.
When you connect to a website over TCP, your device exposes various parameters, such as the initial window size and time to live (TTL), which are set by the end user’s operating system or device. If you’re wondering how to prevent getting blacklisted while scraping, make sure these parameters stay consistent.
If you’re interested, learn more about fingerprinting and its impact on web scraping.
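TCP/IP-level parameters are set by the operating system rather than by your script, but at the HTTP level you can at least keep the headers you send consistent with the browser you claim to be. A minimal sketch, with illustrative header values:

```python
import requests

# A Windows Chrome user agent paired with headers that browser would
# plausibly send; mismatched combinations are what detectors look for
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                  "AppleWebKit/537.36 (KHTML, like Gecko) "
                  "Chrome/120.0.0.0 Safari/537.36",
    "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
    "Accept-Language": "en-US,en;q=0.9",
    "Accept-Encoding": "gzip, deflate, br",
}

response = requests.get("https://example.com", headers=headers, timeout=10)
```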
Beware of honeypot traps
Honeypots are links embedded in a page’s HTML code. They are invisible to organic users, but a web scraper that blindly follows every link it finds will request them. Honeypots are used to identify and block web crawlers, since only robots would follow those links.
Since setting up honeypots requires a relatively large amount of work, the technique is not widely used. However, if your request gets blocked and your crawler detected, be aware that your target might be using honeypot traps.
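One defensive heuristic, sketched below with BeautifulSoup, is to skip links that are hidden with inline styles or the `hidden` attribute. Real honeypots may be hidden via CSS classes instead, so treat this as illustrative rather than exhaustive:

```python
import requests
from bs4 import BeautifulSoup

html = requests.get("https://example.com", timeout=10).text
soup = BeautifulSoup(html, "html.parser")

def looks_hidden(tag):
    """Crude check for anchors an organic user could never see."""
    style = (tag.get("style") or "").replace(" ", "").lower()
    return ("display:none" in style
            or "visibility:hidden" in style
            or tag.has_attr("hidden"))

links = [a["href"] for a in soup.find_all("a", href=True) if not looks_hidden(a)]
print(links)
```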

Use CAPTCHA solving services
CAPTCHAs are one of the biggest web crawling challenges. Websites ask visitors to solve various puzzles in order to confirm they’re human. Current CAPTCHAs often include images that are nearly impossible for computers to read.
How do you bypass CAPTCHAs when scraping? In order to work around them, use dedicated CAPTCHA solving services or ready-to-use crawling tools. For example, Oxylabs’ data crawling tool solves CAPTCHAs for you and delivers ready-to-use results.
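Without tying the example to any particular service’s API, a crawler can at least detect a likely challenge page and back off before retrying. A minimal sketch; the marker string is an assumption that depends on the target site:

```python
import time
import requests

def fetch_with_captcha_check(url, max_retries=3):
    for attempt in range(max_retries):
        response = requests.get(url, timeout=10)
        # Crude challenge detection; the marker depends on the target site
        if response.ok and "captcha" not in response.text.lower():
            return response
        # Back off before retrying; in practice you would also rotate the
        # proxy and user agent, or hand the challenge to a solving service
        time.sleep(2 ** attempt * 5)
    raise RuntimeError(f"Still challenged after {max_retries} attempts: {url}")
```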
Change the crawling pattern
The pattern refers to how your crawler is configured to navigate the website. If you constantly use the same basic crawling pattern, it’s only a matter of time before you get blocked.
You can add random clicks, scrolls, and mouse movements to make your crawling seem less predictable. However, the behavior should not be completely random. One of the best practices when developing a crawling pattern is to think about how a regular user would browse the website and then apply those principles to the tool itself. For example, visiting the home page first and only then making requests to inner pages makes a lot of sense, as shown in the sketch below.
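A minimal sketch of such a pattern with Selenium: land on the home page first, then browse inner pages with uneven scrolls and pauses. The URLs are placeholders:

```python
import random
import time
from selenium import webdriver

driver = webdriver.Chrome()
driver.get("https://example.com")  # land on the home page first
time.sleep(random.uniform(2, 5))

for path in ["/category/shoes", "/category/bags"]:  # placeholder inner pages
    driver.get("https://example.com" + path)
    # Scroll down in a few uneven steps, like a reading user would
    for _ in range(random.randint(2, 5)):
        driver.execute_script(f"window.scrollBy(0, {random.randint(200, 800)});")
        time.sleep(random.uniform(0.5, 2))
    time.sleep(random.uniform(1, 4))

driver.quit()
```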

Reduce the scraping speed
To mitigate the risk of being blocked, you should slow down your scraper speed. For instance, you can add random breaks between requests or initiate wait commands before performing a specific action.
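A minimal sketch of throttling, adding a random pause between consecutive requests:

```python
import random
import time
import requests

for page in range(1, 6):
    response = requests.get(f"https://example.com/page/{page}", timeout=10)
    print(page, response.status_code)
    time.sleep(random.uniform(3, 10))  # random break between requests
```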
Crawl during off-peak hours
Most crawlers move through pages significantly faster than an average user as they don’t actually read the content. Thus, a single unrestrained web crawling tool will affect server load more than any regular internet user. In turn, crawling during high-load times might negatively impact user experience due to service slowdowns.
The best time to crawl a website varies on a case-by-case basis, but picking off-peak hours just after midnight (localized to the service) is a good starting point.
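A minimal sketch of gating the crawl on the target’s local clock, assuming (purely for illustration) the service runs in the America/New_York timezone:

```python
import time
from datetime import datetime
from zoneinfo import ZoneInfo  # Python 3.9+

TARGET_TZ = ZoneInfo("America/New_York")  # assumed service timezone

def in_off_peak_window(start_hour=1, end_hour=5):
    """True between 01:00 and 05:00 in the target's local time."""
    return start_hour <= datetime.now(TARGET_TZ).hour < end_hour

while not in_off_peak_window():
    time.sleep(600)  # check again in ten minutes
# ...start crawling here
```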
Avoid image scraping
Images are data-heavy objects that are often copyright protected. Not only will downloading them take additional bandwidth and storage space, but there’s also a higher risk of infringing on someone else’s rights.
Additionally, since images are data-heavy, they are often hidden behind JavaScript elements (e.g., lazy loading), which significantly increases the complexity of the data acquisition process and slows down the web scraper itself. To get images out of JS elements, a more complicated scraping procedure (something that forces the website to load all content) would have to be written and employed.
Avoid JavaScript
Data nested in JavaScript elements is hard to acquire. Websites use many different JavaScript features to display content based on specific user actions. A common practice is to only display product images in search bars after the user has provided some input.
JavaScript can also cause a host of other issues – memory leaks, application instability or, at times, complete crashes. Dynamic features can often become a burden. Avoid JavaScript unless absolutely necessary.
Use a headless browser
One of the additional tools for block-free web scraping is a headless browser. It works like any other browser, except that it doesn’t have a graphical user interface (GUI).
A headless browser also allows scraping content that is loaded by rendering JavaScript elements. The most widely-used web browsers, Chrome and Firefox, have headless modes.
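A minimal sketch of headless Chrome via Selenium; by the time `page_source` is read, JavaScript-rendered content is available. The URL is a placeholder:

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless=new")  # run Chrome without a GUI

driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.title)            # title of the rendered page
html = driver.page_source      # HTML after JavaScript has executed
driver.quit()
```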
Conclusion
Scrape public data without worrying about how to prevent getting blacklisted while scraping. Set your browser parameters right, take care of fingerprinting, and beware of honeypot traps. Most importantly, use reliable proxies and scrape websites with respect. Then all your public data gathering jobs will go smoothly and you’ll be able to use fresh information to improve your business.
Now that you know how to crawl a website without getting blocked, check out our blog and read more about web scraping uses. Also, if you’re still wondering whether crawling and scraping a website is legal, check out our blog post Is Web Scraping Legal?