5 Key HTTP Headers for Web Scraping

Vytautas Kirjazovas

Apr 29, 2020 5 min read

Two common and recurring questions in the world of web scraping are: how do you avoid getting blocked by target servers? And how do you increase the quality of the retrieved data?


Of course, there are proven resources and techniques, such as using a proxy or rotating IP addresses, that will help your web scraper avoid blocks.

However, another sometimes overlooked technique is to use and optimize HTTP headers. This practice significantly decreases your web scraper's chances of getting blocked by various data sources and also ensures that the retrieved data is of high quality.

Now, don't be alarmed if you have little or no knowledge of HTTP headers, as we have already covered what HTTP headers are and how they fit into the web scraping process.

In this article, we reveal the five most essential HTTP headers to use and optimize, and provide the reasoning behind each.

What’s the purpose of HTTP headers?

HTTP headers enable both the client and the server to pass additional details within a request or response.

1.    User-Agent

The User-Agent request header carries information identifying the application type, operating system, software, and software version, and it allows the data target to decide which type of HTML layout to use in the response, i.e. mobile, tablet, or PC.

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5)
AppleWebKit/605.1.15 (KHTML, like Gecko)
Version/12.1.1 Safari/605.1.15

Validating the User-Agent request header is a common practice for web servers, and it is the first check that allows data sources to identify suspicious requests. For instance, while web scraping is in progress, numerous requests travel to the web server, and if the User-Agent request headers are all identical, the activity looks bot-like. Hence, experienced web scraping practitioners vary their User-Agent header strings, which makes the traffic appear to come from multiple organic users' sessions.

So, when it comes to the User-Agent request header, remember to alter the information it carries frequently; doing so substantially reduces your odds of getting blocked.
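As a minimal sketch of this rotation, the snippet below picks a random User-Agent string per request. The pool of strings is illustrative (taken from real browsers, but not exhaustive); in practice you would maintain a larger, up-to-date list.

```python
import random

# Illustrative pool of real-world User-Agent strings; keep this
# list larger and current in a production scraper.
USER_AGENTS = [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_14_5) AppleWebKit/605.1.15 "
    "(KHTML, like Gecko) Version/12.1.1 Safari/605.1.15",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
    "(KHTML, like Gecko) Chrome/80.0.3987.149 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64; rv:75.0) Gecko/20100101 Firefox/75.0",
]

def random_headers():
    """Return request headers with a randomly chosen User-Agent."""
    return {"User-Agent": random.choice(USER_AGENTS)}

# Usage with the requests library (not executed here):
# import requests
# response = requests.get("https://example.com", headers=random_headers())
```

Calling `random_headers()` before each request (or each session) is enough to avoid sending one identical User-Agent across thousands of requests.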

2.    Accept-Language

The Accept-Language request header tells the web server which languages the client understands, and which language is preferred when the web server sends the response back.


It’s worth mentioning that this particular header usually comes into play when the web server cannot identify the preferred language by other means, e.g. via the URL.

That said, the key with the Accept-Language request header is relevance. It is essential that the languages you set match both the target domain and the client's IP location. If requests from the same client arrived in multiple languages, the web server would suspect bot-like behavior (a non-organic request pattern) and might consequently block the web scraping process.
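One way to keep the header relevant is to derive it from the country of the proxy or exit node in use. The mapping below is an assumed, illustrative example; extend it for the locations your scraper actually operates from.

```python
# Illustrative mapping from exit-node country to a plausible
# Accept-Language value; extend for the locations you actually use.
LANGUAGE_BY_COUNTRY = {
    "US": "en-US,en;q=0.9",
    "DE": "de-DE,de;q=0.9,en;q=0.8",
    "FR": "fr-FR,fr;q=0.9,en;q=0.8",
}

def headers_for_country(country_code):
    """Build headers whose Accept-Language matches the client's IP location."""
    language = LANGUAGE_BY_COUNTRY.get(country_code, "en-US,en;q=0.9")
    return {"Accept-Language": language}
```

This way a request routed through a German proxy advertises German as the preferred language, which is consistent with what the server sees from the IP address.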

3.    Accept-Encoding 

The Accept-Encoding request header notifies the web server which compression algorithms the client can handle. In other words, it states that the requested information may be compressed (if the web server supports it) when it is sent from the web server to the client.

Accept-Encoding: br, gzip, deflate

When optimized, it saves traffic volume, which is a win-win for both the client and the web server from a traffic-load perspective: the client still gets the required information (just compressed), and the web server doesn't waste resources transferring a huge load of traffic.
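A quick sketch of both halves of the point: the header advertising the supported algorithms, and a small demonstration (using the standard-library `gzip` module) of why compression saves bandwidth. The sample payload is invented for illustration.

```python
import gzip

# Explicitly advertise the compression algorithms the client can decode.
# (Libraries such as requests send a similar header by default and
# decompress the response transparently.)
headers = {"Accept-Encoding": "br, gzip, deflate"}

# Why this saves traffic: a repetitive payload compresses well and
# round-trips back to the original bytes.
payload = b'{"price": "19.99", "stock": "in"}' * 100
compressed = gzip.compress(payload)
assert gzip.decompress(compressed) == payload
assert len(compressed) < len(payload)  # fewer bytes on the wire
```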

4.    Accept

The Accept request header falls into the content negotiation category, and its purpose is to notify the web server which data formats can be returned to the client.


It’s as simple as it sounds, but a common hiccup in web scraping is overlooking or forgetting to configure this request header according to the formats the web server actually serves. If the Accept request header is configured suitably, communication between the client and the server looks more organic, which in turn decreases the web scraper’s chances of getting blocked.
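As a sketch, the value below mirrors the kind of Accept header a mainstream browser sends for a page load, with q-weights expressing format preference (HTML first, generic fallbacks last):

```python
# Browser-like Accept header; q-values rank the formats the client
# prefers, with */*;q=0.8 as the lowest-priority catch-all.
headers = {
    "Accept": (
        "text/html,application/xhtml+xml,application/xml;q=0.9,"
        "image/webp,*/*;q=0.8"
    )
}

# Usage with the requests library (not executed here):
# import requests
# response = requests.get("https://example.com", headers=headers)
```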

5.    Referer

The Referer request header provides the address of the previous web page from which the request to the web server was made.


It might seem that the Referer request header has very little impact on whether the scraping process gets blocked, when in fact it does. Think of a random organic user's browsing patterns: such a user is quite likely surfing the internet and losing track of the hours in a day. Hence, if you want the web scraper's traffic to seem more organic, simply specify a random website as the referrer before starting a web scraping session.

The key is not to jump the gun, but to take this rather straightforward step. Remember to always set the Referer request header, and boost your chances of slipping past the anti-scraping measures implemented by web servers.
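A minimal sketch of that step: attach a randomly chosen referring page to the outgoing headers, so the first request looks like a click-through rather than a cold start. The referrer URLs below are illustrative assumptions.

```python
import random

# Plausible referring pages (illustrative; pick ones that make sense
# for the target site, e.g. search engines or the site's own pages).
REFERERS = [
    "https://www.google.com/",
    "https://www.bing.com/",
    "https://duckduckgo.com/",
]

def with_referer(headers=None):
    """Return a copy of the headers dict with a random Referer added."""
    headers = dict(headers or {})
    headers["Referer"] = random.choice(REFERERS)
    return headers

# Usage with the requests library (not executed here):
# import requests
# response = requests.get("https://example.com", headers=with_referer())
```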

Wrapping it up

Now you know which HTTP headers to configure for web scraping; doing so increases your web scraper's chances of a successful and efficient data extraction operation.

It’s safe to state that the more you know about the technical side of web scraping, the more fruitful your web scraping results will be. Use this knowledge wisely, and your web scraper will work more effectively and efficiently. If you’re just starting a web scraping project and wondering how to begin, read up on it on our blog. Get scraping!

Of course, if you have any further questions or would like to get a consultation, feel free to leave a comment below, drop us a line via live chat or email us at [email protected]


About Vytautas Kirjazovas

Vytautas Kirjazovas is a PR Manager at Oxylabs, and he places a strong personal interest in technology due to its magnifying potential to make everyday business processes easier and more efficient. Vytautas is fascinated by new digital tools and approaches, in particular, for web data harvesting purposes, so feel free to drop him a message if you have any questions on this topic. He appreciates a tasty meal, enjoys travelling and writing about himself in the third person.

Related articles

Datacenter Proxies Quick Start Guide


Nov 18, 2020

11 min read

What is Web Scraping?


Nov 16, 2020

7 min read

How to Extract Data from A Website?


Nov 13, 2020

10 min read

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.