
Scraping Amazon Product Data: A Complete Guide

Maryia Stsiopkina

7 min read

Amazon is packed with useful e-commerce data, such as product information, reviews, and prices. Extracting this data efficiently and putting it to use is imperative for any modern business. Whether you intend to monitor the performance of your products sold by third-party resellers or track your competitors, you need reliable web scraping services, like Amazon Scraper, to capture this data for market analytics.

Amazon scraping, however, has its peculiarities. In this step-by-step guide, we’ll go over every stage needed to create an Amazon web scraper. 

Setting up for scraping

To follow along, you will need Python. If you do not have Python 3.8 or above installed, download and install it from the official Python website.

Next, create a folder to save your code files for web scraping Amazon. Once you have a folder, creating a virtual environment is generally a good practice.

The following commands work on macOS and Linux. These commands will create a virtual environment and activate it:

$ python3 -m venv .env
$ source .env/bin/activate

If you are on Windows, these commands vary slightly, and you also need to activate the environment explicitly:

d:\amazon>python -m venv .env
d:\amazon>.env\Scripts\activate

The next step is installing the required Python packages.

You will need packages for two broad steps—getting the HTML and parsing the HTML to query relevant data.

Requests is a popular third-party Python library for making HTTP requests. It provides a simple and intuitive interface to make HTTP requests to web servers and receive responses. This library is perhaps the most known library related to web scraping.

The limitation of the Requests library is that it returns the HTML response as a plain string, which is hard to query for specific elements such as listing prices.

This is where Beautiful Soup steps in. Beautiful Soup is a Python library used for web scraping to pull the data out of HTML and XML files. It allows you to extract information from the page by searching for tags, attributes, or specific text. 
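To illustrate how Beautiful Soup pulls data out of HTML, here is a minimal sketch that parses a small hand-written snippet instead of a live page (the markup and title text are made up for this example):

```python
from bs4 import BeautifulSoup

# A tiny hand-written HTML snippet standing in for a real product page.
html = '<html><body><span id="productTitle"> Example Headphones </span></body></html>'

# Parse the markup and query it by element id using a CSS selector.
soup = BeautifulSoup(html, 'html.parser')
title = soup.select_one('#productTitle').text.strip()
print(title)  # Example Headphones
```

The same pattern — parse once, then query with selectors — is what we will use against real Amazon pages below.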

To install these two libraries, you can use the following command:

$ python3 -m pip install requests beautifulsoup4

If you are on Windows, use python instead of python3. The rest of the command remains unchanged:

d:\amazon>python -m pip install requests beautifulsoup4

Note that we are installing version 4 of the Beautiful Soup library.

It's time to try out the Requests library. Create a new Python file and enter the following code:

import requests

url = ''  # Amazon product page URL goes here

response = requests.get(url)
print(response.text)


Save the file and run it from the terminal.


In most cases, you will not receive the desired HTML. Amazon will block this request, and you will see text like the following in the response:

To discuss automated access to Amazon data please contact

If you print response.status_code, you will see that instead of 200, which means success, you get 503, which means Service Unavailable.

Amazon detects that this request did not come from a browser and therefore blocks it.

This is a common practice employed by many websites. Amazon will block your requests and return a status code in the 500 range, or sometimes even the 400 range.

The solution is simple: send the same headers with your request that a browser would send.

Sometimes, sending only the user-agent is enough. At other times, you may need to send more headers. A good example is sending the accept-language header.

To identify the user-agent sent by your browser, press F12 and open the Network tab. Reload the page. Select the first request and examine Request Headers.

You can copy this user-agent and create a dictionary for the headers. 

The following example shows a dictionary with the user-agent and accept-language headers:

custom_headers = {
    'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36',
    'accept-language': 'en-GB,en;q=0.9',
}

You can pass this dictionary to the optional headers parameter of the get method as follows:

response = requests.get(url, headers=custom_headers)

Executing the code with these changes will show the expected HTML with the product details.

Note that if you send a complete set of browser headers, in most cases you will not need JavaScript rendering. If rendering is required, you will need tools like Playwright or Selenium.

Scraping Amazon product data

When web scraping Amazon products, typically, you would work with two categories of pages — the category page and the product details page.

For example, search for Over-Ear Headphones on Amazon. The page that shows the search results is the category page.

The category page displays the product title, product image, product rating, product price, and, most importantly, the product URLs. If you want more details, such as product descriptions, you can get them only from the product details page.

Let's examine the structure of the product details page.

Open a product URL in Chrome or any other modern browser, right-click the product title, and select Inspect. You will see that the HTML markup of the product title is highlighted.

You will see that it is a span tag with its id attribute set to "productTitle". 

Similarly, if you right-click the price and select Inspect, you will see the HTML markup of the price.

You can see that the dollar component of the price is in a span tag with the class "a-price-whole", and the cents component is in another span tag with the class set to "a-price-fraction". 
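To show how the two price components described above can be combined, here is a short sketch against a simplified, hand-written stand-in for Amazon's price markup (the values are made up for illustration):

```python
from bs4 import BeautifulSoup

# Simplified stand-in markup mirroring the price structure described above.
html = ('<span class="a-price-whole">139</span>'
        '<span class="a-price-fraction">99</span>')

soup = BeautifulSoup(html, 'html.parser')
whole = soup.select_one('.a-price-whole').text      # dollar component
fraction = soup.select_one('.a-price-fraction').text  # cents component
price = f'{whole}.{fraction}'
print(price)  # 139.99
```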

Similarly, you can locate the rating, image, and description.

Once you have this information, add the following lines to the code we have written so far:

from bs4 import BeautifulSoup  # parsing with 'lxml' also requires the lxml package

response = requests.get(url, headers=custom_headers)
soup = BeautifulSoup(response.text, 'lxml')

Beautiful Soup offers its own find methods for selecting tags. Alternatively, it also supports CSS selectors. You can use either approach to get the same results. In this guide, we will use CSS selectors, which are a universal way to select elements and work with almost all tools that can be used for web scraping Amazon product data.

We are now ready to use the Soup object to query for specific information.

Scraping product name

The product name, or the product title, is located in a span element with the id productTitle. Since ids are unique on a page, selecting elements by id is straightforward.

See the following code for example:

title_element = soup.select_one('#productTitle')

We send the CSS selector to the select_one method, which returns an element instance.

We can extract the element's text using the text attribute.

title = title_element.text

Upon printing it, you will notice some surrounding white space. To fix that, add a .strip() call as follows:

title = title_element.text.strip()

Scraping product rating

Scraping Amazon product ratings needs a little more work. 

The rating is contained in an element with the id acrPopover, so the CSS selector for the rating is #acrPopover. The following statement selects that element:

rating_element = soup.select_one('#acrPopover')

Note that the rating value is actually in the title attribute:

rating_text = rating_element.attrs.get('title')
# prints '4.6 out of 5 stars'

Lastly, we can use the replace method to isolate the number, with strip removing the leftover white space:

rating = rating_text.replace('out of 5 stars', '').strip()
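Since the scraped rating is still a string, it can be converted to a number for sorting or filtering. A minimal sketch using a sample value in the format shown above:

```python
# Sample rating text, in the format scraped from the title attribute above.
rating_text = '4.6 out of 5 stars'

# Strip the label and convert the remaining number to a float.
rating = float(rating_text.replace('out of 5 stars', '').strip())
print(rating)  # 4.6
```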

Scraping product price

The product price is located in two places — below the product title and also on the Buy Now box. 

We can use either of these tags to scrape Amazon product prices.

Let's create a CSS selector for the price. The price inside the Buy Now box is located in an element with the id price_inside_buybox, so the selector is #price_inside_buybox.

This CSS selector can be passed to the select_one method of BeautifulSoup as follows:

price_element = soup.select_one('#price_inside_buybox')

You can now print the price:

print(price_element.text)

Scraping image

Let's scrape the default image. This image has the CSS selector as #landingImage. With this information, we can write the following lines of code to get the image URL from the src attribute:

image_element = soup.select_one('#landingImage')
image = image_element.attrs.get('src')

Scraping product description

The next step in scraping Amazon product information is scraping the product description.

The methodology remains the same — create a CSS selector and use the select_one method.

The CSS selector for the description is #productDescription, based on the id of the container element.

It means that we can extract the element as follows:

description_element = soup.select_one('#productDescription')

Handling product listing

So far, we have explored how to scrape product information.

However, to reach the product information, you will begin with product listing or category pages.

For example, the search results page for over-ear headphones is a category page.

If you examine this page, you will notice that all the products are contained in a div that has a special attribute [data-asin]. In that div, all the product links are in an h2 tag.

With this in mind, the CSS Selector would be as follows:

[data-asin] h2 a

We can read the href attribute of each matched element and run a loop. However, note that the links will be relative. You would need to use the urljoin method to resolve them into absolute URLs.
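As a quick demonstration of how urljoin resolves a relative href against a listing URL (both URLs here are hypothetical, for illustration only):

```python
from urllib.parse import urljoin

# A hypothetical listing URL and relative product href, for illustration.
listing_url = 'https://www.amazon.com/s?k=over-ear+headphones'
relative_href = '/Example-Headphones/dp/B000000000/'

# urljoin keeps the scheme and host, replacing the path and query.
full_url = urljoin(listing_url, relative_href)
print(full_url)  # https://www.amazon.com/Example-Headphones/dp/B000000000/
```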

from urllib.parse import urljoin

def parse_listing(listing_url):
    response = requests.get(listing_url, headers=custom_headers)
    soup = BeautifulSoup(response.text, "lxml")
    link_elements ="[data-asin] h2 a")
    page_data = []
    for link in link_elements:
        full_url = urljoin(listing_url, link.attrs.get("href"))
        product_info = get_product_info(full_url)
        page_data.append(product_info)
    return page_data

Handling pagination

The link to the next page is an anchor that contains the text Next. We can look for it with the :-soup-contains pseudo-class, Beautiful Soup's contains operator, as follows:

next_page_el = soup.select_one('a:-soup-contains("Next")')
if next_page_el:
    next_page_url = next_page_el.attrs.get('href')
    next_page_url = urljoin(listing_url, next_page_url)

Exporting Amazon data 

The data we are scraping is returned as a dictionary. This is intentional: inside parse_listing, we can collect every scraped product in a list.

def parse_listing(listing_url):
    ...
    page_data = []
    for link in link_elements:
        ...
        product_info = get_product_info(full_url)
        page_data.append(product_info)

This page_data can then be used to create a Pandas DataFrame object:

df = pd.DataFrame(page_data)
df.to_csv('headphones.csv', index=False)
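A self-contained sketch of this export step, using two hypothetical records shaped like the dictionaries built earlier (the titles and values are invented):

```python
import pandas as pd

# Two hypothetical scraped records, mirroring the dictionaries built earlier.
page_data = [
    {'title': 'Example Headphones A', 'price': '139.99', 'rating': '4.6'},
    {'title': 'Example Headphones B', 'price': '89.99', 'rating': '4.3'},
]

# One row per product, with dictionary keys becoming column names.
df = pd.DataFrame(page_data)
df.to_csv('headphones.csv', index=False)
```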

Reviewing final script 

Putting together everything, the following is the final script:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

custom_headers = {
    "accept-language": "en-GB,en;q=0.9",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/ Safari/537.36",
}

def get_product_info(url):
    response = requests.get(url, headers=custom_headers)
    if response.status_code != 200:
        print("Error in getting webpage")
        return None

    soup = BeautifulSoup(response.text, "lxml")

    title_element = soup.select_one("#productTitle")
    title = title_element.text.strip() if title_element else None

    price_element = soup.select_one("#price_inside_buybox")
    price = price_element.text if price_element else None

    rating_element = soup.select_one("#acrPopover")
    rating_text = rating_element.attrs.get("title") if rating_element else None
    rating = rating_text.replace("out of 5 stars", "") if rating_text else None

    image_element = soup.select_one("#landingImage")
    image = image_element.attrs.get("src") if image_element else None

    description_element = soup.select_one("#productDescription")
    description = description_element.text.strip() if description_element else None

    return {
        "title": title,
        "price": price,
        "rating": rating,
        "image": image,
        "description": description,
        "url": url,
    }

def parse_listing(listing_url):

    response = requests.get(listing_url, headers=custom_headers)
    soup_search = BeautifulSoup(response.text, "lxml")
    link_elements ="[data-asin] h2 a")
    page_data = []
    for link in link_elements:
        full_url = urljoin(listing_url, link.attrs.get("href"))
        print(f"Scraping product from {full_url[:100]}", flush=True)
        product_info = get_product_info(full_url)
        if product_info:
            page_data.append(product_info)

    next_page_el = soup_search.select_one('a:-soup-contains("Next")')
    if next_page_el:
        next_page_url = next_page_el.attrs.get('href')
        next_page_url = urljoin(listing_url, next_page_url)
        print(f'Scraping next page: {next_page_url}', flush=True)
        page_data += parse_listing(next_page_url)

    return page_data

def main():
    search_url = ""
    data = parse_listing(search_url)
    df = pd.DataFrame(data)
    df.to_csv("headphones.csv", index=False)

if __name__ == '__main__':
    main()

Best practices

Scraping Amazon without proxies or dedicated scraping tools is full of obstacles. Just like many other popular scraping targets, Amazon has rate-limiting in place, meaning it can block your IP address if you exceed the established limit. Apart from that, Amazon uses bot-detection algorithms that can check your HTTP headers for any suspicious details. Also, you should be ready to constantly adapt to the different page layouts and various HTML structures. 

Considering these factors, it’s recommended to follow some common practices to prevent getting detected and blocked by Amazon. Some of the most useful tips are: 

  1. Use a real User-Agent. It’s important to make your User-Agent look as plausible as possible. Lists of the most common user agents are widely available online.

  2. Set your fingerprint. Many websites use Transmission Control Protocol (TCP) and IP fingerprinting to detect bots. To avoid getting spotted, you need to make sure your fingerprint parameters are always consistent. 

  3. Change the crawling pattern. To develop a successful crawling pattern, you should think about how a regular user would behave while exploring a page and add clicks, scrolls, and mouse movements accordingly.
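Building on the first tip, a common technique is to rotate through a pool of realistic user agents rather than reusing one string for every request. A minimal sketch, with an illustrative (not exhaustive) pool of made-up desktop user-agent strings:

```python
import random

# A small illustrative pool of desktop user-agent strings (examples only).
USER_AGENTS = [
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
    'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36',
]

def random_headers():
    """Build request headers with a randomly chosen user-agent."""
    return {
        'user-agent': random.choice(USER_AGENTS),
        'accept-language': 'en-GB,en;q=0.9',
    }
```

These headers can then be passed to requests.get on each call so that consecutive requests do not all carry an identical fingerprint.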

Easier solution to extract Amazon data 

And this is only a small portion of the requirements you should keep in mind when scraping Amazon. Alternatively, you can turn to a ready-made solution designed specifically for this purpose: Amazon Scraper API. With this scraper, you can:

  • Scrape and parse various Amazon page types, including Search, Product, Offer listing, Questions & Answers, Reviews, Best Sellers, and Sellers.

  • Target localized product data in 195 locations worldwide.

  • Retrieve accurate parsed results in JSON format without installing any other library.

  • Enjoy multiple handy features, such as bulk scraping and automated jobs.

Let's look at Amazon Scraper API in action.

Extracting product details

Consider the example of getting product data from product pages. 

All you need is the product URL, irrespective of the country of the Amazon store. For example, the following code extracts details for the Bose QC 45:

import requests

# Structure payload.
payload = {
    'source': 'amazon',
    'url': '',  # the Amazon product URL goes here
    'parse': True,
}

# Get response (replace USERNAME and PASSWORD with your API credentials
# and fill in the Scraper API endpoint).
response = requests.request(
    'POST',
    '',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
print(response.json())

You will get the complete product data returned in JSON format.

Another way to get the information is by the product's ASIN. The only part you need to modify is the payload:

payload = {
    'source': 'amazon_product',
    'domain': '',
    'query': 'B098FKXT8L',
    'parse': True,
    'context': [
        {'key': 'autoselect_variant', 'value': True},
    ],
}

Note the optional domain parameter. You can use this parameter to get Amazon data from any Amazon domain.

Searching products

Searching for the products is very easy.

Again, the only code that changes is the payload. Here is the payload for the search for "bose":

payload = {
    'source': 'amazon_search',
    'query': 'bose',  # Search for "bose"
    'start_page': 1,
    'pages': 10,
    'parse': True,
    'context': [
        {'key': 'category_id', 'value': 12097479011},  # category id for headphones
    ],
}

Notice how it requests 10 pages beginning with page 1. Also, we limit the search to category id 12097479011, which is Amazon's category id for headphones.


Conclusion

You can write code to scrape Amazon products using the Requests and Beautiful Soup libraries. It takes some effort, but it works. Sending custom headers, rotating user-agents, and rotating proxies can help you avoid bans and rate limiting.

However, the easiest solution to scrape Amazon products is using the Amazon Scraper API. Oxylabs also allows you to gather data from 50 other marketplaces using its E-Commerce Scraper API.

If you have any questions, do not hesitate to contact us.

Frequently asked questions

Does Amazon allow scraping?

Scraping publicly available data contained within the Amazon website isn’t considered illegal as long as your actions don’t violate its ToS. However, before engaging in any web scraping activity, our legal experts strongly recommend consulting with lawyers knowledgeable in this field.

Can scraping be detected?

Yes, scraping can be detected by the anti-bot software that can check your IP address, browser parameters, user agents, and other details. After being detected, the website will throw CAPTCHA, and if not solved, your IP will get blocked.

Does Amazon ban IP?

Yes, Amazon may ban an IP address if it finds it suspicious. 

About the author

Maryia Stsiopkina

Content Manager

Maryia Stsiopkina is a Content Manager at Oxylabs. As her passion for writing was developing, she was writing either creepy detective stories or fairy tales at different points in time. Eventually, she found herself in the tech wonderland with numerous hidden corners to explore. At leisure, she does birdwatching with binoculars (some people mistake it for stalking), makes flower jewelry, and eats pickles.

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.
