
Scraping Amazon Product Data: A Complete Guide

Maryia Stsiopkina

2023-08-06 · 9 min read

Amazon is packed with useful e-commerce data, such as product information, reviews, and prices. Extracting this data efficiently and putting it to use is imperative for any modern business. Whether you intend to monitor the performance of your products sold by third-party resellers or track your competitors, you need reliable web scraping services, like Amazon Scraper, to gather this data for market analytics.

Amazon scraping, however, has its peculiarities. In this step-by-step guide, we’ll go over every stage needed to create an Amazon web scraper. 

Setting up for scraping

To follow along, you will need Python. If you do not have Python 3.8 or above installed, head to python.org to download and install it.

Next, create a folder to save your code files for web scraping Amazon. Once you have a folder, creating a virtual environment is generally a good practice.

The following commands work on macOS and Linux. These commands will create a virtual environment and activate it:

$ python3 -m venv .env
$ source .env/bin/activate

If you are on Windows, these commands will vary a little as follows:

d:\amazon>python -m venv .env
d:\amazon>.env\scripts\activate

The next step is installing the required Python packages.

You will need packages for two broad steps—getting the HTML and parsing the HTML to query relevant data.

Requests is a popular third-party Python library for making HTTP requests. It provides a simple and intuitive interface for sending HTTP requests to web servers and receiving responses. It is perhaps the best-known library related to web scraping.

The limitation of the Requests library is that it returns the HTML response as a string, which is hard to query for specific elements, such as listing prices, in web scraping code.

This is where Beautiful Soup steps in. Beautiful Soup is a Python library used for web scraping to pull the data out of HTML and XML files. It allows you to extract information from the page by searching for tags, attributes, or specific text. 
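
As a quick illustration of the Beautiful Soup API (a standalone toy example, separate from the Amazon scraper we build below):

from bs4 import BeautifulSoup

html = '<html><body><span id="productTitle"> Example product </span></body></html>'
soup = BeautifulSoup(html, 'html.parser')
print(soup.find('span', id='productTitle').text.strip())  # prints 'Example product'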

To install these two libraries, you can use the following command:

$ python3 -m pip install requests beautifulsoup4

If you are on Windows, use python instead of python3. The rest of the command remains unchanged:

d:\amazon>python -m pip install requests beautifulsoup4

Note that we are installing version 4 of the Beautiful Soup library.

It's time to try out the Requests scraping library. Create a new file with the name amazon.py and enter the following code:

import requests
url = 'https://www.amazon.com/Bose-QuietComfort-45-Bluetooth-Canceling-Headphones/dp/B098FKXT8L'

response = requests.get(url)

print(response.text)

Save the file and run it from the terminal.

$ python3 amazon.py

In most cases, you won't get the desired HTML. Amazon will block this request, and you will see the following text in the response:

To discuss automated access to Amazon data please contact api-services-support@amazon.com.

If you print the response.status_code, you will see that instead of getting 200, which means success, you get 503, which means an error.
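
You can confirm this with a quick check:

print(response.status_code)  # 503 when the request is blocked, 200 on success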

Amazon knows this request was not using a browser and thus blocks it. 

This is a common practice employed by many websites. Amazon blocks such requests and returns a status code in the 500 range, or sometimes even the 400 range.

The solution is simple: you can send the same headers along with your request that a browser would send.

Sometimes, sending only the user-agent is enough. At other times, you may need to send more headers. A good example is sending the accept-language header.

To identify the user-agent sent by your browser, press F12 and open the Network tab. Reload the page. Select the first request and examine Request Headers.

You can copy this user-agent and create a dictionary for the headers. 

The following example shows a dictionary with the user-agent and accept-language headers:

custom_headers = {
'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
'accept-language': 'en-GB,en;q=0.9',
}

You can send this dictionary to the optional headers parameter of the get method as follows:

response = requests.get(url, headers=custom_headers)

Executing the code with these changes will show the expected HTML with the product details.

Also note that sending a complete, browser-like set of headers often means you won't need JavaScript rendering at all. If you do need rendering, you will need browser automation tools like Playwright or Selenium.
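
In case rendering is required, here is a minimal Selenium sketch. It assumes you have installed the selenium package (python3 -m pip install selenium) and have a compatible Chrome driver available:

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument('--headless=new')  # run Chrome without a visible window
driver = webdriver.Chrome(options=options)
driver.get(url)  # the same product URL as before
html = driver.page_source  # HTML after JavaScript has executed
driver.quit()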

Scraping Amazon product data

When web scraping Amazon products, typically, you would work with two categories of pages — the category page and the product details page.

For example, open https://www.amazon.com/b?node=12097479011 or search for Over-Ear Headphones on Amazon. The page that shows the search results is the category page.

The category page displays the product title, product image, product rating, product price, and, most importantly, the product URLs. If you want more details, such as product descriptions, you will find them only on the product details page.

Let's examine the structure of the product details page.

Open a product URL, such as https://www.amazon.com/Bose-QuietComfort-45-Bluetooth-Canceling-Headphones/dp/B098FKXT8L, in Chrome or any other modern browser, right-click the product title, and select Inspect. You will see that the HTML markup of the product title is highlighted.

You will see that it is a span tag with its id attribute set to "productTitle". 

Similarly, if you right-click the price and select Inspect, you will see the HTML markup of the price.

You can see that the dollar component of the price is in a span tag with the class "a-price-whole", and the cents component is in another span tag with the class set to "a-price-fraction". 
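
If you ever need to assemble the displayed price from these two spans, a small sketch like the following would work (it assumes the soup object we create in the next section, and that a-price-whole includes the trailing decimal point, as it usually does):

whole_element = soup.select_one('span.a-price-whole')
fraction_element = soup.select_one('span.a-price-fraction')
if whole_element and fraction_element:
    # strip the trailing '.' from the whole part before joining the two
    price = f"{whole_element.text.strip().rstrip('.')}.{fraction_element.text.strip()}"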

Similarly, you can locate the rating, image, and description.

Once you have this information, add the following lines to the code we have written so far:

1. Send a GET request with custom headers

from bs4 import BeautifulSoup  # the 'lxml' parser below also requires: python3 -m pip install lxml

response = requests.get(url, headers=custom_headers)
soup = BeautifulSoup(response.text, 'lxml')

Beautiful Soup offers its own way of selecting tags through the find methods, and it also supports CSS selectors. You can use either approach to get the same results. In this guide, we will use CSS selectors, a universal way of selecting elements that works with almost all web scraping tools.
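
For example, both of the following lines locate the product title element used in the next step; the first uses the find method, the second a CSS selector:

title_element = soup.find('span', id='productTitle')   # find-style lookup
title_element = soup.select_one('span#productTitle')   # CSS-selector lookup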

We are now ready to use the soup object to query for specific information.

2. Locate and scrape product name

The product name, or the product title, is located in a span element with the id productTitle. Selecting elements by id is easy because an id is unique on the page.

See the following code for example:

title_element = soup.select_one('#productTitle')

We send the CSS selector to the select_one method, which returns an element instance.

We can extract the element's text content using the text attribute:

title = title_element.text

Upon printing it, you will see that there is some extra white space. To fix that, add a .strip() call as follows:

title = title_element.text.strip()

3. Locate and scrape product rating

Scraping Amazon product ratings needs a little more work. 

First, let's create a selector for rating:

#acrPopover

Now, the following statement can select the element that contains the rating.

rating_element = soup.select_one('#acrPopover')

Note that the rating value is actually in the title attribute:

rating_text = rating_element.attrs.get('title')
print(rating_text)
# prints '4.6 out of 5 stars'

Lastly, we can use the replace method to get the number:

rating = rating_text.replace('out of 5 stars', '')
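
If you need the rating as a number rather than a string, you can additionally strip the surrounding whitespace and cast it:

rating = float(rating_text.replace('out of 5 stars', '').strip())  # e.g. 4.6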

4. Locate and scrape product price

The product price is located in two places — below the product title and also in the Buy Now box.

We can use either of these tags to scrape Amazon product prices.

Let's create a CSS selector for the price:

#price_inside_buybox

This CSS selector can be passed to the select_one method of BeautifulSoup as follows:

price_element = soup.select_one('#price_inside_buybox')

You can now print the price:

print(price_element.text)
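
Note that the text includes the currency symbol. If you want a numeric value for analysis, a small conversion like the following works for prices formatted like $279.00 (adjust for other locales):

price = float(price_element.text.replace('$', '').replace(',', '').strip())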

5. Locate and scrape product image

Let's scrape the default image, which has the CSS selector #landingImage. With this information, we can write the following lines of code to get the image URL from the src attribute:

image_element = soup.select_one('#landingImage')
image = image_element.attrs.get('src')

6. Locate and scrape product description

The next step in scraping Amazon product information is scraping the product description.

The methodology remains the same — create a CSS selector and use the select_one method.

The CSS selector for the description is as follows:

#productDescription

It means that we can extract the element as follows:

description_element = soup.select_one('#productDescription')
print(description_element.text)

7. Locate and scrape product reviews

One last thing we could scrape from a product page is its reviews.

Now, the process of scraping product reviews can be more complex, seeing as one product can have several reviews. Not to mention, a single review may feature a lot of information that you might want to capture.

Let's start by getting all the review objects. We’ll need to find a CSS selector for the product reviews and then use the .select method to extract all of them.

We can use this selector to identify the reviews:

div.review

And the following code to collect them:

review_elements = soup.select("div.review")

This will leave us with a list of all the reviews, over which we'll iterate to gather the required information.

We need a list to which we can add the processed reviews, and a for loop to start iterating:

scraped_reviews = []

for review in review_elements:

Let’s begin by getting the author's name. The following CSS selector will select the name:

span.a-profile-name

We can collect the names in plain text with the following snippet:

r_author_element = review.select_one("span.a-profile-name")

r_author = r_author_element.text if r_author_element else None

The next thing to extract is the review rating. It can be found with the following CSS:

i.review-rating

The rating string has some extra text that we won’t need, so let’s remove that: 

r_rating_element = review.select_one("i.review-rating")

r_rating = r_rating_element.text.replace("out of 5 stars", "") if r_rating_element else None

We can get the element that contains the title by using this selector:

a.review-title

Getting the actual title text will require us to specify the span as shown below:

r_title_element = review.select_one("a.review-title")

r_title_span_element = r_title_element.select_one("span:not([class])") if r_title_element else None

r_title = r_title_span_element.text if r_title_span_element else None

The review text itself can be found with the following selector:

span.review-text

And extracted accordingly:

r_content_element = review.select_one("span.review-text")

r_content = r_content_element.text if r_content_element else None

One more thing to fetch from the review is the date. It can be found using the following CSS selector:

span.review-date

Here’s the code that fetches the date value from the object:

r_date_element = review.select_one("span.review-date")

r_date = r_date_element.text if r_date_element else None

Finally, we can check if the review is verified or not. The object holding this information can be accessed with this selector:

span.a-size-mini

And extracted using the following code:

r_verified_element = review.select_one("span.a-size-mini")

r_verified = r_verified_element.text if r_verified_element else None

Now that we have gathered all this information, let's assemble it into a single object and add that object to the list of reviews we created before starting the loop:

r = {
    "author": r_author,
    "rating": r_rating,
    "title": r_title,
    "content": r_content,
    "date": r_date,
    "verified": r_verified
}

scraped_reviews.append(r)

Handling product listings

So far, we have explored how to scrape product information.

However, to reach the product information, you will begin with product listing or category pages.

For example, https://www.amazon.com/b?node=12097479011 is the category page for over-ear headphones. 

If you examine this page, you will notice that all the products are contained in a div that has a special attribute [data-asin]. In that div, all the product links are in an h2 tag.

With this in mind, the CSS Selector would be as follows:

[data-asin] h2 a

We can read the href attribute of the elements matched by this selector and run a loop. However, note that the links will be relative; you would need to use the urljoin method to resolve them into absolute URLs.

from urllib.parse import urljoin
...
def parse_listing(listing_url):
    ...
    link_elements = soup_search.select("[data-asin] h2 a")
    page_data = []
    for link in link_elements:
        full_url = urljoin(listing_url, link.attrs.get("href"))
        product_info = get_product_info(full_url)
        page_data.append(product_info)

Handling pagination

The link to the next page is in an anchor that contains the text Next. We can look for this link using the :contains() pseudo-class, a non-standard CSS extension supported by Beautiful Soup's selector engine:

next_page_el = soup.select_one('a:contains("Next")')
if next_page_el:
    next_page_url = next_page_el.attrs.get('href')
    next_page_url = urljoin(listing_url, next_page_url)

8. Export scraped product data to a JSON file

Each product's data is returned as a dictionary. This is intentional: we can collect these dictionaries in a list of all the scraped products:

def parse_listing(listing_url):
    ...
    page_data = []
    for link in link_elements:
        ...
        product_info = get_product_info(full_url)
        page_data.append(product_info)

This page_data can then be used to create a Pandas DataFrame object:

df = pd.DataFrame(page_data)
df.to_json('headphones.json', orient='records')
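
If you would rather export a CSV file, pandas makes that a one-line change:

df.to_csv('headphones.csv', index=False)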

Reviewing the final script

Putting everything together, the following is the final script:

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import pandas as pd

custom_headers = {
    "accept-language": "en-GB,en;q=0.9",
    "user-agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.1 Safari/605.1.15",
}


def get_product_info(url):
    response = requests.get(url, headers=custom_headers)
    if response.status_code != 200:
        print("Error in getting webpage")
        exit(-1)

    soup = BeautifulSoup(response.text, "lxml")

    title_element = soup.select_one("#productTitle")
    title = title_element.text.strip() if title_element else None

    price_element = soup.select_one("#price_inside_buybox")
    price = price_element.text if price_element else None

    rating_element = soup.select_one("#acrPopover")
    rating_text = rating_element.attrs.get("title") if rating_element else None
    rating = rating_text.replace("out of 5 stars", "") if rating_text else None

    image_element = soup.select_one("#landingImage")
    image = image_element.attrs.get("src") if image_element else None

    description_element = soup.select_one("#productDescription")
    description = description_element.text.strip() if description_element else None

    review_elements = soup.select("div.review")

    scraped_reviews = []

    for review in review_elements:
        r_author_element = review.select_one("span.a-profile-name")
        r_author = r_author_element.text if r_author_element else None

        r_rating_element = review.select_one("i.review-rating")
        r_rating = r_rating_element.text.replace("out of 5 stars", "") if r_rating_element else None

        r_title_element = review.select_one("a.review-title")
        r_title_span_element = r_title_element.select_one("span:not([class])") if r_title_element else None
        r_title = r_title_span_element.text if r_title_span_element else None

        r_content_element = review.select_one("span.review-text")
        r_content = r_content_element.text if r_content_element else None

        r_date_element = review.select_one("span.review-date")
        r_date = r_date_element.text if r_date_element else None

        r_verified_element = review.select_one("span.a-size-mini")
        r_verified = r_verified_element.text if r_verified_element else None

        r = {
            "author": r_author,
            "rating": r_rating,
            "title": r_title,
            "content": r_content,
            "date": r_date,
            "verified": r_verified
        }

        scraped_reviews.append(r)

    return {
        "title": title,
        "price": price,
        "rating": rating,
        "image": image,
        "description": description,
        "url": url,
        "reviews": scraped_reviews,
    }


def parse_listing(listing_url):
    response = requests.get(listing_url, headers=custom_headers)
    print(response.status_code)
    soup_search = BeautifulSoup(response.text, "lxml")
    link_elements = soup_search.select("[data-asin] h2 a")
    page_data = []
    for link in link_elements:
        full_url = urljoin(listing_url, link.attrs.get("href"))
        print(f"Scraping product from {full_url[:100]}", flush=True)
        product_info = get_product_info(full_url)
        page_data.append(product_info)

    next_page_el = soup_search.select_one('a:contains("Next")')
    if next_page_el:
        next_page_url = next_page_el.attrs.get('href')
        next_page_url = urljoin(listing_url, next_page_url)
        print(f'Scraping next page: {next_page_url}', flush=True)
        page_data += parse_listing(next_page_url)

    return page_data


def main():
    search_url = "https://www.amazon.com/s?k=bose&rh=n%3A12097479011&ref=nb_sb_noss"
    data = parse_listing(search_url)
    df = pd.DataFrame(data)
    df.to_json("amz.json", orient='records')


if __name__ == '__main__':
    main()

Best practices

Scraping Amazon without proxies or dedicated scraping tools is full of obstacles. Just like many other popular scraping targets, Amazon has rate-limiting in place, meaning it can block your IP address if you exceed the established limit. Apart from that, Amazon uses bot-detection algorithms that can check your HTTP headers for any suspicious details. Also, you should be ready to constantly adapt to the different page layouts and various HTML structures. 

Considering these factors, it’s recommended to follow some common practices to prevent getting detected and blocked by Amazon. Some of the most useful tips are: 

  1. Use a real User-Agent. It's important to make your User-Agent look as plausible as possible, and rotating between several real browser strings helps (see the sketch after this list).

  2. Set your fingerprint. Many websites use Transmission Control Protocol (TCP) and IP fingerprinting to detect bots. To avoid getting spotted, you need to make sure your fingerprint parameters are always consistent. 

  3. Change the crawling pattern. To develop a successful crawling pattern, you should think about how a regular user would behave while exploring a page and add clicks, scrolls, and mouse movements accordingly.
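
As a sketch of the first tip combined with randomized pacing between requests (the user-agent strings below are illustrative examples, not a vetted list):

import random
import time

import requests

# Illustrative user-agent strings; in practice, use current, real browser strings
# and keep the rest of your headers consistent with each one.
USER_AGENTS = [
    'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36',
]

def polite_get(url):
    headers = {
        'user-agent': random.choice(USER_AGENTS),
        'accept-language': 'en-GB,en;q=0.9',
    }
    time.sleep(random.uniform(2, 5))  # random pause before each request
    return requests.get(url, headers=headers)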

And this is only a small portion of the requirements you should keep in mind when scraping Amazon.

An easier solution to extract Amazon data 

Alternatively, you can turn to a ready-made solution designed specifically for scraping Amazon: Amazon Scraper API. With this scraper, you can:

  • Scrape and parse various Amazon page types, including Search, Product, Offer listing, Questions & Answers, Reviews, Best Sellers, and Sellers;

  • Target localized product data in 195 locations worldwide;

  • Retrieve accurate parsed results in JSON format without installing any other library;

  • Enjoy multiple handy features, such as bulk scraping and automated jobs.

Let's look at Amazon Scraper API in action.

Searching products

You can search and extract the products from Amazon with this straightforward code example:

import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_search',
    'query': 'bose',  # Search for "bose"
    'start_page': 1,
    'pages': 10,
    'parse': True,
    'context': [
        {'key': 'category_id', 'value': 12097479011}  # category id for headphones
    ],
}

# Get response
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())

Notice how it requests 10 pages beginning with page 1. Also, we limit the search to category ID 12097479011, which is Amazon's category ID for headphones. You'll get the data returned in JSON format.

Extracting product details

All you need is the product URL — irrespective of the country of the Amazon store. The only change in code is the payload. For example, the following payload extracts details for the Bose QC 45 from Amazon.com:

payload = {
    'source': 'amazon',
    'url': 'https://www.amazon.com/dp/B098FKXT8L',
    'parse': True
}

Another way to get the information is by the ASIN of the product. Again, you need to modify the payload:

payload = {
    'source': 'amazon_product',
    'domain': 'co.uk',
    'query': 'B098FKXT8L',
    'parse': True,
    'context': [
        {
            'key': 'autoselect_variant', 'value': True
        }]
}

Note the optional parameter domain. You can use this parameter to get Amazon data from any domain, such as amazon.co.uk.

Collecting product reviews

You can also extract Amazon product reviews by using the amazon_reviews data source and providing the product ASIN in the payload, for example:

payload = {
    'source': 'amazon_reviews',
    'domain': 'com',
    'query': 'B098FKXT8L',
    'start_page': 1,
    'pages': 3,
    'parse': True
}

The above payload instructs Amazon Scraper API to start from the first page and scrape three pages in total.

Conclusion

You can write code to scrape Amazon products using the Requests and Beautiful Soup libraries. It takes some effort, but it works. Sending custom headers, rotating user agents, and rotating proxies can help you avoid blocks and rate limiting.

However, the easiest solution to scrape Amazon products is using the Amazon Scraper API. Oxylabs also allows you to gather data from 50 other marketplaces using its E-Commerce Scraper API.

If you have any questions, do not hesitate to contact us.

Frequently asked questions

Does Amazon allow scraping?

Scraping publicly available data contained within the Amazon website isn’t considered illegal as long as your actions don’t violate its ToS. However, before engaging in any web scraping activity, our legal experts strongly recommend consulting with lawyers knowledgeable in this field.

Can scraping be detected?

Yes, scraping can be detected by anti-bot software that checks your IP address, browser parameters, user agents, and other details. Once you're detected, the website will present a CAPTCHA, and if it isn't solved, your IP address will get blocked.

Does Amazon ban IP?

Yes, Amazon may ban an IP address if it finds it suspicious. 

How to bypass CAPTCHA while scraping Amazon?

CAPTCHAs are one of the biggest challenges when gathering public data, so the best approach is to minimize your encounters with them in the first place. Avoiding them entirely can be difficult, but here are some tips that help:

  1. Use reliable proxies and rotate your IP addresses.
  2. Reduce the scraping speed by adding random breaks between requests.
  3. Make sure your fingerprint parameters are consistent, or choose Web Unblocker – an AI-powered proxy solution with dynamic fingerprinting functionality.

How to crawl Amazon?

You can utilize free web scraping and crawling tools, like Scrapy, that allow crawling websites on a large scale. Additionally, you can take advantage of Oxylabs’ Web Crawler feature that comes with Amazon Scraper API. It can spider all pages on a website, select the content that you need, and deliver results in bulk.

About the author

Maryia Stsiopkina

Senior Content Manager

Maryia Stsiopkina is a Senior Content Manager at Oxylabs. As her passion for writing was developing, she was writing either creepy detective stories or fairy tales at different points in time. Eventually, she found herself in the tech wonderland with numerous hidden corners to explore. At leisure, she does birdwatching with binoculars (some people mistake it for stalking), makes flower jewelry, and eats pickles.

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.
