
E-Commerce Scraper API Quick Start Guide

Iveta Vistorskyte

2023-11-21 · 3 min read

Oxylabs’ E-Commerce Scraper API is a public data scraper API designed to collect real-time localized data and search information from most e-commerce websites such as Amazon, Google Shopping, eBay, and many others at scale.

Follow this guide for a step-by-step setup and learn how to send your first search query.

Setting up E-Commerce Scraper API

  1. Register, or if you already have an account, log in to the dashboard.

  2. Choose a free trial or a plan that suits your needs. Afterward, a pop-up window will appear, asking you to create an API user. Pick a username and password and click Create API user.

  3. Now, another pop-up will appear with a test query for scraping the page. To initiate the test, copy the given code to your terminal, insert your API user credentials, and run the query.

A test query from the dashboard

The following is an example of this query in code form (the endpoint shown is the Realtime API endpoint; copy the exact URL from your dashboard if it differs):

curl 'https://realtime.oxylabs.io/v1/queries' \
  --user 'USERNAME:PASSWORD' \
  -H 'Content-Type: application/json' \
  -d '{"source": "amazon_search", "query": "shoes", "domain": "com", "geo_location": "90210", "parse": true}'
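If you prefer Python, the same test query can be sketched with the standard library alone. The endpoint below is an assumption based on the Realtime integration method; substitute the exact URL and the API user credentials from your dashboard.

```python
import base64
import json
import urllib.request

# Assumed Realtime endpoint -- confirm against your dashboard.
ENDPOINT = "https://realtime.oxylabs.io/v1/queries"

# The same job payload as in the cURL example above.
payload = {
    "source": "amazon_search",
    "query": "shoes",
    "domain": "com",
    "geo_location": "90210",
    "parse": True,
}

def build_request(username: str, password: str) -> urllib.request.Request:
    """Assemble the POST request with HTTP Basic auth, without sending it."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    return urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

# To actually run the query (requires valid credentials):
# with urllib.request.urlopen(build_request("USERNAME", "PASSWORD")) as resp:
#     results = json.load(resp)
```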

You can also explore how our E-Commerce Scraper API works through the Scraper APIs Playground, accessible via the dashboard.

Integration methods

The example above shows the Realtime integration method. With Realtime, you can send your request and receive data back on the same open HTTPS connection straight away.

You can integrate E-commerce Scraper API using one of the three methods:

  1. Realtime

  2. Push-Pull

  3. Proxy Endpoint

To read more about integration methods and how to choose one, see our documentation. The main differences between them are shown below:

                     Push-Pull      Realtime      Proxy Endpoint
Type                 Asynchronous   Synchronous   Synchronous
Job query format     JSON           JSON          URL
Job status check     Yes            No            No
Batch query          Yes            No            No
Upload to storage    Yes            No            No
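Since Push-Pull is the only asynchronous method, a client typically submits a job and then polls its status until the job finishes. Below is a minimal, hedged sketch of that polling loop; the terminal status values "done" and "faulted" are assumptions drawn from the job lifecycle in the docs, and the status check itself is injected as a callable (for example, a function that GETs the job-status endpoint and reads the "status" field of the JSON reply).

```python
import time
from typing import Callable

def poll_until_done(check_status: Callable[[], str],
                    interval: float = 5.0,
                    max_attempts: int = 60) -> str:
    """Poll a Push-Pull job until it reaches a terminal status.

    check_status is any zero-argument callable returning the job's
    current status string.
    """
    for _ in range(max_attempts):
        status = check_status()
        if status in ("done", "faulted"):
            return status
        time.sleep(interval)  # wait before asking again
    raise TimeoutError("job did not reach a terminal status in time")
```

Injecting the status check keeps the loop independent of any particular HTTP client, so the same function works whether you use urllib, requests, or an async wrapper.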

For full examples of Push-Pull and Proxy Endpoint integration methods, see our GitHub or documentation.

Dedicated Scrapers

We also have dedicated scrapers for our E-commerce Scraper API to target a specific e-commerce marketplace and its page types. We’ve provided a table below showing all available dedicated scrapers:

Domain            Sources
Amazon            amazon, amazon_bestsellers, amazon_pricing, amazon_product, amazon_questions, amazon_reviews, amazon_search, amazon_sellers
Google Shopping   google_shopping, google_shopping_search, google_shopping_product, google_shopping_pricing
Wayfair           wayfair, wayfair_search
Other domains     universal_ecommerce


Parameters

Below are the main query parameters. Check our documentation for more information and additional parameters, like handling specific context types.

Parameter          Description
source             Sets the scraper to process your request.
url or query       Direct URL (link) or keyword, depending on the source.
user_agent_type    Device type and browser. Default value: desktop.
geo_location       Geo-location of a proxy used to retrieve the data.
locale             Locale, as expected in the Accept-Language header.
render             Enables JavaScript rendering when the target requires JavaScript to load content. Only works via the Push-Pull method.
content_encoding   Add this parameter if you are downloading images.
callback_url       URL to your callback endpoint.
parse              true will return parsed data from sources that support this parameter.
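One practical implication of the table above is that a job takes either a url or a query, never both. The small helper below illustrates payload assembly; the mutual-exclusion check is an assumption drawn from the parameter descriptions, not an official validation rule.

```python
from typing import Optional

def build_job(source: str,
              url: Optional[str] = None,
              query: Optional[str] = None,
              **options) -> dict:
    """Assemble a job payload from the parameters described above.

    Exactly one of url or query must be supplied, depending on the source;
    any extra keyword arguments (geo_location, parse, render, ...) are
    passed through to the payload unchanged.
    """
    if (url is None) == (query is None):
        raise ValueError("provide exactly one of url or query")
    job = {"source": source}
    if url is not None:
        job["url"] = url
    else:
        job["query"] = query
    job.update(options)
    return job
```

For example, `build_job("amazon_search", query="shoes", domain="com", geo_location="90210", parse=True)` reproduces the payload used in the test query earlier.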

Response codes

Here are the most common response codes you can encounter using our E-commerce Scraper API. If you receive an error code not found in our documentation, please contact technical support.

Response   Error message                      Description
200        OK                                 All went well.
202        Accepted                           Your request was accepted.
204        No content                         You are trying to retrieve a job that has not been completed yet.
400        Multiple error messages            Wrong request structure. Could be a misspelled parameter or an invalid value. The response body will have a more specific error message.
401        Authorization header not provided  Missing authorization header or incorrect login credentials.
           / Invalid authorization header /
           Client not found
403        Forbidden                          Your account does not have access to this resource.
404        Not found                          The job ID you are looking for is no longer available.
422        Unprocessable entity               There is something wrong with the payload. Make sure it's a valid JSON object.
429        Too many requests                  Exceeded rate limit. Please contact your account manager to increase limits.
500        Internal server error              We're facing technical issues, please retry later. We may already be aware, but feel free to report it anyway.
524        Timeout                            Service unavailable.
612        Undefined internal error           Job submission failed. Retry at no extra cost with faulted jobs, or reach out to us for assistance.
613        Faulted after too many retries     Job submission failed. Retry at no extra cost with faulted jobs, or reach out to us for assistance.
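The table above suggests a simple client-side policy: poll again on 204, back off on 429, retry server-side failures, and fix the request before resending on 4xx errors. A hedged sketch of that classification (the grouping is our reading of the table, not an official contract):

```python
# Transient failures worth retrying; per the table, 612/613 job faults
# can be retried at no extra cost.
RETRYABLE = {500, 524, 612, 613}

def classify(code: int) -> str:
    """Map a response code to a coarse client action."""
    if code in (200, 202):
        return "ok"        # result (or acceptance) received
    if code == 204:
        return "pending"   # job not finished yet -- poll again later
    if code == 429:
        return "throttle"  # rate limit exceeded -- slow down
    if code in RETRYABLE:
        return "retry"     # transient failure -- resubmit
    return "fatal"         # 400/401/403/404/422 -- fix the request first
```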

Using API features

Our E-Commerce Scraper API has a variety of smart built-in features.

1. Web Crawler allows you to crawl any website, select the exact content you require, and have it delivered in bulk. Crucially, it can also discover all pages on a website and fetch data from them at scale and in real time. See our documentation for more details.

2. Scheduler automates recurring web scraping and parsing jobs by letting you schedule them at any interval, whether minutes, hours, or days. With Scheduler, there's no need to repeat requests with the same parameters.

3. Custom Parser enables you to get structured data from any website. The data can be parsed with the help of XPath and CSS expressions, letting you take the necessary information from the HTML and convert it into a readable format.

4. Cloud integration means your data can be delivered to a preferred cloud storage bucket, whether it's AWS S3 or GCS. This eliminates the need for additional requests to fetch results – data goes directly to your cloud storage.

5. Headless Browser enables you to interact with a web page, imitate organic user behavior, and efficiently render JavaScript. You don't need to develop and maintain your own headless browser solution, so you can save time and resources for more critical tasks.
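As an illustration of the Custom Parser feature above, a job payload can embed XPath-based parsing instructions. The schema sketched below (`parsing_instructions` with `_fns`, `_fn`, `_args`) and the target URL are assumptions for illustration only; consult the Custom Parser documentation for the authoritative format.

```python
# Hypothetical job: fetch a product page and extract the title and price
# with XPath expressions instead of relying on a dedicated parser.
job = {
    "source": "universal_ecommerce",
    "url": "https://example.com/product/123",  # placeholder target
    "parse": True,
    "parsing_instructions": {
        "title": {
            "_fns": [{"_fn": "xpath_one", "_args": ["//h1/text()"]}],
        },
        "price": {
            "_fns": [{"_fn": "xpath_one",
                      "_args": ["//span[@class='price']/text()"]}],
        },
    },
}
```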

Dashboard statistics

Within the Oxylabs dashboard, you can track your usage, while in the Statistics section, you’ll find a graph with scraped pages and a table with your API user's data. It includes average response time, daily request counts, and total requests. Additionally, you can filter the statistics to see your usage during specified intervals.

E-commerce Scraper API dashboard

Additional resources

With our Scraper APIs, you can get a 1-week free trial. If you have any questions, don't hesitate to contact us via live chat or email.

For more tutorials and tips on all things data gathering, check out the rest of our blog.

Frequently asked questions

What are the E-Commerce Scraper API rate limits?

The rate limits depend on your plan. If your Plan size (jobs per month) is 5,000, 29,000, 160,000, or 526,000, then the Rate limits (jobs per second) are 5, 15, 50, and 100, respectively.

How to download images using E-Commerce Scraper API?

You can download images either by saving the output to the image extension when using the Proxy Endpoint integration method or passing the content_encoding parameter when using the Push-Pull or Realtime integration methods.
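When images arrive via the content_encoding parameter, the response body typically carries the image bytes as base64-encoded text that must be decoded before saving. A minimal sketch, assuming base64 encoding (check the documentation for the supported values):

```python
import base64

def save_image(encoded_content: str, path: str) -> None:
    """Decode a base64-encoded response body and write the raw image bytes."""
    with open(path, "wb") as f:
        f.write(base64.b64decode(encoded_content))
```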

What are the pricing options for E-Commerce Scraper API?

Multiple options are available: some are aimed at small businesses, while others suit large enterprises. Pricing starts from $49/month.

How does billing for E-Commerce Scraper API work?

Billing is dependent on successful results. You won’t get billed for failed attempts with an error from our side.

About the author

Iveta Vistorskyte

Lead Content Manager

Iveta Vistorskyte is a Lead Content Manager at Oxylabs. Growing up as a writer and a challenge seeker, she decided to welcome herself to the tech-side, and instantly became interested in this field. When she is not at work, you'll probably find her just chillin' while listening to her favorite music or playing board games with friends.

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.
