Pay only for successfully delivered results
Get highly localized real-time data without IP blocks
Enhance efficiency and cut infrastructure costs
No credit card required. The free trial lasts 1 week and includes 5K results.
Track product data in real-time
Implement dynamic pricing strategies
Monitor reviews to build brand reputation
Conduct market research
Get data from Google page types like search and images
Extract SERP elements like featured snippet and local pack
Keep track of your brand and competitor SERP rankings
Optimize SEO strategies
Explore real-time property prices across various platforms
Compare prices for trend analysis
Analyze rental rates in high-demand zones
Make precise property value estimations
Gather real-time prices for flights and accommodations
Compare data across various platforms for strategy refinement
Track and analyze accommodation availability
Analyze customer reviews for insights
Leverage company profiles for B2B lead generation
Scrape essential business details and job postings
Identify potential partners for collaboration
Strengthen business development efforts
Uncover audience preferences through content trend analysis
Explore user engagement across different websites
Monitor media to prevent copyright infringement
Preserve the integrity of intellectual property
Accessing data from challenging websites has never been easier. Explore the capabilities of Web Scraper API with practical code samples.
Input parameters
source (Scraper): Set the scraper to 'amazon_product' to get product data. Other sources: 'amazon_search', 'amazon_pricing', 'amazon_reviews', 'amazon_questions', 'amazon_bestsellers', 'amazon_sellers'.
query (ASIN): Input the 10-character ASIN code of the product you want to scrape.
geo_location (Localization): Specify the 'Deliver to' location.
domain (Amazon domain): Specify the Amazon marketplace you want to scrape.
locale (Interface language): Set the interface language.
render (JavaScript rendering): Enable to load JavaScript-based content.
parse (Structured data): Enable to get structured product data.
Input
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_product',
    'query': 'B0BGYWPWNC',
    'geo_location': '90210',
    'domain': 'com',
    'parse': True
}

# Take a free trial or buy the product on our dashboard to create an API user.
# Replace 'USERNAME' and 'PASSWORD' with your API credentials to run this request.

# Get response by using the real-time endpoint.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
Output preview
...
"product_details": {
    "os": "iOS 16",
    "ram": "1024 GB",
    "asin": "B0BGYWPWNC",
    "color": "Silver",
    "batteries": "1 Lithium Ion batteries required. (included)",
    "form_factor": "Slate",
    "item_weight": "15.5 ounces",
    "manufacturer": "Apple Computer",
    "customer_reviews": "4.2 4.2 out of 5 stars 1,444 ratings 4.2 out of 5 stars",
    "whats_in_the_box": "iPhone, Charger, Mfi cable, SIM Pin ejector",
    "best_sellers_rank": "#139 in Amazon Renewed (See Top 100 in Amazon Renewed) #49 in Renewed Smartphones #1,003 in Climate Pledge Friendly: Electronics",
    "country_of_origin": "China",
    "item_model_number": "A2483",
    "product_dimensions": "0.28 x 2.8 x 5.75 inches",
    "battery_power_rating": "3095",
    "date_first_available": "September 30, 2022",
    "other_display_features": "Wireless",
    "memory_storage_capacity": "1024 GB",
    "connectivity_technologies": "Wi-Fi",
    "ram_memory_installed_size": "1 TB",
    "standing_screen_display_size": "6.1 Inches"
},
...
With Oxylabs Web Scraper API, you can bypass anti-scraping systems and extract large volumes of data from even the most complex websites. We guarantee the accuracy, completeness, and overall quality of the retrieved data.
Custom headers and cookies
Send custom headers and cookies at no extra cost for enhanced control over your scraping (see the sketch below).
Global coverage
Our premium proxy pool spans 195 countries, providing you with unrestricted access to localized data.
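To give a feel for passing custom headers and cookies, here is a minimal sketch that sends them through the payload's context field with the universal source. The exact field names ("headers", "cookies" inside "context") are assumptions for illustration; confirm the schema in the documentation before use.
import requests
from pprint import pprint

# Illustrative sketch only: the "context" field names (headers, cookies) are
# assumptions about the universal source schema; confirm them in the docs.
payload = {
    "source": "universal",
    "url": "https://sandbox.oxylabs.io/products/",
    "context": [
        {"key": "headers", "value": {"Accept-Language": "en-US"}},
        {"key": "cookies", "value": [{"key": "session_id", "value": "example123"}]},
    ],
}

# Replace 'USERNAME' and 'PASSWORD' with your API credentials.
response = requests.request(
    "POST",
    "https://realtime.oxylabs.io/v1/queries",
    auth=("USERNAME", "PASSWORD"),
    json=payload,
)
pprint(response.json())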
OxyCopilot, an integral feature of Web Scraper API, is an AI-powered assistant designed to auto-generate code for scraping requests and parsing instructions, eliminating manual coding:
Use the Scraper API Playground
Input your prompt
Receive ready-to-use code
Try Web Scraper API with free 5K results
Leverage Web Scraper API's smart features for collecting data at scale.
Proxy management
ML-driven proxy selection and rotation using our premium proxy pool from 195 countries.
Custom parameters
Enhance your scraping control with custom headers and cookies at no extra cost.
AI-driven fingerprinting
Unique HTTP headers, JavaScript, and browser fingerprints ensure resilience to dynamic content.
CAPTCHA bypass
Automatic retries and CAPTCHA bypassing for uninterrupted data retrieval.
JavaScript rendering
Accurate, high-quality data extraction from dynamic and interactive websites.
Web Crawler
Comprehensive page discovery on websites, extracting only essential data.
Scheduler
Automate recurring scraping jobs with desired frequency and receive data to AWS S3 or GCS.
Custom Parser
Define your parsing logic using XPath or CSS selectors for structured data collection.
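As a rough illustration of the Custom Parser feature, the sketch below defines parsing instructions that extract a page title with an XPath selector. The "parsing_instructions" structure ("_fns", "xpath_one") is an assumption drawn from the public documentation and may differ for your target; verify the exact syntax before relying on it.
import requests
from pprint import pprint

# Minimal Custom Parser sketch. The "parsing_instructions" fields below
# ("_fns", "xpath_one") are assumed from the public docs; check the
# documentation for the exact syntax supported by your target.
payload = {
    "source": "universal",
    "url": "https://sandbox.oxylabs.io/products/",
    "parse": True,
    "parsing_instructions": {
        "title": {
            "_fns": [
                {"_fn": "xpath_one", "_args": ["//h1/text()"]}
            ]
        }
    },
}

response = requests.request(
    "POST",
    "https://realtime.oxylabs.io/v1/queries",
    auth=("USERNAME", "PASSWORD"),  # replace with your API credentials
    json=payload,
)
pprint(response.json())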
Render JavaScript-based pages with a single line of code, eliminating the need for complex browser development or third-party automation tools. Set up custom browser instructions and enable Headless Browser to execute mouse clicks, input text, scroll pages, wait for elements to appear, and more, as sketched below.
Effortless JavaScript rendering
Browser instructions execution
Seamless data collection
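Here is a loose sketch of a rendered request with browser instructions. The render parameter is documented in the input parameters above; the "browser_instructions" list and its action names (scroll, click, wait) are assumptions for illustration, so consult the documentation for the supported actions and exact field names.
import requests
from pprint import pprint

# Sketch of a rendered request with Headless Browser instructions.
# "render": "html" enables JavaScript rendering; the "browser_instructions"
# actions below are illustrative assumptions; see the docs for the supported
# action types and their exact field names.
payload = {
    "source": "universal",
    "url": "https://sandbox.oxylabs.io/products/",
    "render": "html",
    "browser_instructions": [
        {"type": "scroll", "x": 0, "y": 1000},
        {"type": "click", "selector": {"type": "css", "value": "#load-more"}},
        {"type": "wait", "wait_time_s": 2},
    ],
}

response = requests.request(
    "POST",
    "https://realtime.oxylabs.io/v1/queries",
    auth=("USERNAME", "PASSWORD"),  # replace with your API credentials
    json=payload,
)
pprint(response.json())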
Benefit from our AI-powered web data collection infrastructure that is ready to use straight away.
No need to develop or maintain scrapers and browsers
Bypass anti-scraping systems
Allocate your resources towards analyzing data
Step 1: Enter your endpoint URL, API user credentials, and data payload into a single request.
Step 2: Send this request to our API. We’ll take it from there – you don’t need to take any other actions.
Step 3: Retrieve the result directly from the API or store it in your chosen cloud storage bucket (see the storage delivery sketch after the code sample below).
import requests
from pprint import pprint

username = "USERNAME"
password = "PASSWORD"

payload = {
    "source": "universal",
    "url": "https://sandbox.oxylabs.io/products/",
    "geo_location": "United States",
}

response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=(username, password),
    json=payload,
)

pprint(response.json())
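For Step 3's cloud storage option, the sketch below assumes the asynchronous push-pull endpoint together with "storage_type" and "storage_url" fields; these names are assumptions based on the public documentation, so confirm them before relying on this flow.
import requests
from pprint import pprint

# Sketch of delivering results to your own bucket instead of pulling them
# from the API. The push-pull endpoint and the "storage_type"/"storage_url"
# fields are assumptions drawn from the public docs; verify before use.
payload = {
    "source": "universal",
    "url": "https://sandbox.oxylabs.io/products/",
    "geo_location": "United States",
    "storage_type": "s3",               # or "gcs" for Google Cloud Storage
    "storage_url": "YOUR_BUCKET_NAME",  # bucket that should receive the results
}

response = requests.request(
    "POST",
    "https://data.oxylabs.io/v1/queries",  # assumed asynchronous (push-pull) endpoint
    auth=("USERNAME", "PASSWORD"),
    json=payload,
)
pprint(response.json())  # job details, including the job ID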
Pay only for successful results
Avoid CAPTCHAs and IP blocks
Save time and development costs
$0: 1 week trial, limited to 1 user
$49 + VAT billed monthly: $2.00 / 1K results, 24,500 results
$99 + VAT billed monthly: $1.80 / 1K results, 55,000 results
$249 + VAT billed monthly: $1.65 / 1K results, 151,000 results
Rate limit: 10 requests / s or 30 requests / s, depending on plan
Yearly plans discount: 10% off all plans when paying yearly. Contact sales to learn more.
"Building and maintaining your own scraping and parsing solution is expensive. That’s why we turned to Oxylabs. They offered one of the best price-to-value combinations in the market and helped us save our total web scraping costs."
Wei Zheng
Chief Product Officer at Conductor
Technical API documentation
Discover available scraping parameters and explore code examples for specific targets.
Oxylabs Github repositories
Learn how to scrape websites, use our tools, integrate products, and more.
Setting up Web Scraper API
Quickly integrate and start using Web Scraper API with our quick start guide.
A web scraping API is software that retrieves data from a URL with the help of an API call. It helps establish a connection between a user and a web server to access and extract data.
Web Scraper API can deliver data in raw HTML or structured JSON from any web page, including e-commerce marketplaces and SERPs. Additionally, its JavaScript rendering feature retrieves data from websites that use JavaScript to load content dynamically.
Yes, with the free Scheduler feature, you can automate recurring scraping jobs by scheduling them. Simply put, you don't need to send new requests with identical parameters to receive regular updates. Also, there's no need to create or maintain your scheduling scripts.
Check our documentation to learn more about the Scheduler feature.
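As a loose illustration of the Scheduler, the sketch below creates a schedule that re-runs a stored payload on a cron expression. The endpoint and field names ("cron", "items", "end_time") are assumptions based on the documentation linked above; verify them before use.
import requests
from pprint import pprint

# Illustrative Scheduler sketch: the endpoint and field names ("cron",
# "items", "end_time") are assumptions based on the documentation.
schedule = {
    "cron": "0 3 * * *",  # run every day at 03:00 UTC
    "items": [
        {
            "source": "universal",
            "url": "https://sandbox.oxylabs.io/products/",
            "geo_location": "United States",
        }
    ],
    "end_time": "2032-12-21 12:34:45",
}

response = requests.request(
    "POST",
    "https://data.oxylabs.io/v1/schedules",  # assumed Scheduler endpoint
    auth=("USERNAME", "PASSWORD"),
    json=schedule,
)
pprint(response.json())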
Web Scraper API can deliver real-time results from almost any website worldwide. Delivery time largely depends on the requested target. For more details regarding specific targets, please get in touch with your Account Manager or contact our support team.
Web scraping may be legal in cases where it is done without breaching any laws regarding the target sources or the data itself. We have explored this subject in one of our blog posts, and we highly recommend that you read it and consult with your legal advisor before any scraping project to avoid potential risks.
Using Web Scraper API consists of three main steps:
First, create a request and add the necessary information, such as the endpoint URL, user credentials, and the payload.
Second, send the request to the API.
Finally, receive the results—you can retrieve them via the API or have them delivered to the storage solution of your choice. To see how Web Scraper API looks in action, check out our video here.
While scrapers and parsers go hand-in-hand, they have different functionalities. Simply put, scrapers retrieve the information from the web, while parsers focus on analyzing text based on predefined rules and syntax.
Yes, Web Scraper API comes with a specific job submission rate limit that depends on your plan size. For example, with the free plan (5K results), you can submit 5 jobs per second in total and one rendered job per second. On the other hand, Web Scraper API can bypass rate limiting that websites implement as anti-bot measures.
To see the specifics for each plan, please refer to our documentation.
Yes, Web Scraper API is ISO/IEC 27001:2017 certified. This certification demonstrates our commitment to maintaining a robust Information Security Management System (ISMS) that adheres to internationally recognized standards for data security. To learn more about what ISO/IEC 27001:2017 certification means for our product and users, please read here.
Try out Web Scraper API with Postman before using it at scale. You can import our API collection to Postman and start scraping right away.