Pay only for successfully delivered results
Get highly localized real-time data without IP blocks
Enhance efficiency and cut infrastructure costs
No credit card required. The free trial lasts 1 week and includes 5K results.
Explore real-time property prices across various platforms
Compare prices for trend analysis
Analyze rental rates in high-demand zones
Make precise property value estimations
Gather real-time prices for flights and accommodations
Compare data across various platforms for strategy refinement
Track and analyze accommodation availability
Analyze customer reviews for insights
Leverage company profiles for B2B lead generation
Scrape essential business details and job postings
Identify potential partners for collaboration
Strengthen business development efforts
Uncover audience preferences through content trend analysis
Explore user engagement across different websites
Diligently monitor media for copyright infringement prevention
Preserve the integrity of intellectual property
Scrape automotive websites for vehicle specifications
Analyze historical sales data to identify market trends
Explore customer preferences for deeper insights
Track emerging trends in the automotive market
Accessing data from challenging websites has never been easier. Explore the capabilities of Web Scraper API with practical code samples.
Input parameters
source
Scraper
Set the scraper to 'universal' to get results from the target page.
url
URL
Input URL of the page you want to scrape.
geo_location
Localization
Specify the location of a proxy to get localized results.
render
JavaScript rendering
Enable to load JavaScript-based content.
parse
Structured data
Use together with 'parsing_instructions' to get structured data.
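Assembled into Python, the parameters above form a single request payload. A minimal sketch (the target URL and parameter values are illustrative):

```python
# Build a Web Scraper API payload using the parameters described above.
payload = {
    'source': 'universal',                          # scraper type
    'url': 'https://sandbox.oxylabs.io/products/',  # page to scrape
    'geo_location': 'United States',                # proxy location for localized results
    'render': 'html',                               # enable JavaScript rendering
}

# 'parse' is used together with 'parsing_instructions' for structured data;
# omit both to receive raw HTML instead.
print(payload['source'])
```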
Input
Output
Output preview
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'universal',
    'url': 'https://www.zillow.com/homedetails/10066-Cielo-Dr-Beverly-Hills-CA-90210/243990393_zpid/'
}

# Take a free trial or buy the product on our dashboard to create an API user.
# For this script to work, replace 'USERNAME' and 'PASSWORD' below with the
# credentials of the API user you created.

# Get response by using the real-time endpoint.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())
{
  "results": [
    {
      "content": "\n\n ... \n\n",
      "created_at": "2024-05-06 11:13:55",
      "updated_at": "2024-05-06 11:14:27",
      "page": 1,
      "url": "https://www.zillow.com/homedetails/10066-Cielo-Dr-Beverly-Hills-CA-90210/243990393_zpid/",
      "job_id": "7193206343652609025",
      "status_code": 200
    }
  ]
}
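The page HTML arrives under each job's `content` field. A small sketch of unpacking a response shaped like the preview above, using an inline stand-in for `response.json()`:

```python
# Stand-in for response.json() from the real-time endpoint (values illustrative).
data = {
    "results": [
        {
            "content": "<html>...</html>",
            "status_code": 200,
        }
    ]
}

# Each job lands in the 'results' list; check the status and grab the HTML.
for job in data["results"]:
    if job["status_code"] == 200:
        html = job["content"]
        print(len(html))
```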
With Oxylabs Web Scraper API, you can bypass anti-scraping systems and extract large volumes of data from even the most complex websites. We guarantee the accuracy, completeness, and overall quality of the retrieved data.
Custom headers and cookies
Send custom headers and cookies at no extra cost for enhanced control over your scraping.
Global coverage
Our premium proxy pool spans 195 countries, providing you with unrestricted access to localized data.
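The two capabilities above can be combined in one payload. A minimal sketch in Python; the `context` shape for headers and cookies is assumed from the universal-source conventions, so verify it against the API documentation before relying on it:

```python
# Sketch: custom headers and cookies via the 'context' parameter, plus a
# geo_location for localized results (shapes assumed; check the API docs).
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/',
    'geo_location': 'Germany',  # localized results from the global proxy pool
    'context': [
        {'key': 'headers', 'value': {'Accept-Language': 'de-DE'}},
        {'key': 'cookies', 'value': [{'key': 'session', 'value': 'example'}]},
    ],
}
```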
Try Web Scraper API with free 5k results
Leverage Web Scraper API smart features for collecting data at scale.
Proxy management
ML-driven proxy selection and rotation using our premium proxy pool from 195 countries.
Custom parameters
Enhance your scraping control with custom headers and cookies at no extra cost.
AI-driven fingerprinting
Unique HTTP headers, JavaScript, and browser fingerprints ensure resilience to dynamic content.
CAPTCHA bypass
Automatic retries and CAPTCHA bypassing for uninterrupted data retrieval.
JavaScript rendering
Accurate, high-quality data extraction from dynamic and interactive websites.
Web Crawler
Comprehensive page discovery on websites, extracting only essential data.
Scheduler
Automate recurring scraping jobs with desired frequency and receive data to AWS S3 or GCS.
Custom Parser
Define your parsing logic using XPath or CSS selectors for structured data collection.
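As a sketch, Custom Parser instructions might look like this in Python; the `_fns`/`_fn` convention follows the documented instruction format, while the field name and XPath selector are illustrative:

```python
# Sketch: structured extraction with the Custom Parser. 'title' and the XPath
# expression are illustrative; adapt the selectors to your target page.
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/',
    'parse': True,
    'parsing_instructions': {
        'title': {
            '_fns': [
                {'_fn': 'xpath_one', '_args': ['//h1/text()']},
            ]
        },
    },
}
```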
Render JavaScript-based pages with a single line of code, eliminating the need for complex browser development or automated third-party tools. Set up custom browser instructions and enable Headless Browser to execute mouse clicks, input text, scroll pages, wait for elements to appear, and more.
Effortless JavaScript rendering
Browser instructions execution
Seamless data collection
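The rendering workflow above can be sketched as a payload: one parameter switches on JavaScript rendering, and browser instructions drive the Headless Browser. The instruction shapes below are assumptions based on the feature description, so confirm the exact names in the API documentation:

```python
# Sketch: JavaScript rendering plus browser instructions (instruction shapes
# are illustrative of the Headless Browser feature; verify in the API docs).
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/',
    'render': 'html',  # single parameter enables JavaScript rendering
    'browser_instructions': [
        {'type': 'click', 'selector': {'type': 'xpath', 'value': '//button'}},
        {'type': 'wait', 'wait_time_s': 2},  # wait for dynamic content to load
    ],
}
```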
Benefit from our AI-powered web data collection infrastructure that is ready to use straight away.
No need to develop or maintain scrapers and browsers
Bypass anti-scraping systems
Allocate your resources towards analyzing data
Step 1: Enter your endpoint URL, API user credentials, and data payload into a single request.
Step 2: Send this request to our API. We’ll take it from there – you don’t need to take any other actions.
Step 3: Retrieve the result directly from the API or store it in your chosen cloud storage bucket.
import requests
from pprint import pprint

username = "USERNAME"
password = "PASSWORD"

payload = {
    "source": "universal",
    "url": "https://sandbox.oxylabs.io/products/",
    "geo_location": "United States",
}

response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=(username, password),
    json=payload,
)

pprint(response.json())
Scraper APIs Playground
Try this exclusive dashboard feature for a firsthand encounter with our Scraper APIs. Input your target URL, customize parameters, and watch results unfold.
Postman
Try out Web Scraper API before using it at scale with Postman. Import our API collection to Postman and start scraping right away.
Web Scraper API is designed to handle the workload for you, ensuring seamless access to essential data. Backed by our commitment to excellence, we offer top-notch customer support and extensive resources to assist you 24/7.
Pay only for successful results
Avoid CAPTCHAs and IP blocks
Save time and development costs
Free: $0, 1 week trial, limited to 1 user
$49 + VAT billed monthly: $2.80 / 1K results, 17,500 results included
$99 + VAT billed monthly: $2.60 / 1K results, 38,000 results included
$249 + VAT billed monthly: $2.40 / 1K results, 104,000 results included
Rate limits: 10 requests / s or 30 requests / s, depending on the plan
Yearly plans discount
Get 10% off all our plans by paying yearly. Contact sales to learn more.
We accept these payment methods:
Technical API documentation
Discover available scraping parameters and explore code examples for specific targets.
Oxylabs Github repositories
Learn how to scrape websites, use our tools, integrate products, and more.
Setting up Web Scraper API
Quickly integrate and start using Web Scraper API with our quick start guide.
A web scraping API is software that retrieves data from a URL with the help of an API call. It helps establish a connection between a user and a web server to access and extract data.
Web Scraper API can deliver the HTML code of the page. Additionally, its JavaScript rendering feature retrieves the required HTML from websites that use JavaScript for dynamic content loading. The Custom Parser feature can also be used to obtain data in JSON format.
Yes, we offer the free Scheduler feature for all Scraper APIs. You can automate your recurring scraping jobs by scheduling them. Simply put, you don't need to send new requests with identical parameters to receive regular updates of the same public data. Also, there's no need to create or maintain your scheduling scripts. Check our documentation to learn more about the Scheduler feature.
Web Scraper API can deliver real-time results from almost any website worldwide. The delivery time highly depends on a requested target. For more details regarding specific targets, please get in touch with your Account Manager or contact our support team.
Web scraping may be legal in cases where it is done without breaching any laws regarding the source targets or the data itself. We have explored this subject in one of our blog posts, and we highly recommend that you read it and consult with your legal advisor before any scraping project to avoid potential risks.
Using Web Scraper API consists of three main steps. First, you put together a request, adding the necessary information, like the endpoint URL, user credentials, and the payload. Second, you send the request to the API. Finally, you receive the results: retrieve them via the API or have them delivered to the storage solution of your choice. To see how Web Scraper API looks in action, check out our video here.
While scrapers and parsers go hand-in-hand, they have different functionalities. Simply put, scrapers retrieve the information from the web, while parsers focus on analyzing text based on predefined rules and syntax.
Yes, Web Scraper API comes with a specific job submission rate limit, which depends on your plan size. For example, the free plan comes with 5K results and allows up to 5 job submissions per second, or one per second for rendered jobs. On the other hand, Web Scraper API itself can bypass the rate limiting that websites implement as an anti-bot measure.
To see the specifics for each plan, please refer to our documentation.
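On the client side, a plan's submission limit (for example, 10 requests per second) can be respected with a simple pacing loop. A minimal sketch, independent of any particular plan:

```python
import time

def paced(jobs, max_per_second=10):
    """Yield jobs no faster than max_per_second, to stay within a plan's limit."""
    interval = 1.0 / max_per_second
    for job in jobs:
        start = time.monotonic()
        yield job
        # Sleep off whatever remains of this job's time slot.
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)

# Example: pace 5 payloads at up to 10 submissions per second.
sent = [job for job in paced([{'id': i} for i in range(5)], max_per_second=10)]
print(len(sent))  # 5
```

Each yielded job would then be submitted with `requests.post` as in the samples above; the pacing here only throttles how fast jobs leave the client.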
Yes, Web Scraper API is ISO/IEC 27001:2017 certified. This certification demonstrates our commitment to maintaining a robust Information Security Management System (ISMS) that adheres to internationally recognized standards for data security. To learn more about what ISO/IEC 27001:2017 certification means for our product and users, please read here.
Get the latest news from the data gathering world
Scale up your business with Oxylabs®
GET IN TOUCH
General:
hello@oxylabs.io
Support:
support@oxylabs.io
Career:
career@oxylabs.io
Certified data centers and upstream providers
Connect with us
Advanced proxy solutions
Resources
Innovation hub
oxylabs.io © 2024 All Rights Reserved