
Get Data From Any Website With Web Scraper API

  • Pay only for successfully delivered results

  • Get highly localized real-time data without IP blocks

  • Enhance efficiency and cut infrastructure costs

See it in action

No credit card required. The free trial lasts 1 week and includes 5K results.

Web Scraper API for various business use cases

Scrape real estate data

  • Explore real-time property prices across various platforms

  • Compare prices for trend analysis

  • Analyze rental rates in high-demand zones

  • Make precise property value estimations

Idealista Redfin Zillow Zoopla

Extract travel industry data

  • Gather real-time prices for flights and accommodations

  • Compare data across various platforms for strategy refinement

  • Track and analyze accommodation availability

  • Analyze customer reviews for insights

Airbnb Agoda Booking TripAdvisor

Collect B2B intelligence data

  • Leverage company profiles for B2B lead generation

  • Scrape essential business details and job postings

  • Identify potential partners for collaboration

  • Strengthen business development efforts

Crunchbase ZoomInfo AngelList Product Hunt

Scrape entertainment websites

  • Uncover audience preferences through content trend analysis

  • Explore user engagement across different websites

  • Diligently monitor media for copyright infringement prevention

  • Preserve the integrity of intellectual property

Netflix SoundCloud YouTube IMDb

Retrieve comprehensive vehicle data

  • Scrape automotive websites for vehicle specifications

  • Analyze historical sales data to identify market trends

  • Explore customer preferences for deeper insights

  • Track emerging trends in the automotive market

AutoEurope RockAuto Halfords Autotrader

Dive into code samples

Accessing data from challenging websites has never been easier. Explore the capabilities of Web Scraper API with practical code samples.

Input parameters

source
Set the scraper to 'universal' to get results from the target page.

url
Enter the URL of the page you want to scrape.

geo_location
Specify the location of a proxy to get localized results.

render (JavaScript rendering)
Enable to load JavaScript-based content.

parse (Structured data)
Use together with 'parsing_instructions' to get structured data.



import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'universal',
    'url': ''
}

# Take a free trial or buy the product on our dashboard to create an API user.
# For this script to work, replace 'USERNAME' and 'PASSWORD' below with the
# credentials of the API user you created.

# Get response by using the real-time endpoint.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())

Output preview

{
  "results": [
    {
      "content": "\n\n...",
      "created_at": "2024-05-06 11:13:55",
      "updated_at": "2024-05-06 11:14:27",
      "page": 1,
      "url": "",
      "job_id": "7193206343652609025",
      "status_code": 200
    }
  ]
}

Try Web Scraper API out for yourself

Discover Scraper APIs Playground on our dashboard for a firsthand interaction with Scraper APIs, and read our blog post to learn how to scrape Zillow listings with Python.

Collect quality data from any URL

With Oxylabs Web Scraper API, you can bypass anti-scraping systems and extract large volumes of data from even the most complex websites. We guarantee accurate, complete, and high-quality results.

Custom headers and cookies

Send custom headers and cookies at no extra cost for enhanced control over your scraping.
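As a sketch of how this could look in a request, the payload below attaches custom headers and cookies through the `context` field of a `universal` job. The `context` shape follows Oxylabs' documentation, but every header and cookie value here is an illustrative placeholder, not a recommendation.

```python
# Sketch: custom headers and cookies via the 'context' field of a
# 'universal' payload. All header/cookie values are placeholders.
payload = {
    "source": "universal",
    "url": "https://example.com",
    "context": [
        {
            "key": "headers",
            "value": {
                "Accept-Language": "de-DE",
                "User-Agent": "my-crawler/1.0",
            },
        },
        {
            "key": "cookies",
            "value": [
                {"key": "session_id", "value": "abc123"},
            ],
        },
    ],
}

# Send it exactly like the other samples on this page:
# requests.request('POST', 'https://realtime.oxylabs.io/v1/queries',
#                  auth=('USERNAME', 'PASSWORD'), json=payload)
```

Because headers and cookies travel inside the payload, no extra endpoint or configuration step is needed.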

Global coverage

Our premium proxy pool spans 195 countries, providing you with unrestricted access to localized data.

Try Web Scraper API with free 5k results

Advanced features

Leverage Web Scraper API smart features for collecting data at scale.

Proxy management

ML-driven proxy selection and rotation using our premium proxy pool from 195 countries.

Custom parameters

Enhance your scraping control with custom headers and cookies at no extra cost.

AI-driven fingerprinting

Unique HTTP headers, JavaScript, and browser fingerprints ensure resilience to dynamic content.

CAPTCHA bypass

Automatic retries and CAPTCHA bypassing for uninterrupted data retrieval.

JavaScript rendering

Accurate, high-quality data extraction from dynamic and interactive websites.

Web Crawler

Comprehensive page discovery on websites, extracting only essential data.


Scheduler

Automate recurring scraping jobs at your desired frequency and receive data to AWS S3 or GCS.

Custom Parser

Define your parsing logic using XPath or CSS selectors for structured data collection.
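A minimal sketch of what such parsing logic could look like: the `parsing_instructions` structure and function names like `xpath_one` follow Oxylabs' Custom Parser documentation, while the target URL and selectors are illustrative placeholders.

```python
# Sketch: a 'parsing_instructions' payload that extracts a page title and
# all link targets. URL and selectors are illustrative placeholders.
payload = {
    "source": "universal",
    "url": "https://example.com",
    "parse": True,
    "parsing_instructions": {
        "title": {
            # 'xpath_one' returns the first match.
            "_fns": [{"_fn": "xpath_one", "_args": ["//h1/text()"]}]
        },
        "links": {
            # 'xpath' returns all matches as a list.
            "_fns": [{"_fn": "xpath", "_args": ["//a/@href"]}]
        },
    },
}
```

With `parse` set to `True`, the API returns the selected fields as structured JSON instead of raw HTML.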

Headless Browser

Render JavaScript-based pages with a single line of code, eliminating the need for complex browser development or automated third-party tools. Set up custom browser instructions and enable Headless Browser to execute mouse clicks, input text, scroll pages, wait for elements to appear, and more.

  • Effortless JavaScript rendering

  • Browser instructions execution

  • Seamless data collection

Learn more
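The combination described above could be sketched as the payload below. Rendering is switched on with `render`, and the interaction steps go in `browser_instructions`; the instruction and field names here are assumptions modeled on Oxylabs' Headless Browser documentation, and the selectors are placeholders, so verify against the current docs before use.

```python
# Sketch: JavaScript rendering plus browser instructions that type a query,
# click a button, and wait for content to load. Field names are assumptions;
# selectors are placeholders.
payload = {
    "source": "universal",
    "url": "https://example.com",
    "render": "html",
    "browser_instructions": [
        {"type": "input",
         "selector": {"type": "css", "value": "#search"},
         "value": "wireless headphones"},
        {"type": "click",
         "selector": {"type": "css", "value": "#submit"}},
        {"type": "wait", "wait_time_s": 2},
    ],
}
```

The instructions run in order inside the managed browser, so no Selenium or Puppeteer setup is needed on your side.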

Get a maintenance-free scraping infrastructure

Benefit from our AI-powered web data collection infrastructure that is ready to use straight away.

  • No need to develop or maintain scrapers and browsers

  • Bypass anti-scraping systems

  • Allocate your resources towards analyzing data

Simple integration

Step 1: Enter your endpoint URL, API user credentials, and data payload into a single request.

Step 2: Send this request to our API. We’ll take it from there – you don’t need to take any other actions. 

Step 3: Retrieve the result directly from the API or store it in your chosen cloud storage bucket.


import requests
from pprint import pprint

username = "USERNAME"
password = "PASSWORD"

payload = {
    "source": "universal",
    "url": "",
    "geo_location": "United States",
}

response = requests.request(
    "POST",
    "https://realtime.oxylabs.io/v1/queries",
    auth=(username, password),
    json=payload,
)

pprint(response.json())

Scraper APIs Playground

Try this exclusive dashboard feature for a firsthand encounter with our Scraper APIs. Input your target URL, customize parameters, and watch results unfold.


Test Web Scraper API before using it at scale with Postman: import our API collection into Postman and start scraping right away.

What do others say?

I am a big fan of their Web Scraper API & Residential Proxy products. There are many useful features and options to customize the way requests are made, and these are all available for requests made in bulk, which is our most common use case. The support team is also great.

Alex L.

Software Developer

Web Scraper API is designed to handle the workload for you, ensuring seamless access to essential data. Backed by our commitment to excellence, we offer top-notch customer support and extensive resources to assist you 24/7.

Web Scraper API pricing


Pay only for successful results

Avoid CAPTCHAs and IP blocks

Save time and development costs

Don’t miss out

Free Trial

  • 1-week trial, limited to 1 user
  • Rate limit: 5 requests / s

$49 + VAT billed monthly

  • $2.80 / 1K results
  • Rate limit: 10 requests / s

$99 + VAT billed monthly

  • $2.60 / 1K results
  • Rate limit: 15 requests / s

$249 + VAT billed monthly

  • $2.40 / 1K results
  • Rate limit: 30 requests / s

Plan features

  • JavaScript rendering
  • Country-level targeting
  • 24/7 support
  • Dedicated Account Manager

Yearly plans discount

10% off all our plans when paying yearly. Contact customer support to learn more.


Frequently asked questions

What is a web scraping API?

A web scraping API is software that retrieves data from a URL with the help of an API call. It helps establish a connection between a user and a web server to access and extract data.

What type of data can I extract with Web Scraper API?

Web Scraper API can deliver the HTML code of the page. Additionally, it leverages the JavaScript rendering feature to retrieve required HTML from websites utilizing JavaScript for dynamic content loading. The Custom Parser feature can also be used to obtain data in JSON format.

Can I automate recurring scraping jobs with Web Scraper API?

Yes, we offer the free Scheduler feature for all Scraper APIs. You can automate your recurring scraping jobs by scheduling them. Simply put, you don't need to send new requests with identical parameters to receive regular updates of the same public data. Also, there's no need to create or maintain your scheduling scripts. Check our documentation to learn more about the Scheduler feature.
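As a sketch, scheduling a recurring job could look like the snippet below. The endpoint and field names are modeled on Oxylabs' Scheduler documentation and should be verified against the current docs; the cron expression, job payload, and end time are illustrative placeholders.

```python
# Sketch: a schedule definition. 'items' holds ordinary job payloads,
# 'cron' sets the frequency, 'end_time' stops the schedule. All values
# below are placeholders.
schedule = {
    "cron": "0 3 * * *",  # run every day at 03:00 UTC
    "items": [
        {"source": "universal", "url": "https://example.com"},
    ],
    "end_time": "2032-12-31 00:00:00",
}

# Submit it with the same API credentials used for scraping jobs:
# requests.post('https://data.oxylabs.io/v1/schedules',
#               auth=('USERNAME', 'PASSWORD'), json=schedule)
```

Once created, the schedule reuses the stored payload on every run, so there is no need to resend identical requests.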

How long does Web Scraper API take to give the results back?

Web Scraper API can deliver real-time results from almost any website worldwide. The delivery time highly depends on the requested target. For more details regarding specific targets, please get in touch with your Account Manager or contact our support team.

Is it legal to scrape a website?

Web scraping may be legal in cases where it is done without breaching any laws regarding the target sources or the data itself. We have explored this subject in one of our blog posts, and we highly recommend that you read it and consult your legal advisor before starting any scraping project to avoid potential risks.

How to use a web scraping API?

Using Web Scraper API consists of three main steps. First, you put together a request, adding the necessary information, like the endpoint URL, user credentials, and the payload. Second, you send the request to the API. Finally, you receive the results – you can retrieve them via the API or have them delivered to the storage solution of your choice. To see how Web Scraper API looks in action, check out our video here.

What is the difference between a scraper and a parser?  

While scrapers and parsers go hand-in-hand, they have different functionalities. Simply put, scrapers retrieve the information from the web, while parsers focus on analyzing text based on predefined rules and syntax.

Does Web Scraper API have rate limits? 

Yes, Web Scraper API comes with a job submission rate limit that depends on your plan size. For example, the free plan comes with 5K results and lets you submit 5 jobs per second in total, or one job per second for rendered jobs. On the other hand, Web Scraper API can bypass the rate limiting that websites implement as an anti-bot measure.

To see the specifics for each plan, please refer to our documentation.

Is Web Scraper API ISO certified?

Yes, Web Scraper API is ISO/IEC 27001:2017 certified. This certification demonstrates our commitment to maintaining a robust Information Security Management System (ISMS) that adheres to internationally recognized standards for data security. To learn more about what ISO/IEC 27001:2017 certification means for our product and users, please read here.

More FAQs
