Integrating proxies with any scraping or web requests library is practically unavoidable. Proxies help you avoid IP blocks from target websites and minimize the risks that come with exposing your own IP address. In this proxy server/Python integration guide, we'll demonstrate how to set up Oxylabs Residential, Datacenter Proxies, and Web Unblocker with Python Requests and explain two methods to rotate Datacenter Proxies.
The Python Requests library is a user-friendly de facto standard for managing HTTP/1.1 requests. It’s one of the most popular third-party Python packages, with around 233 million monthly downloads. The Requests library simplifies HTTP requests and response management without you having to enter query strings in the URL manually.
Installing the Python Requests library is straightforward. Open your integrated development environment’s (IDE) Python command terminal and execute the following pip command:
pip install requests
On a Windows device, use the following command:
python -m pip install requests
Once installed, you can import the Requests library by using the following statement:
import requests
The Python Requests module supports different methods to send HTTP requests, including GET, POST, and PUT. For the sake of demonstration, the following code snippet uses the requests.get() method to send an HTTP GET request to http://httpbin.org/ip.
import requests
response = requests.get("http://httpbin.org/ip")
print("Response:", response, end="\n\n")
print("Response Status Code:", response.status_code, end="\n\n")
print("Response data in Content format:\n", response.content, end="\n\n")
print("Response data in Text format:\n", response.text, end="\n\n")
print("Response data in JSON format:\n", response.json())
The above code sends an HTTP GET request and stores the response in the response object. Next, it prints status_code and the received content in different formats, as can be seen in the output snippet below:
You can provide your proxy IP and proxy authentication credentials within the get() function. Python Requests proxy integration is a three-step procedure.
Step 1: Assuming that you have already installed the Python Requests library, you must first import the Requests library before using any of its functionalities:
import requests
Step 2: Create a dictionary variable containing all the information about your proxy endpoint, including the proxy address, port number, and proxy authentication credentials:
proxies = {
"PROTOCOL": "PROXY_TYPE://USERNAME:PASSWORD@PROXY_ADDRESS:PORT_NUMBER"
}
Here, USERNAME and PASSWORD are your Oxylabs sub-user’s credentials. The request PROTOCOL can be HTTP or HTTPS and isn’t necessarily the same as the PROXY_TYPE.
The following code demonstrates creating a dictionary with one Residential Proxy endpoint for an HTTP request, and another for HTTPS:
proxies = {
"http": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
"https": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777"
}
Here, you can use country-specific entries. For example, if you enter gb-pr.oxylabs.io as the PROXY_ADDRESS and 20000 as the PORT_NUMBER, you’ll acquire a United Kingdom exit node. Please refer to our documentation for a complete list of country-specific entry nodes and sticky session details.
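As an illustration, a country-specific proxies dictionary could be sketched like this (USERNAME and PASSWORD are placeholders for your Oxylabs sub-user credentials):

```python
# Sketch: Residential Proxy configuration targeting a United Kingdom exit node.
# USERNAME and PASSWORD are placeholders; replace them with real credentials.
proxies = {
    "http": "http://USERNAME:PASSWORD@gb-pr.oxylabs.io:20000",
    "https": "http://USERNAME:PASSWORD@gb-pr.oxylabs.io:20000",
}
```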
The following table summarizes the proxy server configuration for both Shared and Dedicated Datacenter Proxies:
Datacenter Proxy | Proxy Type | Proxy Address | Port | User Credentials |
---|---|---|---|---|
Enterprise Dedicated | HTTP or SOCKS5 | Your selected IP from the acquired list (e.g., 1.2.3.4) | 60000 | Oxylabs sub-user’s username and password |
Self-Service Dedicated | HTTP, HTTPS, or SOCKS5 | ddc.oxylabs.io | 8001 | Oxylabs sub-user’s username and password |
Datacenter | HTTP, HTTPS, or SOCKS5 | dc.oxylabs.io | 8001 | Oxylabs sub-user’s username and password |
In the case of Enterprise Dedicated Datacenter Proxies purchased via sales, you’ll have to choose an IP address from the acquired list. Take a look at our documentation to learn more.
For Self-Service Dedicated Datacenter Proxies acquired via the Oxylabs dashboard, the port number corresponds to the sequential number of the IP address from the obtained list. Please take a look at our documentation to find out more.
With the pay-per-IP subscription plan, each port number is assigned to an IP address in sequence from your list. This means port 8001 uses the first IP from the list. For more details, please review our documentation.
For the pay-per-traffic subscription, port 8001 will randomly select an IP address but will maintain the same IP for the session's duration. To connect to a specific country’s proxy, for example a German proxy, include the two-letter country code in the user authentication string, like this: user-USERNAME-country-DE:PASSWORD. Please refer to our documentation for additional information.
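Based on the format above, a country-targeted Datacenter Proxy configuration might be sketched like this (credentials are placeholders):

```python
# Sketch: pay-per-traffic Datacenter Proxy targeting German exit nodes.
# The two-letter country code (DE) is embedded in the username string.
endpoint = "http://user-USERNAME-country-DE:PASSWORD@dc.oxylabs.io:8001"
proxies = {"http": endpoint, "https": endpoint}
```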
For ISP Proxies, use the following configuration:
Proxy Type: HTTP, HTTPS, or SOCKS5
Proxy Address: isp.oxylabs.io
Port: 8001
To integrate Web Unblocker as an HTTP or HTTPS proxy, use the unblock.oxylabs.io:60000 endpoint and ignore the SSL certificate. You can also utilize its built-in proxies, a Headless Browser, and other functionalities by sending them as request headers. Check out our documentation to learn more.
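A minimal Web Unblocker sketch based on the endpoint above (credentials are placeholders, and the x-oxylabs-render header is one example of the feature headers described in the documentation):

```python
import requests

proxies = {
    "http": "http://USERNAME:PASSWORD@unblock.oxylabs.io:60000",
    "https": "http://USERNAME:PASSWORD@unblock.oxylabs.io:60000",
}
# Example feature header enabling JavaScript rendering via a Headless Browser.
headers = {"x-oxylabs-render": "html"}

def fetch_unblocked(url="https://ip.oxylabs.io/"):
    # verify=False ignores the SSL certificate, as Web Unblocker requires.
    return requests.get(url, proxies=proxies, headers=headers, verify=False)

# Uncomment once real credentials are in place:
# print(fetch_unblocked().text)
```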
Step 3: The last step is to send an HTTP request using a method of your choice—get(), post(), or put()—and pass the proxies dictionary along with the target URL.
The following code sends a GET request to https://ip.oxylabs.io/ by using the Residential Proxy server configuration described in step 2. It further prints the response in text:
response = requests.get("https://ip.oxylabs.io/", proxies=proxies)
print(response.text)
For Web Unblocker, you must ignore the SSL certificate by passing an additional argument:
response = requests.get("https://ip.oxylabs.io/", verify=False, proxies=proxies)
print(response.text)
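Putting the three steps together for Residential Proxies, the full script is a short sketch (credentials are placeholders):

```python
# Step 1: import the library.
import requests

# Step 2: define the proxies dictionary (replace placeholders with credentials).
proxies = {
    "http": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
    "https": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
}

# Step 3: send the request through the proxy (uncomment with real credentials):
# response = requests.get("https://ip.oxylabs.io/", proxies=proxies)
# print(response.text)
```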
The combined code of the above three steps will produce output like this:
Some scraping targets require a session for HTTP communications. In this case, you need to configure proxies with the request session you create.
The following code demonstrates creating a request session and configuring a residential proxy:
import requests
session_obj = requests.Session()
session_obj.proxies = {
"http": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
"https": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777"
}
response = session_obj.get("https://ip.oxylabs.io/")
print(response.text)
As you can see, first, you have to create a new request session and store it in session_obj. Next, you must set the proxies property with one of the proxy endpoints.
The configured session_obj is then used to send a GET request to https://ip.oxylabs.io/. Lastly, the code prints the returned response in text format.
You can test your proxy server settings by sending a GET request to https://ip.oxylabs.io/location.
Use the following code to test the proxy connectivity. Don’t forget to use your Oxylabs sub-user’s username and password:
import requests
proxies = {
"http": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
"https": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777"
}
response = requests.get("https://ip.oxylabs.io/location", proxies=proxies)
print("Response Status Code:", response.status_code)
print("Response data in Content format:\n", response.content)
If the status code is 200 and the output IP differs from your actual network IP address, then your proxy server is configured correctly.
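That check can also be automated. The following sketch (not from the original article) compares the IP returned with and without the proxy:

```python
import requests

# Placeholder credentials; replace with your Oxylabs sub-user's details.
proxies = {
    "http": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
    "https": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
}

def proxy_works(url="https://ip.oxylabs.io/"):
    """Return True if the proxied request succeeds and shows a different IP."""
    direct_ip = requests.get(url).text.strip()
    proxied = requests.get(url, proxies=proxies)
    return proxied.status_code == 200 and proxied.text.strip() != direct_ip

# Uncomment once real credentials are in place:
# print("Proxy configured correctly:", proxy_works())
```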
Some websites discourage automated scraping of content and can block any suspected IP addresses. If you’re using a single proxy server IP for iterative scraping requests, there is a good chance your proxy IP will be blocked.
Oxylabs Residential Proxies can either randomly change the proxy address for each request or keep the same proxy IP for up to 30 minutes. Shared Datacenter Proxies offer the same options but can also keep the same IP indefinitely. Thus, Oxylabs Residential Proxies, Shared Datacenter Proxies, and Web Unblocker rotate proxies out of the box and don’t require external rotation. Check out our documentation for Residential and Shared Datacenter Proxies to find out more.
Dedicated Datacenter Proxies, however, don’t have a built-in rotation feature, but rotation can be added with our Proxy Rotator. With this tool, you can easily automate the rotation of Dedicated Datacenter Proxies.
Alternatively, you can rotate proxies yourself in Python. The Requests library doesn’t have a built-in rotation feature, but you can still rotate proxies using the following two methods.
This straightforward process involves creating a list of proxy endpoints and randomly selecting one before every new HTTP request.
Assuming you have a proxy list, you can use the following code to rotate the proxies from the given proxy list:
import requests
import random
proxies = [
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_1:10000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_2:20000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_3:30000",
    # ...
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_N:100000"
]
for i in range(len(proxies)):
    random_proxy = random.choice(proxies)
    proxy = {"http": random_proxy, "https": random_proxy}
    response = requests.get("https://ip.oxylabs.io/", proxies=proxy)
    print("Proxy used: ", random_proxy)
    print("Response Status Code:", response.status_code)
    print("Response data in Text format:\n", response.text)
The above code creates a list of proxy endpoints and uses random.choice(), which returns a randomly selected element from the specified sequence, in our case proxies. Each call to the choice() method randomly selects a new proxy, which is then used to create the proxy dictionary for the subsequent HTTP request.
The for loop in this code makes as many GET requests to https://ip.oxylabs.io/ as there are endpoints in the proxies list, using a random proxy endpoint for each new HTTP request.
The choice() method can select the same proxy endpoint multiple times, since every element of the list is equally likely in each call. As a result, the same endpoint may repeat across several consecutive HTTP requests.
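If you want random order without repeating an endpoint within one pass over the list, one alternative (not covered in the article) is random.shuffle():

```python
import random

# Placeholder endpoints; replace with real proxy addresses and credentials.
proxies = [
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_1:10000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_2:20000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_3:30000",
]

random.shuffle(proxies)  # random order, but each endpoint appears exactly once
rotation = [{"http": endpoint, "https": endpoint} for endpoint in proxies]
# Each dictionary in rotation can be passed to requests.get(..., proxies=...).
```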
The randomness of random.choice() makes the previous proxy-rotation scheme non-deterministic. If you prefer a more deterministic rotation technique, you can use a round-robin-like pattern instead.
In this scheme, you create a list of proxy endpoints and iterate over the list indices from the first to the last, using a different proxy for each request. If you ever need more requests than there are proxies, modular arithmetic (index % len(proxies)) wraps the index back to 0 so the rotation starts over from the beginning of the list.
The following code provides the implementation of this method:
import requests
proxies = [
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_1:10000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_2:20000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_3:30000",
    # ...
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_N:100000"
]
for index in range(len(proxies)):
    proxy = {"http": proxies[index], "https": proxies[index]}
    response = requests.get("https://ip.oxylabs.io/", proxies=proxy)
    print("Proxy used: ", proxies[index])
    print("Response Status Code:", response.status_code)
    print("Response data in Text format:\n", response.text)
The for loop makes as many HTTP requests as the length of the proxies list, mapping index to every value between 0 and length - 1 and thus ensuring that each proxy in the list is used exactly once.
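A more idiomatic way to express the same round-robin, including the wrap-around behavior, is itertools.cycle() (a sketch with placeholder endpoints):

```python
import itertools

# Placeholder endpoints; replace with real proxy addresses and credentials.
proxies = [
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_1:10000",
    "http://USERNAME:PASSWORD@PROXY_ADDRESS_2:20000",
]
proxy_pool = itertools.cycle(proxies)  # endlessly repeats the list in order

# Draw five proxies: after the last endpoint, the pool wraps to the first.
selections = [next(proxy_pool) for _ in range(5)]
# Each selection can be turned into a proxies dictionary for requests.get():
# proxy = {"http": selections[0], "https": selections[0]}
```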
Integrating proxies is an essential part of any scraping or request workflow. With the proxies set up in Python Requests, you can begin your web scraping projects without worrying about IP blocks and geo-restrictions. Additionally, you may want to configure a Python Requests retry strategy that automatically reruns proxied requests returning specific error codes, such as 403 and 429.
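One way to sketch such a retry strategy is urllib3's Retry class mounted on a session via an HTTPAdapter (the retry counts and status codes below are illustrative, and credentials are placeholders):

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

proxies = {
    "http": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
    "https": "http://USERNAME:PASSWORD@pr.oxylabs.io:7777",
}

# Retry up to 3 times on 403/429 responses, with exponential backoff.
retry = Retry(total=3, backoff_factor=1, status_forcelist=[403, 429])
adapter = HTTPAdapter(max_retries=retry)

session = requests.Session()
session.mount("http://", adapter)
session.mount("https://", adapter)
session.proxies = proxies  # every retried request goes through the proxy

# Uncomment once real credentials are in place:
# response = session.get("https://ip.oxylabs.io/")
# print(response.status_code, response.text)
```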
If you have any questions or need assistance, please feel free to contact us.
Please be aware that this is a third-party tool not owned or controlled by Oxylabs. Each third-party provider is responsible for its own software and services. Consequently, Oxylabs will have no liability or responsibility to you regarding those services. Please carefully review the third party's policies and practices and/or conduct due diligence before accessing or using third-party services.