The demand for digital content has grown exponentially, and as competition intensifies, websites are changing and updating their structure ever more frequently.
Frequent updates benefit everyday visitors, but they're a considerable hassle for businesses that collect public data: web scrapers rely on routines tailored to the specific layout of each website, and structural changes tend to break them. This is where RegEx comes into play, simplifying some of the more complex parts of data acquisition and parsing.
RegEx stands for Regular Expressions, a method of describing text patterns that can be used as filters to extract exactly the output you want.
RegEx can match and validate all kinds of character combinations, including special characters like line breaks. One of the biggest advantages of Regular Expressions is that input of any type and size is matched against the same single pattern, which keeps parsing code short and consistent.
Regular Expressions are universal and supported by virtually every programming language. The most commonly used tokens are listed below:
Token | Matches
--- | ---
^ | Start of a string
$ | End of a string
. | Any character (except \n)
\| | The expression on either side of the symbol (alternation)
\ | Escapes special characters
Char | The literal character given
* | Zero or more of the preceding character
? | Zero or one of the preceding character
+ | One or more of the preceding character
{n} | Exactly n of the preceding character
{n,m} | Between n and m of the preceding character
\d | Any digit
\s | Any whitespace character
\w | Any word character
\b | A word boundary
\D | Inverse of \d
\S | Inverse of \s
\W | Inverse of \w
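For a quick taste of these tokens in action, here's a minimal Python sketch (the sample strings below are made up purely for illustration):
import re

# ^ and $ anchor the pattern to the start and end of the string,
# and \d{4} requires exactly four digits.
print(bool(re.match(r"^\d{4}$", "2024")))  # True
print(bool(re.match(r"^\d{4}$", "24")))    # False

# \w+ matches runs of word characters; findall returns every match.
print(re.findall(r"\w+", "Regular Expressions are universal"))
# ['Regular', 'Expressions', 'are', 'universal']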
In this tutorial, the RegEx scraping target is product titles and prices from a dummy website intended for training purposes.
For this tutorial, you'll need the following:
- The latest version of Python.
- The Beautiful Soup 4 library to parse HTML.
- The Requests library to make HTTP requests.
Let’s begin with creating a virtual environment for the project:
python3 -m venv scrapingdemo
Activate the newly created virtual environment (the following example is for Linux and macOS):
source ./scrapingdemo/bin/activate
Now, install the required Python modules.
Requests is a library for sending HTTP requests to websites and handling their responses. To install Requests, enter the following:
pip install requests
Beautiful Soup is a module used to parse and extract data from the HTML response. To install Beautiful Soup, enter the following:
pip install beautifulsoup4
re is a built-in Python module for working with Regular Expressions, so it doesn't need to be installed separately.
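Since this tutorial relies on re.findall() later on, here's a quick illustration of its behavior (the sample string is invented): when the pattern contains a capture group, findall returns a list of just the captured text.
import re

# findall returns only the group contents, one entry per match.
prices = re.findall(r"(\d+,\d{2}) €", "A costs 9,99 € and B costs 19,99 €")
print(prices)  # ['9,99', '19,99']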
Next, create an empty Python file, for example, demo.py.
To import the required libraries, enter the following:
import requests
from bs4 import BeautifulSoup
import re
Use the Requests library to send a request to the web page you want to scrape the data from; in this case, https://sandbox.oxylabs.io/products. To do so, enter the following:
page = requests.get('https://sandbox.oxylabs.io/products')
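Before parsing, it's sensible to confirm the request actually succeeded. This check is an optional addition, not part of the original snippet:
# Raises requests.HTTPError for 4xx/5xx responses instead of
# letting you parse an error page.
page.raise_for_status()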
First, create a Beautiful Soup object and pass the page content received from your request during initialization, along with the parser type. Since you're working with HTML, select html.parser as the parser type.
By inspecting the elements (right-click and select Inspect) in a browser, you can see that each product title and price is presented inside a div element with the product-card class. Use Beautiful Soup to collect all of these elements; you'll convert each one to a string later, when applying the regular expressions:
soup = BeautifulSoup(page.content, 'html.parser')
products = soup.find_all("div", class_="product-card")
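Optionally, you can verify that the selector matched something before moving on (a small sanity check, not part of the final script):
# An empty result here would mean the class name has changed.
print(f"Found {len(products)} product cards")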
Since the acquired content includes a lot of markup besides the data you need, create two regular expressions to extract only the desired parts.
Finding the pattern
First, inspect the title of the product to find the pattern. You can see above that every title appears after the same class name, in the <h4 class="title css-7u5e79 eag3qlw7">The Legend of Zelda: Ocarina of Time</h4> format.
Generating the expression
Then, create an expression that captures the data inside the element tag by specifying the lazy group (.*?).
The first expression is as follows:
re_titles = r'class="title css-7u5e79 eag3qlw7">(.*?)</h4>'
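You can test the expression against the sample element shown above. The lazy quantifier (.*?) stops at the first closing </h4>, whereas a greedy (.*) could overshoot if several closing tags appeared in one string:
import re

sample = '<h4 class="title css-7u5e79 eag3qlw7">The Legend of Zelda: Ocarina of Time</h4>'
print(re.findall(r'class="title css-7u5e79 eag3qlw7">(.*?)</h4>', sample))
# ['The Legend of Zelda: Ocarina of Time']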
Finding the pattern
Next, inspect the price of the product. Every price is presented in a div tag in the <div class="price-wrapper css-li4v8k eag3qlw4">91,99 €</div> format.
Generating the expression
Then, create an expression that returns data inside the div element.
The second expression is as follows:
re_prices = r'class="price-wrapper css-li4v8k eag3qlw4">(.*?)</div>'
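One caveat: class strings such as css-li4v8k look auto-generated, so they may break when the site updates. A slightly looser pattern, shown here as an illustrative alternative rather than part of the original tutorial, anchors only on the stable part of the class attribute:
# Matches any class attribute that starts with "price-wrapper",
# regardless of the generated suffix classes.
re_prices_loose = r'class="price-wrapper[^"]*">(.*?)</div>'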
To conclude, loop over the product elements you collected earlier, convert each one to a string, and use the expressions with re.findall to extract the matching substrings. Since re.findall returns a list of matches, take the first element of each and save the title and price pairs in the data list:
data = []
for product in products:
    product_html = str(product)
    title = re.findall(re_titles, product_html)
    price = re.findall(re_prices, product_html)
    data.append((title[0], price[0]))
To save the output, loop over the pairs for the titles and prices and write them to the output.txt file.
with open("output.txt", "w", encoding="utf-8") as f:
    for title, price in data:
        f.write(f"{title}\t{price}\n")
Putting everything together, this is the complete code that can be run by calling python demo.py:
# Importing the required libraries.
import requests
from bs4 import BeautifulSoup
import re

# Requesting the HTML from the target website.
url = "https://sandbox.oxylabs.io/products"
page = requests.get(url)

# Selecting data.
soup = BeautifulSoup(page.content, "html.parser")
products = soup.find_all("div", class_="product-card")

# Processing data using Regular Expressions.
re_titles = r'class="title css-7u5e79 eag3qlw7">(.*?)</h4>'
re_prices = r'class="price-wrapper css-li4v8k eag3qlw4">(.*?)</div>'
data = []
for product in products:
    product_html = str(product)
    # findall returns a list of matches; keep the first of each.
    title = re.findall(re_titles, product_html)
    price = re.findall(re_prices, product_html)
    data.append((title[0], price[0]))

# Saving the output.
with open("output.txt", "w", encoding="utf-8") as f:
    for title, price in data:
        f.write(f"{title}\t{price}\n")
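If the run succeeds, each line of output.txt holds one tab-separated title and price pair. Based on the sample element shown earlier, a line should look roughly like this:
The Legend of Zelda: Ocarina of Time	91,99 €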
This article explained what Regular Expressions are, how to use them, and what the most commonly used tokens do. It also walked through an example of scraping titles and prices from a web page using Python and Regular Expressions. If you're looking for an advanced web scraping solution, feel free to explore the features of our Web Scraper API.
Don’t forget to check our blog for more step-by-step tutorials on web scraping with Python, PHP, Ruby, Golang, and many more, or take a look at a guide on how to use Wget with proxy.