
How to Scrape Google Trends Data With Python


Augustas Pelakauskas

2023-08-30 · 5 min read

Google Trends is a platform that provides public data on top Google Search queries. Using this tool, you can discover how search interest in specific keywords changes over time and across regions. Scraping Google Trends data is a powerful way to analyze the popularity of search terms, and when gathering large amounts of data, routing requests through proxies can be essential to avoid blocks.

Having insight into what users are searching for can be critical for a wide range of businesses. However, covering vast amounts of Google Trends data may be difficult without an automated web scraping solution.

This article provides a step-by-step guide on how to get Google Trends data with Python and Web Scraper API.

Overview of the Google Trends website

Here are some of the uses for scraped Google Trends data:

  • Keyword research
    Google Trends is widely used among SEO specialists and content marketers. Since it provides insights into the past and present popularity of search terms, these professionals can tailor their marketing strategies to gain more website traffic. By scraping Google Trends with Python, marketers can get search volume data for trending topics.

  • Market research
    Google Trends data can be used for market research, helping businesses understand consumer interests and preferences over time. For example, e-commerce businesses can use Google Trends search insights for product development by analyzing geographical location trends.

  • Societal research
    The Google Trends interface is a valuable resource for journalists and researchers, offering a glimpse into societal trends and public interest in trending topics. With the help of a Google Trends scraper, they can monitor popular topics over specific time periods and analyze public interest.

These are just a few examples. You can scrape Google Trends data to help with investment decisions, brand reputation monitoring, and other cases.

Now, let’s get into Google Trends scraping using Python and Oxylabs’ Google Trends API.

You can get a free one-week trial of the Google Trends API (5K results, no credit card required) to follow along with this tutorial.

1. Install libraries

Once you get access to the Google Trends scraper, you’ll need to install additional libraries. Open your terminal and run the following pip command:

pip install requests pandas

Then, import these libraries in a new Python file:

import os
import json
from typing import List

import requests
import pandas as pd

You’ll need the requests library to send API requests and pandas to manipulate the received data.

2. Send a request

Let’s begin with building an initial request to the API:

import requests

USERNAME = "YourUsername"
PASSWORD = "YourPassword"

query = "persian cat"

print(f"Getting data from Google Trends for {query} keyword..")

url = "https://realtime.oxylabs.io/v1/queries"
auth = (USERNAME, PASSWORD)

payload = {
       "source": "google_trends_explore",
       "query": query,
}

The USERNAME and PASSWORD variables hold the credentials required by Web Scraper API, while payload tells the API how to process your request.

Meanwhile, source defines the type of scraper that should be used to process this request. Naturally, google_trends_explore is tailored for this specific use case.

Also, define the query you want to search for. For more information about possible parameters, including how to handle Google’s terms and rate limits for API requests, check our documentation. You can also specify a Google Trends URL to refine the query and scrape the relevant data.

The configuration is done. You can now form and send the request:

try:
    response = requests.request("POST", url, auth=auth, json=payload)
except requests.exceptions.RequestException as e:
    print("Caught exception while getting trend data")
    raise e

data = response.json()
content = data["results"][0]["content"]
print(content)

If everything’s in order, running the code should print the raw results of the query to your terminal window.
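Keep in mind that requests only raises an exception for network-level failures; HTTP errors such as invalid credentials come back as a regular response. As an optional safeguard (not part of the snippet above), you could fail fast on those before parsing the JSON:

# Optional: raise requests.exceptions.HTTPError for 4xx/5xx responses,
# e.g. wrong credentials or an exhausted quota.
response.raise_for_status()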

3. Save results to CSV

Now that you have the results, adjust the formatting and save them in CSV format – this way, it’ll be easier to analyze the data. All of this can be done with the pandas library.

The response you get from the API provides you with four categories of information: interest_over_time, breakdown_by_region, related_topics, and related_queries. Let’s split each category into its own separate CSV file. 
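To give a rough idea of what you’ll be working with, here’s an abbreviated sketch of that structure, reconstructed from the fields accessed later in this tutorial – the keys shown are the ones the code uses, and everything else is illustrative:

# Abbreviated, illustrative layout of the parsed content (not a verbatim API response):
# {
#     "interest_over_time":  [{"keyword": "persian cat", "items": [{"time": ..., "value": ...}, ...]}],
#     "breakdown_by_region": [{"keyword": "persian cat", "items": [{"geo_code": ..., "value": ...}, ...]}],
#     "related_topics":      [{"keyword": "persian cat", "items": [{"topic": {...}, "value": ..., "link": ...}, ...]}],
#     "related_queries":     [{"keyword": "persian cat", "items": [{"query": ..., "value": ...}, ...]}],
# }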

Begin by converting each into a pandas dataframe:

def flatten_topic_data(topics_data: List[dict]) -> List[dict]:
    """Flattens related_topics data into a single-level list of dictionaries"""
    topics_items = []
    for item in topics_data[0]["items"]:
        item_dict = {
            "mid": item["topic"]["mid"],
            "title": item["topic"]["title"],
            "type": item["topic"]["type"],
            "value": item["value"],
            "formatted_value": item["formatted_value"],
            "link": item["link"],
            "keyword": topics_data[0]["keyword"],
        }
        topics_items.append(item_dict)

    return topics_items

trend_data = json.loads(content)
print("Creating dataframes..")

# Interest over time
iot_df = pd.DataFrame(trend_data["interest_over_time"][0]["items"])
iot_df["keyword"] = trend_data["interest_over_time"][0]["keyword"]

# Breakdown by region
bbr_df = pd.DataFrame(trend_data["breakdown_by_region"][0]["items"])
bbr_df["keyword"] = trend_data["breakdown_by_region"][0]["keyword"]

# Related topics
rt_data = flatten_topic_data(trend_data["related_topics"])
rt_df = pd.DataFrame(rt_data)

# Related queries
rq_df = pd.DataFrame(trend_data["related_queries"][0]["items"])
rq_df["keyword"] = trend_data["related_queries"][0]["keyword"]

As the data for related_topics is multi-leveled, you'll have to flatten the structure into a single level – that's what the flatten_topic_data function does.
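For illustration, here’s roughly what the flattening does to a single related_topics item. The field names come from the function above; the values are made-up placeholders:

# A nested item as found in related_topics[0]["items"] (placeholder values):
nested_item = {
    "topic": {"mid": "/m/example", "title": "Persian cat", "type": "Cat breed"},
    "value": 100,
    "formatted_value": "100",
    "link": "/trends/explore?q=/m/example",
}

# ...and the flat, single-level row it becomes:
flat_row = {
    "mid": "/m/example",
    "title": "Persian cat",
    "type": "Cat breed",
    "value": 100,
    "formatted_value": "100",
    "link": "/trends/explore?q=/m/example",
    "keyword": "persian cat",
}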

The only thing left is to save the data to a file:

CSV_FILE_DIR = "./csv/"

keyword = trend_data["interest_over_time"][0]["keyword"]
keyword_path = os.path.join(CSV_FILE_DIR, keyword)

try:
    os.makedirs(keyword_path, exist_ok=True)
except OSError as e:
    print("Caught exception while creating directories.")
    raise e

print("Dumping to CSV...")
iot_df.to_csv(f"{keyword_path}/interest_over_time.csv", index=False)
bbr_df.to_csv(f"{keyword_path}/breakdown_by_region.csv", index=False)
rt_df.to_csv(f"{keyword_path}/related_topics.csv", index=False)
rq_df.to_csv(f"{keyword_path}/related_queries.csv", index=False)

You’ve now created a folder structure that holds all of your separate CSV files grouped by keyword.
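If you want to double-check what was written, a quick directory walk lists every generated file (shown here assuming the single persian cat query from earlier):

# Print the path of each CSV produced so far,
# e.g. ./csv/persian cat/interest_over_time.csv
for root, _dirs, files in os.walk(CSV_FILE_DIR):
    for name in files:
        print(os.path.join(root, name))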

4. Create a result comparison

With all the initial request information transformed into dataframes, you can now use the pandas library to create simple keyword comparisons. This is especially valuable when analyzing search interest across different keywords or regions.

This requires adjusting your current code to handle multiple keywords and then adding functionality to gather all the information in one place. You can then, for example, compare how interest in each keyword varies across regions.

Let’s begin with multiple keyword handling. To make the code easy to rerun for each keyword, split it into reusable functions.

First, extract the code for the request to the API into a function that takes a query as an argument and returns you the response:

def get_trend_data(query: str) -> dict:
    """Gets a dictionary of trends based on given query string from Google Trends via Web Scraper API"""
    print(f"Getting data from Google Trends for {query} keyword..")
    url = "https://realtime.oxylabs.io/v1/queries"
    auth = (USERNAME, PASSWORD)
    payload = {
        "source": "google_trends_explore",
        "query": query,
    }
    try:
        response = requests.request("POST", url, auth=auth, json=payload)
    except requests.exceptions.RequestException as e:
        print("Caught exception while getting trend data")
        raise e

    data = response.json()
    content = data["results"][0]["content"]
    return json.loads(content)

Next, you need a function that transforms a raw response into pandas dataframes, saves those dataframes as CSV files, and returns them:

def dump_trend_data_to_csv(trend_data: dict) -> dict:
    """Dumps given trend data to generated CSV file"""
    CSV_FILE_DIR = "./csv/"
    print("Creating dataframes..")

    # Interest over time
    iot_df = pd.DataFrame(trend_data["interest_over_time"][0]["items"])
    iot_df["keyword"] = trend_data["interest_over_time"][0]["keyword"]

    # Breakdown by region
    bbr_df = pd.DataFrame(trend_data["breakdown_by_region"][0]["items"])
    bbr_df["keyword"] = trend_data["breakdown_by_region"][0]["keyword"]

    # Related topics
    rt_data = flatten_topic_data(trend_data["related_topics"])
    rt_df = pd.DataFrame(rt_data)

    # Related queries
    rq_df = pd.DataFrame(trend_data["related_queries"][0]["items"])
    rq_df["keyword"] = trend_data["related_queries"][0]["keyword"]

    keyword = trend_data["interest_over_time"][0]["keyword"]
    keyword_path = os.path.join(CSV_FILE_DIR, keyword)
    try:
        os.makedirs(keyword_path, exist_ok=True)
    except OSError as e:
        print("Caught exception while creating directories")
        raise e

    print("Dumping to csv..")
    iot_df.to_csv(f"{keyword_path}/interest_over_time.csv", index=False)
    bbr_df.to_csv(f"{keyword_path}/breakdown_by_region.csv", index=False)
    rt_df.to_csv(f"{keyword_path}/related_topics.csv", index=False)
    rq_df.to_csv(f"{keyword_path}/related_queries.csv", index=False)

    result_set = {}
    result_set["iot"] = iot_df
    result_set["bbr"] = bbr_df
    result_set["rt"] = rt_df
    result_set["rq"] = rq_df

    return result_set

Now that the request and dataframe creation are covered, you can create comparisons:

def create_comparison(trend_dataframes: List[dict]) -> None:
    comparison = trend_dataframes[0]
    i = 1

    for df in trend_dataframes[1:]:
        comparison["iot"] = pd.merge(comparison["iot"], df["iot"], on="time", suffixes=("", f"_{i}"))
        comparison["bbr"] = pd.merge(comparison["bbr"], df["bbr"], on="geo_code", suffixes=("", f"_{i}"))

        if not df["rt"].empty and "title" in df["rt"].columns:
            comparison["rt"] = pd.merge(comparison["rt"], df["rt"], on="title", how="inner", suffixes=("", f"_{i}"))
        
        if not df["rq"].empty and "query" in df["rq"].columns:
            comparison["rq"] = pd.merge(comparison["rq"], df["rq"], on="query", how="inner", suffixes=("", f"_{i}"))

        i = i + 1

    comparison["iot"].to_csv("comparison_interest_over_time.csv", index=False)
    comparison["bbr"].to_csv("comparison_breakdown_by_region.csv", index=False)
    comparison["rt"].to_csv("comparison_related_topics.csv", index=False)
    comparison["rq"].to_csv("comparison_related_queries.csv", index=False)

This function will accept the dataframes for all the queries you have created, go over them, and merge them for comparison on key metrics.

The last thing to do is to create the core logic of your application. Adding it all together, the final version of the code should look like this: 

import os
import json
from typing import List

import requests
import pandas as pd


def get_trend_data(query: str) -> dict:
    USERNAME = "YourUsername"
    PASSWORD = "YourPassword"

    print(f"Getting data from Google Trends for {query} keyword...")

    url = "https://realtime.oxylabs.io/v1/queries"
    auth = (USERNAME, PASSWORD)

    payload = {
        "source": "google_trends_explore",
        "query": query
    }

    try:
        response = requests.request("POST", url, auth=auth, json=payload)
    except requests.exceptions.RequestException as e:
        print("Caught exception while getting trend data")
        raise e

    data = response.json()
    content = data["results"][0]["content"]
    return json.loads(content)


def flatten_topic_data(topics_data: List[dict]) -> List[dict]:
    topics_items = []
    for item in topics_data[0]["items"]:
        item_dict = {
            "mid": item["topic"]["mid"],
            "title": item["topic"]["title"],
            "type": item["topic"]["type"],
            "value": item["value"],
            "formatted_value": item["formatted_value"],
            "link": item["link"],
            "keyword": topics_data[0]["keyword"],
        }
        topics_items.append(item_dict)

    return topics_items


def dump_trend_data_to_csv(trend_data: dict) -> dict:
    print("Creating dataframes...")

    iot_df = pd.DataFrame(trend_data["interest_over_time"][0]["items"])
    iot_df["keyword"] = trend_data["interest_over_time"][0]["keyword"]

    bbr_df = pd.DataFrame(trend_data["breakdown_by_region"][0]["items"])
    bbr_df["keyword"] = trend_data["breakdown_by_region"][0]["keyword"]

    rt_data = flatten_topic_data(trend_data["related_topics"])
    rt_df = pd.DataFrame(rt_data)

    rq_df = pd.DataFrame(trend_data["related_queries"][0]["items"])
    rq_df["keyword"] = trend_data["related_queries"][0]["keyword"]

    CSV_FILE_DIR = "./csv/"

    keyword = trend_data["interest_over_time"][0]["keyword"]
    keyword_path = os.path.join(CSV_FILE_DIR, keyword)

    try:
        os.makedirs(keyword_path, exist_ok=True)
    except OSError as e:
        print("Caught exception while creating directories.")
        raise e

    print("Dumping to CSV...")
    iot_df.to_csv(f"{keyword_path}/interest_over_time.csv", index=False)
    bbr_df.to_csv(f"{keyword_path}/breakdown_by_region.csv", index=False)
    rt_df.to_csv(f"{keyword_path}/related_topics.csv", index=False)
    rq_df.to_csv(f"{keyword_path}/related_queries.csv", index=False)

    result_set = {}
    result_set["iot"] = iot_df
    result_set["bbr"] = bbr_df
    result_set["rt"] = rt_df
    result_set["rq"] = rq_df

    return result_set


def create_comparison(trend_dataframes: List[dict]) -> None:
    comparison = trend_dataframes[0]
    i = 1

    for df in trend_dataframes[1:]:
        comparison["iot"] = pd.merge(comparison["iot"], df["iot"], on="time", suffixes=("", f"_{i}"))
        comparison["bbr"] = pd.merge(comparison["bbr"], df["bbr"], on="geo_code", suffixes=("", f"_{i}"))

        if not df["rt"].empty and "title" in df["rt"].columns:
            comparison["rt"] = pd.merge(comparison["rt"], df["rt"], on="title", how="inner", suffixes=("", f"_{i}"))
        
        if not df["rq"].empty and "query" in df["rq"].columns:
            comparison["rq"] = pd.merge(comparison["rq"], df["rq"], on="query", how="inner", suffixes=("", f"_{i}"))

        i = i + 1

    comparison["iot"].to_csv("comparison_interest_over_time.csv", index=False)
    comparison["bbr"].to_csv("comparison_breakdown_by_region.csv", index=False)
    comparison["rt"].to_csv("comparison_related_topics.csv", index=False)
    comparison["rq"].to_csv("comparison_related_queries.csv", index=False)


def main():
    keywords = ["cat", "cats"]

    results = []

    for keyword in keywords:
        trend_data = get_trend_data(keyword)
        df_set = dump_trend_data_to_csv(trend_data)
        results.append(df_set)

    create_comparison(results)


if __name__ == "__main__":
    main()

Running the code will create comparison CSV files that have the combined information of the supplied keywords on each of the categories:

  • interest_over_time

  • breakdown_by_region

  • related_topics 

  • related_queries
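For a quick sanity check, you could load one of the comparison files back into pandas and inspect the merged columns:

# Read the merged interest-over-time data for the "cat" and "cats" keywords.
comparison_iot = pd.read_csv("comparison_interest_over_time.csv")
print(comparison_iot.head())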

Different scraping approaches compared

If you're wondering whether to scrape with proxies, without them, or to use a dedicated API, here's how the approaches compare.

Scraping without proxies

  • Advantages: simple to implement, no additional costs, lightweight solution, quick setup and deployment, full control over the code.
  • Disadvantages: high risk of IP blocks, limited request volume, no bypass for Google's rate limiting, unreliable for large-scale scraping, frequent CAPTCHAs.
  • Best for: small-scale projects, testing and development, personal research, quick one-off data collection, learning and educational purposes.

Scraping with proxies

  • Advantages: higher success rate, ability to scale, bypassing of rate limiting, geographic location flexibility, lower chance of blocks.
  • Disadvantages: requires proxy management, additional costs for proxies, more complex implementation, need for proxy rotation logic, potential proxy reliability issues.
  • Best for: medium to large-scale projects, regular data collection, production environments, multi-region data gathering, continuous monitoring.

Scraper APIs

  • Advantages: highest reliability, zero maintenance, automatic updates, built-in anti-blocking measures, technical support available.
  • Disadvantages: highest cost, less customization flexibility, dependency on a third party, potential API limitations, usage quotas.
  • Best for: enterprise solutions, large-scale operations, mission-critical applications, teams without scraping expertise, time-sensitive projects.

Wrapping up

Hopefully, you've now learned how to get data from Google Trends using Python. Make sure to check our technical documentation for all the Google Trends API parameters and variables mentioned in this tutorial.

Also, take a look at how to extract data from other popular targets, such as Wikipedia, Google News, Amazon, Wayfair, and many more on our blog. Browser automation tools like Selenium can also help when a target relies on dynamic content. And if you're looking to scale your efforts, you can buy proxies – either datacenter or residential – to ensure efficient and uninterrupted scraping.

We hope that you found this tutorial helpful. If you have any questions, reach us at support@oxylabs.io, and one of our professionals will give you a hand. Happy scraping!

Frequently asked questions

Is it legal to scrape Google Trends?

Is web scraping legal? The legality of Google Trends scraping depends on the specific data you extract and how you intend to use it. It's essential to adhere to all regulations, including Google's terms, copyright, and privacy laws. Before engaging in web scraping, we advise you to seek professional legal advice.

How do I extract data from Google Trends?

You can use an advanced all-in-one solution, such as Oxylabs Google Trends API, an unofficial Google Trends API, Pytrends, or build your own custom scraper from scratch. Pytrends is a good option if you’re looking for an open-source solution. It’s straightforward to use, has automation capabilities and custom reports, and enables data analysis. However, it's important to remember that as an open-source project, its functionality can be impacted for extended periods due to Google platform updates.
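For reference, a minimal Pytrends flow typically looks like the sketch below (based on the library's documented interface; since it's an unofficial client, expect occasional breakage when Google changes the platform):

# pip install pytrends
from pytrends.request import TrendReq

pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(kw_list=["persian cat"], timeframe="today 12-m")

# Returns a pandas DataFrame indexed by date with one column per keyword.
print(pytrends.interest_over_time())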

How to get Google Trends data to Excel?

While this guide showcases how to scrape Google Trends data and save it to a CSV file, you can also save the scraped data to an Excel spreadsheet. Simply use the .to_excel function of pandas instead of .to_csv. Also, make sure to replace all .csv extensions in the Python code with .xlsx.
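For example, one of the lines from earlier would become the following (note that pandas needs an Excel engine such as openpyxl installed to write .xlsx files):

# pip install openpyxl
iot_df.to_excel(f"{keyword_path}/interest_over_time.xlsx", index=False)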

Is there an API for Google Trends?

No, there’s no official Google Trends API. As an alternative, you can utilize the feature-rich Google Trends API by Oxylabs or other web scraping tools.

About the author


Augustas Pelakauskas

Senior Copywriter

Augustas Pelakauskas is a Senior Copywriter at Oxylabs. Coming from an artistic background, he is deeply invested in various creative ventures - the most recent one being writing. After testing his abilities in the field of freelance journalism, he transitioned to tech content creation. When at ease, he enjoys sunny outdoors and active recreation. As it turns out, his bicycle is his fourth best friend.

All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.

