
How to Scrape Google Images With Python

Danielius Radavicius

2024-09-03 · 4 min read

Recent strides in visual search engine technology have significantly boosted the trend of scraping images from the internet, utilizing defined visual references. This approach is gaining widespread traction due to its efficacy in gathering targeted visual data.

Google Images stands out as a treasure trove of visual information among the massive resources accessible on the internet. Sometimes, given a visual example, you need to scrape all the images from Google Images with similar or related traits. However, doing so at a large scale without getting blocked becomes extremely difficult.

This article shows you how the possibilities of this enormous visual archive can be unlocked by guiding you through the process of scraping Google Images using our Google Images Search API.

What data can be scraped from Google Images?

You can extract a wide range of useful information from Google Images, including the following:

• Image URLs: The URLs of individual images are the most commonly extracted data from Google Images. These URLs can be used to access and download the images hosted on different websites.

    • Image metadata: Titles, descriptions, and alt text are among the metadata that Google Images frequently makes available for images. You may learn from this metadata how images are tagged and described online.

• Image thumbnails: You can also scrape thumbnails, the smaller versions of images that appear in search results and on websites.

• Image sizes and dimensions: Google Images provides the sizes and dimensions of images, which helps you understand the range of resolutions available for a given topic.

    • Source URLs: Besides the image URLs, you can scrape the URLs of the pages where the images are located. This can add context and additional information to the images.

• Image captions and descriptions: Many images in Google Images have informative names or captions. Scraping these can reveal the subject matter and setting of the pictures.

    These are a few examples of the information you can gather from Google Images. The specific data you aim to obtain will depend on your goals and the insights you're seeking. Extracting this information can be beneficial, but it's important to always consider and respect the copyright and usage rights associated with the images you collect.

    Scrape Google Images using Oxylabs’ Google Images Scraper API

For this tutorial, we'll use the Google Images Search API to retrieve images related to the one given in the query. This Google Images scraper helps us collect all the related images along with the URLs of the pages where those images are hosted.

    To use this API, you must create an account on Oxylabs and get the API credentials. These credentials will be used in the later stages.

    Step 1 - Setting up the environment

To get started, make sure you have Python 3.6+ installed on your system. You'll also need the following packages to put the code into action:

• requests - for sending HTTP requests to the Oxylabs API.

• pandas - for storing the output data in dataframes and exporting it to CSV files.

    To install these packages, we can use the following command:

    pip install requests pandas 

    Running this command will install all the required packages. 

Step 2 - Import the required libraries

    After the installation of packages, start by creating a new Python file and import the required libraries using the following code:

    import requests
    import pandas as pd

    Step 3 - Structure the payload

    The Oxylabs Image Scraper API has some parameters that can be set to structure the payload and make the request accordingly. The details of these parameters can be found in the official documentation by Oxylabs. 

    The payload is structured as follows:

payload = {
   "source": "google_images",
   "domain": "com",
   "query": "<search_image_URL>",
   "context": [
       {
           "key": "search_operators",
           "value": [
               {"key": "site", "value": "example.com"},
               {"key": "filetype", "value": "html"},
               {"key": "inurl", "value": "image"},
           ],
       }
   ],
   "parse": True,
   "geo_location": "United States",
}

    Make sure to replace the query parameter value with the required search image URL. 

The context parameter applies search filters. For example, our search operators force the API to scrape only the links from Google Images search results that belong to example.com. If you remove the site key from search_operators, the Google Images scraper may return related results from all websites.

The filetype: html and inurl: image operators restrict results to pages whose file type is HTML and whose URL contains "image".

The parse parameter is set to true to get the results parsed in JSON format. Additionally, you can add the pages and start_page parameters to the payload to scrape multiple result pages starting from start_page. Both parameters default to 1.
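As an illustration, a payload requesting three result pages starting from the second page might look like this (the query value is a placeholder, and the pagination numbers are arbitrary examples):

```python
# Sketch: the payload from above, extended with pagination parameters.
# "start_page" and "pages" both default to 1 when omitted.
payload = {
    "source": "google_images",
    "domain": "com",
    "query": "<search_image_URL>",
    "parse": True,
    "geo_location": "United States",
    "start_page": 2,  # begin from the second result page
    "pages": 3,       # fetch three consecutive pages
}
```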

    Step 4 - Make the request

    After creating the payload structure, you can initiate a POST request to Oxylabs’ API using the following code segment.

response = requests.post(
   "https://realtime.oxylabs.io/v1/queries",
   auth=(USERNAME, PASSWORD),
   json=payload,
)

Make sure to replace USERNAME and PASSWORD with your API credentials. The received response can be viewed in JSON format.

    Step 5 - Extract the data and save it in a CSV file

We can now extract the required image data from the response object. The response contains a results key that holds all the related image data. We'll collect the image data in a dataframe, which can then be saved to CSV and JSON files using the following code.

result = response.json()["results"][0]["content"]
image_results = result["results"]["organic"]

# Collect rows in the order the API returned them,
# then build the DataFrame in one go
rows = []
for i in image_results:
   rows.append([i["title"], i["desc"], i["url"]])

df = pd.DataFrame(rows, columns=["Image Title", "Image Description", "Image URL"])

# Save the data to CSV and JSON files
df.to_csv("google_image_results.csv", index=False)
df.to_json("google_image_results.json", orient="split", index=False)
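Individual results may occasionally lack one of these keys, so a defensive variant of the extraction step can be sketched as follows (the sample dictionary below imitates the response shape used in this tutorial and is not real API output):

```python
# Sketch: extract (title, description, url) tuples defensively,
# skipping any result that lacks a URL. The sample data imitates
# the response structure referenced in this tutorial.

def extract_rows(content):
    rows = []
    for item in content.get("results", {}).get("organic", []):
        url = item.get("url")
        if not url:
            continue  # a row without a URL is not useful
        rows.append((item.get("title", ""), item.get("desc", ""), url))
    return rows

sample_content = {
    "results": {
        "organic": [
            {"title": "Odd-eyed cat", "desc": "A cat with heterochromia",
             "url": "https://example.com/cat"},
            {"title": "No URL here", "desc": "Skipped"},
        ]
    }
}

print(extract_rows(sample_content))
```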

Now, let's take an example URL of a cat image as the query and put all the code together. Assume we want to scrape the first page of Google Images results, restricted to wikipedia.org only. Here is what the code looks like:

# Import required libraries
import requests
import pandas as pd

# Set your Oxylabs API credentials
USERNAME = "<your_username>"
PASSWORD = "<your_password>"

# Structure the payload
payload = {
   "source": "google_images",
   "domain": "com",
   "query": "https://upload.wikimedia.org/wikipedia/commons/a/a3/June_odd-eyed-cat.jpg",
   "context": [
       {
           "key": "search_operators",
           "value": [
               {"key": "site", "value": "wikipedia.org"},
               {"key": "filetype", "value": "html"},
               {"key": "inurl", "value": "image"},
           ],
       }
   ],
   "parse": True,
   "geo_location": "United States",
}

# Get the response
response = requests.post(
   "https://realtime.oxylabs.io/v1/queries",
   auth=(USERNAME, PASSWORD),
   json=payload,
)

# Extract data from the response
result = response.json()["results"][0]["content"]
image_results = result["results"]["organic"]

# Collect rows in API order and print each result
rows = []
for i in image_results:
   title = i["title"]
   description = i["desc"]
   url = i["url"]
   rows.append([title, description, url])

   # Print the data on the screen
   print("Image Name: " + title)
   print("Image Description: " + description)
   print("Image URL: " + url)

# Create a DataFrame
df = pd.DataFrame(rows, columns=["Image Title", "Image Description", "Image URL"])

# Save the data to CSV and JSON files
df.to_csv("google_image_results.csv", index=False)
df.to_json("google_image_results.json", orient="split", index=False)


    Conclusion

Scraping Google Images without a dedicated tool is a complex task. Since Google Images offers a vast and diverse collection of visual data that's invaluable for many applications and analyses, a solution like Oxylabs' Google Images Scraper API can be key to unlocking it at scale.

    Looking to scrape data from other Google sources? See our in-depth guides for scraping Jobs, Search, Scholar, Trends, News, Flights, Shopping, and Maps.

    For web scraping, proxies are an essential anti-blocking measure. To avoid detection by the target website, you can buy proxies of various types to fit any scraping scenario.

    Frequently asked questions

    Is it legal to scrape Google Images?

    When scraping images from Google, it's important to consider the website's terms of service, copyright of the images, and jurisdiction to determine if it's legal. Google's Terms of Service prohibit automatic access without consent, and scraping licensed images may lead to copyright issues. There may be exceptions for fair use and publicly accessible data, but ethical concerns should also be taken into account. Seeking legal advice and reviewing the terms of service before scraping is recommended.

    How to scrape images from Google images?

    Scraping Google Images involves programmatically extracting image-related data from Google Images’ search results. This process typically includes sending an HTTP request to Google's servers using a programming language like Python, parsing the HTML response to extract relevant image URLs, titles, descriptions, and metadata, and then storing or utilizing this data for analysis, research, or other purposes.
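In its simplest form, that parsing step can be sketched with Python's standard library (the HTML snippet is a made-up example; a real Google Images page is far more complex and typically requires rendering JavaScript):

```python
from html.parser import HTMLParser

class ImageSrcParser(HTMLParser):
    """Collect the src attribute of every <img> tag in an HTML document."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

# Made-up HTML for illustration only, not a real Google Images page
sample_html = '<div><img src="https://example.com/a.jpg"><img alt="no src"></div>'
parser = ImageSrcParser()
parser.feed(sample_html)
print(parser.sources)
```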

    How do I get pictures off Google Images?

To get pictures off Google Images, you can manually download them by right-clicking and selecting "Save image as...". However, when downloading large numbers of images, it's better to automate the process with web scraping techniques, such as writing scripts with Python libraries like BeautifulSoup and Selenium to collect images in bulk.
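As a sketch of the automated route, image URLs gathered by a scraper can be downloaded with the requests library; the helper below derives a file name from each URL (the URLs and the images directory are illustrative):

```python
import os
from urllib.parse import urlparse

def filename_from_url(url):
    """Derive a file name from an image URL, with a fallback."""
    name = os.path.basename(urlparse(url).path)
    return name or "image.jpg"

def download_images(urls, dest_dir="images"):
    """Sketch only: no retries or rate limiting; add both for real use."""
    import requests  # the same library used earlier in this tutorial
    os.makedirs(dest_dir, exist_ok=True)
    for url in urls:
        resp = requests.get(url, timeout=30)
        resp.raise_for_status()  # fail fast on 4xx/5xx responses
        path = os.path.join(dest_dir, filename_from_url(url))
        with open(path, "wb") as f:
            f.write(resp.content)
```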

    Does Google allow screen scraping?

    Google’s terms of service generally prohibit screen scraping, especially when done at scale. However, some forms of data extraction may be permitted under certain conditions, so it's important to review and comply with Google's policies to avoid potential legal consequences.

    How do I scrape Google without being banned?

    To scrape Google without being banned, it's essential to use rotating proxies, limit the number of requests per minute, randomize user agents, and respect the robots.txt file. Alternatively, you can also use an all-in-one web scraping solution that will mimic human behavior and reduce the chances of triggering Google's anti-bot mechanisms for you.
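Two of these measures, randomized user agents and request throttling, can be sketched as follows (the agent strings are shortened illustrative examples, not a curated list):

```python
import random
import time

# Illustrative user-agent strings; real rotations use larger, current lists
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]

def polite_headers():
    """Pick a random user agent for each request."""
    return {"User-Agent": random.choice(USER_AGENTS)}

def throttle(min_delay=1.0, max_delay=3.0):
    """Sleep a random interval between requests to mimic human pacing."""
    time.sleep(random.uniform(min_delay, max_delay))
```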

    What is the best way to scrape Google results?

    The most efficient way of collecting public Google results data is by using a ready-to-use web scraping solution, like Oxylabs’ Google Images Scraper API. Such tools allow you to efficiently collect data while minimizing the risk of being blocked, provided you adhere to ethical scraping practices.

    About the author

    Danielius Radavicius

    Former Copywriter

    Danielius Radavičius was a Copywriter at Oxylabs. Having grown up in films, music, and books and having a keen interest in the defense industry, he decided to move his career toward tech-related subjects and quickly became interested in all things technology. In his free time, you'll probably find Danielius watching films, listening to music, and planning world domination.

    All information on Oxylabs Blog is provided on an "as is" basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on Oxylabs Blog or any third-party websites that may be linked therein. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website's terms of service or receive a scraping license.
