How to take a screenshot with Playwright?

Learn how to capture screenshots using Playwright with this concise guide. Screenshots are ideal for documenting and troubleshooting automation scripts, and this tutorial simplifies the process so you can easily save visual evidence of your runs.

Best practices

  • Ensure that the browser environment is correctly configured for screenshots, including viewport settings that match the desired capture size (see the viewport sketch after this list).

  • Use the `full_page=True` parameter in the `page.screenshot()` method to capture the entire length of the page, which is useful for pages with scrolling.

  • When capturing an element, always verify that the selector used in `page.query_selector()` accurately targets the desired element to avoid capturing incorrect parts of the page.

  • Handle exceptions and errors gracefully, especially for scenarios where the page or element might not load correctly, to prevent the script from crashing and to ensure robustness.
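
For the viewport point above, here is a minimal sketch of fixing the viewport size before capturing; the 1280x720 size and the sandbox URL are illustrative choices rather than required values.

# Sketch: fix the viewport so the capture size is predictable (1280x720 is an example)
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page(viewport={'width': 1280, 'height': 720})
    page.goto('https://sandbox.oxylabs.io/products')
    page.screenshot(path='viewport_screenshot.png')
    browser.close()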

# Importing necessary libraries
from playwright.sync_api import sync_playwright

# Function to take a screenshot using Playwright
def take_screenshot(url, path):
    # Start Playwright and launch the browser
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        # Navigate to the target URL
        page.goto(url)
        # Take a full-page screenshot
        page.screenshot(path=path, full_page=True)
        # Close the browser
        browser.close()

# Example usage: screenshot of Oxylabs product page
take_screenshot('https://sandbox.oxylabs.io/products', 'full_page_screenshot.png')
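
Note that `page.screenshot()` also returns the captured image as bytes, so you can keep the screenshot in memory instead of writing it straight to disk; a minimal sketch of that variant:

# Sketch: capture the screenshot into memory, then decide what to do with it
with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto('https://sandbox.oxylabs.io/products')
    image_bytes = page.screenshot(full_page=True)  # no path given: the PNG bytes are returned
    browser.close()

# Write the bytes yourself, or pass them to another tool
with open('in_memory_screenshot.png', 'wb') as f:
    f.write(image_bytes)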

# Taking a screenshot of an element
def screenshot_element(url, selector, path):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        # Capture a screenshot of a specific element
        element = page.query_selector(selector)
        element.screenshot(path=path)
        browser.close()

# Example: screenshot of a specific element
screenshot_element('https://sandbox.oxylabs.io/products', '.product-list', 'element_screenshot.png')
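
Keep in mind that `page.query_selector()` returns `None` when nothing matches, so the function above would raise an AttributeError on a bad selector. A minimal alternative sketch uses Playwright's locator API, which waits for the element before capturing it; the function name below is just an illustrative choice.

# Sketch: element screenshot via a locator, which auto-waits for the element
def screenshot_element_with_locator(url, selector, path):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.locator(selector).screenshot(path=path)
        browser.close()

screenshot_element_with_locator('https://sandbox.oxylabs.io/products', '.product-list', 'locator_screenshot.png')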

Common issues

  • Ensure that the path specified for saving the screenshot exists and is writable to avoid file creation errors.

  • Adjust the screenshot quality and format by using options like `quality` for JPEG images and `type` to specify the image format (e.g., png or jpeg) in the `screenshot()` method.

  • Consider using `browser.close()` inside a `finally` block to ensure the browser always closes properly, even if an error occurs during the screenshot process.

  • Implement timeouts or wait functions to ensure that all page elements are fully loaded before taking a screenshot, particularly for dynamic content.

# Incorrect: Assuming the directory exists
page.screenshot(path='/nonexistent_directory/screenshot.png')

# Correct: Ensure the directory exists before saving
import os
if not os.path.exists('screenshots'):
    os.makedirs('screenshots')
page.screenshot(path='screenshots/screenshot.png')
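
A slightly more compact variant, assuming you are happy to create the folder unconditionally, lets `os.makedirs()` tolerate an existing directory:

# Sketch: create the folder if needed, without an explicit existence check
import os
os.makedirs('screenshots', exist_ok=True)
page.screenshot(path='screenshots/screenshot.png')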

# Incorrect: Saving a screenshot with no file extension or type; the format silently defaults to PNG
page.screenshot(path='screenshot')

# Correct: Specify the image format and quality if needed
page.screenshot(path='screenshot.jpeg', type='jpeg', quality=80)

# Incorrect: Not handling browser closure on error
browser = p.chromium.launch()
try:
    page = browser.new_page()
    page.goto(url)
    page.screenshot(path='screenshot.png')
except Exception as e:
    print(e)
# The browser might not close if an error occurred above

# Correct: Ensure browser closes using finally
browser = p.chromium.launch()
try:
    page = browser.new_page()
    page.goto(url)
    page.screenshot(path='screenshot.png')
finally:
    browser.close()

# Incorrect: Taking screenshot without waiting for dynamic content
page.goto(url)
page.screenshot(path='screenshot.png')

# Correct: Wait for specific elements to ensure they are loaded
page.goto(url)
page.wait_for_selector('.dynamic-content')
page.screenshot(path='screenshot.png')
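
When no single selector reliably signals that the page has finished rendering, another common approach is to wait for network activity to settle; the `networkidle` state below is an illustrative choice and can be slow on pages that poll constantly.

# Sketch: wait until network activity settles before capturing
page.goto(url)
page.wait_for_load_state('networkidle')
page.screenshot(path='screenshot.png')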
