How to use Selenium wait commands: implicit & explicit

Learn to master Selenium wait commands, both implicit and explicit, through this concise tutorial. Understand the nuances of each to make your data extraction tasks more reliable and efficient.

Best practices

  • Use implicit waits to handle general expected conditions where elements take time to load, but avoid using them when precise timing or specific conditions are required.

  • Explicit waits are preferable for conditions that must be met before proceeding, such as waiting for elements to become clickable or visible, or to disappear, as they let you define specific wait conditions and durations.

  • Avoid mixing implicit and explicit waits as they can lead to unpredictable wait times and can make debugging more difficult.

  • Always define a reasonable timeout for explicit waits to prevent indefinitely hanging tests if the expected condition is never met.

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from selenium.common.exceptions import TimeoutException

# Initialize WebDriver
driver = webdriver.Chrome()

# Set implicit wait
driver.implicitly_wait(10)  # Waits up to 10 seconds before throwing a NoSuchElementException

# Navigate to a webpage
driver.get("https://sandbox.oxylabs.io/products")

# Example of explicit wait: wait until an element is clickable
try:
    element = WebDriverWait(driver, 20).until(
        EC.element_to_be_clickable((By.ID, "target-element-id"))
    )
    element.click()
except TimeoutException:
    print("Element not clickable within 20 seconds")

# Clean up: close the browser
driver.quit()
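Under the hood, an explicit wait is simply a polling loop: WebDriverWait repeatedly evaluates the condition (every 500 ms by default) until it returns a truthy value or the timeout expires. A minimal pure-Python sketch of that loop, with `wait_until` as a hypothetical stand-in for `WebDriverWait.until` (not part of Selenium's API):

```python
import time

def wait_until(condition, timeout=20, poll_frequency=0.5):
    # Hypothetical stand-in for WebDriverWait(driver, timeout).until(condition):
    # keep polling the condition until it returns a truthy value, or raise
    # once the timeout expires (Selenium raises TimeoutException instead).
    end = time.monotonic() + timeout
    while True:
        result = condition()
        if result:
            return result
        if time.monotonic() > end:
            raise TimeoutError(f"Condition not met within {timeout} seconds")
        time.sleep(poll_frequency)

# Example: the condition becomes truthy after roughly 50 ms
start = time.monotonic()
result = wait_until(lambda: time.monotonic() - start > 0.05,
                    timeout=2, poll_frequency=0.01)
```

Because the condition's return value is passed through, `until` can hand you the located element directly, which is why `element = WebDriverWait(...).until(...)` works in the examples above.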

Common issues

  • Ensure that the element locators used in explicit waits are accurate and unique to avoid targeting the wrong elements or multiple elements unintentionally.

  • When using explicit waits, handle exceptions such as `TimeoutException` to provide clear error messages or alternative actions when an element does not meet the expected condition within the specified time.

  • Regularly update the conditions used in explicit waits to align with changes in the web application's UI and functionality to maintain test reliability.

  • Use explicit waits when testing AJAX-loaded elements to ensure that all dynamic content has fully loaded before interacting with the page.

# Incorrect: Using a non-unique ID that might match multiple elements
element = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.ID, "non-unique-id"))
)

# Correct: Using a unique and specific locator
element = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.ID, "unique-element-id"))
)

# Incorrect: Not handling exceptions, which might leave the browser hanging if an element is not found
element = WebDriverWait(driver, 20).until(
    EC.element_to_be_clickable((By.ID, "some-id"))
)
element.click()

# Correct: Handling TimeoutException to provide feedback or take alternative actions
try:
    element = WebDriverWait(driver, 20).until(
        EC.element_to_be_clickable((By.ID, "some-id"))
    )
    element.click()
except TimeoutException:
    print("Element not clickable within 20 seconds")

# Incorrect: Using outdated or incorrect conditions that no longer match the UI
element = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.ID, "old-ui-element"))
)

# Correct: Regularly updating the wait conditions to match the current UI elements
element = WebDriverWait(driver, 20).until(
    EC.visibility_of_element_located((By.ID, "updated-ui-element"))
)

# Incorrect: Trying to interact with AJAX-loaded elements without waiting for them to load
driver.get("https://example.com")
element = driver.find_element(By.ID, "ajax-element")
element.click()

# Correct: Using explicit waits for AJAX-loaded elements to ensure they are fully loaded
driver.get("https://example.com")
element = WebDriverWait(driver, 20).until(
    EC.presence_of_element_located((By.ID, "ajax-element"))
)
element.click()
