Main causes

  • Uploading a file that exceeds the server's maximum allowed size.

  • Submitting a form with too much data.

  • Sending a request with too many batched operations or large headers.

  • Exceeding the server's limit on the length of URL or query parameters.
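All of these causes come down to raw byte count. As a rough sketch (the 1 MiB limit below is an assumption, matching NGINX's default `client_max_body_size`; your server's limit may differ), you can check a payload's size before sending and compress it when it's over:

```python
import gzip
import json

# Hypothetical server limit (NGINX's default client_max_body_size is 1 MiB)
MAX_BODY_BYTES = 1 * 1024 * 1024

# A large, repetitive JSON payload that exceeds the limit if sent raw
records = [{"id": i, "note": "x" * 200} for i in range(10_000)]
body = json.dumps(records).encode("utf-8")

print(len(body) > MAX_BODY_BYTES)   # raw payload is over the limit

# gzip-compressing repetitive data often brings it back under the limit;
# remember to send Content-Encoding: gzip if you upload the compressed form
compressed = gzip.compress(body)
print(len(compressed) < MAX_BODY_BYTES)
```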

# Request example 
curl -X GET https://sandbox.oxylabs.io/ \
  -H 'Accept: application/xml;q=0.9' \
  -H 'Accept-Encoding: gzip, deflate' \
  -H 'User-Agent: Safari/537.36'


Common fixes

  • Reduce the size of the request body — send only the data the server actually needs.

  • Compress the payload (e.g., with gzip) and set the matching Content-Encoding header.

  • Split large uploads into smaller chunks or multiple requests.

  • Move long query strings into a POST body, and trim oversized headers and cookies.

  • If you control the server, raise its body-size limit (e.g., client_max_body_size in NGINX).
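When compression alone isn't enough, splitting the payload across several requests can work, assuming the API accepts partial uploads. The 1 MiB chunk size below is illustrative, not a standard value:

```python
def split_payload(data: bytes, chunk_size: int) -> list[bytes]:
    """Split a payload into chunks that each fit under the server's limit."""
    return [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]

CHUNK = 1024 * 1024                  # hypothetical 1 MiB per-request limit
payload = b"x" * 2_500_000           # ~2.5 MB, over that limit in one request

chunks = split_payload(payload, CHUNK)
print(len(chunks))                   # three small requests instead of one oversized one
assert b"".join(chunks) == payload   # nothing lost in the split
```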

# Response example 
HTTP/1.1 413 Payload Too Large
Content-Type: text/html; charset=UTF-8
Content-Length: 49
Connection: close

413 Payload Too Large
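A client can also treat a 413 response like the one above as a signal to shrink the request and retry. The back-off helper below is a hypothetical strategy, not a standard API:

```python
def next_chunk_size(status_code: int, chunk_size: int, floor: int = 1024) -> int:
    """After a 413, halve the chunk size for the retry, down to a floor."""
    if status_code == 413:
        return max(chunk_size // 2, floor)
    return chunk_size  # any other status: keep the current size

print(next_chunk_size(413, 1024 * 1024))  # halved to 524288 after a 413
print(next_chunk_size(200, 1024 * 1024))  # unchanged on success
```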
