Response codes for Web Scraper API


The most common response codes you can encounter while using Oxylabs' Web Scraper API include:

API

| Code | Status | Description |
| --- | --- | --- |
| 200 | OK | All went well. |
| 202 | Accepted | Your request was accepted. |
| 204 | No Content | You are trying to retrieve a job that has not been completed yet. |
| 400 | Multiple error messages | Wrong request structure. Could be a misspelled parameter or an invalid value. The response body will have a more specific error message. |
| 401 | 'Authorization header not provided' / 'Invalid authorization header' / 'Client not found' | Missing authorization header or incorrect API user credentials. |
| 403 | Forbidden | Your account does not have access to this resource. |
| 404 | Not Found | The job ID you are looking for is no longer available. |
| 422 | Unprocessable Entity | There is something wrong with the payload you posted to us. Make sure it's a valid JSON object. |
| 429 | Too Many Requests | Exceeded rate limit. Please contact your account manager to increase limits. |
| 500 | Internal Server Error | We're facing technical issues, please retry later. We may already be aware, but feel free to report it anyway. |
| 524 | Timeout | Service unavailable. |
| 612 | Undefined Internal Error | Job submission failed. Retry faulted jobs at no extra cost, or reach out to us for assistance. |
| 613 | Faulted After Too Many Retries | Job submission failed. Retry faulted jobs at no extra cost, or reach out to us for assistance. |
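In practice, it helps to branch on these codes rather than treating every non-200 response the same way: 429, 500, and 524 are usually worth retrying with a delay, while 4xx codes point to a problem with the request itself. Below is a minimal Python sketch, assuming the realtime integration endpoint (`https://realtime.oxylabs.io/v1/queries`), Basic Auth API user credentials, and a `universal` source; adapt the payload, credentials, and retry policy to your own setup.

```python
import time

import requests

# Assumed realtime endpoint and payload; adjust to your integration method.
ENDPOINT = "https://realtime.oxylabs.io/v1/queries"
payload = {"source": "universal", "url": "https://example.com"}

for attempt in range(3):
    response = requests.post(
        ENDPOINT,
        auth=("USERNAME", "PASSWORD"),  # API user credentials (401 if wrong)
        json=payload,                   # must be a valid JSON object (422 otherwise)
        timeout=180,
    )
    if response.status_code == 200:
        print(response.json())          # job finished, results included
        break
    elif response.status_code in (429, 500, 524):
        time.sleep(5 * (attempt + 1))   # rate limit or server-side issue: back off and retry
    else:
        response.raise_for_status()     # 400/401/403/404/422: fix the request instead of retrying
```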

Parsers

| Code | Status | Description |
| --- | --- | --- |
| 12000 | Success | The returned parsed content is full, and there should be no missing or broken fields. |
| 12002 | Failure | We couldn't parse the page entirely. There may be an issue with the target website changing its HTML structure. |
| 12003 | Not Supported | The web page you asked us to parse is not supported. |
| 12004 | Partial Success | We were able to parse the majority of the page. However, a few fields are missing. |
| 12005 | Partial Success | We were able to parse the majority of the page. However, some fields might have default values because we could not find them in the HTML. |
| 12006 | Failure | Unexpected error. Let us know you got this response, and we'll check what went wrong. |
| 12007 | Unknown | Unknown parsed data status. The actual result can range from a complete failure to a total success. |
| 12008 | Failure | Parsed content is missing. |
| 12009 | Failure | Product not found. Check the URL you submitted. |
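If you use dedicated parsers, it's worth checking the parser code in each result before consuming the data, since a job can succeed at the HTTP level while parsing only partially succeeds. The sketch below assumes the code is returned as a `parse_status_code` field inside each result's `content` object; the exact field name and location can vary by source, so verify it against the responses you receive.

```python
# Codes from the Parsers table above.
PARTIAL = {12004, 12005}
FAILED = {12002, 12006, 12008, 12009}

def check_parsed(result: dict) -> None:
    # "parse_status_code" inside "content" is an assumed location; confirm
    # it against the actual response structure for your source.
    code = result.get("content", {}).get("parse_status_code")
    if code == 12000:
        print("Parsing succeeded; all fields should be present.")
    elif code in PARTIAL:
        print(f"Partial success ({code}): some fields are missing or defaulted.")
    elif code in FAILED:
        print(f"Parsing failed ({code}): consider re-submitting or reporting it.")
    else:
        print(f"Unknown or unsupported parse status: {code}")
```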


For more response codes, as well as detailed explanations of what each of them means, visit our Web Scraper API documentation.
