Start using Web Scraper API


Create an API user

Begin by creating an account on the Oxylabs dashboard. After that, select a pricing plan or pick a free trial and create an API user. Your API user credentials will authorize your requests later on.

Make a cURL request

After creating a user, you'll see a few code examples for testing Web Scraper API. Copy the code into your terminal or preferred tool; for manual testing, we recommend Postman. You can start scraping Amazon, Google, or any other target right away. Here are cURL examples for your first request:

Amazon:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "amazon_product", "query": "B07FZ8S74R", "geo_location": "90210", "parse": true}'

Google:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "google_search", "query": "adidas", "geo_location": "California,United States", "parse": true}'

Other:

curl 'https://realtime.oxylabs.io/v1/queries' --user 'USERNAME:PASSWORD' -H 'Content-Type: application/json' -d '{"source": "universal", "url": "https://sandbox.oxylabs.io/"}'

Replace USERNAME and PASSWORD with the API credentials you’ve just created, and then run the request via your terminal, Postman, or any other setup.
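The same Realtime request can also be built in Python using only the standard library. This is a minimal sketch assuming the endpoint and placeholder credentials from the cURL examples above; the request is constructed but not sent, so it runs offline (uncomment the last lines to actually send it):

```python
import base64
import json
import urllib.request

# Placeholder credentials -- replace with the API user you created.
USERNAME, PASSWORD = "USERNAME", "PASSWORD"

payload = {
    "source": "google_search",
    "query": "adidas",
    "geo_location": "California,United States",
    "parse": True,
}

# HTTP Basic auth goes in the Authorization header -- the same thing
# that `curl --user 'USERNAME:PASSWORD'` does under the hood.
token = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    "https://realtime.oxylabs.io/v1/queries",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    },
    method="POST",
)

print(request.get_header("Authorization"))  # Basic <base64 of USERNAME:PASSWORD>

# Uncomment to send (keeps the connection open until the job finishes):
# with urllib.request.urlopen(request, timeout=180) as resp:
#     print(json.loads(resp.read()))
```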


Integration methods

There are three different methods you can use to integrate Web Scraper API:

Realtime – provides a synchronous method where you have to keep the connection open until the job is finished.

Push-Pull – enables an asynchronous method where you’ll have to make another request to the API to retrieve results once the job is finished.

Proxy Endpoint – offers a synchronous method where you can use our endpoint as a proxy.

For more information, please refer to the respective integration pages in our documentation or this comprehensive blog post.

Targets and parameters

Web Scraper API can effectively gather public data from any website, including e-commerce, search engines, online travel agencies, real estate platforms, and others. Visit our documentation to learn more about the parameters you can use and see code examples.

For features such as JavaScript rendering and user agents, explore the Features pages in our documentation.

Additional free features

If you want to extend the API functionality in your projects, you can use the following features integrated into Web Scraper API:

OxyCopilot – generates ready-to-use code for web scraping in seconds. This feature can be accessed via the API Playground on our dashboard. We also have a collection of pre-made prompts and code samples to help you get public data from different targets even faster; it’s always available in the OxyCopilot prompts and code samples library.

Headless Browser – renders JavaScript and lets you define custom browser instructions, such as entering text, clicking elements, scrolling pages, and more.

Custom Parser – enables you to create your own parsing and data processing logic that’s executed on a raw scraping result.

Web Crawler – crawls any site, allowing you to select useful content and receive it in bulk. You can use it to perform URL discovery, crawl all pages on a site, index all URLs on a domain, and more.

Scheduler – provides a way for you to automate recurring scraping and parsing tasks by creating schedules.

Browser instructions – allows you to automate browser actions – like clicking, scrolling, filling out forms, or waiting for elements – before extracting data from a webpage. The feature is accessible in the API Playground.

Test our Web Scraper API in the Playground, which is accessible via the Oxylabs dashboard.
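The asynchronous Push-Pull integration boils down to "submit a job, poll its status, then fetch results". The helper below is a minimal offline sketch of that polling loop: the status values (`pending`, `done`, `faulted`) and the job-status URL named in the docstring are assumptions for illustration, and the status fetcher is injected so the logic runs without network access.

```python
import time

def wait_for_job(fetch_status, job_id, poll_every=5, max_polls=60):
    """Poll a Push-Pull job until its status reports 'done'.

    `fetch_status` is injected so the flow can run without a network.
    In a real integration it would GET the job-status endpoint with
    Basic auth (assumed: https://data.oxylabs.io/v1/queries/{job_id})
    and read the "status" field of the JSON response.
    """
    for _ in range(max_polls):
        status = fetch_status(job_id)
        if status == "done":
            return True  # results can now be fetched with one more GET
        if status == "faulted":
            raise RuntimeError(f"job {job_id} failed")
        time.sleep(poll_every)
    raise TimeoutError(f"job {job_id} not finished after {max_polls} polls")

# Stand-in for real API responses: two polls still pending, third done.
simulated = iter(["pending", "pending", "done"])
print(wait_for_job(lambda job_id: next(simulated), "12345", poll_every=0))
# → True
```

In production you would swap the lambda for a function that performs the authenticated GET request, keeping the retry and failure handling unchanged.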