Web Scraper API integration methods

Last updated 2 days ago

Web Scraper API integration methods are the different ways you can use our Web Scraper API within your infrastructure and make calls to the API. There are three methods, each with its own benefits. See this blog post for an in-depth look.

Synchronous integration

This method is easier to implement and use, yet it consumes more infrastructure resources.

  • Realtime – you have to keep the connection open until the job is finished. This technique is great for sending JSON payloads with scraping and parsing descriptions, including advanced scraping parameters.

  • Proxy Endpoint – allows you to use our endpoint like a proxy server. You can use this technique if you’re more familiar with proxies and would just like to get unblocked content.
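As a rough illustration, the two synchronous techniques might look like this in Python with the `requests` library. The endpoint URLs, the `universal` source name, and the USERNAME/PASSWORD credentials are placeholders for the sketch and may not match the current API exactly:

```python
import requests

AUTH = ("USERNAME", "PASSWORD")  # placeholder API credentials


def scrape_realtime(url):
    """Realtime: POST a JSON job description and keep the connection
    open until the finished result arrives in the response body."""
    payload = {
        "source": "universal",  # illustrative scraper source name
        "url": url,
    }
    response = requests.post(
        "https://realtime.oxylabs.io/v1/queries",  # assumed endpoint
        auth=AUTH,
        json=payload,
        timeout=180,  # the connection stays open while the job runs
    )
    response.raise_for_status()
    return response.json()


def scrape_via_proxy(url):
    """Proxy Endpoint: route an ordinary GET through the endpoint as if
    it were a proxy server; the unblocked content comes back directly."""
    proxy = "http://USERNAME:PASSWORD@realtime.oxylabs.io:60000"  # assumed
    response = requests.get(
        url,
        proxies={"http": proxy, "https": proxy},
        timeout=180,
    )
    response.raise_for_status()
    return response.text
```

Note how the Proxy Endpoint variant needs no API-specific payload at all, which is why it suits users who already have proxy-based tooling in place.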

Asynchronous integration

This method is best for large-scale web scraping as it offers more functionality and doesn’t consume additional infrastructure resources.

Push-Pull – you’ll have to send another request to our API to retrieve job results. This approach also enables you to deliver scraped results straight to your cloud storage, such as AWS S3 or Google Cloud Storage.
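A minimal sketch of the Push-Pull flow, again using `requests` with placeholder credentials and an assumed endpoint and response shape (job `id` and `status` fields): submit the job, poll until it finishes, then pull the results with a separate request.

```python
import time

import requests

AUTH = ("USERNAME", "PASSWORD")  # placeholder API credentials
API = "https://data.oxylabs.io/v1/queries"  # assumed endpoint


def submit_job(url):
    """Submit a scraping job; the API answers immediately with a job id,
    so no connection has to stay open while the job runs."""
    payload = {"source": "universal", "url": url}  # illustrative payload
    response = requests.post(API, auth=AUTH, json=payload, timeout=30)
    response.raise_for_status()
    return response.json()["id"]


def fetch_results(job_id, poll_every=5):
    """Poll the job status, then retrieve the results with a second request."""
    while True:
        status = requests.get(f"{API}/{job_id}", auth=AUTH, timeout=30)
        status.raise_for_status()
        if status.json().get("status") == "done":
            break
        time.sleep(poll_every)
    results = requests.get(f"{API}/{job_id}/results", auth=AUTH, timeout=30)
    results.raise_for_status()
    return results.json()
```

Because submission and retrieval are decoupled, your infrastructure only does work at those two moments, which is what makes this method cheap to run at large scale; the cloud-storage delivery option removes even the retrieval step.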

For more information about each integration method, please refer to the respective pages of our documentation.
