Oxylabs Documentation

Getting Started

Creating a job

To start scraping with one of our Scraper APIs, follow the simple steps below:
  1. Select the domain you want to scrape under the Scraper API you are using.
    • E.g., if you are trying out our SERP Scraper API, you can choose to scrape Google, Baidu, or any other search engine. Our Scraper APIs are listed in the left-hand side menu.
  2. Choose your preferred page type under the chosen domain.
    • E.g., if you want to scrape Google, you can scrape it by providing a URL to your target page, or by supplying a few input parameters via specifically built page types (e.g., Search, Ads, and others) so we can form the URL on our end.
  3. Put together a query and send it to our API.
    • Under your chosen page type or domain, you will find code examples in different programming languages. Use them to build your query, and always make sure to include the following elements:
      • Endpoint. In all of our code examples, we send POST requests to the Realtime endpoint (https://realtime.oxylabs.io/v1/queries). If you decide to use another integration method, you may have to submit your queries to another endpoint.
      • Content-Type. When submitting jobs, always send the Content-Type: application/json header.
      • Payload. A collection of query parameters that specify in detail the job you would like our service to do. Pay attention to the mandatory parameters (source, query, or url). They are marked in green in the query parameter tables. You can make a very basic request using only the mandatory parameters, or add various optional ones (e.g., geo_location, user_agent_type, etc.).
      • Username and password. You must provide your API user credentials. Otherwise, your query won't work. Our Scraper APIs use basic HTTP authentication.
IMPORTANT: Always replace USERNAME and PASSWORD in the provided code examples with your API user credentials. Check out the authentication section for more information.
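Putting the steps above together, here is a minimal sketch in Python using only the standard library. The endpoint, headers, and authentication scheme are as described above; the google_search source and the query value are illustrative assumptions, so check the parameter tables for your chosen page type before running it.

```python
import base64
import json
import urllib.request

# Replace with your API user credentials (basic HTTP authentication).
USERNAME, PASSWORD = "USERNAME", "PASSWORD"

# Payload: 'source' and 'query' are mandatory for this page type;
# 'google_search' is an illustrative value, not the only option.
payload = {
    "source": "google_search",
    "query": "adidas",
    "geo_location": "United States",  # optional parameter
}

# Build the POST request to the Realtime endpoint with basic auth.
credentials = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    "https://realtime.oxylabs.io/v1/queries",
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {credentials}",
    },
    method="POST",
)

# Uncomment to submit the job (requires valid credentials):
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```

Libraries such as requests make the same call shorter, but the sketch above shows exactly which endpoint, headers, and payload fields are involved.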

Tools for manual testing

If you want to try our API manually before using it at scale, we recommend using Postman. Every Scraper API has its own Postman collection, which you can import into Postman and start scraping right away. The collections contain request templates for various sources and integration methods.
  • We strongly advise you to visit our API reference section to use and integrate with our Scraper APIs effectively. There, you will find information on integration methods, global parameter values, response codes, and usage statistics.
  • At any point, you can check your all-time usage statistics by querying the following endpoint: GET https://data.oxylabs.io/v2/stats. It's also possible to return your monthly or daily numbers. See the usage statistics section for more information.
  • Check out our Scraper APIs Scheduler functionality. It can be used for recurring scraping and parsing jobs.
  • Try out Web Crawler - a Scraper APIs feature that lets you crawl any site, select useful content, and have it delivered to you in bulk.
  • Check out Oxylabs GitHub for tutorials on how to scrape websites, use our tools, implement products, or integrate them using the most popular programming languages (e.g., C#, Java, Node.js, PHP, Python, etc.).
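The usage statistics endpoint mentioned above uses the same basic HTTP authentication as the Scraper APIs. A minimal sketch, assuming your API user credentials are in place:

```python
import base64
import urllib.request

# Replace with your API user credentials (basic HTTP authentication).
USERNAME, PASSWORD = "USERNAME", "PASSWORD"

# Build a GET request for all-time usage statistics.
credentials = base64.b64encode(f"{USERNAME}:{PASSWORD}".encode()).decode()
request = urllib.request.Request(
    "https://data.oxylabs.io/v2/stats",
    headers={"Authorization": f"Basic {credentials}"},
)

# Uncomment to fetch the statistics (requires valid credentials):
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode())
```

See the usage statistics section for the exact parameters that narrow the results down to monthly or daily numbers.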
If you need any assistance in making your query, feel free to contact us at [email protected] or via our 24/7 live chat.
All information herein is provided on an “as is” basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on this page. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website’s terms of service or receive a scraping license.