API reference
This section contains all the information required to effectively use and integrate Scraper APIs. It includes details about integration methods, authentication, global parameter values, rate limits, response codes, cloud integration, usage statistics, billing information, and downloading images.
To start scraping with one of our Scraper APIs, follow the simple steps below:
- 1. Try Scraper APIs for free for 1 week or choose a subscription plan. You can also test out our scrapers and parsers through the Scraper APIs Playground, which is accessible via our dashboard.
- 2. Select the domain you want to scrape under the Scraper API you are using.
  - E.g., if you are trying out our SERP Scraper API, you can choose to scrape Google, Baidu, or any other search engine. Our Scraper APIs are listed in the left-hand side menu.
- 3. Put together a query and send it to our API.
- Under the chosen page type, you will find code examples in different programming languages. Use them to build your query, and always make sure to include the following elements:
  - Endpoint. In all of our code examples, we send POST requests to the Realtime endpoint (https://realtime.oxylabs.io/v1/queries). If you decide to use another integration method, you may have to submit your queries to another endpoint.
  - Content type. When submitting jobs, always send the `Content-Type: application/json` header.
  - Payload. It's a collection of query parameters that specify in detail the job you would like our service to do. Pay attention to the mandatory parameters (`source`, `query`, or `URL`); they are marked in green in the query parameter tables. You can make a very basic request using only those mandatory parameters or add various additional ones (e.g., `geo_location`, `user_agent_type`, etc.).
  - Username and password. You must provide your API user credentials; otherwise, your query won't work. Our Scraper APIs use basic HTTP authentication.
IMPORTANT: Always replace USERNAME and PASSWORD in the provided code examples with your API user credentials. Check out the authentication section for more information.

If you want to try our API manually before using it at scale, we recommend using Postman. Every Scraper API has its own Postman collection, which you can import into Postman and start scraping right away. The collections contain request templates for various sources and integration methods. More information can be found here:
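The elements above can be sketched as a single request. The following is a minimal sketch using only the Python standard library; the `source` and `query` values are illustrative placeholders, so consult the query parameter tables for the actual source names and their mandatory parameters.

```python
import base64
import json
import urllib.request

ENDPOINT = "https://realtime.oxylabs.io/v1/queries"

# Illustrative payload -- the values here are example placeholders.
payload = {
    "source": "google_search",        # mandatory: which scraper source to use
    "query": "adidas",                # mandatory: the search term
    "geo_location": "United States",  # optional: localize the results
}

# Basic HTTP authentication: base64-encode "username:password".
# Replace USERNAME and PASSWORD with your API user credentials.
credentials = base64.b64encode(b"USERNAME:PASSWORD").decode()

request = urllib.request.Request(
    ENDPOINT,
    data=json.dumps(payload).encode(),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {credentials}",
    },
    method="POST",
)

# Uncomment to actually submit the job (requires valid credentials):
# with urllib.request.urlopen(request) as response:
#     print(json.loads(response.read()))
```

The same request can be made with any HTTP client (e.g., the `requests` library, where passing a dict via `json=` sets the `Content-Type: application/json` header automatically).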
- At any point, you can check your all-time usage statistics by querying the following endpoint: `GET https://data.oxylabs.io/v2/stats`. It's also possible to return your monthly or daily numbers. Visit the section below for more information.
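The statistics endpoint uses the same basic HTTP authentication as the scraping endpoints. A minimal sketch, again with the standard library; replace USERNAME and PASSWORD with your API user credentials:

```python
import base64
import urllib.request

# Basic HTTP authentication header, as with the scraping endpoints.
credentials = base64.b64encode(b"USERNAME:PASSWORD").decode()

request = urllib.request.Request(
    "https://data.oxylabs.io/v2/stats",
    headers={"Authorization": f"Basic {credentials}"},
)

# Uncomment to fetch your all-time usage statistics as JSON:
# with urllib.request.urlopen(request) as response:
#     print(response.read().decode())
```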
- Try out Web Crawler - a Scraper APIs feature that lets you crawl any site, select useful content and have it delivered to you in bulk.
- Check out our Scraper APIs Scheduler functionality. It can be used for recurring scraping and parsing jobs.
- Discover the Custom Parser feature, which empowers you to define your own parsing and data processing logic to be executed on raw scraping results.
- Check out Oxylabs GitHub for tutorials on how to scrape websites, use our tools, implement products, or integrate them using the most popular programming languages (e.g., C#, Java, Node.js, PHP, Python, etc.).
- Explore our Scraping expert lessons to delve deeper into the most relevant scraping topics presented by industry experts.
If you need any assistance in making your query, feel free to contact us at [email protected] or via the 24/7 available live chat.
All information herein is provided on an “as is” basis and for informational purposes only. We make no representation and disclaim all liability with respect to your use of any information contained on this page. Before engaging in scraping activities of any kind you should consult your legal advisors and carefully read the particular website’s terms of service or receive a scraping license.