Forming Requests
Read detailed guides on how to get started and make requests using Web Scraper API for different websites.
Search Engines
Getting started
Select the search engine you want to scrape: Google, Bing, Other Search Engines.
Request sample
curl 'https://realtime.oxylabs.io/v1/queries' \
--user 'USERNAME:PASSWORD' \
-H 'Content-Type: application/json' \
-d '{
    "source": "google_search",
    "query": "adidas"
}'

import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'google_search',
    'query': 'adidas',
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())

We use the synchronous Realtime integration method in our examples. If you would like to use Proxy Endpoint or asynchronous Push-Pull integration, refer to the integration methods section.
Forming a request
Pick your integration method: synchronous (Realtime, Proxy Endpoint) or asynchronous (Push-Pull).
When forming a request, include the following elements:
Realtime:
Endpoint: https://realtime.oxylabs.io/v1/queries
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Content-Type. When submitting jobs, always add this header: Content-Type: application/json
Payload:
source - This parameter sets the scraper that will be used to process your request.
URL or query - Provide the URL or query for the type of page you want to scrape. Refer to the table below and the corresponding target sub-pages for detailed guidance on when to use each parameter.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, parse, render, and more to customize your scraping request; see the payload sketch below.
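For example, a google_search payload that includes some of these optional parameters might look like this (an illustrative sketch; the values are placeholders, and not every parameter applies to every source):

# Illustrative payload: required 'source' and 'query' plus optional parameters.
payload = {
    'source': 'google_search',
    'query': 'adidas',
    'geo_location': 'United States',  # placeholder location value
    'user_agent_type': 'desktop',     # placeholder device profile
    'parse': True,                    # request structured (parsed) results
    'render': 'html',                 # placeholder value; renders JavaScript before returning the page
}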
Proxy Endpoint:
Endpoint:
Ignore certificates. In cURL, it's -k or --insecure.
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Payload:
URL - Provide the URL for the page you want to scrape.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, and parse, and send them as headers; see the sketch below.
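A minimal sketch of the Proxy Endpoint flow in Python, assuming realtime.oxylabs.io:60000 as the proxy entry point (confirm the exact address, and the header names for extra parameters, in the integration methods section):

import requests

# Route the request through the scraping proxy.
# The entry point address below is an assumption; verify it before use.
proxies = {
    'http': 'http://USERNAME:PASSWORD@realtime.oxylabs.io:60000',
    'https': 'https://USERNAME:PASSWORD@realtime.oxylabs.io:60000',
}

response = requests.get(
    'https://www.example.com',  # placeholder target URL
    proxies=proxies,
    verify=False,  # equivalent of curl's -k / --insecure (ignore certificates)
)
print(response.text)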
Push-Pull:
Endpoint:
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Content-Type. When submitting jobs, always add this header: Content-Type: application/json
Payload:
source - This parameter sets the scraper that will be used to process your request.
URL or query - Provide the URL or query for the type of page you want to scrape. Refer to the table below and the corresponding target sub-pages for detailed guidance on when to use each parameter.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, parse, and more to customize your scraping request; see the sketch below.
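A minimal asynchronous Push-Pull sketch in Python, assuming https://data.oxylabs.io/v1/queries as the job-submission endpoint (confirm the exact URL in the integration methods section):

import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'google_search',
    'query': 'adidas',
}

# Submit the job. The endpoint below is an assumption; verify it before use.
response = requests.post(
    'https://data.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,  # requests sets the Content-Type: application/json header for you
)

# The response describes the created job; fetch results later from the URLs it returns.
pprint(response.json())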
Upon submitting a request, you will promptly receive a JSON response containing all job details, including job parameters, job ID, and URLs for downloading job results:
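For illustration, a job-creation response has roughly the following shape (field names and values here are placeholders; the actual response may contain additional fields):

{
    "id": "1234567890123456789",
    "status": "pending",
    "source": "google_search",
    "query": "adidas",
    "_links": [
        {"rel": "self", "href": "https://data.oxylabs.io/v1/queries/1234567890123456789"},
        {"rel": "results", "href": "https://data.oxylabs.io/v1/queries/1234567890123456789/results"}
    ]
}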
Target: google
Sources: google_search, google_ads, google_images, google_lens, google_maps, google_travel_hotels, google_suggest, google_trends_explore
Marketplaces
Getting started
Select the online marketplace you want to scrape: Amazon, Google Shopping, Walmart, Best Buy, Etsy, Target, Other Websites.
Request sample
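A minimal Realtime request using the amazon_search source might look like the sketch below ('adidas' is a placeholder query; pick the source you need from the table further down):

import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'amazon_search',
    'query': 'adidas',
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())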
We use the synchronous Realtime integration method in our examples. If you would like to use Proxy Endpoint or asynchronous Push-Pull integration, refer to the integration methods section.
Forming a request
Pick your integration method: synchronous (Realtime, Proxy Endpoint) or asynchronous (Push-Pull).
When forming a request, include the following elements:
Realtime:
Endpoint: https://realtime.oxylabs.io/v1/queries
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Content-Type. When submitting jobs, always add this header: Content-Type: application/json
Payload:
source - This parameter sets the scraper that will be used to process your request.
URL or query - Provide the URL or query for the type of page you want to scrape. Refer to the table below and the corresponding target sub-pages for detailed guidance on when to use each parameter.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, parse, render, and more to customize your scraping request.
Proxy Endpoint:
Endpoint:
Ignore certificates. In cURL, it's -k or --insecure.
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Payload:
URL - Provide the URL for the page you want to scrape.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, and parse, and send them as headers.
Push-Pull:
Endpoint:
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Content-Type. When submitting jobs, always add this header: Content-Type: application/json
Payload:
source - This parameter sets the scraper that will be used to process your request.
URL or query - Provide the URL or query for the type of page you want to scrape. Refer to the table below and the corresponding target sub-pages for detailed guidance on when to use each parameter.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, parse, and more to customize your scraping request.
Upon submitting a request, you will promptly receive a JSON response containing all job details, including job parameters, job ID, and URLs for downloading job results.
Target: amazon
Sources: amazon_product, amazon_search, amazon_pricing, amazon_sellers, amazon_bestsellers, amazon_reviews, amazon_questions
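For product-level sources such as amazon_product, the query is typically a product identifier (ASIN) rather than a search phrase. A sketch of such a payload might look like this (the ASIN shown is a placeholder; the exact parameters each source accepts are covered on the corresponding target sub-pages):

# Illustrative amazon_product payload; submit it the same way as the sample above.
payload = {
    'source': 'amazon_product',
    'query': 'B0BDHWDR12',  # placeholder ASIN of the product to scrape
    'parse': True,          # request structured (parsed) results
}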
Other websites
Getting started
Scrape any URL with our universal source. You can also add additional parameters.
Request sample
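A minimal Realtime request using the universal source might look like the sketch below (https://www.example.com is a placeholder target URL):

import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'universal',
    'url': 'https://www.example.com',
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Print prettified response to stdout.
pprint(response.json())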
We use the synchronous Realtime integration method in our examples. If you would like to use Proxy Endpoint or asynchronous Push-Pull integration, refer to the integration methods section.
Forming a request
Pick your integration method: synchronous (Realtime, Proxy Endpoint) or asynchronous (Push-Pull).
When forming a request, include the following elements:
Realtime:
Endpoint: https://realtime.oxylabs.io/v1/queries
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Content-Type. When submitting jobs, always add this header: Content-Type: application/json
Payload:
source - This parameter sets the scraper that will be used to process your request.
URL - Provide the URL of the target you want to scrape, for example:
Real Estate: Idealista, Redfin, Zillow, Zoopla
Travel: Airbnb, Agoda, Booking, TripAdvisor
Company data: Crunchbase, ZoomInfo, AngelList, Product Hunt
Entertainment: Netflix, SoundCloud, YouTube, IMDb
Automotive: AutoEurope, Autotrader, RockAuto, Halfords
Any other.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, and more to customize your scraping request.
Proxy Endpoint:
Endpoint:
Ignore certificates. In cURL, it's -k or --insecure.
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Payload:
URL - Provide the URL for the page you want to scrape, for example:
Real Estate: Idealista, Redfin, Zillow, Zoopla
Travel: Airbnb, Agoda, Booking, TripAdvisor
Company data: Crunchbase, ZoomInfo, AngelList, Product Hunt
Entertainment: Netflix, SoundCloud, YouTube, IMDb
Automotive: AutoEurope, Autotrader, RockAuto, Halfords
Any other.
Additional parameters: Optionally, you can include additional parameters such as geo_location and user_agent_type, and send them as headers.
Push-Pull:
Endpoint:
Username and password (HTTP authentication). Create API user credentials either during your trial sign-up or product purchase.
If you need more than one API user for your account, please contact our customer support or message our 24/7 live chat support.
Content-Type. When submitting jobs, always add this header: Content-Type: application/json
Payload:
source - This parameter sets the scraper that will be used to process your request.
URL - Provide the URL of the target you want to scrape, for example:
Real Estate: Idealista, Redfin, Zillow, Zoopla
Travel: Airbnb, Agoda, Booking, TripAdvisor
Company data: Crunchbase, ZoomInfo, AngelList, Product Hunt
Entertainment: Netflix, SoundCloud, YouTube, IMDb
Automotive: AutoEurope, Autotrader, RockAuto, Halfords
Any other.
Additional parameters: Optionally, you can include additional parameters such as geo_location, user_agent_type, and more to customize your scraping request.
Upon submitting a request, you will promptly receive a JSON response containing all job details, including job parameters, job ID, and URLs for downloading job results.