Scrape other websites (YouTube, Aliexpress, eBay, Home Depot, Idealo, Zillow, Yandex, Baidu, etc.) with our universal source. It accepts URLs along with additional parameters.
Request samples
In this example, the API will retrieve an e-commerce product page.
import requests
from pprint import pprint

# Structure payload.
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/1',
}

# Get response.
response = requests.request(
    'POST',
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

# Instead of response with job status and results url, this will return the
# JSON response with the result.
pprint(response.json())
# The whole string you submit has to be URL-encoded.
https://realtime.oxylabs.io/v1/queries?source=universal&url=https%3A%2F%2Fsandbox.oxylabs.io%2Fproducts%2F1&access_token=12345abcde
geo_location
Geo location of the proxy used to retrieve the data. The complete list of the supported locations can be found here. Default: -

render
Enables JavaScript rendering when set to html. More info. NOTE: if you are observing low success rates or retrieving empty content, please try adding this parameter (a rendering example follows this table). Default: -

browser_instructions
Define your own browser instructions that are executed when rendering JavaScript. More info. Default: -

parse
Returns parsed data when set to true, as long as a dedicated parser exists for the submitted URL's page type. Default: false

parsing_instructions
Define your own parsing and data transformation logic that will be executed on an HTML scraping result. Read more: Parsing instructions examples. Default: -

session_id
If you want to use the same proxy with multiple requests, you can do so by using this parameter. Just set your session to any string you like, and we will assign a proxy to this ID and keep it for up to 10 minutes. After that, if you make another request with the same session ID, a new proxy will be assigned to that particular session ID. Default: -

context: http_method
Set it to post if you would like to make a POST request to your target URL via E-commerce Scraper API. Learn more here. A POST example follows this table. Default: get

user_agent_type
Device type and browser. The full list can be found here. Default: desktop

context: content
Base64-encoded POST request body. It is only useful if http_method is set to post. Default: -

content_encoding
Add this parameter if you are downloading images. Learn more here. Default: base64

context: follow_redirects
Set to true to enable the scraper to follow redirects. By default, redirects are followed up to a limit of 10 links, and the entire chain is treated as one scraping job. Default: false

context: successful_status_codes
Define a custom HTTP response code (or a few of them) upon which we should consider the scrape successful and return the content to you. This may be useful if you want us to return the 503 error page or in some other non-standard cases. Default: -
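Below is the rendering example referenced above: a minimal sketch that combines render and parse, reusing the Realtime endpoint from the sample at the top of this page. The credentials and target URL are placeholders.

import requests
from pprint import pprint

# Structure payload: render JavaScript on the page and, where a dedicated
# parser exists for the page type, return parsed data instead of raw HTML.
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/1',
    'render': 'html',
    'parse': True,
}

# Get response.
response = requests.post(
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

pprint(response.json())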
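And the POST example: a sketch of sending a POST request to the target via context: http_method and context: content. It assumes the context parameters are submitted as a list of key/value objects, as in the linked examples; the request body below is a placeholder.

import base64
import requests
from pprint import pprint

# The POST body sent to the target has to be Base64-encoded.
body = base64.b64encode(b'{"search": "example query"}').decode()

# Structure payload: http_method switches the request to the target to POST,
# and content carries the encoded body.
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/1',
    'context': [
        {'key': 'http_method', 'value': 'post'},
        {'key': 'content', 'value': body},
    ],
}

response = requests.post(
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

pprint(response.json())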
All parameters
In this example, all available parameters are included (though they are not always necessary or compatible within the same request) to give you an idea of how to format your requests.
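A sketch of such a payload, assembled from the parameter table above. All values are placeholders, and anything not spelled out in the table (the key/value form of the context entries, the internal structure of browser_instructions and parsing_instructions) is an assumption; follow the linked pages for the exact formats.

import requests
from pprint import pprint

# Structure payload. All values are placeholders; not every parameter is
# necessary (or compatible) within a single request.
payload = {
    'source': 'universal',
    'url': 'https://sandbox.oxylabs.io/products/1',
    'user_agent_type': 'desktop',
    'geo_location': 'United States',
    'render': 'html',
    'content_encoding': 'base64',
    'parse': True,
    'session_id': 'my-session-12345',   # reuse the same proxy for up to 10 minutes
    # 'browser_instructions': [...],    # see the browser instructions docs for the structure
    # 'parsing_instructions': {...},    # see Parsing instructions examples for the structure
    'context': [
        {'key': 'http_method', 'value': 'post'},
        {'key': 'content', 'value': 'base64EncodedPOSTBody=='},  # only used with http_method set to post
        {'key': 'follow_redirects', 'value': True},
        {'key': 'successful_status_codes', 'value': [503]},      # e.g. treat a 503 error page as success
    ],
}

response = requests.post(
    'https://realtime.oxylabs.io/v1/queries',
    auth=('USERNAME', 'PASSWORD'),
    json=payload,
)

pprint(response.json())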