Proxy Endpoint
If you have ever used regular proxies for data scraping, integrating the Proxy Endpoint delivery method will be a breeze. All you need to do is use our entry node as a proxy, authorize with your Scraper API credentials, and ignore certificates. In cURL, that is `-k` or `--insecure`. Your data will reach you on an open connection.
Proxy Endpoint only works with URL-based data sources, where the full URL is provided. Therefore, it accepts only a handful of additional job parameters, which should be sent as headers.
The product is not designed to be used with headless browsers (e.g., Chromium, PhantomJS, Splash, etc.) and their drivers (e.g., Playwright, Selenium, Puppeteer, etc.) directly.
Endpoint
Input
Please see a request example below.
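As a minimal sketch, a Proxy Endpoint request in cURL might look like the one below. The entry node address `realtime.example.com:60000` and the `USERNAME:PASSWORD` credentials are placeholders, not the actual values; substitute the host, port, and credentials from your dashboard.

```shell
# Route the request through the Scraper API entry node, authorizing with
# Scraper API credentials. Host, port, and credentials are placeholders.
curl -k \
     -x realtime.example.com:60000 \
     -U "USERNAME:PASSWORD" \
     "https://example.com"
```

Here, `-x` sets the proxy, `-U` supplies the proxy credentials, and `-k` (`--insecure`) skips certificate verification, as described above.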
Output
Below you will find a sample response from https://example.com:
Accepted parameters
When making your request, along with the URL, you can send us some job parameters that we will use while executing your job. The job parameters should be sent in your request headers - see an example here.
Here is the list of job parameters that you can send with Proxy Endpoint requests:
| Parameter | Description |
| --- | --- |
| | There is no way to indicate a specific User-Agent, but you can let us know which User-Agent type you would like us to use. A list of supported User-Agent types can be found here. |
| | In some cases, you may need to indicate the geographical location that the result should be adapted for. This parameter corresponds to the |
| | JavaScript execution. Read more here. |
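To illustrate how job parameters travel as request headers, here is a hedged sketch extending the basic request. The entry node host, credentials, and the header names `X-Geo-Location` and `X-Render` are all assumptions for illustration only; use the exact header names given in the parameter documentation linked above.

```shell
# Same request as before, with job parameters passed as request headers.
# Host, credentials, and header names are placeholders, not real values.
curl -k \
     -x realtime.example.com:60000 \
     -U "USERNAME:PASSWORD" \
     -H "X-Geo-Location: Germany" \
     -H "X-Render: html" \
     "https://example.com"
```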