Next-Gen Residential Proxies

Quick Start

Next-Gen Residential Proxies are built for heavy-duty data retrieval operations. They enable effortless web data extraction without any delays or errors. The product is as customizable as a regular proxy, but at the same time it guarantees a much higher success rate. Custom headers and IP stickiness are both supported, alongside reusable cookies and POST requests.

If you have ever used regular proxies for data scraping, integrating Next-Gen Residential Proxies will be a breeze. The only difference is that you need to accept our certificate, or ignore it altogether with the -k or --insecure cURL flags (or an equivalent option in the language of your choice).

To make a request using Next-Gen Residential Proxies, use the ngrp.oxylabs.io:60000 endpoint. Below is an example in cURL. You can find code samples in other languages here, or full code examples on our GitHub.

curl -k -x ngrp.oxylabs.io:60000 -U "USERNAME:PASSWORD" http://ip.oxylabs.io

If you have any questions not covered by this documentation, please contact your account manager or our support staff at support@oxylabs.io.

Making requests

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
)

# Print result page to stdout
pprint(response.text)
GET ngrp.oxylabs.io:60000

The easiest way to start is to send us a simple query without any custom options. We will add all standard headers on our end, pick the fastest proxy and deliver you the response body.

To use Next-Gen Residential Proxies features such as proxy geolocation or reusing the same IP via session control, send additional headers with the request.

Here's the full list of supported functionalities and headers:

Query parameters

Parameter Description
X-Oxylabs-Session-Id If you need to reuse the same IP for multiple requests, add a session ID, which can be a random string of characters
X-Oxylabs-Geo-Location To use an IP address from a specific location, specify a country or a city, for example Germany. Supported geo-locations can be found here.
Headers You can add your own headers, such as User-Agent or any other, instead of using pre-generated ones.
Cookies You can add your own cookies, for example Cookie: NID=1234567890, to your requests.
X-Oxylabs-Status-Code In case your target returns a custom status code with a successful response, you can send the status code of the response and our system will not retry the request.
X-Oxylabs-Render If you wish to render JavaScript, use html to get rendered HTML or png to get a screenshot of the page.
X-Oxylabs-Parser-Type This header is used to select the parser type. Currently ecommerce_product is supported.
X-Oxylabs-Parse This header is used to parse data from the website. The value 1 must be passed to enable parsing.

Session

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "X-Oxylabs-Session-Id: 123randomString"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "X-Oxylabs-Session-Id": "123randomString"
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.text)

If you want to use the same proxy to make multiple requests, you can do that by adding the X-Oxylabs-Session-Id header with a randomly-generated string for the session ID. We will assign a proxy to this ID and keep it for up to 10 minutes. After that a new proxy will be assigned to that particular session ID.

Geo-Location

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "X-Oxylabs-Geo-Location: Munich,Germany"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "X-Oxylabs-Geo-Location": "Munich,Germany"
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.text)

Some websites will not serve content if accessed from unsupported geo-locations. You can specify in your request which country you want to access the target from. Just add the X-Oxylabs-Geo-Location header with the value set to a country name, for example, Germany for Germany or United States for the United States.

Next-Gen Residential Proxies also support city-level targeting. If you want to add a city, simply add the city name before the country, e.g., Munich,Germany.

The full list of supported geo-location parameter values can be found here.

Headers

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "Your-Custom-Header: interesting header content" -H "User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36" -H "Accept-Language: en-US"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "Your-Custom-Header": "interesting header content",
    "User-Agent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Ubuntu Chromium/73.0.3683.86 Chrome/73.0.3683.86 Safari/537.36",
    "Accept-Language": "en-US"
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.text)

If you know more than we do about a particular target, you can add your own headers to your request. It can be both standard headers, such as User-Agent or Accept-Language, and something completely custom and target-specific.

Cookies

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "Cookie: NID=1234567890; 1P_JAR=0987654321"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "Cookie": "NID=1234567890; 1P_JAR=0987654321"
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.text)

The system also allows setting custom cookies for the target website. With your initial request, we return all response headers and cookies. You can modify them on your end and send them back to our system with the next request. It is a good idea to reuse cookies when you use the same IP address to make several consecutive requests (see Session for more).
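The cookie-reuse flow described above can be sketched with a requests.Session, which captures Set-Cookie values from the first response and replays them on the next request automatically. This is a minimal sketch, not part of the official examples; the session ID string and target URL are illustrative:

```python
import requests

# Credentials and session ID are placeholders - use your own values.
proxies = {
    'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
    'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}
headers = {'X-Oxylabs-Session-Id': '123randomString'}  # keep the same IP

session = requests.Session()  # the Session object stores returned cookies
session.proxies.update(proxies)
session.headers.update(headers)
session.verify = False  # ignore the certificate, as with -k in cURL

# first = session.get('https://ip.oxylabs.io')   # cookies captured here...
# second = session.get('https://ip.oxylabs.io')  # ...and replayed here
```

Pairing the session-scoped cookie jar with X-Oxylabs-Session-Id keeps both the IP and the cookies consistent across consecutive requests.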

Custom Status Code

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "X-Oxylabs-Status-Code: 500,501,502,503" 
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "X-Oxylabs-Status-Code": "500,501,502,503"
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.text)

By default, we assume that the request is successful as long as it returns a 2xx or a 4xx status code. However, sometimes websites return the required content together with a non-standard HTTP status code. If one of your targets does that, you can indicate which status codes are acceptable and actually valuable to you. Simply add the X-Oxylabs-Status-Code header listing all HTTP response codes that work for you. Please note that 2xx and 4xx will still be automatically marked as successful.

POST Requests

curl -X POST -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -d "@/path/to/file.json"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

payload = {
    "Your POST JSON": "data"
}

response = requests.post(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    json=payload,  # Send the payload as a JSON body, as in the cURL example
)

# Print result page to stdout
pprint(response.text)

Next-Gen Residential Proxies support not only GET requests but also POST requests to a web endpoint of your choice. This means you can send data to a target website, which may make it return a different result.

JavaScript rendering

curl -k -v -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "X-Oxylabs-Render: html"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "X-Oxylabs-Render": "html"
}

response = requests.get(
    'https://ip.oxylabs.io',
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.text)

This header enables JavaScript rendering. Use it when the target requires JavaScript to load content. There are two available values for this parameter: html (get rendered HTML) and png (get a Base64-encoded screenshot).
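Because the png value returns a Base64-encoded body, it has to be decoded before being written to disk. A minimal sketch follows; the payload below is a stand-in for response.text from a real request, and the output filename is arbitrary:

```python
import base64

# Stand-in for response.text after a request with "X-Oxylabs-Render: png".
# A real response body would be the Base64-encoded screenshot bytes.
body = base64.b64encode(b'\x89PNG\r\n\x1a\n').decode()

png_bytes = base64.b64decode(body)  # recover the raw PNG bytes
# with open('screenshot.png', 'wb') as f:
#     f.write(png_bytes)
```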

Adaptive parsing

curl -v -k -x ngrp.oxylabs.io:60000 -U user:pass1 "https://ip.oxylabs.io" -H "X-Oxylabs-Parser-Type: ecommerce_product" -H "X-Oxylabs-Parse: 1"
import requests
from pprint import pprint

# Define proxy dict. Don't forget to put your real user and pass here as well.
proxies = {
  'http': 'http://user:pass1@ngrp.oxylabs.io:60000',
  'https': 'http://user:pass1@ngrp.oxylabs.io:60000',
}

headers = {
    "X-Oxylabs-Parser-Type": "ecommerce_product",
    "X-Oxylabs-Parse": "1",
}

response = requests.get(
    'https://ip.oxylabs.io', #E-commerce product page
    verify=False,  # It is required to ignore certificate
    proxies=proxies,
    headers=headers,
)

# Print result page to stdout
pprint(response.json())

Sample output:

{
    "results": [
        {
            "content": {
                "url": "https://ip.oxylabs.io/product/product_example.html",
                "body": {
                    "price": 11.99,
                    "title": "Example product title",
                    "currency": "$",
                    "old_price": 15.99,
                    "description": null,
                    "image_links": [
                        "https://ip.oxylabs.io/img/product_example.png"
                    ],
                    "ids_from_url": [],
                    "simple_links": [
                        {
                            "link": "https://ip.oxylabs.io/product/another_example.html",
                            "description": "Another product example"
                        }
                    ],
                    "ids_from_html": [
                        {
                            "Product number": "14158288"
                        }
                    ],
                    "price_range_lower": null,
                    "price_range_upper": null
                },
                "meta": {
                    "title": "Example product meta title",
                    "keywords": [],
                    "description": "Example product description"
                },
                "parse_status_code": 12000
            },
            "created_at": "2020-01-01 10:00:00",
            "updated_at": "2020-01-01 10:00:07",
            "id": 29964797,
            "page": 1,
            "url": "https://ip.oxylabs.io/product/product_example.html",
            "job_id": "6699272813062145025",
            "status_code": 200
        }
    ]
}

Adaptive parsing extracts the most important fields from any e-commerce product page. To enable it for e-commerce product pages, send two additional headers: X-Oxylabs-Parser-Type: ecommerce_product to select the parser type and X-Oxylabs-Parse: 1 to parse the retrieved page.

The fields parsed for e-commerce product pages can be seen in the sample output above, e.g. title, price, currency, old_price, description, and image_links.

Usage Statistics

This query returns all-time statistics. You can find your daily and monthly usage by adding either ?group_by=day or ?group_by=month. Traffic is displayed in bytes. Please note that you must prepend "NGRP__" to your username.

curl --user NGRP__user:pass1 'https://data.oxylabs.io/v1/stats'

Sample output:

{
    "meta": {
        "group_by": null,
        "date_from": null,
        "date_to": null,
        "source": null
    },
    "data": {
        "sources": [
            {
                "results_count_all": "1482",
                "results_count": "0",
                "realtime_results_count": "0",
                "super_api_results_count": "12777",
                "render": "0",
                "geo_location": "0",
                "average_response_time": 2.18,
                "request_traffic": "6629", 
                "response_traffic": "17850",
                "title": "universal"
            }
        ]
    }
}

You can find your usage statistics by visiting our Dashboard or by querying the following endpoint:

GET https://data.oxylabs.io/v1/stats

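In Python, the stats query with a group_by parameter might be built like this. This sketch only constructs and inspects the request without sending it; NGRP__user and pass1 are placeholder credentials:

```python
import requests

# Build (but do not send) a stats request grouped by day.
# Remember to prepend "NGRP__" to your username.
session = requests.Session()
request = requests.Request(
    'GET',
    'https://data.oxylabs.io/v1/stats',
    params={'group_by': 'day'},  # or 'month'; omit for all-time stats
    auth=('NGRP__user', 'pass1'),
)
prepared = session.prepare_request(request)
print(prepared.url)  # https://data.oxylabs.io/v1/stats?group_by=day

# To actually send the request:
# response = session.send(prepared)
# print(response.json())
```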

Sample Response

HTTP/1.1 200 OK
X-Job-Id: 1234567890123456
X-Session-Id: 123randomString
cf-cache-status: DYNAMIC
cf-ray: 55c2ab837eddcba8-VIE
content-encoding: gzip
content-length: 72657
content-type: text/html
date: Tue, 1 Jan 2020 00:00:01 GMT
expect-ct: max-age=604800, report-uri="https://report-uri.cloudflare.com/cdn-cgi/beacon/expect-ct"
last-modified: Tue, 1 Dec 2020 00:00:00 GMT
server: cloudflare
status: 200
strict-transport-security: max-age=31536000
X-DNS-Prefetch-Control: off
Set-Cookie: NID=1234567890; expires=Wed, 29-Jul-2020 10:56:21 GMT
Set-Cookie: 1P_JAR=0987654321; expires=Wed, 29-Jul-2020 10:56:21 GMT
x-cache: MISS from localhost
x-cache-lookup: MISS from localhost:3129

<html>content here</html>

This is what the response looks like. To see headers and cookies, enable verbose mode (the -v flag in cURL).