Other Search Engines

Scrape other search engines with our universal source. It accepts URLs along with additional parameters. You can find the list of available parameters in the table below.


Below is a quick overview of all the available data source values we support with other targets.

Source: universal

Description: Submit any URL you like.

Structured data: Use the Custom Parser feature to get structured data.

Code examples

In the example below, we make a request to retrieve a result for the provided Baidu URL.

curl 'https://realtime.oxylabs.io/v1/queries' \
-u 'USERNAME:PASSWORD' \
-H 'Content-Type: application/json' \
-d '{
        "source": "universal",
        "url": "https://www.baidu.com/s?ie=utf-8&f=8&rsv_bp=1&rsv_idx=1&ch=&tn=baidu&bar=&wd=adidas"
    }'

The example above uses the Realtime integration method. If you would like to use some other integration method in your query (e.g. Push-Pull or Proxy Endpoint), refer to the integration methods section.
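The same Realtime job can be sent from code. Below is a minimal sketch using only the Python standard library; the USERNAME and PASSWORD placeholders stand in for your API credentials and are not real values:

```python
import base64
import json
import urllib.request

# Job payload, identical to the curl example above.
PAYLOAD = {
    "source": "universal",
    "url": ("https://www.baidu.com/s?ie=utf-8&f=8&rsv_bp=1"
            "&rsv_idx=1&ch=&tn=baidu&bar=&wd=adidas"),
}

def scrape(username: str, password: str) -> dict:
    """POST the job to the Realtime endpoint with HTTP Basic auth."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(
        "https://realtime.oxylabs.io/v1/queries",
        data=json.dumps(PAYLOAD).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )
    # Realtime keeps the connection open until the result is ready.
    with urllib.request.urlopen(request, timeout=180) as response:
        return json.loads(response.read())
```

The timeout is generous because Realtime holds the connection open while the job runs.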

Query parameters

Parameter | Description

source *

Data source. More info.


url *

Direct URL (link) to Universal page.


user_agent_type

Device type and browser. The full list can be found here.


geo_location

Geo-location of the proxy used to retrieve the data. The complete list of the supported locations can be found here.


render

Enables JavaScript rendering. More info.


content_encoding

Add this parameter if you are downloading images. Learn more here.


context: content

Base64-encoded POST request body. Only useful if http_method is set to post.


context: cookies

Pass your own cookies.


context: follow_redirects

Set to true to allow the scraper to follow redirects. Redirects are followed up to a limit of 10 links, and the entire chain is treated as one scraping job.


context: headers

Pass your own headers.


context: http_method

Set to post to make a POST request to your target URL.


context: session_id

Use this parameter to route multiple requests through the same proxy. Set the session to any string you like, and we will assign a proxy to that ID and keep it for up to 10 minutes. After that, a request with the same session ID gets a new proxy assigned to it.


context: successful_status_codes

Define one or more custom HTTP response codes upon which we should consider the scrape successful and return the content to you. Useful if you want us to return, for example, a 503 error page, or in other non-standard cases.


callback_url

URL to your callback endpoint. More info.


parse

true will return structured data, as long as you also define parsing_instructions.


parsing_instructions

Define your own parsing and data transformation logic to be executed on the HTML scraping result. Read more: Parsing instructions examples.


* - required parameter
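To illustrate how the context parameters fit into a job, here is a sketch of a universal payload combining several of them. It assumes the context entries are passed as a list of key/value pairs; the URL, header, cookie, and session values are illustrative only:

```python
# A sketch of a universal job that combines several context parameters.
# Context entries are passed as a list of {"key": ..., "value": ...} pairs;
# the URL, header, cookie, and session values below are illustrative only.
payload = {
    "source": "universal",
    "url": "https://example.com/",
    "context": [
        {"key": "headers", "value": {"Accept-Language": "en-US"}},
        {"key": "cookies", "value": [{"key": "session", "value": "abc123"}]},
        # Reuse the same proxy for follow-up requests (kept for up to 10 minutes).
        {"key": "session_id", "value": "my-sticky-session"},
        # Treat a 503 reply as a successful scrape and return its content.
        {"key": "successful_status_codes", "value": [503]},
    ],
}
```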

If you are observing low success rates or retrieving empty content, try adding the "render": "html" parameter to your request. More info about the render parameter can be found here.

Forming URLs


Job parameter assignment to a Baidu URL:


When forming URLs, please follow these instructions:

  1. Encoding search terms: Search terms must be URL-encoded. For instance, spaces should be replaced with %20, which represents a space character in a URL.

  2. Calculating start page: The pn URL parameter corresponds to the number of search results to skip. Use the equation limit * start_page - limit to calculate its value.

  3. Subdomain assignment: The subdomain value depends on the user agent type provided in the job. If the user agent type contains mobile, the subdomain value should be m. Otherwise, it should be www.

  4. Query parameter: Depending on the subdomain value (m or www), the query parameter for the query term should be adjusted accordingly (word for m and wd for www).
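The four rules above can be sketched as a small helper function (the function name and signature are ours, not part of the API):

```python
from urllib.parse import quote

def build_baidu_url(query: str, limit: int, start_page: int,
                    user_agent_type: str = "desktop") -> str:
    """Build a Baidu search URL following rules 1-4 above (a sketch)."""
    term = quote(query, safe="")             # 1. URL-encode the search term
    pn = limit * start_page - limit          # 2. number of results to skip
    if "mobile" in user_agent_type:          # 3. subdomain from user agent type
        subdomain, key = "m", "word"         # 4. mobile uses the word parameter
    else:
        subdomain, key = "www", "wd"         # 4. desktop uses the wd parameter
    return f"https://{subdomain}.baidu.com/s?ie=utf-8&{key}={term}&rn={limit}&pn={pn}"
```

With limit 5 and start_page 3 this yields pn=10, matching the equivalent job examples further down.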

Sample Built URLs

For mobile:

https://m.baidu.com/s?ie=utf-8&word=test&rn=5&pn=10

For desktop:

https://www.baidu.com/s?ie=utf-8&wd=test&rn=5&pn=10

Equivalent Job Examples

Decommissioned baidu_search source:

    "source": "baidu_search",
    "query": "test",
    "domain": "com",
    "limit": 5,
    "start_page": 3,
    "user_agent_type": "desktop"

Updated universal source:

    "source": "universal",
    "url": "https://www.baidu.com/s?ie=utf-8&wd=test&rn=5&pn=10",
    "user_agent_type": "desktop"


Job parameter assignment to a Yandex URL:


When forming URLs, please follow these instructions:

  1. Encoding search terms: Search terms must be URL-encoded. For instance, spaces should be replaced with %20, which represents a space character in a URL.

  2. Start page adjustment: The value of the start_page has to be reduced by 1. For example, if the desired starting page is 3, then the value in the URL, which represents the page number, has to be 2.

  3. Localization: If the domain is either ru or tr, an additional query parameter lr is added with the geo_location value. For other domains, the geo_location value is passed via the rstr query parameter, with a - symbol prepended to the value.

  4. Unsupported parameters: The pages parameter is no longer supported. Submit a separate job for each results page by changing the page value in the URL.
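These rules can likewise be sketched as a helper (again, the function name and signature are ours):

```python
from urllib.parse import quote

def build_yandex_url(query: str, domain: str, limit: int, start_page: int,
                     geo_location: int, results_language: str = "") -> str:
    """Build a Yandex search URL following rules 1-4 above (a sketch)."""
    params = [
        f"text={quote(query, safe='')}",     # 1. URL-encode the search term
        f"numdoc={limit}",
        f"p={start_page - 1}",               # 2. page number is reduced by 1
    ]
    if domain in ("ru", "tr"):               # 3. ru/tr use the lr parameter...
        params.append(f"lr={geo_location}")
    else:                                    # ...others use rstr with a leading -
        params.append(f"rstr=-{geo_location}")
    if results_language:
        params.append(f"lang={results_language}")
    return f"https://yandex.{domain}/search?" + "&".join(params)
```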

Built URL examples

https://yandex.ru/search?text=test&numdoc=5&p=2&lr=100&lang=en

https://yandex.com/search?text=test&numdoc=5&p=2&rstr=-100&lang=en
Equivalent job example

Decommissioned yandex_search source:

    "source": "yandex_search",
    "query": "test",
    "domain": "com",
    "limit": 5,
    "start_page": 3,
    "geo_location": 100,
    "results_language": "en"

Updated universal source:

    "source": "universal",
    "url": "https://yandex.ru/search?text=adidas&numdoc=5&p=2&lr=100&lang=en"

Parameter values


Check the complete list of supported geo_location values here.

Here is an example:

"United Arab Emirates",
"Venezuela Bolivarian Republic of",
"South Africa",


Universal scraper supports two HTTP(S) methods: GET (default) and POST.
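As a sketch of a POST job, the snippet below base64-encodes a request body and sets http_method through the context parameter; the target URL and body are illustrative only:

```python
import base64
import json

# Sketch of a POST job: the body must be base64-encoded and http_method set
# to "post" via the context parameter. The target URL and body are illustrative.
body = json.dumps({"search": "adidas"}).encode()

payload = {
    "source": "universal",
    "url": "https://example.com/api/search",
    "context": [
        {"key": "http_method", "value": "post"},
        {"key": "content", "value": base64.b64encode(body).decode()},
    ],
}
```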

