Integrating Web Unblocker is easy, especially if you have previously used regular proxies for web scraping. The only difference is that we require you to ignore the SSL certificate using the -k or --insecure cURL flags (or an equivalent expression in the language of your choice).
To make a request using Web Unblocker, you need to use the unblock.oxylabs.io:60000 proxy endpoint. See a cURL example below. You can find code samples in other languages here or complete code examples on our GitHub.
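A minimal cURL request through the proxy endpoint might look like the following sketch. The placeholder credentials are assumptions; replace them with your own Web Unblocker username and password:

```shell
curl -k \
  -x unblock.oxylabs.io:60000 \
  -U "YOUR_USERNAME:YOUR_PASSWORD" \
  "https://ip.oxylabs.io/location"
```

Here -k tells cURL to ignore the SSL certificate, -x sets the proxy endpoint, and -U passes the proxy credentials.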
Use ip.oxylabs.io/location to check the parameters of your IPs—this domain delivers information from four geolocation databases: MaxMind, IP2Location, DB-IP, and IPinfo.io. The parameters include IP address, provider, country, city, ZIP code, ASN, organization name, time zone, and meta (when disclosed by database).
import requests

# Use your Web Unblocker credentials here.
USERNAME, PASSWORD = 'YOUR_USERNAME', 'YOUR_PASSWORD'

# Define proxy dict.
proxies = {
    'http': f'http://{USERNAME}:{PASSWORD}@unblock.oxylabs.io:60000',
    'https': f'https://{USERNAME}:{PASSWORD}@unblock.oxylabs.io:60000',
}

response = requests.request(
    'GET',
    'https://ip.oxylabs.io/location',
    verify=False,  # Ignore the SSL certificate
    proxies=proxies,
)

# Print result page to stdout
print(response.text)

# Save returned HTML to result.html file
with open('result.html', 'w') as f:
    f.write(response.text)
If you observe low success rates or retrieve empty content, please try adding the "x-oxylabs-render: html" header to your request.
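In Python with requests, the header can be attached as shown in the sketch below. The placeholder credentials are assumptions, and the commented-out send call assumes the proxy dict from the earlier example:

```python
import requests

# Hypothetical placeholder credentials; substitute your own.
USERNAME, PASSWORD = 'YOUR_USERNAME', 'YOUR_PASSWORD'

session = requests.Session()
request = requests.Request(
    'GET',
    'https://ip.oxylabs.io/location',
    # Ask Web Unblocker to render the page before returning it.
    headers={'x-oxylabs-render': 'html'},
)
prepared = session.prepare_request(request)
print(prepared.headers['x-oxylabs-render'])

# Send it through the proxy endpoint as in the earlier example:
# session.send(prepared, proxies=proxies, verify=False)
```

Preparing the request first simply makes the final headers inspectable; in practice you can pass headers=... directly to requests.request alongside proxies and verify=False.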
If you use Web Unblocker to scrape websites that depend on loading data via JavaScript, refer to the JavaScript rendering section. The product is not designed to be used directly with headless browsers (e.g., Chromium, PhantomJS, Splash, etc.) or their drivers (e.g., Playwright, Selenium, Puppeteer, etc.).
Watch the video below for an example of scraping a difficult target without getting blocked:
Lesson
If you want to learn more about collecting data at scale with Web Unblocker, we suggest watching this Scraping Experts lesson: