Cloud Storage
Scraper API job results are stored in our storage. You can retrieve your results from our storage by sending a GET request to the /results endpoint.
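As a minimal sketch of that pull step, assuming the Push-Pull host https://data.oxylabs.io/v1/queries, basic-auth credentials, and a placeholder job ID:

```python
# Minimal sketch: pull results for a finished job via the /results endpoint.
# The host, credentials, and job ID are placeholders; adjust to your setup.
import requests

JOB_ID = "12345678900987654321"  # taken from the job submission response

response = requests.get(
    f"https://data.oxylabs.io/v1/queries/{JOB_ID}/results",
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),
)
response.raise_for_status()
print(response.json())
```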
As an alternative, we can upload the results to your cloud storage. This way, you don't have to make extra requests to fetch results; everything goes directly to your storage bucket.
Cloud storage integration works only with the Push-Pull integration method.
Currently, we support Amazon S3 and Google Cloud Storage. If you would like to use a different type of storage, please contact your account manager to discuss the feature delivery timeline.
The upload path looks like this: YOUR_BUCKET_NAME/job_ID.json. You will find the job ID in the response that you receive from us after submitting a job. For example, a job with the ID 12345678900987654321 would be uploaded as YOUR_BUCKET_NAME/12345678900987654321.json.
To get your job results uploaded to your Amazon S3 bucket, please set up access permissions for our service. To do that, go to https://s3.console.aws.amazon.com/ > S3 > Storage > Bucket Name (if you don't have a bucket yet, create a new one) > Permissions > Bucket Policy.
You can find a sample bucket policy below. Don't forget to replace YOUR_BUCKET_NAME with the name of your bucket. This policy allows us to write to your bucket, grant you access to the uploaded files, and read the location of the bucket.
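Here is a sketch of such a policy, covering the three capabilities described above; treat OXYLABS_PRINCIPAL_ARN as a placeholder to confirm with your account manager:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "OxylabsWrite",
            "Effect": "Allow",
            "Principal": { "AWS": "OXYLABS_PRINCIPAL_ARN" },
            "Action": ["s3:PutObject", "s3:PutObjectAcl"],
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
        },
        {
            "Sid": "OxylabsGetBucketLocation",
            "Effect": "Allow",
            "Principal": { "AWS": "OXYLABS_PRINCIPAL_ARN" },
            "Action": "s3:GetBucketLocation",
            "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME"
        }
    ]
}
```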
To get your job results uploaded to your Google Cloud Storage bucket, please set up special permissions for our service. To do that, create a custom role with the storage.objects.create permission and assign it to the Oxylabs service account email oxyserps-storage@oxyserps-storage.iam.gserviceaccount.com.
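As a sketch of that setup using the gcloud CLI, with placeholder project, bucket, and role names:

```shell
# Create a custom role that only allows object creation (names are placeholders).
gcloud iam roles create oxylabsStorageWriter \
    --project=YOUR_PROJECT_ID \
    --title="Oxylabs Storage Writer" \
    --permissions=storage.objects.create

# Grant that role on your bucket to the Oxylabs service account.
gcloud storage buckets add-iam-policy-binding gs://YOUR_BUCKET_NAME \
    --member=serviceAccount:oxyserps-storage@oxyserps-storage.iam.gserviceaccount.com \
    --role=projects/YOUR_PROJECT_ID/roles/oxylabsStorageWriter
```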
To enable uploading to your cloud storage, add these parameters to your job submission:

Parameter | Description | Valid values |
---|---|---|
`storage_type` | Your cloud storage type. | `s3` (AWS S3); `gcs` (Google Cloud Storage). |
`storage_url` | Your cloud storage bucket name. | Any `s3` or `gcs` bucket name. |
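To tie it together, here is a sketch of a Push-Pull job submission that sets both parameters, assuming the https://data.oxylabs.io/v1/queries endpoint; the source, query, and credentials are placeholders:

```python
# Minimal sketch: submit a job whose results go straight to your S3 bucket.
# Endpoint, source, query, and credentials are placeholders.
import requests

payload = {
    "source": "google_search",         # placeholder source
    "query": "adidas",                 # placeholder query
    "storage_type": "s3",              # or "gcs" for Google Cloud Storage
    "storage_url": "YOUR_BUCKET_NAME",
}

response = requests.post(
    "https://data.oxylabs.io/v1/queries",
    auth=("YOUR_USERNAME", "YOUR_PASSWORD"),
    json=payload,
)
response.raise_for_status()
job = response.json()

# Once the job finishes, the result should appear at YOUR_BUCKET_NAME/<job ID>.json.
print(f"Expected upload path: YOUR_BUCKET_NAME/{job['id']}.json")
```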