S3-compatible API
Use Runpod’s S3-compatible API to access and manage your network volumes.
The S3-compatible API is currently in beta. If you’d like to provide feedback, please join our Discord.
Runpod provides an S3-protocol compatible API for direct access to your network volumes. This allows you to manage files on your network volumes without launching a Pod, reducing cost and operational friction.
Using the S3-compatible API does not affect pricing. Network volumes are billed hourly at $0.07/GB/month for the first 1TB, and $0.05/GB/month for additional storage.
Datacenter availability
The S3-compatible API is available for network volumes in select datacenters. Each datacenter has a unique endpoint URL that you’ll use when calling the API:
Datacenter | Endpoint URL |
---|---|
EUR-IS-1 | https://s3api-eur-is-1.runpod.io/ |
EU-RO-1 | https://s3api-eu-ro-1.runpod.io/ |
Create your network volume in a supported datacenter to use the S3-compatible API.
Setup and authentication
Create a network volume
First, create a network volume in a supported datacenter. See Network volumes -> Create a network volume for detailed instructions.
Create an S3 API key
Next, you’ll need to generate a new key called an “S3 API key” (this is separate from your Runpod API key).
- Go to the Settings page in the Runpod console.
- Expand S3 API Keys and select Create an S3 API key.
- Name your key and select Create.
- Save the access key (e.g., `user_***...`) and secret (e.g., `rps_***...`) to use in the next step.
For security, Runpod will show your API key secret only once, so you may wish to save it elsewhere (e.g., in your password manager, or in a GitHub secret). Treat your API key secret like a password and don’t share it with anyone.
Configure AWS CLI
To use the S3-compatible API with your Runpod network volumes, you must configure your AWS CLI with the Runpod S3 API key you created.
- If you haven’t already, install the AWS CLI on your local machine.
- Run the command `aws configure` in your terminal.
- Provide the following when prompted:
  - AWS Access Key ID: Enter your Runpod user ID. You can find this in the Secrets section of the Runpod console, in the description of your S3 API key. By default, the description will look similar to `Shared Secret for user_2f21CfO73Mm2Uq2lEGFiEF24IPw 1749176107073`, where `user_2f21CfO73Mm2Uq2lEGFiEF24IPw` is the user ID (yours will be different).
  - AWS Secret Access Key: Enter your Runpod S3 API key’s secret access key.
  - Default Region name: You can leave this blank.
  - Default output format: You can leave this blank or set it to `json`.

This will configure the AWS CLI to use your Runpod S3 API key by storing these details in your AWS credentials file (typically at `~/.aws/credentials`).
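After running `aws configure`, your credentials file should contain an entry similar to the following sketch (the access key shown is the example user ID from above; the secret is a placeholder):

```ini
[default]
aws_access_key_id = user_2f21CfO73Mm2Uq2lEGFiEF24IPw
aws_secret_access_key = rps_***...
```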
Using the S3-compatible API
You can use the S3-compatible API to interact with your Runpod network volumes using standard S3 tools:
Core AWS CLI operations such as `ls`, `cp`, `mv`, `rm`, and `sync` function as expected.
s3 CLI examples
When using `aws s3` commands, you must pass in the endpoint URL for your network volume using the `--endpoint-url` flag.

Unlike traditional S3 key-value stores, object names in the Runpod S3-compatible API correspond to actual file paths on your network volume. Object names containing special characters (e.g., `#`) may need to be URL encoded to ensure proper processing.
List objects
Use `ls` to list objects in a network volume directory:
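For example (a sketch assuming a volume in the EU-RO-1 datacenter; replace `[NETWORK_VOLUME_ID]` with your network volume ID):

```shell
aws s3 ls s3://[NETWORK_VOLUME_ID]/ \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1
```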
`ls` operations may take a long time when used on a directory containing many files (over 10,000) or large amounts of data (over 10GB), or when used recursively on a network volume containing either.
Transfer files
Use `cp` to copy a file to a network volume:
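For example (a sketch assuming the EU-RO-1 endpoint; the file names are illustrative):

```shell
aws s3 cp my-file.txt s3://[NETWORK_VOLUME_ID]/my-file.txt \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1
```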
Use `cp` to copy a file from a network volume to a local directory:
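For example (same assumptions as above; the local path is illustrative):

```shell
aws s3 cp s3://[NETWORK_VOLUME_ID]/my-file.txt ~/local-dir/my-file.txt \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1
```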
Use `rm` to remove a file from a network volume:
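For example (a sketch with an illustrative object name):

```shell
aws s3 rm s3://[NETWORK_VOLUME_ID]/my-file.txt \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1
```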
If you encounter a 502 “bad gateway” error during file transfer, try increasing `AWS_MAX_ATTEMPTS` to 10 or more:
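One way to do this is to set the variable inline for a single command (a sketch; file names are illustrative):

```shell
AWS_MAX_ATTEMPTS=10 aws s3 cp my-file.txt s3://[NETWORK_VOLUME_ID]/my-file.txt \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1
```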
Sync directories
This command syncs a local directory (source) to a network volume directory (destination):
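A sketch of the command, assuming the EU-RO-1 endpoint and illustrative directory names:

```shell
aws s3 sync ~/local-dir s3://[NETWORK_VOLUME_ID]/remote-dir \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1
```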
s3api CLI example
You can also use `aws s3api` commands (instead of `aws s3`) to interact with the S3-compatible API.
For example, here’s how you could use `aws s3api get-object` to download an object from a network volume:
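A sketch of the command, assuming the EU-RO-1 endpoint and an illustrative object key:

```shell
aws s3api get-object \
  --bucket [NETWORK_VOLUME_ID] \
  --key my-file.txt \
  --endpoint-url https://s3api-eu-ro-1.runpod.io/ \
  --region EU-RO-1 \
  [LOCAL_FILE]
```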
Replace `[LOCAL_FILE]` with the desired path and name of the downloaded file (for example, `~/local-dir/my-file.txt`).
For a list of available `s3api` commands, see the AWS s3api reference.
Boto3 Python example
You can also use the Boto3 library to interact with the S3-compatible API, using it to transfer files to and from a Runpod network volume.
The script below demonstrates how to upload a file to a Runpod network volume using the Boto3 library. It takes command-line arguments for the network volume ID (as an S3 bucket), the datacenter-specific S3 endpoint URL, the local file path, the desired object (file path on the network volume), and the AWS Region (which corresponds to the Runpod datacenter ID).
Your Runpod S3 API key credentials must be set as environment variables using the values from the Setup and authentication step:
- `AWS_ACCESS_KEY_ID`: Should be set to your Runpod S3 API key access key (e.g., `user_***...`).
- `AWS_SECRET_ACCESS_KEY`: Should be set to your Runpod S3 API key’s secret (e.g., `rps_***...`).
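A minimal sketch of such an upload script, assuming `boto3` is installed (`pip install boto3`); the argument names and order here are illustrative:

```python
# upload_file.py — upload a local file to a Runpod network volume
# via the S3-compatible API. Argument names are illustrative.
import argparse
import os

import boto3


def main():
    parser = argparse.ArgumentParser(
        description="Upload a file to a Runpod network volume."
    )
    parser.add_argument("network_volume_id", help="Network volume ID (used as the S3 bucket name)")
    parser.add_argument("endpoint_url", help="Datacenter endpoint, e.g. https://s3api-eu-ro-1.runpod.io/")
    parser.add_argument("local_file", help="Path to the local file to upload")
    parser.add_argument("object_key", help="Destination file path on the network volume")
    parser.add_argument("region", help="AWS Region (the Runpod datacenter ID, e.g. EU-RO-1)")
    args = parser.parse_args()

    # Credentials are read from the environment variables described above.
    s3 = boto3.client(
        "s3",
        endpoint_url=args.endpoint_url,
        region_name=args.region,
        aws_access_key_id=os.environ["AWS_ACCESS_KEY_ID"],
        aws_secret_access_key=os.environ["AWS_SECRET_ACCESS_KEY"],
    )

    # upload_file performs a multipart upload automatically for large files.
    s3.upload_file(args.local_file, args.network_volume_id, args.object_key)
    print(f"Uploaded {args.local_file} to s3://{args.network_volume_id}/{args.object_key}")


if __name__ == "__main__":
    main()
```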
Example usage:
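Assuming the script is saved as `upload_file.py` (the name is illustrative), an invocation might look like:

```shell
python upload_file.py [NETWORK_VOLUME_ID] https://s3api-eu-ro-1.runpod.io/ \
  ./my-file.txt remote-dir/my-file.txt EU-RO-1
```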
Supported S3 actions
The S3-compatible API supports the following operations. For detailed information on each, refer to the AWS S3 API documentation.
Operation | Description |
---|---|
CopyObject | Copy objects between locations. |
DeleteObject | Remove objects. |
GetObject | Download objects. |
HeadBucket | Verify bucket exists and you have permissions. |
HeadObject | Retrieve object metadata. |
ListBuckets | List available buckets. |
ListObjects | List objects in a bucket. |
PutObject | Upload objects. |
CreateMultipartUpload | Start a multipart upload for large files. |
UploadPart | Upload a part of a multipart upload. |
CompleteMultipartUpload | Finish a multipart upload. |
AbortMultipartUpload | Cancel a multipart upload. |
ListMultipartUploads | View in-progress multipart uploads. |
Large file handling is supported through multipart uploads, allowing you to transfer files larger than 5GB.
`ListObjects` operations may take a long time when used on a directory containing many files (over 10,000) or large amounts of data (over 10GB), or when used recursively on a network volume containing either.
Limitations
Storage and time synchronization
- Storage capacity: Network volumes have a fixed storage capacity, unlike the virtually unlimited storage of standard S3 buckets. The `CopyObject` and `UploadPart` actions do not check for available free space beforehand and may fail if the volume runs out of space.
- Maximum file size: 4TB (the maximum size of a network volume).
- Object names: Unlike traditional S3 key-value stores, object names in the Runpod S3-compatible API correspond to actual file paths on your network volume. Object names containing special characters (e.g., `#`) may need to be URL encoded to ensure proper processing.
- Time synchronization: Requests that are out of time sync by more than 1 hour will be rejected. This is more lenient than the 15-minute window specified by the AWS SigV4 authentication specification.
Multipart uploads
- The S3-compatible API enforces a 500MB maximum on upload part size.
- The 5MB minimum part size for multipart uploads is not enforced.
- Parts from multipart uploads are stored on disk until either `CompleteMultipartUpload` or `AbortMultipartUpload` is called.
Unsupported S3 features
- Object versioning.
- Object encryption.
- Object tagging.
- Object ACLs.
- Object locking.
- Website redirects.
- Storage classes other than `STANDARD`.
Reference documentation
For comprehensive documentation on AWS S3 commands and libraries, refer to the AWS CLI Command Reference for `s3` and `s3api`, and the Boto3 S3 documentation.