Object Storage with S3 API
NetActuate provides S3-compatible object storage that you can provision via Terraform and access using standard S3 tools and SDKs. Storage is available in 13 locations worldwide.
What You Will Build
By the end of this guide you will have:
- An S3-compatible storage bucket with access credentials
- An object store with independent scaling
- Working AWS CLI configuration for uploading and listing objects
Prerequisites
- Terraform 1.0 or later
- The NetActuate Terraform provider configured with a valid API key
- AWS CLI or any S3-compatible client (optional, for verification)
Available Locations
NetActuate storage is available in the following 13 locations:
| Code | Location |
|---|---|
| RDU | Raleigh, NC |
| AMS | Amsterdam, NL |
| SJC | San Jose, CA |
| DFW | Dallas, TX |
| LHR | London, UK |
| FRA | Frankfurt, DE |
| SCL | Santiago, CL |
| HKG | Hong Kong |
| SIN | Singapore |
| DXB | Dubai, AE |
| SYD | Sydney, AU |
| LAX | Los Angeles, CA |
| MIA | Miami, FL |
Note: Storage locations differ from compute locations. Use the `netactuate_storage_locations` data source to discover which locations are available for your account.
Storage Types
NetActuate offers two S3-compatible storage resource types. Both use standard S3 protocols and work with any S3 client.
S3 Bucket
A standard S3-compatible bucket with access and secret keys. Each bucket includes a privacy toggle that controls public or private access.
Best for general object storage, static assets, and backups.
Object Store
An S3-compatible object store, similar to buckets but with independent scaling and no privacy toggle. Object stores are provisioned as standalone units.
Best for application data, media storage, and workloads that grow unpredictably.
Key Differences
Both types are fully S3-compatible and return access credentials on creation. The main differences:
- Buckets expose a
privateattribute for controlling public access - Object stores scale independently and are suited to application workloads
- Both support
enable_auto_scalingfor automatic capacity expansion
Step 1: Discover Locations
Use the `netactuate_storage_locations` data source to list available storage regions:

```hcl
data "netactuate_storage_locations" "all" {}

output "storage_locations" {
  value = data.netactuate_storage_locations.all.locations
}
```
Run `terraform apply` to see the full list of location codes and names.
Step 2: Create a Bucket
Define a `netactuate_storage_bucket` resource with a label, location, and capacity in gigabytes:

```hcl
resource "netactuate_storage_bucket" "assets" {
  label    = "static-assets"
  location = "SJC"
  capacity = 2
  private  = false
}
```
Setting `private` to `false` allows public read access. Set it to `true` for private-only access.
Step 3: Create an Object Store
Define a `netactuate_storage_object_store` resource:

```hcl
resource "netactuate_storage_object_store" "appdata" {
  label    = "app-media"
  location = "SJC"
  capacity = 10
}
```
The object store provisions independently from any bucket and receives its own set of credentials.
Step 4: Retrieve Credentials
Both resource types export connection details as sensitive outputs. Add outputs to your configuration:
```hcl
output "bucket_endpoint" {
  value     = netactuate_storage_bucket.assets.endpoint
  sensitive = true
}

output "bucket_access_key" {
  value     = netactuate_storage_bucket.assets.access_key
  sensitive = true
}

output "bucket_secret_key" {
  value     = netactuate_storage_bucket.assets.secret_key
  sensitive = true
}
```
Run `terraform output -json` to retrieve the credentials in plaintext.
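If you script against these credentials, you can parse the JSON that `terraform output -json` emits with the standard library. A minimal sketch; the sample payload below is made up to mirror the output names defined above, and real values come from your own state:

```python
import json

def parse_storage_outputs(raw: str) -> dict:
    """Flatten `terraform output -json` text into a name -> value dict."""
    outputs = json.loads(raw)
    return {name: entry["value"] for name, entry in outputs.items()}

# Hypothetical sample matching the outputs defined in this guide.
sample = """
{
  "bucket_endpoint":   {"sensitive": true, "type": "string", "value": "https://example.netactuate.com"},
  "bucket_access_key": {"sensitive": true, "type": "string", "value": "EXAMPLE_KEY"},
  "bucket_secret_key": {"sensitive": true, "type": "string", "value": "EXAMPLE_SECRET"}
}
"""

creds = parse_storage_outputs(sample)
```

In practice you would pipe the command's stdout into this function, e.g. `parse_storage_outputs(subprocess.run(["terraform", "output", "-json"], capture_output=True, text=True).stdout)`.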
Using with AWS CLI
Configure a named profile with the credentials from Terraform output:
```bash
aws configure --profile netactuate
# Enter the access_key when prompted for AWS Access Key ID
# Enter the secret_key when prompted for AWS Secret Access Key
# Leave region and output format as defaults
```
List bucket contents:
```bash
aws s3 ls --endpoint-url https://your-endpoint.netactuate.com --profile netactuate
```
Upload a file:
```bash
aws s3 cp myfile.txt s3://static-assets/ --endpoint-url https://your-endpoint.netactuate.com --profile netactuate
```
Replace https://your-endpoint.netactuate.com with the actual endpoint from your Terraform output.
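If you prefer to script the profile rather than answer interactive prompts, you can write the AWS shared credentials file directly; it is a plain INI file. A minimal standard-library sketch; the target path and key values here are illustrative (real use would write to `~/.aws/credentials` with the values from your Terraform output):

```python
import configparser
from pathlib import Path

def write_aws_profile(path: Path, profile: str, access_key: str, secret_key: str) -> None:
    """Add or update a profile section in an AWS-style shared credentials file."""
    config = configparser.ConfigParser()
    if path.exists():
        config.read(path)
    config[profile] = {
        "aws_access_key_id": access_key,
        "aws_secret_access_key": secret_key,
    }
    with path.open("w") as fh:
        config.write(fh)

# Illustrative: write to a scratch file instead of ~/.aws/credentials.
write_aws_profile(Path("credentials.ini"), "netactuate", "EXAMPLE_KEY", "EXAMPLE_SECRET")
```

Because the function reads any existing file first, re-running it updates the `netactuate` section without clobbering other profiles.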
Auto-Scaling
Enable automatic capacity expansion to avoid running out of space:
```hcl
resource "netactuate_storage_bucket" "assets" {
  label               = "static-assets"
  location            = "SJC"
  capacity            = 2
  private             = false
  enable_auto_scaling = true
}
```
When auto-scaling is enabled, NetActuate increases the storage capacity automatically as usage approaches the provisioned limit.
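As noted under Key Differences, object stores accept the same flag. A sketch, assuming `enable_auto_scaling` behaves identically on the object store resource:

```hcl
resource "netactuate_storage_object_store" "appdata" {
  label               = "app-media"
  location            = "SJC"
  capacity            = 10
  enable_auto_scaling = true
}
```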
Teardown
Remove all stored objects before destroying the Terraform resources:
```bash
aws s3 rm s3://static-assets/ --recursive --endpoint-url https://your-endpoint.netactuate.com --profile netactuate
terraform destroy
```
Terraform will deprovision the bucket and object store and release all associated credentials.
Additional Resources
- Example repository: netactuate-terraform-storage
- Storage reference
- Block Storage for Persistent Data
Need Help?
If you need assistance, visit our support page.