The Per-GB Price Is Not Your Problem
Most cloud storage cost discussions start and end with the per-GB price. AWS S3 Standard at $0.023/GB. Azure Blob Hot at $0.018/GB. GCP Cloud Storage Standard at $0.020/GB. Pick the cheapest one, call it optimized.
Here is what that conversation misses: for most organizations, the per-GB storage rate is not the dominant cost in their storage bill. The dominant costs are the billing mechanics that run underneath the per-GB rate, mechanics that almost no documentation explains clearly.
You are paying for incomplete multipart uploads that have been accumulating in your S3 buckets for years. You are paying minimum duration penalties on objects you deleted months ago. You are paying transition fees on lifecycle policies that cost more per object than the storage savings they generate. You are paying for S3 versions of files that nobody knows still exist.
None of these show up labeled as "waste" in your billing console. They all look like normal storage line items. And they are quietly, persistently more expensive than the optimizations you are already focused on.
This guide covers 12 of these hidden billing mechanics, with exact numbers so you can calculate what each one is costing you before you spend a single hour fixing it.
S3 Storage Class Traps
1. Minimum Duration Penalties Will Charge You for Objects You Already Deleted
Every reduced-cost S3 storage class comes with a minimum storage duration. Delete an object before the minimum is up, and you still pay for the full duration.
| Storage Class | Minimum Duration | Price per GB/Month | Penalty per Early Deletion |
|---|---|---|---|
| S3 Standard | None | $0.023 | None |
| S3 Standard-IA | 30 days | $0.0125 | Remaining days charged |
| S3 One Zone-IA | 30 days | $0.01 | Remaining days charged |
| S3 Glacier Instant Retrieval | 90 days | $0.004 | Remaining days charged |
| S3 Glacier Flexible Retrieval | 90 days | $0.0036 | Remaining days charged |
| S3 Glacier Deep Archive | 180 days | $0.00099 | Remaining days charged |
The trap: teams set aggressive lifecycle policies to transition objects to cheaper tiers as early as possible. Suppose an object transitions to Standard-IA at day 30 (the earliest a lifecycle rule allows for that class) and gets deleted 8 days later. The minimum duration clock restarts when the object enters the new class, so AWS charges for the remaining 22 days.
For small objects transitioned frequently, this penalty compounds. A 1MB (0.001GB) object in Standard-IA at $0.0125/GB/month, deleted 22 days early, incurs (22/30) x $0.0125 x 0.001 = roughly $0.0000092 per object. That looks trivial. But multiply by 50 million such objects, and the penalty reaches roughly $460/month from objects that no longer exist.
The fix: set transition rules to match actual access patterns rather than "move to IA as fast as possible." If your access logs show objects are not accessed after 45 days, set a 45-day transition rule. The 15 extra days of Standard storage costs less than early deletion penalties on objects that do get deleted in the 30 to 44-day window.
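The penalty arithmetic above can be sketched as a short calculation. This is a minimal sketch, not an AWS API: the function and constant names are illustrative, and the rates are the published prices from the table.

```python
# Published S3 minimum durations (days) and per-GB/month rates from the table above.
MIN_DAYS = {"STANDARD_IA": 30, "ONEZONE_IA": 30, "GLACIER_IR": 90,
            "GLACIER_FLEXIBLE": 90, "DEEP_ARCHIVE": 180}
PRICE_PER_GB_MONTH = {"STANDARD_IA": 0.0125, "ONEZONE_IA": 0.01,
                      "GLACIER_IR": 0.004, "GLACIER_FLEXIBLE": 0.0036,
                      "DEEP_ARCHIVE": 0.00099}

def early_deletion_penalty(storage_class: str, object_gb: float,
                           days_stored: int) -> float:
    """Dollars charged for the unused remainder of the minimum duration."""
    remaining = max(0, MIN_DAYS[storage_class] - days_stored)
    return object_gb * PRICE_PER_GB_MONTH[storage_class] * remaining / 30

# The worked example: a 1MB object deleted 8 days after entering Standard-IA.
per_object = early_deletion_penalty("STANDARD_IA", 0.001, 8)
fleet = per_object * 50_000_000  # 50 million such objects (assumed fleet size)
print(f"${per_object:.7f} per object, ${fleet:,.0f} across the fleet")
```

Running the same function against your own object sizes and deletion ages shows quickly whether aggressive transitions are costing more than they save.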
2. Lifecycle Transition Fees Often Cost More Than the Savings on Small Objects
Transitioning an object from one storage class to another is not free. AWS charges $0.01 per 1,000 objects for each lifecycle transition. That fee applies to every object, regardless of size.
For large objects, this is trivial. Transitioning a 10GB file costs $0.00001. The storage savings are significant.
For small objects, the math inverts. Transitioning a 10KB file from Standard ($0.023/GB/month) to Standard-IA ($0.0125/GB/month) saves about $0.0000001/month. The transition fee is $0.00001, roughly 95 months of storage savings. The object has to sit in Standard-IA for nearly a decade before the transition pays off, and that assumes no other penalty applies.
For buckets containing millions of objects smaller than 128KB, lifecycle policies can actively increase your costs through transition fees, minimum duration penalties, and Standard-IA retrieval fees that negate the storage savings.
The fix: calculate the break-even before implementing lifecycle policies on small-object buckets.
Break-even months = ($0.01 / 1000 objects) / (storage_per_object_GB x (standard_price - IA_price))
For 10KB objects: ($0.00001) / (0.000010 x $0.0105) = 95 months. You need an object to sit in Standard-IA for 8 years before the lifecycle transition pays for itself. For these buckets, skip Standard-IA and either keep objects in Standard or use Glacier directly for archival.
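The break-even formula above translates directly into a small helper. The names are illustrative; the rates are the S3 list prices cited in this section.

```python
# $0.01 per 1,000 objects for each lifecycle transition.
TRANSITION_FEE_PER_OBJECT = 0.01 / 1000

def breakeven_months(object_gb: float, source_price: float,
                     target_price: float) -> float:
    """Months an object must remain in the cheaper tier to repay the fee."""
    monthly_saving = object_gb * (source_price - target_price)
    return TRANSITION_FEE_PER_OBJECT / monthly_saving

# 10KB object (~0.00001GB), Standard -> Standard-IA: ~95 months.
print(round(breakeven_months(0.00001, 0.023, 0.0125)))
# A 10GB object repays the fee in far less than a month.
print(breakeven_months(10, 0.023, 0.0125) < 1)
```

Run this against your bucket's average object size before adding any transition rule; if the answer is more than a few months, the rule is probably costing you money.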
3. S3 Versioning Silently Doubles Your Storage Over Time
S3 versioning keeps previous versions of every object. Every PUT request to a versioned bucket creates a new version. Every overwrite creates a previous version that persists until explicitly deleted.
This is genuinely useful for applications that need version history and for protecting against accidental deletions. It is quietly catastrophic for storage costs when not managed.
Consider an application that updates 1,000 configuration or data files per day, each averaging 5MB. With versioning enabled and no version expiration policy:
- After 30 days: 30,000 versions x 5MB = 150GB of previous versions
- After 90 days: 90,000 versions = 450GB of previous versions
- Storage cost at day 90 from versions alone: 450GB x $0.023 = $10.35/month and growing daily
The current objects might represent 5GB (1,000 files at 5MB). The accumulated versions represent 90 times as much storage.
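The growth projection above is linear and easy to model. A minimal sketch, using the update rate from the example (function names are illustrative; the MB-to-GB conversion uses decimal units as an approximation):

```python
def version_storage_gb(updates_per_day: int, object_mb: float, days: int) -> float:
    """Previous-version storage accumulated with no expiration policy."""
    return updates_per_day * object_mb * days / 1000  # decimal MB -> GB

def version_cost_per_month(gb: float, price_per_gb: float = 0.023) -> float:
    """Monthly S3 Standard cost of the accumulated versions."""
    return gb * price_per_gb

gb_90 = version_storage_gb(1000, 5, 90)   # the 450GB worked example
print(f"{gb_90} GB, ${round(version_cost_per_month(gb_90), 2)}/month")
```

Because the curve never flattens without an expiration rule, the cost at day 365 is simply four times the day-90 number.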
AWS does not delete old versions automatically. You must add an expiration rule specifically for non-current versions:
{
  "Rules": [
    {
      "ID": "expire-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 30
      }
    }
  ]
}
This keeps 30 days of version history (useful for rollback) and automatically removes older versions. Adding this rule to a high-update bucket typically reduces storage by 50 to 90%.
4. Delete Markers Accumulate and Cost More Than Zero
When you delete an object from a versioned bucket, S3 creates a delete marker. This marker has no content, but it exists as an object and counts toward certain billing dimensions. More importantly, delete markers accumulate indefinitely just like versions.
In an application with heavy create-and-delete patterns (temporary files, session data, ephemeral cache objects), the delete marker count grows without bound. A bucket processing 1 million deletes per day accumulates 30 million delete markers per month. Each marker is negligibly small in storage, but the GET and LIST operations that have to process them are not.
If you run lifecycle policies or analytics on buckets with large numbers of delete markers, the operation time and cost grow with marker count. Buckets with billions of delete markers can take hours to process in lifecycle runs.
The fix: set ExpiredObjectDeleteMarker to true in the Expiration element of the same lifecycle policy that handles non-current version expiration. Once all of a key's versions have expired, its delete marker is removed automatically.
Incomplete Operations That Nobody Cleans Up
5. Incomplete Multipart Uploads Have Been Accumulating for Years
S3 multipart upload splits large uploads into parts that are uploaded independently and assembled at the end. When uploads fail or applications crash, the parts are uploaded but never assembled. The incomplete upload stays in S3 indefinitely, and you pay for every part that was uploaded.
AWS does not show incomplete multipart uploads in the standard S3 console view. They do not appear when you list bucket objects. They are essentially invisible unless you specifically query for them.
In environments that upload large files regularly (video processing, data exports, ML datasets), incomplete multipart uploads can represent gigabytes to terabytes of storage that has been accumulating for years.
Find them now:
aws s3api list-multipart-uploads --bucket YOUR_BUCKET_NAME
To prevent future accumulation, add an AbortIncompleteMultipartUpload lifecycle rule:
{
  "Rules": [
    {
      "ID": "abort-incomplete-multipart",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 7
      }
    }
  ]
}
This automatically deletes any multipart upload that has not completed within 7 days. Teams that run this query for the first time frequently find hundreds of gigabytes of invisible storage they have been paying for without knowing it existed.
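To size the problem before cleaning it up, you can total the parts held by incomplete uploads. A sketch under stated assumptions: it presumes you have already fetched list-multipart-uploads and then list-parts for each upload (for example via boto3) and parsed the responses into dicts mirroring the S3 ListParts response shape; the function name is illustrative.

```python
def incomplete_upload_gb(list_parts_responses: list[dict]) -> float:
    """Sum the 'Size' of every uploaded part across all incomplete uploads."""
    total_bytes = sum(part["Size"]
                      for resp in list_parts_responses
                      for part in resp.get("Parts", []))
    return total_bytes / 1024**3

# Illustrative data: two stalled uploads, three 100MiB parts each.
responses = [{"Parts": [{"PartNumber": i, "Size": 100 * 1024**2}
                        for i in range(1, 4)]}] * 2
print(f"{incomplete_upload_gb(responses):.2f} GB you are paying for")
```

Every gigabyte this reports is billed at the bucket's storage-class rate despite being invisible in object listings.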
6. Replication Rules That Nobody Turned Off
S3 Cross-Region Replication (CRR) replicates objects from a source bucket to a destination bucket in another region. This is genuinely useful for disaster recovery and compliance.
What teams miss: CRR costs money in two ways beyond the doubled storage. The data transfer from source to destination is charged at inter-region data transfer rates ($0.02/GB for US regions). And replication is ongoing, meaning every future write is replicated indefinitely.
For a bucket receiving 100GB of new writes per day with CRR to another US region:
- Daily replication transfer cost: 100GB x $0.02 = $2/day
- Monthly replication transfer: $60/month in data transfer alone
- Additional destination storage: 100GB/day x 30 days x $0.023 = $69/month
- Total ongoing monthly cost of CRR: $129/month, plus the initial replication of existing data
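The arithmetic above can be sketched as a single function. The default rates are the ones cited in this section; the function name is illustrative.

```python
def crr_monthly_cost(gb_per_day: float,
                     transfer_per_gb: float = 0.02,    # inter-region transfer
                     dest_storage_per_gb: float = 0.023,  # destination S3 Standard
                     days: int = 30) -> float:
    """Ongoing monthly cost of replicating new writes to another region."""
    transfer = gb_per_day * days * transfer_per_gb
    storage = gb_per_day * days * dest_storage_per_gb
    return transfer + storage

# The 100GB/day worked example: $60 transfer + $69 storage.
print(round(crr_monthly_cost(100), 2))
```

Note that the destination storage term keeps compounding month over month as the replica bucket grows, so the real bill only goes up from here.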
Replication rules set up for a temporary compliance audit or disaster recovery test often get left running permanently. Audit every active replication rule in every bucket and confirm whether it is still justified. For most non-production buckets, it is not.
Block Storage vs Object Storage Cost Confusion
7. The EBS vs S3 vs EFS Price Gap Is Larger Than Most Teams Realize
These three AWS storage types serve different purposes but often get used interchangeably in ways that create significant cost waste.
| Storage Type | Price per GB/Month | Best For |
|---|---|---|
| EFS Standard | $0.30 | Shared file systems, concurrent access from multiple EC2 instances |
| EBS gp3 | $0.08 | OS volumes, database storage, single-instance attachment |
| S3 Standard | $0.023 | Objects, backups, static assets, infrequently updated files |
| S3 Standard-IA | $0.0125 | Infrequently accessed objects, backup copies |
| S3 Glacier | $0.004 | Long-term archival |
The ratio: EFS is 13 times more expensive than S3 per GB. EBS is 3.5 times more expensive than S3 per GB.
Common misplacement that creates unnecessary cost:
- Application static assets on EBS: Web applications mounting EBS volumes for images, PDFs, and configuration files. These belong in S3 where they can be served directly without a compute layer. Moving 1TB of static assets from EBS to S3 saves $57/month permanently.
- Log files accumulating on EBS: Application logs written to disk on EC2 instances. Logs grow continuously, EBS volumes fill up, teams provision larger volumes. Shipping logs to S3 via Fluent Bit and relying on CloudWatch Logs for recent queries typically cuts log storage costs by 70%.
- Database exports on EFS: Database dumps stored on EFS shared volumes for multi-instance access. EFS at $0.30/GB/month for database exports that should be in S3 Glacier at $0.004/GB/month is a 75x price difference.
8. RDS Automated Backup Retention Goes From Free to Very Expensive Quickly
RDS includes backup storage equal to 100% of your provisioned database storage at no additional charge. Anything beyond that allowance is billed at $0.095/GB/month.
Here is what surprises teams: the free allowance covers roughly one full backup. But automated backups are incremental, and every day of retention adds another increment. After 30 days, your backup history can represent 2x to 5x your actual database size depending on write activity.
For a 500GB RDS database with 30-day retention and 10GB of daily changes:
- Daily incremental backup size (rough): 10GB/day
- Total backup storage after 30 days: approximately 800GB
- Free tier: 500GB (database size)
- Chargeable backup storage: 300GB x $0.095 = $28.50/month
Reduce to 14-day retention and the chargeable amount drops to roughly 140GB, about $13/month. For most databases, 14-day automated backup retention combined with monthly manual snapshots stored in S3 provides equivalent recovery capability at significantly lower cost.
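The backup math above can be sketched with a simplified model: one full backup plus one fixed-size increment per retained day. This is an approximation for illustration (real incremental sizes vary with write patterns), and the names are illustrative.

```python
def rds_backup_charge(db_gb: float, daily_change_gb: float,
                      retention_days: int, price: float = 0.095) -> float:
    """Monthly charge for backup storage beyond the free allowance."""
    total_backup_gb = db_gb + daily_change_gb * retention_days  # full + increments
    chargeable_gb = max(0.0, total_backup_gb - db_gb)           # free tier = db size
    return chargeable_gb * price

# The 500GB database with 10GB/day of changes:
print(round(rds_backup_charge(500, 10, 30), 2))  # 30-day retention
print(round(rds_backup_charge(500, 10, 14), 2))  # 14-day retention
```

Plugging in your own daily change volume (visible in RDS backup storage metrics) shows how much a shorter retention window actually saves.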
Request Cost Traps at Scale
9. S3 Request Costs Become Significant at High Volume
S3 charges per request, and at scale these fees are not negligible. Current rates for S3 Standard:
- PUT, COPY, POST, LIST: $0.005 per 1,000 requests
- GET, SELECT: $0.0004 per 1,000 requests
For a high-traffic application serving 100 million GET requests per month from S3:
- GET cost: 100,000,000 requests / 1,000 x $0.0004 = $40/month just in GET fees
- If each request also writes a log entry (PUT): another $500/month in PUT fees
For APIs that use S3 as a data store rather than a CDN origin (fetching objects on every API call), request costs can rival storage costs. The fix is caching: CloudFront in front of S3 serves objects from edge cache for repeated requests, and CloudFront's data transfer pricing is often lower than repeated S3 GET fees.
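The request-fee arithmetic above can be sketched directly from the published rates. Names are illustrative.

```python
GET_PER_1000 = 0.0004  # S3 Standard GET/SELECT rate
PUT_PER_1000 = 0.005   # S3 Standard PUT/COPY/POST/LIST rate

def s3_request_cost(get_requests: int, put_requests: int = 0) -> float:
    """Monthly request fees for a given GET and PUT volume."""
    return (get_requests / 1000 * GET_PER_1000
            + put_requests / 1000 * PUT_PER_1000)

# 100M GETs alone, then with one PUT log entry per request:
print(f"${s3_request_cost(100_000_000):.2f}")
print(f"${s3_request_cost(100_000_000, 100_000_000):.2f}")
```

Notice that the PUT side dominates: at these rates, logging every request back to S3 costs 12.5x the reads themselves.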
10. CloudWatch Logs Insights Queries Have Per-GB Scan Costs That Add Up Fast
CloudWatch Logs Insights charges $0.005 per GB of log data scanned per query. This is not the ingestion or storage cost. This is a separate charge every time you run a query.
For an environment with 6 months of logs across all services totaling 2TB:
- One broad query scanning all logs: 2,048GB x $0.005 = $10.24 per query
- Ten engineers running broad queries during an incident investigation: $102 per incident
For teams doing frequent log analysis, query costs can exceed storage costs. The fix: use time range filters aggressively to scan only the log data relevant to your investigation, use specific log group names rather than querying all groups, and consider moving historical logs to S3 and querying with Athena. Athena's $5/TB scan rate is roughly the same per raw byte as CloudWatch's $0.005/GB, but Athena scans compressed, partitioned, columnar data, so the same question typically scans a fraction of the bytes and costs a fraction of the price.
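A quick per-query comparison at list prices: CloudWatch Logs Insights at $0.005 per GB scanned versus Athena at $5 per TB scanned. The 10x compression ratio below is an assumption for illustration (gzip on text logs commonly achieves this or better); function names are illustrative.

```python
def insights_query_cost(raw_gb: float) -> float:
    """CloudWatch Logs Insights: $0.005 per GB of raw log data scanned."""
    return raw_gb * 0.005

def athena_query_cost(raw_gb: float, compression: float = 10.0) -> float:
    """Athena: $5 per TB actually scanned; S3 data is compressed first."""
    return (raw_gb / compression) / 1024 * 5

# The 2TB broad query from the example:
print(f"${insights_query_cost(2048):.2f}")  # Insights, raw logs
print(f"${athena_query_cost(2048):.2f}")    # Athena, compressed logs
```

Partitioning by date narrows the scan further, so real Athena queries over a specific time window often cost pennies.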
Third-Party Storage Traps
11. Wasabi and Backblaze B2 Have Minimum Storage Requirements That Change the Math
Wasabi markets itself as zero-egress cloud storage at $0.0059/GB/month. For large, stable datasets this is genuinely excellent pricing.
What the marketing does not lead with: Wasabi charges a 90-day minimum storage fee per object. If you store a 1GB file and delete it after 45 days, Wasabi charges you for the full 90 days. If your dataset has high object churn (frequent creates and deletes), this minimum completely changes the effective price.
Wasabi also has a minimum monthly charge of $6.99, so for very small datasets the effective per-GB rate is much higher than $0.0059.
Backblaze B2 is priced comparably at $0.006/GB/month, with no minimum storage duration per object and $0.01/GB egress. For datasets with moderate object churn, B2 is often more cost-effective than Wasabi.
Before migrating to either provider, calculate your average object lifetime. If most objects live longer than 90 days, Wasabi is excellent. If objects frequently get deleted before 90 days, run the minimum duration math first.
12. Cloudflare R2's Zero-Egress Pricing Has Different Request Costs Than S3
Cloudflare R2 offers zero egress fees and uses the S3 API natively, which makes migration straightforward. But the request pricing is different from S3 in ways that matter at scale.
R2 request pricing:
- Class A operations (PUT, POST, LIST): $4.50 per million requests
- Class B operations (GET): $0.36 per million requests
S3 request pricing comparison:
- PUT/LIST: $5 per million requests
- GET: $0.40 per million requests
R2 is slightly cheaper per request than S3, and combined with zero egress, it is significantly cheaper for high-read, externally served workloads. For workloads that primarily write (backup, archival, log storage) with rare retrieval, egress is not the main cost anyway, and Backblaze B2 at $0.006/GB/month storage beats both R2 ($0.015/GB) and S3 ($0.023/GB) on storage rate.
The decision framework: if your workload serves objects to external users frequently (content delivery, API-served files, media streaming), R2 wins through egress savings. If your workload primarily stores and rarely retrieves (backup, archival, cold analytics), Backblaze B2 wins on storage rate.
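The decision framework above can be sketched as a total-monthly-cost comparison. The rates are the list prices cited in this section (S3 egress approximated at $0.09/GB, consistent with the FAQ example below); request fees are omitted for brevity, and the names are illustrative.

```python
RATES = {  # (storage $/GB-month, egress $/GB)
    "S3": (0.023, 0.09),
    "R2": (0.015, 0.0),
    "B2": (0.006, 0.01),
}

def monthly_cost(provider: str, storage_gb: float, egress_gb: float) -> float:
    """Storage plus egress for one month at the listed rates."""
    storage_rate, egress_rate = RATES[provider]
    return storage_gb * storage_rate + egress_gb * egress_rate

# High-egress workload: 10TB stored, 50TB/month served externally.
for p in RATES:
    print(p, round(monthly_cost(p, 10_240, 51_200)))
```

Flip the inputs to a backup profile (large storage, near-zero egress) and B2's storage rate wins instead, which is exactly the split the framework describes.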
The Storage Cost Optimization Checklist
Run through this for every storage environment you operate:
Immediate Fixes (Under 2 Hours Each)
- List all S3 buckets and check for AbortIncompleteMultipartUpload lifecycle rules (add where missing)
- Find versioned buckets without NoncurrentVersionExpiration rules (add 30 to 90-day expiration)
- Identify CloudWatch Log Groups without retention policies (set 7 to 30-day retention)
- Audit active S3 replication rules (confirm each is still justified)
- Check for S3 Transfer Acceleration on buckets where it is not needed (disable)
Optimization (2 to 8 Hours Each)
- Calculate lifecycle transition break-even for small-object buckets before adding transition policies
- Identify static assets stored on EBS that could move to S3
- Review RDS automated backup retention settings (reduce from 30 to 14 days for databases with low daily change volume)
- Move historical CloudWatch Logs to S3 and query with Athena for cost analysis
- Audit delete marker accumulation in high-churn versioned buckets
Strategic (Planning Required)
- Evaluate Cloudflare R2 or Backblaze B2 for high-egress or backup workloads
- Calculate minimum duration penalty risk before migrating to Wasabi for variable-lifetime objects
- Review EFS usage and identify shares that could be replaced with S3 or EBS
- Build per-bucket cost attribution in Cost Explorer to identify your most expensive storage buckets
Frequently Asked Questions
Why do my S3 costs keep growing even when I am not storing more data?
The most common causes: S3 versioning accumulating previous versions without an expiration policy, incomplete multipart uploads building up silently, S3 Intelligent-Tiering monitoring fees growing as object count grows, and CRR replication continuing to duplicate new writes to another region. Run aws s3api list-multipart-uploads and check your versioning lifecycle rules first. These two issues alone explain cost growth in the majority of environments we audit.
What is the minimum duration charge in S3 and how does it affect lifecycle policies?
S3 Standard-IA, Glacier Instant Retrieval, Glacier Flexible, and Glacier Deep Archive all have minimum storage durations (30, 90, 90, and 180 days respectively). If you delete or transition an object out of one of these storage classes before the minimum, you are still charged for the full minimum period. Aggressive lifecycle policies that transition objects quickly through multiple tiers can incur multiple minimum duration charges plus transition fees that exceed the storage savings. Always calculate break-even before implementing transitions on high-churn buckets.
Should I enable S3 Intelligent-Tiering on all my buckets?
No. Intelligent-Tiering charges a monitoring fee of $0.0025 per 1,000 objects per month. For buckets with small average object sizes (under 128KB), the monitoring fee typically exceeds the storage savings from automatic tiering. Calculate your monitoring fee (object count / 1000 x $0.0025) and compare it to your estimated storage savings before enabling. For large-object buckets with unpredictable access patterns, Intelligent-Tiering is genuinely cost-effective. For small-object buckets, use explicit lifecycle policies instead.
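The monitoring-fee check described above can be sketched as follows. The assumed 50% cold fraction and the Standard-minus-IA savings rate are illustrative assumptions, not AWS figures; names are illustrative.

```python
def monitoring_fee(object_count: int) -> float:
    """Intelligent-Tiering monitoring: $0.0025 per 1,000 objects per month."""
    return object_count / 1000 * 0.0025

def tiering_savings(total_gb: float,
                    cold_fraction: float = 0.5,
                    saving_per_gb: float = 0.0105) -> float:
    """Assume half the data drifts to the IA tier (Standard minus IA rate)."""
    return total_gb * cold_fraction * saving_per_gb

# 10 million 50KB objects (~500GB): the fee dwarfs the savings.
print(f"fee ${monitoring_fee(10_000_000):.2f}, savings ${tiering_savings(500):.2f}")
```

The same check run on 500GB spread across 10,000 large objects gives a $0.03 fee against the same savings, which is why Intelligent-Tiering only pays off on large-object buckets.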
Is Cloudflare R2 actually cheaper than S3?
For workloads with significant egress to external users, yes. R2's zero egress fee is a real advantage. For a workload storing 10TB and serving 50TB/month to internet users, S3 egress costs $4,500/month while R2 costs zero. The storage rate difference ($0.023 vs $0.015) is negligible by comparison. For workloads with minimal egress (internal access only, backup storage, archival), the egress advantage disappears and S3's deeper ecosystem integration may be worth the small storage premium. Our full cloud storage pricing comparison covers this analysis in detail.
What is the right RDS backup retention period?
For most production databases, 14-day automated backup retention plus a monthly manual snapshot provides adequate recovery capability. The 14-day automated window covers operational recovery (accidental deletes, failed deployments, data corruption). The monthly manual snapshot covers compliance requirements and historical recovery scenarios. Reducing from 30 to 14-day retention on a heavily written database can eliminate most chargeable backup storage, since the incremental backups beyond the free storage tier are typically generated in the last 2 weeks of a 30-day window.
When does it make sense to move CloudWatch Logs to S3?
When log data is older than 30 days and you access it infrequently. CloudWatch Logs storage costs $0.03/GB/month. S3 Standard-IA costs $0.0125/GB/month. For logs older than 90 days that you rarely access, S3 Glacier at $0.004/GB/month is 7.5x cheaper than CloudWatch. On the query side, CloudWatch Logs Insights charges $0.005/GB of raw log data scanned, while Athena at $5/TB runs against compressed, partitioned data in S3, which usually makes the same historical query several times cheaper. For historical log analysis, moving logs to S3 and using Athena is almost always the right call.
What the Per-GB Price Was Always Hiding
Cloud storage feels like a commodity purchase. Compare the per-GB prices, pick the lowest, done.
The bill you get at the end of the month reflects something more complicated: the version history nobody configured to expire, the incomplete uploads nobody cleaned up, the lifecycle transitions that charged more per object than they saved, the replication rules from last year's disaster recovery project that nobody turned off.
None of these require major infrastructure changes to fix. Most of them require a lifecycle policy and an afternoon of configuration work. The gp2 to gp3 story from compute is the same here: the cheaper, better option exists, and the only reason most teams have not done it is that they did not know to look.
Start with the immediate fixes checklist above. Find your incomplete multipart uploads. Add a NoncurrentVersionExpiration rule to your versioned buckets. Set retention policies on your CloudWatch Log Groups. Those three changes, done in an afternoon, typically reduce storage costs by 15 to 30% for environments with active development.
For a complete audit of your storage spend alongside your compute and networking costs, take our free Cloud Waste and Risk Scorecard. For expert help implementing storage optimization as part of a broader FinOps practice, our Cloud Cost Optimization and FinOps team can identify and fix these issues across your entire environment.
Related reading:
- Cloud Storage Pricing Comparison 2026: S3 vs Azure Blob vs GCP vs R2
- Cloud Backup and Storage Pricing in 2026: What Nobody Tells You Before the Retrieval Bill Arrives
- The AWS Cost Optimization Playbook: 14 Service-Specific Savings Most Teams Never Find
- Stop Paying for Ghost Servers: 12 Strategies to Eliminate Cloud Waste
- 7 Proven Ways Automated Cloud Cost Optimization Transforms Modern Infrastructure
- Stop Burning Cloud Dollars: 7 Proven Steps to Detect Waste and Modernize Infrastructure