Pass the AWS Certified Solutions Architect Associate Certification SAA-C03! (Episode 11: S3 Advanced)
📌 Notice
Welcome to my AWS Certified Solutions Architect Associate (SAA-C03) exam preparation blog series! If you’re looking to pass this challenging yet rewarding certification, you’re in the right place.
Throughout this blog series, you’ll master core AWS architecture concepts — from IAM security fundamentals to advanced VPC networking, cost-optimized EC2 deployments, serverless patterns with Lambda, and multi-region disaster recovery strategies. We’ll break down all key services (S3, RDS, CloudFront, etc.) through real-world solution architectures and exam-focused scenarios. Each post will include hands-on walkthroughs, pro tips for the SAA-C03 exam, and best practices used by AWS professionals. Get ready to transform from AWS beginner to certified Solutions Architect!
Note: This blog will be updated with extra questions and CDK implementations in a timely manner.
🌟 Introduction
Amazon S3 offers powerful analytics and performance optimization tools to manage storage efficiently. S3 Analytics helps transition objects to Standard-IA, while Requester Pays shifts data access costs to requesters. Event Notifications trigger actions like Lambda functions for real-time processing, and Baseline Performance ensures high throughput with per-prefix scaling. S3 Batch Operations enable large-scale object management, and Storage Lens provides deep insights into cost, security, and usage across accounts and regions. These features make S3 a versatile solution for storage optimization, security, and automation.
Amazon S3 — Moving between Storage Classes
- You can transition objects between storage classes
- For infrequently accessed objects, move them to Standard-IA
- For archived objects that you don’t need fast access to, move them to Glacier or Glacier Deep Archive
- Moving objects can be automated using Lifecycle Rules
Amazon S3 — Lifecycle Rules
- Transition Actions — configure objects to transition to another storage class (see the boto3 sketch after this list)
• Move objects to Standard-IA class 60 days after creation
• Move to Glacier for archiving after 6 months
- Expiration Actions — configure objects to expire (delete) after some time
• Access log files can be set to delete after 365 days
• Can be used to delete old versions of files (if versioning is enabled)
• Can be used to delete incomplete Multi-Part uploads
- Rules can be created for a certain prefix (example: s3://mybucket/mp3/*)
- Rules can be created for certain object Tags (example: Department: Finance)
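To make these rules concrete, here is a minimal boto3 sketch (the bucket name, prefix, and day counts are illustrative assumptions, not prescribed values) that combines a transition action, an expiration action, and incomplete multi-part upload cleanup in a single lifecycle rule:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and prefix, chosen only for illustration
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-mp3",
                "Status": "Enabled",
                "Filter": {"Prefix": "mp3/"},  # rule applies only under mp3/
                "Transitions": [
                    {"Days": 60, "StorageClass": "STANDARD_IA"},
                    {"Days": 180, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},  # delete objects after one year
                # Clean up incomplete multi-part uploads after 7 days
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```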
Amazon S3 — Lifecycle Rules (Scenario 1)
Use Case: Your application on EC2 creates image thumbnails after profile photos are uploaded to Amazon S3. These thumbnails can be easily recreated and only need to be kept for 60 days. The source images should be immediately retrievable for these 60 days; afterwards, the user can wait up to 6 hours. How would you design this?
- S3 source images can be on Standard, with a lifecycle configuration to transition them to Glacier after 60 days
- S3 thumbnails can be on One-Zone IA, with a lifecycle configuration to expire them (delete them) after 60 days
Amazon S3 — Lifecycle Rules (Scenario 2)
Use Case: A rule in your company states that you should be able to recover your deleted S3 objects immediately for 30 days, although this may happen rarely. After this time, and for up to 365 days, deleted objects should be recoverable within 48 hours.
- Enable S3 Versioning in order to have object versions, so that “deleted objects” are in fact hidden by a “delete marker” and can be recovered
- Transition the “noncurrent versions” of the object to Standard-IA
- Afterwards, transition the “noncurrent versions” to Glacier Deep Archive (see the sketch below)
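As a rough sketch of Scenario 2 in code (the bucket name and day counts are assumptions for illustration), you could enable versioning and then add noncurrent-version transitions with boto3:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical name

# Versioning must be enabled so deletes create a delete marker
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "protect-deleted-objects",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "NoncurrentVersionTransitions": [
                    # Noncurrent versions stay immediately retrievable for 30 days...
                    {"NoncurrentDays": 30, "StorageClass": "STANDARD_IA"},
                    # ...then move to Deep Archive (bulk retrieval within ~48 hours)
                    {"NoncurrentDays": 60, "StorageClass": "DEEP_ARCHIVE"},
                ],
                # Permanently remove noncurrent versions after one year
                "NoncurrentVersionExpiration": {"NoncurrentDays": 365},
            }
        ]
    },
)
```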
Amazon S3 Analytics — Storage Class Analysis
- Helps decide when to transition to Standard-IA.
- Excludes One-Zone-IA and Glacier.
- Reports updated daily, with insights available in 24–48 hours.
- Useful to guide or refine lifecycle rules.
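If you want to set up Storage Class Analysis programmatically, a minimal boto3 sketch looks like the following (the bucket names, configuration ID, and export prefix are placeholders):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_analytics_configuration(
    Bucket="my-example-bucket",
    Id="whole-bucket-analysis",
    AnalyticsConfiguration={
        "Id": "whole-bucket-analysis",
        "StorageClassAnalysis": {
            # Export daily CSV reports to another bucket for review
            "DataExport": {
                "OutputSchemaVersion": "V_1",
                "Destination": {
                    "S3BucketDestination": {
                        "Format": "CSV",
                        "Bucket": "arn:aws:s3:::my-analytics-reports",
                        "Prefix": "storage-class-analysis/",
                    }
                },
            }
        },
    },
)
```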
S3 — Requester Pays
- By default, bucket owners pay for data access.
- With Requester Pays, the requester covers request and data transfer costs.
- Useful for sharing large datasets across AWS accounts (authenticated users only).
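A minimal boto3 sketch of both sides (bucket and key names are hypothetical): the owner enables Requester Pays, and an authenticated requester acknowledges the charges with the RequestPayer parameter:

```python
import boto3

s3 = boto3.client("s3")

# Bucket owner enables Requester Pays on the shared bucket
s3.put_bucket_request_payment(
    Bucket="my-shared-dataset",
    RequestPaymentConfiguration={"Payer": "Requester"},
)

# An authenticated requester must acknowledge charges on each request
obj = s3.get_object(
    Bucket="my-shared-dataset",
    Key="data/part-0001.csv",
    RequestPayer="requester",
)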
S3 Event Notifications
- Trigger actions on object events (e.g., upload, delete, restore)
- Events: ObjectCreated, ObjectRemoved, ObjectRestore, etc.
- Filters: by suffix (e.g., *.jpg) or prefix
- Targets: Lambda, SNS, SQS, or EventBridge for advanced routing
Use case: generate thumbnails of images uploaded to S3
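For the thumbnail use case, a boto3 sketch might look like this (the bucket name and Lambda ARN are placeholders; the function must already permit S3 to invoke it, as covered in the next section):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_notification_configuration(
    Bucket="my-photos-bucket",
    NotificationConfiguration={
        "LambdaFunctionConfigurations": [
            {
                "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:make-thumbnail",
                "Events": ["s3:ObjectCreated:*"],
                # Only fire for .jpg uploads
                "Filter": {
                    "Key": {
                        "FilterRules": [{"Name": "suffix", "Value": ".jpg"}]
                    }
                },
            }
        ]
    },
)
```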
S3 Event Notifications — IAM Permissions
- Here we don’t use I AM Role based policy, instead we use Resource Based Policies
S3 Event Notifications with Amazon EventBridge
- JSON-based filtering (e.g., object size, metadata).
- Supports archiving, replaying events, and multiple destinations (Step Functions, Kinesis Streams / Firehose).
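Here is a sketch of both steps with boto3 (bucket and rule names are assumptions): first enable EventBridge delivery on the bucket, then filter with a JSON event pattern, in this case on object size:

```python
import json

import boto3

s3 = boto3.client("s3")
events = boto3.client("events")

# Turn on EventBridge delivery for the bucket
s3.put_bucket_notification_configuration(
    Bucket="my-photos-bucket",
    NotificationConfiguration={"EventBridgeConfiguration": {}},
)

# Match only "Object Created" events for objects larger than 1 MiB;
# targets (Step Functions, Kinesis, etc.) are attached separately via put_targets
events.put_rule(
    Name="large-object-created",
    EventPattern=json.dumps({
        "source": ["aws.s3"],
        "detail-type": ["Object Created"],
        "detail": {
            "bucket": {"name": ["my-photos-bucket"]},
            "object": {"size": [{"numeric": [">", 1048576]}]},
        },
    }),
)
```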
S3 — Baseline Performance
- S3 scales automatically with low latency (100–200 ms).
- Per-prefix limits:
- 3,500 PUT/COPY/POST/DELETE requests/sec.
- 5,500 GET/HEAD requests/sec.
- Use multiple prefixes to increase throughput (e.g., /images/1/, /images/2/).
S3 Performance
Multi-Part upload
- Recommended for files > 100 MB; required for files > 5 GB (a single PUT is capped at 5 GB)
- Can help parallelize uploads (speed up transfers)
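In practice, boto3’s transfer manager handles multi-part uploads for you; this sketch (file, bucket, and key names are placeholders) raises the multipart threshold to 100 MB and uploads parts in parallel:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multi-part uploads above 100 MB, with 8 parts in flight at once
config = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=8,
)

s3.upload_file(
    "backup.tar.gz", "my-example-bucket", "backups/backup.tar.gz", Config=config
)
```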
S3 Transfer Acceleration
- Increase transfer speed by transferring file to an AWS edge location which will forward the data to the S3 bucket in the target region
- Compatible with multi-part upload
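A minimal sketch, assuming a bucket named my-example-bucket: enable acceleration once on the bucket, then point a client at the accelerate endpoint for transfers:

```python
import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# One-time: enable Transfer Acceleration on the bucket
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Route subsequent transfers through the nearest edge location
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("video.mp4", "my-example-bucket", "uploads/video.mp4")
```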
S3 Performance — S3 Byte-Range Fetches
- Parallelize GETs by requesting specific byte ranges
- Better resilience in case of failures
- Can be used to speed up downloads
- Can be used to retrieve only partial data (for example, the head of a file)
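For example, a boto3 sketch that fetches only the first kilobyte of an object (bucket and key are hypothetical), which is how you would grab just the head of a file:

```python
import boto3

s3 = boto3.client("s3")

# Request bytes 0-1023 only, instead of downloading the whole object
resp = s3.get_object(
    Bucket="my-example-bucket",
    Key="logs/app.log",
    Range="bytes=0-1023",
)
head = resp["Body"].read()  # first 1 KB of the object
```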
S3 Batch Operations
- Perform large-scale operations on millions of S3 objects with just one request.
What you can do:
- 🏷 Modify metadata, tags, or ACLs
- 📂 Copy objects between buckets
- 🔐 Encrypt previously unencrypted objects
- 🧊 Restore objects from Glacier or Deep Archive
- ⚙️ Invoke Lambda functions to run custom logic per object
Key Features:
- Uses S3 Inventory to generate object lists
- Supports filtering via Amazon Athena
- Built-in retry logic, status tracking, completion reports, and notifications
Use Case: Need to add encryption to all objects in a bucket? Do it at once using Batch Operations!
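As a rough sketch of what such a job looks like via the s3control API (the account ID, role ARN, bucket ARNs, KMS key alias, and manifest ETag are all placeholders; the manifest normally comes from an S3 Inventory report), this example re-copies each listed object in place with KMS encryption:

```python
import boto3

s3control = boto3.client("s3control")

s3control.create_job(
    AccountId="123456789012",
    ConfirmationRequired=False,
    Priority=10,
    RoleArn="arn:aws:iam::123456789012:role/batch-ops-role",
    Operation={
        # Copy each listed object onto itself to re-write it with encryption
        "S3PutObjectCopy": {
            "TargetResource": "arn:aws:s3:::my-example-bucket",
            "SSEAwsKmsKeyId": "alias/my-key",
        }
    },
    Manifest={
        "Spec": {
            "Format": "S3BatchOperations_CSV_20180820",
            "Fields": ["Bucket", "Key"],
        },
        "Location": {
            "ObjectArn": "arn:aws:s3:::my-example-bucket/manifest.csv",
            "ETag": "example-etag",
        },
    },
    # Completion report written back to S3 for status tracking
    Report={
        "Bucket": "arn:aws:s3:::my-example-bucket",
        "Format": "Report_CSV_20180820",
        "Enabled": True,
        "Prefix": "batch-reports",
        "ReportScope": "AllTasks",
    },
)
```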
S3 — Storage Lens
- A powerful analytics tool to help you optimize, secure, and manage S3 storage across your AWS environment.
Key Capabilities:
- 📉 Cost Insights — Identify unused data or expensive storage patterns
- 🛡 Data Protection — Monitor versioning, encryption, and replication
- 🧾 Usage Metrics — Track object count, storage size, and trends
- 🗂 Account & Region View — See data across org, accounts, buckets, or prefixes
- 📁 Exports — Send daily metrics to S3 (CSV/Parquet)
- 📌 Dashboards — Use the default or build your own view
Use Case: Spot high-growth buckets or misconfigured security settings across your AWS Organization.
Storage Lens — Default Dashboard
- Visualize summarized insights and trends for both free and advanced metrics
- Default dashboard shows Multi-Region and Multi-Account data
- Preconfigured by Amazon S3
- Can’t be deleted, but can be disabled
Storage Lens — Metrics
Summary Metrics
- Overall usage (e.g., StorageBytes, ObjectCount)
Cost Optimization
- Detect unused objects, noncurrent versions, incomplete uploads.
Data Protection
- Track versioning, MFA delete, encryption, and replication rules.
Access Management
- Review Object Ownership configurations.
Event & Activity Metrics
- Monitor event configurations, request patterns, and transfer performance.
Detailed Status Metrics
- Analyze success and error rates (200 OK, 403 Forbidden, 404 Not Found)
Storage Lens — Free vs. Paid
Free Metrics (Default Tier)
✅ Automatically enabled for all AWS customers
📊 Includes ~28 basic usage metrics like:
- Storage size
- Object count
- Growth trends
Data Retention: 14 days
Ideal for general monitoring and quick insights
💼 Advanced Metrics & Recommendations (Paid Tier)
Unlock deeper analytics and cost-saving insights:
- Activity Metrics — Request types, access patterns
- Advanced Cost Optimization — Incomplete uploads, noncurrent versions
- Advanced Data Protection — Versioning, encryption, replication tracking
- Status Code Metrics — 200s, 403s, 404s, and more
📦 Extras Included:
- CloudWatch Publishing — View metrics in CloudWatch at no extra charge
- Prefix-Level Aggregation — Drill down to folder-level details
Data Retention: Up to 15 months
Use Case: Recommended for teams managing large-scale storage needing detailed, long-term visibility and optimization.
AWS Hands-On
- S3 Lifecycle Rules — Click Here
- S3 Event Notifications — Click Here
AWS Cloud Practitioner Questions
- S3 Advanced — Click Here
SAA-C03 Sample Questions
Question 1 :
Domain: Design Cost-Optimized Architectures
A media agency stores its re-creatable assets on Amazon Simple Storage Service (Amazon S3) buckets. The assets are accessed by a large number of users for the first few days, and the frequency of access drops drastically after a week. Although the assets would be accessed only occasionally after the first week, they must continue to be immediately accessible when required. The cost of maintaining all the assets on Amazon S3 storage is turning out to be very expensive, and the agency is looking at reducing costs as much as possible.
As an AWS Certified Solutions Architect — Associate, can you suggest a way to lower the storage costs while fulfilling the business requirements?
Overall explanation
✅ Correct option:
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 30 days
Amazon S3 One Zone-IA is for data that is accessed less frequently, but requires rapid access when needed. Unlike other S3 Storage Classes which store data in a minimum of three Availability Zones (AZs), Amazon S3 One Zone-IA stores data in a single Availability Zone (AZ) and costs 20% less than Amazon S3 Standard-IA. Amazon S3 One Zone-IA is ideal for customers who want a lower-cost option for infrequently accessed and re-creatable data but do not require the availability and resilience of Amazon S3 Standard or Amazon S3 Standard-IA.
The minimum storage duration is 30 days before you can transition objects from Amazon S3 Standard to Amazon S3 One Zone-IA. Amazon S3 One Zone-IA offers the same high durability, high throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee.
S3 Storage Classes can be configured at the object level, and a single bucket can contain objects stored across Amazon S3 Standard, Amazon S3 Intelligent-Tiering, Amazon S3 Standard-IA, and Amazon S3 One Zone-IA. You can also use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.
❌ Incorrect options:
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 7 days
Configure a lifecycle policy to transition the objects to Amazon S3 One Zone-Infrequent Access (S3 One Zone-IA) after 7 days
As mentioned earlier, the minimum storage duration is 30 days before you can transition objects from Amazon S3 Standard to Amazon S3 One Zone-IA or Amazon S3 Standard-IA, so both these options are added as distractors.
Configure a lifecycle policy to transition the objects to Amazon S3 Standard-Infrequent Access (S3 Standard-IA) after 30 days — Amazon S3 Standard-IA is for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA offers the high durability, high throughput, and low latency of Amazon S3 Standard, with a low per GB storage price and per GB retrieval fee. This combination of low cost and high performance makes Amazon S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery files. But it costs more than Amazon S3 One Zone-IA because of the redundant storage across Availability Zones (AZs). As the data is re-creatable, you don’t need to incur this additional cost.
References:
https://aws.amazon.com/s3/storage-classes/
https://docs.aws.amazon.com/AmazonS3/latest/dev/lifecycle-transition-general-considerations.html
Question 2
Domain: Design Secure Architectures
A company uses Amazon S3 buckets for storing sensitive customer data. The company has defined different retention periods for different objects present in the Amazon S3 buckets, based on the compliance requirements. But, the retention rules do not seem to work as expected.
Which of the following options represent a valid configuration for setting up retention periods for objects in Amazon S3 buckets? (Select two)
Overall explanation
✅ Correct option:
When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version
You can place a retention period on an object version either explicitly or through a bucket default setting. When you apply a retention period to an object version explicitly, you specify a Retain Until Date for the object version. Amazon S3 stores the Retain Until Date setting in the object version's metadata and protects the object version until the retention period expires.
Different versions of a single object can have different retention modes and periods
Like all other Object Lock settings, retention periods apply to individual object versions. Different versions of a single object can have different retention modes and periods.
For example, suppose that you have an object that is 15 days into a 30-day retention period, and you PUT an object into Amazon S3 with the same name and a 60-day retention period. In this case, your PUT succeeds, and Amazon S3 creates a new version of the object with a 60-day retention period. The older version maintains its original retention period and becomes deletable in 15 days.
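To make the “explicit retention period” concrete, here is a minimal boto3 sketch (the bucket, key, and version ID are placeholders) that sets a Retain Until Date on a single object version; the bucket must have Object Lock enabled:

```python
from datetime import datetime, timezone

import boto3

s3 = boto3.client("s3")

# Explicitly apply a retention period to one object version
s3.put_object_retention(
    Bucket="my-locked-bucket",
    Key="records/customer-001.json",
    VersionId="example-version-id",
    Retention={
        # GOVERNANCE can be bypassed by privileged users; COMPLIANCE cannot
        "Mode": "GOVERNANCE",
        "RetainUntilDate": datetime(2026, 1, 1, tzinfo=timezone.utc),
    },
)
```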
❌ Incorrect options:
You cannot place a retention period on an object version through a bucket default setting — You can place a retention period on an object version either explicitly or through a bucket default setting.
When you use bucket default settings, you specify a Retain Until Date for the object version — When you use bucket default settings, you don't specify a Retain Until Date. Instead, you specify a duration, in either days or years, for which every object version placed in the bucket should be protected.
The bucket default settings will override any explicit retention mode or period you request on an object version — If your request to place an object version in a bucket contains an explicit retention mode and period, those settings override any bucket default settings for that object version.
Reference:
https://docs.aws.amazon.com/AmazonS3/latest/dev/object-lock-overview.html
🧾 Conclusion
Amazon S3 offers tailored storage classes like Standard-IA and One Zone-IA to optimize costs and performance for infrequently accessed data. Use Standard-IA for critical backups requiring multi-AZ resilience, while One Zone-IA suits re-creatable data at lower costs. Automate transitions with lifecycle policies to move objects after 30 days, balancing accessibility and affordability. Always match storage choices to data criticality — redundancy for irreplaceable files, cost savings for disposable copies. By leveraging these features, you ensure efficient, scalable storage management aligned with your data’s value and usage patterns.
Previous Episode : “Pass the AWS Certified Solutions Architect Associate Certification SAA-C03! (Episode 10: S3 Introduction )”
Next Episode : “Pass the AWS Certified Solutions Architect Associate Certification SAA-C03! (Episode 12: S3 Security )”