AWS S3 Data Protection
- S3 protects data using a highly durable storage infrastructure designed for mission-critical and primary data storage.
- Objects are redundantly stored on multiple devices across multiple facilities in an S3 region.
- S3 PUT and PUT Object copy operations synchronously store the data across multiple facilities before returning SUCCESS.
- Once the objects are stored, S3 maintains its durability by quickly detecting and repairing any lost redundancy.
- S3 also regularly verifies the integrity of data stored using checksums. If S3 detects data corruption, it is repaired using redundant data.
- In addition, S3 calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.
- Data protection against accidental overwrites and deletions can be added by enabling Versioning to preserve, retrieve, and restore every version of the object stored (see the sketch after this list).
- S3 also provides the ability to protect data in transit (as it travels to and from S3) and at rest (while it is stored in S3).
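To make the Versioning point concrete, here is a minimal sketch using boto3 (the bucket name is hypothetical): it enables versioning on a bucket and shows that overwriting a key preserves the earlier version rather than replacing it.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Enable versioning so overwrites and deletions preserve prior versions
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Overwriting the same key now creates a new version instead of replacing the data
s3.put_object(Bucket=bucket, Key="report.txt", Body=b"version 1")
s3.put_object(Bucket=bucket, Key="report.txt", Body=b"version 2")

# Both versions remain retrievable by version ID
resp = s3.list_object_versions(Bucket=bucket, Prefix="report.txt")
for version in resp.get("Versions", []):
    print(version["VersionId"], version["IsLatest"])
```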
S3 Encryption
Refer to the blog post @ S3 Encryption
AWS Certification Exam Practice Questions
- Questions are collected from the Internet and the answers are marked as per my knowledge and understanding (which might differ from yours).
- AWS services are updated every day and both the answers and questions might be outdated soon, so research accordingly.
- AWS exam questions are not updated to keep pace with AWS updates, so even if the underlying feature has changed, the question might not be updated.
- Open to further feedback, discussion and correction.
- A customer is leveraging Amazon Simple Storage Service in eu-west-1 to store static content for a web-based property. The customer is storing objects using the Standard Storage class. Where are the customer's objects replicated?
- A single facility in eu-west-1 and a single facility in eu-central-1
- A single facility in eu-west-1 and a single facility in us-east-1
- Multiple facilities in eu-west-1
- A single facility in eu-west-1
- A system admin is planning to encrypt all objects being uploaded to S3 from an application. The system admin does not want to implement his own encryption algorithm; instead, he plans to use server-side encryption by supplying his own key (SSE-C). Which parameter is not required while making a call for SSE-C?
- x-amz-server-side-encryption-customer-key-AES-256
- x-amz-server-side-encryption-customer-key
- x-amz-server-side-encryption-customer-algorithm
- x-amz-server-side-encryption-customer-key-MD5
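As a side note on the headers above, here is a minimal SSE-C sketch using boto3 (bucket and key names are hypothetical). boto3 maps SSECustomerAlgorithm and SSECustomerKey to the x-amz-server-side-encryption-customer-* headers and computes the key-MD5 header automatically:

```python
import os

import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Customer-provided 256-bit AES key; AWS never stores this key
key = os.urandom(32)

# SSECustomerAlgorithm -> x-amz-server-side-encryption-customer-algorithm
# SSECustomerKey       -> x-amz-server-side-encryption-customer-key
# boto3 derives x-amz-server-side-encryption-customer-key-MD5 from the key
s3.put_object(
    Bucket=bucket,
    Key="secret.txt",
    Body=b"sensitive data",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)

# The same algorithm and key must be supplied again to read the object
obj = s3.get_object(
    Bucket=bucket,
    Key="secret.txt",
    SSECustomerAlgorithm="AES256",
    SSECustomerKey=key,
)
print(obj["Body"].read())
```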
For Q2, why is AES-256 not required, as it is not using the customer algorithm?
x-amz-server-side-encryption-customer-key-AES-256 is not a valid parameter.
The algorithm is specified via x-amz-server-side-encryption-customer-algorithm, which must be AES256; the key supplied via x-amz-server-side-encryption-customer-key must be a 256-bit AES key.
About Q7. I wonder how any Glacier-based solution can be considered highly available (last sentence of the question)? The only HA solution is A.
Dear Jayendra:
Your blog has helped me a lot while learning AWS.
I have a question about SSE-S3.
As you mentioned, whether or not objects are encrypted with SSE-S3 can't be enforced when they are uploaded using pre-signed URLs, because the only way to specify server-side encryption is through the AWS Management Console or through an HTTP request header.
I also found this post on the AWS blog:
https://aws.amazon.com/blogs/developer/generating-amazon-s3-pre-signed-urls-with-sse-part-1/
It seems to be possible to use SigV4 to upload an object with SSE through a pre-signed URL.
Could you check it, or have I misunderstood the post?
Thank you
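For reference, the linked post does appear to describe this: with Signature Version 4, the server-side encryption parameter is signed into the pre-signed URL, so the upload succeeds only if the client sends the matching header. A minimal sketch with boto3 (bucket and key names are hypothetical):

```python
import boto3
import requests
from botocore.config import Config

# SigV4 is required so the SSE header becomes part of the signature
s3 = boto3.client("s3", config=Config(signature_version="s3v4"))

url = s3.generate_presigned_url(
    "put_object",
    Params={
        "Bucket": "my-example-bucket",  # hypothetical bucket name
        "Key": "upload.txt",
        "ServerSideEncryption": "AES256",
    },
    ExpiresIn=3600,
)

# The uploader must send the matching header; otherwise S3 rejects the request
resp = requests.put(
    url,
    data=b"hello",
    headers={"x-amz-server-side-encryption": "AES256"},
)
print(resp.status_code)
```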