concept Amazon S3 in category aws

This is an excerpt from Manning's book AWS Security MEAP V03.
3. The service short name: This is the same name that is used in the AWS CLI. For example, 's3' for Amazon S3, and 'iam' for AWS Identity and Access Management.
Figure 5.6 Gateway VPC endpoints are used for privately accessing Amazon S3 and DynamoDB. Gateway VPC endpoints are different from interface VPC endpoints in that they live at the VPC level, rather than the subnet level.
The processes for securing data generally break down into two categories: protections for data at rest and protections for data in transit. Data at rest refers to stored data, like files and objects in Amazon S3 or records in a database. Data in transit refers to data being communicated over the network, like the network calls between your own servers, or between your server and Amazon S3. In section 6.2 we'll look specifically at protecting data at rest, including encryption at rest, least-privilege access to resources, and backup and versioning options within AWS. The following section, 6.3, covers data in transit. It primarily covers secure transport protocols that maintain the confidentiality and integrity of your data, and methods for enforcing them. It also discusses enforcing least-privilege network access controls where your data is transmitted.
For point A, data stored in Amazon S3, how can we prevent unauthorized users from modifying data? One way is to use the bucket policy to restrict who has write access to the data. Similar to the previous section where we restricted read access to a bucket to a specific role, listing 6.3 restricts PutObject access. This will prevent anyone who can't assume the role from changing the data in our S3 bucket.
Listing 6.3 - S3 bucket policy restricting PutObject access to a specific IAM role
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",                       #A
      "Principal": "*",                       #A
      "Action": "s3:PutObject",               #A
      "Resource": "arn:aws:s3:::MyBucket/*",  #B
      "Condition": {
        "StringNotLike": {                    #C
          "aws:userId": [                     #C
            "AROAEXAMPLEID:*",                #D
            "111111111111"
          ]
        }
      }
    }
  ]
}

By using a policy with an explicit deny for any users who aren't the intended user, we go a long way toward preventing malicious tampering. But if something did happen, how could we recover from it? The best way to recover from tampering is to have good backups of your data. For Amazon S3, one great way to keep backups is by enabling versioning on our S3 bucket. Versioning allows us to restore an object from any of its revisions. If an object was tampered with, we can restore it to its previous version. Even if an object is deleted, we can still recover any of its old versions. We can use the following command to enable versioning for our S3 bucket.
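The excerpt cuts off before the command itself; a minimal sketch using the AWS CLI's `s3api` commands (the bucket name `MyBucket` is a placeholder for your own bucket):

```shell
# Turn on versioning for the bucket. Once enabled, every PutObject
# creates a new version rather than overwriting the old one.
aws s3api put-bucket-versioning \
  --bucket MyBucket \
  --versioning-configuration Status=Enabled

# Confirm the bucket's versioning state ("Status": "Enabled").
aws s3api get-bucket-versioning --bucket MyBucket
```

Note that versioning can be suspended later, but never fully disabled: existing object versions are retained.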

This is an excerpt from Manning's book AWS Lambda in Action: Event-driven serverless applications.
Asynchronous calls are useful when functions are subscribed to events generated by other resources, such as Amazon S3, an object store, or Amazon DynamoDB, a fully managed NoSQL database.
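You can also trigger an asynchronous call yourself from the AWS CLI by setting the invocation type to `Event`; a sketch, where the function name and payload are hypothetical:

```shell
# Invoke the function asynchronously: Lambda queues the event and
# returns HTTP 202 Accepted without waiting for the function to finish.
# (--cli-binary-format is needed on AWS CLI v2 to pass a raw JSON payload.)
aws lambda invoke \
  --function-name myFunction \
  --invocation-type Event \
  --cli-binary-format raw-in-base64-out \
  --payload '{"key": "value"}' \
  response.json
```

With `--invocation-type Event`, `response.json` stays empty; the function's result is not returned to the caller.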
Listing 6.1. Policy to give access to private folders on Amazon S3
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": ["s3:ListBucket"],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::B"],
      "Condition": {
        "StringLike": {"s3:prefix": ["P/${cognito-identity.amazonaws.com:sub}/*"]}  #1
      }
    },
    {
      "Action": [
        "s3:GetObject",
        "s3:PutObject"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::B/P/${cognito-identity.amazonaws.com:sub}/*"]  #1
    }
  ]
}
You used Amazon S3 multiple times in the examples in this book to store different kinds of information, such as pictures or HTML files. In the same way, you can store the ZIP file of a Lambda function on an S3 bucket.
To simplify deployment, AWS Lambda supports the deployment of a Lambda function straight from a ZIP file stored on Amazon S3. In this way, you don’t need to upload and send the ZIP file when creating or updating the code of a Lambda function. You can use any available tool (such as the S3 console, the AWS CLI, or any third-party tool supporting Amazon S3) to upload the ZIP file to an S3 bucket, and then call the CreateFunction or UpdateFunctionCode Lambda API to use the S3 object as the source of the function code. See figure 14.1.
Figure 14.1. After you upload the ZIP file containing the function code to Amazon S3, you can use the web console, the AWS CLI or SDKs, or the Lambda API to create a new function or update an existing function to use the code in the ZIP file. After that, you can invoke the function as usual.
You could use the Lambda web console and point the source code to a file on Amazon S3. But to prepare ourselves for a more automated approach, let’s use the AWS CLI to upload the greetingsOnDemand function (which we’ve used multiple times in this book), create the function, and then update its code.
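The book's actual commands aren't reproduced in this excerpt; a sketch of the flow it describes, where the bucket name, runtime, and role ARN are placeholder assumptions:

```shell
# 1. Upload the deployment package to an S3 bucket.
aws s3 cp greetingsOnDemand.zip s3://my-bucket/greetingsOnDemand.zip

# 2. Create the function, pointing --code at the S3 object
#    instead of uploading the ZIP file directly.
aws lambda create-function \
  --function-name greetingsOnDemand \
  --runtime nodejs18.x \
  --role arn:aws:iam::123456789012:role/lambda_basic_execution \
  --handler index.handler \
  --code S3Bucket=my-bucket,S3Key=greetingsOnDemand.zip

# 3. Later, after uploading a new ZIP to the same key,
#    update only the function code.
aws lambda update-function-code \
  --function-name greetingsOnDemand \
  --s3-bucket my-bucket \
  --s3-key greetingsOnDemand.zip
```

Because the code lives in S3, step 3 can be rerun by any automation that can write to the bucket, without shipping the ZIP file through the machine issuing the API call.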