SCPs do not grant any permissions; IAM policies do.
Permissions boundaries are applied to IAM entities (users/roles).
NLBs do not have security groups configured and pass connections straight to EC2 instances with the client's source IP preserved (when targets are registered by instance ID).
It is a best practice to move the database connection outside the Lambda event handler so subsequent invocations can reuse it.
Compared with RDS, DynamoDB provides much higher throughput and scalability.
Concepts
- AWS CloudTrail
- CloudTrail is an AWS service that helps you enable operational and risk auditing, governance, and compliance of your AWS account. Actions taken by a user, a role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the Management Console, CLI, SDKs, and APIs.
- Event History
- the event history provides a viewable, searchable, downloadable, and immutable record of the past 90 days of management events in an AWS Region.
- CloudTrail Lake
- Trails
- Trails capture a record of AWS activities, delivering and storing these events in an Amazon S3 bucket, with optional delivery to CW logs and EventBridge.
- Athena
- Amazon Athena uses a managed Data Catalog to store information and schemas about the databases and tables that you create for your data stored in Amazon S3.
- Most results are delivered within seconds
- Amazon Athena uses Presto, an open source, distributed SQL query engine optimized for low latency, interactive data analysis.
- Athena supports a wide variety of data formats such as CSV, JSON, ORC, Avro, or Parquet.
- You should use Amazon Athena if you want to interact with your Cost and Usage Report stored in S3 to gain extremely specific information on how your AWS bill is being calculated. This can be done natively within the console.
- best practices
- partition/bucket data
- compression
- optimize file size
- optimize columnar data store generation
- optimize Order/Group by
- use approximate functions
- only include the columns you need
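A hedged CLI sketch tying several of these practices together (partitioning, columnar format, compression) via an Athena CTAS query; all bucket and table names are illustrative:

```bash
# Rewrite a raw CSV table as partitioned, Snappy-compressed Parquet (names illustrative)
aws athena start-query-execution \
  --work-group primary \
  --result-configuration OutputLocation=s3://my-athena-results/ \
  --query-string "CREATE TABLE logs_parquet
    WITH (format = 'PARQUET',
          parquet_compression = 'SNAPPY',
          external_location = 's3://my-data-lake/logs_parquet/',
          partitioned_by = ARRAY['event_date'])
    AS SELECT request_id, status, event_date FROM logs_raw;"
```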
- Organization
- When a member account leaves an organization, all charges incurred by the account are charged directly to the standalone account.
- SCPs do not affect any service-linked role. Service-linked roles enable other AWS services to integrate with AWS Organizations and can’t be restricted by SCPs.
- key concepts
- AWS Organization – An organization is a collection of AWS accounts that you can organize into a hierarchy and manage centrally.
- AWS Account – An AWS account is a container for your AWS resources.
- Management Account – A management account is the AWS account you use to create your organization.
- Member Account – A member account is an AWS account, other than the management account, that is part of an organization.
- Administrative Root – An administrative root is the starting point for organizing your AWS accounts. The administrative root is the top-most container in your organization’s hierarchy.
- Organizational Unit (OU) – An organizational unit (OU) is a group of AWS accounts within an organization. An OU can also contain other OUs enabling you to create a hierarchy.
- Policy – A policy is a “document” with one or more statements that define the controls that you want to apply to a group of AWS accounts. AWS Organizations supports a specific type of policy called a Service Control Policy (SCP). An SCP defines the AWS service actions, such as Amazon EC2 RunInstances, that are available for use in different accounts within an organization.
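A minimal sketch of creating and attaching an SCP from the management account (policy content, names, and IDs are illustrative):

```bash
# Create a deny-list SCP (example content only)
aws organizations create-policy \
  --name DenyEc2Termination \
  --type SERVICE_CONTROL_POLICY \
  --description "Example SCP" \
  --content '{"Version":"2012-10-17","Statement":[{"Effect":"Deny","Action":["ec2:TerminateInstances"],"Resource":"*"}]}'

# Attach it to an OU (or the root, or an account) by target ID
aws organizations attach-policy --policy-id p-examplepolicyid --target-id ou-exampleouid111
```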
From Practice Exam Explanation
- use Amazon ECS with Spot Instances and configure Spot Instance draining
- ECS coordinates the termination of tasks with the termination of the underlying EC2 instance using the instance's "DRAINING" state.
- to remove Oracle license overhead, it's good to switch to non-Oracle databases
- keep an eye on whether RDS supports the mentioned Oracle version…
- while enabling caching in API gateway can reduce the number of requests reaching the backend, converting to an edge-optimized endpoint primarily benefits geographically distributed clients and may not address the core issue of database load.
- Use AWS SAM and CodeDeploy to deploy the new Lambda version, gradually shift traffic to the new version, and use pre-traffic and post-traffic test functions to verify the code. Roll back if a CloudWatch alarm is triggered.
- CodeDeploy leverages Lambda’s traffic shifting capabilities to automate the gradual rollout of new function versions.
- CodeDeploy with deployment best practices
- pre-traffic tests with alarm-based auto rollback
- Create an EventBridge rule that triggers a Lambda function that uses AWS Trusted Advisor to retrieve the most current utilization and service limit data. If the current utilization is above 80%, publish a message to an SNS topic to alert the cloud team.
- Trusted Advisor inspects your AWS environment, and then makes recommendations when opportunities exist to save money, improve system availability and performance, or help close security gaps.
- Set up a standby ECS cluster and service on Fargate in a different Region. Create a cross-Region RDS read replica in the new Region. Design a Lambda function that promotes the read replica to primary and reconfigures Route53 to reroute traffic to the standby ECS cluster. Adjust the EventBridge rule to include this Lambda as a target.
- by having a standby ECS cluster and a cross-Region RDS read replica, the application can quickly switch to the standby environment.
- restoring RDS from a snapshot is not as efficient as promoting a read replica
- Use AWS Compute Optimizer and call the ExportLambdaFunctionRecommendations operation for the Lambda functions. Export the CSV file to an S3 bucket. Create an EventBridge rule to schedule the Lambda to run every 2 weeks.
- You can export your recommendations to record them over time and share the data with others. Recommendations are exported in a comma-separated values (.csv) file, and its metadata in a JavaScript Object Notation (.json) file, to an existing Amazon Simple Storage Service (Amazon S3) bucket that you specify.
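A hedged sketch of that export call (bucket and prefix are illustrative):

```bash
# Export Lambda recommendations as CSV to an existing S3 bucket
aws compute-optimizer export-lambda-function-recommendations \
  --s3-destination-config bucket=my-recommendations-bucket,keyPrefix=lambda-reports \
  --file-format Csv
```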
- Identify the IP addresses in S3 requests with S3 access logs and Athena. Use Config with auto remediation to remediate any changes to S3 bucket policies. Configure alerting with Config and SNS.
- S3 server access logging provides detailed records for the requests that are made to a bucket.
- The Config Auto remediation feature automatically remediates non-compliant resources evaluated by Config rules.
- AWS OpsWorks supports a blue/green deployment strategy but doesn't support canary deployments
- If your CloudFront distribution uses an S3 website endpoint, to avoid Access Denied errors:
- objects in the bucket must be publicly accessible
- objects in the bucket can't be encrypted by KMS
- the bucket policy must allow access to `s3:GetObject`
- if the bucket policy grants public read access, then the AWS account that owns the bucket must also own the object
- the requested objects must exist in the bucket
- S3 Block Public Access must be disabled on the bucket
- if Requester Pays is enabled, then the request must include the `request-payer` parameter
- if using a Referer header to restrict access from CloudFront to the S3 origin, then review the custom header
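A minimal sketch of a bucket policy granting the public `s3:GetObject` access listed above (bucket name illustrative):

```bash
# Allow public read of all objects in the website bucket
aws s3api put-bucket-policy --bucket my-static-site-bucket --policy '{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "PublicReadGetObject",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::my-static-site-bucket/*"
  }]
}'
```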
- Configure the SG on the interface endpoint to allow connectivity to the AWS services
- you must ensure that the SG allows traffic between the endpoint network interface and the resources in your VPC that communicate with the service
- when you create an interface endpoint, endpoint-specific DNS hostnames are generated
- the hosted zone contains a record set for the default DNS name of the service that resolves to the private IP addresses of the endpoint network interfaces in your VPC. This enables you to make requests to the service using its default DNS hostname instead of the endpoint-specific DNS hostnames.
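A hedged sketch of creating an interface endpoint with private DNS enabled so the service's default DNS name resolves to the endpoint (all IDs and the service name are illustrative):

```bash
# Interface endpoint for SQS with private DNS
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0example \
  --vpc-endpoint-type Interface \
  --service-name com.amazonaws.us-east-1.sqs \
  --subnet-ids subnet-0example \
  --security-group-ids sg-0example \
  --private-dns-enabled
```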
- Configure the Aurora MySQL DB cluster to generate slow query logs by setting parameters in the DB parameter group
- For Aurora, you can monitor the error log (default), slow query log, and the general log
- Implement the X-Ray SDK to trace incoming HTTP requests on the EC2 instances and SQL queries
- Change the default encryption to server-side encryption with KMS managed keys (SSE-KMS) on the S3 bucket. Set an S3 bucket policy to deny unencrypted `PutObject` requests. Use the AWS CLI to re-upload all objects in the S3 bucket.
- You can configure your bucket to use an S3 Bucket Key for SSE-KMS on new objects by using the Amazon S3 console, REST API, AWS SDKs, AWS CLI, or AWS CloudFormation.
- By implementing encryption using KMS keys, the accessor of the resources needs both S3 policy access and access to the KMS key to decrypt the data.
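A minimal sketch of setting SSE-KMS default encryption with an S3 Bucket Key (bucket name and key ARN are illustrative):

```bash
# Default-encrypt new objects with SSE-KMS and enable the S3 Bucket Key
aws s3api put-bucket-encryption \
  --bucket my-secure-bucket \
  --server-side-encryption-configuration '{
    "Rules": [{
      "ApplyServerSideEncryptionByDefault": {
        "SSEAlgorithm": "aws:kms",
        "KMSMasterKeyID": "arn:aws:kms:us-east-1:111122223333:key/example-key-id"
      },
      "BucketKeyEnabled": true
    }]
  }'
```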
- Set the authorization to AWS_IAM for the API Gateway method. Create a permissions policy that grants `execute-api:Invoke` permission on the REST API resource and attach it to a group containing the IAM user accounts. Enable request signing with AWS Signature for every call to the API endpoint. Trace and analyze each user request on API Gateway by using X-Ray.
- API Gateway access control options:
- Resource policies
- Standard IAM roles and policies
- IAM tags
- Endpoint policies for interface VPC endpoints
- Lambda authorizers
- Amazon Cognito user pools
- Use the CloudWatch Logs agent to stream log messages directly to CW Logs. Configure the `batch_count` parameter to 1.
- The `batch_count` parameter specifies the maximum number of log events in a batch, up to 10000. Using a value of 1 results in every log entry being immediately streamed to CloudWatch Logs.
- S3 Block Public Access
- IgnorePublicAcls - causes S3 to ignore all public ACLs on a bucket and any objects that it contains
- BlockPublicAcls - PUT bucket ACL and PUT object requests are blocked if granting public access
- BlockPublicPolicy - Rejects requests to Put a bucket policy if granting public access
- RestrictPublicBuckets - Restricts access to principals in the bucket owner's account
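A minimal sketch of enabling all four settings on a bucket (bucket name illustrative):

```bash
# Turn on all four Block Public Access settings
aws s3api put-public-access-block \
  --bucket my-private-bucket \
  --public-access-block-configuration \
      BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
```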
- Attach an endpoint policy to the gateway endpoint that restricts access to the specific S3 bucket. Assign an IAM role to the EC2 instances and attach a policy to the S3 bucket that grants access only to this role.
- Download the Lambda function package from the source account. Use the deployment package and create new lambda functions in the target accounts. Share the Aurora DB cluster with the target account by using AWS RAM. Grant the target account permission to clone the Aurora DB cluster.
- cross account cloning is much faster than restore a database snapshot
- Lambda is not a sharable resource with RAM
- Use multipart upload for the backup jobs. Create a lifecycle policy that aborts incomplete multipart uploads on the S3 bucket to prevent failed uploads from accumulating.
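A hedged sketch of that lifecycle rule (bucket name and the 7-day window are illustrative):

```bash
# Abort multipart uploads left incomplete for more than 7 days
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "abort-incomplete-mpu",
      "Status": "Enabled",
      "Filter": {},
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }]
  }'
```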
- Use DMS to migrate the database to RDS. Replicate the client VMs into AWS using SMS (Server Migration Service). Create a Route53 A record for each client VM.
- reduces the operational overhead associated with the DB and minimizes the impact on operations staff after the migration completes
- choose the same `distro` within RDS
- the client instances should not be behind an ELB as they have different configurations, so Route53 A records can be created to connect directly to each instance
- Placing Lambda in private subnets within the VPC and using a NAT gateway for managing internet traffic is a secure method to allow access to the Neptune DB cluster.
- Hosting Lambda in dedicated subnets within the VPC and creating a VPC endpoint for DynamoDB provides a secure and direct connection to DynamoDB.
- Create an IAM role with the `AmazonSSMManagedInstanceCore` managed policy attached. Attach the IAM role to all the EC2 instances. Remove all SG rules attached to the EC2 instances that allow inbound TCP on port 22. Have the engineers install the AWS Systems Manager Session Manager plugin on their devices and remotely access the instances by using the `start-session` API call from Systems Manager.
- EC2 Instance Connect requires an SSH key to connect to the instance, which brings operational overhead and security risk
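A minimal usage sketch once the role and plugin are in place (instance ID illustrative):

```bash
# Open an interactive shell via Session Manager; no inbound SSH required
aws ssm start-session --target i-0123456789abcdef0
```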
- Configure an AWS Glue crawler to crawl the databases and create tables in the AWS Glue Data Catalog. Create a Glue ETL job that loads data from RDS to S3. Use Athena to run the queries (ad hoc).
- the lowest-cost option is to extract the data to S3 using a Glue ETL job
- Athena Tables can be created in the Glue Data Catalog
- Athena ODBC driver can be used to connect to Athena from the existing analytics tool
- Use Application Auto Scaling to scale out write capacity on the DynamoDB table based on a schedule. (predictable traffic spike)
- Configure a CodeCommit trigger to invoke a Lambda function to scan new code submissions for credentials. If any credentials are found, disable them and notify the user.
- Macie scans S3 buckets, but you cannot see the S3 bucket backing CodeCommit as it's an AWS managed service
- Create a VPN connection between the company’s corporate network and the VPC. Configure security groups for the EC2 instances to only allow traffic from the VPN connection.
- the most resilient connectivity: install a second DX connection from a different network carrier and attach it to the same virtual private gateway as the first DX connection.
- the virtual private gateway has built in redundancy so sharing is acceptable
- Associate the private hosted zone with all the VPCs. Create a Route53 inbound resolver in the shared services VPC. Attach all VPCs to the transit gateway and create forwarding rules in the on-premises DNS server for `internal.company.local` that point to the inbound resolver.
- although it's possible to use forwarding rules to resolve private hosted zones in other VPCs, the most reliable, performant, and low-cost approach is to share and associate private hosted zones directly with all VPCs
- migrating MySQL over a low-bandwidth network within 2 weeks
- export the data from the database using database-native tools and import the data to AWS using Snowball
- Launch an RDS Aurora MySQL instance and load the data from the Snowball export. Configure replication from the on-premises database to the Aurora instance using a VPN
- when the Aurora instance is fully synchronized, change the DNS entry to point to the Aurora instance and stop replication
- Launch the EC2 instances in a private subnet with an outbound route to a NAT gateway in a public subnet. Associate an Elastic IP address with the NAT gateway so that it can be whitelisted on the external API service.
- Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
- Organizations does not provide centralized control of identities, as they are always created within individual accounts.
- SCPs control the maximum available permissions but do not actually grant the IAM permissions assigned to users.
- Take a snapshot of the EBS volume by using Data Lifecycle Manager (DLM). Use the EBS direct APIs to (read and) copy the data from the snapshot to S3.
- you can use the EBS direct APIs to create EBS snapshots, write data directly to your snapshots, read data from your snapshots, and identify the differences or changes between two snapshots
- running manual commands on a business-critical instance is not recommended
- Create two DX connections from the New York data center to an AWS Region. Configure the company WAN to send traffic over the DX connections. Use a DX gateway to access data in other Regions.
- minimal privilege on network permissions for an ECS cluster
- set up the tasks using the `awsvpc` network mode for enhanced network isolation and control
- attach security groups to the individual tasks and utilize IAM roles specifically designed for tasks to access other AWS resources
- explanation
- the `awsvpc` network mode provides each task with its own elastic network interface, IP address, and security group, offering better isolation and control compared to bridge mode
- applying security groups to tasks and using IAM roles for tasks is a best practice in AWS
- Build an API with API Gateway and Lambda, use S3 for hosting static web resources, and create a CloudFront distribution with the S3 bucket as the origin. Use Cognito to provide user management and authentication functions.
- CloudFront offers basic DDoS protection with Shield Standard
- The best option for deploying updates quickly is to create a pipeline in CodePipeline with the source stage set to CodeCommit, the build stage set to a CodeBuild project, and the deploy stage configured to use CodeDeploy to push the updates to the application.
- S3 & Athena
- store the data in S3 using Apache Parquet or Apache ORC formats
- store the data using Apache Hive partitioning in S3 using a key that includes a date
- using topology spread constraints based on AZs is a strategic approach to enhance node resilience in an EKS cluster.
- ensure that the pods are evenly distributed across different AZs
- Create a SAML-based identity management provider in a central account and map IAM roles that provide the necessary permissions for users. Map users in the on-premises IdP groups to IAM roles. Use cross-account access to the other AWS account.
- Migrate the applications to Docker containers on ECS. Create a separate ECS task and service for each application. Enable service auto scaling based on memory utilization and set the threshold to 75%. Monitor services and hosts by using CW.
- Beanstalk is less cost-efficient than ECS tasks
- cannot scale EC2 based on memory utilization unless you configure a custom metric in CW; less cost-efficient than ECS tasks
- Create a pipeline in CodePipeline and trigger execution using CodeCommit branches. Use AWS CodeBuild for running unit tests and stage the artifacts in an S3 bucket in a separate testing account.
- Implement AWS Global Accelerator with a standard accelerator configuration. Associate each regional deployment’s ALB with the Global Accelerator and distribute its static IP addresses to customers.
- Global Accelerator provides static IP addresses as a core feature
- Multiple VPCs & one on-premises network, requiring transitive peering
- create a DX gateway and attach the DX gateway to a transit gateway. Enable route propagation with BGP.
- create a transit gateway and add attachments for all of the VPCs. Configure the route tables in the VPCs to send traffic to the transit gateway.
- explanation
- transit gateway allows fully transitive connections between VPCs in a Region
- DX gateway can connect the transit gateway to the DX connection
- BGP is used to propagate routes from the on-premises data center into AWS and vice versa
- Create an IAM user for the new employee and add the user to the security team's IAM group. Set a permissions boundary that grants access to manage DynamoDB, RDS, and CW. When the employee takes on new management responsibilities, add the additional services to the permissions boundary IAM policy.
- RDS MySQL: RPO 5 minutes, RTO 15 minutes, fully automated
- create a cross-Region read replica in us-west-1. Use EventBridge to trigger a Lambda that promotes the read replica to primary and updates the DNS endpoint address for the database.
- explanation
- a cross-Region read replica will ensure an RPO of 5 minutes
- read replicas can be promoted to primary at any time
- promotion process takes a few minutes to complete
- Deploy a scaled-down version of the production environment in a separate Region, ensuring the minimum distance requirements are met. The DR environment should include one instance for the web tier and one instance for the application tier. Create another database instance and configure source-replica replication for MySQL. Configure Auto Scaling for the web and app tiers so they can scale based on load. Use Route53 to switch traffic to the DR Region. (cost-efficient)
- The security team will provide permissions to each team using the principle of least privilege (central identities control & trust + assumeRole)
- Use Organizations to create a management account and create each team's account from the management account. Create a security account for cross-account access. Apply service control policies on each account and grant the security team cross-account access to all accounts. The security team will create IAM policies to provide least privilege access.
- choose AWS services over third-party solution
- Configure the application to send Set-Cookie header to the viewer and control access to the files using signed cookies.
- CloudFront signed cookies allow you to control who can access your content when you don’t want to change your current URLs or when you want to provide access to multiple restricted files
- a signed URL grants access to an individual file, which would incur overhead
- Write a custom health check that verifies successful access to the database endpoints in each Region. Add the health check within the latency-based routing policy in Route53.
- redirect to a specific URL and route combination for each domain
- create an ALB that includes HTTP and HTTPS listeners
- create an SSL certificate by using ACM. Include the domains as Subject Alternative Names
- create a CloudFront distribution and deploy a Lambda@Edge function
- Create an S3 bucket for the pipeline. Configure S3 caching for the CodeBuild projects that are in the pipeline. Update the build specifications of the CodeBuild projects. Add the data file directory to the cache definition.
- Create a Service Catalog product from the environment template and add a launch constraint to the product with the existing role. Give users in the testing team permission to use Service Catalog APIs only. Train users to launch the template from the Service Catalog console.
- A launch constraint specifies the AWS identity and IAM role that Service Catalog assumes when an end user launches a product.
- CloudFront with S3 and ALB
- create a web ACL in AWS WAF with a rule to validate the presence of a custom header and associate the web ACL with the ALB
- configure CloudFront to add a custom header to requests that it sends to the origin
- create a CloudFront origin access identity (OAI) and add it to the CloudFront distribution. Update the S3 bucket policy to allow access to the OAI only
- create an SCP with the Deny effect on the `ec2:PurchaseReservedInstancesOffering` action. Attach the SCP to the root of the organization
- when you attach a policy to the organization root, all OUs and accounts in the organization inherit that policy, which ensures that any new accounts that are added will inherit the policy automatically
- a fixed budget and ensure no overspend
- Use the Budgets service to define a fixed monthly budget for each development account
- create an SCP that denies access to expensive services. Apply the SCP to an OU containing the development accounts
- create a Budgets alert action to send an SNS notification when the budget amount is reached. Invoke a Lambda function to terminate all services.
- explanation
- you cannot define resource limits using SCPs
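A hedged sketch of a fixed monthly budget with an SNS-notified threshold (account ID, amount, and topic ARN are illustrative):

```bash
# Fixed $100/month cost budget that notifies an SNS topic at 100% of actual spend
aws budgets create-budget \
  --account-id 111122223333 \
  --budget '{
    "BudgetName": "dev-account-monthly",
    "BudgetLimit": { "Amount": "100", "Unit": "USD" },
    "TimeUnit": "MONTHLY",
    "BudgetType": "COST"
  }' \
  --notifications-with-subscribers '[{
    "Notification": {
      "NotificationType": "ACTUAL",
      "ComparisonOperator": "GREATER_THAN",
      "Threshold": 100,
      "ThresholdType": "PERCENTAGE"
    },
    "Subscribers": [{
      "SubscriptionType": "SNS",
      "Address": "arn:aws:sns:us-east-1:111122223333:budget-alerts"
    }]
  }]'
```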
- IdP didn't work, checklist
- the company's IdP defines assertions that properly map users or groups in the company to IAM roles with appropriate permissions
- the trust policy of the IAM roles created for the federated users or federated groups has the SAML provider set as the principal
- the web portal calls the STS `AssumeRoleWithSAML` API with the ARN of the SAML provider, the ARN of the IAM role, and the SAML assertion from the IdP
- explanation
- users need to have permissions to access the role
- the role that is being assumed must be allowed to federate using SAML, as it is the role that performs the `sts:AssumeRoleWithSAML` action
- create an API Gateway HTTP API. Configure this API with integrations to Lambda functions that return data from the DynamoDB tables
- a REST API must be used to provide a direct AWS service integration to DynamoDB
- but a REST API is less cost-efficient than an HTTP API + Lambda
- configure SCPs within AWS control tower to disallow assigning public IP addresses to EC2 instances across all OUs.
- by configuring SCPs in Control Tower, the organization can effectively enforce policies at the account level, ensuring that all accounts within an OU comply with the established policies
- enable a cross-Region read replica for the RDS database. In the case of an outage, promote the replica to be a standalone DB instance. Point applications to the new DB endpoint and create a read replica to maintain high availability
- a large number of reads and writes exhausted the I/O credit balance due to provisioning low disk storage during the setup phase
- The database was running performantly for several weeks until a peak shopping period when customers experienced slow performance and timeouts.
- When using General Purpose SSD storage, a DB instance receives an initial I/O credit balance of 5.4 million I/O credits. This initial credit balance is enough to sustain a burst performance of 3,000 IOPS for 30 minutes. This balance is designed to provide a fast initial boot cycle for boot volumes and to provide a good bootstrapping experience for other applications.
- set up a Client VPN endpoint, associate it with a subnet in the VPC, and configure a Client VPN self-service portal. Instruct developers to connect using the Client VPN client.
- Client VPN is a managed client-based VPN service that enables secure access to AWS resources in a VPC
- the self-service portal simplifies the connection process, making it an ideal solution for a team
- explanation
- Site-to-Site VPN is more suited for connecting entire networks
- a bastion host is cumbersome and less efficient compared to a VPN
- install the Application Discovery Service Discovery Connector in VMware vCenter. Install the Application Discovery Service Discovery Agent on the physical on-premises servers. Allow the agents to collect data for a period of time
- create a new DX connection to the same Region. Provision a DX gateway and establish new private VIFs to the virtual private gateways in the VPCs in each Region
- a DX gateway is a globally available resource
- The company requires redundancy for the existing DX connection in the same Region and will then need a DX gateway to connect across Regions.
- configure the KDF delivery stream to partition the data in S3 by date and event type. Redefine the Athena table to include these partitions and modify the queries to specifically target relevant partitions
- an infrastructure services platform for end users
- define the infrastructure services in CloudFormation templates. Upload each template as a Service Catalog product to portfolios created in a central AWS account. Share these portfolios with the Organizations structure created for the company
- allow IAM users to have `AWSServiceCatalogEndUserReadOnlyAccess` permissions only. Assign the policy to a group called EndUsers and add all users to the group. Apply launch constraints
- explanation
- Service Catalog combined with a launch constraint uses a dedicated IAM role that ensures least-privilege access
- without a launch constraint, end users must launch and manage products using their own IAM credentials
- CVE and compliance
- use Inspector to run the CVE assessment package on the EC2 instances launched from the approved AMIs
- use Lambda to write automatic approval rules. Store the approved AMI list in Systems Manager Parameter Store. Use EventBridge to trigger a Systems Manager Automation document on all EC2 instances every 30 days
- Create a Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger a Lambda function on file delivery to start a Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
- configure the reserved concurrency limit for the new Lambda. Monitor the existing critical Lambdas with CW alarms on the Throttles Lambda metric
- concurrency is subject to a Regional quota that is shared by all Lambda functions in a Region
- to ensure that a function can always reach a certain level of concurrency, you can configure the function with reserved concurrency
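A minimal sketch (function name and limit illustrative):

```bash
# Reserve 100 concurrent executions for a critical function
aws lambda put-function-concurrency \
  --function-name my-critical-function \
  --reserved-concurrent-executions 100
```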
- create a new public VIF for the existing DX connection, and create a new VPN that connects the VPC over the DX public VIF
- A public VIF must be used when using an IPSec VPN over a DX connection
- Add a SG rule to the ALB to allow traffic from the AWS managed prefix list for CloudFront only
- attach each VPC to a shared transit gateway. Use an egress VPC with firewall appliances in two AZs and connect the transit gateway using IPSec VPNs with BGP
- the company should create an IAM role, assign the required permissions to it, and include the external ID in the IAM role's trust policy. The customer should then use the IAM role's ARN (via AssumeRole) when requesting access to perform the required tasks
- use DataSync to schedule a daily task that replicates data between the on-premises file share and FSx
- AWS recommends using AWS DataSync to transfer data between FSx for Windows File Server file systems.
- DataSync is a data transfer service that simplifies, automates, and accelerates moving and replicating data between on-premises storage systems and other AWS storage services over the internet or AWS Direct Connect.
- DataSync can transfer your file system data and metadata, such as ownership, timestamps, and access permissions.
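A hedged sketch of a scheduled DataSync task (location ARNs and the nightly window are illustrative):

```bash
# Nightly task replicating the on-premises share location to the FSx location
aws datasync create-task \
  --source-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0example1 \
  --destination-location-arn arn:aws:datasync:us-east-1:111122223333:location/loc-0example2 \
  --name nightly-fsx-sync \
  --schedule ScheduleExpression="cron(0 3 * * ? *)"
```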
- Use CodePipeline to create a change set when updates are made to the CloudFormation templates in Gitlab. Include a CodePipeline action to test the deployment with testing scripts run using CodeBuild. Upon successful testing, configure CodePipeline to execute the change set and deploy to production
- CloudFormation with Budgets
- Update the CloudFormation template to include the `AWS::Budgets::Budget` resource with the `NotificationsWithSubscribers` property
- Create a Service Catalog portfolio for each team. Add each team's Redshift cluster as a CloudFormation template to their Service Catalog portfolio as a Product.
- explanation
- You can use AWS Budgets to track your service costs and usage within AWS Service Catalog. You can associate budgets with AWS Service Catalog products and portfolios.
- Use System Manager Patch Manager to deploy patches on the on-premises servers and EC2 instances. Use Systems Manager to generate patch compliance reports.
- use Tag Editor to tag existing resources. Create cost allocation tags to define the cost center and projectID. use SCPs to restrict the creation of resources that do not have the cost center and project ID tags specified.
- You can use tags to organize your resources, and cost allocation tags to track your AWS costs on a detailed level. After you activate cost allocation tags, AWS uses the cost allocation tags to organize your resource costs on your cost allocation report, to make it easier for you to categorize and track your AWS costs.
- Throughput Optimized HDD (st1) only supports a maximum of 500 IOPS per volume, while gp2 supports bursts up to 3,000 IOPS
- create a Step Functions workflow to run the Lambdas in parallel. Create a Lambda function to retrieve a list of files and write each item to SQS. Configure a Lambda function to retrieve messages from SQS and call the `StartExecution` API
- Perform multiple copy operations at one time by running each command from a separate terminal window, in separate instances of the Snowball client
- to improve transfer speed from your data source to the Snowball:
- use the latest Snowball client
- batch small files together
- perform multiple copy operations at one time
- copy from multiple workstations
- transfer directories, not files
- Create a Cost and Usage Report from the AWS Organizations management account. Allow each team to visualize the CUR through a QuickSight dashboard
- You can generate a CUR from either a management or a member account. If you generate it from member accounts, you must do this individually for each member account, which is a lot of work.
- Add a custom "flag as spam" button to the contact control panel (CCP) in Amazon Connect. This button triggers a Lambda to update call attributes and log the number in a DynamoDB table. Adapt the contact flows to reference these attributes and interact with the DynamoDB table for future call filtering
- Use Amazon WorkSpaces for providing cloud desktops. Connect it to the on-premises network via VPN, integrate with the on-premises Active Directory using an AD connector, and set up a RADIUS server to enable MFA.
- Create EC2 instances for the service. Create one Elastic IP address for each AZ. Create an NLB and expose the assigned TCP port. Assign the Elastic IP addresses to the NLB for each AZ. Create a target group and register the EC2 instances with the NLB. Create a new A (alias) record set named `cloud.myservice.com` and assign the NLB DNS name to the record set.
- binding an ECS cluster name directly behind an NLB is not an option (dynamic port mapping)
- Use Beanstalk and create a secondary environment configured as a deployment target for the CI/CD pipeline. To deploy, swap the staging and production environment URLs.
- With Beanstalk, you can perform a blue/green deployment (swap CNAME)
- `StackSets` are used for deploying a stack to different accounts or Regions, and there is no StackSet address to swap
- Reserve capacity for the RDS database and the minimum number of EC2 instances that are constantly running
- RIs for constantly running instances, On-Demand for handling traffic spikes
- Convert the Aurora Serverless v1 database to a multi-Region Aurora MySQL database, ensuring continuous data replication across the primary and a secondary Region. Use SAM to script the application deployment in the secondary Region for rapid recovery
- using SAM to prepare the application deployment script for the secondary Region aligns with the RTO of 10 minutes
- Create a new cross-account IAM role in the production account with write access to the S3 bucket. Modify the build pipeline to assume this role to upload the files to the production account
- OAI is a special user account that is associated with a CloudFront distribution
- cross Region DR
- enable S3 cross-Region replication on the buckets that contain images
- enable DynamoDB global tables to achieve multi-Region table replication
- enable Route53 health checks to determine if the primary site is down, and route traffic to the disaster recovery site if there is an issue
- Create a PrivateLink interface VPC endpoint. Connect this endpoint to the endpoint service that the third-party SaaS application provides. Create a SG to limit the access to the endpoint and associate the SG with the endpoint.
- PrivateLink provides you the option to connect to SaaS products privately, as if they were running in your own VPC
- a service consumer creates a VPC endpoint to connect their VPC to an endpoint service. A service consumer must specify the service name of the endpoint service when creating a VPC endpoint.
- endpoint types
- interface - Create an interface endpoint to send traffic to endpoint services that use an NLB to distribute traffic. Traffic destined for the endpoint service is resolved using DNS.
- GatewayLB - Create a Gateway Load Balancer endpoint to send traffic to a fleet of virtual appliances using private IP addresses. You route traffic from your VPC to the Gateway Load Balancer endpoint using route tables. The Gateway Load Balancer distributes traffic to the virtual appliances and can scale with demand.
- Gateway - Create a gateway endpoint to send traffic to Amazon S3 or DynamoDB using private IP addresses. You route traffic from your VPC to the gateway endpoint using route tables. Gateway endpoints do not enable AWS PrivateLink.
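A minimal sketch of the gateway-endpoint case (IDs illustrative):

```bash
# Gateway endpoint for S3, wired into a route table (no PrivateLink involved)
aws ec2 create-vpc-endpoint \
  --vpc-id vpc-0example \
  --vpc-endpoint-type Gateway \
  --service-name com.amazonaws.us-east-1.s3 \
  --route-table-ids rtb-0example
```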
- Configure on-demand capacity mode for the table to enable pay-per-request pricing for read and write requests
- Use DMS to migrate data to DynamoDB using a continuous replication task. Refactor the API to use DynamoDB. Implement the refactored API in API Gateway and enable API caching.
- Kafka & real-time solution
- Establish a DX connection from the on-premises data center to AWS
- Create an EC2 Auto Scaling group to pull the messages from the on-premises Kafka cluster and use the Amazon Kinesis Producer Library (KPL) to put the data into a KDS
- Create a WebSocket API in API Gateway, create a Lambda (KCL) to process the KDS, and use the `@connections` command to send callback messages to connected clients
- IdP
- use a custom-built OpenID Connect-compatible solution for authentication and use Cognito for authorization
- use a custom-built SAML-compatible solution that uses LDAP for authentication and a SAML assertion to perform authorization against the IAM identity provider
- Create an alias for new versions of the Lambda. Use the CLI `update-alias` command with the `routing-config` parameter to distribute the load
- a Lambda alias should be used to point to a new version
- users can access the new function version using the alias ARN
aws lambda update-alias --function-name myfunction --name myalias --routing-config '{"AdditionalVersionWeights" : {"2" : 0.05} }'
- CodePipeline
- Create a CodePipeline pipeline that sources the tool code from the CodeCommit repository and initiates a CodeBuild build
- Create a CodeBuild project that pulls the latest container image from ECR, updates the container with code from the source CodeCommit repository, and pushes the updated container image to ECR
- Create an ECR repository for the image. Create a CodeCommit repository containing code for the tool being deployed to the container image in ECR
- Define the AWS resources using JS or TS. Use the AWS CDK to create CloudFormation templates from the developers’ code and use AWS CDK to create CloudFormation stacks. Incorporate the CDK as a CodeBuild job in CodePipeline
- Use an S3 static website for the web application. Store uploaded videos in an S3 bucket. Use S3 event notifications to publish events to SQS. Process the queue with a Lambda that calls the `Rekognition` API to perform facial analysis
- Use an SCP in Organizations to implement a deny list of AWS services. Apply this SCP at each OU level. Leave the default AWS managed SCP attached to the root level and all OUs. For accounts that require specific exceptions, create an OU under the root and attach an SCP that denies fewer services.
- an explicit deny at a higher level will override an allow at any level beneath
- private hosted zone sharing (between two accounts)
- create an authorization to associate the private hosted zone in the Management account with the new VPC in the production account
- Associate the new VPC in the production account with the hosted zone in the Management account. Delete the association authorization in the Management account.
- sequence steps
- connect to an EC2 instance in the Management account
aws route53 list-hosted-zones
aws route53 create-vpc-association-authorization --hosted-zone-id <id> --vpc VPCRegion=<region>,VPCId=<vpc-id> --region <region>
- connect to an EC2 instance in the target account
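The completing calls from there are, as a hedged sketch (standard Route53 commands; placeholders as above):

```bash
# Run from the target (production) account to complete the association
aws route53 associate-vpc-with-hosted-zone --hosted-zone-id <id> --vpc VPCRegion=<region>,VPCId=<vpc-id>

# Then, back in the Management account, clean up the authorization
aws route53 delete-vpc-association-authorization --hosted-zone-id <id> --vpc VPCRegion=<region>,VPCId=<vpc-id>
```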
- Use Lambda to create daily EBS snapshots and copy them to the disaster recovery Region. Implement an Aurora Replica in the DR Region. Use Route53 with an active-passive failover configuration. Use EC2 in an ASG with the capacity set to 0 in the DR Region.
- minimal cost, RTO 6 hours, RPO 24 hours
- Create a customer managed policy document for each project that requires access to AWS resources. Specify full control of the resources that belong to the project. Attach the project-specific policy document to an IAM group. Change the group membership when developers change projects. Update the policy document when the set of resources changes.
- A policy document specifying full control to resources for Developers in that group can be created
- The only way to un-encrypt an encrypted database is to export the data and import it into another, unencrypted DB instance.
- Set up a monitoring system in the Organization’s central account using Budgets. Focus on tracking the EC2 operation, setting a monitoring interval to daily. Define a budget limit that is 15% above the 45-day average usage of EC2, as determined by Cost Explorer, and configure alerts for the architecture team when this limit is reached.
- AWS Budgets is a versatile tool that allows organizations to set custom budgets for tracking their AWS spending and usage.
- Convert the API Gateway Regional endpoint to an edge-optimized endpoint. Enable caching in the production stage.
- introducing an ElastiCache cluster could be expensive
- Create a VPC Endpoint Service that accepts TCP traffic and is hosted behind an NLB. Enable access to the IT services over the DX connection.
- Use GitHub webhooks to trigger the CodePipeline pipeline. Use the Jenkins plugin for CodeBuild to conduct unit testing. Send alerts to an SNS topic for any bad builds. Deploy in a blue/green deployment using CodeDeploy
- Use the VMware vSphere client to export the application as an image in OVF format. Create an S3 bucket to store the image in the destination Region. Create and apply an IAM role for VM import. Use CLI to run the EC2 import command.
- Create a WAF web ACL with a rule to allow access from the IP addresses used by the companies. Associate the web ACL with the API. Create a usage plan with a request limit and associate it with the API. Create an API key and add it to the usage plan.
- Use CloudFront with S3 to host the web application. Use AppSync to build the application APIs. Use Cognito groups for RBAC. Authorize data access by leveraging Cognito groups in AppSync resolvers.
- GraphQL can be used to query multiple databases, micro-services, APIs
- CloudHSM: enable quorum authentication
- using the `cloudhsm_mgmt_util` command line tool, enable encrypted communication and log in as a CO
- set the quorum minimum value to two using the `setMValue` command
- register a key for signing with the `registerMofnPubKey` command
- In each regional account, establish the `SecurityAudit` role and grant permission to the central account to assume this role
- Use DX along with a Site-to-Site VPN to establish a connection between the data center and AWS
- this combination provides an IPsec-encrypted private connection that also reduces network costs and increases bandwidth throughput
- debug RDS & Application
- configure the Aurora MySQL DB cluster to publish slow query and error logs to CW logs
- set up the X-Ray SDK to trace incoming HTTP requests on the EC2 as well as set up tracing of SQL queries with the X-Ray SDK for Java
- Install and configure a CW Logs agent on the EC2 instances to send the application logs to CW Logs
- FSx for Windows monitoring
- Configure a new Amazon FSx for Windows file system with a deployment type of Multi-AZ. Transfer data to the newly created file system using the DataSync service. Point all the file system users to the new location. You can test the failover of your Multi-AZ file system by modifying its throughput capacity
- You can monitor storage capacity and file system activity using CW, and monitor end-user actions with file access auditing using CW logs and KDF
- explanation
- In a Multi-AZ deployment, Amazon FSx automatically provisions and maintains a standby file server in a different Availability Zone.
- You can monitor storage capacity and file system activity using Amazon CloudWatch, monitor all Amazon FSx API calls using AWS CloudTrail, and monitor end-user actions with file access auditing using Amazon CloudWatch Logs and Amazon Kinesis Data Firehose.
- You must not modify or delete the elastic network interfaces associated with your file system. Modifying or deleting the network interface can cause a permanent loss of connection between your VPC and your file system.
- To encrypt data outside of KMS:
- use the `GenerateDataKey` operation to get a data key
- use the plaintext data key to encrypt your data outside of KMS
- store the encrypted data key with the encrypted data
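A minimal sketch of step one (key alias illustrative); the response carries both the plaintext key for local encryption and the encrypted blob to store:

```bash
# Request a 256-bit data key: Plaintext encrypts locally, CiphertextBlob is stored
aws kms generate-data-key \
  --key-id alias/my-app-key \
  --key-spec AES_256
```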
- Suspend the ASG's `Terminate` process. Use Session Manager to log in to an instance that is marked as unhealthy and analyze the system logs to figure out the root cause
- If you suspend the `HealthCheck` process, then none of the instances would be marked as unhealthy.
- Enabling EC2 instance termination protection (the DisableApiTermination attribute) does not prevent Amazon EC2 Auto Scaling from terminating an instance.
- Deploy a DataSync agent to an on-premises server that has access to the NFS file system. Send data over the DX connection to a PrivateLink interface VPC endpoint for EFS by using a private VIF. Configure a DataSync scheduled task to send the images to the EFS file system every night
- bucket owner enforced setting
- If you used object ACLs for permissions management before you applied the `bucket owner enforced` setting and you didn't migrate these object ACL permissions to your bucket policy, then after you re-enable ACLs these permissions are restored
- You, as the bucket owner, still own any objects that were written to the bucket while the `bucket owner enforced` setting was applied. These objects are not owned by the `object writer`, even if you re-enable ACLs
- S3 Access Denied while serving a static website
- Objects can’t be encrypted by KMS
- The AWS account that owns the bucket must also own the object
- explanation
- You must remove KMS encryption from the objects that you want to serve using the Amazon S3 static website endpoint. Instead of using AWS KMS encryption, use AES-256 to encrypt your objects.
- To allow public read access to objects, the AWS account that owns the bucket must also own the objects.
- The object-ownership requirement applies to public read access granted by a bucket policy. It doesn’t apply to public read access granted by the object’s access control list (ACL).
- CloudFront S3 307 Temporary Redirect
- CloudFront, by default, forwards requests to the default S3 endpoint. Change the origin domain name of the distribution to include the Regional endpoint of the bucket
- When a new S3 bucket is created, it takes up to 24 hours before the bucket name propagates across all Regions
- Create an AWS Organizations organization-wide AWS Config rule that mandates all resources in the selected OUs to be associated with the AWS WAF rules. Configure automated remediation actions by using AWS Systems Manager Automation documents to fix non-compliant resources. Set up AWS WAF rules by using an AWS CloudFormation stack set to target the same OUs where the AWS Config rule is applied.
- Set up an interface VPC endpoint for Kinesis Data Streams in the VPC. Ensure that the VPC endpoint policy allows traffic from the applications
- VPC endpoint policies enable you to control access by either attaching a policy to a VPC endpoint or by using additional fields in a policy that is attached to an IAM user, group, or role to restrict access to only occur via the specified VPC endpoint.
- Use AWS X-Ray to analyze the micro-services applications through request tracing. Configure Amazon CloudWatch for monitoring containers, latency, web server requests, and incoming load-balancer requests and create CloudWatch alarms to send out notifications if system latency is increasing
- Storage Gateway doesn't automatically update the cache when you upload a file directly to Amazon S3. Perform a `RefreshCache` operation to see the changes on the file share
- Set up KDF in the logging account and then subscribe the delivery stream to CW Logs streams in each application AWS account via subscription filters. Persist the log data in an S3 bucket inside the logging AWS account
- CloudWatch Logs agent cannot publish data to a Kinesis Data Firehose stream
- Use DataSync to migrate existing data to S3 and use File Gateway for low latency access to the migrated data for ongoing updates from the on-premises applications
- Configure DMS data validation on the migration task so it can compare the source and target data for the DMS task and report any mismatches
- DMS compares the source and target records and then reports any mismatches
- for a CDC-enabled task, DMS compares the incremental changes and reports any mismatches
- explanation
- table metrics to capture statistics such as insert, update, delete and DDL statements completed for the tables being migrated
- Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations
- the owner cannot share the VPC itself
- VPC peering does not facilitate centrally managed VPCs
- Set up a CloudFormation stack set for Redshift cluster creation so it can be launched in another Region and configure Redshift to automatically copy snapshots for the cluster to the other Region. In case of a disaster, restore the cluster in the other Region from snapshot
- share CF stack set across Regions/accounts
- Route53 DNS requests and subsequent application traffic routed through CloudFront are inspected inline. Always-on monitoring, anomaly detection, and mitigation against common infrastructure DDoS attacks
- multiple VPCs and S3
- In the AWS account that owns the S3 buckets, create an S3 access point for each bucket that the applications must use to access the data. Set up all applications in a single data lake VPC.
- Create a gateway endpoint for S3 in the data lake VPC. Attach an endpoint policy to allow access to the S3 buckets only via the access points. Specify the route table that is used to access the buckets.
- add a bucket policy on the buckets to deny access from applications outside the data lake VPC
- Lambda best practice
- If you intend to reuse code in more than one Lambda function, you should consider creating a Lambda Layer for the reusable code
- By default, Lambda functions always operate from an AWS owned VPC and hence have access to any public internet address or public AWS APIs. Once a Lambda function is VPC-enabled, it will need a route through a NAT gateway in a public subnet to access public resources
- Since Lambda can scale extremely quickly, it's a good idea to deploy a CW alarm that notifies your team when function metrics such as `ConcurrentExecutions` or `Invocations` exceed the expected threshold
- explanation
- AWS recommends that you should not over-provision your function timeout settings. Always understand your code performance and set a function timeout accordingly.
- All the dependencies can be packaged into the single Lambda deployment package without any performance impact.
- Transfer the on-premises data into multiple Snowball Edge Storage Optimized devices. Copy the Snowball Edge data into S3 and create a lifecycle policy to transition the data into Glacier
- Snowmobile is for at least 10 PB
- Snowmobile supports importing data directly into Glacier
- Snowball Edge can't copy data directly into Glacier
- the engineering team can address the shortfall of 3,000 IOPS by increasing the EBS volume size by 1 TB, which will add 3,000 IOPS (3 IOPS per GB * 1,000 GB) to the EBS volume on each instance
- Instance X is in the default SG. The default rules for the default SG allow inbound traffic from network interfaces (and their associated instances) that are assigned to the same SG. Instance Y is in a new SG. The default rules for a SG that you create allow no inbound traffic.
- Use Redshift Spectrum to create Redshift cluster tables pointing to the underlying historical data in S3. The analytics team can then query this historical data to cross-reference with the daily reports from Redshift
- Use S3 Glacier vault to store the sensitive archived data and then use a vault lock policy to enforce compliance controls
- Amazon S3 access control lists (ACLs) enable you to manage access to buckets and objects. It cannot be used to enforce compliance controls.
- continuously assessing, auditing, and monitoring the configurations
- Leverage Config rules to audit changes to AWS resources and monitor the compliance of the configuration by running the evaluations for the rule at a frequency that you choose. Develop Config custom rules to establish a test-driven development approach by triggering the evaluation when any resource that matches the rule’s scope changes in configuration
- Enable trails and set up CloudTrail events to review and monitor management activities of all AWS accounts by logging these activities into CW logs using a KMS key. Ensure that CloudTrail is enabled for all accounts as well as all available AWS services
- When you create a launch configuration, the default value for the instance placement tenancy is null and the instance tenancy is controlled by the tenancy attribute of the VPC.
- If you set the Launch Configuration Tenancy to default and the VPC Tenancy is set to dedicated, then the instances have dedicated tenancy.
- If you set the Launch Configuration Tenancy to dedicated and the VPC Tenancy is set to default, then again the instances have dedicated tenancy.
- Use AWS Global Accelerator to distribute a portion of traffic to a particular deployment
- AWS Global Accelerator is a network layer service that directs traffic to optimal endpoints over the AWS global network, which improves the availability and performance of your internet applications
- Redshift snapshot DR
- Create a snapshot copy grant in the destination Region for a KMS key in the destination Region. Configure Redshift cross-Region snapshots in the source Region
- You cannot create a snapshot copy grant in the destination Region for a KMS key in the source Region.
- SCP
- SCPs do not affect service-linked roles
- If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can’t perform that action
- SCPs affect all users and roles in attached accounts, including the root user
- When a KDS is configured as the source of a KDF delivery stream, Firehose's `PutRecord` and `PutRecordBatch` operations are disabled and the Kinesis Agent cannot write to the Firehose delivery stream directly.
- Configure SAML-based authentication tied to an IAM role that has the `PowerUserAccess` managed policy attached to it. Attach a customer managed policy that denies access to RDS in any Region except us-east-1.
- `PowerUserAccess` provides full access to AWS services and resources but does not allow management of users and groups.
- Service Catalog will deny all other services it doesn't list
- WAF country / IP set
- use a WAF IP set statement that specifies the IP addresses that you want to allow through
- use a WAF geo match statement listing the countries that you want to block
- use message timers to postpone the delivery of certain messages to the queue by one minute
- You can use message timers to set an initial invisibility period for a message added to a queue.
- The default (minimum) delay for a message is 0 seconds. The maximum is 15 minutes.
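A minimal sketch of a per-message timer (queue URL illustrative):

```bash
# Deliver this message with a 60-second initial invisibility period
aws sqs send-message \
  --queue-url https://sqs.us-east-1.amazonaws.com/111122223333/my-queue \
  --message-body '{"orderId": 42}' \
  --delay-seconds 60
```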
- Use AWS Storage Gateway volume gateway (cached volumes) to store the most frequently accessed results locally for low-latency access while storing the full volume with all results in its Amazon S3 service bucket
- S3 upload/download latency
- use a CloudFront distribution with the S3 bucket as the origin. This speeds up uploads as well as downloads of the video files.
- enable Amazon S3 Transfer Acceleration for the S3 bucket. This speeds up uploads as well as downloads of the video files.
- Set up VPN CloudHub between branch offices and corporate headquarters, which will enable branch offices to send and receive data with each other as well as with their corporate headquarters
- Performance improvement
- Configure EC2 instances behind an ALB with round-robin routing and sticky sessions enabled
- Enable Aurora Auto Scaling for Aurora Replicas.
- Explanation
- Aurora Replicas are provisioned for an Aurora DB cluster using single-master replication.
- Round robin is a good choice when the requests and targets are similar, or if you need to distribute requests equally among targets.
- Use Centralized VPC Endpoints for connecting with multiple VPCs, also known as shared services VPC.
- RDS / Aurora
- Multi-AZ deployments for both RDS MySQL and Aurora MySQL use synchronous replication
- The primary and standby DB instance are upgraded at the same time for RDS MySQL Multi-AZ. All instances are upgraded at the same time for Aurora MySQL
- Read Replicas can be manually promoted to a standalone database instance for RDS MySQL whereas Read Replicas for Aurora MySQL can be promoted to the primary instance
- Use SAM and leverage its built-in traffic-shifting feature to deploy the new Lambda version via CodeDeploy, and use pre-traffic and post-traffic test functions to verify the code. Roll back if CW alarms are triggered
- You can use CloudFormation change sets to preview how proposed changes to a stack might impact your running resources
- you would not know about any potential failures until you actually deploy the stack and point to the new endpoint.
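A minimal boto3 sketch of previewing changes with a change set, assuming a hypothetical stack name and local template file:

```python
import boto3

cfn = boto3.client("cloudformation")

# Create a change set against a running stack (names are placeholders).
with open("template.yaml") as f:
    cfn.create_change_set(
        StackName="example-stack",
        ChangeSetName="preview-update",
        TemplateBody=f.read(),
        ChangeSetType="UPDATE",
    )

# Inspect the proposed resource changes before executing anything.
waiter = cfn.get_waiter("change_set_create_complete")
waiter.wait(StackName="example-stack", ChangeSetName="preview-update")
described = cfn.describe_change_set(
    StackName="example-stack", ChangeSetName="preview-update"
)
for change in described["Changes"]:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"], rc["ResourceType"])
```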
- API Gateway 504
- Process the X-Ray traces and analyze the HTTP methods to determine the root cause of the HTTP errors
- Process and analyze the CW logs for Lambda to determine processing times for requested images at pre-configured intervals
- explanation
- API Gateway execution logs contain helpful information that you can use to identify and fix most errors with your API
- API Gateway access logs contain details about who accessed your API and how they accessed it
- Set up a DX to each on-premises data center from different service providers and configure routing to failover to the other on-premises data center’s DX in case one connection fails. Make sure that no VPC CIDR blocks overlap one another or the on-premises network
- Set up separate Lambda functions to provision and terminate the Beanstalk environment. Configure a Lambda execution role granting the required Beanstalk environment permissions and assign the role to the Lambda functions. Configure cron-expression-based EventBridge rules to trigger the Lambdas
- ASG behavior
- If the AZs become unbalanced, EC2 Auto Scaling compensates by rebalancing them. When rebalancing, EC2 Auto Scaling launches new instances before terminating the old ones, so that rebalancing does not compromise the performance or availability of your application
- EC2 Auto Scaling creates a new scaling activity for terminating the unhealthy instance and then terminates it. Later, another scaling activity launches a new instance to replace the terminated one
- S3 public access detection
- Enable object-level logging for S3. Set up an EventBridge event pattern that matches when a `PutObject` API call with public-read permission is detected in the CloudTrail logs, and set the target as an SNS topic for downstream notifications (see the sketch after this list)
- Configure a Lambda function as one of the SNS topic subscribers, which is invoked to secure the objects in the S3 bucket
- You can use AWS Access Analyzer to receive findings on the source and level of public or shared access for each public or shared bucket. It cannot be used for near real-time detection of a new public object uploaded to S3, and it cannot invoke a Lambda.
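A sketch of the event pattern and SNS target with boto3, assuming a hypothetical bucket and topic ARN and that object-level (data event) logging is already enabled in CloudTrail; the `x-amz-acl` request parameter carries the canned ACL in the CloudTrail record:

```python
import json
import boto3

events = boto3.client("events")

# Match CloudTrail-recorded PutObject calls that request the public-read
# canned ACL (bucket name is a placeholder).
pattern = {
    "source": ["aws.s3"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventName": ["PutObject"],
        "requestParameters": {
            "bucketName": ["example-bucket"],
            "x-amz-acl": ["public-read"],
        },
    },
}

events.put_rule(Name="detect-public-put-object", EventPattern=json.dumps(pattern))

# Send matches to an SNS topic for downstream notification (ARN is a placeholder).
events.put_targets(
    Rule="detect-public-put-object",
    Targets=[{"Id": "sns-target", "Arn": "arn:aws:sns:us-east-1:123456789012:s3-alerts"}],
)
```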
- There are no S3 data transfer charges when data is transferred in from the internet. Also with S3TA, you pay only for transfers that are accelerated.
- Database backup, snapshot, replication
- Automated backups, manual snapshots, and read replicas are supported across multiple Regions
- Databases snapshots are user-initiated backups of your complete DB instance that serve as full backups. These snapshots can be copied and shared to different Regions and accounts.
- Use custom routing accelerator of Global Accelerator to deterministically route one or more users to specific instance using VPC subnet endpoints
- A custom routing accelerator is a new type of accelerator in Global Accelerator. It allows you to use your own application logic to deterministically route one or more users to a specific Amazon EC2 instance destination in a single or multiple AWS Regions.
- Custom routing accelerators support only VPC subnet endpoints.
- VPN based on DX
- create a VPC with a virtual private gateway
- create an IPSec tunnel between your customer gateway appliance and the virtual private gateway
- set up a public virtual interface on the DX connection
- Configure CloudFront to use a custom header and configure a WAF rule on the origin's ALB to accept only traffic that contains the header
- Create a new ASG launch configuration that uses the newly created AMI. Double the size of the ASG and allow the new instances to become healthy and then reduce the ASG back to the original size. If the new instances do not work as expected, associate the ASG with the old launch configuration.
- Create a CloudFormation template describing the application infrastructure in the Resources section. Use CloudFormation stack set from an administrator account to launch stack instances that deploy the application to various other regions.
- template
- stack - manage related resources as a single unit
- change set - if you need to make changes to the running resources in a stack
- stack set - a stack set lets you create stacks in AWS accounts across Regions by using a single CloudFormation template
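A minimal boto3 sketch of the stack set approach from the administrator account, with placeholder account IDs, Regions, and template file name:

```python
import boto3

cfn = boto3.client("cloudformation")

# One template, managed centrally (file name is a placeholder).
with open("template.yaml") as f:
    cfn.create_stack_set(StackSetName="app-infra", TemplateBody=f.read())

# Launch stack instances into member accounts across Regions.
cfn.create_stack_instances(
    StackSetName="app-infra",
    Accounts=["111111111111", "222222222222"],  # placeholder member accounts
    Regions=["us-east-1", "eu-west-1"],
)
```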
- Create a new S3 bucket to be used for replication. Create a new S3 Replication Time Control (S3 RTC) rule on the source S3 bucket that filters data based on the prefix (high-value claim type) and replicates it to the new S3 bucket. Leverage an S3 event notification to trigger a notification when the time to copy the claim data exceeds the desired threshold.
- Replication events are available within 15 minutes of enabling S3 RTC. You can use Amazon S3 event notifications to track replication objects. Amazon EventBridge does not support receiving S3 object replication events.
- WAF log destination
- CW logs
- S3
- KDF
- Enable default encryption on the S3 bucket that uses Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging. Use Redshift Spectrum to query data for monthly audits.
- Currently, you can only use Amazon S3-managed keys (SSE-S3) encryption (AES-256) for audit logging.
- KCL (Kinesis Client Library) & KDS
- KCL can only use DynamoDB for checkpointing
- Each KCL application must use its own DynamoDB table
- DMS & Redshift
- Add the subnet CIDR range or IP address of the replication instance to the inbound rules of the Redshift cluster SG
- Your Redshift cluster must be in the same account and same Region as the replication instance
- explanation
- Redshift's underlying data warehouse storage is S3, which is Regional
- DMS doesn’t support custom DNS names when configuring an endpoint for a Redshift cluster, and you need to use the Amazon provided DNS name.
- Don’t enable versioning for the S3 bucket you use as intermediate storage for your Amazon Redshift target. If you need S3 versioning, use lifecycle policies to actively delete old versions.
- If you use the AWS DMS console to create the endpoint, then DMS creates the required IAM roles and policies automatically. If you use the AWS Command Line Interface (AWS CLI) or the AWS DMS API, you must create the IAM roles and policies manually.
- Use AWS Elemental MediaConvert for file-based video processing and CloudFront for delivery. Use video streaming protocols like HLS and create a manifest file. Point the CloudFront distribution at the manifest.
- CloudTrail log integrity
- Enable CloudTrail log file integrity validation
- Use S3 MFA Delete on the S3 bucket that holds CloudTrail logs and digest files
- DynamoDB usage
- Set up a new DynamoDB table each day and drop the table for the previous day after its data is written on S3
- dropping a table is more efficient than deleting all items
- Set up SQS to buffer writes and reduce provisioned write throughput
- You cannot stream data from Kinesis Data Streams directly to Redshift.
- Deploy the VPC infrastructure using CloudFormation and leverage a custom resource to request a unique CIDR range from an external IP address management (IPAM) service
- OpsWorks lets you use Chef and Puppet to automate how servers are configured, deployed, and managed across your Amazon EC2 instances or on-premises compute environments. You cannot deploy the VPC infrastructure using AWS OpsWorks.
- The `dry-run` flag checks whether you have the required permissions for the action, without actually making the request, and provides an error response.
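In boto3 the same check is the `DryRun` parameter; a sketch with a placeholder instance ID:

```python
import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2")

# DryRun validates permissions without performing the action; success is
# reported as a DryRunOperation error rather than a normal response.
try:
    ec2.stop_instances(InstanceIds=["i-0123456789abcdef0"], DryRun=True)
except ClientError as err:
    code = err.response["Error"]["Code"]
    if code == "DryRunOperation":
        print("permissions OK; request was not actually made")
    elif code == "UnauthorizedOperation":
        print("permission denied")
    else:
        raise
```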
- S3 VPC endpoint
- Gateway
- uses S3 public IP addresses
- does not allow access from on-premises
- does not allow access from another Region
- not billed
- Interface
- use private IP addresses from your VPC to access S3
- allow access from on-premises
- allow access from a VPC in another Region using VPC Peering or Transit Gateway
- billed
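A sketch of creating the gateway flavor with boto3, assuming placeholder VPC and route table IDs; the endpoint adds an S3 prefix-list route to the given route tables:

```python
import boto3

ec2 = boto3.client("ec2")

# Gateway endpoint for S3: not billed, traffic stays on the AWS network,
# and routing works via a prefix-list entry in the chosen route tables.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",            # placeholder VPC
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],  # placeholder route table
)
```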
- 192.168.2.16/28 and 192.168.2.0/28 (a /28 provides 2^4 = 16 addresses)
- Create a customer-managed KMS key and configure the key policy to grant permissions to the S3 service principal
- During SAML-based federation, pass an attribute for `DevelopmentDept` as an STS session tag. The policy of the assumed IAM role used by the developers should be updated with a deny action and a `StringNotEquals` condition comparing the `DevelopmentDept` resource tag with `aws:PrincipalTag/DevelopmentDept` (a sketch of the statement follows)
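A sketch of that deny statement as a Python dict, assuming the resource tag and session tag are both named `DevelopmentDept`:

```python
import json

# Deny any action on resources whose DevelopmentDept tag does not match the
# DevelopmentDept session tag passed during SAML federation.
deny_statement = {
    "Effect": "Deny",
    "Action": "*",
    "Resource": "*",
    "Condition": {
        "StringNotEquals": {
            "aws:ResourceTag/DevelopmentDept": "${aws:PrincipalTag/DevelopmentDept}"
        }
    },
}
print(json.dumps(deny_statement, indent=2))
```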
- Configure a public virtual interface on the DX connection. Create a Site-to-Site VPN between the customer gateway and the virtual private gateway in the VPC
- Configure traffic monitoring on the source EC2 instances hosting the VOIP program, set up a network monitoring program on a target EC2 instance, and stream the logs to an S3 bucket for further analysis
- it is incorrect to use logs to inspect network packets
- Amazon EC2 instances running within an Amazon VPC have built-in protection against packet sniffing.
- Launch a new instance in the new subnet via an AMI created from the old instance. Direct traffic to this new instance using Route53 and then terminate the old instance.
- Set up VPC flow logs for the ENIs associated with the instances and configure the flow logs to be filtered for rejected traffic. Publish the flow logs to CW Logs
- CloudFormation cross account usage
- In account A, create a customer-managed KMS key that grants usage permissions to account A's CodePipeline service role and account B. Also, create an S3 bucket with a bucket policy that grants account B access to the bucket
- In account B, create a cross-account IAM role. In account A, add the `AssumeRole` permission to account A's CodePipeline service role to allow it to assume the cross-account role in account B
- In account B, create a service role for the CloudFormation stack that includes the required permissions for the services deployed by the stack. In account A, update the CodePipeline configuration to include the resources associated with account B
- You can't export an Amazon-issued ACM public certificate for use on an EC2 instance or another custom web server because ACM manages the private key.
- Configure the applications behind private NLBs in separate VPCs. Set up each NLB as a PrivateLink endpoint service with associated VPC endpoints in the centralized VPC. Set up a public ALB in the centralized VPC and point the target groups to the private IP addresses of each endpoint. Set up host-based routing to route application traffic to the corresponding target group through the ALB.
- Create a new private subnet in the same VPC as the RDS instance. Create a new SG with the necessary inbound rules for QuickSight in the same VPC. Sign in to QuickSight as a QS admin and create a new QS VPC connection. Create a new dataset from the RDS instance.
- on-premises file gateway & S3
- create a VPC gateway endpoint and create the file gateway using this VPC endpoint
- create a VPC interface endpoint and create the file gateway using this VPC endpoint
- LDAP
- The application first authenticates against LDAP to retrieve the name of an IAM role associated with the user. It then assumes that role via a call to IAM STS. The application can then use the temporary credentials from the role to access the appropriate S3 bucket.
- Authenticate against LDAP using an identity broker you created, and have it call IAM STS to retrieve IAM federated user credentials. The application then gets the IAM federated user credentials from the identity broker to access the appropriate S3 bucket.
- Create a shared transit gateway. Have each spoke VPC connect to the transit gateway. Use a fleet of firewalls, each with a VPN attachment to the transit gateway, to route the outbound internet traffic.
- Transit Gateway enables customers to connect thousands of VPCs (simpler VPC-VPC communication management over VPC Peering).
- a default limit of 50 VPC peering connections per VPC
- the default limit for shared VPC subnets is 100.
- Use Systems Manager Patch Manager to manage and deploy the security patches of your EC2 instances based on the patch baselines from your on-premises data center. Install the SSM agent to all of your instances and automate the patching schedule by using SSM Maintenance Windows.
- On the CloudFormation template, create an AWS Secrets Manager secret resource for the database password. Modify the application to retrieve the database password from Secrets Manager when it launches. Use a dynamic reference for the secret resource as the value of the `MasterUserPassword` property of the `AWS::RDS::DBInstance` resource (a sketch follows the next bullet).
- creates an SNS topic and then adds a subscription using the ARN attribute name for the SQS resource, which is created under the logical name TutorialsDojoQueue
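A sketch of the dynamic reference, rendered here as a Python dict for a template fragment; the secret name `MyDBSecret` and the other property values are placeholders:

```python
import json

# The {{resolve:secretsmanager:...}} dynamic reference is substituted by
# CloudFormation at stack create/update time, so the password never
# appears in plain text in the template.
template_fragment = {
    "Resources": {
        "Database": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.micro",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": (
                    "{{resolve:secretsmanager:MyDBSecret:SecretString:password}}"
                ),
            },
        }
    }
}
print(json.dumps(template_fragment, indent=2))
```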
- Configure the Challenge-Handshake Authentication Protocol (CHAP) to authenticate iSCSI and initiator connections (Storage Gateway iSCSI security)
- Oracle RAC/RMAN is not supported in RDS
- Slow transfer speeds are due to encryption overhead when copying files to the Snowball Edge device. Open multiple sessions to the Snowball Edge device and initiate parallel copy jobs to improve the overall copying throughput.
- perform multiple write operations at one time
- transfer small files in batches
- write from multiple computers
- don’t perform other operations on files during transfer
- reduce local network use
- eliminate unnecessary hops
- provision gateway cached volumes from AWS storage gateway
- Gateway stored volumes can only store up to 512 TB worth of data
- Create an IAM role and assign the required permissions to read and write from the DynamoDB table. Have the instance profile property of the application instance reference the role.
- SSM instance profile: use `AWS::IAM::InstanceProfile`, not `AWS::SSM::Parameter`
- Remove the permission for anyone else to use S3 URLs to read the file
- how to provide an auditor access to the logs for your AWS resources
- enable CloudTrail logging to required AWS resources
- Create an IAM user with read-only permissions to the required AWS resources
- Provide the access credentials to the auditor
- Deploy the AWS IoT Greengrass client software to another local server. Run ML inference on the Greengrass server from the ML model trained from SageMaker. Use Greengrass components to interact with the Linux server API whenever a defect is detected.
- use AWS IoT Greengrass to build software that enables your devices to act locally on the data that they generate, run predictions based on machine learning models, and filter and aggregate device data.
- Outposts doesn't support running Rekognition locally.
- IoT Analytics cannot be run locally without internet connectivity.
- Create an OpsWorks stack with two layers and one custom recipe
- layers: web server, chat server
- two OpsWorks stacks are unnecessary since the new video chat feature is still part of the customer support website, just deployed on a different set of servers.
- Configure an IAM policy that authorizes access to the certificate store only for the cybersecurity team, and add a configuration to terminate SSL on the ELB.
- First, create IAM users in the master account. Then, in the dev and test accounts, create cross-account roles that have full admin permissions and grant access to the master account.
- Set up an IAM role with permissions to list and write objects to the S3 bucket. Attach the IAM role to the EC2 instance which will enable it to retrieve temporary security credentials from the instance metadata and use that access to upload the photos to the S3 bucket.
- You can use a forward web proxy server in your VPC and manage outbound access using URL-based rules. Default routes are also removed.
- SG/NACLs cannot filter requests based on URLs
- DR strategies
- Backup and restore (RPO in hours, RTO is 24 hours or less)
- Pilot light (RPO in minutes, RTO in hours)
- Warm standby (RPO in seconds, RTO in minutes)
- Multi-Region active-active (RPO near zero, RTO potentially zero)
- if you are restoring a Storage Gateway volume snapshot, you can choose to restore it as a Storage Gateway volume or as an EBS volume
- Use AWS Organizations to centrally manage all of your accounts. Group your accounts into a specific OU. Create an IAM role in the production account with a policy that allows access to the EC2 instances, including resource-level permission to terminate the instances owned by a particular business unit. Provide the cross-account access and the IAM policy to every member account of the OU.
- Authenticate using your on-premises SAML 2.0-compliant IdP, retrieve temporary credentials using STS, and grant federated access to the AWS console via IAM Identity Center.
- Set up a new S3 bucket with standard storage to store and serve the scanned files. Use CloudSearch for query processing and use Elastic Beanstalk to host the website across multiple AZs.
- Combine an ELB in front of an ASG of web servers with CloudFront for fast delivery. The web servers first authenticate users by logging into their social media accounts, which are integrated via Cognito, then process the users' purchases and store them in an SQS queue, using IAM roles for EC2 instances to gain permissions to the queue. Finally, the items from the queue are retrieved by a set of application servers and stored in a DynamoDB table.
- IdP authentication failure
- Ensure that the ARN of the SAML provider, the ARN of the created IAM role, and the SAML assertion from the IdP are included when the federated identity web portal calls the AWS STS `AssumeRoleWithSAML` API (a sketch follows this list)
- Ensure that the appropriate IAM roles are mapped to company users and groups in the IdP's SAML assertions
- Ensure that the trust policy of the IAM roles created for the federated users or groups sets the SAML provider as the principal.
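A sketch of the STS call with the three inputs named above; the ARNs are placeholders and the assertion is whatever base64-encoded SAML response the IdP returned:

```python
import boto3

sts = boto3.client("sts")

# Placeholder for the base64-encoded SAML assertion returned by the IdP.
saml_assertion_b64 = "<base64-encoded SAML response>"

resp = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::123456789012:role/FederatedDevRole",
    PrincipalArn="arn:aws:iam::123456789012:saml-provider/CorpIdP",
    SAMLAssertion=saml_assertion_b64,
)
creds = resp["Credentials"]  # temporary AccessKeyId / SecretAccessKey / SessionToken
```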
- Migrate the database to a cluster of EBS-backed EC2 instances across multiple AZs. Automate the creation of EBS snapshots from the EBS volumes of the EC2 instances by using DLM. Install the SSM agent on the EC2 instances and automate the patch management process using SSM Patch Manager.
- In your VPC, launch a new web proxy server that only allows outbound access to the URLs provided by the proprietary e-commerce platform.
- Use the AWS Config managed rule that automatically checks whether your running EC2 instances are using approved AMIs. Set up CloudWatch alarms to notify you if there are any non-compliant instances running in your VPC.
- Set up cross-account access with a resource-based policy. Use AWS Config rules to periodically audit changes to the IAM policy and monitor the compliance of the configuration.
- cross-account access with a resource-based policy has some advantages over using a role.
- Organization monitor
- use AWS Config to monitor the compliance of your AWS Org. Set up an SNS topic or EventBridge rule that will send alerts to you for any changes.
- Create a trail in CloudTrail to capture all API calls to your AWS Org, including calls from the Org console and from code calls to the Org APIs. Use EventBridge and SNS to raise events when administrator-specified actions occur in the Org and send a notification to you
- Utilize AWS Batch with managed compute environments to create a fleet using Spot Instances. Store the raw data in an S3 bucket. Create jobs on AWS Batch job queues that will pull objects from the S3 bucket and temporarily store them on the EC2 EBS volumes for processing. Send the processed images back to another S3 bucket.
- SQS might cause some jobs to be processed twice
- Use trusted access by running the `enable-sharing-with-aws-organization` command in the AWS RAM CLI (a sketch using the equivalent API call follows). Mirror the configuration changes that were performed by the account that previously managed this service.
- only the linked service can assume a service-linked role
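The equivalent call in boto3, run from the organization's management account, would be a sketch like:

```python
import boto3

ram = boto3.client("ram")

# Enables trusted access so resources can be shared with the organization,
# mirroring `aws ram enable-sharing-with-aws-organization` in the CLI.
response = ram.enable_sharing_with_aws_organization()
print(response["returnValue"])  # True on success
```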
- Use ECS Anywhere to streamline software management on-premises and on AWS with a standardized container orchestrator. This makes it easy to migrate the development workloads running on-premises to ECS in an AWS Region on Fargate.
- EKS Anywhere is not designed to run in the AWS cloud
- ECS Anywhere does not support Outposts
- Distribute traffic to a set of web servers using ELB that performs TCP load balancing. Use CloudHSM deployed to two AZs to perform the SSL transactions and deliver your application logs to a private S3 bucket using server-side encryption.
- On AWS RAM, set up a shared services VPC on your central account. Set up VPC peering from this VPC to each VPC on the other accounts. On Route53, create a private hosted zone associated with the shared services VPC. Manage all domains and subdomains on this zone. Programmatically associate the VPCs from other accounts with this hosted zone.
- Utilize DX Gateway for inter-Region VPC access. Create a virtual private gateway in each VPC, then create a private virtual interface for each DX connection to the DX gateway.
- EC2 cannot access S3 after some time
- The required AWS credentials in the `~/.aws/credentials` configuration file located on the EC2 instances of the online portal were misconfigured
- The expiration date of the pre-signed URL is incorrectly set to expire too quickly and thus may have already expired when they used it (see the sketch below).
- Note: enabling object versioning in S3 will not hinder uploads that are done via a pre-signed URL
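A sketch of generating a pre-signed upload URL with an explicit expiry, using placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")

# The URL embeds its expiry; if ExpiresIn is too short, uploads attempted
# after that window fail even though nothing else changed.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "example-bucket", "Key": "photos/new.jpg"},
    ExpiresIn=3600,  # seconds; size the window to the expected upload time
)
print(url)
```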
- to ensure the tags are always added when users create any resources across all accounts
- Set up AWS Service Catalog to tag the provisioned resources with corresponding unique identifiers for portfolio, product, and users
- Set up the CloudFormation Resource Tags property to apply tags to certain resource types upon creation.
- Generate an EBS snapshot of the static content from the AWS Storage Gateway service. Afterward, restore it to an EBS volume that you can then attach to the EC2 instance where the application server is hosted.
- Since this uses a volume storage gateway, you have to generate an EBS snapshot and create an EBS volume from it to restore the data.
- Create a VPC endpoint policy that restricts access to the specific S3 bucket. Create an IAM role that grant access to the S3 bucket and attach it to the application EC2 instances. Apply an S3 bucket policy that only allows access from the VPC endpoint and those using the IAM role.
- A VPC endpoint policy is an IAM resource policy that you attach to an endpoint when you create or modify the endpoint.
- The gateway prefix list ID should be added to the route table in the VPC to allow access for the specific subnet, and not on the NACL
- Set up a Tape Gateway to back up your data in S3 and archive it in Glacier using your existing tape-based processes
- Tape Gateway offers a durable, cost-effective solution to archive your data in the AWS Cloud. With its virtual tape library (VTL) interface, you use your existing tape-based backup infrastructure to store data on virtual tape cartridges that you create on your tape gateway.
- Set up a DNS active-active failover using a latency-based routing policy that resolves to an ELB. Configure the Evaluate Target Health attribute to Yes (a sketch follows the next bullet).
- RTO 30 minutes, RPO 5 minutes (the application is stateless, so it doesn't need frequent snapshots)
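A sketch of one of the latency-based alias records with Evaluate Target Health enabled; zone IDs and DNS names are placeholders (each ELB has its own canonical hosted zone ID):

```python
import boto3

r53 = boto3.client("route53")

# One latency record per Region; Route 53 answers with the lowest-latency
# healthy target because EvaluateTargetHealth is set to True.
r53.change_resource_record_sets(
    HostedZoneId="Z0000000EXAMPLE",  # placeholder public hosted zone
    ChangeBatch={
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Region": "us-east-1",
                    "AliasTarget": {
                        "HostedZoneId": "Z00000000ELB",  # placeholder ELB zone ID
                        "DNSName": "my-elb.us-east-1.elb.amazonaws.com",
                        "EvaluateTargetHealth": True,
                    },
                },
            }
        ]
    },
)
```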
- Set up a cross-Region read replica of Aurora database to the backup Region. Promote this read replica as the master database in case of a disaster in the primary Region
- Schedule a daily snapshot of the Amazon EC2 instances for the web and application tier. Copy the snapshot to the backup region. Restore the backups in case of a disaster in the primary region
Graphs (diagram titles only; the images themselves were not preserved)
- Route53 Resolver - Inbound Endpoints
- centralized control of security identities
- DX gateway (combine all VPCs to one DX connection)
- AWS SAML-based IdP
- Multi-VPC & on-premises -> transitive peering
- AWS Organization
- AWS DX + Site-to-Site VPN
- AWS Migration Services
- AWS Transit Gateway + Egress VPC
- Migration Paths
- Route53 Resolver multi-account