AWS Reset 2nd

NLB doesn’t have security groups (SGs).

Share a managed prefix list across accounts with AWS RAM (network strategy).

Use AWS Budgets to alert at xx% of a target/limit.

AWS does not allow the association of custom IPv6 CIDR blocks with a VPC.

A network ACL (NACL) is a better choice than an AWS WAF web ACL when the goal is blocking UDP, since WAF only inspects web (HTTP/S) traffic.

Concepts

  • multiple domain SSL
    • Create a new CloudFront web distribution and configure it to serve HTTPS requests using dedicated IP addresses in order to associate your alternate domain names with a dedicated IP address in each CloudFront edge location
    • Upload all SSL certificates of the domains in the ALB using the console and bind multiple certificates to the same secure listener on your load balancer. ALB will automatically choose the optimal TLS certificate for each client using Server Name Indicator (SNI)
    • explanation
      • If you configure CloudFront to serve HTTPS requests using SNI, CloudFront associates your alternate domain name with an IP address for each edge location.
      • A wildcard certificate can only handle multiple sub-domains but not different domain names.
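The ALB/SNI option above boils down to attaching several ACM certificates to one HTTPS listener. A minimal sketch of the request payload for the `elbv2` `add_listener_certificates` API; all ARNs are placeholders:

```python
# Sketch: build the parameters for attaching extra certificates to an existing
# ALB HTTPS listener, so SNI can pick the right certificate per domain.
# The listener and certificate ARNs below are hypothetical placeholders.

def build_add_certificates_request(listener_arn, cert_arns):
    """Shape the payload for elbv2 add_listener_certificates."""
    return {
        "ListenerArn": listener_arn,
        "Certificates": [{"CertificateArn": arn} for arn in cert_arns],
    }

request = build_add_certificates_request(
    "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/my-alb/abc/def",
    [
        "arn:aws:acm:us-east-1:123456789012:certificate/cert-for-example-com",
        "arn:aws:acm:us-east-1:123456789012:certificate/cert-for-example-org",
    ],
)
# With boto3 this would be passed as:
#   boto3.client("elbv2").add_listener_certificates(**request)
```

The default certificate stays on the listener itself; `add_listener_certificates` only appends the additional SNI candidates.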
  • Set up a snapshot copy grant for a master key in the destination region and enable cross-region snapshots in your Redshift cluster to copy snapshots of the cluster to another region.
  • Configure VPC Traffic Mirroring on the ENI of the [[aws-ec2|EC2]] instances. Send the mirrored traffic to a monitoring appliance for storage and (packet) inspection.
  • Launch a new CloudTrail trail using the AWS console with one new S3 bucket to store the logs and with the Enable for all accounts in my organization checkbox enabled; Enable MFA delete and log encryption on the S3 bucket.
  • Only one virtual private gateway (VGW) can be attached to a VPC at a time.
  • CloudFront, ELB HTTPS
    • Set the Viewer Protocol Policy to use Redirect HTTP to HTTPS or HTTPS Only.
    • Configure CloudFront to use its default SSL/TLS certificate by changing the Viewer Protocol Policy setting for one or more cache behaviors to require HTTPS communication.
  • Do a blue/green deployment on all upcoming changes using CodeDeploy. Using AWS CloudFormation, launch the DynamoDB tables, Lambda functions, and OpenSearch domain in your VPC. Host the web application in Beanstalk and set the deployment policy to Immutable.
    • Immutable deployments perform an immutable update to launch a full set of new instances running the new version of the application in a separate Auto Scaling group alongside the instances running the old version.
  • unauthorized AMI
    • Set up AWS Config rules to detect any launches of EC2 instances based on non-approved AMIs and then trigger a Lambda function to automatically terminate the instance. Afterward, publish a message to an SNS topic to inform the security team about the occurrence.
    • Set up a scheduled Lambda function to search through the list of running EC2 instances within your VPC and determine if any of these are based on unauthorized AMIs. Afterward, publish a new message to an SNS topic to inform the Security team that this occurred and then terminate the EC2 instance.
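The core check in the scheduled-Lambda option above is just set membership. A pure-logic sketch with assumed data shapes (in a real function the instance list would come from `boto3.client("ec2").describe_instances()`; the AMI IDs here are hypothetical):

```python
# Sketch: flag running instances whose AMI is not on the approved list,
# mirroring what the scheduled Lambda / Config rule would evaluate.
APPROVED_AMIS = {"ami-0aaa111", "ami-0bbb222"}  # hypothetical approved AMI IDs

def find_unapproved_instances(instances, approved=APPROVED_AMIS):
    """Return the IDs of instances launched from AMIs outside the approved set."""
    return [i["InstanceId"] for i in instances if i["ImageId"] not in approved]

# Sample data standing in for a describe_instances() result:
sample = [
    {"InstanceId": "i-001", "ImageId": "ami-0aaa111"},
    {"InstanceId": "i-002", "ImageId": "ami-0ccc333"},  # not approved
]
offenders = find_unapproved_instances(sample)
# The Lambda would then publish to SNS and call ec2.terminate_instances(
#     InstanceIds=offenders) for each offender.
```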
  • Use On-Demand EC2 instances for both the master and core nodes and use Spot EC2 instances for the task nodes. (Even for a temporary job, the master/core nodes must not be interrupted -> On-Demand.)
  • VPC and on-premises network connectivity
    • A network device in your data center that supports Border Gateway Protocol (BGP) and BGP MD5 authentication
    • A DX link between the VPC and the network housing the internal services
  • Create a DynamoDB global table to store the semi-structured data in two Regions. Use on-demand capacity mode to let DynamoDB scale automatically. Run the web service on an Auto Scaling Amazon ECS Fargate cluster in each Region. Place each Fargate cluster behind its own ALB. Create Route 53 alias records pointing to each ALB, using a latency routing policy with health checks enabled.
  • Using the AWS CLI, create a new CMK with no key material and use EXTERNAL as the origin of the key. Generate a key from the on-premises HSMs and import it into the CMK using the public key and import token from AWS. Apply an S3 bucket policy on the central logging bucket to require KMS as the encryption source and deny unencrypted object uploads.
    • You must use the on-premises HSMs as the source for the CMKs so you should not create your own AWS CloudHSM cluster and generate a CMK.
  • Deploy a new Windows AMI for an ASG with a minimum size of three instances spanning three AZs. Create an FSx for Windows File Server file system that will be used for shared storage. Write a user data script to install the CMS application, mount the FSx for Windows File Server file system, and join the instances to the AD domain.
  • Create a CloudWatch alarm to monitor the FreeStorageCapacity metric of the file system. Write a Lambda function to increase the capacity of the FSx for Windows file system using the update-file-system command. Use EventBridge to invoke this function when the metric threshold is reached.
    • There is no option to Dynamically Allocate the file system size.
  • Add a CodeBuild stage on the deployment pipeline to automatically test on a non-production environment. Leverage change sets on CloudFormation to preview changes before applying to production. Set up a blue/green deployment pattern on CodeDeploy to deploy changes on a separate environment and to quickly rollback if needed.
  • Use CloudFormation as the deployment service to deploy the needed AWS resources such as S3 bucket for storage; CloudSearch to provide the needed search functionality; an EC2 instance to host the website.
  • The timeout behavior of a NAT instance is that, when a connection times out, it sends a FIN packet to resources behind the NAT instance to close the connection. It does not attempt to continue the connection, which is why some database updates are failing. For better performance, use a NAT Gateway instead (it sends an RST packet).
  • Reconfigure the pipeline to create a Staging environment on Beanstalk. Deploy the newer version on the Staging environment. Swap the Staging and Production environment URLs to shift traffic to the newer version.
    • All at once
    • Rolling
    • Rolling with additional batch
    • Immutable
    • Traffic Splitting
  • To ensure continuous compliance, the security-approved AMIs must also be scanned every 30 days to check for new vulnerabilities and apply the necessary patches.
    • Create an Assessment template on Amazon Inspector to target the EC2 instances. Run a detailed CVE assessment scan on all running EC2 instances launched from the AMIs that need scanning
    • Develop a Lambda function that will create automatic approval rules. Create a parameter on AWS SSM Parameter Store to save the list of all security-approved AMIs. Set up a 30-day interval rule on EventBridge to trigger an AWS SSM Automation document run on all EC2 instances.
    • Automation
      • Build automations to configure and manage instances and AWS resources
      • Create custom runbooks or use pre-defined runbooks maintained by AWS
      • Monitor automation progress and details by using the SSM console
  • Set up AWS Organizations by sending an invitation to all member accounts of the company from the master account of your organization. Create an OrganizationAccountAccessRole IAM role in the member accounts and grant permission to the master account to assume the role.
  • Update the CloudFormation template AWS::AutoScaling::AutoScalingGroup resource section and specify an UpdatePolicy attribute with an AutoScalingRollingUpdate.
    • change set will only be applied to newly spawned instances
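The UpdatePolicy attribute mentioned above sits alongside the resource's Properties in the template. A hypothetical CloudFormation fragment (logical IDs such as `WebServerGroup` and `WebLaunchTemplate` are placeholders):

```yaml
# Hypothetical fragment: rolling-update settings for an Auto Scaling group.
Resources:
  WebServerGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    UpdatePolicy:
      AutoScalingRollingUpdate:
        MinInstancesInService: 1
        MaxBatchSize: 2
        PauseTime: PT5M
        WaitOnResourceSignals: true
    Properties:
      MinSize: "2"
      MaxSize: "6"
      LaunchTemplate:
        LaunchTemplateId: !Ref WebLaunchTemplate
        Version: !GetAtt WebLaunchTemplate.LatestVersionNumber
      VPCZoneIdentifier: !Ref PrivateSubnetIds
```

With `AutoScalingRollingUpdate`, CloudFormation replaces instances in batches instead of all at once, which is why only newly spawned instances pick up the change set.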
  • For Developer Power Users, you can use the AWS managed policy name: PowerUserAccess if you have users who perform application development tasks.
  • Provision an AWS Storage Gateway - file gateway appliance on the on-premises data center. Configure the MAM solution to extract the video files from the current tape archives and move them to the file gateway share which is then synced to S3. Use Rekognition to build a collection based on the videos by using catalog of people’s faces and names. Create a Lambda that will invoke Rekognition to pull the video files from the S3 bucket, retrieve the generated metadata and then push it to the MAM solution search catalog.
  • Configure the CloudFront distribution to redirect HTTP to HTTPS protocol. Generate a new SSL certificate on AWS Certificate Manager and use it as the CloudFront distribution and origin certificate.
    • You cannot use the default certificate in CloudFront since the website is using a custom domain
  • Use AWS IoT Core with MQTT to create a new Data-ATS endpoint. Update the Route 53 DNS zone record to point to the new endpoint and allow all IoT devices to send data using the MQTT protocol. Create an AWS IoT rule to directly insert the data into the DynamoDB table.
    • The AWS IoT Core message broker supports devices and clients that use MQTT and MQTT over WSS protocols to publish and subscribe to messages. It also supports devices and clients that use the HTTPS protocol to publish messages.
    • AWS IoT Greengrass needs a client software that brings intelligence to edge devices.
  • Launch an EC2 instance for both the NGINX server as well as for the database. Attach EBS volumes to the EC2 instance of the database and then use the DLM to automatically create scheduled snapshots against the EBS volumes.
  • Create a stream in KDS to collect the inbound data. Use a Kinesis client application to analyze the genomic data. After processing, use Amazon EMR to save the results to a Redshift cluster.
    • You can’t use Amazon Quicksight to query data lakes from AWS Lake Formation.
  • Tag all existing resources in bulk using the Tag Editor. On the Billing and Cost Management page, create new cost allocation tags for the cost center and project ID. Apply an SCP on the organizational unit that denies users from creating resources that do not have the cost center and project ID tags.
  • Write infrastructure-as-code to maintain consistency. Use AWS Organizations to centrally orchestrate the deployment of CloudFormation template from the central account. Use CloudFormation StackSets to simplify permissions and automatic provisioning of resources across multiple regions and accounts.
  • Use CloudFormation with SSM Parameter Store to retrieve the latest AMI IDs for your template. Whenever you decide to update the EC2 instances, call the update-stack API in CloudFormation in your template.
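The SSM-parameter pattern above can be expressed directly in the template. A hypothetical fragment using the public parameter that AWS maintains for the latest Amazon Linux 2 AMI (the logical IDs are placeholders):

```yaml
# Hypothetical fragment: resolve the latest AMI ID from SSM Parameter Store
# at stack create/update time, so calling update-stack refreshes the AMI.
Parameters:
  LatestAmiId:
    Type: AWS::SSM::Parameter::Value<AWS::EC2::Image::Id>
    Default: /aws/service/ami-amazon-linux-latest/amzn2-ami-hvm-x86_64-gp2
Resources:
  AppInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref LatestAmiId
      InstanceType: t3.micro
```

Because the parameter is resolved on each stack operation, no template edit is needed when AWS publishes a newer AMI.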
  • Implement Step Functions to orchestrate batch processing workflows. Use the AWS Management Console to monitor workflow status and manage failure reprocessing.
  • Create an Aurora MySQL database instance. Create an Aurora replica and enable Aurora Auto Scaling for the replica. Create an ASG of EC2 instances placed behind an ALB with round-robin routing algorithm. Ensure that the sticky sessions feature is enabled for the ALB.
  • Remember that the PROD account has bought the Reserved Instance in the us-west-2a Availability Zone, which means that only the DEV account exactly matches the criteria. (Dev did run EC2 instances on us-west-2a)
  • Use the EC2Rescue tool to diagnose and troubleshoot problems on your Amazon EC2 Linux and Windows Server instances. Run the tool automatically by using SSM Automation and the AWSSupport-ExecuteEC2Rescue document.
  • Request for an AWS Snowball device. Create a database export of the on-premises database server and load it to the Snowball device. Once the data is imported to AWS, provision an Aurora MySQL DB instance and load the data. Using the VPN connection, configure replication from the on-premises database server to the Aurora DB instance. Wait until the replication is complete then update the database DNS entry to point to the Aurora DB instance. Stop the database replication.
  • Store the SSL certificate in IAM and authorize access only to the security team using an IAM policy. Configure the ALB to use the SSL certificate instead of the EC2 instances.
  • Record the user’s information in RDS and create a role in IAM with appropriate permissions. When the user uses his/her mobile app, create temporary credentials using the AssumeRole function in STS. Store these credentials in the mobile app’s memory and use them to access the S3 bucket. Generate new credentials the next time the user runs the mobile app.
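The STS flow above hinges on one AssumeRole call per app session. A sketch of the request parameters; the role ARN and session-name convention are hypothetical:

```python
# Sketch: parameters a backend would pass to STS AssumeRole to mint short-lived
# credentials for a mobile user. The role ARN and naming are placeholders.
def build_assume_role_request(role_arn, user_id, duration_seconds=900):
    """Shape the payload for sts assume_role with a per-user session name."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": f"mobile-user-{user_id}",
        "DurationSeconds": duration_seconds,
    }

params = build_assume_role_request(
    "arn:aws:iam::123456789012:role/MobileAppS3Access", "u42")
# boto3.client("sts").assume_role(**params) returns temporary
# AccessKeyId / SecretAccessKey / SessionToken the app holds in memory.
```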
  • Configure Redshift to have automatic snapshots and do a cross-Region snapshot copy to automatically replicate the current production cluster to the disaster recovery region.
  • Split the single Lambda function that processes the photos into several functions dedicated to each type of metadata. Create a workflow on AWS Step Functions that will run multiple Lambda functions in parallel. Create another workflow that will retrieve the list of photos for processing and execute the metadata extraction workflow for each photo.
      • An SQS queue cannot be used as a direct input for an AWS Step Functions workflow.
    • AWS Batch is designed to easily and efficiently run hundreds of thousands of batch computing jobs on AWS but not with Lambda functions.
  • Establish an SSL VPN solution in a public subnet of your VPC. Install and configure SSL VPN client software on all the workstations/laptops of the users who need access to the ERP system. Create a private subnet in your VPC and place your application servers in it.
  • Install the AWS Client VPN on each employee workstation. Create a Client VPN endpoint in the same VPC region in the main AWS account. Update the VPC route configurations to allow communication with the internal applications.
  • Set up an AWS DX gateway with two virtual private gateways. Launch and connect the required private virtual interfaces to the DX gateway.
  • AWS WAF rules cannot protect a Network Load Balancer yet. It is better to use NACL rules to block the non-UDP traffic.
  • Create an IPsec VPN connection using either OpenVPN or VPN/VGW through the VPC. Prepare an instance of MySQL running external to RDS. Configure the MySQL DB instance to be the replication source. Use mysqldump to transfer the database from the RDS instance to the on-premises MySQL instance and start the replication from the RDS Read Replica.
    • Data Pipeline is for batch jobs.
  • ENI & Licenses
    • Provision a pool of ENIs. Request a license file for each ENI from the software vendor. Store the license files on S3 bucket and use bootstrap scripts to retrieve an unused license file and attach corresponding ENI when provisioning EC2 instances.
    • Create a Lambda function to update the database IP addresses in SSM Parameter Store. Create an EC2 bootstrap script that will retrieve the database IP address from SSM Parameter Store. Update the local configuration files with the parameters.
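The license-pool bookkeeping in the first bullet above is essentially claiming the first free ENI/license pair. A sketch with an assumed pool format (the ENI IDs and S3 key names are placeholders; in practice the pool state would live in S3 or DynamoDB):

```python
# Sketch: claim an unused ENI + license-file pair from a pool during EC2
# bootstrap. The data layout and identifiers are hypothetical.
def claim_license(pool):
    """Return the first free (eni_id, license_file) pair and mark it used."""
    for entry in pool:
        if not entry["in_use"]:
            entry["in_use"] = True
            return entry["eni_id"], entry["license_file"]
    raise RuntimeError("license pool exhausted")

pool = [
    {"eni_id": "eni-0aa", "license_file": "licenses/eni-0aa.lic", "in_use": True},
    {"eni_id": "eni-0bb", "license_file": "licenses/eni-0bb.lic", "in_use": False},
]
eni, lic = claim_license(pool)
# The bootstrap script would then download `lic` from S3 and call
# ec2.attach_network_interface() for `eni`.
```

Keying each license to a fixed ENI works because the vendor licenses against the MAC address, which stays with the ENI across instance replacements.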
  • Enable object-level logging in the S3 bucket to automatically track S3 actions using CloudTrail. Set up an EventBridge rule with an SNS topic to notify the IT Compliance team when a PutObject API call with public-read permission is detected in the CloudTrail logs. Create another EventBridge rule that invokes a Lambda function to change the newly uploaded public object to private.
  • Create an ECS Fargate cluster and use containers to host web application. Create an ASG of EC2 Spot instances to process the SQS queue. Use Rekognition to analyze and categorize the videos instead of the third-party software. Store the videos and static contents on S3 buckets.
  • Create a dedicated VPC for outbound internet traffic with a NAT Gateway on it. Connect this VPC to the existing AWS Transit Gateway. Configure an AWS Network Firewall firewall for the rule-based filtering. Modify all the default routes in each account to point to the Network Firewall endpoint.
    • For centralized rule-based filtering with a Network Firewall, you will need an AWS Transit Gateway to act as a network hub and allow the connectivity between VPCs.
  • Create a new IAM role for cross-account access which allows the online auditing system account to assume the role. Assign it a policy that allows only the actions required for the compliance audit.
  • Use CloudWatch for the monitoring and configure the scale-in policy of the ASG to terminate one EC2 instance when CPU utilization drops below 15%.
  • Import a certificate that is signed by a trusted third-party certificate authority, store it in ACM, then attach it to your ALB. Set the Viewer Protocol Policy to HTTPS Only in CloudFront and use an SSL/TLS certificate from a third-party certificate authority which was imported to either ACM or the IAM certificate store.
  • Amazon Kinesis Data Firehose has a feature to buffer data and supports data transformation in near-real-time.
  • User Creation under control
    • Create a rule in EventBridge that will check for patterns in CloudTrail API calls with the CreateUser event name
    • Send a message to an SNS topic. Have the security team subscribe to the SNS topic.
    • Use EventBridge to invoke a Step Function state machine that will remove permissions on the newly created IAM user.
    • what’s wrong
      • Amazon EventBridge cannot directly invoke AWS Fargate tasks.
      • You can filter events on CloudTrail by searching for keywords, however, you can’t configure it to send notifications to Amazon SNS.
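The EventBridge rule in the first bullet above matches CloudTrail's record of the `iam:CreateUser` call. A sketch of that event pattern (the rule name in the comment is a placeholder):

```python
# Sketch: the EventBridge event pattern that matches a CloudTrail-recorded
# iam:CreateUser API call. Rule/target wiring happens elsewhere.
import json

event_pattern = {
    "source": ["aws.iam"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["iam.amazonaws.com"],
        "eventName": ["CreateUser"],
    },
}
# boto3.client("events").put_rule(Name="iam-createuser-watch",
#     EventPattern=json.dumps(event_pattern)) would register this pattern;
# targets (SNS topic, Step Functions state machine) are added via put_targets.
pattern_json = json.dumps(event_pattern)
```

Note that IAM is a global service, so these CloudTrail management events are delivered in us-east-1.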
  • Configure AWS Organizations to group different accounts into separate OUs depending on the business function. Create an SCP that restricts launching any AWS resources without a tag by including the Condition element in the policy, which uses the ForAllValues qualifier and the aws:TagKeys condition key. This policy will require its principals to tag resources during creation. Apply the SCP to the OU, which will automatically cascade the policy to individual member accounts.
    • AWS Config only audits and evaluates if your instance and volume configurations match the rules you have created. Unlike an IAM policy, it does not permit nor restrict users from performing certain actions.
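The SCP condition named above can be sketched as follows. This mirrors the `ForAllValues`/`aws:TagKeys` construction the note describes, scoped to a single action to keep the example small; the tag keys are placeholders, and the exact deny semantics of `ForAllValues` against missing keys are subtle, so treat this as an illustration rather than a production policy:

```python
# Sketch: an SCP denying ec2:RunInstances unless the request carries only the
# required tag keys, built as a dict and serialized for
# organizations.create_policy(). Tag keys are hypothetical.
import json

scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyUntaggedLaunches",
        "Effect": "Deny",
        "Action": "ec2:RunInstances",
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            "ForAllValues:StringNotEquals": {
                "aws:TagKeys": ["cost-center", "project-id"]
            }
        },
    }],
}
# boto3.client("organizations").create_policy(Content=json.dumps(scp),
#     Name="require-tags", Type="SERVICE_CONTROL_POLICY") would register it.
scp_json = json.dumps(scp)
```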
  • Deploy the custom scripts using Beanstalk platform hooks
    • prebuild
    • predeploy
    • postdeploy
  • Using Secrets Manager, create a secret resource and generate a secure database password. Write a Lambda function to rotate the database password. In CloudFormation, specify an AWS::SecretsManager::RotationSchedule resource to rotate the password every 90 days.
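The rotation setup above can be expressed as a template fragment. A hypothetical sketch (logical IDs such as `DbPasswordSecret` and `RotationFunction` are placeholders):

```yaml
# Hypothetical fragment: generate a secret and rotate it every 90 days with a
# custom rotation Lambda (defined elsewhere in the template).
Resources:
  DbPasswordSecret:
    Type: AWS::SecretsManager::Secret
    Properties:
      GenerateSecretString:
        SecretStringTemplate: '{"username": "admin"}'
        GenerateStringKey: password
        PasswordLength: 32
  DbPasswordRotation:
    Type: AWS::SecretsManager::RotationSchedule
    Properties:
      SecretId: !Ref DbPasswordSecret
      RotationLambdaARN: !GetAtt RotationFunction.Arn
      RotationRules:
        AutomaticallyAfterDays: 90
```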
  • Since you are going to use Kinesis Data Firehose as a buffer, you don’t need the longer-term, durable storage offered by Kinesis Data Streams; Firehose buffers data and supports data transformation in near-real-time.
  • Create a pipeline in CodePipeline that is triggered automatically for commits on the private Github repository. Have the pipeline create a change set and execute the CloudFormation template. Add a CodeBuild stage on the pipeline to build and run test scripts to verify the new stack.
  • on-premises 1TB 50Mbps
    • Synchronize the on-premises data to an S3 bucket one week before the migration schedule using the AWS CLI’s S3 sync command
    • Perform a final synchronization task on Friday after the end of business hours
    • Set up your application hosted in a large EC2 instance in your VPC to use the S3 bucket.
  • Launch a CloudFront web distribution with the URL of the on-premises web application as the origin. Offload the DNS to AWS to handle CloudFront traffic.
  • Create an IAM role that has the required permissions to read and write from the DynamoDB table. Reference the IAM role as a property inside the AWS::IAM::InstanceProfile of the application instance.
  • Use an EFS volume to store the weather forecast data points. Mount this EFS volume on a fleet of Auto Scaling EC2 instances behind an ELB. Create a CloudFront distribution and point the origin to the ELB. Configure a 15-minute cache-control timeout for the CloudFront distribution.
    • Lambda can’t scale enough to match peak traffic
    • Lambda@edge can serve only up to 10000 requests per second
  • Migrate all media files to an S3 bucket and use this as the origin for the new CloudFront web distribution. Set up an ELB with an Auto Scaling group of EC2 instances to host the web servers. Use a combination of Cost Explorer and AWS Trusted Advisor checks to monitor the operating costs and identify potential savings.
  • how to ensure fixed budget
    • Use the AWS Budgets service to define a fixed monthly budget for each development account.
    • Create a Budgets alert action to send an SNS notification when the budgeted amount is reached. Invoke a Lambda function to terminate all services.
    • Create an SCP that denies access to expensive services. Apply the SCP to an OU containing the development accounts.
  • Create an IAM Role and assign the required permissions to read and write from the DynamoDB table. Have the instance profile property of the application instance reference the role.
  • Create a shared transit gateway. Have each spoke VPC connect to the transit gateway. Use a fleet of firewalls, each with VPN attachment to the transit gateway, to route the outbound Internet traffic.
    • Using a VPN connection for connectivity between Amazon VPCs or with the transit gateway is not needed.
  • SSE-S3 provides strong multi-factor encryption in which each object is encrypted with a unique key. It also encrypts the key itself with a master key that it rotates regularly.
  • A conditional forwarder is configured inside the AD servers, not on the Route 53 resolver endpoint.
  • To capture the changes to the items you must use DynamoDB streams.
  • Create a separate AWS account for identities where IAM user accounts can be created. Create roles with appropriate permissions in the production and testing accounts. Add the identity account to the trust policies for the roles.
  • Create a CloudFront distribution and deploy a Lambda@Edge function.
  • Beanstalk is less efficient than ECS tasks.
  • To achieve resilience, adjust the workload configuration to use topology spread constraints based on different AZs.

Graphs

AWS CMK with HSM


EC2 access to S3 (instance profile + trust policy)


Licensed under CC BY-NC-SA 4.0