l01cd3v.github.io

by Loïc


Post Black Hat US 2016 blog post

Published on August 8, 2016

Last Tuesday (August 3rd), I presented "Access Keys Will Kill You Before You Kill The Password" at Black Hat US 2016. The summary is on the Black Hat website and the updated slide deck is available here. This presentation aimed to highlight the risks associated with the use of AWS API access keys in environments that do not enforce MFA-protected API access, and documented strategies and IAM policies to help address these risks.

On Wednesday (August 4th), I presented Scout2 at Black Hat Arsenal. For two hours, I had the opportunity to demo Scout2 and meet users of the tool, who shared valuable feedback with me. I look forward to implementing some of the features discussed during this event, including adding support for ECS and finishing the new rules generator.

As a reminder, Scout2 is available on GitHub; feedback is appreciated, and feature requests and pull requests are welcome. The Scout2 documentation is available at https://nccgroup.github.io/Scout2.


Efficient review of AWS security groups' CIDR grants

Published on November 17, 2015
[Originally published on NCC Group's blog]

A significant challenge for companies using the cloud lies in ensuring that their firewall rules follow the principle of least privilege. It is extremely common nowadays to delegate management of security groups to developers, for both production and test environments. This means that security groups and their associated rules are managed by a much larger number of employees than used to be the case in non-cloud environments, where a single, smaller team was in charge of managing all firewall rules. Due to the more dynamic nature of cloud-based infrastructures, companies should review their cloud environment's firewall rules more frequently than for non-cloud-based systems. Unfortunately, this is a difficult exercise due to the large number of CIDRs that may be whitelisted in a given AWS account. Keeping track of all known CIDRs and the hosts or networks they represent is not easy for employees, and is almost impossible for external auditors who must perform the review within a limited timeframe.

In this post, I will document how this issue can be addressed using the AWS-Recipes tools and Scout2.

Feed custom ip-ranges files to Scout2

Today, I am excited to announce that Scout2 accepts JSON files that contain known CIDRs along with arbitrary metadata such as the host or network they represent. When provided with such files, Scout2's report displays the "friendly name" of each known CIDR that is whitelisted in security group rules. This means that, instead of reviewing a list of obscure IP ranges, users of Scout2 may now rely on the name associated with each CIDR.

In order to use this new feature, Scout2 should be run with the following arguments:

./Scout2.py --profile nccgroup --ip-ranges ip-ranges-nccgroup.json ip-ranges-ncc-offices.json --ip-ranges-key-name name

In the above command line, Scout2 receives two ip-ranges JSON files via the "--ip-ranges" argument:

  • ip-ranges-nccgroup.json, which contains the public IP addresses in the AWS IP space in use
  • ip-ranges-ncc-offices.json, which contains the public IP addresses of several offices

Furthermore, the "--ip-ranges-key-name" argument indicates which JSON field to display as the "friendly name".

The following screenshot illustrates that, in the Scout2 report, the name of each known CIDR is displayed. When an IP address that belongs to a known CIDR is whitelisted, the name of the corresponding CIDR is used. In this example, 5.5.5.42/32 belongs to the 5.5.5.0/24 CIDR, which is associated with the "San Francisco" office. An "Unknown CIDR" value is displayed when an unknown value is whitelisted.

Screenshot: Security group rules display the name of each known CIDR
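The lookup behind this display can be sketched in a few lines with Python's ipaddress module. This is a simplified illustration, not Scout2's actual implementation; the names and sample CIDRs are made up:

```python
import ipaddress

# Known CIDRs and their friendly names, as loaded from an ip-ranges JSON file
known_cidrs = {
    "5.5.5.0/24": "San Francisco",
    "4.4.4.0/24": "NY office",
}

def friendly_name(grant):
    """Return the friendly name of the known CIDR containing this grant,
    or "Unknown CIDR" when no known CIDR matches."""
    network = ipaddress.ip_network(grant)
    for cidr, name in known_cidrs.items():
        if network.subnet_of(ipaddress.ip_network(cidr)):
            return name
    return "Unknown CIDR"
```

For example, friendly_name("5.5.5.42/32") resolves to "San Francisco", matching the screenshot's behavior.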

The next section of this blog post documents how users can create and manage these ip-ranges JSON files.

Manage known CIDRs with aws_recipes_create_ip_ranges.py

With AWS releasing their public IP address ranges, I decided to create a tool that allows creation and management of arbitrary IP address ranges using the same JSON format. The tool is released on GitHub at https://github.com/iSECPartners/AWS-recipes/blob/master/Python/aws_recipes_create_ip_ranges.py and may be used in several scenarios:

  • Automatically create ip-ranges files based on public IP addresses in AWS (Elastic IPs and EC2 instances)
  • Automatically create ip-ranges files based on IP addresses documented in a CSV file
  • Manually create and manage ip-ranges files

Each of these use cases is detailed in an example below, with detailed input, commands, and output contents.

Note: In the commands below, the "--debug" argument is used to output pretty-printed JSON, for documentation purposes.

Automatically create ip-ranges based on public IP addresses in an AWS account

First, this tool may be used to create an ip-ranges file that contains an AWS account's elastic IP addresses and EC2 instances' public IP addresses. By doing so, AWS users will be able to maintain a list of public IP addresses in the AWS IP space that are associated with their resources. Assuming that AWS credentials are configured under the "nccgroup" profile name, the command below may be used:

$ ./aws_recipes_create_ip_ranges.py --profile nccgroup --debug

Fetching public IP information for the 'nccgroup' environment...
...in us-east-1: EC2 instances
...in us-east-1: Elastic IP addresses
...in ap-northeast-1: EC2 instances
...in ap-northeast-1: Elastic IP addresses
...in eu-west-1: EC2 instances
...in eu-west-1: Elastic IP addresses
...in ap-southeast-1: EC2 instances
...in ap-southeast-1: Elastic IP addresses
...in ap-southeast-2: EC2 instances
...in ap-southeast-2: Elastic IP addresses
...in us-west-2: EC2 instances
...in us-west-2: Elastic IP addresses
...in us-west-1: EC2 instances
...in us-west-1: Elastic IP addresses
...in eu-central-1: EC2 instances
...in eu-central-1: Elastic IP addresses
...in sa-east-1: EC2 instances
...in sa-east-1: Elastic IP addresses

My test environment has one elastic IP address that is not associated with an AWS resource, and one EC2 instance that has a non-elastic public IP. Executing the above command results in the creation of an "ip-ranges-nccgroup.json" file that has the following contents:

{
    "createDate": "2015-11-16-22-49-27",
    "prefixes": [
        {
            "instance_id": "i-11223344",
            "ip_prefix": "1.1.1.1",
            "is_elastic": false,
            "name": "Test EC2 instance",
            "region": "us-west-2"
        },
        {
            "instance_id": null,
            "ip_prefix": "2.2.2.2",
            "is_elastic": true,
            "name": null,
            "region": "us-west-2"
        }
    ]
}
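The file above could be assembled along the following lines once the public IP data has been fetched. This is a hedged sketch with illustrative field and function names; the actual boto3 calls the tool uses to gather instance and Elastic IP data per region are omitted:

```python
import json
from datetime import datetime

def build_ip_ranges(instances, elastic_ips):
    """Assemble an ip-ranges structure from pre-fetched EC2 instance
    and Elastic IP data (fetching itself is out of scope here)."""
    prefixes = []
    # EC2 instances with a public, non-elastic IP address
    for instance in instances:
        prefixes.append({
            "instance_id": instance["id"],
            "ip_prefix": instance["public_ip"],
            "is_elastic": False,
            "name": instance.get("name"),
            "region": instance["region"],
        })
    # Elastic IP addresses, associated with a resource or not
    for eip in elastic_ips:
        prefixes.append({
            "instance_id": eip.get("instance_id"),
            "ip_prefix": eip["public_ip"],
            "is_elastic": True,
            "name": eip.get("name"),
            "region": eip["region"],
        })
    return {
        "createDate": datetime.now().strftime("%Y-%m-%d-%H-%M-%S"),
        "prefixes": prefixes,
    }
```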

Automatically create ip-ranges from CSV files

From experience, I know that many companies maintain a list of their public IP addresses, along with other network configuration information, in alternate formats, such as CSV. In order to help with the conversion, the tool supports reading CIDR information from CSV files. The tool was designed to be flexible and allow the creation of IP ranges from any CSV file. In this blog post, I provide two examples.

This first example demonstrates how to use the tool to build a JSON file based on the CSV column headers. Only attributes specified on the command line will be copied over.

Contents of test1.csv:

ip_prefix, discarded_value, name
4.4.4.0/24, ncc group, NY office
# This is a comment...
5.5.5.0/24, ncc group, Seattle office

Command line to convert the contents of the CSV file into JSON:

./aws_recipes_create_ip_ranges.py --csv-ip-ranges test1.csv --attributes ip_prefix name --profile ncc-test1 --debug

Contents of ip-ranges-ncc-test1.json:

{
    "createDate": "2015-11-17-10-22-42",
    "prefixes": [
        {
            "ip_prefix": "4.4.4.0/24",
            "name": " NY office"
        },
        {
            "ip_prefix": "5.5.5.0/24",
            "name": " Seattle office"
        }
    ]
}
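The header-based conversion can be sketched with Python's csv module. This is a simplified illustration with a made-up function name; the real tool's parsing details may differ (for instance, the output above suggests it preserves leading whitespace in values, which this sketch strips):

```python
import csv

def csv_to_prefixes(lines, attributes):
    """Convert CSV rows (first line is the header) into ip-ranges
    prefixes, keeping only the requested attributes.
    Lines whose first field starts with '#' are treated as comments."""
    reader = csv.DictReader(lines, skipinitialspace=True)
    prefixes = []
    for row in reader:
        if row["ip_prefix"].startswith("#"):
            continue
        prefixes.append({attr: row[attr] for attr in attributes})
    return prefixes
```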

The second example demonstrates how to use the tool to parse a CSV file with custom column names and separate columns for the base IP and subnet mask. The "--mappings" argument determines how columns will be mapped to the JSON file's attributes.

Contents of test2.csv:

Base IP, Dotted Subnet Mask, Subnet Mask, Something, Name, Something else
3.3.3.0, 255.255.255.0, /24, Value to discard, SF Office, Other value to discard

Command line to convert the contents of the CSV file into JSON:

./aws_recipes_create_ip_ranges.py --csv-ip-ranges test2.csv --attributes ip_prefix mask name --mappings 0 2 4 --profile ncc-test2 --skip-first-line --debug

Contents of ip-ranges-ncc-test2.json:

{
    "createDate": "2015-11-17-10-07-22",
    "prefixes": [
        {
            "ip_prefix": "3.3.3.0/24",
            "name": " SF Office"
        }
    ]
}
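The index-based mapping can be sketched as follows. All names are illustrative; this simplified version assumes a plain comma-separated file and merges the mask column into the prefix, as the output above suggests:

```python
def csv_to_prefixes_by_index(lines, attributes, mappings, skip_first_line=False):
    """Build ip-ranges prefixes using explicit column indices (the
    '--mappings' idea), merging a separate 'mask' column into the
    ip_prefix attribute."""
    prefixes = []
    for line in lines[1:] if skip_first_line else lines:
        values = [value.strip() for value in line.split(",")]
        entry = {attr: values[index] for attr, index in zip(attributes, mappings)}
        if "mask" in entry:
            # Append the CIDR mask (e.g. "/24") to the base IP address
            entry["ip_prefix"] += entry.pop("mask")
        prefixes.append(entry)
    return prefixes
```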

Manually create and update ip-ranges

In case CIDRs were not managed in a CSV file, the tool offers an interactive mode that may be used to manually create a JSON ip-ranges file. The following snippet illustrates how to use the tool to interactively create new ip-ranges JSON files:

$ ./aws_recipes_create_ip_ranges.py --interactive --profile ncc-offices --attributes name

Add a new IP prefix to the ip ranges (y/n)? 
y
Enter the new IP prefix:
5.5.5.0/24
You entered "5.5.5.0/24". Is that correct (y/n)? 
y
Enter the 'name' value:
San Francisco
You entered "San Francisco". Is that correct (y/n)? 
y
Add a new IP prefix to the ip ranges (y/n)? 
y
Enter the new IP prefix:
6.6.6.6/32
You entered "6.6.6.6/32". Is that correct (y/n)? 
y
Enter the 'name' value:
San Francisco
You entered "San Francisco". Is that correct (y/n)? 
y
Add a new IP prefix to the ip ranges (y/n)? 
n

Contents of ip-ranges-ncc-offices.json:

{
    "createDate": "2015-11-16-22-44-38",
    "prefixes": [
        {
            "ip_prefix": "5.5.5.0/24",
            "name": "San Francisco"
        },
        {
            "ip_prefix": "6.6.6.6/32",
            "name": "San Francisco"
        }
    ]
}

The tool can also automatically add new CIDRs to existing ip-ranges files:

$ ./aws_recipes_create_ip_ranges.py --interactive --profile ncc-offices --attributes name --debug

Loading existing IP ranges from ip-ranges-ncc-offices.json
Add a new IP prefix to the ip ranges (y/n)? 
y
Enter the new IP prefix:
7.7.7.7/32
You entered "7.7.7.7/32". Is that correct (y/n)? 
y
Enter the 'name' value:
Seattle
You entered "Seattle". Is that correct (y/n)? 
y
Add a new IP prefix to the ip ranges (y/n)? 
n
File 'ip-ranges-ncc-offices.json' already exists. Do you want to overwrite it (y/n)? 
y

$ cat ip-ranges-ncc-offices.json 
{
    "createDate": "2015-11-16-22-44-38",
    "prefixes": [
        {
            "ip_prefix": "5.5.5.0/24",
            "name": "San Francisco"
        },
        {
            "ip_prefix": "6.6.6.6/32",
            "name": "San Francisco"
        },
        {
            "ip_prefix": "7.7.7.7/32",
            "name": "Seattle"
        }
    ]
}

Conclusion

This addition to Scout2 provides AWS account administrators and auditors with improved insight into their environment. Usage of this feature should result in further hardened security groups, because it makes detecting unknown whitelisted CIDRs and understanding existing rules significantly easier.

I am currently working on a major rework of Scout2's reporting engine, which will further improve reporting and allow creation of new alerts when an unknown CIDR is whitelisted.


Redshift support added in Scout2

Published on August 6, 2015
[Originally published on NCC Group's blog]

Today, I am excited to announce that support for Redshift has been added to Scout2. By default, Scout2 fetches information about your Redshift clusters, cluster parameter groups, and, if you still use EC2-Classic, cluster security groups. At this stage, Scout2 comes with six Redshift security rules that are enabled by default:

  • Clusters
    • Check whether version upgrade is enabled
    • Check whether the cluster is publicly accessible
    • Check whether database encryption is enabled
  • Cluster parameter groups
    • Check whether SSL/TLS is required to access the database
    • Check whether user activity logging is enabled
  • Cluster security groups (EC2-classic)
    • Check whether the security group allows access to all IP addresses (0.0.0.0/0)

Scout2 was first released over a year and a half ago, and has proven extremely helpful when performing AWS configuration reviews. While Scout2's initial release supported only three services (IAM, EC2, and S3) and included thirteen security checks, the tool rapidly grew to add support for RDS and CloudTrail, and now offers over fifty tests across these five supported services. I hope that support for Redshift will bring value to users of Scout2, and I welcome feature requests, bug reports, and recommendations on GitHub at https://github.com/iSECPartners/Scout2/issues.


Introducing opinel: Scout2's favorite tool

Published on August 3, 2015
[Originally published on iSEC Partners's research blog]

With boto3 being stable and generally available, I took the opportunity to migrate Scout2 and AWS-recipes to boto3. As part of that migration effort, I decided to publish the repository formerly known as AWSUtils -- used by Scout2 and AWS-recipes -- as a Python package required by these tools, rather than requiring users to work with Git submodules. I've also added more flexibility when working with MFA-protected API calls and improved versioning across the project.

opinel

To avoid name conflicts, I decided to rename the shared AWSUtils code to a less misleading name: opinel. The opinel package is published on PyPI, and thus can be installed using pip or easy_install. The corresponding source code is still open source on GitHub at https://github.com/iSECPartners/opinel. As a result, Scout2 and AWS-recipes have been modified to list opinel as a requirement, which significantly simplifies installation and management of this shared code.

Support for Python 2.7 and 3.x

Because boto3 supports both Python 2 and Python 3, I decided to make sure that the code built on top of that package does as well. As a result, the latest versions of Scout2 and AWS-recipes support Python 2.7 and 3.x. Note that opinel will NOT work with Python 2.6.

Modification of the MFA workflow

As requested by a user of AWS-recipes, I modified the workflow for MFA-protected API access so that long-lived credentials are no longer stored in a separate file. As a result, the .aws/credentials.no-mfa file is no longer supported and all credentials are stored in the standard AWS credentials file under .aws/credentials. Usage of the existing tools remains unchanged, but the long-lived credentials are now accessible via a new profile name: profile_name-nomfa. This allows users to work with both STS and long-lived credentials if need be.

If you had already configured your environment to work with MFA-protected API access, you will need to copy your long-lived credentials back to the .aws/credentials file. This can be done with a simple command such as the following:

cat ~/.aws/credentials.no-mfa | sed -e 's/]$/-nomfa]/g' >> ~/.aws/credentials

Support for assumed-role credentials

With this new workflow implemented, I created a new recipe that allows configuration of role credentials in the .aws/credentials file. When the following command is run, it uses the credentials associated with the isecpartners profile to request role credentials for the IAM-Scout2 role. The role credentials are then written to the .aws/credentials file under a new profile named isecpartners-Scout2, which is the profile name with the role session name appended.

$ ./aws_recipes_assume_role.py --profile isecpartners --role-arn arn:aws:iam::AWS_ACCOUNT_ID:role/IAM-Scout2 --role-session-name Scout2

Users can then use their favorite tools that support profiles. For example, Scout2 could be run with the following command line:

$ ./Scout2.py --profile isecpartners-Scout2

Note that this recipe supports MFA if the assumed role requires it:

  • If you never configured your environment to work with MFA, you can provide your MFA serial number (ARN) and current token code as arguments.
  • If you already configured your environment to work with MFA and stored your MFA serial in the .aws/credentials file, you just need to pass your token code as an additional argument.
  • Finally, if you already initiated an STS session, you do not need to provide a new token code and can run the command as above.
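The profile-writing step described above (storing role credentials under the profile name with the session name appended) might look like the following sketch, which uses configparser and leaves out the sts:AssumeRole call that produces the credentials. write_role_profile is an illustrative name, not the recipe's actual code:

```python
import configparser

def write_role_profile(credentials_path, profile_name, session_name, sts_credentials):
    """Write STS role credentials to the AWS credentials file under a
    '<profile>-<session>' profile name (the sts:AssumeRole call that
    produces sts_credentials is omitted from this sketch)."""
    config = configparser.ConfigParser()
    config.read(credentials_path)  # a missing file yields an empty config
    target_profile = "%s-%s" % (profile_name, session_name)
    config[target_profile] = {
        "aws_access_key_id": sts_credentials["AccessKeyId"],
        "aws_secret_access_key": sts_credentials["SecretAccessKey"],
        "aws_session_token": sts_credentials["SessionToken"],
    }
    with open(credentials_path, "w") as f:
        config.write(f)
    return target_profile
```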

Conclusion

With the release of opinel, I hope to simplify distribution and management of the code shared between Scout2 and AWS-recipes. Additionally, I significantly modified the workflow and credentials storage when working with MFA-protected API calls, which allows users to use both their long-lived and STS credentials.


IAM user management strategy (part 2)

Published on June 9, 2015
[Originally published on iSEC Partners's research blog]

The previous IAM user management strategy post discussed how usage of IAM groups enables AWS administrators to consistently grant privileges and enforce a number of security rules (such as MFA-protected API access). This blog post will build on this idea by introducing category groups and documenting new tools to improve IAM user management.

Categorize your IAM users

For a variety of reasons, applying a single set of security rules to all IAM users is not always practical. For example, because many applications running in AWS predate IAM roles, numerous environments still rely on the existence of headless IAM users. Additionally, third parties may be granted access to an AWS account for a number of reasons but may not be able to comply with the same set of security rules that employees follow. For this reason, NCC recommends using category groups to sort IAM users and reliably enforce appropriate security measures. For example, one group for all human users and a second for all headless users may be created: MFA-protected API access and password management are not relevant for headless users. Furthermore, human users may be categorized into several groups such as employees and contractors: API access can be restricted to the corporate IP range for employees but might not be achievable for contractors.

Note 1: The set of category groups should define all types of IAM users that may exist in your AWS account and each IAM user should belong to one -- and only one -- category group (they may belong to other groups though).

Note 2: The common group and category groups should be used to enable enforcing security in one's AWS environment. Policies attached to these groups should be carefully reviewed and grant the minimum set of privileges necessary for this type of IAM user (e.g. credential management for humans).

Example of category groups

The rest of this article describes a number of tools developed and used by NCC to help implement this IAM user management strategy. These tools can be found in the AWS-Recipes repository. We will use our test AWS environment as an example, in which we use three category groups in addition to the AllUsers common group:

  1. AllHumanUsers, the group all employees must belong to.
  2. AllHeadlessUsers, the group all headless IAM users must belong to.
  3. AllMisconfiguredUsers, a placeholder for sample misconfigured users.

We also have an IAM user naming convention that requires usernames to match the following schema:

  1. Employees: first name initial followed by the last name
  2. Headless users: name of the service prefixed with HeadlessUser-
  3. Misconfigured users: description of the misconfiguration prefixed with MisconfiguredUser-

Based on these rules, we created a configuration file stored under .aws/recipes/isecpartners.json, with isecpartners matching the profile's name. If you do not use profiles, the configuration will be under .aws/recipes/default.json.

{
    "common_groups": [ "AllUsers" ],
    "category_groups": [
        "AllHumanUsers",
        "AllHeadlessUsers",
        "AllMisconfiguredUsers"
    ],
    "category_regex": [
        "",
        "^HeadlessUser-(.*)",
        "^MisconfiguredUser-(.*)"
    ],
    "profile_name": [ "isecpartners" ]
}

This configuration file declares the name of the common IAM group and two lists related to the categorization of IAM users:

  1. A list of category groups.
  2. A list of regular expressions matching our naming convention.

Note 1: If you do not have a naming convention in place to distinguish the type of user, remove the category_regex attribute from your configuration file.

Note 2: If a regular expression is only applicable to a subset of category groups, you must ensure that both lists have the same length and use an empty string for groups that cannot be automatically associated (see the AllHumanUsers group in our example).

Note 3: Use of a configuration file is not necessary as all values may be passed as command line arguments. If a configuration file exists and a value is passed as an argument, the value passed via the command line will be used.
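The matching logic described in these notes can be sketched as follows (an illustrative simplification, not the recipe's actual code):

```python
import re

def category_group_for(username, category_groups, category_regex):
    """Return the category group whose naming-convention regex matches
    the username, or None when manual selection is required.
    Empty strings in category_regex mark groups with no automatic rule."""
    for group, pattern in zip(category_groups, category_regex):
        if pattern and re.match(pattern, username):
            return group
    return None
```

With the example configuration, a username such as MisconfiguredUser-BlogPostExample would be routed to AllMisconfiguredUsers, while an employee username like jsmith matches no regex and falls through to the manual prompt.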

Create your default groups with aws_iam_create_default_groups.py

The purpose of this tool is to create IAM groups whose name matches the common and category groups specified in the above configuration file. Running the following command results in four new groups being created if they did not already exist.

./aws_iam_create_default_groups.py --profile isecpartners

(Automatically) sort IAM users with aws_iam_sort_users.py

This tool iterates through all IAM users and attempts to automatically detect the IAM groups each user should belong to. For convenience, we recommend adding the following to your AWS recipes configuration files:

"aws_iam_sort_users.py": {
    "create_groups": false,
    "force_common_group": true
}

This specifies default values for additional arguments to be set when running aws_iam_sort_users.py. Specifically, with these values, running this tool will automatically add all IAM users to the common group AllUsers and will not attempt to create the default groups (not necessary as we already did this). Additionally, this tool checks that each IAM user belongs to one of the category groups. If this is not the case and the username matches a regular expression, the user is automatically added to the matching category group. Otherwise, a multi-choice prompt appears to allow manual selection of the appropriate category group.

Additional advantages of configuration files

Besides simplifying the usage of these tools, this new AWS-recipes configuration file can be used across tools, allowing for more consistent rule enforcement. For example, the aws_iam_create_user.py tool uses this configuration file and applies the same business logic to add users to the common group and the appropriate category group at user creation time. In our test environment, for example, running the following command automatically added the new user to the AllMisconfiguredUsers group:

$ ./aws_iam_create_user.py --profile isecpartners --users MisconfiguredUser-BlogPostExample
Creating user MisconfiguredUser-BlogPostExample...
Save unencrypted value (y/n)? y
User 'MisconfiguredUser-BlogPostExample' does not belong to the mandatory common group 'AllUsers'. Do you want to remediate this now (y/n)? y
User 'MisconfiguredUser-BlogPostExample' does not belong to any of the category group (AllHumanUsers, AllHeadlessUsers, AllMisconfiguredUsers). Automatically adding...
Enabling MFA for user MisconfiguredUser-BlogPostExample...

Conclusion

While efficient and reliable management of IAM users can be challenging, using the right strategy and tools significantly simplifies this process. Creation and use of a naming convention for IAM users enables automated user management and enforcement of security rules.