- 10 Simple Hacks That Will Make You More Productive When Using AWS
- 1. Use the --dryrun Flag in the AWS CLI Before Performing Any Task on Production Resources
- 2. Use --dryrun To Check if Two S3 Buckets Are in Sync
- 3. Use --dryrun To Check if an S3 Bucket and a Local Folder Are in Sync
- 4. Quickly Download Many Files From S3
- 5. Use the --profile Flag in the AWS CLI To Manage Multiple Accounts
- 6. Pin Your Most Commonly Used Services
- 7. Billing Federation
- 8. Monitoring Your Resources
- 9. Learn CloudFormation or Terraform
- 10. Use SQL in AWS Config To Query Metadata About Your AWS Resources
- Summary
10 Simple Hacks That Will Make You More Productive When Using AWS
If you work a lot with AWS, you have probably realized that literally everything on AWS is an API call. As such, everything can be automated.
This article will discuss several tricks that will save you time when performing everyday tasks in the AWS cloud. Make sure to read until the end because I saved the most interesting one for last.
1. Use the --dryrun Flag in the AWS CLI Before Performing Any Task on Production Resources
If you ever have to perform some S3 migration tasks or want to make changes to other existing AWS resources via the command line, it’s useful to leverage the --dryrun flag to ensure that your CLI operation does what you expect. For instance, before uploading a bunch of CSV files to S3, you may want to check if your command only operates on CSV files and doesn't move other files to S3:
aws s3 cp /path/to/local/files/ s3://demo-datasets/path/ --recursive --exclude "*" --include "*.csv" --dryrun
Thanks to the --dryrun flag, the command-line output provides information about which files will be copied to S3. This output is helpful to validate that our command works as expected.
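With the flag in place, nothing is actually uploaded; the CLI simply lists the operations it would perform, looking roughly like this (the file name is hypothetical):
(dryrun) upload: /path/to/local/files/sales.csv to s3://demo-datasets/path/sales.csv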
2. Use --dryrun To Check if Two S3 Buckets Are in Sync
Imagine a scenario where you have to migrate hundreds of files from a region in Ireland to one in Frankfurt. After moving the objects to the new S3 bucket, you may want to check whether all the files have been properly copied over to the new region with no errors.
By using the aws s3 sync command, you can check whether S3 buckets (or specific paths) are in sync:
aws s3 sync s3://bucket1 s3://bucket2 --dryrun
If there are any differences, the --dryrun flag will show them in the console without transferring anything. By using this trick, you can easily determine the difference between the contents of two buckets without laboriously comparing the files yourself. And you don’t even have to download those files to perform the check.
3. Use --dryrun To Check if an S3 Bucket and a Local Folder Are in Sync
Similarly to the previous scenario, we can use the same command when comparing files from S3 to those from a local computer or a remote host:
aws s3 sync /Users/.../path/ s3://mybucket --dryrun
This command is particularly useful if you have to sync only specific file types and folders with S3.
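For example, here is a sketch of syncing only Parquet files from a local folder while previewing the operation first (the local path and file type are placeholders):
aws s3 sync /path/to/local/data/ s3://mybucket --exclude "*" --include "*.parquet" --dryrun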
4. Quickly Download Many Files From S3
The last S3 command is aws s3 cp, which, combined with the --recursive flag, allows us to download all files from a specified S3 path:
aws s3 cp s3://mybucket /Users/anna/path/ --recursive
In the same way, we can upload a bunch of files while specifying the S3 storage class that is most suitable for those objects:
aws s3 cp /Users/.../path/ s3://mybucket --recursive --storage-class STANDARD
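For instance, if the uploaded files will only be accessed occasionally, a cheaper storage class such as STANDARD_IA may be a better fit (a sketch with a placeholder local path):
aws s3 cp /path/to/local/data/ s3://mybucket --recursive --storage-class STANDARD_IA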
5. Use the --profile Flag in the AWS CLI To Manage Multiple Accounts
Imagine that you have different IAM users for your dev and prod environments. Switching between those two can be painful unless you utilize the --profile flag. If you have configured your CLI, you should find a plain-text credentials file at one of the following locations:
- ~/.aws/credentials on Linux and macOS
- %USERPROFILE%\.aws\credentials on Windows
A useful setup is to set your dev credentials as your default profile. Then, prod would have to be specified explicitly. Here is how the credentials file could look:
[default]
aws_access_key_id=ASDFGHJJKLQWERTZUIOP
aws_secret_access_key=YXCVBNMasdfghjkl123456789
[prod]
aws_access_key_id=POIUZTREWQLKJHGFDSA
aws_secret_access_key=MNBVCXYasdfghjkl987654321
You can create any new profile by using this command:
aws configure --profile yourProfileName
By using the command below, we will receive the results for the default (dev) profile:
aws s3 ls
In contrast, when adding --profile prod, the result will show only production resources:
aws s3 ls --profile prod
6. Pin Your Most Commonly Used Services
If you use only a few services on a regular basis, you can mark them as favorites. Given the ever-growing number of AWS services, this can save you time navigating the console. The “Recently visited” section (shown below “Favorites”) provides a similar overview.
However, since AWS implemented the excellent search bar, you may consider the “Favorites” bar a bit redundant.
7. Billing Federation
After creating your AWS account, you are signed in as the account owner (i.e., the root user). AWS recommends that we never use this account for everyday activities. Instead, we should create our first IAM user and use it for working with AWS. We would then sign in as the root user only to perform account management tasks such as changing our account or payment details.
If you follow this best practice, you may still end up signing in as the root user every now and then to check your AWS bill because, by default, billing information is only available to the root user.
However, there is a better way. You can grant the Billing and Cost Management console access to your IAM user. Once you’ve done this, you will be able to access the “Billing” console from your non-root user. Here is how you can “federate” billing access to an IAM user.
First, you have to activate IAM access in the account settings. Then, you can attach a Billing policy to your IAM user, choosing between full access and read-only access.
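If you prefer the command line, attaching the AWS-managed Billing job function policy could look roughly like this (the user name jane is a placeholder):
aws iam attach-user-policy --user-name jane --policy-arn arn:aws:iam::aws:policy/job-function/Billing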
For a more detailed description of the billing federation, see the AWS docs.
8. Monitoring Your Resources
If you work with AWS Lambda, SQS queues, SNS topics, DynamoDB tables, Kinesis Data Streams, AWS Step Functions, or ECS services, you may have realized that monitoring serverless resources can be challenging. While CloudWatch centralizes logging into a single service, switching between dozens of log groups to analyze the performance of a single application can be difficult and time-consuming. The same is true for error notifications.
To improve your development experience and save time, you can start using some external serverless observability platform, such as Dashbird.
9. Learn CloudFormation or Terraform
Infrastructure as Code has gained a lot of momentum in recent years, and for very good reasons. Once you learn how to programmatically deploy or modify your resources, you can become much more productive when using AWS.
For instance, imagine that you built all your resources for a development environment with a CloudFormation template or a Terraform configuration file. In order to build the same resources for production, you would only need to change specific values in the declarative file to replicate the setup for the new environment. Additionally, you can apply the same programmatically defined infrastructure setup to a new project or to recovery scenarios.
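As an illustration, deploying the same CloudFormation template to both environments could look roughly like this (the template file, stack names, and Environment parameter are hypothetical; the prod profile from tip 5 is reused):
aws cloudformation deploy --template-file stack.yaml --stack-name myapp-dev --parameter-overrides Environment=dev
aws cloudformation deploy --template-file stack.yaml --stack-name myapp-prod --parameter-overrides Environment=prod --profile prod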
10. Use SQL in AWS Config To Query Metadata About Your AWS Resources
AWS Config allows you to view all your cloud resources at a glance, track how their configuration changes over time, and establish configuration rules that automatically check whether your services match the desired configuration settings. Any violation of the rules you defined will trigger an alert informing you about non-compliant resources.
But AWS Config is not only a great resource to enforce compliance. It also gives you an overview of all resources in your AWS account. One of the most impressive features that I’ve recently encountered on AWS is the SQL query editor within AWS Config. It allows you to easily group your resources by service or filter for only resources from a specific region.
For instance, in the query below, we are retrieving all resources with their corresponding ID, region name, time of creation, tags, and current state while filtering out all network and security group resources:
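A rough sketch of what such a query could look like, run here through the CLI rather than the console editor (the field names follow the AWS Config resource schema; the exact resource-type filter is an assumption):
aws configservice select-resource-config --expression "SELECT resourceId, resourceType, awsRegion, resourceCreationTime, tags, configurationItemStatus WHERE resourceType NOT IN ('AWS::EC2::NetworkInterface', 'AWS::EC2::SecurityGroup')"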
In the AWS Management Console’s query editor, the output shows all resources that match the query conditions.
One of the coolest things in this SQL editor is that you can query for resources with a specific tag. Typically, we use tags as key-value pairs to associate resources with a specific project or organizational unit, which is beneficial for cost allocation. But tags can also be leveraged to ensure that you terminate all resources when a specific project is finished. As an example, here is how we can find all resources associated with the tag “medium”:
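A sketch of such a tag-based query, again via the CLI (whether “medium” is used as the tag key or the tag value depends on your tagging convention; here it is assumed to be the value):
aws configservice select-resource-config --expression "SELECT resourceId, resourceType, awsRegion, tags WHERE tags.value = 'medium'"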
For more SQL examples, check out the AWS Config documentation.
Summary
In this article, we looked at ten useful tricks to save time when using AWS resources.
The --dryrun flag is useful to test any CLI operation before performing it on live resources. Similarly, the --profile flag can be beneficial if you regularly need to switch between several AWS accounts (for instance, dev and prod). Billing federation allows you to see your AWS costs even as a non-root user. Dashbird allows you to visualize, monitor, and observe the state of your serverless resources with just a one-off two-minute setup. AWS CloudFormation and Terraform equip you with building blocks to automate the provisioning and modification of resources by using Infrastructure as Code.
Finally, AWS Config allows you to query all your AWS resources and their state using simple SQL queries so that you can efficiently keep track of your resources.
Thank you for reading!