Delete Unattached EBS Volumes with Python

Continuing our series on ways to reduce your AWS bill, one source of unexpected charges is unattached and forgotten EBS volumes. While not typically the most expensive line item on the bill, all of those little volumes can add up when deploying across multiple accounts and regions.

For months, I remember getting bills for nominal amounts due to unattached EBS volumes, going into the AWS console to delete them, only to realize the next month that I'd forgotten one.

[Image: My sad AWS bill]

Stop it before it starts - DeleteOnTermination

One proactive way of ensuring that EBS volumes are deleted when your EC2 instance is terminated is by ensuring that the DeleteOnTermination attribute is set to true for all volumes that you don’t want to stick around and add up.

To change the DeleteOnTermination attribute, use the following command:

# TODO: Insert your instance id
$ aws ec2 modify-instance-attribute --instance-id i-1234567890abcdef0 --block-device-mappings file://mapping.json

With the following mapping.json:

[
  {
    "DeviceName": "/dev/sda1",
    "Ebs": {
      "DeleteOnTermination": true
    }
  }
]

For more information, check out the AWS EC2 documentation on block device mappings. By default, the root volume already has DeleteOnTermination set to true.
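If you'd rather generate mapping.json programmatically than hand-edit it, here's a minimal sketch. The helper function name, device name, and output path are assumptions for illustration, not part of any AWS API:

```python
import json

# Hypothetical helper: build the block-device-mappings payload that
# `aws ec2 modify-instance-attribute` expects to find in mapping.json.
def build_mapping(device_name='/dev/sda1', delete_on_termination=True):
    return [
        {
            'DeviceName': device_name,
            'Ebs': {'DeleteOnTermination': delete_on_termination},
        }
    ]

# Write the mapping file referenced by the CLI command above.
with open('mapping.json', 'w') as f:
    json.dump(build_mapping(), f, indent=2)
```

This keeps the device name in one place if you script the change across many instances.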

Running a Python script to audit and delete unattached EBS volumes

OK, that's great. But what about all the EBS volumes lying around my AWS account right now? Here's a Python script that solves this problem in 3 parts:

  1. Listing and describing each of your EBS volumes in a given AWS account and region
  2. Filtering the volumes for those that don’t have any attachments
  3. Deleting the unattached EBS volumes

Let’s get started.

Step 1: Listing and describing EBS volumes

First, we configure a boto3 session to connect to our AWS account so we can start listing EBS volumes.

import boto3

sess = boto3.Session(
    # TODO: Supply your AWS credentials & specified region here
    aws_access_key_id='MYAWSACCESSKEYID',
    aws_secret_access_key='MYSECRETACCESSKEY',
    region_name='us-east-1',  # Or whatever region you want
)

Next, we make a call to get all EC2 volumes:

ec2 = sess.resource('ec2')
volumes = ec2.volumes.all()

This returns a collection of EBS volume objects for all volumes in the specified account and region.

Step 2: Filtering unattached EBS volumes

Next, we can filter this list and add the ones that are unattached to a termination list to_terminate. Check out the code below:

to_terminate = []
for volume in volumes:
    print('Evaluating volume {0}'.format(volume.id))
    print('The number of attachments for this volume is {0}'.format(len(volume.attachments)))

    # Here's where you might add other business logic for deletion criteria
    if len(volume.attachments) == 0:
        to_terminate.append(volume)

Why not just delete the volume in the same step? By decoupling the logic of filtering from the deletion logic, we can take different types of actions for volumes with different attributes.

For example, we might delete all the unattached volumes, but we might want to send a Slack notification for all of the volumes that are not encrypted (out of scope for this post).
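As a sketch of that idea, here's how the filtering step could branch into different actions. The stand-in volume class is hypothetical test data; real boto3 Volume resources expose the same attachments and encrypted attributes used below:

```python
def categorize(volumes):
    """Split volumes into a deletion list and a notification list.

    Works on any objects exposing .attachments and .encrypted,
    matching the boto3 Volume resource attributes used above.
    """
    to_terminate, to_notify = [], []
    for volume in volumes:
        if len(volume.attachments) == 0:
            to_terminate.append(volume)
        if not volume.encrypted:
            to_notify.append(volume)  # e.g. send a Slack notification
    return to_terminate, to_notify

# Stand-in volume objects for illustration (hypothetical):
class FakeVolume:
    def __init__(self, attachments, encrypted):
        self.attachments = attachments
        self.encrypted = encrypted

unattached = FakeVolume([], encrypted=True)
attached_plain = FakeVolume([{'InstanceId': 'i-123'}], encrypted=False)
terminate, notify = categorize([unattached, attached_plain])
```

Keeping the categorization pure like this also makes it easy to unit test without touching AWS.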

Step 3: Deleting the volumes

The deletion logic is simple. Each volume has a delete() method to delete the volume. We check for the empty condition, then delete all the volumes that are on the termination list.

if len(to_terminate) == 0:
    print("No volumes to terminate! Exiting.")
    exit()

for volume in to_terminate:
    print('Deleting volume {0}'.format(volume.id))
    volume.delete()
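In practice, delete() can fail, for example if a volume was attached between the filtering and deletion steps, so you may want to log failures instead of aborting the whole run. A minimal sketch, using hypothetical stand-in volume objects to illustrate the pattern:

```python
def delete_volumes(to_terminate):
    """Delete each volume, collecting failures instead of aborting."""
    deleted, failed = [], []
    for volume in to_terminate:
        try:
            volume.delete()
            deleted.append(volume.id)
        except Exception as exc:  # boto3 raises a botocore ClientError here
            print('Could not delete {0}: {1}'.format(volume.id, exc))
            failed.append(volume.id)
    return deleted, failed

# Stand-in volume objects for illustration (hypothetical):
class StubVolume:
    def __init__(self, vol_id, fail=False):
        self.id = vol_id
        self.fail = fail
    def delete(self):
        if self.fail:
            raise RuntimeError('VolumeInUse')

ok = StubVolume('vol-1')
busy = StubVolume('vol-2', fail=True)
deleted, failed = delete_volumes([ok, busy])
```

Returning both lists lets a wrapping workflow report exactly which volumes still need attention.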

You can find the whole Python script here.

Conclusion

That’s it! Want to run this workflow on a schedule? Invite other people on your team to kick off workflows? Notify your team in Slack when volumes have been deleted? That’s why we built Relay.

To learn more about our mission and product, sign up for our updates on relay.sh. Our mission is to free you from tedious cloud-native workflows with event-driven automation! For more content like this, please follow our blog.