Expanding Root Partition on AWS EC2: Resolving Disk Space Issues

Disk space shortages on the root partition can lead to system instability and service failures. This guide explains how to expand the root partition and filesystem on an AWS EC2 instance, fix GPT partition table warnings along the way, and protect your data with a snapshot before making any changes.


Scenario

The root partition / was almost full:

Filesystem       Size  Used Avail Use% Mounted on
/dev/root        6.8G  6.6G  118M  99% /

The root volume size was increased in AWS to 15 GiB, but the partition and filesystem still reflected the original size of 7 GiB. We also encountered a GPT PMBR size mismatch and warnings about the backup GPT table not being at the disk's end.
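Before touching anything, you can confirm the mismatch with lsblk: the disk should already report the new 15 GiB size while the root partition still shows roughly the old 7 GiB (the device name below matches this instance; yours may differ):

lsblk /dev/nvme0n1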


Steps to Resolve

Step 1: Back Up the Instance

Before making any changes to the disk or partition table, take a snapshot of the root volume:

  1. Open the AWS EC2 Console.

  2. Navigate to Volumes.

  3. Select the root volume attached to your instance.

  4. Click Actions > Create Snapshot.

  5. Wait for the snapshot to complete.

This ensures that you can recover your data in case of an issue.
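If you prefer the AWS CLI, the same backup can be taken in two commands (the volume ID below is a placeholder; substitute your own, and use the snapshot ID returned by the first command in the second):

# create a snapshot of the root volume (vol-... is a placeholder ID)
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Pre-resize backup of root volume"

# wait until the snapshot from the previous command reaches the completed state
aws ec2 wait snapshot-completed --snapshot-ids <snapshot-id-from-output>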


Step 2: Install Required Tools

Update the package index and install gdisk to manage the GPT partition table. The growpart utility used in Step 4 is provided by the cloud-guest-utils package, which is preinstalled on most Ubuntu cloud images; install it as well if it is missing:

sudo apt update
sudo apt install gdisk cloud-guest-utils

Step 3: Fix GPT Table Issues

Use gdisk to resolve GPT table warnings:

sudo gdisk /dev/nvme0n1

You may see warnings like:

GPT PMBR size mismatch (16777215 != 31457279) will be corrected by write.
The backup GPT table is not on the end of the disk.

Follow these steps:

  1. Type w to write changes and fix the GPT table.

  2. Confirm when prompted.

This step ensures the partition table is consistent and uses the full disk.
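As an alternative to answering gdisk's prompts, sgdisk (installed alongside gdisk) can move the backup GPT structures to the end of the enlarged disk non-interactively; a minimal sketch for the same device:

# -e relocates the backup GPT header and table to the end of the disk
sudo sgdisk -e /dev/nvme0n1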


Step 4: Resize the Partition

Expand the root partition to utilize the entire disk:

sudo growpart /dev/nvme0n1 1

Output:

CHANGED: partition=1 start=2099200 old: size=14677983 end=16777183 new: size=27263247 end=29362447

Verify the updated partition size:

lsblk

Output:

NAME         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
nvme0n1      259:0    0   15G  0 disk
├─nvme0n1p1  259:1    0   15G  0 part /
...

Now, the root partition (/dev/nvme0n1p1) reflects the increased size.
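If you want to preview such a change before committing it, growpart supports a dry run:

# report what would change without actually modifying the partition table
sudo growpart --dry-run /dev/nvme0n1 1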


Step 5: Resize the Filesystem

Resize the filesystem to use the new partition size:

sudo resize2fs /dev/nvme0n1p1

Output:

resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/nvme0n1p1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/nvme0n1p1 is now 3679028 (4k) blocks long.
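Note that resize2fs applies to ext2/3/4 filesystems. If your root filesystem is XFS (the default on Amazon Linux, for instance), grow it with xfs_growfs against the mount point instead:

# grow the XFS data section to fill the enlarged partition
sudo xfs_growfs -d /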

Step 6: Verify Changes

Check the filesystem and partition sizes:

df -h

Output:

Filesystem       Size  Used Avail Use% Mounted on
/dev/root         14G  6.6G  7.4G  48% /

The root partition and filesystem now reflect the expanded 14 GiB of usable space.


Step 7: Restart Affected Services

If any services were impacted by the disk space issue, restart them. For example:

sudo systemctl restart jenkins
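Then confirm the service came back up cleanly (Jenkins is simply the service affected in this incident):

sudo systemctl status jenkins --no-pager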

Step 8: Monitor Disk Usage

Finally, ensure disk usage is normal and no other issues remain:

df -h
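If usage climbs again, a quick way to see which top-level directories are consuming the most space (a generic sketch, not specific to this incident):

# -x stays on the root filesystem; sort -rh lists the largest directories first
sudo du -xh --max-depth=1 / 2>/dev/null | sort -rh | head -n 15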

Summary of Key Commands

  1. Backup the instance: Create a snapshot of the root volume via AWS Console.

  2. Fix GPT table issues:

     sudo gdisk /dev/nvme0n1
    
  3. Resize the partition:

     sudo growpart /dev/nvme0n1 1
    
  4. Expand the filesystem:

     sudo resize2fs /dev/nvme0n1p1
    
  5. Verify disk space:

     df -h
    
  6. Restart services:

     sudo systemctl restart <service-name>
    

By following these steps, we successfully expanded the root partition and resolved the disk space issue, ensuring smooth system operation. Always remember to back up critical data before making changes to disk or partition configurations.
