Disk space shortages on the root partition can lead to system instability and service failures. This guide explains how to resolve this issue by expanding the root partition and filesystem on an AWS EC2 instance. It also includes additional steps to fix partition table issues and ensure data safety.
Scenario
The root partition (/) was almost full:
Filesystem Size Used Avail Use% Mounted on
/dev/root 6.8G 6.6G 118M 99% /
The root volume size was increased in AWS to 15 GiB, but the partition and filesystem still reflected the original size of 7 GiB. We also encountered a GPT PMBR size mismatch and warnings about the backup GPT table not being at the disk's end.
Steps to Resolve
Step 1: Backup the Instance (Recommended)
Before making any changes to the disk or partition table, take a snapshot of the root volume:
Open the AWS EC2 Console.
Navigate to Volumes.
Select the root volume attached to your instance.
Click Actions > Create Snapshot.
Wait for the snapshot to complete.
This ensures that you can recover your data in case of an issue.
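If you prefer working from the command line, the same snapshot can be taken with the AWS CLI. This is a minimal sketch, assuming the CLI is already configured; the volume and snapshot IDs below are placeholders:
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "Backup before root partition resize"
aws ec2 wait snapshot-completed --snapshot-ids snap-0123456789abcdef0
The second command simply waits until the snapshot reaches the completed state before you proceed.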
Step 2: Install Required Tools
Update the package index and install gdisk to manage the GPT partition table:
sudo apt update
sudo apt install gdisk
Step 3: Fix GPT Table Issues
Use gdisk to resolve the GPT table warnings:
sudo gdisk /dev/nvme0n1
You may see warnings like:
GPT PMBR size mismatch (16777215 != 31457279) will be corrected by write.
The backup GPT table is not on the end of the disk.
At the gdisk prompt, type w to write the changes and fix the GPT table, then confirm when prompted.
This ensures the partition table is consistent and spans the full disk.
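If you would rather avoid the interactive prompt, the gdisk package also ships sgdisk, whose -e option relocates the backup GPT structures to the end of the enlarged disk. A non-interactive sketch, assuming the same device name:
sudo sgdisk -e /dev/nvme0n1
sudo sgdisk -v /dev/nvme0n1
The second command verifies the partition table and should report no problems once the backup header has been moved.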
Step 4: Resize the Partition
Expand the root partition to utilize the entire disk:
sudo growpart /dev/nvme0n1 1
Output:
CHANGED: partition=1 start=2099200 old: size=14677983 end=16777183 new: size=27263247 end=29362447
Verify the updated partition size:
lsblk
Output:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
nvme0n1 259:0 0 15G 0 disk
├─nvme0n1p1 259:1 0 15G 0 part /
...
The root partition (/dev/nvme0n1p1) now reflects the increased size.
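For reference, growpart is provided by the cloud-guest-utils package on Ubuntu/Debian, and it supports a dry run if you want to preview the change before committing it:
sudo apt install cloud-guest-utils
sudo growpart --dry-run /dev/nvme0n1 1
The dry run reports what would change without modifying the partition table.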
Step 5: Resize the Filesystem
Resize the filesystem to use the new partition size:
sudo resize2fs /dev/nvme0n1p1
Output:
resize2fs 1.46.5 (30-Dec-2021)
Filesystem at /dev/nvme0n1p1 is mounted on /; on-line resizing required
old_desc_blocks = 1, new_desc_blocks = 2
The filesystem on /dev/nvme0n1p1 is now 3679028 (4k) blocks long.
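Note that resize2fs only works on ext2/ext3/ext4 filesystems. If the root filesystem were XFS instead (as on some distributions), the equivalent step would be to grow it by mount point:
sudo xfs_growfs /
You can check the filesystem type beforehand with df -T / or lsblk -f.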
Step 6: Verify Changes
Check the filesystem and partition sizes:
df -h
Output:
Filesystem Size Used Avail Use% Mounted on
/dev/root 14G 6.6G 7.4G 48% /
The root filesystem now provides roughly 14 GiB of usable space, and usage has dropped from 99% to 48%.
Step 7: Restart Affected Services
If any services were impacted by the disk space issue, restart them. For example:
sudo systemctl restart jenkins
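It is worth confirming the service came back up cleanly before moving on, for example:
sudo systemctl status jenkins --no-pager
systemctl is-active jenkins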
Step 8: Monitor Disk Usage
Finally, ensure disk usage is normal and no other issues remain:
df -h
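If usage creeps up again, a quick way to see where the space is going is to list the largest top-level directories on the root filesystem (the -x flag keeps du from crossing into other mounted filesystems):
sudo du -xh --max-depth=1 / | sort -h | tail -n 10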
Summary of Key Commands
Backup the instance: Create a snapshot of the root volume via AWS Console.
Fix GPT table issues:
sudo gdisk /dev/nvme0n1
Resize the partition:
sudo growpart /dev/nvme0n1 1
Expand the filesystem:
sudo resize2fs /dev/nvme0n1p1
Verify disk space:
df -h
Restart services:
sudo systemctl restart <service-name>
By following these steps, we successfully expanded the root partition and resolved the disk space issue, ensuring smooth system operation. Always remember to back up critical data before making changes to disk or partition configurations.