Disk Usage

"Disk full" errors are never fun. Let's learn to check disk space before it's a crisis.

df - Disk Free

Shows free space on mounted filesystems:

Terminal
$df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   32G   16G  67% /
/dev/sda2       100G   45G   50G  48% /home
tmpfs           3.9G     0  3.9G   0% /tmp

-h makes the sizes human-readable (G and M suffixes instead of raw 1K-block counts).

Important Columns

Column        Meaning
Filesystem    Disk partition (device)
Size          Total capacity
Used          Space used
Avail         Space available
Use%          Percentage used
Mounted on    Where it's accessible

90% Warning

When Use% hits 90%+, you should investigate. At 100%, writes fail and services start crashing.
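As a rough sketch, that 90% check can be automated. The `check_usage` helper name and the threshold of 90 are my own choices here; `df -P` requests the portable (POSIX) output format so the columns are stable enough to parse:

```shell
#!/bin/sh
# Sketch: warn when any filesystem crosses a usage threshold.
# With `df -P`, field $5 is always Use% and $6 is the mount point.
check_usage() {    # reads `df -P` output on stdin; $1 = threshold
    awk -v limit="${1:-90}" 'NR > 1 {
        use = $5
        sub(/%/, "", use)                   # strip the % sign
        if (use + 0 >= limit)
            printf "WARNING: %s at %s%%\n", $6, use
    }'
}

df -P | check_usage 90
```

Dropped into a cron job, a script like this catches the 90% mark before it becomes the 100% outage.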

Check Specific Mount

Terminal
$df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        50G   32G   16G  67% /
$df -h /home
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda2       100G   45G   50G  48% /home

Inodes

Every file also consumes an inode (the metadata entry that describes it). A filesystem has a fixed number of inodes, so millions of small files can exhaust them even while df -h still shows free space:

Terminal
$df -i
Filesystem      Inodes  IUsed   IFree IUse% Mounted on
/dev/sda1      3276800 145234 3131566    5% /
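When inodes are the problem, you need to find the directory with the most files, not the most bytes. One way is a small sketch like the one below; `count_entries` is a made-up helper name, and GNU du also offers a direct `du --inodes` option if your coreutils is recent enough:

```shell
#!/bin/sh
# Sketch: find which subdirectory holds the most files,
# i.e. consumes the most inodes.
count_entries() {   # $1 = parent directory
    for dir in "$1"/*/; do
        [ -d "$dir" ] || continue
        # every file, directory, and symlink costs one inode
        n=$(find "$dir" 2>/dev/null | wc -l)
        printf '%8d %s\n' "$((n))" "$dir"
    done | sort -nr
}

count_entries /var | head -5
```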

du - Disk Usage

Shows how much space files/directories use:

Terminal
$du -sh /var/log
1.2G /var/log

-s = summary (total only), -h = human-readable.

Directory Breakdown

Terminal
$du -h /var/log --max-depth=1
500M    /var/log/nginx
300M    /var/log/mysql
200M    /var/log/syslog
1.2G    /var/log

Find Large Directories

Terminal
$du -h / --max-depth=1 2>/dev/null | sort -hr | head -10
32G     /
15G     /var
8G      /home
5G      /usr
2G      /opt

This shows the biggest directories at the top.

Practical Scenarios

Disk Full - Find the Culprit

Terminal
$# Start at root, find big directories
$du -h / --max-depth=1 2>/dev/null | sort -hr | head
$
$# /var looks big, drill down
$du -h /var --max-depth=1 | sort -hr | head
$
$# /var/log is the problem
$du -h /var/log --max-depth=1 | sort -hr | head

Find Large Files

Terminal
$find /var -type f -size +100M -exec ls -lh {} \; 2>/dev/null
-rw-r--r-- 1 root root 500M /var/log/nginx/access.log
-rw-r--r-- 1 root root 200M /var/log/mysql/slow.log

Check Docker

Docker can eat disk space silently:

Terminal
$docker system df
TYPE          SIZE
Images        5.2GB
Containers    500MB
Build Cache   2.1GB
$docker system prune
(removes stopped containers, unused networks, dangling images, and build cache; add -a to also remove all unused images)

Common Disk Hogs

  • /var/log - Log files
  • /var/lib/docker - Docker images/containers
  • /home - User data
  • /tmp - Temporary files (should auto-clean)
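To check all of the usual suspects in one pass, a small loop works; this is a sketch, `report_dirs` is a hypothetical helper name, and the path list is just the defaults above (adjust for your system):

```shell
#!/bin/sh
# Sketch: report the size of each common disk hog that exists.
report_dirs() {
    for dir in "$@"; do
        [ -d "$dir" ] || continue    # skip paths not present here
        du -sh "$dir" 2>/dev/null
    done
}

report_dirs /var/log /var/lib/docker /home /tmp
```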

Modern Alternative: ncdu

ncdu (NCurses Disk Usage) provides an interactive interface for exploring disk usage:

Terminal
$sudo apt install ncdu
$ncdu /

Navigate with arrow keys, delete files with d. Much easier than piping du through sort.

ncdu is a Lifesaver

When you need to quickly find what's eating disk space, ncdu is the fastest way. It shows a sorted, navigable view of disk usage. Delete files directly from the interface.

Knowledge Check

Which command shows disk usage of a directory with human-readable sizes?

Quick Reference

Command                    Shows
df -h                      Free space on filesystems
df -i                      Inode usage
du -sh path                Size of a directory
du -h --max-depth=1        Size of subdirectories
du -h | sort -hr | head    Largest directories

Key Takeaways

  • df -h shows filesystem free space
  • du -sh shows directory size
  • Sort by size to find space hogs: | sort -hr | head
  • Watch for 90%+ usage on critical filesystems
  • Logs and Docker are common culprits

Next: checking memory usage.