Since Kali is Debian-based, the methods used here apply to any Debian-based Linux system (including Ubuntu). However, since we are talking about Kali, which is usually run as 'root', most of the screenshots show the commands being run as root. If you are not logged in as root, just add 'sudo' to the beginning of every command. For example: instead of issuing the command 'apt-get clean', type 'sudo apt-get clean'.
Let us assume you get an error in Kali Linux saying that you are running out of space. In the screenshot below, my Kali is running on Oracle VirtualBox with a dynamically allocated 15 GB virtual disk. Technically I don't have to worry about disk space because the virtual disk will expand when needed, but I still want to free some space.
You get an error that you are running out of disk space in Kali |
df -h results show the entire disk is "full" |
In this example, it seems the entire (virtual) disk is full. But we still need to know which folders are the largest. So we use the Linux disk usage utility, du.
Step 2: Use 'du' to show the top 10-30 consumers of space. You can use this iteratively, starting from one folder and digging deeper into its subfolders until you are satisfied you have pinpointed which folders you need to remove/purge in order to free space.
The first du attempt is from the '/' top folder. |
The syntax is:
du -ka <directory> | sort -n -r | head -n<number to show>
The options explained:
- -ka: -k forces du to count in kilobyte blocks, while -a forces du to list all files, not just folders/subdirectories. We need -k because the "sort" pipe cannot distinguish between bytes, KB, MB, GB, etc. It only sorts the numbers from largest to smallest (or vice versa).
- sort: sorts text output. -n instructs sort to sort numerically (by numeric value rather than as strings), while -r instructs sort to show the output in reverse (from highest to lowest instead of lowest to highest).
- Lastly, head instructs the pipeline to output only the first 10 lines (by default) and discard the rest. The -n<#> option changes the count from the default 10 to the number specified.
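Putting the pipeline together, here is a self-contained sketch you can run safely. It builds a small throwaway tree under /tmp (the /tmp/du-demo path and file names are made up for this demonstration) so you can see the sorted, largest-first output without touching real system folders:

```shell
# Create a small sample tree with one large and one small file
# (/tmp/du-demo is an arbitrary throwaway directory for this demo).
mkdir -p /tmp/du-demo/big /tmp/du-demo/small
dd if=/dev/zero of=/tmp/du-demo/big/large.bin bs=1024 count=2048 2>/dev/null   # ~2 MB
dd if=/dev/zero of=/tmp/du-demo/small/tiny.bin bs=1024 count=16 2>/dev/null    # ~16 KB

# Top 5 space consumers under /tmp/du-demo, largest first.
# -k = kilobyte blocks, -a = include files, not just directories.
du -ka /tmp/du-demo | sort -n -r | head -n5
```

On a real system you would point the same pipeline at '/' first, then at whichever subfolder tops the list (e.g. /var, then /var/cache), drilling down one level per run.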
The du (sorted) results for /var/cache |
The du results for /var/cache (shown on the right) reveal that it is the apt-get archives that are consuming the most space. Specifically, the Metasploit framework archives.
Note: before I wrote this article, I already knew that I needed to purge the apt-get cache. The apt-get cache really does tend to get big, especially if you are upgrading without the 'autoremove' option. But the article shows a series of repeatable steps to perform, in case it's not the apt cache that is the culprit.
Step 3: Proceed to delete the files. You can use rm -rf <file_name> to remove the files; you can even use wildcards such as * and ?. For example, you can run rm -rf /home/archives/* to delete everything in /home/archives.
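Because a mistyped wildcard with rm -rf is unrecoverable, a prudent habit is to preview what the glob matches with ls first. A minimal sketch, using a throwaway /tmp/archives-demo directory (the path and .deb names are invented for illustration):

```shell
# Set up a throwaway directory to practice on (illustrative names only).
mkdir -p /tmp/archives-demo
touch /tmp/archives-demo/old1.deb /tmp/archives-demo/old2.deb

# 1. Preview: confirm the wildcard matches only what you expect.
ls /tmp/archives-demo/*.deb

# 2. Delete: the exact same glob, now passed to rm.
rm -rf /tmp/archives-demo/*.deb

# 3. Confirm the directory is now empty.
ls -A /tmp/archives-demo
```

The key point is that steps 1 and 2 use an identical glob, so what you saw in the preview is exactly what gets removed.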
However, since we are dealing with the apt-get archives/cache, there is a safer way of dealing with it instead of doing a rm -rf. Use apt-get clean, or apt-get autoclean. Here's the difference:
- apt-get clean removes all downloaded packages (even those not yet installed) from /var/cache/apt/archives and /var/cache/apt/archives/partial, leaving only the lock files behind.
- apt-get autoclean is a little smarter. It removes only old packages and archives that are unlikely to be used again, for example outdated packages where a newer version has already been downloaded. As a result, autoclean removes fewer archives from the apt-get cache than clean does.
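Before running either command, you can measure how much the apt cache currently occupies with a read-only du, then pick whichever cleanup suits you. A short sketch (the cleanup commands are left commented out since they require root and actually delete files):

```shell
# Read-only check: total size of the apt package cache, human-readable.
# On non-Debian or minimal systems the directory may not exist, hence the fallback.
du -sh /var/cache/apt/archives 2>/dev/null || echo "apt cache directory not present"

# To reclaim the space (run as root or with sudo):
# apt-get clean        # remove every cached .deb
# apt-get autoclean    # remove only .debs that are no longer downloadable
```

Running the du check again after a clean gives you a quick before/after comparison without needing a full df.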
apt-get clean does not show any output, but a quick df -h reveals that significant space has been freed. |
Step 4: Verify, and if necessary repeat steps 1-3. As shown in the picture above, verify by running df -h again (step 1) and doing another iterative set of du runs (step 2). Repeat steps 1-3 until you are satisfied you have purged all the unwanted files.
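The whole workflow fits on one screen. The sketch below assumes, as in this article, that the apt cache turns out to be the culprit; in your case the du target and the step-3 command will be whatever your own drill-down uncovers:

```shell
# Step 1: confirm how full the root filesystem is.
df -h /

# Step 2: find the biggest items (drill down by changing the target folder).
du -ka /var/cache | sort -n -r | head -n10

# Step 3: purge the culprit (root required; uncomment to actually run).
# apt-get clean

# Step 4: verify that space was actually freed.
df -h /
```

If df still shows the disk as full after step 4, loop back to step 2 with a different target folder and keep drilling.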