
Watchtower filling up the hard drive with junk

hr1232

Junior Member
While running Watchtower helps you keep your images current, it steadily fills up your hard drive with useless junk. Whenever a container is updated (i.e. deleted and recreated with the same settings and a newer image version), the old image version and the volumes associated with the old container are left behind without being deleted.

E.g. with NZBGet, this leaves you about 800 MB of junk for every update. You can see the leftover images and volumes in Portainer (marked with the yellow "unused" badge) or with the following two commands:
Code:
sudo docker image ls
sudo docker volume ls
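If you only want to see the leftovers, both commands accept a dangling filter (a sketch; output depends on your system):

```shell
# List only dangling images (old versions that lost their tag after an update)
sudo docker image ls --filter dangling=true

# List only volumes not referenced by any container
sudo docker volume ls --filter dangling=true
```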
There is a very simple solution to clean up those leftover files, as Docker has built-in functionality for exactly that. Sadly, Plexguide isn't using it. There should be a cron job running daily, placed in /etc/cron.daily/docker-cleanup:
Code:
#!/bin/sh
docker volume prune -f
docker image prune -f
docker image prune deletes every image upon which no container is based. docker volume prune deletes every volume not associated with a container. The parameter -f suppresses the confirmation prompt ("Do you really want to delete...") and thereby makes the command cron-compatible.
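One way to set this up (a sketch: the script is written to the current directory first, then installed to /etc/cron.daily/ as root):

```shell
# Write the two-command cleanup script
cat > docker-cleanup <<'EOF'
#!/bin/sh
docker volume prune -f
docker image prune -f
EOF

# cron.daily entries must be executable
chmod +x docker-cleanup

# Then install it (requires root):
#   sudo mv docker-cleanup /etc/cron.daily/docker-cleanup
```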

This affects all server types and all versions of Plexguide released so far, as they all utilize Docker.
 

timekills

Legendary Member
Staff
Donor
You can install DockerGC from the "Community Apps" and it does this automatically.
One of the reasons the image and volume prunes were removed from automatic install and activation is that users occasionally want to shut down a process. If the prune runs at that point, the image and/or volume will be removed.
Not a good idea.
 

loa92

Full Member
Donor
One of the reasons the image and volume prunes were removed from automatic install and activation is that users occasionally want to shut down a process. If the prune runs at that point, the image and/or volume will be removed.
I had, and still have, prune -f running as a cron job (it's baked into PGBlitz as simply the prune command). And to @timekills' point, I have had a container like NZBGet down at the time the cron job ran, and it removed the container. I immediately knew what happened and how to fix it, but the standard user might not. So it's not something I would recommend running without user interaction (unless you know and accept the risks of accidental container deletion).
 

hr1232

Junior Member
The comment from @timekills is completely false. Prune removes all images upon which no container is based and all volumes not linked to any container. Whether or not a container is running is completely beside the point. As long as said container exists, the image it is based on will not be pruned, nor will the volumes linked to it.

@loa92 Prune will *NEVER EVER* remove a container. Whatever you did, it was not caused by docker volume prune -f or docker image prune -f. Maybe you ran docker container prune -f, which deletes all stopped containers. In that case, please read the man page before typing commands you don't know.

Also, @loa92: if prune is in one of the scripts, it is obviously not being used at all. If it is run without -f, it will never do anything, because it sits waiting for input ("Shall I really delete all [images|volumes] not in use?").

To avoid any more confusion:

Code:
user:~$ sudo docker volume prune --help
Usage:  docker volume prune [OPTIONS]
Remove all unused local volumes
Options:
      --filter filter   Provide filter values (e.g. 'label=<label>')
  -f, --force           Do not prompt for confirmation
Code:
user:~$ sudo docker image prune --help
Usage:  docker image prune [OPTIONS]
Remove unused images
Options:
  -a, --all             Remove all unused images, not just dangling ones
      --filter filter   Provide filter values (e.g. 'until=<timestamp>')
  -f, --force           Do not prompt for confirmation
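As the help output above shows, both commands also take --filter. A sketch using the until filter to keep recently created images rather than pruning everything at once (the 168h window is just an example value):

```shell
# Prune only unused images created more than a week ago
sudo docker image prune -f --filter "until=168h"
```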
 

timekills

Legendary Member
Staff
Donor
I may not have been clear. While executing the prune command manually may not cause an issue, it certainly did when run automatically via the older scripts that were installed.

It may have been a faulty or less-than-perfect implementation that caused this, but that is why I said it is better executed manually, or by cron if the user so desires, rather than forced upon users without their knowledge of what is being done.

I hope this explanation is easier to understand and meets your definition of not "completely false."
 

hr1232

Junior Member
I may not have been clear. While executing the prune command manually may not cause an issue, it certainly did when run automatically via the older scripts that were installed.

It may have been a faulty or less-than-perfect implementation that caused this, but that is why I said it is better executed manually, or by cron if the user so desires, rather than forced upon users without their knowledge of what is being done.

I hope this explanation is easier to understand and meets your definition of not "completely false."
I don't know what you had in your "older scripts", but it certainly wasn't either of the commands listed above. Maybe you ran docker system prune, which cleans out unused images, unconnected volumes, unused networks, stopped containers and the build cache. In that case I can only say: read the warnings that come up when you type commands you don't know about.
Code:
WARNING! This will remove:
        - all stopped containers
        - all networks not used by at least one container
        - all volumes not used by at least one container
        - all images without at least one container associated to them
        - all build cache
Filling up the users' hard disks with unnecessary junk is definitely not OK. Just a simple example from a very basic system:
Code:
# docker image ls
REPOSITORY                          TAG                 IMAGE ID            CREATED             SIZE
linuxserver/radarr                  latest              d7372c3f677f        15 hours ago        599MB
containrrr/watchtower               latest              2119c57e3020        2 days ago          13.1MB
thomseddon/traefik-forward-auth     latest              4a51a6e98aa8        5 days ago          7.61MB
linuxserver/sonarr                  latest              8e856c4662eb        5 days ago          642MB
linuxserver/hydra2                  latest              4762ac768a52        6 days ago          434MB
linuxserver/qbittorrent             latest              82f0b0718a12        6 days ago          251MB
linuxserver/lidarr                  latest              8474c170a8fd        7 days ago          672MB
linuxserver/sabnzbd                 latest              18981eba5711        10 days ago         255MB
richarvey/nginx-php-fpm             latest              49774adafa34        2 weeks ago         334MB
plexinc/pms-docker                  latest              2f1235e72fee        2 weeks ago         412MB
traefik                             1.7                 fb5ce07475c6        3 weeks ago         71MB
portainer/portainer                 latest              19d07168491a        7 weeks ago         74.1MB
There are 3.8 GB of images on the system. With every update cycle this amount increases, and you keep the old images around forever without any need to. The folks at linuxserver update their images on a regular basis, sometimes several times a week. That amounts to 600 MB wasted for every Radarr update alone.
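A quick way to check how much of this Docker itself considers reclaimable is docker system df (a sketch; the numbers will differ on every system):

```shell
# Per-type disk usage summary: images, containers, local volumes, build cache
sudo docker system df
```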

I realize that Watchtower doesn't offer a cleanup/prune option, but as long as Plexguide advertises the use of Watchtower, you should install a cron job that runs docker image prune -f daily. If you feel there are people out there who really want their hard drives filled with useless data, ask them before you install the cron job (right after they have chosen to activate Watchtower), but I doubt there will be very many.

---

As for the problem of the increasing number of volumes: a little more investigating showed me that this is actually caused by a fault in the way you use the docker run command, and it can be fixed very easily.

Most containers export volumes by default through their Dockerfile; e.g. with Radarr they are:
  • /movies
  • /downloads
  • /config
Of those, you only use /config and don't mount anything on /movies or /downloads. However, leaving a mountpoint exported through the Dockerfile unmounted is not possible. For every mountpoint declared in the Dockerfile and not mentioned in your docker run command, Docker will create an empty volume, which is then silently mounted into the mountpoint you ignored. This means that every time Watchtower re-creates the Radarr container, Docker creates two new, empty volumes without deleting the old ones. The same thing happens with almost every other container, as almost all of them export mountpoints by default.

One way of dealing with this would be the command I described above (docker volume prune -f), but as these volumes are not needed and not created by you, the better solution is to correct your docker run command so that these unnecessary volumes are not created in the first place.

The simplest way to achieve this is to create an empty directory (e.g. /mnt/empty) and mount it into every unused volume export (it should be read-only, so people don't start using it). In the case of Radarr, you would have to extend the docker run command with these two arguments:
  • -v /mnt/empty/:/movies:ro
  • -v /mnt/empty/:/downloads:ro
After doing so, Docker will stop creating useless volumes with every update, and the system stays clean.
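Putting it together for Radarr, a sketch of the corrected command (the container name and config path are illustrative; the two /mnt/empty mounts are the actual fix):

```shell
# One-time: create the shared empty directory
sudo mkdir -p /mnt/empty

# Mount it read-only over every exported mountpoint you don't use,
# so Docker stops creating anonymous volumes on each re-create
docker run -d --name radarr \
  -v /opt/appdata/radarr:/config \
  -v /mnt/empty:/movies:ro \
  -v /mnt/empty:/downloads:ro \
  linuxserver/radarr
```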
 

timekills

Legendary Member
Staff
Donor
I'll try this one last time, since reading is a challenge.
These weren't MY scripts. I answered the original question as to why the scripts that were INCLUDED with PlexGuide were removed, because people who didn't know how to use them were complaining of deleted Docker data.
I understand your point, and don't disagree. There are many ways to clean up safely and automatically. But there's no accounting for people reading and following directions, or doing some GoogleFu and figuring out how things are supposed to work.
That requires reading, and comprehension.
This whole discussion between the two of us is an example of how that isn't always well performed.

If you want to write a simple script for automated cleaning - even though, as I said, IT'S ALREADY IN THERE if you install DockerGC - be my guest, and either fork your own or push for an update.
 

hr1232

Junior Member
It doesn't really matter who wrote the faulty script; the fact is that it is faulty if it deleted containers, as image prune won't ever touch containers.

Also, if DockerGC is used: there's the problem. It doesn't just do an image prune but much more, including pruning stopped containers. Installing a container to run a single command (docker image prune -f) is not only complete overkill, but also stupid.

The "script" as you called it, is a 2-line file I posted above and it is so short, that it doesn't deserve the name script:
Code:
#!/bin/sh
docker image prune -f
As for the problem of the endless number of volumes being created: they are caused by an ERROR in your docker run command (see above).

See, I can write all-caps, too... ;-)
 
