Docker says "volume is in use", but there aren't any Docker containers

I've been having issues with removing Docker volumes with Docker 1.9.1.
I've removed all my stopped containers so that docker ps -a returns empty. When I use docker volume ls, I'm given a whole host of Docker volumes:

docker volume ls
DRIVER VOLUME NAME
local a94211ea91d66142886d72ec476ece477bb5d2e7e52a5d73b2f2f98f6efa6e66
local 4f673316d690ca2d41abbdc9bf980c7a3f8d67242d76562bbd44079f5f438317
local eb6ab93effc4b90a2162e6fab6eeeb65bd0e4bd8a9290e1bad503d2a47aa8a78
local 91acb0f7644aec16d23a70f63f70027899017a884dab1f33ac8c4cf0dabe5f2c
local 4932e2fbad8f7e6246af96208d45a266eae11329f1adf176955f80ca2e874f69
local 68fd38fc78a8f02364a94934e9dd3b5d10e51de5b2546e7497eb21d6a1e7b750
local 7043a9642614dd6e9ca013cdf662451d2b3df6b1dddff97211a65ccf9f4c6d47
#etc x 50

Since none of these volumes contain anything important, I try to purge all the volumes with docker volume rm $(docker volume ls -q).

In the process, the majority are removed, but I get back:
Error response from daemon: Conflict: volume is in use
Error response from daemon: Conflict: volume is in use
Error response from daemon: Conflict: volume is in use
Error response from daemon: Conflict: volume is in use
Error response from daemon: Conflict: volume is in use
for a sizeable portion of them. If I don't have any containers in the first place, how are these volumes being used?

Tags: docker, docker-machine
Asked Jan 7 '16 at 15:22 by Tkwon123 (45 votes); edited Jul 24 at 16:09 by Peter Mortensen

Comments:

docker uses reference counting to check if a volume is still in use; this is all done in memory. This may be a bug or a race condition somehow, which resulted in the container being removed but the counter not being updated. A restart of the daemon should resolve this, but yes, it's possible there's a bug somewhere. Is there something special in your setup (e.g., are you using Docker-in-Docker, or Swarm)? Do you use some script or tool to clean up your containers? – thaJeztah, Jan 7 '16 at 23:32

Hey, thanks @thaJeztah! Restarting the Docker daemon (sudo service docker stop and sudo service docker start) cleared out all of these ghost volumes for me. Moreover, it seems like I am now able to remove volumes without issue using the docker rm -v command. The only notable difference in my usage is that I've been using docker-compose on Ubuntu 15.10. I'll report back if I'm ever able to replicate this problem, but otherwise it seems like a simple restart will suffice. Thanks! – Tkwon123, Jan 8 '16 at 13:17

Even after a reboot, it still says the Docker volume is in use. – holms, Jul 5 '16 at 1:31

If you use Docker Compose, you can add -v to the down command to remove the volumes. – Niels Bech Nielsen, Dec 1 '16 at 10:31

I fixed this by stopping Docker, removing the volumes from the file system, and starting Docker again: service docker stop && rm -rf /var/lib/docker/volumes/TheVolumeIdYouWantToRemove && service docker start – jfgrissom, Nov 23 '17 at 19:09
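
In concrete terms, the restart suggested above looks like this. A minimal sketch: the service lines match the SysV-style commands in the comments, and the systemctl line is an assumption for systemd-based distributions.

sudo service docker stop && sudo service docker start   # SysV-style init, as in the comments
sudo systemctl restart docker                            # assumption: Docker managed by systemd
docker volume rm $(docker volume ls -q)                  # then retry the purge
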
6 Answers

Answer (58 votes) – David González Ruiz, Feb 8 '17 at 14:52

You can use these functions to brutally remove everything Docker related:

removecontainers() {
    docker stop $(docker ps -aq)
    docker rm $(docker ps -aq)
}

armaggedon() {
    removecontainers
    docker network prune -f
    docker rmi -f $(docker images --filter dangling=true -qa)
    docker volume rm $(docker volume ls --filter dangling=true -q)
    docker rmi -f $(docker images -qa)
}

You can add those to your ~/Xrc file, where X is your shell interpreter (~/.bashrc if you're using Bash), and reload them by executing source ~/Xrc. Alternatively, just copy and paste them into the console. Either way, once the functions are defined, run:

armaggedon

It's also useful for general Docker cleanup. Keep in mind that this will remove your images as well, not only your containers (running or not) and your volumes of any kind.

Comments:

Per the question, the docker volume rm command was failing. From the comments, the solution appears to be to restart the Docker daemon to fix the reference count. – BMitch, Feb 8 '17 at 15:31

@BMitch, if you read carefully through the comments, that is not the solution for this: "even after reboot it still says docker volume is in use". – David González Ruiz, Feb 8 '17 at 15:33

holms appears to have a different issue and isn't the one that posted the question. Look up one comment above that. – BMitch, Feb 8 '17 at 15:35

The gonsales, 👏 for the function name, but it is spelt armageddon. – Joseph Sheedy, Jul 2 at 19:56

Answer (32 votes) – Benjamin G. West, Feb 28 at 5:54

I am fairly new to Docker. I was cleaning up some initial testing mess and was not able to remove a volume either. I had stopped all the running instances and performed a docker rmi -f $(docker image ls -q), but still received: Error response from daemon: unable to remove volume: remove uuid: volume is in use.

I did a docker system prune, and it cleaned up what was needed to remove the last volume:

[0]$ docker system prune
WARNING! This will remove:
- all stopped containers
- all networks not used by at least one container
- all dangling images
- all build cache
Are you sure you want to continue? [y/N] y
Deleted Containers:
... about 15 container UUIDs truncated
Total reclaimed space: 2.273MB
[0]$ docker volume ls
DRIVER VOLUME NAME
local uuid
[0]$ docker volume rm uuid
uuid
[0]$

Note: the client and daemon API must both be at least 1.25 to use docker system prune. Use the docker version command on the client to check your client and daemon API versions.
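
As a quick check, a sketch using the command's Go-template output (the Client.APIVersion and Server.APIVersion field names are an assumption about the template context):

docker version --format 'client API: {{.Client.APIVersion}}, server API: {{.Server.APIVersion}}'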

Comments:

Thanks! This did the trick for me. – manncito, Mar 5 at 23:58

This worked for me. – Dinsdale, Jun 7 at 16:57

For some reason I had to do this twice before it worked. – mameluc, Aug 6 at 13:55

Answer (10 votes) – Robert K. Bell, Sep 14 at 7:18

Perhaps the volume was created via docker-compose? If so, it should get removed by:

docker-compose down --volumes

Credit to Niels Bech Nielsen!
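
For illustration, a minimal throwaway Compose project whose named volume is removed by down --volumes. This is a sketch; the file contents, the app service, and the mydata volume name are all hypothetical.

cat > docker-compose.yml <<'EOF'
version: "2"
services:
  app:
    image: busybox
    command: sleep 3600
    volumes:
      - mydata:/data
volumes:
  mydata:
EOF

docker-compose up -d            # creates the project's "mydata" named volume
docker-compose down --volumes   # removes the containers, networks, and the named volume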

Answer (1 vote) – Jiri Klouda, Feb 24 '17 at 7:06

I am pretty sure that those volumes are actually mounted on your system. Look in /proc/mounts and you will see them there. You will likely need to run sudo umount <path> or sudo umount -f -n <path>. You should be able to get the mounted path either from /proc/mounts or through docker volume inspect.
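
A sketch of that procedure, assuming a local-driver volume named myvol (the name is hypothetical):

mountpoint=$(docker volume inspect --format '{{ .Mountpoint }}' myvol)
grep "$mountpoint" /proc/mounts      # is it still mounted?
sudo umount "$mountpoint"            # plain unmount
sudo umount -f -n "$mountpoint"      # force, without updating /etc/mtab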

Answer (0 votes) – Supreeth Padavala, Nov 11 at 6:11

As long as volumes are associated with a container (either running or not), they cannot be removed. You have to run

docker inspect <container-id>/<container-name>

on each of the running or stopped containers where this volume might have been mounted. If the volume is mounted onto any one of the containers, you should see it in the Mounts section of the inspect output, something like this:

"Mounts": [
"Type": "volume",
"Name": "user1",
"Source": "/var/lib/docker/volumes/user1/_data",
"Destination": "/opt",
"Driver": "local",
"Mode": "",
"RW": true,
"Propagation": ""
],
After figuring out the responsible container(s), use

docker rm -f container-1 container-2 ... container-n

for running containers, or

docker rm container-1 container-2 ... container-n

for stopped containers, to completely remove the containers from the host machine. Then try removing the volume with:

docker volume remove <volume-name/volume-id>
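
Rather than inspecting every container by hand, a shortcut sketch (assuming a Docker version recent enough to support the volume filter on docker ps):

docker ps -a --filter volume=user1                        # every container, running or not, that mounts the volume
docker inspect --format '{{ json .Mounts }}' <container>  # or dump one container's mounts directly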

Comments:

For a useful answer, this reaction needs to be extended. Add information on how to check this. – Jeroen Heier, Nov 11 at 6:24

Answer (-3 votes) – Julia, Mar 26 at 10:37

You should run this command with the -f (force) flag:

sudo docker volume rm -f <VOLUME NAME>