NOTE: This used to be a gist that continually expanded. It's now a github project because it's considerably easier for other people to edit, fix and expand on Docker using Github. Just click [README.md](https://github.com/wsargent/docker-cheat-sheet/blob/master/README.md), and then on the "writing pen" icon on the right to edit.
"With Docker, developers can build any app in any language using any toolchain. “Dockerized” apps are completely portable and can run anywhere - colleagues’ OS X and Windows laptops, QA servers running Ubuntu in the cloud, and production data center VMs running Red Hat.
Developers can get going quickly by starting with one of the 13,000+ apps available on Docker Hub. Docker manages and tracks changes and dependencies, making it easier for sysadmins to understand how the apps that developers build work. And with Docker Hub, developers can automate their build pipeline and share artifacts with collaborators through public or private repositories.
Docker helps developers build and ship higher-quality applications, faster." -- [What is Docker](https://www.docker.com/whatisdocker/#copy1)
I use [Oh My Zsh](https://github.com/robbyrussell/oh-my-zsh) with the [Docker plugin](https://github.com/robbyrussell/oh-my-zsh/wiki/Plugins#docker) for autocompletion of docker commands. YMMV.
[Installation instructions](https://docs.docker.com/installation/) are available. Docker has recognized that installation / deployment is a PITA, and [Docker Machine](https://github.com/docker/machine) aka [bug 8681](https://github.com/docker/docker/issues/8681) deals with this specifically.
If you're not willing to run a random shell script, please see the [installation](https://docs.docker.com/installation/) instructions for your distribution.
The canonical way to use Docker on OS X is with the aid of the boot2docker VM. However, the out-of-the-box boot2docker doesn't give me control over my Vagrant instances (in particular, it lacks port forwarding). So here's how to use boot2docker from a Vagrant instance.
We use the [YungSang modified boot2docker instance](https://github.com/YungSang/boot2docker-vagrant-box) from the [Vagrant Cloud](https://vagrantcloud.com/yungsang/boxes/boot2docker):
> NOTE: the YungSang boot2docker opens up port forwarding to the network, so it is not safe on public wifi. You can make a good argument that docker without TLS is [fundamentally unsafe](https://medium.com/@kevanahlquist/never-run-docker-on-a-tcp-socket-without-tls-1e7df31cf18c). I only do it because I have [Hands Off](http://www.oneperiodic.com/products/handsoff/) installed to limit external network access.
[Your basic isolated Docker process](http://etherealmind.com/basics-docker-containers-hypervisors-coreos/). Containers are to Virtual Machines as threads are to processes. Or you can think of them as chroots on steroids.
If you want to run and then interact with a container, `docker start`, then spawn a shell as described in [Executing Commands](https://github.com/wsargent/docker-cheat-sheet/#executing-commands).
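For example, a minimal sketch (the container name `my_container` is illustrative; `docker exec` requires Docker 1.3+):

```bash
# Start an existing (stopped) container in the background
docker start my_container

# Spawn an interactive shell inside it
docker exec -it my_container /bin/bash
```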
If you also want to remove the volumes associated with the container, the deletion must include the `-v` switch, as in `docker rm -v`.
If you want to map a directory on the host to a docker container, `docker run -v $HOSTDIR:$DOCKERDIR`. Also see [Volumes](https://github.com/wsargent/docker-cheat-sheet/#volumes).
If you want to integrate a container with a [host process manager](https://docs.docker.com/articles/host_integration/), start the daemon with `-r=false` then use `docker start -a`.
There doesn't seem to be a way to use docker directly to import files into a container's filesystem. The closest thing is to mount a host file or directory as a data volume and copy it from inside the container.
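A minimal sketch of that workaround, assuming the files live in `/tmp/import` on the host (the image and target names are illustrative):

```bash
# Mount the host directory as a volume, then copy its contents into the
# container's own filesystem from inside the container
docker run -v /tmp/import:/import ubuntu cp -r /import /data

# Commit the result if you want to keep it as an image
docker commit $(docker ps -lq) myimage-with-files
```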
* [`docker images`](https://docs.docker.com/reference/commandline/cli/#images) shows all images.
* [`docker import`](https://docs.docker.com/reference/commandline/cli/#import) creates an image from a tarball.
* [`docker build`](https://docs.docker.com/reference/commandline/cli/#build) creates image from Dockerfile.
* [`docker commit`](https://docs.docker.com/reference/commandline/cli/#commit) creates image from a container.
* [`docker rmi`](https://docs.docker.com/reference/commandline/cli/#rmi) removes an image.
* [`docker insert`](https://docs.docker.com/reference/commandline/cli/#insert) inserts a file from a URL into an image. (Kind of odd; you'd think images would be immutable after creation.)
* [`docker load`](https://docs.docker.com/reference/commandline/cli/#load) loads an image from a tar archive passed on STDIN, including images and tags (as of 0.7).
* [`docker save`](https://docs.docker.com/reference/commandline/cli/#save) saves an image to a tar archive streamed to STDOUT, with all parent layers, tags & versions (as of 0.7). See the example below.
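For example (`myimage:latest` here is just an illustrative tag):

```bash
# Save an image, with all parent layers and tags, to a tarball on STDOUT
docker save myimage:latest > myimage.tar

# Load it back (images and tags included) from STDIN
docker load < myimage.tar

# By contrast, import creates a single fresh image from a plain
# filesystem tarball, with no history or tags carried over
cat rootfs.tar | docker import - flatimage:latest
```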
Docker image ids are [sensitive information](https://medium.com/@quayio/your-docker-image-ids-are-secrets-and-its-time-you-treated-them-that-way-f55e9f14c1a4) and should not be exposed to the outside world. Treat them like passwords.
A registry is a *host* -- a server that stores repositories and provides an HTTP API for [managing the uploading and downloading of repositories](https://docs.docker.com/userguide/dockerrepos/).
Docker.com hosts its own [index](https://registry.hub.docker.com/) to a central registry which contains a large number of repositories. Having said that, the central docker registry [does not do a good job of verifying images](https://titanous.com/posts/docker-insecurity) and should be avoided if you're worried about security.
[Registry implementation](http://github.com/docker/docker-registry) has an official image for basic setup that can be launched with
[`docker run -p 5000:5000 registry`](https://github.com/docker/docker-registry#quick-start)
Note that this installation does not have any authorization controls. You may use the option `-p 127.0.0.1:5000:5000` to limit connections to localhost only.
To push to this registry, tag your image as `repositoryHostName:5000/imageName`, then push that tag:
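For example, with the registry on localhost and a hypothetical local image named `myimage`:

```bash
# Run the registry, bound to localhost only (no auth configured)
docker run -d -p 127.0.0.1:5000:5000 registry

# Tag the image with the registry's host and port, then push
docker tag myimage localhost:5000/myimage
docker push localhost:5000/myimage
```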
[The configuration file](https://docs.docker.com/reference/builder/). Sets up a Docker container when you run `docker build` on it. Vastly preferable to `docker commit`. If you use [jEdit](http://jedit.org), I've put up a syntax highlighting module for [Dockerfile](https://github.com/wsargent/jedit-docker-mode) you can use. You may also like to try the [tools section](#tools).
* [Best practices for writing Dockerfiles](https://docs.docker.com/articles/dockerfile_best-practices/)
* [Michael Crosby](http://crosbymichael.com/) has some more [Dockerfiles best practices](http://crosbymichael.com/dockerfile-best-practices.html) / [take 2](http://crosbymichael.com/dockerfile-best-practices-take-2.html).
* [The Rabbit Hole of Using Docker in Automated Tests](http://gregoryszorc.com/blog/2014/10/16/the-rabbit-hole-of-using-docker-in-automated-tests/)
* [Bridget Kromhout](https://twitter.com/bridgetkromhout) has a useful blog post on [running Docker in production](http://sysadvent.blogspot.co.uk/2014/12/day-1-docker-in-production-reality-not.html) at Dramafever.
* There's also a best practices [blog post](http://developers.lyst.com/devops/2014/12/08/docker/) from Lyst.
* [A Docker Dev Environment in 24 Hours!](http://blog.relateiq.com/a-docker-dev-environment-in-24-hours-part-2-of-2/)
* [Building a Development Environment With Docker](http://tersesystems.com/2013/11/20/building-a-development-environment-with-docker/)
* [Discourse in a Docker Container](http://samsaffron.com/archive/2013/11/07/discourse-in-a-docker-container)
The versioned filesystem in Docker is based on layers. They're like [git commits or changesets for filesystems](https://docs.docker.com/terms/layer/).
Note that if you're using [aufs](http://en.wikipedia.org/wiki/Aufs) as your filesystem, Docker does not always remove data volumes or container layers when you delete a container! See [PR 8484](https://github.com/docker/docker/pull/8484) for more details.
Links are how Docker containers talk to each other [through TCP/IP ports](https://docs.docker.com/userguide/dockerlinks/). [Linking into Redis](https://docs.docker.com/examples/running_redis_service/) and [Atlassian](http://blogs.atlassian.com/2013/11/docker-all-the-things-at-atlassian-automation-and-wiring/) show worked examples. You can also (in 0.11) resolve [links by hostname](https://docs.docker.com/userguide/dockerlinks/#updating-the-etchosts-file).
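A minimal sketch (the `myapp` image is hypothetical; `redis` is on the Docker Hub):

```bash
# Start a named redis container
docker run -d --name redis redis

# Link it into an app container: inside "app", the hostname alias "db"
# points at the redis container, and DB_PORT_* environment variables
# describing its exposed ports are injected
docker run -d --name app --link redis:db myapp
```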
NOTE: If you want containers to ONLY communicate with each other through links, start the docker daemon with `-icc=false` to disable inter-container communication.
If you want to link across docker hosts then you should look at [Swarm](http://docs.docker.com/swarm/). This [link on stackoverflow](http://stackoverflow.com/questions/21283517/how-to-link-docker-services-across-hosts) provides some good information on different patterns for linking containers across docker hosts.
Docker volumes are [free-floating filesystems](http://docs.docker.com/userguide/dockervolumes/). They don't have to be connected to a particular container. You should use volumes mounted from [data-only containers](https://medium.com/@ramangupta/why-docker-data-containers-are-good-589b3c6c749e) for portability.
Volumes are useful in situations where you can't use links (which are TCP/IP only). For instance, if you need to have two docker instances communicate by leaving stuff on the filesystem.
Because volumes are isolated filesystems, they are often used to store state from computations between transient containers. That is, you can have a stateless and transient container run from a recipe, blow it away, and then have a second instance of the transient container pick up from where the last one left off.
See [advanced volumes](http://crosbymichael.com/advanced-docker-volumes.html) for more details. Container42 is [also helpful](http://container42.com/2014/11/03/docker-indepth-volumes/).
As of 1.3, you can [map MacOS host directories as docker volumes](http://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume) through boot2docker:
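```bash
# An illustrative example: the image name is hypothetical, and the host
# path must be under /Users, which boot2docker shares with the VM
docker run -v /Users/yourname/myapp/src:/src myimage
```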
You may also consider running data-only containers as described [here](http://container42.com/2013/12/16/persistent-volumes-with-docker-container-as-volume-pattern/) to provide some data portability.
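A sketch of the pattern (`busybox` is just a small image to hang the volume on):

```bash
# Create a data-only container: it exits immediately, but its /data
# volume persists and can be shared
docker run -v /data --name datastore busybox true

# Mount that volume into other containers with --volumes-from
docker run --rm --volumes-from datastore busybox ls /data
```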
If you don't want to use the `-p` option on the command line, you can declare the port in your Dockerfile using [EXPOSE](https://docs.docker.com/reference/builder/#expose), then publish all exposed ports at run time with `docker run -P`:
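```Dockerfile
# Illustrative: 8080 is whatever port your app listens on inside the
# container; publish it with `docker run -P` or `-p hostPort:8080`
EXPOSE 8080
```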
If you're running Docker in Virtualbox, you then need to forward the port there as well, using [forwarded_port](https://docs.vagrantup.com/v2/networking/forwarded_ports.html). It can be useful to define something in Vagrantfile to expose a range of ports so that you can dynamically map them:
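```ruby
# Illustrative range: forward ports 49000-49900 from the host into the
# VM so dynamically mapped container ports are reachable from outside
(49000..49900).each do |port|
  config.vm.network :forwarded_port, guest: port, host: port
end
```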
* [Machine](https://docs.docker.com/machine/) is a tool for easily creating a Docker host, either locally or on a cloud provider (e.g. AWS, Digital Ocean).
* [Swarm](https://docs.docker.com/swarm/) is clustering for Docker. It provides a way to group a number of Docker hosts into a single virtual entity, and provides mechanisms to schedule containers across those hosts.
* [Compose](https://docs.docker.com/compose/) is a tool for describing a multi-container application within a single file. Compose is essentially Fig, renamed.
The [Demo of the Machine + Swarm + Compose integration](https://www.youtube.com/watch?v=M4PFY6RZQHQ) video provides a good introduction to the various components and how they work together.
[Compose](https://github.com/docker/compose) (originally named fig) is a helper app that makes it easier to run multiple docker containers on the same host.
I would expect it to be used during dev/qa more than in production.
Compose works with a YAML file (the default filename is `docker-compose.yml`; use `-f` to provide a different one) that defines the containers you wish to run.
Compose will take its project name from the name of the folder containing your YAML configuration, but you can override that with the `-p` parameter.
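A minimal sketch of such a file, with a `web` container linked to an `app` container as described below (the image names are illustrative):

```yaml
web:
  image: nginx
  ports:
    - "80:80"
  links:
    - app
app:
  image: myapp
  expose:
    - "8080"
```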
Once I have my config defined, I can use `docker-compose up -d` to run it (the `-d` runs it as a detached task). This will start and link all of the containers.
As you can see, it follows the docker commands, with the addition of names for the containers and a `links` section for the web container, linking it to the app container.
As part of that linking process, docker will copy any environment variables defined in the app container over to the web container, define new environment variables for the address the app container is running at, and add an `app` entry to the web container's `/etc/hosts` file. I can point my proxy config at `http://app:8080` and Compose/Docker will take care of the rest.
NB: docker will [eventually](https://gist.github.com/aanand/9e7ac7185ffd64c1a91a) absorb Fig's functionality with `docker groups` and `docker up`, but it looks like they're keeping the YAML config, so the transition should be pretty seamless when it happens.
`docker-compose run` is a useful command for debugging issues. It allows me to start up a named container (and any containers it links to) and run a one-off command.
This allows me to do things like `docker-compose run web env`, which gives me a list of all the environment variables available in the web container, including the ones generated via the link to app.
I can also use `docker-compose run web bash` to run my web container interactively, set up by Compose just as it would be with the link to app, so I can debug any issues from the command line.
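Both together, as a quick reference:

```bash
# One-off command: dump the environment the web container sees,
# including variables injected by the link to app
docker-compose run web env

# Interactive shell in a one-off web container, links intact
docker-compose run web bash
```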