Getting Started With Docker

This blog post is my personal summary of Docker's official Getting Started tutorial.

Demo Application

For the rest of this tutorial, we will be working with a simple todo list manager that is running in Node.js. If you’re not familiar with Node.js, don’t worry! No real JavaScript experience is needed!

At this point, your development team is quite small and you’re simply building an app to prove out your MVP (minimum viable product). You want to show how it works and what it’s capable of doing without needing to think about how it will work for a large team, multiple developers, etc.

Getting the Demo App

First, we need to start the Docker tutorial container from the terminal. In case you forgot, here's the command:

$ docker run -d -p 80:80 docker/getting-started

You’ll notice a few flags being used. Here’s some more info on them:

  • -d - run the container in detached mode (in the background)
  • -p 80:80 - map port 80 of the host to port 80 in the container
  • docker/getting-started - the image to use
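
If you want to double-check that the tutorial container came up, docker ps can filter running containers by the image they were started from (an optional check):

# list running containers started from the tutorial image
$ docker ps --filter "ancestor=docker/getting-started"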

Then download the ZIP file from the tutorial to get started.

Building the App’s Container Image

In order to build the application, we need to use a Dockerfile. A Dockerfile is simply a text-based script of instructions that is used to create a container image. If you’ve created Dockerfiles before, you might see a few flaws in the Dockerfile below. But, don’t worry! We’ll go over them.

  1. Create a file named Dockerfile in the same folder as the file package.json with the following contents. (Make sure the Dockerfile has no file extension such as .txt.)

    FROM node:12-alpine
    # Adding build tools to make yarn install work on Apple silicon / arm64 machines
    RUN apk add --no-cache python2 g++ make
    WORKDIR /app
    COPY . .
    RUN yarn install --production
    CMD ["node", "src/index.js"]
  2. Now build the container image using the docker build command.

    $ cd app/
    $ docker build -t getting-started .

    Build an image from a Dockerfile ( docker build ):

    $ docker build [OPTIONS] PATH | URL | -

    The -t flag tags our image. Think of this simply as a human-readable name for the final image. Since we named the image getting-started, we can refer to that image when we run a container.

    The . at the end of the docker build command tells Docker to look for the Dockerfile in the current directory.
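
    As a quick sanity check, you can list the image you just built; passing the repository name filters the output:

    # confirm the new image exists locally and carries the getting-started tag
    $ docker image ls getting-started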

Starting an App Container

  1. Start your container using the docker run command and specify the name of the image we just created:

    $ docker run -dp 3000:3000 getting-started

    Run a command in a new container ( docker run ):

    $ docker run [OPTIONS] IMAGE [COMMAND] [ARG...]

    Remember the -d and -p flags? We’re running the new container in “detached” mode (in the background) and creating a mapping between the host’s port 3000 to the container’s port 3000. Without the port mapping, we wouldn’t be able to access the application.

  2. After a few seconds, open your web browser to http://localhost:3000. You should see our app!

  3. Go ahead and add an item or two and see that it works as you expect. You can mark items as complete and remove items. Your frontend is successfully storing items in the backend! Pretty quick and easy, huh?

Updating our App

  1. In the src/static/js/app.js file, update line 56 to use the new empty text.

    -                <p className="text-center">No items yet! Add one above!</p>
    +                <p className="text-center">You have no todo items yet! Add one above!</p>
  2. Remove the old container that is running the previous version of the code.

    1. Get the ID of the container by using the docker ps command.

      $ docker ps
    2. Use the docker stop command to stop the container (or use the -f flag with the docker rm command to stop and remove it in one step).

      # Swap out <the-container-id> with the ID from docker ps
      $ docker stop <the-container-id>

      or

      $ docker rm -f <the-container-id or the-container-name>
    3. Once the container has stopped, you can remove it by using the docker rm command.

      $ docker rm <the-container-id>
  3. Let’s build our updated version of the image, using the same command we used before.

    $ docker build -t getting-started .
  4. Let’s start a new container using the updated code.

    $ docker run -dp 3000:3000 getting-started

Removing a container using the Docker Dashboard

If you open the Docker dashboard, you can remove a container with two clicks! It’s certainly much easier than having to look up the container ID and remove it.

  1. With the dashboard opened, hover over the app container and you’ll see a collection of action buttons appear on the right.
  2. Click on the trash can icon to delete the container.
  3. Confirm the removal and you’re done!

Sharing our App

Now that we’ve built an image, let’s share it! To share Docker images, you have to use a Docker registry. The default registry is Docker Hub and is where all of the images we’ve used have come from.

Create a Repo

To push an image, we first need to create a repo on Docker Hub.

  1. Go to Docker Hub and log in if you need to.
  2. Click the Create Repository button.
  3. For the repo name, use getting-started. Make sure the Visibility is Public.
  4. Click the Create button!

If you look at the right side of the page, you'll see a section named Docker commands. It gives an example of the command you will need to run to push to this repo.

Pushing our Image

  1. In the command line, try running the push command you see on Docker Hub. Note that your command will be using your namespace, not “docker”. ( docker push )

Push an image or a repository to a registry

$ docker push [OPTIONS] NAME[:TAG]
$ docker push docker/getting-started
The push refers to repository [docker.io/docker/getting-started]
An image does not exist locally with the tag: docker/getting-started

Why did it fail? The push command was looking for an image named docker/getting-started, but didn’t find one. If you run docker image ls, you won’t see one either.

To fix this, we need to “tag” our existing image we’ve built to give it another name.

  1. Login to the Docker Hub using the command docker login -u YOUR-USER-NAME.

  2. Use the docker tag command to give the getting-started image a new name. Be sure to swap out YOUR-USER-NAME with your Docker ID. ( docker tag )

Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE

$ docker tag SOURCE_IMAGE[:TAG] TARGET_IMAGE[:TAG]
$ docker tag getting-started YOUR-USER-NAME/getting-started
  3. Now try your push command again. If you're copying the value from Docker Hub, you can drop the tagname portion, as we didn't add a tag to the image name. If you don't specify a tag, Docker will use a tag called latest.
$ docker push YOUR-USER-NAME/getting-started
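
If you'd rather push an explicit version than rely on the implicit latest tag, you can add one when tagging; the v1 tag below is just an example name:

# tag and push a versioned name alongside (or instead of) latest
$ docker tag getting-started YOUR-USER-NAME/getting-started:v1
$ docker push YOUR-USER-NAME/getting-started:v1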

Running our Image on a New Instance

Now that our image has been built and pushed into a registry, let’s try running our app on a brand new instance that has never seen this container image! To do this, we will use Play with Docker.

  1. Open your browser to Play with Docker.

  2. Log in with your Docker Hub account.

  3. Once you’re logged in, click on the “+ ADD NEW INSTANCE” link in the left side bar. (If you don’t see it, make your browser a little wider.) After a few seconds, a terminal window will be opened in your browser.

  4. In the terminal, start your freshly pushed app.

    $ docker run -dp 3000:3000 YOUR-USER-NAME/getting-started

    You should see the image get pulled down and eventually start up!

  5. Click on the 3000 badge when it comes up and you should see the app with your modifications! Hooray! If the 3000 badge doesn’t show up, you can click on the “Open Port” button and type in 3000.

Persisting our DB

In case you didn’t notice, our todo list is being wiped clean every single time we launch the container. Why is this? Let’s dive into how the container is working.

The Container’s Filesystem

When a container runs, it uses the various layers from an image for its filesystem. Each container also gets its own “scratch space” to create/update/remove files. Any changes won’t be seen in another container, even if they are using the same image.
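
The official tutorial illustrates this with a small experiment you can reproduce: write a file inside one container, then show that a second container started from the same image doesn't see it.

# container 1: write a random number into /data.txt and keep the container alive
$ docker run -d ubuntu bash -c "shuf -i 1-10000 -n 1 -o /data.txt && tail -f /dev/null"

# read the file back from that container (use the ID printed by the previous command)
$ docker exec <container-id> cat /data.txt

# a brand-new container from the same image has no /data.txt at all
$ docker run -it ubuntu ls /

This is the experiment referred to in the next section.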

Container Volumes

With the previous experiment, we saw that each container starts from the image definition each time it starts. While containers can create, update, and delete files, those changes are lost when the container is removed and all changes are isolated to that container. With volumes, we can change all of this.

Volumes provide the ability to connect specific filesystem paths of the container back to the host machine. If a directory in the container is mounted, changes in that directory are also seen on the host machine. If we mount that same directory across container restarts, we’d see the same files.

There are two main types of volumes. We will eventually use both, but we will start with named volumes.

Persisting our Todo Data

By default, the todo app stores its data in a SQLite Database at /etc/todos/todo.db. We’ll talk about switching this to a different database engine later.

With the database being a single file, if we can persist that file on the host and make it available to the next container, it should be able to pick up where the last one left off. By creating a volume and attaching (often called “mounting”) it to the directory the data is stored in, we can persist the data. As our container writes to the todo.db file, it will be persisted to the host in the volume.

As mentioned, we are going to use a named volume. Think of a named volume as simply a bucket of data. Docker maintains the physical location on the disk and you only need to remember the name of the volume. Every time you use the volume, Docker will make sure the correct data is provided.

  1. Create a volume by using the docker volume create command. ( docker volume create )

    $ docker volume create [OPTIONS] [VOLUME]
    $ docker volume create todo-db
  2. Stop the todo app container once again in the Dashboard (or with docker rm -f <container-id>), as it is still running without using the persistent volume.

  3. Start the todo app container, but add the -v flag to specify a volume mount. We will use the named volume and mount it to /etc/todos, which will capture all files created at the path.

    $ docker run -dp 3000:3000 \
    -v todo-db:/etc/todos \
    --name named-volumes \
    getting-started
  4. Once the container starts up, open the app and add a few items to your todo list.

  5. Remove the container for the todo app. Use the Dashboard or docker ps to get the ID and then docker rm -f <container-id> to remove it.

  6. Start a new container using the same command from above.

  7. Open the app. You should see your items still in your list!

  8. Go ahead and remove the container when you’re done checking out your list.

Diving into our Volume

A lot of people frequently ask “Where is Docker actually storing my data when I use a named volume?” If you want to know, you can use the docker volume inspect command.

$ docker volume inspect todo-db
[
    {
        "CreatedAt": "2019-09-26T02:18:36Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/todo-db/_data",
        "Name": "todo-db",
        "Options": {},
        "Scope": "local"
    }
]

The Mountpoint is the actual location on the disk where the data is stored. Note that on most machines, you will need to have root access to access this directory from the host. But, that’s where it is!
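
On a Linux host you can inspect that directory directly; note that with Docker Desktop on Mac or Windows the path lives inside Docker's virtual machine, so it isn't visible from your normal shell:

# Linux host only; root access is needed because the directory belongs to the Docker daemon
$ sudo ls /var/lib/docker/volumes/todo-db/_data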

Using Bind Mounts

In the previous chapter, we talked about and used a named volume to persist the data in our database. Named volumes are great if we simply want to store data, as we don’t have to worry about where the data is stored.

With bind mounts, we control the exact mountpoint on the host. We can use this to persist data, but it is more often used to provide additional data to containers. When working on an application, we can use a bind mount to mount our source code into the container so that it sees code changes, responds, and lets us see the changes right away.

For Node-based applications, nodemon is a great tool to watch for file changes and then restart the application. There are equivalent tools in most other languages and frameworks.
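
In this app, it's the dev script in package.json that wires nodemon up (we'll rely on it in the next section); you can peek at it before starting the dev-mode container. The output is abridged here, so check your copy of package.json for the exact contents:

$ cat package.json
...
  "scripts": {
    ...
    "dev": "nodemon src/index.js"
  },
...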

Quick Volume Type Comparisons

Bind mounts and named volumes are the two main types of volumes that come with the Docker engine. However, additional volume drivers are available to support other use cases (SFTP, Ceph, NetApp, S3, and more).

                                                Named Volumes                Bind Mounts
  Host Location                                 Docker chooses               You control
  Mount Example (using -v)                      my-volume:/usr/local/data    /path/to/data:/usr/local/data
  Populates new volume with container contents  Yes                          No
  Supports Volume Drivers                       Yes                          No

Starting a Dev-Mode Container

To run our container to support a development workflow, we will do the following:

  • Mount our source code into the container
  • Install all dependencies, including the “dev” dependencies
  • Start nodemon to watch for filesystem changes

So, let’s do it!

  1. Make sure you don’t have any previous getting-started containers running.

  2. Run the following command from the source code folder. We’ll explain what’s going on afterwards:

    $ docker run -dp 3000:3000 \
        -w /app -v "$(pwd):/app" \
        --name Bind-Mounts \
        node:12-alpine \
        sh -c "yarn install && yarn run dev"

    If you are using PowerShell then use this command.

    $ docker run -dp 3000:3000 `
        -w /app -v "$(pwd):/app" `
        --name Bind-Mounts `
        node:12-alpine `
        sh -c "yarn install && yarn run dev"

    If you are using an Apple Silicon Mac or another ARM64 device then use this command.

    $ docker run -dp 3000:3000 \
        -w /app -v "$(pwd):/app" \
        --name Bind-Mounts \
        node:12-alpine \
        sh -c "apk add --no-cache python2 g++ make && yarn install && yarn run dev"
    • -dp 3000:3000 - same as before. Run in detached (background) mode and create a port mapping
    • -w /app - sets the container’s present working directory where the command will run from
    • -v "$(pwd):/app" - bind mount (link) the host’s present working directory to the container’s /app directory
    • node:12-alpine - the image to use. Note that this is the base image for our app from the Dockerfile
    • --name Bind-Mounts - give the container the name Bind-Mounts so it's easier to find and manage later
    • sh -c "yarn install && yarn run dev" - the command. We’re starting a shell using sh (alpine doesn’t have bash) and running yarn install to install all dependencies and then running yarn run dev. If we look in the package.json, we’ll see that the dev script is starting nodemon.
  3. You can watch the logs using docker logs -f <container-id>. You’ll know you’re ready to go when you see this…

    docker logs -f <container-id>
    $ nodemon src/index.js
    [nodemon] 1.19.2
    [nodemon] to restart at any time, enter `rs`
    [nodemon] watching dir(s): *.*
    [nodemon] starting `node src/index.js`
    Using sqlite database at /etc/todos/todo.db
    Listening on port 3000

    When you’re done watching the logs, exit out by hitting Ctrl+C.

  4. Now, let’s make a change to the app. In the src/static/js/app.js file, let’s change the “Add Item” button to simply say “Add”. This change will be on line 109 - remember to save the file.

    -                         {submitting ? 'Adding...' : 'Add Item'}
    +                         {submitting ? 'Adding...' : 'Add'}
  5. Simply refresh the page (or open it) and you should see the change reflected in the browser almost immediately. It might take a few seconds for the Node server to restart, so if you get an error, just try refreshing after a few seconds.

  6. Feel free to make any other changes you’d like to make. When you’re done, stop the container and build your new image using docker build -t getting-started ..

Using bind mounts is very common for local development setups. The advantage is that the dev machine doesn’t need to have all of the build tools and environments installed. With a single docker run command, the dev environment is pulled and ready to go. We’ll talk about Docker Compose in a future step, as this will help simplify our commands (we’re already getting a lot of flags).

Multi-Container Apps

Up to this point, we have been working with single container apps. But, we now want to add MySQL to the application stack. The following question often arises - “Where will MySQL run? Install it in the same container or run it separately?” In general, each container should do one thing and do it well. A few reasons:

  • There’s a good chance you’d have to scale APIs and front-ends differently than databases.
  • Separate containers let you version and update versions in isolation.
  • While you may use a container for the database locally, you may want to use a managed service for the database in production. You don’t want to ship your database engine with your app then.
  • Running multiple processes will require a process manager (the container only starts one process), which adds complexity to container startup/shutdown.

And there are more reasons. So, we will update our application to run the todo app and MySQL in separate containers.

Container Networking

Remember that containers, by default, run in isolation and don’t know anything about other processes or containers on the same machine. So, how do we allow one container to talk to another? The answer is networking. Simply remember this rule…

If two containers are on the same network, they can talk to each other. If they aren’t, they can’t.

Starting MySQL

There are two ways to put a container on a network: 1) Assign it at start or 2) connect an existing container. For now, we will create the network first and attach the MySQL container at startup.
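
For reference, the second approach uses docker network connect; a quick sketch in case you ever need to attach a container that is already running (we'll stick with the first approach here):

# attach an already-running container to the todo-app network
$ docker network connect todo-app <container-id>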

  1. Create the network.

    $ docker network create todo-app
  2. Start a MySQL container and attach it to the network. We’re also going to define a few environment variables that the database will use to initialize the database (see the “Environment Variables” section in the MySQL Docker Hub listing).

    $ docker run -d \
        --network todo-app --network-alias mysql \
        -v todo-mysql-data:/var/lib/mysql \
        -e MYSQL_ROOT_PASSWORD=secret \
        -e MYSQL_DATABASE=todos \
        --name MySQL \
        mysql:5.7

    If you are using PowerShell then use this command.

    $ docker run -d `
        --network todo-app --network-alias mysql `
        -v todo-mysql-data:/var/lib/mysql `
        -e MYSQL_ROOT_PASSWORD=secret `
        -e MYSQL_DATABASE=todos `
        --name MySQL `
        mysql:5.7

    You’ll also see we specified the --network-alias flag. We’ll come back to that in just a moment.

If you see a docker: no matching manifest error, it's because you're trying to run the container on a different architecture than amd64, which is the only architecture the mysql:5.7 image supports at the moment. To solve this, add the --platform linux/amd64 flag to the previous command, so it looks like this:

$ docker run -d \
    --platform linux/amd64 \
    --network todo-app --network-alias mysql \
    -v todo-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=todos \
    --name MySQL \
    mysql:5.7

  3. To confirm we have the database up and running, connect to the database and verify it connects.

$ docker exec -it <mysql-container-id> mysql -p

When the password prompt comes up, type in secret. In the MySQL shell, list the databases and verify you see the todos database.

mysql> SHOW DATABASES;

You should see output that looks like this:

+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| todos              |
+--------------------+
5 rows in set (0.00 sec)

Hooray! We have our todos database and it’s ready for us to use!

To exit the MySQL shell, type exit in the terminal.
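
Before moving on, it's worth seeing what the --network-alias flag from earlier buys us: Docker's built-in DNS resolves the hostname mysql to the MySQL container's IP address for any container on the todo-app network. The official tutorial demonstrates this with the nicolaka/netshoot image, which ships with common network-troubleshooting tools:

# start a throwaway container on the same network, with networking tools installed
$ docker run -it --network todo-app nicolaka/netshoot

# inside that container, look up the alias; the ANSWER section shows the MySQL container's IP
$ dig mysql

This is why, in the next section, we can simply tell the app to connect to a host named mysql.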

Running our App with MySQL

The todo app supports the setting of a few environment variables to specify MySQL connection settings. They are:

  • MYSQL_HOST - the hostname for the running MySQL server
  • MYSQL_USER - the username to use for the connection
  • MYSQL_PASSWORD - the password to use for the connection
  • MYSQL_DB - the database to use once connected

With all of that explained, let’s start our dev-ready container!

  1. We’ll specify each of the environment variables above, as well as connect the container to our app network.

    $ docker run -dp 3000:3000 \
      -w /app -v "$(pwd):/app" \
      --network todo-app \
      --name MySQL-App \
      -e MYSQL_HOST=mysql \
      -e MYSQL_USER=root \
      -e MYSQL_PASSWORD=secret \
      -e MYSQL_DB=todos \
      node:12-alpine \
      sh -c "yarn install && yarn run dev"

    If you are on an Apple Silicon Mac or another ARM64 device (the same situation that needed the extra build tools earlier), use this command instead:

    $ docker run -dp 3000:3000 \
      -w /app -v "$(pwd):/app" \
      --network todo-app \
      --name MySQL-App \
      -e MYSQL_HOST=mysql \
      -e MYSQL_USER=root \
      -e MYSQL_PASSWORD=secret \
      -e MYSQL_DB=todos \
      node:12-alpine \
      sh -c "apk --no-cache --virtual build-dependencies add python2 make g++ && yarn install && yarn run dev"

    If you are using PowerShell then use this command:

    $ docker run -dp 3000:3000 `
      -w /app -v "$(pwd):/app" `
      --network todo-app `
      --name MySQL-App `
      -e MYSQL_HOST=mysql `
      -e MYSQL_USER=root `
      -e MYSQL_PASSWORD=secret `
      -e MYSQL_DB=todos `
      node:12-alpine `
      sh -c "yarn install && yarn run dev"
  2. If we look at the logs for the container (docker logs <container-id>), we should see a message indicating it’s using the mysql database.

    $ docker logs -f <container-id>
    # Previous log messages omitted
    $ nodemon src/index.js
    [nodemon] 1.19.2
    [nodemon] to restart at any time, enter `rs`
    [nodemon] watching dir(s): *.*
    [nodemon] starting `node src/index.js`
    Connected to mysql db at host mysql
    Listening on port 3000
  3. Open the app in your browser and add a few items to your todo list.

  4. Connect to the mysql database and prove that the items are being written to the database. Remember, the password is secret.

    $ docker exec -it <mysql-container-id> mysql -p todos

    And in the mysql shell, run the following:

    mysql> select * from todo_items;
    +--------------------------------------+--------------------+-----------+
    | id                                   | name               | completed |
    +--------------------------------------+--------------------+-----------+
    | c906ff08-60e6-44e6-8f49-ed56a0853e85 | Do amazing things! |         0 |
    | 2912a79e-8486-4bc3-a4c5-460793a575ab | Be awesome!        |         0 |
    +--------------------------------------+--------------------+-----------+

    Obviously, your table will look different because it has your items. But, you should see them stored there!

If you take a quick look at the Docker Dashboard, you’ll see that we have two app containers running. But, there’s no real indication that they are grouped together in a single app. We’ll see how to make that better shortly!

Using Docker Compose

Docker Compose is a tool that was developed to help define and share multi-container applications. With Compose, we can create a YAML file to define the services and with a single command, can spin everything up or tear it all down.

The big advantage of using Compose is you can define your application stack in a file, keep it at the root of your project repo (it’s now version controlled), and easily enable someone else to contribute to your project. Someone would only need to clone your repo and start the compose app. In fact, you might see quite a few projects on GitHub/GitLab doing exactly this now.

Installing Docker Compose

If you installed Docker Desktop/Toolbox for either Windows or Mac, you already have Docker Compose! Play-with-Docker instances already have Docker Compose installed as well. If you are on a Linux machine, you will need to install Docker Compose using the instructions here.

After installation, you should be able to run the following and see version information.

$ docker-compose version

Creating our Compose File

At the root of the app project, create a file named docker-compose.yml.

Recall that the two services defined in the YAML file below correspond to the two docker run commands we used above:

  1. Service 1: app

$ docker run -dp 3000:3000 \
  -w /app -v "$(pwd):/app" \
  --network todo-app \
  --name MySQL-App \
  -e MYSQL_HOST=mysql \
  -e MYSQL_USER=root \
  -e MYSQL_PASSWORD=secret \
  -e MYSQL_DB=todos \
  node:12-alpine \
  sh -c "apk --no-cache --virtual build-dependencies add python2 make g++ && yarn install && yarn run dev"

  2. Service 2: mysql

$ docker run -d \
    --network todo-app --network-alias mysql \
    -v todo-mysql-data:/var/lib/mysql \
    -e MYSQL_ROOT_PASSWORD=secret \
    -e MYSQL_DATABASE=todos \
    --name MySQL \
    mysql:5.7

Our complete docker-compose.yml should look like this:

# define the schema version
version: "3.8"

# define the list of services (or containers) we want to run as part of our application
services:
  # SERVICE 1
  # define the service entry and the image for the container
  app:
    image: node:12-alpine
    
    # migrate the command
    command: sh -c "apk --no-cache --virtual build-dependencies add python2 make g++ && yarn install && yarn run dev"
    
    # migrate the -p 3000:3000
    ports:
      - 3000:3000
      
    # migrate both the working directory (-w /app) and the volume mapping (-v "$(pwd):/app")
    working_dir: /app
    volumes:
      - ./:/app
    
    # we need to migrate the environment variable definitions using the environment key.
    environment:
      MYSQL_HOST: mysql
      MYSQL_USER: root
      MYSQL_PASSWORD: secret
      MYSQL_DB: todos

  # SERVICE 2
  # define the new service and name it mysql
  mysql:
    image: mysql:5.7
    
    # define the volume mapping
    volumes:
      - todo-mysql-data:/var/lib/mysql
      
    # specify the environment variables
    environment: 
      MYSQL_ROOT_PASSWORD: secret
      MYSQL_DATABASE: todos

volumes:
  todo-mysql-data:
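
Before bringing the stack up, you can ask Compose to validate the file and print the fully resolved configuration; it's a quick way to catch indentation or key mistakes:

# parse docker-compose.yml and print the resolved configuration (errors out if the file is invalid)
$ docker-compose config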

Running our Application Stack

Now that we have our docker-compose.yml file, we can start it up!

  1. Make sure no other copies of the app/db are running first (docker ps and docker rm -f <ids>).

  2. Start up the application stack using the docker-compose up command. We’ll add the -d flag to run everything in the background.

    $ docker-compose up -d

    When we run this, we should see output like this:

    Creating network "app_default" with the default driver
    Creating volume "app_todo-mysql-data" with default driver
    Creating app_app_1   ... done
    Creating app_mysql_1 ... done

    You’ll notice that the volume was created as well as a network! By default, Docker Compose automatically creates a network specifically for the application stack (which is why we didn’t define one in the compose file).

  3. Let’s look at the logs using the docker-compose logs -f command. You’ll see the logs from each of the services interleaved into a single stream. This is incredibly useful when you want to watch for timing-related issues. The -f flag “follows” the log, so will give you live output as it’s generated.

    If you don’t already, you’ll see output that looks like this…

    mysql_1  | 2019-10-03T03:07:16.083639Z 0 [Note] mysqld: ready for connections.
    mysql_1  | Version: '5.7.27'  socket: '/var/run/mysqld/mysqld.sock'  port: 3306  MySQL Community Server (GPL)
    app_1    | Connected to mysql db at host mysql
    app_1    | Listening on port 3000

    The service name is displayed at the beginning of the line (often colored) to help distinguish messages. If you want to view the logs for a specific service, you can add the service name to the end of the logs command (for example, docker-compose logs -f app).
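
When you're done experimenting, the whole stack can be torn down with a single command. By default the named volume is kept; add the --volumes flag if you also want to remove todo-mysql-data.

# stop and remove the containers and the network that Compose created
$ docker-compose down

# ...or additionally remove the named volumes declared in the compose file
$ docker-compose down --volumes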

