When to use Docker?


Docker is a container technology for creating and managing portable containers. It accelerates the process of deploying an app to a cloud environment, making it a faster and smoother experience.

What is a container?

A container is a lightweight, stand-alone executable package that contains everything needed to run a piece of software, including the code, a runtime, system tools, libraries and settings.

By definition, a container is a standardized unit of software that always delivers the same application and execution behavior no matter where it runs or who runs it.

To simplify the understanding, Docker can be compared to a virtualization solution such as VirtualBox or VMware. The main difference is that each Docker container carries a slimmed-down, optimized layer over the operating system that is much smaller in size and encapsulates apps/environments instead of fully bloated machines, allowing for minimal disk usage, low impact on the host OS and faster startup.

Docker also makes sharing, re-building and distributing an easy task compared to virtual machines. Isolating the application's dependencies makes segregation of concerns, troubleshooting and distribution a faster and less painful experience for developers.

Once an image is created, it can be stored in a container registry such as Docker Hub and can be easily pulled and run on any machine that has Docker installed. This service makes it easy to share and distribute your application to other developers and teams.
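As a sketch of that workflow (the account name myuser and image name hello-node are hypothetical), publishing an image to Docker Hub looks like this:

```shell
# Build the image and give it a name and version tag
docker build -t myuser/hello-node:1.0 .

# Log in to Docker Hub, then push the tagged image
docker login
docker push myuser/hello-node:1.0

# On any other machine with Docker installed, pull and run it
docker pull myuser/hello-node:1.0
docker run myuser/hello-node:1.0
```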

But, when to use Docker?

One reason for using Docker is when you have different development and production environments and you want to build and test in exactly the same environment. For example, let's say we have a company where each team works with a slightly different setup.

Wouldn't it be nice to have the same environment across all developers working on the same project?

If you host your website on a virtual machine by installing, for instance, a LAMP stack (Linux, Apache, MySQL, PHP) directly on the machine, it may sound like a faster approach at first. But later on, when your application is in production, this can become a headache, especially when upgrading a library for one application could affect other applications and cause dependency issues and downtime.

You definitely don't want to update dependencies globally since this can cause a direct impact on applications with different dependency versions.

Using Docker, you can have the same locked-in environment in development and production, which ensures that your application will work exactly as tested.

You should also use Docker if you are concerned with these architecture principles:

Architecture Principles

Isolation

Isolate your application and its dependencies from the physical or virtual machine by shipping them as a self-contained package in which the application runs.

Compatibility

Ensure that your application runs consistently across different environments.

Configuration

Reduce the time required to configure a development or testing environment.

Deployment

Run a version of your application that is still in development and another version that is in production, on the same host. This allows you to test new features and bug fixes on the development version without affecting the production version.
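As an illustration of this principle (the image names and tags here are hypothetical), two versions of the same app can run side by side on one host, published on different ports:

```shell
# Production version, published on host port 8080
docker run -d -p 8080:3000 --name myapp-prod myapp:1.0

# Development version, published on host port 8081, on the same machine
docker run -d -p 8081:3000 --name myapp-dev myapp:1.1-dev
```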

Portability

Package your application and its dependencies into a single portable unit called a container image. This image can then be distributed and run on any system that supports Docker, such as Linux, macOS or Windows.

Orchestration

Manage, distribute and scale up or down your application in a production environment. This is probably more often used in larger, more critical applications that require high availability, scalability and manageability.

For the last one, you can use Kubernetes, an open-source container orchestration tool originally designed by Google that allows you to scale and distribute your application across different machines and regions.

The docker engine interfaces with the host operating system allowing each self-contained application to run on a separate container.

The Docker installation is very easy and very well documented on the official page below. We won't go over the details here because each operating system has a different installation process.

https://docs.docker.com/engine/install/

For this article, we are going over the very basic Hello World, assuming of course that you have Docker already installed on your machine.

  1. Install Docker on your system, if you haven't already. You can download it from the Docker website.

  2. Open a terminal or command prompt on your system and run the following command to download the hello-world image from Docker Hub.

docker pull hello-world

After you've downloaded the hello-world image, let's run it. The following command starts a container from the hello-world image:

docker run hello-world

To check the list of running containers, use the command below. Note that the hello-world container exits as soon as it prints its message, so it won't appear here; add the -a flag (docker ps -a) to list stopped containers as well:

docker ps

To list all docker images that you have stored on a machine, just run this command:

docker images
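A few related housekeeping commands are worth knowing (a sketch; the container name below is a placeholder):

```shell
# List all containers, including stopped ones
docker ps -a

# Remove a stopped container by name or ID
docker rm my-container

# Remove an image you no longer need from local storage
docker rmi hello-world
```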

Creating our First Container

To create our first container, we are going to build a basic custom Node.js application using Express.

First create a project folder with mkdir hello, change into it, and run npm init to generate a package.json.

Then install Express by running npm install express in the terminal; this downloads the package and records it as a dependency in package.json.
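At this point the generated package.json looks roughly like this (the field values are illustrative, your generated file will differ):

```json
{
  "name": "hello",
  "version": "1.0.0",
  "dependencies": {
    "express": "^4.18.2"
  }
}
```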

Now, open up the project on your favorite editor and let's create a new file called app.mjs.

import express from 'express';

const app = express();

// Respond to GET / with a simple HTML greeting
app.get('/', (req, res) => {
    res.send('<h1>Hello World</h1>');
});

// Listen on port 3000
app.listen(3000);

To test our basic Hello World application, run node app.mjs in the terminal, then navigate to localhost:3000 in the browser to preview it.

The .mjs extension denotes an ECMAScript module. ECMAScript 2015 (ES6) introduced the specification for ES Modules, a standard module system for JavaScript that allows developers to organize code into smaller reusable components.

Next, we will create a Dockerfile with a couple of instructions to deploy our application into a container.

FROM node:18

WORKDIR /app

COPY package.json .

RUN npm install

COPY . . 

EXPOSE 3000

CMD ["node", "app.mjs"]

FROM node:18 -> indicates that we want to use Node.js 18 as the base image. This pre-built image is downloaded from Docker Hub (or whichever registry is configured).
WORKDIR /app -> creates a folder called app inside the container and makes it the working directory.
COPY package.json . -> copies the package.json to the working directory /app inside the container.
RUN npm install -> installs all the dependencies listed in the package.json file.
COPY . . -> copies the rest of our project files to the /app folder inside the container.
EXPOSE 3000 -> documents that the application inside the container listens on port 3000. It does not publish the port by itself; that is done with the -p flag at run time.
CMD ["node", "app.mjs"] -> executes the command node app.mjs when the container starts.
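One optional addition, not required for this example: since COPY . . copies everything in the project folder, a .dockerignore file is commonly placed next to the Dockerfile so the local node_modules folder doesn't overwrite the one installed inside the image:

```
node_modules
npm-debug.log
```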

Assuming that you have Docker installed on your system, we build our Docker image by running the following command in the terminal.

docker build .

The dot tells Docker to use the current project folder as the build context, which is where it looks for the Dockerfile and the files to copy.
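As a side note (the image name hello-node is just an example), it is usually more convenient to give the image a human-readable name with the -t flag instead of working with raw image IDs:

```shell
# Build and tag the image with a readable name
docker build -t hello-node .

# The name can then be used anywhere an image ID is expected
docker run -p 3001:3000 hello-node
```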

After executing all the instructions in the Dockerfile, the build prints the image ID when it finishes successfully:

writing image sha256:b1868df66ac563d37a248443999d016c9c4db1f7a11380565e8b19303ea99ac7

To run this image locally you run the following command:

docker run -p 3001:3000 sha256:b1868df66ac563d37a248443999d016c9c4db1f7a11380565e8b19303ea99ac7

The SHA-256 digest is a unique identifier for a Docker image and is used to ensure that the image has not been altered or corrupted in any way.

In the command docker run -p 3001:3000, the left side of the colon (3001) represents the host port, and the right side of the colon (3000) represents the container port.

Since our container has port 3000 exposed in our Dockerfile, to communicate with it we need to publish this port to our host system, in this case on port 3001.

By default, there is no connection between the container and the host system. If we want to send HTTP requests to an application running inside the container, we need to publish the container port to the host environment.

Navigating to http://localhost:3001/ on our local system, we reach the application running inside the Docker container on port 3000.


To stop this container, just open up another terminal window or tab and run docker ps to list the running containers:

docker ps
CONTAINER ID   IMAGE          COMMAND                  CREATED         STATUS         PORTS                    NAMES
7098b9e81f5d   b1868df66ac5   "docker-entrypoint.s…"   1 minute ago   Up 1 minute   0.0.0.0:3001->3000/tcp   naughty_kirch

Just copy the name of the container that just started, shown in the NAMES column: naughty_kirch

docker stop naughty_kirch

This command gracefully stops the container and shuts it down. If you navigate to localhost:3001 again, the browser will report that the site can't be reached.
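As a sketch of the remaining lifecycle commands (using the container name and image ID from the example above):

```shell
# Restart the stopped container without creating a new one
docker start naughty_kirch

# Stop it again and remove it entirely
docker stop naughty_kirch
docker rm naughty_kirch

# Or pass --rm so the container removes itself when it stops
docker run --rm -p 3001:3000 b1868df66ac5
```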

Conclusion

You can use Docker with any programming language and technology of your choice. Docker also allows you to run different versions of an application side by side, for example a legacy framework or an older database.

You can deploy your applications faster, with fewer dependency problems and better security, and this makes moving and maintaining applications between different hosting environments a breeze.

Best of all, we did not need to install Node.js inside the container's host environment by hand or clutter it with third-party dependencies. Everything happened inside the container, and we were still able to access the application via HTTP requests to localhost:3001.