If industry buzz is to be believed, Docker is the buzzword promising to rewrite the DevOps rulebook. How?
Docker and Containerization
By definition, containerization is operating-system-level virtualization in which several applications are deployed without launching a dedicated VM for each one. Instead, the applications run in isolated environments that share a single kernel. Examples include Docker and rkt (pronounced "rocket").
Docker is a popular open-source containerization tool based on Linux containers, and it is currently the world's leading tool for containerization.
When speaking of Docker, people often describe a container as a "lightweight VM". In actuality, however, a container is quite different from a VM.
In a normal virtualized environment, one or more virtual machines run on top of a physical machine using a hypervisor.
Containers, on the other hand, run in user space on top of the operating-system kernel. Containers are isolated within a host using two Linux kernel features: namespaces and control groups (cgroups).
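This isolation can be observed directly on a Linux host. The sketch below (assuming a Linux system with /proc mounted) lists the namespaces the current shell belongs to and the cgroups it is placed in; a Docker container simply gets its own private set of these:

```shell
# List the namespaces the current process belongs to.
# A container receives its own copies of (most of) these:
# mnt (mounts), pid (process IDs), net (network stack),
# uts (hostname), ipc (shared memory), and so on.
ls /proc/self/ns

# Show which control groups (cgroups) the process is in.
# Docker uses cgroups to limit a container's CPU, memory, and I/O.
cat /proc/self/cgroup
```

Tools like `unshare` and `nsenter` manipulate these same kernel features by hand; Docker's contribution is packaging them behind a convenient workflow.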
Docker containers pack a piece of software into a complete filesystem that contains everything it needs to run: code, runtime, system tools, and system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in, and helps avoid the "it works on my machine" problem.
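As an illustration, a minimal Dockerfile declares everything the container needs – base OS userland, runtime, libraries, and code – in one place (the file `app.py` and the `python:3.12-slim` base image here are hypothetical placeholders, not from the original article):

```dockerfile
# Hypothetical example: package a small Python app with its runtime.
FROM python:3.12-slim
WORKDIR /app
# Bake the app's libraries into the image so every environment gets the same ones.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY app.py .
# Command the container runs when it starts.
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces an image that behaves identically on a laptop or a server when started with `docker run --rm myapp`, since everything the application needs ships inside the image.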
It is largely used in development and testing environments, where containers can be created on the fly and destroyed once the requirement is verified/tested. More recently, with the help of production-ready orchestration tools like Docker Swarm and Kubernetes, it is also used in production, where containers can easily be scaled up and down based on the application's load.
Advantages and Disadvantages of Docker
Although there are several pros of using Docker, the two key advantages it offers are:
- Containers take less time to spin up, which comes in handy when there are spikes in user activity.
- As containers don’t carry the overhead of a full OS, we can always fit more containers than virtual machines on a server.
One major disadvantage is that a container is less secure than a VM: all containers share the host kernel, so they are less isolated from each other, and a process running as root inside a container poses a greater risk to the host. If there is a vulnerability in the kernel, it can compromise the security of every container on the machine. This is a major deterrent that stops many clients from using Docker in their live environments.
Before and After Docker
In earlier days, a dedicated physical machine was used to host each application, often utilizing only around 10% of its total capacity. With the advent of virtualization, capacity utilization increased by a fairly good percentage. Even then, however, some of the machine's capacity was wasted: installing a full OS on each VM consumed disk space, RAM, and CPU. Docker (containerization) addresses this problem. Docker containers don't need a separate OS installation, which frees up machine capacity that can instead be used to host applications.
Now comes the question: is there anything after Docker?
The industry is now moving towards even more lightweight infrastructure, at the cost of heavier customization.
Unikernels are the most likely possibility for the path ahead.
Here, only the OS libraries that the application needs are selected and compiled together with the application and configuration code into a fixed-purpose image called a unikernel, which runs directly on a hypervisor. This further optimizes machine-capacity utilization.