Container adoption is growing exceptionally fast. 451 Research predicts a 40% annual growth rate in the application container market, reaching $2.7bn by 2020. According to Gartner Research, "By 2020, more than 50% of companies will use container technology, up from less than 20% in 2017."
Although the uptake of containers has been swift in development environments, container use remains significantly rarer in production. Research by Diamanti shows that of those that have already adopted container technology, 47% plan to deploy containers in production and 12% have already done so.
In this article we examine the rise of containerization, the potential gains from the technology, and ways to extend its use in today's typical heterogeneous environments.
What is containerization and why is it becoming so popular?
The initial concept of containerization emerged in 1979, during the development of Unix V7, with the introduction of the chroot system call (Aqua). The main goal of containerization is portability: "build once, deploy everywhere."
Containerization makes it possible to isolate an application in a sort of prison (borrowing the analogy of the "Jails" of BSD). In essence, a container provides a set of resources visible only to its process, and does not require the installation of a new OS since it uses the kernel of the host system.
Containers therefore run natively on the host OS, which they share with each other. Applications or services are consequently much lighter (a few MB on average, compared with several GB for VMs), enabling much faster execution.
The creation of Docker in 2013 popularized the concept of containerization by making it much easier to use and offering a complete container management ecosystem.
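To make this concrete, a container image is typically described in a Dockerfile. The following is a minimal sketch only; the application, base image and file names are hypothetical, not taken from the article:

```dockerfile
# Hypothetical example: package a small Node.js web app as a container image
FROM node:18-alpine            # lightweight base image; shares the host kernel at runtime
WORKDIR /app
COPY package*.json ./
RUN npm install --production   # dependencies are baked into the image at build time
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]      # one main process per container
```

Built once with `docker build -t myapp .`, the same image runs identically on a developer laptop or a production host, which is the "build once, deploy everywhere" promise described above.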
- Docker integrates perfectly with the concept of DevOps, especially in the area of versioning: development and production are carried out in the same container. Put simply, if the application works on the Dev side, it will also work on the Ops side. Unlike a VM or a traditional application, there will be no side effects due to installation or a specific configuration needed in production.
- Resource cost is another key factor behind the popularity of containerization. As a basis for comparison, a machine capable of running 50 VMs will be able to host 1,000 containers.
- The speed of starting a container is also a major benefit, as it does not contain the OS: only a few seconds, as opposed to over a minute for a VM.
- Lastly, orchestrators such as Kubernetes or Mesos DC/OS have emerged to automate the deployment and management of containerized applications. These solutions bring high levels of scalability, responsiveness and elasticity, critical when handling sudden peaks of activity driven by business events such as Black Friday.
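The elasticity described above can be sketched with a minimal Kubernetes Deployment manifest. All names and the image reference here are hypothetical; the point is simply that the replica count is a declarative knob an orchestrator can turn:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webshop              # hypothetical application
spec:
  replicas: 3                # scale this up before a Black Friday peak
  selector:
    matchLabels:
      app: webshop
  template:
    metadata:
      labels:
        app: webshop
    spec:
      containers:
      - name: webshop
        image: registry.example.com/webshop:1.4.2   # hypothetical image
        ports:
        - containerPort: 8080
```

A command such as `kubectl scale deployment webshop --replicas=20` (or a Horizontal Pod Autoscaler) then adjusts capacity in seconds, which would be impractical with full VMs.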
In which technology environments is containerization the best fit?
Containerization can be applied to all types of technology, but is an ideal fit for managing web applications, especially in Linux environments.
It is also used in front-end development and middleware, but so far very little for back-end technologies. The principal reason is that databases are optimized to interact directly with the hardware, so containerization would bring no performance gain.
Containerization is also valuable in "Canary Deployments", a strategy for deploying versions to a subset of users or servers. The goal is first to deploy the change on a small number of servers, test the change and monitor the possible impacts, before extending the change to the remaining servers.
Kubernetes, a container orchestration system contributed by Google to the open source community, implements standard deployments natively, and more advanced strategies via tools like Istio, an open technology for connecting, managing and securing microservices at scale.
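Without a service mesh, a basic canary can be sketched in plain Kubernetes by running two Deployments behind one Service and skewing the replica counts; Istio refines this by routing on explicit traffic percentages instead of replica ratios. All names and version tags below are hypothetical:

```yaml
# Stable version receives roughly 90% of traffic (9 of 10 pods)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: myapp, track: stable}
  template:
    metadata:
      labels: {app: myapp, track: stable}
    spec:
      containers:
      - name: myapp
        image: myapp:1.0          # hypothetical current release
---
# Canary version receives roughly 10% of traffic (1 of 10 pods)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: myapp, track: canary}
  template:
    metadata:
      labels: {app: myapp, track: canary}
    spec:
      containers:
      - name: myapp
        image: myapp:1.1          # hypothetical new release under test
---
# The Service selects only on "app", so it load-balances across both tracks
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector: {app: myapp}
  ports:
  - port: 80
    targetPort: 8080
```

If monitoring shows no regressions on the canary pod, the stable Deployment is updated to the new image and the canary is removed.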
Who are the main containerization players?
Even though the concept of containerization was inspired by solutions like chroot, FreeBSD Jails or LXC (Linux Containers), other players dominate the market today. Among them are Docker, mentioned above, the undisputed leader in containerization, but also rkt from CoreOS (recently acquired by Red Hat and renowned for its security), Canonical's LXD, and Virtuozzo's OpenVZ, the oldest container platform.
Supporting these container solutions we also find the key orchestration players such as Kubernetes and Swarm (from Docker), and the Mesos DC/OS platform.
What are the obstacles to adopting Docker in a production environment?
In small organizations, a reticence to use Docker in production can be due to a skills gap, both in the use of Docker itself and in the orchestrators.
When using Docker, applications are "stateless" because of the microservices architecture used, while most applications, even n-tier ones, are "stateful". For this reason, an adaptation of the software architecture is often needed before using Docker in production.
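In practice, adapting a stateful component usually means externalizing its data so the container itself stays disposable. As a sketch only (the claim name, mount path and workload are illustrative, not prescribed by the article), a database container can mount a persistent volume managed by the orchestrator:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-data               # hypothetical claim name
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 10Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 1
  selector:
    matchLabels: {app: db}
  template:
    metadata:
      labels: {app: db}
    spec:
      containers:
      - name: postgres
        image: postgres:15    # example stateful workload
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data   # state lives outside the container
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: db-data
```

The container can now be destroyed and recreated at will: the state survives in the volume, not in the container.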
And there is a further impact: the application is no longer controlled in the same way. The use of Docker generates a cloud of applications, in which the links between services must be taken into account.
This is a new paradigm for both systems administrators and developers: it requires new tools to understand "live" how these communications take place, in order to resolve bugs. Troubleshooting therefore becomes much more complex.
The software architecture is impacted, as is the hardware architecture, which must be able to handle the very large log volumes this monitoring generates.
How to solve security problems when using containerization?
The main role of solutions like Docker is in the running and managing of containers, but they are rarely deployed as-is in production. Usually, they are used in conjunction with container orchestrators, designed to manage multiple machines.
Orchestrators also offer specific services that directly address containers created under Docker. These include security features such as a PKI for certificate management, or CNI (Container Network Interface) plugins for network management.
Orchestrators also provide high availability, management of sensitive data, and guarantee container isolation.
These tools are rich and relatively complex, requiring specific expertise, which explains why so few of them are deployed on-premise.
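The "management of sensitive data" mentioned above can be sketched with a Kubernetes Secret: credentials are stored by the orchestrator and injected into containers, rather than baked into images. The names and values below are placeholders, not real configuration:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials        # hypothetical secret
type: Opaque
stringData:
  username: app_user          # placeholder values only
  password: change-me
---
# Referenced from a container spec as an environment variable:
# env:
# - name: DB_PASSWORD
#   valueFrom:
#     secretKeyRef:
#       name: db-credentials
#       key: password
```

Rotating the credential then means updating one Secret object, not rebuilding and redeploying every image that uses it.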
Deploy your containers with DROPS!
As a release orchestration solution, DROPS can, like other solutions, interact with a Kubernetes cluster in order to push the various images produced in development to a production registry, whether on-premise or in the cloud (AWS, Azure or IBM Cloud).
But the main advantage of DROPS lies in its ability to orchestrate all types of deployment in a heterogeneous environment.
Non-intrusive, DROPS works with all types of orchestrators, utilizing the underlying features of the orchestrator itself. It relies on communication tokens and therefore does not require the installation of a plug-in.
In this way, DROPS is able to secure the deployment, update and rollback of legacy, on-premise, cloud or containerized applications in the same environment.
With DROPS, the process of deployment is comprehensive and consistent across all applications, regardless of the underlying platform, leveraging the orchestrator's infrastructure and tools.