Guide to containerisation
By Team Arrk | 5 mins read
Containerisation has become one of the hottest terms in IT innovation. However, what does it actually mean and what can it do for your business? In this guide to containerisation we explain its advantages and other essentials that you should know.
What exactly are containers and what is the point of them?
The idea behind containers is to get software to run reliably when it is moved from one computing environment to another – much like a ship carrying goods in standardised containers. In a computing sense, this could mean moving software from a developer's laptop to a test environment, from a physical machine into a data centre, or from a staging area into production.
A container is made up of a complete runtime environment – the application and all of its dependencies, configuration files, libraries and binaries, wrapped up together in one simple package. The advantage of doing this is that any differences in the underlying infrastructure or in the OS distributions are abstracted away.
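As a rough illustration of that packaging idea, here is a minimal sketch using the Docker SDK for Python (the "docker" package) – an assumption on our part, since any client would do – with an example image name; it assumes Docker is installed and a local daemon is running.

# Minimal sketch: launching a self-contained container with the Docker SDK
# for Python (pip install docker). Assumes a local Docker daemon is running.
import docker

client = docker.from_env()  # connect to the local Docker daemon

# The image ("python:3.12-slim" here, purely as an example) bundles the
# application's runtime, libraries and binaries, so the same container
# behaves the same on a laptop, a test server or in production.
output = client.containers.run(
    "python:3.12-slim",
    ["python", "-c", "print('hello from inside the container')"],
    remove=True,  # clean up the container once it exits
)
print(output.decode())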
Wait… isn’t this like virtualisation?
Those of you who are familiar with the latest IT jargon will already know about virtualisation and its obvious similarities with containerisation. In summary, with virtualisation the package that gets passed around is a virtual machine, which includes a complete operating system as well as the application itself. A physical server running three virtual machines, for example, has a hypervisor with three separate operating systems running on top of it.
With containerisation, by contrast, three containerised applications all run on a single operating system and share its kernel. The shared parts are read-only, while each container has its own mount point for writing its own data. The result is that containers rely on fewer resources than virtual machines and are generally far more lightweight: whereas each virtual machine needs its own operating system and can run to several gigabytes, a container may be made up of just tens of megabytes.
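One way to see the shared kernel in practice is to ask a container which kernel it is running on: it reports the host's kernel, because there is no separate guest operating system to boot. The sketch below again uses the Docker SDK for Python, with the "alpine" image chosen purely as an example.

# Sketch: demonstrating that a container shares the host's kernel.
import platform
import docker

client = docker.from_env()

# Kernel release as seen by the host.
print("host kernel:     ", platform.release())

# Kernel release as seen from inside an Alpine container. Because containers
# share the host's kernel rather than booting their own OS, the two values
# match on a Linux host (Docker Desktop on Mac/Windows runs a hidden VM,
# so the comparison only holds on Linux).
inside = client.containers.run("alpine", "uname -r", remove=True)
print("container kernel:", inside.decode().strip())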
So is containerisation the same as Docker?
Docker and containerisation seem to go hand in hand – but they are not the same thing. Think about how the word “hoover” became part of our vernacular because the Hoover brand popularised the vacuum cleaner – however, there were and are many other types of vacuum cleaners. The same could perhaps be said about Docker and containerisation.
Containerisation is, in fact, not a particularly new concept. It can be traced back at least ten years to LXC (Linux Containers), and there have been other forms too, including Solaris Containers, AIX Workload Partitions and FreeBSD jails.
More recently, alternatives to Docker have emerged on Linux, with rkt perhaps the most well known. While Docker is now being used for complex tasks such as building clustered systems and launching cloud servers, rkt can run Docker containers as well as those that follow its own App Container image specification.
What are the advantages and disadvantages of containerisation?
Let’s take a look at some of the pros and cons of containerisation. Firstly, the advantages:
- Speed | Generally, containers start significantly faster than virtual machines. A virtual machine has to boot a full operating system image, often 10-20GB, whereas a container shares the host's operating system kernel and only has to start the application itself. With this speed, development teams can spin up project code quickly and test it in different configurations (see the start-up timing sketch after this list).
- Lightweight | Whereas a host server might accommodate dozens of virtual machines, the same server can often run hundreds or, in some cases, even thousands of containers. They therefore offer a much denser form of computing without a corresponding increase in power or space.
- Proven | If you’re worried about whether something relatively new really works, take a look at Google Search as the ultimate example. The world’s most popular search platform relies on Linux containers for its internal operations, reportedly launching around two billion containers a week – more than 3,000 every second. It could be argued that containers are part of the secret behind the smooth, fast results you get from Google.
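As a rough way of checking the speed claim on your own machine, the sketch below times how long a throwaway container takes to start, run a trivial command and exit, using the Docker SDK for Python; the "alpine" image and the number of runs are arbitrary example choices.

# Sketch: measuring container start-up time with the Docker SDK for Python.
import time
import docker

client = docker.from_env()
# Pull the image once up front so the loop below measures container
# start-up, not the image download.
client.images.pull("alpine", tag="latest")

runs = 5
start = time.perf_counter()
for _ in range(runs):
    # Start a container, run a trivial command, and remove it on exit.
    client.containers.run("alpine", "true", remove=True)
elapsed = time.perf_counter() - start
print(f"average start/run/stop time: {elapsed / runs:.2f}s over {runs} runs")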
The disadvantages:
- Security concerns | As yet there has been relatively little research into how safe it is to run thousands of containers side by side. For example, if two containers are permitted to talk to each other and one of them contains malicious code, it may only be a matter of time before malware reaches something valuable. It is also possible, in theory, for malware to build up a picture of what other containers are doing. Containers are meant to be isolated, so this shouldn’t happen, but it has not yet been proven that no form of snooping can occur (one way of reducing the attack surface is sketched after this list).
- Reliance on a single host | At the moment, Docker and containers run against a single host platform – but what if an application needs 10 or 100 servers? Orchestrating containers across many hosts still requires additional tooling.
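On the security point, some of the risk can be reduced by limiting what a container is allowed to do. The sketch below, again using the Docker SDK for Python, shows one illustrative way of tightening a container's defaults; the image, command and limits are examples only, not a complete hardening guide.

# Sketch: running a container with a reduced attack surface using the
# Docker SDK for Python. These options are illustrative, not exhaustive.
import docker

client = docker.from_env()

container = client.containers.run(
    "alpine",
    "sleep 60",              # stand-in for a real workload
    detach=True,
    network_disabled=True,   # no network, so it cannot talk to other containers
    cap_drop=["ALL"],        # drop all Linux kernel capabilities
    read_only=True,          # mount the root filesystem read-only
    mem_limit="64m",         # cap memory so one container cannot starve the host
    auto_remove=True,        # clean up automatically when it exits
)
print("started hardened container:", container.short_id)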
Containerisation – The wait to go mainstream
Overall, the advantages of containerisation are clear to see – it can help reduce IT labour and running costs, boost speed and more. There are still a number of issues to be resolved, however, and more research is needed into securing containers: the threat is that if one thing goes wrong, it may go wrong in a much bigger way than it otherwise would.
With a big name like Google Search raising its profile, it’s clear that containerisation has massive potential – but it must overcome these caveats if it is to gain mass appeal.