27/11/2024

An introduction to virtualization with Docker Containers

Computing has continued to evolve in what most of us would agree is the right direction: towards greater efficiency. There are everyday examples of efforts to improve the efficiency of the hardware and software we use. Think back to when virtualization moved from being another tech buzzword to being a damn good way to make the most of our hardware: we were still deploying separate physical servers for mail, databases, directory access, and even firewalls and remote access. Today, the best networks are virtualized not because it is cooler, but because it is cheaper, easier to deploy and manage, and significantly cuts back on potential downtime.

The traditional approach to virtualization takes common hardware resources and allows them to service isolated instances of operating systems, along with the applications running on them. For most complex deployments, e.g. mail, this is as good as it gets. To run Windows applications you need Windows, to run macOS applications you need macOS, and so on. In a large network this can easily turn into a complex mess, with all kinds of management and interoperability complications. Yes, it would be easier to go with a single platform, but Windows or macOS can never match the granularity Linux has to offer, while Linux cannot match the simplicity of the productivity tools Windows offers. So when you don’t want to settle, or you develop your own bespoke applications, where do you turn?

[Image: containers_img1]

WHAT IF WE COULD MAKE A VIRTUAL MACHINE MORE GRANULAR?

Decoupling the operating system from the hardware opened up a myriad of possibilities that VMware and other virtualization protagonists are more than happy to sing about, but one question remained: what if we could decouple the application from the operating system? Instead of tying a single OS to a single VM, what if we could tie a single OS to a handful of isolated environments and host a specific application in each one?

INTRODUCING DOCKER CONTAINERS

Docker containers include the application and all of its dependencies, but share the kernel with other containers, running as isolated processes in user space on the host operating system. Docker containers are not tied to any specific infrastructure: they run on any computer, on any infrastructure, and in any cloud. This platform independence makes it possible to move containers around in much the same way as virtual machines.
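
To make “the application and all of its dependencies” concrete, here is a minimal sketch of a Dockerfile for a hypothetical Python web service; the base image, file names and app.py entry point are assumptions for illustration, not a prescription:

    # Dockerfile: a hypothetical Python service packaged with its dependencies
    FROM python:3.12-slim
    WORKDIR /app
    # dependencies are baked into the image, not installed on the host
    COPY requirements.txt .
    RUN pip install --no-cache-dir -r requirements.txt
    COPY . .
    # the single, isolated process this container exists to run
    CMD ["python", "app.py"]

The resulting image carries everything the application needs, so it behaves the same on a laptop, a server or a cloud instance, which is the portability described above.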

[Image: containers_img2]

Advantages of Containers

  • Significantly lower resource requirements. A single operating system hosting dozens, if not hundreds, of applications takes most OS management out of the equation, leaving more time to tweak and improve the applications themselves
  • Simplified development cycle. The biggest issue with traditional applications is the list of dependencies an application has to satisfy in order to run on a given platform, and any hope of cross-platform support makes that list grow quickly. With containers the focus is a single container and the application running in it, making it easier for developers to add new features, fix existing issues, and deploy applications faster (see the build-and-run sketch after this list)
  • Consistency across platforms. Developing applications for Windows, Mac and Linux can be a nightmare for small development houses, let alone individual developers. Quite often, variations in the underlying platform result in significant inconsistencies, a problem that largely disappears in the world of containers
  • Granularity: multiple applications running on a single OS can cause performance issues, especially where several applications tap into the same pool of dependencies. With containers, process isolation means a single application has exclusive access to its own resource pool
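
Assuming the hypothetical Dockerfile sketched earlier, the whole build-and-run cycle reduces to two commands, with the resource limits here illustrating the exclusive resource pool mentioned under granularity; the image name and port are placeholders:

    # build once; the image carries all of its dependencies with it
    docker build -t myservice:1.0 .
    # run as an isolated process with its own CPU and memory allowance
    docker run --rm -p 8000:8000 --memory=256m --cpus=0.5 myservice:1.0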

Disadvantages of Containers

  • While containers do decouple the application from the OS, they don’t do so entirely, and Linux containers are still incompatible with Windows hosts. This is, of course, slowly changing
  • Complex to deploy. Without orchestration tools like Kubernetes or Apache Mesos, managing containers across geographically distributed infrastructure is impractical and cumbersome
  • Root access. Docker has traditionally required root privileges on the host, and processes inside a container run as root by default. While this is no issue in a test environment, in a production environment with web-facing applications it poses some serious security concerns (see the sketch after this list)
  • A shared kernel. Every container interfaces with the same kernel, so a single kernel exception can affect every container on the host
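
A common mitigation for the root-access concern, sketched here as an excerpt that would extend the hypothetical Dockerfile above, is to create and switch to an unprivileged user inside the image:

    # create an unprivileged user and drop root before the app starts
    RUN useradd --create-home appuser
    USER appuser
    CMD ["python", "app.py"]

Rootless Docker takes this further by running the daemon itself without root, though it is not the default.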

APPLICATIONS FOR ZAMBIAN DEVELOPERS AND ENGINEERS

There’s no running away from the fact that containers will become an essential skill for DevOps engineers and even systems administrators. The back ends behind mobile and web applications are slowly moving away from simply being “hosted on a server” to being hosted in purpose-built environments like Docker containers, which allow for better Continuous Integration and Continuous Delivery (CI/CD). As Zambia sees more university graduates take an interest in developing applications for mobile and the web, understanding what options are available, their advantages, and how to leverage them is essential to remaining relevant and competitive.
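
As a hedged sketch of where containers slot into CI/CD, here is a minimal GitHub Actions workflow that rebuilds the image on every push; the workflow path, image name and repository layout are all assumptions:

    # .github/workflows/build.yml (hypothetical)
    name: build
    on: [push]
    jobs:
      build:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - name: Build the container image
            run: docker build -t myservice:${{ github.sha }} .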

For systems administrators, HP recently announced that all future renditions of their servers will ship with the Docker Engine, for anyone looking to deploy a hybrid virtualized network with a mix of actual virtual machines, physical servers and Docker containers. Of course, containers are impractical for complex deployments like mail servers; however, applications like an ERP, accounting software and other productivity tools would benefit from a more isolated and granular runtime environment (see the deployment sketch below). #ICTZM
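
To make that last point concrete, here is a hedged docker-compose sketch of how an accounting or ERP application and its database might run side by side as containers; the image names and credentials are placeholders, not recommendations:

    # docker-compose.yml (hypothetical two-service deployment)
    services:
      erp:
        image: my-erp:latest             # placeholder application image
        ports:
          - "8080:8080"
        depends_on:
          - db
      db:
        image: postgres:16
        environment:
          POSTGRES_PASSWORD: change-me   # placeholder; use proper secrets in production
        volumes:
          - db-data:/var/lib/postgresql/data
    volumes:
      db-data:

Each service stays isolated in its own container, yet the whole stack starts with a single docker compose up.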
