What are Containers?
- Containers are an application-centric method to deliver high-performing, scalable applications on any infrastructure of your choice.
- Containers are best suited to deliver microservices by providing portable, isolated virtual environments for applications to run without interference from other running applications.
- Containers encapsulate microservices and their dependencies but do not run them directly. Containers run container images.
- A container image bundles the application along with its runtime, libraries, and dependencies, and it represents the source of a container deployed to offer an isolated executable environment for the application.
- Containers can be deployed from a specific image on many platforms, such as workstations, Virtual Machines, public cloud, etc., as shown in the short example below.
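As a minimal sketch of the image-to-container relationship described above, the example below uses the Docker SDK for Python (`pip install docker`) to pull an image and start an isolated container from it. The image tag, container name, and port mapping are placeholders chosen for illustration, and a locally running Docker Engine is assumed.

```python
# Minimal sketch: pull a container image and run an isolated container from it.
# Assumes the Docker SDK for Python and a local Docker daemon.
import docker

client = docker.from_env()                # connect to the local container engine

# The image bundles the application with its runtime, libraries, and dependencies.
client.images.pull("nginx:1.25")          # example image; any tag would do

# A container is a running, isolated instance created from that image.
container = client.containers.run(
    "nginx:1.25",
    detach=True,                          # run in the background
    ports={"80/tcp": 8080},               # map container port 80 to host port 8080
    name="demo-nginx",                    # hypothetical name for this example
)
print(container.status)

# Clean up; the image stays local and can be reused to start more containers.
container.stop()
container.remove()
```

The same few lines work unchanged on a workstation, a VM, or a cloud instance, which is exactly the portability the container image provides.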
Container Orchestration
- With container runtimes like runc, containerd, or CRI-O, we can use those pre-packaged images to create one or more containers. All of these runtimes are good at running containers on a single host.
- Container orchestrators are tools that group systems together to form clusters where the deployment and management of containers are automated at scale, while meeting the requirements listed below (a short example follows the list):
- Fault-tolerance
- On-demand scalability
- Optimal resource usage
- Auto-discovery, allowing containers to automatically discover and communicate with each other
- Accessibility from the outside world
- Seamless updates/rollbacks without any downtime
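As a hedged illustration of how an orchestrator addresses several of these requirements at once, the sketch below uses the official Kubernetes Python client (`pip install kubernetes`) to declare a Deployment with three replicas and a rolling update strategy. The names, labels, and image are placeholders, and an accessible cluster with a local kubeconfig is assumed.

```python
# Sketch only: declare a replicated, self-healing, rolling-updatable workload
# with the official Kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()                 # use the local kubeconfig credentials
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),   # placeholder name
    spec=client.V1DeploymentSpec(
        replicas=3,                             # fault-tolerance and on-demand scalability
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        strategy=client.V1DeploymentStrategy(type="RollingUpdate"),  # seamless updates/rollbacks
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="web", image="nginx:1.25")]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
# Scaling later is a single declarative change, e.g. patching spec.replicas to 10.
```

Because the desired state is declarative, fault-tolerance (replacing failed replicas), on-demand scaling (changing `replicas`), and zero-downtime updates (the `RollingUpdate` strategy) are handled by the orchestrator rather than by hand.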
- A few different container orchestration tools and services:
- Amazon Elastic Container Service (ECS)
- Azure Container Instances
- Azure Service Fabric
- Kubernetes
- Kubernetes is an open source orchestration tool, originally started by Google and today a project hosted by the Cloud Native Computing Foundation (CNCF).
- Marathon
- Marathon is a framework to run containers at scale on Apache Mesos.
- Nomad
- Nomad is the container and workload orchestrator provided by HashiCorp.
- Docker Swarm
- Docker Swarm is a container orchestrator provided by Docker, Inc. It is part of Docker Engine.
Need for Container Orchestration
- Although we can manually maintain a couple of containers or write scripts to manage the lifecycle of dozens of containers, orchestrators make things much easier for operators, especially when it comes to managing hundreds or thousands of containers running on a global infrastructure.
- Most container orchestrators can (a short example follows this list):
- Group hosts together while creating a cluster.
- Schedule containers to run on hosts in the cluster based on resource availability.
- Enable containers in a cluster to communicate with each other regardless of the host they are deployed to in the cluster.
- Bind containers and storage resources.
- Group sets of similar containers and bind them to load-balancing constructs to simplify access to containerized applications by creating a level of abstraction between the containers and the user.
- Manage and optimize resource usage.
- Allow for implementation of policies to secure access to applications running inside containers.
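To make two of these capabilities concrete, the sketch below (again using the Kubernetes Python client, with placeholder names, labels, and ports) attaches resource requests to a container so the scheduler can place it according to available capacity, and fronts the like-labeled containers with a Service, the load-balancing abstraction that sits between the containers and the user.

```python
# Sketch: resource-aware scheduling hints and a load-balancing Service abstraction.
# Names, labels, and ports are placeholders for illustration.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Resource requests let the scheduler pick a cluster host with enough free capacity.
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="api-pod", labels={"app": "api"}),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="api",
                image="nginx:1.25",
                resources=client.V1ResourceRequirements(
                    requests={"cpu": "250m", "memory": "128Mi"},
                    limits={"cpu": "500m", "memory": "256Mi"},
                ),
            )
        ]
    ),
)
core.create_namespaced_pod(namespace="default", body=pod)

# A Service groups all containers labeled app=api and load-balances traffic to them,
# giving users one stable endpoint instead of individual container addresses.
service = client.V1Service(
    metadata=client.V1ObjectMeta(name="api-svc"),
    spec=client.V1ServiceSpec(
        selector={"app": "api"},
        ports=[client.V1ServicePort(port=80, target_port=80)],
    ),
)
core.create_namespaced_service(namespace="default", body=service)
```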
Container Orchestrator Deployments
- Most container orchestrators can be deployed on bare metal or Virtual Machines, on premises, or on public and hybrid clouds.
- Kubernetes, for example, can be deployed on a workstation, with or without a local hypervisor such as Oracle VirtualBox, inside a company’s data center, in the cloud on AWS Elastic Compute Cloud (EC2) instances, Google Compute Engine (GCE) VMs, DigitalOcean Droplets, OpenStack, etc.
- There are ready-to-use solutions that allow Kubernetes clusters to be installed, with only a few commands, on top of cloud Infrastructure-as-a-Service, such as GCE, AWS EC2, Docker Enterprise, IBM Cloud, Rancher, and multi-cloud solutions through IBM Cloud Private or StackPointCloud.
- There is also managed container orchestration as-a-Service, more specifically managed Kubernetes as-a-Service, offered and hosted by the major cloud providers, such as: