Containers-as-a-service (CaaS) is a category of cloud services where the service provider offers customers the ability to manage and deploy containerized applications and clusters. CaaS is sometimes viewed as a special sub-type of the Infrastructure-as-a-service (IaaS) cloud service delivery model, but where the main commodity is containers rather than physical hardware and virtual machines.
Containers essentially function as an alternative to the traditional virtualization approach: instead of virtualizing the hardware stack with virtual machines, containers virtualize at the level of the operating system. As a result, containers run far more efficiently than virtual machines. They use fewer resources and a fraction of the memory, since a virtual machine must boot an entire operating system each time it is initialized.
Container technology has existed in some form for decades, but no organization has done more to develop and perfect the practice of container management than Google. Motivated by the need to reduce software development costs and time-to-value, engineers at Google created an addition to the Linux kernel known as "cgroups," which was used to build containers that would power all of Google's applications. These containers act as isolated execution environments for individual applications while sharing the host operating system's kernel.
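To make the mechanism concrete, here is a minimal sketch of how a process can be placed under a cgroup on a modern Linux host. It assumes cgroup v2 mounted at /sys/fs/cgroup and root privileges, and the group name demo-app is purely illustrative:

```python
import os

# Assumes a Linux host with cgroup v2 mounted at /sys/fs/cgroup and
# root privileges. The group name "demo-app" is illustrative.
CGROUP = "/sys/fs/cgroup/demo-app"

os.makedirs(CGROUP, exist_ok=True)

# Cap the memory available to every process placed in this group.
with open(os.path.join(CGROUP, "memory.max"), "w") as f:
    f.write("256M")

# Move the current process into the group; from this point the kernel
# accounts for and limits its resource usage.
with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
    f.write(str(os.getpid()))
```

Container runtimes automate exactly this kind of bookkeeping, combining cgroups with namespaces and filesystem isolation so that each application sees its own private environment.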
Virtualization has been one of the most significant paradigm shifts in computing and software development over the past decade, increasing resource utilization and reducing time-to-value for development teams while minimizing the repetitive work needed to deliver services. The ability to deploy applications in a virtualized environment meant that developer teams could more easily replicate the conditions of the production environment and conduct more targeted testing at a lower cost.
Virtualization meant that a user could apportion processing power among several virtual environments running on the same machine, but each of those environments consumed a substantial amount of memory: every virtual environment requires its own operating system to function, and running six instances of an operating system on the same hardware can be highly resource intensive.
Containers emerged as a mechanism for finer-grained control of virtualization. Rather than virtualizing an entire machine, including the operating system and hardware, a container creates an isolated context that packages an application together with its critical dependencies, such as binaries, libraries, and configuration files, into a discrete unit.
Containers and virtual machines both allow applications to be deployed in virtual environments. The key difference is that a container environment contains only the files the application needs to run, while virtual machines carry many additional files and services that increase resource utilization without adding functionality. As a result, a computer that might be able to run five or six virtual machines could run tens or even hundreds of containers.
One of the key benefits of containers is that they take significantly less time to initialize than virtual machines. This is because containers share the host's Linux kernel, while each virtual machine must boot its own operating system at start-up.
The fast spin-up time of containers makes them ideal for large applications composed of many discrete parts or services that must be initialized, run, and terminated within a relatively short time frame. Doing this with containers takes less time and consumes fewer CPU resources than doing it with virtual machines, making the process significantly more efficient.
Containers fit well with applications built on a microservices architecture rather than the traditional monolithic architecture. While a monolithic application intertwines every piece of functionality in a single deployable unit, most applications today are developed in the microservices model, where the application consists of individual microservices, or features, that are deployed in containers and communicate with each other through APIs.
The use of containers makes it easy for developers to check the health and security of individual services within the application, toggle services on and off in the production environment, and ensure that individual services meet performance and CPU usage targets.
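As an illustration of how such per-service checks usually work, the sketch below shows a containerized microservice exposing a simple health endpoint over HTTP. It uses only the Python standard library; the /health path and port 8080 are common conventions rather than requirements of any particular platform:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Orchestrators and monitoring tools poll this endpoint to decide
        # whether the container is healthy or needs to be restarted.
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # One service per container: the process runs in the foreground and
    # the orchestrator restarts the container if it exits.
    HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```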
A container cluster is a dynamic system of container management that places and manages containers, grouped in pods and running on nodes, along with all of the interconnections and communication channels that connect the containers within the system. Container clusters have three key components, each described below: dynamic container placement, sets of co-scheduled containers (pods), and communication between nodes.
Container clusters rely on a function called cluster scheduling, whereby workloads packaged in container images are intelligently allocated across virtual and physical machines based on each machine's available capacity and the workload's CPU and hardware requirements. A cluster scheduler enables flexible management of container-based workloads by rescheduling work automatically when failures happen, growing or shrinking the cluster when appropriate, and spreading workloads across machines to reduce or eliminate the risks that come from correlated failures. Dynamic container placement is all about automating the execution of workloads by sending containers to the right place for execution.
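The placement decision itself can be sketched in a few lines. The following deliberately simplified scheduler picks, for each workload, the node with the most free capacity that can still fit it; production schedulers weigh far more factors, and all names here are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    free_cpu: float  # available CPU cores
    free_mem: int    # available memory, MiB

@dataclass
class Workload:
    name: str
    cpu: float
    mem: int

def schedule(workload: Workload, nodes: list[Node]) -> Node:
    """Place the workload on the least-loaded node that can fit it."""
    candidates = [n for n in nodes
                  if n.free_cpu >= workload.cpu and n.free_mem >= workload.mem]
    if not candidates:
        raise RuntimeError(f"no node can fit {workload.name}")
    best = max(candidates, key=lambda n: (n.free_cpu, n.free_mem))
    best.free_cpu -= workload.cpu  # reserve the capacity
    best.free_mem -= workload.mem
    return best

nodes = [Node("node-a", free_cpu=4.0, free_mem=8192),
         Node("node-b", free_cpu=2.0, free_mem=4096)]
print(schedule(Workload("web", cpu=1.0, mem=512), nodes).name)  # node-a
```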
Companies that use CaaS typically run containers in large enough volumes that it becomes useful to think in terms of sets of containers rather than individual ones. CaaS providers let their customers configure pods, collections of containers that are co-scheduled, in any way they choose. Rather than scheduling single containers, users can group containers into pods to ensure that certain sets of containers are always executed together on the same host.
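Extending the scheduler sketch above, a pod can be modeled as the unit of placement: the scheduler sees the combined resource request of all containers in the pod, which guarantees they land on the same host. Again, the names and images are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Container:
    image: str
    cpu: float
    mem: int

@dataclass
class Pod:
    name: str
    containers: list[Container] = field(default_factory=list)

    # The pod is scheduled as a unit, so its resource request is the
    # sum of its containers' requests.
    @property
    def cpu(self) -> float:
        return sum(c.cpu for c in self.containers)

    @property
    def mem(self) -> int:
        return sum(c.mem for c in self.containers)

# A web server and its log-shipping sidecar are co-scheduled: they always
# run on the same host and can communicate over localhost.
pod = Pod("web", [Container("example/web:1.0", cpu=1.0, mem=512),
                  Container("example/log-shipper:1.0", cpu=0.2, mem=128)])
print(pod.cpu, pod.mem)  # 1.2 640
```

A Pod defined this way can be passed straight to the schedule function from the previous sketch, since it exposes the same cpu and mem attributes as a single workload.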
Many newly developed applications today consist of microservices that are networked to communicate with each other. Each microservice is deployed in a container running on a node, and nodes must be able to communicate with each other effectively. Each node exposes information such as its hostname and IP address, the status of the pods running on it, its currently available capacity for scheduling additional pods, and other metadata.
Communication between nodes is necessary for maintaining a failover system, where if an individual node fails, the workload can be sent to an alternative or backup node for execution instead.
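The failover logic can be sketched compactly if we assume each node reports a periodic heartbeat timestamp: a node whose heartbeat is older than a timeout is treated as failed, and its workloads are reassigned to healthy nodes. The node names, pod names, and timeout below are all illustrative:

```python
import time

HEARTBEAT_TIMEOUT = 10.0  # seconds of silence before a node is considered failed

# Illustrative cluster state: last heartbeat per node, and each node's pods.
heartbeats = {"node-a": time.time(), "node-b": time.time() - 30.0}
placements = {"node-a": ["web-1"], "node-b": ["web-2", "worker-1"]}

def fail_over(now: float) -> None:
    failed = [n for n, t in heartbeats.items() if now - t > HEARTBEAT_TIMEOUT]
    healthy = [n for n in heartbeats if n not in failed]
    if not healthy:
        return  # no node left to fail over to
    for node in failed:
        # Move the failed node's workloads onto healthy nodes, round-robin.
        for i, pod in enumerate(placements.pop(node, [])):
            placements[healthy[i % len(healthy)]].append(pod)

fail_over(time.time())
print(placements)  # {'node-a': ['web-1', 'web-2', 'worker-1']}
```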
With CaaS, there is an opportunity to create an architecture that is increasingly resilient to accidental disruption. Pairing CaaS with role-based access control (RBAC) gives the organization better control and transparency, as well as more rapid deployment of the technologies that ultimately serve the end customer. The Sumo Logic platform itself supports role-based access control, making it easy for system administrators to selectively assign access to application data and resources based on the specific needs of target users, whether in security, operations or business management.
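At its core, role-based access control reduces to mapping roles to permitted actions and checking every request against that mapping. The sketch below is a generic illustration; the roles and permissions are invented and do not reflect Sumo Logic's actual role definitions:

```python
# Hypothetical roles and permissions, for illustration only.
ROLE_PERMISSIONS = {
    "security":   {"read:logs", "read:alerts", "manage:alerts"},
    "operations": {"read:logs", "read:metrics", "deploy:containers"},
    "business":   {"read:dashboards"},
}

def is_allowed(role: str, action: str) -> bool:
    """Grant access only if the user's role explicitly permits the action."""
    return action in ROLE_PERMISSIONS.get(role, set())

assert is_allowed("operations", "deploy:containers")
assert not is_allowed("business", "read:logs")
```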