December 6, 2019
The last fifteen years have seen huge increases in developer productivity for several reasons, including the arrival of open source into the mainstream and the ability to better emulate target environments. In addition, the process of resetting a development environment back to the last known stable version has been vastly improved by Vagrant and then Docker.
The arrival of open source into the mainstream gave the average user access to the same tools and libraries that the top companies in the industry use, as well as the ability to contribute improvements. Once companies realized that it was better to build a community around the development of their tools, it allowed developers much more control over their own destinies.
Emulation began when VMware introduced virtualization to the x86 market in the early 2000s; by 2008, it was extremely popular in data centers and had already evolved significantly. Although it worked on the desktop, it was not widely adopted there because of the complicated and time-consuming maintenance that all of the different images required.
When Vagrant arrived on the scene, it wrapped several different virtualization tools in order to provide an easy and consistent cross-platform experience for developers. Docker then combined the Vagrant model with some Linux-specific technologies and took our ability to recreate environments to the next level.
From vagrantup.com:
“Vagrant is a tool for building and managing virtual machine environments in a single workflow. With an easy-to-use workflow and focus on automation, Vagrant lowers development environment setup time, increases production parity, and makes the ‘works on my machine’ excuse a relic of the past.”
To put it simply, Vagrant gives developers and operators the tools to replicate the exact environment that underpins the application that they want to configure on a virtual machine. That virtual machine then becomes the basis for development and enables developers to begin at a set starting point.
As Vagrant puts it, they stand on the shoulders of giants. By supporting all of the big virtualization platforms in the background, they can run on any platform that the developer needs. Most people use VirtualBox, though Hyper-V, VMware, and even cloud platforms are available as backends.
Docker is a corporation that builds and distributes tools that create and run application containers, and it was responsible for starting the current wave of container madness that is sweeping the industry. Docker found a way to leverage several components of the Linux kernel (like cgroups and namespaces) through a simple and effective mechanism to package and run applications. Several other organizations and projects (such as podman and Kubernetes) have jumped into the container ecosystem, but Docker is still the apex of the container world. The term “Docker” is used interchangeably with “container,” and everyone in the ecosystem supports Docker containers.
What Docker did was to build a daemon that launches binary container images as a single isolated process on a Linux (and now Windows) operating system. This binary image only contains the specific dependencies that the application it contains needs to run. Rather than a full operating system, it only contains a thin abstraction layer that maps to the operating system it will interact with when it runs; then, it layers on the dependencies (libraries and tools) that the application will need until it has a complete image.
So, if WordPress were a container image, it would contain the following layers: a minimal base layer that maps onto the host kernel, the language runtime and library dependencies the application needs, and the application code itself.
There is nothing else to it, and the application will run identically on any Linux host that supports containers.
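As a sketch, a Dockerfile for a simple PHP application (the image names and paths here are illustrative, not the official WordPress image) builds up exactly those layers, one instruction at a time:

```dockerfile
# Base layer: a minimal OS userland plus the PHP runtime and web server,
# which maps onto whatever Linux kernel the host provides.
FROM php:7.4-apache

# Dependency layer: only the libraries/extensions this application needs.
RUN docker-php-ext-install mysqli

# Application layer: the application code itself.
COPY ./app /var/www/html

EXPOSE 80
```

Each instruction produces one image layer, which is why updating a single library only requires rebuilding and republishing the layers above it.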
Importantly, containers do not persist any data unless they explicitly ask for a persistent volume; therefore, any data that is created within a container (like log files) is not retained between instantiations. Every container starts with a fresh copy of the container image.
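For example (assuming a host directory named ./logs and the standard nginx image from Docker Hub), a bind-mounted volume is what keeps log files on the host across container restarts:

```shell
# Without a volume, anything written to /var/log/nginx vanishes
# when the container is removed. A bind mount persists it on the host:
$ docker run -d --name web -v "$(pwd)/logs:/var/log/nginx" nginx

# The log files now survive the container being destroyed and recreated:
$ docker rm -f web
$ docker run -d --name web -v "$(pwd)/logs:/var/log/nginx" nginx
```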
Since Vagrant has a completely separate instance of the operating system running each time, it has the highest level of isolation available without using separate hardware.
Docker containers run as isolated processes in the host operating system. There are security controls to limit access, but a container can still interact with other resources or processes on the host operating system if it is run as a privileged process, meaning that isolation is effectively at the user level.
A Vagrant VM is a complete, separate operating system, with all of the benefits and disadvantages that go along with that. It can be hardened and fine-tuned to be exactly what the application needs and no more, but you still have to do things like patching in every instance on startup, which slows down developers and creates a constant need to republish the image.
Docker relies on the hardening of the host operating system, and it only includes the libraries that are essential to the application as part of its container image. This makes for much less patching as well as smaller images to republish when one of the libraries is updated. The disadvantage of Docker is that the container will only be as secure as the host allows. Fortunately, there are now container-focused distributions of Linux (such as CoreOS) which limit the number of system services that are installed and running to the bare minimum in order to reduce the attack surface and increase the baseline security.
Docker containers are faster to start and stop due to their smaller image footprint and the fact that Docker uses the existing host operating system (which has already initialized all the core processes).
Vagrant is slower because it has to load an entire virtual machine image and initialize all of the core processes plus the application tier.
The difference may not be drastic, depending on the size of the Vagrant image, but Docker will be faster overall.
Docker will consume fewer resources than Vagrant, since it only needs to load the libraries required by the application. This means that you can have more applications running in the same amount of compute capacity.
Since Vagrant has to load an entire operating system into memory, it will always consume more resources; however, this is usually fine with developers since a full operating system is traditionally easier to work with.
Vagrant can run essentially any guest operating system on any host operating system, giving developers a consistent experience regardless of the platform they use or the platform that runs in production.
When a Linux container is run on Windows, or when any container is launched on Mac OS, Docker runs virtual machines in the background. To get all of the resource consumption and speed benefits, the container's operating system family (for example, Windows vs. Linux) must match the host's; the exact flavor within that family (for example, CentOS 6 vs. Ubuntu 18) doesn't matter as much.
Vagrant requires that the target machine for installation has some kind of virtualization engine deployed. VirtualBox is the most common virtualization engine that’s used because it is cross-platform and open-source, but there are many others available, such as KVM and Hyper-V.
Once the prerequisite virtualization engine is installed, it's just a matter of heading to the download site, picking the appropriate distribution for your operating system, and running the installer. Alternative methods using common package managers on various operating systems are also available, including brew (“$ brew install vagrant”) on Mac OS X and apt (“$ sudo apt install vagrant”) on Ubuntu.
Using Vagrant with an image that is published in their public repository is as simple as executing two commands:
$ vagrant init hashicorp/bionic64
$ vagrant up
When you are finished working with the instance, you can suspend it (“vagrant suspend”), shut it down (“vagrant halt”), or remove it entirely (“vagrant destroy”).
When you are ready to resume (or start over if you destroyed), just run “vagrant up” again.
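The whole lifecycle can be sketched as:

```shell
$ vagrant init hashicorp/bionic64   # write a Vagrantfile for the box
$ vagrant up                        # download (if needed) and boot the VM
$ vagrant ssh                       # log into the running VM
$ vagrant suspend                   # save state and stop; fast to resume
$ vagrant halt                      # graceful shutdown
$ vagrant destroy                   # delete the VM entirely
$ vagrant up                        # start fresh from the box image
```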
The process of choosing a VM and any required parameters is defined in a Vagrantfile, which tells Vagrant how it should handle the VM images in question.
Docker, Inc. has two types of distributions that contain more or less the same core components, but they are targeted at different audiences.
The desktop edition of Docker runs on x86_64 machines running Windows 10 and Mac OS X. It includes more visual tools to start, stop, and create containers and leverage Kubernetes.
On Mac OS X, the desktop edition silently runs a Linux virtual machine in the background (using its bundled hypervisor, so no separate VirtualBox installation is required) in order to host the container images, since they must run on the OS for which they were built (which is almost always Linux).
For the same reason, the Hyper-V feature must be enabled before you can install the desktop edition on Windows 10. Docker can keep this VM running in the background in order to make container restarts as fast as they are when Docker is running on a full Linux host, but it isn't “pure.” A graphic-driven installer will prompt you to accept the license and give you the option to start on boot, then it will create a menu on the taskbar for controlling and managing the Docker instances and configuration.
The server edition of Docker runs on x86_64, ARM64, or Power (IBM) for various Linux distributions. The server edition for Linux only needs a version of the Linux kernel that has been released in the last few years. Installing the Docker configuration tools, container management tools, and runtime daemon on Linux is usually as simple as running “yum install docker” or “apt install docker” from the distribution’s main package repositories or, for a slightly more up-to-date version with specific instructions for each major distribution (including CentOS/RHEL and Ubuntu), directly from Docker.
Using Docker on the server side is just as easy as using Vagrant. You can run an existing container from a public repository like Docker Hub with a single command:
$ docker run hello-world
To stop a running container, the command is “docker stop” followed by the container's name or ID.
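A sketch of a typical container lifecycle, using the standard nginx image from Docker Hub as an example (the container name “web” is illustrative):

```shell
$ docker run -d --name web nginx   # start a container in the background
$ docker ps                        # list running containers
$ docker logs web                  # view the container's output
$ docker stop web                  # stop it by name (or by ID)
$ docker rm web                    # remove the stopped container
```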
Building a container starts with a Dockerfile (much like a Vagrantfile), which specifies the base image to use as well as the additional configurations and libraries that need to be loaded in order to support the application being packaged. The image itself is built with the “docker build” command; a related tool called docker-compose can then define and run multi-container applications assembled from such images.
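For example, with a Dockerfile in the current directory, building and running the resulting image looks like this (the image name “myapp” is illustrative):

```shell
$ docker build -t myapp .   # build an image from ./Dockerfile, tagged "myapp"
$ docker run myapp          # run a container from the new image
```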
It is a good idea to have everything (including temporary development environments) tied into the core logging solution so that you truly have a single source of truth and real traceability across every application’s lifecycle from the first test environment through to production.
Integrating with Vagrant is done by simply following the documentation to enable the standard collector for the guest operating system being used (such as the Linux collector), and then adding app-specific components like the NGINX option as needed.
This configuration can be done ahead of time and stored in the virtual machine image, which means that every instance of that image will have an access ID and access key for your Sumo Logic account baked in. This, however, may not please the overlords in the security department who want to do things like rotate keys on a routine basis; the other option is to add a provisioning step to the Vagrantfile that executes the collector install when you issue the “vagrant up” command.
A simple example in a Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.box = "hashicorp/bionic64"
  config.vm.provision :shell, inline: "sudo /tmp/SumoCollector.sh -q -Vsumo.accessid=<accessid> -Vsumo.accesskey=<accesskey> -Vsources=<filepath>"
end
You can integrate Sumo Logic with Docker from both outside and inside the containers. The preferred way to integrate outside the container is by using the Sumo Logic Docker App, which can gather all of the metrics around the Docker instance and its running containers. If you need to capture additional application logs that are not available through the Docker App, one viable option is to mount those application logs to a persistent volume which is available outside the container, since you will be able to retrieve the files locally using a collector or another method like HTTP streaming.