Google Bets on Container Tool Docker for Building Open Cloud Standards

Since the official launch of its public cloud, Compute Engine, Google has left the door open to alternatives to traditional virtualization, such as containers, and now offers Linux container technology on its cloud platform. After Amazon Web Services announced Docker support in Elastic Beanstalk, the Mountain View company has decided to bring the technology to its own cloud platform as well, albeit more indirectly, through support for the Linux distribution CoreOS.

This lightweight Linux OS, developed for the specific needs of server clusters, is now available as a default image on Google Compute Engine, where it sits alongside Red Hat, SUSE and Debian in particular. CoreOS is characterized by leaning on Docker's Linux container technology and by using a distributed shared-configuration service called etcd. The OS uses only 161 MB of RAM, roughly 50% less than a conventional Linux OS.
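CoreOS machines are typically configured at boot with a cloud-config file. A minimal sketch of one that starts etcd on each cluster member might look like the following (the discovery token is a placeholder, not a real cluster token):

```yaml
#cloud-config
coreos:
  etcd:
    # <token> is a placeholder; a real cluster needs its own discovery URL
    discovery: https://discovery.etcd.io/<token>
    addr: $private_ipv4:4001
    peer-addr: $private_ipv4:7001
  units:
    - name: etcd.service
      command: start
```

Each machine booted with this file joins the same etcd cluster, giving the whole fleet a shared, replicated configuration store.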

The open source tool Docker allows users to package an application in a virtual container that can be deployed across multiple Linux servers, and while it has similarities with hypervisor virtualization, it is far lighter. Developers can use these extensions to easily access the large and growing library of Docker images, and the Docker community can easily deploy containers into a fully managed environment with access to services such as Cloud Datastore.
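As a rough illustration of this packaging step, a short Dockerfile is all it takes to turn an application into a portable container image (the base image and file names here are illustrative, not from the article):

```dockerfile
# Illustrative Dockerfile: package a small Python web app as a container image
FROM python:3
WORKDIR /app
COPY app.py /app/app.py
# the container runs this single process, isolated from the host
CMD ["python", "app.py"]
```

Built once with `docker build`, the resulting image runs unchanged on any Linux host that has Docker installed.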

Currently Docker is used by system administrators and developers to package apps for distributed applications. The open source technology is portable, distributed and platform-independent. Google itself runs about two billion containers to manage all of its Internet services, ranging from Gmail to Search. For Google, containers offer a very high level of resource isolation, which ensures that an application running on a server is not slowed down by other applications running on the same server. The same technology is reportedly used in Google's own backend, which helps make its servers faster and more efficient.

Because of this resource isolation, containers provide very high levels of predictability and quality of service: low-priority jobs will not interfere with high-priority work, and high-priority jobs always have sufficient resources to run.

Google has also released some of the tools it uses to manage Docker. One is Kubernetes, an open source container orchestrator that runs across multiple servers and allows applications within containers to communicate with each other and with the outside world. Kubernetes is a powerful, extensible tool for deploying containers onto a network of machines, and it provides a multitude of features such as health management and replication.
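The replication feature mentioned above is driven by declarative manifests: the operator states how many copies of a container should run, and Kubernetes keeps that number alive. A sketch of such a manifest (the names and image are illustrative placeholders) might look like:

```yaml
# Illustrative Kubernetes manifest: keep three replicas of a web container running
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3            # Kubernetes restarts containers to maintain this count
  selector:
    app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx
          ports:
            - containerPort: 80
```

If a machine or container fails, the health-management loop notices the shortfall and schedules a replacement elsewhere in the cluster.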

The second project, cAdvisor (Container Advisor), provides real-time and historical statistics on resource usage for deployed containers, giving container users an understanding of the resource usage and performance characteristics of their running containers. It is a running daemon that collects, aggregates, processes, and exports information about running containers. Specifically, for each container it keeps resource isolation parameters, historical resource usage, and histograms of complete historical resource usage. This data is exported per container and machine-wide.
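cAdvisor itself is usually deployed as a container. The deployment snippet below follows the pattern of the project's published example; the exact flags may vary between cAdvisor versions, and the bind mounts give the daemon read access to the host's cgroup and Docker state:

```shell
# Run the cAdvisor daemon as a container (flags may differ by version)
sudo docker run \
  --volume=/:/rootfs:ro \
  --volume=/var/run:/var/run:rw \
  --volume=/sys:/sys:ro \
  --volume=/var/lib/docker/:/var/lib/docker:ro \
  --publish=8080:8080 \
  --detach=true \
  --name=cadvisor \
  google/cadvisor:latest
```

Once running, its web UI and REST API are reachable on port 8080 of the host.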

Google runs industry-leading distributed systems, and its expertise shows in the speed and stability of Google Compute Engine. GCE platform tools, including load balancers and replica pools, are supported by the CoreOS GCE images and enable administrators to scale clusters efficiently and smoothly.

Google's cloud is based largely on containers, with more than two billion containers launched by Google each week. The container technique allows multiple separately installed applications to run on the same OS without the need to emulate a full virtual machine.
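To put the two-billion-a-week figure in perspective, a quick back-of-the-envelope calculation shows the implied launch rate:

```python
# Back-of-the-envelope: launches per second implied by
# "two billion containers per week"
launches_per_week = 2_000_000_000
seconds_per_week = 7 * 24 * 3600   # 604800 seconds in a week

rate = launches_per_week / seconds_per_week
print(round(rate))                 # roughly 3300 container launches per second
```

That is on the order of 3,300 container launches every second, around the clock.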
