Kubernetes has taken off in a big way. According to this year's CNCF survey, nearly 58% of companies are running Kubernetes in production; among enterprises, that figure is 40% and growing. This rapid enterprise adoption is a testament to how successful, and how exciting, Kubernetes has become.
No wonder last week's KubeCon North America 2018 was a huge success, drawing nearly 8,000 attendees from leading open source and cloud native communities. Here are my key takeaways from the event for everyone interested in keeping up to date with Kubernetes.
The community behind Kubernetes
Since Kubernetes was released to the world in June 2014, its community has grown to become a truly open, engaged and collaborative one. What's amazing is that the community works successfully across geographic, language and corporate boundaries. More than 1,700 people have contributed to Kubernetes, and more than 500 Kubernetes meetups take place worldwide every year.
Since Kubernetes is all about collaboration, let's look at the top contributors who are making Kubernetes one of the most successful open source collaboration efforts since Linux. The charts below show the leading contributors by company, including Google, Huawei and Red Hat, and the leading contributors by country, with the United States in the lead followed by China and Germany.
Figure 1: Top-10 Kubernetes Contributors by Company and by Country
Kubernetes 1.13 released with new features
Kubernetes 1.13 was announced earlier this month, prior to KubeCon Seattle. This release continues to focus on the stability and extensibility of Kubernetes, with three major features graduating to general availability in the areas of Storage and Cluster Lifecycle. Notable features include simplified cluster management with kubeadm, the Container Storage Interface (CSI), and CoreDNS as the default DNS.
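As a minimal sketch of the simplified cluster management, kubeadm's configuration API graduated to v1beta1 in 1.13, so a cluster can be described in a single file (the version and CIDR values below are illustrative):

```yaml
# kubeadm.yaml -- a hedged sketch; adjust the version and networking to your cluster
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.0
networking:
  podSubnet: 10.244.0.0/16   # example pod CIDR, required by some CNI plugins
```

Bootstrapping the control plane is then a single command: `kubeadm init --config kubeadm.yaml`.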
GigaSpaces InsightEdge in a Kubernetes environment
The most important features in v1.13 from GigaSpaces' point of view include:
- Container Storage Interface (CSI), now GA – the Kubernetes volume layer becomes truly extensible, which is needed for MemoryXtend over SSD
- Support for third-party device monitoring plugins – GigaSpaces-specific metrics, such as MemoryXtend, index usage or JVM profiler statistics, can be natively exposed.
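To illustrate why CSI reaching GA matters for a storage tier like MemoryXtend over SSD: a CSI driver plugs into the standard StorageClass/PersistentVolumeClaim workflow. This is a hedged sketch, and the driver name below is hypothetical:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: ssd.csi.example.com   # hypothetical CSI driver name
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: memoryxtend-data
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: fast-ssd
  resources:
    requests:
      storage: 100Gi   # illustrative size
```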
Our top choices for open source Kubernetes related projects
We'd like to cover our top choices for open source Kubernetes-related projects at different phases of maturity, from production-ready to alpha, in the areas of ML, serverless, security and orchestration:
- Kubeflow
- gVisor
- Knative
- KubeEdge
- Descheduler for Kubernetes
Kubeflow
The Kubeflow project is dedicated to making deployments of machine learning (ML) workflows on Kubernetes simple, portable and scalable. It is the machine learning toolkit for Kubernetes, offering JupyterHub for creating interactive Jupyter notebooks, plus TensorFlow and a number of related tools, such as the Training Controller for native distributed training. Kubeflow also supports Argo for managing ML workflows. Future plans for Kubeflow include support for more ML frameworks, such as Spark ML, XGBoost and scikit-learn.
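For example, native distributed training is expressed as a TFJob custom resource. This is a hedged sketch: the kubeflow.org API version has changed across Kubeflow releases, and the image name is a placeholder:

```yaml
apiVersion: kubeflow.org/v1beta1   # version varies by Kubeflow release
kind: TFJob
metadata:
  name: mnist-train
spec:
  tfReplicaSpecs:
    Worker:
      replicas: 2                  # two distributed training workers
      template:
        spec:
          containers:
          - name: tensorflow
            image: registry.example.com/mnist:latest   # placeholder image
```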
gVisor
gVisor is a user-space kernel that provides secure isolation for containers while being more lightweight than a virtual machine (VM). Although containers have revolutionized how we develop, package and deploy applications, running untrusted or potentially malicious code in an ordinary container is not recommended. gVisor therefore limits the host kernel surface accessible to the application, while still giving the application access to all the features it expects.
Figure 2: gVisor Architecture
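On Kubernetes, gVisor is typically selected per pod via its runsc runtime. A hedged sketch using the RuntimeClass API (beta in Kubernetes releases after 1.13; earlier setups used annotations instead):

```yaml
apiVersion: node.k8s.io/v1beta1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc               # nodes must have the runsc runtime installed
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed-app
spec:
  runtimeClassName: gvisor   # run this pod inside the gVisor sandbox
  containers:
  - name: app
    image: nginx             # stands in for any untrusted workload
```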
Knative
Knative components offer developers Kubernetes-native APIs for deploying serverless-style functions, applications and containers to an auto-scaling runtime. Knative and Kubernetes together form a general-purpose platform with the unique ability to run serverless, stateful, batch and machine learning (ML) workloads alongside one another. In addition to Google Kubernetes Engine (GKE), multiple partners are now delivering commercial offerings based on Knative: this week at KubeCon, Red Hat, IBM and SAP announced their own.
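A Knative Service illustrates the developer-facing API: one resource that expands into a scale-to-zero, auto-scaling deployment. This is a sketch against the Knative Serving API (field names and the API version vary across Knative releases; the sample image is from Knative's own samples):

```yaml
apiVersion: serving.knative.dev/v1   # API version varies across Knative releases
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "world"
```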
KubeEdge
KubeEdge is an open source system that runs an optimized version of the kubelet at the edge, extending native containerized application orchestration and device management to edge hosts. It is built on Kubernetes and provides core infrastructure support for networking, application deployment and metadata synchronization between cloud and edge. KubeEdge can keep working offline when the network connection is lost. It also responds to edge events and starts corresponding functions at the edge, supporting MQTT and allowing developers to deploy custom logic.
Figure 3: KubeEdge Architecture
Descheduler for Kubernetes
Scheduling in Kubernetes is the process of binding pending pods to nodes, and it is performed by a component called kube-scheduler. The scheduler's decisions about whether and where a pod can be scheduled are guided by its configurable policy, which comprises a set of rules called predicates and priorities.
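Those predicates and priorities could historically be listed in a kube-scheduler policy file. A hedged sketch of that legacy policy format, passed via --policy-config-file (later Kubernetes versions replaced it with scheduler profiles):

```yaml
kind: Policy
apiVersion: v1
predicates:                      # hard rules: a node must pass all of these
- name: PodFitsResources
- name: PodFitsHostPorts
priorities:                      # soft rules: score the nodes that passed
- name: LeastRequestedPriority
  weight: 1
- name: BalancedResourceAllocation
  weight: 1
```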
Because Kubernetes clusters are very dynamic and their state changes over time, the Descheduler, based on its policy, finds running pods that should be moved to other nodes for various reasons: (1) some nodes are over- or under-utilized in terms of CPU or RAM; (2) the cluster has scaled up and new nodes were added; (3) some nodes failed and recovered, leaving the cluster unbalanced; and more.
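The Descheduler's behavior is driven by a policy file of its own. A hedged sketch of its v1alpha1 policy, enabling the strategy that evicts pods from over-utilized nodes (the thresholds are illustrative percentages):

```yaml
apiVersion: descheduler/v1alpha1
kind: DeschedulerPolicy
strategies:
  LowNodeUtilization:
    enabled: true
    params:
      nodeResourceUtilizationThresholds:
        thresholds:          # below these, a node counts as under-utilized
          cpu: 20
          memory: 20
          pods: 20
        targetThresholds:    # above these, a node counts as over-utilized
          cpu: 50
          memory: 50
          pods: 50
```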
GigaSpaces is very excited to be part of the Kubernetes revolution. Version 14 of our in-memory computing platforms, InsightEdge and XAP, simplifies our customers' development and deployment of the applications required for time-sensitive and mission-critical services in any environment: cloud, on-premises and hybrid.
As we at GigaSpaces continue to innovate and simplify the development and deployment of smarter, faster applications, we look forward to Kubernetes' continued growth and innovation in 2019.
To learn more about GigaSpaces InsightEdge and how Real-Time Analytics meets Kubernetes click here to watch the webinar.