This paper gives a short overview of deploying a functional testbed for orchestrating a service mesh with Istio on a Kubernetes cluster running Docker containers.
The objective of this paper is to demonstrate the deployment of a Kubernetes-based service mesh that is orchestrated with the Istio software. To begin, let us look at what a service mesh is. The Istio website defines a service mesh as follows: “The term service mesh is used to describe the network of microservices that make up […] applications and the interactions between them. As a service mesh grows in size and complexity, it can become harder to understand and manage. Its requirements can include discovery, load balancing, failure recovery, metrics, and monitoring.” [1]. Istio claims to satisfy these requirements by providing control over the complete service-mesh architecture. The goal is to test the deployment of Istio and to evaluate how far it satisfies this proclaimed objective.
The service-mesh application was evaluated to check how it performs in a real-world use case. Some video files were uploaded to the application to serve as a reference point. They were then requested by clients together with the associated metadata from the database. At first, a few single requests were made by hand using a web browser. While this showed that the application works in the desired way, it generated too little traffic to test the limits of the service-mesh deployment with Istio. It could only show that the flows in the application behaved according to the traffic routing rules defined via Istio (i.e., traffic routed to a different version of our containers resulted in a different output to the user). To test the service mesh further, an automated way of testing was deployed.
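As an illustration of what such automated testing and routing verification could look like, the following sketch repeatedly requests a hypothetical application endpoint and tallies which container version answered each request. The endpoint URL, the `x-app-version` response header, and the request count are assumptions for the sake of the example, not details taken from the tested deployment.

```python
"""Minimal load-generation sketch for observing Istio traffic splitting.

Assumptions (not from the tested deployment): the application is reachable
at the hypothetical ingress address below, and each container version
reports itself in an 'x-app-version' response header.
"""
from collections import Counter
from urllib.request import urlopen

BASE_URL = "http://192.168.49.2:30080/videos"  # hypothetical ingress address
NUM_REQUESTS = 500                             # enough traffic to see the split


def main():
    versions = Counter()
    for _ in range(NUM_REQUESTS):
        with urlopen(BASE_URL, timeout=5) as response:
            # The serving version is assumed to be exposed as a response header.
            versions[response.headers.get("x-app-version", "unknown")] += 1

    total = sum(versions.values())
    for version, count in versions.most_common():
        print(f"{version}: {count} requests ({100 * count / total:.1f} %)")


if __name__ == "__main__":
    main()
```

If the Istio routing rules split traffic, say, 80/20 between two container versions, the printed percentages should roughly reflect that split over a sufficiently large number of requests.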
Philipp Kalytta studies Technische Informatik (computer engineering) at TH Köln and also pursues IT topics as a hobby.