Service Map

The Service Map is the unit in which applications are configured and their resources are managed. Kubernetes manages applications at the namespace level: a namespace logically divides a cluster for deploying and managing containers, acting as a kind of virtual cluster. The Service Map, provided by Cocktail, is an application management space built on namespaces, with additional management features.

The main resource in the Service Map is the workload, which deploys and manages containers. Other resources include persistent volumes, configuration information, and service exposure.

Pod (Containers)

Pods are the resources that deploy and run containers; each pod is composed of one or more containers. A pod consists of a main container that handles the application logic and optional sidecar containers for auxiliary tasks. Most pods need only the main container, but sidecar containers can be added for functions such as log collection, backup, or proxying.

Containers within a pod share the same network (localhost) and volumes, which makes it easy to extend a pod's functionality by adding containers.
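
For illustration, the Kubernetes manifest below sketches this structure: a main web container and a sidecar share the pod network and an emptyDir volume. All names and images are placeholders, and the sidecar is a stand-in for a real log-collection agent; in Cocktail the equivalent configuration is generated from the web UI console.

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-with-sidecar            # placeholder name
    spec:
      volumes:
        - name: log-volume              # volume shared by both containers
          emptyDir: {}
      containers:
        - name: web                     # main container: application logic
          image: nginx:1.25
          ports:
            - containerPort: 80
          volumeMounts:
            - name: log-volume
              mountPath: /var/log/nginx
        - name: log-collector           # sidecar: auxiliary task (stand-in for a log agent)
          image: busybox:1.36
          command: ["sh", "-c", "while true; do sleep 3600; done"]
          volumeMounts:
            - name: log-volume
              mountPath: /logs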

Pods can be created independently, but typically, they are managed through workloads responsible for deployment, updates, and lifecycle management.

Workloads

Workloads are the Service Map resources responsible for deploying pods and managing their lifecycle: how pods are deployed and run, how their status is monitored, and how they are restarted when problems occur.

In Cocktail, users interact with workload settings through a web UI console, minimizing errors caused by incorrect configurations.

Even though settings are entered through the web UI console, users can specify almost all of the detailed workload configuration items that Kubernetes defines.

Workloads are categorized into several types, each with differences in how pods are deployed, executed, and managed.

Workload Groups

Within a namespace, various types of workloads can be executed, and in some cases, there can be so many workloads that it becomes difficult to understand them all at a glance.

Organizing workloads into workload groups displays them by group, giving a clear overview of their state at a glance.

Deployment Workloads

Deployment workloads replicate the same pod multiple times so that the service remains stable even if some pods fail. They are mainly used for stateless applications such as web servers, where no data needs to be managed: because the replicated pods run the same container and are interchangeable, they are not suited to managing data.

Deployment workloads also support rolling updates, replacing pods sequentially to perform updates without affecting services.
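
As an illustration, the hypothetical manifest below defines a stateless deployment with three identical pod replicas and a rolling update strategy; the names, image, and values are examples rather than Cocktail defaults.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-deployment              # placeholder name
    spec:
      replicas: 3                       # three identical pod replicas
      selector:
        matchLabels:
          app: web
      strategy:
        type: RollingUpdate             # pods are replaced gradually during updates
        rollingUpdate:
          maxUnavailable: 1
          maxSurge: 1
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: web
              image: nginx:1.25
              ports:
                - containerPort: 80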

They also support autoscaling, automatically increasing the number of pod replicas when CPU or memory usage exceeds the configured target.
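
In plain Kubernetes terms this corresponds to a HorizontalPodAutoscaler. The minimal sketch below scales the hypothetical deployment above between 3 and 10 replicas when average CPU utilization exceeds 70%; the threshold and limits are example values.

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa                     # placeholder name
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: web-deployment            # the deployment to scale
      minReplicas: 3
      maxReplicas: 10
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70    # scale out above 70% average CPU usage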

StatefulSet Workloads

StatefulSet workloads replicate pods in which each replica can take on a different role. They are suitable for applications that store and manage data, such as databases, key-value stores, and big data clusters. Each pod is assigned a unique, stable name, so the pods can communicate with one another to coordinate their work, and each pod uses its own persistent volume to store and manage data.
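
A minimal Kubernetes sketch of such a workload follows, assuming a hypothetical key-value store; each replica receives a stable name (kv-store-0, kv-store-1, ...) and its own persistent volume from the volumeClaimTemplates. The image, size, and names are placeholders.

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: kv-store                    # placeholder name
    spec:
      serviceName: kv-store-headless    # headless service that gives pods stable DNS names
      replicas: 3
      selector:
        matchLabels:
          app: kv-store
      template:
        metadata:
          labels:
            app: kv-store
        spec:
          containers:
            - name: kv-store
              image: redis:7            # illustrative image
              volumeMounts:
                - name: data
                  mountPath: /data
      volumeClaimTemplates:             # one persistent volume per pod replica
        - metadata:
            name: data
          spec:
            accessModes: ["ReadWriteOnce"]
            resources:
              requests:
                storage: 1Gi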

DaemonSet Workloads

DaemonSet workloads are Service Map resources that deploy and manage daemon pods, which run continuously in the background on every node. Typical examples are log collection and monitoring agents that gather data on each node.
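
For illustration, the sketch below runs a hypothetical log-collection agent as a DaemonSet so that one pod is scheduled on every node; the image and paths are placeholders.

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: log-agent                   # placeholder name
    spec:
      selector:
        matchLabels:
          app: log-agent
      template:
        metadata:
          labels:
            app: log-agent
        spec:
          containers:
            - name: log-agent
              image: fluent/fluent-bit:2.2   # illustrative log-collection agent
              volumeMounts:
                - name: varlog
                  mountPath: /var/log
                  readOnly: true
          volumes:
            - name: varlog
              hostPath:
                path: /var/log          # node log directory read by the daemon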

Job Workloads

Job workloads deploy pods that run a task once to completion. They are mainly used for batch processing such as data transformation and machine learning jobs.
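
A minimal sketch of a one-time batch job follows; the image and command are placeholders for a real data-transformation task.

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: data-transform              # placeholder name
    spec:
      backoffLimit: 3                   # retry the task up to 3 times on failure
      template:
        spec:
          restartPolicy: Never          # job pods run to completion instead of restarting in place
          containers:
            - name: transform
              image: python:3.12
              command: ["python", "-c", "print('transforming data...')"]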

CronJob Workloads

Similar to job workloads, but jobs can be scheduled or run periodically. Scheduling uses the same syntax as Linux's crontab.
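
For example, the sketch below runs a job every day at 02:00 using crontab-style scheduling; the schedule, names, and command are illustrative.

    apiVersion: batch/v1
    kind: CronJob
    metadata:
      name: nightly-report              # placeholder name
    spec:
      schedule: "0 2 * * *"             # crontab syntax: every day at 02:00
      jobTemplate:
        spec:
          template:
            spec:
              restartPolicy: Never
              containers:
                - name: report
                  image: python:3.12
                  command: ["python", "-c", "print('generating report...')"]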

Service Exposure

To serve applications externally, pods must be exposed to external traffic. The Service Map exposes pods to external traffic through service exposure resources.

Service exposure resources select the pods to expose by label. Because replicated pods share the same labels, a single service exposure can cover all of them; in that case the service exposure load-balances traffic across the selected pods.

Service exposure resources are assigned a fixed IP address, a private address used within the cluster. Pod IP addresses change whenever pods restart, which causes problems if pods are accessed directly, so the fixed address of the service exposure is used to reach the selected pods instead.

Service exposure is categorized based on the exposure method.

Exposing Service with Cluster IP

Exposing services with a cluster IP allows access only within the cluster. It is used for communication between pods via fixed IP addresses within the service map.
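
In Kubernetes terms this is a Service of type ClusterIP. The sketch below selects pods labeled app: web (matching the earlier deployment example) and gives them a single fixed in-cluster address; the names and ports are placeholders.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-clusterip               # placeholder name
    spec:
      type: ClusterIP                   # reachable only from inside the cluster
      selector:
        app: web                        # pods selected by label
      ports:
        - port: 80                      # fixed service port
          targetPort: 80                # container port on the selected pods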

Exposing Service with Node Port

A node port exposes a service on the nodes' IP addresses. Pods can then be reached from outside the cluster at the exposed node port (node IP:node port). The node port is opened on every node in the cluster, so the pods can be reached through any node. Typically, all nodes in the cluster sit behind an L4 switch to consolidate the access address.

Node ports are allocated from the range 30000 to 32767, which is set during cluster installation and can be changed by the user. When a service is exposed, the port is either assigned automatically or specified explicitly.
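
A minimal NodePort sketch follows; the nodePort value is an example within the default 30000-32767 range, and it could be omitted to let Kubernetes pick one automatically.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-nodeport                # placeholder name
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80
          targetPort: 80
          nodePort: 30080               # exposed on every node as <node IP>:30080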

Exposing Service with Load Balancer

When the cluster runs in a cloud environment, a load balancer can be created automatically to expose services. The pods are exposed through node ports, and the load balancer that is created forwards traffic to the nodes and ports on which the pods run. External access uses the load balancer's address and port. Load balancer service exposure is available only on cloud platforms that support it, such as AWS, Azure, and GCP, and is configured by the cloud provider during cluster installation.
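
On a supported cloud, the equivalent Kubernetes resource is a Service of type LoadBalancer, sketched below with placeholder names; the cloud provider provisions the external load balancer and its address.

    apiVersion: v1
    kind: Service
    metadata:
      name: web-lb                      # placeholder name
    spec:
      type: LoadBalancer                # the cloud provider creates an external load balancer
      selector:
        app: web
      ports:
        - port: 80                      # external port on the load balancer
          targetPort: 80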

Ingress

Apart from service exposure, the Service Map also has Ingress resources for external pod access. Ingress exposes pods to the outside world via hostnames/paths (e.g., abc.com/login, cdf.org/). It functions similarly to an L7 switch.

To use Ingress, an Ingress controller must be installed in the cluster. The Ingress controller receives external traffic and forwards it to pods according to the routing information defined in the Ingress (hostname/path -> cluster IP service exposure). In other words, it is the Ingress controller that is exposed to external traffic, while the pods are exposed through cluster IP services that the Ingress rules route to.
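
The sketch below shows such a routing rule in Kubernetes form, assuming a hypothetical hostname and the cluster IP service from the earlier example; the ingressClassName depends on which Ingress controller is installed.

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress                 # placeholder name
    spec:
      ingressClassName: nginx           # assumes an NGINX Ingress controller is installed
      rules:
        - host: abc.example.com         # hypothetical hostname registered in DNS
          http:
            paths:
              - path: /login
                pathType: Prefix
                backend:
                  service:
                    name: web-clusterip # cluster IP service exposure in front of the pods
                    port:
                      number: 80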

In Kubernetes, the Ingress controller itself runs as pods, so when it is installed in the cluster it must be exposed through a node port or load balancer service.

Note that Ingress routes traffic by hostname/path, so the hostnames must be registered in an internal or external DNS server and resolve to the Ingress controller. Cocktail Cloud provides a default Ingress configuration, so no additional installation or setup is required.

When external traffic to pods is high, dedicated Ingress nodes are sometimes configured in the cluster. These nodes run only the Ingress controller and are replicated for high availability; more Ingress nodes can be added as traffic grows.

Persistent Volumes

When pods need to store and manage data, persistent volumes are required. Pods can be restarted at any time or rescheduled to a healthy node when a node fails, so a node's local disk cannot be used for data storage.

Persistent volumes are managed independently of the pod lifecycle, so data is preserved even when pods are restarted, rescheduled, or deleted. They use separate storage configured for the cluster.

Persistent volume resources in the Service Map are created by selecting storage configured for the cluster. The created persistent volumes are mounted into pods and used by their containers. Persistent volumes are categorized into shared volumes, which can be mounted by multiple pods, and single volumes, which can be mounted by only one pod.
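
In Kubernetes these correspond to PersistentVolumeClaims. The sketch below requests a shared volume that multiple pods can mount (ReadWriteMany); a single volume would use ReadWriteOnce instead. The storage class and size are placeholders that must match the storage configured for the cluster.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: shared-data                 # placeholder name
    spec:
      accessModes:
        - ReadWriteMany                 # shared volume: mountable by multiple pods
      storageClassName: nfs-storage     # placeholder; depends on the cluster's storage setup
      resources:
        requests:
          storage: 5Gi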

Persistent volumes require storage configured in the cluster. NFS storage is commonly used; any storage that supports the NFS protocol can be used.

Configuration Information (ConfigMap/Secret)

When a web server is deployed as a container, for example, it typically needs configuration files to run. When pods are run with separate configuration, these settings are managed as configuration resources. Configuration files could instead be baked into the pod's container image, but then the image would have to be rebuilt whenever the configuration changes.

Configuration information is created and managed within the service map and can be mounted into pod containers as volumes or files. Depending on how the container is implemented, it can also be passed as environment variables. One advantage is that configuration changes can be made and applied even while pods are running.

Configuration resources are categorized into ConfigMaps and Secrets according to how the data is handled. Both manage configuration information, but Secrets store their data in a protected form, making them suitable for sensitive information such as database connection details.
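
As a small sketch, the ConfigMap below holds a hypothetical setting and configuration file, which a pod consumes as an environment variable and as a mounted file; a Secret is declared the same way with kind: Secret for sensitive values. All names and values are placeholders.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: web-config                  # placeholder name
    data:
      APP_MODE: "production"            # value passed as an environment variable
      app.conf: |                       # value mounted as a configuration file
        listen_port = 8080
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: web-configured              # placeholder name
    spec:
      volumes:
        - name: config
          configMap:
            name: web-config
      containers:
        - name: web
          image: nginx:1.25
          env:
            - name: APP_MODE
              valueFrom:
                configMapKeyRef:
                  name: web-config
                  key: APP_MODE         # single key injected as an environment variable
          volumeMounts:
            - name: config
              mountPath: /etc/app       # keys appear as files under /etc/app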

Image Update

The pipeline resource in the service map updates container images for workloads. Upon workload deployment completion, a pipeline is automatically created to update container images in pods.

Container images deployed by workloads in the service map fall into two types: images built by Cocktail Cloud and images registered in external registries.

For images built by Cocktail Cloud, the entire process from image build to workload deployment runs automatically whenever the application code changes (refer to the 'Builds/Images' section for details on Cocktail Cloud builds).

External registry images are updated by replacing the image; in this case the pipeline only automates the deployment.

Catalog

The catalog in the service map bundles one or more related workloads so they can be deployed and updated together. It is primarily used for distributing and updating open-source software.

Cocktail Cloud provides many commonly used open-source packages in its catalog. Users can search for desired packages in the catalog and automatically deploy them to the service map. (Refer to the 'Catalog' section for Cocktail Cloud's catalog).

Packages deployed to the service map support status monitoring and automatic updates.


ⓒ2023. Acornsoft Corp. All rights reserved.