Manual - EN

Platform

The platform is the fundamental unit for using Cocktail Cloud. All functionalities are accessible through the platform. Users perform application development, operation, and management tasks after logging into the platform, depending on their permissions.

The platform consists of one or more workspaces. Workspaces are independent workspaces provided for teams or organizations. Companies can configure workspaces for teams within the platform and allocate necessary resources to provide application development and operation environments. The platform integrates and manages workspaces, applications, and infrastructure resources.

Integrated Platform Management Features

The platform registers and manages clusters (infrastructure resources) used by applications. It allocates and manages cluster resources on a per-workspace basis, either for the entire cluster or by namespace. Applications managed by teams are serviced through the allocated resources.

The platform comprehensively monitors and manages the overall status of applications developed and operated in all workspaces. It manages applications based on their configuration, status, resource usage, and performs tasks such as resource scaling and fault response as needed.

Clusters operate based on Kubernetes. The platform provides necessary functionalities for cluster operation, such as managing Kubernetes state and versioning.

In addition to resource and status management, the platform also provides integrated management functionalities for user management, security, etc.

Integrated Monitoring

The platform centrally monitors the status and resources of multiple clusters and applications. It collects metric data for each cluster infrastructure and application, providing real-time monitoring and analysis capabilities.

In addition to collecting resource and status data, it also collects events and provides notifications based on predefined rules. It detects anomalies in advance, takes appropriate actions, and performs fault analysis and resolution when issues arise.

The platform provides integrated dashboards with various charts for monitoring and analysis purposes.

Platform Configuration

The platform has a unique identifier (ID). Users log in to the platform using this ID. Additionally, users can set the platform name and logo image to represent a unique identity.

The platform holds Cocktail Cloud product license information. This license, along with purchaser information, is managed within the platform, with a designated platform administrator acting as the representative.

Cloud accounts required for operating clusters in public clouds are also managed as platform information. These accounts are utilized for managing cloud infrastructure and authentication information.

Workspace

"Workspace" is an independent workspace provided for teams or organizations. Teams perform development, operation, and monitoring of one or more designated applications within a workspace. One or more members are registered in the workspace to collaborate.

Resources necessary for deploying and operating applications are allocated to workspaces. Resource allocation is performed by the platform administrator and targets clusters and image registries registered in the platform.

Cluster Resource Allocation

Cocktail Cloud allocates cluster resources to workspaces through service maps. A service map is an administrative unit that extends a Kubernetes namespace; more precisely, allocating a service map to a workspace means allocating a namespace to it.

Teams can be allocated service maps, or namespaces, which are isolated, independent spaces within a cluster, typically referred to as virtual clusters. Teams allocated with namespaces are responsible only for deploying and operating applications, while the cluster (infrastructure) is managed by a separate team. This method is suitable when teams are focused on application development and operation.

Image Registry Allocation and Sharing

Each workspace is independently allocated a registry for storing and managing application container images. Teams or applications manage their images and configure automated pipelines accordingly.

In some cases, teams or applications may share common images. In such cases, a shared image registry can be allocated to the workspace. In this scenario, one or more workspaces will use the same shared image registry.

Whether a registry can be shared depends on the permissions of the account used to access it: an account registered with user-level permissions is limited to its own workspace, while an account registered with admin privileges can also be used by additional workspaces.

Cluster

A cluster is the infrastructure where containers run. Containers are the deployment units and execution processes of applications. Clusters provide the computing resources (CPU, Memory, Storage, Network) necessary for container execution.

A cluster consists of nodes (physical or virtual machines) connected via a network. It is an architecture designed for distributed processing. When containers are deployed to a cluster, they are executed on appropriate nodes. This process, called scheduling, is managed by Kubernetes. Kubernetes is responsible for container scheduling and management within the cluster.

Clusters scale resources by adding nodes. If more resources are needed, nodes are added accordingly, and Kubernetes deploys and manages containers on the expanded nodes.

Container networking and storage for data storage are also components of a cluster.

Kubernetes

Kubernetes is a container orchestration engine that runs containers in clusters and manages their lifecycle. Originally developed by Google, it is now maintained as a CNCF (Cloud Native Computing Foundation) project.

Kubernetes is installed on the cluster and is responsible for managing and providing resources required by containers based on the cluster infrastructure (nodes, network, storage).

Node

A node is a compute machine that makes up part of a cluster. Nodes can be physical or virtual machines, each equipped with CPU, memory, and disk, and connected via a network. Nodes are managed by Kubernetes for scheduling.

Nodes are divided into master nodes and worker nodes. Master nodes host the control plane components of Kubernetes and manage the cluster by communicating with worker nodes.

Worker nodes are where application containers are deployed. The number of worker nodes increases based on the number and capacity of applications. The Kubernetes scheduler on the master node is responsible for deploying containers to worker nodes.

Container Network

Containers running on one or more nodes need to communicate with each other, which is managed by the container network.

Container networking is installed as a Kubernetes component. Kubernetes itself does not provide container networking but offers a standardized interface for external providers to supply plugins, known as the Container Network Interface (CNI). Examples of open-source CNI plugins include Flannel, WeaveNet, Calico.

Cocktail Cloud offers options to configure the cluster's CNI plugin.

Ingress Controller

External access to containers is handled by the ingress controller. It routes external traffic to containers based on hostnames and paths. Routing rules are configured for each application and applied to the ingress controller.

The ingress controller is a Kubernetes component. The NGINX ingress controller is the most commonly used and is often provided as the default choice; other third-party ingress controllers are also available.

Cocktail Cloud offers options to configure the ingress controller.

Storage

Cluster storage provides persistent volumes for container data storage.

Since containers can be rescheduled to different nodes in case of node failure or resource shortage, storing container data on nodes can be problematic. Therefore, a separate volume called a persistent volume is needed to store and manage data safely.

Kubernetes creates and provides persistent volumes through storage classes. When configuring the cluster, an appropriate storage class for storage must be installed.

Cocktail Cloud provides storage classes as addons, allowing users to select and automatically manage suitable storage classes.

Addon

Besides networking and storage, Kubernetes has components to extend its functionality, referred to as addons.

These addons provide additional capabilities to Kubernetes clusters beyond container management. Examples include monitoring and service meshes.

Cocktail Cloud offers various Kubernetes extension components as addons. They are automatically managed from installation to upgrade, and users can choose and use the required addons.

Kubernetes and Cocktail Cloud

This section describes what Cocktail Cloud is and outlines its features and advantages.

Cocktail Cloud is a platform built on Kubernetes, offering the functionalities and APIs necessary for building, deploying, monitoring, and operating cloud-native applications, starting from the build phase. Many companies consider adopting Kubernetes because of the growing importance of cloud-native technologies such as containers, microservices, and serverless architectures.

Cloud-native applications enhance continuity and efficiency in development and operation, ensuring high availability through automation for fault response, load-based scaling, and more. However, adopting and operating Kubernetes is itself a major challenge for enterprises, due to the difficulty of adapting to new technologies and the complexity of management.

Cocktail Cloud provides an integrated platform with all the functionalities and components required for building and operating Kubernetes and cloud-native applications. Enterprises can save time and effort during initial adoption and seamlessly manage and scale thereafter.

Reducing Efforts in Kubernetes Adoption and Operation

While the number of companies adopting Kubernetes is increasing, there's a significant burden on operating and managing open-source installations, updates, and adapting organizations to new technologies. Additionally, setting up components like monitoring, networking, and security that Kubernetes doesn't inherently provide requires additional effort. Cocktail Cloud offers automated tools for configuring and scaling Kubernetes clusters, simplifying cluster management, including upgrades and node expansion. This leads to reduced efforts in initial setup and ongoing management. Cocktail Cloud extends Kubernetes configurations through addons, providing components like monitoring, networking, GPU support, and security. These addons also come with automated installation/update functionalities.

Multi-Cluster Deployment

Enterprises have various reasons for using multi-cluster setups, such as network isolation for security, separate operations for production and development systems, and leveraging public clouds. There's also a growing trend of using multiple clusters and different Kubernetes distributions simultaneously. Cocktail Cloud provides an environment for operating and managing multiple clusters from a single control plane. It supports the construction and management of multi-clusters across diverse infrastructure bases, including private and public clouds, as well as data centers.

  • Physical Equipment (Baremetal) Based Clusters

  • Virtualization-Based Private Clouds: OpenStack, VMware, Citrix Hypervisor, etc.

  • Public Clouds: GKE (GCP), EKS (AWS), AKS (Azure), etc.

Multi-Tenancy

Enterprises require work environments where clusters and necessary resources are allocated or shared based on the roles of organizations or teams. Particularly in application service development and operation, it's common for dedicated teams to be responsible, and unique computing resources may be required based on the characteristics of applications. Cocktail Cloud provides independent workspaces for organizations or teams, allowing allocation and management of necessary computing resources (clusters). Beyond basic computing resources like CPU, GPU, Memory, Volume, cloud-native applications also require resources for development/operation such as container image registries and automation pipelines. Resource allocation and management, as well as permission management for workspace members, in a multi-tenancy environment can be easily configured and managed within independent workspaces.

Unified Management/Monitoring

Managing multiple clusters and a multi-tenancy environment can be complex and challenging. Cocktail Cloud addresses these issues through various integrated management features. It allows monitoring and managing the status of enterprise-wide applications and infrastructure resources (clusters, repositories). Teams or organizations can track the development/operation status of applications and services and handle resource requests and issues accordingly. Cocktail Cloud centrally monitors multi-cluster infrastructure resources, application statuses, networks, etc., providing real-time alerts/events and logs to effectively respond to faults or issues. Additionally, it offers a customizable integrated monitoring environment tailored to the needs of the enterprise through metric and rule extensions.

Automation, DevOps

Ensuring continuity from application building to deployment and updates has become increasingly crucial. Swiftly responding to customer demands collected through various channels is a key factor in achieving business success. To address this, enterprises establish automated Continuous Integration/Continuous Deployment (CI/CD) pipelines. Cloud-native applications offer advanced technologies for building automated pipelines compared to before. Leveraging these advancements, Cocktail Cloud provides various functionalities for establishing and managing automated CI/CD pipelines. Enterprises can tailor pipelines according to the characteristics of their applications and development/operation environments. Additionally, they can provide DevOps platforms for teams or organizations, encompassing operations and monitoring.

Security

Security is a critical management factor for enterprises. Especially, managing authorized user access and permissions to the infrastructure (e.g., clusters in the case of Kubernetes) where applications are deployed and executed is a fundamental approach to address threats from unauthorized users. Cocktail Cloud issues access accounts and permissions to authorized users and manages risks through access periods and revocations. Additionally, it tracks the usage history of access accounts through audit logs, enabling swift responses such as cause analysis and blocking in case of security issues. Furthermore, it provides functionalities such as 'Security Policy Management' for policy formulation and application during container execution, and 'Image Scanning' to inspect vulnerabilities in container images.

What is Cocktail Cloud?

https://cocktailcloud.io/

Cocktail Cloud is an all-in-one container management platform. With the widespread adoption of cloud technology, there's a growing demand not just for infrastructure management but also for application and service management. Traditional development and operation methods are limited in fully leveraging the advantages of the cloud. Particularly in the realm of applications, there's an increasing demand for automation, efficiency, and integrated management, including continuous integration/deployment (CI/CD), migration, and the establishment of multi/hybrid cloud infrastructures.

The proliferation of container technology is natural in this context. Many companies have already adopted container technology, and this trend continues to grow. Containers package applications or services into independent and executable units, providing the same development and operation experience regardless of the infrastructure environment. Therefore, standardizing cloud management from infrastructure to services can reduce development and operation efforts. Containers offer the advantage of managing multi/hybrid clouds under a consistent environment.

Cocktail Cloud applies the benefits of containers to cloud management, streamlining development and operation tasks and providing a platform for implementing single or multi/hybrid cloud strategies.

Key features of Cocktail Cloud include:

  1. Automation of pipelines from code to build, deployment, and updates.

  2. Workload (service)-centric container management: packaging, lifecycle, resources, etc.

  3. Full-stack monitoring: monitoring the status and resources from infrastructure to containers. Alert management.

  4. Multi/hybrid cluster provisioning and management: Baremetal, private/public clouds.

Image Build

Containers are deployed and executed as images. Container images are specified by name in the pod's container spec in the format of image_name:tag (e.g., nginx:latest). When using Docker Hub, the registry address where the image is located is often omitted.
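For illustration, here is a minimal pod manifest showing the image_name:tag reference described above; the private registry address and application image name are hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      # Registry address omitted, so the image is pulled from Docker Hub.
      image: nginx:latest
    - name: app
      # With a private registry, the registry address prefixes the image name (hypothetical example).
      image: registry.example.com/myteam/myapp:1.0.0
```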

Cocktail Cloud provides independent image registries for each workspace. It also offers automated image building through the 'Build' feature.

Image Registry

An image registry stores container images and provides them when the image is required for pod execution. The storage/provisioning interface of image registries is standardized. Typically, the 'Push' API is used to store images in the registry after creation, while the 'Pull' API is used to retrieve images during container execution.

In Cocktail Cloud, image registries can be allocated for each workspace, serving as independent registries for teams. Additionally, image registries can be shared among teams.

When deploying pods from the service map using an allocated image registry, the image configuration is performed by selecting a 'build' rather than specifying the image name. A build automates the process of creating images, and the latest image can be deployed to pods based on the selected build's 'tag.'

Build

A build is a resource in Cocktail Cloud that automates the process of generating container images. Builds can have one or more tags, with each tag defining a different creation process. Tags can be seen as image versions. The process of generating images is called the build flow.

Builds store generated images in the image registry allocated to the workspace. Therefore, images and builds are synonymous, but each image tag (version) has a unique build flow. The structure is Image Registry -> Image (Build) -> Tag (Build Flow).

Users deploy builds (images) and tags (versions) in pods for workloads. The image generated by the build flow of the selected tag is then deployed and executed. The pipeline in the service map automates the entire process of updating images by executing the build flow of the image tag after code changes.

Build Flow

The build flow automates the process of generating images for a specific tag (version). Each step executed by the build flow for image creation is called a 'task.'

Cocktail Cloud offers various types of default tasks, and users can create custom tasks to configure the build flow. Default tasks include downloading code from code repositories (Git), executing user-defined scripts, and building images using Dockerfiles. Additionally, tasks for integrating with external systems' APIs and FTP-based file transfers are provided.

Users can develop and add/extend tasks to the build flow. User-defined tasks need to be containerized before adding them to the build flow.

Tasks in the build flow are executed on the 'build server.' Cocktail Cloud provides options to adjust the capacity of the build server. For build tasks requiring substantial resources, the capacity of the build server can be adjusted.

Azure (AKS)

Coming Soon

Monitoring

Cloud-native applications leverage cluster and container technologies, where clusters manage infrastructure, and containers handle application deployment and execution. Consequently, the monitoring targets differ from traditional applications.

Cluster Monitoring

Clusters consist of nodes, which are computing machines with CPU, GPU, Memory, and Disk, along with an operating system (OS) and container runtime for executing containers. Hence, monitoring of physical resource usage and performance necessary for container execution is done by collecting data (referred to as 'metrics' in monitoring) at the node level.

Container management is handled by Kubernetes, which is composed of multiple components installed on the cluster's master node. Monitoring the master node and its installed components is necessary because container management becomes impossible if Kubernetes fails. This monitoring covers resource usage on the master node and the status of the installed components.

Nodes and containers within a cluster communicate with each other. Monitoring network usage targets both the physical network and the logical network controlled by Kubernetes.

Application (Container) Monitoring

While cluster monitoring focuses on infrastructure resources required for container execution, container monitoring encompasses resource usage, execution status, and lifecycle monitoring. It also includes monitoring aspects such as communication volume between containers and request processing times.

Container monitoring provides metrics through the Kubernetes API and the Service Mesh API (for configuring container-to-container communication).

Notifications and Events

Notifications occur when monitoring metric data meets certain conditions defined by notification rules. These rules can be both predefined and user-defined.

Events occur when Kubernetes resources change. For instance, events are triggered by pod creation, execution, update, or deletion. Cocktail Cloud collects and provides events as notifications.

Both notifications and events provide real-time information during operation, facilitating proactive measures against application and cluster state changes and failures.

Logs

Kubernetes logs comprise three main types. Firstly, logs recorded by the Kubernetes master provide information necessary for master operation. Secondly, container logs are logs displayed on standard output (STDOUT/STDERR) during container execution. Lastly, application logs are logs recorded in separate files by containers in addition to standard output.

Cocktail Cloud collects all three types of logs, providing an environment for log retrieval and analysis.

Security

Security is a crucial aspect of enterprise cloud environments, with three main components in cloud-native setups:

Cluster Authentication and Authorization

Cluster access authentication and authorization refer to the permissions granted to authorized users to access the cluster and manage resources as needed. Users accessing the cluster have user accounts, and resources include applications and data. Administrators authorize user access and grant appropriate permissions for resource management, thereby managing cluster security.

In Cocktail Cloud, users can manage allocated clusters via GUI within workspaces, eliminating the need for direct cluster access for management. However, if using command-line tools or external CI/CD systems, a cluster user account is necessary. Administrators issue cluster accounts to users in such cases.

Cocktail Cloud provides integrated cluster account management, allowing users to access multiple clusters with a single user account and manage resources based on permissions. Users receive cluster accounts from administrators and can manage clusters within the validity period.
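As a reference, a cluster account is typically used through a kubeconfig file. The sketch below uses hypothetical cluster names, addresses, and a token credential; the actual credential type and values depend on the account issued by the administrator.

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: dev-cluster
    cluster:
      server: https://203.0.113.10:6443            # Kubernetes API server address (hypothetical)
      certificate-authority-data: <base64-ca-cert> # cluster CA certificate
users:
  - name: team-a-user
    user:
      token: <issued-access-token>                 # credential issued by the administrator
contexts:
  - name: team-a@dev-cluster
    context:
      cluster: dev-cluster
      user: team-a-user
current-context: team-a@dev-cluster
```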

Audit Logs

Audit logs record the commands (API) executed by users logged in as Cocktail users or cluster accounts, detailing which resources were affected. In case of incidents or security issues, audit logs can be traced to analyze the root cause.

Cocktail Cloud offers the capability to collect and trace both platform (Cocktail Cloud features) and cluster (Kubernetes) audit logs.

Pod (Container) Security Policies

Pod security policies control permissions, node access, OS security settings, and so on during container execution. Typically, these security settings are defined in the pod specification. However, enterprises need centralized control over security: if each team or organization applies different security settings, unforeseen vulnerabilities can result.
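As an illustration of the kind of security settings defined in a pod specification, here is a minimal sketch using the standard Kubernetes securityContext fields; the image name is hypothetical.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-app
spec:
  securityContext:
    runAsNonRoot: true               # refuse to run containers as root
    runAsUser: 1000
    fsGroup: 2000
  containers:
    - name: app
      image: registry.example.com/myteam/myapp:1.0.0   # hypothetical image
      securityContext:
        allowPrivilegeEscalation: false
        readOnlyRootFilesystem: true
        capabilities:
          drop: ["ALL"]              # drop all Linux capabilities
```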

Pod security policies can enforce security settings at the cluster or application level. Enterprises can enforce security policies based on their existing security policies.

Cocktail Cloud provides features to configure and apply security policies.

Image Inspection

Container images may contain multiple open-source components. For example, a base image publicly available on the internet typically serves as the starting point for a container image, to which user-specific components are added. If that base image contains malicious code, it poses a security risk.

Cocktail Cloud's image registry offers features to inspect images for malicious code. It also checks for outdated component versions and vulnerable code.

Catalog

Typically, applications consist of one or more workloads (containers), especially when deployed on Kubernetes, involving various related resources such as service exposure and volumes. This complexity makes application deployment and upgrades challenging.

Catalogs address these issues by bundling multiple application resources into a single unit called a package and deploying this package with user settings when necessary. Upgrades are also automated based on versioning. Several open-source tools support package creation, deployment, and management, with Helm, a CNCF-graduated project, being the most widely used.

Cocktail Cloud's catalog offers the ability to search for such packages and automatically deploy them to service maps. These packages are in the form of Helm charts, which are supported by a wide range of open-source projects. Open-source packages are registered and managed in package repositories. There are numerous public package repositories where various open-source packages are available, and the catalog can search all these repositories for packages.

Packages deployed through the catalog are managed in the package menu of the service map. It provides monitoring and status of deployed packages and enables upgrades to newer versions.

AWS (EKS)

Before creating the provider, note that clusters provisioned through Cocktail must be deleted from Cocktail, not from the EKS console.

Cocktail continuously monitors the status of provisioned clusters. If a provisioned cluster is deleted from the EKS console, Cocktail will detect the change and recreate the cluster.

1. Create a Cloud Provider

1) To create a cloud provider for cluster creation, click on the "+ Create" button in the [Provisioning] - [Cloud Providers] tab, and select AWS.

2) Register AWS authentication information in the basic information and click the "Save" button.

Item (* is required)
Content

Account Name*

Enter the name for the registered AWS account

Description

Enter the description for the AWS account

AWS Access Key ID*

Input the AWS Access Key ID

AWS Secret Access Key*

Input AWS Secret Access Key

AssumeRole ARN

Input AWS AssumeRole ARN value

3) Confirm successful registration.

2. Create an EKS Cluster

1) Navigate to the [Provisioning] - [Templates] tab, then click the "Start" button under the EKS (Elastic Kubernetes Service) item in the templates section.

2) Select the previously created cloud provider information, choose the required version, and click "Save."

Item (* is required)
Content

Account Name*

Select the registered cloud provider

Region*

Select the region for the cluster to be created

Cluster Name*

Register the name of the cluster to be created

Version*

Select the version of the cluster to be created

3) Once saved, the cluster status changes to "CREATING" as it is being provisioned.

4) Click on the "CREATING" status to monitor the cluster creation progress.

5) Click on the [Activity] tab to check the ongoing installation details.

6) Confirm the status changes to "RUNNING" when the cluster is successfully created.

To use the provisioned cluster for services, the addon-manager and a storage class must be deployed.

3. Add AWS Node Group

A node group can be added only after cluster provisioning is complete.

1) Once the cluster configuration is complete, select the cluster, go to the [Resources] tab, and click "+ Add Node Group."

2) Enter the required information for the node group to be created and click "Save."

Item (* is required)
Content

Node Group Name*

Enter the name of the node group to be created

Instance Type*

Select the instance type for the nodes to be created

Disk Size (GiB)*

Enter the disk size for each node in the node group

Desired Node*

Enter the desired number of nodes in the node group

Min Node Count*

Enter the minimum number of nodes when scaling in

Max Node Count*

Enter the maximum number of nodes when scaling out

3) When the node group addition starts, the status is displayed in the "Node Group" section.

4) As the node group addition progresses, the status changes to "ACTIVE."

5) Check in the [Infrastructure] - [Clusters] tab that the cluster status and the number of nodes are displayed correctly.

4. Add Amazon EBS CSI Driver

The Amazon EBS CSI Driver can be installed only when at least one node group exists.

1) Once the node group configuration is complete, select the cluster, go to the [Resources] tab, and click "+ Install Amazon EBS CSI Driver."

2) The installation takes some time while resources are created; afterward, confirm that the installation has completed.

3) Once the Amazon EBS CSI Driver installation is complete, the status is displayed in the "Amazon EBS CSI Driver" section.

4) Confirm the installation of the Amazon EBS CSI Driver in [Workloads] - [Deployments].

5. Add Cluster Autoscaler

The Cluster Autoscaler can be installed only when at least one node group exists.

1) Once the node group configuration is complete, select the cluster, go to the [Resources] tab, and click "+ Install Cluster Autoscaler."

2) The installation takes some time while resources are created; afterward, confirm that the installation has completed.

3) Once the installation is complete, the status is displayed in the "Cluster Autoscaler" section.

4) Confirm the installation of the Cluster Autoscaler in [Workloads] - [Deployments].

To create a cluster, certain prerequisites need to be completed. Please refer to the following steps.

GCP (GKE)

Coming Soon

Service Map

The Service Map is a unit that configures applications and manages various resources. Kubernetes manages applications at the Namespace level. Namespaces are a way of logically dividing clusters to deploy and manage containers, serving as a kind of virtual cluster. The Service Map, provided by Cocktail, is a management space for applications based on namespaces, offering additional management features.
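For reference, a namespace is the Kubernetes object behind a service map. The sketch below shows a namespace with an optional ResourceQuota of the kind applied when resource allocation is limited; names and limits are hypothetical.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-dev                   # hypothetical service map / namespace name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-dev-quota
  namespace: team-a-dev
spec:
  hard:
    requests.cpu: "4"                # total CPU the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```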

One of the main resources in the Service Map is Workloads, which deploy and manage containers. Other resources include persistent volumes, configuration information, and service exposure.

Pod (Containers)

Pods are resources that deploy and execute containers, composed of one or more containers. They consist of a main container responsible for application logic processing and optional sidecar containers for auxiliary tasks. While most cases require only the main container, additional sidecar containers can be added for functions like log collection, backup, or proxy.

Containers within a pod share the same network (localhost) and volumes, making it easy to scale by adding containers.
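A minimal sketch of a pod with a main container and a log-collection sidecar sharing a volume; the images and paths are illustrative only.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}                   # shared scratch volume for the two containers
  containers:
    - name: web                      # main container: application logic
      image: nginx:latest
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
    - name: log-collector            # sidecar: reads logs written by the main container
      image: busybox:latest
      command: ["sh", "-c", "tail -F /var/log/nginx/access.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/nginx
```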

Pods can be created independently, but typically, they are managed through workloads responsible for deployment, updates, and lifecycle management.

Workloads

Workloads are Service Map resources responsible for the deployment and lifecycle of pods. They manage the entire lifecycle of pods, including how they are deployed and executed, status monitoring, and restarting in case of issues.

In Cocktail, users interact with workload settings through a web UI console, minimizing errors caused by incorrect configurations.

Even when inputting through the web UI console, users can specify almost all detailed configuration items related to workloads defined in Kubernetes.

Workloads are categorized into several types, each with differences in how pods are deployed, executed, and managed.

Workload Groups

Within a namespace, various types of workloads can be executed, and in some cases, there can be so many workloads that it becomes difficult to understand them all at a glance.

Organizing workloads into workload groups allows for a clear overview of the state of workloads by displaying them according to workload groups.

Deployment Workloads

Deployment workloads replicate the same pod multiple times to provide stable service even if some pods have issues. They are mainly used for stateless application configurations like web servers, where data management is not required. This is because replicated pods have the same container, making them unsuitable for data management.

Deployment workloads also support rolling updates, replacing pods sequentially to perform updates without affecting services.

They also support autoscaling, automatically increasing the number of pod replicas when requested CPU or memory resources are exceeded.
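A minimal Deployment sketch showing replicas, a rolling update strategy, and the resource requests that autoscaling decisions are based on; names and values are illustrative.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                        # identical pod replicas for stable service
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate              # replace pods sequentially during updates
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          resources:
            requests:                # baseline used by a HorizontalPodAutoscaler, if configured
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 500m
              memory: 256Mi
```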

StatefulSet Workloads

StatefulSet workloads replicate pods in which each replica has a distinct identity and role. They are suitable for data storage and management applications such as databases, key-value stores, and big data clusters. Each pod is assigned a unique name, allowing tasks to be processed through pod-to-pod communication. Each pod uses its own persistent volume to store and manage data.
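A minimal StatefulSet sketch: replicas get stable names (db-0, db-1, ...) and each gets its own persistent volume via volumeClaimTemplates. The database image, password, and storage size are illustrative only.

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                    # headless service used for stable per-pod DNS names
  replicas: 3                        # pods are named db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: db
          image: postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: example-password   # illustration only; use a Secret in practice
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:              # each replica receives its own persistent volume
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```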

DaemonSet Workloads

DaemonSet workloads are Service Map resources that deploy and manage daemon process pods running continuously in the background. Examples of background tasks include log collection and monitoring data collection on each node.

Job Workloads

Job workloads deploy pods to process tasks in a one-time execution. They are mainly used for batch job processing such as data transformation and machine learning.

CronJob Workloads

Similar to job workloads, but they allow for scheduled or periodic execution of jobs. Scheduling uses cron-style expressions, similar to Linux's crontab.
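A minimal CronJob sketch using a cron-style schedule; the image and command are hypothetical.

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"              # cron syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: report
              image: registry.example.com/myteam/report-job:1.0.0   # hypothetical image
              command: ["python", "generate_report.py"]             # hypothetical command
```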

Service Exposure

To serve applications externally, pods need to be exposed to external traffic. The Service Map exposes pods to external traffic through service exposure resources.

Service exposure resources specify pods to expose based on labels. Therefore, even replicated pods with the same label can be specified in one service exposure. In this case, the service exposure performs load balancing to send traffic to one of the specified pods.

Service exposure resources are assigned a fixed IP address, which is a private IP address used within the cluster. This is because pod IP addresses change on restart, which can cause issues if pods are accessed directly. Therefore, the fixed address of the service exposure is used to connect to the specified pods.

Service exposure is categorized based on the exposure method.

Exposing Service with Cluster IP

Exposing services with a cluster IP allows access only within the cluster. It is used for communication between pods via fixed IP addresses within the service map.
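A minimal ClusterIP service sketch: pods are selected by label, and traffic to the fixed service port is load-balanced across them. Names and ports are illustrative.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: ClusterIP                    # default: reachable only from inside the cluster
  selector:
    app: web                         # targets every pod carrying this label
  ports:
    - port: 80                       # fixed service port inside the cluster
      targetPort: 8080               # container port on the selected pods
```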

Exposing Service with Node Port

Node port exposes services using the node's IP address. External access to pods is possible using the exposed node port (Node IP: Node Port). Node ports are assigned to all nodes in the cluster, allowing access to pods from any node. Typically, all nodes in the cluster are connected to an L4 switch to consolidate access addresses.

Node ports are assigned from the range 30000 to 32767, which is set during cluster installation. Ports are assigned automatically or by specifying a port within the range, and the range itself can be user-configured.

Exposing Service with Load Balancer

When the cluster is configured in a cloud environment, a load balancer can be automatically created to expose services. Pods are exposed via node ports, and the created load balancer connects pod execution nodes with ports. External access is possible using the load balancer's address and port. Load balancer service exposure is only possible on supported cloud platforms like AWS, Azure, and GCP, configured during cluster installation by cloud providers.

Ingress

Apart from service exposure, the Service Map also has Ingress resources for external pod access. Ingress exposes pods to the outside world via hostnames/paths (e.g., abc.com/login, cdf.org/). It functions similarly to an L7 switch.

To use Ingress, an Ingress controller must be installed in the cluster. The Ingress controller receives external traffic and forwards it to pods based on the routing information defined in the Ingress (hostname/path -> cluster IP service exposure). Thus the Ingress controller itself is exposed to external traffic, while pods are exposed through cluster IP services that the Ingress rules route to.
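A minimal Ingress sketch routing a hostname/path to a ClusterIP service, assuming an NGINX ingress class; the hostname follows the example above and the service name is hypothetical.

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  ingressClassName: nginx            # ingress controller that handles this rule (assumed)
  rules:
    - host: abc.com                  # hostname-based routing
      http:
        paths:
          - path: /login             # path-based routing
            pathType: Prefix
            backend:
              service:
                name: web            # ClusterIP service in front of the pods
                port:
                  number: 80
```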

In Kubernetes, the Ingress controller itself runs as pods. Therefore, when it is installed in the cluster, it must be exposed via node ports or a load balancer.

Note that Ingress routes traffic based on hostnames/paths, so these need to be registered in internal or external DNS servers and accessible by the Ingress controller. Cocktail Cloud provides default configurations for Ingress usage, eliminating the need for additional installation and setup.

In cases of high external traffic to pods, dedicated Ingress nodes are sometimes configured in the cluster. These nodes only have Ingress controllers deployed and are replicated for high availability. They provide scalability by adding Ingress nodes as traffic increases.

Persistence Volumes

When pods need to store and manage data, persistent volumes are necessary. Pods can restart at any time or be reassigned to healthy nodes in case of node failure, so node disks cannot be relied on for data storage.

Persistent volumes ensure data integrity even when pods are restarted, reassigned, or deleted because they are managed independently of pod lifecycles. They use separate storage configured with the cluster.

Persistent volume resources in the Service Map are created from storage configured in the cluster. Created persistent volumes are mounted into pods and used by containers. Persistent volumes are categorized into shared volumes, which can be mounted by multiple pods, and single volumes, which can be mounted by only one pod.

Persistent volumes require storage configured in the cluster. NFS storage is commonly used, and any storage that supports the NFS protocol can be used.
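A minimal sketch of a persistent volume claim and a pod mounting it; the storage class name, size, and image are assumptions.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes: ["ReadWriteOnce"]     # single-pod volume; ReadWriteMany for shared volumes
  storageClassName: standard         # storage class configured in the cluster (assumed name)
  resources:
    requests:
      storage: 5Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/myteam/myapp:1.0.0   # hypothetical image
      volumeMounts:
        - name: data
          mountPath: /data           # data written here survives pod restarts
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: app-data
```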

Configuration Information (ConfigMap/Secret)

When deploying a web server as a container, you typically use configuration files for server execution. In cases where pods are executed with separate configurations, these settings are managed as configuration resources. While it's possible to include configuration files in the pod's container image, this would necessitate recreating the image whenever configurations change.

Configuration information is created and managed within the service map, and can be mounted to pods' containers as volumes or files. Depending on container implementation, it can also be passed as environment variables. An advantage is that configuration changes can be made and reflected even while pods are running.

Configuration resources are categorized into ConfigMaps and Secrets based on the management method. Both manage configuration information, but Secrets are intended for sensitive data such as database connection details and store it in an encoded form (with optional encryption at rest).
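A minimal sketch of a ConfigMap and a Secret consumed by a pod, both as environment variables and as mounted files; all names and values are illustrative.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: production
  server.conf: |                     # file-style entry, mounted below
    listen 8080;
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: example-password      # illustration only
---
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
    - name: app
      image: registry.example.com/myteam/myapp:1.0.0   # hypothetical image
      envFrom:
        - configMapRef:
            name: app-config         # injected as environment variables
        - secretRef:
            name: db-credentials
      volumeMounts:
        - name: config-files
          mountPath: /etc/app        # mounted as files
  volumes:
    - name: config-files
      configMap:
        name: app-config
```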

Image Update

The pipeline resource in the service map updates container images for workloads. Upon workload deployment completion, a pipeline is automatically created to update container images in pods.

Container images deployed by workloads in the service map fall into two types: those generated from Cocktail Cloud builds and those registered in external registries.

Images generated from Cocktail Cloud builds undergo the entire process from image creation to workload deployment automatically whenever application code changes (refer to 'Builds/Images' section for Cocktail Cloud builds).

External registry images are updated via replacement, where the pipeline performs automated deployment only.

Catalog

The catalog in the service map bundles one or more workloads associated with the service map for deployment and updates. It's primarily used for distributing and updating open-source software.

Cocktail Cloud provides many commonly used open-source packages in its catalog. Users can search for desired packages in the catalog and automatically deploy them to the service map. (Refer to the 'Catalog' section for Cocktail Cloud's catalog).

Packages deployed to the service map can perform state monitoring and automatic updates.

NCP (NKS)

1. NCPKS Cluster Registration

1) To register the created NCPKS cluster with Cocktail Cloud, follow these steps

1.1 Navigate to the Cluster Registration Screen

1) Click the "+ Cluster Registration" button located in the upper right corner of the [Infrastructure] - [Clusters] tab.

1.2 Choose Cluster Provider and Type

1) Set the provider attribute in the Cluster Configuration form to 'Naver Cloud Platform.'

2) Choose the type attribute as 'NCPKS.'

3) Upon selecting these two attributes, the Cluster UUID setting will appear in the cluster configuration form.

4) Choose the region as the one where NCPKS was created (Example: Korea(KR)).

  • Provider and type fields are mandatory.

1.3 Cluster Registration

Item (* is required)
Content

Cluster UUID*

After creating the cluster, check and retrieve the information from the Kubernetes config file (~/.kube/config)

Cluster Name*

Enter the Cluster Name to be managed by Cocktail Cloud

Kubernetes Version*

Enter the Kubernetes Version for the Cluster to be registered

ID*

ID to be managed by Cocktail Cloud - The ID should consist of only lowercase letters, numbers, and three specified special characters (- . _) (example: eks-acornsoft-demo-cluster) - Must not overlap with the IDs of other already registered clusters

Description

Description of the Cluster to be registered

Master Address*

As with the UUID, check the Kubernetes config file (~/.kube/config) and enter the value

Node Port Host Address*

Enter the Public IP of Kubernetes

Node Port Range*

Input the port range 30000-32767, which is available in Kubernetes

Cluster CA Certification*

As with the UUID, check the Kubernetes config file (~/.kube/config) and enter the value

Access Key ID*

Enter the NCLOUD_ACCESS_KEY in Naver Cloud Portal > My Page > Account Management > API Key Management

Secret Access Key*

Enter the NCLOUD_SECRET_KEY in Naver Cloud Portal > My Page > Account Management > API Key Management

1) Click the "Save" button in the menu bar to initiate the cluster registration.

2) After registration is complete, the cluster list will be displayed, allowing you to verify the recently registered cluster.

NCP (NKS)

Coming Soon

ETC (Datacenter)

Coming Soon

Azure (AKS)

Coming Soon

AWS (EKS)

When you have previously registered an existing EKS cluster in Cocktail and need to delete it, simply delete the cluster from the AWS console. Please keep this in mind.

1. EKS Cluster Registration

1) Register the created EKS cluster with the Cocktail Cloud using the following procedure.

1.1 Move to the Cluster Registration screen

1) Click on the "+ Register cluster in use" button in the upper right corner of the [Infrastructure] - [Clusters] tab.

1.2 Choose Cluster Provider and Type

1) Select 'Amazon Web Service' as the provider in the cluster configuration form.

2) Choose the type attribute as 'EKS'.

3) Once these two attributes are selected, the Cluster ID setting will be displayed in the cluster configuration form.

4) Choose the region as the one where EKS was created (Example: Seoul (ap-northeast-2)).

  • The Provider and Type fields are mandatory.

1.3 Cluster Registration

Item (* is required)
Content

Cluster ID*

Cluster name managed by AWS EKS - Retrieve from the Kubernetes config file for the cluster to be registered (~/.kube/config) - Alternatively, check in the AWS console under EKS > Clusters and input the information.

ID*

Same as the Cluster ID above - Must not overlap with the IDs of other already registered clusters

Kubernetes Version*

Enter the Kubernetes Version for the Cluster to be registered

Cluster Name*

The Cluster Name to be used in Cocktail Cloud

Description

Description of the Cluster to be registered

Master address*

Kubernetes Master API Address - Alternatively, check the AWS Console > EKS > API Server Endpoint section and enter it

Node Port Host Address*

Enter the Public IP of Kubernetes

Node Port Range*

Input the port range 30000-32767, which is available in Kubernetes

Cluster CA Certification*

Cluster CA Certificate - Check the AWS Console > EKS > Certificate authority section and enter it.

Access Key ID*

ACCESS_KEY of the AWS IAM user with access to the cluster to be registered - Confirm and retrieve from AWS Console > IAM > Users.

Secret Access Key*

SECRET_ACCESS_KEY of the AWS IAM user with access to the cluster to be registered

- Confirm and retrieve from AWS Console > IAM > Users.

1) Click the "Save" button in the menu bar to save the cluster registration.

2) After registration is complete, the cluster list will be displayed.

2. Create Storage Class

1) To use PV/PVC, you need to create a new storage class.

2.1 Move to the storage class creation screen

1) Click on the "+ Create" button at the top right corner of the [Storage] - [Storage Classes] tab, then select AWS EBS CSI.

2.2 Configure Storage Class

1) Fill out the settings form accordingly, then click the "Save" button.

Item (* is required)
Content

Name*

Storage Controller Name

Description

Description of the Storage Controller

Base Storage

Default Storage Settings for Use in this Cluster

Volume Binding Mode*

Volume Binding Mode Selection

- Immediate: the volume is provisioned and bound as soon as the PVC is created.

- WaitForFirstConsumer: provisioning is delayed until a pod that uses the PVC is scheduled.

Reclaim Policy*

- RETAIN: The storage remains when deleted and is automatically reattached upon recreation.

- DELETE: The storage is deleted along with the resource.

Parameters

Enter parameters as name/value pairs. For an NFS-backed storage class, enter the target (server-side) IP address for "server" and the target export (mount) path for "share".

Mount Options

Only values are registered. Enter "hard", and specify the NFS version (nfsvers): use nfsvers=4.1 for a default OS NFS server, and nfsvers=3 for NAS or other storage.
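For reference, the form above corresponds to a standard Kubernetes StorageClass object. Below is a minimal sketch for the AWS EBS CSI driver; the class name and volume type are assumptions.

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: aws-ebs-gp3                  # hypothetical storage class name
provisioner: ebs.csi.aws.com         # AWS EBS CSI driver
volumeBindingMode: WaitForFirstConsumer   # provision when the consuming pod is scheduled
reclaimPolicy: Delete                # or Retain to keep volumes after the claim is deleted
parameters:
  type: gp3                          # EBS volume type (assumed)
```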

2.3 Storage Class Creation Completed

1) You can now verify the created storage class

Managing Cloud Provider

This section introduces the process of configuring users and permissions to provision resources from a cloud provider. Currently, only AWS is supported; additional cloud providers may be added in the future.

GCP (GKE)

1. GKE Cluster Registration

1) To register the created GKE cluster with Cocktail Cloud, follow these steps

1.1 Navigate to the Cluster Registration Screen

1) Click the "+ Cluster Registration" button located in the upper right corner of the [Infrastructure] - [Clusters] tab.

1.2 Choose Cluster Type and Authenticate

1) Set the provider attribute in the Cluster Configuration form to 'Google Cloud Platform.'

2) Choose the type attribute as 'GKE.'

3) Upon selecting these two attributes, the authentication attribute will appear in the cluster configuration form.

4) Click the authentication button, and a Google authentication window will appear. Enter your Google account credentials.

  • Provider and type fields are mandatory.

  • The Google account used must be the one used when creating the GKE cluster to be registered.

1.3 Cluster Registration

1) Once authentication is complete, the authentication attribute will display 'Authentication Completed.'

2) Click the "Save" button in the menu bar to initiate the cluster registration.

  • The remaining attributes not entered will be automatically filled in based on the information from the GKE cluster after registration.

3) After registration is complete, the cluster list will be displayed, allowing you to verify the recently registered cluster.

Item (* is required)
Content

Project ID*

Select the Google Cloud Project ID - The Project ID is associated with the project where the GKE cluster to be registered is created.

Cluster*

Select the GKE Cluster to Register

Cluster Name*

Enter the Cluster Name to be managed by Cocktail Cloud (example: GKE Acornsoft Demo Cluster - 1.00.00) - The name must not overlap with the names of other already registered clusters

ID*

Enter the Cluster ID - The ID can only consist of lowercase letters, numbers, and three specified special characters (- . _) - Must not overlap with the IDs of other already registered clusters

ETC (Datacenter)

1. ETC Cluster Registration

1) To register a non-public cloud cluster with Cocktail Cloud, follow these steps

1.1 Navigate to the Cluster Registration Screen

1) Click the "+ Register In-Use Cluster" button located in the upper right corner of the [Infrastructure] - [Clusters] tab.

1.2 Choose Cluster Provider and Type

1) Set the provider attribute in the Cluster Configuration form to 'Datacenter.'

2) Choose the type attribute as 'MANAGED.'

3) Choose the region as the one where the cluster was created (Example: Korea).

  • Provider and type fields are mandatory.

1.3 Cluster Registration

Item (* is required)
Content

Cluster Name*

The name of the cluster to be managed in Cocktail Cloud

Kubernetes Version*

Enter the Kubernetes version of the created cluster

ID*

ID for management in Cocktail Cloud - IDs can only contain lowercase letters, numbers, and three special characters (- . _). (e.g., eks-acornsoft-demo-cluster) - IDs cannot be duplicates of IDs already registered for other clusters.

Description

Description of the cluster to be registered

Master address*

Enter the IP address displayed after running the command $ sudo cat /etc/kubernetes/acloud-client.conf | grep server

Node Port Host Address*

Enter the Public IP of Kubernetes

Node Port Range*

Input the port range 30000-32767, which is available in Kubernetes

Cluster CA Certification*

Enter the portion after "certificate-authority-data:" displayed when running the command $ sudo cat /etc/kubernetes/acloud-client.conf | grep certificate-authority-data

Client Certificate Data*

Enter the portion after "client-certificate-data:" displayed when running the command $ sudo cat /etc/kubernetes/acloud-client.conf | grep client-certificate-data

Client Key Data*

Enter the value displayed after running the command $ sudo cat /etc/kubernetes/acloud-client.conf | grep client-key-data

1) Click the "Save" button in the menu bar to initiate the cluster registration.

2) After registration is complete, the cluster list will be displayed, allowing you to verify the recently registered cluster.

AWS

To set up AWS IAM users and permissions for provisioning AWS resources, and to create roles with custom trust policies that IAM users can assume to access resources, follow these steps:

1) User Creation

1) Access the AWS Console and click on "IAM."

2) Click the "Create user" button in the top right corner of the IAM menu.

3) Enter the username.

4) In the "Permissions" options, select "Add user to group," click "Next," and proceed with the creation.

5) Verify that the user has been created successfully.

6) Copy the ARN (Amazon Resource Name) of the created user.

2) Policy Creation

1) In the IAM menu, navigate to [Access management] - [Policies] and click the "Create policy" button.

2) Click on JSON in the policy editor and edit the policy as needed.

3) Set a name for the policy and click "Create policy."

3) Role Creation

1) In the IAM menu, go to [Access management] - [Roles] and click the "Create role" button.

2) Choose "Custom trust policy" as the "Trusted entity type," then click "Add" in the "Add trusted entities" section.

3) Add [Principal Entity Types] - [IAM users] & [AWS services].

IAM users: ARN (Amazon Resource Name) of the created user

AWS services: Name of the service you intend to use (e.g., eks)

4) Add the necessary permissions:

AmazonEBSCSIDriverPolicy

AmazonEC2FullAccess

AmazonVPCFullAccess

IAMFullAccess

EKSFullPolicy

5) Set a name for the role and click "Create role."

6) Verify the created role.

4) Get Access Key and Secret Access Key

1) Click on the user with granted permissions, go to the [Security credentials] tab, and click "Create access key" at the top right of the "Access keys" box.

2) Click "Next" under the "Select" section, choose "Other," and click "Next."

3) Enter a description tag for the access key and click "Create access key."

4) Confirm the generated access key and secret access key.

5) Save the generated access key for later use.

Create Service Map

We have previously created a user who can log in to the platform. Now, let's create a service map to register in the workspace.

1. Move to the Service Map Creation Screen

1) Click the "+ Create" button in the upper right corner on the [Application] - [Service Map] tab.

2. Enter Service Map Creation Information

Item (* is required)
Content

Cluster*

Select the cluster to register the service map

Namespace* (Choose one)

Choose whether to create the service map in a new namespace or utilize an existing namespace

Service Map Name*

Enter the name for the service map you want to create, typically using the same name as the namespace

Namespace*

If creating a new namespace, enter its name in the "Namespace" field

If using an existing namespace, select it from the list of namespaces currently created in the cluster

Network Policy*

Choose whether to allow or block Ingress/Egress traffic. The recommended setting is to select "Allow All Ingress/Egress Traffic"

Resource Allocation Usage

If checked, it limits the resource usage of the respective service map

Use Container Limit Range Configuration

If checked, it restricts the resource usage of containers deployed in the service map

Use Pod Limit Range Configuration

If checked, it limits the resource usage of pods deployed in the service map

Use Storage Limit Range Configuration

If checked, it limits the resource usage of volumes requested by the service map

After filling out all the details, select the "Save" button to create the service map.

3. Confirming Service Map Creation

1) Verify the newly created service map in the service map list screen.

4. Next Steps

Next, we will explain the process of creating an image registry, which is required for workspace registration. Please proceed to the "Create Registry" page.


Creating a User

In the previous section, we explained the "Cluster Registration" process. You have created a cluster and registered its information on the Cocktail Cloud platform, completing the infrastructure preparation. Now, let's create a user who can access the Cocktail Cloud platform.

1. Navigate to the user creation screen

1) Click the "+ Create" button in the upper right corner on the [Settings] - [User] tab.

2. Enter User Information

1) Enter the user creation information below, and click the "Save" button to create the user.

Items (* indicates required)
Content

Name*

Enter the user's name

ID*

Enter the User ID

Role*

Select the permissions to assign to the user

  • Choose between "Admin" and "User"; refer to the "Security" tab for details on each role

Department

Enter the user's department

Description

Enter a brief description for the account

Password*

Enter the user's password

Confirm Password*

Re-enter the user's password for confirmation

3. Confirming User Creation

1) Verify the newly created user in the user list screen.

4. Next Steps

Next, we will explain the process of creating a service map, which is required for workspace registration. Please proceed to the "Create Service Map" page.

[Screen] Cluster Configuration Form
[Screen] Cluster Information in AWS Console
[Screen] Initial Storage Class Screen
[Screen] Storage Class Configuration Screen
[Screen] Storage Class Creation Screen
[Screen] Cluster Configuration Form
[Screen] Cluster Configuration Form
[Screen] Cluster Configuration Form After Registration
[Screen] Accessing the Logged-in Console
[Screen] Role Creation Screen
Adding IAM Users
Adding AWS Services
[Screen] Service Map
[Screen] Service Map
[Screen] Basic Information for Service Map Creation
[Screen] Confirming Service Map Creation
[Screen] User Menu Screen
[Screen] User Creation Screen
[Screen] User Creation Confirmation
[Screen] Cloud Provider List
[Screen] Register AWS Authentication Information
[Screen] Cloud Provider Configuration Information
[Screen] Provisioning Template List
[Screen] EKS Provisioning
[Screen] EKS Cluster Creation List
[Screen] EKS Installation Information (Resource Information)
[Screen] EKS Installation Information (Resource Information)
[Screen] EKS Installation Information (Activity)
[Screen] Cluster Creation List
[Screen] EKS Installation Information (Resource Information)
[Screen] EKS Node Addition Information
[Screen] EKS Node Addition Status
[Screen] EKS Node Addition Completion Status
[Screen] Cluster Creation List
[Screen] Cluster Configuration Form
[Screen] Cluster Configuration Form After Registration

External Registry Registration

By default, Cocktail Cloud provides Harbor (Registry) when configuring the platform. Additionally, it supports integration with external registries. You can register externally created registries in Cocktail, enabling image builds and deployments.

Create Registry

We have previously created a service map. Now, let's create a registry to register in the workspace.

1. Move to the Registry Creation Screen

1) Click the "+ Create" button in the upper right corner on the [Build Configuration] - [Container Registry] tab.

2. Enter Registry Creation Information

1) Enter the name of the registry to be created in the "Registry Name" field.

2) Input a description for the registry in the "Description" field.

3) Click the "Save" button in the upper right corner.

3. Registry Creation Confirmation

1) Verify the newly created registry in the registry list screen.

4. Next Steps

Next, we will explain the process of creating a workspace, where the cluster, service map, and registry you have created are registered. Please proceed to the "Create Workspace" page.

If you want to use an external registry, refer to the corresponding setup page below.

Setting Up AWS ECR
Setting Up Azure ACR
Setting Up Docker Hub
Setting Up Docker Registry
Setting Up Google GCR
Setting Up Harbor
Setting Up Naver
Setting Up Quay

[Link] AWS Console

Item (* is required)
Content

Registry*

Enter the name of the registry to be created

Description

Enter the description for the registry to be created

[Screen] Registry
[Screen] Registry Creation Basic Information
[Screen] Confirm Registry Creation

Setting Up Azure ACR

Azure Container Registry (ACR) is a container image storage and management service provided by Azure. ACR offers various features necessary for storing and deploying Docker images.

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider to create.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Azure ACR."

2) After registering Azure ACR authentication information in the basic information, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name of the external container registry to be registered

Describe

Enter a description for the external container registry

Endpoint URL*

Enter the login server information

Registry*

Enter the name of the already registered registry

Client ID

Enter the username (the ACR admin user name)

Client Secret*

Enter the password (the ACR admin password)

  1. Enter the name of the registry to be created in the registry.

  2. Enter a description for the registry in the description.

  3. Enter the EndPoint URL in the correct format.

  4. Enter the name of the already registered registry.

  5. Enter the Client ID and Client Secret.

  6. Click "Test Connection" in the upper right corner to verify if the registry is available.

  7. Click the "Save" button in the upper right corner.


How to Create an Azure Container Registry and Verify Authentication Information

1) Access the Azure Portal to retrieve the necessary information.

2) Click on "Management Groups" to efficiently manage access, policies, and compliance for subscriptions. (All services - Management and governance - Management groups)

2-1) Click the "Create" button to go to the creation screen.

2-2) Create a management group to be used internally.

3) Click on "Subscriptions" to create a set that encompasses all resources.

3-1) Click the "Add" button to go to the creation screen.

4) Click on "Resource Groups," a logical container for managing resources grouped together. (All services - Management and governance - Resource Group)

4-1) Click the "Create" button to go to the creation screen.

4-2) [Create Resource Group] - Choose a resource group name and region, then create.

5) Click on "Resources" to create resources that can be used for Azure services.

5-1) In the marketplace, search for "registry" and click on "Container Registry."

5-2) Click the "Create" button.

5-3) Register with subscription, resource group, registry name, and location (region).

6) Click on the created resource and click on "Access Keys."

Registry: Registry Name

EndPoint URL: Login Server

Client ID: User Name

Client Secret : Password

Setting Up Docker Hub

Docker Hub is a container registry built to enable developers and open-source contributors to discover, use, and share container images.

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider for creation.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Docker Hub"

2) After registering Docker Hub authentication information in the basic information, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name of the external container registry to be registered

Describe

Enter a description for the external container registry.

Endpoint URL*

Enter the endpoint address of the external container registry (for Docker Hub this is usually fixed as https://docker.io)

Namespace*

Enter the registered registry namespace

Access ID*

Docker Hub username (nickname) or email address

Access Secret*

Docker Hub access token

  1. Enter the name of the registry to be created in the registry.

  2. Enter a description for the registry in the description.

  3. Enter the EndPoint URL in the correct format.

  4. Enter the name of the registered registry.

  5. Enter Access ID and Access Secret.

  6. Click "Test Connection" in the upper right corner to verify if the registry is available.

  7. Click the "Save" button in the upper right corner.


To confirm the namespace

1) Log in to Docker Hub and click on "Repositories" at the top.

2) Click on "Create repository" in the upper right corner.

3) Create the repository by specifying the namespace and the name of the image to be generated.

Since Docker Hub creates the repository and image name together, the image name in the Build/Pipeline > Build section must match the Docker Hub repository name for the image build to work. For example, if the repository is "myapp," the image name in the build settings should also be "myapp."


To verify Access ID & Access Secret

1) Click on the profile in the upper right corner and select "My Account."

2) Confirm the Access ID

Access ID : User's nickname or email address

3) Click on the "Security" tab on the right, then click on "New Access Token" in the upper right.

4) Enter a description for the token and copy the Access Token upon issuance.

Access Token : Copy Access Token

Setting Up Harbor

Harbor is a container registry that can be used in addition to the internal cluster registry registered through "Create Registry."

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider to create.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Harbor"

2) After registering Harbor authentication information in the basic information, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name of the external container registry to be registered

Describe

Enter a description for the external container registry

Endpoint URL*

Enter the Endpoint Address of the external container registry

Registry*

Enter the name of the already registered registry

Access ID*

Harbor Login ID

Access Secret*

Harbor Login Password

CA Certificate

Enter the private (CA) certificate information

Insecure

Specify whether to allow an insecure connection (skip certificate verification)

  1. Enter the name for the registry to be created in the registry.

  2. Enter a description for the registry.

  3. Enter the EndPoint URL.

  4. Enter the registry name.

  5. Enter Access ID and Access Secret.

  6. If CA Certificate exists, enter it (optional).

  7. Check the option for using Insecure (optional).

  8. Click "Test Connection" in the upper right corner to verify if the registry is available.

  9. Click the "Save" button in the upper right corner.

Setting Up AWS ECR

Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry that provides highly available and secure hosting for container images and artifacts. It allows you to deploy your applications reliably anywhere.

1. Navigate to the External Registry Creation Screen

1) Move to [Build Configuration] - [External Container Registry].

2) Click the "+ Register" button in the upper right corner and select the provider you want to create.

2. Enter Registry Registration Information

1) Click the "+ Register" button, and then select "AWS ECR"

2) After entering the AWS authentication information in the basic details, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name for the external container registry you want to register

Description

Enter the description for the external container registry you want to register

Endpoint URL*

Endpoint Address of the External Container Registry

Region*

Region of the Registered Registry

Registry*

Name of the Registered Registry

Access ID*

Access Key

Access Secret*

Secret Access Key

  1. Enter the name of the registry to be created in the registry.

  2. Enter a description for the registry in the description.

  3. Enter the EndPoint URL in the correct format.

  4. Select the region of the registered registry.

  5. Enter the name of the registered registry.

  6. Enter Access ID and Access Secret.

  7. Click "Test Connection" in the upper right corner to verify if the registry is available.

  8. Click the "Save" button in the upper right corner.


Access ID & Access Secret Verification Method

1) Access the AWS Console to retrieve the necessary information.

3) Click "Other" among the next buttons and click "Next."

4) Enter a description tag for the access key to be created and click the "Create access key" button.

5) Confirm the generated access key and secret access key values.

Creating a Registry

You can create both private and public repositories.

(Note: This guide is written based on the private repository in the 'us-east-1' region, and the process is the same for public repositories.)

1) Click the "Create Repository" button in the upper right corner.

2) Enter the repository name to create the repository.

(In Cocktail, builds are pushed as "Registry Address/Image Name," so please make sure to create the repository as "Registry Name/Image Name.")
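As an alternative to the console, the repository can also be created with the AWS CLI. A minimal sketch, where the repository name and region are placeholders:

aws ecr create-repository \
  --repository-name my-team/my-app \
  --region us-east-1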

EndPoint URL & registry Verification Method

1) Retrieve the necessary information from the list of created repositories in Cocktails.

EndPoint URL : The table below provides the necessary information

Category
EndPoint URL

Private

(User Number).dkr.ecr.(Region).amazonaws.com

Public

public.ecr.aws/(User Alias)

Registry : Repository Name Created by User


※ Permissions need to be granted separately for Private Registries.

Private Registry Permission Example

1) Click on "Settings" in [Amazon Elastic Container Registry] for the [Private registry].

2) Click the "Generate Policy" button in the upper right corner of [Settings] - [Permissions].

3) lick "JSON" in the upper right corner, add the following items, and click "Save Policy" to save.

Sid: Permission Name

Principal: Specify one or more AWS account IDs to grant permission. Specify more than one account using a comma-separated list.

Action : “ecr:*”
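Put together, the saved policy statement might look like the following sketch; the Sid and account ID are placeholders, and the Action list and Resource scope should be adjusted to your own requirements.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "CocktailRegistryAccess",
      "Effect": "Allow",
      "Principal": {
        "AWS": ["arn:aws:iam::111122223333:root"]
      },
      "Action": "ecr:*",
      "Resource": "*"
    }
  ]
}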

Setting Up Docker Registry

Docker Registry is a Docker Hub-based registry for private use.

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider for creation.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Docker Registry"

2) After registering Docker Registry authentication information in the basic information, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name of the external container registry to be registered

Describe

Enter a description for the external container registry

Endpoint URL*

External Container Registry Endpoint Address

Registry*

Enter the name of the already registered registry

Access ID*

Docker Login ID

Access Secret*

Docker Login Token

  1. Enter the name of the registry to be created in the registry.

  2. Enter a description for the registry in the description.

  3. Enter the EndPoint URL in the correct format.

  4. Enter the name of the registered registry.

  5. Enter Access ID and Access Secret.

  6. Click "Test Connection" in the upper right corner to verify if the registry is available.

  7. Click the "Save" button in the upper right corner.


Access ID & Access Secret Verification Method

1) Click on the profile in the upper right corner and select "My Account."

2) Confirm the Access ID

Access ID : User's nickname or email address

3) Click on the "Security" tab on the right, then click on "New Access Token" in the upper right.

4) Enter a description for the token and copy the Access Token upon issuance.

Access Token : Copy Access Token

https://console.aws.amazon.com/console
[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Azure ACR)
[Screen] Register Azure Registry Authentication Information
[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Docker Hub)
[Screen] Register Docker Hub Registry Authentication Information

External Container Registry Endpoint Address: for Docker Hub this is mostly fixed as https://docker.io

[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Harbor)
[Screen] Register Harbor Registry Authentication Information
[Screen] Registry
[Screen] Select External Registry
[Screen] Select External Registry (AWS ECR)
[Screen] Register AWS Registry Authentication Information


[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Docker Registry)
[Screen] Register Docker Registry Registry Authentication Information
Microsoft Azure
https://docker.io

Setting Up Naver

Naver Container Registry allows for the easy storage and management of container images in a private Docker registry and facilitates straightforward deployment to the Naver Cloud Platform.

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider to create.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Naver"

2) After registering Naver authentication information in the basic information, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name of the external container registry to be registered

Describe

Enter a description for the external container registry

Endpoint URL*

Enter the Endpoint Address of the external container registry

Region*

Specify the region of the registered registry

Registry*

Enter the name of the registered registry

Access ID*

Enter the access key

Access Secret*

Enter the secret access key

  1. Enter the name for the registry to be created in the registry.

  2. Enter a description for the registry.

  3. Enter the EndPoint URL in the correct format.

  4. Enter the registry name.

  5. Enter Access ID and Access Secret.

  6. Click "Test Connection" in the upper right corner to verify if the registry is available.

  7. Click the "Save" button in the upper right corner.


Endpoint URL Guide

Here are the steps to enable or disable the Public Endpoint.

  1. Log in to the Naver Cloud Platform Console.

  2. Click on "Services," then navigate to "Containers" > "Container Registry"

  3. Click on the target registry name in the list.

  4. In the detailed information section, click on the gear icon in the "Configuration" tab.

  5. In the Configuration settings popup, click the toggle button for the "Public Endpoint" item to enable or disable it. After setting the preference, click the [Confirm] button to save the changes.

Aceess ID & Access Key 발급 가이드

Setting Up Quay

Quay Registry is an open-source container image registry used for storing and managing Docker images. Originally developed by CoreOS, it is currently part of Red Hat.

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider to create.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Quay"

2) After registering Quay authentication information in the basic information, click the "Test Connection" button.

Item (* is required)
Content

Name*

Enter the name of the external container registry to be registered

Description

Enter a description for the external container registry

Endpoint URL*

External Container Registry Endpoint Address

Registry*

Enter the name of the already registered registry

Access ID+RobotID*

Enter the issued Access ID + Robot ID

Access Secret*

Enter the issued Secret

  1. Enter the name for the registry to be created in the registry.

  2. Enter a description for the registry.

  3. Enter the EndPoint URL in the correct format.

  4. Enter the registry name.

  5. Enter Access ID + Robot ID and Access Secret.

  6. Click "Test Connection" in the upper right corner to verify if the registry is available.

  7. Click the "Save" button in the upper right corner.


Access ID + RobotID & Access Secret Verification

The robot ID needs to be associated with an existing repository.

1) Access Quay and click on the profile in the upper right corner. Select "Account Settings."

2) Click on "Robot Accounts," the second menu in the right tab, and then click "Create Robot Account" in the upper right corner.

3) Create an ID for the robot, select the repository you created, and grant the necessary permissions.

4) Click on the created RobotID to check the Access Secret.

Access ID+ RobotID : The first value in the image.

Access Secret : The second value in the image.

Backup/Restore Preparations

To use backup and restoration in Cocktail, you need a repository where Kubernetes resources can be backed up and stored. Choose an appropriate repository and create it according to your needs.

Cocktail Backup and Restore

'Cocktail Backup/Restore' backs up and restores Kubernetes cluster resources and persistent volumes.

1. Cocktail Backup/Restore provides

  1. Ensured safe backup and fast restoration.

  • Protection of all resources for quick restoration when needed.

  • Automated backup scheduled as per the defined intervals.

  • Adjustable retention periods for efficient operations.

  2. It provides excellent portability.

  • By eliminating specific vendor dependencies, you can freely utilize it in a variety of environments.

  • Rapid restoration through redundant configurations in case of disasters.

  • Service expansion through backup and restoration.

  3. Consistent UI and backup/restore status.

  • It allows easy management of multiple clusters through a unified user interface.

  • Users can conveniently monitor the backup and restoration status at a glance.

  4. With Cocktail Backup & Restore, you can easily perform tasks like:

  • Backing up and, if necessary, restoring clusters.

  • Cloning clusters.

  • Migrating cluster resources to other clusters.

  • Periodically backing up cluster resources for easy restoration to a previous state in case of unforeseen issues.

2. Cocktail Repository

  1. The Cocktail Repository is associated with object storage where backups are stored and manages connection states periodically.

  2. Adding a cluster to the Cocktail Repository is used for cluster migration.

  3. Cocktail supports integration with various object storage options.

  • AWS

  • GCP

  • Azure

  • MinIO(local storage)

3. Cocktail Backup

Cocktail Backup captures the current state of Kubernetes resources and creates a restore point.

  1. Necessary information for users to create a restore point, such as the protected target cluster, data storage, retention policy, schedule, etc., is stored, making backup management easy.

  2. Stored information in Cocktail Backup helps users efficiently replicate backups.

4. Cocktail Restore Point

  1. Cocktail Restore Point stores the state of Kubernetes resources at a specific point in time in the object storage.

5. Cocktail Restore

  1. Cocktail Restore uses the restore point to restore the Kubernetes state to a specific point in time, helpful in system failures, user errors, or other unexpected situations.

6. Overall Process

  1. Create the storage for backup and restore.

  2. Generate and execute backups to create restore points.

  3. Perform restoration using the restore points.

Create a Workspace

A workspace is an area where cluster resources are allocated for building, deploying, and operating applications, typically organized on a team basis.

1. Create Workspace

1) Click on the "+ Create" button in the upper right corner of the [Settings] - [Workspace] tab.

2) In the basic information form of the workspace creation screen, enter the workspace name in the "Name" attribute.

3) Choose a color for the workspace in the "Color" attribute.

  • The selected color will be applied to the 'Top Bar' of the screen.

4) Enter a description for the workspace in the "Description" attribute.

2. Register Registry

1) Select the created workspace name from the list on the workspace screen.

2) Click on the image registry icon on the right side of the basic information.

3) Choose the previously created registry and click the "Apply" button.

4) Confirm the registered registry.

3. Assign/Remove Clusters

3.1 Assign Cluster

1) Select the created workspace name from the list on the workspace screen.

2) Click the "Assign/Remove Cluster" button on the right side of the allocated cluster, and the Assign/Remove window will appear.

3) In the "Select" of the allocation/removal window, choose the previously created cluster and click the "+ Add" button.

4) Click the "Save" button at the bottom right.

5) Confirm the list of allocated clusters.

3.2 Remove Cluster

1) In the Assign/Remove window, click the "Assign/Remove Cluster" button on the right side of the allocated cluster, and the Assign/Remove window will appear.

2) Press the "-" button on the right of the cluster you want to delete, remove it from the list, and then click the "Save" button at the bottom right.

3) Confirm that the cluster has been removed from the list of allocated clusters.

4. Allocate/Retrieve Service Maps

4.1 Allocate Service Map

1) Select the created workspace name from the list on the workspace screen.

2) Click the "Service Map Allocation/retrieval " button on the right side of the allocated service map.

3) Select the cluster, choose the service map, and click the "+ Add" button.

4) Confirm the added item, select the target service map group, and click the "Save" button at the bottom right.

5) Confirm that the service map has been allocated in the list of allocated service maps.

4.2 Retrieve Service Map

1) Click the "Service Map Allocation/retrieval" button on the right side of the allocated service map.

2) Press the "-" button on the right of the service map you want to revoke, remove it from the list, and then click the "Save" button at the bottom right.

3) Confirm that the service map has been revoked in the list of allocated service maps.

5. Assign/Remove Build Servers

5.1 Assign Build Server

1) Select the created workspace name from the list on the workspace screen.

2) Click "Assign/Remove Build Server" on the right side of the allocated build server.

3) After selecting the build server, click the "+ Add" button.

4) Confirm the added item and click the "Save" button at the bottom right.

5) Confirm that the build server has been allocated in the list of allocated build servers.

5.2 Remove Build Server

1) Click "Assign/Remove Build Server" on the right side of the allocated build server.

2) Press the "-" button next to the build server you want to revoke, remove it from the list, and then click the "Save" button at the bottom right.

3) Confirm that the build server has been revoked in the list of allocated build servers.

6. Register/delete Members

6.1 Register Members

  • Users with the USER role must be registered as workspace members in order to access the workspace.

1) Click the "Register/delete Members" button on the right side of the members.

2) Select the name of the member to be registered in the workspace, grant permissions, and click the "Save" button at the bottom.

3) Confirm that the member has been registered with the selected permissions.

6.2 delete Members

1) Click the "Register/delete Members" button on the right side of the members.

2) Select the name of the member to be removed, uncheck the combo box, and click the "Save" button.

3) Confirm that the member has been removed.

[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Naver)
[Screen] Register Naver Registry Authentication Information
[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Quay)
[Screen] Register Quay Registry Authentication Information
backup process
restore process
AWS S3 Configuration
Azure Blob Storage Configuration
Google Cloud Storage Configuration
MinIO Configuration

Permission
Feature

MANAGER

Can create and delete Workspaces, Clusters, and Service Maps, along with most other functionalities.

USER

Has access to almost all features except for certain capabilities such as Workspace settings, resource allocation, Limit Ranges, and Network Policy.

DEVOPS

In CI/CD pipelines, can only execute image builds (deployment and deletion are restricted).

DEV

Permitted to view CI/CD pipeline details but not allowed to deploy, modify, or delete.

VIEWER

Within the USER role's scope, only viewing capabilities are granted.

[Screen] Workspace Creation Screen
[Screen] Basic Information Form on Workspace Creation Screen
[Screen] Screen after Clicking on the Created Workspace Name
[Screen] Image Registry Screen
[Screen] Confirmation Screen for Image Registry Registration
[Screen] Screen After Clicking on the Created Workspace Name
[Screen] Cluster Assign/Remove Window
[Screen] Cluster Assign/Remove Window
[Screen] Screen after Clicking on the Created Workspace Name
[Screen] Service Map Allocation/retrieval
[Screen] Allocation/retrieval Service Map
[Screen] Service Map Allocation/retrieval
[Screen] Screen after clicking the created workspace name
[Screen] Assign/Remove Build Server
[Screen] Allocate/Remove Build Server
[Screen] Assign/Remove Build Server
[Screen] Screen after clicking the created workspace name
[Screen] Workspace Member Registration Completed Screen
[Screen] Member Registration/delete
[Screen] Member Removal Completed

AWS S3 Configuration

Amazon S3 (Simple Storage Service) is a cloud-based, secure, and scalable object storage service provided by Amazon Web Services (AWS).

1. AWS S3

  1. To use AWS S3 storage, please refer to the following guide.

2. Creating an AWS S3 Bucket

How to Create a Bucket

1) Log in to the console and click on "S3."

(Note: This guide assumes the use of the "us-east-1" region.)

2) Click on the "Create bucket" button to navigate to the creation screen.

3) Set the region, bucket name, and default encryption according to user preferences, then click the "Create bucket" button at the bottom right to create the bucket.
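If you prefer the AWS CLI, an equivalent bucket can be created with a single command. A sketch assuming a placeholder bucket name and the "us-east-1" region used in this guide:

aws s3 mb s3://cocktail-backup-bucket --region us-east-1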

Checking Authentication Information

1) Access the AWS Console to retrieve the necessary information.

3) Click "Other" among the next buttons and click "Next."

4) Enter a description tag for the access key to be created and click the "Create access key" button.

5) Confirm the generated access key and secret access key values.

Azure Blob Storage Configuration

Azure Blob Storage is an object storage service provided by Microsoft Azure cloud platform.

1. Azure Blob Storage

  1. To use Azure Storage, please refer to the guide below.

2. Azure Blob Storage Creation

Bucket Creation Method

1) Access the Azure Portal to obtain the necessary information.

2) Efficiently manage access, policies, and compliance regulations for your subscription by clicking on "Management groups."(All services - Management and governance - Management groups)

2-1) Click the "Create" button.

2-2) Create a management group for internal use.

3) Click on "Subscriptions" to create a logical container called "Resource Group" for grouping resources for efficient management.(All services - Management and governance - Subscriptions)

3-1) Click the "Create" button to go to the creation screen.

4) Click on "Subscriptions" to create a logical container called "Resource Group" for grouping resources for efficient management.(All services - Management and governance - Resource Group)

4-1) Click the "Create" button to go to the creation screen.

4-2) [Create a Resource Group] - Enter the resource group name and select the region before creating.

5) Click on "Resources" to create resources that can be used in Azure services.

5-1) Search for "Storage account" in the marketplace and click on "Storage account."

5-2) Click the "Create" button.

5-3) Enter subscription, resource group, storage account name, and region to register.

6) Click on the created resource, and under [Data Storage], click on "Containers."

6-1) Click "+ Container" to create storage.

Bucket : Container Name
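The same container can also be created with the Azure CLI. A minimal sketch, where the resource group, storage account, and container names are placeholders:

az group create --name cocktail-backup-rg --location koreacentral
az storage account create --name cocktailbackupsa --resource-group cocktail-backup-rg --location koreacentral
az storage container create --name cocktail-backups --account-name cocktailbackupsa --auth-mode login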


Authentication Information Confirmation Method

AZURE_SUBSCRIPTION_ID

  1. Select the subscription from the Azure service title. If your subscription is not displayed, use the search box to find it.

  2. Find the subscription in the list and verify the subscription ID displayed in the second column.

AZURE_TENANT_ID

  1. Use the search to find "Tenant properties."

  2. Find the Tenant ID in the Overview section of the Basic Information screen.

AZURE_CLIENT_ID & AZURE_CLIENT_SECRET

Please refer to the guide below for more details.

Backups

When a Cocktail backup is created and executed, a backup of the target is created based on the backup information.

1. Preparation Before Backup Creation

2. Access the Backup Creation Page

  1. Select the [Backup/Restore] - [Backup] section.

  2. Click the [Create] button.

3. Enter backup information

  1. Input the details for the backup.

3.1 Basic Settings

Select the cluster and data storage to be protected by the backup

3.2 Backup Scope

The backup scope includes selecting the entire cluster or specific namespaces, label selectors, and choosing resources to be protected by the backup

1. View Details of Backup-Eligible Resources

  • The number inside the parentheses indicates the total count of backup-eligible resources.

2. Backup Scope

  • The backup scope allows you to choose either [Entire Cluster] to back up the entire cluster or [Selecting Namespace] to choose specific namespaces.

  • If you choose [Selecting Namespace], additional items will be displayed as follows.

  • Verify the selected namespace.

3. Label Selector

  • Using the label selector during backup allows you to perform backups targeting only resources that match specific labels or label conditions.

  • The backup target is limited to resources that satisfy all specified label conditions.

4. Resources

  • This item configures the cluster backup target.

  • "If you choose [Entire Resources] under Resources, it sets all resources within the cluster as the backup target. If you choose [Selecting Resources], you can selectively backup specific resources.

  • If you choose [Selecting Resources], additional items will be displayed as follows

3.3 Schedule

  • If you choose 'Not in use,' the backup will run immediately, once. If you choose 'Schedule Execution,' the backup will automatically run at the specified time (or interval).

  • If you select 'Schedule Execution,' the backup will not run immediately but will be automatically executed at the specified time.

  • You can specify the desired time or interval using a cron expression or the @every notation.

4. Save Backup Information

  1. Click the [Save] button.

  2. Confirm the following message and click the [OK] button.

5. Verify Backup

5.1 Verify Backup

  1. You can check the created backup in the backup list.

  2. To view detailed information for the specific backup, click on the backup name.

5.2 To verify the restoration point

  1. When the corresponding backup is executed, a restore point is added. In the case of scheduled backups, the backup runs according to the schedule, and a restore point is added.

  2. You can check the list of all restore points under [Backup/Restore] - [Restore Points].

  3. To view detailed information for a specific restore point, click on the restore point name.

6. Schedule Backup Start/Stop

  1. Navigate to the detailed view of the corresponding scheduled backup.

  2. Click the [Pause Schedule] button.

  3. Click the [OK] button.

  4. Verify that the schedule has been paused.

7. Run Now

When you run the backup now, the backup will be executed, and a restore point will be created.

  1. Navigate to the detailed view of the corresponding backup.

  2. Click the [Run Now] button.

  3. Verify the created restore point.

8. Copy job

Copy job allows you to create a new backup by adding new configurations based on previously created backup settings or by modifying existing settings.

  1. Navigate to the detailed screen of the respective backup.

  2. "Click the [Replicate] button.

  3. Click the [OK] button.

  4. Add new configurations or modify existing settings.

MinIO Configuration

MinIO is an open-source object storage server known for its compatibility with the Amazon S3 API.

1. MinIO

  1. To use MinIO storage, please refer to the guide below.

2. Bucket Creation Method

Bucket Creation Method

1) Access the MinIO console by entering the installed MinIO URL.

2) After logging in, go to [Administrator] - [Buckets] and click the "Create Bucket +" button in the top right corner.

3) Provide an appropriate bucket name and click the "Create Bucket" button to create the bucket.

Authentication Information Confirmation Method

1) After logging into the console, go to [User] - [Access Keys] and click the "Create access key +" button in the top right corner.

2) Save the displayed Access Key and Secret Key separately, then click the "Create" button to generate authentication information.

aws_access_key_id : Access Key

aws_secret_access_key : Secret Key
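If you prefer the command line, the MinIO Client (mc) can perform the same steps. A sketch assuming placeholder alias, server URL, keys, and bucket name:

# Register the MinIO server under an alias using the access/secret keys created above
mc alias set cocktail-minio http://minio.example.com:9000 <ACCESS_KEY> <SECRET_KEY>
# Create the bucket that will hold the backups
mc mb cocktail-minio/cocktail-backups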

Setting Up Google GCR

Google Container Registry provides secure private Docker storage on Google Cloud. The Container Registry is a private Docker storage compatible with widely used continuous deployment systems.

1. Move to the external registry creation screen

1) Navigate to [Build Configuration] - [External Container Registries].

2) Click the "+ Register" button in the upper right corner to select the provider for creation.

2. Enter Registry Registration Information

1) Click the "+ Register" button and select "Google GCR."

2) After registering Google GCR authentication information in the basic information, click the "Test Connection" button.

  1. Enter the name for the registry to be created in the registry.

  2. Enter a description for the registry in the description.

  3. Enter the EndPoint URL in the correct format.

  4. Enter the registry name.

  5. Enter Access ID and Access JSON.

  6. Click "Test Connection" in the upper right corner to verify if the registry is available.

  7. Click the "Save" button in the upper right corner.


Access JSON Verification Steps

1) Log in to the Google Cloud Console.

2) In the Google Cloud Console, click on the "IAM & Admin" button.

3) Under the "IAM & Admin" tab, click on "Service Accounts," then click the "+ Create Service Account" button at the top center.

4) Set the name for the service account, then click the "CREATE AND CONTINUE" button.

5) Set the permissions for the service account to be created.

Owner permissions grant full access to most Google Cloud resources.

You can provide different permissions if needed, but access may be restricted based on the assigned permissions.

6) Click on the created service account, click "Add Key," then click "Create a new key," choose JSON format, and click "Create."

7) Verify that the JSON file has been generated locally.

Access JSON : File contents

Project ID Verification

Project ID : The name in the leftmost select box in the search.

Registry & Endpoint URL Verification

Registry : Repository name

Endpoint URL : Region of the registry to be created

Restoration

Cocktail restore enables the restoration of the system to a specific point in time in the Kubernetes state, addressing system failures, user errors, or other unexpected situations.

1. Preparation before Restore Creation

2. Selecting a Restore Point

  1. After selecting [Backup/Restore] - [Restore Points] section, click on the name of the restore point you want to restore.

  2. Click the [Restore] button.

3. Restore Information Entry

  1. Enter the necessary information for the restoration.

3.1 Restore Scope

The restoration scope includes selecting the entire cluster or specific namespaces, label selectors, and choosing the resources to restore.

1. Restore Scope

  • The restore scope can be selected as [Entire Cluster] to restore the entire cluster or choosing [Selecting Namespace] to restore specific namespaces.

  • "If you choose [Selecting Namespace], additional items will be displayed as follows.

  • If you want to restore to a new or different namespace, enter the namespace name in [Deploy Namespace Name]

  • Click the [Apply] button.

  • Verify the selected namespace.

2. Label Selector

  • Using a label selector during the restore allows you to perform restoration only on resources that match specific labels or label conditions.

  • The restoration target is limited to resources that satisfy all specified label conditions.

3. Resources

  • If you choose [Entire Resources], all resources within the cluster are designated as the restoration target, while choosing [Selecting Resources] allows for the selective restoration of specific resources.

  • If you choose [Selecting Resources], additional options will be displayed below as follows

4. Change Storage Class

  • Changing the storage class allows you to switch to the storage class used by the destination cluster when restoring data to a new cluster

  • Click the [Apply] button

3.2 Restore Target

Restoration target configuration involves specifying the cluster to be restored and assigning a restoration name.

4. Save Restoration Information

  1. Click the [Save] button.

  2. Review the following message and click the [OK] button.

5. Restore Confirmation

  1. Created restorations can be viewed in the restoration history list.

  2. To review detailed information about a specific restoration, click on its restoration name.

6. Restore Deletion

  1. Navigate to the [Backup/Restore] - [Restore list] section, check the item you wish to delete from the list

  2. Click the [Delete] button.

  3. Click the [OK] button.

  • For backups within the same cluster, be cautious when deleting backed-up resources at restoration points, as the restore list will also be deleted

  • For backups to different clusters, deleting backed-up resources at restoration points does not delete the restore list. However, you won't be able to retrieve backup information from the restore list.

Google Cloud Storage Configuration

Google Cloud Storage is an object storage service provided by the Google Cloud Platform (GCP), offering a secure and scalable data storage solution in the cloud.

1. Google Cloud Storage

  1. To use Google Storage, please refer to the guide below.

2. Google Cloud Storage Creation

Bucket Creation Method

1) Access the Google Cloud Console.

2) Open the left menu, navigate to [Cloud Storage] - [Storage] menu.

3) Click the "Create" button in the top right corner to go to the creation screen.

4) Enter the bucket name, select the region (default: us), and click the "Create" button to create the bucket.
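The bucket can also be created with the Google Cloud SDK. A sketch assuming a placeholder bucket name and the default "us" location mentioned above:

gsutil mb -l us gs://cocktail-backup-bucket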

Authentication Information Confirmation Method

1) Access the Google Cloud Console.

2) In the Google Cloud Console, click on the "IAM & Admin" button.

3) Click on the Service accounts tab in IAM & Admin, and then click on the "+ Create Service Account" button at the top center.

4) Set the name for the service account.

5) Set the permissions for the created service account.

Owner permissions provide full access to most Google Cloud resources.

You can grant other permissions if needed, but note that access may be restricted based on the granted permissions.

6) Click on the created service account, click on "Add Key," choose JSON type, and click "Create" to generate a new key.

7) Confirm that a JSON file has been generated locally.

Authentication Information : Contents of the file

Create storages

The Cocktail repository manages the connection status with the object storages where backups are periodically stored.

1. Preparation before storage creation

2. Access the storage creation page

  1. Select the [Backup/Restore] - [Storages] section.

  2. Click the "+ Create" button.

  3. Select the provider.

3. Enter storage information

The required information to be entered varies based on the selected provider. Please double-check and enter the details accurately.

3.1 Common

3.2 AWS

3.3 GCP

3.4 Azure

3.5 MinIO

3.6 Save storage information

  1. Click the "Save" button.

  2. Click the "OK" button.

3.7 Verify the storage

  1. Verify if the storage has been successfully created.

  2. You can click on the name to view detailed information.

Docker Hub Container Image Library | App Containerization
https://console.aws.amazon.com/console/home
https://us-east-1.console.aws.amazon.com/ecr/
Region, Bucket name
Default encryption


Log in to the Azure Portal.

Log in to the Azure Portal.

Make sure you are logged in to the tenant for which you want to retrieve the ID. If not, switch directories to ensure you're working in the correct tenant.

To create a backup, you need to create a storage. Please refer to

Item (* is required)
content

The button links to a screen displaying a list of selectable resources and detailed information about those resources.

Click the icon to select the namespace, then click the [Apply] button.

Click the icon to enter the name and value.

The recent execution status represents the state of the restoration point created due to the most recently executed backup. Refer to for detailed explanations of the states.

If you wish to resume the schedule, click the button and repeat the above steps.

Item (* is required)
Content

List of regions:

To create a restore, there must be a completed restore point. Refer to for more information.

Click the icon to select the namespace.

Click the icon to enter the name and value.

Clicking the icon to select Old Storage Class and New Storage Class

Item (* is required)
Content

Prior tasks are required to create a storage. Please refer to

Item (* is required)
Content
Item (* is required)
Content
Item (* is required)
Content
Item (* is required)
Content
Item (* is required)
Content

Name*

  • Name of the backup to be created

Select Cluster*

  • Cluster to be backed up

Select Storage*

  • Select the storage

  • Only select storage with Read-Write permission

Backup Retention Period*

  • Backup retention period

  • specified in hours. Enter only numeric values

#1 In Cron expression for UTC time
   
   * * * * *
   │ │ │ │ │
   │ │ │ │ └─ Day of the week (0-6, 0 and 7 represent Sunday)
   │ │ │ └─── Month (1-12)
   │ │ └───── Day of the month (1-31)
   │ └─────── Hour (0-23)
   └───────── Minute (0-59)

   
#2 @every notation : "@every 6h", "@every 24h","@every 168h"
  Maximum unit is in hours (combinations of hours, minutes, and seconds are possible)

Name*

Enter the name of the external container registry to be registered

Describe

Enter a description for the external container registry

Endpoint URL*

Enter the Endpoint URL corresponding to the registry region

Registry*

Enter the name of the already registered registry

Access ID*

_json_key

Access JSON*

Enter the google service account private key (in JSON key format)

Restore target cluster*

Cluster to restore (to be relocated)

Restore Name*

Restore Name

Name*

  • Storage Name

  • Non-overlapping

Cluster*

  • Set the AS-IS cluster (the cluster to back up) to Read-Write

  • Set the TO-BE cluster (the cluster to migrate to) to Read-Only

  • When performing backup and migration operations within the same cluster, set it to Read-Write

Bucket*

  • Bucket of Storage

prefix

  • Prefix of the bucket

  • Enter a specific path if there is any for the bucket in the storage

Authentication Information*

  • Enter Server IP information for the storage

  • The format varies depending on the provider (refer below)

Authentication information*

  • Enter authentication information with AWS storage permissions

  • Example:

    [default]
    aws_access_key_id =
    aws_secret_access_key =
    # Optional - role_arn
    role_arn =

Region*

  • Enter the region information for AWS storage

Encryption Algorithm*

  • The server-side encryption type for AWS

  • You can choose between SSE-S3 and SSE-KMS; if selecting SSE-KMS, you must additionally provide the KMS key

KMS Key

  • the KMS key ID generated in AWS KMS

  • This is mandatory when selecting SSE-KMS in the encryption algorithm field

  • Input the <key-id> from the AWS KMS key ARN.

  • ARN Format

    arn:aws:kms:<region>:<account-ID>:key/<key-id>

profile

  • Enter the AWS profile configured in the authentication information section

Authentication information*

Service Account Key JSON

KMS Key

  • The Cloud KMS key name to be used for backup encryption

Authentication information*

AZURE_SUBSCRIPTION_ID=
AZURE_TENANT_ID=
AZURE_CLIENT_ID=
AZURE_CLIENT_SECRET=
AZURE_CLOUD_NAME=AzurePublicCloud
AZURE_ENVIRONMENT=AzurePublicCloud

Resource group name*

  • The name of the resource group containing the storage account

Storage account name*

  • The name of the storage account

Block size (Byte)

  • The block size to be used when uploading objects

Authentication information*

[default]
aws_access_key_id =
aws_secret_access_key =

s3Url*

  • Object storage API URL

publicUrl*

  • Object storage API URL accessible from external sources

Skip TLS certificate

  • Whether to skip TLS certificate verification

Microsoft Azure
[Screen] Azure Access Portal
[Screen] Create
[Screen] Detailed View after Creation
[Screen] Access the Backup Page
[Screen] Access the Backup Creation Page
[Screen] Enter Backup Information
[Screen] Enter Basic Settings
[Screen] Enter Backup Scope
[Screen] Select Namespace
[Screen] Select Namespace
[Screen] Confirm Selected Namespace
[Screen] Label Selector
[Screen] Select Resources
[Screen] Restore Creation - Schedule
[Screen] Save Backup Information
[Screen] Save Backup Information
[Screen] Backup List
[Screen] Backup List
[Screen] Backup Details
[Screen] Backup Details
[Screen] Backup Details - Restore Point Details
[Screen] Restore Point List
[Screen] Restore Point List
[Screen] Restore Point Details
[Screen] Backup List
[Screen] Schedule Backup Details
[Screen] Pause Schedule
[Screen] Confirm Pause Schedule
[Screen] Backup List
[Screen] Backup Details - Run Immediately
[Screen] Backup Details - Run Immediately - Create Restore Poin
[Screen] Move to Detailed Screen in Backup List
[Screen] Backup Details - Replication
[Console Access Screen]
[Screen] Registries
[Screen] Select External Registries
[Screen] Select External Registries(Google GCR)
[Screen] Register Google GCR Registry Authentication Information
[Screen] Restore Point - Restore
[Screen] Restore Creation
[Screen] Restore Creation - Restore Scope
[Screen] Restore Creation - Namespace
[Screen] Restore Creation - Namespace Popup
[Screen] Restore Creation - Namespace Confirmation
[Screen] Restore Creation - Label Selector
[Screen] Restore Creation - Restore Target Configuration
[Screen] Restore Creation - Save
[Screen] Restore Creation - Save Popup
[Screen] Restore History List
[Screen] Restore History List - Name
[Screen] Restore Details
[Screen] Restore History - Check Box
[Screen] Restore History - Delete
[Screen] Restore History - Delete Popup
[Screen] Restore Details - Unable to view restoration information
[Screen] Access to Storage Page
[Screen] Access to Storage Creation Page
[Screen] Selecting a Provider
[screen] Enter AWS Storage Information
[Screen] Enter GCP Storage Information
[Screen] Enter Azure Storage Information
[Screen] Enter MinIO Storage Information
[Screen] Save Storage Information
[Screen] Save Storage Information
[Screen] Storage List
[Screen] Storage List
[Screen] Storage Details

Installation

You can collect and analyze each log by installing and registering a log service in your cluster.

Azure Portal
Azure Portal
Azure Portal
switch directories
this place
https://cloud.google.com/artifact-registry/docs/repositories/repo-locations
this guide
this place
this link

Setting

Backup/Restore Overview

The Cocktail Backup/Restore Overview summarizes backup operation status and statistics, cluster backup agent status, and storage usage statistics.

1. Backup/Restore Overview Page

  1. Navigate to [Backup/Restore] - [Overview].

2. Backup schedules

  • The backup schedule uses different colors to represent the statuses of 'New', 'Running', 'Paused', and 'Failed', allowing users to visually assess the overall status distribution based on the total count and the count of each status.

  • Please refer to the details below for a comprehensive explanation of each status.

    Item
    Content

    New

    Ready for the backup schedule to be created (Indicates the state of being ready to create a backup schedule)

    Running

    Backup schedule job is in progress (Indicates that the backup is scheduled to be created according to the schedule)

    Paused

    Backup schedule job has been stopped

    Failed

    Indicates that the validation of the backup schedule failed, and the backup will not proceed

3. Cluster Backup Agent Status

  • The cluster backup agent status indicates the health and installation status of the backup agent.

  • The status is represented as one of the following: 'Healthy', 'Unhealthy', or 'Install'.

  • Please refer to the details below for a comprehensive explanation of each status.

    Item
    Content

    Healthy

    Backup agent is in normal state

    Unhealthy

    Backup agent is in abnormal state

    Install

    Backup agent is not installed

4. Storage Usage

  • You can visualize the storage usage of repositories registered in Cocktail through a graph.

  • Each rectangle represents a repository, with colors distinguishing each one.

  • The size of the rectangle visually indicates the relative usage of the corresponding repository.

  • You can zoom in or out on the graph using the scroll function.

  • When the graph is zoomed in or out, you can click the [Storage Usage] button at the bottom to revert to its original size.

5. Restore Point

  • The restore points are represented by different colors based on five statuses: 'New', 'InProgress', 'Completed', 'Failed', and 'Deleting', reflecting their distribution in proportion to their respective counts. This enables users to visualize the overall status distribution by observing the total count and the count of each status.

  • Please refer to the details below for a comprehensive explanation of each status.

Item
Content

New

Backup preparation is complete and ready to start. (Status indicating that you are waiting to start a backup job)

InProgress

Backup job is in progress

Completed

Backup job has successfully completed

Failed

Backup job has failed

Deleting

All data related to the backup is being deleted

6. Restore

  • Restoration is visually represented with different colors based on five statuses: 'New', 'InProgress', 'Completed', 'Failed', and 'Deleting', allowing users to assess the overall status distribution by considering the total count and the count of each status.

  • Please refer to the details below for a comprehensive explanation of each status.

Item
Content

New

Restore preparation is complete and ready to start (Status indicating that you are waiting to start a restore job)

InProgress

Currently undergoing the restoration process

Completed

The restoration operation has completed successfully

Failed

An issue occurred during the restoration process

Deleting

Data related to the restoration is being deleted

7. Recent Restore Points

  • The recent restore points display the five most recent restoration points as a list.

  • Each entry includes age and status information.

  • Clicking on the name of a restore point navigates to the detailed information page for that specific restore point.

8. Recent Restores

  • The recent restores display the five most recent restoration items as a list.

  • Each entry includes age and status information.

9. Storage remaining space

  • The storage remaining space displays the top 5 storage entries with Object Storage size limitations, sorted in ascending order of available space.

  • Each entry is accompanied by a graph representing the available space visually, along with the remaining space capacity and its percentage.

  • The grey area on the graph represents unused spare space.

  • Clicking on the storage name navigates to the storage details page.

  • Storage entries connected to the same MinIO object storage display the same disk usage.

Install Log Service

Log Service uses 'OpenSearch' to provide log storage and an API server to communicate with Cocktail Dashboard.

Before installing the log service, 'cert-manager' and 'nginx' require pre-installation.

1. Distribution Addon

1) Go to Infrastructure - Clusters - Addon List, click the "Deploy" button, and then click the "Deploy" button for 'cocktail-log-service' in the list.

2) Check the settings according to your environment and click the Deploy button to deploy the Addon.

Gateway Service Mode : Log service gateway type (Ingress, LoadBalancer)


[If Gateway Service Mode is set to Ingress]

Gateway Access URL : DNS to access log service through Ingress

URL Type : Cluster HostAliases type (PublicDNS or HostAliases)

Host IP : (If URL Type is 'HostAliases') The LB IP (or node IP) that can be used to access the cluster from outside

When accessing the log service from the dashboard or collecting logs from log-agent, connect the Host IP to the Log Access URL.

Enable OpenSearch Dashboard : Whether to use the 'OpenSearch Dashboard'


[If Gateway Service Mode is set to LoadBalancer]

For clusters created through provisioning, only the 'Enable OpenSearch Dashboard' option can be set.

3) Check the deployment status; if the status is 'Running', the deployment has completed successfully.

If the status is 'pending' when deployed

4) Create a job in the namespace where the log service is installed.

Create a Job that creates policies for the container logs, cluster audit logs, and application logs collected by OpenSearch.

apiVersion: batch/v1
kind: Job
metadata:
  name: policy-generate
  namespace: cocktail-logs
spec:
  ttlSecondsAfterFinished: 600
  template:
    spec:
      containers:
        - image: [registry address]/library/cocktail-auto-injector:1.0.0-release.20240425
          imagePullPolicy: Always
          command:
            - /bin/sh
            - -c
            - sh /auto/service/gen.sh -H opensearch-cluster-master.cocktail-logs -c 30d -u 100d -a 30d -m run
          name: gen
      restartPolicy: Never
  backoffLimit: 0

registry address : Please contact our technical team.

If you create a job according to the settings above, container logs have a storage period of 30 days, cluster audit logs have a storage period of 56 days, and application logs have a storage period of 1 year.

Storage period settings can be modified in 'OpenSearch Dashboard'.

Cocktail Log Service

The Cocktail Log Service manages the logs of containers running in the cluster and plays an important role in operations and troubleshooting.

1. Cocktail Log Service

  1) Collect logs for all resources within the cluster.

  • Collects all container logs and audit logs within the cluster.

  • Collects application logs for workloads that exist in Cocktail.

  • For application logs, only authenticated applications can be viewed.

  • Logs are collected using the open source 'OpenTelemetry' and 'OpenSearch'.

  2) Provide a service that searches and analyzes collected logs.

  • Collected logs can be analyzed by filtering them with labels.

  • Collected logs are aggregated by time so users can analyze log trends.

  • Collected logs can be searched and analyzed.

2. Recommended specifications

This guide is based on installation on an 8Core 16GB node.

  • master node : 1Core, 2GB

  • data node : 2Core, 4GB

  • dashboard : 500mCore, 1GB

3. Architecture

Managing cluster container and application logs effectively increases operational efficiency and ensures system stability. The following describes each log screen and how to collect logs for applications.

Register Log Service

Only one installed log service can be registered on the platform at a time.

1. Load the list of log services installed as Addon and register them on the platform

1) Go to Settings - Basic Information and click the selection box for the log service name to view the list of installed log services.

2) Specify the log service you want to register and click the “Register” button to register it on the platform.

Change log service

When changing the log service, simply select a different log service from the list and click the "Register" button to change it.

Deregister log service

You can discontinue the log service function registered on the platform by clicking the “Deregister” button.

Install Log Agent

If you install the log agent, you can check the collected container logs and audit logs on the cocktail dashboard.

1. Deploy Addon

1) Infrastructure - Cluster - Addon List - Click the "Deploy" button, then click the "Deploy" button for 'cocktail-log-agent' in the list.

2) Check the settings according to your environment and click the Deploy button to deploy the Addon.

Enable Container log collecting : Whether to collect container logs

Enable Audit log collecting : Whether to collect cluster audit logs

audit-log hostPath path : If it is not a cluster installed with 'cube', you will need to change the path.

includeNamespace : List of collection processing namespaces

If you want to collect container logs only for specific namespaces, uncomment this setting and list the namespaces, as in the sketch below.
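The exact field layout depends on the addon chart version, so the following is only a hypothetical sketch of what this setting could look like (the namespace names are placeholders).

# Hypothetical sketch of the log-agent addon values
# Uncomment includeNamespace and list only the namespaces whose container
# logs should be collected; other namespaces are then skipped.
includeNamespace:
  - cocktail-system
  - my-app-namespace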

Change Opensearch Admin password

The password cannot be modified in the dashboard; it can only be changed with a script.

  1. Connect to the master node via a terminal.

  2. Create a password hash (using the provided tool script).

  3. Load the current settings.

  4. Change the settings and run the reflection script.

Install Log Operator

Log Operator is an Addon required when using automatic container log instrumentation among the application log collection methods.

1. Distribution Addon

1) Infrastructure - Clusters - Addon List - Click the "Deploy" button, then click the "Deploy" button for 'cocktail-log-operator' in the list.

2) Check the settings according to your environment and click the Deploy button to deploy the Addon.

3) Check the deployment status; if it is 'Running', the deployment has completed successfully.

Application logging

We recommend that you select a method that suits your environment.

1. Automatic instrumentation of container logs

Collect container logs from workloads using automated instrumentation.

2. Manual instrumentation of file logs (SDK)

The SDK method is used when you want to collect logs from a specific service through Logger settings.

3. Manual instrumentation of file logs (Sidecar)

The sidecar method is used when you want to read and collect log files in a specific directory.

Application Logs

‘Application’ in Cocktail Log Service means ‘Workload that exists in the Service map'.

To view application logs, you must first register the application in the Application Management tab.

1. View Logs

Logging - Application Log - Select an application from the list of applications to view.

View By Hour : You can search logs from the last 5 minutes to 48 hours ago, and check logs from up to 2 weeks ago.

Application: You can view a list of all applications that exist in the cluster.

View More : Fetches additional logs following the last entry in the list.

Current number of logs / Total number of logs : The number of logs currently loaded and the total number of logs for the query.

The maximum number of logs that can be viewed at one time is 5000. The inquiry period is up to 7 days.

The Show More button is displayed when the total number of logs exceeds 5000.

2. View log details

Click the link button for the log viewed by time to check detailed information about the log.

Log Message : You can check the contents of the log that actually occurred.

Label Information : Click the + button to expand and view label information.

Label information: You can close the expanded label information by clicking the - button.

3. View logs by time

Click a bar on the graph once to view the logs for that time; click it again to return to the logs for the entire period.

Graph Select : In this example, you can see that there are 60 logs for the selected time.

4. View Labels

Click the arrow button on the right to see the set of labels for the logs present at that time.

label list key : Indicates a list of labels for viewed logs. You can check the label value by clicking the label button.

label list values : Click a label value to add it as a search condition; multiple conditions are combined with an AND search.

Selected label value : When you add a label condition, it appears above the graph; click the X button to remove the condition from the search.

5. Search Log

Enter the keyword you want to search for and click the search button to view the log for the search term.

Search word : You can search logs where the string exists regardless of case.

6. Download Log

Click the “Download” button at the top of the graph to download the log.

Download: You can download log data for up to 5,000 searched logs in Excel file format. Each column contains a label value.

[Screen] Backup Schedule
[Screen] Cluster Backup Agent Status

Clicking on the button will navigate you to the addon deployment screen for that particular cluster.

[Screen] Storage Usage
[Screen] Restore Points
[Screen] Restore
[Screen] Recent Restore Points
[Screen] Recent Restorations
cd /usr/share/opensearch/plugins/opensearch-security/tools

./hash.sh -p <new_password>

# example - output
sh-5.2$ ./hash.sh -p dhvmstjcl!
**************************************************************************
** This tool will be deprecated in the next major release of OpenSearch **
** https://github.com/opensearch-project/security/issues/1755           **
**************************************************************************
$2y$12$8CL1a9FLy1JwNe5q6yudZuTtzs/9.hkxvk1WnInwOV16JV3P3RoC6

./securityadmin.sh -backup my-backup-directory \
  -icl \
  -nhnv \
  -cacert ../../../config/admin/ca.crt \
  -cert ../../../config/admin/tls.crt \
  -key ../../../config/admin/tls.key
# Search for line target to replace
# Find the hash value line of the admin item
cat -n my-backup-directory/internal_users.yml

    23	  config_version: 2
    24	admin:
    25	  hash: "$2a$12$W5jcOBWSN6/q9bOKhFbTE.m/pK.POlHkLFCcR7W79M479kq3FiOIO"
    26	  reserved: true
    27	  hidden: false
    28	  backend_roles:

# Replace the admin hash: delete the old line, then insert the new one
sed -i '<line number>d' <file name>
sed -i '<line number> i\<content>' <file name>
# sed -i '25d' my-backup-directory/internal_users.yml
# sed -i '25 i\  hash: "$2y$12$8CL1a9FLy1JwNe5q6yudZuTtzs/9.hkxvk1WnInwOV16JV3P3RoC6"' my-backup-directory/internal_users.yml

# reflection
./securityadmin.sh -f my-backup-directory/internal_users.yml \
  -t internalusers \
  -icl \
  -nhnv \
  -cacert ../../../config/admin/ca.crt \
  -cert ../../../config/admin/tls.crt \
  -key ../../../config/admin/tls.key

Manual instrumentation of file logs (SDK)

This method instruments an existing application using the SDK provided by OpenTelemetry.

1. Java

2. Python

Automatic instrumentation of container logs

Using the OpenTelemetry Operator, you can collect container logs simply by adding annotations. Since container logs are collected only for the namespaces configured by the user, the collection method for each language is described below.

You must install the cocktail-log-operator add-on to use it.

1. Java

2. Python

Python

1) CRD installation

You can create one by searching for Infrastructure - Custom Resources - 'instrumentations'.

You can create it by clicking the Create button, selecting the namespace where you want to collect logs, and modifying the form below.

The above CRD is applied on a per-namespace basis, and automatic container log collection can also be used for other languages in the same namespace.

log-agent Service Address : Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.

( http port = 4318 , grpc port = 4317)

2) Add annotations to the applications you want to collect

Add annotations to the workloads in the namespace for which you want to collect logs.

Application - Service Map - Service Map to collect logs - Workload - Select the application to collect logs - Click the "Settings" button.

Switch to the YAML view and add the following annotation under the template - metadata - annotations section.

3) Add service name and token value to environment variables.

4) Check Application Log

1) Logging - Application Log - Search for the application you set in the application list.

Java

You can leverage Custom Resources to configure the OpenTelemetry auto-instrumentation library and add annotations to your workloads to easily collect logs.

1) CRD installation

You can create one by searching for Infrastructure - Custom Resources - 'instrumentations'.

You can create it by clicking the Create button, selecting the namespace where you want to collect logs, and modifying the form below.

The above CRD is applied on a per-namespace basis, and automatic container log collection can also be used for other languages in the same namespace.

log-agent Service address : Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.

( http port = 4318 , grpc port = 4317)

By adding environment variables to an individual application, rather than to all applications in the namespace through the CRD, you can collect logs for only that specific application.

2) Add annotations to the individual applications you want to collect

Add annotations to the workloads in the namespace for which you want to collect logs.

Application - Service Map - Service Map to collect logs - Workload - Select the application to collect logs - Click the "Settings" button.

Switch to the YAML view and add the following annotation under the template - metadata - annotations section.

3) Add service name and token value to environment variables

4) Check Application Log

1) Logging - Application Log - Search for the application you set in the application list.

Java

This method instruments an existing Java application using the SDK provided by OpenTelemetry.

This guide is for existing Java applications that have been built on Cocktail Cloud.

1. Log Appender Setting

A Log Appender is an interface provided by a logging framework or library that collects and processes log messages. OpenTelemetry interacts with the Log Appender through the Log Bridge API to collect log messages and associate them with OpenTelemetry trace data. Log appenders can therefore be used to collect log data and integrate it with OpenTelemetry.

The following shows how to collect logs using Logback and Log4j, two representative logging frameworks.

1) logback

1-1) dependency addition

1-2) logback.xml setting

2) log4J

2-1) dependency addition

2-2) log4j.xml setting

3) Logger setting

2. Image creation via SDK

1) Build/Pipeline - Open the build details of the Java application for which you want to collect logs.

2) Click “Image Build Task” to edit

3) Set the environment variables and download the opentelemetry-java-instrumentation SDK.

3. Container Setting

1) Logging - Copy the token of the application created in Application Management

2) In Container Details - Settings - Environment Variables tab, set environment variables as follows

log-agent Service Address : Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.

( http port = 4318 , grpc port = 4317)

4. Check Application Log

1) Logging - Application Log - Search for the application you set in the application list.

apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: {Instrumentation name}
spec:
  exporter:
    endpoint: {log-agent Service address}:4318
  python:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-python:0.44b0
    env:
      - name: OTEL_LOGS_EXPORTER
        value: otlp_proto_http
      - name: OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
        value: {log-agent Service address}:4318/v1/logs
      - name: OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED
        value: 'true'
        
      # Used when you want to collect logs for all Python applications whose annotation value in the namespace is 'true'.
      - name: OTEL_EXPORTER_OTLP_LOGS_HEADERS
        value: app_token={Application Token},app_name={Application Name}
instrumentation.opentelemetry.io/inject-python: 'true'
apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
    spec:
      containers:
      - env:
        # Application name
        - name: OTEL_SERVICE_NAME
          value: {Application name}
        
        # Settings for authentication
        - name: OTEL_EXPORTER_OTLP_LOGS_HEADERS
          value: app_token={Application token},app_name={application name}     
        
        image: {python-application image}
        imagePullPolicy: Always
        ...
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: {Instrumentation name}
spec:
  exporter:
    endpoint: {log-agent Service address}:4318
  java:
    image: ghcr.io/open-telemetry/opentelemetry-operator/autoinstrumentation-java:2.0.0
    env:
      - name: OTEL_LOGS_EXPORTER
        value: otlp
      - name: OTEL_METRICS_EXPORTER
        value: none
      - name: OTEL_TRACES_EXPORTER
        value: none
        
      # Used when you want to collect logs for all Java applications whose annotation value in the namespace is 'true'.
      - name: OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
        value: {log-agent Service address}:4318/v1/logs
instrumentation.opentelemetry.io/inject-java: 'true'
apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
    spec:
      containers:
      - env:
        # Application name
        - name: OTEL_SERVICE_NAME
          value: {Application name}
        
        # Settings for authentication
        - name: OTEL_EXPORTER_OTLP_LOGS_HEADERS
          value: app_token={Application token},app_name={Application name}        
        
        image: {java-application image}
        imagePullPolicy: Always
        ...
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-logback-appender-1.0</artifactId>
    <version>2.0.0-alpha</version>
    <scope>runtime</scope>
</dependency>
runtimeOnly group: 'io.opentelemetry.instrumentation', name: 'opentelemetry-logback-appender-1.0', version: '2.0.0-alpha'
    ...

    <appender name="OpenTelemetry"
              class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender">
    </appender>

    ...
 
    <logger name="OTLP" additivity="false">
        <appender-ref ref="OpenTelemetry"/>
        <level value="INFO"/>
    </logger>
<dependency>
    <groupId>io.opentelemetry.instrumentation</groupId>
    <artifactId>opentelemetry-log4j-appender-2.17</artifactId>
    <version>OPENTELEMETRY_VERSION</version>
    <scope>runtime</scope>
</dependency>
runtimeOnly("io.opentelemetry.instrumentation:opentelemetry-log4j-appender-2.17:OPENTELEMETRY_VERSION")
<Configuration status="WARN" packages="io.opentelemetry.instrumentation.log4j.appender.v2_17">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout
          pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} trace_id: %X{trace_id} span_id: %X{span_id} trace_flags: %X{trace_flags} - %msg%n"/>
    </Console>
    <OpenTelemetry name="OpenTelemetryAppender"/>
  </Appenders>
  <Loggers>
    <Root>
      <AppenderRef ref="OpenTelemetryAppender" level="All"/>
      <AppenderRef ref="Console" level="All"/>
    </Root>
  </Loggers>
</Configuration>
private static final Logger logger = LoggerFactory.getLogger("OTLP");
// Logging with a logger called OTLP. This can be changed
// If you set it to 'ALL' in the above document, you can log with console and opentelemetry.
...

# Required settings to export log
ENV OTEL_EXPORTER_OTLP_ENDPOINT=$OTEL_EXPORTER_OTLP_ENDPOINT
ENV OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=$OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
ENV OTEL_EXPORTER_OTLP_LOGS_HEADERS=$OTEL_EXPORTER_OTLP_LOGS_HEADERS
ENV OTEL_LOGS_EXPORTER=$OTEL_LOGS_EXPORTER
ENV OTEL_SERVICE_NAME=$OTEL_SERVICE_NAME

# None processing to not collect metrics and traces
ENV OTEL_METRICS_EXPORTER=none
ENV OTEL_TRACES_EXPORTER=none


# Download sdk
RUN wget -q https://github.com/open-telemetry/opentelemetry-java-instrumentation/releases/download/v2.0.0/opentelemetry-javaagent.jar
ENTRYPOINT ["java", "-javaagent:opentelemetry-javaagent.jar", "-jar", "app.jar"]

...
apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
    spec:
      containers:
      - env:
        # opentelemetry collector svc address (eg. http://log-agent-cocktail-log-agent.cocktail-addon:4318)
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: {log-agent Service address}:4318

        # Log service address of the opentelemetry collector (eg. http://log-agent-cocktail-log-agent.cocktail-addon:4318/v1/logs)
        - name: OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
          value: {log-agent Service address}:4318/v1/logs

        # Settings for using the opentelemetry protocol
        - name: OTEL_LOGS_EXPORTER
          value: otlp

        # Application Name
        - name: OTEL_SERVICE_NAME
          value: {Application Name}
        
        # Settings for authentication
        - name: OTEL_EXPORTER_OTLP_LOGS_HEADERS
          value: app_token={Application Token},app_name={Application Name}        
        
        image: {java-application image}
        imagePullPolicy: Always
        ...

Python

This method instruments an existing application using the SDK provided by OpenTelemetry. Logging support for Python is currently under development in OpenTelemetry.

This guide is for existing Python applications that have been built on Cocktail Cloud.

Additionally, the Python application in this guide was created based on 'Flask'.

1. Image creation via SDK

1) Build/Pipeline - Open the build details of the Python application for which you want to collect logs.

2) Click on the image build task to edit

...

# Download the OpenTelemetry SDK for the Flask application
RUN pip install --upgrade pip && pip install opentelemetry-distro && opentelemetry-bootstrap -a install && pip install flask

# Required settings to export logs
ENV OTEL_EXPORTER_OTLP_ENDPOINT=$OTEL_EXPORTER_OTLP_ENDPOINT
ENV OTEL_EXPORTER_OTLP_LOGS_ENDPOINT=$OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
ENV OTEL_EXPORTER_OTLP_LOGS_HEADERS=$OTEL_EXPORTER_OTLP_LOGS_HEADERS
ENV OTEL_SERVICE_NAME=$OTEL_SERVICE_NAME
ENV OTEL_LOGS_EXPORTER=$OTEL_LOGS_EXPORTER

# Required setting for Python application logging
ENV OTEL_PYTHON_LOGGING_AUTO_INSTRUMENTATION_ENABLED=true

# Set to none so that metrics and traces are not collected
ENV OTEL_METRICS_EXPORTER=none
ENV OTEL_TRACES_EXPORTER=none

...

2. Container Setting

1) Logging - Copy the token of the application created in Application Management

2) In Container Details - Settings - Environment Variables tab, set environment variables as follows

apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
    spec:
      containers:
      - env:
        # opentelemetry collector svc address (eg. http://log-agent-cocktail-log-agent.cocktail-addon:4318)
        - name: OTEL_EXPORTER_OTLP_ENDPOINT
          value: {log-agent Service address}:4318

        # Log service address of the opentelemetry collector (eg. http://log-agent-cocktail-log-agent.cocktail-addon:4318/v1/logs)
        - name: OTEL_EXPORTER_OTLP_LOGS_ENDPOINT
          value: {log-agent Service address}:4318/v1/logs

        # Settings for using the opentelemetry protocol
        - name: OTEL_LOGS_EXPORTER
          value: otlp_proto_http

        # Application Name
        - name: OTEL_SERVICE_NAME
          value: {Application Name}
        
        # Settings for authentication
        - name: OTEL_EXPORTER_OTLP_LOGS_HEADERS
          value: app_token={Application Token},app_name={Application Name}        
        
        image: {python-application image}
        imagePullPolicy: Always
        ...

log-agent Service Address : Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.

( http port = 4318 , grpc port = 4317)

3. Check Application Log

1) Logging - Application Log - Search for the application you set in the application list

Application Management

The log service can collect logs from various applications; this section explains the registration process required to enable logging.

1. Application registration

Logging - Application Management - Click the Registration button to register.

Name : Actual application service name

Description : Description field to distinguish

Cluster : Cluster with application

Namespace : Namespace where the application resides

Developing Language : Application development language

Log Service Information : The log service information currently in use on the platform

2. Application modification

The log service can collect and search logs of authenticated applications through tokens.

A token is automatically issued when you first register an application, and a new token can be issued through renewal.

Even if the token is renewed, the token applied to the application is not automatically renewed, so you must renew it manually to collect logs.

You can also click the "Action" button to disable the application to stop logging, or delete it from the application list.

Fluent-bit

Fluent Bit is a lightweight log collector used to collect and process log data. By installing Fluent Bit as a sidecar in your application, you can parse the application's logs and forward them to the OpenTelemetry collector (log-agent).

This guide assumes the application stores its logs in the /var/log directory. Please modify the directory and log pattern to suit your environment.

1. Fluent-bit Setting

1) Create Container

Application to collect logs - Settings - Container - Click the "Add" button to create a container as follows.

Image: fluent/fluent-bit:3.0.0

When you press the save button, the container runs alongside the existing application as a fluent-bit sidecar.

2. Application Setting

1) Logging - Copy the token of the application created in Application Management

2) Volume Mount - Log

Logs are stored in the path set in Log Appender, so you need to create a volume in the container and mount it.

Application to collect logs - Settings - Volume - Click the "Create" button to create a volume as follows.

Volume Type : Empty Dir

Volume Name : custom name

The following is the process of mounting the created volume.

Application to collect logs - Settings - Volume mount - Click the "Add" button to mount the volume with the following settings.

Container Path : File path set in Log Appender (eg. /var/log)

3) Volume Mount - Fluent-bit

The fluent-bit container must mount the directory where the application stores its logs before it can read and parse the log files.

You can also add labels or change the label name through Config provided by fluent-bit.

Create fluent-bit Config Map

Service map to collect logs - Configuration information - Click the "Create" button to create a configuration map.

Name : The name of the config map you want to set.

Description : An optional description of the config map, specified by the user.

Click the “Add” button to add the config file.

The following config file is only an example. The location where logs are written and the log pattern may vary, so please adjust it to your environment.

fluent-bit.conf

[SERVICE]
        Flush         1
        Log_Level     info
        Daemon        off
        Parsers_File  parsers.conf


[INPUT]
        Name         tail
        Path         [The path to the log file where the log will be loaded]
        
[OUTPUT]
        Name         stdout
        Match        *

[FILTER]
        Name parser
        Match *
        Key_Name log
        Parser nginx
        # Setting to preserve existing log messages before parsing (true: preservation)
        Preserve_Key true
        Reserve_Data true

# instrumentation_scope, service.name.. in body
[FILTER]
        Name        modify
        Match       *
        Add         service.name 'Application Name'       

# Handled by user by inserting script
[FILTER]
        Name        lua
        Match       *
        script      rewrite.lua
        call        rewrite_tag
        

[OUTPUT]
        Name         opentelemetry
        Match        *
        # Host	     log-agent-cocktail-log-agent.cocktail-addon
        Host         'log-agent Service Address'
        Port         4318
        metrics_uri  /v1/metrics
        logs_uri     /v1/logs
        traces_uri   /v1/traces
        header       app_token 'Token'
        header	     app_name  'Application Name'
        Log_response_payload True
        tls          off
        tls.verify   off
        logs_body_key_attributes true

log-agent Service Address : Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.

( http port = 4318 , grpc port = 4317)

parsers.conf

[PARSER]
    Name   nginx
    Format regex
    Regex ^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*?)(?: +\S*)?)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)")
    Time_Key time
    Time_Format %d/%b/%Y:%H:%M:%S 

Application logs create a label called 'level' to provide users with the ability to filter by level. The following is an example of converting nginx's code value to level when the user's application does not have a value called level.

rewrite.lua

-- A script that adds the value INFO to level if the code value is 200, and the value ERROR otherwise.
function rewrite_tag(tag, timestamp, record)
    if record["code"] == "200" then
        record["level"] = "INFO"
    else
        record["level"] = "ERROR"
    end
    return 1, timestamp, record
end

Once the config map creation is complete, return to the application to create the volume.

Application to collect logs - Settings - Volume - Click the "Create" button to create a volume as follows.

Volume Type : Config Map

Volume Name : Custom Name

Config Map : User-created ConfigMap name

Permission : 644

The following is the process of mounting the created volume.

Application to collect logs - Settings - Volume mount - Click the "Add" button to mount the volume with the following settings.

Container Path (log data) : Directory path where logs are stored (eg. /var/log)

Container Path (fluent-bit conf) : fluent-bit configuration file path (eg. /fluent-bit/etc)

When the fluent-bit container does not operate properly

4. Check Application Log

1) Logging - Application Log - Search for the application you set in the application list.

Container Logs

Logs are collected and displayed for each namespace within the cluster.

1. View Logs

Logging - Container Logs - Select a namespace from the Namespaces to view list.

View By Hour : You can search logs from the last 5 minutes to 48 hours ago, and check logs from up to 2 weeks ago.

Namespace : You can view a list of all namespaces that exist in that cluster.

View More : Fetches additional logs following the last entry in the list.

Current number of logs / Total number of logs : The number of logs currently loaded and the total number of logs for the query.

The maximum number of logs that can be viewed at one time is 5000. The inquiry period is up to 7 days.

The Show More button is displayed when the total number of logs exceeds 5000.

2. View log details

Click the link button for the log viewed by time to check detailed information about the log.

Log Message : You can check the contents of the log that actually occurred.

Label Information : Click the + button to expand and view label information.

Label information: You can close the expanded label information by clicking the - button.

3. View logs by time

Click a bar on the graph once to view the logs for that time; click it again to return to the logs for the entire period.

Click on the graph: In this example, you can see that there are 2,687 logs for the selected time.

4. View Labels

Click the arrow button on the right to see the set of labels for the logs present at that time.

label list key : Indicates a list of labels for viewed logs. You can check the label value by clicking the label button.

label list values : Click a label value to add it as a search condition; multiple conditions are combined with an AND search.

Selected label value : When you add a label condition, it appears above the graph; click the X button to remove the condition from the search.

5. Search Log

Enter the keyword you want to search for and click the search button to view the log for the search term.

Search word : You can search logs where the string exists regardless of case.

6. Download Log

Click the “Download” button at the top of the graph to download the log.

Download: You can download log data for up to 5,000 searched logs in Excel file format. Each column contains a label value

Cluster Audit Logs

This is a log about events that occur within the cluster.

1. View Logs

Logging - Cluster Audit Logs - Select a cluster from the list of clusters to view.

View By Hour : You can search logs from the last 5 minutes to 48 hours ago, and check logs from up to 2 weeks ago.

Cluster: You can view the entire list of clusters.

View More : Fetches additional logs following the last entry in the list.

Current number of logs / Total number of logs : The number of logs currently loaded and the total number of logs for the query.

The maximum number of logs that can be viewed at one time is 5000. The inquiry period is up to 7 days.

The Show More button is displayed when the total number of logs exceeds 5000.

2. View log details

Click the link button for the log viewed by time to check detailed information about the log.

User Account : This refers to the account name of the cluster.

Timestamp : This refers to the time when the audit log occurred.

Source IP : This refers to the IP of the user who generated the audit log.

Request URL : Refers to the URL for the action in which the audit log occurred.

Request Time : This refers to the time the task started.

Response time : refers to the time the task was completed.

Response status : refers to the result of the action.

Verb : It means what action the task performed. (eg. create, delete, patch)

  • None - Events corresponding to this rule are not logged.

  • Metadata - Logs request metadata (requesting user, timestamp, resource, verb, etc.), but does not log request/response body.

  • Request - Logs event metadata and request body, but does not log response body. It does not apply to requests other than resources.

  • RequestResponse - Logs event metadata and request/response body. It does not apply to requests other than resources.

Stage: Refers to the process or stage in which work was performed.
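These values correspond to the audit levels defined in the Kubernetes audit policy configured on the cluster. Purely for reference, a minimal, illustrative policy sketch (the rules shown are examples, not the cluster's actual policy) looks like this:

# Illustrative Kubernetes audit policy showing how levels are assigned
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Do not log read-only requests for endpoints
- level: None
  verbs: ["get", "list", "watch"]
  resources:
  - group: ""
    resources: ["endpoints"]
# Log metadata and request body (but not the response) for Secret changes
- level: Request
  resources:
  - group: ""
    resources: ["secrets"]
# Log request metadata only for everything else
- level: Metadata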

3. View logs by time

Click a bar on the graph once to view the logs for that time; click it again to return to the logs for the entire period.

Click on the graph: In this example, you can see that there are 8,704 logs for the selected time.

4. View Labels

Click the arrow button on the right to see the set of labels for the logs present at that time.

label list key : Indicates a list of labels for viewed logs. You can check the label value by clicking the label button.

label list values : Click a label value to add it as a search condition; multiple conditions are combined with an AND search.

Selected label value : When you add a label condition, it appears above the graph; click the X button to remove the condition from the search.

5. Search Log

Enter the keyword you want to search for and click the search button to view the log for the search term.

Search word : You can search logs where the string exists regardless of case.

6. Download Log

Click the “Download” button at the top of the graph to download the log.

Download: You can download log data for up to 5,000 searched logs in Excel file format. Each column contains a label value

Troubleshooting

This is a solution to problems that frequently occur when installing and operating the log service. If any additional problems arise, please contact us.

When the master/data nodes of the log service do not start properly during installation

When the fluent-bit container does not work properly

[error] could not open configuration file, aborting.

This error occurs when the fluent-bit configuration file mount information does not match. Please check the configuration file name and location carefully before deploying.

Creating a Build Server

To build images in Cocktail, creating a build server is essential.

1. Navigate to the Build Server Creation Screen

Follow the steps below to reach the build server creation screen.

1) Go to [Build Configuration] - [Build Server] tab and click on the "+ Create" button in the upper right corner.

2. Input Build Server Information

3. Confirm Build Server Creation

1) Check the build server list screen to verify the creation of the build server.

2) Click on the created build server to review and confirm its configuration details.

4. Next Steps

Catalog

Recently, there has been a growing trend in enterprises towards building applications and platforms based on open source. Open source solutions often require significant time and effort for validation, deployment, configuration, and maintenance.

Cocktail Cloud addresses this challenge by offering a catalog-style provisioning of various open source and commercial software packages needed for application configuration. This approach enables users to easily install the required components, streamlining the process of creating an environment for application development and deployment.

What packages are provided?

  • Official Packages: Managed packages with validated configurations (for AI, IoT, Blockchain, Big Data, etc.), designed for enterprise digital transformation.

  • Open Source Packages: It's possible to search for and deploy open-source packages, offering flexibility and customization.

What are the special features of the catalog?

One-Click Package Deployment

Allows for one-click deployment of pre-configured packages to the cluster. Users can customize information such as environment variables for different deployment scenarios.

Monitoring Deployed Package Status

Enables monitoring and tracking of the workload configuration status of deployed packages.

Package Version Upgrades and Configuration Updates

Facilitates easy version upgrades for packages through the web GUI after deployment. Additionally, users can seamlessly perform updates to package configurations. If there are changes to parameters set during package deployment, users can effortlessly modify and apply these changes.

1. Package Deployment

1.1 Catalog Package List Inquiry

1) Click on [Application Catalog] - [Catalog] tab to display the list of available packages.

1.2 Package Search

1) You can search for the desired package for installation through the search bar at the top right of the [Catalog] screen.

1.3 Package Information Inquiry

1) Click the "Deploy" button for the package you are interested in or want to distribute, and it provides an overview of the latest package version along with descriptions of parameters that can be configured during package deployment. To view information about previous package versions, use the version selection box at the top of the screen to choose the desired version.

1.4 Package Deployment

1) In the [Deploy] tab, enter deployment information (deployment type, target cluster, service map, namespace, release name), then click the "Deploy" button.

2) Find the parameters you want to change in the YAML editor displayed below the deployment settings section and modify the values directly.

3) When additional configurations beyond the default settings are required during deployment, you can edit them in the custom YAML editor.

If different values are registered for the same setting, the values applied in the custom YAML take priority over the basic YAML.

4) When you press the deployment button, you can execute a dry-run to see if the package deployment proceeds correctly.

5) Upon executing the dry-run, you can verify the success or failure.

6) After deployment, the screen will move to the detailed view of the package.

2. Deployment Package Status Inquiry

2.1 Catalogs Deployed in the Application Catalog

1) To inquire about deployed packages, select [Application Catalog] - [Deployed Catalog] option.

2.2 Package Detail Screen Inquiry

1) Click on the name (release name) of a specific package in the package deployment list to display the detailed view of that package.

2) Depending on the deployment status, states like ContainerCreating, Pending, CrashLoopBackOff, Error may appear. Upon successful deployment, it will be displayed as Running.

ERROR: [1] bootstrap checks failed [1]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

The above error occurs because, during the bootstrap check, OpenSearch refuses to start for stability reasons if the vm.max_map_count value is less than 262,144.

Therefore, connect to the relevant node, enter the following command, and then restart.

Item (* is required)
Content

The next steps will guide you through the actual process of building an image. Please proceed to the "" page for detailed instructions.

Item (* is required)
Content
/$ vi /etc/sysctl.conf

# record
vm.max_map_count=262144

# apply
/$ sysctl -p

Name*

Enter the name for the build server to be created

Description

Provide a description for the build server to be created

Cluster*

Select the cluster

Namespace*

Choose the namespace where the build server will be executed

Insecure Registries*

Specify the public IP of the Harbor instance from which images will be pulled or pushed

Release Name*

Enter the version to be deployed

Cluster*

Choose the cluster to deploy the package

Namespace*

Choose the namespace to deploy the package

[Screen] Default Build Server Configuration
[Screen] Input Build Server Information
[Screen] Check Build Server List
[Screen] View Package List
[Screen] Search for Catalog Packages
[Screen] View Package Documentation in the 'Documents' Tab
[Screen] Package Deployment Configuration Information Input Screen
[Screen] Modify Package Deployment Parameters(Basic YAML)
[Screen] Changing Package Deployment Parameters (Custom YAML)
[Screen] Pop-up Window for Selecting Dry run after Clicking Deployment Button
[Screen] Verification of Dry run Execution Status (Success)
[Screen] Screen displaying the list of deployed catalogs in the application catalog
[Screen] Detailed View of Deployed Catalog
Build Image

Manual instrumentation of file logs (Sidecar)

The advantage of using the open source fluent-bit is that the user can collect logs by reading log files stored in a directory.

Configuration Information Creation

Cocktail Cloud provides ConfigMap and Secret types as configuration information.

What types of configuration information are provided?

  • ConfigMap: ConfigMap is a Kubernetes object for injecting configuration data into containers.

  • Secret: Secret is a Kubernetes object that includes sensitive data such as passwords, tokens, and key values. Kubernetes defines types like Docker-registry, generic, and tls for secrets, and Cocktail Cloud supports all these types.
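For reference, both object types correspond to standard Kubernetes manifests. The following is a minimal, hypothetical sketch of each (names and values are placeholders); in Cocktail Cloud the same data is normally entered through the configuration information screens described below.

# Minimal ConfigMap: plain key-value configuration data
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config              # placeholder name
data:
  LOG_LEVEL: "info"
  APP_MODE: "production"
---
# Minimal Secret (generic/Opaque type): values are base64-encoded
apiVersion: v1
kind: Secret
metadata:
  name: app-credentials         # placeholder name
type: Opaque
data:
  DB_PASSWORD: cGFzc3dvcmQ=     # base64 encoding of "password"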

What are the advantages of configuration information management?

Ease of configuration and utilization

When users create configuration information in Key-Value format in the configuration information management menu, they can easily select the configuration information to be used in containers from the dropdown menu during container configuration.

Task List

  • Create ConfigMap

  • Create Secret

  • View Configuration Information

  • Use Configuration Information in Workloads

1. Create ConfigMap

1.1 Move to Configuration Information Screen

1) Go to [Application] - [Service Map] tab, select Configuration Information, click the "+ Create" button, then choose ConfigMap.

1.2 Enter Basic Information

1) Enter the name, description, labels, and annotations for the ConfigMap.

1.3 Enter Key-Value Information

1) Click the "+ Add" button on the bottom right of the ConfigMap information input screen to enter key-value information, then click "Apply." If managing multiple key-value information in the ConfigMap, repeat the key-value information entry as needed.

  • The KEY field is a required input.

1.4 Save

1) After entering ConfigMap information, click the "Save" button to actually create it.

2. Create Secret

2.1 Move to Configuration Information Screen

1) Go to [Application] - [Service Map] tab, select Configuration Information, click the "+ Create" button, then choose Secret.

2.2 Enter Basic Information

2.3 Enter Key-Value Information

1) Click the "+" button on the bottom right of the Secret information input screen to enter key-value information. If managing multiple key-value information in the Secret, repeat the key-value information entry as needed.

  • The KEY field is a required input.

2.4 Save

1) After entering Secret information, click the "Save" button to actually create it.

2.5 Creating imagePullSecrets

The previous method of forcibly generating and assigning imagePullSecrets in Cocktail is no longer used. Users can now directly create imagePullSecrets as needed and use them in their workloads. (imagePullSecrets provide authentication tokens in the form of Kubernetes Secrets, storing Docker authentication information used to access private registries.)
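As a reference sketch (registry address, credentials, and names are placeholders), an imagePullSecret is a Secret of type kubernetes.io/dockerconfigjson that a workload references under imagePullSecrets; in Cocktail Cloud the Secret itself is created through the steps below.

# Hypothetical docker-registry Secret used as an imagePullSecret
apiVersion: v1
kind: Secret
metadata:
  name: my-registry-secret               # placeholder name
type: kubernetes.io/dockerconfigjson
data:
  # base64-encoded Docker config JSON containing the registry credentials
  .dockerconfigjson: <base64-encoded credentials>
---
# Referencing the Secret from a workload
apiVersion: apps/v1
kind: Deployment
...
spec:
...
  template:
    spec:
      imagePullSecrets:
      - name: my-registry-secret
      containers:
      - name: my-app
        image: harbor.example.com/library/my-app:latest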

1) In the [Application] - [Service Map] tab, select the service map where you want to create the secret, and go to the settings information, then click the "+ Create" button.

2) Click on "Secret."

3. View Configuration Information

3.1 List Configuration Information

1) Go to [Application] - [Service Map] tab, select Configuration Information to view the list of configuration information.

3.2 View Detailed Information of Configuration Information

1) Select the name of the configuration information to see its details.

2) Detailed information of the configuration information can also be viewed in YAML format. Move to the configuration tab in the upper right corner of the screen, then select "YAML View" from the displayed screen. YAML-formatted information will be shown.

4. Use Configuration Information in Workloads

1) Select the workload that will use the configuration information and go to the configuration tab to display the detailed configuration screen for the workload.

4.1 Set Environment Variables in Containers

1) Select the container name and go to the [Environment Variables] tab.

2) Choose the type of configuration information you want to use.

If you choose a ConfigMap value or a Secret value, you can select the configuration information resource and the key-value it contains. After selecting the configuration key-value to be used in the container, enter the corresponding environment variable key separately, then click the "Apply" button.

  • The KEY and VALUE fields for direct input, ConfigMap value, Secret value, Field Ref, Resource Field Ref are mandatory.
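In Kubernetes terms, the selections above correspond to valueFrom references in the container's env section; a minimal sketch (resource and key names are placeholders):

containers:
- name: my-app                   # placeholder container
  image: my-app:latest
  env:
  # Environment variable filled from a ConfigMap key
  - name: LOG_LEVEL
    valueFrom:
      configMapKeyRef:
        name: app-config         # ConfigMap name
        key: LOG_LEVEL           # key inside the ConfigMap
  # Environment variable filled from a Secret key
  - name: DB_PASSWORD
    valueFrom:
      secretKeyRef:
        name: app-credentials    # Secret name
        key: DB_PASSWORD         # key inside the Secret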

4.2 Restart the Workload

1) To apply the environment variables, a restart of the container is required. Click the "Save and Start" button at the top right of the detailed configuration screen for the workload to restart it.

4.3 Confirm the Application of Environment Variables

1) In the detailed deployment information view of the workload, find the container with the applied environment variables, then click the terminal icon on the right. Clicking the terminal icon displays an interactive shell screen for that container. In the interactive shell, use the env command to display and confirm that the environment variable content is correctly set.

Item (* is required)
Content

Name*

Enter the name for the Config Map to be created (use only uppercase, lowercase, numbers, and special characters (-.))

Description

Provide a description for the Config Map to be created

Label

Input the labels to be written in the Config Map

Annotation

Enter any Annotation you want for the Config Map

Name*

Enter the name for the Secret to be created (use only uppercase, lowercase, numbers, and special characters (-.))

Description

Enter a description for the Secret to be created

Type

  • Generic: General-purpose key-value data

  • DockerRegistry: Authentication information for accessing a container registry

  • TLS: Certificates for public certification registration

Label

Input the labels to be written in the Secret

Annotation

Enter any Annotation you want for the Secret

Name*

Enter the name of the Secret to be created

Description

Provide a description for the Secret

Type*

Select DockerRegistry to store authentication information for pulling images from a Docker registry.

Label

Specify Key / Value pairs to identify the information

Annotation

Used for additional explanation without any special functionality

Setting type*

  • Direct input : Manually enter the registry authentication information

  • Select from registry: Choose from previously registered registries

Registry

Select from previously registered registries

[Screen] Access Configuration Information for Service Map
[Screen] Enter Basic Information for Config Map
[Screen] Enter Key-Value Information for Config Map
[Screen] Access Configuration Information for Service Map
[Screen] Enter Basic Information for Secret
[Screen] Enter Key-Value Information for Secret
[Screen] Create Secret
[Screen] Enter Secret Information
[Screen] View List of Configuration Information
[Screen] View Detailed Configuration Information
[Screen] View Configuration Information in YAML Format
[Screen] Detailed Configuration Screen for Workload
[Screen] Container Environment Variable Configuration Screen
[Screen] Select Configuration Information and Enter Environment Variable Key-Value

Security

1. User Account Management

User Account Management (IAM, Identity & Access Management) is crucial for security management, covering the entire lifecycle from issuance to revocation. To achieve this, only authorized users should have permission to create, delete, and modify accounts. Additionally, the platform should allow the verification of existing account permissions and statuses.

Navigate to [Settings] - [Users] to access this information.

1.1 User Account Creation and Operation

Users logging into the Cocktail Cloud platform require an account. For maintaining security levels and role separation, it is recommended to perform major configuration tasks and platform resource management operations with 'Admin' privileges. This is akin to requesting and using root permissions only temporarily for specific tasks in an OS operating environment.

1.2 Account Permissions and Roles

Admin

  • Possesses the highest level of authority, capable of creating and modifying other user accounts, viewing and searching audit logs.

  • Can create platforms and allocate resources.

  • Can grant cluster access and terminal access permissions.

  • Can create workspaces on the platform and add members to them.

  • Add service maps, which represent the actual service units in operation.

  • When adding a service map, allocate and limit resources such as CPU, Memory, and the total number of Pods.

  • Can register clusters for use on the platform and monitor the resources and status of allocated clusters.

  • Can add, reinstall, or restart addons and check the status of deployed applications.

  • Can add or create container images.

  • Create and manage registries.

  • Deploy Helm charts with publicly available packages on the platform.

User

  • Can manage resources assigned to them by an administrator and serve applications.

  • Can create workloads, expose services, request and use volumes, configure application deployment, and utilize package and pipeline features.

  • Can add or create container images.

  • Can deploy packages exposed in the Helm chart on the platform.

Application Deployment

1. Workload Group Creation

Create a workload group on the Workload tab of the Service Map.

1) Click on [Application] - [Service Map] tab, select the service map where you want to create the workload, and navigate to the Workload.

2) Click the expand menu (three dots) next to the workload group name.

3) Choose the desired direction for adding a group from the additional items (e.g., Add Group to the Right).

4) A text input form for the name of the workload group will appear. Enter the name of the workload group and press Enter.

  • The workload group name is a mandatory field.

5) Confirm that the workload group has been added.

2. Workload Creation

Create workloads such as Deployment, Stateful Set, Daemon Set, Job, Cron Job, etc. Although the types of workloads may differ, the process of entering container information is fundamentally the same.

1) Click on [Application] - [Service Map] tab, select the service map where you want to create the workload, go to Workloads, and click the "+ Create" button.

2) Choose the type of workload you want to create.

2.1 Enter Basic Workload Information

1) Enter basic information for the workload (type, name, group, description, labels, annotations), deployment and management policies (tolerations, deployment policies, autoscaling, update policies), container information (init containers, containers), and storage information (volumes, volume mounts). Click the "Save" button.

Not all information needs to be entered. You must set the name, group, description, and at least one container information. Other information can be entered as needed.

Item (* is required)
Content

Type

It is displayed according to the type selected when creating the workload

Name*

Enter the name for the workload to be created

Group*

Choose one from the existing workload group names

Description*

Write a description for the workload

Label

Specify key/value pairs for identification using this information

Annotation

There are no specific features, but this is used as additional explanation

Node Affinity

Check the labels of nodes and configure deployment only on nodes with the specified label

Toleration

Set rules to allow pod placement on nodes with taints

Deployment policy

Configure overall policies for pod deployment regarding replicas, hosts, startup/shutdown times, permissions, etc

Auto Scaling

Set the system to automatically adjust (scale) based on resource considerations

RollingUpdate Strategy

Define policies needed for pod updates

Image Pull Secret

Automatically register Harbor login information to access and retrieve container images from Harbor

2.1.1 Register Image Pull Secret with Workload

1) Select the workload where you want to register the secret, then click on the icon next to "image pull secret"

2) Choose the secret to register, click "+ Add", and then click "Save"

2.2 Enter Container Information

1) Container Basic Information

Enter container name, image information, and resource requests and limits for CPU/Memory/GPU. Container name and image information are mandatory. If CPU/Memory resource requests and limits are not entered separately, the default values displayed in gray on the input screen will be set.

Item (* is required)
Content

Name*

Enter the container name to be created, using only lowercase letters, numbers, and the hyphen (-) for special characters

Image*

Provide image information for creating the pod

CPU *

Set the Amount Requested (the CPU needed during pod startup) and the Limit Amount (the maximum CPU that can be allocated). The default is 100.

Memory*

Set the Amount Requested for memory and the Limit Amount for the maximum memory allocation during pod startup

GPU resources

If the pod uses GPU, specify the Limit Amount and Amount Requested for GPU
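In manifest form, the requests and limits above correspond to the container's resources block; a minimal sketch (the quantities are placeholders, not recommended values):

containers:
- name: my-app                # placeholder container
  image: my-app:latest
  resources:
    requests:                 # amount requested at pod startup
      cpu: 100m
      memory: 128Mi
    limits:                   # maximum amount that can be allocated
      cpu: 500m
      memory: 512Mi
      # nvidia.com/gpu: 1     # GPU limit, if the pod uses a GPU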

2) Container Commands

  • Container commands are not mandatory but can be used if necessary.

Enter the commands and arguments to be executed in the container.

  • Command and arguments can be optionally added with the [+ Add] button.

  • If unnecessary, use the [ - ] button to the right of the text field to delete.

Item
Content

Command

Enter the command values to be executed when the pod starts

Arguments

Provide arguments for the command to be executed when the pod starts
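
In the pod spec these inputs become the command and args fields. A minimal sketch with assumed example values:

# Illustrative excerpt; the command and arguments are assumed examples
containers:
  - name: web-app
    image: busybox:1.36
    command: ["sh", "-c"]                     # executed when the pod starts
    args: ["echo hello && sleep 3600"]        # arguments passed to the command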

3) Container Environment Variables

  • Container environment variables are not mandatory but can be used if necessary.

Set various configuration information to be used in the container. Configuration information includes environment variables, config maps, secrets, and field references for workload metadata. Config maps and secrets to be used in the container must be pre-created on a separate configuration information screen.

Item (* is required)
Content

Direct input (KEY)*

Enter the "key" directly for the environment variable to be registered when setting up pod environment variables

Direct input (VALUE)*

Input the "value" directly for the environment variable to be registered when setting up pod environment variables

Config map Value (KEY)*

Enter the name of the ConfigMap value to be registered in the environment variables

Config map Value(VALUE)*

Select the name of the previously configured ConfigMap

Secret Value (KEY)*

Enter the name of the Secret value to be registered in the environment variables

Secret Value(VALUE)*

Select the name of the previously configured Secret

Field Ref(KEY)

Enter the key that references the field value of the pod

Field Ref(VALUE)*

Input the value that references the field value of the pod

Resource Field Ref(KEY)

Enter the key that references the resource field value of the pod

Resource Field Ref(VALUE)*

Input the value that references the resource field value of the pod
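
The configuration sources above correspond to the env entries of a container. A minimal sketch, assuming a pre-created ConfigMap "app-config" and Secret "app-secret":

# Illustrative excerpt; the key names, "app-config", and "app-secret" are assumed examples
env:
  - name: LOG_LEVEL                 # direct input (key / value)
    value: "info"
  - name: DB_HOST                   # value taken from a pre-created ConfigMap
    valueFrom:
      configMapKeyRef:
        name: app-config
        key: db-host
  - name: DB_PASSWORD               # value taken from a pre-created Secret
    valueFrom:
      secretKeyRef:
        name: app-secret
        key: db-password
  - name: POD_NAME                  # field reference to pod metadata
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: CPU_LIMIT                 # resource field reference
    valueFrom:
      resourceFieldRef:
        containerName: web-app
        resource: limits.cpu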

4) Security Settings

  • Security settings are not mandatory but can be used if necessary.

Set user and permissions for the container or Linux capabilities.

Item (* is required)
Content

Run as Non ROOT

Required when the container must run as a regular (non-root) user instead of the root user

Run as User

Input the user to be used when the container is running

Run as Group

Input the group to which the container will belong

Run Privileged Mode

Required when the container needs to interact directly with the host system's kernel

Allow Privilege Escalation

Decide whether to allow privilege escalation

Read Only Root filesystem

Set whether the container's root file system should be read-only

seLinuxOptions(level)

Set the level used in SELinux security policy

seLinuxOptions(role)

Set the role used in SELinux security policy

seLinuxOptions(type)

Set the type used in SELinux security policy

seLinuxOptions(user)

Set the user used in SELinux security policy

Linux Capabilities(add)

Add additional Linux kernel features

Linux Capabilities(drop)

Remove specific Linux kernel features
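
These options map to the container securityContext. A minimal sketch with assumed UID/GID, SELinux, and capability values:

# Illustrative excerpt; the numeric IDs and capability names are assumed examples
securityContext:
  runAsNonRoot: true
  runAsUser: 1000
  runAsGroup: 3000
  privileged: false
  allowPrivilegeEscalation: false
  readOnlyRootFilesystem: true
  seLinuxOptions:
    level: "s0:c123,c456"
    role: "object_r"
    type: "container_t"
    user: "system_u"
  capabilities:
    add: ["NET_BIND_SERVICE"]       # additional kernel capability
    drop: ["ALL"]                   # capabilities removed from the container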

5) Health Check

Health check settings are not mandatory but can be used if necessary.

Set Liveness Probe and Readiness Probe for the container.

  • You can choose the probe type on the Liveness Probe tab and Readiness Probe tab.

    • EXEC: Execute a specified command inside the container and check the exit code.

    • TCP SOCKET: Attempt to establish a TCP socket connection to a specific host and port and check success.

    • HTTP GET: Send a GET request to the specified HTTP endpoint and check success.
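
The probe types above correspond to the livenessProbe and readinessProbe fields. A minimal sketch with assumed endpoint, port, and timing values:

# Illustrative excerpt; the endpoint, port, and timings are assumed examples
livenessProbe:
  httpGet:                          # HTTP GET type
    path: /healthz
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 10
readinessProbe:
  tcpSocket:                        # TCP SOCKET type
    port: 8080
  periodSeconds: 5
# EXEC type, for comparison:
#   exec:
#     command: ["cat", "/tmp/healthy"]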

6) LifeCycle Hook

LifeCycle Hook settings are not mandatory but can be used if necessary.

Enter PostStart and PreStop lifecycle hooks.

  • You can choose the hook type on the PostStart tab and PreStop tab.

    • EXEC: Register a command to be executed inside the container immediately after it starts (PostStart) or just before it terminates (PreStop).

    • HTTP GET: Register an HTTP GET request to a specified endpoint, sent immediately after the container starts (PostStart) to confirm it is ready to serve, or just before termination (PreStop).
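
These hooks correspond to the lifecycle field of the container. A minimal sketch with assumed command and endpoint values:

# Illustrative excerpt; the command and endpoint are assumed examples
lifecycle:
  postStart:
    exec:                           # EXEC type: runs right after the container starts
      command: ["sh", "-c", "echo started > /tmp/started"]
  preStop:
    httpGet:                        # HTTP GET type: called just before termination
      path: /shutdown
      port: 8080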

7) Container Ports

Enter container port information.

  • The Container Port field is a mandatory input.

  • The Protocol field allows you to choose TCP, UDP, or SCTP.

Item (* is required)
Content

Container Port*

Enter the port number for the container port to be created

Protocol (Choose one)

Specify the communication protocol to use: TCP, UDP, or SCTP

Name

Enter the name of the container port to be created

Host IP

Input the IP address of the host machine

Host Port

Specify the port number on the host machine that connects to the corresponding container port
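
The port fields above correspond to the ports list of the container. A minimal sketch with assumed values:

# Illustrative excerpt; the port numbers and name are assumed examples
ports:
  - name: http
    containerPort: 8080             # mandatory
    protocol: TCP                   # TCP, UDP, or SCTP
    # hostIP: 192.168.0.10          # optional host machine IP
    # hostPort: 30080               # optional host port mapped to the container port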

2.3 Enter Init Container Information

1) The input items for init container information are the same as for regular containers. (Only the execution order is different.)

2) An init container is a one-time-use container that runs before the main application container starts within a pod. Init containers are used to perform specific tasks before the application container starts and to pass the results to the application container through a shared volume.
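
The shared-volume pattern described above looks roughly like the following sketch; the names, image, and file path are assumed examples:

# Illustrative pod spec excerpt; names, image, and paths are assumed examples
initContainers:
  - name: init-config
    image: busybox:1.36
    command: ["sh", "-c", "echo '{}' > /work/config.json"]   # one-time preparation task
    volumeMounts:
      - name: workdir
        mountPath: /work
containers:
  - name: app
    image: harbor.example.com/demo/web-app:1.0
    volumeMounts:
      - name: workdir                # same volume, so the app container sees the prepared file
        mountPath: /app/config
volumes:
  - name: workdir
    emptyDir: {}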

2.4 Enter Deployment, Autoscaling, and Update Policies

The deployment, autoscaling, and update policy input sections are located below the basic workload creation information input section. The order of input does not matter, and you only need to set the information as needed.

1) Toleration Settings

Item (* is required)
Content

Effect (Choose one)

You can set rules for placing Pods on nodes, with three options: NoSchedule, PreferNoSchedule, and NoExecute

Key*

Write the Key value for Toleration

Operator (Choose one)

Choose between Exists and Equal. Equal matches a taint when both the key and value match, while Exists matches any taint with the given key, regardless of its value

Value*

Write the Value for Toleration. If you choose the Equal option for Operator, it becomes active

Toleration Seconds

When a Pod is scheduled on a specific node, this represents the maximum time the Pod is temporarily allowed on that node, even if the node has a specific Taint. This is activated when you choose the NoExecute option for Effect
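
The fields above correspond to the tolerations list of the pod spec. A minimal sketch with assumed key and value:

# Illustrative excerpt; the taint key and value are assumed examples
tolerations:
  - key: "dedicated"
    operator: "Equal"               # Equal matches key and value; Exists matches the key only
    value: "gpu"
    effect: "NoSchedule"            # NoSchedule | PreferNoSchedule | NoExecute
  - key: "node.kubernetes.io/unreachable"
    operator: "Exists"
    effect: "NoExecute"
    tolerationSeconds: 300          # only meaningful with the NoExecute effect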

2) Deployment Policy Settings

  • The Replicas field is a mandatory input. Enter the number of instances to replicate as a positive integer.

Item (* is required)
Content

Number of copies

Write the number of instances to replicate

Host Name

Write the hostname

Grace period (seconds) on exit

Used to set the time to wait before a container or pod is terminated

Waiting time after preparation(seconds)

Time to wait after the task is completed before executing additional actions

Node Label KEY

The Key value of the label that the node has when deploying instances to a specified node

Node label value

The value of the label that the node has when deploying instances to a specified node

Access authority (RBAC Service Account)

Service account used to manage access permissions for resources
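
These settings map to standard Deployment and pod spec fields. A minimal sketch, assuming example values for the hostname, node label, and service account (the mapping of "waiting time after preparation" to minReadySeconds is an assumption):

# Illustrative Deployment excerpt; hostname, label, and account names are assumed examples
spec:
  replicas: 2                              # number of copies
  minReadySeconds: 10                      # waiting time after readiness (seconds)
  template:
    spec:
      hostname: web-app-host               # host name
      terminationGracePeriodSeconds: 30    # grace period (seconds) on exit
      nodeSelector:
        disktype: ssd                      # node label key / value
      serviceAccountName: app-deployer     # RBAC service account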

3) Autoscaling Settings

  • If using CPU and Memory types, the HPA name field is activated and is a mandatory input.

Item (* is required)
Content

CPU Type

If you check the box on the right, choose between Utilization and AverageValue. - Utilization : The percentage of CPU used to process tasks - AverageValue : Average CPU usage

CPU Utilization(%)

If you select CPU type as Utilization, it becomes active

CPU Average Usage Value(mCore)

If you select CPU type as AverageValue, it becomes active (minimum value must be greater than or equal to 1)

Memory Type

If you check the box on the right, choose between Utilization and AverageValue. - Utilization : The percentage of memory used to process tasks - AverageValue : Average memory usage

Memory Utilization(%)

If you select Memory type as Utilization, it becomes active

Memory Average Usage Value (MB)

If you select Memory type as AverageValue, it becomes active (minimum value must be greater than or equal to 1)

HPA name

Set the HPA configuration name

Max Replicas, Min Replicas

Write the maximum and minimum number of instances to be maintained

Scale Use

Either CPU type or Memory type must be used for activation - Scale Down : Choose between Disabled, Max, and Min - Scale Up: Choose between Disabled, Max, and Min
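
The autoscaling inputs above correspond to a HorizontalPodAutoscaler. A minimal sketch, assuming the autoscaling/v2 API and example names and thresholds:

# Illustrative HorizontalPodAutoscaler; names and thresholds are assumed examples
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa                  # HPA name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 1                     # Min Replicas
  maxReplicas: 5                     # Max Replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization          # CPU type: Utilization
          averageUtilization: 70     # CPU Utilization (%)
    - type: Resource
      resource:
        name: memory
        target:
          type: AverageValue         # Memory type: AverageValue
          averageValue: 512Mi        # memory average usage value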

4) Update Policies

Item (* is required)
Content

RollingUpdate Strategy

Choose one between Rolling Update and Recreate

Percentage of Interruption to Replication

It becomes active when Rolling Update is selected. Choose one between Percentage and InstanceCount

Expansion ratio vs. number of copies

It becomes active when Rolling Update is selected. Choose one between Percentage and InstanceCount
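
These options map to the Deployment update strategy. A minimal sketch with assumed percentages:

# Illustrative Deployment strategy excerpt; the percentages are assumed examples
strategy:
  type: RollingUpdate                # or Recreate
  rollingUpdate:
    maxUnavailable: 25%              # interruption allowed, as a percentage or instance count
    maxSurge: 25%                    # expansion beyond the desired replica count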

3. Modify Workload

3.1 Detailed Inquiry of Workload Settings

To update the settings for a configured workload, access the configuration screen for that workload. Here, we'll use the example of modifying the container image. The process remains the same for other configuration changes; save the modified settings and restart the workload.

3.2 Change Workload Settings (in the case of modifying the container image)

1) Click on the "Settings" tab after selecting the workload to be changed.

2) Single-click on the container name, modify the image name, and apply the changes.

3) After completing the modifications, click "Save and Start."

3.3 Check the Application of the Modified Settings

Monitor the situation where the container restarts with the updated image settings on the detailed workload monitoring screen.

4. Stop/Restart/Delete Workload

To stop, restart, or delete a specific workload, access the detailed deployment information screen for that workload.

4.1 Stop/Restart Workload

Click the "Actions" button at the top right of the detailed deployment information screen for the running workload. A selection box will appear, allowing you to choose to stop or restart the workload. Select either "Stop" or "Restart" based on your needs.

4.2 Delete Workload

Before deleting a running workload, you must first stop the workload. Click the "Actions" button at the top right of the detailed deployment information screen for the stopped workload. A selection box will appear, allowing you to start or delete the workload. Choose "Delete," and the workload will be deleted.

1) Click "Actions," choose "Stop" to halt the running workload.

2) After stopping the workload, click "Actions" for the stopped workload, choose "Delete" to remove the workload.

5. Workload Group Management

5.1 Change Workload Group Display

When accessing the workload query menu in the service map, workloads are sorted and displayed based on workload groups. The display method of workload group names or arrangements can be changed as follows.

  • Change Group Name

  • Change Column Count

  • Move Left

  • Move Right

  • Add Group on the Left

  • Add Group on the Right

To perform these actions, click on the "expand menu (three dots)" displayed to the right of the workload group name.

5.2 Delete Workload Group

To delete a workload group, there should be no workloads within that group. If there were existing workloads in the group, they must be deleted first.

To delete a workload group, click the "expand menu (three dots)" displayed to the right of the workload group name. You will see "Delete Group" is activated and displayed in the popup. Select this option.

Volume Requests

A volume, in simple terms, refers to a directory existing on a disk or within a container. Typically, the lifespan of a volume is the same as the Pod that encapsulates it. When the Pod ceases to exist, the volume disappears as well.

However, in some cases, it may be necessary to preserve the data on the disk even if the Pod disappears. In such cases, persistent volumes (PVs) are used.

What types of volumes are provided?

  • Regular Volumes: Supports emptyDir and hostPath methods.

  • Persistent Volumes (PVs): Supports Single type (usable only on one node) and Shared type (can be shared across multiple nodes).

What are the advantages of persistent volume management?

Automatic creation of PV and PVC

When users input the minimum required information for a persistent volume, Cocktail Cloud automatically generates related Persistent Volume (PV) and Persistent Volume Claim (PVC) resources and matches the PVC with the corresponding PV.

Ease of volume and volume mount configuration

Developers only need to select the created PVC in the Pod configuration to set up the volume and volume mount.

Task List

  • Create volume requests

  • View volume requests

  • Use volumes in containers

1. Create Volume Requests

1.1 Navigate to the volume request creation screen

1) Go to [Application] - [Service Map] - [Volume Requests], then click the "+ Create" button in the top right to move to the volume request creation screen.

1.2 Fill in the information for volume request creation

Item (* is required)
Content

Name*

Write the name of the volume request you want to create

Persistent Volume type*

Choose between SINGLE and SHARED

Storage*

Select the pre-registered storage

Access Mode*

  • If you choose SINGLE for the storage volume type, only ReadWriteOnce can be selected in the access mode

  • If you choose SHARED for the storage volume type, ReadWriteMany and ReadOnlyMany can be selected in the access mode.

Capacity(GB)*

Enter the volume amount to be created (only positive integers are allowed)

Label

Input labels to be registered for volume request creation

Annotation

Input Annotation to be registered for volume request creation
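
The inputs above produce a PersistentVolumeClaim roughly like the following sketch; the claim name, storage class, and capacity are assumed examples:

# Illustrative PersistentVolumeClaim; names and values are assumed examples
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
  labels:
    app: web-app
spec:
  accessModes:
    - ReadWriteOnce                  # SINGLE type; SHARED allows ReadWriteMany / ReadOnlyMany
  storageClassName: nfs-storage      # the pre-registered storage
  resources:
    requests:
      storage: 10Gi                  # capacity (GB)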

2. View Volume Requests

2.1 View the list of volume requests

1) Access the volume request screen in the service map to check the list of volume requests created by the user.

2.2 View detailed information on volume requests

1) Click on the "Name" of the volume request you want to check in the volume request list.

2) To view detailed information about the created PVC in YAML format, click the settings button on the top screen, then select "YAML View" from the left checkbox.

2.3 View PV

1) Select the "Volume (PV)" of the volume request you want to check in the volume request list.

2) To view detailed information about the created PV in YAML format, go to the "Settings" tab on the top screen.

3. Use Volumes in Containers

3.1 Navigate to the workload configuration screen

1) Select the workload that will use the volume request, then click the "Settings" tab to go to the detailed workload configuration screen.

3.2 Add Volumes

1) Click the "+ Add" button in the volume section of the workload configuration information.

2) Choose the desired volume type and enter the corresponding volume name.

The volume type can be Empty Dir, Host Path, Config Map, Secret, or Persistent Volume; additional input may be required depending on the selected volume type.

3) After completing the volume type and volume name, click the "Apply" button to save.

3.3 Configure Volume Mounts

After adding a volume, it needs to be mounted in the workload to be used.

1) Click the "+ Add" button in the volume addition section of the workload configuration information.

2) Select the container and volume to mount, then click the "+ Add" button.

3) Specify the path to mount the volume in the container.

  • The container field and volume selection field can be created if containers and volumes already exist.

  • The container path field is a mandatory input.

4) Click the "Apply" button to create the volume mount.

3.4 Restart the Workload

1) After adding volumes and configuring volume mounts, click the "Save and Start" button at the top right of the workload's detailed configuration screen.

2) You can check that the configured volume and volume request are applied by confirming that the container restarts.

Multicluster Configuration

One of the key advantages of Cocktail Cloud lies in the ability to build a service environment as a multicluster to meet the business demands of providing services in a multi-cloud environment.

Refer to the link for instructions on how to register clusters.

1. What are the advantages of Multicluster?

When configuring a multicluster, all types of environments can be organized into clusters according to business requirements, allowing centralized management on a single platform. In other words, based on business needs, various forms of clusters, including cloud service providers and on-premises environments, can be strategically selected and used without constraints imposed by vendors.

Configurable server infrastructure environments include

  • On-premises

  • Data centers

  • Private clouds

  • Public clouds

2. How is Hybrid Cloud Configured?

With the increasing number of companies choosing hybrid clouds to meet business requirements, as well as regional, legal, and security requirements, a hybrid cloud can be configured in various forms of clusters on a single platform.

Service Exposure

To invoke the service functionality provided by a workload both within and outside the cluster, services are defined using Cluster IP and Node Port methods.

If services are defined using the Node Port method in a cloud-configured cluster, a load balancer can be configured in front of it, allowing external invocation of services through the load balancer address and port.

What types of service exposure do you support?

  • Cluster IP: Groups pods set with the same label to perform load balancing (not in round-robin but random connection), facilitating internal communication.

  • Node Port: Opens the same port on every node, performs load balancing through the Cluster IP and port, and allows external exposure.

  • When using Node Port, you need to register the Node Port for KT LB to use it.

  • For inquiries regarding KT firewall and LB Port Open, please contact MSP directly.

What are the benefits of service exposure?

Easy Service Creation

Users can easily create various service exposure types through the web UI console.

Automatic Load Balancer Creation

When a cluster is configured in a public cloud, Cocktail Cloud automatically creates a load balancer. Service exposure using the load balancer type is possible only on clouds that support it, such as AWS, Azure, and GCP.

Task List

  • Create services

  • View services

1. Create Services

1.1 Navigate to the service exposure screen

1) Select [Application] - [Service Map], then click the "+ Create" button to move to the service exposure screen.

1.2 Choose the service exposure type

1) Select ClusterIP or Nodeport as needed.

[For KT customers] When creating a Node Port type service, you need to contact MSP for load balancer creation and firewall setup.

1.3 Enter basic service exposure information

1) Enter basic service exposure information.

2) Click "Label Selector," choose the workload and label to connect the service to, then click the "+ Add" button.

3) Confirm that the selected label is displayed at the bottom as Key, Value, and click the "Apply" button.

You can either select labels pre-set in the workload or directly input a label name and value.

When directly adding a workload, input fields for Key and Value will be added at the bottom, and you can enter them directly.

1.4 Configure service target ports

1) In the service exposure settings, click the "Edit" button in the Target Ports section.

2) Click "+ Add," then enter the Name, Protocol, Target Port, and Service Port at the bottom.

Name, Protocol, Target Port, and Service Port are mandatory, and you can choose between TCP and UDP for the protocol.
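
The type, label selector, and target port settings above correspond to a standard Service manifest. A minimal sketch with assumed names, labels, and ports:

# Illustrative Service; the names, labels, and ports are assumed examples
apiVersion: v1
kind: Service
metadata:
  name: web-app-svc
spec:
  type: NodePort                     # or ClusterIP
  selector:
    app: web-app                     # label selector matching the workload
  ports:
    - name: http
      protocol: TCP                  # TCP or UDP
      port: 80                       # service port
      targetPort: 8080               # target (container) port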

1.5 Save the configuration

After completing the information input, click the "Save" button to create the service.

2. View Services

Search the generated service information on the service exposure screen of the service map.

2.1 View the list of services

1) Access the service exposure screen in the service map to check the list of created services.

2.2 View detailed service information

1) Click on the service name displayed in the service list to view the service's configuration and status information.

2) You can also view service configuration and status information in YAML format by clicking the settings button on the top screen, then selecting "YAML View" from the left checkbox.

Ingress

1. Ingress Creation

Ingress is a feature that allows controlling HTTP/HTTPS routing from outside the cluster to internal services within the cluster. To create Ingress, it is necessary to install the Ingress controller in the cluster beforehand through the Cocktail Cloud's addon management screen.

1.1 Accessing the Ingress Screen

1) Click on the "+ Create" button at the top right of the Ingress screen in the Service Map.

1.2 Entering Basic Ingress Information

1) Provide the necessary basic information for Ingress configuration.

To configure automatic redirection from HTTP to HTTPS, set SSL Redirect to true and include force-ssl-redirect: true in the annotations.

1.3 Entering Ingress Rules

1) Click the "Edit" button in the "Rules" section of Ingress settings.

2) When adding a host, enter the desired host name and click "+ Add."

If there are pre-registered hosts, select "Select from existing hosts" and click "+ Add".

1.4 Entering Ingress TLS Information

Configure TLS-related information for Ingress, including secrets used to terminate TLS traffic on port 443 and host information included in TLS certificates.

1) Click the "Edit" button in the "TLS" section of Ingress settings.

2) Select the Secret and target host, then click "+ Add."

3) After completion, click "Apply."
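
The rules and TLS settings above correspond to a standard Ingress manifest. A minimal sketch, assuming an NGINX Ingress controller (for the force-ssl-redirect annotation key) and example host, service, and secret names:

# Illustrative Ingress; host, service, and secret names are assumed examples
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
  annotations:
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"   # HTTP -> HTTPS redirect
spec:
  ingressClassName: nginx
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app-svc              # target service
                port:
                  number: 80                   # targeted service port
  tls:
    - hosts:
        - app.example.com                      # targeted host
      secretName: app-tls-cert                 # pre-registered certificate secret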

1.5 Save

1) To actually create the Ingress, be sure to click the "Save" button.

2. Viewing Ingress

Access the [Ingress] screen in the Service Map to view the created Ingress information.

2.1 Viewing Ingress List

1) In the [Ingress] screen of the Service Map, view the list of Ingress.

2.2 Viewing Detailed Ingress Information

1) Click on the Ingress name link displayed in the Ingress list. The configuration and status information of the Ingress will be displayed.

2) You can also view Ingress settings information and status information in YAML format. After clicking the "Settings" button at the top of the screen, select "YAML View" as the settings view at the top of the screen to display information in YAML format.

Before creating a new workload, you need to create and register imagePullSecrets. Please refer to this link: Troubleshooting | Cocktail Cloud Online.

[Screen] Add Workload Group
[Screen] Input Workload Group Name
[Screen] Confirm Added Workload Group
[Screen] Select Workload Type
[Screen] Input Overview for Workload
[Screen] Registering image pull secret
[Screen] Enter Container Command Information
[Screen] Enter Environment Variable Information
[Screen] Enter Security Contexts Information
[Screen] Enter Health Check
[Screen] Enter LifeCycle Hook
[Screen] Enter Container Port Information
[Screen] Enter Deployment, Autoscaling, Update, Policy
[Screen] Enter Toleration Information
[Screen] Enter Deployment Policy
[Screen] Enter Autoscaling Policy
[Screen] Enter RollingUpdate Strategy
[Screen] Detailed View of Workload Configuration
[Screen] Image Configuration Change
[Screen] View Container Restart Situations
[Screen] Perform Stop/Restart for Workload
[Screen] Perform Workload Deletion
[Screen] Change Display of Workload Group
[Screen] Volume Request Creation
[Screen] View Volume Request List
[Screen] View PVC Detailed Information
[Screen] View PVC Detailed Information (in YAML format)
[Screen] View PV Detailed Information
[Screen] View PV Detailed Information (in YAML format)
[Screen] Detailed Workload Screen
[Screen] Workload Configuration Screen
[Screen] Volume Type Selection Screen When Adding Volumes
[Screen] Select Volume Request When Adding Persistent Volumes
[Screen] Enter Volume Mount Information
[Screen] Confirm Workload Restart After Adding Volume and Setting Volume Request

Service Exposure Name*

Write the service exposure name you want to create

Service Expose type

It is displayed according to the type selected when creating the service exposure

Sticky Session

If you want to use Sticky Session, check TRUE and enter the session timeout

Headless Service

If you want to use Headless Service, check its availability

Label selector*

Select the workload and label you want to connect to the service

Label

Input labels to be registered for service exposure

Annotation

Input comments to be registered for service exposure

ingress name*

Write the name of the Ingress you want to register

Ingress Controller*

Select the installed Ingress controller

SSL Redirect

If using an SSL certificate and need to automatically redirect from HTTP to HTTPS, choose TRUE

Label

Input labels to be registered for the Ingress

Annotation

Input comments to be registered for the Ingress

host*

The host name entered here is added and applied to the rule

path*

Enter the URL path to which the rule will route traffic

Path Type*

  • Prefix: Matches values by separating the prefix of the URL path based on /, distinguishing between uppercase and lowercase.

  • ImplementationSpecific: Matching depends on the IngressClass; it may be handled as a separate pathType or treated the same as Prefix or Exact

  • Exact: Matches the URL path exactly, including case

Target Service*

Choose the service to connect to the Ingress among the currently created exposed services

Targeted Service Port*

Select the port to be served by the service

Secret*

Select the public certificate Secret that has been previously registered

Targeted host*

Choose the host for which TLS certificates will be applied

[Screen] Multi-Cluster Integrated Management Platform
[Screen] Example Configuration of Multi-Cluster Hybrid Cloud
[Screen] Service Map - Service Exposure
[Screen] Select Service Exposure Type
[Screen] Service Exposure Configuration
[Screen] Select labels set in the workload to add as service labels
[Screen] Select existing labels set in the workload to input as service labels
[Screen] Enter Label Information for Service as Name-Value Pair
[Screen] Configure Target Ports for Service
[Screen] View Service List
[Screen] View Service Configuration and Status Information
[Screen] View YAML Format of Service Configuration and Status Information
[Screen] Ingress Screen in Service Map
[Screen] Enter Basic Information for Ingress
[Screen] Ingress Name and Controller Information Input Screen
[Screen] Ingress Annotation Input Screen
[Screen] Ingress Rule Configuration
[Screen] Ingress TLS Configuration
[Screen] Ingress List View
[Screen] Ingress Detailed Information View
[Screen] View YAML Format of Ingress Configuration and Status Information
Cluster Registration

Workspace Management

1. Workspace Management

A Workspace is a dedicated space for building, deploying, and operating applications with allocated cluster resources. Typically created on a team basis, Workspaces allow registration of users, clusters, and libraries based on the intended use.

Refer to the link below for instructions on creating a Workspace.

2. Role-based Access and Cluster Resources for User Groups

Cluster resources are allocated on a Workspace level based on requests made by authorized administrators.

Administrators can efficiently allocate and manage resources based on the scale of services and requests. User groups can then use explicitly assigned resources strategically. Moreover, Workspaces provide a secure and isolated environment, ensuring that resource usage by other teams/groups has no impact.

3. Allocating and Managing Cluster Resources

Clusters can be allocated exclusively for a user group or shared among two or more Workspaces. Exclusive resource usage guarantees higher independence. Sharing resources enables the utilization of large clusters or specific resources (such as GPU Nodes) without redundant investments.

Cluster Management

1. Cluster Management

The primary components of a cluster are nodes, storage, and applications. To effectively manage a configured cluster and ensure it operates according to plan, monitoring, alerts, and security settings are additionally required.

Let's explore the tools and content needed for cluster management one by one.

Navigate to [Infrastructure] - [Clusters] to access functionalities related to cluster management.

Information and Functions on Cluster Management Screen

  • Cluster Provider (Cloud Service Provider) type, Physical Location (Region)

  • Cluster Operation Status (Running/Stop)

  • Cluster Resource Allocation Type (Cluster/Service Map)

  • Number of Nodes Allocated to the Cluster

  • Allocated Resources of the Cluster

  • Number of GPU Nodes Allocated to the Cluster

  • Cluster Incident Alerts

  • [Function] Cluster Registration

  • [Function] Connect to Cluster Web Terminal

  • [Function] Download External Connection Certificate for the Cluster

How to Register a Cluster?

Cocktail Cloud can be implemented in on-premises environments (physical servers) and cloud services, and integration with additional environments is continuously being developed.

  • Amazon Web Service

  • Microsoft Azure

  • Google Cloud Platform

  • Naver Cloud Platform

  • VMware

  • Alibaba Cloud

  • Tencent Cloud

  • Rovius Cloud

  • On-Premise (physical servers)

  • Datacenter

For the detailed process of Cluster Registration (Creation), refer to the link provided.

How to Check the Resource Status Allocated to a Cluster?

To check the resources and status of the registered cluster, navigate to the Cluster List screen.

Click on [Infrastructure] - [Clusters], and a list of accessible clusters will be displayed.

Information provided on the Cluster List screen includes

  • Cluster Name (User-defined)

  • Kubernetes Version

  • Status (Running, Stop)

  • Number of Nodes

  • Cluster Resource (CPU, Memory, Storage) Status

  • GPU Nodes (Number of GPU nodes configured in the cluster)

  • Alarms (Number of incidents occurred)

3. How to Check Detailed Resource Status of Cluster Configuration?

To modify the configured resources or registration information of a registered cluster, select [Infrastructure] - [Clusters], and move to the Registration Information tab.

3.1 Cluster Configuration Overview

  • Cloud Service Provider

  • Cloud Service Type

  • Region (Provider and server's regional/physical location)

  • Cluster Name (Name represented in Cocktail Cloud)

  • Kubernetes Version (Information about the Kubernetes version used in the cluster)

  • ID (Shared ID for the cluster, required for redirecting alarm messages)

  • Description (User description of the cluster)

  • Master Address (Kubernetes API address in the format "https://host:port")

  • Ingress Host (Host IP Address for Ingress method, Master IP or Load balancer IP)

  • Node Port Host Address (IP address to be used in front of the port when exposing services by attaching ports to nodes, Master IP or Load balancer IP)

  • Node Port Range (Range of ports to be used behind IP in the method of exposing services by attaching ports to nodes, recommended 30000~32767)

  • Cluster CA Certification (Enter the value of the ca.crt file after moving to the /etc/kubernetes/pki path on the master server)

  • Client Certificate Data (Enter the value of the admin.crt file after moving to the /etc/kubernetes/pki path on the master server)

  • Client Key Data (Enter the value of the admin.key file after moving to the /etc/kubernetes/pki path on the master server)

3.2 Monitoring Cluster Configuration Node Resources

Move to the Node tab after navigating to [Infrastructure] - [Clusters], select the specific node, and move to the Monitoring tab.

Information provided includes resource usage status (CPU, Memory, Disk, Network), resource summary (Capacity, Availability, Request), and status (Event type, State, Recent occurrence time, Time elapsed since the last occurrence, Cause of occurrence, Message). Monitoring information for nodes can also be obtained from the Unified Monitoring menu, providing additional details.

3.3 Creating Storage in the Cluster

To allocate storage to the cluster, navigate to [Infrastructure] - [Clusters] - [Storage Volume] and click the "+ Create" button to access the storage creation screen.

Choose the storage type for creation. Commonly, NFS and NFS Named types are available, and Azure services additionally provide Azure Disk and Azure File types.

Based on the selected type, detailed configurations for storage creation are possible. The configurable information (specifications) includes

  • Name: Storage name

  • Description: Description of storage usage

  • Default Storage: Option to use as the default storage

  • Storage Plugin: Plugin for storage

  • Policy: Policy for storage deletion (Retain or Delete)

  • Total Capacity: Total storage capacity in GB

  • Parameters: Storage parameter settings

  • Mount Options: Storage mount option settings

  • Label: Storage label settings

  • Annotation: Storage annotation settings

4. Application Deployment Status

Applications deployed in Cocktail Cloud are deployed at the workload level, and their status can be checked by selecting the corresponding workload in [Workloads].

Details about the deployed application, including workload name, workload status, deployment type (Deployment, Daemon Set, Stateful Set, Job, Cron Job), number of instances, current resource usage (CPU, Memory), and service uptime (Age) after deployment, can be reviewed.

5. What does Alert mean in a cluster?

When alerts occur in the running workload (or instance), real-time status is provided through SMS (Slack, etc.), email, and the dashboard.

Navigate to [Infrastructure] - [Clusters] - [Alerts], where unresolved alerts are displayed in the alert list. Each alert includes the alert name (status summary), severity (Critical, Warning), and occurrence timestamp.

To view detailed information about an alert, select the alert name, and additional information will be provided through a popup.

6. What does the Addon do?

In Cocktail Cloud, add-ons such as Prometheus, a cluster management component, provide convenience for cluster operations. The add-on manager functionality enables registration, deletion, rollback, and redeployment of components. Users can add/modify metric targets for collecting/storing add-on metrics based on their requirements.

  1. Modifying the monitoring add-on

    1. Customize metric targets for status and resources like CPU/MEM.

    2. Set custom thresholds for metrics (min/max values).

    3. Trigger events and alerts based on specified metric values.

    4. Specify individual monitoring metrics based on add-on versions.

  2. Deploy modified metrics

    1. Store modified metric information (Rule, Config) in ETCD.

    2. Provide add-on registration/deletion/rollback/redeployment based on modified user information.

7. Storage Expansion in Configured Clusters

If storage is increased due to insufficient space or planned tasks in a configured cluster, the existing pods may not immediately reflect the increased storage information. To utilize the increased storage properly, already deployed pods need to be restarted.

Navigate to [Applications] - [Service Map] - [Workloads], select the workload list, and click the "+ Create" button to restart pods.

Service Mesh Configuration

Service mesh is a concept used to describe the network of microservices and their interactions, forming the foundation for managing service-to-service communications.

Istio is the industry-standard technology for managing service mesh, and Cocktail supports the installation and monitoring of Istio.

What information does it provide?

  • It visualizes service-to-service connection configurations.

  • It provides network traffic monitoring information.

What are the advantages of service mesh configuration?

Ease of Istio installation

Istio installation is made easy through Cocktail Cloud's addon features.

Integration of Cocktail Cloud and Istio monitoring screens

Istio's monitoring screen is integrated with Cocktail Cloud's web UI.

Task List

  • Install Service Mesh

  • View Service Mesh

1. Install Service Mesh

1) To use the service mesh, the platform administrator needs to deploy Istio to a specific cluster through the addon feature.

1.1 View Cluster List

1) Go to [Infrastructure] - [Clusters] tab and select the registered cluster.

1.2 View Installed Addons for the Cluster

1) The overview screen for the selected cluster will be displayed. Click on the [Addon] menu at the top to see the list of installed addons for that cluster.

1.3 View Available Addons for the Cluster

1) To install Istio, click on the "Deploy" button in the upper-right corner of the Addons list screen. The list of addons already installed and those available for installation on the cluster will be displayed.

1.4 View Istio Addon Information

1) Click on the "Deploy" button for the Istio card. Information about the Istio addon and explanations of parameters that can be configured during deployment will be shown.

1.5 Deploy Istio

1) Navigate to the "Settings" tab at the top of the Istio addon information screen. The configuration information input screen for deploying Istio will appear. After entering the configuration information, click the "Deploy" button.

Item (* is required)
Content

Kiali Service Type

Set the service type for Kiali deployment (Default: ClusterIP)

Kiali Username

Kiali login account

Kiali Password

Kiali account password

Kiali TLS Cert Chain(Base64 Encoded)

Base64 encoding value of ca.crt from Kiali certificate.

Kiali TLS Key(Base64 Encoded)

Base64 encoding value of ca.key from the Kiali certificate

2) After Istio is installed on the cluster, the "Service Mesh" menu will be displayed at the top when accessing the service map screen for that cluster.

2. View Service Mesh

1) Access the "Service Mesh" menu for a specific service map. In the center of the screen, it displays the connection relationships between services and workloads, along with information about requests and responses. On the right side of the screen, it shows traffic-related information specific to the selected connection relationship in the center of the screen.

2.1 Accessing Graph Information Guide

1) Click on the question mark button at the top of the Graph to get information on how to view the graph

2.2 Select Graph Type

1) Choose the desired graph type to view.

2) Available graph types include

  • App Graph: Displays the connection relationships between services and workloads, representing all versions of workloads as a single graph node.

  • Service Graph: Displays the connection relationships between services.

  • Versioned App Graph: Displays the connection relationships between services and workloads, showing individual connections with multiple versions of workloads. Additionally, it includes multiple versions of the same workload within a single box.

  • Workload Graph: Displays the connection relationships between services and workloads, showing individual connections with multiple versions of workloads. Unlike the Versioned App Graph, it does not include multiple versions of the same workload within a single box.

2.3 Configure Graph Display Information

Configure the information displayed on the edges connecting nodes in the graph. Additionally, configure the elements displayed on the graph.

API Issuance History

1. API Issuance History

1) Once you receive an API token, you can check the status, expiration date, and API scope for the current token in [External APIs] - [History].

2) By selecting the API scope icon, you can view a list of APIs available for the current token.

2. API Invocation

Set the Authorization Header with the previously issued token.

#CURL Command Sample
curl -X POST http://${API-GATEWAY}/api/pl/list/service/2 \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer [token]'

  • ${API-GATEWAY} : Enter the domain or IP:port information to connect to the API gateway.

  • [token] : Enter 'Bearer' followed by a space, then paste the issued token.

3. API Types

1) Available API types can be checked below

Integrated Monitoring

1. What types of resources can be monitored?

Cocktail Cloud utilizes over 200 metrics for resources and states in a multi-cluster environment, providing more than 100 monitoring panels.

Each panel is arranged in views for clusters, ingresses, ETCD, nodes, and namespaces. Additionally, an alarm/event page is provided to review alarms/events chronologically and maximize the visualization of the user platform's status.

2. Where can monitoring information be accessed?

Monitoring information in Cocktail Cloud can be accessed in the left-hand [Monitoring] menu. Sub-menus include clusters, ingresses, ETCD, nodes, GPUs, namespaces, and alarms/events.

3. How to check the cluster status?

The platform offers up-to-date status information at the cluster level. Key status information provided in the cluster view includes

  • Number of API server calls per second

  • CPU usage

  • Disk usage

  • Disk I/O speed

  • Memory usage

  • Restarted Pod tracking

  • Average request time over the last 10 minutes

  • Pod executions by status

  • Top 5 Pods with high CPU usage

  • Top 5 Pods with high memory usage

4. Checking Ingress Status

Ingress exposes HTTP and HTTPS paths from outside the cluster to internal services. It provides configuration options for externally accessible URLs, load balancing traffic, SSL/TLS termination, and name-based virtual hosting. Ingress plays a crucial role in the network area of services, making multidimensional monitoring essential.

The status information provided in the integrated dashboard's Ingress view includes

  • Ingress controller requests

  • Ingress controller connections

  • Ingress controller request success rate

  • Recent Ingress configuration reload success and failure

  • Ingress controller request trends

  • Ingress controller success rate trends

  • Network I/O trends

  • Average memory usage trends

  • Average CPU usage trends

5. Checking ETCD Status

The status information provided in the integrated dashboard's ETCD view includes

  • Presence of ETCD leader

  • Number of recent leader changes

  • Number of recent leader change proposal failures

  • RPC ratio

  • Database usage

  • Node disk processing speed

  • Overall disk processing speed

  • Client traffic In/Out

  • ETCD server-specific processing status

  • Network usage

  • Snapshot processing speed

6. Checking Node Status

The status information provided in the integrated dashboard's Node view includes

  • Cluster CPU usage frequency

  • Cluster memory usage

  • Cluster disk usage

  • Cluster network usage

  • Recent changes and current values of the file system's free space ratio

  • List of file systems and their usage

7. Checking GPU Status

The status information provided in the integrated dashboard's GPU view includes

  • Average GPU utilization

  • GPU usage trends

  • Average GPU memory utilization

  • GPU memory usage trends

  • GPU temperature and power

  • GPUs/MIGs

  • Timeslicing

8. Checking Namespace Status

The status information provided in the integrated dashboard's Namespace view includes

  • Number of containers

  • Namespace creation time

  • Total number of Pods in the namespace

  • Namespace PVC status

  • Namespace CPU allocation

  • Namespace memory allocation

  • Number of Pods running in the namespace

9. Checking Notification/Event History

The monitoring metrics displayed in the integrated dashboard are delivered through dashboard, SMS, and E-mail channels based on user configurations. Users can filter and view metrics by cluster, namespace, and major resource groups.

In the dashboard, events occurring in the past hour can be reviewed, and accumulated events per minute are provided with detailed event descriptions, enabling quick identification of the cause based on event content alone.

Each event is categorized into five levels of importance, and real-time notifications are sent through SMS or E-mail (or both) according to user preferences. Users can filter and view recently occurring events and notifications, with the option to retain data for up to one year based on user settings.

Setting up a Pipeline

1. Setting up a Pipeline

1-1. Creating a Pipeline

1) To create a pipeline, click on the '+ Create Pipeline' button located in the top right corner of the [CI/CD] - [Pipeline] tab.

2) After entering the pipeline creation information, click the 'Save' button located in the top right corner.

3) Click the "Add Resource" button in the deployment resources section to apply the items you want to configure for the pipeline.

  • Workloads need to be created by default

4) After selecting the workload to add from the workload section, click the "Save" button

5) Once the workload is registered, confirm that the container images registered with the workload are automatically added.

  • Only images built using the image build feature in Cocktail are integrated

6) After selecting the service to add from the service exposure section, click the "Save" button.

7) After selecting the Ingress to add from the Ingress section, click the "Save" button.

8) After completing the registration of all resources, click the "Run" button located in the top right corner.

9) When the [run popup] appears, enter the content for the execution note regarding this pipeline version, then click the "Save" button.

10) Once the pipeline execution is complete, the release version will be indicated correctly in the top left corner.

2. Modify Pipeline

  • When modifying each workload, service exposure, and Ingress in the [Service Map] tab, changes are not reflected in the pipeline. You need to make modifications directly in the pipeline.

  • Modifying the pipeline ensures that each workload and deployment resource is updated to the latest version

1) Select the pipeline name that needs modification in the [Pipeline] tab.

2) Click the "Create Pipeline Version" button in the top right corner of the pipeline, enter the version, and then click "Create".

2-1. If there are changes in the source requiring rebuilding of images

1) Activate the "Build Run" button on the right side of the [Image Build] section, then click "Run".

  • The image build is re-executed and immediately reflected in the workload

2) The image is rebuilt, and you can check the progress of each step in the process.

3) When the image is rebuilt through the pipeline, verify that the image name in the workload is updated to the tag of the image built through the pipeline.

2-2. If there are changes to the workload, such as replicas or other configurations

  • Deactivate the "Build Run" button on the right side of the [Image Build] section (no image changes required)

1) Select the workload name in the [Deployment Resources] section.

2) Make the necessary modifications by selecting the relevant parts, then click the "Save" button in the top right corner.

3) After confirming the change in replicas from 1 to 2 in the workload, click the "Close" button in the top right corner.

4) Once you return to the pipeline modification section, click the "Run" button in the top right corner, and enter the changes in the execution note.

5) In the pipeline's [Deployment Status] tab, verify that there are two pods running.

3. Pipeline Rollback

  • If you need to rollback to a previous configuration while continuously registering versions through the pipeline.

1) Select the pipeline name that needs modification in the [Pipeline] tab. Once changes are made in the modification section, click the "Rollback" button in the top right corner.

2) When the [Rollback Popup] window appears, review the execution notes of the versions created so far, select the desired version, then click the "Save" button.

3) Once the rollback is completed successfully, confirm that the modified version has been changed to the rollback target (e.g., V3 -> V2)

4) Verify that the pod count has returned to normal, such as 2 -> 1.

API Execution Logs

1. API Execution Logs

1) Navigate to [External APIs] - [Audit Logs], and when you make API calls with the issued API token, a log is generated detailing which API was requested and when.

API Token Issuance

1. Generate API Token

1) To issue an API Token, click the "+ Issue" button in the upper right corner of the [API Management] - [API Token Issuance] tab.

2) Fill in the basic information required for API Token issuance.

3) After saving, click the 🔒 button next to the 'Token' field to copy the token.

Build Image

The build server required for image building has been created. Now, let's proceed with building the image.

1. Build Image

1-1. Build Image Info

1) Select the [Build/Pipeline] - [Builds] section.

2) Click on the "+ Create" button in the upper right corner.

3) Once the build information window is generated, enter the build details as follows.

4) Click the "+ Add a Build operation" button at the bottom to select the necessary items for the build process.

1-2. Code Repository Work

1) Select the [Code Repository Work] and enter details about the Git or other source from which to load the code, then save.

1-3. User Work

1) Click the "+Add a Build operation" button to select [User Work]

2) In the [Execution Information] section, enter the necessary commands for source build and apply.

[User Work - Execution Information (Maven)]

  1. Work Name: Provide information about the purpose or content of this task.

  2. Execution Path: Specify the path where the build will be executed in the build container. Enter "/build" as a fixed value.

[User Work - Execution Information (Ant)]

  1. Work Name: Provide information about the purpose or content of this task.

  2. Execution Path: Specify the path where the build will be executed in the build container. Enter "/build" as a fixed value.

3) In the [Work Volume] section, enter the directory path required for source build and apply

1-4. Build Image Task

1) Click the "+ Add a Build operation" button and select [Build Image Work].

2) In the [Build Image Work] section, write the Dockerfile to create a container image after the source build and apply.

3) After clicking the "Save" button, a popup for [Build Notes] for the build creation will appear below. Write comments for this build and save.

1-5. Run Image Build Task

1) Once the save is complete, the image build automatically proceeds, and you can review details about the build as shown in the screen below.

2) In the [Build Info], clicking the "View Log" button allows you to check the logs for the image build.

3) Upon successful completion of the build, confirm that all progress is marked as "Done" as shown below.

2. Additional Configuration for Image Build

2-1. File(FTP) Task

1) Configure files or directories to download or upload between the remote host that holds resources related to the build target and the build host where the build task will be performed.

2) Click the "+ Add a Build operation" button and select [File (FTP) Work].

2-2. Calling REST Work

1) If integration with an external service is required using the REST method, configure the REST call task.

2) Click the "+ Add a Build operation" button and select [Calling REST Work].

3) If headers are required, select "+ Header Add", enter the Header and Value, and click the "Apply" button.

  • Enter in the format: Header: Authorization, Value: Basic {authentication string}.

  • The {authentication string} should be a base64-encoded string of the Image Registry's id:password.

2-3. Script Work

1) Define tasks for cases where scripts are needed during image builds.

2) Click the "+ Add a Build operation" button and select [Script Work].

3) Complete the script task and click the "Apply" button in the bottom right corner.

[Screen] Resource Management Model
[Screen] Resource Management by Administrator for Each User
[Screen] Cluster Allocation Method (Dedicated, Shared)
[Screen] Cluster Management
[Screen] List and Resource Status of Accessible Clusters
[Screen] Application Deployment Status Inquiry
[Screen] Cluster Incident Alert List
[Screen] Configuration After Increasing Cluster Storage
[Screen] View Cluster List
[Screen] View List of Add-ons Installed on the Cluster
[Screen] View Add-on List
[Screen] View Istio Add-on Information
[Screen] Enter Configuration Information for Istio Deployment
[Screen] Service Mesh Monitoring
[Screen] Method of Viewing Graphs
[Screen] Selecting Graph Types
[Screen] Selecting Display Information on Edges
[Screen] Selecting Elements to Display on the Graph
[Screen] API Issuance History
[Screen] API Query Scope
[Screen] Integrated Monitoring Dashboard Main
[Screen] Cluster Monitoring
[Screen] Ingress Monitoring
[Screen] ETCD Monitoring
[Screen] Node Monitoring
[Screen] GPU Monitoring
[Screen] Namespace Monitoring
[Screen] Alerts

Name*

Enter the pipeline name to create

Version*

Input the version for the pipeline

Service Map*

Select the service map to execute the pipeline

Name*

- Token name (non-editable)

Description

- Description of the token

Expiration Date

- Indefinite or specific date designation

Allow IP

- List of allowed IP addresses - CIDR notation is allowed - Allow all IPs if left blank

Block IP

- Blocked IP list - CIDR notation allowed - If the same IP is listed in the allowed IP list, it will be blocked instead

Request Limit

- Not entered or 0 (0 means unlimited and cannot be modified with this setting)

Range*

Click the checkbox to set the permission scope

Image Name*

Specify what the image represents in detail (Note: Avoid the use of uppercase)

Registry*

Select the registry you created

(If there are multiple registries, choose the applicable registry)

Use Auto Tag (Choose 1)

Indicate whether tags should be automatically set when updating or changing the image

(Split into "Use/Do Not Use," and if "Do Not Use" is selected, the tag may be fixed, overwriting the existing image)

Tag*

Provide details for the tag to be attached to the created image

(If "Do Not Use" is selected for auto-tagging, the tag entered here will be used consistently)

Auto-Increment Type (Choose 1)

If auto-tagging is set to "Use," it can be specified as "Time/Sequence"

Build Execution Server (Choose 1)

Select the server to perform the build

Code Repository Work

Configure information to fetch source from git or similar sources

User Work

Configure information related to the build of the source fetched from git or similar sources

File (FTP) Work

Set up tasks to download or upload files or directories between the remote host and the build host where the build tasks are performed

Calling REST Work

If integration with external services is required using the REST method, configure REST call tasks

Script Work

Configure script information if a specific script is needed

Build Image Work*

Write a Dockerfile to apply the built source to a Base image and create a new image

Work Name*

Refers to the job stage for the image build; enter a title for that job

Repository Address*

Enter the address for the Git or other source repository from which to import the source code.

Branch*

Enter the source branch applied to the repository

Authentication

When selecting the combo box for authentication, you must enter the user account and password required to access the git

Code Storage Path

Enter the directory to store the source (Automatically create git project name if not created)

Container Path*

Write the path to the container where the source will be built

Build Host Path*

When building the source, a temporary container is created to proceed with the build. This is the path to the temporary container used during the source build

Work Name*

Refers to the job stage for the image build; enter a title for that job

Enter Dockerfile Content*

Write a Dockerfile to create the actual container image

Work Name*

Refers to the operation stage for the file (FTP) task; enter a title for that operation

Host address*

Server address with the directory or file that needs to be uploaded

Authentication*

Need to set up if you have an account and password for the host address

User/Password*

Connection account and password for the host address

Task type (choose one)

File Download (If you want to include it in the image during image build, select this type)

Remote Directory/File*

Absolute path to the file to be uploaded to the image when building the image (Host address must have that file)

Build Host Directory*

Directory location to upload (/tmp/ fixed)

Work Name*

Indicates the operation stage for a REST call; enter a title for that operation

REST Method (choose one)

Choose the API call method

URL*

Write the URL for the API call

Authentication

Configuration required if there is an account and password for the host address

User/Password

Username and password for the host address

Connection Timeout*

Write the response time for the API call

Expected response code*

Write the success code after the API call (e.g., 200)

Expected response content

Must be left blank

Save the response to the build host path

Write the filename if response value storage is required (e.g., response.txt)

Work name*

Define the steps for the image build process along with the corresponding work title.

Enter the script content*

Enter the content of the script to be executed.

[Screen] Initial Pipeline Creation Screen
[Screen] Pipeline Creation Information Input Screen
[Screen] Add Deployment Resources to the Pipeline
[Screen] Select Workload in Add Deployment Resources
[Screen] Registering Workload for Pipeline
[Screen] Selecting Service Exposure in Add Deployment Resources
[Screen] Selecting Ingress in Add Deployment Resources
[Screen] Screen after registering all resources for the pipeline
[Screen] Inputting Pipeline Execution Notes
[Screen] Screen after Pipeline Execution Completion
[Screen] Pipeline List
[Screen] Pipeline Version Creation
[Screen] Screen with Build Execution Enabled and in Progress
[Screen] Image Build Process
[Screen] Confirmation of Image Tag Change
[Screen] Screen with Build Execution Disabled
[Screen] Workload Configuration Change Screen
[Screen] Information after Workload Configuration Change
[Screen] Screen after Clicking the "Run" Button
[Screen] Pipeline Deployment Status
[Screen] Pipeline Configuration Screen
[Screen] Popup window with Execution Notes after Clicking Rollback
[Screen] Confirmation of Rollback Version Change
[Screen] Pipeline Deployment Status
[Screen] API Execution Logs
[Screen] Default screen for issuing API tokens in API Management
[Screen] Basic information input screen for issuing API tokens
[Screen] Screen for copying API tokens
[Screen] Initial Image Build Screen
[Screen] Image Build Information
[Screen] List of Build Jobs
[Screen] Code Repository Operations
[Screen] Code Repository Operations After Writing, Click "Add a Build operation" Button
[Screen] User Task Execution Information for Maven Build
[Screen] User Task Execution Information during ant Build
[Screen] User Work WORK VOLUME
[Screen] After writing the user Work, click the "Build image Work" button.
[Screen] Write Dockerfile
[Screen] Completion of writing the build image work
[Screen] Build Image Creation Popup Window
[Screen] Run Build Image
[Screen] Build Image Execution Log
[Screen] build Image complete
[Screen] File (FTP) Work
[Screen] Calling REST Work
[Screen] Calling REST Work Screen - HEADER
[Screen] Script Work
[Creating a Workspace]
What is Cocktail Open API? (OpenAPI Document - R4.8.1 - EN)
[Available API List]
Cluster Registration (Cocktail Cloud Online - EN)
Integrated Monitoring (Cocktail Cloud Online - EN)