https://cocktailcloud.io/
Cocktail Cloud is an all-in-one container management platform. With the widespread adoption of cloud technology, there's a growing demand not just for infrastructure management but also for application and service management. Traditional development and operation methods are limited in fully leveraging the advantages of the cloud. Particularly in the realm of applications, there's an increasing demand for automation, efficiency, and integrated management, including continuous integration/deployment (CI/CD), migration, and the establishment of multi/hybrid cloud infrastructures.
The proliferation of container technology is natural in this context. Many companies have already adopted container technology, and this trend continues to grow. Containers package applications or services into independent and executable units, providing the same development and operation experience regardless of the infrastructure environment. Therefore, standardizing cloud management from infrastructure to services can reduce development and operation efforts. Containers offer the advantage of managing multi/hybrid clouds under a consistent environment.
Cocktail Cloud applies the benefits of containers to cloud management, streamlining development and operation tasks and providing a platform for implementing single or multi/hybrid cloud strategies.
Key features of Cocktail Cloud include:
Automation of pipelines from code to build, deployment, and updates.
Workload (service)-centric container management: packaging, lifecycle, resources, etc.
Full-stack monitoring: monitoring the status and resources from infrastructure to containers. Alert management.
Multi/hybrid cluster provisioning and management: bare metal, private/public clouds.
What is Cocktail Cloud? What are its features and advantages?
Cocktail Cloud is a platform built on Kubernetes, offering the functionality and APIs needed to build, deploy, monitor, and operate cloud-native applications. Many companies consider adopting Kubernetes because of the growing importance of cloud-native technologies such as containers, microservices, and serverless architectures.
Cloud-native applications improve continuity and efficiency in development and operation, and ensure high availability through automated fault response, load-based scaling, and more. However, adopting and operating Kubernetes is itself a major challenge for enterprises, due to the difficulty of adapting to new technologies and the complexity of management.
Cocktail Cloud provides an integrated platform with all the functionalities and components required for building and operating Kubernetes and cloud-native applications. Enterprises can save time and effort during initial adoption and seamlessly manage and scale thereafter.
While the number of companies adopting Kubernetes is increasing, there's a significant burden on operating and managing open-source installations, updates, and adapting organizations to new technologies. Additionally, setting up components like monitoring, networking, and security that Kubernetes doesn't inherently provide requires additional effort. Cocktail Cloud offers automated tools for configuring and scaling Kubernetes clusters, simplifying cluster management, including upgrades and node expansion. This leads to reduced efforts in initial setup and ongoing management. Cocktail Cloud extends Kubernetes configurations through addons, providing components like monitoring, networking, GPU support, and security. These addons also come with automated installation/update functionalities.
Enterprises have various reasons for using multi-cluster setups, such as network isolation for security, separate operations for production and development systems, and leveraging public clouds. There's also a growing trend of using multiple clusters and different Kubernetes distributions simultaneously. Cocktail Cloud provides an environment for operating and managing multiple clusters from a single control plane. It supports the construction and management of multi-clusters across diverse infrastructure bases, including private and public clouds, as well as data centers.
Physical equipment (bare-metal) based clusters
Virtualization-based private clouds: OpenStack, VMware, Citrix Hypervisor, etc.
Public Clouds: GKE (GCP), EKS (AWS), AKS (Azure), etc.
Enterprises require work environments where clusters and necessary resources are allocated or shared based on the roles of organizations or teams. Particularly in application service development and operation, it's common for dedicated teams to be responsible, and unique computing resources may be required based on the characteristics of applications. Cocktail Cloud provides independent workspaces for organizations or teams, allowing allocation and management of necessary computing resources (clusters). Beyond basic computing resources like CPU, GPU, Memory, Volume, cloud-native applications also require resources for development/operation such as container image registries and automation pipelines. Resource allocation and management, as well as permission management for workspace members, in a multi-tenancy environment can be easily configured and managed within independent workspaces.
Managing multiple clusters and a multi-tenancy environment can be complex and challenging. Cocktail Cloud addresses these issues through various integrated management features. It allows monitoring and managing the status of enterprise-wide applications and infrastructure resources (clusters, repositories). Teams or organizations can track the development/operation status of applications and services and handle resource requests and issues accordingly. Cocktail Cloud centrally monitors multi-cluster infrastructure resources, application statuses, networks, etc., providing real-time alerts/events and logs to effectively respond to faults or issues. Additionally, it offers a customizable integrated monitoring environment tailored to the needs of the enterprise through metric and rule extensions.
Ensuring continuity from application building to deployment and updates has become increasingly crucial. Swiftly responding to customer demands collected through various channels is a key factor in achieving business success. To address this, enterprises establish automated Continuous Integration/Continuous Deployment (CI/CD) pipelines. Cloud-native applications offer advanced technologies for building automated pipelines compared to before. Leveraging these advancements, Cocktail Cloud provides various functionalities for establishing and managing automated CI/CD pipelines. Enterprises can tailor pipelines according to the characteristics of their applications and development/operation environments. Additionally, they can provide DevOps platforms for teams or organizations, encompassing operations and monitoring.
Security is a critical management factor for enterprises. Especially, managing authorized user access and permissions to the infrastructure (e.g., clusters in the case of Kubernetes) where applications are deployed and executed is a fundamental approach to address threats from unauthorized users. Cocktail Cloud issues access accounts and permissions to authorized users and manages risks through access periods and revocations. Additionally, it tracks the usage history of access accounts through audit logs, enabling swift responses such as cause analysis and blocking in case of security issues. Furthermore, it provides functionalities such as 'Security Policy Management' for policy formulation and application during container execution, and 'Image Scanning' to inspect vulnerabilities in container images.
"Workspace" is an independent working environment provided for a team or organization. Teams develop, operate, and monitor one or more designated applications within a workspace. One or more members are registered in the workspace to collaborate.
Resources necessary for deploying and operating applications are allocated to workspaces. Resource allocation is performed by the platform administrator and targets clusters and image registries registered in the platform.
Cocktail Cloud allocates cluster resources to workspaces through a mechanism called service mapping. A service map is an administrative unit that extends a Kubernetes namespace; more precisely, allocating a service map to a workspace allocates the underlying namespace.
Teams can be allocated service maps, or namespaces, which are isolated, independent spaces within a cluster, typically referred to as virtual clusters. Teams allocated with namespaces are responsible only for deploying and operating applications, while the cluster (infrastructure) is managed by a separate team. This method is suitable when teams are focused on application development and operation.
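At the Kubernetes level, such an allocation corresponds to a namespace, optionally constrained by a resource quota. A minimal sketch in plain Kubernetes manifests (names and limits are illustrative, not Cocktail Cloud specific):

```yaml
# Illustrative only: a namespace acting as a team's isolated "virtual cluster",
# with a ResourceQuota limiting the compute it may consume.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                 # hypothetical team namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 16Gi
    persistentvolumeclaims: "10"
```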
Each workspace is independently allocated a registry for storing and managing application container images. Teams or applications manage their images and configure automated pipelines accordingly.
In some cases, teams or applications may share common images. In such cases, a shared image registry can be allocated to the workspace. In this scenario, one or more workspaces will use the same shared image registry.
A workspace allocation is required for accounts with user-level permissions; accounts registered with admin privileges can additionally make use of any workspace.
Cluster is the infrastructure where containers run. Containers are the deployment units and execution processes of applications. Clusters provide computing resources (CPU, Memory, Storage, Network) necessary for container execution.
A cluster consists of nodes (physical or virtual machines) connected via a network. It is an architecture designed for distributed processing. When containers are deployed to a cluster, they are executed on appropriate nodes. This process, called scheduling, is managed by Kubernetes. Kubernetes is responsible for container scheduling and management within the cluster.
Clusters scale resources by adding nodes. If more resources are needed, nodes are added accordingly, and Kubernetes deploys and manages containers on the expanded nodes.
Container networking and storage for data storage are also components of a cluster.
Kubernetes is a container orchestration engine that runs containers in clusters and manages their lifecycle. Originally developed by Google, it is now maintained as a CNCF (Cloud Native Computing Foundation) project.
Kubernetes is installed on the cluster and is responsible for managing and providing resources required by containers based on the cluster infrastructure (nodes, network, storage).
Nodes are the compute machines that make up a cluster. They can be physical or virtual machines, each equipped with CPU, memory, and disk, connected via a network. Nodes are managed by Kubernetes for scheduling.
Nodes are divided into master nodes and worker nodes. Master nodes host the control plane components of Kubernetes and manage the cluster by communicating with worker nodes.
Worker nodes are where application containers are deployed. The number of worker nodes increases based on the number and capacity of applications. The Kubernetes scheduler on the master node is responsible for deploying containers to worker nodes.
Containers running on one or more nodes need to communicate with each other, which is managed by the container network.
Container networking is installed as a Kubernetes component. Kubernetes itself does not provide container networking; instead it defines a standardized interface, the Container Network Interface (CNI), through which external providers supply plugins. Examples of open-source CNI plugins include Flannel, Weave Net, and Calico.
Cocktail Cloud offers options to configure the cluster's CNI plugin.
External access to containers is handled by the ingress controller. It routes external traffic to containers based on hostnames and paths. Routing rules are configured for each application and applied to the ingress controller.
The ingress controller is a Kubernetes component. The NGINX ingress controller is the most commonly used, and various third-party ingress controllers are also available.
Cocktail Cloud offers options to configure the ingress controller.
Cluster storage provides persistent volumes for container data storage.
Since containers can be rescheduled to different nodes in case of node failure or resource shortage, storing container data on nodes can be problematic. Therefore, a separate volume called a persistent volume is needed to store and manage data safely.
Kubernetes creates and provides persistent volumes through storage classes. When configuring the cluster, an appropriate storage class for storage must be installed.
Cocktail Cloud provides storage classes as addons, allowing users to select and automatically manage suitable storage classes.
Besides networking and storage, Kubernetes has components to extend its functionality, referred to as addons.
These addons provide additional capabilities to Kubernetes clusters beyond container management. Examples include monitoring and service meshes.
Cocktail Cloud offers various Kubernetes extension components as addons. They are automatically managed from installation to upgrade, and users can choose and use the required addons.
The platform is the fundamental unit for using Cocktail Cloud. All functionalities are accessible through the platform. Users perform application development, operation, and management tasks after logging into the platform, depending on their permissions.
The platform consists of one or more workspaces, which are independent working environments provided for teams or organizations. Companies can configure workspaces for teams within the platform and allocate the necessary resources to provide application development and operation environments. The platform integrates and manages workspaces, applications, and infrastructure resources.
The platform registers and manages clusters (infrastructure resources) used by applications. It allocates and manages cluster resources on a per-workspace basis, either for the entire cluster or by namespace. Applications managed by teams are serviced through the allocated resources.
The platform comprehensively monitors and manages the overall status of applications developed and operated in all workspaces. It manages applications based on their configuration, status, resource usage, and performs tasks such as resource scaling and fault response as needed.
Clusters operate based on Kubernetes. The platform provides necessary functionalities for cluster operation, such as managing Kubernetes state and versioning.
In addition to resource and status management, the platform also provides integrated management functionalities for user management, security, etc.
The platform centrally monitors the status and resources of multiple clusters and applications. It collects metric data for each cluster infrastructure and application, providing real-time monitoring and analysis capabilities.
In addition to collecting resource and status data, it also collects events and provides notifications based on predefined rules. It detects anomalies in advance, takes appropriate actions, and performs fault analysis and resolution when issues arise.
The platform provides integrated dashboards with various charts for monitoring and analysis purposes.
The platform has a unique identifier (ID). Users log in to the platform using this ID. Additionally, users can set the platform name and logo image to represent a unique identity.
The platform holds Cocktail Cloud product license information. This license, along with purchaser information, is managed within the platform, with a designated platform administrator acting as the representative.
Cloud accounts required for operating clusters in public clouds are also managed as platform information. These accounts are utilized for managing cloud infrastructure and authentication information.
The Service Map is a unit that configures applications and manages various resources. Kubernetes manages applications at the Namespace level. Namespaces are a way of logically dividing clusters to deploy and manage containers, serving as a kind of virtual cluster. The Service Map, provided by Cocktail, is a management space for applications based on namespaces, offering additional management features.
One of the main resources in the Service Map is Workloads, which deploy and manage containers. Other resources include persistent volumes, configuration information, and service exposure.
Pods are resources that deploy and execute containers, composed of one or more containers. They consist of a main container responsible for application logic processing and optional sidecar containers for auxiliary tasks. While most cases require only the main container, additional sidecar containers can be added for functions like log collection, backup, or proxy.
Containers within a pod share the same network (localhost) and volumes, making it easy to scale by adding containers.
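A minimal sketch of such a pod, with a main container and a log-collecting sidecar sharing a volume (image names and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-log-sidecar
spec:
  containers:
  - name: web                      # main container: application logic
    image: nginx:1.25
    volumeMounts:
    - name: logs
      mountPath: /var/log/nginx
  - name: log-collector            # sidecar: reads logs written by the main container
    image: busybox:1.36
    command: ["sh", "-c", "tail -n+1 -F /logs/access.log"]
    volumeMounts:
    - name: logs
      mountPath: /logs
  volumes:
  - name: logs
    emptyDir: {}                   # shared scratch volume, lives as long as the pod
```

Adding another sidecar is just another entry under `containers`, which is why scaling a pod's auxiliary functions is straightforward.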
Pods can be created independently, but typically, they are managed through workloads responsible for deployment, updates, and lifecycle management.
Workloads are Service Map resources responsible for the deployment and lifecycle of pods. They manage the entire lifecycle of pods, including how they are deployed and executed, status monitoring, and restarting in case of issues.
In Cocktail, users interact with workload settings through a web UI console, minimizing errors caused by incorrect configurations.
Even when inputting through the web UI console, users can specify almost all detailed configuration items related to workloads defined in Kubernetes.
Workloads are categorized into several types, each with differences in how pods are deployed, executed, and managed.
Within a namespace, various types of workloads can be executed, and in some cases, there can be so many workloads that it becomes difficult to understand them all at a glance.
Organizing workloads into workload groups allows for a clear overview of the state of workloads by displaying them according to workload groups.
Deployment workloads replicate the same pod multiple times to provide stable service even if some pods have issues. They are mainly used for stateless application configurations like web servers, where data management is not required. This is because replicated pods have the same container, making them unsuitable for data management.
Deployment workloads also support rolling updates, replacing pods sequentially to perform updates without affecting services.
They also support autoscaling, automatically increasing the number of pod replicas when CPU or memory usage exceeds a configured threshold relative to the requested resources.
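The pattern described above can be sketched in plain Kubernetes manifests: a replicated deployment with a rolling-update strategy, plus a horizontal autoscaler (names, images, and numbers are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # identical pod replicas for availability
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate            # replace pods gradually, without downtime
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
        resources:
          requests:
            cpu: 100m              # baseline the autoscaler measures utilization against
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80     # add replicas when average CPU use exceeds 80%
```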
StatefulSet workloads deploy replicated pods in which each replica has a distinct identity and role. They are suitable for data storage and management applications such as databases, key-value stores, and big data clusters. Each pod is assigned a unique name, allowing tasks to be coordinated through pod-to-pod communication, and each pod uses its own persistent volume to store and manage data.
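In plain Kubernetes terms, the stable per-pod identity and per-pod volume are what distinguish a StatefulSet from a Deployment. A sketch (image and sizes illustrative):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db                  # headless service giving each pod a stable DNS name
  replicas: 3                      # pods are named db-0, db-1, db-2
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: db
        image: postgres:16         # illustrative database image
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:            # each replica gets its own persistent volume
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 10Gi
```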
DaemonSet workloads are Service Map resources that deploy and manage daemon process pods running continuously in the background. Examples of background tasks include log collection and monitoring data collection on each node.
Job workloads deploy pods to process tasks in a one-time execution. They are mainly used for batch job processing such as data transformation and machine learning.
CronJob workloads are similar to job workloads, but allow jobs to be executed on a schedule or periodically. They use a schedule format similar to Linux's crontab.
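A minimal sketch of a scheduled job in plain Kubernetes, using crontab syntax (name, image, and command are illustrative):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"            # crontab syntax: every day at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure # rerun the pod if the batch task fails
          containers:
          - name: report
            image: busybox:1.36
            command: ["sh", "-c", "echo generating report"]
```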
To serve applications externally, pods need to be exposed to external traffic. The Service Map exposes pods to the external traffic through service exposure resources.
Service exposure resources specify pods to expose based on labels. Therefore, even replicated pods with the same label can be specified in one service exposure. In this case, the service exposure performs load balancing to send traffic to one of the specified pods.
Service exposure resources are assigned a fixed IP address, which is a private IP address used within the cluster. This is because pod IP addresses change on restart, which can cause issues if pods are accessed directly. Therefore, the fixed address of the service exposure is used to connect to the specified pods.
Service exposure is categorized based on the exposure method.
Exposing services with a cluster IP allows access only within the cluster. It is used for communication between pods via fixed IP addresses within the service map.
Node port exposes services using the node's IP address. External access to pods is possible using the exposed node port (Node IP: Node Port). Node ports are assigned to all nodes in the cluster, allowing access to pods from any node. Typically, all nodes in the cluster are connected to an L4 switch to consolidate access addresses.
Node ports are assigned from the range 30000 to 32767, set during cluster installation; this range can be user-configured. A port is either assigned automatically or specified explicitly when the service is exposed.
When the cluster is configured in a cloud environment, a load balancer can be automatically created to expose services. Pods are exposed via node ports, and the created load balancer connects pod execution nodes with ports. External access is possible using the load balancer's address and port. Load balancer service exposure is only possible on supported cloud platforms like AWS, Azure, and GCP, configured during cluster installation by cloud providers.
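The exposure methods above are all variants of one Kubernetes Service resource, differing only in `type`. A sketch of a node-port exposure (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: NodePort                   # ClusterIP (default), NodePort, or LoadBalancer
  selector:
    app: web                       # traffic is load-balanced across pods with this label
  ports:
  - port: 80                       # fixed cluster-internal (ClusterIP) port
    targetPort: 8080               # container port on the selected pods
    nodePort: 30080                # must fall within the 30000-32767 range
```

Setting `type: LoadBalancer` instead would, on a supported cloud, provision an external load balancer in front of the node ports automatically.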
Apart from service exposure, the Service Map also has Ingress resources for external pod access. Ingress exposes pods to the outside world via hostnames/paths (e.g., abc.com/login, cdf.org/). It functions similarly to an L7 switch.
To use Ingress, an ingress controller must be installed in the cluster. The controller receives external traffic and forwards it to pods according to the routing information defined in the Ingress resource (hostname/path -> cluster IP service). In other words, the ingress controller itself is exposed to the outside, while pods are exposed internally through cluster IP services that the Ingress rules route to.
In Kubernetes, the ingress controller itself runs as pods. Therefore, when it is installed in the cluster, it must be exposed via a node port or load balancer service.
Note that Ingress routes traffic by hostname/path, so those hostnames must be registered in an internal or external DNS server and resolve to the ingress controller's address. Cocktail Cloud provides default configurations for Ingress usage, eliminating the need for additional installation and setup.
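A sketch of an Ingress rule in plain Kubernetes, reusing the document's `abc.com/login` example (service name and port are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web
spec:
  rules:
  - host: abc.com                  # hostname must resolve to the ingress controller
    http:
      paths:
      - path: /login
        pathType: Prefix
        backend:
          service:
            name: web              # a ClusterIP service in front of the pods
            port:
              number: 80
```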
In cases of high external traffic to pods, dedicated Ingress nodes are sometimes configured in the cluster. These nodes only have Ingress controllers deployed and are replicated for high availability. They provide scalability by adding Ingress nodes as traffic increases.
When pods need to store and manage data, persistent volumes are necessary. Pods can restart at any time or be reassigned to healthy nodes in case of node failures, making it impossible to use node disk for data storage.
Persistent volumes ensure data integrity even when pods are restarted, reassigned, or deleted because they are managed independently of pod lifecycles. They use separate storage configured with the cluster.
Service Map's persistent volume resources select cluster storage for creation. Created persistent volumes are mounted to pods and used by containers. Persistent volumes are categorized into shared volumes, which can be mounted by multiple pods, and single volumes, which can be mounted by only one pod.
Persistent volumes require storage configured in the cluster. NFS storage is commonly used, supporting any storage that supports the NFS protocol.
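In plain Kubernetes, a pod requests such a volume through a PersistentVolumeClaim against a storage class. A sketch (the storage class name is hypothetical and depends on what is installed in the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  storageClassName: nfs-client     # hypothetical storage class, e.g. NFS-backed
  accessModes:
  - ReadWriteMany                  # shared volume: mountable by multiple pods
  resources:
    requests:
      storage: 5Gi
```

Using `ReadWriteOnce` instead would make this a single volume, mountable by only one pod at a time.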
When deploying a web server as a container, you typically use configuration files for server execution. When pods are executed with separate configurations, these settings are managed as configuration resources. While it's possible to bake configuration files into the pod's container image, this would require recreating the image whenever the configuration changes.
Configuration information is created and managed within the service map, and can be mounted to pods' containers as volumes or files. Depending on container implementation, it can also be passed as environment variables. An advantage is that configuration changes can be made and reflected even while pods are running.
Configuration resources are categorized into ConfigMaps and Secrets. Both manage configuration information, but Secrets are intended for sensitive data such as database connection details: their values are stored base64-encoded and can additionally be encrypted at rest.
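A sketch of both resource kinds in plain Kubernetes (names and values are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: web-config
data:
  nginx.conf: |                    # can be mounted into a pod as a file
    worker_processes 2;
---
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                        # stored base64-encoded; enable encryption at rest for stronger protection
  DB_PASSWORD: example-password    # illustrative value only
```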
The pipeline resource in the service map updates container images for workloads. Upon workload deployment completion, a pipeline is automatically created to update container images in pods.
Container images deployed by workloads in the service map fall into two types: images generated by Cocktail Cloud builds and images registered in external registries.
Images generated from Cocktail Cloud builds undergo the entire process from image creation to workload deployment automatically whenever application code changes (refer to 'Builds/Images' section for Cocktail Cloud builds).
External registry images are updated via replacement, where the pipeline performs automated deployment only.
The catalog in the service map bundles one or more workloads associated with the service map for deployment and updates. It's primarily used for distributing and updating open-source software.
Cocktail Cloud provides many commonly used open-source packages in its catalog. Users can search for desired packages in the catalog and automatically deploy them to the service map. (Refer to the 'Catalog' section for Cocktail Cloud's catalog).
Packages deployed to the service map can perform state monitoring and automatic updates.
Security is a crucial aspect of enterprise cloud environments, with three main components in cloud-native setups:
Cluster access authentication and authorization refer to the permissions granted to authorized users to access the cluster and manage resources as needed. Users accessing the cluster have user accounts, and resources include applications and data. Administrators authorize user access and grant appropriate permissions for resource management, thereby managing cluster security.
In Cocktail Cloud, users can manage allocated clusters via GUI within workspaces, eliminating the need for direct cluster access for management. However, if using command-line tools or external CI/CD systems, a cluster user account is necessary. Administrators issue cluster accounts to users in such cases.
Cocktail Cloud provides integrated cluster account management, allowing users to access multiple clusters with a single user account and manage resources based on permissions. Users receive cluster accounts from administrators and can manage clusters within the validity period.
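In plain Kubernetes, such scoped per-user permissions are expressed with RBAC roles and bindings, which account management layers like this one build upon. A minimal illustrative sketch (user, namespace, and role names are hypothetical):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: app-operator
  namespace: team-a                # permissions scoped to one namespace
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "update"] # may view and update, but not delete
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: app-operator-binding
  namespace: team-a
subjects:
- kind: User
  name: jane                       # hypothetical cluster account
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: app-operator
  apiGroup: rbac.authorization.k8s.io
```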
Audit logs record the commands (API) executed by users logged in as Cocktail users or cluster accounts, detailing which resources were affected. In case of incidents or security issues, audit logs can be traced to analyze the root cause.
Cocktail Cloud offers the capability to collect and trace both platform (Cocktail Cloud features) and cluster (Kubernetes) audit logs.
Pod security policies control permissions, node access, OS security settings, and so on during container execution. Typically these security settings are defined per pod, but enterprises need centralized control: different security settings for each team or organization can lead to unforeseen vulnerabilities.
Pod security policies can enforce security settings at the cluster or application level. Enterprises can enforce security policies based on their existing security policies.
Cocktail Cloud provides features to configure and apply security policies.
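In plain Kubernetes, the per-pod settings that such policies govern are expressed as a `securityContext`. A sketch of a hardened pod (image and values are illustrative; the container image must support running as a non-root user with a read-only root filesystem):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  securityContext:
    runAsNonRoot: true             # refuse to start containers that run as root
    runAsUser: 1000
  containers:
  - name: app
    image: example/app:1.0         # hypothetical image built to run as non-root
    securityContext:
      allowPrivilegeEscalation: false
      readOnlyRootFilesystem: true
      capabilities:
        drop: ["ALL"]              # remove all Linux capabilities
```

A cluster-level policy engine can then reject any pod whose spec violates these constraints, rather than relying on each team to set them correctly.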
Container execution images may contain multiple open-source components. For example, a base image is publicly available on the internet and serves as the basis for container image creation by adding user-specific components. If a base image contains malicious code, it poses a security risk.
Cocktail Cloud's image registry offers features to inspect images for malicious code. Additionally, it provides additional checks for outdated component versions or vulnerable code.
Cloud-native applications leverage cluster and container technologies, where clusters manage infrastructure, and containers handle application deployment and execution. Consequently, the monitoring targets differ from traditional applications.
Clusters consist of nodes, which are computing machines with CPU, GPU, Memory, and Disk, along with an operating system (OS) and container runtime for executing containers. Hence, monitoring of physical resource usage and performance necessary for container execution is done by collecting data (referred to as 'metrics' in monitoring) at the node level.
Container management is handled by Kubernetes, composed of multiple components installed on the cluster's master node. Monitoring the master node and installed components becomes necessary in case of Kubernetes failures, as container management becomes impossible. Monitoring involves tracking resource usage on the master node and the status of installed components.
Nodes and containers within a cluster communicate with each other. Monitoring network usage targets both the physical network and the logical network controlled by Kubernetes.
While cluster monitoring focuses on infrastructure resources required for container execution, container monitoring encompasses resource usage, execution status, and lifecycle monitoring. It also includes monitoring aspects such as communication volume between containers and request processing times.
Container monitoring provides metrics through the Kubernetes API and the Service Mesh API (for configuring container-to-container communication).
Notifications occur when monitoring metric data meets certain conditions defined by notification rules. These rules can be both predefined and user-defined.
Events occur when Kubernetes resources change. For instance, events are triggered by pod creation, execution, update, or deletion. Cocktail Cloud collects and provides events as notifications.
Both notifications and events provide real-time information during operation, facilitating proactive measures against application and cluster state changes and failures.
Kubernetes logs comprise three main types. Firstly, logs recorded by the Kubernetes master provide information necessary for master operation. Secondly, container logs are logs displayed on standard output (STDOUT/STDERR) during container execution. Lastly, application logs are logs recorded in separate files by containers in addition to standard output.
Cocktail Cloud collects all three types of logs, providing an environment for log retrieval and analysis.
Containers are deployed and executed as images. Container images are specified by name in the pod's container spec in the format of image_name:tag (e.g., nginx:latest). When using Docker Hub, the registry address where the image is located is often omitted.
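For example, a pod's container spec might reference images like this (a minimal sketch; all names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      # Docker Hub image: the registry address is omitted and resolves to docker.io
      image: nginx:latest
    - name: app
      # Private registry: the registry address is written explicitly
      image: registry.example.com/myteam/app:1.0.0
```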
Cocktail Cloud provides independent image registries for each workspace. It also offers automated image building through the 'Build' feature.
An image registry stores container images and provides them when the image is required for pod execution. The storage/provisioning interface of image registries is standardized. Typically, the 'Push' API is used to store images in the registry after creation, while the 'Pull' API is used to retrieve images during container execution.
In Cocktail Cloud, image registries can be allocated for each workspace, serving as independent registries for teams. Additionally, image registries can be shared among teams.
When deploying pods from the service map using an allocated image registry, the image configuration is performed by selecting a 'build' rather than specifying the image name. A build automates the process of creating images, and the latest image can be deployed to pods based on the selected build's 'tag.'
A build is a resource in Cocktail Cloud that automates the process of generating container images. Builds can have one or more tags, with each tag defining a different creation process. Tags can be seen as image versions. The process of generating images is called the build flow.
Builds store generated images in the image registry allocated to the workspace. An image therefore corresponds to a build, and each image tag (version) has its own build flow. The structure is Image Registry -> Image (Build) -> Tag (Build Flow).
Users deploy builds (images) and tags (versions) in pods for workloads. The image generated by the build flow of the selected tag is then deployed and executed. The pipeline in the service map automates the entire process of updating images by executing the build flow of the image tag after code changes.
The build flow automates the process of generating images for a specific tag (version). Each step executed by the build flow for image creation is called a 'task.'
Cocktail Cloud offers various types of default tasks, and users can create custom tasks to configure the build flow. Default tasks include downloading code from code repositories (Git), executing user-defined scripts, and building images using Dockerfiles. Additionally, tasks for integrating with external systems' APIs and FTP-based file transfers are provided.
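For instance, the Dockerfile-based build task consumes an ordinary Dockerfile. A minimal sketch (the base images and paths are illustrative, not Cocktail-specific):

```dockerfile
# Build stage: install dependencies and compile the application
FROM node:20 AS build
WORKDIR /src
COPY . .
RUN npm ci && npm run build

# Runtime stage: copy only the built artifacts into a slim image
FROM nginx:stable
COPY --from=build /src/dist /usr/share/nginx/html
```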
Users can develop and add/extend tasks to the build flow. User-defined tasks need to be containerized before adding them to the build flow.
Tasks in the build flow are executed on the 'build server.' Cocktail Cloud provides options to adjust the capacity of the build server. For build tasks requiring substantial resources, the capacity of the build server can be adjusted.
Typically, applications consist of one or more workloads (containers), especially when deployed on Kubernetes, involving various related resources such as service exposure and volumes. This complexity makes application deployment and upgrades challenging.
Catalogs address these issues by bundling multiple application resources into a single unit called a package and deploying this package with user settings when necessary. Upgrades are also automated based on versioning. There are several open-source tools that support package creation, deployment, and management, with Helm being widely used, especially as an official Kubernetes project.
Cocktail Cloud's catalog offers the ability to search for such packages and automatically deploy them to service maps. These packages are in the form of Helm charts, which are supported by a wide range of open-source projects. Open-source packages are registered and managed in package repositories. There are numerous public package repositories where various open-source packages are available, and the catalog can search all these repositories for packages.
Packages deployed through the catalog are managed in the package menu of the service map. It provides monitoring and status of deployed packages and enables upgrades to newer versions.
Before creating the provider, note that clusters created through provisioning must be deleted from Cocktail, not from the EKS console.
Cocktail continuously monitors the status of provisioned clusters, so if you delete a provisioned cluster from the EKS console, Cocktail will detect the change and recreate the cluster.
1) To create a cloud provider for cluster creation, click on the "+ Create" button in the [Provisioning] - [Cloud Providers] tab, and select AWS.
2) Register AWS authentication information in the basic information and click the "Save" button.
3) Confirm successful registration.
1) Navigate to the [Provisioning] - [Templates] tab, then click the "Start" button under the EKS (Elastic Kubernetes Service) item in the templates section.
2) Select the previously created cloud provider information, choose the required version, and click "Save."
3) Once saved, the cluster status changes to "CREATING" as it is being provisioned.
4) Click on the "CREATING" status to monitor the cluster creation progress.
5) Click on the [Activity] tab to check the ongoing installation details.
6) Confirm the status changes to "RUNNING" when the cluster is successfully created.
To use the provisioned cluster, the addon-manager and a storage class must be deployed.
An Amazon node group can be created only after cluster provisioning is complete.
1) Once the cluster configuration is complete, select the cluster, go to the [Resources] tab, and click "+ Add Node Group."
2) Enter the required information for the node group to be created and click "Save."
3) When the node group addition starts, the status is displayed in the "Node Group" section.
4) As the node group addition progresses, the status changes to "ACTIVE."
5) Check in the [Infrastructure] - [Clusters] tab if the cluster status and the number of nodes are displayed correctly.
The Amazon EBS CSI Driver can be installed only when at least one node group exists.
1) Once the node group configuration is complete, select the cluster, go to the [Resources] tab, and click "+ Install Amazon EBS CSI Driver."
2) The installation takes some time while resources are created; confirm completion afterward.
3) Once the Amazon EBS CSI Driver installation is complete, the status is displayed in the "Amazon EBS CSI Driver" section.
4) Confirm the installation of the Amazon EBS CSI Driver in [Workloads] - [Deployments].
The Cluster Autoscaler can be installed only when at least one node group exists.
1) Once the node group configuration is complete, select the cluster, go to the [Resources] tab, and click "+ Install Cluster Autoscaler."
2) The installation takes some time while resources are created; confirm completion afterward.
3) Once the installation is complete, the status is displayed in the "Cluster Autoscaler" section.
4) Confirm the installation of the Cluster Autoscaler in [Workloads] - [Deployments].
Item (* is required) | Content |
---|---|
Account Name* | Enter the name for the registered AWS account |
Description | Enter the description for the AWS account |
AWS Access Key ID* | Enter the AWS Access Key ID |
AWS Secret Access Key* | Enter the AWS Secret Access Key |
AssumeRole ARN | Enter the AWS AssumeRole ARN value |

Item (* is required) | Content |
---|---|
Account Name* | Select the registered cloud provider |
Region* | Select the region for the cluster to be created |
Cluster Name* | Enter the name of the cluster to be created |
Version* | Select the version of the cluster to be created |

Item (* is required) | Content |
---|---|
Node Group Name* | Enter the name of the node group to be created |
Instance Type* | Select the instance type (resources) for the nodes |
Disk Size (GiB)* | Enter the disk capacity for the nodes in the node group |
Desired Node* | Enter the desired number of nodes in the node group |
Min Node Count* | Enter the minimum number of nodes when scaling in |
Max Node Count* | Enter the maximum number of nodes when scaling out |
Please note: when you have previously registered an existing EKS cluster in Cocktail and need to delete it, simply delete the cluster from the AWS console.
1) Register the created EKS cluster with Cocktail Cloud using the following procedure.
1) Click on the "+ Register cluster in use" button in the upper right corner of the [Infrastructure] - [Clusters] tab.
1) Select 'Amazon Web Service' as the provider in the cluster configuration form.
2) Choose the type attribute as 'EKS'.
3) Once these two attributes are selected, the Cluster ID setting will be displayed in the cluster configuration form.
4) Choose the region as the one where EKS was created (Example: Seoul (ap-northeast-2)).
The Provider and Type fields are mandatory.
1) Click the "Save" button in the menu bar to save the cluster registration.
2) After registration is complete, the cluster list will be displayed.
1) To use PV/PVC, you need to create a new storage class.
1) Click on the "+ Create" button at the top right corner of the [Storage] - [Storage Classes] tab, then select AWS EBS CSI.
1) Fill out the settings form accordingly, then click the "Save" button.
1) You can now verify the created storage class.
Item (* is required) | Content |
---|---|
Cluster ID* | Cluster name managed by AWS EKS - Retrieve from the Kubernetes config file for the cluster to be registered (~/.kube/config) - Alternatively, check AWS Console > EKS > Clusters and enter the information |
ID* | Same as the cluster ID - Must not overlap with the IDs of other already registered clusters |
Kubernetes Version* | Enter the Kubernetes version of the cluster to be registered |
Cluster Name* | The cluster name to be used in Cocktail Cloud |
Description | Description of the cluster to be registered |
Master address* | Kubernetes master API address - Alternatively, check AWS Console > EKS > API server endpoint and enter it |
Node Port Host Address* | Enter the public IP of the Kubernetes cluster |
Node Port Range* | Enter the port range available in Kubernetes, 30000-32767 |
Cluster CA Certification* | Cluster CA certificate - Check AWS Console > EKS > Certificate authority and enter it |
Access Key ID* | ACCESS_KEY of the AWS IAM user with access to the cluster to be registered - Retrieve from AWS Console > IAM > Users |
Secret Access Key* | SECRET_ACCESS_KEY of the AWS IAM user with access to the cluster to be registered - Retrieve from AWS Console > IAM > Users |

Item (* is required) | Content |
---|---|
Name* | Storage class name |
Description | Description of the storage class |
Base Storage | Default storage settings for use in this cluster |
Volume Binding Mode* | Volume binding mode selection - Immediate: the volume is provisioned when the PVC is created - WaitForFirstConsumer: the volume is provisioned when a pod using the PVC is created |
Reclaim Policy* | RETAIN: the storage remains when the PVC is deleted and is reattached upon recreation - DELETE: the storage is deleted along with the resource |
Parameters | Name/value pairs. For NFS, enter the server-side (target) IP as "server" and the server-side (target) mount path as "share" |
Mount Options | Value-only entries. Enter "hard" and the NFS version (nfsvers): 4.1 for the default OS, 3 for NAS or other storage |
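Put together, a storage class created from this form corresponds to a manifest like the following sketch (the provisioner name, server IP, and share path are placeholders):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage
provisioner: example.com/nfs            # placeholder NFS provisioner
volumeBindingMode: WaitForFirstConsumer # provision when a pod is created
reclaimPolicy: Retain                   # keep storage when the PVC is deleted
parameters:
  server: 10.0.0.10                     # server-side (target) IP
  share: /exports/data                  # server-side (target) mount path
mountOptions:
  - hard
  - nfsvers=4.1
```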
1) To register the created NCPKS cluster with Cocktail Cloud, follow these steps
1) Click the "+ Cluster Registration" button located in the upper right corner of the [Infrastructure] - [Clusters] tab.
1) Set the provider attribute in the Cluster Configuration form to 'Naver Cloud Platform.'
2) Choose the type attribute as 'NCPKS.'
3) Upon selecting these two attributes, the Cluster UUID setting will appear in the cluster configuration form.
4) Choose the region as the one where NCPKS was created (Example: Korea(KR)).
Provider and type fields are mandatory.
1) Click the "Save" button in the menu bar to initiate the cluster registration.
2) After registration is complete, the cluster list will be displayed, allowing you to verify the recently registered cluster.
Item (* is required) | Content |
---|---|
Cluster UUID* | After creating the cluster, check and retrieve the information from the Kubernetes config file (~/.kube/config) |
Cluster Name* | Enter the cluster name to be managed by Cocktail Cloud |
Kubernetes Version* | Enter the Kubernetes version of the cluster to be registered |
ID* | ID to be managed by Cocktail Cloud - Only lowercase letters, numbers, and three special characters (- . _) are allowed (example: eks-acornsoft-demo-cluster) - Must not overlap with the IDs of other already registered clusters |
Description | Description of the cluster to be registered |
Master Address* | Check the Kubernetes config file (~/.kube/config), as with the UUID |
Node Port Host Address* | Enter the public IP of the Kubernetes cluster |
Node Port Range* | Enter the port range available in Kubernetes, 30000-32767 |
Cluster CA Certification* | Check the Kubernetes config file (~/.kube/config), as with the UUID |
Access Key ID* | Enter the NCLOUD_ACCESS_KEY from Naver Cloud Portal > My Page > Account Management > API Key Management |
Secret Access Key* | Enter the NCLOUD_SECRET_KEY from Naver Cloud Portal > My Page > Account Management > API Key Management |
1) To register the created GKE cluster with Cocktail Cloud, follow these steps
1) Click the "+ Cluster Registration" button located in the upper right corner of the [Infrastructure] - [Clusters] tab.
1) Set the provider attribute in the Cluster Configuration form to 'Google Cloud Platform.'
2) Choose the type attribute as 'GKE.'
3) Upon selecting these two attributes, the authentication attribute will appear in the cluster configuration form.
4) Click the authentication button, and a Google authentication window will appear. Enter your Google account credentials.
Provider and type fields are mandatory.
The Google account used must be the one used when creating the GKE cluster to be registered.
1) Once authentication is complete, the authentication attribute will display 'Authentication Completed.'
2) Click the "Save" button in the menu bar to initiate the cluster registration.
The remaining attributes not entered will be automatically filled in based on the information from the GKE cluster after registration.
3) After registration is complete, the cluster list will be displayed, allowing you to verify the recently registered cluster.
Item (* is required) | Content |
---|---|
Project ID* | Select the Google Cloud project ID - The project is the one where the GKE cluster to be registered was created |
Cluster* | Select the GKE cluster to register |
Cluster Name* | Enter the cluster name to be managed by Cocktail Cloud (example: GKE Acornsoft Demo Cluster - 1.00.00) - Must not overlap with the names of other already registered clusters |
ID* | Enter the cluster ID |
1) To register a non-public cloud cluster with Cocktail Cloud, follow these steps
1) Click the "+ Register In-Use Cluster" button located in the upper right corner of the [Infrastructure] - [Clusters] tab.
1) Set the provider attribute in the Cluster Configuration form to 'Datacenter.'
2) Choose the type attribute as 'MANAGED.'
3) Choose the region as the one where the cluster was created (Example: Korea).
Provider and type fields are mandatory.
1) Click the "Save" button in the menu bar to initiate the cluster registration.
2) After registration is complete, the cluster list will be displayed, allowing you to verify the recently registered cluster.
Item (* is required) | Content |
---|---|
Cluster Name* | The name of the cluster to be managed in Cocktail Cloud |
Kubernetes Version* | Enter the Kubernetes version of the created cluster |
ID* | ID for management in Cocktail Cloud - Only lowercase letters, numbers, and three special characters (- . _) are allowed (e.g., eks-acornsoft-demo-cluster) - Must not overlap with the IDs of other already registered clusters |
Description | Description of the cluster to be registered |
Master address* | Enter the IP address shown by the command: sudo cat /etc/kubernetes/acloud-client.conf \| grep server |
Node Port Host Address* | Enter the public IP of the Kubernetes cluster |
Node Port Range* | Enter the port range available in Kubernetes, 30000-32767 |
Cluster CA Certification* | Enter the portion after "certificate-authority-data:" shown by the command: sudo cat /etc/kubernetes/acloud-client.conf \| grep certificate-authority-data |
Client Certificate Data* | Enter the portion after "client-certificate-data:" shown by the command: sudo cat /etc/kubernetes/acloud-client.conf \| grep client-certificate-data |
Client Key Data* | Enter the value shown by the command: sudo cat /etc/kubernetes/acloud-client.conf \| grep client-key-data |
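These values come from a kubeconfig-style file; the relevant fields look like the following sketch (all values are placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: my-cluster
    cluster:
      server: https://192.0.2.10:6443              # Master address
      certificate-authority-data: LS0tLS1CRUdJTi4uLg==  # Cluster CA Certification
users:
  - name: my-user
    user:
      client-certificate-data: LS0tLS1CRUdJTi4uLg==     # Client Certificate Data
      client-key-data: LS0tLS1CRUdJTi4uLg==             # Client Key Data
```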
To set up AWS IAM users and permissions for provisioning AWS resources, along with creating roles using custom trust policies that IAM users can assume to access resources, follow these steps
User Creation
1) Access the AWS Console and click on "IAM."
2) Click the "Create user" button in the top right corner of the IAM menu.
3) Enter the username.
4) In the "Permissions" options, select "Add user to group," click "Next," and complete the creation.
5) Verify that the user has been created successfully.
6) Copy the ARN (Amazon Resource Name) of the created user.
1) In the IAM menu, navigate to [Access management] - [Policies] and click the "Create policy" button.
2) Click on JSON in the policy editor and edit the policy as needed.
3) Set a name for the policy and click "Create policy."
1) In the IAM menu, go to [Access management] - [Roles] and click the "Create role" button.
2) Choose "Custom trust policy" as the trusted entity type, then click "Add" in the "Add trusted entities" section.
3) Add [Principal Entity Types] - [IAM users] & [AWS services].
IAM users : ARN (Amazon Resource Name) of the created user
AWS services : Name of the service you intend to use (e.g., eks)
4) Add the necessary permissions:
AmazonEBSCSIDriverPolicy
AmazonEC2FullAccess
AmazonVPCFullAccess
IAMFullAccess
EKSFullPolicy
5) Set a name for the role and click "Create role."
6) Verify the created role.
1) Click on the user with granted permissions, go to the [Security credentials] tab, and click "Create access key" at the top right of the "Access keys" box.
2) Click "Next" under the "Select" section, choose "Other," and click "Next."
3) Enter a description tag for the access key and click "Create access key."
4) Confirm the generated access key and secret access key.
5) Save the generated access key for later use.
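The custom trust policy used in the role-creation steps above might look like this sketch (the account ARN and service principal are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:user/cocktail-provisioner",
        "Service": "eks.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```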
This section introduced the process of configuring users and permissions to provision resources from the cloud provider. Currently, only AWS is supported; additional cloud providers may be added in the future.
As mentioned earlier, we explained the process of "Cluster Registration." You have successfully created a cluster and registered cluster information on the Cocktail Cloud platform, completing the infrastructure preparation. Now, let's create a user who can access the Cocktail Cloud platform.
1) Click the "+ Create" button in the upper right corner on the [Settings] - [User] tab.
1) Enter the user creation information below, and click the "Save" button to create the user.
1) Verify the newly created user in the user list screen.
Next, we will explain the process of creating a service map, which is required for workspace registration. Please proceed to the "Create Service Map" page.
We have previously created a user who can log in to the platform. Now, let's create a service map to register in the workspace.
1) Click the "+ Create" button in the upper right corner on the [Application] - [Service Map] tab.
After filling out all the details, select the "Save" button to create the service map.
1) Verify the newly created service map in the service map list screen.
Item (* is required) | Content |
---|---|
Name* | Enter the user's name |
ID* | Enter the user ID |
Role* | Select the permissions to assign to the user - Choose between "Admin" and "User" (refer to the "Security" tab) |
Department | Enter the user's department |
Description | Enter a brief description for the account |
Password* | Enter the user's password |
Confirm Password* | Re-enter the user's password |

Item (* is required) | Content |
---|---|
Cluster* | Select the cluster to register the service map |
Namespace* (Choose one) | Choose whether to create the service map in a new namespace or utilize an existing namespace |
Service Map Name* | Enter the name for the service map you want to create, typically using the same name as the namespace |
Namespace* | If selecting a new namespace, enter it in the "Namespace" field; if choosing an existing namespace, select from the list of namespaces currently created in the cluster |
Network Policy* | Choose whether to allow or block Ingress/Egress traffic. The recommended setting is "Allow All Ingress/Egress Traffic" |
Resource Allocation Usage | If checked, limits the resource usage of the service map |
Use Container Limit Range Configuration | If checked, restricts the resource usage of containers deployed in the service map |
Use Pod Limit Range Configuration | If checked, limits the resource usage of pods deployed in the service map |
Use Storage Limit Range Configuration | If checked, limits the resource usage of volumes requested by the service map |

Next, we will explain the process of creating an image registry, which is required for workspace registration. Please proceed to the "" page.
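For reference, the recommended "Allow All Ingress/Egress Traffic" choice corresponds to a Kubernetes NetworkPolicy like this sketch (the namespace name is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-all
  namespace: my-service-map
spec:
  podSelector: {}     # applies to all pods in the namespace
  policyTypes:
    - Ingress
    - Egress
  ingress:
    - {}              # allow all inbound traffic
  egress:
    - {}              # allow all outbound traffic
```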
By default, Cocktail Cloud provides Harbor (Registry) when configuring the platform. Additionally, it supports integration with external registries. You can register externally created registries in Cocktail, enabling image builds and deployments.
We have previously created a service map. Now, let's create a registry to register in the workspace.
1) Click the "+ Create" button in the upper right corner on the [Build Configuration] - [Container Registry] tab.
1) Enter the name of the registry to be created in the "Registry Name" field.
2) Input a description for the registry in the "Description" field.
3) Click the "Save" button in the upper right corner.
1) Verify the newly created registry in the registry list screen.
Azure Container Registry (ACR) is a container image storage and management service provided by Azure. ACR offers the various features needed to store and deploy Docker images.
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider to create.
1) Click the "+ Register" button and select "Azure ACR."
2) After registering Azure ACR authentication information in the basic information, click the "Test Connection" button.
In the "Registry" field, enter the name of the registry to be created.
In the "Description" field, enter a description for the registry.
Enter the EndPoint URL in the correct format.
Select the region of the registered registry.
Enter the Client ID and Client Secret.
Click "Test Connection" in the upper right corner to verify that the registry is available.
Click the "Save" button in the upper right corner.
1) Access the Azure Portal to retrieve the necessary information.
2) Click on "Management Groups" to efficiently manage access, policies, and compliance for subscriptions. (All services - Management and governance - Management groups)
2-1) Click the "Create" button to go to the creation screen.
2-2) Create a management group to be used internally.
3) Click on "Subscriptions" to create a set that encompasses all resources.
3-1) Click the "Add" button to go to the creation screen.
4) Click on "Resource Groups," a logical container for managing resources grouped together. (All services - Management and governance - Resource Group)
4-1) Click the "Create" button to go to the creation screen.
4-2) [Create Resource Group] - Choose a resource group name and region, then create.
5) Click on "Resources" to create resources that can be used for Azure services.
5-1) In the marketplace, search for "registry" and click on "Container Registry."
5-2) Click the "Create" button.
5-3) Register with subscription, resource group, registry name, and location (region).
6) Click on the created resource and click on "Access Keys."
Registry
: Registry Name
EndPoint URL
: Login Server
Client ID
: User Name
Client Secret
: Password
Amazon Elastic Container Registry (Amazon ECR) is a fully managed container registry that provides highly available and secure hosting for container images and artifacts. It allows you to deploy your applications reliably anywhere.
1) Move to [Build Configuration] - [External Container Registry].
2) Click the "+ Register" button in the upper right corner and select the provider you want to create.
1) Click the "+ Register" button, and then select "AWS ECR"
2) After entering the AWS authentication information in the basic details, click the "Test Connection" button.
In the "Registry" field, enter the name of the registry to be created.
In the "Description" field, enter a description for the registry.
Enter the EndPoint URL in the correct format.
Select the region of the registered registry.
Enter the name of the registered registry.
Enter the Access ID and Access Secret.
Click "Test Connection" in the upper right corner to verify that the registry is available.
Click the "Save" button in the upper right corner.
1) Access the AWS Console to retrieve the necessary information.
3) On the next screen, choose "Other" and click "Next."
4) Enter a description tag for the access key to be created and click the "Create access key" button.
5) Confirm the generated access key and secret access key values.
You can create both private and public repositories.
(Note: This guide is written based on the private repository in the 'us-east-1' region, and the process is the same for public repositories.)
1) Click the "Create Repository" button in the upper right corner.
2) Enter the repository name to create the repository.
(In Cocktail, builds use "Registry Address/Image Name," so be sure to create the repository as "Registry Name/Image Name.")
1) Retrieve the necessary information from the list of created repositories.
EndPoint URL
: The table below provides the necessary information
Registry
: Repository name created by the user
※ Permissions need to be granted separately for Private Registries.
1) Click on "Settings" in [Amazon Elastic Container Registry] for the [Private registry].
2) Click the "Generate Policy" button in the upper right corner of [Settings] - [Permissions].
3) Click "JSON" in the upper right corner, add the following items, and click "Save Policy" to save.
Sid: Permission name
Principal: Specify one or more AWS account IDs to grant permission; separate multiple accounts with commas.
Action: "ecr:*"
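The registry policy produced by these steps might look like this sketch (the account ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowPullPush",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::123456789012:root"
      },
      "Action": "ecr:*"
    }
  ]
}
```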
Item (* is required) | Content |
---|---|
Registry* | Enter the name of the registry to be created |
Description | Enter the description for the registry to be created |

Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Description | Enter a description for the external container registry |
Endpoint URL* | Enter the login server information |
Registry* | Enter the name of the already registered registry |
Client ID | Username |
Client Secret* | Password |

Item (* is required) | Content |
---|---|
Name* | Enter the name for the external container registry you want to register |
Description | Enter the description for the external container registry you want to register |
Endpoint URL* | Endpoint address of the external container registry |
Region* | Region of the registered registry |
Registry* | Name of the registered registry |
Access ID* | Access Key |
Access Secret* | Secret Access Key |

Category | EndPoint URL |
---|---|
Private | (AWS Account ID).dkr.ecr.(Region).amazonaws.com |
Public | public.ecr.aws/(User Alias) |
Docker Hub is a container registry built to enable developers and open-source contributors to discover, use, and share container images.
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider for creation.
1) Click the "+ Register" button and select "Docker Hub"
2) After registering Docker Hub authentication information in the basic information, click the "Test Connection" button.
In the "Registry" field, enter the name of the registry to be created.
In the "Description" field, enter a description for the registry.
Enter the EndPoint URL in the correct format.
Enter the name of the registered registry.
Enter the Access ID and Access Secret.
Click "Test Connection" in the upper right corner to verify that the registry is available.
Click the "Save" button in the upper right corner.
1) Log in to Docker Hub and click on "Repositories" at the top.
2) Click on "Create repository" in the upper right corner.
3) Create a namespace and the image to be generated.
Since Docker Hub creates the repository and image name together, the image name in the Build/Pipeline > Build section must match the Docker Hub repository name for the image build to work.
1) Click on the profile in the upper right corner and select "My Account."
2) Confirm the Access ID
Access ID
: User's nickname or email address
3) Click on the "Security" tab on the right, then click on "New Access Token" in the upper right.
4) Enter a description for the token and copy the Access Token upon issuance.
Access Token
: Copy Access Token
Google Container Registry provides secure, private Docker image storage on Google Cloud, compatible with widely used continuous deployment systems.
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider for creation.
1) Click the "+ Register" button and select "Google GCR."
2) After registering Google GCR authentication information in the basic information, click the "Test Connection" button.
In the "Registry" field, enter the name for the registry to be created.
In the "Description" field, enter a description for the registry.
Enter the EndPoint URL in the correct format.
Enter the registry name.
Enter the Access ID and Access JSON.
Click "Test Connection" in the upper right corner to verify that the registry is available.
Click the "Save" button in the upper right corner.
1) Log in to the Google Cloud Console.
2) In the Google Cloud Console, click on the "IAM & Admin" button.
3) Under the "IAM & Admin" tab, click on "Service Accounts," then click the "+ Create Service Account" button at the top center.
4) Set the name for the service account, then click the "CREATE AND CONTINUE" button.
5) Set the permissions for the service account to be created.
Owner permissions grant full access to most Google Cloud resources.
You can assign different permissions if needed, but access may be restricted depending on the permissions granted.
6) Click on the created service account, click "Add Key," then click "Create a new key," choose JSON format, and click "Create."
7) Verify that the JSON file has been generated locally.
Access JSON
: File contents
Project ID
: The name shown in the leftmost select box in the search bar.
Registry
: Repository name
Endpoint URL
: Region of the registry to be created
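To connect the pieces above: for GCR the Access ID is always the literal string "_json_key", the Access JSON is the entire contents of the downloaded key file, and the Project ID can be read straight out of that file. A minimal sketch with a fabricated key (real key files contain more fields):

```python
import json

# Fabricated service-account key, shaped like the JSON file from step 6).
key_json = """{
  "type": "service_account",
  "project_id": "my-sample-project",
  "client_email": "registry-bot@my-sample-project.iam.gserviceaccount.com"
}"""

key = json.loads(key_json)
access_id = "_json_key"         # fixed username for GCR JSON-key authentication
access_json = key_json          # the entire file contents go in the Access JSON field
project_id = key["project_id"]  # the Project ID referenced above
print(project_id)               # my-sample-project
```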
Docker Registry is a registry based on Docker Hub, operated for private use.
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider for creation.
1) Click the "+ Register" button and select "Docker Registry."
2) After registering Docker Registry authentication information in the basic information, click the "Test Connection" button.
Enter a name for the registry to be created in the "Name" field.
Enter a description for the registry in the "Description" field.
Enter the Endpoint URL in the correct format.
Enter the name of the registered registry in the "Registry" field.
Enter the Access ID and Access Secret.
Click "Test Connection" in the upper right corner to verify if the registry is available.
Click the "Save" button in the upper right corner.
1) Click on the profile in the upper right corner and select "My Account."
2) Confirm the Access ID
Access ID
: User's nickname or email address
3) Click on the "Security" tab on the right, then click on "New Access Token" in the upper right.
4) Enter a description for the token and copy the Access Token upon issuance.
Access Token
: Copy Access Token
'Cocktail Backup/Restore' backs up and restores Kubernetes cluster resources and persistent volumes.
Ensures safe backup and fast restoration.
Protection of all resources for quick restoration when needed.
Automated backup scheduled as per the defined intervals.
Adjustable retention periods for efficient operations.
It provides excellent portability.
By eliminating specific vendor dependencies, you can freely utilize it in a variety of environments.
Rapid restoration through redundant configurations in case of disasters.
Service expansion through backup and restoration.
Consistent UI and backup/restore status.
It allows easy management of multiple clusters through a unified user interface.
Users can conveniently monitor the backup and restoration status at a glance.
With Cocktail Backup & Restore, you can easily perform tasks such as:
Backing up and, if necessary, restoring clusters.
Cloning clusters.
Migrating cluster resources to other clusters.
Periodically backing up cluster resources for easy restoration to a previous state in case of unforeseen issues.
The Cocktail Repository is associated with object storage where backups are stored and manages connection states periodically.
Adding a cluster to the Cocktail Repository is used for cluster migration.
Cocktail supports integration with various object storage options.
AWS
GCP
Azure
MinIO (local storage)
Cocktail Backup captures the current state of Kubernetes resources and creates a restore point.
Necessary information for users to create a restore point, such as the protected target cluster, data storage, retention policy, schedule, etc., is stored, making backup management easy.
Stored information in Cocktail Backup helps users efficiently replicate backups.
Cocktail Restore Point stores the state of Kubernetes resources at a specific point in time in the object storage.
Cocktail Restore uses the restore point to restore the Kubernetes state to a specific point in time, helpful in system failures, user errors, or other unexpected situations.
Create the storage for backup and restore.
Generate and execute backups to create restore points.
Perform restoration using the restore points.
Harbor is a container registry that can be used in addition to the internal cluster registry registered through "Registry Creation."
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider to create.
1) Click the "+ Register" button and select "Harbor."
2) After registering Harbor authentication information in the basic information, click the "Test Connection" button.
Enter a name for the registry to be created in the "Name" field.
Enter a description for the registry.
Enter the EndPoint URL.
Enter the registry name.
Enter Access ID and Access Secret.
If CA Certificate exists, enter it (optional).
Check the option for using Insecure (optional).
Click "Test Connection" in the upper right corner to verify if the registry is available.
Click the "Save" button in the upper right corner.
Naver Container Registry allows for the easy storage and management of container images in a private Docker registry and facilitates straightforward deployment to the Naver Cloud Platform.
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider to create.
1) Click the "+ Register" button and select "Naver."
2) After registering Naver authentication information in the basic information, click the "Test Connection" button.
Enter a name for the registry to be created in the "Name" field.
Enter a description for the registry.
Enter the EndPoint URL in the correct format.
Enter the registry name.
Enter Access ID and Access Secret.
Click "Test Connection" in the upper right corner to verify if the registry is available.
Click the "Save" button in the upper right corner.
Here are the steps to enable or disable the Public Endpoint.
Log in to the Naver Cloud Platform Console.
Click on "Services," then navigate to "Containers" > "Container Registry"
Click on the target registry name in the list.
In the detailed information section, click on the gear icon in the "Configuration" tab.
In the Configuration settings popup, click the toggle button for the "Public Endpoint" item to enable or disable it. After setting the preference, click the [Confirm] button to save the changes.
Quay Registry is an open-source container image registry used for storing and managing Docker images. Originally developed by CoreOS, it is currently part of Red Hat.
1) Navigate to [Build Configuration] - [External Container Registries].
2) Click the "+ Register" button in the upper right corner to select the provider to create.
1) Click the "+ Register" button and select "Quay."
2) After registering Quay authentication information in the basic information, click the "Test Connection" button.
Enter a name for the registry to be created in the "Name" field.
Enter a description for the registry.
Enter the EndPoint URL in the correct format.
Enter the registry name.
Enter Access ID + Robot ID and Access Secret.
Click "Test Connection" in the upper right corner to verify if the registry is available.
Click the "Save" button in the upper right corner.
The robot ID needs to be associated with an existing repository.
1) Access Quay and click on the profile in the upper right corner. Select "Account Settings."
2) Click on "Robot Accounts," the second menu in the right tab, and then click "Create Robot Account" in the upper right corner.
3) Create an ID for the robot, select the repository you created, and grant the necessary permissions.
4) Click on the created RobotID to check the Access Secret.
Access ID + Robot ID
: The first value in the image.
Access Secret
: The second value in the image.
To use backup and restoration in Cocktail, you need a repository where Kubernetes resources can be backed up and stored. Choose an appropriate repository and create it according to your needs.
A workspace is an area where cluster resources are allocated for building, deploying, and operating applications, typically organized on a team basis.
1) Click on the "+ Create" button in the upper right corner of the [Settings] - [Workspace] tab.
2) In the basic information form of the workspace creation screen, enter the workspace name in the "Name" attribute.
3) Choose a color for the workspace in the "Color" attribute.
The selected color will be applied to the 'Top Bar' of the screen.
4) Enter a description for the workspace in the "Description" attribute.
1) Select the created workspace name from the list on the workspace screen.
2) Click on the image registry icon on the right side of the basic information.
3) Choose the previously created registry and click the "Apply" button.
4) Confirm the registered registry.
1) Select the created workspace name from the list on the workspace screen.
2) Click the "Assign/Remove Cluster" button on the right side of the allocated cluster, and the Assign/Remove window will appear.
3) In the "Select" of the allocation/removal window, choose the previously created cluster and click the "+ Add" button.
4) Click the "Save" button at the bottom right.
5) Confirm the list of allocated clusters.
1) In the Assign/Remove window, click the "Assign/Remove Cluster" button on the right side of the allocated cluster, and the Assign/Remove window will appear.
2) Press the "-" button on the right of the cluster you want to delete, remove it from the list, and then click the "Save" button at the bottom right.
3) Confirm that the cluster has been removed from the list of allocated clusters.
1) Select the created workspace name from the list on the workspace screen.
2) Click the "Service Map Allocation/Retrieval" button on the right side of the allocated service map.
3) Select the cluster, choose the service map, and click the "+ Add" button.
4) Confirm the added item, select the target service map group, and click the "Save" button at the bottom right.
5) Confirm that the service map has been allocated in the list of allocated service maps.
1) Click the "Service Map Allocation/Retrieval" button on the right side of the allocated service map.
2) Press the "-" button on the right of the service map you want to revoke, remove it from the list, and then click the "Save" button at the bottom right.
3) Confirm that the service map has been revoked in the list of allocated service maps.
1) Select the created workspace name from the list on the workspace screen.
2) Click "Assign/Remove Build Server" on the right side of the allocated build server.
3) After selecting the build server, click the "+ Add" button.
4) Confirm the added item and click the "Save" button at the bottom right.
5) Confirm that the build server has been allocated in the list of allocated build servers.
1) Click "Assign/Remove Build Server" on the right side of the allocated build server.
2) Press the "-" button next to the build server you want to revoke, remove it from the list, and then click the "Save" button at the bottom right.
3) Confirm that the build server has been revoked in the list of allocated build servers.
Users with the USER role must be registered as members to access a Workspace.
1) Click the "Register/delete Members" button on the right side of the members.
2) Select the name of the member to be registered in the workspace, grant permissions, and click the "Save" button at the bottom.
3) Confirm that the member has been registered with the selected permissions.
1) Click the "Register/delete Members" button on the right side of the members.
2) Select the name of the member to be removed, uncheck the combo box, and click the "Save" button.
3) Confirm that the member has been removed.
Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Describe | Enter a description for the external container registry |
Endpoint URL* | External container registry endpoint address (mostly fixed as https://docker.io) |
Namespace* | Enter the registered registry namespace |
Access ID* | Access Key |
Access Secret* | Access Secret |
Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Describe | Enter a description for the external container registry |
Endpoint URL* | Enter the Endpoint URL corresponding to the registry region |
Registry* | Enter the name of the already registered registry |
Access ID* | _json_key |
Access JSON* | Enter the Google service account private key (in JSON key format) |
Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Describe | Enter a description for the external container registry |
Endpoint URL* | External container registry endpoint address |
Registry* | Enter the name of the already registered registry |
Access ID* | Docker login ID |
Access Secret* | Docker login token |
Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Describe | Enter a description for the external container registry |
Endpoint URL* | Enter the endpoint address of the external container registry |
Registry* | Enter the name of the already registered registry |
Access ID* | Harbor login ID |
Access Secret* | Harbor login password |
CA Certificate | Enter the private certificate information |
Insecure | Specify whether the connection is insecure |
Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Describe | Enter a description for the external container registry |
Endpoint URL* | Enter the endpoint address of the external container registry |
Region* | Specify the region of the registered registry |
Registry* | Enter the name of the registered registry |
Access ID* | Enter the access key |
Access Secret* | Enter the secret access key |
Item (* is required) | Content |
---|---|
Name* | Enter the name of the external container registry to be registered |
Description | Enter a description for the external container registry |
Endpoint URL* | External container registry endpoint address |
Registry* | Enter the name of the already registered registry |
Access ID+RobotID* | Enter the issued Access ID + Robot ID |
Access Secret* | Enter the issued Secret |
Permission | Feature |
---|---|
MANAGER | Can create and delete Workspaces, Clusters, and Service Maps, along with most other functionalities. |
USER | Has access to almost all features except for certain capabilities such as Workspace settings, resource allocation, Limit Ranges, and Network Policy. |
DEVOPS | In CI/CD pipelines, can only execute image builds (deployment and deletion are restricted). |
DEV | Permitted to view CI/CD pipeline details but not allowed to deploy, modify, or delete. |
VIEWER | Within the USER role's scope, only viewing capabilities are granted. |
You can collect and analyze each log by installing and registering a log service in your cluster.
The Cocktail repository manages the connection status with the object storages where backups are periodically stored.
Prior tasks are required before creating a storage. Please refer to this guide.
Select the [Backup/Restore] - [Storages] section.
Click the "+ Create" button.
Select the provider.
The required information to be entered varies based on the selected provider. Please double-check and enter the details accurately.
Click the "Save" button.
Click the "OK" button.
Verify if the storage has been successfully created.
You can click on the name to view detailed information.
When a Cocktail backup is created and executed, a restore point is created based on the backup information.
To create a backup, you must first create a storage. Please refer to this guide.
Select the [Backup/Restore] - [Backup] section.
Click the [Create] button.
Input the details for the backup.
Select the cluster and data storage to be protected by the backup.
The backup scope includes selecting the entire cluster or specific namespaces, applying label selectors, and choosing the resources to be protected by the backup.
The button links to a screen displaying a list of selectable resources and detailed information about those resources.
The number inside the parentheses indicates the total count of backup-eligible resources.
The backup scope allows you to choose either [Entire Cluster] to back up the entire cluster or [Selecting Namespace] to choose specific namespaces.
If you choose [Selecting Namespace], additional items will be displayed as follows.
Click the icon to select the namespace, then click the [Apply] button.
Verify the selected namespace.
Using the label selector during backup allows you to perform backups targeting only resources that match specific labels or label conditions.
Click the icon to enter the name and value.
The backup target is limited to resources that satisfy all specified label conditions.
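Since the backup target must satisfy all specified label conditions, the matching is a logical AND over the name/value pairs. A small sketch of that semantics (the label names and values below are made up):

```python
def matches_selector(resource_labels: dict, selector: dict) -> bool:
    """A backup targets a resource only if every name/value pair in the
    selector matches the resource's labels (logical AND), mirroring
    Kubernetes label-selector semantics."""
    return all(resource_labels.get(k) == v for k, v in selector.items())

selector = {"app": "web", "env": "prod"}  # example selector entered in the UI
print(matches_selector({"app": "web", "env": "prod", "tier": "front"}, selector))  # True
print(matches_selector({"app": "web", "env": "dev"}, selector))                    # False
```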
This item configures the cluster backup target.
If you choose [Entire Resources] under Resources, all resources within the cluster are set as the backup target. If you choose [Selecting Resources], you can selectively back up specific resources.
If you choose [Selecting Resources], additional items will be displayed as follows
If you choose 'Not In Use', the backup will run immediately, once. If you choose 'Schedule Execution', the backup will automatically run according to the specified time (specified interval).
With 'Schedule Execution', the backup will not run immediately but will be automatically executed at the specified time.
You can specify the desired time or interval using a cron expression or the @every notation.
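As a sketch of how the '@every' notation reads (a simplified parser covering only the h/m/s units; a cron expression such as '0 3 * * *', meaning daily at 03:00, can be used instead):

```python
import re

def every_to_seconds(expr: str) -> int:
    """Interpret an '@every' interval (e.g. '@every 24h', '@every 30m',
    '@every 1h30m') as a number of seconds. Simplified: h/m/s only."""
    match = re.fullmatch(r"@every\s+(?:(\d+)h)?(?:(\d+)m)?(?:(\d+)s)?", expr)
    if not match or not any(match.groups()):
        raise ValueError(f"not an @every expression: {expr}")
    h, m, s = (int(g or 0) for g in match.groups())
    return h * 3600 + m * 60 + s

print(every_to_seconds("@every 24h"))   # 86400 seconds: run once a day
print(every_to_seconds("@every 1h30m")) # 5400 seconds: run every 90 minutes
```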
Click the [Save] button.
Confirm the following message and click the [OK] button.
You can check the created backup in the backup list.
To view detailed information for the specific backup, click on the backup name.
The recent execution status represents the state of the restoration point created due to the most recently executed backup. Refer to this link for detailed explanations of the states.
When the corresponding backup is executed, a restore point is added. In the case of scheduled backups, the backup runs according to the schedule, and a restore point is added.
To view detailed information for the specific restore point, click on the restore point name.
You can check the list of all restore points under [Backup/Restore] - [Restore Points].
To view detailed information for the specific restore point, click on the restore point name.
Navigate to the detailed view of the corresponding scheduled backup.
Click the [Pause Schedule] button.
Click the [OK] button.
Verify that the schedule has been paused.
If you wish to resume the schedule, click the button and repeat the above steps.
When you run the backup now, the backup will be executed, and a restore point will be created.
Navigate to the detailed view of the corresponding backup.
Click the [Run Now] button.
Verify the created restore point.
Copy job allows you to create a new backup by adding new configurations based on previously created backup settings or by modifying existing settings.
Navigate to the detailed screen of the respective backup.
Click the [Replicate] button.
Click the [OK] button.
Add new configurations or modify existing settings.
The log service plays an important role in operations and troubleshooting, managing the containers and logs running in the cluster.
Collect logs for all resources within the cluster.
Collects all container and audit logs within the cluster.
Collect application logs for workloads that exist in Cocktail.
In the case of application logs, only authenticated applications can be checked.
Logs are collected using the open-source 'OpenTelemetry' and 'OpenSearch'.
Provide a service that searches and analyzes collected logs.
Collected logs can be analyzed by filtering them with labels.
Collected logs are aggregated by time so users can analyze log trends.
Can search and analyze collected logs.
master node : 1Core, 2GB
data node : 2Core, 4GB
dashboard : 500mCore, 1GB
Effectively manage cluster containers and application logs to increase operational efficiency and ensure system stability. The following provides guidance on how to collect logs for each log screen and application.
Amazon S3 (Simple Storage Service) is a cloud-based, secure, and scalable object storage service provided by Amazon Web Services (AWS).
To use AWS S3 storage, please refer to the following guide.
1) Log in to the console and click on "S3."
(Note: This guide assumes the use of the "us-east-1" region.)
2) Click on the "Create bucket" button to navigate to the creation screen.
3) Set the region, bucket name, and default encryption according to user preferences, then click the "Create bucket" button at the bottom right to create the bucket.
1) Access the AWS Console to retrieve the necessary information.
2) Click on the registered user, then open the "Security credentials" tab and click the "Create access key" button in the top right of the [Access keys] box.
3) Select "Other" from the presented options and click "Next."
4) Enter a description tag for the access key to be created and click the "Create access key" button.
5) Confirm the generated access key and secret access key values.
MinIO is an open-source object storage server known for its compatibility with the Amazon S3 API.
To use MinIO storage, please refer to the guide below.
1) Access the MinIO console by entering the installed MinIO URL.
2) After logging in, go to [Administrator] - [Buckets] and click the "Create Bucket +" button in the top right corner.
3) Provide an appropriate bucket name and click the "Create Bucket" button to create the bucket.
1) After logging into the console, go to [User] - [Access Keys] and click the "Create access key +" button in the top right corner.
2) Save the displayed Access Key and Secret Key separately, then click the "Create" button to generate authentication information.
aws_access_key_id
: Access Key
aws_secret_access_key
: Secret Key
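Because MinIO is S3-compatible, the Access Key / Secret Key pair is supplied to the storage's authentication field in the same '[default]' profile format used for AWS. A sketch parsing that format (the key values are fake):

```python
import configparser

# Illustrative credentials text in the [default] profile format that the
# storage's Authentication Information field expects (values are fake).
credentials_text = """
[default]
aws_access_key_id = AKIAEXAMPLE
aws_secret_access_key = wJalrEXAMPLEKEY
"""

config = configparser.ConfigParser()
config.read_string(credentials_text)
print(config["default"]["aws_access_key_id"])      # AKIAEXAMPLE
print(config["default"]["aws_secret_access_key"])  # wJalrEXAMPLEKEY
```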
Google Cloud Storage is an object storage service provided by the Google Cloud Platform (GCP), offering a secure and scalable data storage solution in the cloud.
To use Google Storage, please refer to the guide below.
1) Access the Google Cloud Console.
2) Open the left menu, navigate to [Cloud Storage] - [Storage] menu.
3) Click the "Create" button in the top right corner to go to the creation screen.
4) Enter the bucket name, select the region (default: us), and click the "Create" button to create the bucket.
1) Access the Google Cloud Console.
2) In the Google Cloud Console, click on the "IAM & Admin" button.
3) Click on the Service accounts tab in IAM & Admin, and then click on the "+ Create Service Account" button at the top center.
4) Set the name for the service account.
5) Set the permissions for the created service account.
Owner permissions provide full access to most Google Cloud resources.
You can grant other permissions if needed, but note that access may be restricted based on the granted permissions.
6) Click on the created service account, click on "Add Key," choose JSON type, and click "Create" to generate a new key.
7) Confirm that a JSON file has been generated locally.
Authentication Information
: Contents of the file
Azure Blob Storage is an object storage service provided by Microsoft Azure cloud platform.
To use Azure Storage, please refer to the guide below.
1) Access the Azure Portal to obtain the necessary information.
2) Efficiently manage access, policies, and compliance regulations for your subscription by clicking on "Management groups." (All services - Management and governance - Management groups)
2-1) Click the "Create" button.
2-2) Create a management group for internal use.
3) Click on "Subscriptions" to create a subscription used to group and manage resources. (All services - Management and governance - Subscriptions)
3-1) Click the "Create" button to go to the creation screen.
4) Click on "Subscriptions" to create a logical container called "Resource Group" for grouping resources for efficient management.(All services - Management and governance - Resource Group)
4-1) Click the "Create" button to go to the creation screen.
4-2) [Create a Resource Group] - Enter the resource group name and select the region before creating.
5) Click on "Resources" to create resources that can be used in Azure services.
5-1) Search for "Storage account" in the marketplace and click on "Storage account."
5-2) Click the "Create" button.
5-3) Enter the subscription, resource group, storage account name, and region to register.
6) Click on the created resource, and under [Data Storage], click on "Containers."
6-1) Click "+ Container" to create storage.
Bucket : Container Name
Log in to the Azure Portal.
Select the subscription from the Azure service title. If your subscription is not displayed, use the search box to find it.
Find the subscription in the list and verify the subscription ID displayed in the second column.
Log in to the Azure Portal.
Make sure you are logged in to the tenant for which you want to retrieve the ID. If not, switch directories to ensure you're working in the correct tenant.
Use the search to find "Tenant properties."
Find the Tenant ID in the Overview section of the Basic Information screen.
Please refer to the guide below for more details.
Log Service uses 'OpenSearch' to provide log storage and an API server to communicate with Cocktail Dashboard.
Before installing the log service, 'cert-manager' and 'nginx' must be installed.
1) Navigate to Infrastructure - Clusters - Addon List, then click the "Deploy" button for 'cocktail-log-service' in the list.
2) Check the settings according to your environment and click the Deploy button to deploy the Addon.
Gateway Service Mode
: Log service gateway type (Ingress, LoadBalancer)
[If Gateway Service Mode is set to Ingress]
Gateway Access URL
: DNS to access log service through Ingress
URL Type
: Cluster HostAliases type (PublicDNS or HostAliases)
Host IP
: (If URL Type is 'HostAliases') the LB IP (or node IP) through which the cluster can be accessed from outside
When accessing the log service from the dashboard or collecting logs from log-agent, connect the Host IP to the Log Access URL.
Enable OpenSearch Dashboard
: Whether to use the 'OpenSearch Dashboard'
[If Gateway Service Mode is set to LoadBalancer]
Clusters created through provisioning can only be set to 'Enable OpenSearch Dashboard'.
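For the 'HostAliases' case above, the Host IP must resolve the Gateway Access URL from outside the cluster; on a client machine this amounts to a hosts-file entry such as the following (both the IP and the DNS name are placeholders, not actual values):

```
# /etc/hosts — illustrative only
203.0.113.10   logservice.example.com
```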
3) Check the deployment status and if the status is 'Running', confirm that the deployment has been completed successfully.
If the status is 'pending' when deployed
4) Create a job in the namespace where the log service is installed.
Create a Job that creates policies for each container log, cluster audit log, and application log collected by Opensearch.
registry address
: Please contact our technical team.
If you create a job according to the settings above, container logs have a storage period of 30 days, cluster audit logs have a storage period of 56 days, and application logs have a storage period of 1 year.
Storage period settings can be modified in 'OpenSearch Dashboard'.
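In OpenSearch, retention such as the 30-day container-log period above is typically expressed as an ISM (Index State Management) policy. A minimal illustrative sketch (the index pattern and ages are assumptions, not the addon's actual policy):

```json
{
  "policy": {
    "description": "Delete container-log indices after 30 days (illustrative)",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "30d" } }
        ]
      },
      { "name": "delete", "actions": [{ "delete": {} }], "transitions": [] }
    ],
    "ism_template": [{ "index_patterns": ["container-log-*"], "priority": 100 }]
  }
}
```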
Only one log service installed on the platform can be registered.
1) Settings - Basic Information, click the selection box for the log service name to view the list of installed log services.
2) Specify the log service you want to register and click the “Register” button to register it on the platform.
When changing the log service, simply select a different log service from the list and click the "Register" button to change it.
You can discontinue the log service function registered on the platform by clicking the “Deregister” button.
Cocktail restore enables the restoration of the system to a specific point in time in the Kubernetes state, addressing system failures, user errors, or other unexpected situations.
To create a restore, there must be a completed restore point; refer to the relevant section for more information.
After selecting [Backup/Restore] - [Restore Points] section, click on the name of the restore point you want to restore.
Click the [Restore] button.
Enter the necessary information for the restoration.
The restoration scope includes selecting the entire cluster or specific namespaces, label selectors, and choosing the resources to restore.
The restore scope can be selected as [Entire Cluster] to restore the entire cluster or [Selecting Namespace] to restore specific namespaces.
If you choose [Selecting Namespace], additional items will be displayed as follows.
If you want to restore to a new or different namespace, enter the namespace name in [Deploy Namespace Name].
Click the [Apply] button.
Verify the selected namespace.
Using a label selector during the restore allows you to perform restoration only on resources that match specific labels or label conditions.
The restoration target is limited to resources that satisfy all specified label conditions.
Selecting [Entire Resources] designates all resources within the cluster as the restoration target, while choosing [Selecting Resources] allows for the selective restoration of specific resources.
If you choose [Selecting Resources], additional options will be displayed as follows.
Changing the storage class is an option that allows you to switch to the storage class used by the destination cluster when restoring data to a new cluster.
Click the [Apply] button.
Restoration target configuration involves specifying the cluster to be restored and assigning a restoration name.
Save Restore Information
Review the following message and click the [OK] button.
Created restorations can be viewed in the restoration history list.
To review detailed information about a specific restoration, click on its restoration name.
Navigate to the [Backup/Restore] - [Restore list] section and check the item you wish to delete from the list.
Click the [Delete] button.
Click the [OK] button.
For backups within the same cluster, be cautious when deleting backed-up resources at restore points, as the restore list will also be deleted.
For backups to different clusters, deleting backed-up resources at restoration points does not delete the restore list. However, you won't be able to retrieve backup information from the restore list.
The Cocktail Backup/Restore Overview summarizes backup operation status and statistics, cluster backup agent status, and storage usage statistics.
Navigate to [Backup/Restore] - [Overview].
The backup schedule uses different colors to represent the statuses of 'New', 'Running', 'Paused', and 'Failed', allowing users to visually assess the overall status distribution based on the total count and the count of each status.
Please refer to the details below for a comprehensive explanation of each status.
The cluster backup agent status indicates the health and installation status of the backup agent.
The status is represented as one of the following: 'Healthy', 'Unhealthy', or 'Install'.
Please refer to the details below for a comprehensive explanation of each status.
You can visualize the storage usage of repositories registered in Cocktail through a graph.
Each rectangle represents a repository, with colors distinguishing each one.
The size of the rectangle visually indicates the relative usage of the corresponding repository.
You can zoom in or out on the graph using the scroll function.
When the graph is zoomed in or out, you can click the [Storage Usage] button at the bottom to revert to its original size.
The restore points are represented by different colors based on five statuses: 'New', 'Running', 'Paused', 'Failed', and 'Deleting', reflecting their distribution in proportion to their respective counts. This enables users to visualize the overall status distribution by observing the total count and the count of each status.
Please refer to the details below for a comprehensive explanation of each status.
Restoration is visually represented with different colors based on five statuses: 'New', 'Running', 'Paused', 'Failed', and 'Deleting', allowing users to assess the overall status distribution by considering the total count and the count of each status.
Please refer to the details below for a comprehensive explanation of each status.
The recent restore points display the five most recent restoration points as a list.
Each entry includes age and status information.
Clicking on the name of a restore point navigates to the detailed information page for that specific restore point.
The recent restores display the five most recent restoration items as a list.
Each entry includes age and status information.
The storage remaining space displays the top 5 storage entries with Object Storage size limitations, sorted in ascending order of available space.
Each entry is accompanied by a graph representing the available space visually, along with the remaining space capacity and its percentage.
The grey area on the graph represents unused spare space.
Clicking on the storage name navigates to the storage details page.
Storage entries connected to the same MinIO object storage display the same disk usage.
Click the icon to select the namespace.
Click the icon to enter the name and value.
Click the icon to select the Old Storage Class and New Storage Class.
Clicking on the button will navigate you to the addon deployment screen for that particular cluster.
Item (* is required) | Content |
---|---|
Name* | Storage name; must not overlap with existing storage names |
Cluster* | Set the AS-IS cluster (the one to back up) to Read-Write. Set the TO-BE cluster (the migration target) to Read-Only. When performing backup and migration operations within the same cluster, set it to Read-Write. |
Bucket* | Bucket of the storage |
prefix | Prefix of the bucket. Enter a specific path if there is any for the bucket in the storage. |
Authentication Information* | Enter the server IP information for the storage. The format varies depending on the provider (refer below). |
Item (* is required) | Content |
---|---|
Authentication information* | Enter authentication information with AWS storage permissions.<br>Example:<br>`[default]`<br>`aws_access_key_id =`<br>`aws_secret_access_key =`<br>`# Optional - role_arn`<br>`role_arn=` |
Region* | Enter the region information for the AWS storage |
Encryption Algorithm* | The server-side encryption type for AWS. You can choose between SSE-S3 and SSE-KMS. If you select SSE-KMS, you must additionally provide the KMS key. |
KMS Key | The KMS key ID generated in AWS KMS. Mandatory when SSE-KMS is selected in the encryption algorithm field. Input the `<key-id>` from the AWS KMS key ARN (ARN format: `arn:aws:kms:<region>:<account-ID>:key/<key-id>`). |
profile | Enter the AWS profile configured in the authentication information section |
Item (* is required) | Content |
---|---|
Authentication information* | Service account key JSON |
KMS Key | The Cloud KMS key name to be used for backup encryption |
Item (* is required) | Content |
---|---|
Authentication information* | `AZURE_SUBSCRIPTION_ID=`<br>`AZURE_TENANT_ID=`<br>`AZURE_CLIENT_ID=`<br>`AZURE_CLIENT_SECRET=`<br>`AZURE_CLOUD_NAME=AzurePublicCloud`<br>`AZURE_ENVIRONMENT=AzurePublicCloud` |
Resource group name* | The name of the resource group containing the storage account |
Storage account name* | The name of the storage account |
Block size (Byte) | The block size to be used when uploading objects |
Item (* is required) | Content |
---|---|
Authentication information* | `[default]`<br>`aws_access_key_id =`<br>`aws_secret_access_key =` |
s3Url* | Object storage API URL |
publicUrl* | Object storage API URL accessible from external sources |
Skip TLS certificate | Whether to use a TLS certificate |
Item (* is required) | Content |
---|---|
Name* | Name of the backup to be created |
Select Cluster* | Cluster to be backed up |
Select Storage* | Select the storage. Only storage with Read-Write permission can be selected. |
Backup Retention Period* | Backup retention period, specified in hours. Enter only numeric values. |
Restore target cluster* | Cluster to restore (to be relocated) |
Restore Name* | Restore Name |
New | Ready for the backup schedule to be created |
Running | Backup schedule job is in progress; backups are created according to the schedule |
Paused | Backup schedule job has been stopped |
Failed | Validation of the backup schedule failed; the backup will not proceed |
Healthy | Backup agent is in normal state |
Unhealthy | Backup agent is in abnormal state |
Install | Backup agent is not installed |
New | Backup preparation is complete and ready to start; waiting to start the backup job |
InProgress | Backup job is in progress |
Completed | Backup job has successfully completed |
Failed | Backup job has failed |
Deleting | All data related to the backup is being deleted |
New | Restore preparation is complete and ready to start; waiting to start the restore job |
InProgress | Currently undergoing the restoration process |
Completed | The restoration operation has completed successfully |
Failed | An issue occurred during the restoration process |
Deleting | Data related to the restoration is being deleted |
If you install the log agent, you can check the collected container logs and audit logs on the cocktail dashboard.
1) Go to Infrastructure - Cluster - Addon List, click the "Deploy" button, and then click the "Deploy" button for 'cocktail-log-agent' in the list.
2) Check the settings according to your environment and click the Deploy button to deploy the Addon.
Enable Container log collecting
: Whether to collect container logs
Enable Audit log collecting
: Whether to collect cluster audit logs
audit-log hostPath path
: If it is not a cluster installed with 'cube', you will need to change the path.
includeNamespace
: List of collection processing namespaces
If you are collecting container logs for a specific namespace, uncomment and use below.
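The options above might map to addon values along the lines of the following sketch. The key names and the audit-log path are assumptions for illustration; verify them against the actual deployment form in your environment.

```yaml
# Hypothetical values sketch for the cocktail-log-agent addon.
# Key names and paths are assumptions; check the actual deployment form.
containerLog:
  enabled: true                         # Enable Container log collecting
auditLog:
  enabled: true                         # Enable Audit log collecting
  hostPath: /var/log/kubernetes/audit   # change if the cluster was not installed with 'cube'
# Uncomment to collect container logs only for specific namespaces:
# includeNamespace:
#   - my-namespace
```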
The log service can collect logs from various applications; this section explains the registration process required to enable logging.
Logging - Application Management - Click the Registration button to register.
Name
: Actual application service name
Description
: A description to distinguish the application
Cluster
: Cluster with application
Namespace
: Namespace where the application resides
Developing Language
: Application development language
Log Service Information
: Log service information useful for the current platform
The log service can collect and search logs of authenticated applications through tokens.
A token is automatically issued when you first register an application, and a new token can be issued through renewal.
Even if the token is renewed, the token applied to the application is not automatically renewed, so you must renew it manually to collect logs.
You can also click the "Action" button to disable the application to stop logging, or delete it from the application list.
Log Operator is an Addon required when using automatic container log instrumentation among the application log collection methods.
1) Infrastructure - Clusters - Addon List - Click the "Deploy" button and click the "Deploy" button for 'cocktail-log-operator' in the list.
2) Check the settings according to your environment and click the Deploy button to deploy the Addon.
3) Check the deployment status and if it is Running, confirm that the deployment has been completed successfully.
It cannot be modified in the dashboard; it must be modified with a script:
1) Connect to the master node via terminal.
2) Create a password (using the provided tool script).
3) Load the current settings.
4) Change the settings and run the apply script.
To check the application log, you must first register in the application management tab.
Logging - Application Log - Select an application from the list of applications to view.
View By Hour
: You can search logs from the last 5 minutes to 48 hours ago, and check logs from up to 2 weeks ago.
Application
: You can view a list of all applications that exist in the cluster.
View More
: Loads additional logs after the last entry in the list.
Current number of logs
/
Total number of logs
: This refers to the number of logs currently loaded and the total number of logs.
The maximum number of logs that can be viewed at one time is 5,000. The query period is up to 7 days.
The "View More" button is displayed when the total number of logs exceeds 5,000.
Click the link button for the log viewed by time to check detailed information about the log.
Log Message
: You can check the contents of the log that actually occurred.
Label Information
: Click the + button to expand and view label information.
Label information: You can close the expanded label information by clicking the - button.
Click a bar on the graph once to view the logs for that time; click it again to return to the logs for the entire period.
Graph Select
: You can see that there are 60 logs at the current time.
Click the arrow button on the right to see the set of labels for the logs present at that time.
label list key
: Shows the list of label keys for the viewed logs. You can check a label's values by clicking the label button.
label list values
: Click a label value to add it as a search condition; multiple conditions are combined with an AND search.
Selected label value
: When you add a label condition, it appears at the top of the graph; click the X button to remove the condition from the search.
Enter the keyword you want to search for and click the search button to view the log for the search term.
Search word
: You can search logs where the string exists regardless of case.
Click the “Download” button at the top of the graph to download the log.
Download
: You can download log data for up to 5,000 searched logs in Excel file format. Each column contains a label value.
We recommend that you select a method that suits your environment.
Collect container logs from workloads using automated instrumentation.
The SDK method is used when you want to collect logs from a specific service through Logger settings.
The sidecar method is used when you want to read and collect log files in a specific directory.
You can leverage Custom Resources to configure the OpenTelemetry auto-instrumentation library and add annotations to your workloads to easily collect logs.
You can create one by searching for Infrastructure - Custom Resources - 'instrumentations'.
You can create it by clicking the Create button, selecting the namespace where you want to collect logs, and modifying the form below.
The above custom resource is applied on a namespace basis, and container logs for other languages in the same namespace can also be collected automatically.
log-agent Service address
: Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.
( http port = 4318 , grpc port = 4317)
By adding an environment variable for each individual application rather than all applications in the namespace through CRD, you can only collect logs for a specific application.
Add annotations to the workloads in the namespace for which you want to collect logs.
Application - Service Map - Service Map to collect logs - Workload - Select the application to collect logs - Click the "Settings" button.
Change the Yaml view and add the following annotations to the template - metadata - annotations section.
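As a sketch, the Instrumentation custom resource and workload annotation for Java auto-instrumentation typically look like the following. The namespace and endpoint host are assumptions; use the namespace you chose and the actual log-agent service address.

```yaml
# Instrumentation custom resource (applied per namespace).
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: instrumentation
  namespace: my-namespace                      # namespace to collect logs from (assumed name)
spec:
  exporter:
    endpoint: http://cocktail-log-agent:4318   # log-agent service address (http port 4318)
---
# Annotation added under template - metadata - annotations of the workload:
# instrumentation.opentelemetry.io/inject-java: "true"
```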
This is a method of integrating the SDK provided by OpenTelemetry into an existing application.
This guide is for existing Java applications that have been built on Cocktail Cloud.
A Log Appender is an interface provided by a logging framework or library that collects and processes log messages. OpenTelemetry interacts with log appenders through the Log Bridge API to collect log messages and associate them with OpenTelemetry trace data. Log appenders can therefore be used to collect and integrate log data into OpenTelemetry.
This guide introduces how to collect data using logback and log4j, two representative loggers.
log-agent Service Address
: Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.
( http port = 4318 , grpc port = 4317)
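For logback, the OpenTelemetry Log Appender is wired into logback.xml roughly as follows. This is a sketch, not the exact configuration from this guide: the appender class comes from the OpenTelemetry logback-appender instrumentation artifact, and the OTLP endpoint is assumed to be configured in the SDK (e.g. via `OTEL_EXPORTER_OTLP_ENDPOINT` pointing at the log-agent service).

```xml
<configuration>
  <!-- Bridges logback events into the OpenTelemetry Log Bridge API -->
  <appender name="OpenTelemetry"
            class="io.opentelemetry.instrumentation.logback.appender.v1_0.OpenTelemetryAppender"/>
  <root level="INFO">
    <appender-ref ref="OpenTelemetry"/>
  </root>
</configuration>
```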
This is a log about events that occur within the cluster.
Logging - Cluster Audit Logs - Select a cluster from the list of clusters to view.
View By Hour
: You can search logs from the last 5 minutes to 48 hours ago, and check logs from up to 2 weeks ago.
Cluster
: You can view the entire list of clusters.
View More
: Loads additional logs after the last entry in the list.
Current number of logs
/
Total number of logs
: This refers to the number of logs currently loaded and the total number of logs.
The maximum number of logs that can be viewed at one time is 5,000. The query period is up to 7 days.
The "View More" button is displayed when the total number of logs exceeds 5,000.
Click the link button for the log viewed by time to check detailed information about the log.
User Account
: This refers to the account name of the cluster.
Timestamp
: This refers to the time when the audit log occurred.
Source IP
: This refers to the IP of the user who generated the audit log.
Request URL
: Refers to the URL for the action in which the audit log occurred.
Request Time
: This refers to the time the task started.
Response time
: This refers to the time the task was completed.
Response status
: This refers to the result of the action.
Verb
: This refers to the action the task performed (e.g. create, delete, patch).
None
- Events corresponding to this rule are not logged.
Metadata
- Logs request metadata (requesting user, timestamp, resource, verb, etc.), but does not log request/response body.
Request
- Logs event metadata and request body, but does not log response body. It does not apply to requests other than resources.
RequestResponse
- Logs event metadata and request/response body. It does not apply to requests other than resources.
Stage
: Refers to the process or stage in which work was performed.
Click a bar on the graph once to view the logs for that time; click it again to return to the logs for the entire period.
Click on the graph
: You can see that there are 8,704 logs at the current time.
Click the arrow button on the right to see the set of labels for the logs present at that time.
label list key
: Shows the list of label keys for the viewed logs. You can check a label's values by clicking the label button.
label list values
: Click a label value to add it as a search condition; multiple conditions are combined with an AND search.
Selected label value
: When you add a label condition, it appears at the top of the graph; click the X button to remove the condition from the search.
Enter the keyword you want to search for and click the search button to view the log for the search term.
Search word
: You can search logs where the string exists regardless of case.
Click the “Download” button at the top of the graph to download the log.
Download
: You can download log data for up to 5,000 searched logs in Excel file format. Each column contains a label value.
This is a method of integrating the SDK provided by OpenTelemetry into an existing application. Python logging support in OpenTelemetry is currently under development.
This guide is for existing Python applications that have been built on Cocktail Cloud.
Additionally, the Python application in this guide was created based on 'Flask'.
log-agent Service Address
: Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.
( http port = 4318 , grpc port = 4317)
Logs are collected and displayed for each namespace within the cluster.
Logging - Container Logs - Select a namespace from the Namespaces to view list.
View By Hour
: You can search logs from the last 5 minutes to 48 hours ago, and check logs from up to 2 weeks ago.
Namespace
: You can view a list of all namespaces that exist in that cluster.
View More
: Loads additional logs after the last entry in the list.
Current number of logs
/
Total number of logs
: This refers to the number of logs currently loaded and the total number of logs.
The maximum number of logs that can be viewed at one time is 5,000. The query period is up to 7 days.
The "View More" button is displayed when the total number of logs exceeds 5,000.
Click the link button for the log viewed by time to check detailed information about the log.
Log Message
: You can check the contents of the log that actually occurred.
Label Information
: Click the + button to expand and view label information.
Label information
: You can close the expanded label information by clicking the - button.
Click a bar on the graph once to view the logs for that time; click it again to return to the logs for the entire period.
Click on the graph
: You can see that there are 2,687 logs at the current time.
Click the arrow button on the right to see the set of labels for the logs present at that time.
label list key
: Shows the list of label keys for the viewed logs. You can check a label's values by clicking the label button.
label list values
: Click a label value to add it as a search condition; multiple conditions are combined with an AND search.
Selected label value
: When you add a label condition, it appears at the top of the graph; click the X button to remove the condition from the search.
Enter the keyword you want to search for and click the search button to view the log for the search term.
Search word
: You can search logs where the string exists regardless of case.
Click the “Download” button at the top of the graph to download the log.
Download
: You can download log data for up to 5,000 searched logs in Excel file format. Each column contains a label value.
Fluent Bit is a lightweight log data collector used to collect and process data. By installing Fluent Bit as a sidecar in your application, you can parse the application's logs and forward them to the log-agent.
The above guide explains how the application stores logs in the /var/log directory. Please modify the directory and log pattern to suit your environment.
Application to collect logs - Settings - Container Click the "Add" button to create a container as follows.
Image:
fluent/fluent-bit:3.0.0
When you press the save button, the container runs as a fluent-bit sidecar alongside the existing application.
Logs are stored in the path set in Log Appender, so you need to create a volume in the container and mount it.
Application to collect logs - Settings - Volume - Click the "Create" button to create a volume as follows.
Volume Type
: Empty Dir
Volume Name
: custom name
The following is the process of mounting the created volume.
Application to collect logs - Settings - Volume mount - Click the "Add" button to mount the volume with the following settings.
Container Path
: File path set in Log Appender (eg. /var/log)
The container must mount the directory path where it stores the logs before it can read the file and parse the logs.
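In plain Kubernetes terms, the sidecar pattern above corresponds to an emptyDir volume shared between the two containers. Container and volume names here are illustrative:

```yaml
# Pod spec fragment: the app writes logs to /var/log, the sidecar reads them.
spec:
  containers:
    - name: my-app                  # existing application (illustrative name)
      volumeMounts:
        - name: app-logs            # custom volume name
          mountPath: /var/log       # file path set in the Log Appender
    - name: fluent-bit
      image: fluent/fluent-bit:3.0.0
      volumeMounts:
        - name: app-logs
          mountPath: /var/log
  volumes:
    - name: app-logs
      emptyDir: {}
```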
You can also add labels or change the label name through Config provided by fluent-bit.
Service map to collect logs - Configuration information - Click the "Create" button to create a configuration map.
Name
: The name of the config map you want to set.
Description
: Additionally, a description of the config map to be specified by the user.
Click the “Add” button to add the config file.
The following config file is not absolute. The location where the log is loaded or the log pattern may vary, so please set it according to your environment.
log-agent Service Address
: Infrastructure - Cluster - Add-ons - Click 'log-agent' and check the service name.
( http port = 4318 , grpc port = 4317)
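The actual configuration files are environment-specific, but a minimal fluent-bit.conf following this pattern might look like the sketch below. The log path, parser name, and log-agent host are assumptions; adjust them to your environment.

```
# fluent-bit.conf (sketch) - tail log files and forward them over OTLP
[SERVICE]
    Parsers_File  parsers.conf

[INPUT]
    Name    tail
    Path    /var/log/*.log          # directory set in the Log Appender
    Parser  nginx                   # parser defined in parsers.conf (assumed)
    Tag     app.logs

[OUTPUT]
    Name      opentelemetry
    Match     app.*
    Host      cocktail-log-agent    # log-agent service name (check in Add-ons)
    Port      4318                  # http port
    Logs_uri  /v1/logs
```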
parsers.conf
Application logs create a label called 'level' to provide users with the ability to filter by level. The following is an example of converting nginx's code value to level when the user's application does not have a value called level.
rewrite.lua
Once the config map creation is complete, return to the application to create the volume.
Application to collect logs - Settings - Volume - Click the "Create" button to create a volume as follows.
Volume Type
: Config Map
Volume Name
: Custom Name
Config Map
: User-created ConfigMap name
Permission
: 644
The following is the process of mounting the created volume.
Application to collect logs - Settings - Volume mount - Click the "Add" button to mount the volume with the following settings.
Container Path
: Log data - the directory path where logs are stored (e.g. /var/log)
Container Path
: fluent-bit conf - the fluent-bit configuration file path (e.g. /fluent-bit/etc)
When the fluent-bit container does not operate properly
Recently, there has been a growing trend in enterprises towards building applications and platforms based on open source. Open source solutions often require significant time and effort for validation, deployment, configuration, and maintenance.
Cocktail Cloud addresses this challenge by offering a catalog-style provisioning of various open source and commercial software packages needed for application configuration. This approach enables users to easily install the required components, streamlining the process of creating an environment for application development and deployment.
Official Packages: Managed packages with validated configurations (for AI, IoT, Blockchain, Big Data, etc.), designed for enterprise digital transformation.
Open Source Packages: Search for and deploy open-source packages, offering flexibility and customization.
Allows for one-click deployment of pre-configured packages to the cluster. Users can customize information such as environment variables for different deployment scenarios.
Enables monitoring and tracking of the workload configuration status of deployed packages.
Facilitates easy version upgrades for packages through the web GUI after deployment. Additionally, users can seamlessly perform updates to package configurations. If there are changes to parameters set during package deployment, users can effortlessly modify and apply these changes.
1) Click on [Application Catalog] - [Catalog] tab to display the list of available packages.
1) You can search for the desired package for installation through the search bar at the top right of the [Catalog] screen.
1) Click the "Deploy" button for the package you are interested in or want to distribute, and it provides an overview of the latest package version along with descriptions of parameters that can be configured during package deployment. To view information about previous package versions, use the version selection box at the top of the screen to choose the desired version.
1) In the [Deploy] tab, enter deployment information (deployment type, target cluster, service map, namespace, release name), then click the "Deploy" button.
2) Find the parameters you want to change in the YAML editor displayed below the deployment settings section and modify the values directly.
3) When additional configurations are required during deployment beyond the default settings, you can edit them in the custom YAML editor.
If different values are registered for the same configuration, the priority in the YAML file is given to the settings applied in the custom YAML.
4) When you press the deployment button, you can execute a dry-run to see if the package deployment proceeds correctly.
5) Upon executing the dry-run, you can verify the success or failure.
6) After deployment, the screen will move to the detailed view of the package.
1) To inquire about deployed packages, select [Application Catalog] - [Deployed Catalog] option.
1) Click on the name (release name) of a specific package in the package deployment list to display the detailed view of that package.
2) Depending on the deployment status, states like ContainerCreating, Pending, CrashLoopBackOff, Error may appear. Upon successful deployment, it will be displayed as Running.
Using the OpenTelemetry Operator, you can collect container logs simply by adding annotations. Since container logs are collected only in the namespace unit set by the user, the method of collecting them for a specific application is also covered.
The advantage of using the open source fluent-bit is that the user can handle it by reading log files stored in the directory.
To build images in Cocktail, creating a build server is essential.
Follow the steps below to reach the build server creation screen.
1) Go to [Build Configuration] - [Build Server] tab and click on the "+ Create" button in the upper right corner.
1) Check the build server list screen to verify the creation of the build server.
2) Click on the created build server to review and confirm its configuration details.
This is a solution to problems that frequently occur when installing and operating the log service. If any additional problems arise, please contact us.
1) To create a pipeline, click on the '+ Create Pipeline' button located in the top right corner of the [CI/CD] - [Pipeline] tab.
2) After entering the pipeline creation information, click the 'Save' button located in the top right corner.
3) Click the "Add Resource" button in the deployment resources section to apply the items you want to configure for the pipeline.
Workloads need to be created in advance.
4) After selecting the workload to add from the workload section, click the "Save" button
5) Once the workload is registered, confirm that the container images registered with the workload are automatically added.
Only images built using the image build feature in Cocktail are integrated
6) After selecting the service to add from the service exposure section, click the "Save" button.
7) After selecting the Ingress to add from the Ingress section, click the "Save" button.
8) After completing the registration of all resources, click the "Run" button located in the top right corner.
9) When the [run popup] appears, enter the content for the execution note regarding this pipeline version, then click the "Save" button.
10) Once the pipeline execution is complete, the release version will be indicated correctly in the top left corner.
When modifying each workload, service exposure, and Ingress in the [Service Map] tab, changes are not reflected in the pipeline. You need to make modifications directly in the pipeline.
Modifying the pipeline ensures that each workload and deployment resource is updated to the latest version
1) Select the pipeline name that needs modification in the [Pipeline] tab.
2) Click the "Create Pipeline Version" button in the top right corner of the pipeline, enter the version, and then click "Create".
1) Activate the "Build Run" button on the right side of the [Image Build] section, then click "Run".
The image build is re-executed and immediately reflected in the workload
2) The image is rebuilt, and you can check the progress of each step in the process.
3) When the image is rebuilt through the pipeline, verify that the image name in the workload is updated to the tag of the image built through the pipeline.
Deactivate the "Build Run" button on the right side of the [Image Build] section (no image changes required)
1) Select the workload name in the [Deployment Resources] section.
2) Make the necessary modifications by selecting the relevant parts, then click the "Save" button in the top right corner.
3) After confirming the change in replicas from 1 to 2 in the workload, click the "Close" button in the top right corner.
4) Once you return to the pipeline modification section, click the "Run" button in the top right corner, and enter the changes in the execution note.
5) In the pipeline's [Deployment Status] tab, verify that there are two pods running.
Use this when you need to roll back to a previous configuration while continuously registering versions through the pipeline.
1) Select the pipeline name that needs modification in the [Pipeline] tab. Once changes are made in the modification section, click the "Rollback" button in the top right corner.
2) When the [Rollback Popup] window appears, review the execution notes of the versions created so far, select the desired version, then click the "Save" button.
3) Once the rollback is completed successfully, confirm that the modified version has been changed to the rollback target (e.g., V3 -> V2)
4) Verify that the pod count has returned to normal, such as 2 -> 1.
The build server required for image building has been created. Now, let's proceed with building the image.
1) Select the [Build/Pipeline] - [Builds] section.
2) Click on the "+ Create" button in the upper right corner.
3) Once the build information window is generated, enter the build details as follows.
4) Click the "+ Add a Build operation" button at the bottom to select the necessary items for the build process.
1) Select the [Code Repository Work] and enter details about the Git or other source from which to load the code, then save.
1) Click the "+Add a Build operation" button to select [User Work]
2) In the [Execution Information] section, enter the necessary commands for source build and apply.
[User Work - Execution Information (Maven)]
Work Name: Provide information about the purpose or content of this task.
Execution Path: Specify the path where the build will be executed in the build container. Enter "/build" as a fixed value.
[User Work - Execution Information (Ant)]
Work Name: Provide information about the purpose or content of this task.
Execution Path: Specify the path where the build will be executed in the build container. Enter "/build" as a fixed value.
3) In the [Work Volume] section, enter the directory path required for source build and apply
1) Click the "+ Add a Build operation" button and select [Build Image Work].
2) In the [Build Image Work] section, write the Dockerfile to create a container image after the source build and apply.
3) After clicking the "Save" button, a popup for [Build Notes] for the build creation will appear below. Write comments for this build and save.
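The Dockerfile in the [Build Image Work] step packages the artifact produced by the source build. A minimal sketch for a Maven-built jar might look like the following; the base image and file paths are illustrative assumptions, not the exact values for your build.

```dockerfile
# Minimal sketch - packages the jar produced by the user work (Maven) step.
# Paths and image name are assumptions; adjust to your build output.
FROM eclipse-temurin:17-jre
WORKDIR /app
# /build is the fixed execution path used in the user work step
COPY build/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```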
1) Once the save is complete, the image build automatically proceeds, and you can review details about the build as shown in the screen below.
2) In the [Build Info], clicking the "View Log" button allows you to check the logs for the image build.
3) Upon successful completion of the build, confirm that all progress is marked as "Done" as shown below.
1) Specify the files or directories to download or upload between the remote host that holds resources related to the build target and the build host where the build task is performed.
2) Click the "+ Add a Build operation" button and select [File (FTP) Work].
1) If integration with an external service is required using the REST method, configure the REST call task.
2) Click the "+ Add a Build operation" button and select [Calling REST Work].
3) If headers are required, click "+ Header Add", enter the Header and Value, and click the "Apply" button.
Enter in the format: Header: Authorization, Value: Basic {authentication string}.
The {authentication string} should be a base64-encoded string of the Image Registry's id:password.
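The authentication string can be produced with standard base64 encoding. A small Python helper (the function name is our own) shows the encoding of `id:password` into the header value:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the Basic auth header value from id:password."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# Example: Authorization header value for id "user" and password "pass"
print(basic_auth_header("user", "pass"))  # Basic dXNlcjpwYXNz
```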
1) Define tasks for cases where scripts are needed during image builds.
2) Click the "+ Add a Build operation" button and select [Script Work].
3) Complete the script task and click the "Apply" button in the bottom right corner.
Before creating a new workload, you need to create and register imagePullSecrets. Please refer to
Create a workload group on the Workload tab of the Service Map.
1) Click on [Application] - [Service Map] tab, select the service map where you want to create the workload, and navigate to the Workload.
2) Click the expand menu (three dots) next to the workload group name.
3) Choose the desired direction for adding a group from the additional items (e.g., Add Group to the Right).
4) A text input form for the name of the workload group will appear. Enter the name of the workload group and press Enter.
The workload group name is a mandatory field.
5) Confirm that the workload group has been added.
Create workloads such as Deployment, StatefulSet, DaemonSet, Job, and CronJob. Although the workload types differ, the process of entering container information is fundamentally the same.
1) Click on [Application] - [Service Map] tab, select the service map where you want to create the workload, go to Workloads, and click the "+ Create" button.
2) Choose the type of workload you want to create.
1) Enter basic information for the workload (type, name, group, description, labels, annotations), deployment and management policies (tolerations, deployment policies, autoscaling, update policies), container information (init containers, containers), and storage information (volumes, volume mounts). Click the "Save" button.
Not all fields are required. You must set the name, group, description, and at least one container; other information can be entered as needed.
1) Select the workload where you want to register the secret, then click on the icon next to "image pull secret".
2) Choose the secret to register, click "+ Add", and then click "Save".
1) Container Basic Information
Enter container name, image information, and resource requests and limits for CPU/Memory/GPU. Container name and image information are mandatory. If CPU/Memory resource requests and limits are not entered separately, the default values displayed in gray on the input screen will be set.
2) Container Commands
Container commands are not mandatory but can be used if necessary.
Enter the commands and arguments to be executed in the container.
Command and arguments can be optionally added with the [+ Add] button.
If unnecessary, use the [ - ] button to the right of the text field to delete.
3) Container Environment Variables
Container environment variables are not mandatory but can be used if necessary.
Set various configuration information to be used in the container. Configuration information includes environment variables, config maps, secrets, and field references for workload metadata. Config maps and secrets to be used in the container must be pre-created on a separate configuration information screen.
4) Security Settings
Security settings are not mandatory but can be used if necessary.
Set user and permissions for the container or Linux capabilities.
5) Health Check
Health check settings are not mandatory but can be used if necessary.
Set Liveness Probe and Readiness Probe for the container.
You can choose the probe type on the Liveness Probe tab and Readiness Probe tab.
EXEC: Execute a specified command inside the container and check the exit code.
TCP SOCKET: Attempt to establish a TCP socket connection to a specific host and port and check success.
HTTP GET: Send a GET request to the specified HTTP endpoint and check success.
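As an illustration, these probe types correspond to standard Kubernetes probe settings. The sketch below is a hypothetical container fragment; the image, command, path, and port are placeholders.

```yaml
# Hypothetical container spec fragment showing the probe types above.
containers:
  - name: app                           # placeholder
    image: registry.example.com/app:1.0 # placeholder
    livenessProbe:
      exec:                             # EXEC: run a command; exit code 0 = healthy
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:
      httpGet:                          # HTTP GET: 2xx/3xx response = ready
        path: /healthz
        port: 8080
      periodSeconds: 5
      # TCP SOCKET alternative: a successful connection counts as success
      # tcpSocket:
      #   port: 8080
```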
6) LifeCycle Hook
LifeCycle Hook settings are not mandatory but can be used if necessary.
Enter PostStart and PreStop lifecycle hooks.
You can choose the hook type on the PostStart tab and PreStop tab.
EXEC: Register a command to be executed internally in the container before it starts (PostStart) or before it terminates (PreStop).
HTTP GET: Register an HTTP GET request to a specified endpoint, sent after the container starts (PostStart) or before it terminates (PreStop).
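The hook types above map to the Kubernetes lifecycle section of a container. The fragment below is a hypothetical sketch; the image, command, path, and port are placeholders.

```yaml
# Hypothetical container spec fragment with PostStart / PreStop hooks.
containers:
  - name: app                           # placeholder
    image: registry.example.com/app:1.0 # placeholder
    lifecycle:
      postStart:
        exec:                           # EXEC hook: runs right after the container starts
          command: ["/bin/sh", "-c", "echo started > /tmp/started"]
      preStop:
        httpGet:                        # HTTP GET hook: called before termination
          path: /shutdown
          port: 8080
```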
7) Container Ports
Enter container port information.
The Container Port field is a mandatory input.
The Protocol field allows you to choose TCP, UDP, or SCTP.
1) The input items for init container information are the same as for regular containers. (Only the execution order is different.)
2) An init container is a one-time-use container that runs before the main application container starts within a pod. Init containers are used to perform specific tasks before the application container starts and to pass the results to the application container through a shared volume.
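The shared-volume pattern described above can be sketched as follows; the names, images, and paths are hypothetical.

```yaml
# Hypothetical pod spec fragment: an init container prepares data on a
# shared emptyDir volume, then the main container consumes it.
initContainers:
  - name: init-config                   # placeholder
    image: busybox:1.36                 # placeholder
    command: ["sh", "-c", "echo ready > /work/flag"]
    volumeMounts:
      - name: workdir
        mountPath: /work
containers:
  - name: app                           # placeholder
    image: registry.example.com/app:1.0 # placeholder
    volumeMounts:
      - name: workdir
        mountPath: /work                # reads the result left by the init container
volumes:
  - name: workdir
    emptyDir: {}
```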
The deployment, autoscaling, and update policy input sections are located below the basic workload creation information input section. The order of input does not matter, and you only need to set the information as needed.
1) Toleration Settings
2) Deployment Policy Settings
The Replicas field is a mandatory input. Enter the number of instances to replicate as a positive integer.
3) Autoscaling Settings
If using CPU and Memory types, the HPA name field is activated and is a mandatory input.
4) Update Policies
To update the settings for a configured workload, access the configuration screen for that workload. Here, we'll use the example of modifying the container image. The process remains the same for other configuration changes; save the modified settings and restart the workload.
1) Click on the "Settings" tab after selecting the workload to be changed.
2) Single-click on the container name, modify the image name, and apply the changes.
3) After completing the modifications, click "Save and Start."
Monitor the situation where the container restarts with the updated image settings on the detailed workload monitoring screen.
To stop, restart, or delete a specific workload, access the detailed deployment information screen for that workload.
Click the "Actions" button at the top right of the detailed deployment information screen for the running workload. A selection box will appear, allowing you to choose to stop or restart the workload. Select either "Stop" or "Restart" based on your needs.
Before deleting a running workload, you must first stop the workload. Click the "Actions" button at the top right of the detailed deployment information screen for the stopped workload. A selection box will appear, allowing you to start or delete the workload. Choose "Delete," and the workload will be deleted.
1) Click "Actions," choose "Stop" to halt the running workload.
2) After stopping the workload, click "Actions" for the stopped workload, choose "Delete" to remove the workload.
When accessing the workload query menu in the service map, workloads are sorted and displayed based on workload groups. The display method of workload group names or arrangements can be changed as follows.
Change Group Name
Change Column Count
Move Left
Move Right
Add Group on the Left
Add Group on the Right
To perform these actions, click on the "expand menu (three dots)" displayed to the right of the workload group name.
To delete a workload group, there should be no workloads within that group. If there were existing workloads in the group, they must be deleted first.
To delete a workload group, click the "expand menu (three dots)" displayed to the right of the workload group name. In the popup, "Delete Group" will be activated; select this option.
In the tables below, fields marked with an asterisk (*) are required.

The next steps will guide you through the actual process of building an image. Please proceed to the "" page for detailed instructions.
Name* | Enter the name for the build server to be created |
Description | Provide a description for the build server to be created |
Cluster* | Select the cluster |
Namespace* | Choose the namespace where the build server will be executed |
Insecure Registries* | Specify the public IP of the Harbor instance from which images will be pulled or pushed |
Release Name* | Enter the version to be deployed |
Cluster* | Choose the cluster to deploy the package |
Namespace* | Choose the namespace to deploy the package |
Name* | Enter the pipeline name to create |
Version* | Input the version for the pipeline |
Service Map* | Select the service map to execute the pipeline |
Image Name* | Enter a name describing what the image represents (Note: uppercase letters are not allowed) |
Registry* | Select the registry you created (If there are multiple registries, choose the applicable registry) |
Use Auto Tag (Choose 1) | Indicate whether tags should be automatically set when updating or changing the image (Split into "Use/Do Not Use," and if "Do Not Use" is selected, the tag may be fixed, overwriting the existing image) |
Tag* | Provide details for the tag to be attached to the created image (If "Do Not Use" is selected for auto-tagging, the tag entered here will be used consistently) |
Auto-Increment Type (Choose 1) | If auto-tagging is set to "Use," it can be specified as "Time/Sequence" |
Build Execution Server (Choose 1) | Select the server to perform the build |
Code Repository Work | Configure information to fetch source from git or similar sources |
User Work | Configure information related to the build of the source fetched from git or similar sources |
File (FTP) Work | Set up tasks to download or upload files or directories between the remote host and the build host where the build tasks are performed |
Calling REST Work | If integration with external services is required using the REST method, configure REST call tasks |
Script Work | Configure script information if a specific script is needed |
Build Image Work* | Write a Dockerfile to apply the built source to a Base image and create a new image |
Work Name* | The job stage for the image build; enter a title for this job |
Repository Address* | Enter the address for the Git or other source repository from which to import the source code. |
Branch* | Enter the source branch applied to the repository |
Authentication | When selecting the combo box for authentication, you must enter the user account and password required to access the git |
Code Storage Path | Enter the directory to store the source (if not specified, a directory named after the Git project is created automatically) |
Container Path* | Write the path to the container where the source will be built |
Build Host Path* | When building the source, a temporary container is created to proceed with the build. This is the path to the temporary container used during the source build |
Work name* | The job stage for the image build; enter a title for this job |
Enter Dockerfile content* | Write a Dockerfile to create the actual container image |
Work name* | The file (FTP) operation stage; enter a title for this operation |
Host address* | Server address with the directory or file that needs to be uploaded |
Certification* | Required if the host address needs an account and password |
User/Password* | Account and password used to connect to the host address |
Task type (choose one) | File Download (If you want to include it in the image during image build, select this type) |
Remote Directory/File* | Absolute path to the file to be uploaded to the image when building the image (Host address must have that file) |
Build Host Directory* | Directory location to upload to (fixed to /tmp/) |
Work name* | The REST call operation stage; enter a title for this operation |
REST Method (choose one) | Choose the API call method |
URL* | Write the URL for the API call |
Certification | Required if the host address needs an account and password |
User/Password | Username and password for the host address |
Connection Timeout* | Set the timeout (maximum response wait time) for the API call |
Expected response code* | Write the success code expected after the API call (e.g., 200) |
Expected response content | Must be left blank |
Save the response to the build host path | Write the filename if the response value needs to be stored (e.g., response.txt) |
Work name* | Define the steps for the image build process along with the corresponding work title. |
Enter the script content* | Enter the content of the script to be executed. |
Type | It is displayed according to the type selected when creating the workload |
Name* | Enter the name for the workload to be created |
Group* | Choose one from the existing workload group names |
Description* | Write a description for the workload |
Label | Specify key/value pairs for identification using this information |
Annotation | Has no selection behavior; used to attach additional explanatory information |
Node Affinity | Check the labels of nodes and configure deployment only on nodes with the specified label |
Toleration | Set rules to allow pod placement on nodes with taints |
Deployment policy | Configure overall policies for pod deployment regarding replicas, hosts, startup/shutdown times, permissions, etc |
Auto Scaling | Automatically adjust (scale) the number of instances based on resource usage |
RollingUpdate Strategy | Define policies needed for pod updates |
Image Pull Secret | Automatically register Harbor login information to access and retrieve container images from Harbor |
Name* | Enter the container name to be created, using only lowercase letters, numbers, and hyphens (-) |
Image* | Provide image information for creating the pod |
CPU* | Set the amount requested and the limit amount to configure the CPU needed during pod startup (amount requested) and the maximum CPU that can be allocated (limit amount). The default is 100 |
Memory* | Set the Amount Requested for memory and the Limit Amount for the maximum memory allocation during pod startup |
GPU resources | If the pod uses GPU, specify the Limit Amount and Amount Requested for GPU |
Command | Enter the command values to be executed when the pod starts |
Arguments | Provide arguments for the command to be executed when the pod starts |
Direct input (KEY)* | Enter the "key" directly for the environment variable to be registered when setting up pod environment variables |
Direct input (VALUE)* | Input the "value" directly for the environment variable to be registered when setting up pod environment variables |
Config map Value (KEY)* | Enter the name of the ConfigMap value to be registered in the environment variables |
Config map Value(VALUE)* | Select the name of the previously configured ConfigMap |
Secret Value (KEY)* | Enter the name of the Secret value to be registered in the environment variables |
Secret Value(VALUE)* | Select the name of the previously configured Secret |
Field Ref(KEY) | Enter the key that references the field value of the pod |
Field Ref(VALUE)* | Input the value that references the field value of the pod |
Resource Field Ref(KEY) | Enter the key that references the resource field value of the pod |
Resource Field Ref(VALUE)* | Input the value that references the resource field value of the pod |
Run as Non ROOT | Enable if the container should run as a regular user rather than as root |
Run as User | Input the user to be used when the container is running |
Run as Group | Input the group to which the container will belong |
Run Privileged Mode | Required if the container needs to interact directly with the host system's kernel |
Allow Privilege Escalation | Decide whether to allow privilege escalation |
Read Only Root filesystem | Set whether the container's root file system should be read-only |
seLinuxOptions(level) | Set the level used in SELinux security policy |
seLinuxOptions(role) | Set the role used in SELinux security policy |
seLinuxOptions(type) | Set the type used in SELinux security policy |
seLinuxOptions(user) | Set the user used in SELinux security policy |
Linux Capabilities(add) | Add additional Linux kernel features |
Linux Capabilities(drop) | Remove specific Linux kernel features |
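The security settings above correspond to the securityContext section of a container. The fragment below is a hypothetical sketch; the name, image, and capability choices are placeholders.

```yaml
# Hypothetical securityContext fragment matching the fields above.
containers:
  - name: app                           # placeholder
    image: registry.example.com/app:1.0 # placeholder
    securityContext:
      runAsNonRoot: true                # Run as Non ROOT
      runAsUser: 1000                   # Run as User
      runAsGroup: 3000                  # Run as Group
      privileged: false                 # Run Privileged Mode
      allowPrivilegeEscalation: false   # Allow Privilege Escalation
      readOnlyRootFilesystem: true      # Read Only Root filesystem
      capabilities:
        add: ["NET_BIND_SERVICE"]       # Linux Capabilities (add), example value
        drop: ["ALL"]                   # Linux Capabilities (drop), example value
```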
Container Port* | Enter the port number for the container port to be created |
Protocol (Choose one) | Specify a specific communication protocol used for network communication |
name | Enter the name of the container port to be created |
Host IP | Input the IP address of the host machine |
Host Port | Specify the port number on the host machine that connects to the corresponding container port |
Effect (Choose one) | You can set rules for placing Pods on nodes, with three options: NoSchedule, PreferNoSchedule, and NoExecute |
Key* | Write the Key value for Toleration |
Operator (Choose one) | Choose between Exists and Equal. Equal requires both the key and value to match, while Exists matches as long as the key exists, regardless of value |
Value* | Write the Value for Toleration. If you choose the Equal option for Operator, it becomes active |
Toleration Seconds | The length of time the Pod may keep running on a node after a matching NoExecute taint is applied. This is activated when you choose the NoExecute option for Effect |
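The toleration fields above map directly to a Kubernetes tolerations entry. The fragment below is a hypothetical sketch; the key and value are placeholders.

```yaml
# Hypothetical toleration fragment matching the fields above.
tolerations:
  - key: "dedicated"                    # Key (placeholder)
    operator: "Equal"                   # Operator: Equal compares key and value
    value: "build"                      # Value (active when Operator is Equal)
    effect: "NoExecute"                 # Effect
    tolerationSeconds: 300              # active when Effect is NoExecute
```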
Number of copies | Write the number of instances to replicate |
Host Name | Write the hostname |
Grace period (seconds) on exit | Used to set the time to wait before a container or pod is terminated |
Waiting time after preparation(seconds) | Time to wait after the task is completed before executing additional actions |
Node Label KEY | The Key value of the label that the node has when deploying instances to a specified node |
Node label value | The value of the label that the node has when deploying instances to a specified node |
Access authority (RBAC services Account) | Service account used to manage access permissions for resources |
CPU Type | If you check the box on the right, choose between Utilization and AverageValue - Utilization : The percentage of CPU used to process tasks - AverageValue : Average CPU usage |
CPU Utilization(%) | If you select CPU type as Utilization, it becomes active |
CPU Average Usage Value(mCore) | If you select CPU type as AverageValue, it becomes active (minimum value must be greater than or equal to 1) |
Memory Type | If you check the box on the right, choose between Utilization and AverageValue. - Utilization : The percentage of memory used to process tasks - AverageValue : Average memory usage |
Memory Utilization(%) | If you select Memory type as Utilization, it becomes active |
Memory Average Usage Value (MB) | If you select Memory type as AverageValue, it becomes active (minimum value must be greater than or equal to 1) |
HPA name | Set the HPA configuration name |
Max Replicas, Min Replicas | Write the maximum and minimum number of instances to be maintained |
Scale Use | Either CPU type or Memory type must be used for activation - Scale Down : Choose between Disabled, Max, and Min - Scale Up: Choose between Disabled, Max, and Min |
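The autoscaling settings above roughly correspond to a Kubernetes HorizontalPodAutoscaler. The manifest below is a hypothetical sketch; the names, replica counts, and target utilization are placeholders.

```yaml
# Hypothetical HorizontalPodAutoscaler matching the settings above.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa                         # HPA name (placeholder)
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app                           # placeholder workload
  minReplicas: 2                        # Min Replicas
  maxReplicas: 5                        # Max Replicas
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization             # CPU Type: Utilization
          averageUtilization: 70        # CPU Utilization (%)
```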
RollingUpdate Strategy | Choose one between Rolling Update and Recreate |
Percentage of Interruption to Replication | Becomes active when Rolling Update is selected. Choose one between Percentage and InstanceCount |
Expansion ratio vs. number of copies | Becomes active when Rolling Update is selected. Choose one between Percentage and InstanceCount |
User Account Management (IAM, Identity & Access Management) is crucial for security management, covering the entire lifecycle from issuance to revocation. To achieve this, only authorized users should have permission to create, delete, and modify accounts. Additionally, the platform should allow the verification of existing account permissions and statuses.
Navigate to [Settings] - [Users] to access this information.
Users logging into the Cocktail Cloud platform require an account. For maintaining security levels and role separation, it is recommended to perform major configuration tasks and platform resource management operations with 'Admin' privileges. This is akin to requesting and using root permissions only temporarily for specific tasks in an OS operating environment.
Possesses the highest level of authority, capable of creating and modifying other user accounts, viewing and searching audit logs.
Can create platforms and allocate resources.
Can grant cluster access and terminal access permissions.
Can create workspaces on the platform and add members to them.
Add service maps, which represent the actual service units in operation.
When adding a service map, allocate and limit resources such as CPU, Memory, and the total number of Pods.
Register clusters for use on the platform.
Can register clusters for use on the platform, monitor the resources and status of allocated clusters.
Can add or reinstall addons, restart them, check the status of deployed applications.
Can view the status of deployed applications, add or create container images.
Add or create container images.
Create and manage registries.
Deploy Helm charts with publicly available packages on the platform.
Can manage resources assigned to them by an administrator and serve applications.
Can create workloads, expose services, request and use volumes, configure application deployment, and utilize package and pipeline features.
Can add or create container images.
Can deploy packages exposed in the Helm chart on the platform.
Ingress is a feature that allows controlling HTTP/HTTPS routing from outside the cluster to internal services within the cluster. To create Ingress, it is necessary to install the Ingress controller in the cluster beforehand through the Cocktail Cloud's addon management screen.
1) Click on the "+ Create" button at the top right of the Ingress screen in the Service Map.
1) Provide the necessary basic information for Ingress configuration.
To configure automatic redirection from HTTP to HTTPS, set SSL Redirect to true and include force-ssl-redirect: true in the annotations.
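For reference, assuming the NGINX Ingress Controller was installed via the addon feature, the generated Ingress resembles the sketch below. The host, service, and secret names are placeholders.

```yaml
# Hypothetical Ingress, assuming the NGINX Ingress Controller addon.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress                     # placeholder
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
spec:
  tls:
    - hosts: ["app.example.com"]        # placeholder host
      secretName: app-tls               # Secret that terminates TLS on port 443
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: app-service       # placeholder backend service
                port:
                  number: 80
```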
1) Click the "Edit" button in the "Rules" section of Ingress settings.
2) When adding a host, enter the desired host name and click "+ Add."
If there are pre-registered hosts, select "Select from existing hosts" and click "+ Add".
Configure TLS-related information for Ingress, including secrets used to terminate TLS traffic on port 443 and host information included in TLS certificates.
1) Click the "Edit" button in the "TLS" section of Ingress settings.
2) Select the Secret and target host, then click "+ Add."
3) After completion, click "Apply."
1) To actually create the Ingress, be sure to click the "Save" button.
Access the [Ingress] screen in the Service Map to view the created Ingress information.
1) In the [Ingress] screen of the Service Map, view the list of Ingress.
1) Click on the Ingress name link displayed in the Ingress list. The configuration and status information of the Ingress will be displayed.
2) You can also view Ingress settings information and status information in YAML format. After clicking the "Settings" button at the top of the screen, select "YAML View" as the settings view at the top of the screen to display information in YAML format.
To invoke the service functionality provided by a workload both within and outside the cluster, services are defined using Cluster IP and Node Port methods.
If services are defined using the Node Port method in a cloud-configured cluster, a load balancer can be configured in front of it, allowing external invocation of services through the load balancer address and port.
Cluster IP: Groups pods with the same label and load balances across them (connections are distributed randomly rather than round-robin), enabling internal communication within the cluster.
Node Port: Opens the same port on every node, load balances to the pods via the Cluster IP and port, and allows external exposure.
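The two service types map to a standard Kubernetes Service manifest. The sketch below shows the NodePort case; the names, labels, and port numbers are placeholders.

```yaml
# Hypothetical NodePort Service; use type: ClusterIP for internal-only exposure.
apiVersion: v1
kind: Service
metadata:
  name: app-service                     # placeholder
spec:
  type: NodePort
  selector:
    app: my-app                         # Label Selector: pods carrying this label
  ports:
    - name: http
      protocol: TCP
      port: 80                          # Service Port (cluster-internal)
      targetPort: 8080                  # Target Port (container)
      nodePort: 30080                   # opened on every node (30000-32767 range)
```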
When using Node Port, you need to register the Node Port for KT LB to use it.
For inquiries regarding KT firewall and LB Port Open, please contact MSP directly.
Users can easily create various service exposure types through the web UI console.
When a cluster is configured in a public cloud, Cocktail Cloud automatically creates a load balancer. Service exposure using the load balancer type is possible only on clouds that support it, such as AWS, Azure, and GCP.
Create services
View services
1) Select [Application] - [Service Map], then click the "+ Create" button to move to the service exposure screen.
1) Select ClusterIP or NodePort as needed.
[For KT customers] When creating a Node Port type service, you need to contact MSP for load balancer creation and firewall setup.
1) Enter basic service exposure information.
2) Click "Label Selector," choose the workload and label to connect the service to, then click the "+ Add" button.
3) Confirm that the selected label is displayed at the bottom as Key, Value, and click the "Apply" button.
You can either select labels pre-set in the workload or directly input a label name and value.
When directly adding a workload, input fields for Key and Value will be added at the bottom, and you can enter them directly.
1) In the service exposure settings, click the "Edit" button in the Target Ports section.
2) Click "+ Add," then enter the Name, Protocol, Target Port, and Service Port at the bottom.
Name, Protocol, Target Port, and Service Port are mandatory, and you can choose between TCP and UDP for the protocol.
After completing the information input, click the "Save" button to create the service.
Search the generated service information on the service exposure screen of the service map.
1) Access the service exposure screen in the service map to check the list of created services.
1) Click on the service name displayed in the service list to view the service's configuration and status information.
2) You can also view service configuration and status information in YAML format by clicking the settings button on the top screen, then selecting "YAML View" from the left checkbox.
Service mesh is a concept used to describe the network of microservices and their interactions, forming the foundation for managing service-to-service communications.
Istio is the industry-standard technology for managing service mesh, and Cocktail supports the installation and monitoring of Istio.
It visualizes service-to-service connection configurations.
It provides network traffic monitoring information.
Istio installation is made easy through Cocktail Cloud's addon features.
Istio's monitoring screen is integrated with Cocktail Cloud's web UI.
Install Service Mesh
View Service Mesh
1) To use the service mesh, the platform administrator needs to deploy Istio to a specific cluster through the addon feature.
1) Go to [Infrastructure] - [Clusters] tab and select the registered cluster.
1) The overview screen for the selected cluster will be displayed. Click on the [Addon] menu at the top to see the list of installed addons for that cluster.
1) To install Istio, click on the "Deploy" button in the upper-right corner of the Addons list screen. The list of addons already installed and those available for installation on the cluster will be displayed.
1) Click on the "Deploy" button for the Istio card. Information about the Istio addon and explanations of parameters that can be configured during deployment will be shown.
1) Navigate to the "Settings" tab at the top of the Istio addon information screen. The configuration information input screen for deploying Istio will appear. After entering the configuration information, click the "Deploy" button.
2) After Istio is installed on the cluster, the "Service Mesh" menu will be displayed at the top when accessing the service map screen for that cluster.
1) Access the "Service Mesh" menu for a specific service map. In the center of the screen, it displays the connection relationships between services and workloads, along with information about requests and responses. On the right side of the screen, it shows traffic-related information specific to the selected connection relationship in the center of the screen.
1) Click on the question mark button at the top of the Graph to get information on how to view the graph
1) Choose the desired graph type to view.
2) Available graph types include
App Graph: Displays the connection relationships between services and workloads, representing all versions of workloads as a single graph node.
Service Graph: Displays the connection relationships between services.
Versioned App Graph: Displays the connection relationships between services and workloads, showing individual connections with multiple versions of workloads. Additionally, it includes multiple versions of the same workload within a single box.
Workload Graph: Displays the connection relationships between services and workloads, showing individual connections with multiple versions of workloads. Unlike the Versioned App Graph, it does not include multiple versions of the same workload within a single box.
Configure the information displayed on the edges connecting nodes in the graph. Additionally, configure the elements displayed on the graph.
Cocktail Cloud provides ConfigMap and Secret types as configuration information.
ConfigMap: ConfigMap is a Kubernetes object for injecting configuration data into containers.
Secret: Secret is a Kubernetes object that includes sensitive data such as passwords, tokens, and key values. Kubernetes defines types like Docker-registry, generic, and tls for secrets, and Cocktail Cloud supports all these types.
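In Kubernetes terms, the key-value configuration described above looks like the sketch below; the names and values are hypothetical.

```yaml
# Hypothetical ConfigMap and generic Secret in key-value form.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config                      # placeholder
data:
  LOG_LEVEL: "info"
  DB_HOST: "db.example.com"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret                      # placeholder
type: Opaque                            # the "generic" secret type
stringData:
  DB_PASSWORD: "change-me"              # stored base64-encoded by Kubernetes
```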
When users create configuration information in Key-Value format in the configuration information management menu, they can easily select the configuration information to be used in containers from the dropdown menu during container configuration.
Create ConfigMap
Create Secret
View Configuration Information
Use Configuration Information in Workloads
1) Go to [Application] - [Service Map] tab, select Configuration Information, click the "+ Create" button, then choose ConfigMap.
1) Enter the name, description, labels, and annotations for the ConfigMap.
1) Click the "+ Add" button on the bottom right of the ConfigMap information input screen to enter key-value information, then click "Apply." If managing multiple key-value information in the ConfigMap, repeat the key-value information entry as needed.
The KEY field is a required input.
1) After entering ConfigMap information, click the "Save" button to actually create it.
1) Go to [Application] - [Service Map] tab, select Configuration Information, click the "+ Create" button, then choose Secret.
1) Click the "+" button on the bottom right of the Secret information input screen to enter key-value information. If managing multiple key-value information in the Secret, repeat the key-value information entry as needed.
The KEY field is a required input.
1) After entering Secret information, click the "Save" button to actually create it.
The previous method of forcibly generating and assigning imagePullSecrets in Cocktail is no longer used. Users can now directly create imagePullSecrets as needed and use them in their workloads. (imagePullSecrets provide authentication tokens in the form of Kubernetes Secrets, storing Docker authentication information used to access private registries.)
1) In the [Application] - [Service Map] tab, select the service map where you want to create the secret, and go to the settings information, then click the "+ Create" button.
2) Click on "Secret."
1) Go to [Application] - [Service Map] tab, select Configuration Information to view the list of configuration information.
1) Select the name of the configuration information to see its details.
2) Detailed information of the configuration information can also be viewed in YAML format. Move to the configuration tab in the upper right corner of the screen, then select "YAML View" from the displayed screen. YAML-formatted information will be shown.
1) Select the workload that will use the configuration information and go to the configuration tab to display the detailed configuration screen for the workload.
1) Select the container name and go to the [Environment Variables] tab.
2) Choose the type of configuration information you want to use.
If you select ConfigMap value or Secret value, you can choose the configuration resource and one of the keys it contains. To expose the selected key inside the container, enter the environment variable name it should map to, then click the "Apply" button.
The KEY and VALUE fields are mandatory for the Direct Input, ConfigMap value, Secret value, Field Ref, and Resource Field Ref types.
1) To apply the environment variables, a restart of the container is required. Click the "Save and Start" button at the top right of the detailed configuration screen for the workload to restart it.
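In Kubernetes terms, the ConfigMap value and Secret value options above correspond to `valueFrom` references in the container's env list. A minimal sketch of the spec fragment this produces (all names are illustrative):

```yaml
containers:
  - name: app
    image: app:1.0
    env:
      - name: LOG_LEVEL             # the environment variable KEY entered in the UI
        valueFrom:
          configMapKeyRef:
            name: app-config        # the selected ConfigMap (example)
            key: LOG_LEVEL          # the selected key within it
      - name: DB_PASSWORD
        valueFrom:
          secretKeyRef:
            name: app-secret        # the selected Secret (example)
            key: password
```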
1) In the workload's detailed deployment information view, find the container with the applied environment variables and click the terminal icon on the right. An interactive shell for that container is displayed. In the shell, run the `env` command to list the environment variables and confirm that they are set correctly.
A volume, in simple terms, is a directory existing on a disk or within a container. Typically, the lifespan of a volume matches that of the Pod that encapsulates it: when the Pod ceases to exist, the volume disappears as well.
However, in some cases, it may be necessary to preserve the data on the disk even if the Pod disappears. In such cases, persistent volumes (PVs) are used.
Regular Volumes: Supports emptyDir and hostPath methods.
Persistent Volumes (PVs): Supports Single type (usable only on one node) and Shared type (can be shared across multiple nodes).
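As a point of reference, the regular volume types map directly to Kubernetes volume definitions. An emptyDir sketch, whose data lives and dies with the Pod (the Pod and image names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: scratch-example        # illustrative Pod name
spec:
  containers:
    - name: app
      image: app:1.0
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch  # where the volume appears in the container
  volumes:
    - name: scratch
      emptyDir: {}             # deleted together with the Pod
```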
When users input the minimum required information for a persistent volume, Cocktail Cloud automatically generates related Persistent Volume (PV) and Persistent Volume Claim (PVC) resources and matches the PVC with the corresponding PV.
Developers then only need to select the created PVC when configuring a Pod to set up the volume and volume mounts.
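The auto-generated PVC is a normal Kubernetes PersistentVolumeClaim. A minimal sketch of what the platform produces from the volume request form (the name, size, and storage class are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-claim            # the volume request name (example)
spec:
  accessModes:
    - ReadWriteOnce           # SINGLE type; SHARED would allow ReadWriteMany
  resources:
    requests:
      storage: 10Gi           # the requested capacity in GB
  storageClassName: standard  # the selected storage (example)
```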
Create volume requests
View volume requests
Use volumes in containers
1) Go to [Application] - [Service Map] - [Volume Requests], then click the "+ Create" button in the top right to move to the volume request creation screen.
1) Access the volume request screen in the service map to check the list of volume requests created by the user.
1) Click on the "Name" of the volume request you want to check in the volume request list.
2) To view detailed information about the created PVC in YAML format, click the settings button at the top of the screen, then select "YAML View" from the checkbox on the left.
1) Select the "Volume (PV)" of the volume request you want to check in the volume request list.
2) To view detailed information about the created PV in YAML format, go to the "Settings" tab on the top screen.
1) Select the workload that will use the volume request, then click the "Settings" tab to go to the detailed workload configuration screen.
1) Click the "+ Add" button in the volume section of the workload configuration information.
2) Choose the desired volume type and enter the corresponding volume name.
The volume type can be Empty Dir, Host Path, Config Map, Secret, or Persistent Volume; additional input may be required depending on the selected type.
3) After entering the volume type and volume name, click the "Apply" button to save.
After adding a volume, it needs to be mounted in the workload to be used.
1) Click the "+ Add" button in the volume mount section of the workload configuration information.
2) Select the container and volume to mount, then click the "+ Add" button.
3) Specify the path to mount the volume in the container.
The container and volume selection fields can only be populated if containers and volumes already exist.
The container path field is a mandatory input.
4) Click the "Apply" button to create the volume mount.
1) After adding volumes and configuring volume mounts, click the "Save and Start" button at the top right of the workload's detailed configuration screen.
2) You can check that the configured volume and volume request are applied by confirming that the container restarts.
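Together, the volume and mount configured above correspond to paired `volumes` and `volumeMounts` entries in the Pod spec, roughly as follows (names and paths are illustrative):

```yaml
spec:
  containers:
    - name: app
      image: app:1.0
      volumeMounts:
        - name: data               # the volume selected in step 2
          mountPath: /var/lib/app  # the container path specified in step 3
  volumes:
    - name: data                   # the volume added earlier
      persistentVolumeClaim:
        claimName: data-claim      # the selected volume request (example)
```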
One of the key advantages of Cocktail Cloud lies in the ability to build a service environment as a multicluster to meet the business demands of providing services in a multi-cloud environment.
Refer to the link for instructions on how to register clusters.
When configuring a multicluster, all types of environments can be organized into clusters according to business requirements, allowing centralized management on a single platform. In other words, based on business needs, various forms of clusters, including cloud service providers and on-premises environments, can be strategically selected and used without constraints imposed by vendors.
Configurable server infrastructure environments include
On-premises
Data centers
Private clouds
Public clouds
With the increasing number of companies choosing hybrid clouds to meet business requirements, as well as regional, legal, and security requirements, a hybrid cloud can be configured in various forms of clusters on a single platform.
1) Once you receive an API token, you can check the status, expiration date, and API scope for the current token in [External APIs] - [History].
2) By selecting the API scope icon, you can view a list of APIs available for the current token.
Set the Authorization Header with the previously issued token.
${API-GATEWAY} : Enter the domain or IP:port information to connect to the API gateway.
[token] : Enter 'Bearer' followed by a space, then paste the issued token.
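As an illustration, the header can be assembled in Python as below. The gateway address, token, and endpoint path are placeholders, not real values:

```python
import urllib.request

API_GATEWAY = "https://api-gateway.example.com"  # hypothetical ${API-GATEWAY}
TOKEN = "eyJhbGciOiJIUzI1NiJ9.example"           # placeholder issued token

def build_auth_header(token: str) -> dict:
    """Return the Authorization header in the 'Bearer <token>' form."""
    return {"Authorization": f"Bearer {token}"}

# Attach the header to an API request (constructed here, not sent).
request = urllib.request.Request(
    f"{API_GATEWAY}/api/example",                # hypothetical endpoint path
    headers=build_auth_header(TOKEN),
)
print(request.get_header("Authorization"))
```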
1) Available API types can be checked below
A Workspace is a dedicated space for building, deploying, and operating applications with allocated cluster resources. Typically created on a team basis, Workspaces allow registration of users, clusters, and libraries based on the intended use.
Refer to the link below for instructions on creating a Workspace.
Cluster resources are allocated on a Workspace level based on requests made by authorized administrators.
Administrators can efficiently allocate and manage resources based on the scale of services and requests. User groups can then use explicitly assigned resources strategically. Moreover, Workspaces provide a secure and isolated environment, ensuring that resource usage by other teams/groups has no impact.
Clusters can be allocated exclusively for a user group or shared among two or more Workspaces. Exclusive resource usage guarantees higher independence. Sharing resources enables the utilization of large clusters or specific resources (such as GPU Nodes) without redundant investments.
Cocktail Cloud utilizes over 200 metrics for resources and states in a multi-cluster environment, providing more than 100 monitoring panels.
Each panel is arranged in views for clusters, ingresses, ETCD, nodes, and namespaces. Additionally, an alarm/event page is provided to review alarms/events chronologically and maximize the visualization of the user platform's status.
Monitoring information in Cocktail Cloud can be accessed in the left-hand [Monitoring] menu. Sub-menus include clusters, ingresses, ETCD, nodes, GPUs, namespaces, and alarms/events.
The platform offers up-to-date status information at the cluster level. Key status information provided in the cluster view includes
Number of API server calls per second
CPU usage
Disk usage
Disk I/O speed
Memory usage
Restarted Pod tracking
Average request time over the last 10 minutes
Pod executions by status
Top 5 Pods with high CPU usage
Top 5 Pods with high memory usage
Ingress exposes HTTP and HTTPS paths from outside the cluster to internal services. It provides configuration options for externally accessible URLs, load balancing traffic, SSL/TLS termination, and name-based virtual hosting. Ingress plays a crucial role in the network area of services, making multidimensional monitoring essential.
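For context, a typical Ingress resource behind these metrics might look like the following (the host, service, and secret names are illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress                  # example Ingress name
spec:
  tls:
    - hosts:
        - www.example.com            # host covered by the certificate
      secretName: web-tls            # TLS certificate Secret (example)
  rules:
    - host: www.example.com          # name-based virtual hosting
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service    # internal service exposed externally
                port:
                  number: 80
```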
The status information provided in the integrated dashboard's Ingress view includes
Ingress controller requests
Ingress controller connections
Ingress controller request success rate
Recent Ingress configuration reload success and failure
Ingress controller request trends
Ingress controller success rate trends
Network I/O trends
Average memory usage trends
Average CPU usage trends
The status information provided in the integrated dashboard's ETCD view includes
Presence of ETCD leader
Number of recent leader changes
Number of recent leader change proposal failures
RPC ratio
Database usage
Node disk processing speed
Overall disk processing speed
Client traffic In/Out
ETCD server-specific processing status
Network usage
Snapshot processing speed
The status information provided in the integrated dashboard's Node view includes
Cluster CPU usage frequency
Cluster memory usage
Cluster disk usage
Cluster network usage
Recent changes and current values of the file system's free space ratio
List of file systems and their usage
The status information provided in the integrated dashboard's GPU view includes
Average GPU utilization
GPU usage trends
Average GPU memory utilization
GPU memory usage trends
GPU temperature and power
GPUs/MIGs
Timeslicing
The status information provided in the integrated dashboard's Namespace view includes
Number of containers
Namespace creation time
Total number of Pods in the namespace
Namespace PVC status
Namespace CPU allocation
Namespace memory allocation
Number of Pods running in the namespace
The monitoring metrics displayed in the integrated dashboard are delivered through dashboard, SMS, and E-mail channels based on user configurations. Users can filter and view metrics by cluster, namespace, and major resource groups.
In the dashboard, events occurring in the past hour can be reviewed, and accumulated events per minute are provided with detailed event descriptions, enabling quick identification of the cause based on event content alone.
Each event is categorized into five levels of importance, and real-time notifications are sent through SMS or E-mail (or both) according to user preferences. Users can filter and view recently occurring events and notifications, with the option to retain data for up to one year based on user settings.
The primary components of a cluster are nodes, storage, and applications. To effectively manage a configured cluster and ensure it operates according to plan, monitoring, alerts, and security settings are additionally required.
Let's explore the tools and content needed for cluster management one by one.
Navigate to [Infrastructure] - [Clusters] to access functionalities related to cluster management.
Cluster Provider (Cloud Service Provider) type, Physical Location (Region)
Cluster Operation Status (Running/Stop)
Cluster Resource Allocation Type (Cluster/Service Map)
Number of Nodes Allocated to the Cluster
Allocated Resources of the Cluster
Number of GPU Nodes Allocated to the Cluster
Cluster Incident Alerts
[Function] Cluster Registration
[Function] Connect to Cluster Web Terminal
[Function] Download External Connection Certificate for the Cluster
Cocktail Cloud can be deployed on on-premises environments (physical servers) and on cloud services, and integrations with additional environments are continuously being developed.
Amazon Web Service
Microsoft Azure
Google Cloud Platform
Naver Cloud Platform
VMware
Alibaba Cloud
Tencent Cloud
Rovius Cloud
On-Premise (physical servers)
Datacenter
For the detailed process of Cluster Registration (Creation), refer to the link provided.
To check the resources and status of the registered cluster, navigate to the Cluster List screen.
Click on [Infrastructure] - [Clusters], and a list of accessible clusters will be displayed.
Information provided on the Cluster List screen includes
Cluster Name (User-defined)
Kubernetes Version
Status (Running, Stop)
Number of Nodes
Cluster Resource (CPU, Memory, Storage) Status
GPU Nodes (Number of GPU nodes configured in the cluster)
Alarms (Number of incidents occurred)
To modify the configured resources or registration information of a registered cluster, select [Infrastructure] - [Clusters], and move to the Registration Information tab.
Cloud Service Provider
Cloud Service Type
Region (Provider and server's regional/physical location)
Cluster Name (Name represented in Cocktail Cloud)
Kubernetes Version (Information about the Kubernetes version used in the cluster)
ID (Shared ID for the cluster, required for redirecting alarm messages)
Description (User description of the cluster)
Master Address (Kubernetes API address in the format "https://host:port")
Ingress Host (Host IP Address for Ingress method, Master IP or Load balancer IP)
Node Port Host Address (the IP placed in front of the port when exposing services via node ports; the Master IP or a load balancer IP)
Node Port Range (the range of ports used after the IP when exposing services via node ports; 30000~32767 is recommended)
Cluster CA Certification (Enter the value of the ca.crt file after moving to the /etc/kubernetes/pki path on the master server)
Client Certificate Data (Enter the value of the admin.crt file after moving to the /etc/kubernetes/pki path on the master server)
Client Key Data (Enter the value of the admin.key file after moving to the /etc/kubernetes/pki path on the master server)
Navigate to [Infrastructure] - [Clusters] and move to the Node tab. Select the specific node and move to the Monitoring tab.
Information provided includes resource usage status (CPU, Memory, Disk, Network), resource summary (Capacity, Availability, Request), and status (Event type, State, Recent occurrence time, Time elapsed since the last occurrence, Cause of occurrence, Message). Monitoring information for nodes can also be obtained from the Unified Monitoring menu, providing additional details.
To allocate storage to the cluster, navigate to [Infrastructure] - [Clusters] - [Storage Volume] and click the "+ Create" button to access the storage creation screen.
Choose the storage type for creation. Commonly, NFS and NFS Named types are available, and Azure services additionally provide Azure Disk and Azure File types.
Based on the selected type, detailed configurations for storage creation are possible. The configurable information (specifications) includes
Name: Storage name
Description: Description of storage usage
Default Storage: Option to use as the default storage
Storage Plugin: Plugin for storage
Policy: Policy for storage deletion (Retain or Delete)
Total Capacity: Total storage capacity in GB
Parameters: Storage parameter settings
Mount Options: Storage mount option settings
Label: Storage label settings
Annotation: Storage annotation settings
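These settings largely correspond to a Kubernetes StorageClass. A hedged sketch for an NFS-style plugin (the provisioner name, parameters, and mount options are illustrative, not platform defaults):

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: nfs-storage                # Name field (example)
provisioner: example.com/nfs       # Storage Plugin (illustrative value)
reclaimPolicy: Retain              # Policy: Retain or Delete
parameters:
  server: nfs.example.com          # Parameters (illustrative)
  path: /exports/data
mountOptions:
  - hard                           # Mount Options (illustrative)
  - nfsvers=4.1
```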
Applications deployed in Cocktail Cloud are deployed at the workload level, and their status can be checked by selecting the corresponding workload in [Workloads].
Details about the deployed application, including workload name, workload status, deployment type (Deployment, Daemon Set, Stateful Set, Job, Cron Job), number of instances, current resource usage (CPU, Memory), and service uptime (Age) after deployment, can be reviewed.
When alerts occur in the running workload (or instance), real-time status is provided through SMS (Slack, etc.), email, and the dashboard.
Navigate to [Infrastructure] - [Clusters] - [Alerts], where unresolved alerts are displayed in the alert list. Each alert includes the alert name (status summary), severity (Critical, Warning), and occurrence timestamp.
To view detailed information about an alert, select the alert name, and additional information will be provided through a popup.
In Cocktail Cloud, add-ons, including Prometheus, a cluster management component, provide convenience for cluster operations. The add-on manager functionality enables registration, deletion, rollback, and redeployment of components. Users can add/modify metric targets for collecting/storing add-on metrics based on their requirements.
Modifying the monitoring add-on
Customize metric targets for status and resources like CPU/MEM.
Set custom thresholds for metrics (min/max values).
Trigger events and alerts based on specified metric values.
Specify individual monitoring metrics based on add-on versions.
Deploy modified metrics
Store modified metric information (Rule, Config) in ETCD.
Provide add-on registration/deletion/rollback/redeployment based on modified user information.
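As an illustration of a custom metric threshold, a Prometheus alerting rule of the kind such an add-on manages might look like the following (the expression and threshold are examples, not Cocktail Cloud defaults):

```yaml
groups:
  - name: custom-resource-alerts      # example rule group
    rules:
      - alert: HighContainerMemory    # example alert name
        # Fire when a container's memory working set exceeds 90% of its limit.
        expr: |
          container_memory_working_set_bytes
            / container_spec_memory_limit_bytes > 0.9
        for: 5m                       # condition must hold for 5 minutes
        labels:
          severity: warning           # drives the notification level
```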
If storage is increased due to insufficient space or planned tasks in a configured cluster, the existing pods may not immediately reflect the increased storage information. To utilize the increased storage properly, already deployed pods need to be restarted.
Navigate to [Applications] - [Service Map] - [Workloads], select the workload list, and click the "+ Create" button to restart pods.
| Item (* is required) | Content |
|---|---|
| Ingress Name* | Enter the name of the Ingress to register |
| Ingress Controller* | Select the installed Ingress controller |
| SSL Redirect | Choose TRUE to automatically redirect HTTP to HTTPS when using an SSL certificate |
| Label | Enter labels to register on the Ingress |
| Annotation | Enter annotations to register on the Ingress |

| Item (* is required) | Content |
|---|---|
| Host* | The entered host is added to the Ingress rule |
| Path* | Enter the URL path to be matched by the rule |
| Path Type* | Prefix: matches by URL path prefix, split on `/` (case-sensitive)<br>Exact: matches the URL path exactly (case-sensitive)<br>ImplementationSpecific: matching depends on the IngressClass; it may be treated as a separate pathType or the same as Prefix or Exact |
| Target Service* | Choose the service to connect to the Ingress from the currently created exposed services |
| Target Service Port* | Select the port to be served by the service |

| Item (* is required) | Content |
|---|---|
| Secret* | Select a previously registered public-certificate Secret |
| Target Host* | Choose the host to which the TLS certificate will be applied |

| Item (* is required) | Content |
|---|---|
| Service Exposure Name* | Enter the name of the service exposure to create |
| Service Exposure Type | Displayed according to the type selected when creating the service exposure |
| Sticky Session | To use sticky sessions, check TRUE and enter the session timeout |
| Headless Service | Check to use a headless service |
| Label Selector* | Select the workload and label to connect to the service |
| Label | Enter labels to register for the service exposure |
| Annotation | Enter annotations to register for the service exposure |

| Item (* is required) | Content |
|---|---|
| Kiali Service Type | Set the workload type for the Kiali deployment (Default: Client IP) |
| Kiali Username | Kiali login account |
| Kiali Password | Kiali account password |
| Kiali TLS Cert Chain (Base64 Encoded) | Base64-encoded value of ca.crt from the Kiali certificate |
| Kiali TLS Key (Base64 Encoded) | Base64-encoded value of ca.key from the Kiali certificate |

| Item (* is required) | Content |
|---|---|
| Name* | Enter the name of the ConfigMap to create (letters, numbers, and the special characters `-` and `.` only) |
| Description | Enter a description for the ConfigMap |
| Label | Enter labels to write in the ConfigMap |
| Annotation | Enter annotations for the ConfigMap |

| Item (* is required) | Content |
|---|---|
| Name* | Enter the name of the Secret to create (letters, numbers, and the special characters `-` and `.` only) |
| Description | Enter a description for the Secret |
| Type | Generic: stores general user-defined key-value data<br>DockerRegistry: stores Docker registry authentication information<br>TLS: stores certificates for public-certificate registration |
| Label | Enter labels to write in the Secret |
| Annotation | Enter annotations for the Secret |

| Item (* is required) | Content |
|---|---|
| Name* | Enter the name of the Secret to create |
| Description | Enter a description for the Secret |
| Type* | Select DockerRegistry to store authentication information for pulling images from a Docker registry |
| Label | Specify key/value pairs to identify the information |
| Annotation | Used for additional explanation without any special functionality |
| Setting Type* | Direct Input: manually enter the registry authentication information<br>Select from Registry: choose from previously registered registries |
| Registry | Select from previously registered registries |

| Item (* is required) | Content |
|---|---|
| Name* | Enter the name of the volume request to create |
| Persistent Volume Type* | Choose between SINGLE and SHARED |
| Storage* | Select the pre-registered storage |
| Access Mode* | SINGLE: only ReadWriteOnce can be selected<br>SHARED: ReadWriteMany and ReadOnlyMany can be selected |
| Capacity (GB)* | Enter the volume size to create (positive integers only) |
| Label | Enter labels to register with the volume request |
| Annotation | Enter annotations to register with the volume request |

| Item (* is required) | Content |
|---|---|
| Name* | Token name (non-editable) |
| Description | Description of the token |
| Expiration Date | Indefinite or a specific date |
| Allow IP | List of allowed IP addresses; CIDR notation is allowed; all IPs are allowed if left blank |
| Block IP | List of blocked IPs; CIDR notation is allowed; an IP that also appears in the allow list is blocked |
| Request Limit | Leave blank or 0 (0 means unlimited and cannot be modified with this setting) |
| Range* | Click the checkboxes to set the permission scope |