Containers are deployed and executed from images. A container image is specified by name in the pod's container spec, in the format image_name:tag (e.g., nginx:latest). When the image is hosted on Docker Hub, the registry address is typically omitted.
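As a reference, here is a minimal sketch of how such an image reference appears in a pod's container spec, using the official Kubernetes Python client; the pod and container names are illustrative:

```python
# How an image name and tag appear in a pod's container spec.
# Pod/container names here are illustrative.
from kubernetes import client

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1PodSpec(
        containers=[
            client.V1Container(
                name="web",
                # No registry address given -> Docker Hub is assumed.
                image="nginx:latest",
            )
        ]
    ),
)
print(pod.spec.containers[0].image)  # nginx:latest
```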
Cocktail Cloud provides an independent image registry for each workspace. It also offers automated image building through the 'Build' feature.
An image registry stores container images and serves them when a pod needs an image to run. The storage and retrieval interface of image registries is standardized: the 'Push' API stores a newly created image in the registry, while the 'Pull' API retrieves the image when a container is executed.
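As an illustration, the push/pull round trip looks like this with the Docker SDK for Python; the registry address and repository path are placeholders, and authentication (docker login) is assumed to be done already:

```python
import docker

d = docker.from_env()

# 'Push': tag a locally built image for a private registry and upload it.
# Assumes credentials for the registry are already configured.
image = d.images.get("nginx:latest")
image.tag("registry.example.com/team-a/nginx", tag="1.0")
d.images.push("registry.example.com/team-a/nginx", tag="1.0")

# 'Pull': retrieve the image, as a node does when it runs a container.
d.images.pull("registry.example.com/team-a/nginx", tag="1.0")
```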
In Cocktail Cloud, image registries can be allocated for each workspace, serving as independent registries for teams. Additionally, image registries can be shared among teams.
When deploying pods from the service map using an allocated image registry, the image is configured by selecting a 'build' rather than by typing an image name. A build automates image creation, and the latest image can be deployed to pods based on the selected build's tag.
A build is a resource in Cocktail Cloud that automates the process of generating container images. Builds can have one or more tags, with each tag defining a different creation process. Tags can be seen as image versions. The process of generating images is called the build flow.
Builds store the images they generate in the image registry allocated to the workspace. In that sense a build and an image are two views of the same thing, but each image tag (version) has its own build flow. The hierarchy is: Image Registry -> Image (Build) -> Tag (Build Flow).
Users deploy builds (images) and tags (versions) in pods for workloads. The image generated by the build flow of the selected tag is then deployed and executed. The pipeline in the service map automates the entire process of updating images by executing the build flow of the image tag after code changes.
The build flow automates the process of generating images for a specific tag (version). Each step executed by the build flow for image creation is called a 'task.'
Cocktail Cloud offers various types of default tasks, and users can create custom tasks to configure the build flow. Default tasks include downloading code from code repositories (Git), executing user-defined scripts, and building images using Dockerfiles. Additionally, tasks for integrating with external systems' APIs and FTP-based file transfers are provided.
Users can also develop their own tasks and add them to the build flow to extend it. User-defined tasks must be containerized before they can be added.

Tasks in the build flow are executed on the 'build server.' Cocktail Cloud provides options to adjust the build server's capacity, which is useful for resource-intensive build tasks.
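To make the structure concrete, here is a hypothetical build-flow definition expressed as a Python dict. This is not Cocktail Cloud's actual configuration schema; every field name is invented for illustration, but the task sequence mirrors the default tasks described above:

```python
# Hypothetical build flow for one image tag. Field names are
# illustrative only, not Cocktail Cloud's actual schema.
build_flow = {
    "build": "shop-api",   # image (build) name
    "tag": "1.4.0",        # image version
    "tasks": [
        # 1. Download code from a Git repository.
        {"type": "git-checkout",
         "repo": "https://git.example.com/shop/api.git",
         "branch": "main"},
        # 2. Run a user-defined script (e.g., tests or packaging).
        {"type": "script", "run": "./gradlew test"},
        # 3. Build the image from a Dockerfile and push it to the
        #    workspace's registry.
        {"type": "dockerfile-build", "dockerfile": "Dockerfile"},
    ],
}
```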
Security is a crucial aspect of enterprise cloud environments. In a cloud-native setup it has several main components: cluster access authentication and authorization, audit logs, pod security policies, and image security.
Cluster access authentication and authorization refer to the permissions granted to authorized users to access the cluster and manage resources as needed. Users accessing the cluster have user accounts, and resources include applications and data. Administrators authorize user access and grant appropriate permissions for resource management, thereby managing cluster security.
In Cocktail Cloud, users can manage allocated clusters via GUI within workspaces, eliminating the need for direct cluster access for management. However, if using command-line tools or external CI/CD systems, a cluster user account is necessary. Administrators issue cluster accounts to users in such cases.
Cocktail Cloud provides integrated cluster account management, allowing users to access multiple clusters with a single user account and manage resources based on permissions. Users receive cluster accounts from administrators and can manage clusters within the validity period.
Audit logs record the commands (API calls) executed by users logged in with Cocktail user or cluster accounts, along with the resources affected. When an incident or security issue occurs, audit logs can be traced to analyze the root cause.
Cocktail Cloud offers the capability to collect and trace both platform (Cocktail Cloud features) and cluster (Kubernetes) audit logs.
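On the cluster side, Kubernetes audit logging is driven by an audit policy. Below is a minimal policy, shown as a Python dict mirroring the audit.k8s.io/v1 YAML; the specific rule choices are only an example:

```python
# Minimal Kubernetes audit policy (audit.k8s.io/v1), as a dict
# mirroring the YAML. Rule order matters: the first match wins.
audit_policy = {
    "apiVersion": "audit.k8s.io/v1",
    "kind": "Policy",
    "rules": [
        # Record who did what to pods, with request metadata.
        {"level": "Metadata",
         "resources": [{"group": "", "resources": ["pods"]}]},
        # Skip read-only requests elsewhere to reduce log volume.
        {"level": "None", "verbs": ["get", "list", "watch"]},
    ],
}
```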
Pod security policies control permissions, node access, OS-level security settings, and so on during container execution. Security settings are typically defined along with the pod itself, but enterprises need centralized control: if every team or organization chooses its own security settings, unforeseen vulnerabilities can result.

Pod security policies enforce security settings at the cluster or application level, allowing enterprises to carry over their existing security standards.
Cocktail Cloud provides features to configure and apply security policies.
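The settings such policies govern are the ones in a pod's securityContext. A sketch of a hardened container spec with the Kubernetes Python client; the image name is a placeholder:

```python
from kubernetes import client

container = client.V1Container(
    name="app",
    image="registry.example.com/team-a/app:1.0",  # placeholder image
    security_context=client.V1SecurityContext(
        run_as_non_root=True,              # refuse to run as root
        allow_privilege_escalation=False,  # block setuid-style escalation
        read_only_root_filesystem=True,    # immutable root filesystem
        capabilities=client.V1Capabilities(drop=["ALL"]),
    ),
)
```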
Container images often contain many open-source components. A typical image starts from a base image published on the internet, to which user-specific components are added; if that base image contains malicious code, it poses a security risk.
Cocktail Cloud's image registry can inspect images for malicious code, and additionally checks for outdated component versions and vulnerable code.
"Workspace" is an independent workspace provided for teams or organizations. Teams perform development, operation, and monitoring of one or more designated applications within a workspace. One or more members are registered in the workspace to collaborate.
Resources necessary for deploying and operating applications are allocated to workspaces. Resource allocation is performed by the platform administrator and targets clusters and image registries registered in the platform.
Cocktail Cloud allocates cluster resources to workspaces through service maps. A service map is an administrative unit that extends a Kubernetes namespace; more precisely, allocating a service map to a workspace allocates the underlying namespace.
Teams can be allocated service maps (namespaces), which are isolated, independent spaces within a cluster, often described as virtual clusters. A team allocated a namespace is responsible only for deploying and operating its applications, while the cluster (infrastructure) is managed by a separate team. This method suits teams focused on application development and operation.
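Under the hood this corresponds to standard Kubernetes operations: creating a namespace and constraining it with a resource quota. A minimal sketch with the Python client, where the namespace name and limits are illustrative:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Namespace acting as a team's isolated space ("virtual cluster").
v1.create_namespace(
    client.V1Namespace(metadata=client.V1ObjectMeta(name="team-a"))
)

# Cap the resources the team can consume inside its namespace.
v1.create_namespaced_resource_quota(
    namespace="team-a",
    body=client.V1ResourceQuota(
        metadata=client.V1ObjectMeta(name="team-a-quota"),
        spec=client.V1ResourceQuotaSpec(
            hard={"requests.cpu": "8",
                  "requests.memory": "16Gi",
                  "pods": "50"}
        ),
    ),
)
```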
Each workspace is independently allocated a registry for storing and managing application container images. Teams or applications manage their images and configure automated pipelines accordingly.
In some cases, teams or applications may share common images. In such cases, a shared image registry can be allocated to the workspace. In this scenario, one or more workspaces will use the same shared image registry.
When the accessing account's permissions are set to user level, a workspace is required for its work; an account registered with admin privileges can additionally make use of other workspaces.
The platform is the fundamental unit for using Cocktail Cloud. All functionalities are accessible through the platform. Users perform application development, operation, and management tasks after logging into the platform, depending on their permissions.
The platform consists of one or more workspaces, each an independent working environment provided to a team or organization. Companies can configure workspaces for their teams within the platform and allocate the necessary resources, providing application development and operation environments. The platform integrates and manages workspaces, applications, and infrastructure resources.
The platform registers and manages clusters (infrastructure resources) used by applications. It allocates and manages cluster resources on a per-workspace basis, either for the entire cluster or by namespace. Applications managed by teams are serviced through the allocated resources.
The platform comprehensively monitors and manages the overall status of applications developed and operated across all workspaces. It manages applications based on their configuration, status, and resource usage, performing tasks such as resource scaling and fault response as needed.
Clusters operate based on Kubernetes. The platform provides necessary functionalities for cluster operation, such as managing Kubernetes state and versioning.
In addition to resource and status management, the platform also provides integrated management functionalities for user management, security, etc.
The platform centrally monitors the status and resources of multiple clusters and applications. It collects metric data for each cluster infrastructure and application, providing real-time monitoring and analysis capabilities.
In addition to collecting resource and status data, it also collects events and provides notifications based on predefined rules. It detects anomalies in advance, takes appropriate actions, and performs fault analysis and resolution when issues arise.
The platform provides integrated dashboards with various charts for monitoring and analysis purposes.
The platform has a unique identifier (ID). Users log in to the platform using this ID. Additionally, users can set the platform name and logo image to represent a unique identity.
The platform holds Cocktail Cloud product license information. This license, along with purchaser information, is managed within the platform, with a designated platform administrator acting as the representative.
Cloud accounts required to operate clusters in public clouds are also managed as platform information; the platform stores their authentication details and uses them to manage cloud infrastructure.
Cloud-native applications leverage cluster and container technologies, where clusters manage infrastructure, and containers handle application deployment and execution. Consequently, the monitoring targets differ from traditional applications.
Clusters consist of nodes, which are computing machines with CPU, GPU, Memory, and Disk, along with an operating system (OS) and container runtime for executing containers. Hence, monitoring of physical resource usage and performance necessary for container execution is done by collecting data (referred to as 'metrics' in monitoring) at the node level.
Container management is handled by Kubernetes, which is composed of multiple components installed on the cluster's master node. If Kubernetes fails, container management becomes impossible, so the master node and its components must also be monitored. This involves tracking the master node's resource usage and the status of the installed components.
Nodes and containers within a cluster communicate with each other. Monitoring network usage targets both the physical network and the logical network controlled by Kubernetes.
While cluster monitoring focuses on infrastructure resources required for container execution, container monitoring encompasses resource usage, execution status, and lifecycle monitoring. It also includes monitoring aspects such as communication volume between containers and request processing times.
Container monitoring provides metrics through the Kubernetes API and the Service Mesh API (for configuring container-to-container communication).
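As an example of the Kubernetes side of these metrics, resource usage per node and per container can be read from the metrics API (metrics.k8s.io), assuming a metrics server add-on is installed in the cluster:

```python
from kubernetes import client, config

config.load_kube_config()
api = client.CustomObjectsApi()

# Node-level metrics: CPU/memory usage per node.
for node in api.list_cluster_custom_object(
        "metrics.k8s.io", "v1beta1", "nodes")["items"]:
    print(node["metadata"]["name"], node["usage"])

# Pod-level metrics: per-container CPU/memory usage.
for pod in api.list_cluster_custom_object(
        "metrics.k8s.io", "v1beta1", "pods")["items"]:
    for c in pod["containers"]:
        print(pod["metadata"]["name"], c["name"], c["usage"])
```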
Notifications occur when monitoring metric data meets certain conditions defined by notification rules. These rules can be both predefined and user-defined.
Events occur when Kubernetes resources change. For instance, events are triggered by pod creation, execution, update, or deletion. Cocktail Cloud collects and provides events as notifications.
Both notifications and events provide real-time information during operation, facilitating proactive measures against application and cluster state changes and failures.
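For reference, the raw Kubernetes events that such notifications are built from can be listed directly through the API:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Recent cluster events: pod scheduling, image pulls, restarts, etc.
for ev in v1.list_event_for_all_namespaces(limit=20).items:
    obj = ev.involved_object
    print(f"{ev.type:8} {obj.kind}/{obj.name}: {ev.reason} - {ev.message}")
```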
Kubernetes logs comprise three main types. Firstly, logs recorded by the Kubernetes master provide information necessary for master operation. Secondly, container logs are logs displayed on standard output (STDOUT/STDERR) during container execution. Lastly, application logs are logs recorded in separate files by containers in addition to standard output.
Cocktail Cloud collects all three types of logs, providing an environment for log retrieval and analysis.
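The second log type, container STDOUT/STDERR, is also what the Kubernetes API itself serves. A sketch of reading it with the Python client; the pod and namespace names are placeholders:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# Fetch the last 100 lines a container wrote to STDOUT/STDERR.
log = v1.read_namespaced_pod_log(
    name="web-7d4b9c",    # placeholder pod name
    namespace="team-a",   # placeholder namespace
    tail_lines=100,
)
print(log)
```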
A cluster is the infrastructure on which containers run. Containers are the deployment units and execution processes of applications; clusters provide the computing resources (CPU, Memory, Storage, Network) necessary for container execution.
A cluster consists of nodes (physical or virtual machines) connected via a network. It is an architecture designed for distributed processing. When containers are deployed to a cluster, they are executed on appropriate nodes. This process, called scheduling, is managed by Kubernetes. Kubernetes is responsible for container scheduling and management within the cluster.
Clusters scale resources by adding nodes. If more resources are needed, nodes are added accordingly, and Kubernetes deploys and manages containers on the expanded nodes.
Container networking and storage for data storage are also components of a cluster.
Kubernetes is a container orchestration engine that runs containers in clusters and manages their lifecycle. Originally developed by Google, it is now maintained as a CNCF (Cloud Native Computing Foundation) project.
Kubernetes is installed on the cluster and is responsible for managing and providing resources required by containers based on the cluster infrastructure (nodes, network, storage).
Nodes are the compute machines that make up a cluster. They can be physical or virtual machines, each equipped with CPU, Memory, and Disk, and connected via a network. Kubernetes manages nodes and schedules containers onto them.
Nodes are divided into master nodes and worker nodes. Master nodes host the control plane components of Kubernetes and manage the cluster by communicating with worker nodes.
Worker nodes are where application containers are deployed. The number of worker nodes increases based on the number and capacity of applications. The Kubernetes scheduler on the master node is responsible for deploying containers to worker nodes.
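A quick way to see this split is to list the nodes and their roles; on kubeadm-style clusters, control-plane (master) nodes carry a well-known role label:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    labels = node.metadata.labels or {}
    # Role label convention used by kubeadm-provisioned clusters.
    role = ("master" if "node-role.kubernetes.io/control-plane" in labels
            else "worker")
    print(node.metadata.name, role)
```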
Containers running on one or more nodes need to communicate with each other, which is managed by the container network.
Container networking is installed as a Kubernetes component. Kubernetes itself does not provide container networking but offers a standardized interface through which external providers supply plugins, known as the Container Network Interface (CNI). Examples of open-source CNI plugins include Flannel, Weave Net, and Calico.
Cocktail Cloud offers options to configure the cluster's CNI plugin.
External access to containers is handled by the ingress controller. It routes external traffic to containers based on hostnames and paths. Routing rules are configured for each application and applied to the ingress controller.
The ingress controller is a Kubernetes component. The community-maintained NGINX ingress controller is the most commonly used; various third-party ingress controllers are also available.
Cocktail Cloud offers options to configure the ingress controller.
Cluster storage provides persistent volumes for container data storage.
Since containers can be rescheduled to different nodes in case of node failure or resource shortage, storing container data on nodes can be problematic. Therefore, a separate volume called a persistent volume is needed to store and manage data safely.
Kubernetes creates and provides persistent volumes through storage classes. When configuring a cluster, an appropriate storage class (and its provisioner) must be installed for the chosen storage.
Cocktail Cloud provides storage classes as addons, allowing users to select and automatically manage suitable storage classes.
Besides networking and storage, Kubernetes has components to extend its functionality, referred to as addons.
These addons provide additional capabilities to Kubernetes clusters beyond container management. Examples include monitoring and service meshes.
Cocktail Cloud offers various Kubernetes extension components as addons. They are automatically managed from installation to upgrade, and users can choose and use the required addons.
The Service Map is a unit that configures applications and manages various resources. Kubernetes manages applications at the Namespace level. Namespaces are a way of logically dividing clusters to deploy and manage containers, serving as a kind of virtual cluster. The Service Map, provided by Cocktail, is a management space for applications based on namespaces, offering additional management features.
One of the main resources in the Service Map is Workloads, which deploy and manage containers. Other resources include persistent volumes, configuration information, and service exposure.
Pods are resources that deploy and execute containers, composed of one or more containers. They consist of a main container responsible for application logic processing and optional sidecar containers for auxiliary tasks. While most cases require only the main container, additional sidecar containers can be added for functions like log collection, backup, or proxy.
Containers within a pod share the same network (localhost) and volumes, making it easy to extend a pod's functionality by adding containers.
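A sketch of a pod with a main container and a log-collecting sidecar sharing a volume; the names and images are illustrative:

```python
from kubernetes import client

shared = client.V1VolumeMount(name="logs", mount_path="/var/log/app")

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1PodSpec(
        # An emptyDir volume shared by both containers.
        volumes=[client.V1Volume(
            name="logs", empty_dir=client.V1EmptyDirVolumeSource())],
        containers=[
            # Main container: application logic, writes log files.
            client.V1Container(name="app", image="example/app:1.0",
                               volume_mounts=[shared]),
            # Sidecar: ships the same log files elsewhere.
            client.V1Container(name="log-shipper",
                               image="example/log-shipper:1.0",
                               volume_mounts=[shared]),
        ],
    ),
)
```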
Pods can be created independently, but typically, they are managed through workloads responsible for deployment, updates, and lifecycle management.
Workloads are Service Map resources responsible for the deployment and lifecycle of pods. They manage the entire lifecycle of pods, including how they are deployed and executed, status monitoring, and restarting in case of issues.
In Cocktail, users interact with workload settings through a web UI console, minimizing errors caused by incorrect configurations.
Even when inputting through the web UI console, users can specify almost all detailed configuration items related to workloads defined in Kubernetes.
Workloads are categorized into several types, each with differences in how pods are deployed, executed, and managed.
Within a namespace, various types of workloads can be executed, and in some cases, there can be so many workloads that it becomes difficult to understand them all at a glance.
Organizing workloads into workload groups allows for a clear overview of the state of workloads by displaying them according to workload groups.
Deployment workloads replicate the same pod multiple times to provide stable service even if some pods have issues. They are mainly used for stateless application configurations like web servers, where data management is not required: because every replica runs the same container with no distinct identity, Deployments are unsuitable for applications that manage data.
Deployment workloads also support rolling updates, replacing pods sequentially to perform updates without affecting services.
They also support autoscaling, automatically increasing the number of pod replicas when CPU or memory usage exceeds the configured target.
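A condensed sketch of these three aspects (replicas, rolling updates, and autoscaling) with the Kubernetes Python client; the names, images, and thresholds are illustrative:

```python
from kubernetes import client

labels = {"app": "web"}

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,  # identical pod copies
        selector=client.V1LabelSelector(match_labels=labels),
        # Rolling update: replace pods one at a time.
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_unavailable=1, max_surge=1),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="web", image="nginx:1.25")]),
        ),
    ),
)

# Autoscaler: grow from 3 up to 10 replicas at ~80% average CPU.
hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web"),
        min_replicas=3, max_replicas=10,
        target_cpu_utilization_percentage=80,
    ),
)
```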
StatefulSet workloads replicate pods that each play a distinct role. They are suitable for data storage and management applications such as databases, key-value stores, and big data clusters. Each pod is assigned a unique, stable name, allowing work to be distributed through direct pod-to-pod communication, and each pod uses its own persistent volume to store and manage data.
DaemonSet workloads are Service Map resources that deploy and manage daemon process pods running continuously in the background. Examples of background tasks include log collection and monitoring data collection on each node.
Job workloads deploy pods to process tasks in a one-time execution. They are mainly used for batch job processing such as data transformation and machine learning.
CronJob workloads are similar to Job workloads, but run jobs on a schedule or periodically. Scheduling uses the same syntax as Linux's crontab.
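A minimal CronJob sketch, assuming a cluster and client with batch/v1 CronJob support (Kubernetes 1.21+); the names and command are illustrative:

```python
from kubernetes import client

cronjob = client.V1CronJob(
    metadata=client.V1ObjectMeta(name="nightly-report"),
    spec=client.V1CronJobSpec(
        schedule="0 2 * * *",  # crontab syntax: every day at 02:00
        job_template=client.V1JobTemplateSpec(
            spec=client.V1JobSpec(
                template=client.V1PodTemplateSpec(
                    spec=client.V1PodSpec(
                        restart_policy="OnFailure",
                        containers=[client.V1Container(
                            name="report",
                            image="example/report:1.0",
                            command=["python", "make_report.py"])],
                    )
                )
            )
        ),
    ),
)
```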
To serve applications externally, pods need to be exposed to external traffic. The Service Map exposes pods to the external traffic through service exposure resources.
Service exposure resources specify pods to expose based on labels. Therefore, even replicated pods with the same label can be specified in one service exposure. In this case, the service exposure performs load balancing to send traffic to one of the specified pods.
Service exposure resources are assigned a fixed IP address, which is a private IP address used within the cluster. This is because pod IP addresses change on restart, which can cause issues if pods are accessed directly. Therefore, the fixed address of the service exposure is used to connect to the specified pods.
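A sketch of a service exposure that selects replicated pods by label and load-balances across them; the `type` field chooses among the exposure methods described next:

```python
from kubernetes import client

service = client.V1Service(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1ServiceSpec(
        # Any pod carrying this label is a backend; traffic is
        # load-balanced across the matching replicas.
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        # Exposure method: "ClusterIP" (default), "NodePort",
        # or "LoadBalancer".
        type="ClusterIP",
    ),
)
```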
Service exposure is categorized based on the exposure method.
Exposing services with a cluster IP allows access only within the cluster. It is used for communication between pods via fixed IP addresses within the service map.
Node port exposes services using the node's IP address. External access to pods is possible using the exposed node port (Node IP: Node Port). Node ports are assigned to all nodes in the cluster, allowing access to pods from any node. Typically, all nodes in the cluster are connected to an L4 switch to consolidate access addresses.
Node ports are allocated from a range, 30000 to 32767 by default, set during cluster installation; the range is user-configurable. A port is assigned automatically, or a specific port can be requested when the service is exposed.
When the cluster runs in a cloud environment, a load balancer can be created automatically to expose services. Pods are exposed via node ports, and the load balancer is wired to the nodes and ports on which they run; external access then uses the load balancer's address and port. Load-balancer service exposure is available only on supported cloud platforms such as AWS, Azure, and GCP, and is configured during cluster installation.
Apart from service exposure, the Service Map also has Ingress resources for external pod access. Ingress exposes pods to the outside world via hostnames/paths (e.g., abc.com/login, cdf.org/). It functions similarly to an L7 switch.
To use Ingress, an Ingress controller must be installed in the cluster. The Ingress controller receives external traffic and forwards it to pods based on routing information defined in the Ingress (hostname/path -> cluster IP service exposure). Therefore, Ingress exposes the controller to external services, and pods expose services via cluster IP for routing by Ingress rules.
In Kubernetes, the Ingress controller itself runs as pods, so when it is installed in the cluster it must in turn be exposed via a node port or load balancer.
Note that Ingress routes traffic by hostname/path, so the hostnames must be registered with an internal or external DNS server and resolve to the Ingress controller. Cocktail Cloud provides default configurations for Ingress usage, eliminating the need for additional installation and setup.
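A sketch of an Ingress rule in the Python client, routing a hostname/path to a ClusterIP service; the host, path, and service name are placeholders:

```python
from kubernetes import client

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1IngressSpec(rules=[
        client.V1IngressRule(
            # The host must resolve via DNS to the ingress controller.
            host="abc.com",
            http=client.V1HTTPIngressRuleValue(
                paths=[client.V1HTTPIngressPath(
                    path="/login",
                    path_type="Prefix",
                    # Backend: a ClusterIP service in front of the pods.
                    backend=client.V1IngressBackend(
                        service=client.V1IngressServiceBackend(
                            name="login-svc",
                            port=client.V1ServiceBackendPort(number=80))),
                )]
            ),
        )
    ]),
)
```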
In cases of high external traffic to pods, dedicated Ingress nodes are sometimes configured in the cluster. These nodes only have Ingress controllers deployed and are replicated for high availability. They provide scalability by adding Ingress nodes as traffic increases.
When pods need to store and manage data, persistent volumes are necessary. Pods can restart at any time or be reassigned to healthy nodes in case of node failures, making it impossible to use node disk for data storage.
Persistent volumes ensure data integrity even when pods are restarted, reassigned, or deleted because they are managed independently of pod lifecycles. They use separate storage configured with the cluster.
Service Map's persistent volume resources select cluster storage for creation. Created persistent volumes are mounted to pods and used by containers. Persistent volumes are categorized into shared volumes, which can be mounted by multiple pods, and single volumes, which can be mounted by only one pod.
Persistent volumes require storage configured in the cluster. NFS storage is commonly used; any storage that supports the NFS protocol will work.
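A sketch of the two steps, claiming a volume from a storage class and mounting it into a pod; the storage class name, sizes, and images are placeholders:

```python
from kubernetes import client

# 1. Claim a 10 GiB volume from a configured storage class.
#    "ReadWriteOnce" = single volume; "ReadWriteMany" = shared
#    (e.g., NFS-backed storage).
pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="db-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="standard",  # placeholder storage class
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}),
    ),
)

# 2. Mount the claimed volume into a pod's container.
pod_spec = client.V1PodSpec(
    volumes=[client.V1Volume(
        name="data",
        persistent_volume_claim=client.V1PersistentVolumeClaimVolumeSource(
            claim_name="db-data"))],
    containers=[client.V1Container(
        name="db", image="postgres:16",
        volume_mounts=[client.V1VolumeMount(
            name="data", mount_path="/var/lib/postgresql/data")])],
)
```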
When deploying a web server as a container, you typically rely on configuration files for server execution. When pods are executed with separate configurations, those settings are managed as configuration resources. It is possible to bake configuration files into the pod's container image, but then the image would have to be rebuilt whenever the configuration changes.
Configuration information is created and managed within the service map, and can be mounted to pods' containers as volumes or files. Depending on container implementation, it can also be passed as environment variables. An advantage is that configuration changes can be made and reflected even while pods are running.
Configuration resources are categorized into ConfigMaps and Secrets by management method. Both hold configuration information, but Secrets are intended for sensitive data such as database connection details and can additionally be encrypted at rest.
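A sketch of both kinds, with their values injected into a container as environment variables; the names and values are placeholders:

```python
from kubernetes import client

configmap = client.V1ConfigMap(
    metadata=client.V1ObjectMeta(name="app-config"),
    data={"LOG_LEVEL": "info"},
)

secret = client.V1Secret(
    metadata=client.V1ObjectMeta(name="db-secret"),
    string_data={"DB_PASSWORD": "change-me"},  # placeholder value
)

container = client.V1Container(
    name="app",
    image="example/app:1.0",
    env=[
        # Non-sensitive value from the ConfigMap.
        client.V1EnvVar(name="LOG_LEVEL", value_from=client.V1EnvVarSource(
            config_map_key_ref=client.V1ConfigMapKeySelector(
                name="app-config", key="LOG_LEVEL"))),
        # Sensitive value from the Secret.
        client.V1EnvVar(name="DB_PASSWORD", value_from=client.V1EnvVarSource(
            secret_key_ref=client.V1SecretKeySelector(
                name="db-secret", key="DB_PASSWORD"))),
    ],
)
```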
The pipeline resource in the service map updates container images for workloads. Upon workload deployment completion, a pipeline is automatically created to update container images in pods.
Workload-deployed container images in the service map fall into two types: those generated from Cocktail Cloud builds and those registered in external registries.
Images generated from Cocktail Cloud builds undergo the entire process from image creation to workload deployment automatically whenever application code changes (refer to 'Builds/Images' section for Cocktail Cloud builds).
External registry images are updated via replacement, where the pipeline performs automated deployment only.
The catalog in the service map bundles one or more workloads associated with the service map for deployment and updates. It's primarily used for distributing and updating open-source software.
Cocktail Cloud provides many commonly used open-source packages in its catalog. Users can search for desired packages in the catalog and automatically deploy them to the service map. (Refer to the 'Catalog' section for Cocktail Cloud's catalog).
Packages deployed to the service map can perform state monitoring and automatic updates.
Typically, applications consist of one or more workloads (containers), especially when deployed on Kubernetes, involving various related resources such as service exposure and volumes. This complexity makes application deployment and upgrades challenging.
Catalogs address these issues by bundling an application's resources into a single unit called a package and deploying that package with user-supplied settings when needed. Upgrades are likewise automated based on versioning. Several open-source tools support package creation, deployment, and management, with Helm, a CNCF project originating in the Kubernetes community, being the most widely used.
Cocktail Cloud's catalog offers the ability to search for such packages and automatically deploy them to service maps. These packages are in the form of Helm charts, which are supported by a wide range of open-source projects. Open-source packages are registered and managed in package repositories. There are numerous public package repositories where various open-source packages are available, and the catalog can search all these repositories for packages.
Packages deployed through the catalog are managed in the service map's package menu, which shows the status of deployed packages and enables upgrades to newer versions.