Introduction to Control Center
Control Center is an open-source application service orchestrator based on Docker Community Edition (Docker CE, or just Docker).
Control Center can manage any Docker application, from a simple web application to a multi-tiered stateful application stack. Control Center is based on a service-oriented architecture, which enables applications to run as a set of distributed services spanning hosts, datacenters, and geographic regions.
Control Center relies on declarations of application requirements to integrate Docker containers. A service definition template contains the specifications of application services in JSON format. The definition of each service includes the IDs of the Docker images needed to run the service.
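To make this concrete, a minimal service definition fragment might look like the sketch below. The field names and values here are illustrative only (hypothetical service name, image ID, and command); consult an actual service definition template for the authoritative schema.

```json
{
  "Name": "example-web",
  "Description": "Hypothetical web tier service",
  "ImageID": "registry.example.com/example/web:1.0",
  "Command": "/usr/bin/example-web --port 8080"
}
```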
Control Center includes the following key features:
- Intuitive HTML5 interface for deploying and managing applications
- Integrated backup and restore, and incremental snapshots and rollbacks
- Centralized logging through Logstash and Elasticsearch
- Integration with database services and other persistent services
- Encrypted communications among all services and containers
- Delegate host authentication to prevent unauthorized system access
- Storage monitoring and emergency shutdown of services to minimize the risk of data corruption
- Rolling restart of services to reduce downtime of multi-instance services
- Audit logging, including application audit logging
Docker fundamentals
This section summarizes the architecture description provided by Docker as customized for Control Center. For additional information, refer to the Docker site.
Docker provides convenient tools that make use of the control groups feature of the Linux kernel to develop, distribute, and run applications. Docker internals include images, registries, and containers.
Docker images
Docker images are read-only templates that are used to create Docker containers. Images are easy to build, and an image update adds new change layers rather than replacing the whole image.
Docker registries
Docker registries hold images. Control Center uses a private Docker registry for its own images and Zenoss application images.
Docker containers
Docker containers have everything needed to run a service instance, and are created from images. Control Center launches each service instance in its own Docker container.
Docker storage
Docker and Control Center data are stored in customized LVM thin pools that are created from one or more block devices or partitions, or from one or more LVM volume groups.
Control Center internal services
Elasticsearch
A distributed, real-time search and analytics engine. Control Center uses it to index log files and store service definitions.
Kibana
A browser-based user interface that enables the display and search of Elasticsearch databases, including the log files that Control Center monitors.
Logstash
A log file collector and aggregator that forwards parsed log file entries to Elasticsearch.
OpenTSDB
A time series database that Control Center uses to store its service performance metrics.
ZooKeeper (Apache ZooKeeper)
A centralized service that Control Center uses for configuration maintenance, naming, distributed synchronization, and providing group services.
ZooKeeper and Control Center
Control Center relies on Apache ZooKeeper to distribute and manage application services. ZooKeeper maintains the definitions of each service and the list of services assigned to each host. The scheduler, which runs on the master host, determines assignments and sends them to the ZooKeeper node that is serving as the ensemble leader. The leader replicates the assignments to the other ensemble nodes, so that the other nodes can assume the role of leader if the leader node fails.
All Control Center hosts retrieve assignments and service definitions from the ZooKeeper ensemble leader and then start services in Docker containers as required. So, the Control Center configuration files of all Control Center hosts must include a definition for the SERVICED_ZK variable, which specifies the ZooKeeper endpoints of the ensemble nodes. Additional variables are required on ensemble nodes.
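As a sketch, the relevant settings in the serviced configuration file (typically /etc/default/serviced) might look like the following for a three-node ensemble. The host names are examples, and the ensemble-node variable names reflect common serviced releases; verify them against the documentation for your version.

```shell
# Example /etc/default/serviced fragment (illustrative values).
# Required on ALL Control Center hosts: the ZooKeeper endpoints of
# the ensemble nodes, as a comma-separated list.
SERVICED_ZK=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181

# Additional variables required only on ensemble nodes:
SERVICED_ISVCS_ZOOKEEPER_ID=1
SERVICED_ISVCS_ZOOKEEPER_QUORUM=1@zk1.example.com:2888:3888,\
2@zk2.example.com:2888:3888,3@zk3.example.com:2888:3888
SERVICED_ISVCS_START=zookeeper
```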
A ZooKeeper ensemble requires a minimum of three nodes, which is sufficient for most environments. An odd number of nodes is recommended, and an even number is strongly discouraged. A five-node ensemble improves failover protection during maintenance windows, but larger ensembles yield no additional benefit.
The Control Center master host is always an ensemble node. All ensemble nodes should be on the same subnet.
Control Center application data storage
Control Center uses a dedicated LVM thin pool on the master host to store application data and snapshots of application data.
- The distributed file system (DFS) of each tenant application that serviced manages is stored in a separate virtual device. The initial size of each tenant device is copied from the base device, which is created during the initial startup of serviced.
- Snapshots of tenant data, used as temporary restore points, are stored in their own virtual devices, outside the tenant virtual devices. The size of a snapshot depends on the size of the tenant device and grows over time.
The Control Center master host requires high-performance, persistent storage. Storage can be local or remote.
- For local storage, solid-state disk (SSD) devices are recommended.
- For remote storage, storage-area network (SAN) systems have been tested. High-performance SAN systems are recommended, as is assigning separate logical unit numbers (LUNs) for each mounted path.
The overall response times of master host storage affect the performance and stability of Control Center internal services and the applications it manages. For example, ZooKeeper (a key internal service) is sensitive to storage latency greater than 1000 milliseconds.
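One rough way to gauge synchronous write latency on candidate master-host storage is the generic dd test below. This is a common Linux benchmarking technique, not a Control Center tool, and the file path is only an example.

```shell
# Write 1000 x 512-byte blocks, flushing each write to disk (dsync),
# which approximates ZooKeeper's transaction-log sync pattern.
# dd reports elapsed time and throughput on completion.
dd if=/dev/zero of=/tmp/zk-latency-probe bs=512 count=1000 oflag=dsync
rm -f /tmp/zk-latency-probe
```

Run the test against the device or mount point intended for the application data thin pool, not against an unrelated filesystem.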
Warning
The physical devices associated with the application data thin pool must be persistent. If removable or re-connectable storage such as a SAN based on iSCSI is used, then the Device-Mapper Multipath feature of RHEL/CentOS must be configured and enabled.
Control Center includes the serviced-storage utility for creating and managing its thin pools. The serviced-storage utility can:
- use physical devices or partitions, or LVM volume groups, to create a new LVM thin pool for application data
- add space to a tenant device at any time
- identify and clean up orphaned snapshots
- create an LVM thin pool for Docker data
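As an illustrative sketch of these capabilities, typical invocations resemble the following; command names and options can vary by release, so check serviced-storage --help before use. The device paths, tenant ID placeholder, and sizes are examples only.

```shell
# Create the application data thin pool from a block device
# (run on the master host; device path is an example).
serviced-storage create-thin-pool serviced /dev/sdb

# Create a thin pool for Docker data.
serviced-storage create-thin-pool docker /dev/sdc

# Add space to a tenant device (tenant ID and size are examples).
serviced-storage resize -d /opt/serviced/var/volumes <tenant-id> +50G

# Identify orphaned snapshots, and clean them up with -c.
serviced-storage check -c
```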