
Single-host deployments

Single-host deployments are recommended for testing and development use only.

Preparing the master host operating system

Use this procedure to prepare a RHEL/CentOS host as a Control Center master host. Before performing this procedure, download and stage required files.

  1. Log in to the candidate master host as root or as a user with superuser privileges.

  2. Skip this step if you are installing a single-host deployment. Otherwise, ensure the host has a persistent numeric ID. Each Control Center host must have a unique host ID, and the ID must be persistent (not change when the host reboots).

    test -f /etc/hostid || genhostid ; hostid
    

    Record the ID for comparison with other Control Center hosts.

  3. Disable the firewall, if necessary. This step is required for installation but not for deployment. For more information, see Planning a Resource Manager deployment.

    1. Determine whether the firewalld service is enabled.

      systemctl status firewalld.service
      
      • If the result includes Active: inactive (dead), the service is disabled. Proceed to the next step.
      • If the result includes Active: active (running), the service is enabled. Perform the following substep.
    2. Disable the firewalld service.

      systemctl stop firewalld && systemctl disable firewalld
      

      On success, the preceding commands display messages similar to the following example:

      rm '/etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service'
      rm '/etc/systemd/system/basic.target.wants/firewalld.service'
      
  4. Optional: Enable persistent storage for log files, if desired. By default, RHEL/CentOS systems store log data only in memory or in a ring buffer in the /run/log/journal directory. After you perform this step, log data persists across reboots and can be retained indefinitely, if you implement log file rotation practices. For more information, refer to your operating system documentation. Note: The following commands are safe when performed during an installation, before Docker or Control Center is installed or running. To enable persistent log files after installation, stop Control Center, stop Docker, and then enter the following commands.

    mkdir -p /var/log/journal && systemctl restart systemd-journald
    
  5. Enable and start the dnsmasq service. The service facilitates networking among Docker containers.

    systemctl enable dnsmasq && systemctl start dnsmasq
    

    Most deployments do not need specific configuration for dnsmasq. However, if name resolution in your environment relies solely on entries in /etc/hosts, configure dnsmasq so that containers can use the file:

    1. Open /etc/dnsmasq.conf with a text editor.

    2. Add the following lines to the file:

      domain-needed
      bogus-priv
      local=/local/
      domain=local
      interface=docker0
      
    3. Save the file, and then close the text editor.

    4. Restart the dnsmasq service.

      systemctl restart dnsmasq
      
  6. Enable a higher limit for open files in containers.

    1. Append parameters to the sysctl configuration file:

      cat <<EOF >> /etc/sysctl.conf
      # added for Control Center containers on [$(date)]
      fs.inotify.max_user_instances=10000
      fs.inotify.max_user_watches=640000
      EOF
      
    2. Apply the configuration file changes:

      sysctl -p
      
  7. Start chrony, if necessary. The chrony utility provides time synchronization, which multiple Control Center internal services require. You may use ntp instead, if you prefer. If the chrony service is not running, enable and start it:

    test "$(systemctl is-active chronyd)" = "active" || { systemctl enable chronyd && systemctl start chronyd ; }
    
  8. Install the iptable_nat kernel module, if necessary.

    The kernel module iptable_nat is required by the HBase component of Resource Manager. However, it is not loaded by default on all versions of RHEL 8. Follow these steps to determine whether the module is loaded and, if necessary, load it and make it persistent.

    1. Check whether the module is loaded:

      lsmod | grep iptable_nat
      

      If the previous command returns nothing, the module is not loaded.

    2. Load the module:

      sudo modprobe iptable_nat
      
    3. Allow the module to persist across reboots:

      echo iptable_nat | sudo tee /etc/modules-load.d/iptable_nat.conf
      
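
As an optional spot check that is not part of the documented procedure, the following commands review the settings configured above. Expected results: a nonempty host ID, the two fs.inotify values set earlier, active dnsmasq and chronyd services, and one line of output for iptable_nat.

    hostid                                              # persistent host ID
    sysctl fs.inotify.max_user_instances fs.inotify.max_user_watches
    systemctl is-active dnsmasq chronyd                 # expect: active, active
    lsmod | grep iptable_nat                            # expect one matching line
    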

Installing Docker and Control Center

  1. Log in to the host as root or as a user with superuser privileges.

  2. Install Docker CE 20.10.17 from the local repository mirror.

    1. Install Docker CE. All hosts except RHEL 8.4+ hosts:

      yum install --enablerepo=zenoss-mirror docker-ce-20.10.17-3.el7 docker-ce-cli
      

      RHEL 8.4+ hosts only:

      yum install --enablerepo=zenoss-mirror docker-ce-20.10.17-3.el7 docker-ce-cli containerd.io-1.6.7-3.1.el7
      

      If yum returns an error due to dependency issues, see Resolving package dependency conflicts for potential resolutions.

    2. Enable automatic startup.

      systemctl enable docker
      
  3. Install Control Center from the local repository mirror.

    1. Install Control Center.

      yum install --enablerepo=zenoss-mirror /opt/zenoss-repo-mirror/serviced-*.x86_64.rpm
      

      If yum returns an error due to dependency issues, see Resolving package dependency conflicts for potential resolutions.

    2. Enable automatic startup.

      systemctl enable serviced
      
  4. Make a backup copy of the Control Center configuration file.

    1. Make a copy of /etc/default/serviced.

      cp /etc/default/serviced /etc/default/serviced-VERSION-orig
      
    2. Set the backup file permissions to read-only.

      chmod 0440 /etc/default/serviced-VERSION-orig
      
  5. On delegate hosts only, remove unused maintenance scripts. For more information about maintenance scripts, see Control Center maintenance scripts. On all delegate hosts (never the master host), enter the following command:

    rm -f /etc/cron.hourly/serviced  /etc/cron.weekly/serviced-fstrim
    

Configuring Docker on a master host

Use this procedure to configure Docker.

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Create a symbolic link for the Docker temporary directory. Docker uses its temporary directory to spool images. The default directory is /var/lib/docker/tmp. The following command specifies the same directory that Control Center uses, /tmp. You can specify any directory that has a minimum of 10GB of unused space.

    1. Create the docker directory in /var/lib.

      mkdir /var/lib/docker
      
    2. Create the link to /tmp.

      ln -s /tmp /var/lib/docker/tmp
      
  3. Create a systemd drop-in file for Docker.

    1. Create the override directory.

      mkdir -p /etc/systemd/system/docker.service.d
      
    2. Create the unit drop-in file.

      cat <<EOF > /etc/systemd/system/docker.service.d/docker.conf
      [Service]
      TimeoutSec=300
      EOF
      
    3. Reload the systemd manager configuration.

      systemctl daemon-reload
      
  4. Create an LVM thin pool for Docker data. For more information about the serviced-storage command, see serviced-storage. To use an entire block device or partition for the thin pool, replace Device-Path with the device path (a worked example with a hypothetical device appears at the end of this procedure):

    serviced-storage create-thin-pool docker Device-Path
    

    On success, the result is the device mapper name of the thin pool, which always starts with /dev/mapper.

  5. Configure and start the Docker service.

    1. Create a variable for the name of the Docker thin pool.

      Replace Thin-Pool-Device with the name of the thin pool device created in the previous step:

      myPool="Thin-Pool-Device"
      
    2. Create the Docker daemon configuration file. The exec-opts field is a workaround for a Docker issue on RHEL/CentOS 7.x systems.

      mkdir /etc/docker && cat <<EOF > /etc/docker/daemon.json
      {
        "bip": "172.17.0.1/16",
        "dns": [
          "172.17.0.1"
        ],
        "exec-opts": [
          "native.cgroupdriver=cgroupfs"
        ],
        "log-level": "error",
        "storage-driver": "devicemapper",
        "storage-opts": [
          "dm.mountopt=discard",
          "dm.thinpooldev=$myPool"
        ]
      }
      EOF
      
    3. Review the file to ensure it is correct.

      cat /etc/docker/daemon.json
      
    4. Start or restart Docker.

      systemctl restart docker
      

      The startup may take up to a minute, and may fail. If startup fails, repeat the restart command.

  6. Configure name resolution in containers. Each time it starts, Docker selects an IPv4 subnet for its virtual Ethernet bridge. The selection can change; this step ensures consistency.

    1. Identify the IPv4 address and netmask that Docker has selected for its virtual Ethernet bridge.

      ip addr show docker0 | awk '/^ *inet/ { print $2 }'
      

      If the result is not 172.17.0.1/16, open /etc/docker/daemon.json, update the dns and bip fields, and then restart the Docker service.

      systemctl restart docker
      
    2. Confirm that there are no stale connections still using the original subnet. If any are found, Docker may require a full stop and start:

      conntrack -F && conntrack -L
      
    3. Verify that there is an iptables masquerade rule for the configured docker0 subnet:

      iptables -t nat -L -n
      
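
The following sketch ties steps 4 and 5 together using a hypothetical device, /dev/sdb; the thin pool name that serviced-storage reports on your system may differ, so always use the actual output.

    # Hypothetical example only; substitute your own device path.
    serviced-storage create-thin-pool docker /dev/sdb
    # Suppose the command prints /dev/mapper/docker-docker--pool; use that value:
    myPool="/dev/mapper/docker-docker--pool"
    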

Loading image files

Use this procedure to load images into the local Docker registry on a host.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Change directory to /root.

    cd /root
    
  3. Load the images.

    for image in install-zenoss-*.run
    do
      /bin/echo -en "\nLoading $image..."
      yes | ./$image
    done
    
  4. List the images in the registry.

    docker images
    

    The result should show one image for each archive file.

  5. Optional: Delete the archive files, if desired.

    rm -i ./install-zenoss-*.run
    

Creating the application data thin pool

Use this procedure to create a thin pool for application data storage.

This procedure does not include a specific value for the size of the thin pool. For more information about sizing this resource, see Master host storage areas. Or, use the suggested minimum value, 200GB. You can add storage to an LVM thin pool at any time.

Perform these steps:

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Create an LVM thin pool for application data. For more information, see serviced-storage. To use an entire block device or partition for the thin pool, replace Device-Path with the device path:

    serviced-storage create-thin-pool serviced Device-Path
    

    On success, the result is the device mapper name of the thin pool, which always starts with /dev/mapper. Record the name for use in the next step.

  3. Edit storage variables in the Control Center configuration file.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_FS_TYPE variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Locate the line for the SERVICED_DM_THINPOOLDEV variable, and then make a copy of the line, immediately below the original.
    5. Remove the number sign character (#) from the beginning of the line.
    6. Set the value to the device mapper name of the thin pool for application data.
    7. Save the file, and then close the editor.
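
After these edits, the uncommented storage lines in /etc/default/serviced might look like the following example, assuming the default devicemapper file system type; the thin pool name is whatever serviced-storage reported in the previous step.

    # Example values; the device mapper name below is illustrative.
    SERVICED_FS_TYPE=devicemapper
    SERVICED_DM_THINPOOLDEV=/dev/mapper/serviced-serviced--pool
    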

User access control

Control Center provides a browser interface and a command-line interface.

To gain access to the Control Center browser interface, users must have login accounts on the Control Center master host. In addition, users must be members of the Control Center browser interface access group, which by default is the system group, wheel. To enhance security, you may change the browser interface access group from wheel to any other group.

To use the Control Center command-line interface (CLI) on a Control Center host, a user must have a login account on the host, and the account must be a member of the serviced group. The serviced group is created when the Control Center (serviced) RPM package is installed. In addition, some utilities (for example, the ZenPack installer) require that users be members of the docker group.

You can use two different groups to control access to the browser interface and the CLI. You can enable access to both interfaces for the same users by choosing the serviced group as the browser interface access group.

Pluggable Authentication Modules (PAM) has been tested and is recommended for enabling access to both the browser interface and the command-line interface. However, the PAM configuration must include the sudo service. Control Center relies on the host's sudo configuration, and if no configuration is present, PAM defaults to the configuration for other, which is typically too restrictive for Control Center users. For more information about configuring PAM, refer to your operating system documentation.
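
As a quick, optional check that is not part of the documented procedure, you can confirm that the host defines a PAM service for sudo, so that serviced does not fall back to the more restrictive other configuration:

    # Optional check; the file path is the standard RHEL/CentOS location.
    test -f /etc/pam.d/sudo && echo "sudo PAM service is defined" || echo "sudo PAM service is missing"
    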

Adding users to the default browser interface access group

Use this procedure to add users to the default browser interface access group of Control Center, wheel.

Perform this procedure or the next procedure, but not both.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Add a user to the wheel group.

    Replace User with the name of a login account on the master host.

    usermod -aG wheel User
    

    Repeat the preceding command for each user to add.

Configuring a regular group as the Control Center browser interface access group

Use this procedure to change the default browser interface access group of Control Center from wheel to a non-system group.

Perform this procedure or the previous procedure, but not both.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Create a variable for the group to designate as the administrative group. In this example, the group is ccuser. You may choose a different group, or choose the serviced group. (Choosing the serviced group allows all browser interface users to use the CLI.)

    myGROUP=ccuser
    
  3. Create a new group, if necessary.

    groupadd $myGROUP
    
  4. Add one or more existing users to the group.

    Replace User with the name of a login account on the host:

    usermod -aG $myGROUP User
    

    Repeat the preceding command for each user to add.

  5. Specify the new administrative group in the serviced configuration file.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_ADMIN_GROUP variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Change the value from wheel to the name of the group you chose earlier.
    5. Save the file, and then close the editor.
  6. Optional: Prevent the root user from gaining access to the Control Center browser interface, if desired.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_ALLOW_ROOT_LOGIN variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Change the value from 1 to 0.
    5. Save the file, and then close the editor.
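
After these edits, the relevant lines in /etc/default/serviced might look like the following example, assuming the ccuser group and the optional restriction on root logins:

    # Example values based on the choices described above.
    SERVICED_ADMIN_GROUP=ccuser
    SERVICED_ALLOW_ROOT_LOGIN=0
    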

Enabling use of the command-line interface

Use this procedure to enable a user to perform administrative tasks with the Control Center command-line interface.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Add a user to the serviced and docker groups.

    Replace User with the name of a login account on the host.

    usermod -aG serviced,docker User
    

    Repeat the preceding command for each user to add.
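
Optionally, confirm the new memberships; the group list in the output should include both serviced and docker. Replace User as above:

    id User    # the groups listed should include serviced and docker
    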

Configuring the base size device for tenant data storage

Use this procedure to configure the base size of virtual storage devices for tenants in the application data thin pool. The base size is used each time a tenant device is created. In particular, the first time serviced starts, it creates the base size device and then creates a tenant device from the base size device.

Perform these steps:

  1. Log in to the master host as root or as a user with superuser privileges.
  2. Identify the size of the thin pool for application data. The size is required to set an accurate value for the SERVICED_DM_BASESIZE variable.

    lvs --options=lv_name,lv_size | grep serviced-pool
    
  3. Edit storage variables in the Control Center configuration file.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_DM_BASESIZE variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Change the value, if necessary. Replace Fifty-Percent with a value that is less than or equal to 50% of the size of the thin pool for application data. Include the symbol for gigabytes, G (see the worked example after these steps):

      SERVICED_DM_BASESIZE=Fifty-PercentG
      
    5. Save the file, and then close the editor.

    6. Verify the settings in the serviced configuration file.

      grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
      
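
For example, if the lvs command reports a 200GB serviced-pool, 50% of that size is 100GB, so a valid setting would be:

    # Example for a 200GB application data thin pool (50% = 100GB).
    SERVICED_DM_BASESIZE=100G
    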

Setting the host role to master

Use this procedure to configure a host as the master host.

Perform these steps:

  1. Log in to the host as root or as a user with superuser privileges.
  2. Edit the Control Center configuration file.
    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_MASTER variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Save the file, and then close the editor.
  3. Verify the settings in the serviced configuration file.

    grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
    
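
After you uncomment the line, the grep output should include a setting similar to the following; the default value, 1, designates the master role.

    SERVICED_MASTER=1
    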

Setting the internal services OpenTSDB credentials

Starting with release 1.10.2, Control Center requires a username and password to run the internal services OpenTSDB database.

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Edit the Control Center configuration file.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_ISVCS_OPENTSDB_USERNAME variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Enter a username for the OpenTSDB server. You can choose any username you wish.
    5. Locate the line for the SERVICED_ISVCS_OPENTSDB_PASSWORD variable, and then make a copy of the line, immediately below the original.
    6. Remove the number sign character (#) from the beginning of the line.
    7. Enter a password for the OpenTSDB server. You can choose any password you wish.
    8. Save the file, and then close the editor.
  3. Verify the credentials in the serviced configuration file.

    grep -E '^[[:space:]]*SERVICED_ISVCS_OPENTSDB' /etc/default/serviced
    
  4. Restart serviced.

    systemctl restart serviced
    
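
After these edits, the grep output should show the two uncommented lines with the values you chose, for example (hypothetical credentials):

    # Hypothetical example credentials; choose your own.
    SERVICED_ISVCS_OPENTSDB_USERNAME=opentsdb-admin
    SERVICED_ISVCS_OPENTSDB_PASSWORD=ChangeMe-Example
    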

Optional procedures

The following procedures may be required for your installation. Perform these procedures only if you are sure they are necessary for your use case.

Changing the local Docker registry endpoint

Use this procedure to configure the master host with the endpoint of an alternative local Docker registry. Control Center includes a local Docker registry, but you may use an existing registry in your environment, if desired. For more information about configuring a local Docker registry, please refer to Docker documentation. Note: Changing the local Docker registry endpoint is rare. Perform this procedure only if you are sure it is necessary and the alternative local Docker registry is already available in your environment.

Perform these steps:

  1. Log in to the master host as root or as a user with superuser privileges.
  2. Edit the Control Center configuration file.
    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_DOCKER_REGISTRY variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Replace localhost:5000 with the endpoint of the local Docker registry. Use the IP address or fully-qualified domain name of the host and the port number.
    5. Save the file, and then close the editor.
  3. Verify the settings in the serviced configuration file.

    grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
    
  4. Add the insecure registry flag to the Docker configuration file.

    1. Open /etc/docker/daemon.json in a text editor.

    2. Add the following configuration option to the JSON.

      Replace Registry-Endpoint with the same value used for the SERVICED_DOCKER_REGISTRY variable:

      {
        "insecure-registries" : ["Registry-Endpoint"]
      }
      

      Note

      The option above assumes an otherwise blank file. If other options are already in place, add insecure-registries as a top-level key in the JSON object; see the Docker documentation for more information, and the combined example at the end of this procedure.

    3. Save the file, and then close the editor.

  5. Restart the Docker service.

    systemctl restart docker
    
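
For reference, a daemon.json that combines the options from Configuring Docker on a master host with the insecure registry flag might look like the following sketch, where $myPool and Registry-Endpoint stand in for your thin pool device and registry endpoint:

    {
      "bip": "172.17.0.1/16",
      "dns": ["172.17.0.1"],
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-level": "error",
      "storage-driver": "devicemapper",
      "storage-opts": [
        "dm.mountopt=discard",
        "dm.thinpooldev=$myPool"
      ],
      "insecure-registries": ["Registry-Endpoint"]
    }
    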

Configuring offline use

Use this procedure to configure a host to operate without internet access.

Perform these steps:

  1. Log in to the host as root or as a user with superuser privileges.
  2. Identify the IPv4 address of the host.

    hostname -i
    
  3. Edit the Control Center configuration file.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_OUTBOUND_IP variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Change the value to the IPv4 address identified in the previous step.
    5. Save the file, and then close the editor.
  4. Verify the settings in the serviced configuration file.

    grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
    
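
For example, if hostname -i returned 198.51.100.10 (a hypothetical address), the uncommented line would read:

    # Hypothetical address; use the address reported by hostname -i.
    SERVICED_OUTBOUND_IP=198.51.100.10
    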

Installing a security certificate

The default, insecure certificate that Control Center uses for TLS-encrypted communications is based on a public certificate compiled into serviced. Use this procedure to replace the default certificate files with your own files.

  • If you are using virtual host public endpoints for your Zenoss Service Dynamics deployment, you need a wildcard certificate or a subject alternative name (SAN) certificate.
  • If your end users access the browser interface through a reverse proxy, the reverse proxy may provide the browser with its own SSL certificate. If so, please contact Zenoss Support for additional assistance.

To perform this procedure, you need valid certificate files. For information about generating a self-signed certificate, see Creating a self-signed security certificate.

To use your own certificate files, perform this procedure on the Control Center master host and on each Control Center delegate host in your environment.

Follow these steps:

  1. Log in to the host as root or as a user with superuser privileges.

  2. Use a secure copy program to copy the key and certificate files to /tmp.

  3. Move the key file to the /etc/pki/tls/private directory. Replace <KEY_FILE> with the name of your key file:

    mv /tmp/<KEY_FILE>.key /etc/pki/tls/private
    
  4. Move the certificate file to the /etc/pki/tls/certs directory. Replace <CERT_FILE> with the name of your certificate file:

    mv /tmp/<CERT_FILE>.crt /etc/pki/tls/certs
    
  5. Updates only: Create a backup copy of the Control Center configuration file. Do not perform this step for a fresh install:

    cp /etc/default/serviced /etc/default/serviced.before-cert-files
    
  6. Edit the Control Center configuration file.

    1. Open /etc/default/serviced in a text editor.
    2. Locate the line for the SERVICED_KEY_FILE variable, and then make a copy of the line, immediately below the original.
    3. Remove the number sign character (#) from the beginning of the line.
    4. Replace the default value with the full pathname of your key file.
    5. Locate the line for the SERVICED_CERT_FILE variable, and then make a copy of the line, immediately below the original.
    6. Remove the number sign character (#) from the beginning of the line.
    7. Replace the default value with the full pathname of your certificate file.
    8. Save the file, and then close the editor.
  7. Verify the settings in the configuration file.

    grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
    
  8. Updates only: Reload the systemd manager configuration. Do not perform this step for a fresh install:

    systemctl daemon-reload
    
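
After these edits, the two uncommented lines should point at the files you copied earlier, for example (using the same placeholder file names as above):

    SERVICED_KEY_FILE=/etc/pki/tls/private/<KEY_FILE>.key
    SERVICED_CERT_FILE=/etc/pki/tls/certs/<CERT_FILE>.crt
    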

Setting Zookeeper security variables

Use this procedure to add security to the ZooKeeper instances for Control Center and Resource Manager. For more information, see ZooKeeper security.

  1. Log in to the Control Center host as root or as a user with superuser privileges.
  2. Open /etc/default/serviced in a text editor.
  3. Add a user account name to secure the leader-election phase.
    1. Locate the line for the SERVICED_ISVCS_ZOOKEEPER_USERNAME variable, and then make a copy of the line, immediately below the original.
    2. Remove the number sign character (#) from the beginning of the line.
    3. Add a value for the variable.
  4. Add a password to secure the leader-election phase.
    1. Locate the line for the SERVICED_ISVCS_ZOOKEEPER_PASSWD variable, and then make a copy of the line, immediately below the original.
    2. Remove the number sign character (#) from the beginning of the line.
    3. Add a value for the variable.
  5. Add a user account name to secure data nodes.
    1. Locate the line for the SERVICED_ZOOKEEPER_ACL_USER variable, and then make a copy of the line, immediately below the original.
    2. Remove the number sign character (#) from the beginning of the line.
    3. Add a value for the variable.
  6. Add a password to secure data nodes.
    1. Locate the line for the SERVICED_ZOOKEEPER_ACL_PASSWD variable, and then make a copy of the line, immediately below the original.
    2. Remove the number sign character (#) from the beginning of the line.
    3. Add a value for the variable.
  7. Save the file, and then close the editor.
  8. Verify the settings.

    grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
    
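
After these edits, the grep output should include the four uncommented lines with the values you chose, for example (hypothetical account names and passwords):

    # Hypothetical example values; choose your own.
    SERVICED_ISVCS_ZOOKEEPER_USERNAME=zk-election-user
    SERVICED_ISVCS_ZOOKEEPER_PASSWD=zk-election-password
    SERVICED_ZOOKEEPER_ACL_USER=zk-acl-user
    SERVICED_ZOOKEEPER_ACL_PASSWD=zk-acl-password
    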

Additional features

You may wish to adjust other Control Center settings or behaviors before starting Control Center for the first time. If so, please refer to the master host and universal configuration variables pages.

Starting Control Center for the first time

Use this procedure to start the Control Center service (serviced) on a master host after installing and configuring it. This procedure is valid for single-host and multi-host deployments, whether the deployment has internet access or not, and is only performed once.

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Verify the settings in the serviced configuration file.

    grep -E '^[[:space:]]*[A-Z_]+' /etc/default/serviced
    
  3. Start the Control Center service.

    systemctl start serviced
    

    To monitor progress, enter the following command:

    journalctl -flu serviced -o cat
    

    The serviced daemon tags images in the local Docker registry and starts its internal services. The Control Center browser and command-line interfaces are unavailable for about 3 minutes.

    When the message Host Master successfully started is displayed, Control Center is ready for the next procedure.

Until the master host is added to a pool, all serviced CLI commands that use the RPC server must be run as root.

Adding the master host to a resource pool

Use this procedure to add the master host to the default resource pool.

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Add the master host to the pool. Replace Hostname-Or-IP with the hostname or IP address of the Control Center master host:

    serviced host add --register Hostname-Or-IP:4979 default
    

    If you use a hostname, all Control Center hosts must be able to resolve the name, either through an entry in /etc/hosts or through a nameserver on the network.
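
For example, if you register the host by name, each Control Center host could carry an /etc/hosts entry similar to the following (hypothetical address and hostname):

    198.51.100.10  cc-master.example.com  cc-master   # hypothetical entry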