Prerequisites

The following procedures are common to multi-host and single-host deployments.

Verify hosts

This section describes how to use the zenoss-installer script to verify hosts for their roles in a Control Center deployment. In addition, this section includes procedures for preparing required filesystems on the master host.

The verify action of the zenoss-installer script performs read-only tests of the compute, memory, operating system, and storage resources of a host. The verify action is intended for iterative use; run the script, correct an error, and then run the script again. You can perform the verify action as many times as you wish without affecting the host.

The zenoss-installer script is updated regularly. Please download the latest version before creating a new deployment of Control Center. The script is not needed on Zenoss Resource Manager virtual appliances.

Download the zenoss-installer script

To perform this procedure, you need:

  • A workstation with internet access.

  • Permission to download files from delivery.zenoss.io. Customers can request permission by filing a ticket at the Zenoss Support site.

  • A secure network copy program.

Follow these steps:

  1. In a web browser, navigate to delivery.zenoss.io, and then log in.
  2. Download the zenoss-installer script.
  3. Use a secure copy program to copy the script to Control Center candidate hosts.

Verify candidate host resources

Use this procedure to run the zenoss-installer script on a candidate host.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Add execute permissions to the script. The following example assumes the script is located in /tmp; adjust the path, if necessary.

    chmod +x /tmp/zenoss-installer
    
  3. Run the script with the arguments that match the role the host will play in your Control Center deployment.

    Role       Deployment   Invocation
    Master     Single-host  zenoss-installer -a verify -d single -r master
    Master     Multi-host   zenoss-installer -a verify -d multi -r master
    Delegate   Multi-host   zenoss-installer -a verify -r delegate
    Collector  Multi-host   zenoss-installer -a verify -r collector
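
    For example, to verify a candidate master host for a single-host deployment, assuming the script is still in /tmp:

    /tmp/zenoss-installer -a verify -d single -r master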

Hardware errors (VHW)

Error Issue Solution
VHW01 Control Center supports only the x86_64 processor architecture. Select a different host.
VHW02 The number of available CPU cores does not meet the minimum required for the specified host role. Increase the number of cores assigned to the host or select a different host.
VHW03 One or more CPU cores do not support the AES instruction set, which speeds encryption and decryption processing. If the candidate host is a virtual machine, the managing hypervisor may be configured in Hyper-V compatibility mode. Check the setting and disable it, or select a different host.
VHW04 The amount of available main memory does not support the specified host role. (Memory is measured in kibibytes.) Increase the amount of memory assigned to the host or select a different host.
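
To check whether a host's CPU supports the AES instruction set (the condition VHW03 tests), one quick check, independent of the script, is to inspect /proc/cpuinfo:

grep -qw aes /proc/cpuinfo && echo "AES supported" || echo "AES not supported"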

Software errors (VSW)

Error Issue Solution
VSW01 Control Center supports only the x86_64 kernel architecture. Upgrade the operating system or select a different host.
VSW02 The installed kernel version is less than the required minimum version. Upgrade the kernel or select a different host.
VSW03 The installed kernel patch is less than the required minimum patch. Upgrade the kernel or select a different host.
VSW04 The installed operating system is not RHEL or CentOS. Install a supported operating system or select a different host.
VSW05 The installed operating system release is not supported. Upgrade the operating system or select a different host.
VSW06 The operating system locale is not supported. Set the system locale to en_US.UTF-8.
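
On RHEL/CentOS, one way to set the system locale to en_US.UTF-8 (the fix for VSW06) is with localectl:

localectl set-locale LANG=en_US.UTF-8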

Network errors (VNW)

Error Issue Solution
VNW01 The hostname resolves to 127.0.0.1 only or does not resolve to a recognizable IPv4 address. Add an entry for the host to the network nameserver, or to /etc/hosts.
VNW02 The /etc/hosts file does not include an entry for 127.0.0.1. Add an entry to /etc/hosts that maps 127.0.0.1 to localhost.
VNW03 The /etc/hosts file does not include an entry that maps localhost to 127.0.0.1. Add an entry to /etc/hosts that maps 127.0.0.1 to localhost.
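
For example, a minimal /etc/hosts that satisfies these tests might look like the following; the address and host names are placeholders to replace with your own values:

127.0.0.1      localhost localhost.localdomain
198.51.100.12  cc-master.example.com cc-master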

Storage errors (VST)

Storage space is computed in kilobytes (1000 bytes each), not kibibytes (1024 bytes each).

Error Issue Solution
VST01 The amount of available swap space is less than the required minimum. Add space to the swap device or partition.
VST02 Swap is not mounted on a block device or partition. Add a separate device or partition for swap.
VST03 (Master hosts only) One or both of the following paths do not exist:
  • /opt/serviced/var/backups
  • /opt/serviced/var/isvcs
Perform one or both of the following procedures, which appear later in this section: Create a filesystem for application data backups, and Create a filesystem for Control Center internal services.
VST04 One or more required mount points do not have the required minimum space. This test does not examine space for thin pools. Add storage as recommended in Recommended storage layout.
VST05 One or more required mount points are mounted on the same device or partition. Add devices or partitions for each required mount point.
VST06 The amount of space available on unused block storage devices is not enough for the thin pools that the specified role requires. Add block storage as described in the discussion that follows this table.

On master hosts, separate devices or partitions are required for the thin pool for Docker data and the thin pool for application data. On delegate and collector hosts, a separate device is required for the Docker data thin pool. For more information, see Recommended storage layout.

Best practice is to dedicate block devices to thin pools, so this test only examines top-level block devices. Partitions and logical volumes are not considered.

The tests of available block storage are based on best practice recommendations for master hosts and delegate or collector hosts. Enter the following command to display information about available block storage:

lsblk -ap --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT

The suggested minimum sizes for application data and application data backups should be replaced with sizes that meet your application requirements. To calculate the appropriate sizes for these storage areas, use the following guidelines:

  • Application data storage includes space for both data and snapshots. The default base size for data is 100GB, and the recommended space for snapshots is 100% of the base size. Adding the two yields the suggested minimum size of 200GB.
  • For application data backups, the recommended minimum space is 150% of the base size for data, or a minimum of 400GB, whichever is greater.

For large environments, the space for application data backups should be much greater. Individual backup files can be 50GB to 100GB each, or more.
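
As a worked example of these guidelines, the following sketch computes the minimum backup space from a base size; the variable names are illustrative:

BASE_GB=100                                 # default base size for application data
BACKUP_GB=$(( BASE_GB * 150 / 100 ))        # 150% of the base size
[ "$BACKUP_GB" -lt 400 ] && BACKUP_GB=400   # apply the 400GB minimum, if greater
echo "Minimum backup space: ${BACKUP_GB}GB"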

Control Center master host storage

A Control Center master host should have a total of 7 separate block storage devices or partitions. The following table identifies the purpose and recommended minimum size of each.


  Purpose                                                            Minimum size
1 Root (/)                                                           30GB
2 Swap                                                               16GB
3 Temporary (/tmp)                                                   16GB
4 Docker data                                                        50GB
5 Control Center internal services data (/opt/serviced/var/isvcs)    50GB
6 Application data                                                   200GB
7 Application data backups (/opt/serviced/var/backups)               150GB

Control Center delegate host storage

A Control Center delegate or collector host should have a total of 4 separate block storage devices or partitions. The following table identifies the purpose and recommended minimum size of each.


  Purpose          Minimum size
1 Root (/)         30GB
2 Swap             16GB
3 Temporary (/tmp) 16GB
4 Docker data      50GB

Create a filesystem for application data backups

This procedure requires one unused device or partition, or a remote file server that is compatible with XFS. Use this procedure to create an XFS filesystem on a device or partition, or to mount a remote filesystem, for application data backups.

If you are using a partition on a local device for backups, ensure that the storage for Control Center internal services data is not on the same device.

  1. Log in to the target host as root or as a user with superuser privileges.

  2. Optional: Identify the target device or partition for the new filesystem, if necessary. Skip this step if you are using a remote file server.

    lsblk -ap --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
    
  3. Optional: Create an XFS filesystem, if necessary. Skip this step if you are using a remote file server. Replace Storage with the path of the target device or partition:

    mkfs.xfs Storage
    
  4. Create an entry in the /etc/fstab file.

    Replace File-System-Specification with one of the following values:

    • the path of the device or partition used in the previous step
    • the remote server specification

    echo "File-System-Specification /opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab
    
  5. Create the mount point for backup data.

    mkdir -p /opt/serviced/var/backups
    
  6. Mount the filesystem, and then verify it mounted correctly.

    mount -a && mount | grep backups
    

    Example result:

    /dev/sdb3 on /opt/serviced/var/backups type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
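
For example, if the target partition is /dev/sdb3, as in the example result, steps 3 and 4 of this procedure would be:

    mkfs.xfs /dev/sdb3
    echo "/dev/sdb3 /opt/serviced/var/backups xfs defaults 0 0" >> /etc/fstab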
    

Create a filesystem for Control Center internal services

This procedure requires one unused device or partition. Use this procedure to create an XFS filesystem on an unused device or partition.

  1. Log in to the target host as root or as a user with superuser privileges.
  2. Identify the target device or partition for the new filesystem.

    lsblk -ap --output=NAME,SIZE,TYPE,FSTYPE,MOUNTPOINT
    
  3. Create an XFS filesystem. Replace Storage with the path of the target device or partition:

    mkfs.xfs Storage
    
  4. Enter the following command to add an entry to the /etc/fstab file. Replace Storage with the path of the device or partition used in the previous step:

    echo "Storage /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab
    
  5. Create the mount point for internal services data.

    mkdir -p /opt/serviced/var/isvcs
    
  6. Mount the filesystem, and then verify it mounted correctly.

    mount -a && mount | grep isvcs
    

    Example result:

    /dev/xvdb1 on /opt/serviced/var/isvcs type xfs (rw,relatime,attr2,inode64,noquota)
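
Similarly, if the target partition is /dev/xvdb1, as in the example result, steps 3 and 4 of this procedure would be:

    mkfs.xfs /dev/xvdb1
    echo "/dev/xvdb1 /opt/serviced/var/isvcs xfs defaults 0 0" >> /etc/fstab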
    

Downloading and staging required files

This section describes how to download and install or stage Control Center software and its operating system dependencies. The procedures in this section are required to perform an installation.

Downloading Control Center files for release 1.10.2

Use this procedure to download required files to a workstation, and then copy the files to the hosts that need them.

To perform this procedure, you need:

  • A workstation with internet access.
  • Permission to download files from delivery.zenoss.io. Customers can request permission by filing a ticket at the Zenoss Support site.
  • A secure network copy program.

Follow these steps:

  1. In a web browser, navigate to delivery.zenoss.io, and then log in.

  2. Download the self-installing Docker image files.

    • install-zenoss-serviced-isvcs-v73.run
    • install-zenoss-isvcs-zookeeper-v16.run
  3. Download the Control Center RPM file.

    • serviced-1.10.2-1.x86_64.rpm
  4. Identify the operating system release on Control Center hosts. Enter the following command on each Control Center host in your deployment, if necessary. All Control Center hosts should be running the same operating system release and kernel.

    cat /etc/redhat-release
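
    Example result (CentOS 7.9):

    CentOS Linux release 7.9.2009 (Core)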
    
  5. Download the RHEL/CentOS repository mirror file for your deployment. The download site provides repository mirror files containing the packages that Control Center requires. For RHEL/CentOS 7.7, use the 7.6 file. For RHEL 8.x, use the 7.9 file.

    • yum-mirror-centos7.2-1511-serviced-1.10.2.x86_64.rpm

    • yum-mirror-centos7.3-1611-serviced-1.10.2.x86_64.rpm

    • yum-mirror-centos7.4-1708-serviced-1.10.2.x86_64.rpm

    • yum-mirror-centos7.5-1804-serviced-1.10.2.x86_64.rpm

    • yum-mirror-centos7.6-1810-serviced-1.10.2.x86_64.rpm

    • yum-mirror-centos7.8.2003-serviced-1.10.2.x86_64.rpm

    • yum-mirror-centos7.9.2009-serviced-1.10.2.x86_64.rpm

  6. Optional: Download the Zenoss Pretty Good Privacy (PGP) key, if desired. You can use the Zenoss PGP key to verify Zenoss RPM files and the yum metadata of the repository mirror.

    1. Download the key.

      curl --location -o /tmp/tmp.html 'http://keys.gnupg.net/pks/lookup?op=get&search=0xED0A5FD2AA5A1AD7'
      
    2. Determine whether the download succeeded.

      grep -Ec '^\-\-\-\-\-BEGIN PGP' /tmp/tmp.html
      
      • If the result is 0, return to the previous substep.
      • If the result is 1, proceed to the next substep.
    3. Extract the key.

      awk '/^-----BEGIN PGP.*$/,/^-----END PGP.*$/' /tmp/tmp.html > ./RPM-PGP-KEY-Zenoss
      
  7. Use a secure copy program to copy the files to Control Center hosts.

    • Copy all files to the master host.
    • Copy the following files to all delegate hosts:

      • RHEL/CentOS RPM file
      • Control Center RPM file
      • Zenoss PGP key file
    • Copy the Docker image file for ZooKeeper to delegate hosts that are ZooKeeper ensemble nodes.
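
    For example, using scp; the host names cc-master and cc-delegate1 are placeholders, and the file names assume the downloads described above:

      scp install-zenoss-*.run serviced-1.10.2-1.x86_64.rpm yum-mirror-*.rpm RPM-PGP-KEY-Zenoss root@cc-master:/tmp
      scp serviced-1.10.2-1.x86_64.rpm yum-mirror-*.rpm RPM-PGP-KEY-Zenoss root@cc-delegate1:/tmp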

Installing the repository mirror

Use this procedure to install the Zenoss repository mirror on a Control Center host. The mirror contains packages that are required on all Control Center hosts. Repeat this procedure on each host in your deployment.

  1. Log in to the target host as root or as a user with superuser privileges.

  2. Move the RPM files and the Zenoss PGP key file to /tmp.

  3. Install the repository mirror.

    yum install /tmp/yum-mirror-*.rpm
    

    The yum command copies the contents of the RPM file to /opt/zenoss-repo-mirror.

  4. Optional: Install the Zenoss PGP key, and then test the package files, if desired.

    1. Move the Zenoss PGP key to the mirror directory.

      mv /tmp/RPM-PGP-KEY-Zenoss /opt/zenoss-repo-mirror
      
    2. Install the key.

      rpm --import /opt/zenoss-repo-mirror/RPM-PGP-KEY-Zenoss
      
    3. Test the repository mirror package file.

      rpm -K /tmp/yum-mirror-*.rpm
      

      On success, the result includes the file name and the following information:

      (sha1) dsa sha1 md5 gpg OK
      
    4. Test the Control Center package file. Replace VERSION with the Control Center version; for this release, 1.10.2:

      rpm -K /tmp/serviced-VERSION-1.x86_64.rpm
      
  5. Optional: Update the configuration file of the Zenoss repository mirror to enable PGP key verification, if desired.

    1. Open the repository mirror configuration file (/etc/yum.repos.d/zenoss-mirror.repo) with a text editor, and then add the following lines to the end of the file.

      repo_gpgcheck=1
      gpgkey=file:///opt/zenoss-repo-mirror/RPM-PGP-KEY-Zenoss
      
    2. Save the file, and then close the editor.

    3. Update the yum metadata cache.

      yum makecache fast
      

      The cache update process includes the following prompt:

      Retrieving key from file:///opt/zenoss-repo-mirror/RPM-PGP-KEY-Zenoss
      Importing GPG key 0xAA5A1AD7:
       Userid     : "Zenoss, Inc. <dev@zenoss.com>"
       Fingerprint: f31f fd84 6a23 b3d5 981d a728 ed0a 5fd2 aa5a 1ad7
       From       : /opt/zenoss-repo-mirror/RPM-PGP-KEY-Zenoss
      Is this ok [y/N]:
      

      Enter y.

  6. Move the Control Center package file to the mirror directory.

    mv /tmp/serviced-VERSION-1.x86_64.rpm /opt/zenoss-repo-mirror
    
  7. Optional: Delete the mirror package file, if desired.

    rm /tmp/yum-mirror-*.rpm
    

Staging Docker image files on the master host

Before performing this procedure, verify that approximately 640MB of temporary space is available on the file system where /root is located.
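
One way to check the available space is with df:

df -h /root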

Use this procedure to copy Docker image files to the Control Center master host. The files are used when Docker is fully configured.

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Copy or move the archive files to /root.

  3. Add execute permission to the files.

    chmod +x /root/*.run
    

Staging a Docker image file on ZooKeeper ensemble nodes

Before performing this procedure, verify that approximately 170MB of temporary space is available on the file system where /root is located.

Use this procedure to add a Docker image file to the Control Center delegate hosts that are ZooKeeper ensemble nodes. Delegate hosts that are not ZooKeeper ensemble nodes do not need the file.

  1. Log in to a delegate host as root or as a user with superuser privileges.

  2. Copy or move the install-zenoss-isvcs-zookeeper-v*.run file to /root.

  3. Add execute permission to the file.

    chmod +x /root/*.run