Updating Control Center to release 1.9.1

This section includes procedures for updating Control Center 1.9.0 to release 1.9.1.

The following list outlines recommended best practices for updating Control Center deployments:

  1. Review the release notes for this release and relevant prior releases. The latest information is provided there.
  2. On delegate hosts, most of the update steps are identical. Use screen, tmux, or a similar terminal multiplexer to establish sessions on each delegate host so you can perform the steps simultaneously.
  3. Review and verify the settings in delegate host configuration files (/etc/default/serviced) before starting the update. Ideally, the settings on all delegate hosts are identical, except on ZooKeeper nodes and delegate hosts that do not mount the DFS.
  4. Review the update procedures before performing them. Every effort is made to avoid mistakes and anticipate needs; nevertheless, the instructions may be incorrect or inadequate for some requirements or environments.
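To support the third practice above, you can spot configuration drift by comparing checksums of the delegate configuration files. The sketch below is a hypothetical illustration: it assumes each delegate's /etc/default/serviced has been copied locally as serviced.<hostname>, and it uses sample contents in place of real files.

```shell
# Hypothetical sketch: detect delegate configuration drift by comparing
# checksums. Assumes each delegate's /etc/default/serviced was copied
# locally as serviced.<hostname> (example layout and sample contents).
mkdir -p /tmp/cc-cfg-check
printf 'SERVICED_MASTER=0\n' > /tmp/cc-cfg-check/serviced.delegate1   # sample data
printf 'SERVICED_MASTER=0\n' > /tmp/cc-cfg-check/serviced.delegate2   # sample data
# Count the number of distinct checksums among the collected files.
md5sum /tmp/cc-cfg-check/serviced.* | awk '{print $1}' | sort -u | wc -l
```

A result of 1 means the collected files are identical; a larger result indicates drift worth investigating before you start the update.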

Updating 1.9.0 to 1.9.1

  1. Download the required files
  2. Stage Docker image file
  3. Update the master host:
    1. Load image file
    2. Stop Control Center
    3. Update the serviced binary
  4. Update the delegate hosts:
    1. Stop Control Center
    2. Update the serviced binary
  5. Start Control Center:
  6. Perform post-upgrade procedures:

Downloading Control Center files for release 1.9.1

Use this procedure to download required files to a workstation, and then copy the files to the hosts that need them.

To perform this procedure, you need:

  • A workstation with internet access.
  • Permission to download files from delivery.zenoss.io. Customers can request permission by filing a ticket at the Zenoss Support site.
  • A secure network copy program.

Follow these steps:

  1. In a web browser, navigate to delivery.zenoss.io, and then log in.

  2. Download the self-installing Docker image files.

    • install-zenoss-serviced-isvcs:v70.run (located in ZSD - Resource Manager > control-center > cc191)
    • install-zenoss-isvcs-zookeeper:v14.run (located in ZSD - Resource Manager > resource-manager-66x > cc190-rm660; new installations only; not needed for updates)
  3. Download the Control Center RPM file.

    • serviced-1.9.1-1.x86_64.rpm(located in ZSD - Resource Manager > control-center > cc191)
  4. Identify the operating system release on Control Center hosts. If necessary, enter the following command on each Control Center host in your deployment. All Control Center hosts should be running the same operating system release and kernel.

    cat /etc/redhat-release
    
  5. RHEL 8.4+ only: Download the containerd RPM file from Docker:

    curl -#O https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.13-3.1.el7.x86_64.rpm
    
  6. Download the RHEL/CentOS repository mirror file for your deployment (located in ZSD - Resource Manager > resource-manager-66x > cc190-rm660). The download site provides repository mirror files containing the packages that Control Center requires. For RHEL 8.3, use the 7.9 file.

    • yum-mirror-centos7.2-1511-serviced-1.9.0.x86_64.rpm

    • yum-mirror-centos7.3-1611-serviced-1.9.0.x86_64.rpm

    • yum-mirror-centos7.4-1708-serviced-1.9.0.x86_64.rpm

    • yum-mirror-centos7.5-1804-serviced-1.9.0.x86_64.rpm

    • yum-mirror-centos7.6-1810-serviced-1.9.0.x86_64.rpm

    • yum-mirror-centos7.8.2003-serviced-1.9.0.x86_64.rpm

    • yum-mirror-centos7.9.2009-serviced-1.9.0.x86_64.rpm

  7. Optional: Download the Zenoss Pretty Good Privacy (PGP) key.

    You can use the Zenoss PGP key to verify Zenoss RPM files and the yum metadata of the repository mirror.

    1. Download the key.

      curl --location -o /tmp/tmp.html 'https://keyserver.ubuntu.com/pks/lookup?op=get&search=0xED0A5FD2AA5A1AD7'
      
    2. Determine whether the download succeeded.

      grep -Ec '^\-\-\-\-\-BEGIN PGP' /tmp/tmp.html
      
      • If the result is 0, return to the previous substep.
      • If the result is 1, proceed to the next substep.
    3. Extract the key.

      awk '/^-----BEGIN PGP.*$/,/^-----END PGP.*$/' /tmp/tmp.html > ./RPM-PGP-KEY-Zenoss
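The range-pattern awk extraction above can be exercised against synthetic data before you run it on the real download; the file names and key contents below are placeholders, not the actual Zenoss key.

```shell
# Exercise the extraction against synthetic data (not the real key).
cat > /tmp/tmp-sample.html <<'EOF'
<html><pre>
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBFakeKeyData
-----END PGP PUBLIC KEY BLOCK-----
</pre></html>
EOF
# The range pattern keeps only the lines between BEGIN and END, inclusive.
awk '/^-----BEGIN PGP.*$/,/^-----END PGP.*$/' /tmp/tmp-sample.html > /tmp/RPM-PGP-KEY-sample
head -n 1 /tmp/RPM-PGP-KEY-sample
```

After extracting the real key, you can register it with `rpm --import ./RPM-PGP-KEY-Zenoss` and then verify a package signature with `rpm -K`.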
      
  8. Use a secure copy program to copy the files to Control Center hosts.

    • Copy all files to the master host.
    • Copy the following files to all delegate hosts:

      • RHEL/CentOS RPM file
      • containerd RPM file
      • Control Center RPM file
      • Zenoss PGP key file
    • Copy the Docker image file for ZooKeeper to delegate hosts that are ZooKeeper ensemble nodes.
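The copies can be scripted with scp. The sketch below is a hypothetical dry run: the hostnames and the reduced file list are placeholders, and the `DRYRUN=echo` prefix prints each command instead of executing it. Remove the prefix and adjust the file list and hostnames to perform the actual copies.

```shell
# Hypothetical dry run: print the scp commands that would stage a delegate
# file set. Hostnames and file names are placeholders.
DRYRUN=echo
for host in delegate1 delegate2; do
  $DRYRUN scp serviced-1.9.1-1.x86_64.rpm RPM-PGP-KEY-Zenoss "root@${host}:/root/"
done
```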

Staging Docker image files on the master host

Before performing this procedure, verify that approximately 640MB of temporary space is available on the file system where /root is located.

Use this procedure to copy Docker image files to the Control Center master host. The files are used when Docker is fully configured.

  1. Log in to the master host as root or as a user with superuser privileges.

  2. Copy or move the archive files to /root.

  3. Add execute permission to the files.

    chmod +x /root/*.run
    

Staging a Docker image file on ZooKeeper ensemble nodes

Before performing this procedure, verify that approximately 170MB of temporary space is available on the file system where /root is located.

Use this procedure to add a Docker image file to the Control Center delegate hosts that are ZooKeeper ensemble nodes. Delegate hosts that are not ZooKeeper ensemble nodes do not need the file.

  1. Log in to a delegate host as root or as a user with superuser privileges.

  2. Copy or move the install-zenoss-isvcs-zookeeper-v*.run file to /root.

  3. Add execute permission to the file.

    chmod +x /root/*.run
    

Update the master host

Perform the following procedures on the master host of a multi-host deployment, or on the single host of a single-host deployment.

The procedures for updating a delegate host can be found below.

Loading image files

Use this procedure to load images into the local Docker registry on a host.

  1. Log in to the host as root or as a user with superuser privileges.
  2. Change directory to /root.

    cd /root
    
  3. Load the images.

    for image in install-zenoss-*.run
    do
      /bin/echo -en "\nLoading $image..."
      yes | ./$image
    done
    
  4. List the images in the registry.

    docker images
    

    The result should show one image for each archive file.

  5. Optional: Delete the archive files.

    rm -i ./install-zenoss-*.run
    

Stopping Control Center on the master host

Use this procedure to stop the Control Center service (serviced) on the master host.

  1. Log in to the master host as root or as a user with superuser privileges.
  2. Stop the top-level service serviced is managing, if necessary.

    1. Show the status of running services.

      serviced service status
      

      The top-level service is the service listed immediately below the headings line.

      • If the status of the top-level service and all child services is stopped, proceed to the next step.
      • If the status of the top-level service and all child services is not stopped, perform the remaining substeps.

    2. Stop the top-level service.

      serviced service stop Zenoss.resmgr
      
    3. Monitor the stop.

      serviced service status
      

      When the status of the top-level service and all child services is stopped, proceed to the next step.

  3. Stop the Control Center service.

    systemctl stop serviced
    
  4. Ensure that no containers remain in the local repository.

    1. Display the identifiers of all containers, running and exited.

      docker ps -qa
      
      • If the command returns no result, stop. This procedure is complete.
      • If the command returns a result, perform the following substeps.
    2. Remove all remaining containers.

      docker ps -qa | xargs --no-run-if-empty docker rm -fv
      
    3. Display the identifiers of all containers, running and exited.

      docker ps -qa
      
      • If the command returns no result, stop. This procedure is complete.
      • If the command returns a result, perform the remaining substeps.
    4. Disable the automatic startup of serviced.

      systemctl disable serviced
      
    5. Reboot the host.

      reboot
      
    6. Log in to the master host as root, or as a user with superuser privileges.

    7. Enable the automatic startup of serviced.

      systemctl enable serviced
      

Updating the serviced binary for release 1.9.1

Use this procedure to update the serviced binary on a Control Center host. Perform this procedure on each host in your Control Center deployment.

In multi-host deployments, stop serviced on the master host first.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Start the docker service, if necessary.

    systemctl is-active docker || systemctl start docker
    
  3. Disable SELinux temporarily, if necessary.

    1. Determine the current mode.

      sestatus | awk '/Current mode:/ { print $3 }'
      
      • If the result is not enforcing, proceed to step 4.
      • If the result is enforcing, perform the next substep.
    2. Disable SELinux temporarily.

      setenforce 0
      
  4. Save the current serviced configuration file as a reference and set permissions to read-only.

    cp -p /etc/default/serviced /etc/default/serviced-pre-1.9.1
    chmod 0440 /etc/default/serviced-pre-1.9.1
    
  5. Create a temporary directory, if necessary. The RPM installation process writes a script to TMP and starts it with exec to migrate Elasticsearch.

    1. Determine whether the /tmp directory is mounted on a partition with the noexec option set.

      awk '$2 ~ /\/tmp/ { if ($4 ~ /noexec/) print "protected" }' < /proc/mounts
      

      If the command returns a result, perform the following substeps. Otherwise, proceed to step 6.

    2. Create a temporary storage variable for TMP, if necessary.

      test -n "$TMP" && tempTMP=$TMP
      
    3. Create a temporary directory.

      mkdir $HOME/tempdir && export TMP=$HOME/tempdir
      
  6. Install the new serviced RPM package.

    yum install --enablerepo=zenoss-mirror /opt/zenoss-repo-mirror/serviced-1.9.1-1.x86_64.rpm
    

    If yum returns an error due to dependency issues, see Resolving package dependency conflicts for potential resolutions.

  7. Enable SELinux, if necessary. Perform this step only if you disabled SELinux in step 3:

    setenforce 1
    
  8. Restore the previous value of TMP, if necessary.

    test -n "$tempTMP" && export TMP=$tempTMP
    
  9. Make a backup copy of the new configuration file and set permissions to read-only.

    cp -p /etc/default/serviced /etc/default/serviced-1.9.1-orig
    chmod 0440 /etc/default/serviced-1.9.1-orig
    
  10. Compare the new configuration file with the configuration file of the previous release.

    1. Identify the configuration files to compare.

      ls -l /etc/default/serviced*
      

      The original versions of the configuration files should end with orig, but you may have to compare the dates of the files.

    2. Compare the new and previous configuration files. Replace New-Version with the name of the new configuration file, and Previous-Version with the name of the previous configuration file:

      diff New-Version Previous-Version
      

      For example, to compare the original 1.9.0 and 1.9.1 configuration files, enter the following command:

      diff /etc/default/serviced-1.9.0-orig /etc/default/serviced-1.9.1-orig
      
    3. If the command returns no result, restore the backup of the previous configuration file.

      cp -p /etc/default/serviced-pre-1.9.1 /etc/default/serviced && chmod 0644 /etc/default/serviced
      
    4. If the command returns a result, restore the backup of the previous configuration file, and then optionally, use the results to edit the restored version.

      cp -p /etc/default/serviced-pre-1.9.1 /etc/default/serviced && chmod 0644 /etc/default/serviced
      

      For more information about configuring a host, see Configuration variables.

  11. Reload the systemd manager configuration.

    systemctl daemon-reload
    

Update the delegate hosts

Stopping Control Center on a delegate host

Use this procedure to stop the Control Center service (serviced) on a delegate host in a multi-host deployment. Repeat this procedure on each delegate host in your deployment.

Before performing this procedure on any delegate host, stop Control Center on the master host.

  1. Log in to the delegate host as root or as a user with superuser privileges.
  2. Stop the Control Center service.

    systemctl stop serviced
    
  3. Ensure that no containers remain in the local repository.

    1. Display the identifiers of all containers, running and exited.

      docker ps -qa
      
      • If the command returns no result, proceed to the next step.
      • If the command returns a result, perform the following substeps.
    2. Remove all remaining containers.

      docker ps -qa | xargs --no-run-if-empty docker rm -fv
      
      • If the remove command completes, proceed to the next step.
      • If the remove command does not complete, the most likely cause is an NFS conflict. Perform the following substeps.
    3. Stop the NFS and Docker services.

      systemctl stop nfs && systemctl stop docker
      
    4. Start the NFS and Docker services.

      systemctl start nfs && systemctl start docker
      
    5. Repeat the attempt to remove all remaining containers.

      docker ps -qa | xargs --no-run-if-empty docker rm -fv
      
      • If the remove command completes, proceed to the next step.
      • If the remove command does not complete, perform the remaining substeps.
    6. Disable the automatic startup of serviced.

      systemctl disable serviced
      
    7. Reboot the host.

      reboot
      
    8. Log in to the delegate host as root, or as a user with superuser privileges.

    9. Enable the automatic startup of serviced.

      systemctl enable serviced
      
  4. Dismount all filesystems mounted from the Control Center master host.

    This step ensures no stale mounts remain when the storage on the master host is replaced.

    1. Identify filesystems mounted from the master host.

      awk '/serviced/ { print $1, $2 }' < /proc/mounts | grep -v '/opt/serviced/var/isvcs'
      
      • If the preceding command returns no result, stop. This procedure is complete.
      • If the preceding command returns a result, perform the following substeps.
    2. Force the filesystems to dismount.

      for FS in $(awk '/serviced/ { print $2 }' < /proc/mounts | grep -v '/opt/serviced/var/isvcs')
      do
        umount -f $FS
      done
      
    3. Identify filesystems mounted from the master host.

      awk '/serviced/ { print $1, $2 }' < /proc/mounts | grep -v '/opt/serviced/var/isvcs'
      
      • If the preceding command returns no result, stop. This procedure is complete.
      • If the preceding command returns a result, perform the following substeps.
    4. Perform a lazy dismount.

      for FS in $(awk '/serviced/ { print $2 }' < /proc/mounts | grep -v '/opt/serviced/var/isvcs')
      do
        umount -f -l $FS
      done
      
    5. Restart the NFS service.

      systemctl restart nfs
      
    6. Determine whether any filesystems remain mounted from the master host.

      awk '/serviced/ { print $1, $2 }' < /proc/mounts | grep -v '/opt/serviced/var/isvcs'
      
      • If the preceding command returns no result, stop. This procedure is complete.
      • If the preceding command returns a result, perform the remaining substeps.
    7. Disable the automatic startup of serviced.

      systemctl disable serviced
      
    8. Reboot the host.

      reboot
      
    9. Log in to the delegate host as root, or as a user with superuser privileges.

    10. Enable the automatic startup of serviced.

      systemctl enable serviced
      

Updating the serviced binary for release 1.9.1

Use this procedure to update the serviced binary on a Control Center host. Perform this procedure on each host in your Control Center deployment.

In multi-host deployments, stop serviced on the master host first.

  1. Log in to the host as root or as a user with superuser privileges.

  2. Start the docker service, if necessary.

    systemctl is-active docker || systemctl start docker
    
  3. Disable SELinux temporarily, if necessary.

    1. Determine the current mode.

      sestatus | awk '/Current mode:/ { print $3 }'
      
      • If the result is not enforcing, proceed to step 4.
      • If the result is enforcing, perform the next substep.
    2. Disable SELinux temporarily.

      setenforce 0
      
  4. Save the current serviced configuration file as a reference and set permissions to read-only.

    cp -p /etc/default/serviced /etc/default/serviced-pre-1.9.1
    chmod 0440 /etc/default/serviced-pre-1.9.1
    
  5. Create a temporary directory, if necessary. The RPM installation process writes a script to TMP and starts it with exec to migrate Elasticsearch.

    1. Determine whether the /tmp directory is mounted on a partition with the noexec option set.

      awk '$2 ~ /\/tmp/ { if ($4 ~ /noexec/) print "protected" }' < /proc/mounts
      

      If the command returns a result, perform the following substeps. Otherwise, proceed to step 6.

    2. Create a temporary storage variable for TMP, if necessary.

      test -n "$TMP" && tempTMP=$TMP
      
    3. Create a temporary directory.

      mkdir $HOME/tempdir && export TMP=$HOME/tempdir
      
  6. Install the new serviced RPM package.

    yum install --enablerepo=zenoss-mirror /opt/zenoss-repo-mirror/serviced-1.9.1-1.x86_64.rpm
    

    If yum returns an error due to dependency issues, see Resolving package dependency conflicts for potential resolutions.

  7. Enable SELinux, if necessary. Perform this step only if you disabled SELinux in step 3:

    setenforce 1
    
  8. Restore the previous value of TMP, if necessary.

    test -n "$tempTMP" && export TMP=$tempTMP
    
  9. Make a backup copy of the new configuration file and set permissions to read-only.

    cp -p /etc/default/serviced /etc/default/serviced-1.9.1-orig
    chmod 0440 /etc/default/serviced-1.9.1-orig
    
  10. Compare the new configuration file with the configuration file of the previous release.

    1. Identify the configuration files to compare.

      ls -l /etc/default/serviced*
      

      The original versions of the configuration files should end with orig, but you may have to compare the dates of the files.

    2. Compare the new and previous configuration files. Replace New-Version with the name of the new configuration file, and Previous-Version with the name of the previous configuration file:

      diff New-Version Previous-Version
      

      For example, to compare the original 1.9.0 and 1.9.1 configuration files, enter the following command:

      diff /etc/default/serviced-1.9.0-orig /etc/default/serviced-1.9.1-orig
      
    3. If the command returns no result, restore the backup of the previous configuration file.

      cp -p /etc/default/serviced-pre-1.9.1 /etc/default/serviced && chmod 0644 /etc/default/serviced
      
    4. If the command returns a result, restore the backup of the previous configuration file, and then optionally, use the results to edit the restored version.

      cp -p /etc/default/serviced-pre-1.9.1 /etc/default/serviced && chmod 0644 /etc/default/serviced
      

      For more information about configuring a host, see Configuration variables.

  11. Reload the systemd manager configuration.

    systemctl daemon-reload
    

Start Control Center

There are separate start-up procedures for single-host and multi-host deployments of Control Center. Use the procedure appropriate for your deployment.

Starting Control Center (single-host deployment)

Use this procedure to start Control Center in a single-host deployment. The default configuration of the Control Center service (serviced) is to start when the host starts. This procedure is only needed after stopping serviced to perform maintenance tasks.

  1. Log in to the master host as root or as a user with superuser privileges.
  2. Enable serviced, if necessary.

    systemctl is-enabled serviced || systemctl enable serviced
    
  3. Start the Control Center service.

    systemctl start serviced
    
  4. Optional: Monitor the startup.

    journalctl -u serviced -f -o cat
    

Starting Control Center (multi-host deployment)

Use this procedure to start Control Center in a multi-host deployment. The default configuration of the Control Center service (serviced) is to start when the host starts. This procedure is only needed after stopping serviced to perform maintenance tasks.

  1. Log in to the master host as root or as a user with superuser privileges.
  2. Enable serviced, if necessary.

    systemctl is-enabled serviced || systemctl enable serviced
    
  3. Identify the hosts in the ZooKeeper ensemble.

    grep -E '^[[:space:]]*SERVICED_ZK=' /etc/default/serviced
    

    The result is a list of 1, 3, or 5 hosts, separated by the comma character (,). The master host is always a node in the ZooKeeper ensemble.

  4. In separate windows, log in to each of the delegate hosts that are nodes in the ZooKeeper ensemble as root, or as a user with superuser privileges.

  5. On all ensemble hosts, start serviced.

    The window of time for starting a ZooKeeper ensemble is relatively short. The goal of this step is to start Control Center on each ensemble node at about the same time, so that each node can participate in electing the leader.

    systemctl start serviced
    
  6. On the master host, check the status of the ZooKeeper ensemble.

    1. Attach to the container of the ZooKeeper service.

      docker exec -it serviced-isvcs_zookeeper bash
      
    2. Query the master host and identify its role in the ensemble.

      Replace Master with the hostname or IP address of the master host:

      { echo stats; sleep 1; } | nc Master 2181 | grep Mode
      

      The result includes leader or follower. If the master host is the only ZooKeeper node (and the other hosts rely on its instance), the result includes standalone.

    3. Query the other delegate hosts to identify their role in the ensemble.

      Replace Delegate with the hostname or IP address of a delegate host:

      { echo stats; sleep 1; } | nc Delegate 2181 | grep Mode
      
    4. Detach from the container of the ZooKeeper service.

      exit
      

    If none of the nodes reports that it is the ensemble leader within a few minutes of starting serviced, reboot the ensemble hosts.

  7. Log in to each of the delegate hosts that are not nodes in the ZooKeeper ensemble as root, or as a user with superuser privileges, and then start serviced.

    systemctl start serviced
    
  8. Optional: Monitor the startup.

    journalctl -u serviced -f -o cat
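As a quick sanity check, the SERVICED_ZK value obtained in step 3 can be split into one ensemble host per line. The value below is example data; substitute the line returned by the grep command on your deployment.

```shell
# Split a SERVICED_ZK value (example data) into individual ensemble hosts.
line='SERVICED_ZK=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181'
# Strip the variable name, break on commas, and drop the port numbers.
echo "${line#SERVICED_ZK=}" | tr ',' '\n' | cut -d: -f1
```

The number of lines printed should be 1, 3, or 5, matching the expected ensemble size.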
    

Post-upgrade procedures

Removing unused images

Use this procedure to identify and remove unused Control Center images.

  1. Log in to the master host as root or as a user with superuser privileges.
  2. Identify the images associated with the installed version of serviced.

    serviced version | grep Images
    

    Example result:

    IsvcsImages: [zenoss/serviced-isvcs:v73 zenoss/isvcs-zookeeper:v16]
    
  3. Start Docker, if necessary.

    systemctl is-active docker || systemctl start docker
    
  4. Display the serviced images in the local repository.

    docker images | awk '/REPO|isvcs/'
    

    Example result (edited to fit):

    REPOSITORY                     TAG       IMAGE ID
    zenoss/serviced-isvcs          v73       0bd933b3fb2f
    zenoss/serviced-isvcs          v70       c19f1e317158
    zenoss/isvcs-zookeeper         v16       7ea8c92ca1ad
    zenoss/isvcs-zookeeper         v14       0ff3b3117fb8
    

    The example result shows the current versions and one set of previous versions. Your result may include additional previous versions and will show different image IDs.

  5. Remove unused images.

    Replace Image-ID with the image ID of an image for a previous version.

    docker rmi Image-ID
    

    Repeat this command for each unused image.
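The per-image removals can also be batched with an awk filter. The sketch below applies the filter to the sample output from step 4, where v73 and v16 are the current tags; substitute the tags that serviced reports on your host. On a real host, pipe `docker images` through the same filter and into `xargs --no-run-if-empty docker rmi`.

```shell
# Print the IDs of isvcs images whose tag is not a current version.
# Sample data mirrors the example `docker images` output above.
printf '%s\n' \
  'zenoss/serviced-isvcs  v73 0bd933b3fb2f' \
  'zenoss/serviced-isvcs  v70 c19f1e317158' \
  'zenoss/isvcs-zookeeper v16 7ea8c92ca1ad' \
  'zenoss/isvcs-zookeeper v14 0ff3b3117fb8' |
awk '$2 != "v73" && $2 != "v16" {print $3}'
```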