CA Unified Infrastructure Management Probes

cluster (Clustered Environment Monitoring) Release Notes

Last update September 24, 2018

The cluster probe monitors Microsoft, Red Hat, Red Hat Pacemaker, HP-UX service guard, and Veritas clusters.

The probe enables failover support for the probes in clustered environments. The probe configurations that are saved in a resource group remain the same, even when that resource group moves to another cluster node. The cluster probe sends alarms when a resource group or cluster node changes state.

All cluster nodes within a cluster must have the same set of probes.

Probe version 3.30 and later provides Logical Volume Manager (LVM) support for the HP-UX platform. If you upgrade to cluster version 3.30, cdm LVM disk profiles are created in the cluster probe.


Revision History

This section describes the history of the revisions for this probe.

Note: Support case(s) may not be viewable to all customers.

Version
Description
State
Date
3.50

What’s New:

  • Added support for Red Hat Pacemaker Cluster monitoring.

Note: The Red Hat Pacemaker Cluster behaves differently from other supported clusters. If a resource group's state is Down, the group is not displayed on the probe GUI, although the alarms and QoS are sent by the node that last owned the group (last owner).

GA July 2018
3.43

Fixed Defects:

  • The output of the clustat command truncated the cluster information to an 80-character column width. Salesforce case number: 00639419
  • Partition disks did not display as a part of the cluster. Salesforce case number: 00517081
  • The cluster probe tried to communicate with port 48000 even when the controller probe was using a different port. Salesforce case number: 00502664
  • A communication error occurred when the probe attempted to communicate with the cluster. Salesforce case number: 481762
  • The probe crashed and caused a communication error when deployed on 15 HP-UX machines. Salesforce case number: 00246784
GA December 2017
3.42

What's New:

  • Added support to suppress the Available Storage Group alarms for Microsoft Cluster Shared Volume and Microsoft Cluster Services. Support case number 442501
  • Updated the name of the resource group for Microsoft Cluster Shared Volume. Support case number 245292

Fixed Defect:

  • The probe incorrectly identified the drives that are managed by the storage foundation on Microsoft Cluster Service. Support case number 252181
GA September 2016
3.41

Fixed Defect:

  • After a failover, the probe displayed clustered drives in the cdm probe, with disabled monitoring on the new passive node. Support case number 298709
GA April 2016
3.40

What's New:

  • Added support for Microsoft Cluster Shared Volume monitoring.
Beta March 2016
3.33

Fixed Defects:

  • On the RHEL platform, the cluster disks were displayed as local or network disks on the cdm Infrastructure Manager (IM). Support case number 00160058
    Note: You must use cdm version 5.61 with cluster version 3.33 to view the cluster disks on the cdm Infrastructure Manager (IM).
  • When the node service was stopped, the cluster probe marked resources offline and kept sending the _restart command to other probes. Support case number 70002275
GA December 2015
3.32

Fixed Defects:

  • On the Linux platform, the cluster disks were displayed as local disks on the cdm Infrastructure Manager (IM). Salesforce case 00135028
  • The probe displayed incorrect status of clustered disks on the IM. Salesforce cases 00169389, 00170460
  • Updated the Supported Probes section in Release Notes to describe the cluster configuration that is supported by sqlserver, oracle, and ntperf. Salesforce case 00169432

GA October 2015
3.31

What's New:

  • Updated support for factory templates.

Fixed Defects:

  • Fixed an issue where the probe did not clear the package halted/failed alarms when the failover was unsuccessful in HP-UX clusters and the package started on the primary node.
GA June 2015
3.30

What's New:

  • Added Logical Volume Manager (LVM) support for the HP-UX platform
  • Added the Log Size field
  • Provided an option to select the protocol key as TCP or UDP for raw configuration. By default, the protocol key is TCP. Salesforce case 00160858
Beta May 2015
3.20

What's New:

  • Added support for HP-UX service guard A.11.19.00 cluster
  • Added support for factory templates

Fixed Defects:

  • Fixed a defect where cdm disks were not being activated or deactivated on failover. Salesforce case 00133038
  • Fixed a defect where, on a Red Hat cluster, quorum disks appeared under another node. Salesforce case 00144966
  • Fixed a defect where the probe displayed the error message "The node has no valid cluster probe address and you cannot add any resources until this is resolved". Salesforce cases: 00151733, 00142247, 00137235, 00115400, 00121291
  March 2015
3.13 Added support for configuring the probe through the Admin Console (web-based) GUI.   September 2014
3.12 Fixed memory leak issue.   September 2013
3.11
  • Fixed an issue related to leaks
  • Fixed an issue where the probe was unable to define any monitoring profile
  • Fixed an issue where the probe did not clear alarms after failover to the next node
  June 2013
3.10
  • Added Probe Defaults
  • Added resource information for a resource group in the alarm message
  • Added support for RHEL 5.0 x64 (64-bit)
  December 2012
3.01 Updated cluster information in cdm for RHCS.   September 2011
3.00
  • Added support for Red Hat cluster.
  • Fixed message synchronization for the NTServices probe.
  August 2011
2.72 Fixed group synchronization when not using node IPs in cluster.cfg (applied fix from v2.66 into 2.7x release).   March 2011
2.72 Fixed the group synchronization when not using node IPs in cluster.cfg (applied fix from v2.66 into v2.7x release).   January 2011
2.66 Fixed the group synchronization when not using node IPs in cluster.cfg   January 2011
2.71 Applied fixes from v2.65 into 2.7x release.   January 2011
2.65 Fixed a potential program failure on SOLARIS (logging of NULL pointer terminates probe on SOLARIS).   January 2011
2.70 Added support for internationalization.   December 2010
2.64 Fixed a potential program failure on SOLARIS (no node IP in cfg causing failure).   September 2010
2.63 Added fix in the GUI to use named APIs instead of IP.   September 2010
2.62
  • Changed the haclus -list command to haclus -value ClusterName
  • Fixed the cluster compatibility issue on the IA64 platform
  • Fixed the issue of a wrong IP in NAT environments in the GUI and the probe
  • Fixed the issue of cluster group devices being listed twice in CDM
  • Fixed the issue of cluster drives being reported as local in CDM on non-owner nodes
  • Added a validation while adding shared sections or subsections in the GUI
  • Removed white spaces from the cluster names at the time of discovery
  • Version 2.62 was withdrawn because of potential communication errors when configuring.
  August 2010
2.61 Added support for AIX platform.   June 2010
2.60
  • Added support for extended NIS database information.
  • Added support for Resources Failed and Resources Not Probed in hastatus.
  April 2010
2.52

Fixed the issue of a drive being reported as Disk3Partition1 when the device is down on the cluster

  March 2010
2.51 Fixed the CDM mount point handling issue in the Microsoft cluster plugin dll.   March 2010
2.50
  • Added support for merging configuration when configuration is done across different cluster nodes
  • Added support for configuring shared resources individually and in bulk.
  March 2010
2.30 Added a callback get_cluster_group_all to get the complete status of the cluster   March 2010
2.21
  • Built with new libraries
  • Added QoS messages for state changes on Node and Group state
  • Added levels of alarm severity that are based on Node and Group state
  • Added support for fetching of cluster resources
  • Added support for identifying clustered disks
  • Added an option to remove Resource Groups that are no longer part of the cluster
  • Changed the callback get_nodename
  • Fixed retrieval of evs_name (correct case) for Exchange 2007 in get_EVS
  • Fixed issue regarding "illegal SID" upon cluster probe synchronization
  • Fixed fetching of resource type in calls to get_EVS and get_cluster_resources
  • Fixed callback get_EVS (input argument nodename is no longer case sensitive)
  • Added support for Windows on Itanium 2 systems (IA64)
  • Fixed a synchronization issue in NAT environments. (Add the key use_local_ip = 1 in the /setup section using Raw Configure.)
  July 2009
2.04
  • Fixed problem with change of alarm subsystem IDs
  • Fixed association of the same profile to multiple Service Groups (this is not allowed).
  June 2008
2.03
  • Fixed minor GUI issues
  • Fixed GUI refresh. Fixed logging on Solaris
  • Fixed program failure on Solaris
  • Fixed handling of Service Groups in PARTIAL state for VCS
  • Fixed probe security settings for LINUX and SOLARIS
  • Fixed OS key for Solaris plug-in (wizard failed)
  • Added the port library on Solaris (load plug-in failed)
  • Added support for Veritas Cluster Server (VCS) on Windows, Linux, and Solaris
  April 2008
1.64
  • Fixed synchronization issues
  • Fixed memory leak (IP=NULL)
  September 2007
1.62 Fixed saving of Resource Groups containing slash (/)   June 2007
1.61
  • Share "partially online" resource group setting with exchange_monitor probe
  • Added support for identifying Exchange Virtual Servers
  • Added support for NimBUS exchange_monitor in A/A, A/P, and N+1 node cluster configurations
  June 2007
1.50
  • Fixed several GUI issues
  • Added multiple profile selection
  • Removed "add node", "delete node" and "add resource group" options
  • Added support for edit/disable alarms
  • Fixed several run-time error situations
  • Changed source field on node alarms
  • Added support for shared sections and probe profiles that are found in /cluster_setup section of other probes
  September 2006
1.26
  • Fixed issue with resource groups not having their states set correctly when alarms were turned off
  • Fixed issues relating to synchronization between cluster probes, especially when adding new resources
  • Fixed security issue when synchronizing probe configuration between nodes
  April 2006
1.22
  • Cosmetic GUI changes.
  • Added Refresh to menu.
  • Fixed text for clear alarms
  December 2005

Probe Specific Hardware Requirements

The cluster probe must be installed on systems with the following minimum resources:

  • Memory: 2 GB to 4 GB of RAM
  • CPU: 3-GHz dual-core processor, 32-bit or 64-bit.

Probe Specific Software Requirements

The cluster probe requires the following software environment:

  • CA Unified Infrastructure Management 8.0 or later

    Note: To run the probe version 3.32 on Admin Console, you require:

    • CA UIM 8.31 or later

    • ppm probe version 3.23 or later

  • Robot 7.62 or later (recommended)
  • ci_defn_pack 1.03 or later (required for HP-UX platform on CA UIM 8.1 or earlier)

    Note: Restart the nis_server when you deploy the ci_defn_pack probe.

  • Probe Provisioning Manager (PPM) probe version 2.38 or later (required for Admin Console)
  • Java JRE version 6 or later (required for Admin Console)

Upgrade Considerations

This section lists the upgrade considerations for the cluster probe.

  • Cluster disk profiles in cdm are not supported when you upgrade across the following versions:
    • 3.40 or earlier to 3.41 or later
    • 3.31 or earlier to 3.32 or later
    Configure a cluster disk in the cdm probe and then create the cluster disk profile in the cluster probe.
  • Restart the cdm probe to see the cluster disks.
  • Any cdm disk that is converted from local to cluster and is enabled for monitoring is deactivated.

Supported Veritas Cluster Versions

The cluster probe is certified on Veritas Cluster versions 5.1 and 6.1. 

Supported Probes

The following table lists the supported probes and their corresponding cluster environment:

Important! Use the cluster probe with other probes only when you configure the cluster probe on a robot in a clustered environment.

Probe

Cluster Node Configuration

cdm

The probe supports only disk profile monitoring on cluster version 2.20 and later.

Note: For more information, see the Set up Cluster Monitoring in cdm section. 

  • Active/Passive
  • N+1 node cluster

dirscan

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

exchange_monitor

The probe supports only Microsoft Cluster Server (MSCS) monitoring on cluster version 1.61 and later.

  • Active/Active
  • Active/Passive
  • N+1 node cluster

logmon

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

ntperf

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

ntservices

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

oracle 

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

processes

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

sqlserver

  • 2-node Active/Passive
  • N+1 node clusters if the profile names are unique

Set up Cluster Monitoring in cdm 

The cdm probe receives information about the cluster disk resources from the cluster probe. Monitoring profiles are created for the resources based on the fixed_default settings in the cdm probe. The profile is automatically registered with the cluster probe to ensure continuous monitoring on cluster group failover. In the cdm probe, the cluster IP is used as the Alarm and QoS source instead of the cluster node. You can change the source to the cluster name or group name from the Infrastructure Manager (IM), as in the illustration below.
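For example (all names and addresses in this illustration are hypothetical), for a clustered disk in a resource group named SQLGROUP1 with virtual IP 10.0.0.50, belonging to the cluster SQLCLUSTER01, the source of the alarms and QoS data would be:

    Default (cluster IP):        10.0.0.50
    Source set to cluster name:  SQLCLUSTER01
    Source set to group name:    SQLGROUP1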

Update Subsystem ID

Alarms are classified by their subsystem ID, which identifies the part of the system that the alarm relates to. These subsystem IDs are kept in a table that is maintained by the Alarm Server (nas) probe. If you are working on CA UIM 8.1 or earlier, you must add the subsystem IDs manually using ci_defn_pack version 1.02 or later. If you have upgraded to CA UIM 8.2 or later, you do not have to add the subsystem IDs.

Important! Restart the nis_server after you deploy the ci_defn_pack.

The subsystem IDs that must be added are:

Key Name    Value
1.1.16      Cluster
1.1.16.1    Node
1.1.16.2    Resource Group
1.1.16.3    Package

Note: The 1.1.16.3 subsystem ID is specific to the HP-UX service guard cluster.

Follow these steps:

  1. Open Raw Configure for the nas probe.
  2. Click Subsystems.
  3. Add the new key name and value.
  4. Repeat this process for all the required subsystem IDs.
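As a sketch of the expected result (the exact presentation depends on your nas version), the Subsystems list in the nas Raw Configure view should contain entries equivalent to the following key/value pairs when you finish:

    1.1.16 = Cluster
    1.1.16.1 = Node
    1.1.16.2 = Resource Group
    1.1.16.3 = Package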

Known Issues

The known issues of the probe are:

  • (On version 3.41) Upon failover, the cluster disk profiles in cdm that do not have an associated profile in the cluster probe remain visible in the cdm probe on the new passive node. However, these profiles are not available for configuration or monitoring.