VMWARE-NSX-MIB

In much the same way that server virtualization programmatically
creates, snapshots, deletes, and restores software-based virtual
machines (VMs), NSX Data Center network virtualization
programmatically creates, snapshots, deletes, and restores
software-based virtual networks.
        
With network virtualization, the functional equivalent of a network
hypervisor reproduces the complete set of Layer 2 through Layer 7
networking services (for example, switching, routing, access control,
firewalling, QoS) in software. As a result, these services can be
programmatically assembled in any arbitrary combination, to produce
unique, isolated virtual networks in a matter of seconds.
        
For more information about NSX Data Center, please visit:
https://docs.vmware.com/en/VMware-NSX-T-Data-Center/
    

Imported Objects

MODULE-COMPLIANCE, OBJECT-GROUP, NOTIFICATION-GROUP  from SNMPv2-CONF
MODULE-IDENTITY, OBJECT-IDENTITY, OBJECT-TYPE, NOTIFICATION-TYPE  from SNMPv2-SMI
TEXTUAL-CONVENTION, DateAndTime  from SNMPv2-TC
vmwNSXsys  from VMWARE-ROOT-MIB

Type Definitions (165)

Name  Base Type  Values/Constraints
VmwNsxTDataCenterActiveGlobalManagersType  OctetString  range: 0..256
VmwNsxTDataCenterActiveGlobalManagerType  OctetString  range: 0..256
VmwNsxTDataCenterAPICollectionPathType  OctetString  range: 0..256
VmwNsxTDataCenterApplianceAddressType  OctetString  range: 0..256
VmwNsxTDataCenterApplianceFQDNType  OctetString  range: 0..256
VmwNsxTDataCenterBGPNeighborIPType  OctetString  range: 0..256
VmwNsxTDataCenterBGPSourceIPType  OctetString  range: 0..256
VmwNsxTDataCenterCABundleAgeThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterCapacityDisplayNameType  OctetString  range: 0..256
VmwNsxTDataCenterCapacityUsageCountType  OctetString  range: 0..256
VmwNsxTDataCenterCentralControlPlaneIdType  OctetString  range: 0..256
VmwNsxTDataCenterCollectorIPType  OctetString  range: 0..256
VmwNsxTDataCenterConstraintLimitType  OctetString  range: 0..256
VmwNsxTDataCenterConstraintTypePathType  OctetString  range: 0..256
VmwNsxTDataCenterConstraintTypeType  OctetString  range: 0..256
VmwNsxTDataCenterCoreDumpCountType  OctetString  range: 0..256
VmwNsxTDataCenterCoreIdType  OctetString  range: 0..256
VmwNsxTDataCenterCurrentCountType  OctetString  range: 0..256
VmwNsxTDataCenterCurrentGatewayStateType  OctetString  range: 0..256
VmwNsxTDataCenterCurrentServiceStateType  OctetString  range: 0..256
VmwNsxTDataCenterDatapathResourceUsageType  OctetString  range: 0..256
VmwNsxTDataCenterDHCPPoolUsageType  OctetString  range: 0..256
VmwNsxTDataCenterDHCPServerIdType  OctetString  range: 0..256
VmwNsxTDataCenterDirectoryDomainType  OctetString  range: 0..256
VmwNsxTDataCenterDiskPartitionNameType  OctetString  range: 0..256
VmwNsxTDataCenterDisplayedLicenseKeyType  OctetString  range: 0..256
VmwNsxTDataCenterDNSIdType  OctetString  range: 0..256
VmwNsxTDataCenterDNSUpstreamIPType  OctetString  range: 0..256
VmwNsxTDataCenterDPUIdType  OctetString  range: 0..256
VmwNsxTDataCenterDuplicateIPAddressType  OctetString  range: 0..256
VmwNsxTDataCenterDvsAliasType  OctetString  range: 0..256
VmwNsxTDataCenterDvsNameType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeClusterHighestHwVersionType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeClusterIdType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeClusterNameType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeCryptoDrvNameType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeNICNameType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeNodeAndvSphereSettingsMismatchReasonType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeNodeSettingMismatchReasonType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeNodeType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeServiceNameType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeThreadNameType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeTNHwVersionType  OctetString  range: 0..256
VmwNsxTDataCenterEdgeVMvSphereSettingsMismatchReasonType  OctetString  range: 0..256
VmwNsxTDataCenterEdgevSphereLocationMismatchReasonType  OctetString  range: 0..256
VmwNsxTDataCenterEntityIdType  OctetString  range: 0..256
VmwNsxTDataCenterEventIdType  OctetString  range: 0..256
VmwNsxTDataCenterEventTypeType  OctetString  range: 0..256
VmwNsxTDataCenterFailureReasonType  OctetString  range: 0..256
VmwNsxTDataCenterFeatureIdType  Enumeration  managerHealth(1), edgeHealth(2), certificates(3), passwordManagement(4), licenses(5), intelligenceHealth(6), infrastructureCommunication(7), intelligenceCommunication(9), cniHealth(10), ncpHealth(11), nodeAgentsHealth(12), endpointProtection(13), serviceInsertion(14), vpn(15), alarmManagement(16), loadBalancer(17), transportNodeHealth(18), infrastructureService(19), dhcp(20), highAvailability(21), capacity(22), auditLogHealth(24), routing(28), dns(30), distributedFirewall(31), federation(32), distributedIdsIps(33), communication(35), identityFirewall(36), ipam(38), gatewayFirewall(39), clustering(40), nsxApplicationPlatformCommunication(41), mtuCheck(42), nsxApplicationPlatformHealth(43), edge(45), nat(46), physicalServer(47), malwarePreventionHealth(48), edgeCluster(49), vmcApp(50), tepHealth(51), policyConstraint(53), groups(54), securityCompliance(56), nsxaasHealth(58)
VmwNsxTDataCenterFirewallHalfopenFlowUsageType  OctetString  range: 0..256
VmwNsxTDataCenterFirewallICMPFlowUsageType  OctetString  range: 0..256
VmwNsxTDataCenterFirewallIPFlowUsageType  OctetString  range: 0..256
VmwNsxTDataCenterFirewallSNATPortsUsageType  OctetString  range: 0..256
VmwNsxTDataCenterFirewallUDPFlowUsageType  OctetString  range: 0..256
VmwNsxTDataCenterFlowCacheThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterFlowIdentifierType  OctetString  range: 0..256
VmwNsxTDataCenterFromGMPathType  OctetString  range: 0..256
VmwNsxTDataCenterGroupIdType  OctetString  range: 0..256
VmwNsxTDataCenterGroupMaxNumberLimitType  OctetString  range: 0..256
VmwNsxTDataCenterGroupSizeType  OctetString  range: 0..256
VmwNsxTDataCenterGroupTypeType  OctetString  range: 0..256
VmwNsxTDataCenterHaState2Type  OctetString  range: 0..256
VmwNsxTDataCenterHaStateType  OctetString  range: 0..256
VmwNsxTDataCenterHeapTypeType  OctetString  range: 0..256
VmwNsxTDataCenterHostnameOrIPAddressWithPortType  OctetString  range: 0..256
VmwNsxTDataCenterIDSEventsCountType  OctetString  range: 0..256
VmwNsxTDataCenterIntelligenceNodeIdType  OctetString  range: 0..256
VmwNsxTDataCenterIntentPathType  OctetString  range: 0..256
VmwNsxTDataCenterIPCountType  OctetString  range: 0..256
VmwNsxTDataCenterIPv4AddressType  OctetString  range: 0..256
VmwNsxTDataCenterIPv6AddressType  OctetString  range: 0..256
VmwNsxTDataCenterLatencySourceType  OctetString  range: 0..256
VmwNsxTDataCenterLatencyThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterLatencyValueType  OctetString  range: 0..256
VmwNsxTDataCenterLDAPServerType  OctetString  range: 0..256
VmwNsxTDataCenterLicenseEditionTypeType  OctetString  range: 0..256
VmwNsxTDataCenterLrIdType  OctetString  range: 0..256
VmwNsxTDataCenterLrpCountType  OctetString  range: 0..256
VmwNsxTDataCenterLrportIdType  OctetString  range: 0..256
VmwNsxTDataCenterLspCountType  OctetString  range: 0..256
VmwNsxTDataCenterMacCountType  OctetString  range: 0..256
VmwNsxTDataCenterManagerNodeIDSType  OctetString  range: 0..256
VmwNsxTDataCenterManagerNodeIdType  OctetString  range: 0..256
VmwNsxTDataCenterManagerNodeNameType  OctetString  range: 0..256
VmwNsxTDataCenterMaxCapacityThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterMaxIDSEventsAllowedType  OctetString  range: 0..256
VmwNsxTDataCenterMaxSupportedCapacityCountType  OctetString  range: 0..256
VmwNsxTDataCenterMemberIndexIdType  OctetString  range: 0..256
VmwNsxTDataCenterMempoolNameType  OctetString  range: 0..256
VmwNsxTDataCenterMetricsTargetAddressType  OctetString  range: 0..256
VmwNsxTDataCenterMetricsTargetAliasType  OctetString  range: 0..256
VmwNsxTDataCenterMetricsTargetPortType  OctetString  range: 0..256
VmwNsxTDataCenterMinCapacityThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterMpsServiceNameType  OctetString  range: 0..256
VmwNsxTDataCenterNappClusterIdType  OctetString  range: 0..256
VmwNsxTDataCenterNappMessagingLAGThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterNappNodeIdType  OctetString  range: 0..256
VmwNsxTDataCenterNappNodeNameType  OctetString  range: 0..256
VmwNsxTDataCenterNappServiceNameType  OctetString  range: 0..256
VmwNsxTDataCenterNewVMMorefIdType  OctetString  range: 0..256
VmwNsxTDataCenterNICThroughputThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterNICThroughputType  OctetString  range: 0..256
VmwNsxTDataCenterNodeDisplayOrHostNameType  OctetString  range: 0..256
VmwNsxTDataCenterNodeIdType  OctetString  range: 0..256
VmwNsxTDataCenterNodeTypeType  Enumeration  manager(0), edge(1), esx(2), kvm(3), publiccloudgateway(4), intelligence(5), autonomousedge(6), globalmanager(7)
VmwNsxTDataCenterNSXaaSServiceNameType  OctetString  range: 0..256
VmwNsxTDataCenterNSXEdgeTNNameType  OctetString  range: 0..256
VmwNsxTDataCenterNSXESXTNNameType  OctetString  range: 0..256
VmwNsxTDataCenterPasswordExpirationDaysType  OctetString  range: 0..256
VmwNsxTDataCenterPeerAddressType  OctetString  range: 0..256
VmwNsxTDataCenterPolicyEdgeVMNameType  OctetString  range: 0..256
VmwNsxTDataCenterPrefixesCountMaxType  OctetString  range: 0..256
VmwNsxTDataCenterPrefixesCountThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterPreviousGatewayStateType  OctetString  range: 0..256
VmwNsxTDataCenterPreviousServiceStateType  OctetString  range: 0..256
VmwNsxTDataCenterProtocolNameType  OctetString  range: 0..256
VmwNsxTDataCenterQueueNameType  OctetString  range: 0..256
VmwNsxTDataCenterQueueSizeThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterQueueSizeType  OctetString  range: 0..256
VmwNsxTDataCenterRemoteApplianceAddressType  OctetString  range: 0..256
VmwNsxTDataCenterRemoteManagerNodeIdType  OctetString  range: 0..256
VmwNsxTDataCenterRemoteSiteIdType  OctetString  range: 0..256
VmwNsxTDataCenterRemoteSiteNameType  OctetString  range: 0..256
VmwNsxTDataCenterRouteLimitMaximumType  OctetString  range: 0..256
VmwNsxTDataCenterRouteLimitThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterRxMissesType  OctetString  range: 0..256
VmwNsxTDataCenterRxProcessedType  OctetString  range: 0..256
VmwNsxTDataCenterRxRingBufferOverflowPercentageType  OctetString  range: 0..256
VmwNsxTDataCenterServiceDownReasonType  OctetString  range: 0..256
VmwNsxTDataCenterServiceIPType  OctetString  range: 0..256
VmwNsxTDataCenterServiceNameType  OctetString  range: 0..256
VmwNsxTDataCenterServiceRouterIdType  OctetString  range: 0..256
VmwNsxTDataCenterSessionDownReasonType  OctetString  range: 0..256
VmwNsxTDataCenterSeverityType  Enumeration  emergency(0), alert(1), critical(2), error(3), warning(4), notice(5), informational(6), debug(7)
VmwNsxTDataCenterSidCountType  OctetString  range: 0..256
VmwNsxTDataCenterSiteIdType  OctetString  range: 0..256
VmwNsxTDataCenterSiteNameType  OctetString  range: 0..256
VmwNsxTDataCenterSNATIPAddressType  OctetString  range: 0..256
VmwNsxTDataCenterSrIdType  OctetString  range: 0..256
VmwNsxTDataCenterStackAliasType  OctetString  range: 0..256
VmwNsxTDataCenterStaticAddressType  OctetString  range: 0..256
VmwNsxTDataCenterSubsequentAddressFamilyType  OctetString  range: 0..256
VmwNsxTDataCenterSyncIssueReasonType  OctetString  range: 0..256
VmwNsxTDataCenterSystemResourceUsageType  OctetString  range: 0..256
VmwNsxTDataCenterSystemUsageThresholdType  OctetString  range: 0..256
VmwNsxTDataCenterTimeoutInMinutesType  OctetString  range: 0..256
VmwNsxTDataCenterTNCountType  OctetString  range: 0..256
VmwNsxTDataCenterToGMPathType  OctetString  range: 0..256
VmwNsxTDataCenterTransportNodeAddressType  OctetString  range: 0..256
VmwNsxTDataCenterTransportNodeId2Type  OctetString  range: 0..256
VmwNsxTDataCenterTransportNodeIdType  OctetString  range: 0..256
VmwNsxTDataCenterTransportNodeNameType  OctetString  range: 0..256
VmwNsxTDataCenterTunnelDownReasonType  OctetString  range: 0..256
VmwNsxTDataCenterTxMissesType  OctetString  range: 0..256
VmwNsxTDataCenterTxProcessedType  OctetString  range: 0..256
VmwNsxTDataCenterTxRingBufferOverflowPercentageType  OctetString  range: 0..256
VmwNsxTDataCenterUsernameType  OctetString  range: 0..256
VmwNsxTDataCentervCenterClusterIdType  OctetString  range: 0..256
VmwNsxTDataCenterVerticalNameType  OctetString  range: 0..256
VmwNsxTDataCenterVifCountType  OctetString  range: 0..256
VmwNsxTDataCenterVMCountType  OctetString  range: 0..256
VmwNsxTDataCenterVMMorefIdType  OctetString  range: 0..256
VmwNsxTDataCenterVtepFaultReasonType  OctetString  range: 0..256
VmwNsxTDataCenterVtepNameType  OctetString  range: 0..256
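
Each of these textual conventions is carried as a trap varbind. As a quick
check, the full SMI definition behind any of them can be printed with
Net-SNMP's snmptranslate; a minimal sketch, assuming the VMWARE-NSX-MIB file
is installed under ~/.snmp/mibs:

  # Print the complete definition of one varbind object, including the
  # OCTET STRING (SIZE (0..256)) syntax behind its textual convention.
  snmptranslate -M +$HOME/.snmp/mibs -m VMWARE-NSX-MIB -Td \
      VMWARE-NSX-MIB::vmwNsxTDataCenterApplianceAddress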

Objects

vmwNSXsysMIB .1.3.6.1.4.1.6876.120.1
vmwNsxTDataCenterNotifications .1.3.6.1.4.1.6876.120.1.0
vmwNsxTManagerHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.1
vmwNsxTManagerHealthFeature .1.3.6.1.4.1.6876.120.1.0.1.0
vmwNsxTCniHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.10
vmwNsxTCniHealthFeature .1.3.6.1.4.1.6876.120.1.0.10.0
vmwNsxTNCPHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.11
vmwNsxTNCPHealthFeature .1.3.6.1.4.1.6876.120.1.0.11.0
vmwNsxTNodeAgentsHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.12
vmwNsxTNodeAgentsHealthFeature .1.3.6.1.4.1.6876.120.1.0.12.0
vmwNsxTEndpointProtectionFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.13
vmwNsxTEndpointProtectionFeature .1.3.6.1.4.1.6876.120.1.0.13.0
vmwNsxTServiceInsertionFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.14
vmwNsxTServiceInsertionFeature .1.3.6.1.4.1.6876.120.1.0.14.0
vmwNsxTVPNFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.15
vmwNsxTVPNFeature .1.3.6.1.4.1.6876.120.1.0.15.0
vmwNsxTAlarmManagementFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.16
vmwNsxTAlarmManagementFeature .1.3.6.1.4.1.6876.120.1.0.16.0
vmwNsxTLoadBalancerFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.17
vmwNsxTLoadBalancerFeature .1.3.6.1.4.1.6876.120.1.0.17.0
vmwNsxTTransportNodeHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.18
vmwNsxTTransportNodeHealthFeature .1.3.6.1.4.1.6876.120.1.0.18.0
vmwNsxTInfrastructureServiceFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.19
vmwNsxTInfrastructureServiceFeature .1.3.6.1.4.1.6876.120.1.0.19.0
vmwNsxTEdgeHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.2
vmwNsxTEdgeHealthFeature .1.3.6.1.4.1.6876.120.1.0.2.0
vmwNsxTDHCPFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.20
vmwNsxTDHCPFeature .1.3.6.1.4.1.6876.120.1.0.20.0
vmwNsxTHighAvailabilityFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.21
vmwNsxTHighAvailabilityFeature .1.3.6.1.4.1.6876.120.1.0.21.0
vmwNsxTCapacityFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.22
vmwNsxTCapacityFeature .1.3.6.1.4.1.6876.120.1.0.22.0
vmwNsxTAuditLogHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.24
vmwNsxTAuditLogHealthFeature .1.3.6.1.4.1.6876.120.1.0.24.0
vmwNsxTRoutingFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.28
vmwNsxTRoutingFeature .1.3.6.1.4.1.6876.120.1.0.28.0
vmwNsxTCertificatesFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.3
vmwNsxTCertificatesFeature .1.3.6.1.4.1.6876.120.1.0.3.0
vmwNsxTDNSFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.30
vmwNsxTDNSFeature .1.3.6.1.4.1.6876.120.1.0.30.0
vmwNsxTDistributedFirewallFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.31
vmwNsxTDistributedFirewallFeature .1.3.6.1.4.1.6876.120.1.0.31.0
vmwNsxTFederationFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.32
vmwNsxTFederationFeature .1.3.6.1.4.1.6876.120.1.0.32.0
vmwNsxTDistributedIDSIPSFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.33
vmwNsxTDistributedIDSIPSFeature .1.3.6.1.4.1.6876.120.1.0.33.0
vmwNsxTCommunicationFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.35
vmwNsxTCommunicationFeature .1.3.6.1.4.1.6876.120.1.0.35.0
vmwNsxTIdentityFirewallFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.36
vmwNsxTIdentityFirewallFeature .1.3.6.1.4.1.6876.120.1.0.36.0
vmwNsxTIPAMFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.38
vmwNsxTIPAMFeature .1.3.6.1.4.1.6876.120.1.0.38.0
vmwNsxTGatewayFirewallFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.39
vmwNsxTGatewayFirewallFeature .1.3.6.1.4.1.6876.120.1.0.39.0
vmwNsxTPasswordManagementFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.4
vmwNsxTPasswordManagementFeature .1.3.6.1.4.1.6876.120.1.0.4.0
vmwNsxTClusteringFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.40
vmwNsxTClusteringFeature .1.3.6.1.4.1.6876.120.1.0.40.0
vmwNsxTNSXApplicationPlatformCommunicationFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.41
vmwNsxTNSXApplicationPlatformCommunicationFeature .1.3.6.1.4.1.6876.120.1.0.41.0
vmwNsxTMTUCheckFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.42
vmwNsxTMTUCheckFeature .1.3.6.1.4.1.6876.120.1.0.42.0
vmwNsxTNSXApplicationPlatformHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.43
vmwNsxTNSXApplicationPlatformHealthFeature .1.3.6.1.4.1.6876.120.1.0.43.0
vmwNsxTEdgeFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.45
vmwNsxTEdgeFeature .1.3.6.1.4.1.6876.120.1.0.45.0
vmwNsxTNATFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.46
vmwNsxTNATFeature .1.3.6.1.4.1.6876.120.1.0.46.0
vmwNsxTPhysicalServerFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.47
vmwNsxTPhysicalServerFeature .1.3.6.1.4.1.6876.120.1.0.47.0
vmwNsxTMalwarePreventionHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.48
vmwNsxTMalwarePreventionHealthFeature .1.3.6.1.4.1.6876.120.1.0.48.0
vmwNsxTEdgeClusterFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.49
vmwNsxTEdgeClusterFeature .1.3.6.1.4.1.6876.120.1.0.49.0
vmwNsxTLicensesFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.5
vmwNsxTLicensesFeature .1.3.6.1.4.1.6876.120.1.0.5.0
vmwNsxTVMCAppFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.50
vmwNsxTVMCAppFeature .1.3.6.1.4.1.6876.120.1.0.50.0
vmwNsxTTEPHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.51
vmwNsxTTEPHealthFeature .1.3.6.1.4.1.6876.120.1.0.51.0
vmwNsxTPolicyConstraintFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.53
vmwNsxTPolicyConstraintFeature .1.3.6.1.4.1.6876.120.1.0.53.0
vmwNsxTGroupsFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.54
vmwNsxTGroupsFeature .1.3.6.1.4.1.6876.120.1.0.54.0
vmwNsxTSecurityComplianceFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.56
vmwNsxTSecurityComplianceFeature .1.3.6.1.4.1.6876.120.1.0.56.0
vmwNsxTIntelligenceHealthFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.6
vmwNsxTIntelligenceHealthFeature .1.3.6.1.4.1.6876.120.1.0.6.0
vmwNsxTInfrastructureCommunicationFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.7
vmwNsxTInfrastructureCommunicationFeature .1.3.6.1.4.1.6876.120.1.0.7.0
vmwNsxTIntelligenceCommunicationFeaturePrefix .1.3.6.1.4.1.6876.120.1.0.9
vmwNsxTIntelligenceCommunicationFeature .1.3.6.1.4.1.6876.120.1.0.9.0
vmwNsxTDataCenterData .1.3.6.1.4.1.6876.120.1.1
vmwNsxTDataCenterTimestamp .1.3.6.1.4.1.6876.120.1.1.1
vmwNsxTDataCenterDNSId .1.3.6.1.4.1.6876.120.1.1.100
vmwNsxTDataCenterDNSUpstreamIP .1.3.6.1.4.1.6876.120.1.1.101
vmwNsxTDataCenterCABundleAgeThreshold .1.3.6.1.4.1.6876.120.1.1.121
vmwNsxTDataCenterAPICollectionPath .1.3.6.1.4.1.6876.120.1.1.122
vmwNsxTDataCenterEdgeNodeSettingMismatchReason .1.3.6.1.4.1.6876.120.1.1.123
vmwNsxTDataCenterEdgeVMvSphereSettingsMismatchReason .1.3.6.1.4.1.6876.120.1.1.124
vmwNsxTDataCenterFirewallSNATPortsUsage .1.3.6.1.4.1.6876.120.1.1.125
vmwNsxTDataCenterEdgevSphereLocationMismatchReason .1.3.6.1.4.1.6876.120.1.1.126
vmwNsxTDataCenterEdgeNodeAndvSphereSettingsMismatchReason .1.3.6.1.4.1.6876.120.1.1.127
vmwNsxTDataCenterSNATIPAddress .1.3.6.1.4.1.6876.120.1.1.128
vmwNsxTDataCenterNappClusterId .1.3.6.1.4.1.6876.120.1.1.129
vmwNsxTDataCenterNappMessagingLAGThreshold .1.3.6.1.4.1.6876.120.1.1.130
vmwNsxTDataCenterNappNodeId .1.3.6.1.4.1.6876.120.1.1.131
vmwNsxTDataCenterNappServiceName .1.3.6.1.4.1.6876.120.1.1.132
vmwNsxTDataCenterFlowIdentifier .1.3.6.1.4.1.6876.120.1.1.133
vmwNsxTDataCenterSyncIssueReason .1.3.6.1.4.1.6876.120.1.1.134
vmwNsxTDataCenterQueueName .1.3.6.1.4.1.6876.120.1.1.135
vmwNsxTDataCenterQueueSize .1.3.6.1.4.1.6876.120.1.1.136
vmwNsxTDataCenterQueueSizeThreshold .1.3.6.1.4.1.6876.120.1.1.137
vmwNsxTDataCenterGroupType .1.3.6.1.4.1.6876.120.1.1.138
vmwNsxTDataCenterManagerNodeIDS .1.3.6.1.4.1.6876.120.1.1.139
vmwNsxTDataCenterServiceRouterId .1.3.6.1.4.1.6876.120.1.1.140
vmwNsxTDataCenterTransportNodeId .1.3.6.1.4.1.6876.120.1.1.141
vmwNsxTDataCenterFromGMPath .1.3.6.1.4.1.6876.120.1.1.142
vmwNsxTDataCenterToGMPath .1.3.6.1.4.1.6876.120.1.1.143
vmwNsxTDataCenterNICThroughput .1.3.6.1.4.1.6876.120.1.1.144
vmwNsxTDataCenterNICThroughputThreshold .1.3.6.1.4.1.6876.120.1.1.145
vmwNsxTDataCenterEdgeCryptoDrvName .1.3.6.1.4.1.6876.120.1.1.146
vmwNsxTDataCenterNappNodeName .1.3.6.1.4.1.6876.120.1.1.147
vmwNsxTDataCenterNewVMMorefId .1.3.6.1.4.1.6876.120.1.1.148
vmwNsxTDataCenterPolicyEdgeVMName .1.3.6.1.4.1.6876.120.1.1.149
vmwNsxTDataCenterVMMorefId .1.3.6.1.4.1.6876.120.1.1.150
vmwNsxTDataCenterDPUId .1.3.6.1.4.1.6876.120.1.1.171
vmwNsxTDataCenterEdgeNode .1.3.6.1.4.1.6876.120.1.1.172
vmwNsxTDataCenterPrefixesCountMax .1.3.6.1.4.1.6876.120.1.1.173
vmwNsxTDataCenterPrefixesCountThreshold .1.3.6.1.4.1.6876.120.1.1.174
vmwNsxTDataCenterRouteLimitMaximum .1.3.6.1.4.1.6876.120.1.1.175
vmwNsxTDataCenterRouteLimitThreshold .1.3.6.1.4.1.6876.120.1.1.176
vmwNsxTDataCenterSubsequentAddressFamily .1.3.6.1.4.1.6876.120.1.1.177
vmwNsxTDataCentervCenterClusterId .1.3.6.1.4.1.6876.120.1.1.179
vmwNsxTDataCenterMpsServiceName .1.3.6.1.4.1.6876.120.1.1.180
vmwNsxTDataCenterMemberIndexId .1.3.6.1.4.1.6876.120.1.1.182
vmwNsxTDataCenterEdgeClusterId .1.3.6.1.4.1.6876.120.1.1.184
vmwNsxTDataCenterCoreDumpCount .1.3.6.1.4.1.6876.120.1.1.185
vmwNsxTDataCenterNodeDisplayOrHostName .1.3.6.1.4.1.6876.120.1.1.186
vmwNsxTDataCenterNSXEdgeTNName .1.3.6.1.4.1.6876.120.1.1.187
vmwNsxTDataCenterNSXESXTNName .1.3.6.1.4.1.6876.120.1.1.188
vmwNsxTDataCenterCollectorIP .1.3.6.1.4.1.6876.120.1.1.189
vmwNsxTDataCenterDvsAlias .1.3.6.1.4.1.6876.120.1.1.190
vmwNsxTDataCenterStackAlias .1.3.6.1.4.1.6876.120.1.1.191
vmwNsxTDataCenterVerticalName .1.3.6.1.4.1.6876.120.1.1.192
vmwNsxTDataCenterEdgeClusterHighestHwVersion .1.3.6.1.4.1.6876.120.1.1.195
vmwNsxTDataCenterEdgeTNHwVersion .1.3.6.1.4.1.6876.120.1.1.196
vmwNsxTDataCenterEdgeClusterName .1.3.6.1.4.1.6876.120.1.1.197
vmwNsxTDataCenterFeatureName .1.3.6.1.4.1.6876.120.1.1.2
vmwNsxTDataCenterEntityId .1.3.6.1.4.1.6876.120.1.1.21
vmwNsxTDataCenterSystemResourceUsage .1.3.6.1.4.1.6876.120.1.1.22
vmwNsxTDataCenterHaState .1.3.6.1.4.1.6876.120.1.1.220
vmwNsxTDataCenterHaState2 .1.3.6.1.4.1.6876.120.1.1.221
vmwNsxTDataCenterTransportNodeId2 .1.3.6.1.4.1.6876.120.1.1.222
vmwNsxTDataCenterDvsName .1.3.6.1.4.1.6876.120.1.1.223
vmwNsxTDataCenterVtepFaultReason .1.3.6.1.4.1.6876.120.1.1.224
vmwNsxTDataCenterVtepName .1.3.6.1.4.1.6876.120.1.1.225
vmwNsxTDataCenterMetricsTargetAddress .1.3.6.1.4.1.6876.120.1.1.226
vmwNsxTDataCenterMetricsTargetAlias .1.3.6.1.4.1.6876.120.1.1.227
vmwNsxTDataCenterMetricsTargetPort .1.3.6.1.4.1.6876.120.1.1.228
vmwNsxTDataCenterIPv4Address .1.3.6.1.4.1.6876.120.1.1.229
vmwNsxTDataCenterDiskPartitionName .1.3.6.1.4.1.6876.120.1.1.23
vmwNsxTDataCenterIPv6Address .1.3.6.1.4.1.6876.120.1.1.230
vmwNsxTDataCenterConstraintLimit .1.3.6.1.4.1.6876.120.1.1.231
vmwNsxTDataCenterConstraintType .1.3.6.1.4.1.6876.120.1.1.232
vmwNsxTDataCenterCurrentCount .1.3.6.1.4.1.6876.120.1.1.233
vmwNsxTDataCenterGroupId .1.3.6.1.4.1.6876.120.1.1.234
vmwNsxTDataCenterGroupMaxNumberLimit .1.3.6.1.4.1.6876.120.1.1.235
vmwNsxTDataCenterGroupSize .1.3.6.1.4.1.6876.120.1.1.236
vmwNsxTDataCenterIPCount .1.3.6.1.4.1.6876.120.1.1.237
vmwNsxTDataCenterLrpCount .1.3.6.1.4.1.6876.120.1.1.238
vmwNsxTDataCenterLspCount .1.3.6.1.4.1.6876.120.1.1.239
vmwNsxTDataCenterLicenseEditionType .1.3.6.1.4.1.6876.120.1.1.24
vmwNsxTDataCenterMacCount .1.3.6.1.4.1.6876.120.1.1.240
vmwNsxTDataCenterSidCount .1.3.6.1.4.1.6876.120.1.1.241
vmwNsxTDataCenterTNCount .1.3.6.1.4.1.6876.120.1.1.244
vmwNsxTDataCenterVifCount .1.3.6.1.4.1.6876.120.1.1.245
vmwNsxTDataCenterVMCount .1.3.6.1.4.1.6876.120.1.1.246
vmwNsxTDataCenterCoreId .1.3.6.1.4.1.6876.120.1.1.247
vmwNsxTDataCenterFlowCacheThreshold .1.3.6.1.4.1.6876.120.1.1.248
vmwNsxTDataCenterNSXaaSServiceName .1.3.6.1.4.1.6876.120.1.1.249
vmwNsxTDataCenterApplianceAddress .1.3.6.1.4.1.6876.120.1.1.25
vmwNsxTDataCenterProtocolName .1.3.6.1.4.1.6876.120.1.1.250
vmwNsxTDataCenterConstraintTypePath .1.3.6.1.4.1.6876.120.1.1.251
vmwNsxTDataCenterCurrentGatewayState .1.3.6.1.4.1.6876.120.1.1.26
vmwNsxTDataCenterCurrentServiceState .1.3.6.1.4.1.6876.120.1.1.27
vmwNsxTDataCenterDatapathResourceUsage .1.3.6.1.4.1.6876.120.1.1.28
vmwNsxTDataCenterDHCPPoolUsage .1.3.6.1.4.1.6876.120.1.1.29
vmwNsxTDataCenterEventType .1.3.6.1.4.1.6876.120.1.1.3
vmwNsxTDataCenterEdgeServiceName .1.3.6.1.4.1.6876.120.1.1.30
vmwNsxTDataCenterFailureReason .1.3.6.1.4.1.6876.120.1.1.31
vmwNsxTDataCenterPreviousGatewayState .1.3.6.1.4.1.6876.120.1.1.32
vmwNsxTDataCenterPreviousServiceState .1.3.6.1.4.1.6876.120.1.1.33
vmwNsxTDataCenterSystemUsageThreshold .1.3.6.1.4.1.6876.120.1.1.34
vmwNsxTDataCenterUsername .1.3.6.1.4.1.6876.120.1.1.35
vmwNsxTDataCenterDHCPServerId .1.3.6.1.4.1.6876.120.1.1.36
vmwNsxTDataCenterServiceName .1.3.6.1.4.1.6876.120.1.1.37
vmwNsxTDataCenterIntelligenceNodeId .1.3.6.1.4.1.6876.120.1.1.38
vmwNsxTDataCenterHostnameOrIPAddressWithPort .1.3.6.1.4.1.6876.120.1.1.39
vmwNsxTDataCenterEventSeverity .1.3.6.1.4.1.6876.120.1.1.4
vmwNsxTDataCenterEventId .1.3.6.1.4.1.6876.120.1.1.40
vmwNsxTDataCenterActiveGlobalManager .1.3.6.1.4.1.6876.120.1.1.41
vmwNsxTDataCenterActiveGlobalManagers .1.3.6.1.4.1.6876.120.1.1.42
vmwNsxTDataCenterSessionDownReason .1.3.6.1.4.1.6876.120.1.1.43
vmwNsxTDataCenterManagerNodeName .1.3.6.1.4.1.6876.120.1.1.44
vmwNsxTDataCenterTransportNodeAddress .1.3.6.1.4.1.6876.120.1.1.45
vmwNsxTDataCenterTransportNodeName .1.3.6.1.4.1.6876.120.1.1.46
vmwNsxTDataCenterCentralControlPlaneId .1.3.6.1.4.1.6876.120.1.1.47
vmwNsxTDataCenterTunnelDownReason .1.3.6.1.4.1.6876.120.1.1.48
vmwNsxTDataCenterHeapType .1.3.6.1.4.1.6876.120.1.1.49
vmwNsxTDataCenterNodeId .1.3.6.1.4.1.6876.120.1.1.5
vmwNsxTDataCenterMempoolName .1.3.6.1.4.1.6876.120.1.1.50
vmwNsxTDataCenterPasswordExpirationDays .1.3.6.1.4.1.6876.120.1.1.51
vmwNsxTDataCenterBGPNeighborIP .1.3.6.1.4.1.6876.120.1.1.52
vmwNsxTDataCenterLDAPServer .1.3.6.1.4.1.6876.120.1.1.53
vmwNsxTDataCenterPeerAddress .1.3.6.1.4.1.6876.120.1.1.54
vmwNsxTDataCenterMaxIDSEventsAllowed .1.3.6.1.4.1.6876.120.1.1.55
vmwNsxTDataCenterStaticAddress .1.3.6.1.4.1.6876.120.1.1.56
vmwNsxTDataCenterDuplicateIPAddress .1.3.6.1.4.1.6876.120.1.1.57
vmwNsxTDataCenterCapacityDisplayName .1.3.6.1.4.1.6876.120.1.1.58
vmwNsxTDataCenterCapacityUsageCount .1.3.6.1.4.1.6876.120.1.1.59
vmwNsxTDataCenterNodeType .1.3.6.1.4.1.6876.120.1.1.6
vmwNsxTDataCenterEdgeNICName .1.3.6.1.4.1.6876.120.1.1.60
vmwNsxTDataCenterRxRingBufferOverflowPercentage .1.3.6.1.4.1.6876.120.1.1.61
vmwNsxTDataCenterTxRingBufferOverflowPercentage .1.3.6.1.4.1.6876.120.1.1.62
vmwNsxTDataCenterSrId .1.3.6.1.4.1.6876.120.1.1.63
vmwNsxTDataCenterIDSEventsCount .1.3.6.1.4.1.6876.120.1.1.64
vmwNsxTDataCenterRemoteSiteName .1.3.6.1.4.1.6876.120.1.1.65
vmwNsxTDataCenterBGPSourceIP .1.3.6.1.4.1.6876.120.1.1.66
vmwNsxTDataCenterRemoteSiteId .1.3.6.1.4.1.6876.120.1.1.67
vmwNsxTDataCenterSiteId .1.3.6.1.4.1.6876.120.1.1.68
vmwNsxTDataCenterSiteName .1.3.6.1.4.1.6876.120.1.1.69
vmwNsxTDataCenterLrId .1.3.6.1.4.1.6876.120.1.1.70
vmwNsxTDataCenterRxMisses .1.3.6.1.4.1.6876.120.1.1.71
vmwNsxTDataCenterRxProcessed .1.3.6.1.4.1.6876.120.1.1.72
vmwNsxTDataCenterTxMisses .1.3.6.1.4.1.6876.120.1.1.73
vmwNsxTDataCenterTxProcessed .1.3.6.1.4.1.6876.120.1.1.74
vmwNsxTDataCenterLrportId .1.3.6.1.4.1.6876.120.1.1.75
vmwNsxTDataCenterServiceIP .1.3.6.1.4.1.6876.120.1.1.77
vmwNsxTDataCenterRemoteManagerNodeId .1.3.6.1.4.1.6876.120.1.1.80
vmwNsxTDataCenterDirectoryDomain .1.3.6.1.4.1.6876.120.1.1.81
vmwNsxTDataCenterTimeoutInMinutes .1.3.6.1.4.1.6876.120.1.1.82
vmwNsxTDataCenterMaxCapacityThreshold .1.3.6.1.4.1.6876.120.1.1.83
vmwNsxTDataCenterMinCapacityThreshold .1.3.6.1.4.1.6876.120.1.1.84
vmwNsxTDataCenterMaxSupportedCapacityCount .1.3.6.1.4.1.6876.120.1.1.85
vmwNsxTDataCenterLatencySource .1.3.6.1.4.1.6876.120.1.1.86
vmwNsxTDataCenterLatencyThreshold .1.3.6.1.4.1.6876.120.1.1.87
vmwNsxTDataCenterLatencyValue .1.3.6.1.4.1.6876.120.1.1.88
vmwNsxTDataCenterApplianceFQDN .1.3.6.1.4.1.6876.120.1.1.89
vmwNsxTDataCenterRemoteApplianceAddress .1.3.6.1.4.1.6876.120.1.1.90
vmwNsxTDataCenterManagerNodeId .1.3.6.1.4.1.6876.120.1.1.91
vmwNsxTDataCenterDisplayedLicenseKey .1.3.6.1.4.1.6876.120.1.1.92
vmwNsxTDataCenterEdgeThreadName .1.3.6.1.4.1.6876.120.1.1.93
vmwNsxTDataCenterIntentPath .1.3.6.1.4.1.6876.120.1.1.94
vmwNsxTDataCenterFirewallHalfopenFlowUsage .1.3.6.1.4.1.6876.120.1.1.95
vmwNsxTDataCenterFirewallICMPFlowUsage .1.3.6.1.4.1.6876.120.1.1.96
vmwNsxTDataCenterServiceDownReason .1.3.6.1.4.1.6876.120.1.1.97
vmwNsxTDataCenterFirewallUDPFlowUsage .1.3.6.1.4.1.6876.120.1.1.98
vmwNsxTDataCenterFirewallIPFlowUsage .1.3.6.1.4.1.6876.120.1.1.99
vmwNsxTDataCenterConformance .1.3.6.1.4.1.6876.120.1.2
vmwNsxTDataCenterCompliances .1.3.6.1.4.1.6876.120.1.2.1
vmwNsxTDataCenterSMIBGroups .1.3.6.1.4.1.6876.120.1.2.2
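
The names and OIDs above can be mapped in either direction with snmptranslate;
a minimal sketch, assuming the MIB is on the default search path:

  # Name to numeric OID:
  snmptranslate -m VMWARE-NSX-MIB -On VMWARE-NSX-MIB::vmwNsxTDataCenterTimestamp
  # .1.3.6.1.4.1.6876.120.1.1.1

  # Numeric OID back to a name, e.g. when decoding a received trap:
  snmptranslate -m VMWARE-NSX-MIB .1.3.6.1.4.1.6876.120.1.0.1.0.1
  # VMWARE-NSX-MIB::vmwNsxTManagerHealthManagerCPUUsageHigh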

Notifications/Traps

Name  OID  Description

vmwNsxTManagerHealthManagerCPUUsageHigh .1.3.6.1.4.1.6876.120.1.0.1.0.1
The CPU usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Manager node.
Consider adjusting the Manager appliance form factor size.
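
Any standard trap receiver can log these notifications. A minimal Net-SNMP
sketch (the community string is a placeholder, not an NSX requirement; match
whatever is configured on the NSX trap target):

  # /etc/snmp/snmptrapd.conf
  authCommunity log public

  # Run in the foreground, log to stdout, and load this MIB so varbinds
  # such as vmwNsxTDataCenterEntityId print by name rather than raw OID:
  #   snmptrapd -f -Lo -m VMWARE-NSX-MIB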

vmwNsxTManagerHealthManagerDiskUsageHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.10
The disk usage for the Manager node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerDiskUsageVeryHigh .1.3.6.1.4.1.6876.120.1.0.1.0.11
The disk usage for the Manager node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is at or above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Examine the partition with high usage and see if there are any
unexpected large files that can be removed.

vmwNsxTManagerHealthManagerDiskUsageVeryHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.12
The disk usage for the Manager node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerConfigDiskUsageHigh .1.3.6.1.4.1.6876.120.1.0.1.0.13
The disk usage for the Manager node disk partition /config has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%. This can be an indication of rising disk
usage by the NSX Datastore service under the /config/corfu directory.
          
Action required:
Run the following tool and contact GSS if any issues are reported:
/opt/vmware/tools/support/inspect_checkpoint_issues.py

vmwNsxTManagerHealthManagerConfigDiskUsageHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.14
The disk usage for the Manager node disk partition /config has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerConfigDiskUsageVeryHigh .1.3.6.1.4.1.6876.120.1.0.1.0.15
The disk usage for the Manager node disk partition /config has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very high threshold value
of vmwNsxTDataCenterSystemUsageThreshold%. This can be an indication of high disk usage
by the NSX Datastore service under the /config/corfu directory.
          
Action required:
Run the following tool and contact GSS if any issues are reported:
/opt/vmware/tools/support/inspect_checkpoint_issues.py

vmwNsxTManagerHealthManagerConfigDiskUsageVeryHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.16
The disk usage for the Manager node disk partition /config has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthDuplicateIPAddress .1.3.6.1.4.1.6876.120.1.0.1.0.17
Manager node vmwNsxTDataCenterEntityId IP address vmwNsxTDataCenterDuplicateIPAddress is
currently being used by another device in the network.
          
Action required:
1. Determine which device is using the Manager's IP address
and assign the device a new IP address. Note, reconfiguring
the Manager to use a new IP address is not supported.
2. Ensure the static IP address pool/DHCP server is configured correctly.
3. Correct the IP address of the device if it is manually assigned.

vmwNsxTManagerHealthDuplicateIPAddressClear .1.3.6.1.4.1.6876.120.1.0.1.0.18
The device using the IP address assigned to Manager node
vmwNsxTDataCenterEntityId appears to no longer be using vmwNsxTDataCenterDuplicateIPAddress.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthOperationsDbDiskUsageHigh .1.3.6.1.4.1.6876.120.1.0.1.0.19
The disk usage for the Manager node disk partition /nonconfig has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%. This can be an indication of rising disk
usage by the NSX Datastore service under the /nonconfig/corfu directory.
          
Action required:
Run the following tool and contact GSS if any issues are reported:
/opt/vmware/tools/support/inspect_checkpoint_issues.py --nonconfig

vmwNsxTManagerHealthManagerCPUUsageHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.2
The CPU usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthOperationsDbDiskUsageHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.20
The disk usage for the Manager node disk partition /nonconfig has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthOperationsDbDiskUsageVeryHigh .1.3.6.1.4.1.6876.120.1.0.1.0.21
The disk usage for the Manager node disk partition /nonconfig has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very high threshold value
of vmwNsxTDataCenterSystemUsageThreshold%. This can be an indication of high disk usage
by the NSX Datastore service under the /nonconfig/corfu directory.
          
Action required:
Run the following tool and contact GSS if any issues are reported:
/opt/vmware/tools/support/inspect_checkpoint_issues.py --nonconfig

vmwNsxTManagerHealthOperationsDbDiskUsageVeryHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.22
The disk usage for the Manager node disk partition /nonconfig has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthStorageError .1.3.6.1.4.1.6876.120.1.0.1.0.23
The following disk partition on the Manager node vmwNsxTDataCenterEntityId is in
read-only mode: vmwNsxTDataCenterDiskPartitionName
          
Action required:
Examine the read-only partition to see if reboot resolves the issue
or the disk needs to be replaced. Contact GSS for more information.

vmwNsxTManagerHealthStorageErrorClear .1.3.6.1.4.1.6876.120.1.0.1.0.24
The following disk partition on the Manager node vmwNsxTDataCenterEntityId
has recovered from read-only mode: vmwNsxTDataCenterDiskPartitionName
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthMissingDNSEntryForManagerFQDN .1.3.6.1.4.1.6876.120.1.0.1.0.28
The DNS configuration for Manager node vmwNsxTDataCenterManagerNodeName
(vmwNsxTDataCenterEntityId) is incorrect. The Manager node is dual-stack and/or a
CA-signed API certificate is used, but the IP address(es) of the Manager node
do not resolve to an FQDN or resolve to different FQDNs.
          
Action required:
1. Ensure proper DNS servers are configured in the Manager node.
2. Ensure proper A records and PTR records are configured in the DNS servers
such that reverse lookup of the IP addresses of the Manager node return the same FQDN,
and forward lookup of the FQDN return all IP addresses of the Manager node.
3. Alternatively, if the Manager node is not dual-stack, replace the CA-signed
certificate for API service type with a self-signed certificate.

vmwNsxTManagerHealthMissingDNSEntryForManagerFQDNClear .1.3.6.1.4.1.6876.120.1.0.1.0.29
The DNS configuration for Manager node vmwNsxTDataCenterManagerNodeName
(vmwNsxTDataCenterEntityId) is correct. Either the Manager node is not dual-stack
and a CA-signed API certificate is no longer used, or the IP address(es) of the
Manager node resolve to the same FQDN.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerCPUUsageVeryHigh .1.3.6.1.4.1.6876.120.1.0.1.0.3
The CPU usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Manager node.
Consider adjusting the Manager appliance form factor size.

vmwNsxTManagerHealthMissingDNSEntryForVipFQDN .1.3.6.1.4.1.6876.120.1.0.1.0.33
In case of dual stack or a CA-signed API certificate for an NSX Manager, the
virtual IPv4 address vmwNsxTDataCenterIPv4Address and virtual IPv6 address
vmwNsxTDataCenterIPv6Address for Manager node vmwNsxTDataCenterEntityId should
resolve to the same FQDN.
          
Action required:
Examine the DNS entry for the VIP addresses to see if they resolve to the same FQDN.

vmwNsxTManagerHealthMissingDNSEntryForVipFQDNClear .1.3.6.1.4.1.6876.120.1.0.1.0.34
The VIP addresses for Manager node vmwNsxTDataCenterEntityId resolved to the same FQDN.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerCPUUsageVeryHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.4
The CPU usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerMemoryUsageHigh .1.3.6.1.4.1.6876.120.1.0.1.0.5
The memory usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Manager node.
Consider adjusting the Manager appliance form factor size.

vmwNsxTManagerHealthManagerMemoryUsageHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.6
The memory usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerMemoryUsageVeryHigh .1.3.6.1.4.1.6876.120.1.0.1.0.7
The memory usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Manager node.
Consider adjusting the Manager appliance form factor size.

vmwNsxTManagerHealthManagerMemoryUsageVeryHighClear .1.3.6.1.4.1.6876.120.1.0.1.0.8
The memory usage on Manager node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTManagerHealthManagerDiskUsageHigh .1.3.6.1.4.1.6876.120.1.0.1.0.9
The disk usage for the Manager node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is at or above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Examine the partition with high usage and see if there are any
unexpected large files that can be removed.

vmwNsxTCniHealthHyperbusManagerConnectionDown .1.3.6.1.4.1.6876.120.1.0.10.0.3
Hyperbus cannot communicate with the Manager node.
          
Action required:
The hyperbus vmkernel interface (vmk50) may be missing. Refer to
Knowledge Base article https://kb.vmware.com/s/article/67432.

vmwNsxTCniHealthHyperbusManagerConnectionDownClear .1.3.6.1.4.1.6876.120.1.0.10.0.4
Hyperbus can communicate with the Manager node.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTCniHealthHyperbusManagerConnectionDownOnDPU .1.3.6.1.4.1.6876.120.1.0.10.0.5
Hyperbus on DPU vmwNsxTDataCenterDPUId cannot communicate with the Manager node.
          
Action required:
The hyperbus vmkernel interface (vmk50) on DPU vmwNsxTDataCenterDPUId may be missing. Refer to
Knowledge Base article https://kb.vmware.com/s/article/67432.

vmwNsxTCniHealthHyperbusManagerConnectionDownOnDPUClear .1.3.6.1.4.1.6876.120.1.0.10.0.6
Hyperbus on DPU vmwNsxTDataCenterDPUId can communicate with the Manager node.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTNCPHealthNCPPluginDown .1.3.6.1.4.1.6876.120.1.0.11.0.3
Manager node has detected the NCP is down or unhealthy.
          
Action required:
To find the clusters which are having issues, use the NSX UI and
navigate to the Alarms page. The Entity name value for this alarm
instance identifies the cluster name. Or invoke the NSX API GET
/api/v1/systemhealth/container-cluster/ncp/status to fetch all
cluster statuses and determine the name of any clusters that report
DOWN or UNKNOWN. Then on the NSX UI Inventory | Container | Clusters
page find the cluster by name and click the Nodes tab which lists
all Kubernetes and PAS cluster members.
For Kubernetes cluster:
1. Check NCP Pod liveness by finding the K8s master node from all the
cluster members and log onto the master node. Then invoke the kubectl
command `kubectl get pods --all-namespaces`. If there is an issue with
the NCP Pod, use kubectl logs command to check the issue and fix the
error.
2. Check the connection between NCP and Kubernetes API server. The
NSX CLI can be used inside the NCP Pod to check this connection status
by invoking the following commands from the master VM.
`kubectl exec -it <ncp-pod-name> -n nsx-system bash`
`nsxcli`
`get ncp-k8s-api-server status`
If there is an issue with the connection, check both the network
and NCP configurations.
3. Check the connection between NCP and NSX Manager. The NSX CLI can be
used inside the NCP Pod to check this connection status by invoking the
following command from the master VM.
`kubectl exec -it <ncp-pod-name> -n nsx-system bash`
`nsxcli`
`get ncp-nsx status`
If there is an issue with the connection, check both the network
and NCP configurations.
For PAS cluster:
1. Check the network connections between virtual machines and fix any
network issues.
2. Check the status of both nodes and services and fix crashed nodes
or services. Invoke the command `bosh vms` and `bosh instances -p` to
check the status of nodes and services.

vmwNsxTNCPHealthNCPPluginDownClear .1.3.6.1.4.1.6876.120.1.0.11.0.4
Manager node has detected the NCP is up or healthy again.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTNodeAgentsHealthNodeAgentsDown .1.3.6.1.4.1.6876.120.1.0.12.0.3
The agents running inside the Node VM appear to be down.
          
Action required:
For ESX:
1. If Vmk50 is missing, refer to this Knowledge Base article
 https://kb.vmware.com/s/article/67432.
2. If Hyperbus 4094 is missing, restarting nsx-cfgagent or restarting the
container host VM may help.
3. If container host VIF is blocked, check the connection to the Controller
to make sure all configurations are sent down.
4. If nsx-cfg-agent has stopped, restart nsx-cfgagent.
For KVM:
1. If Hyperbus namespace is missing, restarting the nsx-opsagent may help
recreate the namespace.
2. If Hyperbus interface is missing inside the hyperbus namespace,
restarting the nsx-opsagent may help.
3. If nsx-agent has stopped, restart nsx-agent.
For Both ESX and KVM:
1. If the node-agent package is missing, check whether node-agent package
has been successfully installed in the container host vm.
2. If the interface for node-agent in container host vm is down, check the
eth1 interface status inside the container host vm.

vmwNsxTNodeAgentsHealthNodeAgentsDownClear .1.3.6.1.4.1.6876.120.1.0.12.0.4
The agents inside the Node VM are running.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTNodeAgentsHealthNodeAgentsDownOnDPU .1.3.6.1.4.1.6876.120.1.0.12.0.5
The agents running inside the Node VM appear to be down on DPU vmwNsxTDataCenterDPUId.
          
Action required:
1. If Vmk50 on DPU vmwNsxTDataCenterDPUId is missing, refer to this Knowledge Base article
 https://kb.vmware.com/s/article/67432.
2. If Hyperbus 4094 on DPU vmwNsxTDataCenterDPUId is missing, restarting nsx-cfgagent on DPU vmwNsxTDataCenterDPUId
 or restarting the container host VM may help.
3. If container host VIF is blocked, check the connection to the Controller
to make sure all configurations are sent down.
4. If nsx-cfg-agent on DPU vmwNsxTDataCenterDPUId has stopped, restart nsx-cfgagent on DPU vmwNsxTDataCenterDPUId.
5. If the node-agent package is missing, check whether node-agent package
has been successfully installed in the container host vm.
6. If the interface for node-agent in container host vm is down, check the
eth1 interface status inside the container host vm.

vmwNsxTNodeAgentsHealthNodeAgentsDownOnDPUClear .1.3.6.1.4.1.6876.120.1.0.12.0.6
The agents inside the Node VM are running on DPU vmwNsxTDataCenterDPUId.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTEndpointProtectionEAMStatusDown .1.3.6.1.4.1.6876.120.1.0.13.0.1
ESX Agent Manager (EAM) service on compute manager vmwNsxTDataCenterEntityId is down.
          
Action required:
Start the ESX Agent Manager (EAM) service. SSH into vCenter and invoke
the command `service vmware-eam start`.

vmwNsxTEndpointProtectionEAMStatusDownClear .1.3.6.1.4.1.6876.120.1.0.13.0.2
ESX Agent Manager (EAM) service on compute manager vmwNsxTDataCenterEntityId is either up
or compute manager vmwNsxTDataCenterEntityId has been removed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTEndpointProtectionPartnerChannelDown .1.3.6.1.4.1.6876.120.1.0.13.0.3
The connection between host module and Partner SVM vmwNsxTDataCenterEntityId
is down.
          
Action required:
Refer to https://kb.vmware.com/s/article/85844 and make sure
that Partner SVM vmwNsxTDataCenterEntityId is re-connected to the host module.

vmwNsxTEndpointProtectionPartnerChannelDownClear .1.3.6.1.4.1.6876.120.1.0.13.0.4
The connection between host module and Partner SVM vmwNsxTDataCenterEntityId
is up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceDeploymentFailedClear .1.3.6.1.4.1.6876.120.1.0.14.0.10
The failed service deployment vmwNsxTDataCenterEntityId has been removed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceDeploymentSucceeded .1.3.6.1.4.1.6876.120.1.0.14.0.11
Service deployment vmwNsxTDataCenterEntityId for service vmwNsxTDataCenterServiceName
on cluster vmwNsxTDataCentervCenterClusterId has succeeded.
          
Action required:
No action needed.

vmwNsxTServiceInsertionServiceDeploymentSucceededClear .1.3.6.1.4.1.6876.120.1.0.14.0.12
Service deployment vmwNsxTDataCenterEntityId on cluster vmwNsxTDataCentervCenterClusterId
has succeeded, no action needed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceUndeploymentFailed .1.3.6.1.4.1.6876.120.1.0.14.0.15
Deletion of service deployment vmwNsxTDataCenterEntityId for service
vmwNsxTDataCenterServiceName on cluster vmwNsxTDataCentervCenterClusterId has failed.
Reason: vmwNsxTDataCenterFailureReason.
          
Action required:
Delete the service deployment using NSX UI or API.
Perform any corrective action from the KB and retry deleting the service deployment.
Resolve the alarm manually after checking that all the VMs and objects are deleted.

vmwNsxTServiceInsertionServiceUndeploymentFailedClear .1.3.6.1.4.1.6876.120.1.0.14.0.16
The failed service deployment name vmwNsxTDataCenterEntityId has been removed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceUndeploymentSucceeded .1.3.6.1.4.1.6876.120.1.0.14.0.17
Deletion of service deployment vmwNsxTDataCenterEntityId for service
vmwNsxTDataCenterServiceName on cluster vmwNsxTDataCentervCenterClusterId has succeeded.
          
Action required:
No action needed.

vmwNsxTServiceInsertionServiceUndeploymentSucceededClear .1.3.6.1.4.1.6876.120.1.0.14.0.18
Deletion of service deployment vmwNsxTDataCenterEntityId on cluster
vmwNsxTDataCentervCenterClusterId has succeeded, no action needed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceChainPathDown .1.3.6.1.4.1.6876.120.1.0.14.0.21
Service chain path is down on vmwNsxTDataCenterEntityId and traffic flow is impacted.
          
Action required:
Perform any corrective action from the KB and check if the status is up.

vmwNsxTServiceInsertionServiceChainPathDownClear .1.3.6.1.4.1.6876.120.1.0.14.0.22
Service chain path is up and configured as expected.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionSVMHealthStatusDown .1.3.6.1.4.1.6876.120.1.0.14.0.23
The health check for SVM vmwNsxTDataCenterEntityId for service vmwNsxTDataCenterServiceName
is not working correctly on vmwNsxTDataCenterHostnameOrIPAddressWithPort.
Reason: vmwNsxTDataCenterFailureReason.
          
Action required:
Delete the service deployment using NSX UI or API.
Perform any corrective action from the KB and retry the service deployment if necessary.

vmwNsxTServiceInsertionSVMHealthStatusDownClear .1.3.6.1.4.1.6876.120.1.0.14.0.24
SVM vmwNsxTDataCenterEntityId with wrong state has been removed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionSVMHealthStatusUp .1.3.6.1.4.1.6876.120.1.0.14.0.25
The health check for SVM vmwNsxTDataCenterEntityId for service vmwNsxTDataCenterServiceName
is working correctly on vmwNsxTDataCenterHostnameOrIPAddressWithPort.
          
Action required:
No action needed.

vmwNsxTServiceInsertionSVMHealthStatusUpClear .1.3.6.1.4.1.6876.120.1.0.14.0.26
SVM vmwNsxTDataCenterEntityId is working correctly, no action needed.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionSVMLivenessStateDown .1.3.6.1.4.1.6876.120.1.0.14.0.27
SVM liveness state is down on vmwNsxTDataCenterEntityId and traffic flow is impacted.
          
Action required:
Perform any corrective action from the KB and check if the state is up.

vmwNsxTServiceInsertionSVMLivenessStateDownClear .1.3.6.1.4.1.6876.120.1.0.14.0.28
SVM liveness state is up and configured as expected.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceInsertionInfraStatusDown .1.3.6.1.4.1.6876.120.1.0.14.0.29
Service insertion is enabled at port level on host vmwNsxTDataCenterTransportNodeId
and the status is down.
Reason: vmwNsxTDataCenterFailureReason.
          
Action required:
Perform any corrective action from the KB and check if the status is up.
Resolve the alarm manually after checking the status.

vmwNsxTServiceInsertionServiceInsertionInfraStatusDownClear .1.3.6.1.4.1.6876.120.1.0.14.0.30
Service insertion infrastructure status is up and has been correctly enabled on host.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionNewHostAdded .1.3.6.1.4.1.6876.120.1.0.14.0.7
New host is added in cluster vmwNsxTDataCentervCenterClusterId and SVM will be deployed.
          
Action required:
Check for the VM deployment status and wait till it powers on.

vmwNsxTServiceInsertionNewHostAddedClear .1.3.6.1.4.1.6876.120.1.0.14.0.8
New host is added successfully.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTServiceInsertionServiceDeploymentFailed .1.3.6.1.4.1.6876.120.1.0.14.0.9
Service deployment vmwNsxTDataCenterEntityId for service vmwNsxTDataCenterServiceName
on cluster vmwNsxTDataCentervCenterClusterId has failed.
Reason: vmwNsxTDataCenterFailureReason.
          
Action required:
Delete the service deployment using NSX UI or API.
Perform any corrective action from the KB and retry the service deployment.

vmwNsxTVPNIPsecPolicyBasedTunnelDownClear .1.3.6.1.4.1.6876.120.1.0.15.0.10
Policy based IPsec VPN tunnels in session vmwNsxTDataCenterEntityId are up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTVPNIPsecRouteBasedSessionDown .1.3.6.1.4.1.6876.120.1.0.15.0.11
Route based IPsec VPN session vmwNsxTDataCenterEntityId is down.
Reason: vmwNsxTDataCenterSessionDownReason.
          
Action required:
Check IPsec VPN session configuration and resolve errors based on the
session down reason.

vmwNsxTVPNIPsecRouteBasedSessionDownClear .1.3.6.1.4.1.6876.120.1.0.15.0.12
Route based IPsec VPN session vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTVPNIPsecRouteBasedTunnelDown .1.3.6.1.4.1.6876.120.1.0.15.0.13
Route based IPsec VPN tunnel in session vmwNsxTDataCenterEntityId is down.
Reason: vmwNsxTDataCenterTunnelDownReason.
          
Action required:
Check IPsec VPN session configuration and resolve errors based on the
tunnel down reason.

vmwNsxTVPNIPsecRouteBasedTunnelDownClear .1.3.6.1.4.1.6876.120.1.0.15.0.14
Route based IPsec VPN tunnel in session vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTVPNL2VpnSessionDown .1.3.6.1.4.1.6876.120.1.0.15.0.15
L2VPN session vmwNsxTDataCenterEntityId is down.
          
Action required:
Check L2VPN session status for session down reason and resolve errors based
on the reason.

vmwNsxTVPNL2VpnSessionDownClear .1.3.6.1.4.1.6876.120.1.0.15.0.16
L2VPN session vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTVPNIPsecServiceDown .1.3.6.1.4.1.6876.120.1.0.15.0.17
IPsec service vmwNsxTDataCenterEntityId is down. Reason: vmwNsxTDataCenterServiceDownReason.
          
Action required:
1. Disable and enable the IPsec service from NSX Manager UI.
2. If the issue still persists, check syslog for error logs and contact
   VMware support.

vmwNsxTVPNIPsecServiceDownClear .1.3.6.1.4.1.6876.120.1.0.15.0.18
IPsec service vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTVPNIPsecPolicyBasedSessionDown .1.3.6.1.4.1.6876.120.1.0.15.0.7
Policy based IPsec VPN session vmwNsxTDataCenterEntityId is down.
Reason: vmwNsxTDataCenterSessionDownReason.
          
Action required:
Check IPsec VPN session configuration and resolve errors based on the
session down reason.

vmwNsxTVPNIPsecPolicyBasedSessionDownClear .1.3.6.1.4.1.6876.120.1.0.15.0.8
Policy based IPsec VPN session vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.

vmwNsxTVPNIPsecPolicyBasedTunnelDown .1.3.6.1.4.1.6876.120.1.0.15.0.9
One or more policy based IPsec VPN tunnels in session vmwNsxTDataCenterEntityId are down.
          
Action required:
Check IPsec VPN session configuration and resolve errors based on the
tunnel down reason.

vmwNsxTAlarmManagementAlarmServiceOverloaded .1.3.6.1.4.1.6876.120.1.0.16.0.1
Due to heavy volume of alarms reported, the alarm service is temporarily
overloaded. The NSX UI and GET /api/v1/alarms NSX API have stopped
reporting new alarms; however, syslog entries and SNMP traps (if enabled)
are still being emitted reporting the underlying event details. When the
underlying issues causing the heavy volume of alarms are addressed, the
alarm service will start reporting new alarms again.
          
Action required:
Review all active alarms using the Alarms page in the NSX UI or using the
GET /api/v1/alarms?status=OPEN,ACKNOWLEDGED,SUPPRESSED NSX API. For each
active alarm investigate the root cause by following the recommended action
for the alarm. When sufficient alarms are resolved, the alarm service will
start reporting new alarms again.
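
As an illustration, the review step above can be scripted against the same
API. The sketch below is minimal; the NSX Manager address and credentials are
hypothetical placeholders, and only the GET /api/v1/alarms endpoint and
status filter come from the text above.

    # Minimal sketch: poll active NSX alarms via the API quoted above.
    # nsx-mgr.example.com and the credentials are placeholders; verify=False
    # is for lab use only.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"
    AUTH = ("admin", "password")

    resp = requests.get(NSX_MANAGER + "/api/v1/alarms",
                        params={"status": "OPEN,ACKNOWLEDGED,SUPPRESSED"},
                        auth=AUTH, verify=False)
    resp.raise_for_status()
    for alarm in resp.json().get("results", []):
        # Field names here ('feature_name', 'event_type', 'status') are
        # assumptions about the alarm payload, not taken from this document.
        print(alarm.get("feature_name"), alarm.get("event_type"),
              alarm.get("status"))
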
vmwNsxTAlarmManagementAlarmServiceOverloadedClear







.1.3.6.1.4.1.6876.120.1.0.16.0.2
The heavy volume of alarms has subsided and new alarms are being reported
again.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTAlarmManagementHeavyVolumeOfAlarms








.1.3.6.1.4.1.6876.120.1.0.16.0.3
Due to the heavy volume of vmwNsxTDataCenterEventId alarms, the alarm service has
temporarily stopped reporting alarms of this type. The NSX UI and
GET /api/v1/alarms NSX API are not reporting new instances of these
alarms; however, syslog entries and SNMP traps (if enabled) are
still being emitted reporting the underlying event details. When the
underlying issues causing the heavy volume of vmwNsxTDataCenterEventId alarms are
addressed, the alarm service will start reporting new vmwNsxTDataCenterEventId
alarms when new issues are detected again.
          
Action required:
Review all active alarms of type vmwNsxTDataCenterEventId using the
Alarms page in the NSX UI or using the NSX API GET
/api/v1/alarms?status=OPEN,ACKNOWLEDGED,SUPPRESSED. For each active
alarm investigate the root cause by following the recommended action
for the alarm. When sufficient alarms are resolved, the alarm service
will start reporting new vmwNsxTDataCenterEventId alarms again.
vmwNsxTAlarmManagementHeavyVolumeOfAlarmsClear








.1.3.6.1.4.1.6876.120.1.0.16.0.4
The heavy volume of vmwNsxTDataCenterEventId alarms has subsided and new alarms of
this type are being reported again.
          
Action required:
None, receipt of this notification indicates event cleared.
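
Trap receivers typically need to pair each raising notification with its
Clear counterpart. Below is a minimal Python sketch using the four
alarm-management OIDs listed above; the pairing rule (the Clear trap's last
arc is one greater than the raising trap's) is an assumption that holds for
this group.

    # Minimal sketch: map alarm-management trap OIDs (verbatim from this
    # section) to names and derive the raising OID from a Clear OID.
    TRAPS = {
        ".1.3.6.1.4.1.6876.120.1.0.16.0.1": "vmwNsxTAlarmManagementAlarmServiceOverloaded",
        ".1.3.6.1.4.1.6876.120.1.0.16.0.2": "vmwNsxTAlarmManagementAlarmServiceOverloadedClear",
        ".1.3.6.1.4.1.6876.120.1.0.16.0.3": "vmwNsxTAlarmManagementHeavyVolumeOfAlarms",
        ".1.3.6.1.4.1.6876.120.1.0.16.0.4": "vmwNsxTAlarmManagementHeavyVolumeOfAlarmsClear",
    }

    def is_clear(oid):
        return TRAPS.get(oid, "").endswith("Clear")

    def raising_oid(oid):
        # Assumption: each Clear trap follows its raising trap by one arc.
        head, _, last = oid.rpartition(".")
        return head + "." + str(int(last) - 1) if is_clear(oid) else oid

    print(raising_oid(".1.3.6.1.4.1.6876.120.1.0.16.0.2"))  # -> ...16.0.1
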
vmwNsxTLoadBalancerLBCPUVeryHigh









.1.3.6.1.4.1.6876.120.1.0.17.0.1
The CPU usage of load balancer vmwNsxTDataCenterEntityId is very high. The threshold is
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
If the load balancer CPU utilization is higher than the system usage threshold,
the workload is too high for this load balancer. Rescale the load balancer
service by changing the load balancer size from small to medium or from
medium to large. If the CPU utilization of this load balancer is still high,
consider adjusting the Edge appliance form factor size or moving load
balancer services to other Edge nodes for the applicable workload.
vmwNsxTLoadBalancerPoolStatusDownClear







.1.3.6.1.4.1.6876.120.1.0.17.0.10
The load balancer pool vmwNsxTDataCenterEntityId status is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerVirtualServerStatusDown







.1.3.6.1.4.1.6876.120.1.0.17.0.11
The load balancer virtual server vmwNsxTDataCenterEntityId is down.
          
Action required:
Consult the load balancer pool to determine its status and verify its
configuration. If incorrectly configured, reconfigure it, remove the
load balancer pool from the virtual server, and then re-add it to the
virtual server.
vmwNsxTLoadBalancerVirtualServerStatusDownClear







.1.3.6.1.4.1.6876.120.1.0.17.0.12
The load balancer virtual server vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerLBStatusDegraded







.1.3.6.1.4.1.6876.120.1.0.17.0.15
The load balancer service vmwNsxTDataCenterEntityId is degraded.
          
Action required:
For centralized load balancer:
Check the load balancer status on the standby Edge node, since the degraded
status means the load balancer on the standby Edge node is not ready. On the
standby Edge node, invoke the NSX CLI command `get load-balancer  status`.
If the LB-State of the load balancer service is not_ready or there is no output,
make the Edge node enter maintenance mode, then exit maintenance mode.
For distributed load balancer:
1. Get detailed status by invoking NSX API GET
   /policy/api/v1/infra/lb-services//detailed-status?source=realtime
2. From the API output, find the ESXi host reporting a non-zero instance_number
   with status NOT_READY or CONFLICT.
3. On the ESXi host node, invoke the NSX CLI command `get load-balancer
    status`.
   If 'Conflict LSP' is reported, check whether this LSP is attached to
   another load balancer service and whether this conflict is acceptable.
   If 'Not Ready LSP' is reported, check the status of this LSP by
   invoking the NSX CLI command `get logical-switch-port status`.
NOTE: Ignore the alarm if it resolves automatically within 5 minutes,
      because the degraded status can be transient.
vmwNsxTLoadBalancerLBStatusDegradedClear







.1.3.6.1.4.1.6876.120.1.0.17.0.16
The load balancer service vmwNsxTDataCenterEntityId is not degraded.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerDLBStatusDown







.1.3.6.1.4.1.6876.120.1.0.17.0.17
The distributed load balancer service vmwNsxTDataCenterEntityId is down.
          
Action required:
On ESXi host node, invoke the NSX CLI command `get load-balancer
 status`.
If 'Conflict LSP' is reported, check whether this LSP is attached to
another load balancer service and whether this conflict is acceptable.
If 'Not Ready LSP' is reported, check the status of this LSP by
invoking NSX CLI command `get logical-switch-port status`.
vmwNsxTLoadBalancerDLBStatusDownClear







.1.3.6.1.4.1.6876.120.1.0.17.0.18
The distributed load balancer service vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerConfigurationNotRealizedDueToLowMemory








.1.3.6.1.4.1.6876.120.1.0.17.0.19
The load balancer configuration vmwNsxTDataCenterEntityId is not realized due to
high memory usage on Edge node vmwNsxTDataCenterTransportNodeId.
          
Action required:
Prefer defining small and medium sized load balancers over large sized load balancers.
Spread out load balancer services among the available Edge nodes.
Reduce the number of virtual servers defined.
vmwNsxTLoadBalancerLBCPUVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.17.0.2
The CPU usage of load balancer vmwNsxTDataCenterEntityId is low enough. The threshold is
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerConfigurationNotRealizedDueToLowMemoryClear








.1.3.6.1.4.1.6876.120.1.0.17.0.20
The load balancer configuration vmwNsxTDataCenterEntityId is realized on vmwNsxTDataCenterTransportNodeId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerLBEdgeCapacityInUseHigh








.1.3.6.1.4.1.6876.120.1.0.17.0.3
The usage of load balancer service in Edge node vmwNsxTDataCenterEntityId is high.
The threshold is vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
If multiple LB instances have been configured in this Edge node, deploy a
new Edge node and move some LB instances to that new Edge node. If only a
single LB instance (small/medium/etc.) has been configured in an Edge node of
the same size (small/medium/etc.), deploy a new Edge node of a bigger size and
move the LB instance to that new Edge node.
vmwNsxTLoadBalancerLBEdgeCapacityInUseHighClear








.1.3.6.1.4.1.6876.120.1.0.17.0.4
The usage of load balancer service in Edge node vmwNsxTDataCenterEntityId is low enough.
The threshold is vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerLBPoolMemberCapacityInUseVeryHigh








.1.3.6.1.4.1.6876.120.1.0.17.0.5
The usage of pool members in Edge node vmwNsxTDataCenterEntityId is very high.
The threshold is vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Deploy a new Edge node and move the load balancer service from existing Edge
nodes to the newly deployed Edge node.
vmwNsxTLoadBalancerLBPoolMemberCapacityInUseVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.17.0.6
The usage of pool members in Edge node vmwNsxTDataCenterEntityId is low enough.
The threshold is vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerLBStatusDown







.1.3.6.1.4.1.6876.120.1.0.17.0.7
The centralized load balancer service vmwNsxTDataCenterEntityId is down.
          
Action required:
On active Edge node, check load balancer status by invoking the NSX CLI
command `get load-balancer  status`. If the LB-State of load
balancer service is not_ready or there is no output, make the Edge node
enter maintenance mode, then exit maintenance mode.
vmwNsxTLoadBalancerLBStatusDownClear







.1.3.6.1.4.1.6876.120.1.0.17.0.8
The centralized load balancer service vmwNsxTDataCenterEntityId is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLoadBalancerPoolStatusDown







.1.3.6.1.4.1.6876.120.1.0.17.0.9
The load balancer pool vmwNsxTDataCenterEntityId status is down.
          
Action required:
Consult the load balancer pool to determine which members are down by
invoking the NSX CLI command
`get load-balancer  pool  status`
or NSX API GET
/policy/api/v1/infra/lb-services//lb-pools//detailed-status
If DOWN or UNKNOWN is reported, verify the pool member.
Check network connectivity from the load balancer to the impacted pool members.
Validate application health of each pool member. Also validate the health
of each pool member using the configured monitor. When the health of the
member is established, the pool member status is updated to healthy based
on the 'Rise Count' configuration in the monitor.
Remediate the issue by rebooting the pool member, or by making the Edge node
enter maintenance mode and then exit maintenance mode.
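
As an illustration of the detailed-status check above, the minimal sketch
below fetches the quoted endpoint and scans the response for entries
reporting DOWN or UNKNOWN. The manager address, credentials, and the
lb-service and pool IDs are hypothetical placeholders, and the response is
walked generically rather than assuming a specific schema.

    # Minimal sketch: find DOWN/UNKNOWN members in a pool's detailed status.
    import requests

    NSX_MANAGER = "https://nsx-mgr.example.com"   # placeholder
    AUTH = ("admin", "password")                  # placeholder credentials
    LB_SERVICE_ID = "my-lb-service"               # placeholder
    POOL_ID = "my-pool"                           # placeholder

    url = (NSX_MANAGER + "/policy/api/v1/infra/lb-services/" + LB_SERVICE_ID
           + "/lb-pools/" + POOL_ID + "/detailed-status")
    doc = requests.get(url, auth=AUTH, verify=False).json()

    def unhealthy(node):
        # Walk the JSON tree; yield any object whose status is DOWN/UNKNOWN.
        if isinstance(node, dict):
            if node.get("status") in ("DOWN", "UNKNOWN"):
                yield node
            for value in node.values():
                yield from unhealthy(value)
        elif isinstance(node, list):
            for item in node:
                yield from unhealthy(item)

    for member in unhealthy(doc):
        print(member)
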
vmwNsxTTransportNodeHealthLAGMemberDownOnDPUClear








.1.3.6.1.4.1.6876.120.1.0.18.0.10
LACP on DPU vmwNsxTDataCenterDPUId reporting member up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTransportNodeHealthTransportNodeUplinkDownOnDPU








.1.3.6.1.4.1.6876.120.1.0.18.0.11
Uplink on DPU vmwNsxTDataCenterDPUId is going down.
          
Action required:
Check the physical NICs' status of uplinks on DPU vmwNsxTDataCenterDPUId.
Find the mapped name of this physical NIC on the host, then check in the UI.
1. In the NSX UI navigate to Fabric | Nodes | Transport Nodes |
Host Transport Nodes.
2. In the Host Transport Nodes list, check the Node Status column. Find the
Transport node with the degraded or down Node Status.
3. Select  | Monitor. Check the status details of the
bond (uplink) which is reporting degraded or down. To avoid a degraded state,
ensure all uplink interfaces are connected and up regardless of whether they
are in use or not.
vmwNsxTTransportNodeHealthTransportNodeUplinkDownOnDPUClear








.1.3.6.1.4.1.6876.120.1.0.18.0.12
Uplink on DPU vmwNsxTDataCenterDPUId is going up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTransportNodeHealthNVDSUplinkDown







.1.3.6.1.4.1.6876.120.1.0.18.0.3
Uplink is going down.
          
Action required:
Check the physical NICs' status of uplinks on hosts.
1. In the NSX UI navigate to Fabric | Nodes | Transport Nodes |
Host Transport Nodes.
2. In the Host Transport Nodes list, check the Node Status column. Find the
Transport node with the degraded or down Node Status.
3. Select  | Monitor. Check the status details of the
bond (uplink) which is reporting degraded or down. To avoid a degraded state,
ensure all uplink interfaces are connected and up regardless of whether they
are in use or not.
vmwNsxTTransportNodeHealthNVDSUplinkDownClear







.1.3.6.1.4.1.6876.120.1.0.18.0.4
Uplink is going up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTransportNodeHealthLAGMemberDown







.1.3.6.1.4.1.6876.120.1.0.18.0.5
LACP reporting member down.
          
Action required:
Check the connection status of LAG members on hosts.
1. In the NSX UI navigate to Fabric | Nodes | Transport Nodes |
Host Transport Nodes.
2. In the Host Transport Nodes list, check the Node Status column. Find the
Transport node with the degraded or down Node Status.
3. Select  | Monitor. Find the bond (uplink) which is
reporting degraded or down.
4. Check the LACP member status details by logging into the failed host and
invoking `esxcli network vswitch dvs vmware lacp status get` on an ESXi
host or `ovs-appctl bond/show` and `ovs-appctl lacp/show` on a KVM host.
vmwNsxTTransportNodeHealthLAGMemberDownClear







.1.3.6.1.4.1.6876.120.1.0.18.0.6
LACP reporting member up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTransportNodeHealthTransportNodeUplinkDown







.1.3.6.1.4.1.6876.120.1.0.18.0.7
Uplink is going down.
          
Action required:
Check the physical NICs' status of uplinks on hosts.
1. In the NSX UI navigate to Fabric | Nodes | Transport Nodes |
Host Transport Nodes.
2. In the Host Transport Nodes list, check the Node Status column. Find the
Transport node with the degraded or down Node Status.
3. Select  | Monitor. Check the status details of the
bond (uplink) which is reporting degraded or down. To avoid a degraded state,
ensure all uplink interfaces are connected and up regardless of whether they
are in use or not.
vmwNsxTTransportNodeHealthTransportNodeUplinkDownClear







.1.3.6.1.4.1.6876.120.1.0.18.0.8
Uplink is going up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTransportNodeHealthLAGMemberDownOnDPU








.1.3.6.1.4.1.6876.120.1.0.18.0.9
LACP on DPU vmwNsxTDataCenterDPUId reporting member down.
          
Action required:
Check the connection status of LAG members on DPU vmwNsxTDataCenterDPUId.
Find the mapped name of the related physical NIC on the host, then check in the UI.
1. In the NSX UI navigate to Fabric | Nodes | Transport Nodes |
Host Transport Nodes.
2. In the Host Transport Nodes list, check the Node Status column. Find the
Transport node with the degraded or down Node Status.
3. Select  | Monitor. Find the bond (uplink) which is
reporting degraded or down.
4. Check the LACP member status details by logging into the failed DPU vmwNsxTDataCenterDPUId and
invoking `esxcli network vswitch dvs vmware lacp status get`.
vmwNsxTInfrastructureServiceEdgeServiceStatusChanged











.1.3.6.1.4.1.6876.120.1.0.19.0.1
The service vmwNsxTDataCenterEdgeServiceName changed from vmwNsxTDataCenterPreviousServiceState
to vmwNsxTDataCenterCurrentServiceState.
vmwNsxTDataCenterServiceDownReason
          
Action required:
On the Edge node, verify the service hasn't exited due to an error by
looking for core files in the /var/log/core directory. In addition,
invoke the NSX CLI command `get services` to confirm whether the service
is stopped. If so, invoke `start service ` to restart the
service.
vmwNsxTInfrastructureServiceApplicationCrashedClear







.1.3.6.1.4.1.6876.120.1.0.19.0.10
Core dump files are withdrawn from the system.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureServiceServiceStatusUnknownOnDPU









.1.3.6.1.4.1.6876.120.1.0.19.0.11
The service vmwNsxTDataCenterServiceName on DPU vmwNsxTDataCenterDPUId has been unresponsive for 10 seconds.
          
Action required:
Verify the vmwNsxTDataCenterServiceName service on DPU vmwNsxTDataCenterDPUId is still running by invoking
`/etc/init.d/vmwNsxTDataCenterServiceName status`. If the service is reported as running, it may
need to be restarted, which can be done by `/etc/init.d/vmwNsxTDataCenterServiceName restart`.
Rerun the status command to verify the service is now running. If restarting the
service does not resolve the issue, or if the issue reoccurs after a successful restart,
contact VMware Support.
vmwNsxTInfrastructureServiceServiceStatusUnknownOnDPUClear









.1.3.6.1.4.1.6876.120.1.0.19.0.12
The service vmwNsxTDataCenterServiceName on DPU vmwNsxTDataCenterDPUId is responsive again.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureServiceMetricsDeliveryFailure










.1.3.6.1.4.1.6876.120.1.0.19.0.19
Failed to deliver metrics from SHA to target vmwNsxTDataCenterMetricsTargetAlias(vmwNsxTDataCenterMetricsTargetAddress:vmwNsxTDataCenterMetricsTargetPort).
          
Action required:
Perform the following checks to isolate the cause of the failure (a
connectivity sketch follows this list):
  1. Check whether the target address vmwNsxTDataCenterMetricsTargetAddress and port vmwNsxTDataCenterMetricsTargetPort (default is 443 when no port is specified)
     passed to the connection are the expected target,
  2. Check if the certificate is correct by
     `/opt/vmware/nsx-nestdb/bin/nestdb-cli --cmd 'put vmware.nsx.nestdb.CommonAgentHostConfigMsg'`,
  3. Check if target vmwNsxTDataCenterMetricsTargetAddress is reachable,
  4. Check if the metric manager on target vmwNsxTDataCenterMetricsTargetAddress is running by `docker ps | grep metrics_manager`,
  5. Check if port vmwNsxTDataCenterMetricsTargetPort is opened by `netstat -a | grep vmwNsxTDataCenterMetricsTargetPort` on target,
  6. Check if ALLOW firewall rule is installed on the node by `iptables -S OUTPUT | grep vmwNsxTDataCenterMetricsTargetPort`(EDGE/UA) or
     `localcli network firewall ruleset list | grep nsx-sha-tsdb`(ESX),
  7. Restart the SHA daemon to see if this resolves the issue, by `/etc/init.d/netopa restart` (ESX) or `/etc/init.d/nsx-netopa restart` (EDGE)
     or `/etc/init.d/nsx-sha restart` (UA).
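
For checks 1, 3 and 5 above, a quick TCP test can rule out basic
reachability problems before digging into certificates or firewall rules.
A minimal sketch, with a placeholder hostname standing in for
vmwNsxTDataCenterMetricsTargetAddress:

    # Minimal sketch: check TCP reachability of the metrics target.
    import socket

    TARGET = "metrics.example.com"   # placeholder for the target address
    PORT = 443                       # default when no port is specified

    def port_reachable(host, port, timeout=5.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    print(TARGET + ":" + str(PORT), "reachable:",
          port_reachable(TARGET, PORT))
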
vmwNsxTInfrastructureServiceEdgeServiceStatusChangedClear










.1.3.6.1.4.1.6876.120.1.0.19.0.2
The service vmwNsxTDataCenterEdgeServiceName changed from vmwNsxTDataCenterPreviousServiceState
to vmwNsxTDataCenterCurrentServiceState.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureServiceMetricsDeliveryFailureClear










.1.3.6.1.4.1.6876.120.1.0.19.0.20
Metrics delivery to target vmwNsxTDataCenterMetricsTargetAlias(vmwNsxTDataCenterMetricsTargetAddress:vmwNsxTDataCenterMetricsTargetPort) recovered.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureServiceEdgeServiceStatusDown









.1.3.6.1.4.1.6876.120.1.0.19.0.3
The service vmwNsxTDataCenterEdgeServiceName is down for at least one minute.
vmwNsxTDataCenterServiceDownReason
          
Action required:
On the Edge node, verify the service hasn't exited due to an error by
looking for core files in the /var/log/core directory. In addition,
invoke the NSX CLI command `get services` to confirm whether the service
is stopped. If so, invoke `start service ` to restart the
service.
vmwNsxTInfrastructureServiceEdgeServiceStatusDownClear








.1.3.6.1.4.1.6876.120.1.0.19.0.4
The service vmwNsxTDataCenterEdgeServiceName is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureServiceServiceStatusUnknown








.1.3.6.1.4.1.6876.120.1.0.19.0.7
The service vmwNsxTDataCenterServiceName has been unresponsive for 10 seconds.
          
Action required:
Verify the vmwNsxTDataCenterServiceName service is still running by invoking `/etc/init.d/vmwNsxTDataCenterServiceName status`.
If the service is reported as running, it may need to be restarted, which can be done by
`/etc/init.d/vmwNsxTDataCenterServiceName restart`. Rerun the status command to verify the service is now
running. If the script `/etc/init.d/vmwNsxTDataCenterServiceName` is unavailable, invoke
`systemctl status vmwNsxTDataCenterServiceName` and restart by `systemctl restart vmwNsxTDataCenterServiceName` with root
privileges. If restarting the service does not resolve the issue, or if the issue reoccurs after
a successful restart, contact VMware Support. A scripted version of this sequence follows.
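
The scripted version of the sequence above, as a minimal sketch for a node
where the init script exists; the service name is a placeholder for
vmwNsxTDataCenterServiceName and the script must run with root privileges:

    # Minimal sketch: status -> restart -> re-check, mirroring the steps above.
    import subprocess

    SERVICE = "nsx-opsagent"   # hypothetical service name

    def run(verb):
        # Returns the exit code of `/etc/init.d/<service> <verb>`.
        return subprocess.run(["/etc/init.d/" + SERVICE, verb]).returncode

    if run("status") != 0:
        run("restart")
        if run("status") != 0:
            print("Service still unresponsive; contact VMware Support.")
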
vmwNsxTInfrastructureServiceServiceStatusUnknownClear








.1.3.6.1.4.1.6876.120.1.0.19.0.8
The service vmwNsxTDataCenterServiceName is responsive again.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureServiceApplicationCrashed









.1.3.6.1.4.1.6876.120.1.0.19.0.9
An application on NSX node vmwNsxTDataCenterNodeDisplayOrHostName has crashed.
The number of core files found is vmwNsxTDataCenterCoreDumpCount.
Collect the Support Bundle including core dump files
and contact the VMware Support team.
          
Action required:
Collect Support Bundle for NSX node vmwNsxTDataCenterNodeDisplayOrHostName
using NSX Manager UI or API.
Note that core dumps can be set to be moved or copied into the NSX Tech
Support Bundle, which respectively removes or preserves the local copy on
the node. A copy of the Support Bundle with core dump files is essential
for the VMware Support team to troubleshoot the issue, so it is recommended
to save a recent copy of the Tech Support Bundle including core dump files
before removing the core dump files from the system.
Refer to the KB article for more details.
vmwNsxTEdgeHealthEdgeCPUUsageHigh









.1.3.6.1.4.1.6876.120.1.0.2.0.1
The CPU usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Edge
node. Consider adjusting the Edge appliance form factor size or rebalancing
services to other Edge nodes for the applicable workload.
vmwNsxTEdgeHealthEdgeDiskUsageHighClear










.1.3.6.1.4.1.6876.120.1.0.2.0.10
The disk usage for the Edge node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDiskUsageVeryHigh










.1.3.6.1.4.1.6876.120.1.0.2.0.11
The disk usage for the Edge node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is at or above
the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Examine the partition with high usage and see if there are any
unexpected large files that can be removed.
vmwNsxTEdgeHealthEdgeDiskUsageVeryHighClear










.1.3.6.1.4.1.6876.120.1.0.2.0.12
The disk usage for the Edge node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is below the
very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathCPUHigh








.1.3.6.1.4.1.6876.120.1.0.2.0.17
The datapath CPU usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterDatapathResourceUsage% which is at or above the high
threshold for at least two minutes.
          
Action required:
Review the CPU statistics on the Edge node by invoking the NSX CLI command
`get dataplane cpu stats` to show packet rates per CPU core.  Higher CPU
usage is expected with higher packet rates. Consider increasing the Edge
appliance form factor size and rebalancing services on this Edge node to
other Edge nodes in the same cluster or other Edge clusters.
vmwNsxTEdgeHealthEdgeDatapathCPUHighClear







.1.3.6.1.4.1.6876.120.1.0.2.0.18
The datapath CPU usage on Edge node vmwNsxTDataCenterEntityId has fallen below the
high threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathCPUVeryHigh








.1.3.6.1.4.1.6876.120.1.0.2.0.19
The datapath CPU usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterDatapathResourceUsage% which is at or above the very high
threshold for at least two minutes.
          
Action required:
Review the CPU statistics on the Edge node by invoking the NSX CLI command
`get dataplane cpu stats` to show packet rates per CPU core.  Higher CPU
usage is expected with higher packet rates. Consider increasing the Edge
appliance form factor size and rebalancing services on this Edge node to
other Edge nodes in the same cluster or other Edge clusters.
vmwNsxTEdgeHealthEdgeCPUUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.2.0.2
The CPU usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathCPUVeryHighClear







.1.3.6.1.4.1.6876.120.1.0.2.0.20
The datapath CPU usage on Edge node vmwNsxTDataCenterEntityId has fallen below the
very high threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathConfigurationFailure







.1.3.6.1.4.1.6876.120.1.0.2.0.21
Failed to enable the datapath on the Edge node after three attempts.
          
Action required:
Ensure the Edge node's connectivity to the Manager node is healthy.  From
the Edge node's NSX CLI, invoke the command `get services` to check the
health of services. If the dataplane service is stopped, invoke the
command `start service dataplane` to start it.
vmwNsxTEdgeHealthEdgeDatapathConfigurationFailureClear







.1.3.6.1.4.1.6876.120.1.0.2.0.22
The datapath on the Edge node has been successfully enabled.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathCryptodrvDown








.1.3.6.1.4.1.6876.120.1.0.2.0.23
Edge node crypto driver vmwNsxTDataCenterEdgeCryptoDrvName is down.
          
Action required:
Upgrade the Edge node as needed.
vmwNsxTEdgeHealthEdgeDatapathCryptodrvDownClear








.1.3.6.1.4.1.6876.120.1.0.2.0.24
Edge node crypto driver vmwNsxTDataCenterEdgeCryptoDrvName is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathMempoolHigh










.1.3.6.1.4.1.6876.120.1.0.2.0.25
The datapath mempool usage for vmwNsxTDataCenterMempoolName on Edge node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterSystemResourceUsage% which
is at or above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Log in as the root user and invoke the command
`edge-appctl -t /var/run/vmware/edge/dpd.ctl mempool/show` and
`edge-appctl -t /var/run/vmware/edge/dpd.ctl memory/show malloc_heap` to
check DPDK memory usage.
vmwNsxTEdgeHealthEdgeDatapathMempoolHighClear










.1.3.6.1.4.1.6876.120.1.0.2.0.26
The datapath mempool usage for vmwNsxTDataCenterMempoolName on Edge node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterSystemResourceUsage% which
is below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeGlobalARPTableUsageHigh








.1.3.6.1.4.1.6876.120.1.0.2.0.27
The global ARP table usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterDatapathResourceUsage% which is above the high threshold
for over two minutes.
          
Action required:
Log in as the root user and invoke the command
`edge-appctl -t /var/run/vmware/edge/dpd.ctl neigh/show` and check
if neigh cache usage is normal. If it is normal, invoke the command
`edge-appctl -t /var/run/vmware/edge/dpd.ctl neigh/set_param max_entries`
to increase the ARP table size.
vmwNsxTEdgeHealthEdgeGlobalARPTableUsageHighClear







.1.3.6.1.4.1.6876.120.1.0.2.0.28
The global ARP table usage on Edge node vmwNsxTDataCenterEntityId has fallen
below the high threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeNICLinkStatusDown








.1.3.6.1.4.1.6876.120.1.0.2.0.29
Edge node NIC vmwNsxTDataCenterEdgeNICName link is down.
          
Action required:
On the Edge node confirm if the NIC link is physically down by invoking
the NSX CLI command `get interfaces`. If it is down, verify the cable
connection.
vmwNsxTEdgeHealthEdgeCPUUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.2.0.3
The CPU usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Edge
node. Consider adjusting the Edge appliance form factor size or rebalancing
services to other Edge nodes for the applicable workload.
vmwNsxTEdgeHealthEdgeNICLinkStatusDownClear








.1.3.6.1.4.1.6876.120.1.0.2.0.30
Edge node NIC vmwNsxTDataCenterEdgeNICName link is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeNICOutOfReceiveBuffer











.1.3.6.1.4.1.6876.120.1.0.2.0.31
Edge NIC vmwNsxTDataCenterEdgeNICName receive ring buffer has overflowed by
vmwNsxTDataCenterRxRingBufferOverflowPercentage% on Edge node vmwNsxTDataCenterEntityId.
The missed packet count is vmwNsxTDataCenterRxMisses and processed packet count
is vmwNsxTDataCenterRxProcessed.
          
Action required:
Run the NSX CLI command `get dataplane cpu stats` on the Edge node and check:
1. If CPU usage is high, i.e., > 90%, then take a packet capture on
   the interface using the command `start capture interface
    direction input` or `start capture interface
    direction input core ` (to capture
   packets ingressing on a specific core whose usage is high).
   Then analyze the capture to see whether the majority of packets are
   fragmented or IPsec packets. If so, this is expected behavior. If not,
   the datapath is probably busy with other operations. If this alarm
   lasts more than 2-3 minutes, contact VMware Support.
2. If CPU usage is not high, i.e., < 90%, then check if rx pps is high
   using the command `get dataplane cpu stats` (just to make sure the
   traffic rate is increasing). Then increase the ring size by 1024
   using the command `set dataplane ring-size rx `.
   NOTE - Continually increasing the ring size in increments of 1024 can
   lead to performance issues. If the issue persists even after increasing
   the ring size, it is an indication that the Edge needs a larger form
   factor deployment to accommodate the traffic.
3. If the alarm keeps flapping, i.e., triggers and resolves very quickly,
   it is due to bursty traffic. In this case check whether rx pps is high
   as described above; if it is not high during the alarm active period,
   contact VMware Support. If pps is high, this confirms bursty traffic;
   consider suppressing the alarm.
   NOTE - There is no specific benchmark for what counts as a high pps
   value; it depends on the infrastructure and type of traffic. A
   comparison can be made by noting the value when the alarm is inactive
   and when it is active.
vmwNsxTEdgeHealthEdgeNICOutOfReceiveBufferClear








.1.3.6.1.4.1.6876.120.1.0.2.0.32
Edge NIC vmwNsxTDataCenterEdgeNICName receive ring buffer usage on Edge node
vmwNsxTDataCenterEntityId is no longer overflowing.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeNICOutOfTransmitBuffer











.1.3.6.1.4.1.6876.120.1.0.2.0.33
Edge NIC vmwNsxTDataCenterEdgeNICName transmit ring buffer has overflowed by
vmwNsxTDataCenterTxRingBufferOverflowPercentage% on Edge node vmwNsxTDataCenterEntityId.
The missed packet count is vmwNsxTDataCenterTxMisses and processed packet count
is vmwNsxTDataCenterTxProcessed.
          
Action required:
1. If the hypervisor hosts many VMs alongside the Edge, the Edge VM might
   not get time to run, and the hypervisor might therefore not retrieve
   packets. In that case consider migrating the Edge VM to a host with
   fewer VMs.
2. Increase the ring size by 1024 using the command `set dataplane
   ring-size tx `. If the issue persists even after increasing
   the ring size, contact VMware Support, as the ESX side transmit ring
   buffer might be set to a lower value. If there is no issue on the ESX
   side, it indicates the Edge needs to be scaled to a larger form factor
   deployment to accommodate the traffic.
3. If the alarm keeps flapping, i.e., triggers and resolves very quickly,
   it is due to bursty traffic. In this case check whether tx pps is high
   using the command `get dataplane cpu stats`. If it is not high during
   the alarm active period, contact VMware Support.
   If pps is high, this confirms bursty traffic; consider suppressing the
   alarm.
   NOTE - There is no specific benchmark for what counts as a high pps
   value; it depends on the infrastructure and type of traffic. A
   comparison can be made by noting the value when the alarm is inactive
   and when it is active.
vmwNsxTEdgeHealthEdgeNICOutOfTransmitBufferClear








.1.3.6.1.4.1.6876.120.1.0.2.0.34
Edge NIC vmwNsxTDataCenterEdgeNICName transmit ring buffer usage on Edge node
vmwNsxTDataCenterEntityId is no longer overflowing.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthStorageError








.1.3.6.1.4.1.6876.120.1.0.2.0.37
The following disk partitions on the Edge node are in
read-only mode: vmwNsxTDataCenterDiskPartitionName
          
Action required:
Examine the read-only partition to see if reboot resolves the issue
or the disk needs to be replaced. Contact GSS for more information.
vmwNsxTEdgeHealthStorageErrorClear








.1.3.6.1.4.1.6876.120.1.0.2.0.38
The following disk partitions on the Edge node have recovered from
read-only mode: vmwNsxTDataCenterDiskPartitionName
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeCPUUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.2.0.4
The CPU usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthDatapathThreadDeadlocked








.1.3.6.1.4.1.6876.120.1.0.2.0.45
Edge node datapath thread vmwNsxTDataCenterEdgeThreadName is deadlocked.
          
Action required:
Restart the dataplane service by invoking the NSX CLI command `restart service dataplane`.
vmwNsxTEdgeHealthDatapathThreadDeadlockedClear








.1.3.6.1.4.1.6876.120.1.0.2.0.46
Edge node datapath thread vmwNsxTDataCenterEdgeThreadName is free from deadlock.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathNICThroughputHigh










.1.3.6.1.4.1.6876.120.1.0.2.0.49
The datapath NIC throughput for vmwNsxTDataCenterEdgeNICName on Edge node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterNICThroughput% which is at or above
the high threshold value of vmwNsxTDataCenterNICThroughputThreshold%.
          
Action required:
Examine the traffic throughput levels on the NIC and determine whether
configuration changes are needed. The `get dataplane throughput `
command can be used to monitor throughput.
vmwNsxTEdgeHealthEdgeMemoryUsageHigh









.1.3.6.1.4.1.6876.120.1.0.2.0.5
The memory usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Edge
node. Consider adjusting the Edge appliance form factor size or rebalancing
services to other Edge nodes for the applicable workload.
vmwNsxTEdgeHealthEdgeDatapathNICThroughputHighClear










.1.3.6.1.4.1.6876.120.1.0.2.0.50
The datapath NIC throughput for vmwNsxTDataCenterEdgeNICName on Edge node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterNICThroughput% which
is below the high threshold value of vmwNsxTDataCenterNICThroughputThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDatapathNICThroughputVeryHigh










.1.3.6.1.4.1.6876.120.1.0.2.0.51
The datapath NIC throughput for vmwNsxTDataCenterEdgeNICName on Edge node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterNICThroughput% which is at or above
the very high threshold value of vmwNsxTDataCenterNICThroughputThreshold%.
          
Action required:
Examine the traffic throughput levels on the NIC and determine whether
configuration changes are needed. The `get dataplane throughput `
command can be used to monitor throughput.
vmwNsxTEdgeHealthEdgeDatapathNICThroughputVeryHighClear










.1.3.6.1.4.1.6876.120.1.0.2.0.52
The datapath NIC throughput for vmwNsxTDataCenterEdgeNICName on Edge node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterNICThroughput% which
is below the very high threshold value of vmwNsxTDataCenterNICThroughputThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthFailureDomainDown








.1.3.6.1.4.1.6876.120.1.0.2.0.53
All members of failure domain vmwNsxTDataCenterTransportNodeId are down.
          
Action required:
1. On the Edge node identified by vmwNsxTDataCenterTransportNodeId, check the connectivity
to the management and control planes by invoking the NSX CLI commands
`get managers` and `get controllers`.
2. Invoke the NSX CLI command `get interface eth0` to check the management
interface status.
3. Invoke the NSX CLI command `get services` to check the status of core
services such as dataplane, local-controller, nestdb and router.
4. Inspect /var/log/syslog for any suspicious errors.
5. Reboot the Edge node.
vmwNsxTEdgeHealthFailureDomainDownClear








.1.3.6.1.4.1.6876.120.1.0.2.0.54
All members of failure domain vmwNsxTDataCenterTransportNodeId are reachable.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthMegaFlowCacheHitRateLow









.1.3.6.1.4.1.6876.120.1.0.2.0.55
The mega flow cache hit rate on Edge node vmwNsxTDataCenterEntityId has decreased below the
specified threshold of vmwNsxTDataCenterFlowCacheThreshold% for core vmwNsxTDataCenterCoreId, and
the Datapath CPU usage has increased for the last 30 minutes.
          
Action required:
The flow cache hit rate has decreased over the last 30 minutes, which may
indicate degraded Edge performance. Traffic will continue to be forwarded,
and you may not experience any issues.
Check whether the datapath CPU utilization for Edge vmwNsxTDataCenterEntityId core vmwNsxTDataCenterCoreId has been
high for the last 30 minutes. The Edge will have a low flow-cache hit rate
when new flows are continuously being created, because the first packet of
any new flow is used to set up the flow cache for fast-path processing.
Consider increasing the Edge appliance size or increasing the number of
Edge nodes used for Active/Active gateways.
vmwNsxTEdgeHealthMegaFlowCacheHitRateLowClear







.1.3.6.1.4.1.6876.120.1.0.2.0.56
The flow cache hit rate is in the normal range.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthMicroFlowCacheHitRateLow









.1.3.6.1.4.1.6876.120.1.0.2.0.57
The micro flow cache hit rate on Edge node vmwNsxTDataCenterEntityId has decreased below the
specified threshold of vmwNsxTDataCenterFlowCacheThreshold% for core vmwNsxTDataCenterCoreId, and
the Datapath CPU usage has increased for the last 30 minutes.
          
Action required:
The flow cache hit rate has decreased over the last 30 minutes, which may
indicate degraded Edge performance. Traffic will continue to be forwarded,
and you may not experience any issues.
Check whether the datapath CPU utilization for Edge vmwNsxTDataCenterEntityId core vmwNsxTDataCenterCoreId has been
high for the last 30 minutes. The Edge will have a low flow-cache hit rate
when new flows are continuously being created, because the first packet of
any new flow is used to set up the flow cache for fast-path processing.
Consider increasing the Edge appliance size or increasing the number of
Edge nodes used for Active/Active gateways.
vmwNsxTEdgeHealthMicroFlowCacheHitRateLowClear







.1.3.6.1.4.1.6876.120.1.0.2.0.58
The flow cache hit rate is in the normal range.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeMemoryUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.2.0.6
The memory usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeMemoryUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.2.0.7
The memory usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the configuration, running services and sizing of this Edge
node. Consider adjusting the Edge appliance form factor size or rebalancing
services to other Edge nodes for the applicable workload.
vmwNsxTEdgeHealthEdgeMemoryUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.2.0.8
The memory usage on Edge node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeHealthEdgeDiskUsageHigh










.1.3.6.1.4.1.6876.120.1.0.2.0.9
The disk usage for the Edge node disk partition vmwNsxTDataCenterDiskPartitionName
has reached vmwNsxTDataCenterSystemResourceUsage% which is at or above
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Examine the partition with high usage and see if there are any
unexpected large files that can be removed.
vmwNsxTDHCPPoolLeaseAllocationFailed








.1.3.6.1.4.1.6876.120.1.0.20.0.1
The addresses in IP Pool vmwNsxTDataCenterEntityId of DHCP Server vmwNsxTDataCenterDHCPServerId have
been exhausted. The last DHCP request has failed and future requests will
fail.
          
Action required:
Review the DHCP pool configuration in the NSX UI or on the Edge node where
the DHCP server is running by invoking the NSX CLI command `get dhcp ip-pool`.
Also review the current active leases on the Edge node by invoking the NSX
CLI command `get dhcp lease`.  Compare the leases to the number of active
VMs. Consider reducing the lease time on the DHCP server configuration if
the number of VMs is low compared to the number of active leases. Also
consider expanding the pool range for the DHCP server by visiting the
Networking | Segments | Segment page in the NSX UI.
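
The lease-versus-VM comparison above is simple arithmetic. A minimal sketch
with illustrative counts, as would be gathered from `get dhcp ip-pool` and
`get dhcp lease`:

    # Minimal sketch: flag DHCP pool pressure from lease and VM counts.
    POOL_SIZE = 254        # addresses in the DHCP pool (illustrative)
    ACTIVE_LEASES = 240    # from `get dhcp lease` (illustrative)
    ACTIVE_VMS = 60        # VMs actually on the segment (illustrative)

    usage_pct = 100.0 * ACTIVE_LEASES / POOL_SIZE
    print("Pool usage: %.1f%%" % usage_pct)
    if ACTIVE_LEASES > ACTIVE_VMS:
        print("More leases than VMs: consider reducing the lease time.")
    if usage_pct > 80:
        print("Consider expanding the pool range for the DHCP server.")
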
vmwNsxTDHCPPoolLeaseAllocationFailedClear








.1.3.6.1.4.1.6876.120.1.0.20.0.2
IP Pool vmwNsxTDataCenterEntityId of DHCP Server vmwNsxTDataCenterDHCPServerId is no longer exhausted.
A lease is successfully allocated to the last DHCP request.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDHCPPoolOverloaded









.1.3.6.1.4.1.6876.120.1.0.20.0.3
DHCP Server vmwNsxTDataCenterDHCPServerId IP Pool vmwNsxTDataCenterEntityId usage is approaching
exhaustion with vmwNsxTDataCenterDHCPPoolUsage% IPs allocated.
          
Action required:
Review the DHCP pool configuration in the NSX UI or on the Edge node where
the DHCP server is running by invoking the NSX CLI command `get dhcp ip-pool`.
Also review the current active leases on the Edge node by invoking the NSX
CLI command `get dhcp lease`.  Compare the leases to the number of active
VMs. Consider reducing the lease time on the DHCP server configuration if
the number of VMs is low compared to the number of active leases. Also
consider expanding the pool range for the DHCP server by visiting the
Networking | Segments | Segment page in the NSX UI.
vmwNsxTDHCPPoolOverloadedClear








.1.3.6.1.4.1.6876.120.1.0.20.0.4
DHCP Server vmwNsxTDataCenterDHCPServerId IP Pool vmwNsxTDataCenterEntityId has fallen below the
high usage threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier0GatewayFailoverClear







.1.3.6.1.4.1.6876.120.1.0.21.0.10
The tier0 gateway vmwNsxTDataCenterEntityId is now up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier1GatewayFailover










.1.3.6.1.4.1.6876.120.1.0.21.0.11
The tier1 gateway vmwNsxTDataCenterEntityId failed over from vmwNsxTDataCenterPreviousGatewayState
to vmwNsxTDataCenterCurrentGatewayState, service-router vmwNsxTDataCenterServiceRouterId.
          
Action required:
Invoke the NSX CLI command `get logical-router ` to
identify the tier1 service-router vrf ID. Switch to the vrf context by
invoking `vrf ` then invoke `get high-availability status`
to determine the service that is down.
vmwNsxTHighAvailabilityTier1GatewayFailoverClear







.1.3.6.1.4.1.6876.120.1.0.21.0.12
The tier1 gateway vmwNsxTDataCenterEntityId is now up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier0ServiceGroupFailover











.1.3.6.1.4.1.6876.120.1.0.21.0.13
The service-group cluster vmwNsxTDataCenterEntityId currently does not have an active
instance. It is in state vmwNsxTDataCenterHaState (where 0 is down, 1 is standby and
2 is active) on Edge node vmwNsxTDataCenterTransportNodeId and in state vmwNsxTDataCenterHaState2
on Edge node vmwNsxTDataCenterTransportNodeId2.
          
Action required:
Invoke the NSX CLI command `get logical-router  service_group` to
check all service-groups configured under a given service-router. Examine
the output for the reason a service-group left the active state.
vmwNsxTHighAvailabilityTier0ServiceGroupFailoverClear








.1.3.6.1.4.1.6876.120.1.0.21.0.14
The service-group cluster vmwNsxTDataCenterEntityId now has one active instance
on Edge node vmwNsxTDataCenterTransportNodeId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier1ServiceGroupFailover











.1.3.6.1.4.1.6876.120.1.0.21.0.15
The service-group cluster vmwNsxTDataCenterEntityId currently does not have an active
instance. It is in state vmwNsxTDataCenterHaState (where 0 is down, 1 is standby and
2 is active) on Edge node vmwNsxTDataCenterTransportNodeId and in state vmwNsxTDataCenterHaState2
on Edge node vmwNsxTDataCenterTransportNodeId2.
          
Action required:
Invoke the NSX CLI command `get logical-router  service_group` to
check all service-groups configured under a given service-router. Examine
the output for the reason a service-group left the active state.
vmwNsxTHighAvailabilityTier1ServiceGroupFailoverClear








.1.3.6.1.4.1.6876.120.1.0.21.0.16
The service-group cluster vmwNsxTDataCenterEntityId now has one active instance
on Edge node vmwNsxTDataCenterTransportNodeId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier0ServiceGroupReducedRedundancy









.1.3.6.1.4.1.6876.120.1.0.21.0.17
The service-group cluster vmwNsxTDataCenterEntityId attached to Tier0 service-router
vmwNsxTDataCenterServiceRouterId on Edge node vmwNsxTDataCenterTransportNodeId has failed.
As a result, the service-group cluster currently does not have a
standby instance.
          
Action required:
Invoke the NSX CLI command `get logical-router  service_group` to
check all service-groups configured under a given service-router. Examine
the output for the failure reason of a previously standby service-group.
vmwNsxTHighAvailabilityTier0ServiceGroupReducedRedundancyClear











.1.3.6.1.4.1.6876.120.1.0.21.0.18
The service-group cluster vmwNsxTDataCenterEntityId is in state vmwNsxTDataCenterHaState (where 0 is down,
1 is standby and 2 is active) on Edge node vmwNsxTDataCenterTransportNodeId
and state vmwNsxTDataCenterHaState2 on Edge node vmwNsxTDataCenterTransportNodeId2.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier1ServiceGroupReducedRedundancy









.1.3.6.1.4.1.6876.120.1.0.21.0.19
The service-group cluster vmwNsxTDataCenterEntityId attached to Tier1 service-router
vmwNsxTDataCenterServiceRouterId on Edge node vmwNsxTDataCenterTransportNodeId has failed.
As a result, the service-group cluster currently does not have a
standby instance.
          
Action required:
Invoke the NSX CLI command `get logical-router  service_group` to
check all service-groups configured under a given service-router. Examine
the output for the failure reason of a previously standby service-group.
vmwNsxTHighAvailabilityTier1ServiceGroupReducedRedundancyClear











.1.3.6.1.4.1.6876.120.1.0.21.0.20
The service-group cluster vmwNsxTDataCenterEntityId is in state vmwNsxTDataCenterHaState (where 0 is down,
1 is standby and 2 is active) on Edge node vmwNsxTDataCenterTransportNodeId
and state vmwNsxTDataCenterHaState2 on Edge node vmwNsxTDataCenterTransportNodeId2.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTHighAvailabilityTier0GatewayFailover










.1.3.6.1.4.1.6876.120.1.0.21.0.9
The tier0 gateway vmwNsxTDataCenterEntityId failed over from vmwNsxTDataCenterPreviousGatewayState
to vmwNsxTDataCenterCurrentGatewayState, service-router vmwNsxTDataCenterServiceRouterId.
          
Action required:
Invoke the NSX CLI command `get logical-router ` to
identify the tier0 service-router vrf ID. Switch to the vrf context by
invoking `vrf ` then invoke `get high-availability status`
to determine the service that is down.
vmwNsxTCapacityMaximumCapacity










.1.3.6.1.4.1.6876.120.1.0.22.0.1
The number of objects defined in the system for vmwNsxTDataCenterCapacityDisplayName has
reached vmwNsxTDataCenterCapacityUsageCount which is above the maximum supported
count of vmwNsxTDataCenterMaxSupportedCapacityCount.
          
Action required:
Ensure that the number of NSX objects created is within the limits
supported by NSX. If there are any unused objects, delete them from the
system using the respective NSX UI or API.
Consider increasing the form factor of all Manager nodes and/or Edge
nodes. Note that the form factor of each node type should be the
same. If it is not, the capacity limits for the lowest form factor
deployed are used.
vmwNsxTCapacityMaximumCapacityClear










.1.3.6.1.4.1.6876.120.1.0.22.0.2
The number of objects defined in the system for vmwNsxTDataCenterCapacityDisplayName has
reached vmwNsxTDataCenterCapacityUsageCount and is at or below the maximum supported count of
vmwNsxTDataCenterMaxSupportedCapacityCount.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCapacityMaximumCapacityThreshold










.1.3.6.1.4.1.6876.120.1.0.22.0.3
The number of objects defined in the system for vmwNsxTDataCenterCapacityDisplayName has
reached vmwNsxTDataCenterCapacityUsageCount which is above the maximum capacity
threshold of vmwNsxTDataCenterMaxCapacityThreshold%.
          
Action required:
Navigate to the capacity page in the NSX UI and review current usage versus
threshold limits. If the current usage is expected, consider increasing the
maximum threshold values. If the current usage is unexpected, review the
configured network policies to reduce usage to at or below the maximum threshold.
vmwNsxTCapacityMaximumCapacityThresholdClear










.1.3.6.1.4.1.6876.120.1.0.22.0.4
The number of objects defined in the system for vmwNsxTDataCenterCapacityDisplayName has
reached vmwNsxTDataCenterCapacityUsageCount and is at or below the maximum capacity threshold
of vmwNsxTDataCenterMaxCapacityThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCapacityMinimumCapacityThreshold










.1.3.6.1.4.1.6876.120.1.0.22.0.5
The number of objects defined in the system for vmwNsxTDataCenterCapacityDisplayName has
reached vmwNsxTDataCenterCapacityUsageCount which is above the minimum capacity
threshold of vmwNsxTDataCenterMinCapacityThreshold%.
          
Action required:
Navigate to the capacity page in the NSX UI and review current usage versus
threshold limits. If the current usage is expected, consider increasing the
minimum threshold values. If the current usage is unexpected, review the
configured network policies to reduce usage to at or below the minimum threshold.
vmwNsxTCapacityMinimumCapacityThresholdClear










.1.3.6.1.4.1.6876.120.1.0.22.0.6
The number of objects defined in the system for vmwNsxTDataCenterCapacityDisplayName has
reached vmwNsxTDataCenterCapacityUsageCount and is at or below the minimum capacity threshold
of vmwNsxTDataCenterMinCapacityThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTAuditLogHealthAuditLogFileUpdateError







.1.3.6.1.4.1.6876.120.1.0.24.0.1
At least one of the monitored log files has read-only permissions or has
incorrect user/group ownership on Manager, Global Manager, Edge, Public
Cloud Gateway, KVM or Linux Physical Server nodes; or the log folder is
missing on Windows Physical Server nodes; or rsyslog.log is
missing on Manager, Global Manager, Edge or Public Cloud Gateway nodes.
          
Action required:
1. On Manager and Global Manager nodes, Edge and Public Cloud Gateway
nodes, and Ubuntu KVM Host nodes, ensure the permissions for the /var/log
directory are 775 and the ownership is root:syslog. On RHEL KVM and BMS
Host nodes, ensure the permissions for the /var/log directory are 755 and
the ownership is root:root.
2. On Manager and Global Manager nodes, ensure the file permissions
for auth.log, nsx-audit.log, nsx-audit-write.log, rsyslog.log and syslog
under /var/log are 640 and the ownership is syslog:admin.
3. On Edge and Public Cloud Gateway nodes, ensure the file permissions
for rsyslog.log and syslog under /var/log are 640 and the ownership is
syslog:admin.
4. On Ubuntu KVM Host and Ubuntu Physical Server nodes, ensure the
file permissions of auth.log and vmware/nsx-syslog under /var/log are
640 and the ownership is syslog:admin.
5. On RHEL KVM Host nodes and CentOS/RHEL/SLES Physical Server nodes,
ensure the file permissions of vmware/nsx-syslog under /var/log are 640
and the ownership is root:root.
6. If any of these files have incorrect permissions or ownership, invoke
the commands `chmod  ` and `chown : `.
7. If rsyslog.log is missing on Manager, Global Manager, Edge or Public
Cloud Gateway nodes, invoke the NSX CLI command `restart service syslog`
which restarts the logging service and regenerates /var/log/rsyslog.log.
8. On Windows Physical Server nodes, ensure the log folder:
C:\ProgramData\VMware\NSX\Logs exists. If not, re-install NSX on the
Windows Physical Server nodes.
vmwNsxTAuditLogHealthAuditLogFileUpdateErrorClear







.1.3.6.1.4.1.6876.120.1.0.24.0.2
All monitored log files have the correct file permissions and ownership
and rsyslog.log exists on Manager, Global Manager, Edge or Public
Cloud Gateway nodes.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTAuditLogHealthRemoteLoggingServerError








.1.3.6.1.4.1.6876.120.1.0.24.0.3
Log messages to logging server vmwNsxTDataCenterHostnameOrIPAddressWithPort (vmwNsxTDataCenterEntityId)
cannot be delivered possibly due to an unresolvable FQDN, an invalid TLS
certificate or missing NSX appliance iptables rule.
          
Action required:
1. Ensure that vmwNsxTDataCenterHostnameOrIPAddressWithPort is the correct hostname or
IP address and port.
2. If the logging server is specified using an FQDN, ensure the FQDN is resolvable
from the NSX appliance using the NSX CLI command `nslookup `. If it is not
resolvable, verify that the correct FQDN is specified and that the network DNS
server has the required entry for the FQDN.
3. If the logging server is configured to use TLS, verify the specified certificate
is valid. For example, ensure the logging server is actually using the certificate
or verify the certificate has not expired using the openssl command
`openssl x509 -in  -noout -dates`.
4. NSX appliances use iptables rules to explicitly allow outgoing traffic. Verify
the iptables rule for the logging server is configured properly by invoking the
NSX CLI command `verify logging-servers` which re-configures logging server
iptables rules as needed.
5. If for any reason the logging server is misconfigured, it should be deleted
using the NSX CLI `del logging-server 
proto  level ` command and re-added with the correct configuration.
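
For step 3 above, the certificate validity dates can also be read from any
Python-capable host, mirroring `openssl x509 -noout -dates`. A minimal
sketch, with a hypothetical logging server hostname and port:

    # Minimal sketch: report days until the logging server's TLS certificate
    # expires. Hostname and port are placeholders.
    import socket
    import ssl
    import time

    HOST = "syslog.example.com"   # placeholder logging server FQDN
    PORT = 6514                   # common syslog-over-TLS port (assumption)

    ctx = ssl.create_default_context()
    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()

    not_after = ssl.cert_time_to_seconds(cert["notAfter"])
    days_left = (not_after - time.time()) / 86400.0
    print("Certificate for %s expires in %.0f days" % (HOST, days_left))
    if days_left <= 0:
        print("Certificate has expired; renew it on the logging server.")
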
vmwNsxTAuditLogHealthRemoteLoggingServerErrorClear








.1.3.6.1.4.1.6876.120.1.0.24.0.4
The configuration for logging server vmwNsxTDataCenterHostnameOrIPAddressWithPort (vmwNsxTDataCenterEntityId)
appears correct.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingBGPDown











.1.3.6.1.4.1.6876.120.1.0.28.0.1
In Router vmwNsxTDataCenterLrId, BGP neighbor vmwNsxTDataCenterEntityId (vmwNsxTDataCenterBGPNeighborIP) is down.
Reason: vmwNsxTDataCenterFailureReason.
          
Action required:
1. Invoke the NSX CLI command `get logical-routers`.
2. Switch to the service-router vmwNsxTDataCenterSrId.
3. If the reason indicates a network or configuration error, invoke the
NSX CLI command `get bgp neighbor summary` to check the BGP neighbor
status (see the CLI sketch after these steps).
If the reason indicates `Edge is not ready`, check why the Edge node is not in a good state:
4. Invoke the NSX CLI command `get edge-cluster status` to check the reason why the Edge node might be down.
5. Invoke the NSX CLI commands `get bfd-config` and `get bfd-sessions` to check if BFD is running well.
6. Check any Edge health related alarms to get more information, and check
/var/log/syslog to see if there are any errors related to BGP connectivity.
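An illustrative NSX Edge CLI session for steps 1-3; the prompts and the VRF
number are hypothetical:
  nsx-edge> get logical-routers                 # note the VRF of the TIER0 service router
  nsx-edge> vrf 2                               # switch to that service router context
  nsx-edge(tier0_sr)> get bgp neighbor summary  # neighbor state should show Established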
vmwNsxTRoutingRoutingDownClear
.1.3.6.1.4.1.6876.120.1.0.28.0.10
At least one BGP/BFD session is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingOSPFNeighborWentDown
.1.3.6.1.4.1.6876.120.1.0.28.0.11
OSPF neighbor vmwNsxTDataCenterPeerAddress moved from full to another state.
          
Action required:
1. Invoke the NSX CLI command `get logical-routers` to get the vrf id and
   switch to TIER0 service router.
2. Run `get ospf neighbor` to check the current state of this neighbor. If the
   neighbor is not listed in the output, the neighbor has gone down or out of
   the network.
3. Invoke the NSX CLI command `ping vmwNsxTDataCenterPeerAddress` to verify the
   connectivity.
4. Also, verify the configuration for both NSX and peer router to ensure that
   timers and area-id match.
5. Check /var/log/syslog to see if there are any errors related to connectivity.
vmwNsxTRoutingOSPFNeighborWentDownClear
.1.3.6.1.4.1.6876.120.1.0.28.0.12
OSPF neighbor vmwNsxTDataCenterPeerAddress moved to full state.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingProxyARPNotConfiguredForServiceIP
.1.3.6.1.4.1.6876.120.1.0.28.0.13
Proxy ARP for Service IP vmwNsxTDataCenterServiceIP and Service entity vmwNsxTDataCenterEntityId
is not configured because the number of ARP proxy entries generated due to the overlap of
the Service IP with the subnet of lrport vmwNsxTDataCenterLrportId on Router vmwNsxTDataCenterLrId has exceeded
the allowed threshold limit of 16384.
          
Action required:
Reconfigure the Service IP vmwNsxTDataCenterServiceIP for the Service entity vmwNsxTDataCenterEntityId,
or change the subnet of the lrport vmwNsxTDataCenterLrportId on Router vmwNsxTDataCenterLrId, so that the number of
proxy ARP entries generated due to the overlap between the Service IP and the subnet of the lrport is less than
the allowed threshold limit of 16384.
vmwNsxTRoutingProxyARPNotConfiguredForServiceIPClear
.1.3.6.1.4.1.6876.120.1.0.28.0.14
Proxy ARP for Service entity vmwNsxTDataCenterEntityId is generated successfully because the
overlap of the Service IP with the subnet of lrport vmwNsxTDataCenterLrportId on Router vmwNsxTDataCenterLrId is
within the allowed limit of 16384 entries.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingBGPDownClear
.1.3.6.1.4.1.6876.120.1.0.28.0.2
In Router vmwNsxTDataCenterLrId, BGP neighbor vmwNsxTDataCenterEntityId (vmwNsxTDataCenterBGPNeighborIP) is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv4PrefixesFromBGPNeighborApproaching
.1.3.6.1.4.1.6876.120.1.0.28.0.39
Number of IPv4 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP has reached vmwNsxTDataCenterPrefixesCountThreshold. The limit defined for this peer is vmwNsxTDataCenterPrefixesCountMax.
          
Action required:
1. Check the BGP routing policies in the external router.
2. Consider reducing the number of routes advertised by the BGP peer by applying routing policies and filters to the external router.
3. If required, increase the maximum prefixes settings under the BGP neighbor configuration section.
vmwNsxTRoutingMaximumIPv4PrefixesFromBGPNeighborApproachingClear
.1.3.6.1.4.1.6876.120.1.0.28.0.40
Number of IPv4 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP is within the limit vmwNsxTDataCenterPrefixesCountThreshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv4PrefixesFromBGPNeighborExceeded
.1.3.6.1.4.1.6876.120.1.0.28.0.41
Number of IPv4 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP has exceeded the limit of vmwNsxTDataCenterPrefixesCountMax defined for this peer.
          
Action required:
1. Check the BGP routing policies in the external router.
2. Consider reducing the number of routes advertised by the BGP peer by applying routing policies and filters to the external router.
3. If required, increase the maximum prefixes settings under the BGP neighbor configuration section.
vmwNsxTRoutingMaximumIPv4PrefixesFromBGPNeighborExceededClear
.1.3.6.1.4.1.6876.120.1.0.28.0.42
Number of IPv4 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP is within the limit vmwNsxTDataCenterPrefixesCountMax.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv4RouteLimitApproaching
.1.3.6.1.4.1.6876.120.1.0.28.0.43
The number of IPv4 routes has reached vmwNsxTDataCenterRouteLimitThreshold on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
1. Check route redistribution policies and routes received from all external peers.
2. Consider reducing the number of routes by applying routing policies and filters accordingly.
vmwNsxTRoutingMaximumIPv4RouteLimitApproachingClear
.1.3.6.1.4.1.6876.120.1.0.28.0.44
IPv4 routes are within the limit of vmwNsxTDataCenterRouteLimitThreshold on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv4RouteLimitExceeded
.1.3.6.1.4.1.6876.120.1.0.28.0.45
The number of IPv4 routes has exceeded the limit of vmwNsxTDataCenterRouteLimitMaximum on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
1. Check route redistribution policies and routes received from all external peers.
2. Consider reducing the number of routes by applying routing policies and filters accordingly.
vmwNsxTRoutingMaximumIPv4RouteLimitExceededClear
.1.3.6.1.4.1.6876.120.1.0.28.0.46
IPv4 routes are within the limit of vmwNsxTDataCenterRouteLimitMaximum on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv6PrefixesFromBGPNeighborApproaching
.1.3.6.1.4.1.6876.120.1.0.28.0.47
Number of IPv6 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP has reached vmwNsxTDataCenterPrefixesCountThreshold. The limit defined for this peer is vmwNsxTDataCenterPrefixesCountMax.
          
Action required:
1. Check the BGP routing policies in the external router.
2. Consider reducing the number of routes advertised by the BGP peer by applying routing policies and filters to the external router.
3. If required, increase the maximum prefixes settings under the BGP neighbor configuration section.
vmwNsxTRoutingMaximumIPv6PrefixesFromBGPNeighborApproachingClear
.1.3.6.1.4.1.6876.120.1.0.28.0.48
Number of IPv6 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP is within the limit vmwNsxTDataCenterPrefixesCountThreshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv6PrefixesFromBGPNeighborExceeded
.1.3.6.1.4.1.6876.120.1.0.28.0.49
Number of IPv6 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP has exceeded the limit of vmwNsxTDataCenterPrefixesCountMax defined for this peer.
          
Action required:
1. Check the BGP routing policies in the external router.
2. Consider reducing the number of routes advertised by the BGP peer by applying routing policies and filters to the external router.
3. If required, increase the maximum prefixes settings under the BGP neighbor configuration section.
vmwNsxTRoutingStaticRoutingRemoved
.1.3.6.1.4.1.6876.120.1.0.28.0.5
In Router vmwNsxTDataCenterLrId, static route vmwNsxTDataCenterEntityId (vmwNsxTDataCenterStaticAddress) was removed because
BFD was down.
          
Action required:
The static routing entry was removed because the BFD session was down.
1. Invoke the NSX CLI command `get logical-routers`.
2. Switch to the service-router vmwNsxTDataCenterSrId.
3. Invoke the NSX CLI command `ping <BFD peer IP address>` to verify the
connectivity.
Also, verify the configuration in both NSX and the BFD peer to ensure that
timers have not been changed.
vmwNsxTRoutingMaximumIPv6PrefixesFromBGPNeighborExceededClear
.1.3.6.1.4.1.6876.120.1.0.28.0.50
Number of IPv6 vmwNsxTDataCenterSubsequentAddressFamily prefixes received from vmwNsxTDataCenterBGPNeighborIP is within the limit vmwNsxTDataCenterPrefixesCountMax.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv6RouteLimitApproaching
.1.3.6.1.4.1.6876.120.1.0.28.0.51
The number of IPv6 routes has reached vmwNsxTDataCenterRouteLimitThreshold on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
1. Check route redistribution policies and routes received from all external peers.
2. Consider reducing the number of routes by applying routing policies and filters accordingly.
vmwNsxTRoutingMaximumIPv6RouteLimitApproachingClear
.1.3.6.1.4.1.6876.120.1.0.28.0.52
IPv6 routes are within the limit of vmwNsxTDataCenterRouteLimitThreshold on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingMaximumIPv6RouteLimitExceeded
.1.3.6.1.4.1.6876.120.1.0.28.0.53
The number of IPv6 routes has exceeded the limit of vmwNsxTDataCenterRouteLimitMaximum on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
1. Check route redistribution policies and routes received from all external peers.
2. Consider reducing the number of routes by applying routing policies and filters accordingly.
vmwNsxTRoutingMaximumIPv6RouteLimitExceededClear
.1.3.6.1.4.1.6876.120.1.0.28.0.54
IPv6 routes are within the limit of vmwNsxTDataCenterRouteLimitMaximum on the Tier0 Gateway and all Tier0 VRFs on Edge node vmwNsxTDataCenterEdgeNode.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingStaticRoutingRemovedClear
.1.3.6.1.4.1.6876.120.1.0.28.0.6
In Router vmwNsxTDataCenterLrId, static route vmwNsxTDataCenterEntityId (vmwNsxTDataCenterStaticAddress) was re-added as BFD
recovered.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingBFDDownOnExternalInterface
.1.3.6.1.4.1.6876.120.1.0.28.0.7
In Router vmwNsxTDataCenterLrId, BFD session for peer vmwNsxTDataCenterPeerAddress is down.
          
Action required:
1. Invoke the NSX CLI command `get logical-routers`.
2. Switch to the service router vmwNsxTDataCenterSrId.
3. Invoke the NSX CLI command `ping vmwNsxTDataCenterPeerAddress` to verify the
connectivity.
vmwNsxTRoutingBFDDownOnExternalInterfaceClear
.1.3.6.1.4.1.6876.120.1.0.28.0.8
In Router vmwNsxTDataCenterLrId, BFD session for peer vmwNsxTDataCenterPeerAddress is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTRoutingRoutingDown
.1.3.6.1.4.1.6876.120.1.0.28.0.9
All BGP/BFD sessions are down.
          
Action required:
Invoke the NSX CLI command `get logical-routers` to get the tier0 service router
and switch to this vrf, then invoke the following NSX CLI commands.
1. `ping <peer IP address>` to verify connectivity.
2. `get bfd-config` and `get bfd-sessions` to check if BFD is running well.
3. `get bgp neighbor summary` to check if BGP is running well.
Also check /var/log/syslog to see if there are any errors related to BGP
connectivity.
vmwNsxTCertificatesCertificateExpirationApproaching
.1.3.6.1.4.1.6876.120.1.0.3.0.1
Certificate vmwNsxTDataCenterEntityId is approaching expiration.
          
Action required:
Ensure services that are currently using the certificate are updated
to use a new, non-expiring certificate. Once the expiring
certificate is no longer in use, it should be deleted by invoking the
DELETE vmwNsxTDataCenterAPICollectionPathvmwNsxTDataCenterEntityId NSX API.
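As an illustration, assuming the varbinds resolve to the usual certificate
collection path and a certificate UUID, the delete call could look like the
following sketch; the host, credentials and UUID are placeholders:
  # Delete the certificate once no service references it
  curl -k -u 'admin:<password>' -X DELETE \
    'https://nsx-manager.example.com/api/v1/trust-management/certificates/<cert-uuid>'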
vmwNsxTCertificatesCABundleUpdateSuggestedClear
.1.3.6.1.4.1.6876.120.1.0.3.0.10
The trusted CA bundle vmwNsxTDataCenterEntityId has been removed, updated, or is no longer
in use.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesTransportNodeCertificateExpirApproaching
.1.3.6.1.4.1.6876.120.1.0.3.0.17
Certificate for Transport node vmwNsxTDataCenterEntityId is approaching expiration.
          
Action required:
Replace the Transport node vmwNsxTDataCenterEntityId certificate
with a non-expired certificate. The expired
certificate should be replaced by invoking the
POST /api/v1/trust-management/certificates/action/replace-host-certificate/vmwNsxTDataCenterEntityId NSX API.
If the certificate is not replaced, the connection between the Transport
node and the Manager node will break when the certificate expires.
vmwNsxTCertificatesTransportNodeCertificateExpirApproachingClear
.1.3.6.1.4.1.6876.120.1.0.3.0.18
The expiring certificate for Transport node vmwNsxTDataCenterEntityId has been removed or is no longer
approaching expiration.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesTransportNodeCertificateExpired
.1.3.6.1.4.1.6876.120.1.0.3.0.19
Certificate has expired for Transport node vmwNsxTDataCenterEntityId.
          
Action required:
Replace the Transport node vmwNsxTDataCenterEntityId certificate
with a non-expired certificate. The expired
certificate should be replaced by invoking the
POST /api/v1/trust-management/certificates/action/replace-host-certificate/vmwNsxTDataCenterEntityId NSX API.
If the expired certificate is in use by the Transport node, the connection
between the Transport node and the Manager node is broken.
vmwNsxTCertificatesCertificateExpirationApproachingClear
.1.3.6.1.4.1.6876.120.1.0.3.0.2
The expiring certificate vmwNsxTDataCenterEntityId has been removed or is no longer
approaching expiration.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesTransportNodeCertificateExpiredClear
.1.3.6.1.4.1.6876.120.1.0.3.0.20
The expired certificate for Transport node vmwNsxTDataCenterEntityId has been replaced or is no longer
expired.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesTransportNodeCertificateIsAboutToExpire
.1.3.6.1.4.1.6876.120.1.0.3.0.21
Certificate for Transport node vmwNsxTDataCenterEntityId is about to expire.
          
Action required:
Replace the Transport node vmwNsxTDataCenterEntityId certificate
with a non-expired certificate. The expired
certificate should be replaced by invoking the
POST /api/v1/trust-management/certificates/action/replace-host-certificate/vmwNsxTDataCenterEntityId NSX API.
If the certificate is not replaced, the connection between the Transport
node and the Manager node will break when the certificate expires.
vmwNsxTCertificatesTransportNodeCertificateIsAboutToExpireClear
.1.3.6.1.4.1.6876.120.1.0.3.0.22
The expiring certificate for Transport node vmwNsxTDataCenterEntityId has been removed or is no longer
about to expire.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesCertificateExpired
.1.3.6.1.4.1.6876.120.1.0.3.0.3
Certificate vmwNsxTDataCenterEntityId has expired.
          
Action required:
Ensure services that are currently using the certificate are updated
to use a new, non-expired certificate. Once the expired
certificate is no longer in use, it should be deleted by invoking the
DELETE vmwNsxTDataCenterAPICollectionPathvmwNsxTDataCenterEntityId NSX API.
If the expired certificate is in use by the NAPP Platform, the connection
between NSX and the NAPP Platform is broken. See the NAPP Platform
troubleshooting document on using a self-signed NAPP CA certificate
to recover the connection.
          
For more information see:
https://www.vmware.com/esx/support/askvmware/index.php?eventtype=certificates.certificate_expired
vmwNsxTCertificatesCertificateExpiredClear
.1.3.6.1.4.1.6876.120.1.0.3.0.4
The expired certificate vmwNsxTDataCenterEntityId has been removed or is no longer
expired.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesCertificateIsAboutToExpire
.1.3.6.1.4.1.6876.120.1.0.3.0.5
Certificate vmwNsxTDataCenterEntityId is about to expire.
          
Action required:
Ensure services that are currently using the certificate are updated
to use a new, non-expiring certificate. Once the expiring
certificate is no longer in use, it should be deleted by invoking the
DELETE vmwNsxTDataCenterAPICollectionPathvmwNsxTDataCenterEntityId NSX API.
vmwNsxTCertificatesCertificateIsAboutToExpireClear
.1.3.6.1.4.1.6876.120.1.0.3.0.6
The expiring certificate vmwNsxTDataCenterEntityId has been removed or is no longer
about to expire.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesCABundleUpdateRecommended
.1.3.6.1.4.1.6876.120.1.0.3.0.7
The trusted CA bundle vmwNsxTDataCenterEntityId was updated more than
vmwNsxTDataCenterCABundleAgeThreshold days ago.
An update of the trusted CA bundle is recommended.
          
Action required:
Ensure services that are currently using the trusted CA bundle are updated
to use a recently-updated trusted CA bundle. Unless it is a system-provided
bundle, the bundle can be updated using the
PUT /policy/api/v1/infra/cabundles/vmwNsxTDataCenterEntityId NSX API
(see the curl sketch after this paragraph).
Once the outdated bundle is no longer in use, it should be deleted (if not
system-provided) by invoking the
DELETE /policy/api/v1/infra/cabundles/vmwNsxTDataCenterEntityId NSX API.
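A hedged curl sketch of the two calls above; the host, credentials and request
body shape are illustrative rather than authoritative:
  # Replace the bundle content with a refreshed PEM (body abbreviated)
  curl -k -u 'admin:<password>' -X PUT \
    'https://nsx-manager.example.com/policy/api/v1/infra/cabundles/<bundle-id>' \
    -H 'Content-Type: application/json' -d '{"pem_encoded": "<PEM data>"}'
  # Delete the bundle once nothing references it
  curl -k -u 'admin:<password>' -X DELETE \
    'https://nsx-manager.example.com/policy/api/v1/infra/cabundles/<bundle-id>'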
vmwNsxTCertificatesCABundleUpdateRecommendedClear
.1.3.6.1.4.1.6876.120.1.0.3.0.8
The trusted CA bundle vmwNsxTDataCenterEntityId has been removed, updated, or is no longer
in use.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCertificatesCABundleUpdateSuggested
.1.3.6.1.4.1.6876.120.1.0.3.0.9
The trusted CA bundle vmwNsxTDataCenterEntityId was updated more than
vmwNsxTDataCenterCABundleAgeThreshold days ago.
An update of the trusted CA bundle is suggested.
          
Action required:
Ensure services that are currently using the trusted CA bundle are updated
to use a recently-updated trusted CA bundle. Unless it is a system-provided
bundle, the bundle can be updated using the
PUT /policy/api/v1/infra/cabundles/vmwNsxTDataCenterEntityId NSX API.
Once the outdated bundle is no longer in use, it should be deleted (if not
system-provided) by invoking the
DELETE /policy/api/v1/infra/cabundles/vmwNsxTDataCenterEntityId NSX API.
vmwNsxTDNSForwarderDisabled
.1.3.6.1.4.1.6876.120.1.0.30.0.1
DNS forwarder vmwNsxTDataCenterEntityId is disabled.
          
Action required:
1. Invoke the NSX CLI command `get dns-forwarders status` to verify
if the DNS forwarder is in the disabled state.
2. Use the NSX Policy API or Manager API to enable the DNS forwarder if it
should not be in the disabled state.
vmwNsxTDNSForwarderDisabledClear
.1.3.6.1.4.1.6876.120.1.0.30.0.2
DNS forwarder vmwNsxTDataCenterEntityId is enabled.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDNSForwarderDown
.1.3.6.1.4.1.6876.120.1.0.30.0.3
DNS forwarder vmwNsxTDataCenterEntityId is not running. This is impacting the
identified DNS Forwarder that is currently enabled.
          
Action required:
1. Invoke the NSX CLI command `get dns-forwarders status` to verify
if the DNS forwarder is in down state.
2. Check /var/log/syslog to see if there are errors reported.
3. Collect a support bundle and contact the NSX support team.
vmwNsxTDNSForwarderDownClear
.1.3.6.1.4.1.6876.120.1.0.30.0.4
DNS forwarder vmwNsxTDataCenterEntityId is running again.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDNSForwarderUpstreamServerTimeout
.1.3.6.1.4.1.6876.120.1.0.30.0.5
DNS forwarder vmwNsxTDataCenterIntentPath(vmwNsxTDataCenterDNSId) did not receive a timely response
from upstream server vmwNsxTDataCenterDNSUpstreamIP. Compute instance connectivity to
timed-out FQDNs may be impacted.
          
Action required:
1. Invoke the NSX API GET /api/v1/dns/forwarders/vmwNsxTDataCenterDNSId/nslookup?address=<address>&server_ip=vmwNsxTDataCenterDNSUpstreamIP&source_ip=<source-ip>
   (see the curl sketch after these steps). This API request triggers a DNS
   lookup to the upstream server in the DNS forwarder's network namespace.
   <address> is an IP address or FQDN in the same domain as the upstream
   server. <source-ip> is an IP address in the upstream server's zone. If the
   API returns a connection timed out response, there is likely a network
   error or an upstream server problem. Check why DNS lookups are not reaching
   the upstream server or why the upstream server is not returning a response.
   If the API response indicates the upstream server is answering, proceed to
   step 2.
2. Invoke the NSX API GET /api/v1/dns/forwarders/vmwNsxTDataCenterDNSId/nslookup?address=<address>.
   This API request triggers a DNS lookup to the DNS forwarder. If the API
   returns a valid response, the upstream server may have recovered and this
   alarm should get resolved within a few minutes. If the API returns a
   connection timed out response, proceed to step 3.
3. Invoke the NSX CLI command `get dns-forwarder vmwNsxTDataCenterDNSId live-debug server-ip vmwNsxTDataCenterDNSUpstreamIP`.
   This command triggers live debugging on the upstream server and logs details
   and statistics showing why the DNS forwarder is not getting a response.
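For example, a hedged sketch of the step 1 lookup; the host, credentials and
query values are placeholders:
  # DNS lookup through the forwarder toward the upstream server
  curl -k -u 'admin:<password>' \
    'https://nsx-manager.example.com/api/v1/dns/forwarders/<dns-id>/nslookup?address=host.example.com&server_ip=<upstream-ip>&source_ip=<source-ip>'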
vmwNsxTDNSForwarderUpstreamServerTimeoutClear
.1.3.6.1.4.1.6876.120.1.0.30.0.6
DNS forwarder vmwNsxTDataCenterIntentPath(vmwNsxTDataCenterDNSId) upstream server vmwNsxTDataCenterDNSUpstreamIP
is normal.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWCPUUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.31.0.1
The DFW CPU usage on Transport node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is at or above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts. Review
the security design for optimization. For example, use the apply-to
configuration if the rules are not applicable to the entire datacenter.
vmwNsxTDistributedFirewallDFWRulesLimitPervNICExceededClear
.1.3.6.1.4.1.6876.120.1.0.31.0.10
The DFW rules limit for VIF vmwNsxTDataCenterEntityId on the destination host
vmwNsxTDataCenterTransportNodeName dropped below the maximum limit.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWRulesLimitPerHostApproaching
.1.3.6.1.4.1.6876.120.1.0.31.0.11
The DFW rules limit for host vmwNsxTDataCenterTransportNodeName is approaching
the maximum limit.
          
Action required:
Log in to the ESX host vmwNsxTDataCenterTransportNodeName and invoke the NSX CLI
command `get firewall rule-stats total` to get the rule statistics
for rules configured on the ESX host vmwNsxTDataCenterTransportNodeName, then
reduce the number of rules configured for host vmwNsxTDataCenterTransportNodeName.
Check the number of rules configured for various VIFs by using the NSX CLI
command `get firewall <vif-id> ruleset rules` (see the sketch after this
paragraph), and reduce the number of rules configured for those VIFs.
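An illustrative host-side sketch of the two commands above, assuming nsxcli is
available on the ESX host; the VIF ID is a placeholder:
  # Total DFW rule statistics on the host
  nsxcli -c "get firewall rule-stats total"
  # Rule set for one VIF (substitute the VIF ID in question)
  nsxcli -c "get firewall <vif-id> ruleset rules"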
vmwNsxTDistributedFirewallDFWRulesLimitPerHostApproachingClear
.1.3.6.1.4.1.6876.120.1.0.31.0.12
The DFW rules limit for host vmwNsxTDataCenterTransportNodeName dropped below the
threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWRulesLimitPerHostExceeded
.1.3.6.1.4.1.6876.120.1.0.31.0.13
The DFW rules limit for host vmwNsxTDataCenterTransportNodeName
is about to exceed the maximum limit.
          
Action required:
Log in to the ESX host vmwNsxTDataCenterTransportNodeName and invoke the NSX CLI
command `get firewall rule-stats total` to get the rule statistics
for rules configured on the ESX host vmwNsxTDataCenterTransportNodeName, then
reduce the number of rules configured for host vmwNsxTDataCenterTransportNodeName.
Check the number of rules configured for various VIFs by using the NSX CLI
command `get firewall <vif-id> ruleset rules`, and reduce the number of
rules configured for those VIFs.
vmwNsxTDistributedFirewallDFWRulesLimitPerHostExceededClear
.1.3.6.1.4.1.6876.120.1.0.31.0.14
The DFW rules limit for host vmwNsxTDataCenterTransportNodeName dropped below the
maximum limit.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWRulesLimitPervNICApproaching
.1.3.6.1.4.1.6876.120.1.0.31.0.15
The DFW rules limit for VIF vmwNsxTDataCenterEntityId on destination host
vmwNsxTDataCenterTransportNodeName is approaching the maximum limit.
          
Action required:
Log in to the ESX host vmwNsxTDataCenterTransportNodeName and invoke the NSX CLI
command `get firewall <vif-id> ruleset rules` to get the rule
statistics for rules configured on the corresponding VIF.
Reduce the number of rules configured for VIF vmwNsxTDataCenterEntityId.
vmwNsxTDistributedFirewallDFWRulesLimitPervNICApproachingClear
.1.3.6.1.4.1.6876.120.1.0.31.0.16
The DFW rules limit for VIF vmwNsxTDataCenterEntityId on the destination host
vmwNsxTDataCenterTransportNodeName dropped below the threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWCPUUsageVeryHighOnDPU
.1.3.6.1.4.1.6876.120.1.0.31.0.17
The DFW CPU usage on Transport node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% on DPU vmwNsxTDataCenterDPUId which is at or above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts. Review
the security design for optimization. For example, use the apply-to
configuration if the rules are not applicable to the entire datacenter.
vmwNsxTDistributedFirewallDFWCPUUsageVeryHighOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.31.0.18
The DFW CPU usage on Transport node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% on DPU vmwNsxTDataCenterDPUId which is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWMemoryUsageVeryHighOnDPU
.1.3.6.1.4.1.6876.120.1.0.31.0.19
The DFW memory usage vmwNsxTDataCenterHeapType on Transport node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterSystemResourceUsage% on DPU vmwNsxTDataCenterDPUId which
is at or above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
View the current DFW memory usage by invoking the NSX CLI command
`get firewall thresholds` on the DPU. Consider re-balancing the
workloads on this host to other hosts.
vmwNsxTDistributedFirewallDFWCPUUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.31.0.2
The DFW CPU usage on Transport node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWMemoryUsageVeryHighOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.31.0.20
The DFW memory usage vmwNsxTDataCenterHeapType on Transport node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterSystemResourceUsage% on DPU vmwNsxTDataCenterDPUId which
is below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWFloodLimitWarning
.1.3.6.1.4.1.6876.120.1.0.31.0.21
The DFW flood limit for DFW filter vmwNsxTDataCenterEntityId on host
vmwNsxTDataCenterTransportNodeName has reached the warning level of 80% of the configured limit for protocol vmwNsxTDataCenterProtocolName.
          
Action required:
In NSX Manager, check the VMs on the host and the configured flood warning
level of the DFW filter vmwNsxTDataCenterEntityId for protocol vmwNsxTDataCenterProtocolName.
vmwNsxTDistributedFirewallDFWFloodLimitWarningClear
.1.3.6.1.4.1.6876.120.1.0.31.0.22
The warning flood limit condition for DFW filter vmwNsxTDataCenterEntityId on host
vmwNsxTDataCenterTransportNodeName for protocol vmwNsxTDataCenterProtocolName is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWFloodLimitCritical
.1.3.6.1.4.1.6876.120.1.0.31.0.23
The DFW flood limit for DFW filter vmwNsxTDataCenterEntityId on host
vmwNsxTDataCenterTransportNodeName has reached the critical level of 98% of the configured limit for protocol vmwNsxTDataCenterProtocolName.
          
Action required:
In NSX Manager, check the VMs on the host and the configured flood critical
level of the DFW filter vmwNsxTDataCenterEntityId for protocol vmwNsxTDataCenterProtocolName.
vmwNsxTDistributedFirewallDFWFloodLimitCriticalClear
.1.3.6.1.4.1.6876.120.1.0.31.0.24
The critical flood limit condition for DFW filter vmwNsxTDataCenterEntityId on host
vmwNsxTDataCenterTransportNodeName for protocol vmwNsxTDataCenterProtocolName is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWMemoryUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.31.0.3
The DFW memory usage vmwNsxTDataCenterHeapType on Transport node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterSystemResourceUsage% which
is at or above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
View the current DFW memory usage by invoking the NSX CLI command
`get firewall thresholds` on the host. Consider re-balancing the
workloads on this host to other hosts.
vmwNsxTDistributedFirewallDFWMemoryUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.31.0.4
The DFW memory usage vmwNsxTDataCenterHeapType on Transport node
vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterSystemResourceUsage% which
is below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWSessionCountHigh
.1.3.6.1.4.1.6876.120.1.0.31.0.5
The DFW session count is high on Transport node vmwNsxTDataCenterEntityId; it has
reached vmwNsxTDataCenterSystemResourceUsage% which is at or above the threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the network traffic load level of the workloads on the host.
Consider re-balancing the workloads on this host to other hosts.
vmwNsxTDistributedFirewallDFWSessionCountHighClear
.1.3.6.1.4.1.6876.120.1.0.31.0.6
The DFW session count on Transport node vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterSystemResourceUsage% which is below the threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWVmotionFailure
.1.3.6.1.4.1.6876.120.1.0.31.0.7
The DFW vMotion for DFW filter vmwNsxTDataCenterEntityId on destination host
vmwNsxTDataCenterTransportNodeName has failed and the port for the entity has been
disconnected.
          
Action required:
Check the VMs on the host in NSX Manager and manually repush the DFW configuration
through the NSX Manager UI. The DFW policy to be repushed can be traced by the
DFW filter vmwNsxTDataCenterEntityId. Also consider finding the VM to which the DFW filter
is attached and restarting it.
vmwNsxTDistributedFirewallDFWVmotionFailureClear
.1.3.6.1.4.1.6876.120.1.0.31.0.8
The DFW configuration for DFW filter vmwNsxTDataCenterEntityId on the destination host
vmwNsxTDataCenterTransportNodeName has succeeded and the error caused by the DFW vMotion
failure is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedFirewallDFWRulesLimitPervNICExceeded
.1.3.6.1.4.1.6876.120.1.0.31.0.9
The DFW rules limit for VIF vmwNsxTDataCenterEntityId on destination host
vmwNsxTDataCenterTransportNodeName is about to exceed the maximum limit.
          
Action required:
Log in to the ESX host vmwNsxTDataCenterTransportNodeName and invoke the NSX CLI
command `get firewall <vif-id> ruleset rules` to get the rule
statistics for rules configured on the corresponding VIF.
Reduce the number of rules configured for VIF vmwNsxTDataCenterEntityId.
vmwNsxTFederationRtepBGPDown
.1.3.6.1.4.1.6876.120.1.0.32.0.1
RTEP (Remote Tunnel Endpoint) BGP session from source IP vmwNsxTDataCenterBGPSourceIP
to remote location vmwNsxTDataCenterRemoteSiteName neighbor IP vmwNsxTDataCenterBGPNeighborIP is down.
Reason: vmwNsxTDataCenterFailureReason.
          
Action required:
1. Invoke the NSX CLI command `get logical-routers` on the affected edge node.
2. Switch to REMOTE_TUNNEL_VRF context.
3. Invoke the NSX CLI command `get bgp neighbor summary` to check the BGP neighbor status.
4. Alternatively, invoke the NSX API GET /api/v1/transport-nodes/<edge-node-id>/inter-site/bgp/summary
   to get the BGP neighbor status (see the curl sketch after these steps).
5. Invoke the NSX CLI command `get interfaces` and check if the correct RTEP IP address is assigned
   to the interface with name remote-tunnel-endpoint.
6. Check if the ping is working successfully between assigned RTEP IP address (vmwNsxTDataCenterBGPSourceIP) and the
   remote location vmwNsxTDataCenterRemoteSiteName neighbor IP vmwNsxTDataCenterBGPNeighborIP.
7. Check /var/log/syslog for any errors related to BGP.
8. Invoke the NSX API GET or PUT /api/v1/transport-nodes/<edge-node-id> to get/update the remote_tunnel_endpoint
   configuration on the edge node.
   This will update the RTEP IP assigned to the affected edge node.
          
If the reason indicates `Edge is not ready`, check why the Edge node is not in a good state.
1. Invoke the NSX CLI command `get edge-cluster status` to check the reason why the Edge node might be down.
2. Invoke the NSX CLI commands `get bfd-config` and `get bfd-sessions` to check if BFD is running well.
3. Check any Edge health related alarms to get more information.
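For example, a hedged sketch of the step 4 API call; the host, credentials and
node UUID are placeholders:
  # Inter-site BGP summary for the affected edge transport node
  curl -k -u 'admin:<password>' \
    'https://nsx-manager.example.com/api/v1/transport-nodes/<edge-node-uuid>/inter-site/bgp/summary'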
vmwNsxTFederationGMToGMSplitBrainClear
.1.3.6.1.4.1.6876.120.1.0.32.0.10
Global Manager node vmwNsxTDataCenterActiveGlobalManager is the only active Global Manager node now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToGMLatencyWarning
.1.3.6.1.4.1.6876.120.1.0.32.0.17
Latency is higher than expected between Global Managers vmwNsxTDataCenterFromGMPath and vmwNsxTDataCenterToGMPath.
          
Action required:
Check the connectivity from Global Manager vmwNsxTDataCenterFromGMPath(vmwNsxTDataCenterSiteId) to the Global Manager
vmwNsxTDataCenterToGMPath(vmwNsxTDataCenterRemoteSiteId) via ping. If they are not pingable, check for flakiness in WAN connectivity.
vmwNsxTFederationGMToGMLatencyWarningClear
.1.3.6.1.4.1.6876.120.1.0.32.0.18
Latency is below expected levels between Global Managers vmwNsxTDataCenterFromGMPath and vmwNsxTDataCenterToGMPath.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToGMSynchronizationError
.1.3.6.1.4.1.6876.120.1.0.32.0.19
Active Global Manager vmwNsxTDataCenterFromGMPath to Standby Global Manager vmwNsxTDataCenterToGMPath cannot
synchronize for more than 5 minutes.
          
Action required:
Check the connectivity from Global Manager vmwNsxTDataCenterFromGMPath(vmwNsxTDataCenterSiteId) to the Global Manager
vmwNsxTDataCenterToGMPath(vmwNsxTDataCenterRemoteSiteId) via ping.
vmwNsxTFederationRtepBGPDownClear
.1.3.6.1.4.1.6876.120.1.0.32.0.2
RTEP (Remote Tunnel Endpoint) BGP session from source IP vmwNsxTDataCenterBGPSourceIP
to remote location vmwNsxTDataCenterRemoteSiteName neighbor IP vmwNsxTDataCenterBGPNeighborIP is established.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToGMSynchronizationErrorClear
.1.3.6.1.4.1.6876.120.1.0.32.0.20
Synchronization from active Global Manager vmwNsxTDataCenterFromGMPath to standby vmwNsxTDataCenterToGMPath is healthy.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToGMSynchronizationWarning
.1.3.6.1.4.1.6876.120.1.0.32.0.21
Active Global Manager vmwNsxTDataCenterFromGMPath to Standby Global Manager vmwNsxTDataCenterToGMPath cannot synchronize.
          
Action required:
Check the connectivity from Global Manager vmwNsxTDataCenterFromGMPath(vmwNsxTDataCenterSiteId) to the Global Manager
vmwNsxTDataCenterToGMPath(vmwNsxTDataCenterRemoteSiteId) via ping.
vmwNsxTFederationGMToGMSynchronizationWarningClear
.1.3.6.1.4.1.6876.120.1.0.32.0.22
Synchronization from active Global Manager vmwNsxTDataCenterFromGMPath to standby vmwNsxTDataCenterToGMPath is healthy.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToLMSynchronizationError
.1.3.6.1.4.1.6876.120.1.0.32.0.23
Data synchronization between sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId)
failed for the vmwNsxTDataCenterFlowIdentifier for an extended period. Reason: vmwNsxTDataCenterSyncIssueReason.
          
Action required:
1. Check the network connectivity between remote site and local site via ping.
2. Ensure port TCP/1236 traffic is allowed between the local and remote sites.
3. Ensure the async-replicator service is running on both local and remote sites.
Invoke the GET /api/v1/node/services/async_replicator/status NSX API or the
`get service async_replicator` NSX CLI command to determine if the service is running.
If not running, invoke the POST /api/v1/node/services/async_replicator?action=restart NSX API or
the `restart service async_replicator` NSX CLI command to restart the service
(see the curl sketch after these steps).
4. Check /var/log/async-replicator/ar.log to see if there are errors reported.
5. Collect a support bundle and contact the NSX support team.
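An illustrative sketch of the status check and restart in step 3; the host and
credentials are placeholders:
  # Is the async-replicator service running?
  curl -k -u 'admin:<password>' \
    'https://nsx-manager.example.com/api/v1/node/services/async_replicator/status'
  # Restart it if it is not
  curl -k -u 'admin:<password>' -X POST \
    'https://nsx-manager.example.com/api/v1/node/services/async_replicator?action=restart'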
vmwNsxTFederationGMToLMSynchronizationErrorClear
.1.3.6.1.4.1.6876.120.1.0.32.0.24
Sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) are now synchronized for vmwNsxTDataCenterFlowIdentifier.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToLMSynchronizationWarning
.1.3.6.1.4.1.6876.120.1.0.32.0.25
Data synchronization between sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) failed for the
vmwNsxTDataCenterFlowIdentifier. Reason: vmwNsxTDataCenterSyncIssueReason.
          
Action required:
1. Check the network connectivity between remote site and local site via ping.
2. Ensure port TCP/1236 traffic is allowed between the local and remote sites.
3. Ensure the async-replicator service is running on both local and remote sites.
   Invoke the GET /api/v1/node/services/async_replicator/status NSX API or the
   `get service async_replicator` NSX CLI command to determine if the service is running.
   If not running, invoke the POST /api/v1/node/services/async_replicator?action=restart NSX API or
   the `restart service async_replicator` NSX CLI command to restart the service.
4. Check /var/log/async-replicator/ar.log to see if there are errors reported.
vmwNsxTFederationGMToLMSynchronizationWarningClear
.1.3.6.1.4.1.6876.120.1.0.32.0.26
Sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) are now synchronized for vmwNsxTDataCenterFlowIdentifier.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToLMLatencyWarning
.1.3.6.1.4.1.6876.120.1.0.32.0.27
Latency between sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) has reached
vmwNsxTDataCenterLatencyValue which is above the threshold value of vmwNsxTDataCenterLatencyThreshold.
          
Action required:
1. Check the network connectivity between remote site and local site via ping.
2. Ensure port TCP/1236 traffic is allowed between the local and remote sites.
3. Check /var/log/async-replicator/ar.log to see if there are errors reported.
vmwNsxTFederationGMToLMLatencyWarningClear
.1.3.6.1.4.1.6876.120.1.0.32.0.28
Latency between sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) has reached
vmwNsxTDataCenterLatencyValue which is below the threshold value of vmwNsxTDataCenterLatencyThreshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationQueueOccupancyThresholdExceeded
.1.3.6.1.4.1.6876.120.1.0.32.0.29
Queue (vmwNsxTDataCenterQueueName) used for syncing data between sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId)
has reached size vmwNsxTDataCenterQueueSize which is at or above the maximum threshold of vmwNsxTDataCenterQueueSizeThreshold%.
          
Action required:
Queue size can exceed the threshold due to a communication issue with the remote site or an overloaded system.
Check system performance and /var/log/async-replicator/ar.log to see if there are any errors reported.
vmwNsxTFederationLmToLmSynchronizationError
.1.3.6.1.4.1.6876.120.1.0.32.0.3
The synchronization between vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId)
failed for more than 15 minutes.
          
Action required:
1. Invoke the NSX CLI command `get site-replicator remote-sites` to get connection
   state between the remote locations. If a remote location is connected but not synchronized,
   it is possible that the location is still in the process of master resolution. In
   this case, wait for around 10 seconds and try invoking the CLI again to
   check for the state of the remote location. If a location is disconnected, try the next
   step.
2. Check the connectivity from Local Manager (LM) in location vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) to the LMs
   in location vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) via ping. If they are not pingable,
   check for flakiness in WAN connectivity. If there are no physical network
   connectivity issues, try the next step.
3. Check the /var/log/cloudnet/nsx-ccp.log file on the Manager nodes in the local cluster in
   location vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) that triggered the alarm to see if there are any cross-site
   communication errors. In addition, also look for errors being logged by the
   nsx-appl-proxy subcomponent within /var/log/syslog.
vmwNsxTFederationQueueOccupancyThresholdExceededClear
.1.3.6.1.4.1.6876.120.1.0.32.0.30
Queue (vmwNsxTDataCenterQueueName) used for syncing data between sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId)
has reached size vmwNsxTDataCenterQueueSize which is below the maximum threshold of vmwNsxTDataCenterQueueSizeThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationLMRestoreWhileConfigImportInProgress
.1.3.6.1.4.1.6876.120.1.0.32.0.31
Config import from site vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) is in progress. However, site vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) was restored
from backup by the administrator, leaving it in an inconsistent state.
          
Action required:
1. Log in to NSX Global Manager appliance CLI.
2. Switch to root.
3. Invoke the NSX API DELETE http://localhost:64440/gm/api/v1/infra/sites/<site-name>/onboarding/status in local mode;
   this will delete the site on-boarding status for Global Manager.
4. Re-initiate config on-boarding.
vmwNsxTFederationLMRestoreWhileConfigImportInProgressClear
.1.3.6.1.4.1.6876.120.1.0.32.0.32
Config inconsistency at site vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) is resolved.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationLmToLmSynchronizationErrorClear
.1.3.6.1.4.1.6876.120.1.0.32.0.4
Remote sites vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) are now synchronized.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationLmToLmSynchronizationWarning
.1.3.6.1.4.1.6876.120.1.0.32.0.5
The synchronization between vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) failed for more than 3 minutes.
          
Action required:
1. Invoke the NSX CLI command `get site-replicator remote-sites` to get connection
   state between the remote locations. If a remote location is connected but not synchronized,
   it is possible that the location is still in the process of master resolution. In
   this case, wait for around 10 seconds and try invoking the CLI again to
   check for the state of the remote location. If a location is disconnected, try the next
   step.
2. Check the connectivity from Local Manager (LM) in location vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) to the LMs
   in location vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) via ping. If they are not pingable,
   check for flakiness in WAN connectivity. If there are no physical network
   connectivity issues, try the next step.
3. Check the /var/log/cloudnet/nsx-ccp.log file on the Manager nodes in the local cluster in
   location vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) that triggered the alarm to see if there are any cross-site
   communication errors. In addition, also look for errors being logged by the
   nsx-appl-proxy subcomponent within /var/log/syslog.
vmwNsxTFederationLmToLmSynchronizationWarningClear
.1.3.6.1.4.1.6876.120.1.0.32.0.6
Remote locations vmwNsxTDataCenterSiteName(vmwNsxTDataCenterSiteId) and vmwNsxTDataCenterRemoteSiteName(vmwNsxTDataCenterRemoteSiteId) are now synchronized.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationRtepConnectivityLost
.1.3.6.1.4.1.6876.120.1.0.32.0.7
Edge node vmwNsxTDataCenterTransportNodeName lost RTEP (Remote Tunnel Endpoint) connectivity
with remote location vmwNsxTDataCenterRemoteSiteName.
          
Action required:
1. Invoke the NSX CLI command `get logical-routers` on the affected edge node vmwNsxTDataCenterTransportNodeName.
2. Switch to REMOTE_TUNNEL_VRF context.
3. Invoke the NSX CLI command `get bgp neighbor summary` to check the BGP neighbor status.
4. Alternatively, invoke the NSX API GET /api/v1/transport-nodes/<edge-node-id>/inter-site/bgp/summary
   to get the BGP neighbor status.
5. Invoke the NSX CLI command `get interfaces` and check if the correct RTEP IP address is assigned
   to the interface with name remote-tunnel-endpoint.
6. Check if the ping is working successfully between assigned RTEP IP address and the RTEP IP addresses
   on the remote location vmwNsxTDataCenterRemoteSiteName.
7. Check /var/log/syslog for any errors related to BGP.
8. Invoke the NSX API GET or PUT /api/v1/transport-nodes/<edge-node-id> to get/update the remote_tunnel_endpoint
   configuration on the edge node.
   This will update the RTEP IP assigned to the affected edge node vmwNsxTDataCenterTransportNodeName.
vmwNsxTFederationRtepConnectivityLostClear
.1.3.6.1.4.1.6876.120.1.0.32.0.8
Edge node vmwNsxTDataCenterTransportNodeName has restored RTEP (Remote Tunnel Endpoint) connectivity
with remote location vmwNsxTDataCenterRemoteSiteName.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTFederationGMToGMSplitBrain
.1.3.6.1.4.1.6876.120.1.0.32.0.9
Multiple Global Manager nodes are active: vmwNsxTDataCenterActiveGlobalManagers. Only one Global Manager node must be active
at any time.
          
Action required:
Configure only one Global Manager node as active and all other Global Manager nodes as standby.
vmwNsxTDistributedIDSIPSNSXIDPSEngineCPUUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.33.0.10
NSX-IDPS engine CPU usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the very
high threshold value of 95%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineDown
.1.3.6.1.4.1.6876.120.1.0.33.0.13
NSX IDPS is enabled via NSX policy and IDPS rules are configured, but
the NSX-IDPS engine is down.
          
Action required:
1. Check /var/log/nsx-syslog.log to see if there are errors reported.
2. Invoke the NSX CLI command `get ids engine status` to check
   if NSX Distributed IDPS is in the disabled state. If so,
   invoke `/etc/init.d/nsx-idps start` to start the service.
3. Invoke `/etc/init.d/nsx-vdpi status` to check if nsx-vdpi is running.
   If not, invoke `/etc/init.d/nsx-vdpi start` to start the service
   (see the shell sketch after these steps).
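An illustrative host-side sketch of steps 2 and 3, assuming nsxcli is available
on the ESX host:
  # IDPS engine state as seen by the NSX CLI
  nsxcli -c "get ids engine status"
  # Start the IDPS service, then check (and if needed start) vDPI
  /etc/init.d/nsx-idps start
  /etc/init.d/nsx-vdpi status || /etc/init.d/nsx-vdpi start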
vmwNsxTDistributedIDSIPSNSXIDPSEngineDownClear
.1.3.6.1.4.1.6876.120.1.0.33.0.14
NSX IDPS is in one of the cases below.
1. NSX IDPS is disabled via NSX policy.
2. NSX IDPS engine is enabled, NSX-IDPS engine and vdpi are up, and
   NSX IDPS has been enabled and IDPS rules are configured
   via NSX Policy.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageHigh
.1.3.6.1.4.1.6876.120.1.0.33.0.15
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the high
threshold value of 75%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageHighClear
.1.3.6.1.4.1.6876.120.1.0.33.0.16
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the high threshold
value of 75%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageMediumHigh
.1.3.6.1.4.1.6876.120.1.0.33.0.17
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the medium
high threshold value of 85%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageMediumHighClear
.1.3.6.1.4.1.6876.120.1.0.33.0.18
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the medium
high threshold value of 85%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.33.0.19
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the
very high threshold value of 95%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.33.0.20
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the very high
threshold value of 95%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineCPUUsageMediumHigh
.1.3.6.1.4.1.6876.120.1.0.33.0.21
NSX-IDPS engine CPU usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the medium
high threshold value of 85%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineCPUUsageMediumHighClear
.1.3.6.1.4.1.6876.120.1.0.33.0.22
NSX-IDPS engine CPU usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the medium high
threshold value of 85%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageHighOnDPU
.1.3.6.1.4.1.6876.120.1.0.33.0.23
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the high
threshold value of 75% on DPU vmwNsxTDataCenterDPUId.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemoryUsageHighOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.33.0.24
NSX-IDPS engine memory usage on DPU vmwNsxTDataCenterDPUId has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the high threshold
value of 75%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSMaxEventsReached
.1.3.6.1.4.1.6876.120.1.0.33.0.3
The number of intrusion events in the system is vmwNsxTDataCenterIDSEventsCount,
which is higher than the maximum allowed value vmwNsxTDataCenterMaxIDSEventsAllowed.
          
Action required:
There is no manual intervention required. A purge job will kick in automatically
every 3 minutes and delete 10% of the older records to bring the total intrusion events
count in the system to below the threshold value of 1.5 million events.
vmwNsxTDistributedIDSIPSNSXIDPSEngineDownOnDPU
.1.3.6.1.4.1.6876.120.1.0.33.0.31
NSX IDPS is enabled via NSX policy and IDPS rules are configured, but
the NSX-IDPS engine is down on DPU vmwNsxTDataCenterDPUId.
          
Action required:
1. Check /var/log/nsx-idps/nsx-idps.log and
   /var/log/nsx-syslog.log to see if there are errors reported.
2. Invoke the NSX CLI command `get ids engine status` to check
   if NSX Distributed IDPS is in the disabled state. If so,
   invoke `/etc/init.d/nsx-idps start` to start the service.
3. Invoke `/etc/init.d/nsx-vdpi status` to check if nsx-vdpi is running.
   If not, invoke `/etc/init.d/nsx-vdpi start` to start the service.
vmwNsxTDistributedIDSIPSNSXIDPSEngineDownOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.33.0.32
NSX IDPS is in one of the cases below on DPU vmwNsxTDataCenterDPUId.
1. NSX IDPS is disabled via NSX policy.
2. NSX IDPS engine is enabled, NSX-IDPS engine and vdpi are up, and
   NSX IDPS has been enabled and IDPS rules are configured
   via NSX Policy.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemUsageMedHighOnDPU
.1.3.6.1.4.1.6876.120.1.0.33.0.33
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the medium
high threshold value of 85% on DPU vmwNsxTDataCenterDPUId.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemUsageMedHighOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.33.0.34
NSX-IDPS engine memory usage on DPU vmwNsxTDataCenterDPUId has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the medium
high threshold value of 85%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemUsageVeryHighDPU
.1.3.6.1.4.1.6876.120.1.0.33.0.35
NSX-IDPS engine memory usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the
very high threshold value of 95% on DPU vmwNsxTDataCenterDPUId.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineMemUsageVeryHighDPUClear
.1.3.6.1.4.1.6876.120.1.0.33.0.36
NSX-IDPS engine memory usage on DPU vmwNsxTDataCenterDPUId has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the very high
threshold value of 95%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSMaxEventsReachedClear
.1.3.6.1.4.1.6876.120.1.0.33.0.4
The number of intrusion events in the system is vmwNsxTDataCenterIDSEventsCount,
which is below the maximum allowed value vmwNsxTDataCenterMaxIDSEventsAllowed.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSBypassedCPUOversubscribed
.1.3.6.1.4.1.6876.120.1.0.33.0.53
The IDPS engine has insufficient CPU resources and is unable to keep pace
with the incoming traffic, resulting in the excess traffic being bypassed.
For more details, log in to the ESX host and issue the following command:
`vsipioctl getdpiinfo -s` and look at the oversubscription stats.
          
Action required:
Review the reason for the oversubscription. Consider moving certain applications
to a different host.
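As a sketch, the oversubscription counters mentioned above can be inspected
from the ESX host shell (the grep filter is an assumption; output fields vary
by release):
  # Dump DPI info and filter for the oversubscription stats
  vsipioctl getdpiinfo -s | grep -i oversubscription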
vmwNsxTDistributedIDSIPSIDPSBypassedCPUOversubscribedClear







.1.3.6.1.4.1.6876.120.1.0.33.0.54
The distributed IDPS engine has adequate CPU resources and is not bypassing
any traffic.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSBypassedNetworkOversubscribed







.1.3.6.1.4.1.6876.120.1.0.33.0.55
The distributed IDPS engine is unable to keep pace with the rate of incoming traffic,
resulting in the excess traffic being bypassed. For more details, log in to
the ESX host and issue the following command: `vsipioctl getdpiinfo -s` and
look at the oversubscription stats.
          
Action required:
Review the reason for the oversubscription. Review the IDPS rules to reduce the
amount of traffic subject to the IDPS service.
vmwNsxTDistributedIDSIPSIDPSBypassedNetworkOversubscribedClear







.1.3.6.1.4.1.6876.120.1.0.33.0.56
The distributed IDPS engine is not bypassing any traffic.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSEngineCPUOversubscriptionHigh








.1.3.6.1.4.1.6876.120.1.0.33.0.57
CPU utilization for the distributed IDPS engine is at or above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the reason for the oversubscription. Consider moving certain applications
to a different host.
vmwNsxTDistributedIDSIPSIDPSEngineCPUOversubscriptionHighClear








.1.3.6.1.4.1.6876.120.1.0.33.0.58
CPU utilization for the distributed IDPS engine is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSCPUOversubscriptionVeryHigh








.1.3.6.1.4.1.6876.120.1.0.33.0.59
CPU utilization for the distributed IDPS engine is at or above the very
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the reason for the oversubscription. Consider moving certain applications
to a different host.
vmwNsxTDistributedIDSIPSIDPSCPUOversubscriptionVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.33.0.60
CPU utilization for the distributed IDPS engine is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSDroppedCPUOversubscribed







.1.3.6.1.4.1.6876.120.1.0.33.0.61
The distributed IDPS engine has insufficient CPU resources and is unable to keep pace
with the incoming traffic, resulting in the excess traffic being dropped.
For more details, log in to the ESX host and issue the following command:
`vsipioctl getdpiinfo -s` and look at the oversubscription stats.
          
Action required:
Review the reason for the oversubscription. Consider moving certain applications
to a different host.
vmwNsxTDistributedIDSIPSIDPSDroppedCPUOversubscribedClear







.1.3.6.1.4.1.6876.120.1.0.33.0.62
The distributed IDPS engine has adequate CPU resources and is not dropping
any traffic.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSDroppedNetworkOversubscribed







.1.3.6.1.4.1.6876.120.1.0.33.0.63
The distributed IDPS engine is unable to keep pace with the rate of incoming traffic,
resulting in the excess traffic being dropped. For more details, log in to
the ESX host and issue the following command: `vsipioctl getdpiinfo -s` and
look at the oversubscription stats.
          
Action required:
Review the reason for the oversubscription. Review the IDPS rules to reduce the
amount of traffic subject to the IDPS service.
vmwNsxTDistributedIDSIPSIDPSDroppedNetworkOversubscribedClear







.1.3.6.1.4.1.6876.120.1.0.33.0.64
The distributed IDPS engine is not dropping any traffic.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSNetworkOversubscriptionHigh








.1.3.6.1.4.1.6876.120.1.0.33.0.65
Network utilization for the distributed IDPS engine is at or above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the reason for the oversubscription. Review the IDPS rules to reduce the
amount of traffic subject to the IDPS service.
vmwNsxTDistributedIDSIPSIDPSNetworkOversubscriptionHighClear








.1.3.6.1.4.1.6876.120.1.0.33.0.66
Network utilization for the distributed IDPS engine is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSIDPSNetworkOversubscriptionVeryHigh








.1.3.6.1.4.1.6876.120.1.0.33.0.67
Network utilization for the distributed IDPS engine is at or above the very
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Review the reason for the oversubscription. Review the IDPS rules to reduce the
amount of traffic subject to the IDPS service.
vmwNsxTDistributedIDSIPSIDPSNetworkOversubscriptionVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.33.0.68
Network utilization for the distributed IDPS engine is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineCPUUsageHigh








.1.3.6.1.4.1.6876.120.1.0.33.0.7
NSX-IDPS engine CPU usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the high
threshold value of 75%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTDistributedIDSIPSNSXIDPSEngineCPUUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.33.0.8
NSX-IDPS engine CPU usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is below the high
threshold value of 75%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTDistributedIDSIPSNSXIDPSEngineCPUUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.33.0.9
NSX-IDPS engine CPU usage has reached
vmwNsxTDataCenterSystemResourceUsage%, which is at or above the very
high threshold value of 95%.
          
Action required:
Consider re-balancing the VM workloads on this host to other hosts.
vmwNsxTCommunicationManagementChannelToTransportNodeDown









.1.3.6.1.4.1.6876.120.1.0.35.0.1
Management channel to Transport Node vmwNsxTDataCenterTransportNodeName
(vmwNsxTDataCenterTransportNodeAddress) is down for 5 minutes.
          
Action required:
Ensure there is network connectivity between the Manager nodes
and Transport node vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterTransportNodeAddress)
and no firewalls are blocking traffic between the nodes.
On Windows Transport nodes, ensure the nsx-proxy service is running on the
Transport node by invoking the command `C:\NSX\nsx-proxy\nsx-proxy.ps1 status`
in the Windows PowerShell. If it is not running, restart it by
invoking the command `C:\NSX\nsx-proxy\nsx-proxy.ps1 restart`.
On all other Transport nodes, ensure the nsx-proxy service is running on the
Transport node by invoking the command `/etc/init.d/nsx-proxy status`.
If it is not running, restart it by invoking the command
`/etc/init.d/nsx-proxy restart`.
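For non-Windows Transport nodes, the status check and conditional restart above
can be scripted as a small sketch (assuming the init script exits nonzero when
the service is stopped):
  # Restart nsx-proxy only if the status check fails
  if ! /etc/init.d/nsx-proxy status; then
      /etc/init.d/nsx-proxy restart
  fi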
vmwNsxTCommunicationControlChannelToManagerNodeDownTooLongClear








.1.3.6.1.4.1.6876.120.1.0.35.0.10
Transport node vmwNsxTDataCenterEntityId restores the control plane connection to Manager node
vmwNsxTDataCenterApplianceAddress.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationControlChannelToTransportNodeDown










.1.3.6.1.4.1.6876.120.1.0.35.0.11
Controller service on Manager node vmwNsxTDataCenterApplianceAddress (vmwNsxTDataCenterCentralControlPlaneId) to Transport
node vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) is down for at least three minutes from the Controller service's point of view.
          
Action required:
1. Check the connectivity between the Controller service vmwNsxTDataCenterCentralControlPlaneId and the
Transport node vmwNsxTDataCenterEntityId interface via ping and traceroute. This can be done from the NSX
Manager node admin CLI. The ping test should not see drops and should have consistent latency values.
VMware recommends latency values of 150ms or less.
2. Navigate to System | Fabric | Nodes | Transport node vmwNsxTDataCenterEntityId on the NSX UI to check whether
the TCP connection between the Controller service on Manager node vmwNsxTDataCenterApplianceAddress
(vmwNsxTDataCenterCentralControlPlaneId) and Transport node vmwNsxTDataCenterEntityId is established. If not, check
firewall rules on the network and the hosts to see whether port 1235 is blocking Transport node vmwNsxTDataCenterEntityId
connection requests. Ensure that no host firewalls or network firewalls in the
underlay are blocking the required IP ports between Manager nodes and Transport nodes. The ports are
documented in the ports and protocols tool at https://ports.vmware.com/.
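A sketch of the checks in steps 1-2 above (ping and traceroute are available in
the NSX Manager admin CLI; the netstat filter and the placeholders are
assumptions for illustration):
  # From the NSX Manager admin CLI: basic reachability to the Transport node
  ping <transport-node-ip>
  traceroute <transport-node-ip>
  # From the Manager root shell: confirm the Controller service listens on 1235
  netstat -anp | grep ':1235'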
vmwNsxTCommunicationControlChannelToTransportNodeDownClear









.1.3.6.1.4.1.6876.120.1.0.35.0.12
Controller service on Manager node vmwNsxTDataCenterApplianceAddress (vmwNsxTDataCenterCentralControlPlaneId) restores the
connection to Transport node vmwNsxTDataCenterEntityId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagementChannelToManagerNodeDown










.1.3.6.1.4.1.6876.120.1.0.35.0.13
Management channel to Manager Node vmwNsxTDataCenterManagerNodeId
(vmwNsxTDataCenterApplianceAddress) is down for 5 minutes.
          
Action required:
Ensure there is network connectivity between the
Transport node vmwNsxTDataCenterTransportNodeId
and master Manager node. Also ensure no firewalls are blocking traffic
between the nodes. Ensure the messaging manager service is running on
Manager nodes by invoking the command `/etc/init.d/messaging-manager status`.
If the messaging manager is not running, restart it by
invoking the command `/etc/init.d/messaging-manager restart`.
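A minimal sketch of the messaging-manager check above, run on each Manager node
(assuming the init script exits nonzero when the service is stopped):
  # Restart messaging-manager only if the status check fails
  /etc/init.d/messaging-manager status || /etc/init.d/messaging-manager restart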
vmwNsxTCommunicationManagementChannelToManagerNodeDownClear









.1.3.6.1.4.1.6876.120.1.0.35.0.14
Management channel to Manager Node vmwNsxTDataCenterManagerNodeId
(vmwNsxTDataCenterApplianceAddress) is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagerClusterLatencyHigh











.1.3.6.1.4.1.6876.120.1.0.35.0.17
The network latency between Manager nodes vmwNsxTDataCenterManagerNodeId (vmwNsxTDataCenterApplianceAddress)
and vmwNsxTDataCenterRemoteManagerNodeId (vmwNsxTDataCenterRemoteApplianceAddress) is more than 10ms for the last 5 minutes.
          
Action required:
Ensure there are no firewall rules blocking ping traffic between the Manager nodes.
If there are other high bandwidth servers and applications sharing the local network,
consider moving these to a different network.
vmwNsxTCommunicationManagerClusterLatencyHighClear











.1.3.6.1.4.1.6876.120.1.0.35.0.18
The network latency between Manager nodes vmwNsxTDataCenterManagerNodeId (vmwNsxTDataCenterApplianceAddress)
and vmwNsxTDataCenterRemoteManagerNodeId (vmwNsxTDataCenterRemoteApplianceAddress) is within 10ms.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationControlChannelToTransportNodeDownLong










.1.3.6.1.4.1.6876.120.1.0.35.0.19
Controller service on Manager node vmwNsxTDataCenterApplianceAddress (vmwNsxTDataCenterCentralControlPlaneId) to Transport
node vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) is down for at least 15 minutes from the Controller service's point of view.
          
Action required:
1. Check the connectivity between the Controller service vmwNsxTDataCenterCentralControlPlaneId and the
Transport node vmwNsxTDataCenterEntityId interface via ping and traceroute. This can be done from the NSX
Manager node admin CLI. The ping test should not see drops and should have consistent latency values.
VMware recommends latency values of 150ms or less.
2. Navigate to System | Fabric | Nodes | Transport node vmwNsxTDataCenterEntityId on the NSX UI to check whether
the TCP connection between the Controller service on Manager node vmwNsxTDataCenterApplianceAddress
(vmwNsxTDataCenterCentralControlPlaneId) and Transport node vmwNsxTDataCenterEntityId is established. If not, check
firewall rules on the network and the hosts to see whether port 1235 is blocking Transport node vmwNsxTDataCenterEntityId
connection requests. Ensure that no host firewalls or network firewalls in the
underlay are blocking the required IP ports between Manager nodes and Transport nodes. The ports are
documented in the ports and protocols tool at https://ports.vmware.com/.
vmwNsxTCommunicationManagementChannelToTransportNodeDownClear









.1.3.6.1.4.1.6876.120.1.0.35.0.2
Management channel to Transport Node vmwNsxTDataCenterTransportNodeName
(vmwNsxTDataCenterTransportNodeAddress) is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationControlChannelToTransportNodeDownLongClear









.1.3.6.1.4.1.6876.120.1.0.35.0.20
Controller service on Manager node vmwNsxTDataCenterApplianceAddress (vmwNsxTDataCenterCentralControlPlaneId) restores the
connection to Transport node vmwNsxTDataCenterEntityId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagerFQDNLookupFailure








.1.3.6.1.4.1.6876.120.1.0.35.0.21
DNS lookup failed for Manager node vmwNsxTDataCenterEntityId with FQDN
vmwNsxTDataCenterApplianceFQDN and the publish_fqdns flag was set.
          
Action required:
1. Assign correct FQDNs to all Manager nodes and verify the DNS
configuration is correct for successful lookup of all Manager
nodes' FQDNs.
2. Alternatively, disable the use of FQDNs by invoking the NSX API
PUT /api/v1/configs/management with publish_fqdns set to false in the
request body. After that, calls from Transport nodes and from Federation
to Manager nodes in this cluster will use only IP addresses.
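A hedged curl sketch of option 2 (manager address and credentials are
placeholders; the PUT body typically must echo the current _revision returned
by the GET, so verify against your release's API guide):
  # Read the current management config (including _revision)
  curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/configs/management
  # Disable FQDN publishing; include the _revision value from the GET response
  curl -k -u 'admin:<password>' -X PUT \
    -H 'Content-Type: application/json' \
    -d '{"publish_fqdns": false, "_revision": <revision>}' \
    https://<nsx-manager>/api/v1/configs/management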
vmwNsxTCommunicationManagerFQDNLookupFailureClear








.1.3.6.1.4.1.6876.120.1.0.35.0.22
DNS lookup succeeded for Manager node vmwNsxTDataCenterEntityId with FQDN
vmwNsxTDataCenterApplianceFQDN or the publish_fqdns flag was cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagerFQDNReverseLookupFailure








.1.3.6.1.4.1.6876.120.1.0.35.0.23
Reverse DNS lookup failed for Manager node vmwNsxTDataCenterEntityId with IP address
vmwNsxTDataCenterApplianceAddress and the publish_fqdns flag was set.
          
Action required:
1. Assign correct FQDNs to all Manager nodes and verify the DNS
configuration is correct for successful reverse lookup of the Manager
node's IP address.
2. Alternatively, disable the use of FQDNs by invoking the NSX API
PUT /api/v1/configs/management with publish_fqdns set to false in the
request body. After that calls from Transport nodes and from Federation
to Manager nodes in this cluster will use only IP addresses.
vmwNsxTCommunicationManagerFQDNReverseLookupFailureClear








.1.3.6.1.4.1.6876.120.1.0.35.0.24
Reverse DNS lookup succeeded for Manager node vmwNsxTDataCenterEntityId with IP address
vmwNsxTDataCenterApplianceAddress or the publish_fqdns flag was cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagementChannelToManagerNodeDownLong










.1.3.6.1.4.1.6876.120.1.0.35.0.25
Management channel to Manager Node vmwNsxTDataCenterManagerNodeId
(vmwNsxTDataCenterApplianceAddress) is down for 15 minutes.
          
Action required:
Ensure there is network connectivity between the
Transport node vmwNsxTDataCenterTransportNodeId
and master Manager nodes. Also ensure no firewalls are blocking traffic
between the nodes. Ensure the messaging manager service is running on
Manager nodes by invoking the command `/etc/init.d/messaging-manager status`.
If the messaging manager is not running, restart it by
invoking the command `/etc/init.d/messaging-manager restart`.
vmwNsxTCommunicationManagementChannelToManagerNodeDownLongClear









.1.3.6.1.4.1.6876.120.1.0.35.0.26
Management channel to Manager Node vmwNsxTDataCenterManagerNodeId
(vmwNsxTDataCenterApplianceAddress) is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationNetworkLatencyHigh









.1.3.6.1.4.1.6876.120.1.0.35.0.29
The network latency between Manager nodes and host
vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterTransportNodeAddress) is more than
150 ms for 5 minutes.
          
Action required:
1. Wait for 5 minutes to see if the alarm automatically gets resolved.
2. Ping the NSX Transport node from Manager node. The ping test should not see drops and
have consistent latency values. VMware recommends latency values of 150ms or less.
3. Inspect for any other physical network layer issues. If the problem persists, contact
VMware support.
vmwNsxTCommunicationManagerControlChannelDown









.1.3.6.1.4.1.6876.120.1.0.35.0.3
Communication between the management function and the control
function has failed on Manager node vmwNsxTDataCenterManagerNodeName (vmwNsxTDataCenterApplianceAddress).
          
Action required:
1. On Manager node vmwNsxTDataCenterManagerNodeName (vmwNsxTDataCenterApplianceAddress), invoke the
following NSX CLI command: `get service applianceproxy` to check the status
of the service periodically for 60 minutes.
          
2. If the service is not running for more than 60 minutes, invoke
the following NSX CLI command: `restart service applianceproxy` and recheck the
status. If the service is still down, contact VMware support.
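The two steps above amount to the following NSX CLI sequence on the affected
Manager node (sketch; the inline comments are annotations, not CLI syntax):
  get service applianceproxy       # check status; repeat periodically for 60 minutes
  restart service applianceproxy   # only if still not running after 60 minutes
  get service applianceproxy       # recheck; if still down, contact VMware support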
vmwNsxTCommunicationNetworkLatencyHighClear









.1.3.6.1.4.1.6876.120.1.0.35.0.30
The network latency between Manager nodes and host
vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterTransportNodeAddress) is normal.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationLimitedReachabilityOnDPU












.1.3.6.1.4.1.6876.120.1.0.35.0.31
The vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP cannot be reached via vmknic(s) (stack vmwNsxTDataCenterStackAlias)
on DVS vmwNsxTDataCenterDvsAlias on DPU vmwNsxTDataCenterDPUId, but is reachable via vmknic(s) (stack vmwNsxTDataCenterStackAlias)
on other DVS(es).
          
Action required:
This warning does not mean the collector is unreachable. The exported flows
generated by the vertical based on DVS vmwNsxTDataCenterDvsAlias can still reach the collector
vmwNsxTDataCenterCollectorIP via vmknic(s) on DVS(es) other than DVS vmwNsxTDataCenterDvsAlias. If this is unacceptable, the
user can create vmknic(s) with stack vmwNsxTDataCenterStackAlias on DVS vmwNsxTDataCenterDvsAlias,
configure them with an appropriate IPv4(6) address, and then check whether the vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP
can be reached via the newly-created vmknic(s) on DPU vmwNsxTDataCenterDPUId by invoking
`vmkping vmwNsxTDataCenterCollectorIP -S vmwNsxTDataCenterStackAlias -I vmkX` with SSH to the DPU via ESXi enabled.
vmwNsxTCommunicationLimitedReachabilityOnDPUClear












.1.3.6.1.4.1.6876.120.1.0.35.0.32
The vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP can be reached via vmknic(s) (stack vmwNsxTDataCenterStackAlias) on
DVS vmwNsxTDataCenterDvsAlias on DPU vmwNsxTDataCenterDPUId, or the vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP is completely unreachable.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationUnreachableCollectorOnDPU











.1.3.6.1.4.1.6876.120.1.0.35.0.33
The vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP cannot be reached via existing vmknic(s) (stack vmwNsxTDataCenterStackAlias)
on any DVS on DPU vmwNsxTDataCenterDPUId.
          
Action required:
To make the collector reachable for the given vertical on the DVS, the user must ensure there
are vmknic(s) with the expected stack vmwNsxTDataCenterStackAlias created and configured with appropriate IPv4(6)
addresses, and that the network connection to the vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP is healthy. The user has
to perform these checks on DPU vmwNsxTDataCenterDPUId and apply the required configuration to make sure the condition
is met. Finally, if `vmkping vmwNsxTDataCenterCollectorIP -S vmwNsxTDataCenterStackAlias` with SSH to the DPU via ESXi enabled succeeds,
the problem is gone.
vmwNsxTCommunicationUnreachableCollectorOnDPUClear











.1.3.6.1.4.1.6876.120.1.0.35.0.34
The vmwNsxTDataCenterVerticalName collector vmwNsxTDataCenterCollectorIP can now be reached via existing vmknic(s) (stack vmwNsxTDataCenterStackAlias)
on DPU vmwNsxTDataCenterDPUId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagerControlChannelDownClear









.1.3.6.1.4.1.6876.120.1.0.35.0.4
Communication between the management function and the control function
has been restored on Manager node vmwNsxTDataCenterManagerNodeName (vmwNsxTDataCenterApplianceAddress).
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationManagementChannelToTransportNodeDownLg









.1.3.6.1.4.1.6876.120.1.0.35.0.5
Management channel to Transport Node vmwNsxTDataCenterTransportNodeName
(vmwNsxTDataCenterTransportNodeAddress) is down for 15 minutes.
          
Action required:
Ensure there is network connectivity between the Manager nodes
and Transport node vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterTransportNodeAddress)
and no firewalls are blocking traffic between the nodes.
On Windows Transport nodes, ensure the nsx-proxy service is running on the
Transport node by invoking the command `C:\NSX\nsx-proxy\nsx-proxy.ps1 status`
in the Windows PowerShell. If it is not running, restart it by
invoking the command `C:\NSX\nsx-proxy\nsx-proxy.ps1 restart`.
On all other Transport nodes, ensure the nsx-proxy service is running on the
Transport node by invoking the command `/etc/init.d/nsx-proxy status`.
If it is not running, restart it by invoking the command
`/etc/init.d/nsx-proxy restart`.
vmwNsxTCommunicationManagementChannelToTransportNodeDownLgClear









.1.3.6.1.4.1.6876.120.1.0.35.0.6
Management channel to Transport Node vmwNsxTDataCenterTransportNodeName
(vmwNsxTDataCenterTransportNodeAddress) is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationControlChannelToManagerNodeDown









.1.3.6.1.4.1.6876.120.1.0.35.0.7
Transport node vmwNsxTDataCenterEntityId control plane connection to Manager node vmwNsxTDataCenterApplianceAddress is
down for at least vmwNsxTDataCenterTimeoutInMinutes minutes from the Transport node's point of view.
          
Action required:
1. Check the connectivity from Transport node vmwNsxTDataCenterEntityId to the Manager node vmwNsxTDataCenterApplianceAddress
interface via ping. If they are not pingable, check for flakiness in network connectivity.
2. Check the netstat output to see whether the TCP connections are established and whether the
Controller service on the Manager node vmwNsxTDataCenterApplianceAddress is listening for connections on port
1235. If not, check firewall or iptables rules to see whether port 1235 is blocking Transport node
vmwNsxTDataCenterEntityId connection requests. Ensure that no host firewalls or network firewalls in
the underlay are blocking the required IP ports between Manager nodes and Transport nodes.
This is documented in the ports and protocols tool at https://ports.vmware.com/.
3. It is possible that the Transport node vmwNsxTDataCenterEntityId may still be in maintenance mode.
You can check whether the Transport node is in maintenance mode via the following API:
GET https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>
When maintenance mode is set, the Transport node will not be connected to the Controller
service. This is usually the case when a host upgrade is in progress. Wait for a few minutes and
check connectivity again.
Note: This alarm is not critical and should get resolved. GSS need not be contacted for the
notification of this alarm unless the alarm remains unresolved over an extended period of time.
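A hedged curl sketch of the maintenance-mode check in step 3 (manager address,
credentials and node ID are placeholders; the field name, assumed here to be
maintenance_mode, may differ by release):
  # Fetch the Transport node and inspect its maintenance-mode field
  curl -k -u 'admin:<password>' \
    https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id> \
    | grep -i maintenance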
vmwNsxTCommunicationControlChannelToManagerNodeDownClear








.1.3.6.1.4.1.6876.120.1.0.35.0.8
Transport node vmwNsxTDataCenterEntityId restores the control plane connection to Manager node
vmwNsxTDataCenterApplianceAddress.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTCommunicationControlChannelToManagerNodeDownTooLong









.1.3.6.1.4.1.6876.120.1.0.35.0.9
Transport node vmwNsxTDataCenterEntityId control plane connection to Manager node vmwNsxTDataCenterApplianceAddress is
down for at least vmwNsxTDataCenterTimeoutInMinutes minutes from the Transport node's point of view.
          
Action required:
1. Check the connectivity from Transport node vmwNsxTDataCenterEntityId to the Manager node vmwNsxTDataCenterApplianceAddress
interface via ping. If they are not pingable, check for flakiness in network connectivity.
2. Check the netstat output to see whether the TCP connections are established and whether the
Controller service on the Manager node vmwNsxTDataCenterApplianceAddress is listening for connections on port
1235. If not, check firewall or iptables rules to see whether port 1235 is blocking Transport node
vmwNsxTDataCenterEntityId connection requests. Ensure that no host firewalls or network firewalls in
the underlay are blocking the required IP ports between Manager nodes and Transport nodes.
This is documented in the ports and protocols tool at https://ports.vmware.com/.
3. It is possible that the Transport node vmwNsxTDataCenterEntityId may still be in maintenance mode.
You can check whether the Transport node is in maintenance mode via the following API:
GET https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>.
When maintenance mode is set, the Transport node will not be connected to the Controller
service. This is usually the case when a host upgrade is in progress. Wait for a few minutes and
check connectivity again.
vmwNsxTIdentityFirewallConnectivityToLDAPServerLost








.1.3.6.1.4.1.6876.120.1.0.36.0.1
Connectivity to LDAP server vmwNsxTDataCenterLDAPServer is lost.
          
Action required:
Check that:
1. The LDAP server is reachable from the NSX nodes.
2. The LDAP server details are configured correctly in NSX.
3. The LDAP server is running correctly.
4. There are no firewalls blocking access between the LDAP server
   and the NSX nodes.
After the issue is fixed, use TEST CONNECTION in the NSX UI under
Identity Firewall AD to test the connection.
vmwNsxTIdentityFirewallConnectivityToLDAPServerLostClear








.1.3.6.1.4.1.6876.120.1.0.36.0.2
Connectivity to LDAP server vmwNsxTDataCenterLDAPServer is restored.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIdentityFirewallErrorInDeltaSync








.1.3.6.1.4.1.6876.120.1.0.36.0.3
Errors occurred while performing delta sync with vmwNsxTDataCenterDirectoryDomain.
          
Action required:
1. Check whether there are any 'connectivity to LDAP server lost' alarms.
2. Find the error details in /var/log/syslog. Around the alarm trigger
   time, search for the text: Error happened when synchronize LDAP objects
   (see the sketch after this list).
3. Check with the AD administrator whether there are any recent AD changes
   that may cause the errors.
4. If the errors persist, collect the technical support bundle and
   contact VMware technical support.
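A minimal shell sketch of the log search in step 2 (the log text is the
literal string quoted above):
  # Show delta-sync errors around the alarm trigger time
  grep "Error happened when synchronize LDAP objects" /var/log/syslog | tail -n 20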
vmwNsxTIdentityFirewallErrorInDeltaSyncClear








.1.3.6.1.4.1.6876.120.1.0.36.0.4
No errors occurred while performing delta sync with vmwNsxTDataCenterDirectoryDomain.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIPAMIPBlockUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.38.0.1
IP block usage of vmwNsxTDataCenterIntentPath is very high. The IP block is
nearing its total capacity; creation of a subnet using the IP block might fail.
          
Action required:
Review the IP block usage. Use a new IP block for resource creation, or delete unused IP
subnets from the IP block. To check which subnets are using the IP block: from the NSX UI,
navigate to Networking | IP Address Pools | IP Address Pools tab. Select the IP pools
where the IP block is being used, and check the Subnets and Allocated IPs columns on the UI. If no
allocation has been made from the IP pool and it is not going to be used in the future,
then delete the subnet or IP pool. Use the following APIs to check whether the IP block is being used
by an IP pool and whether any IP allocation has been done:
To get the configured subnets of an IP pool, invoke the NSX API
GET /policy/api/v1/infra/ip-pools/<ip-pool-id>/ip-subnets
To get IP allocations, invoke the NSX API
GET /policy/api/v1/infra/ip-pools/<ip-pool-id>/ip-allocations
Note: Deletion of an IP pool/subnet should only be done if it does not have any
allocated IPs and it is not going to be used in the future.
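As a sketch, the two GET calls above can be issued with curl (manager address,
credentials and pool ID are placeholders):
  # Subnets configured under the IP pool
  curl -k -u 'admin:<password>' \
    https://<nsx-manager>/policy/api/v1/infra/ip-pools/<ip-pool-id>/ip-subnets
  # Current allocations from the IP pool
  curl -k -u 'admin:<password>' \
    https://<nsx-manager>/policy/api/v1/infra/ip-pools/<ip-pool-id>/ip-allocations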
vmwNsxTIPAMIPBlockUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.38.0.2
IP block usage of vmwNsxTDataCenterIntentPath is below the threshold level.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIPAMIPPoolUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.38.0.3
IP pool usage of vmwNsxTDataCenterIntentPath is very high.
Creation of entities/services that depend on IPs being allocated from the IP pool might fail.
          
Action required:
Review the IP pool usage.
Release unused IP allocations from the IP pool, or create a new IP pool and use it.
From the NSX UI, navigate to Networking | IP Address Pools | IP Address Pools tab.
Select the IP pool and check the Allocated IPs column, which shows the IPs allocated from the
IP pool. If any IPs are not being used, they can be released.
To release unused IP allocations, invoke the NSX API
DELETE /policy/api/v1/infra/ip-pools/<ip-pool-id>/ip-allocations/<ip-allocation-id>
vmwNsxTIPAMIPPoolUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.38.0.4
IP pool usage of vmwNsxTDataCenterIntentPath is normal now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallICMPFlowCountExceeded









.1.3.6.1.4.1.6876.120.1.0.39.0.21
Gateway firewall flow table usage for ICMP traffic on logical
router vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterFirewallICMPFlowUsage%, which is at or above the
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for ICMP flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
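A sketch of the Edge-node CLI sequence above (sketch only; `get logical-routers`
is assumed here as the way to discover the interface UUID, and output formats
vary by release):
  # On the Edge node (NSX CLI, admin user)
  get logical-routers                                   # locate the logical router and interface UUIDs
  get firewall <interface-uuid> interface stats | json  # check the flow table usage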
vmwNsxTGatewayFirewallICMPFlowCountExceededClear








.1.3.6.1.4.1.6876.120.1.0.39.0.22
Gateway firewall flow table usage on logical router vmwNsxTDataCenterEntityId has fallen
below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallICMPFlowCountHigh









.1.3.6.1.4.1.6876.120.1.0.39.0.23
Gateway firewall flow table usage for ICMP on logical router vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterFirewallICMPFlowUsage%, which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for ICMP flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallICMPFlowCountHighClear








.1.3.6.1.4.1.6876.120.1.0.39.0.24
Gateway firewall flow table usage for ICMP on logical router
vmwNsxTDataCenterEntityId has fallen below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallIPFlowCountExceeded









.1.3.6.1.4.1.6876.120.1.0.39.0.25
Gateway firewall flow table usage for IP traffic on logical router vmwNsxTDataCenterEntityId has
reached vmwNsxTDataCenterFirewallIPFlowUsage%, which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for IP flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallIPFlowCountExceededClear








.1.3.6.1.4.1.6876.120.1.0.39.0.26
Gateway firewall flow table usage on logical router vmwNsxTDataCenterEntityId has fallen
below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallIPFlowCountHigh









.1.3.6.1.4.1.6876.120.1.0.39.0.27
Gateway firewall flow table usage for IP on logical router vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterFirewallIPFlowUsage%, which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for IP flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallIPFlowCountHighClear








.1.3.6.1.4.1.6876.120.1.0.39.0.28
Gateway firewall flow table usage for non-IP flows on logical router
vmwNsxTDataCenterEntityId has fallen below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallTcpHalfOpenFlowCountExceeded









.1.3.6.1.4.1.6876.120.1.0.39.0.29
Gateway firewall flow table usage for TCP half-open traffic on logical
router vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterFirewallHalfopenFlowUsage%, which is at or above the
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for TCP half-open flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallTcpHalfOpenFlowCountExceededClear








.1.3.6.1.4.1.6876.120.1.0.39.0.30
Gateway firewall flow table usage on logical router vmwNsxTDataCenterEntityId has fallen
below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallTcpHalfOpenFlowCountHigh









.1.3.6.1.4.1.6876.120.1.0.39.0.31
Gateway firewall flow table usage for TCP on logical router vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterFirewallHalfopenFlowUsage%, which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for TCP half-open flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallTcpHalfOpenFlowCountHighClear








.1.3.6.1.4.1.6876.120.1.0.39.0.32
Gateway firewall flow table usage for TCP half-open on logical router
vmwNsxTDataCenterEntityId has fallen below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallUDPFlowCountExceeded









.1.3.6.1.4.1.6876.120.1.0.39.0.33
Gateway firewall flow table usage for UDP traffic on logical
router vmwNsxTDataCenterEntityId has reached vmwNsxTDataCenterFirewallUDPFlowUsage%, which is at or above the
high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for UDP flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallUDPFlowCountExceededClear







.1.3.6.1.4.1.6876.120.1.0.39.0.34
rewall flow table usage on logical router vmwNsxTDataCenterEntityId has reached
below the high threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGatewayFirewallUDPFlowCountHigh









.1.3.6.1.4.1.6876.120.1.0.39.0.35
Gateway firewall flow table usage for UDP on logical router vmwNsxTDataCenterEntityId has reached
vmwNsxTDataCenterFirewallUDPFlowUsage%, which is at or above the high threshold value of
vmwNsxTDataCenterSystemUsageThreshold%.
New flows will be dropped by the Gateway firewall when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <interface-uuid> interface stats | json` using the
right interface UUID, and check the flow table usage for UDP flows.
Verify that the traffic flows going through the gateway are not a DoS attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider increasing the alarm threshold or routing new traffic to another Edge node.
vmwNsxTGatewayFirewallUDPFlowCountHighClear







.1.3.6.1.4.1.6876.120.1.0.39.0.36
Gateway firewall flow table usage for UDP on logical router
vmwNsxTDataCenterEntityId has fallen below the high threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTPasswordManagementPasswordExpirationApproaching









.1.3.6.1.4.1.6876.120.1.0.4.0.1
Password for user vmwNsxTDataCenterUsername is approaching expiration in
vmwNsxTDataCenterPasswordExpirationDays days.
          
Action required:
The password for the user vmwNsxTDataCenterUsername needs to be changed soon. For example,
to apply a new password to a user, invoke the following NSX API with a valid
password in the request body: PUT /api/v1/node/users/<userid> where <userid>
is the ID of the user.
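A hedged curl sketch of the password change above (manager address, credentials
and user ID are placeholders; newer releases may also require the old password
in the body, so check your release's API guide):
  # List local users to find the numeric <userid>
  curl -k -u 'admin:<password>' https://<nsx-manager>/api/v1/node/users
  # Apply the new password
  curl -k -u 'admin:<password>' -X PUT \
    -H 'Content-Type: application/json' \
    -d '{"password": "<new-password>"}' \
    https://<nsx-manager>/api/v1/node/users/<userid>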
vmwNsxTPasswordManagementPasswordExpirationApproachingClear








.1.3.6.1.4.1.6876.120.1.0.4.0.2
Password for the user vmwNsxTDataCenterUsername has been changed successfully or
is no longer expired, or the user is no longer active.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTPasswordManagementPasswordExpired








.1.3.6.1.4.1.6876.120.1.0.4.0.3
Password for user vmwNsxTDataCenterUsername has expired.
          
Action required:
The password for user vmwNsxTDataCenterUsername must be changed now to access the
system. For example, to apply a new password to a user, invoke the
following NSX API with a valid password in the request body: PUT
/api/v1/node/users/<userid> where <userid> is the ID of the user. If the
admin user (with userid 10000) password has expired, admin must log in to
the system via SSH (if enabled) or console in order to change the password.
Upon entering the current expired password, admin will be prompted to enter
a new password.
vmwNsxTPasswordManagementPasswordExpiredClear








.1.3.6.1.4.1.6876.120.1.0.4.0.4
Password for user vmwNsxTDataCenterUsername has been changed successfully or
is no longer expired, or the user is no longer active.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTPasswordManagementPasswordIsAboutToExpire









.1.3.6.1.4.1.6876.120.1.0.4.0.5
Password for user vmwNsxTDataCenterUsername is about to expire in
vmwNsxTDataCenterPasswordExpirationDays days.
          
Action required:
Ensure the password for the user vmwNsxTDataCenterUsername is changed immediately. For
example, to apply a new password to a user, invoke the following NSX API
with a valid password in the request body: PUT /api/v1/node/users/<userid>
where <userid> is the ID of the user.
vmwNsxTPasswordManagementPasswordIsAboutToExpireClear








.1.3.6.1.4.1.6876.120.1.0.4.0.6
Password for the user vmwNsxTDataCenterUsername has been changed successfully or
is no longer expired, or the user is no longer active.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTClusteringClusterDegraded









.1.3.6.1.4.1.6876.120.1.0.40.0.1
Group member vmwNsxTDataCenterManagerNodeId of service vmwNsxTDataCenterGroupType is down.
          
Action required:
1. Invoke the NSX CLI command `get cluster status` to view the status
   of the group members of the cluster (see the sketch after this list).
2. Ensure the service for vmwNsxTDataCenterGroupType is running on the node. Invoke the GET
   /api/v1/node/services/<service-name>/status NSX API or the
   `get service <service-name>` NSX CLI command to determine whether the service is running.
   If it is not running, invoke the POST /api/v1/node/services/<service-name>?action=restart NSX API or
   the `restart service <service-name>` NSX CLI command to restart the service.
3. Check the logs under /var/log/ for service vmwNsxTDataCenterGroupType to see whether any errors are reported.
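A sketch of steps 1-2 as an NSX CLI sequence on the Manager node
(<service-name> is a placeholder for the vmwNsxTDataCenterGroupType service;
the inline comments are annotations, not CLI syntax):
  get cluster status               # view group member status
  get service <service-name>       # check whether the service is running
  restart service <service-name>   # restart it only if it is not running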
vmwNsxTClusteringClusterDegradedClear









.1.3.6.1.4.1.6876.120.1.0.40.0.2
Group member vmwNsxTDataCenterManagerNodeId of vmwNsxTDataCenterGroupType is up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTClusteringClusterUnavailable









.1.3.6.1.4.1.6876.120.1.0.40.0.3
Group members vmwNsxTDataCenterManagerNodeIDS of service vmwNsxTDataCenterGroupType are down.
          
Action required:
1. Ensure the service for vmwNsxTDataCenterGroupType is running on the node. Invoke the GET
   /api/v1/node/services/<service-name>/status NSX API or the
   `get service <service-name>` NSX CLI command to determine whether the service is running.
   If it is not running, invoke the POST /api/v1/node/services/<service-name>?action=restart NSX API or
   the `restart service <service-name>` NSX CLI command to restart the service.
2. Check the logs under /var/log/ for service vmwNsxTDataCenterGroupType to see whether any errors are reported.
vmwNsxTClusteringClusterUnavailableClear









.1.3.6.1.4.1.6876.120.1.0.40.0.4
Group members vmwNsxTDataCenterManagerNodeIDS of service vmwNsxTDataCenterGroupType are up.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformCommunicationMgrDisconnected








.1.3.6.1.4.1.6876.120.1.0.41.0.1
NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is disconnected
from the NSX management cluster.
          
Action required:
Check whether the manager cluster certificate, manager node certificates,
kafka certificate and ingress certificate match on both NSX Manager and the
NSX Application Platform cluster. Check the expiration dates of the above-mentioned
certificates to make sure they are valid. Check the network connection between
NSX Manager and the NSX Application Platform cluster, and resolve any network connection failures.
vmwNsxTNSXApplicationPlatformCommunicationDelayInRawflow








.1.3.6.1.4.1.6876.120.1.0.41.0.11
The number of pending messages in the messaging topic Raw Flow is above the
pending message threshold of vmwNsxTDataCenterNappMessagingLAGThreshold.
          
Action required:
Add nodes and then scale up the NSX Application Platform cluster. If the bottleneck can be attributed to a specific service,
for example, the analytics service, then scale up the specific service when the new nodes are added.
vmwNsxTNSXApplicationPlatformCommunicationDelayInRawflowClear








.1.3.6.1.4.1.6876.120.1.0.41.0.12
The number of pending messages in the messaging topic Raw Flow is below the pending
message threshold of vmwNsxTDataCenterNappMessagingLAGThreshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformCommunicationExpDisconnected







.1.3.6.1.4.1.6876.120.1.0.41.0.13
The flow exporter on Transport node vmwNsxTDataCenterEntityId is disconnected from
the NSX Application Platform cluster's messaging broker. Data collection is affected.
          
Action required:
Restart the messaging service if it is not running in the NSX Application
Platform cluster. Resolve the network connection failure between the Transport node
flow exporter and the NSX Application Platform cluster.
vmwNsxTNSXApplicationPlatformCommunicationExpDisconnectedClear







.1.3.6.1.4.1.6876.120.1.0.41.0.14
The flow exporter on Transport node vmwNsxTDataCenterEntityId has reconnected to
the NSX Application Platform cluster's messaging broker.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformCommunicationExpDisconnectDPU








.1.3.6.1.4.1.6876.120.1.0.41.0.17
The flow exporter on Transport node vmwNsxTDataCenterEntityId DPU vmwNsxTDataCenterDPUId is disconnected
from the Intelligence node's messaging broker. Data collection is affected.
          
Action required:
Restart the messaging service if it is not running in the Intelligence
node. Resolve the network connection failure between the Transport node
flow exporter and the Intelligence node.
vmwNsxTNSXApplicationPlatformCommunicationExpDisconnectDPUClear








.1.3.6.1.4.1.6876.120.1.0.41.0.18
The flow exporter on Transport node vmwNsxTDataCenterEntityId DPU vmwNsxTDataCenterDPUId has reconnected
to the Intelligence node's messaging broker.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformCommunicationMgrDisconnectedClear








.1.3.6.1.4.1.6876.120.1.0.41.0.2
NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is reconnected
to the NSX management cluster.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformCommunicationDelayInOverflow








.1.3.6.1.4.1.6876.120.1.0.41.0.7
The number of pending messages in the messaging topic Over Flow is above the
pending message threshold of vmwNsxTDataCenterNappMessagingLAGThreshold.
          
Action required:
Add nodes and then scale up the NSX Application Platform cluster. If the bottleneck can be attributed to a specific service,
for example, the analytics service, then scale up the specific service when the new nodes are added.
vmwNsxTNSXApplicationPlatformCommunicationDelayInOverflowClear








.1.3.6.1.4.1.6876.120.1.0.41.0.8
The number of pending messages in the messaging topic Over Flow is below the pending
message threshold of vmwNsxTDataCenterNappMessagingLAGThreshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMTUCheckMTUMismatchWithinTransportZone







.1.3.6.1.4.1.6876.120.1.0.42.0.1
MTU configuration mismatch between Transport Nodes (ESXi, KVM and Edge) attached to the same Transport Zone.
Inconsistent MTU values on the switches attached to the same Transport Zone will cause
connectivity issues.
          
Action required:
1. Navigate to System | Fabric | Settings | MTU Configuration Check | Inconsistent on the NSX UI to check
more mismatch details.
2. Set the same MTU value on all switches attached to the same Transport Zone
by invoking the NSX API PUT /api/v1/host-switch-profiles/<host-switch-profile-id>
with mtu in the request body, or the API PUT /api/v1/global-configs/SwitchingGlobalConfig
with physical_uplink_mtu in the request body.
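A hedged curl sketch of option 2 (placeholders throughout; a PUT to these
endpoints generally must send the full object returned by a prior GET, with
mtu updated and the current _revision included):
  # Read the uplink profile, then PUT it back with the new mtu value
  curl -k -u 'admin:<password>' \
    https://<nsx-manager>/api/v1/host-switch-profiles/<host-switch-profile-id>
  curl -k -u 'admin:<password>' -X PUT \
    -H 'Content-Type: application/json' \
    -d '<profile JSON from the GET, with "mtu" updated>' \
    https://<nsx-manager>/api/v1/host-switch-profiles/<host-switch-profile-id>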
vmwNsxTMTUCheckMTUMismatchWithinTransportZoneClear







.1.3.6.1.4.1.6876.120.1.0.42.0.2
MTU values between Transport Nodes attached to the same Transport Zone are consistent now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMTUCheckGlobalRouterMTUTooBig







.1.3.6.1.4.1.6876.120.1.0.42.0.3
Global router MTU configuration is bigger than the MTU of switches in the overlay Transport Zone which connect
to the Tier0 or Tier1 gateway. The global router MTU value should be less than the MTU of all switches by at least
100, because a 100-byte allowance is required for Geneve encapsulation.
          
Action required:
1. Navigate to System | Fabric | Settings | MTU Configuration Check | Inconsistent on the NSX UI to check
more mismatch details.
2. Set a bigger MTU value on the switches by invoking the NSX API
PUT /api/v1/host-switch-profiles/<host-switch-profile-id> with mtu in the
request body, or the API PUT /api/v1/global-configs/SwitchingGlobalConfig
with physical_uplink_mtu in the request body.
3. Alternatively, set a smaller MTU value in the global router configuration by invoking the NSX API PUT
/api/v1/global-configs/RoutingGlobalConfig with logical_uplink_mtu in the request body.
vmwNsxTMTUCheckGlobalRouterMTUTooBigClear







.1.3.6.1.4.1.6876.120.1.0.42.0.4
Global router MTU is now less than the MTU of the overlay Transport Zone.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthAnalyticsCPUUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.1
CPU usage of Analytics service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Analytics service.
vmwNsxTNSXApplicationPlatformHealthAnalyticsMemoryUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.10
Memory usage of Analytics service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsCPUUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.100
CPU usage of Metrics service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsDiskUsageHi








.1.3.6.1.4.1.6876.120.1.0.43.0.101
Disk usage of Metrics service is above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files that are not needed.
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthMetricsDiskUsageHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.102
Disk usage of Metrics service is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsDiskUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.103
Disk usage of Metrics service is above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files that are not needed.
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthMetricsDiskUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.104
Disk usage of Metrics service is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsMemUasgeHi








.1.3.6.1.4.1.6876.120.1.0.43.0.105
Memory usage of Metrics service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthMetricsMemUasgeHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.106
Memory usage of Metrics service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsMemUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.107
Memory usage of Metrics service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthMetricsMemUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.108
Memory usage of Metrics service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthConfigDbMemUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.109
Memory usage of Configuration Database service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthAnalyticsMemUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.11
Memory usage of Analytics service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Analytics service.
vmwNsxTNSXApplicationPlatformHealthConfigDbMemUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.110
Memory usage of Configuration Database service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNappStatusDegraded








.1.3.6.1.4.1.6876.120.1.0.43.0.117
NSX Application Platform cluster vmwNsxTDataCenterNappClusterId overall status is degraded.
          
Action required:
Get more information from alarms of nodes and services.
vmwNsxTNSXApplicationPlatformHealthNappStatusDegradedClear








.1.3.6.1.4.1.6876.120.1.0.43.0.118
NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNappStatusDown








.1.3.6.1.4.1.6876.120.1.0.43.0.119
NSX Application Platform cluster vmwNsxTDataCenterNappClusterId overall status is down.
          
Action required:
Get more information from alarms of nodes and services.
vmwNsxTNSXApplicationPlatformHealthAnalyticsMemUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.12
Memory usage of Analytics service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNappStatusDownClear








.1.3.6.1.4.1.6876.120.1.0.43.0.120
NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterCPUUsageHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.13
CPU usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the System Load field of individual services to see
which service is under pressure. See if the load can be reduced. If more
computing power is required, click on the Scale Out button to request more resources.
vmwNsxTNSXApplicationPlatformHealthClusterCPUUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.14
CPU usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterCPUUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.15
CPU usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the System Load field of individual services to see
which service is under pressure. See if the load can be reduced. If more
computing power is required, click on the Scale Out button to request more resources.
vmwNsxTNSXApplicationPlatformHealthClusterCPUUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.16
CPU usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterDiskUsageHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.17
Disk usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Storage field of individual services to see
which service is under pressure. See if the load can be reduced. If more
disk storage is required, click on the Scale Out button to request more resources.
If the data storage service is under strain, another way is to click on the Scale Up button
to increase disk size.
vmwNsxTNSXApplicationPlatformHealthClusterDiskUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.18
Disk usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterDiskUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.19
Disk usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Storage field of individual services to see
which service is under pressure. See if the load can be reduced. If more
disk storage is required, click on the Scale Out button to request more resources.
If the data storage service is under strain, another way is to click on the Scale Up button
to increase disk size.
vmwNsxTNSXApplicationPlatformHealthAnalyticsCPUUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.2
CPU usage of Analytics service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterDiskUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.20
Disk usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterMemoryUsageHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.21
Memory usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Memory field of individual services to see
which service is under pressure. See if the load can be reduced. If more
memory is required, click on the Scale Out button to request more resources.
vmwNsxTNSXApplicationPlatformHealthClusterMemoryUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.22
Memory usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthClusterMemUsageVeryHi









.1.3.6.1.4.1.6876.120.1.0.43.0.23
Memory usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Memory field of individual services to see
which service is under pressure. See if the load can be reduced. If more
memory is required, click on the Scale Out button to request more resources.
vmwNsxTNSXApplicationPlatformHealthClusterMemUsageVeryHiClear









.1.3.6.1.4.1.6876.120.1.0.43.0.24
Memory usage of NSX Application Platform cluster vmwNsxTDataCenterNappClusterId is
below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthAnalyticsCPUUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.3
CPU usage of Analytics service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Analytics service.
vmwNsxTNSXApplicationPlatformHealthConfigDbCPUUsageHi








.1.3.6.1.4.1.6876.120.1.0.43.0.31
CPU usage of Configuration Database service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthConfigDbCPUUsageHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.32
CPU usage of Configuration Database service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthConfigDbCPUUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.33
CPU usage of Configuration Database service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthConfigDbCPUUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.34
CPU usage of Configuration Database service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthConfigDbDiskUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.35
Disk usage of Configuration Database service is above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthConfigDbDiskUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.36
Disk usage of Configuration Database service is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthConfigDbDiskUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.37
Disk usage of Configuration Database service is above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthConfigDbDiskUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.38
Disk usage of Configuration Database service is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthConfigDbMemUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.39
Memory usage of Configuration Database service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthAnalyticsCPUUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.4
CPU usage of Analytics service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthConfigDbMemUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.40
Memory usage of Configuration Database service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthDatastoreCPUUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.41
CPU usage of Data Storage service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Data Storage service.
vmwNsxTNSXApplicationPlatformHealthDatastoreCPUUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.42
CPU usage of Data Storage service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthDatastoreCPUUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.43
CPU usage of Data Storage service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Data Storage service.
vmwNsxTNSXApplicationPlatformHealthDatastoreCPUUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.44
CPU usage of Data Storage service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthDatastoreDiskUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.45
Disk usage of Data Storage service is above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out or scale up the data storage service.
vmwNsxTNSXApplicationPlatformHealthDatastoreDiskUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.46
Disk usage of Data Storage service is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthDatastoreDiskUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.47
Disk usage of Data Storage service is above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out or scale up the data storage service.
vmwNsxTNSXApplicationPlatformHealthDatastoreDiskUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.48
Disk usage of Data Storage service is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthDatastoreMemoryUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.49
Memory usage of Data Storage service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Data Storage service.
vmwNsxTNSXApplicationPlatformHealthAnalyticsDiskUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.5
Disk usage of Analytics service is above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services or the Analytics service.
vmwNsxTNSXApplicationPlatformHealthDatastoreMemoryUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.50
Memory usage of Data Storage service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthDatastoreMemUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.51
Memory usage of Data Storage service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Data Storage service.
vmwNsxTNSXApplicationPlatformHealthDatastoreMemUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.52
Memory usage of Data Storage service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingCPUUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.53
CPU usage of Messaging service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Messaging service.
vmwNsxTNSXApplicationPlatformHealthMessagingCPUUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.54
CPU usage of Messaging service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingCPUUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.55
CPU usage of Messaging service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Messaging service.
vmwNsxTNSXApplicationPlatformHealthMessagingCPUUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.56
CPU usage of Messaging service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingDiskUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.57
Disk usage of Messaging service is above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services or the Messaging service.
vmwNsxTNSXApplicationPlatformHealthMessagingDiskUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.58
Disk usage of Messaging service is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingDiskUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.59
Disk usage of Messaging service is above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services or the Messaging service.
vmwNsxTNSXApplicationPlatformHealthAnalyticsDiskUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.6
Disk usage of Analytics service is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingDiskUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.60
Disk usage of Messaging service is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingMemoryUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.61
Memory usage of Messaging service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Messaging service.
vmwNsxTNSXApplicationPlatformHealthMessagingMemoryUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.62
Memory usage of Messaging service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMessagingMemUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.63
Memory usage of Messaging service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Messaging service.
vmwNsxTNSXApplicationPlatformHealthMessagingMemUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.64
Memory usage of Messaging service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeCPUUsageHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.65
CPU usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the System Load field of individual services to see
which service is under pressure. See if load can be reduced. If only a small minority
of the nodes have high CPU usage, by default, Kubernetes will reschedule services automatically.
If most nodes have high CPU usage and load cannot be reduced, click on the Scale Out
button to request more resources.
vmwNsxTNSXApplicationPlatformHealthNodeCPUUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.66
CPU usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeCPUUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.67
CPU usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the System Load field of individual services to see
which service is under pressure. See if load can be reduced. If only a small minority
of the nodes have high CPU usage, by default, Kubernetes will reschedule services automatically.
If most nodes have high CPU usage and load cannot be reduced, click on the Scale Out
button to request more resources.
vmwNsxTNSXApplicationPlatformHealthNodeCPUUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.68
CPU usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeDiskUsageHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.69
Disk usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName is above
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Storage field of individual services to see which service is
under pressure. Clean up unused data or logs to free up disk resources
and see if the load can be reduced. If more disk storage is required, Scale Out the
service under pressure. If the data storage service is under strain, another way is to
click on the Scale Up button to increase disk size.
vmwNsxTNSXApplicationPlatformHealthAnalyticsDiskUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.7
Disk usage of Analytics service is above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services or the Analytics service.
vmwNsxTNSXApplicationPlatformHealthNodeDiskUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.70
Disk usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName is below
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeDiskUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.71
Disk usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName is above
the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Storage field of individual services to see
which service is under pressure. Clean up unused data or logs to free up disk resources
and see if the load can be reduced. If more disk storage is required, Scale Out the
service under pressure. If the data storage service is under strain, another way is to
click on the Scale Up button to increase disk size.
vmwNsxTNSXApplicationPlatformHealthNodeDiskUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.72
Disk usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName is below
the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeMemoryUsageHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.73
Memory usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Memory field of individual services to see
which service is under pressure. See if load can be reduced. If only a small minority
of the nodes have high Memory usage, by default, Kubernetes will reschedule services automatically.
If most nodes have high Memory usage and load cannot be reduced, click on the Scale Out
button to request more resources.
vmwNsxTNSXApplicationPlatformHealthNodeMemoryUsageHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.74
Memory usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeMemoryUsageVeryHigh









.1.3.6.1.4.1.6876.120.1.0.43.0.75
Memory usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services
and check the Memory field of individual services to see
which service is under pressure. See if load can be reduced. If only a small minority
of the nodes have high Memory usage, by default, Kubernetes will reschedule services automatically.
If most nodes have high Memory usage and load cannot be reduced, click on the Scale Out
button to request more resources.
vmwNsxTNSXApplicationPlatformHealthNodeMemoryUsageVeryHighClear









.1.3.6.1.4.1.6876.120.1.0.43.0.76
Memory usage of NSX Application Platform node vmwNsxTDataCenterNappNodeName
is below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeStatusDegraded








.1.3.6.1.4.1.6876.120.1.0.43.0.77
NSX Application Platform node vmwNsxTDataCenterNappNodeName is degraded.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Resources to
check which node is degraded. Check network, memory and CPU usage of the node.
Reboot the node if it is a worker node.
vmwNsxTNSXApplicationPlatformHealthNodeStatusDegradedClear








.1.3.6.1.4.1.6876.120.1.0.43.0.78
NSX Application Platform node vmwNsxTDataCenterNappNodeName is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeStatusDown








.1.3.6.1.4.1.6876.120.1.0.43.0.79
NSX Application Platform node vmwNsxTDataCenterNappNodeName is not running.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Resources to
check which node is down. Check network, memory and CPU usage of the node.
Reboot the node if it is a worker node.
vmwNsxTNSXApplicationPlatformHealthAnalyticsDiskUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.8
Disk usage of Analytics service is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthNodeStatusDownClear








.1.3.6.1.4.1.6876.120.1.0.43.0.80
NSX Application Platform node vmwNsxTDataCenterNappNodeName is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthPlatformCPUUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.81
CPU usage of Platform Services service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthPlatformCPUUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.82
CPU usage of Platform Services service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthPlatformCPUUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.83
CPU usage of Platform Services service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthPlatformCPUUsageVeryHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.84
CPU usage of Platform Services service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthPlatformDiskUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.85
Disk usage of Platform Services service is above the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthPlatformDiskUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.86
Disk usage of Platform Services service is below the high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthPlatformDiskUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.87
Disk usage of Platform Services service is above the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Clean up files not needed.
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthPlatformDiskUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.88
Disk usage of Platform Services service is below the very high
threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthPlatformMemoryUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.89
Memory usage of Platform Services service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthAnalyticsMemoryUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.9
Memory usage of Analytics service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services or the Analytics service.
vmwNsxTNSXApplicationPlatformHealthPlatformMemoryUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.90
Memory usage of Platform Services service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthPlatformMemUsageVeryHi








.1.3.6.1.4.1.6876.120.1.0.43.0.91
Memory usage of Platform Services service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthPlatformMemUsageVeryHiClear








.1.3.6.1.4.1.6876.120.1.0.43.0.92
Memory usage of Platform Services service is below the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthServiceStatusDegraded








.1.3.6.1.4.1.6876.120.1.0.43.0.93
Service vmwNsxTDataCenterNappServiceName is degraded. The service may still be able to reach a quorum while
pods associated with vmwNsxTDataCenterNappServiceName are not all stable. Resources consumed by these
unstable pods may be released.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services to check which service
is degraded. Invoke the NSX API GET /napp/api/v1/platform/monitor/feature/health to check which
specific service is degraded and the reason behind it. Invoke the following CLI command to restart
the degraded service if necessary (a sketch follows below):
`kubectl rollout restart <statefulset/deployment> <service_name> -n <namespace>`
Degraded services can function correctly, but performance is sub-optimal.
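A minimal sketch of the two checks above, assuming the NSX Manager is reachable at
nsx-mgr.example.com and the degraded service is a Deployment named monitoring in the
nsxi-platform namespace (all three names are illustrative placeholders, not values
defined in this MIB; `-u admin` prompts for the password):
- `curl -k -u admin 'https://nsx-mgr.example.com/napp/api/v1/platform/monitor/feature/health'`
- `kubectl rollout restart deployment monitoring -n nsxi-platform`
- `kubectl rollout status deployment monitoring -n nsxi-platform`
The rollout status command waits until the restarted pods report ready, which confirms
whether the restart cleared the degradation.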
vmwNsxTNSXApplicationPlatformHealthServiceStatusDegradedClear








.1.3.6.1.4.1.6876.120.1.0.43.0.94
Service vmwNsxTDataCenterNappServiceName is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthServiceStatusDown








.1.3.6.1.4.1.6876.120.1.0.43.0.95
Service vmwNsxTDataCenterNappServiceName is not running.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services to check which service
is degraded. Invoke the NSX API GET /napp/api/v1/platform/monitor/feature/health to check which
specific service is down and the reason behind it. Invoke the following CLI command to restart
the degraded service:
`kubectl rollout restart <statefulset/deployment> <service_name> -n <namespace>`
vmwNsxTNSXApplicationPlatformHealthServiceStatusDownClear








.1.3.6.1.4.1.6876.120.1.0.43.0.96
Service vmwNsxTDataCenterNappServiceName is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsCPUUsageHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.97
CPU usage of Metrics service is above the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTNSXApplicationPlatformHealthMetricsCPUUsageHighClear








.1.3.6.1.4.1.6876.120.1.0.43.0.98
CPU usage of Metrics service is below the high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTNSXApplicationPlatformHealthMetricsCPUUsageVeryHigh








.1.3.6.1.4.1.6876.120.1.0.43.0.99
CPU usage of Metrics service is above the very high threshold
value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Scale out all services.
vmwNsxTEdgeEdgeNodeSettingsAndvSphereSettingsAreChanged








.1.3.6.1.4.1.6876.120.1.0.45.0.1
The Edge node vmwNsxTDataCenterEntityId settings and vSphere configuration have changed and do not
match the policy intent configuration. The Edge node configuration visible to the user
in the UI or API is not the same as what is realized. The realized Edge node changes made by the
user outside of NSX Manager are shown in the details of this alarm, and any edits in the
UI or API will overwrite the realized configuration. Fields that differ for Edge node
settings and vSphere configuration are listed in the runtime data vmwNsxTDataCenterEdgeNodeAndvSphereSettingsMismatchReason.
          
Action required:
Review the node settings and vSphere configuration of this Edge Transport Node vmwNsxTDataCenterEntityId.
Follow one of the following actions to resolve the alarm; a curl sketch of option 3 follows the list:
1. Manually update the Edge Transport Node settings Policy intent using
   API: PUT https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>.
2. Accept the intent, the vSphere realized Edge node configuration, or the realized
   Edge node settings for this Edge Transport Node through the Edge Transport Node
   resolver to resolve this alarm.
3. Resolve the alarm by accepting the Edge node settings and vSphere realized configuration
   using the refresh API: POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode.
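As an illustrative sketch of option 3 only (the manager FQDN and node UUID are
placeholders, not values from this MIB; `-u admin` prompts for the password):
- `curl -k -u admin -X POST 'https://nsx-mgr.example.com/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode'`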
vmwNsxTEdgeEdgeVMPresentInNSXInventoryNotPresentInvCenterClear








.1.3.6.1.4.1.6876.120.1.0.45.0.10
The Edge node vmwNsxTDataCenterEntityId with VM moref id vmwNsxTDataCenterVMMorefId is present in both NSX inventory and vCenter.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgeVMNotPresentInBothNSXInventoryAndvCenter









.1.3.6.1.4.1.6876.120.1.0.45.0.11
The Edge VM vmwNsxTDataCenterPolicyEdgeVMName with moref id vmwNsxTDataCenterVMMorefId corresponding to the Edge Transport node
vmwNsxTDataCenterEntityId vSphere placement parameters is not found in either NSX inventory or vCenter.
The placement parameters in the vSphere configuration of this Edge Transport
node vmwNsxTDataCenterEntityId refer to the VM with moref vmwNsxTDataCenterVMMorefId.
          
Action required:
The managed object reference moref id of a VM has the form vm-number and is
visible in the URL on selecting the Edge VM in the vCenter UI, for example vm-12011 in
https://<vcenter-fqdn>/ui/app/vm;nav=h/urn:vmomi:VirtualMachine:vm-12011:164ff798-c4f1-495b-a0be-adfba337e5d2/summary
Find the VM vmwNsxTDataCenterPolicyEdgeVMName with moref id vmwNsxTDataCenterVMMorefId in vCenter for this Edge
Transport Node vmwNsxTDataCenterEntityId.
Follow the below action to resolve the alarm.
Check if the VM has been deleted in vSphere or is present with a different moref id.
1. If the VM is still present in vCenter, put the Edge Transport node in
maintenance mode, then power off and delete the Edge VM in vCenter. Use the NSX
Redeploy API to deploy a new VM for the Edge node. Data traffic for the Edge Transport
node will be disrupted in the interim if the Edge VM is forwarding traffic.
2. If the VM is not present in vCenter, use the Redeploy API to deploy a new VM for the Edge node:
POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=redeploy.
vmwNsxTEdgeEdgeVMNotPresentInBothNSXInventoryAndvCenterClear








.1.3.6.1.4.1.6876.120.1.0.45.0.12
The Edge node vmwNsxTDataCenterEntityId with VM moref id vmwNsxTDataCenterVMMorefId is present in both NSX inventory and vCenter.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeFailedToDeleteTheOldVMInvCenterDuringRedeploy










.1.3.6.1.4.1.6876.120.1.0.45.0.13
Failed to power off and delete the Edge node vmwNsxTDataCenterEntityId VM with moref id vmwNsxTDataCenterVMMorefId
in vCenter during the Redeploy operation. A new Edge VM with moref id vmwNsxTDataCenterNewVMMorefId has
been deployed. Both old and new VMs for this Edge are functional at the same time and
may result in IP conflicts and networking issues.
          
Action required:
The managed object reference moref id of a VM has the form vm-number and is
visible in the URL on selecting the Edge VM in the vCenter UI, for example vm-12011 in
https://<vcenter-fqdn>/ui/app/vm;nav=h/urn:vmomi:VirtualMachine:vm-12011:164ff798-c4f1-495b-a0be-adfba337e5d2/summary
Find the VM vmwNsxTDataCenterPolicyEdgeVMName with moref id vmwNsxTDataCenterVMMorefId in vCenter for this Edge
Transport Node vmwNsxTDataCenterEntityId.
Power off and delete the old Edge VM vmwNsxTDataCenterPolicyEdgeVMName with moref id vmwNsxTDataCenterVMMorefId in vCenter.
vmwNsxTEdgeFailedToDeleteTheOldVMInvCenterDuringRedeployClear









.1.3.6.1.4.1.6876.120.1.0.45.0.14
The Edge node vmwNsxTDataCenterEntityId with stale VM moref id vmwNsxTDataCenterVMMorefId is no longer found
in either NSX inventory or vCenter. The newly deployed VM with moref id vmwNsxTDataCenterNewVMMorefId
is present in both NSX inventory and vCenter.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgeHardwareVersionMismatch











.1.3.6.1.4.1.6876.120.1.0.45.0.15
The Edge node vmwNsxTDataCenterTransportNodeName in Edge cluster vmwNsxTDataCenterEdgeClusterName has a hardware version vmwNsxTDataCenterEdgeTNHwVersion,
which is lower than the highest hardware version vmwNsxTDataCenterEdgeClusterHighestHwVersion in the Edge cluster.
          
Action required:
Follow the KB article to resolve the hardware version mismatch alarm for Edge node vmwNsxTDataCenterTransportNodeName.
          
For more information see:
https://www.vmware.com/esx/support/askvmware/index.php?eventtype=edge.edge_hardware_version_mismatch
vmwNsxTEdgeEdgeHardwareVersionMismatchClear








.1.3.6.1.4.1.6876.120.1.0.45.0.16
The Edge node vmwNsxTDataCenterTransportNodeName hardware version mismatch is resolved now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgeNodeSettingsAndvSphereSettingsAreChangedClear







.1.3.6.1.4.1.6876.120.1.0.45.0.2
The Edge node vmwNsxTDataCenterEntityId settings and vSphere settings are consistent with policy intent now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgeNodeSettingsMismatch








.1.3.6.1.4.1.6876.120.1.0.45.0.3
The Edge node vmwNsxTDataCenterEntityId settings configuration does not match the policy
intent configuration. The Edge node configuration visible to the user in the
UI or API is not the same as what is realized. The realized Edge node changes
made by the user outside of NSX Manager are shown in the details of this alarm,
and any edits in the UI or API will overwrite the realized configuration.
Fields that differ for the Edge node are listed in the runtime data
vmwNsxTDataCenterEdgeNodeSettingMismatchReason.
          
Action required:
Review the node settings of this Edge transport node vmwNsxTDataCenterEntityId.
Follow one of the following actions to resolve the alarm:
1. Manually update the Edge transport node settings Policy intent
   using API: PUT https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>.
2. Accept the intent or realized Edge node settings for this Edge transport node
   through the Edge transport node resolver to resolve this alarm.
3. Resolve the alarm by accepting the Edge node settings configuration using the
   refresh API: POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode.
vmwNsxTEdgeEdgeNodeSettingsMismatchClear







.1.3.6.1.4.1.6876.120.1.0.45.0.4
The Edge node vmwNsxTDataCenterEntityId settings are consistent with policy intent now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgeVmvSphereSettingsMismatch








.1.3.6.1.4.1.6876.120.1.0.45.0.5
The Edge node vmwNsxTDataCenterEntityId configuration on vSphere does not match the policy
intent configuration. The Edge node configuration visible to the user in the UI or API
is not the same as what is realized. The realized Edge node changes made by the user
outside of NSX Manager are shown in the details of this alarm, and any edits
in the UI or API will overwrite the realized configuration. Fields that differ for
the Edge node are listed in the runtime data vmwNsxTDataCenterEdgeVMvSphereSettingsMismatchReason.
          
Action required:
Review the vSphere configuration of this Edge Transport Node vmwNsxTDataCenterEntityId.
Follow one of the following actions to resolve the alarm:
1. Accept the intent or the vSphere realized Edge node configuration for this Edge Transport Node
   through the Edge Transport Node resolver to resolve this alarm.
2. Resolve the alarm by accepting the Edge node vSphere realized configuration using the
   refresh API: POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode.
vmwNsxTEdgeEdgeVmvSphereSettingsMismatchClear







.1.3.6.1.4.1.6876.120.1.0.45.0.6
The Edge node vmwNsxTDataCenterEntityId VM vSphere settings are consistent with policy intent now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgevSphereLocationMismatch








.1.3.6.1.4.1.6876.120.1.0.45.0.7
The Edge node vmwNsxTDataCenterEntityId has been moved using vMotion, and its configuration
on vSphere does not match the policy intent configuration. The Edge node configuration visible to
the user in the UI or API is not the same as what is realized. The realized Edge node changes made by the user outside
of NSX Manager are shown in the details of this alarm. Fields that differ for the Edge node are listed
in the runtime data vmwNsxTDataCenterEdgevSphereLocationMismatchReason.
          
Action required:
Review the vSphere configuration of this Edge Transport Node vmwNsxTDataCenterEntityId.
Follow one of the following actions to resolve the alarm:
1. Resolve the alarm by accepting the Edge node vSphere realized config using the
   refresh API: POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=refresh_node_configuration&resource_type=EdgeNode.
2. If you want to return to the previous location, use the
   NSX Redeploy API: POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=redeploy.
   vMotion back to the original host is not supported.
vmwNsxTEdgeEdgevSphereLocationMismatchClear







.1.3.6.1.4.1.6876.120.1.0.45.0.8
The Edge node vmwNsxTDataCenterEntityId vSphere settings are consistent with policy intent now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTEdgeEdgeVMPresentInNSXInventoryNotPresentInvCenter









.1.3.6.1.4.1.6876.120.1.0.45.0.9
The Edge VM vmwNsxTDataCenterPolicyEdgeVMName with moref id vmwNsxTDataCenterVMMorefId corresponding to the Edge Transport node
vmwNsxTDataCenterEntityId vSphere placement parameters is found in NSX inventory but is not present in vCenter.
Check if the VM has been removed in vCenter or is present with
a different VM moref id.
          
Action required:
The managed object reference moref id of a VM has the form vm-number and is
visible in the URL on selecting the Edge VM in the vCenter UI, for example vm-12011 in
https://<vcenter-fqdn>/ui/app/vm;nav=h/urn:vmomi:VirtualMachine:vm-12011:164ff798-c4f1-495b-a0be-adfba337e5d2/summary
Find the VM vmwNsxTDataCenterPolicyEdgeVMName with moref id vmwNsxTDataCenterVMMorefId in vCenter for this Edge
Transport Node vmwNsxTDataCenterEntityId.
If the Edge VM is present in vCenter with a different moref id, follow the below action.
Use the NSX add or update placement API with the JSON request payload properties vm_id and vm_deployment_config
to update the new vm moref id and vSphere deployment parameters:
POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=addOrUpdatePlacementReferences.
If the Edge VM with name vmwNsxTDataCenterPolicyEdgeVMName is not present in vCenter, use the NSX Redeploy API
to deploy a new VM for the Edge node (see the sketch below):
POST https://<nsx-manager>/api/v1/transport-nodes/<transport-node-id>?action=redeploy.
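As a hedged sketch of the Redeploy path only (the manager FQDN and node UUID are
placeholders, not values from this MIB; `-u admin` prompts for the password):
- `curl -k -u admin -X POST 'https://nsx-mgr.example.com/api/v1/transport-nodes/<transport-node-id>?action=redeploy'`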
vmwNsxTNATSNATPortUsageOnGatewayIsHigh









.1.3.6.1.4.1.6876.120.1.0.46.0.1
SNAT port usage on logical router vmwNsxTDataCenterEntityId for
SNAT IP vmwNsxTDataCenterSNATIPAddress has reached
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%. New flows will
not be SNATed when usage reaches the maximum limit.
          
Action required:
Log in as the admin user on the Edge node and invoke the NSX CLI command
`get firewall <LR_INT_UUID> connection state` by using the right interface
uuid, and check the various SNAT mappings for the SNAT IP vmwNsxTDataCenterSNATIPAddress.
Check that traffic flows going through the gateway are not a denial-of-service
attack or an anomalous burst. If the traffic appears to be within
the normal load but the alarm threshold is hit, consider adding more
SNAT IP addresses to distribute the load, or
route new traffic to another Edge node (see the sketch below).
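A minimal sketch of that check, assuming the Edge node is reachable over SSH at
edge-01.example.com and the logical router interface uuid is already known (both are
placeholder assumptions, not values from this MIB):
- `ssh admin@edge-01.example.com`
- `get firewall <LR_INT_UUID> connection state`
The output shows the connection state for the interface, from which the SNAT mappings
for vmwNsxTDataCenterSNATIPAddress can be inspected against the threshold.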
vmwNsxTNATSNATPortUsageOnGatewayIsHighClear









.1.3.6.1.4.1.6876.120.1.0.46.0.2
SNAT port usage on logical router vmwNsxTDataCenterEntityId for SNAT IP vmwNsxTDataCenterSNATIPAddress
has dropped below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTPhysicalServerPhysicalServerInstallFailed








.1.3.6.1.4.1.6876.120.1.0.47.0.1
Physical server vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) installation failed.
          
Action required:
Navigate to System > Fabric > Nodes > Host Transport Nodes and resolve the error on the node.
vmwNsxTPhysicalServerPhysicalServerInstallFailedClear








.1.3.6.1.4.1.6876.120.1.0.47.0.2
Physical server vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) installation completed.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTPhysicalServerPhysicalServerUninstallFailed








.1.3.6.1.4.1.6876.120.1.0.47.0.3
Physical server vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) uninstallation failed.
          
Action required:
Navigate to System > Fabric > Nodes > Host Transport Nodes and resolve the error on the node.
vmwNsxTPhysicalServerPhysicalServerUninstallFailedClear








.1.3.6.1.4.1.6876.120.1.0.47.0.4
Physical server vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) uninstallation completed.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTPhysicalServerPhysicalServerUpgradeFailed








.1.3.6.1.4.1.6876.120.1.0.47.0.5
Physical server vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) upgrade failed.
          
Action required:
Navigate to System > Upgrade and resolve the error, then re-trigger the upgrade.
vmwNsxTPhysicalServerPhysicalServerUpgradeFailedClear








.1.3.6.1.4.1.6876.120.1.0.47.0.6
Physical server vmwNsxTDataCenterTransportNodeName (vmwNsxTDataCenterEntityId) upgrade completed.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMalwarePreventionHealthAnalystAPIServiceUnreachable








.1.3.6.1.4.1.6876.120.1.0.48.0.1
Service vmwNsxTDataCenterMpsServiceName is degraded on NSX Application Platform. It is unable to
communicate with the analyst_api service. Inspected file verdicts may not be up to date.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services to check which
service is degraded. Invoke the NSX API GET /napp/api/v1/platform/monitor/feature/health to
check which specific service is down and the reason behind it. Invoke the following CLI command
to restart the degraded service:
`kubectl rollout restart <statefulset/deployment> <service_name> -n <namespace>`
Determine the status of the Malware Prevention Cloud Connector service.
          
For more information see:
https://www.vmware.com/esx/support/askvmware/index.php?eventtype=malware_prevention_health.analyst_api_service_unreachable
vmwNsxTMalwarePreventionHealthServiceStatusDownClear









.1.3.6.1.4.1.6876.120.1.0.48.0.10
Service vmwNsxTDataCenterMpsServiceName is running properly on vmwNsxTDataCenterTransportNodeName.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMalwarePreventionHealthAnalystAPIServiceUnreachableClear








.1.3.6.1.4.1.6876.120.1.0.48.0.2
Service vmwNsxTDataCenterMpsServiceName is running properly on NSX Application Platform.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMalwarePreventionHealthDatabaseUnreachable








.1.3.6.1.4.1.6876.120.1.0.48.0.3
Service vmwNsxTDataCenterMpsServiceName is degraded on NSX Application Platform. It is unable to
communicate with the Malware Prevention database.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services to check which
service is degraded. Invoke the NSX API GET /napp/api/v1/platform/monitor/feature/health to
check which specific service is down and the reason behind it. Invoke the following CLI command
to restart the degraded service:
`kubectl rollout restart <statefulset/deployment> <service_name> -n <namespace>`
Determine the status of the Malware Prevention Database service.
          
For more information see:
https://www.vmware.com/esx/support/askvmware/index.php?eventtype=malware_prevention_health.database_unreachable
vmwNsxTMalwarePreventionHealthDatabaseUnreachableClear








.1.3.6.1.4.1.6876.120.1.0.48.0.4
Service vmwNsxTDataCenterMpsServiceName is running properly on NSX Application Platform.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMalwarePreventionHealthFileExtractSvcUnreachable











.1.3.6.1.4.1.6876.120.1.0.48.0.5
Service vmwNsxTDataCenterMpsServiceName is degraded on vmwNsxTDataCenterTransportNodeName. It is unable to communicate
with the file extraction functionality. All file extraction abilities on
vmwNsxTDataCenterTransportNodeName are paused.
          
Action required:
1. On the Edge node identified by vmwNsxTDataCenterNSXEdgeTNName, invoke the NSX CLI `get ids engine status`
to check the status of the file_extraction (IDS) service. Inspect /var/log/syslog to find any
suspicious error(s) with the file extraction (IDS) service and/or vmwNsxTDataCenterMpsServiceName.
2. On the Host node identified by vmwNsxTDataCenterNSXESXTNName, log into the associated Malware Prevention
Service VM vmwNsxTDataCenterEntityId and check the status of the file extraction (NXGI) service. Inspect
/var/log/syslog on the associated Malware Prevention Service VM vmwNsxTDataCenterEntityId to find any
suspicious error(s).
          
For more information see:
https://www.vmware.com/esx/support/askvmware/index.php?eventtype=malware_prevention_health.file_extraction_service_unreachable
vmwNsxTMalwarePreventionHealthFileExtractSvcUnreachableClear









.1.3.6.1.4.1.6876.120.1.0.48.0.6
Service vmwNsxTDataCenterMpsServiceName is running properly on vmwNsxTDataCenterTransportNodeName.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMalwarePreventionHealthNTICSRepSvcUnreachable








.1.3.6.1.4.1.6876.120.1.0.48.0.7
Service vmwNsxTDataCenterMpsServiceName is degraded on NSX Application Platform. It is unable to
communicate with the NTICS reputation service. Inspected file reputations may not be up to date.
          
Action required:
In the NSX UI, navigate to System | NSX Application Platform | Core Services to check which
service is degraded. Invoke the NSX API GET /napp/api/v1/platform/monitor/feature/health to
check which specific service is down and the reason behind it. Invoke the following CLI command
to restart the degraded service:
`kubectl rollout restart <statefulset/deployment> <service_name> -n <namespace>`
Determine if access to the NTICS service is down.
vmwNsxTMalwarePreventionHealthNTICSRepSvcUnreachableClear








.1.3.6.1.4.1.6876.120.1.0.48.0.8
Service vmwNsxTDataCenterMpsServiceName is running properly on NSX Application Platform.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTMalwarePreventionHealthServiceStatusDown











.1.3.6.1.4.1.6876.120.1.0.48.0.9
Service vmwNsxTDataCenterMpsServiceName is not running on vmwNsxTDataCenterTransportNodeName.
          
Action required:
1. On the Edge node identified by vmwNsxTDataCenterNSXEdgeTNName, invoke the NSX CLI `get services`
to check the status of vmwNsxTDataCenterMpsServiceName. Inspect /var/log/syslog to find any suspicious
error(s).
2. On the Host node identified by vmwNsxTDataCenterNSXESXTNName, log into the associated Malware Prevention
Service VM vmwNsxTDataCenterEntityId and check the status of vmwNsxTDataCenterMpsServiceName. Inspect /var/log/syslog on
the associated Malware Prevention Service VM vmwNsxTDataCenterEntityId to find any suspicious error(s).
          
For more information see:
https://www.vmware.com/esx/support/askvmware/index.php?eventtype=malware_prevention_health.service_status_down
vmwNsxTEdgeClusterEdgeClusterMemberRelocateFailure










.1.3.6.1.4.1.6876.120.1.0.49.0.5
The operation on Edge cluster vmwNsxTDataCenterEdgeClusterId to relocate all service
contexts failed for Edge cluster member index vmwNsxTDataCenterMemberIndexId with
Transport node ID vmwNsxTDataCenterTransportNodeId.
          
Action required:
Review the available capacity for the Edge cluster. If more capacity
is required, scale your Edge cluster. Retry the relocate Edge cluster
member operation.
vmwNsxTEdgeClusterEdgeClusterMemberRelocateFailureClear








.1.3.6.1.4.1.6876.120.1.0.49.0.6
The relocation failure for the Edge cluster member with Transport node ID vmwNsxTDataCenterTransportNodeId has been resolved now.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLicensesLicenseExpired









.1.3.6.1.4.1.6876.120.1.0.5.0.1
The vmwNsxTDataCenterLicenseEditionType license key ending with
vmwNsxTDataCenterDisplayedLicenseKey has expired.
          
Action required:
Add a new, non-expired license using the NSX UI by navigating to
System | Licenses, then click ADD and specify the key of the new
license. Delete the expired license by checking its checkbox, then
clicking DELETE. A curl sketch of the add step follows below.
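As a hedged sketch only, assuming the standard NSX POST /api/v1/licenses endpoint
(the manager FQDN and key value are placeholders, not values from this MIB; `-u admin`
prompts for the password), the new key can also be added via the API:
- `curl -k -u admin -X POST 'https://nsx-mgr.example.com/api/v1/licenses' -H 'Content-Type: application/json' -d '{"license_key": "XXXXX-XXXXX-XXXXX-XXXXX-XXXXX"}'`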
vmwNsxTLicensesLicenseExpiredClear









.1.3.6.1.4.1.6876.120.1.0.5.0.2
The expired vmwNsxTDataCenterLicenseEditionType license key ending with
vmwNsxTDataCenterDisplayedLicenseKey has been removed, updated, or is no
longer about to expire.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTLicensesLicenseIsAboutToExpire









.1.3.6.1.4.1.6876.120.1.0.5.0.3
The vmwNsxTDataCenterLicenseEditionType license key ending with
vmwNsxTDataCenterDisplayedLicenseKey is about to expire.
          
Action required:
The license is about to expire in several days. Plan to add a
new, non-expiring license using the NSX UI by navigating to System |
Licenses, then click ADD and specify the key of the new license. Delete the
expiring license by checking its checkbox, then clicking DELETE.
vmwNsxTLicensesLicenseIsAboutToExpireClear









.1.3.6.1.4.1.6876.120.1.0.5.0.4
The expiring vmwNsxTDataCenterLicenseEditionType license key ending with
vmwNsxTDataCenterDisplayedLicenseKey has been removed, updated, or is no
longer about to expire.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTVMCAppTransitConnectFailure







.1.3.6.1.4.1.6876.120.1.0.50.0.1
Transit connect related configuration is not
fully and correctly realized. Possible issues include
failure to retrieve provider information or a
transient provider communication error.
          
Action required:
If this alarm is not auto-resolved within 10
minutes, retry the most recent transit connect
related request(s). For example, if a TGW
attachment API request triggered this alarm,
retry the TGW attachment API request again.
If the alarm does not resolve even after retrying, try the following steps:
1. Check if the task keeps failing, or the task has recovered.
  a) Identify the leader Manager node.
  After logging into one of the nodes, run the commands:
   - `su admin`
   - `get cluster status verbose`
  This shows the leader Manager node.
  b) Log in to the NSX leader Manager node and
  check vmc-app.log on it:
  - `tail -f /var/log/policy/vmc-app.log`
  c) Check the logs for the following messages.
  If any of these error messages keeps appearing every two minutes, the task keeps failing:
  - Failed to get TGW route table for []. Error: []
  - Failed to get TGW routes for attachment [] in route table []. Error
  - Failed to get TGW attachment VPC ID for []. Error: []
  - Failed to get TGW attachment resource ID for []. Error: Unknown resource type
  - Failed to get TGW attachments for TGW []. Error: []
  - Failed to get local TGW attachment []. Error: []
  - Failed to find correct TgwAttachment state in AWS, state: [], skipping TGW route update task
  - TGW attachment [] is not associated with any route table
  - No local TGW SDDC attachment found for []
2. Check whether all AWS calls from NSX Manager failed, on the leader Manager node.
  Run the following commands:
  - `export HTTP_PROXY=http://<pop-ip>:3128`
  - `export HTTPS_PROXY=http://<pop-ip>:3128`
  - `export NO_PROXY=169.254.169.254`
  - `aws ec2 describe-instances --region <region>`
  If the aws command fails with an error, there might be a system issue in the HTTP
  reverse proxy configuration on the POP, or an AWS service side issue.
3. Check whether the TGW attachment still exists in AWS.
  a) The TGW attachment ID can be found with GET cloud-service/api/v1/infra/associated-groups
  - `aws ec2 describe-transit-gateway-attachments --region <region> --transit-gateway-attachment-id <tgw-attachment-id>`
  If the TGW attachment has been deleted, contact VMware Support and share the SDDC ID and TGW attachment ID.
  After the VMware support team identifies the issue, manually delete any object left behind, if needed.
  b) Check if this TGW attachment exists on the AWS console.
  c) Another option is to log in to NSX Manager and use the aws command to check the state of the TGW attachment:
  - `aws ec2 describe-transit-gateway-attachments --region <region> --transit-gateway-attachment-id <tgw-attachment-id>`
vmwNsxTVMCAppTransitConnectFailureClear







.1.3.6.1.4.1.6876.120.1.0.50.0.2
Transit connect failure is remediated.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthFaultyTEP











.1.3.6.1.4.1.6876.120.1.0.51.0.1
Faulty TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId.
Overlay workloads using this TEP will face a network outage.
Reason: vmwNsxTDataCenterVtepFaultReason.
          
Action required:
1. Check if the TEP has a valid IP or any other underlay connectivity issues.
2. Enable TEP HA to fail over workloads to other healthy TEPs.
vmwNsxTTEPHealthTEPAutorecoverSuccessClear










.1.3.6.1.4.1.6876.120.1.0.51.0.10
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthTEPAutorecoverSuccessOnDPU
.1.3.6.1.4.1.6876.120.1.0.51.0.11
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId
on DPU vmwNsxTDataCenterDPUId is successful.
          
Action required:
None.
vmwNsxTTEPHealthTEPAutorecoverSuccessOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.51.0.12
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId
on DPU vmwNsxTDataCenterDPUId is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthTEPHaActivated
.1.3.6.1.4.1.6876.120.1.0.51.0.13
TEP HA activated for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId.
          
Action required:
Enable AutoRecover or invoke Manual Recover for TEP:vmwNsxTDataCenterVtepName on VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId.
vmwNsxTTEPHealthTEPHaActivatedClear
.1.3.6.1.4.1.6876.120.1.0.51.0.14
TEP HA cleared for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthTEPHaActivatedOnDPU
.1.3.6.1.4.1.6876.120.1.0.51.0.15
TEP HA activated for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId.
          
Action required:
Enable AutoRecover or invoke Manual Recover for TEP:vmwNsxTDataCenterVtepName on VDS:vmwNsxTDataCenterDvsName
at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId.
vmwNsxTTEPHealthTEPHaActivatedOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.51.0.16
TEP HA cleared for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthFaultyTEPClear
.1.3.6.1.4.1.6876.120.1.0.51.0.2
TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId is healthy.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthFaultyTEPOnDPU
.1.3.6.1.4.1.6876.120.1.0.51.0.3
Faulty TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId.
Overlay workloads using this TEP will face a network outage.
Reason: vmwNsxTDataCenterVtepFaultReason.
          
Action required:
1. Check whether the TEP has a valid IP address or any other underlay connectivity issues.
2. Enable TEP HA to fail over workloads to other healthy TEPs.
vmwNsxTTEPHealthFaultyTEPOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.51.0.4
TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId is healthy.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthTEPAutorecoverFailure
.1.3.6.1.4.1.6876.120.1.0.51.0.5
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId failed.
Overlay workloads using this TEP will fail over to other healthy TEPs.
If there are no other healthy TEPs, overlay workloads will face a network outage.
          
Action required:
Check whether the TEP has a valid IP address or any other underlay connectivity issues.
vmwNsxTTEPHealthTEPAutorecoverFailureClear
.1.3.6.1.4.1.6876.120.1.0.51.0.6
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthTEPAutorecoverFailureOnDPU
.1.3.6.1.4.1.6876.120.1.0.51.0.7
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId failed.
Overlay workloads using this TEP will fail over to other healthy TEPs.
If there are no other healthy TEPs, overlay workloads will face a network outage.
          
Action required:
Check whether the TEP has a valid IP address or any other underlay connectivity issues.
vmwNsxTTEPHealthTEPAutorecoverFailureOnDPUClear
.1.3.6.1.4.1.6876.120.1.0.51.0.8
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId on DPU vmwNsxTDataCenterDPUId is cleared.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTTEPHealthTEPAutorecoverSuccess
.1.3.6.1.4.1.6876.120.1.0.51.0.9
Autorecover for TEP:vmwNsxTDataCenterVtepName of VDS:vmwNsxTDataCenterDvsName at Transport node:vmwNsxTDataCenterTransportNodeId is successful.
          
Action required:
None.
vmwNsxTPolicyConstraintCreationCountLimitReached
.1.3.6.1.4.1.6876.120.1.0.53.0.1
Count for type vmwNsxTDataCenterConstraintType in vmwNsxTDataCenterConstraintTypePath is currently at vmwNsxTDataCenterCurrentCount, which has reached the maximum limit of vmwNsxTDataCenterConstraintLimit.
          
Action required:
Review vmwNsxTDataCenterConstraintType usage. Update the constraint to increase the limit or delete unused vmwNsxTDataCenterConstraintType.
vmwNsxTPolicyConstraintCreationCountLimitReachedClear
.1.3.6.1.4.1.6876.120.1.0.53.0.2
vmwNsxTDataCenterConstraintType count is below the threshold.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTGroupsGroupSizeLimitExceeded
.1.3.6.1.4.1.6876.120.1.0.54.0.3
Group vmwNsxTDataCenterGroupId has at least vmwNsxTDataCenterGroupSize translated elements, which is at or greater
than the maximum limit of vmwNsxTDataCenterGroupMaxNumberLimit. This can result
in long processing times and can lead to timeouts and outages. The current
count for each element type is as follows: IP sets:vmwNsxTDataCenterIPCount, MAC
sets:vmwNsxTDataCenterMacCount, VIFs:vmwNsxTDataCenterVifCount, Logical switch ports:vmwNsxTDataCenterLspCount,
Logical router ports:vmwNsxTDataCenterLrpCount, AdGroups:vmwNsxTDataCenterSidCount.
          
Action required:
1. Consider adjusting group elements in the oversized group vmwNsxTDataCenterGroupId.
2. Consider splitting the oversized group vmwNsxTDataCenterGroupId into multiple smaller groups and distributing its members across them.
vmwNsxTGroupsGroupSizeLimitExceededClear
.1.3.6.1.4.1.6876.120.1.0.54.0.4
The number of elements in group vmwNsxTDataCenterGroupId is below the maximum limit of
vmwNsxTDataCenterGroupMaxNumberLimit.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTSecurityCompliancePollEAL4NonCompliance
.1.3.6.1.4.1.6876.120.1.0.56.0.1
One of the EAL4+ compliance requirements is being violated.
This means the NSX configuration is currently non-compliant with regard to EAL4+.
          
Action required:
Run the compliance report from the UI Home - Monitoring & Dashboard - Compliance Report menu
and resolve all the issues that are marked with the EAL4+ compliance name.
vmwNsxTSecurityCompliancePollEAL4NonComplianceClear
.1.3.6.1.4.1.6876.120.1.0.56.0.2
The EAL4+ compliance issues have all been resolved.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTSecurityCompliancePollNDcPPNonCompliance
.1.3.6.1.4.1.6876.120.1.0.56.0.3
One of the NDcPP compliance requirements is being violated.
This means the NSX configuration is currently non-compliant with regard to NDcPP.
          
Action required:
Run the compliance report from the UI Home - Monitoring & Dashboard - Compliance Report menu
and resolve all the issues that are marked with the NDcPP compliance name.
vmwNsxTSecurityCompliancePollNDcPPNonComplianceClear
.1.3.6.1.4.1.6876.120.1.0.56.0.4
The NDcPP compliance issues have all been resolved.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTSecurityComplianceTriggerEAL4NonCompliance
.1.3.6.1.4.1.6876.120.1.0.56.0.5
One of the EAL4+ compliance requirements is being violated.
This means the NSX status is currently non-compliant with regard to EAL4+.
          
Action required:
Run the compliance report from the UI Home - Monitoring & Dashboard - Compliance Report menu
and resolve all the issues that are marked with the EAL4+ compliance name.
vmwNsxTSecurityComplianceTriggerEAL4NonComplianceClear
.1.3.6.1.4.1.6876.120.1.0.56.0.6
The EAL4+ compliance issues have all been resolved.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTSecurityComplianceTriggerNDcPPNonCompliance
.1.3.6.1.4.1.6876.120.1.0.56.0.7
One of the NDcPP compliance requirements is being violated.
This means the NSX status is currently non-compliant with regard to NDcPP.
          
Action required:
Run the compliance report from the UI Home - Monitoring & Dashboard - Compliance Report menu
and resolve all the issues that are marked with the NDcPP compliance name.
vmwNsxTSecurityComplianceTriggerNDcPPNonComplianceClear
.1.3.6.1.4.1.6876.120.1.0.56.0.8
The NDcPP compliance issues have all been resolved.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthCPUUsageHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.41
CPU usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Use the top command to check which processes have the highest CPU usage, and
then check /var/log/syslog and those processes' local logs to see if there
are any outstanding errors to be resolved.
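For example, a minimal non-interactive sketch (standard procps tools are assumed
to be available on the appliance):
- `top -b -n 1 | head -n 20`
- `ps aux --sort=-%cpu | head -n 10`
- `grep -iE 'error|fail' /var/log/syslog | tail -n 20`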
vmwNsxTIntelligenceHealthCPUUsageHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.42
CPU usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthCPUUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.43
CPU usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Use the top command to check which processes have the highest CPU usage, and
then check /var/log/syslog and those processes' local logs to see if there
are any outstanding errors to be resolved.
vmwNsxTIntelligenceHealthCPUUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.44
CPU usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthDataDiskPartitionUsageHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.45
Disk usage of disk partition /data on Intelligence node
vmwNsxTDataCenterIntelligenceNodeId is above the high threshold value
of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Stop NSX Intelligence data collection until the disk usage is below
the threshold. Examine disk partition /data and see if there are any
unexpectedly large files that can be removed.
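For example, a minimal sketch for locating the largest files under /data
(standard coreutils are assumed):
- `df -h /data`
- `du -ah /data 2>/dev/null | sort -rh | head -n 20`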
vmwNsxTIntelligenceHealthDataDiskPartitionUsageHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.46
Disk usage of disk partition /data on Intelligence node
vmwNsxTDataCenterIntelligenceNodeId is below the high threshold value
of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthDataDiskPartitionUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.47
Disk usage of disk partition /data on Intelligence node
vmwNsxTDataCenterIntelligenceNodeId is above the very high threshold value
of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Stop NSX Intelligence data collection until the disk usage is below
the threshold. In the NSX UI, navigate to System | Appliances | NSX
Intelligence Appliance. Then click ACTIONS, Stop Collecting Data.
vmwNsxTIntelligenceHealthDataDiskPartitionUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.48
Disk usage of disk partition /data on Intelligence node
vmwNsxTDataCenterIntelligenceNodeId is below the very high threshold value
of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthDiskUsageHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.49
Disk usage of disk partition vmwNsxTDataCenterDiskPartitionName on
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is above
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Examine disk partition vmwNsxTDataCenterDiskPartitionName and see if there are any
unexpectedly large files that can be removed.
vmwNsxTIntelligenceHealthDiskUsageHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.50
Disk usage of disk partition vmwNsxTDataCenterDiskPartitionName on
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is below
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthDiskUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.51
Disk usage of disk partition vmwNsxTDataCenterDiskPartitionName on
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is above
the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Examine disk partition vmwNsxTDataCenterDiskPartitionName and see if there are any
unexpectedly large files that can be removed.
vmwNsxTIntelligenceHealthDiskUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.52
Disk usage of disk partition vmwNsxTDataCenterDiskPartitionName on
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is below
the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthMemoryUsageHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.53
Memory usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is above the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Use the top command to check which processes have the highest memory usage,
and then check /var/log/syslog and those processes' local logs to see if
there are any outstanding errors to be resolved.
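For example, a minimal sketch (procps tools assumed):
- `ps aux --sort=-%mem | head -n 10`
- `free -h`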
vmwNsxTIntelligenceHealthMemoryUsageHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.54
Memory usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is below the high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthMemoryUsageVeryHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.55
Memory usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is above the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
Use the top command to check which processes have the highest memory usage,
and then check /var/log/syslog and those processes' local logs to see if
there are any outstanding errors to be resolved.
vmwNsxTIntelligenceHealthMemoryUsageVeryHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.56
Memory usage on Intelligence node vmwNsxTDataCenterIntelligenceNodeId
is below the very high threshold value of vmwNsxTDataCenterSystemUsageThreshold%.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthNodeStatusDegraded
.1.3.6.1.4.1.6876.120.1.0.6.0.57
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is degraded.
          
Action required:
Invoke the NSX API GET /napp/api/v1/platform/monitor/category/health to
check which specific pod is down and the reason behind it. Invoke the following
CLI command to restart the degraded service:
`kubectl rollout restart   -n `
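For illustration, a sketch with hypothetical names filled in (the manager address
nsx-manager.example.com, the deployment name processing, and the namespace
nsxi-platform are placeholders, not values from this document):
- `curl -k -u admin 'https://nsx-manager.example.com/napp/api/v1/platform/monitor/category/health'`
- `kubectl rollout restart deployment processing -n nsxi-platform`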
vmwNsxTIntelligenceHealthNodeStatusDegradedClear
.1.3.6.1.4.1.6876.120.1.0.6.0.58
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is running properly.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceHealthStorageLatencyHigh
.1.3.6.1.4.1.6876.120.1.0.6.0.63
Storage latency of disk partition vmwNsxTDataCenterDiskPartitionName on
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is above
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold milliseconds.
          
Action required:
Transient high storage latency may occur due to a spike in I/O requests.
If storage latency remains high for more than 30 minutes,
consider deploying the NSX Intelligence appliance on a low-latency disk,
or not sharing the same storage device with other VMs.
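For example, a minimal sketch for observing device latency over time (the sysstat
package is assumed to be available):
- `iostat -x 5 3`
In recent sysstat versions, the r_await and w_await columns report per-request latency in milliseconds.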
vmwNsxTIntelligenceHealthStorageLatencyHighClear
.1.3.6.1.4.1.6876.120.1.0.6.0.64
Storage latency of disk partition vmwNsxTDataCenterDiskPartitionName on
Intelligence node vmwNsxTDataCenterIntelligenceNodeId is below
the high threshold value of vmwNsxTDataCenterSystemUsageThreshold milliseconds.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTInfrastructureCommunicationEdgeTunnelsDown
.1.3.6.1.4.1.6876.120.1.0.7.0.17
The overall tunnel status of Edge node vmwNsxTDataCenterEntityId is down.
          
Action required:
Invoke the NSX CLI command `get tunnel-ports` to get all tunnel ports,
then check each tunnel's stats by invoking the NSX CLI command
`get tunnel-port  stats` to see whether there are any drops. Also
check /var/log/syslog for tunnel-related errors (a sketch follows).
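For example, a minimal sketch for the log check (the grep pattern is an
assumption, not an exhaustive filter):
- `grep -i tunnel /var/log/syslog | grep -iE 'error|down|fail' | tail -n 20`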
vmwNsxTInfrastructureCommunicationEdgeTunnelsDownClear
.1.3.6.1.4.1.6876.120.1.0.7.0.18
The tunnels of Edge node vmwNsxTDataCenterEntityId have been restored.
          
Action required:
None, receipt of this notification indicates event cleared.
vmwNsxTIntelligenceCommunicationTNFlowExporterDisconnected
.1.3.6.1.4.1.6876.120.1.0.9.0.7
The flow exporter on Transport node vmwNsxTDataCenterEntityId is disconnected from
the Intelligence node's messaging broker. Data collection is affected.
          
Action required:
Restart the messaging service if it is not running on the Intelligence
node. Resolve the network connection failure between the Transport node
flow exporter and the Intelligence node.
vmwNsxTIntelligenceCommunicationTNFlowExporterDisconnectedClear
.1.3.6.1.4.1.6876.120.1.0.9.0.8
The flow exporter on Transport node vmwNsxTDataCenterEntityId has reconnected to
the Intelligence node's messaging broker.
          
Action required:
None, receipt of this notification indicates event cleared.