VMWARE-NSX-MANAGER-MIB

This MIB file contains the information that the receiving party needs
in order to interpret SNMP traps sent by NSX Manager.
        
VMware NSX for vSphere is a key product in the SDDC architecture. With
NSX, virtualization delivers for networking what it has already delivered
for compute and storage. In much the same way that server virtualization
programmatically creates, snapshots, deletes, and restores software-based
virtual machines (VMs), NSX network virtualization programmatically
creates, snapshots, deletes, and restores software-based virtual
networks. The result is a completely transformative approach to
networking that not only enables data center managers to achieve orders
of magnitude better agility and economics, but also allows for a vastly
simplified operational model for the underlying physical network. With
the ability to be deployed on any IP network, including both existing
traditional networking models and next-generation fabric architectures
from any vendor, NSX is a completely non-disruptive solution. In fact,
with NSX, the physical network infrastructure you already have is all you
need to deploy a software-defined data center.
        
The NSX Manager provides the graphical user interface (GUI) and the REST
APIs for creating, configuring, and monitoring NSX components, such as
controllers, logical switches, and edge services gateways. The NSX
Manager provides an aggregated system view and is the centralized network
management component of NSX. NSX Manager is installed as a virtual
appliance on any ESX host in your vCenter environment.
        
Support requests can be filed with VMware using KB article:
http://kb.vmware.com/kb/2006985
        
To reach the NSX Manager Service Composer UI, log in to the vSphere UI
(https://) -> Networking & Security -> Service Composer.

Imported Objects

MODULE-COMPLIANCE, OBJECT-GROUP, NOTIFICATION-GROUP  from SNMPv2-CONF
MODULE-IDENTITY, OBJECT-IDENTITY, OBJECT-TYPE, NOTIFICATION-TYPE, Integer32  from SNMPv2-SMI
TEXTUAL-CONVENTION, DateAndTime  from SNMPv2-TC
UUID  from UUID-TC-MIB
vmwNsxManager  from VMWARE-ROOT-MIB

Type Definitions (4)

Name                          Base Type    Values/Constraints
VmwNsxManagerSourceID         OctetString  range: 0..256
VmwNsxManagerSourceIPAddress  OctetString  range: 0..256
VmwNsxManagerSourceType       OctetString  range: 0..256
VmwNsxManagerTypeSeverity     Enumeration  informational(1), low(2), medium(3), major(4), critical(5), high(6)
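A trap receiver that decodes the vmwNsxMEventSeverity varbind can map the VmwNsxManagerTypeSeverity enumeration above to labels. A minimal Python sketch (the dict values mirror the MIB enumeration; the helper name is illustrative, not a VMware API):

```python
# Map VmwNsxManagerTypeSeverity enumeration values (from the MIB above)
# to human-readable labels. Note the numeric order is not a strict
# severity ranking: high(6) follows critical(5) in the enumeration.
VMW_NSX_SEVERITY = {
    1: "informational",
    2: "low",
    3: "medium",
    4: "major",
    5: "critical",
    6: "high",
}

def severity_label(value: int) -> str:
    """Return the label for a vmwNsxMEventSeverity value, or 'unknown'."""
    return VMW_NSX_SEVERITY.get(value, "unknown")
```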

Objects

vmwNsxManagerMIB .1.3.6.1.4.1.6876.90.1
vmwNsxMAlertData .1.3.6.1.4.1.6876.90.1.1
vmwNsxMEventCode .1.3.6.1.4.1.6876.90.1.1.1
vmwNsxMEventSourceIP .1.3.6.1.4.1.6876.90.1.1.10
vmwNsxMEventTimestamp .1.3.6.1.4.1.6876.90.1.1.2
vmwNsxMEventMessage .1.3.6.1.4.1.6876.90.1.1.3
vmwNsxMEventSeverity .1.3.6.1.4.1.6876.90.1.1.4
vmwNsxMEventComponent .1.3.6.1.4.1.6876.90.1.1.5
vmwNsxMUuid .1.3.6.1.4.1.6876.90.1.1.6
vmwNsxMCount .1.3.6.1.4.1.6876.90.1.1.7
vmwNsxMEventSourceID .1.3.6.1.4.1.6876.90.1.1.8
vmwNsxMEventSourceType .1.3.6.1.4.1.6876.90.1.1.9
vmwNsxMNotification .1.3.6.1.4.1.6876.90.1.2
vmwNsxMBranch .1.3.6.1.4.1.6876.90.1.2.0
vmwNsxMGroupsBranch .1.3.6.1.4.1.6876.90.1.2.0.1
vmwNsxMGroupsPrefix .1.3.6.1.4.1.6876.90.1.2.0.1.0
vmwNsxMSnmp .1.3.6.1.4.1.6876.90.1.2.1
vmwNsxMSnmpPrefix .1.3.6.1.4.1.6876.90.1.2.1.0
vmwNsxMServiceComposer .1.3.6.1.4.1.6876.90.1.2.10
vmwNsxMServiceComposerPrefix .1.3.6.1.4.1.6876.90.1.2.10.0
vmwNsxMSvmOperations .1.3.6.1.4.1.6876.90.1.2.11
vmwNsxMSvmOperationsPrefix .1.3.6.1.4.1.6876.90.1.2.11.0
vmwNsxMTranslation .1.3.6.1.4.1.6876.90.1.2.12
vmwNsxMTranslationPrefix .1.3.6.1.4.1.6876.90.1.2.12.0
vmwNsxMUniversalSync .1.3.6.1.4.1.6876.90.1.2.13
vmwNsxMUniversalSyncPrefix .1.3.6.1.4.1.6876.90.1.2.13.0
vmwNsxMAsyncRest .1.3.6.1.4.1.6876.90.1.2.14
vmwNsxMAsyncRestPrefix .1.3.6.1.4.1.6876.90.1.2.14.0
vmwNsxMExtensionRegistration .1.3.6.1.4.1.6876.90.1.2.15
vmwNsxMExtensionRegistrationPrefix .1.3.6.1.4.1.6876.90.1.2.15.0
vmwNsxMDlp .1.3.6.1.4.1.6876.90.1.2.16
vmwNsxMDlpPrefix .1.3.6.1.4.1.6876.90.1.2.16.0
vmwNsxMSamSystem .1.3.6.1.4.1.6876.90.1.2.17
vmwNsxMSamSystemPrefix .1.3.6.1.4.1.6876.90.1.2.17.0
vmwNsxMUsvm .1.3.6.1.4.1.6876.90.1.2.18
vmwNsxMUsvmPrefix .1.3.6.1.4.1.6876.90.1.2.18.0
vmwNsxMVsmCore .1.3.6.1.4.1.6876.90.1.2.19
vmwNsxMVsmCorePrefix .1.3.6.1.4.1.6876.90.1.2.19.0
vmwNsxMSecurity .1.3.6.1.4.1.6876.90.1.2.2
vmwNsxMSecurityPrefix .1.3.6.1.4.1.6876.90.1.2.2.0
vmwNsxMVxlan .1.3.6.1.4.1.6876.90.1.2.20
vmwNsxMVxlanPrefix .1.3.6.1.4.1.6876.90.1.2.20.0
vmwNsxMLogserver .1.3.6.1.4.1.6876.90.1.2.21
vmwNsxMLogserverPrefix .1.3.6.1.4.1.6876.90.1.2.21.0
vmwNsxMApplicationRuleManager .1.3.6.1.4.1.6876.90.1.2.22
vmwNsxMApplicationRuleManagerPrefix .1.3.6.1.4.1.6876.90.1.2.22.0
vmwNsxMFirewall .1.3.6.1.4.1.6876.90.1.2.3
vmwNsxMFirewallPrefix .1.3.6.1.4.1.6876.90.1.2.3.0
vmwNsxMEdge .1.3.6.1.4.1.6876.90.1.2.4
vmwNsxMEdgePrefix .1.3.6.1.4.1.6876.90.1.2.4.0
vmwNsxMEndpoint .1.3.6.1.4.1.6876.90.1.2.5
vmwNsxMEndpointPrefix .1.3.6.1.4.1.6876.90.1.2.5.0
vmwNsxMEam .1.3.6.1.4.1.6876.90.1.2.6
vmwNsxMEamPrefix .1.3.6.1.4.1.6876.90.1.2.6.0
vmwNsxMFabric .1.3.6.1.4.1.6876.90.1.2.7
vmwNsxMFabricPrefix .1.3.6.1.4.1.6876.90.1.2.7.0
vmwNsxMDepPlugin .1.3.6.1.4.1.6876.90.1.2.8
vmwNsxMDepPluginPrefix .1.3.6.1.4.1.6876.90.1.2.8.0
vmwNsxMMessaging .1.3.6.1.4.1.6876.90.1.2.9
vmwNsxMMessagingPrefix .1.3.6.1.4.1.6876.90.1.2.9.0
vmwNsxManagerMIBConformance .1.3.6.1.4.1.6876.90.1.99
vmwNsxManagerMIBCompliances .1.3.6.1.4.1.6876.90.1.99.1
vmwNsxManagerMIBGroups .1.3.6.1.4.1.6876.90.1.99.2
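Because every trap OID in this MIB sits under a feature-specific branch (vmwNsxMSnmpPrefix, vmwNsxMServiceComposerPrefix, vmwNsxMVxlanPrefix, and so on), a receiver can route traps by OID prefix alone. A small sketch using a subset of the branch OIDs listed above (the branch OIDs come from this MIB; the helper itself is illustrative):

```python
# Classify an incoming NSX Manager trap OID by its feature branch.
# Keys are the "...Prefix" branch OIDs from the Objects list above.
NSX_TRAP_BRANCHES = {
    ".1.3.6.1.4.1.6876.90.1.2.1.0":  "snmp",
    ".1.3.6.1.4.1.6876.90.1.2.2.0":  "security",
    ".1.3.6.1.4.1.6876.90.1.2.10.0": "serviceComposer",
    ".1.3.6.1.4.1.6876.90.1.2.13.0": "universalSync",
    ".1.3.6.1.4.1.6876.90.1.2.19.0": "vsmCore",
    ".1.3.6.1.4.1.6876.90.1.2.20.0": "vxlan",
}

def classify_trap(oid: str) -> str:
    """Return the feature branch for a trap OID, or 'other'."""
    for prefix, branch in NSX_TRAP_BRANCHES.items():
        # Append '.' so ...2.1.0 cannot accidentally match ...2.10.0.x
        if oid.startswith(prefix + "."):
            return branch
    return "other"
```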

Notifications/Traps

Name  OID  Description
vmwNsxMConfigGroup .1.3.6.1.4.1.6876.90.1.2.0.1.0.1
Configuration notifications that are grouped will have this OID prefix.
vmwNsxMSnmpDisabled .1.3.6.1.4.1.6876.90.1.2.1.0.1
This notification is sent when the sending out of SNMP traps is disabled.
This would most likely be the last SNMP trap the SNMP manager receives.
You may sometimes not receive it in case of a high volume of traps; in
those cases you can rely on the heartbeat traps not being sent out.
Action required: None. If the sending of SNMP traps is enabled, a
warmStart trap is received.
Frequency of traps: Once, whenever the sending of SNMP traps is disabled.
vmwNsxMSnmpManagerConfigUpdated .1.3.6.1.4.1.6876.90.1.2.1.0.2
This notification is sent when the SNMP manager configuration has been
updated. The event message will carry the new SNMP managers' details,
separated by semicolons.
Action required: None
Frequency of traps: Once, whenever the SNMP manager configuration is updated.
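A receiver of this trap can split the semicolon-separated details out of the vmwNsxMEventMessage varbind. A minimal sketch; the exact field layout of each entry is not specified in this MIB, so entries are returned as raw strings, and the sample value in the test is hypothetical:

```python
def parse_snmp_managers(event_message: str) -> list[str]:
    """Split the semicolon-separated SNMP manager details carried in the
    vmwNsxMEventMessage varbind into a list of entries.

    Sketch only: this MIB does not define the per-entry format, so each
    entry is kept as an opaque, whitespace-trimmed string.
    """
    return [part.strip() for part in event_message.split(";") if part.strip()]
```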
vmwNsxMServiceComposerPolicyOutOfSync .1.3.6.1.4.1.6876.90.1.2.10.0.1
Service Composer encountered an error while attempting to enforce rules
on this Policy.
Action required: Administrator needs to check the rules on the given Policy
for any errors, as reported in the message. After fixing the rules in the
Policy, user would need to resolve the alarm to bring this Policy back in
sync. Policy's alarm can either be resolved from NSX Manager Service Composer
UI or by using alarms API.
Frequency of traps: This trap is generated only once, if an error is
encountered while enforcing the Policy.
vmwNsxMServiceComposerOutOfSyncPrecedenceChangeFailure .1.3.6.1.4.1.6876.90.1.2.10.0.10
Service Composer encountered an error while reordering sections to
reflect a Policy's precedence change. This generally happens if there are
alarms on any other Policy.
Action required: Administrator needs to check Policies and/or Firewall
sections for any errors, as reported in the message. After fixing the errors,
user would need to resolve the alarm. Alarm can either be resolved from
NSX Manager Service Composer UI or by using alarms API.
Frequency of traps: This trap is generated only once if a failure is
encountered while reordering sections to reflect the precedence change.
vmwNsxMServiceComposerOutOfSyncDraftSettingFailure .1.3.6.1.4.1.6876.90.1.2.10.0.11
Service Composer encountered an error while initializing the auto-save
drafts setting.
Action required: Administrator needs to check Policies and/or Firewall
sections for any errors, as reported in the message. After fixing the errors,
user would need to resolve the alarm. Alarm can either be resolved from
NSX Manager Service Composer UI or by using alarms API.
Frequency of traps: This trap is generated only once if a failure is
encountered while initializing auto save drafts setting.
vmwNsxMServiceComposerPolicyDeleted .1.3.6.1.4.1.6876.90.1.2.10.0.2
The Policy got deleted because the internal SecurityGroup, over which the
Policy was created, got deleted.
Frequency of traps: This event is generated once every time an internal
SecurityGroup that is being consumed by a policy gets deleted.
vmwNsxMServiceComposerFirewallPolicyOutOfSync .1.3.6.1.4.1.6876.90.1.2.10.0.3
Service Composer encountered an error while attempting to enforce
Firewall rules on this Policy. Firewall-related changes on this Policy
will not take effect until this alarm is resolved.
Action required: Administrator needs to check the rules on the given Policy
for any errors, as reported in the message. After fixing the rules in the
Policy, user would need to resolve the alarm to bring this Policy back in
sync. Policy's alarm can either be resolved from NSX Manager Service Composer
UI or by using alarms API.
Frequency of traps: This trap is generated only once, if an error is
encountered while enforcing the Policy.
vmwNsxMServiceComposerNetworkPolicyOutOfSync .1.3.6.1.4.1.6876.90.1.2.10.0.4
Service Composer encountered an error while attempting to enforce Network
Introspection rules on this Policy. Network Introspection related changes
on this Policy will not take effect until this alarm is resolved.
Action required: Administrator needs to check the rules on the given Policy
for any errors, as reported in the message. After fixing the rules in the
Policy, user would need to resolve the alarm to bring this Policy back in
sync. Policy's alarm can either be resolved from NSX Manager Service Composer
UI or by using alarms API.
Frequency of traps: This trap is generated only once, if an error is
encountered while enforcing the Policy.
vmwNsxMServiceComposerGuestPolicyOutOfSync .1.3.6.1.4.1.6876.90.1.2.10.0.5
Service Composer encountered an error while attempting to enforce Guest
Introspection rules on this Policy. Guest Introspection related changes
on this Policy will not take effect until this alarm is resolved.
Action required: Administrator needs to check the rules on the given Policy
for any errors, as reported in the message. After fixing the rules in the
Policy, user would need to resolve the alarm to bring this Policy back in
sync. Policy's alarm can either be resolved from NSX Manager Service Composer
UI or by using alarms API.
Frequency of traps: This trap is generated only once, if an error is
encountered while enforcing the Policy.
vmwNsxMServiceComposerOutOfSync .1.3.6.1.4.1.6876.90.1.2.10.0.6
Service Composer encountered an error synchronizing Policies. Any changes
in Service Composer will not be pushed to Firewall/Network Introspection
Services until this alarm is resolved.
Action required: Administrator needs to check Policies and/or Firewall
sections for any errors, as reported in the message. After fixing the errors,
user would need to resolve the alarm to bring Service Composer back in sync.
Alarm can either be resolved from NSX Manager Service Composer UI or by using
alarms API.
Frequency of traps: This trap is generated only once, whenever an error is
encountered.
vmwNsxMServiceComposerOutOfSyncRebootFailure .1.3.6.1.4.1.6876.90.1.2.10.0.7
Service Composer encountered an error while synchronizing Policies on
reboot.
Action required: Administrator needs to check Policies and/or Firewall config
for any errors, as reported in the message. After fixing the errors, user
would need to resolve the alarm to bring Service Composer back in sync. Alarm
can either be resolved from NSX Manager Service Composer UI or by using
alarms API.
Frequency of traps: This trap is generated only once on NSX Manager reboot,
if an error is encountered.
vmwNsxMServiceComposerOutOfSyncDraftRollback .1.3.6.1.4.1.6876.90.1.2.10.0.8
Service Composer went out of sync due to a rollback of drafts from
Firewall. Any changes in Service Composer will not be pushed to
Firewall/Network Introspection Services until this alarm is resolved.
Action required: Administrator needs to resolve the alarm to bring Service
Composer back in sync. Alarm can either be resolved from NSX Manager Service
Composer UI or by using alarms API.
Frequency of traps: This trap is generated only once, whenever Firewall
config is reverted to an older version of drafts.
vmwNsxMServiceComposerOutOfSyncSectionDeletionFailure .1.3.6.1.4.1.6876.90.1.2.10.0.9
Service Composer encountered an error while deleting the section
corresponding to the Policy. This generally happens if the third-party
(NetX) service's Manager is not reachable.
Action required: The administrator needs to check connectivity with the
third-party (NetX) service's Manager. Once connectivity is restored, the
user needs to resolve the alarm. The alarm can be resolved either from
the Service Composer UI or by using the alarms API.
Frequency of traps: This trap is generated only once if a failure is
encountered while deleting a Policy's section on Policy deletion.
vmwNsxMInconsistentSvmAlarm .1.3.6.1.4.1.6876.90.1.2.11.0.1
Service VMs are deployed per ESX host to provide functionality like guest
introspection and McAfee/Trend virus checking in VMs on the host. An
issue was detected with the state of the deployed Service VM. Follow the
instructions in http://kb.vmware.com/kb/2125482 to analyze the logs
further. Warning: Resolving this alarm will delete the VM. After deletion
you will see a different alarm saying the VM is deleted. If you resolve
that one, it will reinstall the VM. If redeployment of the VM does not
fix the original issue, the original alarm will be added back
immediately.
Action required: Use the resolve API to resolve the alarm.
Frequency of traps: Once per host.
vmwNsxMSvmRestartAlarm .1.3.6.1.4.1.6876.90.1.2.11.0.2
Service VMs are deployed per ESX host to provide functionality like guest
introspection and McAfee/Trend virus checking in VMs on the host. An
issue was detected with the state of the deployed Service VM. Follow the
instructions in http://kb.vmware.com/kb/2125482 to analyze the logs
further. Warning: Resolving this alarm will restart the VM. If the root
cause is not addressed, the same alarm will be added back immediately.
Action required: Use the resolve API to resolve the alarm.
Frequency of traps: Once per host.
vmwNsxMSvmAgentUnavailable .1.3.6.1.4.1.6876.90.1.2.11.0.3
An error was detected while marking the agent as available. Kindly check
the logs. Resolving this alarm will attempt to mark the agent as
available.
Action required: Use the resolve API to resolve the alarm.
Frequency of traps: Once per host.
vmwNsxMVmAddedToSg .1.3.6.1.4.1.6876.90.1.2.12.0.1
The VM got added to the SecurityGroup.
Frequency of traps: Once for every VM getting added to any SecurityGroup.
vmwNsxMVmRemovedFromSg .1.3.6.1.4.1.6876.90.1.2.12.0.2
The VM got removed from the SecurityGroup.
Frequency of traps: Once for every VM getting removed from any
SecurityGroup.
vmwNsxMFullUniversalSyncFailed .1.3.6.1.4.1.6876.90.1.2.13.0.1
An error was encountered when doing a full sync of universal objects on a
secondary NSX Manager. The IP address of the secondary NSX Manager is
present in the event's message variable.
Action required: Kindly check the NSX Manager logs on the secondary NSX
Manager on which the full sync has failed.
Frequency of traps: This trap is generated once per NSX Manager on which
a full sync failure is seen.
vmwNsxMSecondaryDown .1.3.6.1.4.1.6876.90.1.2.13.0.2
A secondary NSX Manager is unreachable. The IP address of the secondary
NSX Manager is present in the event's message variable.
Action required: Kindly check whether the NSX Manager is running and is
reachable from the primary NSX Manager.
Frequency of traps: This trap is generated once per NSX Manager for which
a connection issue is seen.
vmwNsxMUniversalSyncFailedForEntity .1.3.6.1.4.1.6876.90.1.2.13.0.3
An error was encountered when syncing a universal object on a secondary
NSX Manager. The IP address of the secondary NSX Manager is present in
the event's message variable.
Action required: Kindly check the NSX Manager logs on the secondary NSX
Manager on which the sync has failed.
Frequency of traps: This trap is generated once per universal object on
an NSX Manager on which a sync failure is seen.
vmwNsxMUniversalSyncStoppedOnSecondary .1.3.6.1.4.1.6876.90.1.2.13.0.4
The secondary NSX Manager is no longer receiving periodic universal sync
updates. The IP address of the NSX Manager is present in the event's
message variable.
Action required: Kindly check the NSX Manager logs and the universal
configuration on the primary NSX Manager to see whether the secondary NSX
Manager has been removed.
Frequency of traps: This trap is generated once every 4 hours if the
secondary NSX Manager has not received universal sync updates for more
than 4 hours.
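The 4-hour staleness rule above is simple to state in code. A sketch of the condition that raises this trap, assuming only the documented 4-hour window (the function name is illustrative, not a VMware API):

```python
from datetime import datetime, timedelta

# Window taken from the trap description above: more than 4 hours
# without a universal sync update on the secondary manager.
SYNC_STALE_AFTER = timedelta(hours=4)

def universal_sync_stale(last_update: datetime, now: datetime) -> bool:
    """True when the secondary NSX Manager's last universal sync update
    is older than the 4-hour window (the vmwNsxMUniversalSyncStoppedOnSecondary
    condition). Illustrative sketch only."""
    return now - last_update > SYNC_STALE_AFTER
```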
vmwNsxMUniversalSyncResumedOnSecondary .1.3.6.1.4.1.6876.90.1.2.13.0.5
The secondary NSX Manager has resumed receiving periodic universal sync
updates.
Frequency of traps: This trap is generated whenever communication between
the primary and secondary managers resumes.
vmwNsxMServerUp .1.3.6.1.4.1.6876.90.1.2.14.0.1
Indicates that the NSX Manager server is up and in a running state, and
informs clients of NSX Manager of the current state.
Action required: None
Frequency of traps: Once for every query
vmwNsxMExtensionRegistered .1.3.6.1.4.1.6876.90.1.2.15.0.1
Registered the NSX Manager as a vCenter extension. This is applicable
when no other NSX Manager is registered with vCenter and the current NSX
Manager is the one registering with vCenter.
Action required: None
Frequency of traps: Only once, when the extension is registered for the
very first time.
vmwNsxMExtensionUpdated .1.3.6.1.4.1.6876.90.1.2.15.0.2
Updated the vCenter extension registration with the new NSX Manager.
This is applicable when another NSX Manager is already registered as a
vCenter extension and the current one overwrites it.
Action required: None
Frequency of traps: Every time an NSX Manager registers as a vCenter
extension when another NSX Manager is already registered with vCenter.
vmwNsxMDataSecScanStarted .1.3.6.1.4.1.6876.90.1.2.16.0.1
Notification generated when an NSX Data Security scan started on a
VirtualMachine.
vmwNsxMDataSecScanEnded .1.3.6.1.4.1.6876.90.1.2.16.0.2
Notification generated when an NSX Data Security scan ended on a
VirtualMachine.
vmwNsxMSamDataCollectionEnabled .1.3.6.1.4.1.6876.90.1.2.17.0.1
Activity Monitoring will start collecting data.
Action required: None
Frequency of traps: Event is triggered when the SAM data collection state
is toggled.
vmwNsxMSamDataCollectionDisabled .1.3.6.1.4.1.6876.90.1.2.17.0.2
Activity Monitoring will stop collecting data.
Action required: SAM data collection can be enabled to start collecting
data.
Frequency of traps: Event is triggered when the SAM data collection state
is toggled.
vmwNsxMSamDataStoppedFlowing .1.3.6.1.4.1.6876.90.1.2.17.0.3
Activity Monitoring data stopped flowing from the USVM.
Action required: Check the following:
   - the USVM log, to see if heartbeats are received and sent
   - whether the USVM is running
   - whether the Mux - USVM connection is healthy
   - whether the USVM - RMQ connection is healthy
   - whether the VM has the endpoint driver installed
Frequency of traps: Event is triggered when NSX Manager does not receive
SAM data from the USVM.
vmwNsxMSamDataResumedFlowing .1.3.6.1.4.1.6876.90.1.2.17.0.4
Activity Monitoring data resumed flowing from the USVM.
Action required: None
Frequency of traps: Event is triggered when SAM data is received from the
USVM.
vmwNsxMUsvmHeartbeatStopped .1.3.6.1.4.1.6876.90.1.2.18.0.1
The USVM stopped sending heartbeats to the management plane.
Action required: Connection to NSX Manager was lost. Check why the USVM
didn't send a heartbeat.
Frequency of traps: Event is triggered when NSX Manager does not receive
heartbeats from the USVM.
vmwNsxMUsvmHeartbeatResumed .1.3.6.1.4.1.6876.90.1.2.18.0.2
The USVM resumed sending heartbeats to the management plane.
Action required: None
Frequency of traps: Event is triggered when NSX Manager receives
heartbeats from the USVM.
vmwNsxMUsvmReceivedHello .1.3.6.1.4.1.6876.90.1.2.18.0.3
The USVM sent a HELLO message to the Mux.
Action: None
Frequency of traps: Event is triggered when the Epsec Mux receives a
HELLO message from the USVM during initial connection establishment.
vmwNsxMUpgradeSuccess .1.3.6.1.4.1.6876.90.1.2.19.0.1
Notification generated when NSX Manager upgraded successfully.
vmwNsxMCertificateExpired .1.3.6.1.4.1.6876.90.1.2.19.0.10
The certificate with the mentioned id has expired.
Action: Replace the expired certificate. Please refer to the NSX
Administration and API guides for details on certificate-related
operations.
Frequency: This is triggered every day until the expired certificate is
replaced.
vmwNsxMCertificateAboutToExpire .1.3.6.1.4.1.6876.90.1.2.19.0.11
The certificate with the mentioned id will expire on the mentioned date.
Action: Replace the expiring certificate. Please refer to the NSX
Administration and API guides for details on certificate-related
operations.
Frequency: This is triggered every day until the expiring certificate is
replaced.
vmwNsxMCPUHigh .1.3.6.1.4.1.6876.90.1.2.19.0.12
NSX Manager CPU usage is currently high. CPU usage is based on load
across all cores.
Action: If NSX Manager is facing a performance issue, please collect the
technical support logs for NSX Manager and the host, and contact VMware
technical support.
Frequency: This is triggered whenever NSX Manager CPU is above the high
threshold for 5 consecutive intervals. Default values of the high
threshold and interval are 80% and 1 minute respectively.
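The "above threshold for 5 consecutive intervals, raise once" logic described above can be sketched as a small state machine. Assumptions are limited to the documented defaults (80%, 1-minute intervals); the class name and method are illustrative:

```python
class CpuHighDetector:
    """Sketch of the documented threshold logic: fire once when CPU stays
    above the high threshold for `intervals` consecutive samples, and
    re-arm after the CPU drops below the threshold."""

    def __init__(self, threshold: float = 80.0, intervals: int = 5):
        self.threshold = threshold   # documented default: 80%
        self.intervals = intervals   # documented default: 5 x 1-minute samples
        self.streak = 0
        self.raised = False

    def sample(self, cpu_percent: float) -> bool:
        """Feed one periodic sample; return True exactly when the
        vmwNsxMCPUHigh condition fires."""
        if cpu_percent > self.threshold:
            self.streak += 1
        else:
            # Below threshold: reset the streak and re-arm the alarm
            # (the vmwNsxMCPUNormal side of the pair).
            self.streak = 0
            self.raised = False
        if self.streak >= self.intervals and not self.raised:
            self.raised = True
            return True
        return False
```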
vmwNsxMCPUNormal .1.3.6.1.4.1.6876.90.1.2.19.0.13
NSX Manager CPU usage is back to normal. CPU usage is based on load
across all cores.
Action: None
Frequency: This is triggered whenever NSX Manager CPU is below the high
threshold after 1 interval of the CPU being above the high threshold.
Default values of the high threshold and interval are 80% and 1 minute
respectively.
vmwNsxMRestoreSuccess .1.3.6.1.4.1.6876.90.1.2.19.0.2
Notification generated when NSX Manager restored successfully.
vmwNsxMDuplicateIp .1.3.6.1.4.1.6876.90.1.2.19.0.3
The NSX Manager IP has been assigned to another machine.
Action: None
Frequency: This is triggered whenever NSX Manager detects that its IP
address is being used by another machine in the same network.
vmwNsxMVirtualMachineMarkedAsSystemResource .1.3.6.1.4.1.6876.90.1.2.19.0.4
A virtual machine is marked as a system resource.
Action: None
Frequency: This is triggered whenever any virtual machine is marked as a
system resource.
vmwNsxMScaleAboveSupportedLimits .1.3.6.1.4.1.6876.90.1.2.19.0.5
The current value(s) of the mentioned parameter(s) has crossed the
supported scale limits.
Action: Reduce the scale of the parameter(s) mentioned.
Frequency: This is triggered every hour if any new parameters cross the
supported scale limits.
vmwNsxMScaleAboveThreshold .1.3.6.1.4.1.6876.90.1.2.19.0.6
The current value(s) of the mentioned parameter(s) has crossed the set
threshold scale value.
Action: Reduce the scale of the parameter(s) mentioned.
Frequency: This is triggered every hour if any new parameters cross the
set threshold value.
vmwNsxMScaleNormalized .1.3.6.1.4.1.6876.90.1.2.19.0.7
The current value(s) of the mentioned parameter(s) is back to normal
after being above the set threshold or the supported value.
Action: No action required.
Frequency: This is triggered every hour if any new parameters return to
normal after being above the set threshold or the supported value.
vmwNsxMScaleNotEqualToRecommendedValue .1.3.6.1.4.1.6876.90.1.2.19.0.8
The current value(s) of the mentioned parameter(s) does not match the
recommended value.
Action: Increase/decrease the number of objects for the parameter(s)
mentioned to match the recommended value. Please refer to the NSX
Administration guide.
Frequency: This is triggered every hour if any new parameters do not
match the recommended value.
vmwNsxMIpAddedBlackList .1.3.6.1.4.1.6876.90.1.2.2.0.1
When user authentication fails a number of times, the user is blacklisted
and further login attempts are disabled for that user from the given IP
address for some time.
Action required: None
Frequency of traps: Whenever user authentication fails consecutively
within some time.
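The blacklisting behavior described above is a sliding-window failure counter. A sketch under stated assumptions: the actual NSX failure count and window are configurable and not given in this MIB, so the defaults below (5 failures in 60 seconds) and the class name are hypothetical:

```python
from collections import defaultdict, deque

class LoginBlacklist:
    """Illustrative sketch of the behavior above: after `max_failures`
    authentication failures within `window` seconds, the (user, source IP)
    pair is blacklisted. Thresholds here are placeholders, not NSX's."""

    def __init__(self, max_failures: int = 5, window: float = 60.0):
        self.max_failures = max_failures
        self.window = window
        self.failures = defaultdict(deque)  # (user, ip) -> failure timestamps

    def record_failure(self, user: str, ip: str, now: float) -> bool:
        """Record one failed login at time `now` (seconds); return True
        when this pushes the pair over the blacklist threshold."""
        q = self.failures[(user, ip)]
        q.append(now)
        # Drop failures that fell outside the sliding window.
        while q and now - q[0] > self.window:
            q.popleft()
        return len(q) >= self.max_failures
```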
vmwNsxMVcDisconnected .1.3.6.1.4.1.6876.90.1.2.2.0.10
There is disconnectivity for the default vCenter connection maintained by
NSX.
Action required: The administrator needs to check the connectivity with
vCenter for network problems or any other reasons.
vmwNsxMLostVcConnectivity .1.3.6.1.4.1.6876.90.1.2.2.0.11
There is disconnectivity for the default vCenter connection maintained by
NSX.
Action required: The administrator needs to check the connectivity with
vCenter for network problems or any other reasons.
vmwNsxMSsoDisconnected .1.3.6.1.4.1.6876.90.1.2.2.0.12
There is a disconnection with the SSO lookup service.
Action required: Please check the configuration for possible
disconnection reasons like invalid credentials, time sync issues, network
connectivity problems, etc. Navigate to the appliance management Web UI
in a browser (https:///), traverse to the Manage vCenter Registration tab
and verify the configuration for the SSO lookup service.
Frequency of traps: Once per disconnect event; the default frequency to
check the SSO connection state is 1 hour.
vmwNsxMSsoTimeout .1.3.6.1.4.1.6876.90.1.2.2.0.13
When we try to configure VC on a system where SSO is already configured,
we fetch the token to log in to the VC using SSO. If it times out during
that, this trap is raised.
Action required: Try reconnecting to the SSO server and, if the service
remains unresponsive, try restarting the NSX management service from the
NSX appliance management UI. Contact the SSO administrator if the issue
persists.
Frequency of traps: Whenever we try to configure VC on a system where SSO
is already configured.
vmwNsxMIpRemovedBlackList .1.3.6.1.4.1.6876.90.1.2.2.0.2
When a user is blacklisted, the user is removed from the blacklist after
the blacklist duration expires.
Action required: None
Frequency of traps: Whenever the blacklist duration expires for any user.
vmwNsxMSsoConfigFailure .1.3.6.1.4.1.6876.90.1.2.2.0.3
The configuration of the lookup service / SSO fails due to various
reasons like invalid credentials, invalid configuration, time sync
problems, etc.
Action required: Check the event message and reconfigure the lookup
service with the correct details.
Frequency of traps: Once per failed configuration of the lookup service.
vmwNsxMSsoUnconfigured .1.3.6.1.4.1.6876.90.1.2.2.0.4
A user unconfigures the lookup service.
Action required: None
Frequency of traps: Once per unconfiguration event of the lookup service.
vmwNsxMUserRoleAssigned .1.3.6.1.4.1.6876.90.1.2.2.0.5
A role is assigned on NSX Manager for a vCenter user.
Action required: None
Frequency of traps: Once for each user who is assigned a role.
vmwNsxMUserRoleUnassigned .1.3.6.1.4.1.6876.90.1.2.2.0.6
A role is unassigned on NSX Manager for a vCenter user.
Action: None
Frequency of traps: Once for each user whose role is removed.
vmwNsxMGroupRoleAssigned .1.3.6.1.4.1.6876.90.1.2.2.0.7
A role is assigned on NSX Manager for a vCenter group.
Action required: None
Frequency of traps: Once for each group that is assigned a role.
vmwNsxMGroupRoleUnassigned .1.3.6.1.4.1.6876.90.1.2.2.0.8
A role is unassigned on NSX Manager for a vCenter group.
Action required: None
Frequency of traps: Once for each group whose role is removed.
vmwNsxMVcLoginFailed .1.3.6.1.4.1.6876.90.1.2.2.0.9
The connection with vCenter starts failing due to invalid credentials.
Action required: Reconfigure the NSX Manager vCenter configuration with
the correct credentials.
vmwNsxMVxlanLogicalSwitchImproperlyCnfg .1.3.6.1.4.1.6876.90.1.2.20.0.1
This event is triggered if one or more distributed virtual port groups
backing a certain Logical Switch were modified and/or removed, or if
migration of the control plane mode for a Logical Switch/Transport Zone
failed.
Action required: (1) If the event was triggered due to
deletion/modification of backing distributed virtual port groups, the
error will be visible on the Logical Switch UI page. Resolving from there
will try to create the missing distributed virtual port groups for the
Logical Switch. (2) If the event was triggered due to failure of control
plane mode migration, redo the migration for that Logical Switch or
Transport Zone.
Frequency of traps: Event is triggered due to user actions as explained
in the description.
Affects: Logical Switch network traffic.
vmwNsxMVxlanControllerRemoved .1.3.6.1.4.1.6876.90.1.2.20.0.10
Notification generated when a VXLAN Controller has been removed because
the connection can't be built; please check the controller IP
configuration and deploy again.
vmwNsxMVxlanControllerConnProblem .1.3.6.1.4.1.6876.90.1.2.20.0.11
NSX Manager detected that the connection between two controller nodes is
broken.
Action required: This is a warning event; users need to check the
controller cluster for further steps. Check KB 2127655
(https://kb.vmware.com/kb/2127655) to see if the issue matches.
Frequency of traps: Whenever the controller reports the issue.
Affects: Networking might get affected.
vmwNsxMVxlanControllerInactive .1.3.6.1.4.1.6876.90.1.2.20.0.12
Notification information couldn't be sent to the NSX Controllers.
Action required: Ensure that the NSX Controller cluster is in a healthy
state before preparing a new host. Invoke the Controller Sync API to try
to rectify this error.
Frequency of traps: When a new host is prepared for NSX networking.
Affects: The newly prepared host. The communication channel between the
host and the NSX Controllers might have issues.
vmwNsxMVxlanControllerActive .1.3.6.1.4.1.6876.90.1.2.20.0.13
Notification generated when the Controller cluster state is now active.
A Controller synchronization job is in progress.
Frequency of traps: The Controller cluster becomes active again from a
previous inactive state.
Action required: The user doesn't have to take any corrective action.
NSX will auto-sync the controllers.
vmwNsxMVxlanVmknicMissingOrDeleted .1.3.6.1.4.1.6876.90.1.2.20.0.14
A VXLAN vmknic is missing or deleted from the host.
Action required: The issue can be resolved from the Logical Network
Preparation - VXLAN Transport UI section. Clicking on resolve will try to
rectify the issue.
Frequency of traps: The first time NSX Manager finds that the VXLAN
vmknic is missing or deleted from the host.
Affects: VXLAN traffic to/from the mentioned host will be affected.
vmwNsxMVxlanInfo .1.3.6.1.4.1.6876.90.1.2.20.0.15
NSX Manager will raise this event when any of the following connections
is established/re-established:
(i) the connection between NSX Manager and the Host Firewall agent;
(ii) the connection between NSX Manager and the Control Plane Agent;
(iii) the connection between the Control Plane Agent and the Controllers.
Action required: None
Frequency of traps: Whenever one of the connections listed above is
established or re-established.
vmwNsxMVxlanVmknicPortGrpMissing .1.3.6.1.4.1.6876.90.1.2.20.0.16
NSX Manager detected that a VXLAN vmknic is missing on VC.
Action required: Check the host; if that vmknic is deleted, click on the
resolve button in the UI, or call the remediate API
(POST /api/2.0/vdn/config/host/{hostId}/vxlan/vteps?action=remediate)
to recreate the VXLAN vmknic.
Frequency of traps: The first time the VXLAN vmknic is detected missing
(manually deleted by the user, or the inventory reported incorrect
information).
Affects: VXLAN traffic on that host may be interrupted.
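The remediate endpoint named above can be invoked from any HTTP client. A sketch that builds (but does not send) the request with Python's standard library; the endpoint path comes from this document, while the manager address, host id, credentials, and the basic-auth scheme are assumptions for illustration:

```python
import base64
import urllib.request

def build_vtep_remediate_request(manager: str, host_id: str,
                                 username: str, password: str):
    """Build (not send) the vteps remediate call described above.
    All argument values are caller-supplied placeholders."""
    url = (f"https://{manager}/api/2.0/vdn/config/host/"
           f"{host_id}/vxlan/vteps?action=remediate")
    # NSX for vSphere REST calls are assumed here to use HTTP basic auth.
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    req = urllib.request.Request(url, method="POST")
    req.add_header("Authorization", f"Basic {token}")
    return req
```

Sending it would be `urllib.request.urlopen(req)` with appropriate TLS handling for the appliance's certificate.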
vmwNsxMVxlanVmknicPortGrpAppears .1.3.6.1.4.1.6876.90.1.2.20.0.17
NSX Manager detected that a VXLAN vmknic that was marked as missing has
now reappeared on VC.
Action required: None
Frequency of traps: When the missing vmknic re-appears again.
Affects: VXLAN traffic on that host may resume.
vmwNsxMVxlanConnDown









.1.3.6.1.4.1.6876.90.1.2.20.0.18
This event is triggered when any of the following
connections are detected down by NSX Manager:
(i) connection between NSX Manager and Host Firewall agent.
(ii) connection between NSX Manager and Control Plane Agent.
(iii) connection between Control Plane Agent and Controllers.
Action required:
(i) If the NSX Manager to Host Firewall Agent connection is
down, check NSX Manager and Firewall Agent logs to get error details. You
can try the Fabric Synchronize API to try and rectify this issue.
(ii) If the NSX Manager to Control Plane Agent connection is down, please
check NSX Manager and Control Plane Agent logs to get the error detail,
and check whether the Control Plane Agent process is down.
(iii) If the Control Plane Agent to Controllers connection is down, please go
to the UI Installation page to check the connection status for the corresponding
Host.
Frequency of traps: When
(i) NSX Manager loses connection with the Firewall agent on the host, or
(ii) NSX Manager loses connection with the Control plane agent on the host, or
(iii) the Control plane agent on the Host loses connection with NSX Controllers.
Affects: VMs on that Host might get affected.
vmwNsxMBackingPortgroupMissing









.1.3.6.1.4.1.6876.90.1.2.20.0.19
NSX Manager detected one backing portgroup of a logical
switch is missing on vCenter.
Action required: Click on the resolve button on UI or call the API
(POST https:///api/2.0/vdn/virtualwires//backing?action=remediate)
to recreate that backing portgroup.
Frequency of traps: Whenever logical switch backing portgroup is missing
on VC.
Affects: VMs cannot be connected to this Logical Switch.
vmwNsxMVxlanLogicalSwitchProperlyCnfg









.1.3.6.1.4.1.6876.90.1.2.20.0.2
The Logical Switch status has been marked good, most probably as a
result of resolving any errors on it.
Action required: None
Frequency of traps: Event is triggered when user resolves the Logical
Switch error and as a result missing backing distributed virtual port
groups are recreated.
vmwNsxMBackingPortgroupReappears









.1.3.6.1.4.1.6876.90.1.2.20.0.20
NSX Manager detected one backing portgroup of a logical
switch that was missing has reappeared on VC.
Action required: None
Frequency of traps: Whenever the user triggers the remediate API on a Logical
Switch which has a missing backing portgroup.
vmwNsxMManagedObjectIdChanged









.1.3.6.1.4.1.6876.90.1.2.20.0.21
NSX Manager detected that the Managed Object ID of one backing
portgroup of a logical switch changed.
Action required: None
Frequency of traps: This typically happens when a user restores a backup
of the Logical Switch backing portgroup.
vmwNsxMHighLatencyOnDisk









.1.3.6.1.4.1.6876.90.1.2.20.0.22
NSX Manager detected that a disk on an NSX Controller has high
latency.
Action required: Rectify the issue on specified device and controller.
Frequency of traps: First time NSX detects this issue as reported by the
Controller. When this issue gets resolved, another Informational event will
be raised by NSX Manager indicating the same.
Affects: NSX Controller.
vmwNsxMHighLatencyOnDiskResolved









.1.3.6.1.4.1.6876.90.1.2.20.0.23
NSX Manager detected that the high latency alert on a
disk on an NSX Controller has been resolved.
Frequency of traps: First time NSX detects that the previously raised disk
latency issue has been resolved.
vmwNsxMControllerVmPoweredOff









.1.3.6.1.4.1.6876.90.1.2.20.0.24
NSX Manager detected that a Controller Virtual Machine is powered
off from vCenter.
Action required: Click on the 'Resolve' button on Controller page on UI or
call the API (POST https:///api/2.0/vdn/controller/{controllerId}?action=remediate)
to power on the Controller Virtual Machine.
Frequency of traps: This event will be raised when the Controller Virtual
Machine is powered off from vCenter.
Affects: Controller cluster status might go to disconnected if a controller
Virtual Machine is powered off. Any operation that requires an active
Controller Cluster may be affected.
vmwNsxMControllerVmDeleted









.1.3.6.1.4.1.6876.90.1.2.20.0.25
NSX Manager detected that a Controller Virtual Machine is deleted
from vCenter.
Action required: Click on the Resolve button on Controller page on UI or
call the API (POST https:///api/2.0/vdn/controller/{controllerId}?action=remediate)
to clean up NSX manager's database state.
Frequency of traps: This event will be raised when Controller Virtual
Machine is deleted from vCenter.
Affects: Controller cluster status might go to disconnected if a controller
Virtual Machine is deleted. Any operation that requires an active
Controller Cluster may be affected.
vmwNsxMVxlanConfigNotSet









.1.3.6.1.4.1.6876.90.1.2.20.0.26
NSX Manager detected that the VXLAN configuration is not set on
the host (would-block issue). This event indicates NSX Manager tried
to rectify this issue by resending the VXLAN configuration to the Host.
Action required: See KB 2107951 https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2107951&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732862&stateId=0%200%2040754197
for more information.
Frequency of traps: This event is generated when the host preparation task is
triggered for a host and the Host encounters the would-block issue.
Affects: This is a notification; no specific next step is required.
vmwNsxMVxlanPortgroupDeleted









.1.3.6.1.4.1.6876.90.1.2.20.0.27
NSX Manager will raise this event when a VXLAN portgroup is deleted from
a switch.
ACTION: User does not have to take any action. This is just a notification
for the user.
FREQUENCY: This event is generated a single time when the VXLAN portgroup
is deleted from the switch.
AFFECTS: The VXLAN traffic on the switch is interrupted.
vmwNsxMVxlanVDSandPgMismatch









.1.3.6.1.4.1.6876.90.1.2.20.0.28
DESCRIPTION: NSX Manager will raise this event when the teaming policies of
a VDS and its associated VXLAN portgroup are not the same.
ACTION: Set the VXLAN portgroup teaming policy back to the original value. 
Below is the workaround to correctly update the teaming policy in a deployed
cluster: The supported method to correct teaming policy inconsistencies is
to create and prepare a new NSX cluster with the required teaming policy,
and to migrate ESXi hosts to that cluster. Changing the teaming policy in
the manager DB by some other means only applies to newly created virtual
wires after the DB change is made.
FREQUENCY: This event is generated whenever the VXLAN portgroup teaming
policy is changed to something different from the VDS teaming policy.
AFFECTS: The VXLAN traffic on the switch may be interrupted.
vmwNsxMVxlanControllerDisconnected









.1.3.6.1.4.1.6876.90.1.2.20.0.29
DESCRIPTION: NSX Manager raises this event when the Controller VM is
disconnected, i.e. the controller cannot be reached from the NSX Manager.
ACTION: Make sure an IP is assigned to the controller VM and it is
reachable from the NSX Manager.
FREQUENCY: The event occurs when the Controller VM is powered off or not responding.
Minimum 40 secs between two disconnect events.
AFFECTS: Controller cluster status might go to disconnected if a controller
Virtual Machine is disconnected. Any operation that requires an active
Controller Cluster may be affected.
vmwNsxMVxlanInitFailed









.1.3.6.1.4.1.6876.90.1.2.20.0.3
Failed to configure the vmknic as a VTEP; VXLAN traffic through
this interface will be dropped until this is resolved.
Action required: Check the host's vmkernel.log for more details.
Frequency of traps: Every time a VTEP vmknic tries to connect to its
Distributed Virtual Port.
Affects: VXLAN traffic on the affected Host.
vmwNsxMVxlanControllerConnected









.1.3.6.1.4.1.6876.90.1.2.20.0.30
DESCRIPTION: NSX Manager will raise this informational event when the
controller VM is connected to the controller cluster.
ACTION: No action to be taken.
FREQUENCY: Event occurs as the controller is connected to the controller cluster.
Minimum 40 secs between two connect events.
AFFECTS: None.
vmwNsxMVxlanControllerVmPoweredOn









.1.3.6.1.4.1.6876.90.1.2.20.0.31
DESCRIPTION: NSX Manager detected a Controller Virtual Machine is powered on from vCenter.
ACTION: None.
FREQUENCY: This event occurs when Controller Virtual Machine is powered On.
AFFECTS: Controller cluster status might go to disconnected if a controller Virtual Machine
is powered off. Any operation that requires an active Controller Cluster may be affected.
vmwNsxMVxlanHostEvents









.1.3.6.1.4.1.6876.90.1.2.20.0.32
DESCRIPTION: NSX Manager receives a notification from the host informing that a MAC mismatch occurred.
ACTION: Contact the administrator to take the following actions:
         (1) Look for the VMs that caused this issue on the hosts.
         (2) If the VMs are rogue, shut down these VMs.
FREQUENCY: Event is triggered as a MAC mismatch occurs at a switchport on the hosts.
AFFECTS: Identify the VMs causing this issue.
vmwNsxMVxlanPortInitFailed









.1.3.6.1.4.1.6876.90.1.2.20.0.4
Failed to configure VXLAN on the Distributed Virtual Port;
the port will be disconnected.
Action required: Check the host's vmkernel.log for more details.
Frequency of traps: Every time a VXLAN vNic tries to connect to its
Distributed Virtual Port on the host.
Affects: VXLAN traffic on the affected Host.
vmwNsxMVxlanInstanceDoesNotExist









.1.3.6.1.4.1.6876.90.1.2.20.0.5
A VXLAN configuration was received for a Distributed Virtual
Port, but the host has not yet enabled VXLAN on the vSphere Distributed
Switch. VXLAN ports on the affected Host will fail to connect until resolved.
Action required: See KB 2107951 (https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2107951&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732862&stateId=0%200%2040754197)
Frequency of traps: Every time any VXLAN related port (vNic or vmknic)
tries to connect to its Distributed Virtual Port on the host.
Affects: VXLAN Traffic on that Host.
vmwNsxMVxlanLogicalSwitchWrkngImproperly









.1.3.6.1.4.1.6876.90.1.2.20.0.6
The VTEP interface was unable to join the specified multicast
address; the VTEP will be unable to receive some traffic from other hosts
until this is resolved. The host will periodically retry joining the group
until it is successful.
Action required: Check the host's vmkernel.log for more details.
Frequency of traps: NSX retries joining failed mcast groups every
5 seconds.
Affects: Logical Switch associated with problem VTEP interface won't work
properly.
vmwNsxMVxlanTransportZoneIncorrectlyWrkng









.1.3.6.1.4.1.6876.90.1.2.20.0.7
The IP address of a VTEP vmknic has changed.
Action required: None.
Frequency of traps: Every time a VTEP IP changes.
vmwNsxMVxlanTransportZoneNotUsed









.1.3.6.1.4.1.6876.90.1.2.20.0.8
The VTEP vmknic does not have a valid IP address assigned; all
VXLAN traffic through this vmknic will be dropped.
Action required: Verify the IP configuration for the interface, and the
DHCP server if DHCP is used.
Frequency of traps: Once per VTEP losing its IP address.
vmwNsxMVxlanOverlayClassMissingOnDvs









.1.3.6.1.4.1.6876.90.1.2.20.0.9
The VXLAN overlay classes were not installed prior to DVS configuration
for VXLAN. All VXLAN ports will fail to connect until resolved.
Action required: See KB 2107951 https://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&docType=kc&externalId=2107951&sliceId=1&docTypeID=DT_KB_1_1&dialogID=40732862&stateId=0%200%2040754197
Frequency of traps: Once per setting of the com.vmware.netoverlay.layer0=vxlan
opaque property, or whenever the host is configured for VXLAN, or the Host
reconnects to vCenter and has some problem.
Affects: VXLAN Traffic for that Host will be affected.
vmwNsxMLogserverEventGenStopped






.1.3.6.1.4.1.6876.90.1.2.21.0.1
DESCRIPTION: This event is triggered if the security log on the domain controller event log server is full.
ACTION: See the article regarding the issue: https://support.microsoft.com/en-us/kb/867860
        Contact the Domain Administrator to take one of the following actions:
        (1). Increase the size of the security log.
        (2). Clear the security log.
        (3). Archive the security log.
FREQUENCY: Event is triggered when the event log size reaches its limit.
AFFECTS: Identity firewall stops functioning.
vmwNsxMApplicationRuleManagerFlowAnalysisStart






.1.3.6.1.4.1.6876.90.1.2.22.0.1
Started flow analysis on an Application Rule Manager session.
Action required: None
Frequency of traps: This trap is sent once for each session on which analysis has started.
vmwNsxMApplicationRuleManagerFlowAnalysisFailed






.1.3.6.1.4.1.6876.90.1.2.22.0.2
Flow analysis failed on an Application Rule Manager session.
Contextual data provided with this event may indicate the cause of this failure.
Action required: Start a new monitoring session for the application and try analyzing flows again.
On repeated failure, collect the ESX & NSX Manager tech support bundle and open an SR with VMware tech support.
Frequency of traps: This trap is sent once for each session on which analysis has failed.
vmwNsxMApplicationRuleManagerFlowAnalysisComplete






.1.3.6.1.4.1.6876.90.1.2.22.0.3
Completed flow analysis on an Application Rule Manager session.
Action required: None
Frequency of traps: This trap is sent once for each session on which analysis has completed.
vmwNsxMFltrCnfgUpdateFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.1
NSX Manager failed to enforce DFW. VMs on this host may not be protected by the DFW. Contextual data provided with this event may indicate the cause of this failure.
This could happen if the VIB version mismatches between the NSX Manager and ESX host. This may happen during an upgrade. Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/1010705
vmwNsxMFlowMissed









.1.3.6.1.4.1.6876.90.1.2.3.0.10
Flows were missed.
Contextual data provided with this event may indicate the cause of this failure.
Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vnic. Verify that vsip kernel heaps have enough free memory and vsfwd memory consumption is within resource limits. Check VSFWD logs. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMSpoofGuardCnfgUpdateFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.11
Failed to receive, parse, or update the spoofguard configuration.
Contextual data provided with this event may indicate the cause of this failure.
Verify that the host in question was properly prepared by NSX Manager.
Collect error logs (vmkernel.log) from when the spoofguard configuration was applied to the host. Force sync the firewall configuration. See kb.vmware.com/kb/2125437.
vmwNsxMSpoofGuardFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.12
Failed to apply spoofguard to the vnic.
Contextual data provided with this event may indicate the cause of this failure.
Verify that vsip kernel heaps have enough free memory.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMSpoofGuardApplied









.1.3.6.1.4.1.6876.90.1.2.3.0.13
Successfully applied spoofguard for the vnic.
Action required: None
vmwNsxMSpoofGuardDisableFail









.1.3.6.1.4.1.6876.90.1.2.3.0.14
Failed to disable spoofguard on the vnic.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMSpoofGuardDisabled









.1.3.6.1.4.1.6876.90.1.2.3.0.15
Successfully disabled spoofguard for the vnic.
Action required: None
vmwNsxMLegacyAppServiceDeletionFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.16
Notification generated when legacy application service VM
deletion failed.
vmwNsxMFirewallCpuThresholdCrossed









.1.3.6.1.4.1.6876.90.1.2.3.0.17
The firewall CPU usage threshold was exceeded.
Reduce the amount of traffic of VMs on the host in question.
vmwNsxMFirewallMemThresholdCrossed









.1.3.6.1.4.1.6876.90.1.2.3.0.18
The firewall memory threshold was exceeded for the specified heap.
Reduce the number of VMs on the host in question, or reduce the number of rules or containers in the firewall config. Use the appliedTo feature to limit the number of rules for the current cluster.
vmwNsxMConnPerSecThrshldCrossed









.1.3.6.1.4.1.6876.90.1.2.3.0.19
The Connections Per Second (CPS) threshold was exceeded for the specified vnic.
Reduce the amount of new connections from VMs on the host in question.
vmwNsxMFltrCnfgNotAppliedToVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.2
NSX Manager failed to enforce DFW configuration on a vnic. This particular VM may not be protected by the DFW. Contextual data provided with this event may indicate the cause of this failure. This could happen if the VIB version mismatches between the NSX Manager and ESX host. This may happen during an upgrade. Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallCnfgUpdateTimedOut









.1.3.6.1.4.1.6876.90.1.2.3.0.20
NSX Manager waits for 2 minutes after publishing the Firewall configuration to each host in the cluster. If a host takes more than 2 minutes to process the data, it times out.
Please check the Host in question. See if VSFWD is functioning or not. Also use CLI commands to verify whether rule realization is working properly. See kb.vmware.com/kb/2125437.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMSpoofGuardCnfgUpdateTmOut









.1.3.6.1.4.1.6876.90.1.2.3.0.21
NSX Manager waits for 2 minutes after publishing the Spoofguard configuration to each host in the cluster. If a host takes more than 2 minutes to process the data, it times out.
Please check the Host in question. See if VSFWD is functioning or not. Also use CLI commands to verify whether rule realization is working properly. See kb.vmware.com/kb/2125437.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallPublishFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.22
Firewall configuration publishing has failed for a given cluster/host.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMCntnrUpdatePublishFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.23
Publishing of a container (IP/MAC/vNIC) update failed for a given host/cluster object.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMSpoofGuardUpdatePublishFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.24
Publishing of the spoofguard updates on this host has failed. Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMExcludeListPublishFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.25
Publishing of the exclude list or updates to the exclude list on this host has failed. Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallCnfgUpdateOnDltCntnr









.1.3.6.1.4.1.6876.90.1.2.3.0.26
NSX Manager detected deletion of an object referenced in firewall rules.
Action required: Go to the NSX Manager DFW UI. All invalid references
are marked invalid on the UI as well. Please remove the orphaned references
and update the firewall rules.
vmwNsxMHostSyncFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.27
Host force synchronization has failed. Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMHostSynced









.1.3.6.1.4.1.6876.90.1.2.3.0.28
The force sync operation for the host succeeded.
Action required: None
vmwNsxMFirewallInstalled









.1.3.6.1.4.1.6876.90.1.2.3.0.29
The Distributed Firewall was successfully installed on the host.
vmwNsxMFltrCnfgAppliedToVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.3
Successfully updated the filter config.
Action required: None
vmwNsxMFirewallInstallFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.30
Distributed Firewall installation has failed. Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallClusterInstalled









.1.3.6.1.4.1.6876.90.1.2.3.0.31
The Distributed Firewall has been installed at the request of a user.
vmwNsxMFirewallClusterUninstalled









.1.3.6.1.4.1.6876.90.1.2.3.0.32
The Distributed Firewall has been uninstalled at the request of a user.
vmwNsxMFirewallClusterDisabled









.1.3.6.1.4.1.6876.90.1.2.3.0.33
The Distributed Firewall has been disabled on the cluster at the request of a user.
vmwNsxMFirewallForceSyncClusterFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.34
The force sync operation for the cluster has failed.
Use CLI commands to look at the logs and verify whether any error messages appeared during the operation. See kb.vmware.com/kb/2125437.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallForceSyncClusterSuccess









.1.3.6.1.4.1.6876.90.1.2.3.0.35
The force sync operation for the cluster succeeded.
Action required: None
vmwNsxMFirewallVsfwdProcessStarted









.1.3.6.1.4.1.6876.90.1.2.3.0.36
The vsfwd process started on the host.
Action required: None
vmwNsxMFirewallRulesetApplyAllFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.37
Failed to apply all the rule section configuration. Contextual data provided with this event may indicate the cause of this failure.
Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vnic. Verify that vsip kernel heaps have enough free memory. Check VSFWD logs. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMFirewallRulesetAppliedAll









.1.3.6.1.4.1.6876.90.1.2.3.0.38
Successfully applied all rule section config.
Action required: None.
vmwNsxMCntnrCnfgApplyFailedToVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.39
Failed to apply the container configuration. Contextual data provided with this event may indicate the cause of this failure.
Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vnic. Verify that vsip kernel heaps have enough free memory. Check VSFWD logs. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMFltrCreatedForVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.4
Filter created. DFW is enforced in the datapath for the
vnic.
Action required: None
vmwNsxMCntnrCnfgApplyAllFailedToVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.40
Failed to apply all container configuration. Contextual data provided with this event may indicate the cause of this failure.
Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vnic. Verify that vsip kernel heaps have enough free memory. Check VSFWD logs. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMCntnrCnfgAppliedAllToVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.41
Successfully applied all container config to all vnics.
Action required: None.
vmwNsxMSpoofGuardApplyAllFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.42
Failed to apply all spoofguard to the vnics.
Contextual data provided with this event may indicate the cause of this failure.
Verify that vsip kernel heaps have enough free memory.
Please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/653.
vmwNsxMSpoofGuardAppliedAll









.1.3.6.1.4.1.6876.90.1.2.3.0.43
Successfully applied all spoofguard for vnics.
Action required: None.
vmwNsxMFirewallTimeoutUpdateFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.44
Firewall session timer timeout configuration parse/update failed. Timeout values are unchanged. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vsfwd.log) from when the host received the firewall config. Force sync the firewall config using the ForceSync API/UI. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMFirewallTimeoutApplyFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.45
Firewall session timer timeout configuration apply failed. Certain timeout values are unchanged. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vsfwd.log) from when the host received the firewall config. Force sync the firewall config using the ForceSync API/UI. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMFirewallTimeoutApplied









.1.3.6.1.4.1.6876.90.1.2.3.0.46
Successfully applied session timeout values for a vnic.
Action required: None.
vmwNsxMFirewallTimeoutApplyAllFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.47
Failed to apply all firewall session timer timeout configuration. Some timeout values are unchanged. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vsfwd.log) from when the host received the firewall config. Force sync the firewall config using the ForceSync API/UI. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMFirewallTimeoutAppliedAll









.1.3.6.1.4.1.6876.90.1.2.3.0.48
Successfully applied session timeout values for all vnics.
Action required: None.
vmwNsxMCntnrCnfgAppliedToVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.49
Successfully applied container config to all vnics.
Action required: None.
vmwNsxMFltrDeletedForVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.5
Filter deleted. DFW is removed from the vnic.
Action required: None
vmwNsxMFirewallMaxConcurrentConnectionsThresholdCrossed









.1.3.6.1.4.1.6876.90.1.2.3.0.50
The maximum concurrent connections threshold was exceeded for the specified vnic.
Reduce the amount of traffic on the vnic in question.
vmwNsxMFirewallProcessMemoryThresholdCrossed









.1.3.6.1.4.1.6876.90.1.2.3.0.51
The process memory utilization threshold was exceeded for the specified process.
Reduce the number of rules or containers in the firewall config. If the issue persists, there may be memory leaks in the process, so restarting it may be necessary.
vmwNsxMFirewallCpuThresholdCrossCleared









.1.3.6.1.4.1.6876.90.1.2.3.0.52
Firewall CPU usage is back below the threshold level.
Action required: None.
vmwNsxMFirewallMemThresholdCrossCleared









.1.3.6.1.4.1.6876.90.1.2.3.0.53
Firewall memory usage is back below the threshold level for the specified heap.
Action required: None.
vmwNsxMConnPerSecThrshldCrossCleared









.1.3.6.1.4.1.6876.90.1.2.3.0.54
Connections Per Second (CPS) is back below the threshold level for the specified vnic.
Action required: None.
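Several traps in this branch arrive in crossed/cleared pairs, such as vmwNsxMConnPerSecThrshldCrossed and vmwNsxMConnPerSecThrshldCrossCleared. A trap receiver can fold each pair into a per-vnic alarm flag; a minimal sketch, with the (vnic, metric) keying chosen here purely for illustration:

```python
# Track active threshold alarms keyed by (vnic, metric): raise the flag on
# a "crossed" trap and clear it on the matching "cross-cleared" trap.
class ThresholdAlarms:
    def __init__(self):
        self.active = set()  # set of (vnic, metric) pairs currently alarming

    def on_trap(self, vnic: str, metric: str, crossed: bool) -> None:
        key = (vnic, metric)
        if crossed:
            self.active.add(key)
        else:
            self.active.discard(key)

    def is_alarming(self, vnic: str, metric: str) -> bool:
        return (vnic, metric) in self.active
```

Feeding a CPS "crossed" trap for a vnic sets the flag; the matching "cross-cleared" trap drops it, leaving a live view of which vnics are currently over threshold.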
vmwNsxMFirewallMaxConcurrentConnectionsThresholdCrossCleared









.1.3.6.1.4.1.6876.90.1.2.3.0.55
Maximum concurrent connections are back below the threshold level for the specified vnic.
Action required: None.
vmwNsxMFirewallProcessMemoryThresholdCrossCleared









.1.3.6.1.4.1.6876.90.1.2.3.0.56
Process memory utilization is back below the threshold level for the specified process.
Action required: None.
vmwNsxMFirewallThresholdConfigApplied









.1.3.6.1.4.1.6876.90.1.2.3.0.57
Successfully applied all threshold config.
Action required: None.
vmwNsxMFirewallThresholdConfigApplyFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.58
Threshold configuration apply failed. Certain threshold values are unchanged. Contextual data provided with this event may indicate the cause of this failure. Collect error logs (vsfwd.log) from when the host received the firewall config. Force sync the firewall config using the ForceSync API/UI. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and http://kb.vmware.com/kb/653.
vmwNsxMUnsupportedIPsetConfigured









.1.3.6.1.4.1.6876.90.1.2.3.0.59
An IP address 0.0.0.0 or 0.0.0.0/32 is configured as an IPSet.
KB article: https://ikb.vmware.com/kb/53157
Action: Information-only event. No action required. A workaround for this issue is documented in the KB article.
vmwNsxMFirewallConfigUpdateFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.6
Firewall rule configuration between the NSX Manager and the host is not in sync. Contextual data provided with this event may indicate the cause of this failure. Verify that the host in question was properly prepared by NSX Manager. Collect error logs (vsfwd.log) from when the host received the firewall config. Force sync the firewall config using the ForceSync API/UI. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallRuleFailedVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.7
Failed to apply the Distributed Firewall configuration.
Contextual data provided with this event may indicate the cause of this failure.
Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vnic. vsip kernel heaps may not have enough free memory. Check VSFWD logs. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMFirewallRuleAppliedVnic









.1.3.6.1.4.1.6876.90.1.2.3.0.8
Successfully applied the firewall config. The key value will have context info
like generation number and also other debugging info.
Action required: None
vmwNsxMCntnrCnfgUpdateFailed









.1.3.6.1.4.1.6876.90.1.2.3.0.9
Failed to receive, parse, or update the container configuration. Contextual data provided with this event may indicate the cause of this failure.
Collect error logs (vmkernel.log) from when the firewall configuration was applied to the vnic. Verify that vsip kernel heaps have enough free memory. Check VSFWD logs. See kb.vmware.com/kb/2125437.
If the issue persists, please collect the ESX and NSX Manager tech support bundle and open an SR with VMware tech support. See http://kb.vmware.com/kb/2074678 and
http://kb.vmware.com/kb/1010705
vmwNsxMEdgeNoVmServing








.1.3.6.1.4.1.6876.90.1.2.4.0.1
None of the Edge VMs were found in serving state. There is a
possibility of network disruption.
Action required: The system auto-recovers from this state. The event should
be followed by traps with event code 30202 or 30203.
vmwNsxMEdgeUpgrade51x








.1.3.6.1.4.1.6876.90.1.2.4.0.10
Notification generated when the Edge Gateway is upgraded
to 5.1.x.
Action required: None
vmwNsxMEdgeLicenseChanged








.1.3.6.1.4.1.6876.90.1.2.4.0.11
Notification generated when Edge licensing changed on
vCenter Server.
Action required: None
vmwNsxMEdgeApplianceMoved








.1.3.6.1.4.1.6876.90.1.2.4.0.12
Notification generated when an Edge appliance is moved in the vCenter
inventory.
vmwNsxMEdgeApplianceNotFound








.1.3.6.1.4.1.6876.90.1.2.4.0.13
Notification generated when an Edge appliance is not found in
the vCenter inventory.
Action required: If the VM is accidentally deleted, redeploy the edge.
vmwNsxMEdgeVMHealthCheckMiss








.1.3.6.1.4.1.6876.90.1.2.4.0.14
Notification generated when an Edge VM is not responding to
the health check.
Action required: Communication issues between manager and edge. Log
analysis is required to root-cause the issue.
vmwNsxMEdgeHealthCheckMiss
.1.3.6.1.4.1.6876.90.1.2.4.0.15
A notification generated when none of the Edge VMs are found in serving state.
There is a possibility of network disruption.
Action required: Communication issues between the manager and the Edge. Log
analysis is required to root-cause the issue.
vmwNsxMEdgeCommAgentNotConnected
.1.3.6.1.4.1.6876.90.1.2.4.0.16
A notification generated when the Edge Communication Agent is not connected to
vCenter Server.
Action required: Check VSM and VC connectivity. Try registering VSM to VC.
vmwNsxMApplianceWithDifferentId
.1.3.6.1.4.1.6876.90.1.2.4.0.17
A notification generated when an Edge VM is discovered with a different vmId.
Action required: None
vmwNsxMFirewallRuleModified
.1.3.6.1.4.1.6876.90.1.2.4.0.18
A notification generated when an Edge firewall rule is modified.
Action required: Revisit the firewall rule and perform the required updates.
vmwNsxMEdgeAntiAffinityRuleViolated
.1.3.6.1.4.1.6876.90.1.2.4.0.19
A notification generated when powering on an NSX Edge appliance violates a
virtual machine anti-affinity rule.
Action required: Anti-affinity rules were removed from the cluster, so both HA
VMs may run on the same host. Go to VC and revisit the anti-affinity rules on
the cluster.
vmwNsxMEdgeGatewayCreated
.1.3.6.1.4.1.6876.90.1.2.4.0.2
A notification generated when an Edge Gateway is created.
Action required: None
vmwNsxMEdgeHaEnabled
.1.3.6.1.4.1.6876.90.1.2.4.0.20
A notification generated when NSX Edge High Availability is enabled.
Action required: None
vmwNsxMEdgeHaDisabled
.1.3.6.1.4.1.6876.90.1.2.4.0.21
A notification generated when NSX Edge High Availability is disabled.
Action required: None
vmwNsxMEdgeGatewayRecovered
.1.3.6.1.4.1.6876.90.1.2.4.0.22
A notification generated when the NSX Edge Gateway has recovered and is now
responding to the health check.
Action required: None
vmwNsxMEdgeVmRecovered
.1.3.6.1.4.1.6876.90.1.2.4.0.23
A notification generated when the NSX Edge VM has recovered and is now
responding to the health check.
Action required: None
vmwNsxMEdgeGatewayUpgraded
.1.3.6.1.4.1.6876.90.1.2.4.0.24
A notification generated when the Edge Gateway is upgraded.
Action required: None
vmwNsxMEdgeVmHlthChkDisabled
.1.3.6.1.4.1.6876.90.1.2.4.0.25
A notification generated when the Edge VM health check is disabled after
consecutive critical VIX errors. Redeploy or force sync the VM to resume the
health check.
Action required: This points to environmental issues that lead to repeated
failures over VIX. Log analysis is needed to identify the root cause. After
resolving the issues, force sync the Edge VM to resume the health check. Force
sync and redeploy are disruptive operations.
vmwNsxMEdgePrePublishFailed
.1.3.6.1.4.1.6876.90.1.2.4.0.26
A notification generated when pre-publish has failed on the Edge VM.
Action required: Firewall rules might be out of sync. The system auto-recovers,
but if the problem persists, trigger a force sync.
vmwNsxMEdgeForcedSync
.1.3.6.1.4.1.6876.90.1.2.4.0.27
A notification generated when the Edge VM was force synced.
Action required: None
vmwNsxMEdgeVmBooted
.1.3.6.1.4.1.6876.90.1.2.4.0.28
A notification generated when the Edge VM was booted.
Action required: None
vmwNsxMEdgeVmInBadState
.1.3.6.1.4.1.6876.90.1.2.4.0.29
A notification generated when the Edge VM is in a bad state and needs a force
sync.
Action required: Force sync required.
vmwNsxMEdgeVmBadState
.1.3.6.1.4.1.6876.90.1.2.4.0.3
A notification generated when the Edge VM is in a bad state and needs a force
sync.
Action required: The system auto-triggers a force sync, but if the problem is
sustained, a manual force sync should be triggered. For ESG, a force sync is
disruptive and will reboot the Edge VMs.
vmwNsxMEdgeVmCpuUsageIncreased
.1.3.6.1.4.1.6876.90.1.2.4.0.30
A notification generated when Edge VM CPU usage has increased.
Action required: Spikes are normal, but collect tech support logs for further
analysis if high CPU is sustained for a longer duration.
vmwNsxMEdgeVmMemUsageIncreased
.1.3.6.1.4.1.6876.90.1.2.4.0.31
A notification generated when Edge VM memory usage has increased.
Action required: The system recovers, but collect tech support logs for further
analysis.
vmwNsxMEdgeVmProcessFailure
.1.3.6.1.4.1.6876.90.1.2.4.0.32
A notification generated when the Edge VM process monitor detects a process
failure.
Action required: The system recovers, but collect tech support logs for further
analysis.
vmwNsxMEdgeVmSysTimeBad
.1.3.6.1.4.1.6876.90.1.2.4.0.33
A notification generated when the Edge VM system time is bad.
Action required: The system recovers. Check the NTP settings on hosts.
vmwNsxMEdgeVmSysTimeSync
.1.3.6.1.4.1.6876.90.1.2.4.0.34
A notification generated when the Edge VM system time is synced.
Action required: None
vmwNsxMEdgeAesniCryptoEngineUp
.1.3.6.1.4.1.6876.90.1.2.4.0.35
A notification generated when the AESNI crypto engine is up.
Action required: None
vmwNsxMEdgeAesniCryptoEngineDown
.1.3.6.1.4.1.6876.90.1.2.4.0.36
A notification generated when the AESNI crypto engine is down.
Action required: None
vmwNsxMEdgeVmOom
.1.3.6.1.4.1.6876.90.1.2.4.0.37
A notification generated when the Edge VM is out of memory. The Edge reboots in
3 seconds.
Action required: Collect tech support logs for further analysis.
vmwNsxMEdgeFileSysRo
.1.3.6.1.4.1.6876.90.1.2.4.0.38
A notification generated when the Edge file system is read-only.
Action required: Check for datastore issues; once resolved, a force sync is
required.
vmwNsxMEdgeHaCommDisconnected
.1.3.6.1.4.1.6876.90.1.2.4.0.39
A notification generated when the Edge High Availability communication channel
is disconnected from the peer node.
Action required: Check the network infrastructure (virtual and physical) for
any failures, especially on the interfaces and the path configured for HA.
vmwNsxMEdgeVmCommFailed
.1.3.6.1.4.1.6876.90.1.2.4.0.4
A notification generated when the NSX Manager failed to communicate with the
Edge VM.
Action required: Investigation is needed depending on the communication
channel. Check the logs for the VIX error code for further action.
vmwNsxMEdgeHaSwitchOverSelf
.1.3.6.1.4.1.6876.90.1.2.4.0.40
A notification generated when High Availability is disabled for NSX Edge.
The primary NSX Edge VM has its state transitioned from ACTIVE to SELF.
High Availability (HA) ensures that NSX Edge services are always available
by deploying an additional Edge VM for failover. The primary NSX Edge VM is
the ACTIVE node and the secondary VM is the STANDBY node. Whenever the
ACTIVE VM is unreachable because the VM is powered off or has network
connectivity issues, the STANDBY VM takes over the ACTIVE VM's role.
When NSX Edge High Availability is disabled, the STANDBY VM is deleted and
the ACTIVE VM continues to function with its ACTIVE state transitioned to
SELF.
Action required: None
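The ACTIVE/STANDBY/SELF transitions described in the three HA switchover traps can be summarized as a tiny state table. This is an illustrative sketch of the behavior described in this section, not NSX code; the event names are made up for the example:

```python
# Illustrative sketch of the Edge HA states described in these traps:
# ACTIVE -> SELF when HA is disabled, STANDBY -> ACTIVE on failover,
# ACTIVE -> STANDBY when one VM steps down after connectivity returns.
ACTIVE, STANDBY, SELF = "ACTIVE", "STANDBY", "SELF"

def next_state(state: str, event: str) -> str:
    transitions = {
        (ACTIVE, "ha_disabled"): SELF,          # standby VM is deleted
        (STANDBY, "peer_unreachable"): ACTIVE,  # standby takes over
        (ACTIVE, "peer_restored"): STANDBY,     # one VM steps down
    }
    return transitions.get((state, event), state)  # otherwise unchanged
```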
vmwNsxMEdgeHaSwitchOverActive
.1.3.6.1.4.1.6876.90.1.2.4.0.41
A notification generated when a High Availability switchover has happened for
NSX Edge. The secondary NSX Edge VM has its state transitioned from STANDBY to
ACTIVE. High Availability (HA) ensures that NSX Edge services are always
available by deploying an additional Edge VM for failover. The primary NSX
Edge VM is the ACTIVE node and the secondary VM is the STANDBY node. Whenever
the ACTIVE VM is unreachable because the VM is powered off or has network
connectivity issues, the STANDBY VM takes over the ACTIVE VM's role.
Action required: None
vmwNsxMEdgeHaSwitchOverStandby
.1.3.6.1.4.1.6876.90.1.2.4.0.42
A notification generated when a High Availability switchover has happened for
NSX Edge. The primary NSX Edge VM has its state transitioned from ACTIVE to
STANDBY. High Availability (HA) ensures that NSX Edge services are always
available by deploying an additional Edge VM for failover. The primary NSX
Edge VM is the ACTIVE node and the secondary VM is the STANDBY node. Whenever
the ACTIVE VM is unreachable because the VM is powered off or has network
connectivity issues, the STANDBY VM takes over the ACTIVE VM's role. When
connectivity is re-established between the NSX Edge VMs, one VM's state is
transitioned from ACTIVE to STANDBY.
Action required: None
vmwNsxMEdgeMonitorProcessFailure
.1.3.6.1.4.1.6876.90.1.2.4.0.43
A notification generated when the Edge process monitor detected a process
failure.
Action required: Collect tech support logs for further analysis.
vmwNsxMLbVirtualServerPoolUp
.1.3.6.1.4.1.6876.90.1.2.4.0.44
A notification generated when the LoadBalancer virtual server/pool is up.
Action required: None
vmwNsxMLbVirtualServerPoolDown
.1.3.6.1.4.1.6876.90.1.2.4.0.45
A notification generated when the LoadBalancer virtual server/pool is down.
vmwNsxMLbVirtualServerPoolWrong
.1.3.6.1.4.1.6876.90.1.2.4.0.46
A notification generated when the LoadBalancer virtual server/pool state is
wrong.
vmwNsxMLbPoolWarning
.1.3.6.1.4.1.6876.90.1.2.4.0.47
A notification generated when the LoadBalancer pool changed to a warning state.
vmwNsxMIpsecChannelUp
.1.3.6.1.4.1.6876.90.1.2.4.0.48
A notification generated when the IPsec channel is up.
Action required: None
vmwNsxMIpsecChannelDown
.1.3.6.1.4.1.6876.90.1.2.4.0.49
A notification generated when the IPsec channel is down.
Action required: Collect tech support logs for further analysis.
vmwNsxMEdgeVmCnfgChanged
.1.3.6.1.4.1.6876.90.1.2.4.0.5
A notification generated when the NSX Edge VM configuration is changed.
Action required: None
vmwNsxMIpsecTunnelUp
.1.3.6.1.4.1.6876.90.1.2.4.0.50
A notification generated when the IPsec tunnel is up.
Action required: None
vmwNsxMIpsecTunnelDown
.1.3.6.1.4.1.6876.90.1.2.4.0.51
A notification generated when the IPsec tunnel is down.
Action required: Collect tech support logs for further analysis.
vmwNsxMIpsecChannelUnknown
.1.3.6.1.4.1.6876.90.1.2.4.0.52
A notification generated when the IPsec channel status is unknown.
Action required: Collect tech support logs for further analysis.
vmwNsxMIpsecTunnelUnknown
.1.3.6.1.4.1.6876.90.1.2.4.0.53
A notification generated when the IPsec tunnel status is unknown.
Action required: Collect tech support logs for further analysis.
vmwNsxMGlobalLbMemberUp
.1.3.6.1.4.1.6876.90.1.2.4.0.54
A notification generated when the Global LoadBalancer member status is up.
Action required: None
vmwNsxMGlobalLbMemberWarning
.1.3.6.1.4.1.6876.90.1.2.4.0.55
A notification generated when the Global LoadBalancer member status is warning.
vmwNsxMGlobalLbMemberDown
.1.3.6.1.4.1.6876.90.1.2.4.0.56
A notification generated when the Global LoadBalancer member status is down.
vmwNsxMGlobalLbMemberUnknown
.1.3.6.1.4.1.6876.90.1.2.4.0.57
A notification generated when the Global LoadBalancer member status is unknown.
vmwNsxMGlobalLbPeerUp
.1.3.6.1.4.1.6876.90.1.2.4.0.58
A notification generated when the Global LoadBalancer peer status is up.
Action required: None
vmwNsxMGlobalLbPeerDown
.1.3.6.1.4.1.6876.90.1.2.4.0.59
A notification generated when the Global LoadBalancer peer status is down.
vmwNsxMEdgeGatewayDeleted
.1.3.6.1.4.1.6876.90.1.2.4.0.6
A notification generated when the Edge Gateway is deleted.
Action required: None
vmwNsxMDhcpServiceDisabled
.1.3.6.1.4.1.6876.90.1.2.4.0.60
A notification generated when the DHCP Relay Service is disabled.
vmwNsxMEdgeResourceReservationFailure
.1.3.6.1.4.1.6876.90.1.2.4.0.61
A notification generated when there are insufficient CPU and/or memory
resources available on the host or resource pool during resource reservation
at the time of NSX Edge deployment. Resources are explicitly reserved to
ensure sufficient resources are available for NSX Edge to service High
Availability. You can compare available resources against reserved resources
by navigating to Home > Hosts and Clusters > [Cluster-name] > Monitor >
Resource Reservation.
Action required: After checking the available resources, re-specify the
resources as part of the appliance configuration so that resource reservation
succeeds.
vmwNsxMEdgeSplitBrainDetected
.1.3.6.1.4.1.6876.90.1.2.4.0.62
A notification generated when split brain is detected for NSX Edge with High
Availability. NSX Edge VMs configured for High Availability are unable to
determine whether the other VM is alive due to network failure. In such a
scenario, both VMs assume the other is not alive and take on the ACTIVE state.
This may cause network disruption.
Action required: Check the network infrastructure (virtual and physical) for
any failures, especially on the interfaces and the path configured for HA.
vmwNsxMEdgeSplitBrainRecovered
.1.3.6.1.4.1.6876.90.1.2.4.0.63
A notification generated on recovery from split brain for NSX Edge with High
Availability. The network path used by the NSX Edge VMs' High Availability has
been re-established. The NSX Edge VMs are able to communicate with each other,
and one of the VMs has taken the STANDBY role, resolving the ACTIVE-ACTIVE
split brain scenario.
Action required: None
vmwNsxMEdgeSplitBrainRecoveryAttempt
.1.3.6.1.4.1.6876.90.1.2.4.0.64
A notification generated on an attempt at split brain resolution for NSX Edge.
Split brain recovery is being attempted on the NSX Edge by NSX Manager.
Action required: None
vmwNsxMEdgeResourceReservationSuccess
.1.3.6.1.4.1.6876.90.1.2.4.0.65
A notification generated when user-specified or system-managed CPU and/or
memory resource reservation for the Edge VM is successful on the cluster or
resource pool, during deployment/redeployment of the NSX Edge or Edge VM
appliance configuration. Resources are explicitly reserved to ensure
sufficient resources are available for NSX Edge to service High Availability.
You can compare available resources against reserved resources by navigating
to Home > Hosts and Clusters > [Cluster-name] > Monitor > Resource Reservation.
Action required: None
vmwNsxMEdgeSddcChannelUp
.1.3.6.1.4.1.6876.90.1.2.4.0.66
A notification generated when the VMCI communication channel to vmcd is up on
the specified NSX Edge.
Action required: None
Frequency: Once when the VMCI communication channel comes up.
URL: Nil
vmwNsxMEdgeSddcChannelDown
.1.3.6.1.4.1.6876.90.1.2.4.0.67
A notification generated when the VMCI communication channel to vmcd is down
on the specified NSX Edge.
Action required: Check the status of the vmcd daemon process.
                 Go to the ESX host and check the vmcd status: /etc/init.d/vmcd status
                 If it is not running, start vmcd: /etc/init.d/vmcd start
Frequency: Once when the VMCI communication channel goes down.
URL: Nil
vmwNsxMEdgeDuplicateIpDetected
.1.3.6.1.4.1.6876.90.1.2.4.0.68
A notification generated when the Edge IP has been assigned to another device
on the network. The MAC address of the conflicting device is provided.
Action required: Change the IP address of the other device on the network.
Frequency: Once when the Edge detects the duplication of its IP by another MAC
address in the same network.
URL: Nil
vmwNsxMEdgeDuplicateIpResolved
.1.3.6.1.4.1.6876.90.1.2.4.0.69
A notification generated when the duplicate IP address issue is resolved.
Action required: None
Frequency: Once when the Edge detects that the duplication is resolved.
URL: Nil
vmwNsxMEdgeGatewayReDeployed
.1.3.6.1.4.1.6876.90.1.2.4.0.7
A notification generated when the Edge Gateway is redeployed.
Action required: None
vmwNsxMEdgeBgpNeighborUp
.1.3.6.1.4.1.6876.90.1.2.4.0.70
A notification generated when a BGP neighbor is up.
The BGP neighbor IP address in the eventSourceId indicates which neighbor this
event is raised for.
Action required: None
vmwNsxMEdgeBgpNeighborDown
.1.3.6.1.4.1.6876.90.1.2.4.0.71
A notification generated when a BGP neighbor is down. Raised once per neighbor.
The BGP neighbor IP address in the eventSourceId indicates which neighbor this
event is raised for.
Action required: None
vmwNsxMEdgeBgpNeighborASMismatch
.1.3.6.1.4.1.6876.90.1.2.4.0.72
A notification generated when there is a mismatch in the configured AS number.
Raised repeatedly while the BGP neighborship is getting established.
The BGP neighbor IP address in the eventSourceId indicates which neighbor this
event is raised for.
Action required: Correct the configured AS number.
vmwNsxMEdgeOSPFNeighborUp
.1.3.6.1.4.1.6876.90.1.2.4.0.73
A notification generated when an OSPF neighbor is up.
The OSPF router ID in the eventSourceId indicates which neighbor this event is
raised for.
Action required: None
vmwNsxMEdgeOSPFNeighborDown
.1.3.6.1.4.1.6876.90.1.2.4.0.74
A notification generated when an OSPF neighbor is down.
The OSPF router ID in the eventSourceId indicates which neighbor this event is
raised for.
Action required: None
vmwNsxMEdgeOSPFNeighborMTUMismatch
.1.3.6.1.4.1.6876.90.1.2.4.0.75
A notification generated when there is an MTU mismatch in a neighborship
request. The OSPF router ID in the eventSourceId indicates which neighbor this
event is raised for.
Action required: Correct the configured MTU.
vmwNsxMEdgeOSPFNeighborAreaIdMisMatch
.1.3.6.1.4.1.6876.90.1.2.4.0.76
A notification generated when there is an areaId mismatch in a neighborship
request. The OSPF router ID in the eventSourceId indicates which neighbor this
event is raised for.
Action required: Correct the configured areaId.
vmwNsxMEdgeOSPFNeighborHelloTimerMisMatch
.1.3.6.1.4.1.6876.90.1.2.4.0.77
A notification generated when there is a hello timer mismatch in a neighborship
request. The OSPF router ID in the eventSourceId indicates which neighbor this
event is raised for.
Action required: Correct the configured hello timer.
vmwNsxMEdgeOSPFNeighborDeadTimerMisMatch
.1.3.6.1.4.1.6876.90.1.2.4.0.78
A notification generated when there is a dead timer mismatch in a neighborship
request. The OSPF router ID in the eventSourceId indicates which neighbor this
event is raised for.
Action required: Correct the configured dead timer.
vmwNsxMEdgeL2vpnTunnelUp
.1.3.6.1.4.1.6876.90.1.2.4.0.79
A notification generated when the L2 VPN tunnel is up.
Action required: None
vmwNsxMEdgeVmPowerOff
.1.3.6.1.4.1.6876.90.1.2.4.0.8
A notification generated when the NSX Edge VM is powered off.
Action required: None
vmwNsxMEdgeL2vpnTunnelDown
.1.3.6.1.4.1.6876.90.1.2.4.0.80
A notification generated when the L2 VPN tunnel is down.
You can debug why the L2 VPN tunnel is down using the following CLI commands:
show service l2vpn
show configuration l2vpn
show service l2vpn bridge
Action required: None.
vmwNsxMEdgeHAForceStandbyRemoved
.1.3.6.1.4.1.6876.90.1.2.4.0.81
A notification generated when the forced standby for the Edge enforced by NSX
Manager is removed.
Action required: None.
vmwNsxMEdgeHAForceStandbyRemovalFailed
.1.3.6.1.4.1.6876.90.1.2.4.0.82
A notification generated when the forced standby removal for the Edge failed.
Action required: None.
vmwNsxMEdgeVmBADStateRecovered
.1.3.6.1.4.1.6876.90.1.2.4.0.83
A notification generated when the Edge VM has recovered from a bad state.
Action required: None.
vmwNsxMEdgeVmBADStateAutoHealRecoveryDisabled
.1.3.6.1.4.1.6876.90.1.2.4.0.84
A notification generated when the VM cannot be recovered from a bad state
after the system-specified number of retries of the auto-recovery ForceSync
operation.
Action required: Trigger a force sync to recover the Edge VM from the bad
state. To trigger a force sync from the UI, see
https://docs.vmware.com/en/VMware-NSX-for-vSphere/6.2/com.vmware.nsx.admin.doc/GUID-21FF2937-4CDF-491C-933E-8F44E21ED55E.html
or call the API (POST https://<nsx-manager>/api/4.0/edges/<edge-id>?action=forcesync).
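The force-sync API call above can be sketched as a small URL builder. The manager host and Edge ID placeholders are assumptions (the extraction dropped them from the URL), and the request itself would be an authenticated POST, e.g. via urllib.request:

```python
# Sketch: build the force-sync REST URL for an Edge. `manager_host` and
# `edge_id` are placeholders; the actual values depend on your deployment.
def forcesync_url(manager_host: str, edge_id: str) -> str:
    return f"https://{manager_host}/api/4.0/edges/{edge_id}?action=forcesync"
```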
vmwNsxMEdgeHaInUseVnicChanged
.1.3.6.1.4.1.6876.90.1.2.4.0.85
A notification generated when the internally allocated vNIC for HA internal
communication is changed because the vNIC configuration has changed.
Action required: None.
vmwNsxMEdgeHaCommConnected
.1.3.6.1.4.1.6876.90.1.2.4.0.86
A notification generated when the Edge High Availability communication channel
is established with the peer node.
Action required: None.
vmwNsxMEdgeVmRenameFailed
.1.3.6.1.4.1.6876.90.1.2.4.0.87
A notification generated when an Edge VM rename operation failed.
The given Edge name has invalid characters or exceeds the maximum number of
characters permitted. There is no functional effect on the Edge. You may
choose a shorter name and rename the Edge.
Action required: None.
vmwNsxMEdgeBgpNeighborshipError
.1.3.6.1.4.1.6876.90.1.2.4.0.88
A notification generated when something goes wrong in neighborship
establishment. The error message will have an error code and sub-error code.
The BGP neighbor IP address in the eventSourceId indicates which neighbor this
event is raised for.
vmwNsxMEventMessage will, for example, contain the following details:
(Error when establishing BGP neighborship with neighbor 10.20.30.40
having AS number 1201 with error code 6, error sub code 5.)
Action required: The error codes given in the message are standard BGP error
codes. Act accordingly.
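The "standard error codes" in this message are the BGP NOTIFICATION codes from RFC 4271 (with Cease subcodes from RFC 4486). A small decoder sketch, covering only a few entries; the function name is illustrative:

```python
# Decode BGP NOTIFICATION error code / subcode pairs (RFC 4271; Cease
# subcodes from RFC 4486). Only a subset of entries is shown here.
BGP_ERROR_CODES = {1: "Message Header Error", 2: "OPEN Message Error",
                   3: "UPDATE Message Error", 4: "Hold Timer Expired",
                   5: "Finite State Machine Error", 6: "Cease"}
CEASE_SUBCODES = {2: "Administrative Shutdown", 4: "Administrative Reset",
                  5: "Connection Rejected"}

def decode_bgp_error(code: int, subcode: int) -> str:
    name = BGP_ERROR_CODES.get(code, f"code {code}")
    if code == 6 and subcode in CEASE_SUBCODES:
        return f"{name}: {CEASE_SUBCODES[subcode]}"
    return f"{name}, subcode {subcode}"
```

The example message above (error code 6, sub code 5) would decode to "Cease: Connection Rejected".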
vmwNsxMEdgeVmBadStateNotRecovered
.1.3.6.1.4.1.6876.90.1.2.4.0.89
A notification generated when the NSX Edge VM failed to recover from a bad
state after a force sync.
Action required: None.
vmwNsxMEdgeApplianceSizeChanged
.1.3.6.1.4.1.6876.90.1.2.4.0.9
A notification generated when the Edge appliance size has changed.
Action required: None
vmwNsxMEdgeVmDcnOutOfSync
.1.3.6.1.4.1.6876.90.1.2.4.0.90
A notification generated when GroupingObject sync timed out on the NSX Edge VM.
Action required: None.
vmwNsxMEdgeConsumedResourcesMissingInInventory
.1.3.6.1.4.1.6876.90.1.2.4.0.91
A notification generated when resources that are missing or deleted from the
inventory are found in use by NSX Edges.
Action required: Reconfigure these NSX Edges to use existing resources. Refer
to the NSX Manager logs for the complete list of missing resources.
vmwNsxMEdgeIpsecDeprecatedComplianceSuiteInUse
.1.3.6.1.4.1.6876.90.1.2.4.0.92
A notification generated when a deprecated compliance suite is used in an
IPsec site in NSX Edges.
Action required: Reconfigure the IPsec site to use a supported compliance
suite.
vmwNsxMEdgeConnectedToMultipleTZHavingSameClusters
.1.3.6.1.4.1.6876.90.1.2.4.0.93
A notification generated when an NSX Distributed Logical Router is connected
to multiple transport zones having the same clusters.
Action required: None.
vmwNsxMEdgeConnectedToMultipleTZHavingDifferentClusters
.1.3.6.1.4.1.6876.90.1.2.4.0.94
A notification generated when an NSX Distributed Logical Router is connected
to multiple transport zones having different clusters. This may cause network
disruption on clusters that are not common to all transport zones.
Action required: Fix the configuration either by reconfiguring the transport
zones to have the same clusters or by connecting the NSX Distributed Logical
Router to a single transport zone.
vmwNsxMEndpointThinAgentEnabled
.1.3.6.1.4.1.6876.90.1.2.5.0.1
A notification generated when the Thin Agent is enabled.
vmwNsxMGuestIntrspctnEnabled
.1.3.6.1.4.1.6876.90.1.2.5.0.2
A notification generated when the Guest Introspection solution is enabled.
vmwNsxMGuestIntrspctnIncompatibleEsx
.1.3.6.1.4.1.6876.90.1.2.5.0.3
A notification generated when the Guest Introspection solution was contacted
by an incompatible version of the ESX module.
vmwNsxMGuestIntrspctnEsxConnFailed
.1.3.6.1.4.1.6876.90.1.2.5.0.4
A notification generated when the connection between the ESX module and the
Guest Introspection solution failed.
vmwNsxMGuestIntrspctnStatusRcvFailed
.1.3.6.1.4.1.6876.90.1.2.5.0.5
A notification generated when receiving status from the Guest Introspection
solution failed.
vmwNsxMEsxModuleEnabled
.1.3.6.1.4.1.6876.90.1.2.5.0.6
A notification generated when the ESX module is enabled.
vmwNsxMEsxModuleUninstalled
.1.3.6.1.4.1.6876.90.1.2.5.0.7
A notification generated when the ESX module is uninstalled.
vmwNsxMGuestIntrspctnHstMxMssngRep
.1.3.6.1.4.1.6876.90.1.2.5.0.8
A notification generated when the Guest Introspection host MUX is missing a
report.
vmwNsxMEndpointUndefined
.1.3.6.1.4.1.6876.90.1.2.5.0.9
A notification generated when the Endpoint is undefined.
vmwNsxMEamGenericAlarm
.1.3.6.1.4.1.6876.90.1.2.6.0.1
EAM reports problems to NSX during VIB/service VM install/upgrade as these
traps.
Action required: Use the resolve API to resolve the alarm.
Frequency of traps: N times per cluster per user action, where N is the number
of hosts in the cluster.
vmwNsxMFabricDplymntStatusChanged
.1.3.6.1.4.1.6876.90.1.2.7.0.1
A notification generated when the deployment status of a service on a cluster
has changed. It can change to RED (failure), GREEN (success), or YELLOW (in
progress).
Action required: A RED state is accompanied by an EAM alarm/event/trap that
indicates the root cause. Use the resolver API to fix it.
Frequency of traps: Once per state change. The state could change 2-3 times
per user operation [Deploy/Undeploy/Update].
vmwNsxMUpgradeOfDplymntFailed
.1.3.6.1.4.1.6876.90.1.2.7.0.10
Upgrade of a deployment unit failed. Check whether the OVF/VIB URLs are
accessible and in the correct format, and that all the properties in the OVF
environment have been configured in the service attributes. Check the logs for
details.
Action required: Ensure that the OVF/VIB URLs are accessible from VC and are
in the correct format. Use the resolve API to resolve the alarm. The service
will be redeployed.
Frequency of traps: Once per cluster per user operation [Upgrade]
vmwNsxMFabricDependenciesNotInstalled
.1.3.6.1.4.1.6876.90.1.2.7.0.11
The service being installed depends on another service that has not yet been
installed.
Action required: Deploy the required service on the cluster.
Frequency of traps: Once per cluster per user operation [Deploy]
vmwNsxMFabricErrorNotifSecBfrUpgrade
.1.3.6.1.4.1.6876.90.1.2.7.0.12
Failure notifying the security solution before upgrade. The solution may not
be reachable or responding.
Action required: Ensure that the solution URLs are accessible from NSX. Use
the resolve API to resolve the alarm. The service will be redeployed.
Frequency of traps: Once per cluster per user operation [Upgrade]
vmwNsxMFabricErrCallbackNtRcvdUpgrade
.1.3.6.1.4.1.6876.90.1.2.7.0.13
Failed to receive a callback from the security solution for the upgrade
notification, even after the timeout.
Action required: Ensure that the solution URLs are accessible from NSX, and
NSX is reachable from the solution. Use the resolve API to resolve the alarm.
The service will be redeployed.
Frequency: Once per cluster per user operation [Upgrade]
vmwNsxMFabricErrCallbackNtRcvdUninstall
.1.3.6.1.4.1.6876.90.1.2.7.0.14
Uninstallation of the service failed.
Action required: Ensure that the solution URLs are accessible from NSX, and
NSX is reachable from the solution. Use the resolve API to resolve the alarm.
The service will be removed.
Frequency of traps: Once per cluster per user operation [Uninstall]
vmwNsxMFabricUninstallServiceFailed
.1.3.6.1.4.1.6876.90.1.2.7.0.15
Failure notifying the security solution before uninstall. Resolve to notify
once again, or delete to uninstall without notification.
Action required: Ensure that the solution URLs are accessible from NSX, and
NSX is reachable from the solution. Use the resolve API to resolve the alarm.
The service will be removed.
Frequency of traps: Once per cluster per user operation [Uninstall]
vmwNsxMFabricErrorNotifSecBfrUninstall
.1.3.6.1.4.1.6876.90.1.2.7.0.16
Failure notifying the security solution before uninstall. Resolve to notify
once again, or delete to uninstall without notification.
Action required: Ensure that the solution URLs are accessible from NSX, and
NSX is reachable from the solution. Use the resolve API to resolve the alarm.
The service will be removed.
Frequency of traps: Once per cluster per user operation [Uninstall]
vmwNsxMFabricServerRebootUninstall
.1.3.6.1.4.1.6876.90.1.2.7.0.17
The server rebooted while the security solution notification for uninstall was
in progress.
Action required: Ensure that the solution URLs are accessible from NSX.
Use the resolve API to resolve the alarm. The service will be uninstalled.
Frequency of traps: Once per cluster per user operation [Uninstall]
vmwNsxMFabricServerRebootUpgrade
.1.3.6.1.4.1.6876.90.1.2.7.0.18
The server rebooted while the security solution notification for upgrade was
in progress.
Action required: Ensure that the solution URLs are accessible from NSX.
Use the resolve API to resolve the alarm. The service will be redeployed.
Frequency of traps: Once per cluster per user operation [Upgrade]
vmwNsxMFabricConnEamFailed
.1.3.6.1.4.1.6876.90.1.2.7.0.19
NSX Manager relies on the ESX Agent Manager (EAM) service in VC for
deploying/monitoring NSX VIBs on ESX. The connection to this EAM service has
gone down. This could be due to an EAM service or VC restart/stop, or an
issue in the EAM service.
Action required: In the NSX UI, go to Manage, then NSX Management Service.
Verify that the status of the VC connection on this page is green. Use the VC
IP to verify that EAM is up by visiting https://<VC-IP>/eam/mob.
Frequency of traps: Once per switch from successful to failed EAM connection
vmwNsxMFabricDplymntUnitCreated
.1.3.6.1.4.1.6876.90.1.2.7.0.2
NSX Manager has created the required objects for deploying a service on a
cluster. This is followed by deployment of the service on all hosts in the
cluster.
Action required: None
Frequency: Once per cluster
vmwNsxMFabricConnEamRestored
.1.3.6.1.4.1.6876.90.1.2.7.0.20
NSX Manager relies on the EAM service in VC for deploying/monitoring NSX VIBs
on ESX. The connection of NSX to this EAM service was re-established
successfully.
Action required: None
Frequency of traps: Once per switch from failed to successful EAM connection
vmwNsxMFabricPreUninstallCleanUpFailed
.1.3.6.1.4.1.6876.90.1.2.7.0.21
Pre-uninstall cleanup failed.
Action required: Use the resolve API to resolve the alarm. The service will be
removed.
Frequency of traps: Once per cluster per user operation [Uninstall]
vmwNsxMFabricBackingEamNotFound
.1.3.6.1.4.1.6876.90.1.2.7.0.22
The backing EAM agency for this deployment could not be found. It is possible
that the VC services may still be initializing. Try to resolve the alarm to
check for the existence of the agency. If you have deleted the agency
manually, delete the deployment entry from NSX.
Action required: Use the resolve API to check for the existence of the agency
if the backing agency exists in EAM; otherwise, delete the deployment entry
from NSX.
Frequency of traps: Once per cluster.
vmwNsxMFabricVibManualInstallationRequired
.1.3.6.1.4.1.6876.90.1.2.7.0.23
This alarm is generated when an attempt is made to upgrade or uninstall NSX
VIBs on a stateless host using EAM. All stateless hosts should be prepared
using the Auto Deploy feature (refer to https://kb.vmware.com/s/article/2005131).
Fix the configuration using the Auto Deploy feature and use the resolve API to
resolve the alarm.
Action required: Use the resolve API to resolve the alarm.
Frequency of traps: Once per host.
vmwNsxMFabricUninstallDeploymentUnit
.1.3.6.1.4.1.6876.90.1.2.7.0.24
The deployment unit fabric state is UNINSTALLED and the health status is
SUCCEEDED. Resolve this alarm to complete the uninstallation.
Frequency of traps: Once per cluster.
vmwNsxMFabricDplymntUnitUpdated
.1.3.6.1.4.1.6876.90.1.2.7.0.3
NSX Manager has made changes to the objects required for deploying a service
on a cluster. This is followed by an update of the service on all hosts in
the cluster.
Action required: None
Frequency of traps: Once per cluster per user operation [Update]
vmwNsxMFabricDplymntUnitDestroyed
.1.3.6.1.4.1.6876.90.1.2.7.0.4
A service has been removed from all hosts in a cluster. NSX Manager has
deleted the objects for the service on the cluster.
Action required: None
Frequency of traps: Once per cluster
vmwNsxMDataStoreNotCnfgrdOnHost
.1.3.6.1.4.1.6876.90.1.2.7.0.5
A datastore could not be configured on the host, probably because it is not
connected.
Action required: Ensure that the datastore is connected to the host. Use the
resolve API to resolve the alarm. The service will be deployed.
Frequency of traps: Once per cluster per user operation [Deploy].
vmwNsxMFabricDplymntInstallationFailed
.1.3.6.1.4.1.6876.90.1.2.7.0.6
Installation of the service failed. Check whether the OVF/VIB URLs are
accessible and in the correct format, and that all the properties in the OVF
environment have been configured in the service attributes. Check the logs
for details.
Action required: Ensure that the OVF/VIB URLs are accessible from VC and are
in the correct format. Use the resolve API to resolve the alarm. The service
will be deployed.
Frequency of traps: Once per cluster per user operation [Deploy].
vmwNsxMFabricAgentCreated
.1.3.6.1.4.1.6876.90.1.2.7.0.7
A service has been successfully installed on a host.
Action required: None
Frequency of traps: N times per cluster, where N is the number of hosts in
the cluster.
vmwNsxMFabricAgentDestroyed
.1.3.6.1.4.1.6876.90.1.2.7.0.8
A service has been successfully removed from a host.
Action required: None
Frequency of traps: N times per cluster, where N is the number of hosts in
the cluster.
vmwNsxMFabricSrvceNeedsRedplymnt
.1.3.6.1.4.1.6876.90.1.2.7.0.9
A service will need to be redeployed because the location of the OVF/VIB
bundles to be deployed has changed.
Action required: Use the resolve API to resolve the alarm. The service will
be redeployed.
Frequency of traps: N times per NSX Manager IP change, where N is the number
of cluster and service combinations deployed.
vmwNsxMDepPluginIpPoolExhausted
.1.3.6.1.4.1.6876.90.1.2.8.0.1
When deploying Guest Introspection or another VM-based service with a
static IP, NSX Manager needs an IP pool from which to assign an IP to
the VM. This pool has been exhausted, and new service VMs cannot be
provisioned.
Action required: Navigate to the Networking & Security page in the
VMware vSphere Web Client, then go to Installation, followed by Service
Deployments. Note the IP pool name for the failed service. Then navigate
to NSX Managers, go to the Manage tab, followed by the Grouping Objects
sub-tab. Click on IP Pools and add more IPs to the static IP pool. Use
the resolve API to resolve the Alarm. The service will be deployed.
Frequency of traps: N times per cluster, where N is the number of hosts
in the cluster.

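The exhaustion condition can be illustrated with a minimal static-pool allocator. This is a hypothetical sketch of the concept, not NSX Manager's implementation: once every address in the pool is handed out, further service-VM provisioning fails until the operator grows the pool, as in the remediation steps above.

```python
import ipaddress

class StaticIpPool:
    """Minimal static IP pool: hands out addresses until exhausted."""

    def __init__(self, start: str, end: str):
        first = int(ipaddress.IPv4Address(start))
        last = int(ipaddress.IPv4Address(end))
        self._free = [str(ipaddress.IPv4Address(i)) for i in range(first, last + 1)]

    def allocate(self) -> str:
        if not self._free:
            # This is the condition that vmwNsxMDepPluginIpPoolExhausted reports.
            raise RuntimeError("IP pool exhausted: cannot provision service VM")
        return self._free.pop(0)

    def add_range(self, start: str, end: str) -> None:
        """Grow the pool, as the operator does under Grouping Objects > IP Pools."""
        first = int(ipaddress.IPv4Address(start))
        last = int(ipaddress.IPv4Address(end))
        self._free.extend(str(ipaddress.IPv4Address(i)) for i in range(first, last + 1))
```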
vmwNsxMDepPluginGenericAlarm
.1.3.6.1.4.1.6876.90.1.2.8.0.2
Deployment plugin generic alarm.
Action required: Use the resolve API to resolve the Alarm. The service
will be deployed.
Frequency of traps: N times per cluster, where N is the number of hosts
in the cluster.

vmwNsxMDepPluginGenericException
.1.3.6.1.4.1.6876.90.1.2.8.0.3
Deployment plugin generic exception alarm.
Action required: Use the resolve API to resolve the Alarm. The service
will be deployed.
Frequency of traps: N times per cluster, where N is the number of hosts
in the cluster.

vmwNsxMDepPluginVmReboot
.1.3.6.1.4.1.6876.90.1.2.8.0.4
Service VM needs to be rebooted for some changes to take effect.
Action required: Use the resolve API to resolve the Alarm.
Frequency of traps: N times per cluster, where N is the number of hosts
in the cluster.

vmwNsxMMessagingConfigFailed
.1.3.6.1.4.1.6876.90.1.2.9.0.1
Notification generated when host messaging configuration failed.

vmwNsxMMessagingReconfigFailed
.1.3.6.1.4.1.6876.90.1.2.9.0.2
Notification generated when host messaging connection reconfiguration
failed.

vmwNsxMMessagingConfigFailedNotifSkip
.1.3.6.1.4.1.6876.90.1.2.9.0.3
Notification generated when host messaging configuration failed and
notifications were skipped.

vmwNsxMMessagingInfraUp
.1.3.6.1.4.1.6876.90.1.2.9.0.4
NSX Manager maintains a heartbeat with all hosts it manages. Missing
heartbeat responses from a host indicate a communication issue between
the manager and the host. Such instances are indicated by event code
391002. When communication is restored after such an instance, it is
indicated by this event/trap.
Action required: Refer to KB article http://kb.vmware.com/kb/2133897
Frequency of traps: Will be seen within 3 minutes of communication being
restored between the manager and a host.
URL: http://kb.vmware.com/kb/2133897

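The up/down behavior described by the two heartbeat traps can be sketched as a threshold check on the last heartbeat seen from each host. This is an illustration of the documented behavior, not NSX Manager code; the 6-minute threshold is taken from the vmwNsxMMessagingInfraDown entry, which states the trap is seen within 6 minutes of a failure.

```python
from typing import Optional

# vmwNsxMMessagingInfraDown is documented to fire within 6 minutes of a
# communication failure, so 6 minutes is used here as the down threshold.
HEARTBEAT_DOWN_AFTER_SECONDS = 6 * 60

def host_is_down(last_heartbeat: float, now: float) -> bool:
    """True once no heartbeat has been seen for longer than the threshold."""
    return (now - last_heartbeat) > HEARTBEAT_DOWN_AFTER_SECONDS

def transition(prev_down: bool, last_heartbeat: float, now: float) -> Optional[str]:
    """Return the trap name this state transition would correspond to, if any."""
    down = host_is_down(last_heartbeat, now)
    if down and not prev_down:
        return "vmwNsxMMessagingInfraDown"
    if not down and prev_down:
        return "vmwNsxMMessagingInfraUp"
    return None
```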
vmwNsxMMessagingInfraDown
.1.3.6.1.4.1.6876.90.1.2.9.0.5
NSX Manager maintains a heartbeat with all hosts it manages. Missing
heartbeat responses from a host indicate a communication issue between
the manager and the host. In the case of such a communication issue,
this trap will be sent.
Action required: Refer to KB article http://kb.vmware.com/kb/2133897
Frequency of traps: Will be seen within 6 minutes of a communication
failure between the manager and a host.
URL: http://kb.vmware.com/kb/2133897

vmwNsxMMessagingDisabled
.1.3.6.1.4.1.6876.90.1.2.9.0.6
A messaging client, such as a Host, an Edge appliance, or a USVM
appliance, is expected to change its password within 2 hours of being
prepped or deployed. If the password isn't changed in this duration, the
messaging account for the client is disabled.
Action required: This event indicates a communication issue between the
manager and the client. Verify that the client is running. If it is
running, re-sync messaging in the case of a Host; redeploy in the case
of an Edge or a USVM.
Frequency of traps: Will be seen 2 hours after host prep, host re-sync,
or deployment of an appliance.
URL: http://kb.vmware.com/kb/2133897