Testing configuration for an action (emitted when testing an action in the UI)
versionmismatch
.1.3.6.1.4.1.7146.1.2.15.0.10
Configuration update refused: traffic manager version mismatch
sipstreamnoports
.1.3.6.1.4.1.7146.1.2.15.0.100
No suitable ports available for streaming data connection
rtspstreamnoports
.1.3.6.1.4.1.7146.1.2.15.0.101
No suitable ports available for streaming data connection
geodataloadfail
.1.3.6.1.4.1.7146.1.2.15.0.102
Failed to load geolocation data
poolpersistencemismatch
.1.3.6.1.4.1.7146.1.2.15.0.103
Pool uses a session persistence class that does not work with this virtual server's protocol
connerror
.1.3.6.1.4.1.7146.1.2.15.0.104
A protocol error has occurred
connfail
.1.3.6.1.4.1.7146.1.2.15.0.105
A socket connection failure has occurred
badcontentlen
.1.3.6.1.4.1.7146.1.2.15.0.106
HTTP response contained an invalid Content-Length header
activatealldead
.1.3.6.1.4.1.7146.1.2.15.0.107
Activating this machine automatically because it is the only working machine in its Traffic IP Groups
machinerecovered
.1.3.6.1.4.1.7146.1.2.15.0.108
Remote machine has recovered and can raise Traffic IP addresses
flipperrecovered
.1.3.6.1.4.1.7146.1.2.15.0.109
Machine is ready to raise Traffic IP addresses
machineok
.1.3.6.1.4.1.7146.1.2.15.0.11
Remote machine is now working
activatedautomatically
.1.3.6.1.4.1.7146.1.2.15.0.110
Machine has recovered and been activated automatically because it would cause no service disruption
zclustermoderr
.1.3.6.1.4.1.7146.1.2.15.0.111
An error occurred when using the zcluster Multi-Hosted IP kernel module
ec2flipperraiselocalworking
.1.3.6.1.4.1.7146.1.2.15.0.112
Moving EC2 IP Address; local machine is working
ec2flipperraiseothersdead
.1.3.6.1.4.1.7146.1.2.15.0.113
Moving EC2 IP Address; other machines have failed
autherror
.1.3.6.1.4.1.7146.1.2.15.0.114
An error occurred during user authentication
logfiledeleted
.1.3.6.1.4.1.7146.1.2.15.0.115
A virtual server request log file was deleted (appliances only)
license-graceperiodexpired
.1.3.6.1.4.1.7146.1.2.15.0.116
Unable to authorize license key
license-authorized
.1.3.6.1.4.1.7146.1.2.15.0.117
License key authorized
license-rejected-authorized
.1.3.6.1.4.1.7146.1.2.15.0.118
License server rejected license key; key remains authorized
license-rejected-unauthorized
.1.3.6.1.4.1.7146.1.2.15.0.119
License server rejected license key; key is not authorized
machinetimeout
.1.3.6.1.4.1.7146.1.2.15.0.12
Remote machine has timed out and been marked as failed
license-timedout-authorized
.1.3.6.1.4.1.7146.1.2.15.0.120
Unable to contact license server; license key remains authorized
license-timedout-unauthorized
.1.3.6.1.4.1.7146.1.2.15.0.121
Unable to contact license server; license key is not authorized
license-unauthorized
.1.3.6.1.4.1.7146.1.2.15.0.122
License key is not authorized
cachesizereduced
.1.3.6.1.4.1.7146.1.2.15.0.123
Configured cache size exceeds the license limit; only the amount allowed by the license is used
morememallowed
.1.3.6.1.4.1.7146.1.2.15.0.124
License allows more memory for caching
lessmemallowed
.1.3.6.1.4.1.7146.1.2.15.0.125
License allows less memory for caching
usedcredsdeleted
.1.3.6.1.4.1.7146.1.2.15.0.126
A Cloud Credentials object has been deleted but it was still in use
apistatusprocesshanging
.1.3.6.1.4.1.7146.1.2.15.0.127
A cloud API process querying changes to cloud instances is hanging
autonodedestroyed
.1.3.6.1.4.1.7146.1.2.15.0.128
A cloud API call to destroy a node has been started
autoscalestatusupdateerror
.1.3.6.1.4.1.7146.1.2.15.0.129
An API call made by the autoscaler process has reported an error
machinefail
.1.3.6.1.4.1.7146.1.2.15.0.13
Remote machine has failed
ec2iperr
.1.3.6.1.4.1.7146.1.2.15.0.130
Problem occurred when managing an EC2 IP address
dropec2ipwarn
.1.3.6.1.4.1.7146.1.2.15.0.131
Removing EC2 IP Address from all machines; it is no longer a part of any Traffic IP Groups
ec2nopublicip
.1.3.6.1.4.1.7146.1.2.15.0.132
Cannot raise Elastic IP on this machine until EC2 provides it with a public IP address
multihostload
.1.3.6.1.4.1.7146.1.2.15.0.133
The amount of load handled by the local machine destined for this Traffic IP has changed
tpslimited
.1.3.6.1.4.1.7146.1.2.15.0.134
License key transactions-per-second limit has been hit
ssltpslimited
.1.3.6.1.4.1.7146.1.2.15.0.135
License key SSL transactions-per-second limit has been hit
bwlimited
.1.3.6.1.4.1.7146.1.2.15.0.136
License key bandwidth limit has been hit
licensetoomanylocations
.1.3.6.1.4.1.7146.1.2.15.0.137
A location has been disabled because you have exceeded the license limit
autonodedestructioncomplete
.1.3.6.1.4.1.7146.1.2.15.0.138
The destruction of a node in an autoscaled pool is now complete
autonodeexisted
.1.3.6.1.4.1.7146.1.2.15.0.139
IP address of a newly created instance already existed in the pool's node list
allmachinesok
.1.3.6.1.4.1.7146.1.2.15.0.14
All machines are working
autoscaledpooltoosmall
.1.3.6.1.4.1.7146.1.2.15.0.140
Autoscaled pool is below its minimum size and will grow
autoscaleinvalidargforcreatenode
.1.3.6.1.4.1.7146.1.2.15.0.141
The 'imageid' was empty when attempting to create a node in an autoscaled pool
autonodedisappeared
.1.3.6.1.4.1.7146.1.2.15.0.142
A node in an autoscaled pool has disappeared from the cloud
autoscaledpoolrefractory
.1.3.6.1.4.1.7146.1.2.15.0.143
An autoscaled pool is now refractory
cannotshrinkemptypool
.1.3.6.1.4.1.7146.1.2.15.0.144
Attempt to scale down a pool that only had pending nodes or none at all
autoscalinghysteresiscantgrow
.1.3.6.1.4.1.7146.1.2.15.0.145
An autoscaled pool is waiting to grow
autonodecreationcomplete
.1.3.6.1.4.1.7146.1.2.15.0.146
The creation of a new node requested by an autoscaled pool is now complete
autonodestatuschange
.1.3.6.1.4.1.7146.1.2.15.0.147
The status of a node in an autoscaled pool has changed
autoscalinghysteresiscantshrink
.1.3.6.1.4.1.7146.1.2.15.0.148
An autoscaled pool is waiting to shrink
autoscalingpoolstatechange
.1.3.6.1.4.1.7146.1.2.15.0.149
An autoscaled pool's state has changed
flipperbackendsworking
.1.3.6.1.4.1.7146.1.2.15.0.15
Back-end nodes are now working
glbmissingips
.1.3.6.1.4.1.7146.1.2.15.0.150
A DNS Query returned IP addresses that are not configured in any location
glbnolocations
.1.3.6.1.4.1.7146.1.2.15.0.151
No valid location could be chosen for Global Load Balancing
locationmonitorok
.1.3.6.1.4.1.7146.1.2.15.0.152
A monitor has indicated this location is now working
locationmonitorfail
.1.3.6.1.4.1.7146.1.2.15.0.153
A monitor has detected a failure in this location
locationok
.1.3.6.1.4.1.7146.1.2.15.0.154
Location is now healthy for GLB Service
locationfail
.1.3.6.1.4.1.7146.1.2.15.0.155
Location has failed for GLB Service
locationsoapok
.1.3.6.1.4.1.7146.1.2.15.0.156
An external SOAP agent indicates this location is now working
locationsoapfail
.1.3.6.1.4.1.7146.1.2.15.0.157
An external SOAP agent has detected a failure in this location
glbdeadlocmissingips
.1.3.6.1.4.1.7146.1.2.15.0.158
A DNS Query returned IP addresses that are not configured for any location that is currently alive
autoscaleresponseparseerror
.1.3.6.1.4.1.7146.1.2.15.0.159
An API call made by the autoscaler process has returned a response that could not be parsed
flipperfrontendsworking
.1.3.6.1.4.1.7146.1.2.15.0.16
Frontend machines are now working
glbnewmaster
.1.3.6.1.4.1.7146.1.2.15.0.160
A location has been set as active for a GLB service
glblogwritefail
.1.3.6.1.4.1.7146.1.2.15.0.161
Failed to write log file for GLB service
glbfailalter
.1.3.6.1.4.1.7146.1.2.15.0.162
Failed to alter DNS packet for global load balancing
autoscalednodecontested
.1.3.6.1.4.1.7146.1.2.15.0.163
Two pools are trying to use the same instance
autoscalepoolconfupdate
.1.3.6.1.4.1.7146.1.2.15.0.164
A pool config file has been updated by the autoscaler process
autonodecreationstarted
.1.3.6.1.4.1.7146.1.2.15.0.165
Creation of new node instigated
autoscaleinvalidargfordeletenode
.1.3.6.1.4.1.7146.1.2.15.0.166
The 'unique id' was empty when attempting to destroy a node in an autoscaled pool
autoscalinghitroof
.1.3.6.1.4.1.7146.1.2.15.0.167
Autoscaled pool has reached its maximum size and cannot grow further
autoscalinghitfloor
.1.3.6.1.4.1.7146.1.2.15.0.168
Autoscaled pool has reached its minimum size and cannot shrink further
apichangeprocesshanging
.1.3.6.1.4.1.7146.1.2.15.0.169
API change process still running after refractory period is over
pingbackendfail
.1.3.6.1.4.1.7146.1.2.15.0.17
Failed to ping back-end nodes
autoscaledpooltoobig
.1.3.6.1.4.1.7146.1.2.15.0.170
Autoscaled pool is over its maximum size and will shrink
autoscalingprocesstimedout
.1.3.6.1.4.1.7146.1.2.15.0.171
A cloud API process has timed out
autoscalingdisabled
.1.3.6.1.4.1.7146.1.2.15.0.172
Autoscaling for a pool has been disabled due to errors communicating with the cloud API
locmovemachine
.1.3.6.1.4.1.7146.1.2.15.0.173
Machine now in location
locempty
.1.3.6.1.4.1.7146.1.2.15.0.174
Location no longer contains any machines
autoscalinglicenseerror
.1.3.6.1.4.1.7146.1.2.15.0.175
Autoscaling is not permitted by the license key
autoscalinglicenseenabled
.1.3.6.1.4.1.7146.1.2.15.0.176
Autoscaling support has been enabled
autoscalinglicensedisabled
.1.3.6.1.4.1.7146.1.2.15.0.177
Autoscaling support has been disabled
confreptimeout
.1.3.6.1.4.1.7146.1.2.15.0.178
Replication of configuration has timed out
confrepfailed
.1.3.6.1.4.1.7146.1.2.15.0.179
Replication of configuration has failed
pingfrontendfail
.1.3.6.1.4.1.7146.1.2.15.0.18
Failed to ping any of the machines used to check the front-end connectivity
analyticslicenseenabled
.1.3.6.1.4.1.7146.1.2.15.0.180
Realtime Analytics support has been enabled
analyticslicensedisabled
.1.3.6.1.4.1.7146.1.2.15.0.181
Realtime Analytics support has been disabled
autoscalingchangeprocessfailure
.1.3.6.1.4.1.7146.1.2.15.0.182
An API process that should have created or destroyed a node has failed to produce the expected result
autoscalewrongimageid
.1.3.6.1.4.1.7146.1.2.15.0.183
A node created by the autoscaler has the wrong imageid
autoscalewrongname
.1.3.6.1.4.1.7146.1.2.15.0.184
A node created by the autoscaler has a non-matching name
autoscalewrongsizeid
.1.3.6.1.4.1.7146.1.2.15.0.185
A node created by the autoscaler has the wrong sizeid
logdiskoverload
.1.3.6.1.4.1.7146.1.2.15.0.186
Log disk partition usage has exceeded threshold
logdiskfull
.1.3.6.1.4.1.7146.1.2.15.0.187
Log disk partition full
autoscalingresuscitatepool
.1.3.6.1.4.1.7146.1.2.15.0.188
An autoscaled pool has failed completely
zxtmhighload
.1.3.6.1.4.1.7146.1.2.15.0.189
The number of simultaneously active connections has reached a level that the software cannot process in due time; there is a high risk of connections timing out
pinggwfail
.1.3.6.1.4.1.7146.1.2.15.0.19
Failed to ping default gateway
glbservicedied
.1.3.6.1.4.1.7146.1.2.15.0.190
GLB Service has failed
glbserviceok
.1.3.6.1.4.1.7146.1.2.15.0.191
GLB Service has recovered
license-rejected-unauthorized-ts
.1.3.6.1.4.1.7146.1.2.15.0.192
License key rejected by authorization code
license-authorized-ts
.1.3.6.1.4.1.7146.1.2.15.0.193
License key authorized by authorization code
license-rejected-authorized-ts
.1.3.6.1.4.1.7146.1.2.15.0.194
License key rejected by authorization code; key remains authorized
license-timedout-authorized-ts
.1.3.6.1.4.1.7146.1.2.15.0.195
Unable to run authorization code to completion; key remains valid
license-timedout-unauthorized-ts
.1.3.6.1.4.1.7146.1.2.15.0.196
Unable to run authorization code to completion
license-graceperiodexpired-ts
.1.3.6.1.4.1.7146.1.2.15.0.197
Unable to authorize license key
flipperraiseremotedropped
.1.3.6.1.4.1.7146.1.2.15.0.198
This Traffic Manager has re-raised traffic IP addresses as the remote machine which was hosting them has dropped them
sslrehandshakemininterval
.1.3.6.1.4.1.7146.1.2.15.0.199
SSL re-handshake requests have exceeded the frequency permitted by configuration
running
.1.3.6.1.4.1.7146.1.2.15.0.2
Software is running
statebaddata
.1.3.6.1.4.1.7146.1.2.15.0.20
Received an invalid response from another cluster member
sslhandshakemsgsizelimit
.1.3.6.1.4.1.7146.1.2.15.0.200
SSL handshake messages have exceeded the size permitted by configuration
sslcrltoobig
.1.3.6.1.4.1.7146.1.2.15.0.201
CRL does not fit in the configured amount of shared memory; increase ssl!crl_mem!size and restart the software
numpools-exceeded
.1.3.6.1.4.1.7146.1.2.15.0.202
Total number of pools exceeded the maximum limit
numlocations-exceeded
.1.3.6.1.4.1.7146.1.2.15.0.203
Total number of locations exceeded the maximum limit
numtipg-exceeded
.1.3.6.1.4.1.7146.1.2.15.0.204
Total number of traffic IP groups exceeded the maximum limit
numnodes-exceeded
.1.3.6.1.4.1.7146.1.2.15.0.205
Total number of nodes exceeded the maximum number of nodes that can be monitored
ec2nosecondaryprivateip
.1.3.6.1.4.1.7146.1.2.15.0.206
Cannot raise Elastic IP on this machine as no suitable secondary IP is available on the allowed network card(s)
ec2vpceipassocerr
.1.3.6.1.4.1.7146.1.2.15.0.207
Problem occurred while getting a list of private IP addresses and their EIP associations
ec2vpciderr
.1.3.6.1.4.1.7146.1.2.15.0.208
Problem occurred while getting VPCID
license-explicitlydisabled-ts
.1.3.6.1.4.1.7146.1.2.15.0.209
License key explicitly disabled by authorization code
stateconnfail
.1.3.6.1.4.1.7146.1.2.15.0.21
Failed to connect to another cluster member for state sharing
rulestreamerrortoomuch
.1.3.6.1.4.1.7146.1.2.15.0.210
Rule supplied too much data in HTTP stream
rulestreamerrornotenough
.1.3.6.1.4.1.7146.1.2.15.0.211
Rule did not supply enough data in HTTP stream
rulestreamerrorprocessfailure
.1.3.6.1.4.1.7146.1.2.15.0.212
Data supplied to HTTP stream could not be processed
rulestreamerrornotstarted
.1.3.6.1.4.1.7146.1.2.15.0.213
Attempt to stream data or finish a stream before streaming had been initialized
rulestreamerrornotfinished
.1.3.6.1.4.1.7146.1.2.15.0.214
Attempt to initialize HTTP stream before previous stream had finished
rulestreamerrorinternal
.1.3.6.1.4.1.7146.1.2.15.0.215
Internal error while processing HTTP stream
rulestreamerrorgetresponse
.1.3.6.1.4.1.7146.1.2.15.0.216
Attempt to use http.getResponse or http.getResponseBody after http.stream.startResponse
rulesinvalidrequestbody
.1.3.6.1.4.1.7146.1.2.15.0.217
Client sent invalid HTTP request body
serviceruleabort
.1.3.6.1.4.1.7146.1.2.15.0.218
GLB service rule aborted during execution
servicerulelocunknown
.1.3.6.1.4.1.7146.1.2.15.0.219
GLB service rule specified an unknown location
stateok
.1.3.6.1.4.1.7146.1.2.15.0.22
Successfully connected to another cluster member for state sharing
servicerulelocnotconfigured
.1.3.6.1.4.1.7146.1.2.15.0.220
GLB service rule specified a location that is not configured for the service
servicerulelocdead
.1.3.6.1.4.1.7146.1.2.15.0.221
GLB service rule specified a location that has either failed or been marked as draining in the service configuration
aptimizeuseunknownprofile
.1.3.6.1.4.1.7146.1.2.15.0.222
Rule selected an unknown Web Accelerator profile
aptimizedisabled
.1.3.6.1.4.1.7146.1.2.15.0.223
Rule attempted to use Web Accelerator but it is not enabled
aptimizeuseunknownscope
.1.3.6.1.4.1.7146.1.2.15.0.224
Rule selected an unknown Web Accelerator scope
childcommsfail
.1.3.6.1.4.1.7146.1.2.15.0.225
There was an error communicating with a child process
childhung
.1.3.6.1.4.1.7146.1.2.15.0.226
The child process did not respond within the configured time
childkilled
.1.3.6.1.4.1.7146.1.2.15.0.227
The child process has been killed because it did not respond to control requests within the configured time
datalocalstorefull
.1.3.6.1.4.1.7146.1.2.15.0.228
data.local.set() has run out of space
fipsfailinit
.1.3.6.1.4.1.7146.1.2.15.0.229
FIPS 140-2 cryptographic module initialization failed
statereadfail
.1.3.6.1.4.1.7146.1.2.15.0.23
Reading state data from another cluster member failed
fipsfailops
.1.3.6.1.4.1.7146.1.2.15.0.230
FIPS 140-2 cryptographic module operations failed
clocknotmonotonic
.1.3.6.1.4.1.7146.1.2.15.0.231
The monotonic system clock went backwards
clockjump
.1.3.6.1.4.1.7146.1.2.15.0.232
The system clock jumped forwards or backwards by more than one second
rebootrequired
.1.3.6.1.4.1.7146.1.2.15.0.233
Machine must be rebooted to apply configuration changes
ocspstaplingfail
.1.3.6.1.4.1.7146.1.2.15.0.234
OCSP request (for OCSP stapling) failed
ocspstaplingnomem
.1.3.6.1.4.1.7146.1.2.15.0.235
Insufficient memory for OCSP stapling
appliance
.1.3.6.1.4.1.7146.1.2.15.0.236
Appliance notification
pingsendfail
.1.3.6.1.4.1.7146.1.2.15.0.237
Failed to send ping packets
autonodenopublicip
.1.3.6.1.4.1.7146.1.2.15.0.238
Node has no public IP address
ocspstaplingrevoked
.1.3.6.1.4.1.7146.1.2.15.0.239
An OCSP request (for OCSP stapling) reported that a certificate was revoked
statetimeout
.1.3.6.1.4.1.7146.1.2.15.0.24
Timeout while sending state data to another cluster member
ocspstaplingunknown
.1.3.6.1.4.1.7146.1.2.15.0.240
An OCSP request (for OCSP stapling) reported that a certificate was unknown
ocspstaplingunrevoked
.1.3.6.1.4.1.7146.1.2.15.0.241
An old but good OCSP response was returned for a revoked certificate
ruleoverrun
.1.3.6.1.4.1.7146.1.2.15.0.242
Rule exceeded execution time warning threshold
appfirewallcontrolstarted
.1.3.6.1.4.1.7146.1.2.15.0.243
Application firewall started
autonoderemoved
.1.3.6.1.4.1.7146.1.2.15.0.244
A node in a DNS-derived autoscaled pool has been removed
routingswoperational
.1.3.6.1.4.1.7146.1.2.15.0.245
Routing software is now operational
routingswfailurelimitreached
.1.3.6.1.4.1.7146.1.2.15.0.246
Routing software has failed and reached its failure limit
routingswfailed
.1.3.6.1.4.1.7146.1.2.15.0.247
Routing software had a major failure and will be restarted
routingswstartfailed
.1.3.6.1.4.1.7146.1.2.15.0.248
Routing software failed to start
appfirewallcontrolstopped
.1.3.6.1.4.1.7146.1.2.15.0.249
Application firewall stopped
stateunexpected
.1.3.6.1.4.1.7146.1.2.15.0.25
Received unexpected state data from another cluster member
appfirewallcontrolrestarted
.1.3.6.1.4.1.7146.1.2.15.0.250
Application firewall restarted
appfirewallcontroltimeout
.1.3.6.1.4.1.7146.1.2.15.0.251
Application firewall control command timed out
appfirewallcontrolerror
.1.3.6.1.4.1.7146.1.2.15.0.252
Application firewall control command failed
ospfneighborsok
.1.3.6.1.4.1.7146.1.2.15.0.253
All monitored OSPF neighbors are peered
ospfneighborsdegraded
.1.3.6.1.4.1.7146.1.2.15.0.254
Some of the monitored OSPF neighbors are not peered
ospfneighborsfailed
.1.3.6.1.4.1.7146.1.2.15.0.255
None of the monitored OSPF neighbors are peered
nameserverunavailable
.1.3.6.1.4.1.7146.1.2.15.0.256
DNS-derived Autoscaling will not update, as the DNS server is unavailable
nameserveravailable
.1.3.6.1.4.1.7146.1.2.15.0.257
DNS-derived Autoscaling will resume updating, as the DNS server is now responding
autoscaleresolvefailure
.1.3.6.1.4.1.7146.1.2.15.0.258
A hostname used for DNS-derived Autoscaling doesn't resolve
glbtoomanylocations
.1.3.6.1.4.1.7146.1.2.15.0.259
There are too many data centers configured; the Global Load Balancing feature is not guaranteed to work reliably with more than 255 data centers
statewritefail
.1.3.6.1.4.1.7146.1.2.15.0.26
Writing state data to another cluster member failed
dnszonevalidate
.1.3.6.1.4.1.7146.1.2.15.0.260
The built-in DNS server has failed to validate a DNS zone file
dnszonecreaterecord
.1.3.6.1.4.1.7146.1.2.15.0.261
The built-in DNS server has failed to create a DNS record
dnszoneparsechild
.1.3.6.1.4.1.7146.1.2.15.0.262
The built-in DNS server has failed to parse a DNS zone file
dnserroraddzone
.1.3.6.1.4.1.7146.1.2.15.0.263
The built-in DNS server has failed to add a DNS zone
dnsaddzone
.1.3.6.1.4.1.7146.1.2.15.0.264
The built-in DNS server has successfully added a DNS zone
dnszoneparse
.1.3.6.1.4.1.7146.1.2.15.0.265
The built-in DNS server has failed to parse a DNS zone file
ec2dataretrievalfailed
.1.3.6.1.4.1.7146.1.2.15.0.266
Traffic manager failed to get the required data from Amazon servers
ec2dataretrievalsuccessful
.1.3.6.1.4.1.7146.1.2.15.0.267
Traffic manager has now successfully retrieved the required data from Amazon servers
dnszonedelete
.1.3.6.1.4.1.7146.1.2.15.0.268
DNS zone has been deleted
dnserrordeletezone
.1.3.6.1.4.1.7146.1.2.15.0.269
The built-in DNS server has failed to delete a DNS zone
sslhwfail
.1.3.6.1.4.1.7146.1.2.15.0.27
SSL hardware support failed
dnssecexpired
.1.3.6.1.4.1.7146.1.2.15.0.270
DNSSEC zone contains expired signatures
dnssecexpires
.1.3.6.1.4.1.7146.1.2.15.0.271
DNSSEC zone contains signatures that are about to expire
glbactivedcmismatch
.1.3.6.1.4.1.7146.1.2.15.0.272
Active datacenter mismatch among cluster members
locationdraining
.1.3.6.1.4.1.7146.1.2.15.0.273
Location is being drained for GLB Service
locationnotdraining
.1.3.6.1.4.1.7146.1.2.15.0.274
Location is not being drained for GLB Service
locationdisabled
.1.3.6.1.4.1.7146.1.2.15.0.275
Location has been disabled for GLB Service
locationenabled
.1.3.6.1.4.1.7146.1.2.15.0.276
Location has just been enabled for GLB Service
locationunavailable
.1.3.6.1.4.1.7146.1.2.15.0.277
Location has become unavailable for GLB Service
locationavailable
.1.3.6.1.4.1.7146.1.2.15.0.278
Location is now available for GLB Service
glbmanualfailback
.1.3.6.1.4.1.7146.1.2.15.0.279
Manual failback triggered
sslhwrestart
.1.3.6.1.4.1.7146.1.2.15.0.28
SSL hardware support restarted
nodedrainingtodelete
.1.3.6.1.4.1.7146.1.2.15.0.280
Removed node is in use and will be drained
nodedrainingtodeletetimeout
.1.3.6.1.4.1.7146.1.2.15.0.281
Draining to delete period for node has expired
bgpneighborsok
.1.3.6.1.4.1.7146.1.2.15.0.282
There are established sessions with all BGP neighbors
bgpneighborsdegraded
.1.3.6.1.4.1.7146.1.2.15.0.283
Some of the BGP neighbors do not have established sessions
bgpneighborsfailed
.1.3.6.1.4.1.7146.1.2.15.0.284
None of the BGP neighbors have an established session
bgpnoneighbors
.1.3.6.1.4.1.7146.1.2.15.0.285
There are no valid BGP neighbors defined
zxtmcpustarvation
.1.3.6.1.4.1.7146.1.2.15.0.286
The number of simultaneously active connections has reached a level that the software cannot process in due time because of CPU starvation; there is a high risk of connections timing out
ec2initialized
.1.3.6.1.4.1.7146.1.2.15.0.287
The EC2 instance is now initialized
upgradereboot
.1.3.6.1.4.1.7146.1.2.15.0.288
Virtual Traffic Manager Appliance reboot required
sysctlreboot
.1.3.6.1.4.1.7146.1.2.15.0.289
Virtual Traffic Manager Appliance reboot required
sslhwstart
.1.3.6.1.4.1.7146.1.2.15.0.29
SSL hardware support started
upgraderestart
.1.3.6.1.4.1.7146.1.2.15.0.290
Virtual Traffic Manager software restart required
unspecifiedreboot
.1.3.6.1.4.1.7146.1.2.15.0.291
Virtual Traffic Manager restart/reboot required
gcedataretrievalfailed
.1.3.6.1.4.1.7146.1.2.15.0.292
Traffic manager failed to get the required data from the GCE instance
gcedataretrievalsuccessful
.1.3.6.1.4.1.7146.1.2.15.0.293
Traffic manager has now successfully retrieved the required data from the GCE instance
autofailbacktimerstarted
.1.3.6.1.4.1.7146.1.2.15.0.294
Auto-failback wait period started
autofailbacktimerstopped
.1.3.6.1.4.1.7146.1.2.15.0.295
Auto-failback delay timer stopped due to system failure
autofailbackafterdelay
.1.3.6.1.4.1.7146.1.2.15.0.296
Automatic failback after delay
autofailbacktimercancelled
.1.3.6.1.4.1.7146.1.2.15.0.297
Auto-failback delay timer cancelled
fewfreefds
.1.3.6.1.4.1.7146.1.2.15.0.3
Running out of free file descriptors
confdel
.1.3.6.1.4.1.7146.1.2.15.0.30
Configuration file deleted
confmod
.1.3.6.1.4.1.7146.1.2.15.0.31
Configuration file modified
confadd
.1.3.6.1.4.1.7146.1.2.15.0.32
Configuration file added
confok
.1.3.6.1.4.1.7146.1.2.15.0.33
Configuration file now OK
javadied
.1.3.6.1.4.1.7146.1.2.15.0.34
Java runner died
javastop
.1.3.6.1.4.1.7146.1.2.15.0.35
Java support has stopped
javastartfail
.1.3.6.1.4.1.7146.1.2.15.0.36
Java runner failed to start
javaterminatefail
.1.3.6.1.4.1.7146.1.2.15.0.37
Java runner failed to terminate
javanotfound
.1.3.6.1.4.1.7146.1.2.15.0.38
Cannot start Java runner, program not found
javastarted
.1.3.6.1.4.1.7146.1.2.15.0.39
Java runner started
restartrequired
.1.3.6.1.4.1.7146.1.2.15.0.4
Software must be restarted to apply configuration changes
servleterror
.1.3.6.1.4.1.7146.1.2.15.0.40
Servlet encountered an error
monitorfail
.1.3.6.1.4.1.7146.1.2.15.0.41
Monitor has detected a failure
monitorok
.1.3.6.1.4.1.7146.1.2.15.0.42
Monitor is working
rulexmlerr
.1.3.6.1.4.1.7146.1.2.15.0.43
Rule encountered an XML error
pooluseunknown
.1.3.6.1.4.1.7146.1.2.15.0.44
Rule selected an unknown pool
ruleabort
.1.3.6.1.4.1.7146.1.2.15.0.45
Rule aborted during execution
rulebufferlarge
.1.3.6.1.4.1.7146.1.2.15.0.46
Rule has buffered more data than expected
rulebodycomperror
.1.3.6.1.4.1.7146.1.2.15.0.47
Rule encountered invalid data while uncompressing response
forwardproxybadhost
.1.3.6.1.4.1.7146.1.2.15.0.48
Rule selected an unresolvable host
invalidemit
.1.3.6.1.4.1.7146.1.2.15.0.49
Rule used event.emit() with an invalid custom event
timemovedback
.1.3.6.1.4.1.7146.1.2.15.0.5
Time has been moved back
rulenopersistence
.1.3.6.1.4.1.7146.1.2.15.0.50
Rule selected an unknown session persistence class
rulelogmsginfo
.1.3.6.1.4.1.7146.1.2.15.0.51
Rule logged an info message using log.info
rulelogmsgwarn
.1.3.6.1.4.1.7146.1.2.15.0.52
Rule logged a warning message using log.warn
rulelogmsgserious
.1.3.6.1.4.1.7146.1.2.15.0.53
Rule logged an error message using log.error
norate
.1.3.6.1.4.1.7146.1.2.15.0.54
Rule selected an unknown rate shaping class
poolactivenodesunknown
.1.3.6.1.4.1.7146.1.2.15.0.55
Rule references an unknown pool via pool.activenodes
datastorefull
.1.3.6.1.4.1.7146.1.2.15.0.56
data.set() has run out of space
expired
.1.3.6.1.4.1.7146.1.2.15.0.57
License key has expired
licensecorrupt
.1.3.6.1.4.1.7146.1.2.15.0.58
License key is corrupt
expiresoon
.1.3.6.1.4.1.7146.1.2.15.0.59
License key expires within 7 days
sslfail
.1.3.6.1.4.1.7146.1.2.15.0.6
One or more SSL connections from clients failed recently
usinglicense
.1.3.6.1.4.1.7146.1.2.15.0.60
Using license key
licenseclustertoobig
.1.3.6.1.4.1.7146.1.2.15.0.61
Cluster size exceeds license key limit
unlicensed
.1.3.6.1.4.1.7146.1.2.15.0.62
Started without a license
usingdevlicense
.1.3.6.1.4.1.7146.1.2.15.0.63
Using a development license
poolnonodes
.1.3.6.1.4.1.7146.1.2.15.0.64
Pool configuration contains no valid backend nodes
poolok
.1.3.6.1.4.1.7146.1.2.15.0.65
Pool now has working nodes
pooldied
.1.3.6.1.4.1.7146.1.2.15.0.66
Pool has no back-end nodes responding
noderesolvefailure
.1.3.6.1.4.1.7146.1.2.15.0.67
Failed to resolve node address
noderesolvemultiple
.1.3.6.1.4.1.7146.1.2.15.0.68
Node resolves to multiple IP addresses
nodeworking
.1.3.6.1.4.1.7146.1.2.15.0.69
Node is working again
hardware
.1.3.6.1.4.1.7146.1.2.15.0.7
Appliance hardware notification. Deprecated; replaced by 'appliance'
nostarttls
.1.3.6.1.4.1.7146.1.2.15.0.70
Node doesn't provide STARTTLS support
nodefail
.1.3.6.1.4.1.7146.1.2.15.0.71
Node has failed
starttlsinvalid
.1.3.6.1.4.1.7146.1.2.15.0.72
Node returned invalid STARTTLS response
ehloinvalid
.1.3.6.1.4.1.7146.1.2.15.0.73
Node returned invalid EHLO response
flipperraiselocalworking
.1.3.6.1.4.1.7146.1.2.15.0.74
Raising Traffic IP Address; local machine is working
flipperraiseothersdead
.1.3.6.1.4.1.7146.1.2.15.0.75
Raising Traffic IP Address; other machines have failed
flipperraiseosdrop
.1.3.6.1.4.1.7146.1.2.15.0.76
Raising Traffic IP Address; Operating System had dropped this IP address
dropipinfo
.1.3.6.1.4.1.7146.1.2.15.0.77
Dropping Traffic IP Address due to a configuration change or traffic manager recovery
dropipwarn
.1.3.6.1.4.1.7146.1.2.15.0.78
Dropping Traffic IP Address due to an error
flipperdadreraise
.1.3.6.1.4.1.7146.1.2.15.0.79
Re-raising Traffic IP Address; Operating system did not fully raise the address
zxtmswerror
.1.3.6.1.4.1.7146.1.2.15.0.8
Internal software error
flipperipexists
.1.3.6.1.4.1.7146.1.2.15.0.80
Failed to raise Traffic IP Address; the address exists elsewhere on your network and cannot be raised
triggersummary
.1.3.6.1.4.1.7146.1.2.15.0.81
Summary of recent service protection events
slmclasslimitexceeded
.1.3.6.1.4.1.7146.1.2.15.0.82
SLM shared memory limit exceeded
slmrecoveredwarn
.1.3.6.1.4.1.7146.1.2.15.0.83
SLM has recovered
slmrecoveredserious
.1.3.6.1.4.1.7146.1.2.15.0.84
SLM has risen above the serious threshold
slmfallenbelowwarn
.1.3.6.1.4.1.7146.1.2.15.0.85
SLM has fallen below warning threshold
slmfallenbelowserious
.1.3.6.1.4.1.7146.1.2.15.0.86
SLM has fallen below serious threshold
vscrloutofdate
.1.3.6.1.4.1.7146.1.2.15.0.87
CRL for a Certificate Authority is out of date
vsstart
.1.3.6.1.4.1.7146.1.2.15.0.88
Virtual server started
vsstop
.1.3.6.1.4.1.7146.1.2.15.0.89
Virtual server stopped
customevent
.1.3.6.1.4.1.7146.1.2.15.0.9
A custom event was emitted using the TrafficScript 'event.emit()' function
privkeyok
.1.3.6.1.4.1.7146.1.2.15.0.90
Private key now OK (hardware available)
ssldrop
.1.3.6.1.4.1.7146.1.2.15.0.91
Request(s) received while the SSL configuration was invalid; connection closed
vslogwritefail
.1.3.6.1.4.1.7146.1.2.15.0.92
Failed to write log file for virtual server
vssslcertexpired
.1.3.6.1.4.1.7146.1.2.15.0.93
Public SSL certificate expired
vssslcerttoexpire
.1.3.6.1.4.1.7146.1.2.15.0.94
Public SSL certificate will expire within seven days
vscacertexpired
.1.3.6.1.4.1.7146.1.2.15.0.95
Certificate Authority certificate expired
vscacerttoexpire
.1.3.6.1.4.1.7146.1.2.15.0.96
Certificate Authority certificate will expire within seven days
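Every trap above shares the enterprise prefix .1.3.6.1.4.1.7146.1.2.15.0 and differs only in the final sub-identifier. A minimal sketch of resolving a received trap OID back to its event name, using a hypothetical lookup table seeded with a few entries from this listing (a real trap receiver would load all of them):

```python
# Resolve a Traffic Manager trap OID to its event name.
# TRAPS reproduces a handful of entries from the listing above;
# the prefix and lookup logic are an illustrative assumption, not a vendor API.

TRAP_PREFIX = ".1.3.6.1.4.1.7146.1.2.15.0"

TRAPS = {
    TRAP_PREFIX + ".10": "versionmismatch",
    TRAP_PREFIX + ".66": "pooldied",
    TRAP_PREFIX + ".71": "nodefail",
    TRAP_PREFIX + ".88": "vsstart",
}

def trap_name(oid: str) -> str:
    """Return the event name for a trap OID, or unknown(<sub-id>) if unlisted."""
    if not oid.startswith(TRAP_PREFIX + "."):
        raise ValueError("not a Traffic Manager trap OID: " + oid)
    sub_id = oid[len(TRAP_PREFIX) + 1:]
    return TRAPS.get(oid, "unknown(" + sub_id + ")")

print(trap_name(TRAP_PREFIX + ".71"))   # nodefail
print(trap_name(TRAP_PREFIX + ".999"))  # unknown(999)
```

Because the final sub-identifier is the only varying component, keying the table on the full OID string (rather than parsing the OID into components) is sufficient for dispatching trap handlers.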