The user {{username}} paused the replication from {{replicationSource}} to {{replicationDestination}}.
fluidFSEventAdminResumeReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.103
The user {{username}} resumed the replication from {{replicationSource}} to {{replicationDestination}}.
fluidFSEventAdminStartedImmediateReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.104
The user {{username}} started immediate replication from {{replicationSource}} to {{replicationDestination}}.
fluidFSEventAdminDetachReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.105
The user {{username}} detached the destination NAS volume {{replicationDestination}} from source NAS volume {{replicationSource}}.
fluidFSEventAdminAttachReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.106
The user {{username}} attached the destination NAS volume {{replicationDestinationVolume}}@{{replicationDestinationCluster}} to NAS volume {{replicationSourceVolume}}.
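Each entry in this reference follows the same shape: a trap name, its OID under the Dell FluidFS enterprise branch (.1.3.6.1.4.1.674.11000.2000.200.20), and a message template in which {{name}} placeholders are filled from the trap's variable bindings and [[nl]] marks a line break. A minimal sketch of a renderer for these templates (plain Python, no SNMP library; the OID/template pairs are copied from the entries in this reference, and the layer that receives the trap and extracts the varbind values is assumed to exist elsewhere):

```python
import re

# A few OID-to-name entries copied from this reference; extend as needed.
TRAP_NAMES = {
    ".1.3.6.1.4.1.674.11000.2000.200.20.1.103": "fluidFSEventAdminResumeReplication",
    ".1.3.6.1.4.1.674.11000.2000.200.20.1.104": "fluidFSEventAdminStartedImmediateReplication",
    ".1.3.6.1.4.1.674.11000.2000.200.20.1.106": "fluidFSEventAdminAttachReplication",
}

def render(template: str, varbinds: dict) -> str:
    """Fill {{name}} placeholders from varbind values and expand [[nl]] markers."""
    text = re.sub(r"\{\{(\w+)\}\}",
                  lambda m: str(varbinds.get(m.group(1), m.group(0))),
                  template)
    return text.replace("[[nl]]", "\n")

if __name__ == "__main__":
    oid = ".1.3.6.1.4.1.674.11000.2000.200.20.1.104"
    template = ("The user {{username}} started immediate replication "
                "from {{replicationSource}} to {{replicationDestination}}.")
    print(TRAP_NAMES[oid])
    print(render(template, {"username": "admin",
                            "replicationSource": "vol1",
                            "replicationDestination": "vol1_dst"}))
```

Unknown placeholders are left as-is rather than raising, so a partially decoded trap still yields a readable message.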
Firmware DellFluidFS-{{spVersion}} installation started. Description: Firmware DellFluidFS-{{spVersion}} installation started. Action Items: You should follow the installation process in the CLI / WUI and the event log to verify that it completes successfully.
fluidFSEventAdminServicePackFailed
.1.3.6.1.4.1.674.11000.2000.200.20.1.155
Firmware DellFluidFS-{{spVersion}} failed to install. Description: Firmware DellFluidFS-{{spVersion}} failed to install, reason: {{spError}}. Action Items: The system may be in an inconsistent state; please check system validation in the CLI / WUI.[[nl]]If the system is in an unstable state (i.e., some controllers are reported down), escalate to resolve the situation.
fluidFSEventAdminServicePackFinished
.1.3.6.1.4.1.674.11000.2000.200.20.1.156
Successfully upgraded to firmware DellFluidFS-{{spVersion}}.
fluidFSEventAdminGeneralFutureEventInfo
.1.3.6.1.4.1.674.11000.2000.200.20.1.162
{{generalEventText}}
fluidFSEventAdminGeneralFutureEventMajor
.1.3.6.1.4.1.674.11000.2000.200.20.1.163
{{generalEventText}}
fluidFSEventAdminGeneralFutureEventCrit
.1.3.6.1.4.1.674.11000.2000.200.20.1.164
{{generalEventText}}
fluidFSEventAdminGeneralFutureEventWarning
.1.3.6.1.4.1.674.11000.2000.200.20.1.165
{{generalEventText}}
fluidFSEventAdminCancelReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.167
{{username}} cancelled the replication from {{replicationSource}} to {{replicationDestination}}.
fluidFSEventAdminDisableReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.170
Replication-policy of NAS volumes {{replicationSource}} and {{replicationDestination}} was disabled. Description: The user {{username}} disabled the replication-policy of the destination NAS volume {{replicationDestination}} and the source NAS volume {{replicationSource}}.
fluidFSEventAdminEnableReplication
.1.3.6.1.4.1.674.11000.2000.200.20.1.171
Replication-policy of NAS volumes {{replicationSourceVolume}} and {{replicationDestinationVolume}} was enabled. Description: The user {{username}} enabled replication: re-attached NAS volume {{replicationDestinationVolume}} on cluster {{replicationDestinationCluster}} to local NAS volume {{replicationSourceVolume}}.
fluidFSEventAdminAdminLoggedInToInterface
.1.3.6.1.4.1.674.11000.2000.200.20.1.180
The user {{username}} logged in to {{ui}}.
fluidFSEventAdminFormatFileSystemMkfsFailed
.1.3.6.1.4.1.674.11000.2000.200.20.1.181
Format failed during execution of the File System Structures Initialization phase. Internal Information: mkfs.sh failed with error code {{ErrCode}}.
Incremental Format failed during execution of the File System Structures Initialization phase. Internal Information: mkfs.sh failed with error code {{ErrCode}}.
fluidFSEventAdminControllerAutoAttachStart
.1.3.6.1.4.1.674.11000.2000.200.20.1.185
Auto attach of NAS controller {{controllerId}} in appliance {{applianceId}} has started. Action Items: You should follow the attachment process in the CLI / WUI to verify that it completes successfully.
fluidFSEventAdminControllerAutoDetachStart
.1.3.6.1.4.1.674.11000.2000.200.20.1.186
Auto detach of NAS controller {{controllerId}} in appliance {{applianceId}} has started. Action Items: You should follow the detachment process in the CLI / WUI to verify that it completes successfully.
fluidFSEventAdminApplianceCreateStart
.1.3.6.1.4.1.674.11000.2000.200.20.1.187
Creation of NAS Appliance {{applianceId}} ({{applianceServiceTag}}) has started. Action Items: You should follow the appliance create process in the CLI / WUI to verify that it completes successfully.
fluidFSEventAdminEnableEscalationAccess
.1.3.6.1.4.1.674.11000.2000.200.20.1.188
Escalation access was enabled for {{escalationPeriodInMinutes}} minutes.
User {{username}} created {{objectName}}, exceeding the maximum recommended count {{recommendedMaxCount}}.
fluidFSEventAdminControllerAttachStart
.1.3.6.1.4.1.674.11000.2000.200.20.1.190
Attach of NAS controller {{controllerId}} was started by {{username}}.
fluidFSEventAdminControllerAttachEnd
.1.3.6.1.4.1.674.11000.2000.200.20.1.191
Attach of NAS controller {{controllerId}} has completed successfully.
fluidFSEventAdminControllerDetachStart
.1.3.6.1.4.1.674.11000.2000.200.20.1.192
Detach of NAS controller {{controllerId}} was started by {{username}}.
fluidFSEventAdminControllerDetachEnd
.1.3.6.1.4.1.674.11000.2000.200.20.1.193
Detach of NAS controller {{controllerId}} has completed successfully.
fluidFSEventAdminApplianceJoinStart
.1.3.6.1.4.1.674.11000.2000.200.20.1.194
Join of NAS Appliance {{applianceId}} was started by {{username}}. Action Items: You should follow the appliance join process in the CLI / WUI to verify that it completes successfully.
fluidFSEventAdminApplianceJoinEnd
.1.3.6.1.4.1.674.11000.2000.200.20.1.195
Join of NAS Appliance {{applianceId}} has completed successfully.
fluidFSEventAntivirusAntivirusHostUp
.1.3.6.1.4.1.674.11000.2000.200.20.10.16
Antivirus host {{host}}:{{port}} accessibility is restored. Description: Antivirus host {{host}}:{{port}} accessibility is restored.
fluidFSEventAntivirusAntivirusHostDown
.1.3.6.1.4.1.674.11000.2000.200.20.10.17
Antivirus host {{host}}:{{port}} is not accessible. Description: Antivirus host {{host}}:{{port}} is not accessible.
No antivirus hosts are accessible. Description: No antivirus hosts are accessible. Virus scanning is not possible. SMB Shares with AV scan configured will not be accessible.
AntiVirus servers are not configured. Description: SMB share {{shareName}} is configured to use antivirus checking, but no AV servers are configured yet. Access to the files inside the share is denied. Action Items: AntiVirus servers must be configured to provide the antivirus facility.
fluidFSEventAntivirusAntivirusScanWasTooLong2
.1.3.6.1.4.1.674.11000.2000.200.20.10.31
Antivirus scan for file {{FSIDName}}/{{dsidPath}} took too long. Description: Scanning for viruses for file {{dsidPath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}} took too long: {{scanningDuration}} seconds (must be less than {{scanningDurationThreshold}}), scanning facility '{{scanningFacility}}'. Action Items: Please check network availability and the current load on the problematic antivirus facility.
fluidFSEventAntivirusAccessDenied
.1.3.6.1.4.1.674.11000.2000.200.20.10.32
Virus was found in file {{FSIDName}}/{{dsidPath}} but quarantine is not applicable. Description: Virus ({{virusDescription}}) was found in file {{dsidPath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}}. Quarantine action is not possible for this file; access to the file is denied. Internal Information: Reason why the quarantine action is not possible: {{accessDeniedReason}}
fluidFSEventAntivirusAccessDeniedDueInternalError
.1.3.6.1.4.1.674.11000.2000.200.20.10.33
An error occurred while scanning the file {{FSIDName}}/{{dsidPath}} for viruses. Description: An error occurred during the AV scan of the file {{dsidPath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}}. Access to the file is denied. Internal Information: Error details: {{accessDeniedReason}}
fluidFSEventAntivirusUnexpectedAvResponse
.1.3.6.1.4.1.674.11000.2000.200.20.10.34
An error occurred while scanning for viruses. Description: The AV server returned an unexpected error during the scan of file {{dsidPath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}}. Access is denied. Internal Information: ICAP protocol issue: unexpected ICAP status code {{icapStatus}} returned by facility {{server}}:{{port}}
fluidFSEventAntivirusQuarantineActionFailed
.1.3.6.1.4.1.674.11000.2000.200.20.10.35
Virus was found in file {{FSIDName}}/{{dsidPath}} but the quarantine action failed. Description: Virus ({{virusDescription}}) was found in file {{dsidPath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}}. The quarantine action failed for this file; access to the file is denied. Internal Information: Quarantine action error details: {{accessDeniedReason}}
fluidFSEventAntivirusInfectedFileQuarantined2
.1.3.6.1.4.1.674.11000.2000.200.20.10.36
Virus was found in file {{FSIDName}}/{{filePath}}; quarantine action was applied. Description: Virus ({{virusDescription}}) was found in file {{filePath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}}. The file has been moved to quarantine (new path: {{dsidPath}}).
fluidFSEventAntivirusInfectedFileRepaired
.1.3.6.1.4.1.674.11000.2000.200.20.10.37
Infected file {{FSIDName}}/{{dsidPath}} was repaired. Description: Virus ({{virusDescription}}) was found in file {{dsidPath}} on NAS volume {{FSIDName}} accessed via SMB share {{shareName}}. The file was repaired automatically.
Access denied: file {{FSIDName}}/{{dsidPath}} is too large for AV verification. Description: File {{dsidPath}} (which is {{fileSizeMb}} MB) on NAS volume {{FSIDName}} accessed via SMB share {{shareName}} cannot be scanned for viruses because it exceeds the maximum scan size ({{largeFileSizeLimitMb}} MB). This share is configured to block access to unscanned files. Action Items: Increase the maximum scan size for AV scanning or change the policy to allow access to unscanned files. Please note that antivirus scanning of large files may delay the initial open request.
fluidFSEventAntivirusLargeFileNotScanned
.1.3.6.1.4.1.674.11000.2000.200.20.10.39
Access allowed: file {{FSIDName}}/{{dsidPath}} was opened without AV verification. Description: File {{dsidPath}} (which is {{fileSizeMb}} MB) on NAS volume {{FSIDName}} accessed via SMB share {{shareName}} was not scanned for viruses because it exceeds the maximum scan size ({{largeFileSizeLimitMb}} MB). Action Items: Increase the maximum file size for AV scanning or change the policy to deny access to unscanned files. Please note that antivirus scanning of large files may delay the initial open request.
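The two large-file events above are the two outcomes of a single policy check: a file above the maximum scan size is either denied (when the share blocks unscanned files) or admitted without scanning. A small illustrative sketch of that decision; the function and parameter names are hypothetical, not the actual FluidFS implementation:

```python
# Hypothetical sketch of the large-file antivirus decision described above.
def large_file_access(file_size_mb: float,
                      large_file_size_limit_mb: float,
                      deny_unscanned: bool) -> str:
    if file_size_mb <= large_file_size_limit_mb:
        return "scan"            # normal AV scan path
    if deny_unscanned:
        return "deny"            # share blocks unscanned files (first event)
    return "allow-unscanned"     # fluidFSEventAntivirusLargeFileNotScanned

assert large_file_access(10, 100, True) == "scan"
assert large_file_access(500, 100, True) == "deny"
assert large_file_access(500, 100, False) == "allow-unscanned"
```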
fluidFSEventClientAccessNfsAccessDenied
.1.3.6.1.4.1.674.11000.2000.200.20.11.100
NFS user {{nfsUserName}} access to a file denied. Insufficient permissions. Description: NFS access denied: user {{nfsUserName}} does not have sufficient permissions for the file {{dsidPath}} on NAS Volume {{nasContainerIdIntegerName}}. Desired/granted access mask: {{desiredNfsAccessMaskHrf}} / {{grantedNfsAccessMaskHrf}}. Internal Information: {{internalInfo}}
fluidFSEventClientAccessNfsModificationOfRoExport
.1.3.6.1.4.1.674.11000.2000.200.20.11.101
NFS access denied. Attempt to modify a read-only export {{nfsExport}}. Description: NFS Access Denied: User {{nfsUserName}} accessing from {{clientIpAddr}} tried to perform modifications on a read-only export: {{nfsExport}}, NAS Volume {{nasContainerIdIntegerName}}. Action Items: Verify that the export was intentionally defined as read-only. Contact the user or application owner responsible for this access to understand why write operations are being attempted on a read-only export.
NFS access denied. A secured port must be used; export {{nfsExport}}. Description: NFS Access Denied: User {{nfsUserName}} tried to access from {{clientIpAddr}} the secure export {{nfsExport}}, NAS Volume {{nasContainerIdIntegerName}}, using a non-secure port {{nfsClientPort}}. Action Items: Secure exports require that the client accessing them use a well-known port (under 1024); this is done as a security measure. Identify the relevant export and the checkbox that marks it as secure (requires a secure port). If there is no need for a secure export (e.g., the network is not public), make the export insecure by unchecking the secure export option and try again. If the export must remain secure, refer to the NFS client documentation to change the NFS client port to a well-known port (under 1024).
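The secure-export rule described above reduces to a single check on the client's source port. A tiny sketch of the server-side check, with hypothetical names:

```python
WELL_KNOWN_PORT_MAX = 1024  # exclusive upper bound for "well-known" ports

def secure_export_allows(client_port: int, export_is_secure: bool) -> bool:
    """A secure export only accepts requests whose client source port is
    a well-known port (under 1024); insecure exports accept any port."""
    return (not export_is_secure) or (0 < client_port < WELL_KNOWN_PORT_MAX)

assert secure_export_allows(723, export_is_secure=True)        # privileged port: allowed
assert not secure_export_allows(33872, export_is_secure=True)  # ephemeral port: denied
assert secure_export_allows(33872, export_is_secure=False)     # insecure export: allowed
```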
fluidFSEventClientAccessModificationOfRoVolume
.1.3.6.1.4.1.674.11000.2000.200.20.11.103
Access denied. Attempt to modify a read-only NAS volume {{nasContainerIdIntegerName}}. Description: Access denied: User {{nfsUserName}} accessing from client {{clientIpAddr}} tried to modify the file {{dsidPath}} with operation '{{nfsOp}}' on a read-only NAS volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessModifyAccessToSnapshot
.1.3.6.1.4.1.674.11000.2000.200.20.11.104
Access denied. Attempt to modify a snapshot; NAS volume {{nasContainerIdIntegerName}}. Description: Access denied: User {{nfsUserName}} accessing from client {{clientIpAddr}} tried to modify the file {{dsidPath}} on NAS volume {{nasContainerIdIntegerName}} that is located in a snapshot. Action Items: A snapshot is an exact representation of the NAS volume data at the time of snapshot creation. For that reason, and by definition, snapshot data can't be modified. Contact the user and instruct them to limit any modifications to the current work area in the NAS volume.
NFS access denied. Operation not permitted on {{volumeStyle}} volume {{nasContainerIdIntegerName}}. Description: NFS access denied: User {{nfsUserName}} accessing from client {{clientIpAddr}} tried to '{{nfsOp}}' on a {{volumeStyle}} NAS volume {{nasContainerIdIntegerName}}, file {{dsidPath}}. [[nl]] Choosing a volume security style dictates which protocol will be used to set permissions on files in this volume. This also sets certain limitations on operations which are not allowed for this security style: [[nl]]1. Setting an ACL for a file in a NAS volume defined with UNIX Security style. [[nl]]2. Setting mode for a file in a NAS volume defined with NTFS Security style. [[nl]]3. Changing the read-only flag for a file in a NAS volume defined with UNIX Security style. [[nl]]4. Setting SID/GSID ownership on UNIX Security style. [[nl]]5. Setting UID/GID ownership on NTFS Security style. [[nl]]A user might try to perform one of the above actions and get an 'Access denied' error without understanding why. Action Items: The NAS Volume security style should be chosen according to the main intended usage. If a user needs to perform a security-related activity which is forbidden, consider splitting the data into two separate NAS Volumes based on the access pattern.
fluidFSEventClientAccessNotAnOwner
.1.3.6.1.4.1.674.11000.2000.200.20.11.106
NFS access denied. User {{nfsUserName}} not owner of file. Description: NFS Access Denied: User {{nfsUserName}} accessing from client {{clientIpAddr}} can't perform owner restricted operation {{nfsOp}} on file {{dsidPath}}, NAS Volume {{nasContainerIdIntegerName}}, file owner {{userOwnerName}} : {{groupOwnerName}}.
NFS access denied. Root privileges needed to give ownership. Description: NFS Access Denied: User {{nfsUserName}} accessing from client {{clientIpAddr}} can't give ownership away by operation {{nfsOp}} on file {{dsidPath}} to {{userOwnerName}} because root privileges are required, NAS Volume {{nasContainerIdIntegerName}}.
NFS access denied. Root privileges needed to give ownership. Description: NFS Access Denied: User {{nfsUserName}} accessing from client {{clientIpAddr}} can't give ownership away by operation {{nfsOp}} on file {{dsidPath}} to {{groupOwnerName}} because root privileges are required, NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessOwnerOfDifferentTypes
.1.3.6.1.4.1.674.11000.2000.200.20.11.109
NFS access denied. File and group owner are inconsistent. Description: NFS Access Denied: both the file owner and group owner must be of the same identity type (NTFS vs UNIX); an attempt to set different identity types was detected. Client {{clientIpAddr}}, User: {{nfsUserName}}, file {{dsidPath}} on NAS Volume {{nasContainerIdIntegerName}}, setting ownership: {{userOwnerName}} ({{userIdentityType}}) : {{groupOwnerName}} ({{groupIdentityType}}). Action Items: It is impossible to change the file owner ID to a UID if the original file ownership is SID/GSID. If the user wants to change the file to UNIX ownership, they must set the UID and GID at the same time. For example: use chown michael:group and not chown michael.
SMB access denied. Operation not permitted on {{volumeStyle}} volume. Description: Access Denied: User {{nfsUserName}} tried to '{{nfsOp}}' on a {{volumeStyle}} NAS Volume {{nasContainerIdIntegerName}}, file {{dsidPath}}. [[nl]]Choosing a volume security style dictates which protocol will be used to set permissions on files in this volume. This also sets certain limitations on operations which are not allowed for this security style: [[nl]]1. Setting an ACL for a file in a NAS volume defined with UNIX Security style. [[nl]]2. Setting mode for a file in a NAS volume defined with NTFS Security style. [[nl]]3. Changing the read-only flag for a file in a NAS volume defined with UNIX Security style. [[nl]]4. Setting SID/GSID ownership on UNIX Security style. [[nl]]5. Setting UID/GID ownership on NTFS Security style. [[nl]]A user might try to perform one of the above actions and get an 'Access denied' error without understanding why. Action Items: The NAS Volume security style should be chosen according to the main intended usage. If a user needs to perform a security-related activity which is forbidden, consider splitting the data into two separate NAS Volumes based on the access pattern.
Share {{cifsShare}} ACLs denied access to user {{userName}}. Description: Connection for user '{{userName}}' to share {{cifsShare}} was denied due to share ACL permissions, client {{clientIpAddr}}, NAS Controller {{nasControllerId}}, NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessGeneralAccessDenied
.1.3.6.1.4.1.674.11000.2000.200.20.11.112
Access denied. Insufficient permissions, user {{userName}}. Description: Access denied on '{{cifsOp}}' operation. Desired access mask: {{desiredCifsAccessMaskName}}, missing bits: {{missedCifsAccessBitsName}}, client: {{clientIpAddr}}, user: {{userName}}, file '{{dsidPath}}' on NAS Volume {{nasContainerIdIntegerName}}. Internal Information: {{internalInfo}}
fluidFSEventClientAccessNfs4WritePseudoFS
.1.3.6.1.4.1.674.11000.2000.200.20.11.113
Write access denied to the pseudo-FS of {{TenantName}}, user {{userName}}. Description: Write access denied to user {{userName}}. Attempted modification of the NFS4 Pseudo File System of {{TenantName}}. Client IP: {{clientIpAddr}}.
Access Denied. Open mode conflict; user {{userName}}. Description: Access Denied on '{{operation}}' operation due to an open mode conflict. Desired access mask: {{desiredCifsAccessMaskName}}, open mode mask: {{openModeAccessBitsName}}, client: {{clientIpAddr}}, user: {{userName}}, file '{{dsidPath}}' on NAS Volume {{nasContainerIdIntegerName}}.
Access Denied. Conflict with share mode; NAS Volume {{nasContainerIdIntegerName}}. Description: Access Denied: User {{userName}} accessing from client {{clientIpAddr}} tried to open file {{dsidPath}} on NAS Volume {{nasContainerIdIntegerName}}, which is opened with conflicting share mode by another user.
fluidFSEventClientAccessDeleteOnCloseIsSet
.1.3.6.1.4.1.674.11000.2000.200.20.11.116
Access denied. File marked for deletion; File {{dsidPath}}. Description: Access denied: the file is marked for deletion. Client: {{clientIpAddr}}, user: {{userName}}, file {{dsidPath}} on NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessUserPasswordExpired
.1.3.6.1.4.1.674.11000.2000.200.20.11.117
SMB client login to {{TenantName}} failed. Password expired for user {{userName}}. Description: SMB client login failure: Password expired for client {{clientIpAddr}} user name '{{userName}}'. NAS Controller {{nasControllerId}} of {{TenantName}}.
fluidFSEventClientAccessNoSuchUser
.1.3.6.1.4.1.674.11000.2000.200.20.11.118
SMB client login to {{TenantName}} failed. The user {{userName}} could not be found. Description: SMB client login failure: The login username '{{userName}}' could not be found for client {{clientIpAddr}}. NAS Controller {{nasControllerId}} of {{TenantName}}.
Share-level permissions denied access; share {{cifsShare}}. Description: SMB access denied on '{{cifsOp}}' operation: the share-level permissions applied to share {{cifsShare}} prohibit this action. Desired access mask: {{desiredCifsAccessMaskName}}, client: {{clientIpAddr}}, user: {{userName}}, file '{{nasContainerPath}}' on NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessCifsUserHasNoPrivileges2
.1.3.6.1.4.1.674.11000.2000.200.20.11.122
SMB access denied. User {{cifsUser2Name}} has insufficient privileges. Description: SMB access denied: the accessing user does not have the required privileges {{cifsPrivileges}} to perform '{{cifsOp}}', client: {{clientIpAddr}}, user: {{cifsUser2Name}}, file '{{nasContainerPath}}' on NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessCifsUserHasNoPrivileges3
.1.3.6.1.4.1.674.11000.2000.200.20.11.123
SMB access denied. User {{cifsUser2Name}} has insufficient privileges. Description: SMB access denied: the accessing user does not have the required privileges {{cifsPrivileges}} to perform '{{cifsOp}}', client: {{clientIpAddr}}, user: {{cifsUser2Name}}, file '{{dsidPath}}' on NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessNestedNetgroupNotFound
.1.3.6.1.4.1.674.11000.2000.200.20.11.124
Missing netgroup '{{netgroupName}}' could not be found by {{TenantName}}. Description: Nested netgroup '{{netgroupName}}' of parent netgroup '{{parentNetgroupName}}' was not found while resolving recursive definitions of the export access list during an NFS mount operation on {{TenantName}}. The missing netgroup was ignored. Action Items: Please check the content of the NIS/LDAP external repository and fix the definition of the broken netgroup.
Netgroup '{{netgroupName}}' has too many levels of nesting on {{TenantName}}. Description: Circular netgroup definitions alert: netgroup has too many levels of nesting. Netgroup '{{netgroupName}}' exceeded the limit of {{netgroupThreshold}} levels of nesting on {{TenantName}}. Action Items: Consider reviewing your NIS/LDAP external repository for circular netgroup definitions.
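The two netgroup events above describe the same resolution walk: nested netgroups are expanded recursively, missing ones are skipped, and a nesting-depth limit guards against circular definitions. A minimal sketch under assumed conventions (a dict stands in for the NIS/LDAP repository, an '@name' member marks a nested-netgroup reference, and the hard-coded limit stands in for {{netgroupThreshold}}):

```python
NETGROUP_DEPTH_LIMIT = 16  # assumed value; the real limit is {{netgroupThreshold}}

def resolve_netgroup(name, netgroups, depth=0, hosts=None):
    """Expand a netgroup into its host members, skipping missing nested
    netgroups and refusing to recurse past the nesting-depth limit."""
    if hosts is None:
        hosts = set()
    if depth > NETGROUP_DEPTH_LIMIT:  # also catches circular definitions
        raise RecursionError(f"netgroup '{name}' has too many levels of nesting")
    for member in netgroups.get(name, ()):
        if member.startswith("@"):    # reference to a nested netgroup
            nested = member[1:]
            if nested in netgroups:
                resolve_netgroup(nested, netgroups, depth + 1, hosts)
            # else: the missing nested netgroup is ignored, as the first event states
        else:
            hosts.add(member)         # plain host entry
    return hosts

groups = {"eng": ["@hosts-a", "@missing", "build01"],
          "hosts-a": ["node1", "node2"]}
print(sorted(resolve_netgroup("eng", groups)))  # ['build01', 'node1', 'node2']
```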
fluidFSEventClientAccessSMBMemThreshold3
.1.3.6.1.4.1.674.11000.2000.200.20.11.126
Access denied on open file operation due to high SMB memory consumption. Description: SMB access denied on an open file operation for user {{cifsUser2Name}}, file '{{nasContainerPath}}' on NAS Volume {{nasContainerIdIntegerName}}, due to high SMB memory consumption on NAS controller {{nasControllerId}}.
fluidFSEventClientAccessSMBMemThreshold4
.1.3.6.1.4.1.674.11000.2000.200.20.11.127
New sessions are not allowed due to high SMB resource usage. Description: Due to high SMB resource usage, new sessions are not allowed on NAS controller {{nasControllerId}}.
fluidFSEventClientAccessRpcSecGssAuthFailure
.1.3.6.1.4.1.674.11000.2000.200.20.11.128
NFS RPCSECGSS user authentication failure for user {{userName}} on {{TenantName}}. Description: NFS RPCSECGSS authentication for user '{{userName}}' failed, client {{clientIpAddr}}, NAS controller {{nasControllerId}} of {{TenantName}}.
fluidFSEventClientAccessSMBMaxOpenFilesThreshold
.1.3.6.1.4.1.674.11000.2000.200.20.11.129
Open file operation denied: too many SMB open files. Description: SMB access denied on an open file operation because there are too many open SMB files on NAS controller {{nasControllerId}}. The allowed number of open files per node is {{maxOpenFilesPerNode}}.
fluidFSEventClientAccessExtensionIsDeniedOnShare
.1.3.6.1.4.1.674.11000.2000.200.20.11.130
SMB access denied. Extension {{fileExtension}} is denied on share {{cifsShare}}. Description: SMB access denied: Extension-based access filtering is configured on share {{cifsShare}} on NAS volume {{nasContainerIdIntegerName}}, and extension {{fileExtension}} of file {{nasContainerPath}} is denied. User {{cifsUser2Name}} is not a member of the user groups excluded from filtering.
fluidFSEventClientAccessAutoHomeDirCreationError
.1.3.6.1.4.1.674.11000.2000.200.20.11.131
Error creating home directory for user {{cifsUser2Name}}. Description: Auto home directory creation failed for user {{cifsUser2Name}} on NAS volume {{nasContainerIdIntegerName}} with path {{nasContainerPath}}. NT error code {{cifsErrorCode}}.
fluidFSEventClientAccessMountDeniedOnVip
.1.3.6.1.4.1.674.11000.2000.200.20.11.132
NFS mount denied. Path {{mountPath}} denied on subnet {{clusterVIP}}. Description: NFS client {{clientIpAddr}} tried to mount the NFS export with path {{mountPath}} via cluster VIP {{clusterVIP}}, but access from this subnet is not allowed by this NFS export. Access was denied. The client was accessing the export in blind mode, which can be considered a security issue.
Access from a disallowed subnet to share {{cifsShare}} was denied. Description: Connection for user '{{userName}}' to share {{cifsShare}} was denied because access from this subnet is not allowed by the SMB share settings, client {{clientIpAddr}}, NAS Controller {{nasControllerId}}, NAS Volume {{nasContainerIdIntegerName}}.
fluidFSEventClientAccessNoRootForOperation
.1.3.6.1.4.1.674.11000.2000.200.20.11.134
NFS access denied. Root privileges on export needed for {{operation}}. Description: NFS Access Denied: Client {{clientIpAddr}} can't execute operation {{operation}} with volume {{nasContainerIdIntegerName}} because root privileges are required for the client in the relevant NFS export.
fluidFSEventClientAccessFtpLoginFailed
.1.3.6.1.4.1.674.11000.2000.200.20.11.135
FTP user login to {{TenantName}} failed. Description: FTP access to {{TenantName}} from {{clientIpAddr}} is denied for user {{userName}}. Action Items: This event is generated when a client attempts an FTP connection but provides bad account details. Possible reasons: wrong password, bad username, disabled anonymous access, a disabled account, or an expired password that needs to be changed. The user should verify the account details.
fluidFSEventClientAccessFtpLoginAccountProblem
.1.3.6.1.4.1.674.11000.2000.200.20.11.136
FTP user login to {{TenantName}} failed due to a problematic account. Description: FTP client {{clientIpAddr}} user {{userName}} login to {{TenantName}} failed due to a problematic account. Action Items: This event is generated when a client makes an FTP connection but the provided account has problems. Possible reasons: the account is locked, the account has expired, or logon with this account is disabled. The system administrator may need to help the user fix the account problem.
fluidFSEventClientAccessFtpLoginInternalProblem
.1.3.6.1.4.1.674.11000.2000.200.20.11.137
FTP user login to {{TenantName}} failed due to an internal problem. Description: FTP client {{clientIpAddr}} user {{userName}} login to {{TenantName}} failed due to an internal problem. Internal Information: {{internalInfo}}
fluidFSEventClientAccessFtpLoginSucceeded
.1.3.6.1.4.1.674.11000.2000.200.20.11.138
FTP user logged into {{TenantName}}. Description: FTP client {{clientIpAddr}} user {{userName}} logged into {{TenantName}}. Action Items: This event is generated when a client makes an FTP connection and successfully logs in.
fluidFSEventClientAccessFtpAnonymousLoggedIn
.1.3.6.1.4.1.674.11000.2000.200.20.11.139
FTP Anonymous user logged into {{TenantName}}. Description: FTP Anonymous user logged into {{TenantName}} with email {{userEmail}} from {{clientIpAddr}}. Action Items: An event is registered for every anonymous user login.
FTP user login to {{TenantName}} failed due to a wrong TLS version. Description: FTP client {{clientIpAddr}} login to {{TenantName}} failed due to a TLS protocol version mismatch. Action Items: Make sure the FTP client software supports TLS version 1.{{tlsVersion}}.
fluidFSEventClientAccessMaxFtpConnectionsReached
.1.3.6.1.4.1.674.11000.2000.200.20.11.141
Reached the maximum number of FTP connections on NAS controller {{nasControllerId}} of {{TenantName}}. The maximum number of connections allowed per controller is {{maxConnections}}. Action Items: This failure is common when there are too many FTP connections to the same fsd.
fluidFSEventClientAccessFtpMissingBaseLandingDir
.1.3.6.1.4.1.674.11000.2000.200.20.11.142
FTP user login to {{TenantName}} failed due to a missing landing path. Description: FTP client {{clientIpAddr}} user {{userName}} login to {{TenantName}} failed due to missing landing directory '{{baseLandingPath}}' on volume {{nasContainerIdIntegerName}}. Action Items: This event is generated when a client makes an FTP connection and successfully logs in, but the landing directory does not exist. Please create the landing directory.
fluidFSEventClientAccessFtpMissingHomeLandingDir
.1.3.6.1.4.1.674.11000.2000.200.20.11.143
FTP user login to {{TenantName}} failed due to a missing home landing path. Description: FTP client {{clientIpAddr}} user {{userName}} login to {{TenantName}} failed due to missing home directory '{{suffixLandingPath}}' within base landing directory '{{baseLandingPath}}' on volume {{nasContainerIdIntegerName}}. Action Items: This event is generated when a client makes an FTP connection and successfully logs in, but the home landing directory does not exist. FTP does not create home folders automatically. Please create them manually.
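The two FTP landing-directory events above correspond to consecutive checks at login: the base landing directory must exist, and the user's home landing directory inside it must exist, since FTP does not create either automatically. A small sketch of that order of checks, with hypothetical names and a set of existing paths standing in for the file system:

```python
from pathlib import PurePosixPath

def landing_dir_check(base_landing_path, suffix_landing_path, existing_dirs):
    """Mirror the two checks described above: first the base landing
    directory, then the per-user home landing directory inside it."""
    base = PurePosixPath(base_landing_path)
    home = base / suffix_landing_path
    if str(base) not in existing_dirs:
        return "missing base landing dir"   # ...FtpMissingBaseLandingDir
    if str(home) not in existing_dirs:
        return "missing home landing dir"   # ...FtpMissingHomeLandingDir
    return "ok"

assert landing_dir_check("/ftp", "alice", {"/ftp"}) == "missing home landing dir"
assert landing_dir_check("/ftp", "alice", {"/ftp", "/ftp/alice"}) == "ok"
```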
fluidFSEventClientAccessHdfsMaxConnectionsReached
.1.3.6.1.4.1.674.11000.2000.200.20.11.144
Reached maximum HDFS connections. Description: Reached the maximum number of HDFS connections. Action Items: This failure is common when there are too many HDFS connections to the same fsd.
fluidFSEventClientAccessHdfsUserNotFound
.1.3.6.1.4.1.674.11000.2000.200.20.11.145
HDFS user name was not found in any of the repositories of {{TenantName}}. Description: HDFS access from {{clientIpAddr}} provided the user name {{userName}}, but a corresponding user was not found in any of the repositories of {{TenantName}}.
Snapshot access denied by {{TenantName}} due to restriction. Description: Snapshot access to the HDFS landing directory of NAS Volume {{nasContainerIdIntegerName}} of {{TenantName}} was denied due to the '.snapshots' restriction. Client: {{clientIpAddr}}, user: {{userName}}, path: '{{baseLandingPath}}'.
GPO Policy was not applied on {{TenantName}}. Took too much time. Description: Policy {{policyGuid}} {{policyName}} was not applied on {{TenantName}}. It took too much time.
fluidFSEventClientAccessGpoOtherFailure
.1.3.6.1.4.1.674.11000.2000.200.20.11.148
GPO Policy was not applied on {{TenantName}} due to a system error. Description: Policy {{policyGuid}} {{policyName}} was not applied on {{TenantName}} due to a system error.
fluidFSEventClientAccessGpoSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.11.149
GPO Policy was applied successfully on {{TenantName}}. Description: Policy {{policyGuid}} {{policyName}} was applied successfully on {{TenantName}}.
GPO Policy on target was not applied on {{TenantName}} due to a system error. Description: Policy {{policyGuid}} {{policyName}} on target {{policyTargetPath}} was not applied due to a system error on {{TenantName}}.
GPO Do Not Replace was not applied on {{TenantName}} due to a system error. Description: 'Do not replace' logic was not applied in Policy {{policyGuid}} {{policyName}} on target {{policyTargetPath}} due to a system error on {{TenantName}}.
fluidFSEventClientAccessGpoInfFileReadFailure
.1.3.6.1.4.1.674.11000.2000.200.20.11.152
GPO Policy .inf file read failure on {{TenantName}}. Description: {{TenantName}} failed to read the .inf file {{filePath}} of Policy {{policyGuid}} {{policyName}}.
fluidFSEventClientAccessCapLdapQueryFailure
.1.3.6.1.4.1.674.11000.2000.200.20.11.153
Central Access Policy LDAP query failure on {{TenantName}}. Description: LDAP query for Central Access Policy objects {{searchedObjects}} failed on {{TenantName}}.
Central Access Policy rule parsing failure on {{TenantName}}. Description: Parsing of the Central Access Policy rule {{ruleDN}} of the policy {{policyDN}} failed on {{TenantName}}.
Administrator password was reset from the console. Description: Administrator password was reset from the console.
fluidFSEventClientAccessSMBMemThreshold1
.1.3.6.1.4.1.674.11000.2000.200.20.11.54
Large MTU disabled due to high SMB resource usage. Description: Due to high SMB resource usage, Large MTU is disabled on NAS controller {{nasControllerId}}.
fluidFSEventClientAccessSMBMemThreshold2
.1.3.6.1.4.1.674.11000.2000.200.20.11.55
New connections are not allowed due to high SMB resource usage. Description: Due to high SMB resource usage, new connections are not allowed on NAS controller {{nasControllerId}}.
fluidFSEventClientAccessLockDBOutOfResources
.1.3.6.1.4.1.674.11000.2000.200.20.11.58
File open or lock denied due to lack of internal resources. Description: NAS controller {{nodeId}} is low on resources and cannot serve SMB and NFSv4 file open and lock requests. Disconnect existing connections to free resources.
fluidFSEventClientAccessSessionDBOutOfResources
.1.3.6.1.4.1.674.11000.2000.200.20.11.59
NFSv4 session was denied. Too many sessions currently in use. Description: No resources are available for new NFSv4 sessions because too many sessions are currently in use. NAS controller {{nodeId}}.
fluidFSEventClientAccessMountFailureInternalError
.1.3.6.1.4.1.674.11000.2000.200.20.11.6
NFS mount failure. Internal mount error {{mountPath}}. Description: NFS mount failure: Internal mount error for the client {{clientIpAddr}} while mounting path {{mountPath}}. Internal Information: NFS error code: {{nfsErrorCode}}
fluidFSEventClientAccessUsersProxyNoService
.1.3.6.1.4.1.674.11000.2000.200.20.11.70
NFS/SMB partial service. Description: Access to the {{serverKind}} directory service is limited or the service is temporarily unavailable. Action Items: This scenario usually happens after directory changes (UNIX database re-configuration, Active Directory join, etc.). The problem can occur due to poor performance of a newly configured directory; please wait until the new directory is fully initialized. In case of an unexpected or persistent problem, check the operability of the directory and network connectivity to the directory service. Problems with the '{{service}}' internal service can also cause this issue; check for problematic services.
fluidFSEventClientAccessSecChangeErrorBypassOn
.1.3.6.1.4.1.674.11000.2000.200.20.11.71
Security change errors bypassed. Description: Errors due to chmod/chown/chgrp on volumes with Windows security style are now bypassed.
fluidFSEventClientAccessSecChangeErrorBypassOff
.1.3.6.1.4.1.674.11000.2000.200.20.11.72
Security change errors returned. Description: Errors due to chmod/chown/chgrp on volumes with Windows security style are now returned.
File Access Notifications were dropped. Description: {{droppedNotificationsCounter}} File Access Notifications were dropped due to auditing server issues
File Access Notification encryption failed. Description: Failed to initialize an encrypted connection to the auditing server. The encryption credentials configuration may be invalid.
fluidFSEventClientAccessMountFromInsecurePort
.1.3.6.1.4.1.674.11000.2000.200.20.11.89
NFS mount failure. Wrong port was used for secure export {{nfsExport}}. Description: NFS mount failure: Client {{clientIpAddr}} attempted to mount the secure export {{nfsExport}} on NAS Volume {{nasContainerIdIntegerName}} but used port {{nfsClientPort}} instead of a well-known port (under 1024) on the client side. Action Items: Secure exports require that the client accessing them perform the mount using a well-known port (under 1024). This is done as a security measure.
fluidFSEventClientAccessMountExportDoesntExist
.1.3.6.1.4.1.674.11000.2000.200.20.11.90
NFS mount failure. The path {{mountPath}} is not exported by {{TenantName}}. Description: NFS mount failure: Client {{clientIpAddr}} attempted to mount the path ({{mountPath}}) from {{TenantName}}, which is not exported to NFS clients. Action Items: This failure is commonly caused by spelling mistakes made on the client machine or by accessing the wrong server. List all available exports on the NAS and verify that everything is as expected: all the needed exports exist and nothing has been changed unintentionally.
fluidFSEventClientAccessMountPathDoesntExist
.1.3.6.1.4.1.674.11000.2000.200.20.11.91
NFS mount failure. Nonexistent mount path {{mountPath}}. Description: NFS mount failure: Client {{clientIpAddr}} tried to mount nonexistent path {{mountPath}} on NAS Volume {{nasContainerIdIntegerName}}.
NFS mount failure. Access denied during mount {{mountPath}}. Description: NFS mount failure: Access denied for the client {{clientIpAddr}} while mounting path {{mountPath}} on NAS Volume {{nasContainerIdIntegerName}}.
NFS mount failure. Access attempt to limited export {{nfsExport}}. Description: NFS mount failure: Client {{clientIpAddr}} is not allowed to access export {{nfsExport}} on NAS Volume {{nasContainerIdIntegerName}}, with defined export options: {{nfsExportOptions}}. Action Items: This error is caused by a client accessing a limited export. The export was defined with client limitations (network, netgroup, or IP) that did not include the accessing client.
NFS mount failure. Netgroups could not be attained for {{nfsExport}}. Description: NFS mount failure: Client {{clientIpAddr}} failed to mount export {{nfsExport}} on NAS Volume {{nasContainerIdIntegerName}} because the required netgroup information could not be attained. Export options: {{nfsExportOptions}}. Action Items: Check the availability of the NIS/LDAP server. This error is usually the outcome of a communication error between the NAS and the NIS/LDAP server. It could be caused by a network issue, directory server overload, or a software malfunction in one of the environment components.
fluidFSEventClientAccessClientClockSkew
.1.3.6.1.4.1.674.11000.2000.200.20.11.95
Client authentication failed on {{TenantName}} due to client clock skew. Description: Kerberos authentication failed because of clock skew between client {{clientIpAddr}} and NAS controller {{nasControllerId}} of {{TenantName}}. Action Items: Fix the client clock. The client clock must be within 5 minutes of the Kerberos server (i.e., AD) clock. Configure the AD server as an NTP server for this client to avoid clock skew errors.
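The 5-minute tolerance mentioned above is the standard Kerberos clock-skew limit, so a quick way to act on this event is to measure the client's offset against the AD host. A rough sketch using a single SNTP query (it assumes the AD/Kerberos host answers NTP on UDP port 123, as domain controllers normally do, and it ignores network delay; the host name is a placeholder):

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 (NTP) and 1970-01-01 (Unix)
MAX_SKEW_SECONDS = 5 * 60      # Kerberos default tolerance mentioned above

def clock_skew_seconds(server: str) -> float:
    """Rough local-clock-minus-server-clock offset via one SNTP query."""
    packet = b"\x1b" + 47 * b"\0"  # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(512)
    transmit_ts = struct.unpack("!I", data[40:44])[0]  # server transmit time (seconds)
    return time.time() - (transmit_ts - NTP_EPOCH_OFFSET)

if __name__ == "__main__":
    skew = clock_skew_seconds("ad.example.com")  # hypothetical AD host
    print(f"skew: {skew:+.1f}s", "OK" if abs(skew) <= MAX_SKEW_SECONDS else "fix the clock")
```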
fluidFSEventClientAccessSharePathNotFound
.1.3.6.1.4.1.674.11000.2000.200.20.11.96
Client connect failure. Share {{cifsShare}} points to an invalid directory. Description: SMB client connection failure: Client {{clientIpAddr}} accessed share {{cifsShare}}, which refers to a nonexistent directory {{nasContainerPath}} in NAS Volume {{nasContainerIdIntegerName}}. Action Items: List all available shares on the NAS and identify the problematic share. It should have an indication that it is not accessible.
fluidFSEventClientAccessIncorrectUserPassword
.1.3.6.1.4.1.674.11000.2000.200.20.11.97
SMB client login to {{TenantName}} failed. Wrong password for user {{userName}}. Description: SMB client login failure: Client {{clientIpAddr}} with user name '{{userName}}' supplied the wrong password. NAS Controller {{nasControllerId}} of {{TenantName}}.
fluidFSEventClientAccessLoginFailure
.1.3.6.1.4.1.674.11000.2000.200.20.11.98
SMB client login to {{TenantName}} failed. Authorization failed for user {{userName}}. Description: Authorization for user '{{userName}}' failed, client {{clientIpAddr}}, NAS controller {{nasControllerId}} of {{TenantName}}. Internal Information: {{internalInfo}}
fluidFSEventClientAccessShareDoesntExist
.1.3.6.1.4.1.674.11000.2000.200.20.11.99
SMB client connection failure. Unavailable share {{cifsShare}} on {{TenantName}}. Description: SMB client connection failure: Client {{clientIpAddr}} attempted to connect to an unavailable share {{cifsShare}} of {{TenantName}}. Action Items: This failure is commonly caused by spelling mistakes made on the client machine or by a user trying to access the home share of a different user.
fluidFSEventLicensingEulaNotApprovedByUser
.1.3.6.1.4.1.674.11000.2000.200.20.13.14
End-user license agreement (EULA) was not accepted by the user. Description: FluidFS version {{FluidFsVersion}} has a new end-user license agreement. You must accept this agreement in order to use the system. Action Items: To accept the end-user license agreement, go to the System menu in the user interface.
fluidFSEventExaminerSystemCorruption
.1.3.6.1.4.1.674.11000.2000.200.20.14.1
FluidFS Health Scan found an inconsistency in the system. Description: FluidFS Health Scan found an inconsistency in the system. Please contact Support. Internal Information: FluidFS Health Scan has found an error in {{category}} on level {{level}}
fluidFSEventExaminerExaminingCycleFinished
.1.3.6.1.4.1.674.11000.2000.200.20.14.3
FluidFS Health Scan finished a scanning cycle. Description: FluidFS Health Scan finished a scanning cycle of the entire system
fluidFSEventComponentExampleDeleteFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.10
The user {{userName}} failed to delete {{Name}}. {{retcodeString}}.
fluidFSEventComponentEqlFabricRoutesModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.100
EQL fabric route {{networkId}} modified by {{userName}}.
fluidFSEventComponentEqlFabricRoutesDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.101
EQL fabric route {{networkId}} deleted by {{userName}}.
fluidFSEventComponentFabricIscsiV4CreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.102
iSCSI IPv4 fabric {{Name}} created by {{userName}}.
fluidFSEventComponentFabricIscsiV4CreateWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.103
iSCSI IPv4 fabric {{Name}} created by {{userName}}. {{retcodeString}}
fluidFSEventComponentFabricIscsiV4ModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.104
iSCSI IPv4 fabric {{Name}} modified by {{userName}}.
fluidFSEventComponentFabricIscsiV4ModifyWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.105
iSCSI IPv4 fabric {{Name}} modified by {{userName}}. {{retcodeString}}
fluidFSEventComponentFabricIscsiV4DeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.106
iSCSI IPv4 fabric {{Name}} deleted by {{userName}}.
The user {{userName}} failed to modify iSCSI storage portals. {{retcodeString}}
fluidFSEventComponentExampleDeleteWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.11
Example {{Name}} was deleted by {{userName}}, but with a warning: {{retcodeString}}.
fluidFSEventComponentTimeModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.12
Time configuration was modified by {{userName}}
fluidFSEventComponentNasApplianceBlinkSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.122
User {{userName}} set the blink state of appliance {{applianceId}}: the 1st controller to {{blink1}} and the 2nd controller to {{blink2}}.
fluidFSEventComponentRebootNasControllerSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.123
User {{userName}} rebooted NAS controller {{controllerId}}.
fluidFSEventComponentMoveClientActionSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.124
Client or router with {{ClientIp}} IP, which referred to {{AccessIp}} IP, was moved to controller {{NodeId}} by {{userName}}.
fluidFSEventComponentPinClientActionSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.125
Client or router with {{ClientIp}} IP, which referred to {{AccessIp}} IP, was restricted to the required controller (pin state is {{PinState}}) by {{userName}}.
The user {{userName}} ran a general system diagnostic. Description: The user {{userName}} ran a general system diagnostic.
fluidFSEventComponentRunNetworkDiagnosticSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.172
The user {{userName}} ran a networking diagnostic. Description: The user {{userName}} ran a networking diagnostic.[[nl]]Diagnostic arguments:[[nl]]ClientIP - {{RunNetworkDiagnosticClientIP}}.
The user {{userName}} ran a performance diagnostic. Description: The user {{userName}} ran a performance diagnostic.[[nl]]Diagnostic arguments:[[nl]]FSID - {{RunPerformanceDiagnosticFSID}}.
Network diagnostic failed. Description: The network diagnostic run by user {{userName}} failed.[[nl]]Diagnostic arguments:[[nl]]ClientIP - {{RunNetworkDiagnosticClientIP}}[[nl]]{{retcodeString}}
Performance diagnostic failed. Description: The performance diagnostic run by user {{userName}} failed.[[nl]]Diagnostic arguments:[[nl]]FSID - {{RunPerformanceDiagnosticFSID}}[[nl]]{{retcodeString}}
The user {{userName}} modified the Secure Console Access settings. Description: The user {{userName}} modified the Secure Console Access settings.
{{userName}} failed to modify the Secure Console Access settings. Description: The user {{userName}} failed to modify the Secure Console Access settings due to the following error: {{retcodeString}}
{{userName}} failed to modify the Secure Console Access settings. Description: The user {{userName}} failed to modify the Secure Console Access settings due to the following error: {{retcodeString}}
{{userName}} failed to modify the Events filter configuration. Description: {{userName}} failed to modify the Events filter configuration. {{retcodeString}}
fluidFSEventComponentPhoneHomeModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.216
{{userName}} updated the phone home settings. Description: {{userName}} updated the phone home settings.
The user {{userName}} set the internal storage reservation. Description: The user {{userName}} set the internal storage reservation. Current configuration: events - {{eventsSpace}} MB.
fluidFSEventComponentBwLimiterCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.254
The user {{userName}} created BW Limiter {{Name}}.
fluidFSEventComponentBwLimiterModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.255
The user {{userName}} updated the configuration of BW Limiter {{Name}}.
fluidFSEventComponentBwLimiterDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.256
The user {{userName}} deleted BW Limiter {{Name}}.
Restore of NAS container {{FSIDName}} configuration finished with errors. Description: The configuration of the NAS container {{FSIDName}} was restored by user {{userName}} with errors: {{retcodeString}}
Restore of NAS volume {{FSIDName}} configuration finished with errors. Description: The configuration of the NAS volume {{FSIDName}} was restored from files by user {{userName}} with errors: {{retcodeString}}.
fluidFSEventComponentAdministratorCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.287
The user {{userName}} created new administrator {{AdminName}}.
fluidFSEventComponentAdministratorModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.288
The user {{userName}} modified the configuration of administrator {{AdminName}}.
fluidFSEventComponentAdministratorDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.289
The user {{userName}} deleted administrator {{AdminName}}.
fluidFSEventComponentCifsShareCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.290
The user {{userName}} created the {{shareName}} SMB share on {{FSIDName}} NAS volume.
fluidFSEventComponentCifsShareModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.291
The user {{userName}} updated the configuration of {{shareName}} SMB share on {{FSIDName}} NAS volume.
fluidFSEventComponentCifsShareDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.292
The user {{userName}} deleted the {{shareName}} SMB share on {{FSIDName}} NAS volume.
fluidFSEventComponentCifsHomeShareModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.293
The user {{userName}} updated the configuration of home SMB share on {{FSIDName}} NAS volume of {{TenantName}}.
fluidFSEventComponentNfsExportCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.294
The user {{userName}} created the {{NFSExportPath}} NFS export on {{FSIDName}} NAS volume.
fluidFSEventComponentNfsExportModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.295
The user {{userName}} updated the configuration of {{NFSExportPath}} NFS export on {{FSIDName}} NAS volume.
fluidFSEventComponentNfsExportDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.296
The user {{userName}} deleted {{NFSExportPath}} NFS export on {{FSIDName}} NAS volume.
The user {{userName}} updated the configuration of {{Name}} schedule for replication between {{FSIDName}} and {{remoteVolumeName}}@{{remoteClusterName}} NAS volumes.
The user {{userName}} has connected {{FSIDName}} volume to {{remoteVolume}} volume of {{remoteSystemName}} system. Description: The user {{userName}} has connected {{FSIDName}} NAS volume to {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
The user {{userName}} has disconnected {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes. Description: The user {{userName}} has disconnected the replication between {{FSIDName}} NAS volume and {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
The user {{userName}} has disabled replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes. Description: The user {{userName}} has disabled the replication between {{FSIDName}} NAS volume and {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
The user {{userName}} has enabled replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes. Description: The user {{userName}} has enabled the replication between {{FSIDName}} NAS volume and {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
The user {{userName}} has promoted replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes. Description: The user {{userName}} has promoted replication between {{FSIDName}} NAS volume and {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
The user {{userName}} has demoted replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes. Description: The user {{userName}} has demoted replication between {{FSIDName}} NAS volume and {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
The user {{userName}} has started replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes. Description: The user {{userName}} has started replication from {{FSIDName}} NAS volume to {{remoteVolume}} NAS volume of {{remoteSystemName}} system.
fluidFSEventComponentCreateBlockStorageSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.311
BlockStorage {{storageId}} was successfully created.
fluidFSEventComponentModifyBlockStorageSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.312
BlockStorage {{storageId}} was successfully modified.
fluidFSEventComponentDeleteBlockStorageSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.313
BlockStorage {{storageId}} was successfully deleted.
fluidFSEventComponentSubnetsCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.314
Client subnet {{networkId}}/{{prefix}} was successfully created by {{userName}}.
fluidFSEventComponentSubnetsModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.315
Client subnet {{networkId}}/{{prefix}} was successfully modified by {{userName}}.
fluidFSEventComponentSubnetsDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.316
Client subnet {{networkId}}/{{prefix}} was successfully deleted by {{userName}}.
fluidFSEventComponentActiveDirectoryJoinSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.317
The user {{userName}} has successfully joined {{TenantName}} to Active Directory domain '{{Domain}}'.
fluidFSEventComponentActiveDirectoryJoinWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.318
The user {{userName}} has joined {{TenantName}} to Active Directory domain '{{Domain}}', but with a warning: {{retcodeString}}
fluidFSEventComponentActiveDirectoryLeaveSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.319
The user {{userName}} has successfully removed {{TenantName}} from the Active Directory domain.
The System Service Mode was changed to {{systemServiceState}}. Description: The System Service Mode was changed to {{systemServiceState}}.
fluidFSEventComponentLocalUsersCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.320
The user {{userName}} has successfully added a new Local User '{{localAccountName}}' to the {{TenantName}} cluster repository.
fluidFSEventComponentLocalUsersDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.321
The user {{userName}} has successfully deleted Local User '{{localAccountName}}' from {{TenantName}} cluster repository.
fluidFSEventComponentLocalUsersModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.322
The user {{userName}} has successfully modified Local User '{{localAccountName}}' in {{TenantName}} cluster repository.
fluidFSEventComponentLocalGroupsCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.323
The user {{userName}} has successfully added a new Local Group '{{localAccountName}}' to the {{TenantName}} cluster repository.
fluidFSEventComponentLocalGroupsDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.324
The user {{userName}} has successfully deleted Local Group '{{localAccountName}}' from {{TenantName}} cluster repository.
fluidFSEventComponentLocalGroupsModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.325
The user {{userName}} has successfully modified Local Group '{{localAccountName}}' in {{TenantName}} cluster repository.
fluidFSEventComponentUserDatabaseModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.326
The user {{userName}} has successfully reconfigured the UNIX external repository of {{TenantName}} to use the '{{userDatabaseType}}' accounts database.
fluidFSEventComponentMappingPolicyModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.327
The user {{userName}} has successfully reconfigured the user mapping policy of {{TenantName}}.
fluidFSEventComponentManualMappingsCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.328
The user {{userName}} has successfully added a new manual mapping rule between Windows user '{{windowsUserName}}' and UNIX user '{{unixUserName}}' on {{TenantName}}.
fluidFSEventComponentManualMappingsDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.329
The user {{userName}} has successfully deleted the manual mapping rule between Windows user '{{windowsUserName}}' and UNIX user '{{unixUserName}}' on {{TenantName}}.
fluidFSEventComponentManualMappingsModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.330
The user {{userName}} has successfully modified the manual mapping rule between Windows user '{{windowsUserName}}' and UNIX user '{{unixUserName}}' on {{TenantName}}.
fluidFSEventComponentQuotaRuleCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.331
The user {{userName}} created a quota rule on volume {{FSIDName}}.
fluidFSEventComponentQuotaRuleModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.332
The user {{userName}} modified a quota rule on volume {{FSIDName}}.
fluidFSEventComponentQuotaRuleDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.333
The user {{userName}} deleted a quota rule on volume {{FSIDName}}.
fluidFSEventComponentCifsProtocolModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.334
The user {{userName}} updated the general local users settings of {{TenantName}}.
fluidFSEventComponentFileCloningSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.335
The user {{userName}} cloned ({{FSIDName}}){{SourceFilePath}} to ({{FSIDName}}){{DestinationDir}}/{{DestinationFileName}}.
fluidFSEventComponentFolderCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.336
The user {{userName}} created {{DestinationParentDir}}/{{DestinationDir}} directory on NAS volume {{FSIDName}}.
fluidFSEventComponentNfsProtocolModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.337
The user {{userName}} updated the general NFS settings of {{TenantName}}.
HDFS diagnostic failed. Description: The HDFS diagnostic run by user {{userName}} failed.[[nl]]{{retcodeString}}
fluidFSEventComponentReplicationModifyRetPolicy
.1.3.6.1.4.1.674.11000.2000.200.20.15.343
The user {{userName}} has modified the replication snapshot retention. Description: The user {{userName}} has modified the replication snapshot retention policy for replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} volumes.
fluidFSEventComponentHdfsDiagnosticsFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.344
Failed to generate diagnostics package for 'HdfsDiagnostics'. {{retcodeString}}
fluidFSEventComponentHdfsDiagnosticsOk
.1.3.6.1.4.1.674.11000.2000.200.20.15.345
Package generation for 'HdfsDiagnostics' completed successfully.
The user {{userName}} has modified the BW limiter for replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} NAS volumes.
fluidFSEventComponentGpoModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.359
GPO configuration of {{TenantName}} was modified successfully. Description: The user {{userName}} has successfully modified the GPO configuration of {{TenantName}}.
fluidFSEventComponentNasAppliancesCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.36
NAS Appliance {{applianceId}} ({{applianceServiceTag}}) was successfully created by user {{userName}}.
fluidFSEventComponentGpoTriggerSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.360
Group policy polling was triggered successfully on {{TenantName}} Description: The user {{userName}} has triggered group policy polling successfully on {{TenantName}}.
fluidFSEventComponentTenantPublicIPsModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.361
The user {{userName}} updated the configuration of {{TenantName}} client network Public IPs.
The user {{userName}} has modified the compression and deduplication mode for replication between {{FSIDName}} and {{remoteVolume}}@{{remoteSystemName}} NAS volumes.
fluidFSEventComponentExampleActionSyncSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.363
Example action with sync succeeded.
fluidFSEventComponentExampleActionSyncFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.364
Failed to perform example action with sync. Error: {{retcodeString}}.
fluidFSEventComponentAdminSubnetModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.365
Management subnet {{networkId}}/{{prefix}} was successfully modified by {{userName}}. State is {{state}}.
fluidFSEventComponentTriggerPhoneHomeSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.366
The user {{userName}} triggered phone home.
fluidFSEventComponentInterfaceRolesModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.367
Interface Roles for NAS Appliance {{applianceId}} were successfully modified by user {{userName}}.
fluidFSEventComponentSmbDiagnosticOk
.1.3.6.1.4.1.674.11000.2000.200.20.15.368
Package generation for 'SMB Diagnostics' completed successfully.
fluidFSEventComponentSmbDiagnosticFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.369
Failed to generate diagnostics package for 'SMB Diagnostics'. {{retcodeString}}
fluidFSEventComponentNasAppliancesCreateFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.37
NAS Appliance {{applianceId}} ({{applianceServiceTag}}) could not be created by user {{userName}}. {{retcodeString}}.
fluidFSEventComponentNfsDiagnosticOk
.1.3.6.1.4.1.674.11000.2000.200.20.15.370
Package generation for 'NFS Diagnostics' completed successfully.
fluidFSEventComponentNfsDiagnosticFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.371
Failed to generate diagnostics package for 'NFS Diagnostics'. {{retcodeString}}
fluidFSEventComponentFtpDiagnosticOk
.1.3.6.1.4.1.674.11000.2000.200.20.15.372
Package generation for 'FTP Diagnostics' completed successfully.
fluidFSEventComponentFtpDiagnosticFailed
.1.3.6.1.4.1.674.11000.2000.200.20.15.373
Failed to generate diagnostics package for 'FTP Diagnostics'. {{retcodeString}}
Restore of cluster configuration finished with errors. Description: The configuration of the cluster was restored by user {{userName}} with errors: {{retcodeString}}.
Restore of system configuration finished with errors. Description: The configuration of the system was restored by user {{userName}} with errors: {{retcodeString}}.
Restore of system configuration finished with errors. Description: The configuration of the system was restored from files by user {{userName}} with errors: {{retcodeString}}
fluidFSEventComponentExampleCreateWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.5
Example {{Name}} was created by {{userName}}, but with a warning: {{retcodeString}}.
fluidFSEventComponentExampleModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.6
Example {{Name}} was modified by {{userName}}. Result: {{retcodeString}}.
Client network interface was successfully modified to MTU {{clientNetworkInterfaceMtu}} and bonding mode {{clientNetworkInterfaceBondMode}} by {{userName}}.
fluidFSEventComponentEqlFabricCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.94
EQL fabric created by {{userName}}.
fluidFSEventComponentEqlFabricCreateWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.95
EQL fabric created by {{userName}}. {{retcodeString}}
fluidFSEventComponentEqlFabricModifySuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.96
EQL fabric modified by {{userName}}.
fluidFSEventComponentEqlFabricModifyWarning
.1.3.6.1.4.1.674.11000.2000.200.20.15.97
EQL fabric modified by {{userName}}. {{retcodeString}}
fluidFSEventComponentEqlFabricDeleteSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.98
EQL fabric deleted by {{userName}}.
fluidFSEventComponentEqlFabricRoutesCreateSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.15.99
EQL fabric route {{networkId}} created by {{userName}}.
Secure Console Access on NAS Controller {{node}} cannot be enabled Description: The Secure Console Access session on NAS Controller {{node}} was disconnected from the server side, or the system is disabled on the reservation web server
Secure Console Access failure on NAS Controller {{node}} Description: The Secure Console Access session on NAS Controller {{node}} was disconnected due to an SSH tunnel failure
Failed to establish the Secure Console Access session Description: Failed to establish the Secure Console Access session on NAS Controller {{node}} due to an error response from the reservation web server
Secure Console Access session on NAS Controller {{node}} has expired Description: The Secure Console Access session on NAS Controller {{node}} was disconnected due to TTL expiry
fluidFSEventSupportShellSupportShellLogin
.1.3.6.1.4.1.674.11000.2000.200.20.16.8
User logged in to Support Shell Description: User {{userName}} logged in to the support shell on NAS Controller {{node}}
fluidFSEventSupportShellSupportShellLogout
.1.3.6.1.4.1.674.11000.2000.200.20.16.9
User logged out from Support Shell Description: User {{userName}} logged out from the support shell on NAS Controller {{node}}
fluidFSEventAuditingSuccessfulAuditWrite
.1.3.6.1.4.1.674.11000.2000.200.20.17.23
Successful Write by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful write operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditRead
.1.3.6.1.4.1.674.11000.2000.200.20.17.24
Successful Read by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful read operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditDelete
.1.3.6.1.4.1.674.11000.2000.200.20.17.25
Successful Delete by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful delete operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditReadAttr
.1.3.6.1.4.1.674.11000.2000.200.20.17.26
Successful Read Attributes by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful read attributes operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditWriteAttr
.1.3.6.1.4.1.674.11000.2000.200.20.17.27
Successful Write Attributes by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful write attributes operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditWriteOwner
.1.3.6.1.4.1.674.11000.2000.200.20.17.28
Successful Change Owner by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful change owner operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditWriteDACL
.1.3.6.1.4.1.674.11000.2000.200.20.17.29
Successful Change permissions by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful write to DACL operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditAccessSACL
.1.3.6.1.4.1.674.11000.2000.200.20.17.30
Successful System Security Access by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful access to SACL operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditReadCtl
.1.3.6.1.4.1.674.11000.2000.200.20.17.31
Successful Read Security Descriptor by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a successful read security descriptor operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditWrite
.1.3.6.1.4.1.674.11000.2000.200.20.17.32
Failed Write by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed write operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditRead
.1.3.6.1.4.1.674.11000.2000.200.20.17.33
Failed Read by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed read operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditDelete
.1.3.6.1.4.1.674.11000.2000.200.20.17.34
Failed Delete by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed delete operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditReadAttr
.1.3.6.1.4.1.674.11000.2000.200.20.17.35
Failed Read Attributes by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed read attributes operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditWriteAttr
.1.3.6.1.4.1.674.11000.2000.200.20.17.36
Failed Write Attributes by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed write attributes operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditWriteOwner
.1.3.6.1.4.1.674.11000.2000.200.20.17.37
Failed Change Owner by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed change owner operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditWriteDACL
.1.3.6.1.4.1.674.11000.2000.200.20.17.38
Failed Change permissions by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed write to DACL operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditAccessSACL
.1.3.6.1.4.1.674.11000.2000.200.20.17.39
Failed Access to SACL by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed access to SACL operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditReadCtl
.1.3.6.1.4.1.674.11000.2000.200.20.17.40
Failed Read Security Descriptor by {{uName}} on {{nName}}/{{dPath}} Description: The User {{uName}} performed a failed read security descriptor operation on file {{nName}}/{{dPath}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingFailedAuditDeleteName
.1.3.6.1.4.1.674.11000.2000.200.20.17.41
Failed Delete by {{uName}} on {{nName}}/{{name}} Description: The User {{uName}} performed a failed delete operation on file {{nName}}/{{name}}. The Desired access mask: {{desiredCifsAccessMaskName}}
fluidFSEventAuditingSuccessfulAuditDeleteName
.1.3.6.1.4.1.674.11000.2000.200.20.17.42
Successful Delete by {{uName}} on {{nName}}/{{name}} Description: The User {{uName}} performed a successful delete operation on file {{nName}}/{{name}}. The Desired access mask: {{desiredCifsAccessMaskName}}
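The auditing events above all follow one pattern: a trap OID under .1.3.6.1.4.1.674.11000.2000.200.20.17 plus a message template whose {{uName}}, {{nName}}, {{dPath}} (or {{name}}) and {{desiredCifsAccessMaskName}} placeholders are filled in when the event is raised. As a minimal receiver-side sketch of that substitution step, the Python below expands one of these templates from a dictionary of values; the expand helper and the sample values are illustrative only, not part of FluidFS:

    import re

    # One of the audit templates from this reference
    # (fluidFSEventAuditingSuccessfulAuditWrite).
    TEMPLATE = ("Successful Write by {{uName}} on {{nName}}/{{dPath}} Description: "
                "The User {{uName}} performed a successful write operation on file "
                "{{nName}}/{{dPath}}. The Desired access mask: "
                "{{desiredCifsAccessMaskName}}")

    def expand(template, values):
        # Substitute each {{name}} token; unknown tokens are left as-is so a
        # missing value stays visible in the rendered message.
        return re.sub(r"\{\{(\w+)\}\}",
                      lambda m: str(values.get(m.group(1), m.group(0))),
                      template)

    # Hypothetical values, as a decoder might pull them from the trap varbinds.
    print(expand(TEMPLATE, {"uName": "DOMAIN\\alice", "nName": "vol1",
                            "dPath": "projects/report.docx",
                            "desiredCifsAccessMaskName": "FILE_WRITE_DATA"}))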
fluidFSEventUpgradeFailedToReadFileInformation
.1.3.6.1.4.1.674.11000.2000.200.20.18.1
Failed to read information of file {{servicePackName}} while checking for a new service pack. The file was removed
fluidFSEventUpgradeFailedToGetFileSize
.1.3.6.1.4.1.674.11000.2000.200.20.18.2
Failed to get size of file {{servicePackName}} while checking for a new service pack. The file was removed
fluidFSEventUpgradeInvalidFileFormatOrChecksum
.1.3.6.1.4.1.674.11000.2000.200.20.18.3
File {{servicePackName}} is not a valid upgrade package. The file was removed
The service pack {{servicePackName}} was removed because its version is older than the current one
fluidFSEventUpgradeServicePackHasOldVersion
.1.3.6.1.4.1.674.11000.2000.200.20.18.5
The service pack {{servicePackName}} was removed because its version is older than the versions available on the current node
fluidFSEventUpgradeServicePackAlreadyExists
.1.3.6.1.4.1.674.11000.2000.200.20.18.6
The service pack {{servicePackName}} was removed because a file with the same version already exists
fluidFSEventUpgradeMonitoringDataCorruption
.1.3.6.1.4.1.674.11000.2000.200.20.18.7
Failed to recover the monitored data after the upgrade because of corruption
fluidFSEventUpgradeServicePackDownloaded
.1.3.6.1.4.1.674.11000.2000.200.20.18.8
New service pack {{servicePackName}} downloaded
fluidFSEventHardwareNoPacketLossInSubnetHost
.1.3.6.1.4.1.674.11000.2000.200.20.2.1
NAS Controller {{nodeId}} has no packet loss to host {{hostName}} on {{subnetName}} subnet.
fluidFSEventHardwareBpsAccessibilityNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.109
NAS Controller{{nodeId}} BPS reachability is back to normal (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.110
BPS on NAS Controller{{nodeId}} is back to normal operation. Charge is {{bpsBatteryCharge}}% (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareNoPacketLossJumboInSubnet
.1.3.6.1.4.1.674.11000.2000.200.20.2.12
NAS Controller {{nodeId}} has no jumbo packet loss. Description: NAS Controller {{nodeId}} has no packet loss to IP address {{subnetName}} on host {{hostName}} (using jumbo frames). Subnet identifier {{network_id}}, prefix {{prefix}}, vlan tag id {{vlanTag}}.
fluidFSEventHardwareProcessorCoresCountFailed
.1.3.6.1.4.1.674.11000.2000.200.20.2.120
Failed to validate that the CPU core count is equal in NAS Controller {{firstNodeId}} and NAS Controller {{secondNodeId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorCoresCountOk
.1.3.6.1.4.1.674.11000.2000.200.20.2.121
CPU core count test successful. The count is equal in NAS Controller {{firstNodeId}} and NAS Controller {{secondNodeId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryNotOptimal
.1.3.6.1.4.1.674.11000.2000.200.20.2.124
Memory device status in NAS Controller{{nodeId}} is not optimal (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryError
.1.3.6.1.4.1.674.11000.2000.200.20.2.125
Memory device status in NAS Controller{{nodeId}} is critical (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.126
Memory device in NAS Controller{{nodeId}} returned to normal operation (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryTempWarn
.1.3.6.1.4.1.674.11000.2000.200.20.2.127
Memory device temperature sensors in NAS Controller{{nodeId}} report warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.128
Memory device temperature sensors in NAS Controller{{nodeId}} report normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryTempCritical
.1.3.6.1.4.1.674.11000.2000.200.20.2.129
Memory device temperature sensors in NAS Controller{{nodeId}} report critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareCantReachSubnet
.1.3.6.1.4.1.674.11000.2000.200.20.2.13
NAS Controller {{nodeId}} cannot reach IP {{subnetName}} Description: NAS Controller {{nodeId}} cannot reach IP address {{subnetName}} on host {{hostName}}. Subnet identifier {{network_id}}, prefix {{prefix}}, vlan tag id {{vlanTag}}.
fluidFSEventHardwareMemorySizeCountFailed
.1.3.6.1.4.1.674.11000.2000.200.20.2.130
Failed to validate that the memory size is equal in NAS Controller {{firstNodeId}} and NAS Controller {{secondNodeId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemorySizeCountOk
.1.3.6.1.4.1.674.11000.2000.200.20.2.131
Memory size test successful. The size is equal in NAS Controller {{firstNodeId}} and NAS Controller {{secondNodeId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.132
Memory device status in NAS Controller{{nodeId}} is not available. Internal error in memory monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.133
Memory device status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareControllerStatusNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.135
NAS Controller{{nodeId}} hardware status is not available. Internal error in controller status analysis (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareControllerStatusAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.136
NAS Controller{{nodeId}} hardware status is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalDisksInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.143
Local storage status in NAS Controller{{nodeId}} is not available. S.M.A.R.T. info is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalDisksInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.144
Local storage status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareApplianceIntrusionWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.153
Chassis cover for NAS Appliance {{applianceId}} is open (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareApplianceIntrusionOk
.1.3.6.1.4.1.674.11000.2000.200.20.2.154
Chassis cover for NAS Appliance {{applianceId}} was closed (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanSystemInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.168
Overall fan set status in NAS Appliance {{applianceId}} is not available. Internal error in fan monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanSystemInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.169
Overall fan set status in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanError
.1.3.6.1.4.1.674.11000.2000.200.20.2.170
Fan {{componentName}} in NAS Appliance {{applianceId}} has critical status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.171
Fan {{componentName}} in NAS Appliance {{applianceId}} has warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.172
Fan {{componentName}} in NAS Appliance {{applianceId}} returned to normal operation (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.173
Fan {{componentName}} status in NAS Appliance {{applianceId}} is not available. Internal error in fan monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppFanInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.174
Fan {{componentName}} status in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePowerSupplyInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.178
Power supply status in NAS Controller{{nodeId}} is not available. Internal error in power supply monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePowerSupplyInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.179
Power supply status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareCantReachJumboSubnet
.1.3.6.1.4.1.674.11000.2000.200.20.2.18
NAS Controller {{nodeId}} cannot reach IP (using jumbo frames) Description: NAS Controller {{nodeId}} cannot reach IP address {{subnetName}} on host {{hostName}} (using jumbo frames). Subnet identifier {{network_id}}, prefix {{prefix}}, vlan tag id {{vlanTag}}.
fluidFSEventHardwarePsuError
.1.3.6.1.4.1.674.11000.2000.200.20.2.180
Power supply unit {{componentName}} in NAS Controller{{nodeId}} has failure status, power supply redundancy lost (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.181
Power supply unit {{componentName}} in NAS Controller{{nodeId}} has warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.182
Power supply unit {{componentName}} in NAS Controller{{nodeId}} returned to normal operation (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.183
Power supply unit {{componentName}} status in NAS Controller{{nodeId}} is not available. Internal error in power supply monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.184
Power supply unit {{componentName}} status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
Power supply status in NAS Appliance {{applianceId}} is not available. Internal error in power supply monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPowerSupplyInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.189
Power supply status in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuError
.1.3.6.1.4.1.674.11000.2000.200.20.2.190
Power supply unit {{componentName}} in NAS Appliance {{applianceId}} has failure status, power supply redundancy lost (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.191
Power supply unit {{componentName}} in NAS Appliance {{applianceId}} has warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.192
Power supply unit {{componentName}} in NAS Appliance {{applianceId}} returned to normal operation (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.193
Power supply unit {{componentName}} status in NAS Appliance {{applianceId}} is not available. Internal error in power supply monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.194
Power supply unit {{componentName}} status in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuAmperageError
.1.3.6.1.4.1.674.11000.2000.200.20.2.195
Current sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports critical status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuAmperageWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.196
Current sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuAmperageNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.197
Current sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports normal status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.201
Temperature sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuTempWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.202
Temperature sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.203
Temperature sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuVoltageError
.1.3.6.1.4.1.674.11000.2000.200.20.2.207
Voltage sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports critical status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuVoltageWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.208
Voltage sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuVoltageNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.209
Voltage sensor in power supply unit {{componentName}} in NAS Appliance {{applianceId}} reports normal status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardAmperageError
.1.3.6.1.4.1.674.11000.2000.200.20.2.213
NAS Controller{{nodeId}} system board current sensors report critical status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardAmperageWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.214
NAS Controller{{nodeId}} system board current sensors report warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardAmperageNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.215
NAS Controller{{nodeId}} system board current sensors report normal status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.216
NAS Controller{{nodeId}} system board temperature sensors report critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardTempWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.217
NAS Controller{{nodeId}} system board temperature sensors report warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.218
NAS Controller{{nodeId}} system board temperature sensors report normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardVoltageError
.1.3.6.1.4.1.674.11000.2000.200.20.2.219
NAS Controller{{nodeId}} system board voltage sensors report critical status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardVoltageWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.220
NAS Controller{{nodeId}} system board voltage sensors report warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardVoltageNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.221
NAS Controller{{nodeId}} system board voltage sensors report normal status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.222
Controller system board status in NAS Controller{{nodeId}} is not available. Internal error in system board monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.223
Controller system board status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareNetworksInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.224
Network status in NAS Controller{{nodeId}} is not available. Internal error in network monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareNetworksInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.225
Network status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareNetworkInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.229
{{networkName}} network status in NAS Controller{{nodeId}} is not available. Internal error in network monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareNetworkInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.230
{{networkName}} network status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareApplianceStatusNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.240
NAS Appliance {{applianceId}} hardware status is not available. Internal error in appliance status analysis (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareApplianceStatusAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.241
NAS Appliance {{applianceId}} hardware status is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalIPMIBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.245
NAS Controller{{nodeId}} IPMI is back to normal (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePeerIPMIBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.246
NAS Controller{{nodeId}} peer IPMI is reachable again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalIPMIStatusNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.247
NAS Controller{{nodeId}} local IPMI status is not available. Internal error in IPMI status monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalIPMIStatusAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.248
NAS Controller{{nodeId}} local IPMI status is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePeerIPMIStatusNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.249
NAS Controller{{nodeId}} peer IPMI status is not available. Internal error in IPMI status monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePeerIPMIStatusAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.250
NAS Controller{{nodeId}} peer IPMI status is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuNotDetected
.1.3.6.1.4.1.674.11000.2000.200.20.2.251
Power supply unit {{componentName}} in NAS Controller{{nodeId}} is not detected, power supply redundancy lost. Please make sure the unit is attached properly (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuDetected
.1.3.6.1.4.1.674.11000.2000.200.20.2.252
Power supply unit {{componentName}} in NAS Controller{{nodeId}} is detected (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuNotDetected
.1.3.6.1.4.1.674.11000.2000.200.20.2.253
Power supply unit {{componentName}} in NAS Appliance {{applianceId}} is not detected, power supply redundancy lost. Please make sure the unit is attached properly (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuDetected
.1.3.6.1.4.1.674.11000.2000.200.20.2.254
Power supply unit {{componentName}} in NAS Appliance {{applianceId}} is detected (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsNotAccessible
.1.3.6.1.4.1.674.11000.2000.200.20.2.257
NAS Controller{{nodeId}} failed to reach its BPS (service tag: {{contextBasedServiceTag}}).
Eth{{ethName}}, which belongs to NAS Controller{{nodeId}}, is down; therefore, replication-related connectivity may be affected.
fluidFSEventHardwarePsuNoInput
.1.3.6.1.4.1.674.11000.2000.200.20.2.259
Power supply unit {{componentName}} in NAS Controller{{nodeId}} has no input. Please make sure the unit cable is attached properly (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePsuInputDetected
.1.3.6.1.4.1.674.11000.2000.200.20.2.260
Input detected for power supply unit {{componentName}} in NAS Controller{{nodeId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuNoInput
.1.3.6.1.4.1.674.11000.2000.200.20.2.261
Power supply unit {{componentName}} in NAS Appliance {{applianceId}} has no input. Please make sure the unit cable is attached properly (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuInputDetected
.1.3.6.1.4.1.674.11000.2000.200.20.2.262
Input detected for power supply unit {{componentName}} in NAS Appliance {{applianceId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePeerIPMIReachability
.1.3.6.1.4.1.674.11000.2000.200.20.2.264
NAS Controller{{nodeId}} cannot reach peer IPMI (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalIPMIReachability
.1.3.6.1.4.1.674.11000.2000.200.20.2.265
NAS Controller{{nodeId}} IPMI is not working (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareChassisIntrusionAlert
.1.3.6.1.4.1.674.11000.2000.200.20.2.267
Chassis cover for NAS Controller{{nodeId}} is open (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareChassisIntrusionBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.268
Chassis cover for NAS Controller{{nodeId}} was closed (service tag: {{contextBasedServiceTag}}).
Battery Calibration Cycle status in NAS Controller{{nodeId}} is not available. Internal error in Battery monitoring (service tag: {{contextBasedServiceTag}}).
Battery Calibration Cycle status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsCalibrationCycleInProgress
.1.3.6.1.4.1.674.11000.2000.200.20.2.271
Battery Calibration Cycle is in progress. Description: A Battery Calibration Cycle in NAS Controller{{nodeId}} is in progress. The Battery Pack will go through charge and discharge cycles, therefore battery error events during this process are normal. This process may take up to 7 days from first notification to complete. (service tag: {{contextBasedServiceTag}}).
Forced BPS Calibration pending, performance impact expected. Description: A Calibration Cycle will be forced 30 days after the first notification. The normal battery calibration cycle could not be performed in NAS Controller{{nodeId}} because the peer controller battery is not in optimal condition. If the peer battery is not repaired before the Calibration Cycle is forced, then during portions of the Calibration Cycle the file system will move to journaling mode, which may cause a performance impact (service tag: {{contextBasedServiceTag}}).
The Forced Battery Calibration Cycle in NAS Controller{{nodeId}} is no longer required (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsCalibrationCycleIdle
.1.3.6.1.4.1.674.11000.2000.200.20.2.274
The Battery Calibration Cycle in NAS Controller{{nodeId}} is idle (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsCalibrationLowBattery
.1.3.6.1.4.1.674.11000.2000.200.20.2.275
BPS on NAS Controller{{nodeId}} is not operating properly. This is expected as the battery is under calibration cycle. Charge is {{bpsBatteryCharge}}% (service tag: {{contextBasedServiceTag}}).
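The battery calibration events above mention two fixed time horizons: a running Battery Calibration Cycle may take up to 7 days from the first notification, and a forced cycle begins 30 days after the first notification if the peer battery is not repaired. A trivial sketch of that date arithmetic in Python; the timestamp is a made-up example:

    from datetime import datetime, timedelta

    # Made-up first-notification time; the 7- and 30-day figures come from
    # the calibration event descriptions above.
    first_notification = datetime(2024, 3, 1, 9, 0)

    calibration_deadline = first_notification + timedelta(days=7)   # cycle may run this long
    forced_cycle_start = first_notification + timedelta(days=30)    # forced calibration begins

    print("Calibration may complete by:", calibration_deadline)
    print("Forced calibration no earlier than:", forced_cycle_start)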
fluidFSEventHardwareStandardFrameDisconnected
.1.3.6.1.4.1.674.11000.2000.200.20.2.276
{{ethName}} on NAS Controller{{firstNodeId}} cannot access {{targetEthName}} on NAS Controller{{secondNodeId}} (service tag: {{contextBasedServiceTag}}). Check that these interfaces are connected to the same switch, and are properly tagged.
fluidFSEventHardwareStandardFrameConnected
.1.3.6.1.4.1.674.11000.2000.200.20.2.277
{{ethName}} on NAS Controller{{firstNodeId}} can access {{targetEthName}} on NAS Controller{{secondNodeId}} (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLargeFrameLDisconnected
.1.3.6.1.4.1.674.11000.2000.200.20.2.278
{{ethName}} on NAS Controller{{firstNodeId}} cannot access {{targetEthName}} on NAS Controller{{secondNodeId}} using large frames (service tag: {{contextBasedServiceTag}}). Check that jumbo frames are enabled for these interfaces on the switch.
fluidFSEventHardwareLargeFrameConnected
.1.3.6.1.4.1.674.11000.2000.200.20.2.279
{{ethName}} on NAS Controller {{firstNodeId}} can access {{targetEthName}} on NAS Controller {{secondNodeId}} using large frames (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsLowBattery
.1.3.6.1.4.1.674.11000.2000.200.20.2.282
BPS charge on NAS Controller{{nodeId}} is too low ({{bpsBatteryCharge}}%). BPS is not operating properly (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareExtServerJumboFramePacketLoss
.1.3.6.1.4.1.674.11000.2000.200.20.2.283
NAS Controller{{nodeId}} had {{packetLoss}}% packet loss to {{extSrvType}} {{ip}}. Please make sure the {{extSrvType}} allows jumbo frames.
fluidFSEventHardwareCantReachExtServeJumboFrame
.1.3.6.1.4.1.674.11000.2000.200.20.2.284
NAS Controller{{nodeId}} cannot reach {{extSrvType}} {{ip}}. Please make sure the {{extSrvType}} allows jumbo frames.
fluidFSEventHardwareCantReachStaticRoute
.1.3.6.1.4.1.674.11000.2000.200.20.2.285
NAS Controller{{nodeId}} cannot reach the gateway {{ip}} to network {{destinationNetwork}}.
fluidFSEventHardwareCanReachStaticRoute
.1.3.6.1.4.1.674.11000.2000.200.20.2.286
NAS Controller{{nodeId}} can reach the gateway {{ip}} to network {{destinationNetwork}}.
FluidFS cannot access any formatted LUN. Please make sure the backend storage is on and appropriately connected to FluidFS.
fluidFSEventHardwareClusterSeesAllFormattedLuns
.1.3.6.1.4.1.674.11000.2000.200.20.2.288
FluidFS connection to all the formatted LUNs is fully restored.
fluidFSEventHardwareNodeCantSeeFormattedLunsMajor
.1.3.6.1.4.1.674.11000.2000.200.20.2.289
NAS Controller{{nodeId}} cannot reach all the formatted LUNs. The system may not be serving. Please make sure that this Controller is properly connected to the storage.
NAS Controller{{nodeId}} cannot reach all the formatted LUNs. Performance may be degraded. Please make sure that this Controller is properly connected to the storage.
fluidFSEventHardwareNodeSeesFormattedLuns
.1.3.6.1.4.1.674.11000.2000.200.20.2.291
NAS Controller{{nodeId}} restored connection to all the formatted LUNs.
fluidFSEventHardwareClusterCantSeeFormattedLun
.1.3.6.1.4.1.674.11000.2000.200.20.2.292
LUN {{lunId}} is not reachable from FluidFS. Please make sure the LUN exists and is properly mapped from the backend storage.
fluidFSEventHardwareClusterSeesFormattedLun
.1.3.6.1.4.1.674.11000.2000.200.20.2.293
LUN {{lunId}} is now fully reachable from FluidFS.
fluidFSEventHardwareNodeCantSeeLunMajor
.1.3.6.1.4.1.674.11000.2000.200.20.2.294
FluidFS Controller{{nodeId}} cannot reach formatted LUN {{lunId}}. The system may not be serving. Please check that this LUN has appropriate permissions to be seen by this FluidFS Controller.
fluidFSEventHardwareNodeCantSeeLunWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.295
FluidFS Controller{{nodeId}} cannot reach formatted LUN {{lunId}}. Performance may be degraded. Please check that this LUN has appropriate permissions to be seen by this FluidFS Controller.
fluidFSEventHardwareMeshDisconnected
.1.3.6.1.4.1.674.11000.2000.200.20.2.296
node{{fromNode}}.{{fromEth}} cannot reach node{{toNode}}.{{toEth}} in vlan {{vlanTag}} Description: {{fromEth}} on NAS Controller{{fromNode}} cannot access {{toEth}} on NAS Controller{{toNode}} with vlan tag {{vlanTag}}. Action Items: Check that these interfaces are connected to the same switch, and are properly tagged.
fluidFSEventHardwareMeshConnected
.1.3.6.1.4.1.674.11000.2000.200.20.2.297
node{{fromNode}}.{{fromEth}} can reach node{{toNode}}.{{toEth}} in vlan {{vlanTag}} Description: {{fromEth}} on NAS Controller{{fromNode}} can access {{toEth}} on NAS Controller{{toNode}} with vlan tag {{vlanTag}}.
Fibre Channel port speeds in NAS Controller{{nodeId}} are not identical.
fluidFSEventHardwareBpsAccessibilityNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.302
BPS reachability in NAS Controller{{nodeId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsAccessibilityIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.303
BPS reachability in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.304
BPS status info in NAS Controller{{nodeId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.305
BPS status info in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorNotOptimal
.1.3.6.1.4.1.674.11000.2000.200.20.2.306
Processor {{cpuName}} status in NAS Controller{{nodeId}} is not optimal (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorError
.1.3.6.1.4.1.674.11000.2000.200.20.2.307
Processor {{cpuName}} status in NAS Controller{{nodeId}} is critical (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.308
Processor {{cpuName}} in NAS Controller{{nodeId}} returned to normal operation (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorTempNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.309
Processor {{cpuName}} temperature sensor info in NAS Controller{{nodeId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorTempIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.310
Processor {{cpuName}} temperature sensor info in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorTempWarn
.1.3.6.1.4.1.674.11000.2000.200.20.2.311
Processor {{cpuName}} temperature sensor in NAS Controller{{nodeId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.312
Processor {{cpuName}} temperature sensors in NAS Controller{{nodeId}} report normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorTempCritical
.1.3.6.1.4.1.674.11000.2000.200.20.2.313
Processor {{cpuName}} temperature sensors in NAS Controller{{nodeId}} report critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorVoltageNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.314
Processor {{cpuName}} voltage sensor info in NAS Controller{{nodeId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorVoltageIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.315
Processor {{cpuName}} voltage sensor info in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorVoltageWarn
.1.3.6.1.4.1.674.11000.2000.200.20.2.316
Processor {{cpuName}} voltage sensors in NAS Controller{{nodeId}} report warning status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorVoltageNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.317
Processor {{cpuName}} voltage sensors in NAS Controller{{nodeId}} report normal status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorVoltageCritical
.1.3.6.1.4.1.674.11000.2000.200.20.2.318
Processor {{cpuName}} voltage sensors in NAS Controller{{nodeId}} report critical status (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.319
Processor {{cpuName}} status in NAS Controller{{nodeId}} is not available. Internal error in processor monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.320
Processor {{cpuName}} status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorDisabled
.1.3.6.1.4.1.674.11000.2000.200.20.2.321
Processor {{cpuName}} in NAS Controller{{nodeId}} is disabled (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareProcessorEnabled
.1.3.6.1.4.1.674.11000.2000.200.20.2.322
Processor {{cpuName}} in NAS Controller{{nodeId}} is enabled again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryTempNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.323
Memory device temperature sensors info in NAS Controller{{nodeId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMemoryTempIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.324
Memory device temperature sensors info in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsError
.1.3.6.1.4.1.674.11000.2000.200.20.2.325
BPS on Controller{{nodeId}} (service tag: {{contextBasedServiceTag}}) is not operating properly - battery reports failed state (charge: {{bpsBatteryCharge}}%). Description: BPS on NAS Controller{{nodeId}} (service tag: {{contextBasedServiceTag}}) is not operating properly - battery reports failed state (charge: {{bpsBatteryCharge}}%). This could be the result of a recent calibration process - if you receive this event again after a few days for this specific controller, please contact Support.
fluidFSEventHardwareLocalDiskWearingOut
.1.3.6.1.4.1.674.11000.2000.200.20.2.326
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}} is wearing out. Consider disk replacement. Overall reads/writes: {{diskOverallReads}} / {{diskOverallWrites}} MB. (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskLotOfBadSectors
.1.3.6.1.4.1.674.11000.2000.200.20.2.327
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}} has a large number of bad sectors. Consider disk replacement. Bad sectors count: {{badSectorsCount}} (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskBadSectorsCount
.1.3.6.1.4.1.674.11000.2000.200.20.2.328
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}} bad sectors count: {{badSectorsCount}} (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskLotOfFailedWrites
.1.3.6.1.4.1.674.11000.2000.200.20.2.329
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}} has a large number of failed writes. Consider disk replacement. Failed writes count: {{failedWritesCount}} (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskFailedWritesCount
.1.3.6.1.4.1.674.11000.2000.200.20.2.330
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}} failed writes count: {{failedWritesCount}} (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskBad
.1.3.6.1.4.1.674.11000.2000.200.20.2.331
The S.M.A.R.T. self-test failed for local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}}. The disk may need replacement. (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskBackToNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.332
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) in NAS Controller{{nodeId}} returned to normal operation (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalDiskModelNotSupported
.1.3.6.1.4.1.674.11000.2000.200.20.2.333
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) is not supported in NAS Controller{{nodeId}}. Disk monitoring may not work properly. Supported models are: {{supportedModels}}. (service tag: {{contextBasedServiceTag}})
fluidFSEventHardwareLocalDiskInfoNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.334
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) status in NAS Controller{{nodeId}} is not available. Internal error in local storage monitoring (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareLocalDiskInfoIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.335
Local storage disk drive {{diskName}} (Model: {{diskModel}}; Serial: {{diskSerial}}) status in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
Intrusion info for NAS Appliance {{applianceId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareApplianceIntrusionIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.337
Intrusion info for NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuAmperageNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.338
Current sensor info in power supply unit {{componentName}} in NAS Appliance {{applianceId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuAmperageIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.339
Current sensor info in power supply unit {{componentName}} in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuTempNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.340
Temperature sensor info in power supply unit {{componentName}} in NAS Appliance {{applianceId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuTempIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.341
Temperature sensor info in power supply unit {{componentName}} in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuVoltageNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.342
Voltage sensor info in power supply unit {{componentName}} in NAS Appliance {{applianceId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppPsuVoltageIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.343
Voltage sensor info in power supply unit {{componentName}} in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
Temperature sensor info in top controller in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppTopControllerTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.346
Temperature sensor in top controller in NAS Appliance {{applianceId}} reports normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppTopControllerTempWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.347
Temperature sensor in top controller in NAS Appliance {{applianceId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppTopControllerTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.348
Temperature sensor in top controller in NAS Appliance {{applianceId}} reports critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
Temperature sensor info in bottom controller in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppBottomControllerTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.351
Temperature sensor in bottom controller in NAS Appliance {{applianceId}} reports normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
Temperature sensor in bottom controller in NAS Appliance {{applianceId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppBottomControllerTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.353
Temperature sensor in bottom controller in NAS Appliance {{applianceId}} reports critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
Temperature sensor info in left riser card in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppLeftRaiserCardTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.356
Temperature sensor in left riser card in NAS Appliance {{applianceId}} reports normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppLeftRaiserCardTempWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.357
Temperature sensor in left riser card in NAS Appliance {{applianceId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppLeftRaiserCardTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.358
Temperature sensor in left riser card in NAS Appliance {{applianceId}} reports critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
Temperature sensor info in right riser card in NAS Appliance {{applianceId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppRightRaiserCardTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.361
Temperature sensor in right riser card in NAS Appliance {{applianceId}} reports normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppRightRaiserCardTempWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.362
Temperature sensor in right riser card in NAS Appliance {{applianceId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareAppRightRaiserCardTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.363
Temperature sensor in right riser card in NAS Appliance {{applianceId}} reports critical status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
NAS Controller{{nodeId}} system board voltage sensors info is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareMotherBoardVoltageIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.369
NAS Controller{{nodeId}} system board voltage sensors info is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareNetworkInterfaceDown
.1.3.6.1.4.1.674.11000.2000.200.20.2.370
{{networkName}} network interface {{ethName}} in NAS Controller{{nodeId}} is down.
fluidFSEventHardwareNetworkInterfaceBackUp
.1.3.6.1.4.1.674.11000.2000.200.20.2.371
{{networkName}} network interface {{ethName}} in NAS Controller{{nodeId}} is back up.
fluidFSEventHardwareNetworkInterfaceUnknown
.1.3.6.1.4.1.674.11000.2000.200.20.2.372
{{networkName}} network interface {{ethName}} in NAS Controller{{nodeId}} has unknown status.
fluidFSEventHardwareFibreChannelPortDown
.1.3.6.1.4.1.674.11000.2000.200.20.2.373
Fibre Channel port {{ethName}} in NAS Controller{{nodeId}} is down.
fluidFSEventHardwareFibreChannelPortUp
.1.3.6.1.4.1.674.11000.2000.200.20.2.374
Fibre Channel port {{ethName}} in NAS Controller{{nodeId}} is back up.
fluidFSEventHardwareFibreChannelPortUnknown
.1.3.6.1.4.1.674.11000.2000.200.20.2.375
Fibre Channel port {{ethName}} in NAS Controller{{nodeId}} has unknown status.
fluidFSEventHardwareSameFcHbaSpeedOnController
.1.3.6.1.4.1.674.11000.2000.200.20.2.376
Fibre Channel port speeds in NAS Controller{{nodeId}} are identical again.
fluidFSEventHardwareSameEthSpeedOnNetworkNode
.1.3.6.1.4.1.674.11000.2000.200.20.2.377
Eth speeds on {{networkName}} network in NAS Controller{{nodeId}} are identical again.
fluidFSEventHardwareAllIfsAreDown
.1.3.6.1.4.1.674.11000.2000.200.20.2.378
All {{subnetName}} network interfaces on NAS Controller{{nodeId}} are down.
fluidFSEventHardwareEthSpeedOnSubnetNodeNotEqual
.1.3.6.1.4.1.674.11000.2000.200.20.2.379
Eth speeds on {{subnetName}} network in NAS Controller{{nodeId}} are not identical.
fluidFSEventHardwareCantReachDefaultGateway
.1.3.6.1.4.1.674.11000.2000.200.20.2.38
NAS Controller{{nodeId}} cannot reach default gateway {{ip}}.
fluidFSEventHardwareBpsTempNotAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.380
BPS temperature sensor info in NAS Controller{{nodeId}} is not available (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsTempIsAvailable
.1.3.6.1.4.1.674.11000.2000.200.20.2.381
BPS temperature sensor info in NAS Controller{{nodeId}} is available again (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsTempNormal
.1.3.6.1.4.1.674.11000.2000.200.20.2.382
Temperature sensor in BPS in NAS Controller{{nodeId}} reports normal status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsTempWarning
.1.3.6.1.4.1.674.11000.2000.200.20.2.383
Temperature sensor in BPS in NAS Controller{{nodeId}} reports warning status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwareBpsTempError
.1.3.6.1.4.1.674.11000.2000.200.20.2.384
Temperature sensor in BPS in NAS Controller{{nodeId}} reports error status (Ambient temperature: {{temperature}} degrees Celsius) (service tag: {{contextBasedServiceTag}}).
fluidFSEventHardwarePSUfirmwareUpgradeFail
.1.3.6.1.4.1.674.11000.2000.200.20.2.385
Firmware upgarade for {{id}} failed
fluidFSEventHardwarePSUfirmwareUpgradeSuccess
.1.3.6.1.4.1.674.11000.2000.200.20.2.386
Firmware upgrade for {{id}} finished successfuly
fluidFSEventHardwareUnsupportedNic
.1.3.6.1.4.1.674.11000.2000.200.20.2.387
Unsupported card installed on Controller{{nodeId}}. Description: An unsupported PCI card was detected in slot {{pciSlot}} on NAS Controller{{nodeId}}: {{pciCard}}.
fluidFSEventHardwareCanReachDefaultGateway
.1.3.6.1.4.1.674.11000.2000.200.20.2.39
NAS Controller{{nodeId}} can reach default gateway {{ip}}.
fluidFSEventHardwarePacketLossInSubnet
.1.3.6.1.4.1.674.11000.2000.200.20.2.4
NAS Controller {{nodeId}} had packet loss to {{subnetName}}. Description: NAS Controller {{nodeId}} had {{packetLoss}}% packet loss to IP address {{subnetName}} on host {{hostName}}. Subnet identifier {{network_id}}, prefix {{prefix}}, vlan tag id {{vlanTag}}.
fluidFSEventHardwarePacketLossInSubnetHost
.1.3.6.1.4.1.674.11000.2000.200.20.2.56
NAS Controller {{nodeId}} had {{packetLoss}}% packet loss to host {{hostName}} on {{subnetName}} subnet.
fluidFSEventHardwareNoLinkInEthOnController
.1.3.6.1.4.1.674.11000.2000.200.20.2.57
Eth{{ethName}} which belongs to NAS Controller {{nodeId}} is down.
fluidFSEventHardwareEthLinkRestoredOnController
.1.3.6.1.4.1.674.11000.2000.200.20.2.58
Eth{{ethName}} which belongs to NAS Controller {{nodeId}} is restored.
fluidFSEventHardwarePacketLossJumboInSubnetHost
.1.3.6.1.4.1.674.11000.2000.200.20.2.59
NAS Controller {{nodeId}} had {{packetLoss}}% packet loss (using jumbo frames) to host {{hostName}} on {{subnetName}} subnet.
fluidFSEventHardwareNoPacketLossInSubnet
.1.3.6.1.4.1.674.11000.2000.200.20.2.6
NAS Controller {{nodeId}} has no packet loss on {{subnetName}}. Description: NAS Controller {{nodeId}} has no packet loss to IP address {{subnetName}} on host {{hostName}}. Subnet identifier {{network_id}}, prefix {{prefix}}, vlan tag id {{vlanTag}}.
fluidFSEventHardwareCantReachSubnetHost
.1.3.6.1.4.1.674.11000.2000.200.20.2.60
NAS Controller {{nodeId}} cannot reach host {{hostName}} on {{subnetName}} subnet.
fluidFSEventHardwareCantReachJumboSubnetHost
.1.3.6.1.4.1.674.11000.2000.200.20.2.61
NAS Controller {{nodeId}} cannot reach (using jumbo frames) host {{hostName}} on {{subnetName}} subnet.
fluidFSEventHardwareNoPacketLossJumboInSubnetHost
.1.3.6.1.4.1.674.11000.2000.200.20.2.7
NAS Controller {{nodeId}} has no packet loss (using jumbo frames) to host {{hostName}} on {{subnetName}} subnet.
fluidFSEventHardwarePacketLossJumboInSubnet
.1.3.6.1.4.1.674.11000.2000.200.20.2.8
NAS Controller {{nodeId}} had jumbo packet loss. Description: NAS Controller {{nodeId}} had {{packetLoss}}% packet loss to IP address {{subnetName}} on host {{hostName}} (using jumbo frames). Subnet identifier {{network_id}}, prefix {{prefix}}, vlan tag id {{vlanTag}}.
fluidFSEventNdmpBackupStarted
.1.3.6.1.4.1.674.11000.2000.200.20.3.10
The backup session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} started. Description: The backup session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} started. The source path is {{Path}} and it is executed on controller {{ExecutedOnNode}}.
fluidFSEventNdmpBackupFinished
.1.3.6.1.4.1.674.11000.2000.200.20.3.11
The backup session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} finished. Description: The backup session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} finished. The source path is {{Path}} and it is executed on controller {{ExecutedOnNode}}.
fluidFSEventNdmpRestoreStarted
.1.3.6.1.4.1.674.11000.2000.200.20.3.12
The restore session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} started. Description: The restore session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} started. The target path is {{Path}} and it is executed on controller {{ExecutedOnNode}}.
fluidFSEventNdmpRestoreFinished
.1.3.6.1.4.1.674.11000.2000.200.20.3.13
The restore session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} finished. Description: The restore session by ID {{SessionId}} initiated by DMA {{InitiatorDMAServer}} on {{StartTimestamp}} finished. The target path is {{Path}} and it is executed on controller {{ExecutedOnNode}}.
fluidFSEventNdmpSessionFailed
.1.3.6.1.4.1.674.11000.2000.200.20.3.14
The session by ID {{SessionId}} failed: {{Detail}}.
fluidFSEventNdmpSessionTerminated
.1.3.6.1.4.1.674.11000.2000.200.20.3.15
The session by ID {{SessionId}} was terminated: {{Detail}}.
Internal storage reservation is running low. Description: Internal storage reservation is running low. The system capacity is {{systemCapacity}}. Please contact Support.
fluidFSEventQuotaSystemOutOfFreeSpace
.1.3.6.1.4.1.674.11000.2000.200.20.4.13
System is out of available space. Description: System is out of available space. Action Items: Add more storage to your system.
fluidFSEventQuotaSystemUsedSpaceOverwhelmed
.1.3.6.1.4.1.674.11000.2000.200.20.4.14
System usage reached system capacity of {{systemCapacity}}. Description: System usage reached system capacity of {{systemCapacity}}. Action Items: Add more storage to your system.
fluidFSEventQuotaSnapshotThresholdExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.16
Snapshots space exceeded threshold for NAS volume {{virtualVolumeName}}. Description: NAS volume {{virtualVolumeName}} - Snapshots space exceeded the defined snapshots threshold {{snapshotsSpaceThreshold}}% of the total size {{volumeSize}}.
Low space is available for snapshots in NAS Volume {{virtualVolumeName}}. Description: NAS Volume {{virtualVolumeName}} - Space available for snapshots is less than {{snapshotsSpaceThreshold}}% of the total size {{volumeSize}}.
fluidFSEventQuotaUserQuotaExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.18
User soft quota exceeded. User {{userNameName}}, NAS volume {{virtualVolumeName}}. Description: User {{userNameName}} on NAS volume {{virtualVolumeName}} has exceeded its soft limit. Soft limit is {{quotaThreshold}} MB. Action Items: First determine whether this is the outcome of default quota or specific user quota defined for this NAS volume. If there is a fit between the current definition and the required quota enforcement, contact the user and warn him/her that future writes might be denied due to quota violation. If there is a special user requirement, and this is the outcome of a default quota definition, you might consider defining specific user quota for this user.
fluidFSEventQuotaGroupQuotaExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.19
Group soft quota exceeded. Group {{groupNameName}}, NAS volume {{virtualVolumeName}}. Description: Group {{groupNameName}} on NAS volume {{virtualVolumeName}} has exceeded its soft limit. Soft limit is {{quotaThreshold}} MB. Action Items: First determine whether this is the outcome of default quota or specific group quota defined for this NAS volume. If there is a fit between the current definition and the required quota enforcement, contact the group manager and warn him/her that future writes might be denied due to quota violation. If there is a special group requirement, and this is the outcome of a default quota definition, you might consider defining specific group quota for this group.
fluidFSEventQuotaVolumeFreeSpaceThresholdExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.20
Available space on NAS volume {{virtualVolumeName}} is running low. Description: NAS volume {{virtualVolumeName}} - Volume available space is less than {{volumeSpaceThreshold}}. Please note that writes to the NAS volume's clones will fail if its base Volume reaches full capacity. Action Items: See if the space shortage is caused by system space shortage or the volume is almost full. If the volume is thick or the shortage is at the volume level, you should consider removing old snapshots, cleanup the volume or increase the volume size. If the volume is thin and it is far from being full, consider removing snapshots and files in other thin volumes that are using unreserved space, purchase more storage and expand LUNs.
fluidFSEventQuotaVolumeUsedSpaceThresholdExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.21
NAS volume {{virtualVolumeName}} exceeded used space threshold. Description: NAS volume {{virtualVolumeName}} - Used space exceeded {{volumeSpaceThreshold}}% of the total size {{volumeSize}}. Please note that writes to the NAS volume's clones will fail if its base volume reaches full capacity.
fluidFSEventQuotaVolumeOutOfFreeSpace
.1.3.6.1.4.1.674.11000.2000.200.20.4.22
NAS volume {{virtualVolumeName}} is out of available space. Description: NAS volume {{virtualVolumeName}} - Volume is out of available space. Please note that writes to the NAS volume and its clones will fail. Action Items: See if the space shortage is caused by system space shortage or the volume is full. If the volume is thick or the shortage is at the volume level, you should consider removing old snapshots, cleanup the volume or increase the volume size. If the volume is thin and it is far from being full, consider removing snapshots and files in other thin volumes that are using unreserved space, purchase more storage and expand LUNs.
fluidFSEventQuotaVolumeUsedSpaceOverwhelmed
.1.3.6.1.4.1.674.11000.2000.200.20.4.23
NAS volume {{virtualVolumeName}} used space reached the volume size (no space). Description: NAS volume {{virtualVolumeName}} - Used space reached the volume size {{volumeSize}}. Please note that writes to the NAS volume and its clones will fail. Action Items: See if the space shortage is caused by system space shortage or the volume is full. If the volume is thick or the shortage is at the volume level, you should consider removing old snapshots, cleanup the volume or increase the volume size. If the volume is thin and it is far from being full, consider removing snapshots and files in other thin volumes that are using unreserved space, purchase more storage and expand LUNs.
fluidFSEventQuotaDirectoryQuotaExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.24
Directory Quota '{{quotaDirPath}}' in volume {{virtualVolumeName}} exceeded the soft limit Description: Directory Quota '{{quotaDirPath}}' in volume {{virtualVolumeName}} exceeded the soft limit of {{quotaThreshold}} MB. Action Items: Advise the users to clear some space from the directory or consider extending the quota limit.
fluidFSEventQuotaSystemFreeSpaceThresholdExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.7
Low system available space. Description: System available space is less than {{systemThreshold}}. Action Items: Add more storage to your system.
fluidFSEventQuotaSystemUsedSpaceThresholdExceeded
.1.3.6.1.4.1.674.11000.2000.200.20.4.8
System usage is higher than {{systemThreshold}}%. Description: System usage is higher than threshold {{systemThreshold}}% of system capacity {{systemCapacity}}. Action Items: Add more storage to your system.
fluidFSEventReplicationDisconnection
.1.3.6.1.4.1.674.11000.2000.200.20.5.14
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The connection between source and destination systems was lost. The replication will continue automatically when the connection is restored.
fluidFSEventReplicationInternalError
.1.3.6.1.4.1.674.11000.2000.200.20.5.15
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: There was an internal error (err: {{errorCode}}), please contact Support.
fluidFSEventReplicationSourceFSIsDown
.1.3.6.1.4.1.674.11000.2000.200.20.5.16
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The file system of source NAS volume is down. The replication will continue automatically when the file-system is started.
fluidFSEventReplicationSourceFSIsNotOptimal
.1.3.6.1.4.1.674.11000.2000.200.20.5.17
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The file-system of the source NAS volume is not optimal. The replication will continue automatically when the file-system recovers.
fluidFSEventReplicationDestinationFSIsNotOptimal
.1.3.6.1.4.1.674.11000.2000.200.20.5.18
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The file-system of the destination NAS volume is not optimal. The replication will continue automatically when the file-system recovers.
fluidFSEventReplicationDestinationFSIsDown
.1.3.6.1.4.1.674.11000.2000.200.20.5.19
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The file system of destination NAS volume is down. The replication will continue automatically when the file-system is started.
fluidFSEventReplicationConfigurationError
.1.3.6.1.4.1.674.11000.2000.200.20.5.20
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The source and destination systems resources are not balanced. Please consider to add new applience to the destination system.
fluidFSEventReplicationMngConVipChanged
.1.3.6.1.4.1.674.11000.2000.200.20.5.26
Cluster VIP {{oldMngConVip}} was changed to {{newMngConVip}}. Please update partnership systems with this change.
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The destination NAS volume disconnected from the source NAS volume. Please perform the disconnect action also from the source NAS volume, and if you would like, reconnect the NAS volumes again to allow replication.
fluidFSEventReplicationDestinationClusterIsBusy
.1.3.6.1.4.1.674.11000.2000.200.20.5.56
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: Temporarily the destination cluster is not available to serve the required replication.
fluidFSEventReplicationIncompatibleVersions
.1.3.6.1.4.1.674.11000.2000.200.20.5.57
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The system version of source NAS volume is higher than the system version of the destination system. Please upgrade the destination system to same version as source system.
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: There is not enough space in the destination NAS volume. Please increase the destination NAS volume size.
Replication between {{sourceVolumeName}}@{{sourceCluster}} and {{destinationVolume}}@{{destinationCluster}} experienced a temporary failure. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} experienced a temporary failure because of the following reason: The source NAS volume is busy freeing space. The replication will continue automatically when space is available.
Replication between {{sourceVolumeName}}@{{sourceCluster}} and {{destinationVolume}}@{{destinationCluster}} experienced a temporary failure. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} experienced a temporary failure because of the following reason: The destination NAS volume is busy freeing space. The replication will continue automatically when space is available.
fluidFSEventReplicationSourceFSIsBusy
.1.3.6.1.4.1.674.11000.2000.200.20.5.61
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The file-system of the source NAS volume is busy with replication of other NAS volumes. The replication will continue automatically when the file-system releases part of the resources.
fluidFSEventReplicationDestinationFSIsBusy
.1.3.6.1.4.1.674.11000.2000.200.20.5.62
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The file-system of the destination NAS volume is busy with replication of other NAS volumes. The replication will continue automatically when the file-system releases part of the resources.
fluidFSEventReplicationJumboFramesBlocked
.1.3.6.1.4.1.674.11000.2000.200.20.5.63
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: Jumbo frames are blocked over the network.
fluidFSEventReplicationRecoveredReplication
.1.3.6.1.4.1.674.11000.2000.200.20.5.64
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} finished. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} successfully recovered from the problem and finished.
fluidFSEventReplicationDestinationVolumeShrink
.1.3.6.1.4.1.674.11000.2000.200.20.5.65
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} succeeded. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} succeeded but resizing of the destination volume failed.
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The replication policy status on the destination volume is currently promoted. Please perform the promote action also from the source NAS volume, and demote to allow replication.
Optimization policy configuration is not updated. Description: As part of replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} the optimization policy configuration has failed to update.
fluidFSEventReplicationRollbackSnapCloneBase
.1.3.6.1.4.1.674.11000.2000.200.20.5.68
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: rollback snapshot failed due to clone snapshot base.
fluidFSEventReplicationReplicationFinished
.1.3.6.1.4.1.674.11000.2000.200.20.5.69
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} finished. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} which start at {{startTime}} has transferred {{totalTransferredSize}} MB of data and finished successfully at {{endTime}}.
fluidFSEventReplicationDisconnectionDestination
.1.3.6.1.4.1.674.11000.2000.200.20.5.70
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The connection between source and destination systems was lost. The replication will continue automatically when the connection is restored.
fluidFSEventReplicationInternalErrorDestination
.1.3.6.1.4.1.674.11000.2000.200.20.5.71
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: There was an internal error (err: {{errorCode}}), please contact the support team
fluidFSEventReplicationSourceFSIsDownDestination
.1.3.6.1.4.1.674.11000.2000.200.20.5.72
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The file system of source NAS volume is down. The replication will continue automatically when the file-system is started.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The file-system of the source NAS volume is not optimal. The replication will continue automatically when the file-system recovers.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The file-system of the destination NAS volume is not optimal. The replication will continue automatically when the file-system recovers.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The file system of destination NAS volume is down. The replication will continue automatically when the file-system is started.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The source and destination systems resources are not balanced. Please consider to add new applience to the destination system.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The destination NAS volume disconnected from the source NAS volume, please perform the disconnect action also from the source NAS volume and if you would like reconnect the NAS volumes again.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: Temporarily the destination cluster is not available to serve the required replication.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The system version of source NAS volume is higher than the system version of the destination system. Please upgrade the destination system to same version as source system.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: There is not enough space in the destination NAS volume, Please increase the destination NAS volume size.
Replication between {{sourceVolume}}@{{sourceCluster}} and {{destinationVolumeName}}@{{destinationCluster}} experienced a temporary failure Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} experienced a temporary failure because of the following reason: The source NAS volume is busy freeing space. The replication will continue automatically when space is available.
Replication between {{sourceVolume}}@{{sourceCluster}} and {{destinationVolumeName}}@{{destinationCluster}} experienced a temporary failure Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} experienced a temporary failure because of the following reason: The destination NAS volume is busy freeing space. The replication will continue automatically when space is available.
fluidFSEventReplicationSourceFSIsBusyDestination
.1.3.6.1.4.1.674.11000.2000.200.20.5.83
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The file-system of the source NAS volume is busy with replication of other NAS volumes. The replication will continue automatically when the file-system releases part of the resources .
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The file-system of the destination NAS volume does not have enough resources to proceed with replication of other NAS volumes. The replication will continue automatically when the file-system releases part of the resources.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: Jumbo frames are blocked over the network.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} finished. Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} successfully recovered from the problem and finished.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} succeeded Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} succeeded but resizing of the desination volume failed.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The replication policy status on the destination volume is currently promoted, please perform the promote action also from the source NAS volume, and demote allow replication.
Optimization policy configuration is not updated Description: As part of replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} the optimization policy configuration has failed to update.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: rollback snapshot failed due to clone snapshot base.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} finished. Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} which start at {{startTime}} has transferred {{totalTransferredSize}} MB of data and finished successfully at {{endTime}}.
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: There is not enough space in the destination NAS volume. Please increase the destination NAS volume size to be at least {{rehydratedVolumeSize}} MBs.
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} failed because of the following reason: The destination NAS volume is in the middle of replicating to another NAS volume. Please temporarily disable the outgoing replications from the destination NAS volume.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed. Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} failed because of the following reason: The destination NAS volume is in the middle of replicating to another NAS volume. Please temporarily disable the outgoing replications from the destination NAS volume.
Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} finished. Description: Replication between NAS volume {{sourceVolumeName}}@{{sourceCluster}} and NAS volume {{destinationVolume}}@{{destinationCluster}} which start at {{startTime}} has transferred {{totalTransferredSize}} MB of data with {{savingsPercent}}% savings due to data reduction and finished successfully at {{endTime}}.
Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} finished. Description: Replication between NAS volume {{sourceVolume}}@{{sourceCluster}} and NAS volume {{destinationVolumeName}}@{{destinationCluster}} which start at {{startTime}} has transferred {{totalTransferredSize}} MB of data with {{savingsPercent}}% savings due to data reduction and finished successfully at {{endTime}}.
fluidFSEventSnapshotFailedCreateSnapshot
.1.3.6.1.4.1.674.11000.2000.200.20.7.1
The snapshot scheduler failed to create {{snapshotName}} snapshot in volume {{volumeName}}.
Failed removing the VM snapshot(s), created by FluidFS Description: Failed to perform a cleanup of VM snapshot(s), previously created by FluidFS on VMWare server(s) during {{snapshotName}} snapshot of {{FSIDName}} NAS volume creation Action Items: Please check network availability and current load on related VMWare server(s). Try to delete the redundant VM snapshot(s) manually in the vSphere user interface Internal Information: One (or more) VMWare servers may be unavailable or responds with errors to FluidFS requests
fluidFSEventSnapshotSnapshotIsNotVMConsistent
.1.3.6.1.4.1.674.11000.2000.200.20.7.11
The {{snapshotName}} snapshot is not consistent from VMWare perspective Description: NAS volume {{FSIDName}} had been restored to the {{snapshotName}} snapshot recovery point, which is not consistent from VMWare perspective Action Items: In case {{FSIDName}} NAS volume indeed serves as VMWare Datastore, please try to restore the volume to a consistent snapshot recovery point
fluidFSEventSnapshotSnapshotDeltaFailed
.1.3.6.1.4.1.674.11000.2000.200.20.7.12
Snapshot delta process on volume {{FSIDName}} failed Description: NAS volume {{FSIDName}} did not finish properly and therefore aborted Action Items: Please restart the process
fluidFSEventSnapshotSnapshotDeltaSucceded
.1.3.6.1.4.1.674.11000.2000.200.20.7.13
Snapshot delta process on volume {{FSIDName}} succeded Description: NAS volume {{FSIDName}} has finished properly and file {{outputFileName}} was created in path {{outputPath}} on volume {{outputFSIDName}} Action Items: file {{outputFileName}} was created in path {{outputPath}} on volume {{outputFSIDName}} and is available now for reading
fluidFSEventSnapshotSnapshotDeltaStarted
.1.3.6.1.4.1.674.11000.2000.200.20.7.14
Snapshot delta process on volume {{FSIDName}} started Description: NAS volume {{FSIDName}} has started with output file {{outputFileName}} in path {{outputPath}} on volume {{outputFSIDName}} Action Items: file {{outputFileName}} will be created in path {{outputPath}} on volume {{outputFSIDName}}
fluidFSEventSnapshotFailedDeleteSnapshot
.1.3.6.1.4.1.674.11000.2000.200.20.7.2
The snapshot scheduler failed to delete snapshots in volume {{volumeName}}.
fluidFSEventSnapshotScheduleDeleteSnapshotBySpace
.1.3.6.1.4.1.674.11000.2000.200.20.7.3
The snapshot named {{snapshotName}} on {{volumeName}} NAS volume was scheduled for deletion by snapshots space auto-deletion mechanism.
Failed initializing the VM snapshot(s) creation Description: Failed to initiate VM snapshot(s) creation on VMWare server(s), thus {{snapshotName}} snapshot of {{FSIDName}} NAS volume will not be consistent from VMWare perspective Action Items: Please check network availability and current load on related VMWare server(s) Internal Information: One (or more) VMWare servers may be unavailable or responds with errors to FluidFS requests
Failed to create VM snapshot(s) on VMWare server(s) Description: Failed to create VM snapshot(s), thus {{snapshotName}} snapshot of {{FSIDName}} NAS volume will not be consistent from VMWare perspective Action Items: Please check network availability and current load on related VMWare server(s) Internal Information: One (or more) VMWare servers may be unavailable or responds with errors to FluidFS requests
fluidFSEventHealthPairPowerProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.10
The system switched Write-Through mode on (journaling). Description: NAS controller {{thisNodeId}} BPS is not functioning properly and its peer controller {{peerNodeId}} has health problems as well. Switched Write-Through mode on (journaling).
fluidFSEventHealthDuplicateIp
.1.3.6.1.4.1.674.11000.2000.200.20.9.102
Duplicate IP address {{ipAddr}} in use on the node {{nodeId}}. Description: On node {{nodeId}} an ip address {{ipAddr}} in use on this system is being used by another system with mac address {{macAddr}}. Action Items: Either change the address {{ipAddr}} on node {{nodeId}}, or on the other device.
fluidFSEventHealthUnmapIsNotSupportedOnLuns
.1.3.6.1.4.1.674.11000.2000.200.20.9.103
Unmap is not supported by some of the LUNs Description: The following LUNs doesn't support the unmap functionallity: {{LunsList}}. Action Items: Please consider to upgrade the listed LUNs to the newest version.
fluidFSEventHealthPairPowerProblemRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.11
The system switched Write-Through mode off (mirroring). Description: NAS controller {{thisNodeId}} verified the power system reliability and switched Write-Through mode off (mirroring).
fluidFSEventHealthPairHWProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.110
The system switched Write-Through mode on (journaling). Description: NAS controller {{thisNodeId}} hardware is not working properly and its peer NAS controller {{peerNodeId}} has health problems as well. Switched Write-Through mode on (journaling).
fluidFSEventHealthPairHWRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.111
The system switched Write-Through mode off (mirroring). Description: NAS controller {{thisNodeId}} hardware is working properly. Switched Write-Through mode off (mirroring).
Problem with allocating protected memory Description: Memory detector could not allocate the protected memory Action Items: Please contact support
fluidFSEventHealthInvalidSizeOfProtectedMemory
.1.3.6.1.4.1.674.11000.2000.200.20.9.116
Problem with allocating protected memory Description: The model's predefined size of protected memory is not equal to allocated Action Items: Please contact support
Problem with allocating unprotected memory Description: Preallocation script could not allocate the unprotected memory Action Items: Please contact support
iSCSI portals are unavailable on all NAS Controllers
fluidFSEventHealthUnresponsiveIscsiPortalsOnNodes
.1.3.6.1.4.1.674.11000.2000.200.20.9.119
iSCSI portals are unavailable on NAS Controllers {{nodeIds}}
fluidFSEventHealthSafeShutdownBegins
.1.3.6.1.4.1.674.11000.2000.200.20.9.12
The power button of NAS controller {{thisNodeId}} was pressed. Description: The power button of NAS controller {{thisNodeId}} was pressed. The NAS controller's cache is being flushed and will orderly shut down in approximately 5 minutes. Please allow it to finish. Action Items: If this process takes longer than expected, don't try to workaround by manually shutting down the NAS controller. Please contact Support.
Auditing subscribers became unavailable for NAS Controller/s: {{nodeIds}}
fluidFSEventHealthSafeShutdownEnds
.1.3.6.1.4.1.674.11000.2000.200.20.9.13
NAS controller {{thisNodeId}} cache was flushed and it is shutting down.
fluidFSEventHealthPairCacheArmingProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.130
System switched Write-Through mode on (journaling). Description: NAS controller {{thisNodeId}} has problems with arming the protected memory, as well as it's peer. Switched Write-Through mode on (journaling).
fluidFSEventHealthPairCacheArmingProblemRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.131
System switched Write-Through mode off (mirroring). Description: NAS controller {{thisNodeId}} succeeded arming the protected memory, and switched Write-Through mode off (mirroring) .
Problem with arming of protected memory regions Description: Memory detector could not arm protected memory region Action Items: Please contact support
fluidFSEventHealthFSUserMetaDataMissing
.1.3.6.1.4.1.674.11000.2000.200.20.9.133
Directory entry was identified as missing; Entry {{dirEntry}} in {{parentDirDsidPath}}. Action Items: Recover it from backup. Internal Information: File {{dsid}} volume {{FSIDName}}
LDAP servers, used by {{TenantName}}, are not responding. Description: All LDAP servers ({{serverNames}}), used by {{TenantName}}, are not responding on all NAS Controllers. For each server please check: its server logs for any event that caused it not to respond to requests; that it responds to ping requests from clients on the same subnetwork; that it responds to LDAP requests from clients on the same subnetwork.
NIS servers, used by {{TenantName}}, are not responding. Description: All NIS servers ({{serverNames}}), used by {{TenantName}}, are not responding on all NAS Controllers. For each server please check: its server logs for any event that caused it not to respond to requests; that it responds to ping requests from clients on the same subnetwork; that it responds to NIS requests from clients on the same subnetwork.
fluidFSEventHealthResponsiveNisServerOnAllNodes
.1.3.6.1.4.1.674.11000.2000.200.20.9.138
Found responsive NIS server(s) among ({{serverNames}}), used by {{TenantName}}, on all NAS Controllers.
All NIS servers ({{serverNames}}), used by {{TenantName}}, are unresponsive on some NAS Controllers: {{nodeIds}}.
fluidFSEventHealthPairTemperatureProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.14
The system switched Write-Through mode on (journaling). Description: The system switched Write-Through mode on (journaling) because both NAS controllers' temperature appears high (or cannot be verified), on the controller pair with IDS {{thisNodeId}} and {{peerNodeId}}.
DNS servers, used by {{TenantName}}, are not responding. Description: All DNS servers ({{serverIps}}), used by {{TenantName}}, are not responding on all NAS Controllers. For each server please check: its server logs for any event that caused it not to respond to requests; that it responds to ping requests from clients on the same subnetwork; that it responds to DNS requests from clients on the same subnetwork.
fluidFSEventHealthResponsiveDnsServerOnAllNodes
.1.3.6.1.4.1.674.11000.2000.200.20.9.141
Found responsive DNS server(s) among ({{serverIps}}), used by {{TenantName}}, on all NAS Controllers.
All DNS servers ({{serverIps}}), used by {{TenantName}}, are unresponsive on some NAS Controllers: {{nodeIds}}.
fluidFSEventHealthUnresponsiveAdServer
.1.3.6.1.4.1.674.11000.2000.200.20.9.143
Active Directory, used by {{TenantName}}, is not responsive. Description: Active Directory domain {{domainName}}, used by {{TenantName}}, is not reachable. For each domain controller please check: its logs for any event that caused it not to respond to requests; that it responds to ping requests from clients on the same subnetwork; that it responds to AD requests from clients on the same subnetwork.
fluidFSEventHealthResponsiveAdServer
.1.3.6.1.4.1.674.11000.2000.200.20.9.144
Active Directory domain {{domainName}}, used by {{TenantName}}, is reachable.
fluidFSEventHealthSlowResponse
.1.3.6.1.4.1.674.11000.2000.200.20.9.145
Slow response time from {{serverKind}} to {{TenantName}}. Description: Response time from {{serverKind}} servers ({{serverList}}) of {{responseTime}} seconds exceeds {{criticalResponseTime}} seconds on {{TenantName}}. Action Items: Check availability of all {{serverKind}} servers. This error is usually the outcome of a communication error between the NAS and the external server. It could be caused by a network issue, external server overload or a software malfunction in one of environment components. Internal Information: {{internalInfo}}
fluidFSEventHealthAdCriticalClockSkew
.1.3.6.1.4.1.674.11000.2000.200.20.9.146
AD (used by {{TenantName}}) and NAS controller {{nodeId}} clocks aren't synchronized Description: Clock of NAS controller {{nodeId}} and AD clock, used by {{TenantName}} (domain {{DomainControllerName}}), are not synchronized (skew of {{adClockSkew}} seconds exceeds critical threshold of {{maxAdClockSkew}} seconds). Kerberos authentication may fail. Fix system clock/Fix AD clock. Enter the time configuration screen and get the configured NTP host. If no NTP is configured, it is recommended to configure one by either using the organization's NTP or the same IP/name used for joining the NAS to the AD.
fluidFSEventHealthCIFSAttachedACLCorruption
.1.3.6.1.4.1.674.11000.2000.200.20.9.147
Failed to read a file ACL's. Description: ACL of file {{dsidPath}} can not be read. Access will be denied to all users. Action Items: Please reset the file ACL
Unrecoverable erroneous block in user file. Description: Unrecoverable erroneous block was identified in {{dsidPath}} and it is inaccessible. Action Items: Move this file/directory to a different name, and recover it from backup. Internal Information: Mapping {{fileMapping}}
fluidFSEventHealthPairTemperatureProblemRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.15
The system switched Write-Through mode off (mirroring). Description: NAS controller {{thisNodeId}} verified the temperature system reliability and switched Write-Through mode off (mirroring).
NAS Volume {{FSIDName}} is partially serving. Description: Unrecoverable erroneous block was identified in volume {{FSIDName}}. This volume is partially serving. Action Items: Run the file system diagnostics and contact Support. Internal Information: File {{dsid}} mapping {{fileMapping}}
fluidFSEventHealthFSRehydrationSpaceMarginReached
.1.3.6.1.4.1.674.11000.2000.200.20.9.151
Rehydration paused on volume {{FSIDName}} - insufficient space. Description: Volume {{FSIDName}} doesn't have the required free {{dedupSpaceMarginMb}} MB needed for rehydration. Action Items: Make sure volume {{FSIDName}} has at least {{dedupSpaceMarginMb}} MB free.
fluidFSEventHealthFSLowOnLunSpaceDedupPostponed
.1.3.6.1.4.1.674.11000.2000.200.20.9.154
High NAS pool space usage. Data Reduction paused on cluster. Description: Cluster does not have the required NAS pool free space needed for Data Reduction Action Items: Please perform one of the following actions to increase NAS pool free space : expand your NAS pool; Delete old Snapshots; Delete redundant NAS volumes.
fluidFSEventHealthFSLowOnLunSpaceDedupResumed
.1.3.6.1.4.1.674.11000.2000.200.20.9.155
Data Reduction resumed on cluster. Description: NAS pool free space increased. Data reduction is resumed
fluidFSEventHealthFSLowOnLunSpaceFSNotAccessible
.1.3.6.1.4.1.674.11000.2000.200.20.9.156
High NAS pool space usage. File system access is limited Description: High NAS pool space usage. File system is accessible only using NFS3. Action Items: Please perform one of the following actions to increase NAS pool free space : expand your NAS pool; Delete old Snapshots; Delete redundant NAS volumes.
fluidFSEventHealthFSLowOnLunSpaceFSAccessible
.1.3.6.1.4.1.674.11000.2000.200.20.9.157
File system access is back to normal. Description: NAS pool free space increased. File system can be accessed by all protocols
fluidFSEventHealthLocalTimeDesynchronizedToNtp
.1.3.6.1.4.1.674.11000.2000.200.20.9.158
Local time on node {{nodeId}} is desynchronized by {{timeDiffSec}} seconds.
fluidFSEventHealthClientsOnFailback
.1.3.6.1.4.1.674.11000.2000.200.20.9.16
There are clients ({{numberOfClients}}) that are waiting for manual failback.
fluidFSEventHealthCacheLossOfUserFiles
.1.3.6.1.4.1.674.11000.2000.200.20.9.21
Latest user writes might have been lost. Description: The system identified that some of the latest user writes might have been lost. This might have been caused by a reboot of two NAS controllers in the same NAS appliance simultaneously. The system will start an automatic recovery procedure and in few minutes you should receive a completion event. During that time you will have partial or no service, SMB clients will be disconnected and NFS clients will be forced to remount. After the system is fully functional, please check your applications integrity and repeat the latest writes if needed. Action Items: In case repeating the latest writes is not possible or you still experience any difficulties, consider recovering the data from the latest snapshots or from backup. Internal Information: Domain {{domainId}}
fluidFSEventHealthCacheLossOfMetadata
.1.3.6.1.4.1.674.11000.2000.200.20.9.22
File System process failed to recover. Description: The system identified that at least one of its File System processes has restarted and failed to recover it. The system will provide limited service. Please contact Support for further assistance. Action Items: Search product Knowledge Base for further information. Internal Information: Domain {{domainId}}
fluidFSEventHealthCacheLossRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.23
File system recovery successfully finished. Description: File system recovery successfully finished. Action Items: Data written in the day prior to the event might be compromised and should be verified.
fluidFSEventHealthStoreAccessProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.24
Lost communication to storage from NAS controller {{nodeId}}. Description: The system identified a communication problem between NAS controller {{nodeId}} and the storage controllers which may affect system performance. Action Items: The system constantly monitors the validity of communication between NAS controllers and Storage controller. This event is issued once the system identifies a communication issue. If the system was recently deployed or any of its components were recently moved, verify that all connections and configurations are according to the recommendations. If the system was functioning optimally for a while and this is a new event it is most likely due to a physical problem (like a bad/loose cable) or a change in network configuration which affects the components communication. Verify that all the network cables are correctly connected to the relevant NAS controller. Verify that all the iSCSI switches are up and running. If any change was introduced to the environment (new physical switch, new switch configuration), try to revert this change in order to understand the cause of the problem. Verify that the iSCSI switch is configured according to the recommendation. If the problem is persistent after verifying all the above, it is recommended to power cycle the problematic NAS controller using the UI. Internal Information: Domain {{domainId}} setId {{setId}}
fluidFSEventHealthLunVerificationFailure
.1.3.6.1.4.1.674.11000.2000.200.20.9.25
NAS controller {{nodeId}} encountered a problem accessing a NAS storage pool. Action Items: Please contact Support. Internal Information: Domain {{domainId}} LUN {{lunId}}
Clients may encounter a long period of partial data access. Description: The file system identified that clients may have partial data access in the last {{timeout}} minutes. Action Items: The NAS file system constantly verifies that it is accessible for read/write operations. This event is issued once it identifies that some operations took more time than expected or failed completely in the last {{timeout}} minutes. This could be caused by an intermittent issue of a HW failure like failed disk or disconnected network port. Verify that no RAID is being rebuilt and you do not have a recently failed drive. Verify that the cluster is connected according to the recommended cabling method and switch configuration.
fluidFSEventHealthFileSystemAccessibilityRevival
.1.3.6.1.4.1.674.11000.2000.200.20.9.28
File system back to optimal state. Description: The File System has overcome its accessibility problem and returned back to full functionality.
fluidFSEventHealthNodeFailed
.1.3.6.1.4.1.674.11000.2000.200.20.9.4
NAS controller {{nodeName}} is unavailable.
fluidFSEventHealthFSMultipleRmapChecksumError
.1.3.6.1.4.1.674.11000.2000.200.20.9.47
Multiple erroneous file system blocks were identified and fixed. Action Items: If the problem recurs, verify system hardware health.
Unrecoverable erroneous block was identified. Description: Unrecoverable erroneous block was identified. The file system is partially serving. Action Items: Contact Support. Internal Information: File {{dsid}} mapping {{fileMapping}}
fluidFSEventHealthNodeOK
.1.3.6.1.4.1.674.11000.2000.200.20.9.5
NAS controller {{nodeName}} is available.
fluidFSEventHealthFSMultipleUserMetaDataMissing
.1.3.6.1.4.1.674.11000.2000.200.20.9.50
Multiple directory entries were identified as missing. Action Items: Recover them from backup.
fluidFSEventHealthFSLowOnLunSpace
.1.3.6.1.4.1.674.11000.2000.200.20.9.53
High NAS pool capacity. Performance might be affected. Description: NAS pool capacity usage has reached high watermark which will affect performance. Action Items: Expand the NAS pool capacity, or free some space.
fluidFSEventHealthLocalPartitionIsFull
.1.3.6.1.4.1.674.11000.2000.200.20.9.6
The used space of local file system partition {{partitionName}} on NAS controller {{nodeName}} is high ({{percentOfUsedSpace}}%). Please contact support.
fluidFSEventHealthBadClientLoadBalancing
.1.3.6.1.4.1.674.11000.2000.200.20.9.65
Bad {{protocol}} clients distribution: ({{connectionsVector}}). Description: Bad {{protocol}} clients distribution: ({{connectionsVector}}). Each pair in this set represents NAS controller ID and number of connections separated by a colon. Use the mass rebalance feature available via the clients connections screen.
fluidFSEventHealthCifsLockingInconsistancy
.1.3.6.1.4.1.674.11000.2000.200.20.9.66
Inconsistent locks interrupted SMB service. Description: SMB service is interrupted due to inconsistencies in locking mechanism, starting recovery process. Internal Information: Domain {{domainId}}
The usage of swap memory on NAS controller {{nodeName}} is high ({{percentOfUsedSwap}}%).
fluidFSEventHealthFSReclaimerOverload
.1.3.6.1.4.1.674.11000.2000.200.20.9.72
High load in snapshots space reclaiming process. Description: The file system has been recently engaged in reclaiming the space of deleted snapshots or NAS volumes. This could be the result of the deletion of a NAS volume and/or multiple snapshots. Action Items: If this event recurs on a daily basis, the file system performance can be improved; Reducing the frequency of snapshot scheduling will make more resources available for clients activity. Internal Information: Domain {{domainId}}
fluidFSEventHealthPairStoreSetProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.75
System switched Write-Through mode on (journaling). Description: NAS controller {{thisNodeId}} has lost connection to storage subsystem and its peer NAS controller {{peerNodeId}} has health problems as well. Switched Write-Through mode on (journaling).
fluidFSEventHealthPairStoreSetProblemRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.76
System switched Write-Through mode off (mirroring). Description: NAS controller {{thisNodeId}} verified connection to storage subsystem and switched Write-Through mode off (mirroring).
fluidFSEventHealthPairPowerSuppliesProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.77
The system switched Write-Through mode on (journaling). Description: NAS controller {{thisNodeId}} has lost full power supply redundancy, and its peer NAS controller {{peerNodeId}} has health problems as well. The system switched Write-Through mode on (journaling).
fluidFSEventHealthPairPowerSuppliesRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.78
The system switched Write-Through mode off (mirroring). Description: Full power supply redundancy on NAS controller {{thisNodeId}} was restored, and the system switched Write-Through mode off (mirroring).
fluidFSEventHealthHighUseOfMemory
.1.3.6.1.4.1.674.11000.2000.200.20.9.8
The memory usage on NAS controller {{nodeName}} is high.
fluidFSEventHealthPairFansProblem
.1.3.6.1.4.1.674.11000.2000.200.20.9.85
The system switched Write-Through mode on (journaling). Description: The fans of NAS controller {{thisNodeId}} are not working properly, and its peer NAS controller {{peerNodeId}} has health problems as well. The system switched Write-Through mode on (journaling).
fluidFSEventHealthPairFansRecovered
.1.3.6.1.4.1.674.11000.2000.200.20.9.86
The system switched Write-Through mode off (mirroring). Description: The fans of NAS controller {{thisNodeId}} are working properly again. The system switched Write-Through mode off (mirroring).
fluidFSEventHealthProtocolsStateInconsistancy
.1.3.6.1.4.1.674.11000.2000.200.20.9.87
Inconsistent state interrupted SMB/NFSv4 service. Description: SMB/NFSv4 services are interrupted due to inconsistencies in the locking mechanism; starting the recovery process. Internal Information: Domain {{domainId}}
File descriptor usage on NAS controller {{nodeName}} is high ({{percentOfUsedFileDescriptors}}%).
fluidFSEventHealthFSDedupSpaceMarginReached
.1.3.6.1.4.1.674.11000.2000.200.20.9.91
Optimization paused on volume {{virtualVolume}}: insufficient space. Description: Volume {{virtualVolume}} does not have the {{dedupSpaceMarginMb}} MB of free space required for optimization. Action Items: Make sure volume {{virtualVolume}} has at least {{dedupSpaceMarginMb}} MB free.
fluidFSEventHealthStoreAccessRestored
.1.3.6.1.4.1.674.11000.2000.200.20.9.92
Restored communication to storage from NAS controller {{nodeId}}. Description: The system identified that a communication problem between NAS controller {{nodeId}} and the storage controllers was resolved. Internal Information: Domain {{domainId}} setId {{setId}}
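All of the health events above share the .1.3.6.1.4.1.674.11000.2000.200.20.9 subtree, so a trap handler can first match on that OID prefix and then look up the specific event. A minimal sketch, using a lookup table hand-transcribed from a few of the entries above (extend it with the remaining entries as needed):

```python
# Sketch: classify a FluidFS trap OID by subtree, then by exact entry.
FLUIDFS_HEALTH_PREFIX = "1.3.6.1.4.1.674.11000.2000.200.20.9"

# Partial table transcribed from the reference above; extend as needed.
HEALTH_EVENTS = {
    FLUIDFS_HEALTH_PREFIX + ".4": "fluidFSEventHealthNodeFailed",
    FLUIDFS_HEALTH_PREFIX + ".5": "fluidFSEventHealthNodeOK",
    FLUIDFS_HEALTH_PREFIX + ".53": "fluidFSEventHealthFSLowOnLunSpace",
    FLUIDFS_HEALTH_PREFIX + ".92": "fluidFSEventHealthStoreAccessRestored",
}

def classify(oid: str) -> str:
    oid = oid.lstrip(".")  # accept both ".1.3..." and "1.3..." forms
    if not oid.startswith(FLUIDFS_HEALTH_PREFIX + "."):
        return "not a FluidFS health event"
    return HEALTH_EVENTS.get(oid, "unknown FluidFS health event")

print(classify(".1.3.6.1.4.1.674.11000.2000.200.20.9.4"))
```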
fluidFSCluOK
.1.3.6.1.4.1.674.11000.2000.200.23.1
This trap will be sent any time the condition of the cluster becomes OK.
fluidFSGenNormalEvent
.1.3.6.1.4.1.674.11000.2000.200.23.14
This trap will be sent for normal events.
fluidFSGenMinorEvent
.1.3.6.1.4.1.674.11000.2000.200.23.15
This trap will be sent for minor events.
fluidFSGenMajorEvent
.1.3.6.1.4.1.674.11000.2000.200.23.16
This trap will be sent for major events.
fluidFSGenCriticalEvent
.1.3.6.1.4.1.674.11000.2000.200.23.17
This trap will be sent for critical events.
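The four generic event traps above differ only in severity, so mapping their OIDs to standard logging levels is enough to route them into an existing alerting pipeline. A minimal sketch (the severity-to-level mapping is an illustrative assumption, not defined by the MIB):

```python
import logging

# Sketch: route the generic FluidFS event traps by severity.
# The logging-level mapping below is an illustrative assumption.
GENERIC_EVENT_SEVERITY = {
    "1.3.6.1.4.1.674.11000.2000.200.23.14": logging.INFO,      # normal
    "1.3.6.1.4.1.674.11000.2000.200.23.15": logging.WARNING,   # minor
    "1.3.6.1.4.1.674.11000.2000.200.23.16": logging.ERROR,     # major
    "1.3.6.1.4.1.674.11000.2000.200.23.17": logging.CRITICAL,  # critical
}

def log_generic_event(trap_oid: str, message: str) -> None:
    level = GENERIC_EVENT_SEVERITY.get(trap_oid.lstrip("."), logging.INFO)
    logging.log(level, "FluidFS event: %s", message)

logging.basicConfig(level=logging.INFO)
log_generic_event(".1.3.6.1.4.1.674.11000.2000.200.23.16", "example major event")
```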
fluidFSAntiVirusEvent
.1.3.6.1.4.1.674.11000.2000.200.23.18
This trap will be sent any time anti-virus software on or connected to the system detects a virus.
fluidFSCluDegraded
.1.3.6.1.4.1.674.11000.2000.200.23.2
This trap will be sent any time the condition of the cluster becomes degraded.
fluidFSCluFailed
.1.3.6.1.4.1.674.11000.2000.200.23.3
This trap will be sent any time the condition of the cluster becomes failed.
fluidFSCluNodeOK
.1.3.6.1.4.1.674.11000.2000.200.23.4
This trap will be sent any time the condition of a node in the cluster becomes OK.
fluidFSCluNodeDegraded
.1.3.6.1.4.1.674.11000.2000.200.23.5
This trap will be sent any time the condition of a node in the cluster becomes degraded.
User Action: Make a note of the cluster node name, then check the node for the cause of the degraded condition.
fluidFSCluNodeFailed
.1.3.6.1.4.1.674.11000.2000.200.23.6
This trap will be sent any time the condition of a node in the cluster becomes failed.
User Action: Make a note of the cluster node name, then check the node for the cause of the failure.
fluidFSCluResourceOK
.1.3.6.1.4.1.674.11000.2000.200.23.7
This trap will be sent any time the condition of a cluster resource becomes OK.
fluidFSCluResourceDegraded
.1.3.6.1.4.1.674.11000.2000.200.23.8
This trap will be sent any time the condition of a cluster resource becomes degraded.
User Action: Make a note of the cluster resource name, then check the resource for the cause of the degraded condition.
fluidFSCluResourceFailed
.1.3.6.1.4.1.674.11000.2000.200.23.9
This trap will be sent any time the condition of a cluster resource becomes failed.
User Action: Make a note of the cluster resource name, then check the resource for the cause of the failure.
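Taken together, the cluster, node, and resource traps above can all be caught by a single SNMPv2c notification receiver that inspects the snmpTrapOID.0 varbind. A minimal sketch using the pysnmp library (4.x-style API; the community string, port, and print-based handling are placeholder assumptions, and the calls should be checked against the installed pysnmp version):

```python
# Sketch: SNMPv2c trap receiver that filters Dell FluidFS cluster traps.
# Requires pysnmp (4.x-style API); community/port are placeholder values,
# and binding to port 162 typically needs elevated privileges.
from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

FLUIDFS_CLU_PREFIX = "1.3.6.1.4.1.674.11000.2000.200.23"
SNMP_TRAP_OID = "1.3.6.1.6.3.1.1.4.1.0"  # snmpTrapOID.0 varbind

snmpEngine = engine.SnmpEngine()
config.addTransport(snmpEngine, udp.domainName,
                    udp.UdpTransport().openServerMode(("0.0.0.0", 162)))
config.addV1System(snmpEngine, "fluidfs-area", "public")  # community: assumption

def on_trap(snmpEngine, stateReference, contextEngineId, contextName,
            varBinds, cbCtx):
    for name, val in varBinds:
        if str(name) == SNMP_TRAP_OID and str(val).startswith(FLUIDFS_CLU_PREFIX):
            # A real handler would dispatch on the trailing arc (.1 through .9)
            # to distinguish cluster/node/resource OK/degraded/failed.
            print("FluidFS cluster trap:", val.prettyPrint())

ntfrcv.NotificationReceiver(snmpEngine, on_trap)
snmpEngine.transportDispatcher.jobStarted(1)
try:
    snmpEngine.transportDispatcher.runDispatcher()
finally:
    snmpEngine.transportDispatcher.closeDispatcher()
```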