DataCore
November 2016
Overview 3
Recent changes made to this document 3
Linux compatibility lists 4
RedHat Enterprise Linux 4
SUSE Linux Enterprise Server 5
Ubuntu 6
Other Linux distributions 8
Self-qualifying 'other' Linux distributions 8
The DataCore Server's settings 9
The Linux Host's settings 11
Operating system settings 11
Multipath configuration settings 12
Linux applications 16
SAP HANA 16
Known issues 19
Appendix A 21
Preferred Server & Preferred Path settings 21
Appendix B 23
Configuring Disk Pools 23
Appendix C 24
Reclaiming storage 24
SANsymphony's Automatic Reclamation feature 24
Reclaiming storage without using fstrim 25
SANsymphony's Manual Reclamation feature 26
Previous changes 27
Fundamental Linux administration skills are assumed, including how to connect Linux Hosts to
storage array target ports (i.e. DataCore Server Front End ports) via either Fibre Channel or
iSCSI, along with the general process of discovering, mounting and formatting disk devices.
RedHat Enterprise Linux

                     SANsymphony (1)
                     9.0 PSP 4 Update 4               10.0 (all versions)
                     ALUA            Non-ALUA         ALUA            Non-ALUA
5.4 and earlier      Not Supported   Not Supported    Not Supported   Not Supported
5.5 – 5.10           Not Supported   Not Supported    Not Supported   Not Supported
6.7 – 6.8            Not Supported   Not Supported    Not Qualified   Not Qualified
7.1 – 7.2            Not Supported   Not Supported    Not Qualified   Not Qualified
Compatibility notes
Please see page 7 for specific details on the differences between 'Not Supported', 'Not
Qualified' and 'Supported'.
SCSI UNMAP
RedHat Enterprise Linux Hosts are supported using SCSI UNMAP commands - see 'Appendix C -
Reclaiming Storage' on page 24.
(1) SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now ‘End of Life’.
Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329
SUSE Linux Enterprise Server

                     SANsymphony (1)
                     9.0 PSP 4 Update 4               10.0 (all versions)
                     ALUA            Non-ALUA         ALUA            Non-ALUA
10.x and earlier     Not Supported   Not Supported    Not Supported   Not Supported
11.0 (no SP)         Not Supported   Supported        Not Supported   Not Supported
11.0 SP 4            Not Supported   Not Supported    Supported (2)   Not Qualified
12.0 SP 1            Not Supported   Not Supported    Supported (2)   Not Qualified
Compatibility notes
Please see page 7 for specific details on the differences between 'Not Supported', 'Not
Qualified' and 'Supported'.
SCSI UNMAP
SUSE Linux Enterprise Server Hosts are not supported using SCSI UNMAP commands.
(1) SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now ‘End of Life’.
Please see: End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329
(2) Fibre Channel only – iSCSI FE connections are still considered 'Not Qualified'.
Ubuntu

                     SANsymphony (1)
                     9.0 PSP 4 Update 4               10.0 (all versions)
                     ALUA            Non-ALUA         ALUA            Non-ALUA
13.x and earlier     Not Supported   Not Supported    Not Supported   Not Supported
Note: This table applies only to the Linux distribution provided by Canonical Ltd and does not
include any Debian Linux distribution (or other distributions that are themselves either based on
Debian or Ubuntu’s own codebases).
Compatibility notes
Please see page 7 for specific details on the differences between 'Not Supported', 'Not
Qualified' and 'Supported'.
Note: If the Host is running SAP HANA then only Fibre Channel connections are supported.
Please see page 16 for more information.
SCSI UNMAP
Ubuntu Linux Hosts have not been qualified using SCSI UNMAP commands. Also see 'Appendix
C - Reclaiming Storage' on page 25.
(1) Please see the compatibility notes on page 7 for specific details on the differences between 'Not Supported', 'Not Qualified' and 'Supported'.
Compatibility notes
Supported
These Linux/SANsymphony combinations have been tested using all the host-specific settings
listed in this document against all Virtual Disk types. Mirrored and Dual Virtual Disks have been
tested for 'high availability' in all possible failure scenarios.
Not Qualified
These Linux/SANsymphony combinations have not been tested against any Mirrored or Dual
Virtual Disk types. DataCore therefore cannot guarantee 'high availability' in any failure
scenario (even if all host-specific settings listed in this document are followed); however,
self-qualification may be possible. For more information on this please see:
http://datacore.custhelp.com/app/answers/detail/a_id/1506
Support for any Linux versions that are considered ‘End of Life’ by the vendor and are listed as
'Not Qualified' can still be self-qualified, but only if there is an agreed ‘support contract’ with
your Linux vendor. In that case, DataCore Technical Support will not do root-cause analysis for
SANsymphony for any future issues, but will offer 'best effort' support to get Hosts accessing
any SANsymphony Virtual Disks.
Note: Non-mirrored Virtual Disks are always considered as supported even for 'Not Qualified'
combinations of Linux/SANsymphony.
Not Supported
These Linux/SANsymphony combinations have usually failed one or more of our 'high
availability' tests when using Mirrored or Dual Virtual Disk types, but an entry may also simply
mean that an operating system's own requirements (or limitations), due to its age, make it
impractical to test. Entries marked as 'Not Supported' can never be self-qualified; Mirrored or
Dual Virtual Disk types are configured at the end-user's own risk.
Note: Non-mirrored Virtual Disks are always considered as supported even for 'Not Supported'
combinations of Linux/SANsymphony.
In this case, DataCore Technical Support will help the customer to get the Host Operating
System accessing Virtual Disks, but will not then do any root-cause analysis.
These other distributions can also be self-qualified by following the same process mentioned in
the compatibility notes on page 7.
Note: Non-mirrored Virtual Disks are always considered as supported even for 'other' Linux
distributions used with SANsymphony.
Ubuntu Linux
When registering the Host, choose the 'Linux (all other distributions)' menu option.
Port roles
Ports used for serving Virtual Disks to Hosts should only have the Front End (FE) role enabled.
Mixing other Port Role types may cause unexpected results, because Ports that only have the FE
role enabled are turned off when the DataCore Server software is stopped (even if the physical
server remains running). This helps to guarantee that Hosts do not still try to access FE Ports,
for any reason, once the DataCore Software is stopped but the DataCore Server remains
running. Any Port with the Mirror and/or Back End role enabled does not shut off when the
DataCore Server software is stopped but remains active.
Multipathing support
The Multipathing Support option should be enabled so that Mirrored Virtual Disks or Dual
Virtual Disks can be served to Hosts from all available DataCore FE ports. Also see the
Multipathing Support section from the SANsymphony Help: http://www.datacore.com/SSV-
Webhelp/Hosts.htm
Note: Hosts that have non-mirrored Virtual Disks served to them do not need Multipathing
Support enabled unless they have other Mirrored or Dual Virtual Disks served as well.
Virtual Disks LUNs and serving to more than one Host or Port
DataCore Virtual Disks always have their own unique Network Address Authority (NAA)
identifier that a Host can use to manage the same Virtual Disk being served to multiple Ports on
the same Host Server and the same Virtual Disk being served to multiple Hosts.
See the SCSI Standard Inquiry Data section from the online Help for more information on this:
http://www.datacore.com/SSV-Webhelp/Changing_Virtual_Disk_Settings.htm
While DataCore cannot guarantee that a Host's operating system uses a disk device's NAA to
identify the same disk device served to it over different paths, we have generally found that it
does. And while there is sometimes a convention that all paths to the same disk device should
use the same LUN 'number' to guarantee consistent device identification, this is not technically
required. Always refer to the Host Operating System vendor’s own documentation for advice
on this.
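To see which identifier a Linux Host has detected for a given disk device, a command such as
the following can be used (a sketch only: the location of the scsi_id utility varies by
distribution, and /dev/sda is a placeholder device name):

/lib/udev/scsi_id --whitelisted --device=/dev/sda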
DataCore's Software does, however, always try to create mappings between the Host's ports
and the DataCore Server's Front-end (FE) ports for a Virtual Disk using the same LUN number (1)
where it can. The software will first find the next available (lowest) LUN 'number' for the Host-
DataCore FE mapping combination being applied and will then try to apply that same LUN
number to all other mappings being attempted while the Virtual Disk is being served.
If any Host-DataCore FE port combination being requested at that moment is already using that
same LUN number (e.g. if a Host already has other Virtual Disks served to it), the software will
find the next available LUN number and apply it to those specific Host-DataCore FE mappings
only.
(1) The software will also try to match a LUN 'number' for all DataCore Server Mirror Port mappings of a Virtual Disk,
although the Host does not 'see' these mirror mappings, so this does not technically need to be the same
as the Front End port mappings (or indeed as other Mirror Path mappings for the same Virtual Disk). Having Mirror
mappings use different LUNs has no functional impact on the Host or DataCore Server at all.
For example, if the two DataCore Virtual Disks are using /dev/sda and /dev/sdb respectively:
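A minimal sketch, assuming the standard sysfs timeout attribute is used to set the value:

echo 80 > /sys/block/sda/device/timeout
echo 80 > /sys/block/sdb/device/timeout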
This will change the SCSI Disk timeout value to 80 seconds for those particular disk devices.
The SCSI Disk timeout can then be verified by running the cat command on the disk device
directly:
cat /sys/block/sda/device/timeout
Note: The SCSI Disk timeout may revert to the system default after a reboot. Please contact your
Linux vendor or consult their documentation for details on creating a ‘rules’ file in /etc/udev/ to
help resolve this behavior.
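As an illustration only (the file name and matching rules below are hypothetical; your Linux
vendor's guidance takes precedence), a rules file such as /etc/udev/rules.d/99-datacore-timeout.rules
could reapply the value whenever the devices are detected:

# Hypothetical sketch: set the SCSI Disk timeout to 80 seconds for DataCore Virtual Disks
ACTION=="add|change", SUBSYSTEMS=="scsi", ATTRS{vendor}=="DataCore", ATTRS{model}=="Virtual Disk*", RUN+="/bin/sh -c 'echo 80 > /sys$DEVPATH/device/timeout'"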
defaults {
polling_interval 60
}
This is a DataCore-required value. It helps prevent excessive Host attempts to check a Virtual
Disk whose Storage Source’s Host Access value has been set to Offline after a failure. Smaller
interval settings will interfere with overall Host performance.
Do not add this parameter to the 'device' section (discussed on the next page) by mistake;
otherwise the setting will not work as expected.
Blacklist exceptions
Usually all storage vendor devices are specified under a separate device sub-section within the
blacklist_exceptions section.
blacklist_exceptions {
        device {
                vendor "DataCore"
                product "Virtual Disk"
        }
}
ALUA-enabled Hosts
Please refer to the compatibility list for your distribution of Linux on page 4 to see which
combinations of Linux and SANsymphony support ALUA.
device {
        vendor "DataCore"
        product "Virtual Disk"
        path_checker tur
        prio alua
        failback 10
        no_path_retry fail
        dev_loss_tmo infinity
        fast_io_fail_tmo 5
        rr_min_io_rq 100
        # Alternative option – See notes below
        # rr_min_io 100
        path_grouping_policy group_by_prio
        # Alternative policy – See notes below
        # path_grouping_policy failover
}
Non-ALUA Hosts

device {
        vendor "DataCore"
        product "Virtual Disk"
        path_checker tur
        failback 10
        dev_loss_tmo infinity
        fast_io_fail_tmo 5
        no_path_retry fail
}
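After editing /etc/multipath.conf the multipath maps need to be reloaded for the new settings
to take effect; one common way to do this (exact commands vary by distribution, so check your
vendor's documentation) is:

multipath -r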
dev_loss_tmo infinity
Also requires the ‘fast_io_fail_tmo 5’ setting (see next). The dev_loss_tmo setting controls the
length of time (normally indicated in seconds) before a PATH to a Virtual Disk that has become
unavailable to the Linux Host is removed from the Linux operating system; for example, when a
DataCore Server is stopped or a PATH to a DataCore Server’s Virtual Disk is set to ‘Host Access:
Not Allowed’.
Once a PATH to a Virtual Disk has been removed by the Linux operating system, it can usually
only be re-established by ‘manual intervention’ (e.g. a user-initiated rescan on the Linux Host).
The ‘infinity’ value prevents the PATH from being removed. If the ‘fast_io_fail_tmo 5’ setting is
not present in the multipath.conf file, the ‘infinity’ setting is ignored and the dev_loss_tmo
value defaults to ‘600’ (10 minutes).
Note: Some older kernels of RedHat Enterprise Linux and SUSE Linux Enterprise Server 11 SP2
and earlier do not support the ‘infinity’ value and it will be ignored (and may also post an error
in syslog). In that case, a default value - usually ‘600’ seconds - will be applied instead.
Use the ‘cat’ command to verify that any DataCore Virtual Disks detected by the Linux Host are
using the ‘infinity’ value correctly.
A simple example:
sleshost3:~ # cat /sys/class/fc_remote_ports/rport-*\:*-*/dev_loss_tmo
30
30
2147483647
2147483647
2147483647
30
Note: The value of ‘2147483647’ indicates a device using the infinity setting.
fast_io_fail_tmo
This is required by the ‘dev_loss_tmo infinity’ setting (see previous). Do not use any value
other than ‘5’, otherwise the dev_loss_tmo setting will use a larger, default value (usually ‘600’
seconds).
failback
This adds an extra ‘wait’ period (10 seconds) that helps to prevent unnecessary ‘failback’
attempts to a Virtual Disk whose Storage Source’s Host Access value is still set as Not Allowed.
no_path_retry
Required for any Linux Hosts that are configured for ALUA and want to use the Preferred Server
setting of ‘All’. See Appendix A on page 21 for information regarding the ‘All’ setting.
path_checker
This is a DataCore-required value. No other value should be used.
path_grouping_policy
Note: The ‘failover’ value is unqualified for RHEL 6.5 and greater.
In either case, make sure the Preferred Server setting on the DataCore Server is either set to
'Auto Select' or an explicit Server Name. See Appendix A on page 21 for information regarding
the 'Auto Select' setting.
rr_min_io_rq
This is a DataCore-required value for Linux Hosts that are running kernels newer than 2.6.30.
Older versions must use the option ‘rr_min_io 100’ instead (see next).
rr_min_io
This is a DataCore-required value for Linux Hosts that are running kernels older than 2.6.31.
Newer versions must use the option ‘rr_min_io_rq 100’ instead (see previous).
user_friendly_names
This is an optional setting and simply specifies that the operating system should use the
‘/etc/multipath/bindings’ file to assign a persistent and unique alias to the multipath device, in
the form of mpathn.
If this value is set to 'no' (or omitted completely) the operating system will use a WWID as an
alias for the multipath device instead.
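For example, to enable this in the defaults section of /etc/multipath.conf (a sketch; merge it
with any existing defaults entries):

defaults {
        user_friendly_names yes
}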
Sizing guidelines
Please refer to the documents:
SANsymphony with SAP HANA - Sizing Guidelines
http://datacore.custhelp.com/app/answers/detail/a_id/1619
kpartx-0.4.9-0.105.1.8637.2.PTF.933282.x86_64.rpm
multipath-tools-0.4.9-0.105.1.8637.2.PTF.933282.x86_64.rpm
# Switch to the SAP system administration user for the HANA instance
su <SID>adm
# Change to the instance directory and load the HANA environment
cd /usr/sap/<SID>/HDB<Instance>
source hdbenv.sh
# Enable the required file I/O settings for the DATA volumes
hdbparam --paramset "fileio[DATA].async_write_submit_active=on"
hdbparam --paramset "fileio[DATA].async_write_submit_blocks=all"
exit
partition_*_*__prtype = 5
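For reference, this parameter normally lives in the [storage] section of the SAP HANA global.ini
file alongside the storage connector settings; a minimal sketch (the ha_provider line is an
assumption here, and your SAP documentation takes precedence):

[storage]
ha_provider = hdb_ha.fcClient
partition_*_*__prtype = 5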
Notes: Filesystem entries highlighted in red are those that are mounted on that particular host
when the database is started. Those that are not highlighted in red are still available to the host
but are not mounted.
In the case of a ‘failover event’ the HANA application will determine which of the other hosts will
then mount and use the failed host’s file systems. Because any host can potentially require
access to any of the other hosts’ file systems, all SANsymphony Virtual Disks used for the SAP
HANA data or log filesystems must be served to all SAP HANA hosts – active and standby.
Some of the issues here have been found during DataCore’s own testing but many others are
issues reported by DataCore Software customers, where a specific problem had been identified
and then subsequently resolved.
DataCore cannot be held responsible for incorrect information regarding Linux products. No
assumption should be made that DataCore has direct communication with any of the Linux
vendors regarding the issues listed here, and we always recommend that users contact their
own Linux vendor directly to see if there are any updates or fixes since they were reported to
us.
For ‘Known issues’ for DataCore’s own Software products, please refer to the relevant DataCore
Software Component’s release notes.
However, for RHEL and SLES Hosts this will result in the mkfs command sending additional
‘discard’ commands during the format process, resulting in longer format times. Ubuntu has
not been tested with SCSI UNMAP and so is considered unqualified.
Use the ‘-K’ option while formatting to disable the ‘discard’ commands. Also see:
http://linux.die.net/man/8/mkfs.xfs and http://linux.die.net/man/8/mkfs.ext4
Refer to your own man page to be sure your installation supports this option.
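As an illustration (the device name /dev/mapper/mpatha is a placeholder; note that for
mkfs.ext4 the equivalent of the XFS ‘-K’ option is the ‘-E nodiscard’ extended option):

mkfs.xfs -K /dev/mapper/mpatha
mkfs.ext4 -E nodiscard /dev/mapper/mpatha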
Care must be taken not to completely use up all the SAUs in the Disk Pool; if Ext3 is required,
use a small (e.g. 4MB) SAU size. Other filesystem types do not seem to exhibit this behavior
and use only a few SAUs during filesystem creation.
Manual rescans are needed to update previously failed paths for Ubuntu Hosts
DataCore has not been able to get Ubuntu to automatically re-detect paths to mirrored Virtual
Disks that have failed or have been removed (e.g. after stopping a DataCore Server) and are
then subsequently made available again. Manual intervention is required.
Establish which paths are currently failed:

multipath -ll

and then use one of the rescan commands, for example:

rescan-scsi-bus

This will send a signal to the disk device and ‘force’ the operating system to update the path’s
status properly.
It is up to the Host’s own Operating System or Failover Software to determine which DataCore
Server is its preferred server.
If for any reason the Storage Source on the preferred DataCore Server becomes unavailable,
and the Host Access for the Virtual Disk is set to Offline or Disabled, then the other DataCore
Server will be designated the ‘Active Optimized’ side. The Host will be notified by both
DataCore Servers that there has been an ALUA state change, forcing the Host to re-check the
ALUA state of both DataCore Servers and act accordingly.
If the Storage Source on the preferred DataCore Server becomes unavailable but the Host
Access for the Virtual Disk remains Read/Write (for example, if only the Storage behind the
DataCore Server is unavailable while the FE and MR paths are all connected, or if the Host
physically becomes disconnected from the preferred DataCore Server by a Fibre Channel or
iSCSI cable failure), then the ALUA state will not change for the remaining, ‘Active Non-
optimized’ side. In this case, the DataCore Server will not prevent access to the Host, nor will it
change the way READ or WRITE IO is handled compared to the ‘Active Optimized’ side, but the
Host will still register this DataCore Server’s Paths as ‘Active Non-Optimized’, which may (or
may not) affect how the Host behaves generally.
In the case where the Preferred Server is set to ‘All’, then both DataCore Servers are designated
‘Active Optimized’ for Host IO.
All IO requests from a Host will use all Paths to all DataCore Servers equally, regardless of the
distance that the IO has to travel to the DataCore Server. For this reason, the ‘All’ setting is not
normally recommended. If a Host has to send a WRITE IO to a ‘remote’ DataCore Server (where
the IO Path is significantly more distant than the path to the other, ‘local’ DataCore Server), the
accumulated WAIT times can be significant: the IO must travel across the SAN to the remote
DataCore Server, the remote DataCore Server must mirror the write back to the local DataCore
Server, the mirror write must be acknowledged from the local DataCore Server back to the
remote DataCore Server, and finally the acknowledgement must be sent back to the Host
across the SAN.
The benefits of being able to use all Paths to all DataCore Servers for all Virtual Disks are not
always clear cut. Testing is advised.
So for example, if the Preferred Server is designated as DataCore Server A and the Preferred
Paths are designated as DataCore Server B, then DataCore Server B will be the ‘Active
Optimized’ side, not DataCore Server A.
In a two-node Server group there is usually nothing to be gained by making the Preferred Path
setting different from the Preferred Server setting, and it may also cause confusion when trying
to diagnose path problems, or when redesigning your DataCore SAN with regard to Host IO
Paths.
Where there are three or more Servers in a Server Group, and where one or more of these
DataCore Servers shares Mirror Paths between different DataCore Servers, setting the
Preferred Path makes more sense. For example, if DataCore Server A has two mirrored Virtual
Disks, one with DataCore Server B and one with DataCore Server C, and DataCore Server B also
has a mirrored Virtual Disk with DataCore Server C, then using just the Preferred Server setting
to designate the ‘Active Optimized’ side for the Host’s Virtual Disks becomes more complicated.
In this case the Preferred Path setting can be used to override the Preferred Server setting for a
much more granular level of control.
The smaller the SAU size, the more indexes the Disk Pool driver requires to keep track of the
same amount of allocated storage; e.g. a Disk Pool using a 32MB SAU size potentially requires
four times as many indexes as one using 128MB (the default SAU size).
As SAUs are allocated for the very first time, the Disk Pool needs to update these indexes, and
this may cause a slight delay in IO completion that might be noticeable on the Host. However,
this will depend on a number of factors, such as the speed of the physical disks, the number of
Hosts accessing the Disk Pool and their IO READ/WRITE patterns, and the number of Virtual
Disks in the Disk Pool and their corresponding Storage Profiles.
Therefore, DataCore usually recommends using the default SAU size (128MB), as it is a good
compromise between physical storage allocation and the IO overhead of the initial SAU
allocation index update. Should a smaller SAU size be preferred, the configuration should be
tested to make sure that a potentially increased number of initial SAU allocations does not
impact overall Host performance.
Always refer to your Linux Vendor to determine which file systems are supported for the
fstrim command and/or the 'mount -o discard' option on your version of RHEL or SLES. Note
that using the 'mount -o discard' option may affect Host performance; again, refer to your
Linux Vendor for their recommendations.
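A typical fstrim invocation (the mount point /mnt/vdisk1 is a placeholder for a filesystem
residing on a DataCore Virtual Disk; ‘-v’ reports how many bytes were trimmed):

fstrim -v /mnt/vdisk1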
No additional 'zeroing' of the Physical Disk or 'scanning' of the Disk Pool is required.
Where all-zero writes are detected at physically 'adjacent' block addresses, the Disk Pool driver
will 'merge' these requests together in the list so as to keep its size as small as possible. Also, as
entire 'all-zeroed' SAUs are re-assigned back to the Disk Pool, the record of all their address
spaces is removed from the in-memory list, making space available for future all-zero writes to
other SAUs that are still allocated.
However, if the write I/O patterns of the Hosts mean that the Disk Pool receives all-zero writes
to many non-adjacent block addresses, the list will require more space to keep track of them
than it would for all-adjacent block addresses. In extreme cases, where the in-memory list can
no longer hold any more new all-zero writes (because all the system memory allocated to the
Automatic Reclamation feature has been used), the Disk Pool driver will discard the oldest
records of all-zero writes to accommodate newer ones.
Likewise, if a DataCore Server is rebooted for any reason, the in-memory list is completely lost,
and any knowledge of SAUs that were already partially detected as having been written with
all-zeroes will no longer be remembered.
In both of these cases this can mean that, over time, even though an SAU may technically have
been completely overwritten with all-zero writes, the Disk Pool driver does not have a record
covering the entire address space of that SAU in its in-memory list. The SAU will therefore not
be made available to the Disk Pool but will remain allocated to the Virtual Disk until future
all-zero writes happen to re-write the same address spaces that the Disk Pool driver previously
forgot. In these scenarios, a Manual Reclamation will force the Disk Pool to re-read all SAUs and
perhaps detect those now-missing all-zero address spaces.
See the section 'Manual Reclamation' on the next page for more information.
Then use the dd command to fill all of the unused file system space with 'all-zero' write I/O.
This I/O will then be detected by SANsymphony's Automatic Reclamation function (see previous
page for more details).
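A minimal sketch (the mount point and file name are placeholders; dd will stop with a 'no space
left' error once the free space is filled, which is expected here):

dd if=/dev/zero of=/mnt/vdisk1/zerofill bs=1M
sync
rm /mnt/vdisk1/zerofill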
Note that manual reclamation will create additional 'read' I/O on the Storage Array used by the
Disk Pool; as this process runs at 'low priority', it should not interfere with normal I/O
operations. However, caution is advised, especially when scripting the manual reclamation
process.
Manual Reclamation may still be required even when Automatic Reclamation has taken place
(see the 'Automatic Reclamation' section on the previous page for more information).
For example, if the Host has written the data in such a way that every allocated SAU contains a
small amount of non-zero block data, even if the total amount of data is significantly less than
the total amount of SAU allocation, then no SAUs can be reclaimed.
It may be possible, in some cases, to use the Host Operating System’s own defragmentation
tools to force the non-zero block data to be moved into a more contiguous pattern, leaving the
‘end’ of the DataCore LUN with SAUs that no longer hold non-zero data and can then be
reclaimed. However, care should be taken that the act of defragmenting the data does not
itself cause more SAU allocation as the block data is moved around during the re-organization.
DataCore Software can offer no guarantees.
2016
October
Updated
Linux compatibility lists
SUSE Linux Enterprise Server 11.0 SP 4
SUSE Linux Enterprise Server 12.0 SP 1
Both of these versions are now 'Supported' using ALUA with Fibre Channel Front-end connections. Note: iSCSI
Front End connections are still considered 'Not Qualified'.
July
Updated
This document has been reviewed for SANsymphony 10.0 PSP 5.
No other updates were required.
April
Added
Linux compatibility list
Red Hat Enterprise Linux Versions 7.1 – 7.2
Red Hat Enterprise Linux Versions 6.7 – 6.8
SUSE Linux Enterprise Server 12 (no Service Pack) and 12 Service Pack 1
January
Updated
Linux compatibility list
Red Hat Enterprise Linux Version 5.5 – 5.10
Previously this version was listed as 'not qualified' with ALUA enabled Hosts and SANsymphony-V 9.x. This has now
been changed to 'Not Supported with Mirrored or Dual Virtual Disks'.
2015
November
Added
Linux Applications – SAP HANA
This includes an example of a 3-node SAP HANA configuration.
Updated
SANsymphony-V 8.x and all versions of SANsymphony 9.x before PSP4 Update 4 are now ‘End of Life’. Please see:
End of Life Notifications http://datacore.custhelp.com/app/answers/detail/a_id/1329
September
Added
Linux Applications – SAP HANA
A new section has been added with settings specific to SAP HANA.
Updated
The setting dev_loss_tmo infinity requires the additional setting fast_io_fail_tmo to be present in the
multipath.conf file; this was previously omitted from the SUSE Linux Enterprise Server requirements. The
fast_io_fail_tmo value is now set to 5; this was previously set to 30 in the Red Hat Enterprise Linux requirements.
June
Added
Linux compatibility list
Ubuntu 14.04 LTS has now been qualified with SANsymphony-V 10.x.
Known Issues
Ubuntu requires manual intervention to redetect failed paths to a DataCore Server.
April
Added
Known Issues
All RHEL or SUSE Linux specific ‘known issues’ with SANsymphony-V will now be documented here.
March
Updated
Linux compatibility lists
SUSE Linux Enterprise Server 11.0 with Service Pack 3 is now qualified with SANsymphony-V 10.x using ALUA-only
settings.
February
Updated
Linux compatibility lists
Red Hat Enterprise Linux 7.0 is now qualified with SANsymphony-V 10.x using ALUA-only settings.
2014
November
Updated
Linux compatibility lists
Updated the table for all Red Hat Enterprise Linux and SUSE Linux Enterprise Server Versions
July
Updated
Linux compatibility lists
Updated the table for all Red Hat Enterprise Linux Versions 5.0 - 5.10. It was previously listing versions 5.0 – 5.2
only.
Host Settings
Disk Timeouts – Added an example for use of the cat command to determine the current SCSI Disk Timeout and a
note that some versions of Linux may revert to the default settings after a reboot and how to resolve this.
June
Updated
Linux compatibility lists
Updated the table for SANsymphony-V 10.x
May
Added
Linux compatibility lists
Red Hat Enterprise Linux – 6.5 with ALUA is now qualified for use with SANsymphony-V 9.x
Updated
Multipath Configuration Settings
Red Hat Enterprise Linux only
Added a new Multipath.conf entry requirement - fast_io_fail_tmo - which is required to support the dev_loss_tmo
value of ‘infinity’; otherwise the value may default back to 10 minutes instead. NB: Check with your Vendor to make
sure your specific Linux Kernel version supports these options.
Added an optional Multipath.conf entry user_friendly_names which will create simpler, easier to read, disk device
names for users to work with. NB: Check with your Vendor to make sure your specific Linux Kernel version
supports this option.
April
This document combines all of DataCore’s Linux-related information from older Technical Bulletins into a single
document including:
Added
Linux compatibility lists
Appendix A
This section gives more detail on the Preferred Server and Preferred Path settings and how they may affect
a Host.
Appendix B
This section incorporates information regarding "Reclaiming Space in Disk Pools" (from Technical Bulletin 16) that
is specific to Linux Hosts.
Updated
Host Settings - SUSE Linux Enterprise Server with/without ALUA enabled
dev_loss_tmo - For SLES 11 SP2 or greater, please make sure that multipath-tools-0.4.9-0.70.72.1 or greater is
installed for the ‘infinity’ setting to work properly.
Improved explanations to most of the required Host Settings and DataCore Server Settings generally.
January 2013
Updated the section for the multipath.conf sections specifically for the polling_interval line.
July 2012
Added SANsymphony-V 9.x. Updated the ‘Notes’ section for the multipath.conf sections specifically for the
getuid_callout line.
May 2012
Updated DataCore Server and Host minimum requirements. Removed all references to ‘End of Life’ SANsymphony
and SANmelody versions that are no longer supported as of December 31 2012. Updated the configuration
settings and entries for /etc/multipath.conf (with additional values and explanations) for all DataCore products.
Users should re-check their existing configurations and make any appropriate changes. Removed all (out of date)
step-by-step instructions on how to manage and configure/format disk devices on the Host.
March 2011
Added SANsymphony-V
December 2009
Initial Publication
Technical Bulletin 2b: "Redhat Enterprise Linux 6.x Hosts"
July 2013
Removed all references to RHEL version 5.3 as this was never tested with ALUA-enabled hosts and so may cause
confusion. Please use Technical Bulletin 2a for this earlier version. This Bulletin is now only for RHEL versions 6.x.
Updated Host Requirements. Added additional information in the ‘Notes’ section of the multipath.conf section.
June 2013
Improved blacklist_exceptions example for multipath.conf.
April 2013
Removed all references to SANmelody as this is now ‘End of Life’ as of December 31 2012. Updated settings for
multipath.conf including additional settings and explanations.
March 2013
RHEL versions 6.0, 6.1, 6.2 and 6.3 explicitly stated.
January 2013
Updated the section for the multipath.conf sections, specifically for the polling_interval line. This was previously
stated to go in the devices section. This was incorrect; it should be added to the defaults section instead.
July 2013
This Bulletin is now only for SLES versions 11.x. Updated Host Requirements. Completely updated what is
considered ‘qualified’, ‘not qualified’ and ‘not supported’ in this document. Added additional information in the
‘Notes’ section of the multipath.conf section.
June 2013
Improved blacklist_exceptions example for multipath.conf.
April 2013
Removed all references to SANmelody as this is now ‘End of Life’ as of December 31 2012. Updated settings for
multipath.conf including additional settings and explanations.
February 2013
Added notes for SLES 11 SP2, including an additional multipath.conf setting. Updated that SLES 11 SP1 is no longer
supported as this was found not to work correctly in all failure conditions. Users on SLES 11 SP1 should update to SP2.
January 2013
Updated the section for the multipath.conf sections, specifically for the polling_interval line. This was previously
stated to go in the devices section. This was incorrect; it should be added to the defaults section instead.
ALTHOUGH THE MATERIAL PRESENTED IN THIS DOCUMENT IS BELIEVED TO BE ACCURATE, IT IS PROVIDED “AS IS” AND USERS MUST TAKE ALL
RESPONSIBILITY FOR THE USE OR APPLICATION OF THE PRODUCTS DESCRIBED AND THE INFORMATION CONTAINED IN THIS DOCUMENT.
NEITHER DATACORE NOR ITS SUPPLIERS MAKE ANY EXPRESS OR IMPLIED REPRESENTATION, WARRANTY OR ENDORSEMENT REGARDING, AND
SHALL HAVE NO LIABILITY FOR, THE USE OR APPLICATION OF ANY DATACORE OR THIRD PARTY PRODUCTS OR THE OTHER INFORMATION
REFERRED TO IN THIS DOCUMENT. ALL SUCH WARRANTIES (INCLUDING ANY IMPLIED WARRANTIES OF MERCHANTABILITY, NON-
INFRINGEMENT, FITNESS FOR A PARTICULAR PURPOSE AND AGAINST HIDDEN DEFECTS) AND LIABILITY ARE HEREBY DISCLAIMED TO THE
FULLEST EXTENT PERMITTED BY LAW.
No part of this document may be copied, reproduced, translated or reduced to any electronic medium or machine-readable form without
the prior written consent of DataCore Software Corporation.