Front cover
Implementing the IBM Storwize V7000 Unified
Jure Arzensek
Nancy Kinney
Daniel Owen
Jorge Quintal
Jon Tate
Consolidates storage and file serving workloads into an integrated system
Simplifies management and reduces cost
Integrated support for IBM Real-time Compression
ibm.com/redbooks
International Technical Support Organization
Implementing the IBM Storwize V7000 Unified
December 2013
SG24-8010-01
Copyright International Business Machines Corporation 2013. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule
Contract with IBM Corp.
Second Edition (December 2013)
This edition applies to Version 7.1.0.5 of the IBM Storwize V7000 code and Version v1.4.2.0-27-2273 of the
IBM Storwize V7000 File Module code.
Note: Before using this information and the product it supports, read the information in Notices on
page xi.
Contents
Notices . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xi
Trademarks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xii
Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Authors. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii
Now you can become a published author, too! . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Comments welcome. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xv
Stay connected to IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xvi
Chapter 1. Introduction. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 A short history lesson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.2 About the rest of this book . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.3 Latest release highlights . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.1 USB Access Recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.2 Call-home installation enhancement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
1.3.3 SSH tunnel from the IFS Management Node to the Storwize V7000 Node . . . . . . 4
Chapter 2. Terminology and file serving concepts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
2.1 Terminology for storage and file services . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.1 Terminology for random access mass storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.1.2 Terminology for file systems and file sharing, file access, and file transfer . . . . . . 7
2.2 File serving with file sharing and file transfer protocols. . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 The Network File System protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.2 The Server Message Block protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.2.3 The File Transfer Protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
2.2.4 The Hypertext Transfer Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.5 The Secure Copy Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.2.6 The Secure Shell File Transfer Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
Chapter 3. Architecture and functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
3.1 High-level overview of Storwize V7000 Unified. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.1.1 Storwize V7000 Unified storage subsystem: the Storwize V7000 . . . . . . . . . . . . 22
3.1.2 Storwize V7000 Unified file server subsystem: The file modules . . . . . . . . . . . . . 22
3.2 Storwize V7000 Unified system configuration. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.2.1 Storwize V7000 Unified storage subsystem configuration . . . . . . . . . . . . . . . . . . 23
3.2.2 Storwize V7000 Unified file server subsystem configuration . . . . . . . . . . . . . . . . 24
3.3 Storwize V7000 Unified storage functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
3.4 Storwize V7000 Unified file serving related functionality. . . . . . . . . . . . . . . . . . . . . . . . 26
3.4.1 Storwize V7000 Unified file sharing and file transfer protocols. . . . . . . . . . . . . . . 27
3.4.2 Storwize V7000 Unified NFS protocol support . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.4.3 Storwize V7000 Unified SMB and CIFS protocol support . . . . . . . . . . . . . . . . . . . 29
3.4.4 Storwize V7000 Unified cluster manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
3.4.5 Storwize V7000 Unified product limits. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
Chapter 4. Access control for file serving clients . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
4.1 Authentication and authorization in general . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.1 UNIX authentication and authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.2 Windows authentication and authorization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
4.1.3 UNIX and Windows authentication and authorization in Storwize V7000 Unified. 39
4.2 Methods used for access control . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2.1 Kerberos . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2.2 User names and user IDs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2.3 Group names and group identifiers in UNIX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2.4 Resource names and security identifiers in Windows. . . . . . . . . . . . . . . . . . . . . . 40
4.2.5 UID/GID/SID mapping in the Storwize V7000 Unified. . . . . . . . . . . . . . . . . . . . . . 40
4.2.6 Directory services in general. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2.7 Windows NT 4.0 Domain Controller and SAMBA Primary Domain Controller . . . 41
4.2.8 Lightweight Directory Access Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2.9 Microsoft Active Directory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2.10 Services for UNIX and Identity Management for UNIX. . . . . . . . . . . . . . . . . . . . 41
4.2.11 Network Information Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.12 Access control list in general. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.13 GPFS NFSv4 ACLs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.14 POSIX bits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.15 ACL mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.3 Access control with Storwize V7000 Unified . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.1 Authentication methods supported . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3.2 Active Directory authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.3 AD with SFU authentication or with Identity Management for UNIX. . . . . . . . . . . 44
4.3.4 SAMBA primary domain controller authentication. . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.5 LDAP authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.3.6 Network Information Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.4 Access control limitations and considerations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.4.1 Authentication limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
4.4.2 Authorization limitations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
Chapter 5. Storage virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
5.1 User requirements that drive storage virtualization. . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2 Storage virtualization terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
5.2.1 Realizing the benefits of Storwize V7000 Unified storage virtualization . . . . . . . . 53
5.2.2 Using internal physical disk drives in the Storwize V7000 Unified . . . . . . . . . . . . 53
5.2.3 Using external physical disk drives in the Storwize V7000 Unified. . . . . . . . . . . . 55
5.3 Summary. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
Chapter 6. NAS use cases and differences: SONAS and Storwize V7000 Unified . . . 57
6.1 Use cases for Storwize V7000 Unified . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.1.1 Unified storage with both file and block access . . . . . . . . . . . . . . . . . . . . . . . . . . 58
6.1.2 Multi-user file sharing with centralized snapshots and backup . . . . . . . . . . . . . . . 59
6.1.3 Availability and data protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6.1.4 Information life-cycle management (ILM), hierarchical storage management (HSM),
and archiving solution. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
6.2 Storwize V7000 Unified and SONAS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2.1 SONAS brief overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
6.2.2 Implementation differences between Storwize V7000 Unified and SONAS . . . . . 63
Chapter 7. IBM General Parallel File System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
7.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.2 GPFS technical concepts and architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
7.2.1 Split brain situations and GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
7.2.2 GPFS file system pools and Storwize V7000 storage pools. . . . . . . . . . . . . . . . . 68
7.2.3 File system pools in GPFS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
7.2.4 GPFS file sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
7.2.5 GPFS parallel access and byte-range locking . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
7.2.6 GPFS synchronous internal replication. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
7.2.7 Active Cloud Engine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
7.2.8 GPFS and hierarchical storage management (HSM) . . . . . . . . . . . . . . . . . . . . . . 73
7.2.9 GPFS snapshots. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
7.2.10 GPFS quota management . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Chapter 8. Copy services overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75
8.1 Storage copy services of the Storwize V7000 Unified . . . . . . . . . . . . . . . . . . . . . . . . . 76
8.1.1 FlashCopy for creating point-in-time copies of volumes . . . . . . . . . . . . . . . . . . . . 76
8.1.2 Metro Mirror and Global Mirror for remote copy of volumes . . . . . . . . . . . . . . . . . 78
8.2 File system level copy services of the Storwize V7000 Unified file modules . . . . . . . . 79
8.2.1 Snapshots of file systems and file sets . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
8.2.2 Asynchronous replication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
Chapter 9. GUI and CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
9.1 Graphical user interface setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
9.1.1 Web server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
9.1.2 Management GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
9.1.3 Web browser and settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
9.1.4 Starting the browser connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
9.2 Command-line interface setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
9.3 Using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
9.3.1 Menus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
9.4 Using the CLI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
9.4.1 File commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
9.4.2 Block Commands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
Chapter 10. Planning for implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
10.1 IBM SONAS/V7000 Unified Questionnaire . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
10.1.1 Opportunity details . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
10.1.2 Capacity requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
10.1.3 Performance requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
10.1.4 Client communication protocols . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
10.1.5 Data protection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
10.1.6 Data center requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
10.1.7 Storwize V7000 Unified specific questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114
10.1.8 Service requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
10.2 Planning steps sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
10.2.1 Perform the physical hardware planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
10.2.2 Define the environment and services needed. . . . . . . . . . . . . . . . . . . . . . . . . . 117
10.2.3 Plan for system implementation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
10.2.4 Plan for data migration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
10.3 Support, limitations, and tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
10.4 Storwize V7000 Unified advanced features and functions . . . . . . . . . . . . . . . . . . . . 119
10.4.1 Licensing for advanced functions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
10.4.2 External virtualization of SAN-attached back-end storage . . . . . . . . . . . . . . . . 119
10.4.3 Remote Copy Services (for block I/O access only). . . . . . . . . . . . . . . . . . . . . . 120
10.4.4 FlashCopy (block volumes only) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
10.4.5 General GPFS recommendation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
10.4.6 GPFS internal synchronous replication (NSD failure groups) . . . . . . . . . . . . . . 121
10.4.7 Manage write-caching options in Storwize V7000 Unified and on client side . . 121
10.4.8 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
10.5 Miscellaneous configuration planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
10.5.1 Set up local users to manage the Storwize V7000 Unified system. . . . . . . . . . 122
10.5.2 Define call home and event notifications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
10.5.3 Storage pool layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
10.6 Physical hardware planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
10.6.1 Plan for space and layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
10.6.2 Planning for Storwize V7000 Unified environment . . . . . . . . . . . . . . . . . . . . . . 125
10.7 System implementation planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
10.7.1 Configuration details and settings that are required for setup. . . . . . . . . . . . . . 127
10.7.2 Configuration options for file access only . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
10.7.3 Configuration options for Block I/O access only . . . . . . . . . . . . . . . . . . . . . . . . 137
Chapter 11. Implementation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
11.1 Process overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
11.2 Task checklist . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140
11.3 Hardware unpack, rack, and cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
11.3.1 Preparation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
11.3.2 Review packing slips and check components. . . . . . . . . . . . . . . . . . . . . . . . . . 142
11.3.3 Confirm environmentals and planning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
11.3.4 Rack controller enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
11.3.5 Rack expansion enclosures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144
11.3.6 Rack file modules . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
11.3.7 Cabling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
11.4 Power on and check out . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
11.4.1 Network. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
11.4.2 Power on expansions and controllers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
11.4.3 Power on file modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
11.5 Install the latest software. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
11.5.1 Determine current firmware and code levels. . . . . . . . . . . . . . . . . . . . . . . . . . . 152
11.5.2 Preparation for reload . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
11.5.3 Reinstall the software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
11.6 Initialize the system. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
11.6.1 Configure USB key . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
11.6.2 Initialize the Storwize V7000 controller . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
11.6.3 Initialize the file modules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
11.7 Base configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
11.7.1 Connect to the graphical user interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
11.7.2 Easy Setup wizard . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
11.7.3 Set up periodic configuration backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177
11.8 Manual setup and configuration changes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
11.8.1 System names . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
11.8.2 System licenses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 178
11.8.3 Support . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
11.9 Network . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
11.9.1 Public Networks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
11.9.2 Service ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
11.9.3 Internet Small Computer System Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
11.9.4 Fibre Channel ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
11.9.5 Fibre Channel ports . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
11.10 Alerting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
11.10.1 Email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
11.10.2 SNMP . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189
11.10.3 Syslog Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 190
11.11 Directory Services and Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
11.11.1 Domain Name System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
11.11.2 Authentication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 191
11.12 Health check . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
11.13 User security . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 193
11.13.1 Change passwords . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
11.13.2 Create cluster users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
11.13.3 Create local users by using local authentication for NAS access . . . . . . . . . . 197
11.14 Storage controller configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199
11.14.1 External SAN requirements. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.14.2 Configure storage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.15 Block configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200
11.15.1 Copy Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
11.16 File Services configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
11.16.1 File service components . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205
11.16.2 File systems examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 207
Chapter 12. Antivirus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 215
12.1 Overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 216
12.2 Scanning individual files . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
12.3 Scheduled scan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 217
12.4 Set up and configure antivirus. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 218
12.4.1 Antivirus setup steps. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 219
Chapter 13. Performance and monitoring. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 225
13.1 Tuning Storwize V7000 Unified for performance . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
13.1.1 Disk and file-system configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 226
13.1.2 Network Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
13.1.3 Client Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 228
13.2 Monitoring the performance of Storwize V7000 Unified . . . . . . . . . . . . . . . . . . . . . . 229
13.2.1 Graphical Performance Monitoring . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 229
13.2.2 Command Line Interface (CLI) Performance Monitoring . . . . . . . . . . . . . . . . . 231
13.2.3 Tivoli Storage Productivity Center. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 231
13.3 Identifying and resolving performance problem . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
13.3.1 Health Status . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 232
13.3.2 Network Issues . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
13.3.3 High Latencies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 233
Chapter 14. Backup and recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
14.1 Cluster backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
14.1.1 Philosophy for file and block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 236
14.1.2 Storage enclosure backup (Storwize V7000) . . . . . . . . . . . . . . . . . . . . . . . . . . 236
14.1.3 File module backup. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 237
14.2 Cluster recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
14.2.1 Storage enclosure recovery (V7000) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 238
14.3 Data backup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
14.3.1 Data backup philosophy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
14.3.2 Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 240
14.3.3 Network Data Management Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 246
14.4 Data recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
14.4.1 Tivoli Storage Manager . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 253
14.4.2 Asynchronous data recovery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
14.4.3 Network Data Management Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 254
Chapter 15. Troubleshooting and maintenance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 255
15.1 Maintenance philosophy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
15.2 Event logs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 256
15.2.1 Storwize V7000 storage controller event log (block). . . . . . . . . . . . . . . . . . . . . 258
15.2.2 V7000 Unified File Module Event log file . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 261
15.2.3 Block . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 262
15.2.4 File . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 264
15.2.5 Working with compressed volumes out of space conditions. . . . . . . . . . . . . . . 266
15.3 Collect support package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 270
15.4 Information center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 272
15.4.1 Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
15.4.2 Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
15.4.3 Search results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 273
15.4.4 Offline information center . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
15.5 Call home and alerting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
15.5.1 Simple Network Management Protocol. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
15.5.2 Email . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 274
15.5.3 Call home . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
15.6 IBM Support Remote Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 275
15.7 Changing parts . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
15.8 Preparing for recovery. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 276
15.8.1 Superuser password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
15.8.2 Admin password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
15.8.3 Root password . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
15.8.4 Service IP addresses . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
15.8.5 Test GUI connection to the Storwize V7000 storage enclosure . . . . . . . . . . . . 277
15.8.6 Assist On-site . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 277
15.8.7 Backup config saves . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
15.9 Software . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
15.9.1 Software package . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 278
15.9.2 Software upgrade . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 280
Chapter 16. Real-time Compression in the IBM Storwize V7000 Unified. . . . . . . . . . 287
16.1 General considerations: Compression and block volume compression use cases. . 288
16.2 Compressed file system pool configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 288
16.2.1 Selectively compressed file system with two pools. . . . . . . . . . . . . . . . . . . . . . 288
16.2.2 Configuring a selective compressed file system. . . . . . . . . . . . . . . . . . . . . . . . 290
16.2.3 Compression rules by file set . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 296
16.2.4 Use case for compressing and uncompressing existing data. . . . . . . . . . . . . . 299
16.3 Capacity Planning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 315
16.3.1 Planning capacity with the NAS Compression Estimation Utility . . . . . . . . . . . 315
16.3.2 Installing and using the NAS Compression Estimation Utility . . . . . . . . . . . . . . 316
16.3.3 Using NAS Compression Estimation Utility. . . . . . . . . . . . . . . . . . . . . . . . . . . . 316
16.3.4 Capacity planning for selectively compressed file systems . . . . . . . . . . . . . . . 317
16.4 Compression metrics for file systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 320
16.5 Managing compressed file systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
16.5.1 Adding a compressed pool to a file system. . . . . . . . . . . . . . . . . . . . . . . . . . . . 322
16.5.2 Making a selectively compressed file system uncompressed. . . . . . . . . . . . . . 325
16.6 Compression saving reporting . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
16.6.1 Reporting basic overview . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
16.6.2 Reporting of compression in the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 328
16.6.3 Compression reporting by using the CLI. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 332
Related publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
IBM Redbooks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Other publications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 335
Online resources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 336
Help from IBM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 337
Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 339
Notices
This information was developed for products and services offered in the U.S.A.
IBM may not offer the products, services, or features discussed in this document in other countries. Consult
your local IBM representative for information on the products and services currently available in your area. Any
reference to an IBM product, program, or service is not intended to state or imply that only that IBM product,
program, or service may be used. Any functionally equivalent product, program, or service that does not
infringe any IBM intellectual property right may be used instead. However, it is the user's responsibility to
evaluate and verify the operation of any non-IBM product, program, or service.
IBM may have patents or pending patent applications covering subject matter described in this document. The
furnishing of this document does not grant you any license to these patents. You can send license inquiries, in
writing, to:
IBM Director of Licensing, IBM Corporation, North Castle Drive, Armonk, NY 10504-1785 U.S.A.
The following paragraph does not apply to the United Kingdom or any other country where such
provisions are inconsistent with local law: INTERNATIONAL BUSINESS MACHINES CORPORATION
PROVIDES THIS PUBLICATION "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR
IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT,
MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some states do not allow disclaimer of
express or implied warranties in certain transactions, therefore, this statement may not apply to you.
This information could include technical inaccuracies or typographical errors. Changes are periodically made
to the information herein; these changes will be incorporated in new editions of the publication. IBM may make
improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time
without notice.
Any references in this information to non-IBM websites are provided for convenience only and do not in any
manner serve as an endorsement of those websites. The materials at those websites are not part of the
materials for this IBM product and use of those websites is at your own risk.
IBM may use or distribute any of the information you supply in any way it believes appropriate without incurring
any obligation to you.
Any performance data contained herein was determined in a controlled environment. Therefore, the results
obtained in other operating environments may vary significantly. Some measurements may have been made
on development-level systems and there is no guarantee that these measurements will be the same on
generally available systems. Furthermore, some measurements may have been estimated through
extrapolation. Actual results may vary. Users of this document should verify the applicable data for their
specific environment.
Information concerning non-IBM products was obtained from the suppliers of those products, their published
announcements or other publicly available sources. IBM has not tested those products and cannot confirm the
accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the
capabilities of non-IBM products should be addressed to the suppliers of those products.
This information contains examples of data and reports used in daily business operations. To illustrate them
as completely as possible, the examples include the names of individuals, companies, brands, and products.
All of these names are fictitious and any similarity to the names and addresses used by an actual business
enterprise is entirely coincidental.
COPYRIGHT LICENSE:
This information contains sample application programs in source language, which illustrate programming
techniques on various operating platforms. You may copy, modify, and distribute these sample programs in
any form without payment to IBM, for the purposes of developing, using, marketing or distributing application
programs conforming to the application programming interface for the operating platform for which the sample
programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore,
cannot guarantee or imply reliability, serviceability, or function of these programs.
Trademarks
IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines
Corporation in the United States, other countries, or both. These and other IBM trademarked terms are
marked on their first occurrence in this information with the appropriate symbol (® or ™), indicating US
registered or common law trademarks owned by IBM at the time this information was published. Such
trademarks may also be registered or common law trademarks in other countries. A current list of IBM
trademarks is available on the Web at http://www.ibm.com/legal/copytrade.shtml
The following terms are trademarks of the International Business Machines Corporation in the United States,
other countries, or both:
Active Cloud Engine
AFS
AIX
BladeCenter
DS4000
Easy Tier
FlashCopy
Global Technology Services
GPFS
IBM
PartnerWorld
PureFlex
Real-time Compression
Redbooks
Redbooks (logo)
Smarter Planet
Storwize
System Storage
System x
Tivoli
XIV
z/OS
The following terms are trademarks of other companies:
Intel, Intel Xeon, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks
of Intel Corporation or its subsidiaries in the United States and other countries.
Linux is a trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the
United States, other countries, or both.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Other company, product, or service names may be trademarks or service marks of others.
Preface
In this IBM Redbooks publication we introduce the IBM Storwize V7000 Unified
(V7000U). Storwize V7000 Unified is a virtualized storage system designed to consolidate
block and file workloads into a single storage system. Advantages include simplicity of
management, reduced cost, highly scalable capacity, performance, and high availability.
Storwize V7000 Unified storage also offers improved efficiency and flexibility through built-in
solid-state drive (SSD) optimization, thin provisioning, IBM Real-time Compression, and
nondisruptive migration of data from existing storage. The system can virtualize and reuse
existing disk systems offering a greater potential return on investment.
We suggest that you familiarize yourself with the following books to get the most from this
publication:
Implementing the IBM Storwize V7000 V6.3, SG24-7938
Implementing the IBM System Storage SAN Volume Controller V6.3, SG24-7933
Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
SONAS Implementation Guide and Best Practices Guide, SG24-7962
SONAS Concepts, Architecture, and Planning Guide, SG24-7963
Authors
Jure Arzensek is an Advisory IT Specialist for IBM Slovenia
and works as a PFE for the CEEMEA Level 2 team, supporting
PureFlex and IBM BladeCenter products. He has been with
IBM since 1995 and has worked in various technical support
and technical education roles in EMEA and CEE. Jure holds a
degree in Computer Science from the University of Ljubljana.
His other areas of expertise include IBM System x servers,
SAN, SVC, Storwize V7000, System Storage DS3000, IBM
DS4000, and DS5000 products and network operating
systems for the Intel platform. He has co-authored thirteen
other IBM Redbooks publications.
Nancy Kinney worked as an IBM US Remote Technical
Support Engineer for IBM Global Technology Services in the
Austin AIX/IBM System Storage center where she acquired
end-to-end experience in troubleshooting FC technologies.
She holds an IBM Midrange Specialist Certification as well as
NetApp/N-Series NCDA Certification. She is currently an
Infrastructure Architect. She has a wide range of experience
working with FC Storage technologies from multiple vendors as
well as working with multipathing drivers for OSs attaching to
storage, and networking technologies.
Daniel Owen is the Performance Architect for Storwize V7000
Unified. Prior to joining STG System Storage he worked within
POWER Systems, developing technology to improve the
performance of applications on POWER/AIX. Daniel is a
Chartered Engineer who has over a decade's worth of
experience working on the performance of computer systems.
Jorge Quintal is a Storage Managing Consultant currently
providing Development Support for IBM Real-time
Compression. He joined IBM through the acquisition of
Sequent Computer Systems in 1999 and during his time at
IBM, he has worked for Storage Lab Services as one of the
original members working with SAN File System, SVC,
network-attached storage (NAS), and as lead for N-Series
services development and implementations. Jorge also worked
for an extended period as an IBM XIV Technical Advisor.
Jon Tate is a Project Manager for IBM System Storage SAN
Solutions at the International Technical Support Organization
(ITSO), San Jose Center. Before joining the ITSO in 1999, he
worked in the IBM Technical Support Center, providing Level 2
support for IBM storage products. Jon has 27 years of
experience in storage software and management, services,
and support, and is both an IBM Certified IT Specialist and an
IBM SAN Certified Specialist. He is also the UK Chairman of
the Storage Networking Industry Association.
This book was produced by a team of specialists working at Brocade Communications
Systems, San Jose; IBM Manchester Labs, UK; and the IBM International Technical Support
Organization, San Jose Center.
Previous authors:
Andreas Baer
Nitzan Iron
Tom Jahn
Paul Jenkin
Jorge Quintal
Bosmat Tuv-El
We extend our thanks to the following people for their contributions to this project, including
the development and PFE teams in Hursley:
Robin Findlay
Carlos Fuente
Geoff Lane
Andrew Martin
Cameron McAllister
Paul Merrison
Steve Randle
Matt Smith
Barry Whyte
Muhammad Zubair
Chris Canto
Peter Eccles
IBM Hursley
Duane Bolland
Jackson Shea
IBM Beaverton
Norm Bogard
IBM Orlando
Chris Saul
IBM San Jose
Sangam Racherla
IBM ITSO
Achim Christ
Nils Haustein
Michael Jahn
Thomas Luther
Alexander Saupp
IBM Germany
Special thanks to the Brocade staff for their unparalleled support of this residency in terms of
equipment and support in many areas:
Silviano Gaona
Brian Steffler
Marcus Thordal
Jim Baldyga
Brocade Communications Systems
Now you can become a published author, too!
Here's an opportunity to spotlight your skills, grow your career, and become a published
author, all at the same time! Join an ITSO residency project and help write a book in your
area of expertise, while honing your experience using leading-edge technologies. Your efforts
will help to increase product acceptance and customer satisfaction, as you expand your
network of technical contacts and relationships. Residencies run from two to six weeks in
length, and you can participate either in person or as a remote resident working from your
home base.
Find out more about the residency program, browse the residency index, and apply online at:
ibm.com/redbooks/residencies.html
Comments welcome
Your comments are important to us!
We want our books to be as helpful as possible. Send us your comments about this book or
other IBM Redbooks publications in one of the following ways:
Use the online Contact us review Redbooks form found at:
ibm.com/redbooks
Send your comments in an email to:
redbooks@us.ibm.com
Mail your comments to:
IBM Corporation, International Technical Support Organization
Dept. HYTD Mail Station P099
2455 South Road
Poughkeepsie, NY 12601-5400
Stay connected to IBM Redbooks
Find us on Facebook:
http://www.facebook.com/IBMRedbooks
Follow us on Twitter:
http://twitter.com/ibmredbooks
Look for us on LinkedIn:
http://www.linkedin.com/groups?home=&gid=2130806
Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks
weekly newsletter:
https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm
Stay current on recent Redbooks publications with RSS Feeds:
http://www.redbooks.ibm.com/rss.html
Chapter 1. Introduction
The IBM Storwize V7000 Unified integrates the serving of storage and file related services,
such as file sharing and file transfer capabilities, in one system. The Storwize V7000 Unified
can provide storage system virtualization as well, using the mature virtualization capabilities
of the IBM SAN Volume Controller (SVC). It is an integrated storage server, storage
virtualization, and file server appliance.
1.1 A short history lesson
IT infrastructure concepts and products advance and change over time, adjusting to changing
requirements and tasks at hand. For instance, in many computing environments, a
centralized approach with a strictly specified design and infrastructure components has been
superseded by a client/server approach, with a decentralized and less proprietary, more
interoperable infrastructure. This kind of infrastructure adapts more easily to new concepts
and tasks.
As always, IT architects have to work with the concepts, technology, and designs that are
available at a given point in time. For instance, it used to be the case that the servers in
client/server environments could use only internal storage. This changed with the dawn of
external Redundant Array of Independent Disks (RAID) and RAID storage systems. If we look
at storage as a service, this previously internal-only service was now out-tasked to a
specialized device, enabling new infrastructure concepts.
The serving of storage from a central element in the infrastructure to many storage clients,
and the implementation of high availability by mirroring data to two independent storage
servers, has now become more prevalent in the industry. These specialized storage servers
are, in essence, storage server appliances. What was simply called a server, because it
housed all elements in one device and provided a service to clients, now became a client
itself: a storage client. Such clients use the storage service provided by the storage servers
to build their own added-value services on top.
Servers were increasingly used for specialized tasks, and over time the hardware and
software were adapted to support this specialization to a certain degree.
One of these tasks was organizing files in file systems and making the file system space and
the files themselves accessible by file sharing clients. These adapted servers are known as
file servers. With external storage, these devices are now storage clients in one aspect and
file servers in another. They use a lower-level service (storage) and provide a higher-level
service (file serving).
To enhance the functionality, availability, disaster tolerance, and other aspects of a file serving
subinfrastructure, new types of devices were developed and introduced. These were
specialized, single-purpose file servers (file server appliances). They are designed to provide
only the functionality that is implemented in the given vendor's product, without the ability to
repurpose them for other tasks (as is possible with multi-purpose servers).
This specialization, which can have many other advantages, led to appliances that were
called network-attached storage (NAS), although strictly speaking, they serve files rather
than storage.
This gave us a modular, layered infrastructure in which each logical layer also corresponds
with a class of device that provides services that are only related to that layer. A storage
server serves storage to a storage client. A file server serves files to a file client. These
devices are connected to each other using different forms of external networks, using
specialized protocols to provide their service.
When file server appliances started to be used to generate storage volumes from files in their
file system, they made these volumes accessible to storage clients by using Internet Small
Computer System Interface (iSCSI) and Fibre Channel (FC). This meant that these devices
provided services belonging to two different functional layers, and they acted as both file
servers and storage servers. To make the distinction, they were called Unified Storage. The
IBM approach to this type of storage is to take a storage server that can provide storage
virtualization and integrate it with a file server product in one device.
This integrated product is the IBM Storwize V7000 Unified Storage, as shown in Figure 1-1.
Figure 1-1 IBM Storwize V7000 Unified Storage
Internally, this device is still built with a layered approach, whereby a storage server serves
storage to external storage clients and to the internal file server. And, the file server provides
file services to external file clients. This is a truly integrated storage server, storage
virtualization, and file server appliance.
1.2 About the rest of this book
This book starts off providing the basics of the terminology and concepts of storage and file
services and how they relate to each other. Building on that, we introduce file sharing and file
transfer methods in general. The architecture of the Storwize V7000 is briefly explained, as
well as the specifics of the implementation of the file-related functionality and the access
control options. Storage-related functions like virtualization and copy services are covered
more briefly because these topics are already covered in available ITSO publications; these
functions are the same as in the IBM Storwize V7000. We recommend having a copy of the
following book available to help your understanding of the Storwize V7000 piece of the
equation, Implementing the IBM Storwize V7000 V6.3, SG24-7938.
Information about the file server-related part of the product, such as the IBM General Parallel
File System (GPFS), and the differences from Scale Out Network Attached Storage (SONAS),
is included as well. For more information about SONAS, see the following publications:
SONAS Implementation Guide and Best Practices Guide, SG24-7962
SONAS Concepts, Architecture, and Planning Guide, SG24-7963
For GPFS, we recommend, Implementing the IBM General Parallel File System (GPFS) in a
Cross Platform Environment, SG24-7844.
The theoretical part of the book is followed by the implementation chapters. The user
interfaces, planning for implementation, and the actual implementation of the Storwize
V7000 Unified are described. This includes the antivirus implementation, performance and
monitoring overview, backup and recovery, and troubleshooting and maintenance.
This book aims to help you understand file services and how they are implemented in the
Storwize V7000 Unified, and to help you successfully implement, run, and maintain the
product.
1.3 Latest release highlights
Although the Storwize V7000 Unified received various small updates across many areas, there
are several new enhancements and features worthy of note.
1.3.1 USB Access Recovery
If a system is relocated or the network is changed, access to the management interface can
be lost without careful planning. This enhancement provides a quick and easy way to
recover the file module console and reset the management IP addresses. It is discussed in
detail in the implementation and troubleshooting chapters.
1.3.2 Call-home installation enhancement
This enhancement allows clients to set up call home, which enables better tracking of, and
quick access to, machines installed in the field, for example, when a customer engineer (CE)
must be sent into the field. After the System Setup wizard completes, the user is prompted to
complete the Support Services wizard.
1.3.3 SSH tunnel from the IFS Management Node to the Storwize V7000 Node
This tunnel enables the Host Software Group third-party plug-ins that are designed for the V7000 to
work transparently on the V7000 Unified, as discussed in Chapter 9, GUI and CLI on page 83.
Chapter 2. Terminology and file serving
concepts
This chapter describes terminology that is used for storage and file services, as well as
client/server file sharing and file transfer protocols.
The Storwize V7000 Unified provides file and block storage. The terminology in this
chapter pertains mostly to file storage. For detailed block storage information, consult the
following Redbook: Implementing the IBM Storwize V7000 V6.3, SG24-7938.
2.1 Terminology for storage and file services
The Storwize V7000 Unified provides block and file storage to clients. With an integrated
product like the Storwize V7000 Unified, it is essential to use coherent terminology.
The main focus when designing infrastructures is the service that the products provide (for
example, is it the kind of service the client asks for, and are the requirements met?). Even though
our products perform input/output (I/O) operations, it is services such as the mapping of logical
volumes and the sharing of files that are important to keep in mind.
2.1.1 Terminology for random access mass storage
The lowest level of service provided to permanently store and retrieve data for use in upper
layers in computer systems is random access mass storage, hereafter referred to as storage.
Storage, for instance, is provided by hard disk drives (HDDs), solid-state drives (SSDs), and
(using HDDs and SSDs) by Redundant Array of Independent Disks (RAID) controllers and the
controllers of external storage systems and storage servers. Oftentimes, the term block
storage is used, although in many cases the prefix block is not needed.
The prefix block originates in the disk data organization architecture. For instance, the
fixed-block architecture (FBA, or FB for short) uses records of data written in blocks of a
specific size. Logical block address (LBA), an FBA that is common today, uses a fixed sector
size of 512 bytes. Referring to storage with the prefix block leaves out other architectures
such as count key data (CKD), used for instance in IBM z/OS. In cases where it is needed to
differentiate from CKD, it makes sense to refer to the storage as fixed block storage
(FB storage) without omitting the term fixed.
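To make the fixed-block addressing concrete, the following minimal Python sketch converts an LBA to a byte offset, assuming the common 512-byte sector size mentioned above; the helper name is purely illustrative.

SECTOR_SIZE = 512  # bytes per logical block in the common 512-byte LBA scheme

def lba_to_byte_offset(lba):
    """Return the byte offset on the volume at which the given logical block starts."""
    return lba * SECTOR_SIZE

# With a 32-bit block number, at most 2**32 blocks of 512 bytes can be addressed,
# which is 2 TiB:
print(lba_to_byte_offset(2**32) // 2**40, "TiB")  # prints: 2 TiB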
LBA storage, as used by open systems, provides the storage to upper layers of the system as
a sequence of blocks, which is called a volume. For the term volume as well, the prefix block
is not needed to differentiate it from upper-layer facilities and protocols. The prefix is not needed
because these protocols do not provide storage or volumes in any form. They use storage in
the form of volumes and provide upper layer services.
Storage that is provided by physical random access mass storage devices such as HDDs or
SSDs is referred to as physical volume (PV), for instance in RAID and in Logical Volume
Manager (LVM) software components of an operating system.
Starting from physical volumes, volumes that are called logical volumes (LV) are
presented in different places. As such, the term logical volume is overloaded, and its meaning
depends on either the context in which it is used or on a further definition to be
understood. An LV can be a logical volume that is provided by a computer system's internal
RAID adapter, an external storage system, or an LVM. All these LVs have in common that
they also present a sequence of blocks that can be used the same way a PV would. For upper
layers that use these volumes, the source is transparent. This means that they do not know if
they are using a PV or an LV. For instance an LV, like a PV, might be used by a file system
(FS) or a database management system (DBMS). The diagram in Figure 2-1 on page 7
shows a storage server providing logical volumes to a storage client.
A computer system accessing a logical volume mapped to it is a storage client. A logical
volume that is served to a storage client by a storage server (storage system) is often referred
to as a LUN. This usage of the term LUN is wrong because LUN is short for logical unit
number, an addressing scheme that is used to identify a logical unit (LU). This addressing
scheme is used for Small Computer System Interface (SCSI) based systems, using the
different flavors of SCSI, Internet SCSI (iSCSI), and Fibre Channel (FC). A logical volume is
only one kind of a SCSI logical unit (LU), identified by a LUN. However, LUs are not limited to
be logical volumes. They can be other devices that are addressed by the SCSI addressing
scheme as well, for instance tape devices.
To summarize: with storage servers, we provide the storage service by mapping logical
volumes to storage clients.
Figure 2-1 depicts a storage server providing logical volumes to a storage client.
Figure 2-1 Storage server providing logical volumes to a storage client
2.1.2 Terminology for file systems and file sharing, file access, and file transfer
Operating systems of storage clients put a structure on and organize the storage space of
logical volumes to store and retrieve data. Usually an LVM takes the logical volumes from
storage servers to build LVM logical volumes. On top of the LVM logical volumes are the
facilities to structure them to enable the writing, accessing, and reading of data, such as FS or
DBMS.
Sometimes there is a bit of confusion about file systems and file serving and file sharing,
which is partly rooted in the naming of the networking protocols that are used to share and
access file resources.
A file system (or filesystem) is used to control how information is stored and retrieved.
Without a file system, information placed in a storage area would be one large body of
information with no way to tell where one piece of information stops and the next begins.
By separating the information into individual pieces, and giving each piece a name, the
information is easily separated and identified. Taking its name from the way paper based
information systems are named, each piece of information is called a file. The structure and
logic rules used to manage the groups of information and their names is called a file system.
There are many different kinds of file systems. Each one has different structure and logic.
Each one has different properties of speed, flexibility, security, size and more. Some file
systems have been designed to be used for specific applications. For example the ISO 9660
file system is designed specifically for optical disks.
File systems can be used on many different kinds of storage devices. Each storage device
uses a different kind of media. The most common storage device in use today is a hard drive
whose media is a disc that has been coated with a magnetic film. The film has ones and zeros
'written' on it sending electrical pulses to a magnetic read-write head. Other media that are
used are magnetic tape, optical disc, and flash memory. In some cases, the computer's main
memory (RAM) is used to create a temporary file system for short term use.
File systems are used to implement a type of data store to store, retrieve, and update a set of
files. The term file system refers to either the abstract data structures used to define files, or the
actual software or firmware components that implement those abstractions.
Some file systems are used on local data storage devices; others provide file access via a
network protocol (e.g. NFS, SMB, or 9P clients). Some file systems are virtual, in that the
files supplied are computed on request (e.g. proxies) or are merely a mapping into a
different file system used as a backing store. The file system manages access to both the
content of files and the metadata about those files. It is responsible for arranging storage
space; reliability, efficiency, and tuning with regard to the physical storage medium are
important design considerations.
File sharing is a means of making data that is already organized in a file system accessible to
users of other (network connected) computer systems. These file sharing protocols, also
called file access protocols, use client/server infrastructures. We do not intend to describe
Peer-to-Peer (P2P) file sharing protocols in this publication. The file server part of the protocol
makes the files accessible for the file client part of the protocol. It is common and technically
correct to say files are being shared, not only because the server shares access to the files
with the client, but also because these files might be accessed by multiple clients (shared).
Some common file sharing protocols that allow users to access files on another computer
system's file system in a similar way as they access data in their local file system are the
different versions of the Network File System (NFS) protocol and the Server Message Block
(SMB) protocol. The SMB protocol often is referred to as Common Internet File System
(CIFS). CIFS is only a specific dialect (one version) of the SMB protocol. See 2.2.2, The
Server Message Block protocol on page 13 for more information about the SMB protocol.
The newest versions of the file sharing protocols NFS and SMB are NFSv4.1 and SMB 3.0.
The terms used for NFS and SMB servers and clients are NFS file server, NFS file client,
SMB file server, and SMB file client. Sometimes, the software instances are called NFS file
service and SMB file service to distinguish them from the computer system hardware they
run on. In short, the terms NFS server and NFS client, and SMB server and SMB client, are
used as well.
The diagram in Figure 2-2, as an example, shows a file server using logical volumes, the file
system ext3, and the file sharing protocol NFS to make files accessible to an NFS client.
Figure 2-2 File server housing a file system and sharing files, file client accessing files
Other methods of making files accessible and transferring files are the File Transfer Protocol
(FTP), the Hypertext Transfer Protocol (HTTP) and its flavors, as well as the Secure Shell
(SSH)-based protocols Secure Copy Protocol (SCP) and Secure FTP (SFTP). The terms
used for FTP and HTTP servers and clients are FTP file server (for short: FTP server), FTP
file client (for short: FTP client), HTTP server, and HTTP client.
Specialized file servers are commonly called network-attached storage (NAS). Although NAS
products are network attached, they are file servers that use internal or external storage (in
which case they themselves are storage clients). Although the term NAS is widely used, it
might help to think of them as a file server appliance or simply as special purpose file servers.
The diagram in Figure 2-3 shows the different layers from storage to the access of files and
how they relate to each other. The example that is shown is using a storage server to
provision logical volumes to a storage client (the file server). The file server then uses these
logical volumes to organize data in form of files in file systems. The file server then provides
access to the content of these file systems to file clients by using file access protocols.
Figure 2-3 Storage server providing LV to file server, file server sharing files, file client accessing files
2.2 File serving with file sharing and file transfer protocols
One important aspect of distributed computing is being able to access files on a remote
system. To the user, access to remotely located files should be transparent. That is, the user
should not be concerned with whether the file is on a remote or local system. This also means that
only a single set of commands should be defined to control local and remote files. The file
sharing protocols NFS and SMB use the concept of integrating access to shared parts of
remote file systems into the local file system structures. The following list includes some of
the design issues for file access protocols:
Access: A file should appear to a user as a local file whether it is on a remote or local
machine. The path name to both local and remote files should be identical.
Concurrency: The access to a file must be synchronized so that a modification by one
process does not cause any undesired effect to other processes that depend on the same
file.
Failure and recovery: Both the client and the server must provide a recovery mechanism
such that if one or both fails, the other will carry out a set of expected behaviors.
Mobility: If a file is moved from one system to another, the client must still be able to
access that file without any alteration.
Scalability: A wide range of network sizes and workloads must be supported.
Other ways of serving and transferring files are for instance FTP, HTTP, SCP, and SFTP. They
are designed with different goals in mind, for instance, HTTP is mainly used to serve content
to World Wide Web (WWW) clients, whereas FTP is being used to transfer files to and from
remote locations without the goal of making the remote files available to processes of the
local machine transparently.
2.2.1 The Network File System protocol
The Network File System (NFS) is a distributed file system protocol first developed by Sun
Microsystems (Sun) in 1984. It is designed to enable client computer users to access files
over a network from remote systems in a similar way as though they were accessing files in
local storage.
Overview of the Network File System protocol
The portions of a file system tree that are made accessible are called exports on the server side and
mounts on the client side. Essentially, local physical file systems on an NFS server are made
accessible to NFS clients. NFS, because of its roots, is found mostly on UNIX and Linux-like
systems and is included in the base operating system or distribution. NFS is the standard for
these systems to share and access files over a network. There are NFS implementations
available for many other operating systems as well.
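To illustrate the export and mount concept from the client side, the following minimal Python sketch mounts an export on a Linux NFS client by invoking the standard mount command; the server address, export path, and mount point are hypothetical and must be replaced with values from your own configuration. The command must be run with root privileges on the client.

import subprocess

server = "192.0.2.10"                # hypothetical NFS server IP address
export_path = "/ibm/gpfs0/export1"   # hypothetical export path on the server
mount_point = "/mnt/export1"         # local directory that must already exist

# Equivalent to: mount -t nfs -o vers=3 192.0.2.10:/ibm/gpfs0/export1 /mnt/export1
subprocess.run(["mount", "-t", "nfs", "-o", "vers=3",
                f"{server}:{export_path}", mount_point], check=True)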
NFS initially only used the stateless User Datagram Protocol (UDP) as the transport layer but
implementations using the stateful Transmission Control Protocol (TCP) started to appear as
well. NFS is based on Remote Procedure Call (RPC), which is an interprocess
communication protocol used to start routines in another address space, such as in a remote
computer system. NFS is described and defined in Request for Comments (RFC) documents
and might be implemented at no charge.
Network File System Version 2
The first version of NFS available outside of Sun Microsystems was NFS Version 2 (NFSv2).
It was published in 1989 in RFC 1094. NFSv2 needs a port mapper (rpc.portmap, portmap,
rpcbind), which assigns the ports on which the NFS services will listen. These ports are
temporary and change when the NFS server restarts. Additional configuration is necessary
to make the ports that are used permanent, which makes NFS usable through firewalls
that allow traffic only to specified ports.
NFSv2 has some limitations concerning use cases, scalability, and performance. It supports
files only up to 2 GB because of the 32-bit file size field. The size of any single data transfer
cannot exceed 8 KB, which hinders performance because of the resulting high number of NFS
requests. Another performance-limiting factor is that NFSv2 works only in a synchronous way.
Data must be written to the file system by the NFS server before the write is acknowledged to
the client (so called stable writes). This limits the scalability of NFSv2.
NFSv2 does not support Kerberos authentication. To grant NFS clients access to exports, the
access is granted to the computer system that the NFS client is running on. This means any
user on that system is able to access the exports. The limits for users are only file and
directory permissions.
Network File System Version 3
NFS Version 3 (NFSv3) was published in 1995 in RFC 1813. It still uses the port mapper.
NFSv3 removed some limitations and introduced some changes. The file offset is now 64 bit,
allowing the support for files larger than 2 GB. The maximum transfer size limit of 8 KB is
gone; the client and server can agree upon the transfer size.
NFSv3 introduced an asynchronous method of operation (so called unstable writes). With
unstable writes, the server does not need to acknowledge the write to the file system to the
client immediately and thus can delay the write. The server must acknowledge the write only
when it receives a commit request. This speeds up client writes and enables the server to
efficiently write the data, mostly independent from the clients write operations. The NFSv3
client is able to detect uncommitted data in an error situation and can recover from that.
When NFSv3 was introduced, support for TCP as a transport layer increased. Some vendors
had already added support for NFSv2 with TCP as a transport. Sun Microsystems added
support for TCP as a transport at the same time it added support for Version 3. Using TCP as
a transport made the use of NFS over a wide area network (WAN) more feasible.
NFSv3 still grants access to the computer system and does not authenticate the user, and
does not support Kerberos. This limits the use of NFSv3 to trusted networks.
Sun Microsystems handed over the maintenance of NFS to the Internet Engineering Task
Force (IETF) before NFS Version 4 was defined and published.
Note: RFC documents are sometimes referred to as being Internet standards. This is not
the case, although some RFCs become a standard as well. An RFC might, for example,
propose and describe Internet-related protocols and methods. Even though not officially a
standard, it allows implementations of protocols to adhere to a specific RFC so that
implementations based on a certain RFC might be able to interact and be compatible with
each other. The name RFC can be misleading; an RFC, when published, is not changed.
In case there is a need for a change, a new RFC is published with a new RFC number.
Network File System Version 4
NFS Version 4 (NFSv4) was published in 2000 in RFC 3010. In 2003, it was revised and
published in RFC 3530. NFSv4 was influenced by the Andrew File System (AFS) and CIFS;
it includes performance improvements, mandates strong security, and introduces a
stateful protocol. The NFSv4 server does not rely on a port mapper anymore. NFSv4 requires
TCP as the transport layer; it listens on the well known port TCP 2049. It is a stateful protocol,
maintaining the state of objects on the server. This way the server knows about the intentions
of clients and some problems with stateless operation can be avoided. NFSv4 improves
performance and functionality, for instance, with file system semantics for the Microsoft
Windows operating systems. In addition to UNIX permissions, it supports Windows access
control lists (Windows ACLs), but not Portable Operating System Interface (POSIX)-based
UNIX ACLs. The security model in NFSv4 builds upon Kerberos, Low Infrastructure
Public Key Mechanism (LIPKEY), and Simple Public Key Mechanism Version 3 (SPKM-3).
The method that is used is agreed upon by the NFS client and NFS server. Also to be
negotiated are other security mechanisms such as which encryption algorithm is being used.
A recent development is NFS version 4.1 (NFSv4.1), which was published in 2010 in RFC
5661. NFSv4.1 offers new features, for instance, support for clustered installations with parallel
processing.
2.2.2 The Server Message Block protocol
The Server Message Block (SMB) protocol is an application-layer network protocol for
sharing file and printing resources in a Distributed Computing Environment. Common Internet
File System (CIFS) is considered the modern dialect of SMB. It enables the access to shared
remote resources in a similar way as though they were part of the local system. Most usage
of SMB involves computers running Microsoft Windows.
Initially, the SMB protocol was developed by IBM and further developed by Microsoft, Intel,
IBM, 3Com, and others. An early mention of SMB is in the IBM Personal Computer Seminar
Proceedings document from October 1984. In 1987, the SMB protocol was officially defined
in a Microsoft/Intel document called Microsoft Networks/OpenNET-FILE SHARING
PROTOCOL. Thereafter, it was developed by Microsoft and others. It has been mainly used
with client computers running the IBM OS/2 OS versions and the Microsoft Windows family of
OSs, where the SMB protocol with the functionality to act as SMB server and SMB client is
built in. SMB server and SMB client implementations for other platforms became available
and are in widespread use today.
Overview of the original Server Message Block protocol
The SMB protocol has been developed further over the years, which results in many variants
of the protocol, called dialects. It retained compatibility with an earlier version with the ability
to negotiate the dialect to be used for a session. The dialects are defined by a standard
command set and are identified by a standard string such as PC NETWORK PROGRAM 1.0
(the first dialect of the SMB protocol), MICROSOFT NETWORKS 3.0, DOS LANMAN 2.1, or
NT LM 0.12 (the SMB dialect NT LAN Manager, which is designated as CIFS).
Note: There is a broad range of ACL formats, which differ in syntax and semantics. The
ACL format defined by Network File System version 4 (NFSv4) is called NFSv4 ACL.
GPFS supports the NFSv4 ACL format; this implementation is sometimes referred to as
GPFS NFSv4 ACL. The Storwize V7000 Unified system stores all user files in GPFS.
Access protection in the Storwize V7000 Unified system is implemented in GPFS using
NFSv4 ACLs, and is enforced for all of the protocols that are supported by the Storwize
V7000 Unified system: CIFS, NFS, FTP, HTTPS, and SCP. The implementation of NFSv4
ACLs in GPFS does not imply that GPFS or the Storwize V7000 Unified system supports
NFSv4. The Storwize V7000 Unified system supports NFS version 2 (NFSv2) and NFS
version 3 (NFSv3).
The name of the SMB protocol refers to the packets of data sent between the client and
server (the Server Message Blocks). Each SMB contains a request from an SMB client or a
response from an SMB server (client/server, request response protocol).
The SMB protocol is used to provide and obtain access to resources, called
shares. Shares can be subsets of file systems and their contents, printers, serial ports, and
some other resources. The term share is also used because the resources might be
accessed by multiple clients (shared between them), with the protocol providing the locking
mechanisms. The SMB protocol, being a presentation/application layer protocol, has been
implemented on top of various transport layer protocols, with NetBIOS over TCP/IP being the
most common. In general, it is independent from the transport protocol, but expects it to be
connection-oriented. With changes, it can be implemented on top of a stateless protocol such
as UDP as well.
As stated before, the SMB protocol is compatible with an earlier version, for both the server
and the client. When an SMB client connects to an SMB server, they identify which dialect of
the SMB protocol they both understand and negotiate which one to use. The goal is to agree
upon the dialect with the highest level of functionality that they both support.
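As a simple illustration of this negotiation, the following Python sketch picks the most capable dialect that both sides support; the dialect strings are the historical examples named above, and the preference ordering shown is an assumption for illustration only.

def negotiate_dialect(client_dialects, server_dialects):
    """Return the most capable SMB dialect that both client and server support."""
    # Oldest to newest; a higher position means more functionality (assumed ordering).
    preference = ["PC NETWORK PROGRAM 1.0", "MICROSOFT NETWORKS 3.0",
                  "DOS LANMAN 2.1", "NT LM 0.12"]
    common = [d for d in preference if d in client_dialects and d in server_dialects]
    return common[-1] if common else None

# Example: an older client and a newer server agree on DOS LANMAN 2.1.
print(negotiate_dialect(
    ["PC NETWORK PROGRAM 1.0", "DOS LANMAN 2.1"],
    ["DOS LANMAN 2.1", "NT LM 0.12"]))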
There are two levels of access control. The first level is the share level in which a client needs
to provide the password for the share. The second level is the user level, in which the client
authenticates to the server with a user name and password. When authenticated, the server
in turn sends a user ID (UID) to the client, which uses this UID in all later SMBs. Now the
access to shares (those that are not additionally protected at the share level) is possible. In case of file
serving, the SMB client might now open, read, write, and close files by sending the
appropriate SMBs to the SMB server.
SMB provides the locking of files and records. To enhance performance a mechanism called
opportunistic locking (oplock) has been introduced. It enables SMB protocol clients to cache
data. This mechanism can lead to data loss in case of connection failures or server failures.
To prevent data loss in case of failures, an option to disable opportunistic locking might be
provided in the SMB protocol implementation. Disabling oplocks removes the performance
benefits as well. Figure 2-4 shows how an SMB server manages oplocks.
Figure 2-4 SMB server managing oplocks
The figure shows the following sequence: (1) SMB client 1 requests an oplock for file1 from
the SMB server; (2) the SMB server grants the oplock for file1, and SMB client 1 starts
caching data for file1; (3) SMB client 2 requests access to file1; (4) the SMB server breaks
the oplock for file1, and SMB client 1 must write its cached data back to the SMB server;
(5) the SMB server grants access to file1 to SMB client 2.
The SMB protocol also enables setting attributes for files and directories, including extended
attributes. It also supports ACLs.
For SMB 1, the authentication support has been enhanced. It is now compatible with the
Generic Security Services application programming interface (GSS API). It supports
Kerberos authentication through GSS API.
It is now possible to query for older file versions, in case that is supported by the file system.
Other enhancements make the protocol more efficient, such as the implementation of SMB
server side only operations (without the need to transfer the file to the SMB client and back to
the SMB server). For instance, for an SMB client-initiated copy operation from one directory to
another, it makes no sense to read and write the file (transferring the data back and forth over the
network and wasting network resources), as is still done in CIFS. Concerning the use of
TCP as the transport protocol, in addition to NetBIOS over TCP, SMB 1 supports running
directly on TCP (Direct TCP), without the need for NetBIOS.
Quotas might be used by the SMB 1 client to limit file space that is used if the SMB server
supports quotas.
What is CIFS and what is it not?: CIFS is not synonymous with the SMB protocol, rather
it is a dialect of the SMB protocol.
Microsoft started to call its versions of the SMB protocol the Microsoft SMB protocol. The
first dialect of the Microsoft SMB protocol was the NT LAN Manager dialect. This dialect of
the SMB protocol was based upon the implementation of the SMB protocol in the Microsoft
NT4 and Windows 2000 (NT5) OS. Microsoft proposed this specific dialect of SMB to the
Internet Engineering Task Force (IETF) to become a standard with the name CIFS. During
this time, the terms SMB and CIFS started to be used interchangeably, driven by Microsoft,
and because it was expected by many that CIFS would become a standard and would be
the successor (instead of just a dialect) of the SMB protocol. CIFS did not become a
standard and has not been published as an RFC document either. In 2002, the Storage
Network Industry Association (SNIA) CIFS Work Group published a document for CIFS
with the title Common Internet File System (CIFS) Technical Reference Revision: 1.0,
which does not constitute a standard or specification, nor does it claim to be.
There is no real specification (for instance, documented in an RFC) that would enable
developers to implement the SMB protocol to spec and be interoperable with various
other implementations (which would adhere to this specification as well). For instance, to
be interoperable with the SMB protocol implemented in Microsoft products, developers
must adjust to the changes that Microsoft might make.
After CIFS (the NT LAN Manager dialect of the SMB protocol), the Microsoft SMB protocol
was then developed further and the term SMB protocol is being used by Microsoft and
others. The extended version of CIFS is the Server Message Block (SMB) Version 1.0
Protocol (SMB 1). The most recent development, the complete redesign of the SMB
protocol, is being officially called the Server Message Block (SMB) Version 2 Protocol.
Kerberos and GSS API: Kerberos is an Internet Protocol designed to add security to
networked servers. It uses secret-key strong cryptography to deliver user authentication for
networked applications. Kerberos has been developed at the Massachusetts Institute of
Technology (MIT) and can be freely obtained. It is available in commercial products as well.
The Kerberos API is not standardized but Kerberos 5 includes a GSSAPI implementation.
GSSAPI is an application programming interface (API) to enable software vendors to
implement security-related services with a common API instead of supporting each other
directly. GSSAPI is an IETF standard. With GSS API being implemented in the SMB 1
protocol, Kerberos can be used for authentication of SMB 1 clients.
The SMB Version 2 protocol
All versions of the SMB protocol, including CIFS and SMB 1, are evolutionary changes and
enhancements of the original SMB protocol. The Server Message Block (SMB) Version 2
Protocol (SMB 2), introduced in 2006 by Microsoft, is a newly developed file sharing protocol.
It features a different set of commands, but is based on the concepts of the original SMB
protocol. SMB 2.1 was introduced with Windows 7 and Server 2008 R2. SMB 3.0 (previously
named SMB 2.2) is the latest version and was introduced with Windows 8 and Windows
Server 2012. To be compatible with older SMB versions (including the original SMB protocol
versions), the older SMB protocol is used to negotiate the SMB version to be used by the
SMB client and the SMB server. Although the protocol is proprietary, its specification has
been published to allow other systems to interoperate with Microsoft operating systems that
use the new protocol.
SMB 2 reduces complexity, increases efficiency and thus performance (especially for high
latency networks). Also, it is more scalable and introduces other enhancements such as
connection error handling, improved message signing and support for symbolic links.
The SMB 2 protocol supports only TCP as the transport protocol, either using Direct TCP or
NetBIOS over TCP.
SMB 3.0 also brings several changes that add functionality and improve performance over
SMB 2.
Note: The Storwize V7000 Unified currently supports SMB 2.
2.2.3 The File Transfer Protocol
The File Transfer Protocol (FTP) is an open systems interconnection (OSI) application layer
protocol designed to transfer files between computer systems. It was initially written by Abhay
Bhushan and was developed further and defined in RFC documents. FTP is a client/server
protocol with the FTP server providing access to files (and file space) and FTP clients
browsing the content made available, downloading files from and uploading files to the FTP
server.
FTP is historically command-line based, but graphical user interface (GUI) implementations
across many OSs became available and are in widespread use. FTP clients might mimic the
look and behavior of file managers or file browsers in operating systems GUIs, but FTP is not
designed to integrate transparently into the representation of file system trees as file access
protocols like the NFS protocol and the SMB protocol do. There is no share to connect to and
no export to mount. Instead, an FTP client connects to an FTP server and either must
authenticate or might be able to connect as anonymous (no authentication needed). Then, the
client might browse the FTP repository and transfer files. When files have been downloaded
to the FTP client, users of the client's system might use the files from within the file system
(as opposed to the file access protocols NFS and SMB, which are used to work with files still
in the file system of the remote system).
FTP uses TCP with the server by using two well known ports, the control (or command) port
TCP 21 for commands and authentication, and the data port TCP 20 for data transfers.
There are two modes of operation with FTP, active mode and passive mode. With active mode,
the FTP client uses a random unprivileged port (above TCP 1023) to connect to the FTP
servers control port (TCP 21, where the FTP server is listening on for incoming FTP client
requests). Using this control connection, the FTP client informs the FTP server about its own
listening port for the transfer of data. The client-side listening port that is known to the FTP
server is not a well known port. Instead, it is the random port the FTP client used to connect
to the FTP server plus 1 (the FTP client only tells the FTP server the port, it does not initiate
the data transfer connection). Then, the server connects with its data port (TCP 20) to this
connection-specific port of the client. This data connection is initiated by the server and the
client side is listening, a behavior that is normally expected from a server. This mode of
operation was called active mode in retrospect. It is the FTP server that actively tries to
establish the data transfer connection to a client.
When servers or daemons listen to ports for connection attempts by clients from outside a
firewall-secured network, they need these ports opened or forwarded by firewalls, or they
would wait forever. Therefore, firewalls are usually configured to allow incoming traffic to
servers on specific well known ports. The behavior of the FTP protocol in active mode, where
the FTP client is listening to a random and temporary non-privileged port, leads to
implications with client-side firewalls. Connection attempts from outside, if not specifically
allowed, are usually blocked.
To overcome this issue, the passive mode has been introduced to FTP. In this mode, the FTP
client uses two non-privileged ports above port TCP 1023, usually consecutive, to initiate two
connections to the FTP server. It uses the first port to establish the control connection to the
FTP server control port TCP 21. It tells the FTP server to use passive mode (the FTP server
will be listening on a port for the data transfer connection). The server-side data port is a
non-privileged port above port TCP 1023 on the server side, randomly chosen by the FTP
server. The FTP server sends the information about which port it listens on to the FTP client.
With this knowledge, the FTP client can now establish the data transfer connection from its
second port to the FTP server listening port. Thus, only the client is initiating connections and
the client-side firewall should let them pass. The server-side firewall must be configured to
allow the incoming connection attempts to the non-privileged ports to pass, which is the
drawback to that method from the server-side point of view. The issue is that the firewall must
be configured to allow connection attempts to the ports above 1023. One way to reduce that
problem is for the FTP server to support limiting its listening ports to a defined range. Only
these ports would then need to be opened in the firewall to allow incoming connection
attempts.
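The choice between active and passive mode can be seen in Python's standard ftplib module, which defaults to passive mode for exactly the firewall reasons described above; the host name, credentials, and file name below are hypothetical.

from ftplib import FTP

ftp = FTP()
ftp.connect("ftp.example.com", 21)   # control connection to the well known port TCP 21
ftp.login("user", "password")        # or ftp.login() for anonymous access

ftp.set_pasv(True)    # passive mode: the client opens the data connection (the default)
# ftp.set_pasv(False) # active mode: the server connects back to the client

with open("download.bin", "wb") as f:
    ftp.retrbinary("RETR download.bin", f.write)  # transfer a file over the data connection
ftp.quit()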
2.2.4 The Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an OSI application layer client/server networking
protocol. It is an IETF standard with the version HTTP/1.1 being the most recent. HTTP is
used to transfer files between HTTP servers and HTTP clients. The HTTP server software
makes the content available to the HTTP clients, using port TCP 80 by default (other TCP
ports can be used as well but must be specified by the HTTP client). The HTTP client
software might have functionality to interpret these files and render the result for display. Such
HTTP client software is commonly called a web browser. As such, HTTP is a core technology
for the WWW.
To encrypt the requests by a client and the actual content while in transit, the Secure Sockets
Layer (SSL) and its successor, Transport Layer Security (TLS) can be used. This is called
HTTP Secure (HTTPS) or HTTP over SSL/TLS. HTTPS uses port TCP 443 by default. HTTPS
is specified in RFC documents.
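A minimal Python example of retrieving a file over HTTPS follows; the URL is hypothetical, and the default port TCP 443 is implied by the https scheme.

import urllib.request

# The https:// scheme implies port TCP 443 unless another port is given in the URL.
with urllib.request.urlopen("https://files.example.com/pub/readme.txt") as response:
    content = response.read()

print(len(content), "bytes downloaded")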
2.2.5 The Secure Copy Protocol
The Secure Copy Protocol (SCP) is a client/server protocol used by SCP clients to copy files
to and retrieve files from SCP servers. It is also possible to initiate a transfer of files between
two remote systems. It uses the Secure Shell infrastructure for encrypted transfer. Usually the
SSH daemon provides the SCP functionality and might act as both the SCP client and the
SCP server. The port used is TCP 22. There is no official standard for SCP.
To connect to a Storwize V7000 Unified environment using SCP, a prerequisite is that an
SCP client is installed, available and functioning properly. Windows clients do not have an
SCP client installed by default, so it must be installed before this protocol can be used.
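As a simple illustration, the following Python sketch copies a local file to the system by invoking a locally installed scp client; the user name, IP address, and target path are hypothetical and depend on your own share and export configuration.

import subprocess

local_file = "report.csv"
# Hypothetical user, system IP address, and target directory:
remote_target = "fileuser@192.0.2.10:/ibm/gpfs0/export1/"

# Equivalent to: scp report.csv fileuser@192.0.2.10:/ibm/gpfs0/export1/
subprocess.run(["scp", local_file, remote_target], check=True)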
2.2.6 The Secure Shell File Transfer Protocol
SSH File Transfer Protocol (SFTP), also sometimes referred to as Secure FTP, is a file
transfer protocol developed in the context of SSH2 and is the standard file transfer protocol to
be used with SSH2. It expects to run over a secured connection that SSH provides. It is not
equal to Simple FTP or to FTP over SSH. It is a newly developed protocol. Compared to the
SSH-based SCP protocol, it provides functionality that is more similar to FTP.
Chapter 3. Architecture and functions
In this chapter, we provide an overview of the architecture of Storwize V7000 Unified and its
functions.
3.1 High-level overview of Storwize V7000 Unified
To be able to serve logical volumes and files, the hardware and software to provide these
services is integrated into one product. Viewed from its clients, one part of Storwize V7000
Unified is a storage server and the other part is a file server; therefore, it is called Unified.
The Storwize V7000 Unified is a single, integrated storage infrastructure that uses unified central
management to simultaneously support Fibre Channel, IP storage area network (iSCSI),
and network-attached storage (NAS) data formats.
3.1.1 Storwize V7000 Unified storage subsystem: the Storwize V7000
Storwize V7000 Unified uses internal storage to generate and provide logical volumes to
storage clients, thus acting as a storage system. It is capable of the virtualization of external
storage systems as well.
The storage subsystem of Storwize V7000 Unified consists of the hardware and software of
the IBM Storwize V7000 storage system (Storwize V7000). Initially, it runs the SAN Volume
Controller/Storwize V7000 code level 7.1 (at the time of writing).
The storage subsystem is used for the following functions:
The provision of logical volumes to external storage clients
The provision of logical volumes to the internal storage clients, the file modules
3.1.2 Storwize V7000 Unified file server subsystem: The file modules
In addition to providing logical volumes, Storwize V7000 Unified is used to provide access to
file system space and thus to files in these file systems. It uses file sharing protocols/file
access protocols and file transfer/file copy protocols, thus acting as a file server.
The file server subsystem of Storwize V7000 Unified consists of two IBM Storwize V7000 file
modules (FM). These file modules perform the functions of the IBM Storwize V7000 Unified
software, initially running code level 1.4.2 (at the time of writing).
The file modules of Storwize V7000 Unified are internal storage clients of Storwize V7000
Unified. They use the logical volumes provided by the Storwize V7000 to save files and share
files to file clients. The base operating system (OS) of the file modules is Red Hat Enterprise Linux 6.1. They
use a distributed file system, the IBM General Parallel File System (GPFS) to store and
retrieve files. To make the content of the GPFS accessible by file clients, the file modules use
the file sharing protocols/file access protocols Network File System (NFS), Server Message
Block (SMB), File Transfer Protocol (FTP), Hypertext Transfer Protocol Secure (HTTPS),
Secure Copy Protocol (SCP), and Secure FTP (SFTP).
A high-level system diagram of Storwize V7000 Unified with a virtualized storage system, a
storage client, and a file client, is shown in Figure 3-1.
Figure 3-1 Storwize V7000 Unified, high-level system diagram
3.2 Storwize V7000 Unified system configuration
The IBM Storwize V7000 Unified consists of a single Storwize V7000 control enclosure, 0-9
Storwize V7000 expansion enclosures (storage server subsystem), two file modules (file
server subsystem), and the inter-connecting cables. You can add up to three additional
control enclosures for additional I/O groups. Each additional control enclosure together with
the associated expansion enclosures (up to nine per control enclosure) provides a new
volume I/O group. The file modules remain directly connected to the original control enclosure,
which presents I/O group 0 (the normal default configuration). Plan to have only block
volumes in the new I/O groups. File volumes, which are created for you when a new file system
is created, must continue to be in I/O group 0.
3.2.1 Storwize V7000 Unified storage subsystem configuration
The Storwize V7000 control enclosure houses two node canisters, the redundant power
supplies (764 W with battery backup) and the internal drive bays of which there are two types:
Enclosure for 12 3.5 in. drives
Enclosure for 24 2.5 in. drives
The expansion enclosures contain two Switched Bunch Of Drives (SBOD) canisters of either
type (which can be mixed freely) and two 500 W power supply units (PSUs). The basic
Storwize V7000 Unified supports up to 120 3.5 in. drives, 240 2.5 in. drives or a mixture of
both.
Note: Adding more file modules to the system is not supported.
Storwize V7000 Unified supports the same dual-port 6 Gb Serial Attached SCSI (SAS)
HDDs and SSDs as the Storwize V7000:
146GB 2.5 inch 15k RPM SAS HDD
300GB 2.5 inch 15k RPM SAS HDD
1TB 2.5 inch 7.2k RPM NL SAS HDD
200GB 2.5 inch SSD (E-MLC)
400GB 2.5 inch SSD (E-MLC)
800GB 2.5 inch SSD
1.2TB 6Gb SAS 2.5 inch SFF HDD
300GB 6Gb SAS 10k RPM 2.5 inch SFF HDD
600GB 6Gb SAS 10k RPM 2.5 inch SFF HDD
900GB 6Gb SAS 10k RPM 2.5 inch SFF HDD
2TB 3.5 inch 7.2k RPM HDD
3TB 3.5 inch 7.2k RPM NL SAS HDD
4TB 6Gb NL SAS 3.5 inch 7.2k RPM LFF HDD
The Storwize V7000 subsystem of the Storwize V7000 Unified is used to create virtual
volumes and provides them as logical volumes to storage clients. The protocols used are FC,
Fibre Channel over Ethernet (FCoE), and iSCSI.
The following interfaces are on the two node canisters:
Eight 2/4/8 Gb FC ports, which are fitted with short wave transceivers, SFP+:
Four ports for external connectivity to FC storage clients.
Four ports for internal connectivity to the file modules.
Each canister has two USB ports for a total of four. The USB ports are used for installation
and maintenance tasks.
Four 1 GbE ports for external connectivity to iSCSI storage clients and for management (at
least one 1 GbE port of each Storwize V7000 controller canister must be connected to the
client network).
Four 10 GbE ports for connectivity to iSCSI and FCoE storage clients. This is optional. It
requires a Host Interface Module (HIM) in each Storwize V7000 controller canister.
Four 4x 6 Gb SAS connectors for up to five expansion enclosures in the first SAS chain
and for up to four expansion enclosures in the second SAS chain.
3.2.2 Storwize V7000 Unified file server subsystem configuration
The file server subsystem of Storwize V7000 Unified consists of two file modules (FM). They
are IBM System x servers x3650 M3.
The following details are for one file module:
Form factor: 2U
Processor: Single Four Core Intel Xeon C3539 2.13 GHz, 8G L3 cache (or similar)
Cache: 72 GB
Storage: Two 600 GB 10 K SAS drives, RAID 1
Power Supply Units: Two (redundant), 675 W
The following interfaces are on one file module:
Four 1 GbE ports
Two ports for external connectivity to file clients and file level remote copy
Two ports for the management network between the FM for Unified clustering
Two ports 10 GbE for external connectivity to file clients and file level remote copy
Two ports 8 Gb FC, one port is internally connected to each Storwize V7000 node canister
The internal and external interfaces of the Storwize V7000 Unified, including the optional 10 GbE interfaces
on the Storwize V7000, are shown in Figure 3-2.
Figure 3-2 Storwize V7000 Unified, internal and external interfaces
3.3 Storwize V7000 Unified storage functions
Storwize V7000 Unified provides the storage service by mapping logical volumes to storage
clients.
The storage functions implemented in and supported by Storwize V7000 Unified are the
same as in the Storwize V7000 release 7.1. The Storwize V7000 uses RAID to protect
against drive failures for the internal SAS attached storage. The other storage functions of the
Storwize V7000 can be used on both internal and external virtualized storage systems.
The protocol used to obtain access to external storage systems is the Fibre Channel Protocol
(FCP). The protocols used by storage clients to access the logical volumes mapped to them
are FC, FCoE, and iSCSI. The externally connected FC ports on the Storwize V7000
controllers work as initiator and target.
For storage system virtualization, the Storwize V7000 supports up to 128 storage pools
(MDisk Groups). The extent sizes for these pools might be configured to be between 16 MB
and 8 GB. The system can manage 2^22 extents. For example, with 16 MB extent size, the
system can manage up to 16 MB X 4,194,304 = 64 TB. The maximum number of volumes for
use by the file modules and for external storage clients is 8192 or 2048 per I/O Group.
Volumes might be in striped, sequential, or image mode (for online volume migration). Volume
mirroring can be used to protect against failures of MDisk Groups, for instance, two external
virtualized storage systems. Thin provisioning can be used with 64 KB or 256 KB grain size.
Compressed volumes have a default grain size of 64 KB. The system can support up to 800
compressed volumes or 200 volumes per I/O Group. IBM Easy Tier hybrid storage pools
with two tiers might be used (SSD and HDD). The automatic extent level migration works
based on access history in the previous rolling 24 hour period. Easy Tier is also supported
with compressed volumes and extent level migration is based on reads.
For an in-depth discussion of the storage functions of Storwize V7000 block storage, refer to
Implementing the IBM Storwize V7000 V6.3, SG24-7938.
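The relationship between extent size and maximum manageable capacity described above can be worked out as follows; this is a simple arithmetic sketch in Python based on the 2^22 extent limit, not a sizing tool.

EXTENTS_PER_SYSTEM = 2**22  # 4,194,304 extents can be managed by the system

def max_managed_capacity_tib(extent_size_mib):
    """Maximum manageable capacity in TiB for a given extent size in MiB."""
    return extent_size_mib * EXTENTS_PER_SYSTEM / (1024 * 1024)

for extent_size in (16, 256, 1024, 8192):  # MiB
    print(f"{extent_size} MiB extents -> {max_managed_capacity_tib(extent_size):,.0f} TiB")
# 16 MiB -> 64 TiB, 256 MiB -> 1,024 TiB, 1024 MiB -> 4,096 TiB, 8192 MiB -> 32,768 TiB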
3.4 Storwize V7000 Unified file serving related functionality
Additionally, Storwize V7000 Unified provides file sharing and file transfer services, broadly
called file serving. The protocols implemented in Storwize V7000 Unified are described in
general in Chapter 2, Terminology and file serving concepts on page 5. Here, we list
specifics of the software architecture, operational characteristics, and components of
Storwize V7000 Unified software.
The file-related functionality of IBM Storwize V7000 Unified is provided by the Storwize V7000
Unified software, which runs on the file modules.
The file modules provide the means for file serving capabilities (NFS, SMB, HTTPS, FTP,
SCP, and SFTP).
The Storwize V7000 backend provides the connection to the storage.
Both file modules of Storwize V7000 Unified work together as a cluster, working in parallel to
provide the functions of the Storwize V7000 Unified software.
The Storwize V7000 Unified software provides multiple elements and integrated components
that work together in a coordinated manner to provide the file-related services to clients. A
basic overview of the software components running on Storwize V7000 Unified file modules is
shown in Figure 3-3.
Figure 3-3 Basic overview of Storwize V7000 Unified file modules software
The software running on Storwize V7000 Unified file modules provides integrated support of
policy-based automated placement and subsequent tiering and migration of data. We can
provision storage pools and store file data according to its importance to the organization or
according to performance requirements. For example, we can define multiple storage pools
with various drive types and performance profiles. We can create a higher performance
storage pool with fast drives and define a less expensive (and lower performance) storage
pool with higher capacity Nearline drives. Sophisticated policies are built into the Storwize
V7000 Unified, which can transparently migrate data between pools based on many
characteristics, such as capacity threshold limits and age of the data for use in an information
lifecycle management (ILM) strategy. Policies are also used for compression by placing
known compressed and uncompressed files into separate file system pools.
The Storwize V7000 Unified software supports remote replication, point-in-time copy (file
system-level snapshots), and automated storage tiering, all managed as a single instance
within a global name space. Asynchronous replication is specifically designed to cope with
connections that provide low bandwidth, high latency, and low reliability. The asynchronous
scheduled process picks up the updates on the source Storwize V7000 Unified system and
writes them to the target Storwize V7000 Unified system by using snapshots and the rsync
tool. The rsync tool is a standard Linux utility and is included in all popular Linux distributions,
including the distribution that runs on the file modules.
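Conceptually, one cycle of the asynchronous replication resembles the following rsync invocation, shown here as a hedged Python sketch; the snapshot path, target host, and options are hypothetical and are not the exact commands that the product issues internally.

import subprocess

# Hypothetical source snapshot directory and remote target file system:
source_snapshot = "/ibm/gpfs0/.snapshots/async_repl_0001/"
target = "targetsystem:/ibm/gpfs0/"

# -a preserves ownership, permissions, and timestamps; -z compresses data in transit;
# --delete removes files on the target that no longer exist in the source snapshot.
subprocess.run(["rsync", "-az", "--delete", source_snapshot, target], check=True)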
3.4.1 Storwize V7000 Unified file sharing and file transfer protocols
The network file sharing protocols and file transfer protocols that are supported by Storwize
V7000 Unified today are NFS, SMB, FTP, SCP, SFTP, and HTTPS. Storwize V7000 Unified
uses GPFS to organize the data and save it to the storage of the Storwize V7000 part of the
system. The Storwize V7000 Unified cluster manager is used to provide cross-node and
cross-protocol locking services for the file serving functions in NFS, SMB, FTP, SCP, SFTP,
and HTTPS. The SMB file sharing function maps semantics and access control to the
Portable Operating System Interface (POSIX)-based GPFS with native NFSv4 access control
list (ACL).
3.4.2 Storwize V7000 Unified NFS protocol support
You can use the NFS client-server communication standard to view, store, and update files on
a remote computer. Using NFS, a client can mount all or a portion of an exported file system
from the server to access data. You must configure the NFS server to enable NFS file sharing
in the Storwize V7000 Unified system.
Supports NFSv2 and NFSv3 with NFSv4 partial implementation.
Supports normal NFS data access functions with NFS consistency guarantees.
Supports authorization and ACLs.
Supports client machine authorization through NFS host lists. Supports enforcement of
Access Control Lists (ACLs).
Supports reading and writing of the standard NFSv3 / POSIX bits.
Supports the NFSv3 advisory locking mechanism.
Semi-transparent node failover (application must support network retry).
The Storwize V7000 Unified Software file system implements NFSv4 Access Control Lists
(ACLs) for security, regardless of the actual network storage protocol used. This method
provides the strength of the NFSv4 ACLs even to clients that access the Storwize V7000
Unified by the NFSv2, NFSv3, CIFS, FTP, and HTTPS protocols.
NFS protocol limitations and considerations
The flexibility of the NFS protocol export options allows for potentially unsafe configurations.
Adhering to good practice guidelines reduces the potential for data corruption. The following
considerations and limitations apply to V7000 Unified version 1.4 and may change with a
future release, such as version 1.5.
NFSv4 is not supported.
NFS Kerberos functionality, for example SecureNFS, is not supported.
Do not mount the same NFS export on one client from both Storwize V7000 Unified file
modules because data corruption might occur.
Do not mount the same export twice on the same client.
Do not export both a directory and any of its subdirectories from a server if both are part of
the same file system.
Do not export the same file system, or the same file, through multiple exports to the same
set of clients.
A client should never access the same file through two different server:export paths. The
client cannot distinguish that the two objects are the same, so write ordering is not
possible and client-side caching is affected.
In the Storwize V7000 Unified system, each export is assigned a new file system ID even
if the exports are from the same file system. This process can lead to data corruption,
which is why it is not good practice.
Although the use of nested mounts on the same file system is strongly discouraged, it is possible to create nested mounts by using the Storwize V7000 Unified system. If nested mounts are configured on the Storwize V7000 Unified system, it is the client's responsibility to exercise extreme caution to avoid any possibility of corruption.
POSIX ACLs for NFSv2 and NFSv3 are not supported on Storwize V7000 Unified system.
Clients should only mount Storwize V7000 Unified NFS exports by using an IP address.
Do not mount a Storwize V7000 Unified NFS export by using a DNS Resource Record
entry name. If you mount a Storwize V7000 Unified NFS export by using a host name,
ensure that the name is unique and remains unique because this restriction prevents data
corruption and data unavailability.
When an NFS client detects an NFS server change, such as an NFS server reboot or a
new NFS server assuming NFS server responsibility from the previous NFS server, while
writing data asynchronously, the NFS client is responsible for detecting whether it is
necessary to retransmit data and for retransmitting all uncommitted cached data to the
NFS server if retransmission is required.
Storwize V7000 Unified system failover is predicated on this expected client behavior. For
example, when an NFS client is writing data asynchronously to one of Storwize V7000
Unified file modules, if the other file module assumes the NFS server role, the NFS client
must detect the server change and retransmit all uncommitted cached data to the other file
module to ensure that all of the data is safely written to stable storage.
The Storwize V7000 Unified system uses the group IDs (GIDs) supplied by the NFS client
to grant or deny access to file system objects, as defined in RFC 5531. When a user is
defined in more than 16 groups, to get the wanted access control, you must appropriately
define the groups that are transmitted from the client, and appropriately define mode bits
or access control lists (ACLs) on the Storwize V7000 Unified system.
Files created on an NFSv3 mount on a Linux client are visible only through CIFS clients
mounted on the same server node. CIFS clients mounted on different server nodes cannot
view these files.
3.4.3 Storwize V7000 Unified SMB and CIFS protocol support
The SMB/CIFS protocol functionality of Storwize V7000 Unified is provided by an
implementation of Samba and it is clustered by using the clustered trivial database (CTDB).
Samba is a software package made available under the GNU General Public License (GPL) that provides file sharing functions to SMB file clients (for instance, Windows systems). Samba provides the following functions:
Name resolution.
Access control through authentication and authorization.
Integrate with a Windows Server domain as a Primary Domain Controller (PDC) or as a
domain member.
Can be part of an Active Directory (AD) domain.
Service announcement for browsing of resources.
File sharing and print queue sharing
SMB/CIFS protocol access in the Storwize V7000 Unified has been explicitly tested from
SMB file clients running Microsoft Windows (2000, XP, Vista 32-bit, Vista 64-bit, 2008 Server),
Linux with SMBClient, Mac OS X 10.5, and Windows 7.
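For example, SMB access from a Linux client with the smbclient utility might look like the following; the cluster name, share name, and user are hypothetical:

   # List the shares that are published by the system
   smbclient -L //storwizeu -U 'MYDOMAIN\alice'

   # Open an interactive session to one share
   smbclient //storwizeu/projects -U 'MYDOMAIN\alice'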
GPFS is a POSIX-compliant UNIX style file system. For SMB file clients, the Storwize V7000
Unified maps UNIX ACLs to Windows access control semantics. A multitude of file access
concurrency and cross-platform mapping functions are done by the Storwize V7000 Unified
software, especially in the cluster manager. The Storwize V7000 Unified implementation for
SMB file access includes the following characteristics:
File access using SMB is only supported for file systems that are on internal storage of the
Storwize V7000 Unified storage part and not for external virtualized storage systems.
SMB protocol version support:
The SMB 1 protocol is supported.
SMB 2.0 and later is not fully supported. See the note in SMB protocol limitations on
page 30.
SMB data access and transfer capabilities are supported by normal locking semantics.
Provides consistent locking across platforms by supporting mandatory locking
mechanisms and strict locking.
User authentication is provided through Microsoft Active Directory (AD) or through
Lightweight Directory Access Protocol (LDAP).
Consistent central ACLs enforcement across all platforms.
ACLs are enforced on files and directories and can be modified by using Windows tools.
Semi-transparent failover if the SMB/CIFS implementation supports the network retry.
Supports the win32 share modes for opening and creating files.
File lookup is not case-sensitive.
Support for DOS attributes on files and directories.
Archive bit, ReadOnly bit, System bit, and other semantics not requiring POSIX attributes.
MS-DOS/16 bit Windows short file names.
Supports generation of 8.3 character file names.
Notification support of changes to file semantics to all clients in session with the file.
Opportunistic locks and leases are supported for enabling client-side caching.
Offline or de-staged file support (by the Storwize V7000 Unified hierarchical storage
management (HSM) function through IBM Tivoli Storage Manager):
Offline files are displayed with the IBM HourGlass symbol in Windows Explorer.
Recall to disk is transparent to the application. No additional operation is needed.
Windows Explorer can display file properties without the need to recall offline files.
Storwize V7000 Unified snapshots are integrated into the Volume Shadow Copy Service
(VSS) interface.
Allows users with the appropriate authority to recall older file versions from the
Storwize V7000 Unified snapshots.
Supports file version history for file versions created by Storwize V7000 Unified
snapshots.
The standard CIFS time stamps are made available:
Created time stamp:
The time when the file was created in the current directory.
When the file is copied to a new directory, a new value is set.
Modified time stamp:
The time when the file was last modified.
When the file is copied elsewhere, it keeps the value in the new directory.
Accessed time stamp:
The time when the file was last accessed.
This value is set by the application, but not all applications modify it.
SMB protocol limitations
Consider the following SMB protocol limitations when configuring and managing the Storwize
V7000 Unified system:
Alternate data streams are not supported. One example is an NTFS alternate DataStream
from a Mac OS X operating system.
Server-side file encryption is not supported.
Level 2 opportunistic locks (oplocks) are currently not supported. This means that level 2
oplock requests are not granted.
Symbolic links cannot be stored or changed and are not reported as symbolic links, but
symbolic links created via NFS will be respected if they point to a target under the same
exported directory.
SMB signing for attached clients is not supported.
SSL secured communication to Active Directory is not supported.
Storwize V7000 Unified acting as a Distributed File System (DFS) root is not supported.
Windows Internet Naming Service (WINS) is not supported.
Retrieving Quota information using NT_TRANSACT_QUERY_QUOTA is not supported.
Setting Quota information using NT_TRANSACT_SET_QUOTA is not supported.
Managing the Storwize V7000 Unified system by using the Microsoft Management
Console Computer Management Snap-in is not supported, with the following exceptions:
Listing shares and exports
Changing share or export permissions
Users must be granted permissions to traverse all of the parent folders on an export to
enable access to a CIFS export.
SMB/CIFS1 specific limitations
CIFS extensions for UNIX are not supported.
Cannot create a shadow copy of a shared folder using a remote procedure call (RPC) from
a shadow copy client.
Backup utilities, such as the Microsoft Volume Shadow Copy Service, cannot create a shadow copy of a shared folder using an RPC.
SMB2 specific limitations
SMB 2.1 is not supported.
The Storwize V7000 Unified system does not grant durable or persistent file handles.
Storwize V7000 Unified FTP and SFTP support
Storwize V7000 Unified provides FTP access from FTP clients using vsftpd. The following
characteristics apply:
Supports file transfer to and from any standard FTP client.
Supports user authentication through AD and LDAP.
Supports enforcement of ACLs and retrieval of POSIX attributes. ACLs cannot be modified
by using FTP because there is no support for the chmod command.
Supports FTP resume for clients that support network retry on node failover.
Characters for file names and directory names are UTF-8 encoded.
You cannot use the FTP protocol with the PuTTY utility to access the Storwize V7000
Unified Service IP address, because PuTTY attempts to list files, which is not permitted by
the Storwize V7000 Unified Service IP FTP service. PuTTY SCP is supported.
When using FileZilla to view a directory listing on a Storwize V7000 Unified system, all file
time stamps will have a constant time offset. The time offset is caused by FileZilla
automatically converting the time stamps from UTC to the local time zone. This conversion
can be customized by adding the Storwize V7000 Unified system to the site manager and
adjusting the server time offset in the Advanced tab.
When opening the FTP session, specify one of the IP addresses that are defined in the
Public Networks page of the GUI. When prompted for Name, specify the userid in the
format: domain\user. When prompted for password, enter the password.
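A command-line FTP session might therefore look like the following sketch; the IP address, domain, and user name are hypothetical:

   # Connect to one of the public IP addresses defined in the GUI
   ftp 192.0.2.10
   # At the Name prompt, enter the user in domain\user format, for example
   #   MYDOMAIN\alice
   # and enter the password at the Password prompt.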
Storwize V7000 Unified HTTPS support
Storwize V7000 Unified supports simple read only file transfer of files through the HTTPS
protocol from any HTTP client using Apache. All transfers use HTTPS to provide access
control. The following features are supported through HTTPS:
Supports read only file transfer of appropriately formatted files.
Supports user authentication through AD and LDAP.
Supports enforcement of ACLs. ACLs cannot be viewed or modified with this protocol.
On node fail over during a file transfer, the transfer is canceled and must be tried again on
the other file module. Partial retrieve is supported, minimizing duplicate transfers in a fail
over situation.
Characters for file names and directory names are UTF-8 encoded.
The Storwize V7000 Unified software uses HTTP aliases as the vehicle to emulate the share
or export concept. For example, share XYZ is accessible by http://server.domain/XYZ. The
system redirects all HTTP access requests to HTTPS.
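Because access is read only, a simple command-line HTTP client such as curl can be used to retrieve a file. The share name, host name, and user below are hypothetical:

   # Retrieve a file from share XYZ; -u supplies the AD or LDAP credentials,
   # and -L follows the automatic redirect from HTTP to HTTPS
   curl -L -u 'MYDOMAIN\alice' -O https://server.domain/XYZ/report.pdf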
The Web-based Distributed Authoring and Versioning (WebDAV) and the Representational
State Transfer (REST) API are currently not supported in Storwize V7000 Unified. They are
known requirements.
Storwize V7000 Unified SCP and SFTP support
The Storwize V7000 Unified supports the transfer of files between an SCP client and the system by using sshd. All the default options implemented in this protocol are supported. Also, SFTP is available through sshd to transfer files in a manner similar to using FTP.
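For example, from a Linux or UNIX client (the IP address, user, and paths are hypothetical, and the user name format depends on the configured authentication method):

   # Copy a file to an export with SCP
   scp ./results.csv alice@192.0.2.10:/ibm/gpfs0/shares/projects/

   # Transfer files interactively with SFTP, similar to FTP but over SSH
   sftp alice@192.0.2.10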
Storwize V7000 Unified locking characteristics
POSIX byte range locks set by NFS clients are stored in GPFS, and Windows clients
accessing Storwize V7000 Unified by using the SMB protocol accept these POSIX locks. The
mapping of SMB protocol locks to POSIX locks is updated dynamically on each locking
change.
Unless an application specifically knows how to handle byte-range locks on a file or is designed for multiple concurrent writes, concurrent writes to a single file are not desirable in any operating system.
To maintain data integrity, locks are used to guarantee that only one process can write to a file
(or to a byte range in a file) at a time. Although file systems traditionally locked the entire file,
newer ones such as GPFS support the ability for a range of bytes within a file to be locked.
Byte range locking is supported for both the SMB protocol and the NFS protocol, but this does
require the application to know how to use this capability.
If another process attempts to write to a file (or a section of one) that is already locked, it
receives an error and waits until the lock is released.
The Storwize V7000 Unified supports the standard DOS and NT file system (deny-mode)
locking requests. These requests allow only one process to write to an entire file at a given
time, as well as byte range locking. In addition, Storwize V7000 Unified supports the Windows
locking known as opportunistic locking or oplock.
SMB protocol byte range locks set by Windows SMB file clients are stored both in the
Storwize V7000 Unified cluster-wide database, and by mapping them to POSIX byte range
locks in GPFS. This mapping ensures that NFS file clients see relevant SMB protocol locks as
POSIX advisory locks, and NFS file clients accept these locks.
3.4.4 Storwize V7000 Unified cluster manager
The cluster manager is a core Storwize V7000 Unified component as shown in Figure 3-4 on
page 33. The cluster manager coordinates and orchestrates Storwize V7000 Unified
functions and advanced functions. The cluster manager runs on only one of the file modules
and can fail over to the second file module if an issue occurs such as a system hang or other
failure.
Figure 3-4 Storwize V7000 Unified Cluster Manager
The Storwize V7000 Unified cluster manager provides the clustered implementation and
management of the file modules including tracking and distributing record updates across
both file modules in the cluster. It controls the public IP addresses used to publish the file
services, and moves them as necessary between the file modules. By monitoring scripts, the
cluster manager monitors and determines the health state of the file modules. If a file module
has a problem, such as a hardware or software failure, the cluster manager dynamically
migrates the affected public IP addresses and in-flight workloads to the other file module. It
uses the tickle-ACK method with the affected clients so that they re-establish the TCP
connection to the other file module. With this method, acknowledgement packets are exchanged, which allows the remaining file module to send an appropriate reset packet so that clients know the connection to the original file module must be reset. Otherwise, clients
would time out after a possibly long time.
The Storwize V7000 Unified software works in an active-active, highly available, and workload-sharing manner with the clustering functionality provided by the cluster manager. If
a file module fails, the Storwize V7000 Unified software automatically fails over the workload
to the remaining file module. From a workload allocation standpoint, the Storwize V7000
Unified uses the Domain Name System (DNS) to perform round-robin access to spread
workload as equally as possible on an IP address basis across the file modules.
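Round-robin distribution is achieved by registering the public IP addresses of both file modules under one host name in DNS. A hypothetical BIND zone file fragment illustrates the idea:

   ; Both file modules publish file services under the same name;
   ; the DNS server returns the addresses in rotation.
   storwizeu   IN  A   192.0.2.10   ; public IP address on file module 1
   storwizeu   IN  A   192.0.2.11   ; public IP address on file module 2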
The Storwize V7000 Unified allocates a single network client to one file module. The Storwize V7000 Unified software does not rotate a single client's workload across file modules. This process is not only unsupported by DNS or the SMB protocol, but would also decrease performance because caching and read-ahead are done in the file module. For this reason, any individual client is assigned to one file module for the duration of its session.
One of the primary functions of the cluster manager is to support concurrent access from
concurrent users, spread across multiple various network protocols and platforms to many
files. The Storwize V7000 Unified software also supports, with the appropriate authority,
concurrent read and write access to the same file, including byte-range locking. Byte-range
locking means that two users can access the same file concurrently, and each user can lock
and update a subset of the file.
All file accesses from users to GPFS logically traverse the cluster manager. However, the cluster manager handles only metadata and locking; it does not handle the data transfer itself. In other words, the cluster manager is not in-band for data transfer.
The Clustered Trivial DataBase (CTDB) functionality provides important capabilities for the
cluster manager to provide a global name space to all users from any file access protocol, in
which both file modules appear as a single file server. The CTDB also assures that all the
Storwize V7000 Unified SMB components on both file modules are able to talk to each other
in a high performance manner, and update each other about the locking and other
information.
3.4.5 Storwize V7000 Unified product limits
Consider the following limits when configuring and managing the Storwize V7000 Unified
system:
Non-Storwize V7000 Unified application installation is not supported on Storwize V7000
Unified file modules.
The number of shares and exports that can be created per service (CIFS, NFS, FTP, SCP,
and HTTPS) is limited to 1000 per protocol.
If naming a share by using the command-line interface (CLI), you can use up to 80
characters. However, the graphical user interface (GUI) limits you to 72 characters.
The share name global is reserved and cannot be used as a share name.
Restricting ports by VLAN, service, or other criteria, is not possible.
VLAN 1 is not supported for Storwize V7000 Unified client traffic.
This restriction is intended to prevent security exposure and reduce the probability of
network configuration errors. VLAN 1 has been used within the industry as the default or
native VLAN. Many vendors use VLAN ID value 1 for management traffic by default.
Configuring VLAN 1 as available within the network can be a security exposure because
VLAN 1 might span large parts of the switched network by default. Common practice in the
industry strongly discourages the use of VLAN 1 for user client traffic. Setting VLAN 1 for
user client traffic can require explicit steps that differ by vendor and can be prone to
configuration error.
The access control lists (ACL) READ_NAMED, WRITE_NAMED, and SYNCHRONIZE
have no effect on the Storwize V7000 Unified system.
Task scheduling has the following limitations:
Only schedules in equally spaced three-hour increments are supported. Increments other than three hours are not supported.
It is strongly suggested that only the UTF-8 character set is selected when connecting to
Storwize V7000 Unified CLI via Secure Shell (SSH), to Storwize V7000 Unified GUI via
browser, or when connecting to Storwize V7000 Unified shares and exports via NFS and
FTP:
By selecting UTF-8 encoding in the SSH client for the connection to Storwize V7000
Unified CLI.
By selecting UTF-8 as locale for the connection to Storwize V7000 Unified in the FTP
client.
All Storwize V7000 Unified internal scripts and tools currently use LANG=en_US.UTF8,
and handle file names and directory names as though they contained only UTF-8
characters. Users can create files and directories by using different locales. For
example, by using an external Linux client that is set to LANG=ISO-8859-1, LANG=is_IS
or LANG=de_DE, or DBCS locales like LANG=euc_JP. Storwize V7000 Unified kernel NFS
daemon simply treats file and directory names as a stream of bytes, so by using NFS
mounts, you can theoretically copy those files and directories into Storwize V7000
Unified system. Storwize V7000 Unified kernel NFS daemon is not aware of locales,
and therefore can copy files or directories with non-UTF-8 characters into Storwize
V7000 Unified system.
UTF-8 uses the most-significant bit to encode characters that are not in the ASCII
character set, which includes only characters with hexadecimal values 0x01-0x7f;
decimal values 1 - 127. The UTF-8 encoding enforces that if one byte in a file or directory name is greater than hexadecimal 0x7f, a second, and possibly a third or fourth, byte must follow to complete a valid character. Therefore, files and
directories that are created in a non-UTF-8 locale that have such a byte greater than
0x7f in their name would be invalid when interpreted as UTF-8.
The CLI command input, the GUI, and some output such as messages and log entries,
currently require UTF-8 format only.
Multibyte Character Set (MBCS) support is limited to file and directory names. MBCS
characters in object names (for example, user names) are not supported.
Current limitations:
Non-UTF-8 characters in file and directory names are not displayed correctly in the
CLI, the GUI, messages, or log entries.
Non-UTF-8 file and directory names can be read only from clients that have the
same language setting. For example, if an NFS client defined as ISO-8859-1 is
used to create a file, a CIFS client or a different NFS client using UTF-8 cannot see
or access that file.
Non-UTF-8 file and directory names cannot be backed up or restored.
Non-UTF-8 file and directory names cannot be specified when using Storwize
V7000 Unified CLI, which interprets characters only as UTF-8. Attempting to restore
a file name that contains a non-UTF-8 character would not restore the file with that
file name because the byte representation is different.
Non-UTF-8 file and directory names might cause problems in other Storwize V7000
Unified areas, including asynchronous replication, backup, and file access methods
such as FTP, HTTPS, SCP, SMB, and NFS.
Non-UTF-8 file and directory names might get represented differently in different
locales. Some locales might not even be able to represent the byte combination at
all, might treat the file names as invalid, and might not process them correctly, if at
all.
Object names using multi-byte non-UTF-8 characters might be limited to as few as
25% of the maximum number of characters that are supported for the names of the
same object that are composed of only 1-byte UTF-8 characters.
A directory that contains non-UTF-8 characters in its path cannot be the root of a
share or export, or of a file set.
The following characters cannot be backed up or restored: chr(24), chr(25),
newline, and the common slash /.
Wildcards and double quotation marks are not officially supported for backup. If you
require that those characters be backed up, contact your IBM representative.
Notes:
Windows Service for UNIX (SFU)/Subsystem for UNIX based Applications (SUA) NFS
does not handle non-ASCII UTF-8 file names correctly.
Internet Explorer (IE) does not correctly display non-ASCII characters in FTP file names
and directory names that use UTF-8 for file name encoding. Such files or directories
cannot be accessed. For example, IE does not correctly parse FTP directory listings
that contain space characters within user or group names.
The Storwize V7000 Unified is tested with up to six VLAN-based subnets.
For further configuration limits and restrictions, consult the Storwize V7000 Unified
Support Site at http://www.ibm.com/storage/support/ and locate the V1.4 Configuration
limits and restrictions for IBM Storwize V7000 Unified at:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004227
Chapter 4. Access control for file serving clients
This chapter describes access control to resources for file serving clients. Access control is a
broad term about controlling who (user) or what (system) is granted access to which
resources and can have many criteria. Two key concepts that are used to control access are
authentication and authorization.
Authentication is a means to provide and verify credentials to ensure the identity of a user.
Authorization is a means to grant access to a specific service or specific resources to a user,
usually after successful authentication.
4.1 Authentication and authorization in general
The objective of authentication is to verify the claimed identity of users and components.
Authentication methods include, for example, unique user IDs, keys and digital certificates. As
the first process, authentication provides a way of identifying a user, typically by having the
user provide credentials before access is granted. This can be done by entering a valid user
name and valid password. Typically, the process of authentication is based on each user
having a unique set of criteria for gaining access. The authentication server compares the
entered authentication credentials with user credentials stored in a database. If the
credentials match, the user is deemed to have been identified successfully. If the credentials
do not match, authentication fails.
After a successful authentication, the user can access services or resources based upon the
associated authorization. Authorization might be based on a user ID (UID) and matching UID
in access control lists (ACLs) or other means of mapping a specific user to a specific resource
or service.
4.1.1 UNIX authentication and authorization
UNIX authentication is system-based. Authorization, that is, granting access to resources within the system or to shared resources that are accessed from the system, is based on the UID and group identifier (GID). A user uses their user name and a credential (for instance, a password or a private Secure Shell (SSH) key) to log on to a UNIX system. The system looks up the user's UID in local files or an external directory service such as Lightweight Directory Access
Protocol (LDAP) and then verifies the received credential. The information for credential
verification might be stored locally (for instance, hashes of passwords are stored in
/etc/shadow, public SSH keys are stored in .ssh/authorized_keys) or in the external directory service (for instance, LDAP).
When a user has successfully logged on to a UNIX system, they are trusted (authenticated)
on this system, but also by all other systems that trust the particular system the user just
logged on to. For example, for file sharing via the NFS file sharing protocol, a Network File
System (NFS) file server administrator creates an NFS export and grants access to the user's system. For NFS file access, the UNIX NFS client running on the user's system sends the user's UID with each file access request. The NFS service running on the NFS file server considers this UID as authenticated, assuming that the system correctly authenticated the user's UID. The UID is not used for another specific authentication by the NFS server, but for the authorization of the user's access to file resources. For instance, if a user's system is
authenticated, but the UID of the user does not match the UID for resources on the NFS
server system, the user still has no access to the resources of the NFS server because he is
not authorized to access them.
This means that to be able to access remote NFS resources, the users system must be
authenticated to the remote server, and the user must be authenticated to the local system
and authorized to access its resources (by a successful logon to a system). Also, the UID that the user system's NFS client provides must match a valid UID on the NFS server's system for
specific resources.
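The following hypothetical example shows why the numeric IDs must match. The same UID value identifies the user on both sides, regardless of the user name; the output lines are illustrative only:

   # On the NFS client: the logged-on user has UID 1000
   id
   #   uid=1000(alice) gid=1000(staff) groups=1000(staff)

   # On the mounted export: the file is owned by UID 1000, so this user is
   # authorized; a user with a different UID would not be
   ls -ln /mnt/projects/report.txt
   #   -rw-r----- 1 1000 1000 4096 Jan 10 09:15 /mnt/projects/report.txt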
4.1.2 Windows authentication and authorization
Windows authentication and authorization is session-based. A user logs on using their user
name and password on the Windows system. The system looks up the user's security
identifier (SID) in the local Windows registry or on the Windows domain controller, for
example, an Active Directory (AD) server, and then verifies the received credential. The
information for credential verification might be stored locally (in the Windows registry) or on
the external Windows domain controller (the previously mentioned Active Directory server).
When a user is successfully logged on to a Windows system, they are trusted on this system.
However, the user still must authenticate and authorize to other services provided by other
network connected systems before they can access them. For instance, to access shares via
the Server Message Block (SMB) protocol, an SMB file server administrator creates an SMB
share and customizes the ACL (for authorization) of this share to grant the user access to the
share.
For access to SMB shares, the SMB client running on the user's system sends an
authentication request to an authentication server (for instance, Active Directory). The
authentication server checks if the requesting user is allowed to use the service, and then
returns a session credential (which is encrypted with the key of the SMB server) to the SMB
client. The SMB client sends the session credential to the SMB server. The SMB server
decrypts the session credential and verifies its content. When the session credential is
verified, the SMB server knows that the user was authenticated and authorized by the
authentication server.
4.1.3 UNIX and Windows authentication and authorization in Storwize V7000
Unified
To provide heterogeneous file sharing for UNIX and Windows, the Storwize V7000 Unified
must support the authentication methods for UNIX and Windows, as previously described.
The Storwize V7000 Unified uses Windows authentication for incoming SMB connection
requests and UNIX authentication for incoming NFS, Hypertext Transfer Protocol (HTTP),
Secure Copy Protocol (SCP), and Secure File Transfer Protocol (SFTP) requests.
4.2 Methods used for access control
Depending on the dominating operating system environment, the size of the infrastructure,
and other variables, different methods to enforce access control are employed.
4.2.1 Kerberos
Kerberos is a network authentication protocol for client/server applications by using
symmetric key cryptography. User password in clear text format is never sent over the
network. The Kerberos server grants a ticket to the client for a short span of time. This ticket is
used by the client of a service while communicating with the server to get access to the
service, for instance, access to SMB file server shares. Windows Active Directory
authentication is based on Kerberos. MIT Kerberos is a free implementation of Kerberos
protocol, which is provided by the Massachusetts Institute of Technology.
For more information about Kerberos, see this website:
http://web.mit.edu/kerberos/#what_is
4.2.2 User names and user IDs
UNIX systems and UNIX-based appliances such as the Storwize V7000 Unified use user names and UIDs to represent users of and to the system. The user name is typically a human-readable sequence of alphanumeric characters, and the UID is a positive integer value. When
a user logs on to a UNIX system, the operating system looks up the UID and then uses it for
further representation of the user.
User names, UIDs, and the mapping of user names to UIDs are stored locally in the
/etc/passwd file or on an external directory service such as AD, LDAP, or Network Information
Service (NIS).
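For example, a local /etc/passwd entry maps the user name (first field) to the UID (third field) and the primary GID (fourth field); the values shown are hypothetical:

   alice:x:1000:1000:Alice Example:/home/alice:/bin/bash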
4.2.3 Group names and group identifiers in UNIX
UNIX systems use groups to maintain sets of users which have the same permissions to
access certain system resources. Similar to user names and UIDs, a UNIX system also
maintains group names and group identifiers (GIDs). A UNIX user can be a member of one or
more groups, where one group is the primary or default group. UNIX groups are not nested.
They contain users only, but not other groups.
Group names, GIDs, the mapping of group names to GIDs, and the memberships of users in
groups are stored locally in the /etc/group file or on an external directory service such as AD,
LDAP, or NIS. The primary group of a user is stored in /etc/passwd or in an external directory
service.
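Similarly, a local /etc/group entry maps the group name to its GID and lists the group members (hypothetical values):

   staff:x:1000:alice,bob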
4.2.4 Resource names and security identifiers in Windows
Windows refers to all operating system entities as resources, including users, groups,
computers, and other resources. Each resource is represented by a security identifier (SID).
Windows groups can be nested. For instance, one group can include one or more users and
one or more groups. Resource names and SIDs are stored locally in the Windows registry or
in an external directory service such as Active Directory or LDAP.
4.2.5 UID/GID/SID mapping in the Storwize V7000 Unified
The Storwize V7000 Unified stores all user data in the GPFS file system, which uses UIDs
and GIDs for authorization. For SMB share access, the Storwize V7000 Unified needs to map
SIDs to UIDs and GIDs to enforce access control. NFS clients send the UID and GID of a
user that requests access to a file. The Storwize V7000 Unified uses the Linux default access
control mechanism by comparing the received UID and GID with the UIDs and GIDs stored in
GPFS.
The UIDs and GIDs used by the NFS clients must match the UIDs and GIDs stored inside
GPFS. There is a requirement to allow the remapping of external UIDs and GIDs used by the
NFS client to different UIDs and GIDs stored on GPFS.
For HTTP, SFTP, and SCP access, the Storwize V7000 Unified requires users to authenticate
via a user name. The Storwize V7000 Unified needs to map the user name to one UID and
one or more GIDs for GPFS access control.
When SMB clients using Windows connect to the Storwize V7000 Unified, it first contacts Active Directory to check the user name and password combination. The first time a user logs in, an ID mapping is created, and the resulting UID/GID pair is stored in the idmap database in the Storwize V7000 Unified. After that, the mapping is picked up from the database directly. For NFS access from UNIX clients, the UID is provided by the UNIX client itself. In case of
mixed access from Windows and UNIX, Active Directory with Services for UNIX (SFU) can be
used.
4.2.6 Directory services in general
Storing user and group information in local files works well for small organizations that
operate only a few servers. Whenever a user is added or deleted, a group membership is changed, or a password is updated, this information must be updated on all servers. Storing this information in local files does not scale for large organizations that have many users who need selected access to many servers and services.
Directory services allow you to store and maintain user and group information centrally on an
external server. Servers look up this information in the directory server instead of storing this
information in local files.
4.2.7 Windows NT 4.0 Domain Controller and SAMBA Primary Domain Controller
A domain is a concept introduced in Windows NT where a user might be granted access to a
number of computer resources with the use of user credentials. A domain controller (DC) is a
server that responds to authentication requests and controls access to various computer
resources. Windows 2000 and later versions introduced Active Directory, which largely
eliminated the concept of primary and backup domain controllers. Primary domain controllers
(PDCs) are still used by customers. The SAMBA software can be configured as the primary domain controller, and customers can run SAMBA on Linux. The Samba4 project has the goal of running SAMBA as an AD server.
4.2.8 Lightweight Directory Access Protocol
The Lightweight Directory Access Protocol (LDAP) is a directory service access protocol
using TCP/IP. LDAP was developed as a lightweight alternative to the traditional Directory
Access Protocol (DAP). The lightweight in the name refers to the fact that LDAP is not as
network intensive as DAP.
An LDAP directory is usually structured hierarchically, as a tree of nodes. Each node
represents an entry within the LDAP database. A single LDAP entry consists of multiple
key/value pairs called attributes, and is uniquely identified by a distinguished name.
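A hypothetical user entry in LDIF format illustrates the distinguished name and the attribute/value structure, including the UID and GID attributes that UNIX-style authorization uses:

   dn: uid=alice,ou=People,dc=example,dc=com
   objectClass: inetOrgPerson
   objectClass: posixAccount
   cn: Alice Example
   sn: Example
   uid: alice
   uidNumber: 1000
   gidNumber: 1000
   homeDirectory: /home/alice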
4.2.9 Microsoft Active Directory
Active Directory (AD) is a Microsoft created technology introduced in Windows 2000 and
provides the following network services:
Directory service, which is based on LDAP
Authentication service, which is based on Kerberos
Domain Name System (DNS) function
4.2.10 Services for UNIX and Identity Management for UNIX
Services for UNIX (SFU) is a Microsoft Windows component for Windows Server 2003 with
AD. Identity Management for UNIX is used instead in Windows Server 2008 with AD, which
provides interoperability between Microsoft Windows and UNIX environments. The Storwize
V7000 Unified uses it primarily for UID/GID/SID mapping.
4.2.11 Network Information Service
Network Information Service (NIS) is a directory service protocol for centrally storing
configuration data of a computer network. NIS protocols and commands were originally
defined by Sun Microsystems. The service is now widely implemented. NIS was originally called Yellow Pages (YP), and some of the binary names still start with yp. The original NIS design was seen to have inherent limitations, specifically in the areas of scalability and security.
Therefore, modern and secure directory systems, primarily LDAP, are used as an alternative.
The NIS information is stored in so-called NIS maps, typically providing the following
information:
Password-related data similar to data stored in /etc/passwd
Group related data similar to data stored in /etc/group
Network configuration such as netgroups
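On a client that is bound to an NIS domain, the standard NIS utilities can be used to inspect these maps; the commands below assume that the yp-tools package is installed, and the output depends on the environment:

   # Dump the password map of the default NIS domain
   ypcat passwd

   # Look up a single user, and list the netgroup map with its keys
   ypmatch alice passwd
   ypcat -k netgroup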
4.2.12 Access control list in general
Generally, an access control list (ACL) is a list of permissions that is attached to a resource.
An ACL describes which identities are allowed to access the respective resource (for instance
read, write, execute). ACLs are the built-in access control mechanism of UNIX and Windows
systems. Storwize V7000 Unified uses a Linux built-in ACL mechanism for access control to
files that are stored on GPFS.
4.2.13 GPFS NFSv4 ACLs
There is a broad range of ACL formats that differ in syntax and semantics. The ACL format
defined by NFSv4 is also called NFSv4 ACL. GPFS ACLs implement the NFSv4 style ACL
format, which is sometimes referred to as GPFS NFSv4 ACL. The Storwize V7000 Unified stores
all user files in GPFS. The GPFS NFSv4 ACLs are used for access control of files stored on
the Storwize V7000 Unified.
4.2.14 POSIX bits
The POSIX bits of a file are a way to specify access permissions to files. UNIX file systems
allow you to specify the owner and the group of a file. The POSIX bits of a file allow you to
configure access control for the owner, the group, and for all other users to read, write to, or
execute the file. POSIX bits are less flexible than ACLs.
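For illustration, the POSIX bits are the familiar permission string that ls -l shows on any UNIX system; the file and values here are hypothetical:

   # Owner can read and write, group members can read, others have no access
   ls -l report.txt
   #   -rw-r----- 1 alice staff 4096 Jan 10 09:15 report.txt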
The change of the POSIX bits of a GPFS file system triggers a modification of its GPFS
NFSv4 ACL. Because the Storwize V7000 Unified uses GPFS NFSv4 ACLs for access
control, the Storwize V7000 Unified administrators and IBM service personnel should never
change the POSIX bits of files stored in GPFS.
4.2.15 ACL mapping
GPFS NFSv4 ACLs and Windows ACLs are not compatible. For instance, Windows supports
unlimited nested groups that are not fully supported by GPFS NFSv4 ACLs. The Storwize
V7000 Unified maps Windows ACLs on a best fit basis to GPFS NFSv4 ACLs, which results
Note: The implementation of NFSv4 ACLs in GPFS does not imply that GPFS or the
Storwize V7000 Unified supports NFSv4. NFSv4 support of the Storwize V7000 Unified is
planned for a future release.
in some limitations. It is a known limitation that in this aspect the current Storwize V7000
Unified is not fully compatible with Windows SMB file sharing.
4.3 Access control with Storwize V7000 Unified
The authentication configuration of the Storwize V7000 Unified consists of two elements, the
configuration of a directory service and the refinement of the ID mapping. In the Storwize
V7000 Unified implementation, it is essential to define an initial owner when creating a share.
Only this owner has initial access from a file client and can start to define directory structures
and associated ACLs for all other designated users of this share. The owner setting cannot be displayed or listed afterward, and it cannot be changed if any data is stored on the share.
4.3.1 Authentication methods supported
Storwize V7000 Unified supports the following authentication methods:
Active Directory
Active Directory with Microsoft Windows Services for UNIX (SFU), and Identity
Management for UNIX
SAMBA Primary Domain Controller (PDC)
LDAP
LDAP with MIT Kerberos
NIS
Local authentication service configured internally on Storwize V7000 Unified system
Storwize V7000 Unified uses the following other authentication elements within the methods
supported:
Netgroups: A netgroup is a group of systems that is used to restrict mounting of NFS exports to a set of systems and to deny mounting on all other systems. The Storwize V7000 Unified supports netgroups that are stored in NIS.
Kerberos: The Storwize V7000 Unified supports Kerberos with AD (mandatory) and LDAP
(optional).
Secure Sockets Layer/Transport Level Security (SSL/TLS): These protocols are primarily
used to increase the confidentiality and integrity of data being sent over the network.
These protocols are based on public-key cryptography and use Digital Certificate based
on X.509 for identification.
With the Storwize V7000 Unified, only one authentication method, for instance AD, can be configured at one time. The external authentication server must be configured separately; the Storwize V7000 Unified GUI and CLI do not provide any means to configure or manage the external authentication server. This is true even for the Kerberos server.
The Storwize V7000 Unified provides server-side authentication configuration for various protocols, including NFS, SMB, FTP, SCP, SFTP, and HTTP. For NFSv3, only the protocol configuration is performed on the system, although Kerberos requires a few special steps on the V7000 Unified for NFSv3. Because NFSv3 authentication happens on the client side, it must be configured mainly on the client.
The V7000 Unified must be time-synchronized with the authentication servers; authentication does not work if the time is not synchronized. The authentication configuration does not ensure synchronization, so you must ensure it manually.
4.3.2 Active Directory authentication
To use Active Directory, the V7000 Unified must be configured for and joined to the Active
Directory domain. This automatically creates the required computer account in the AD. The
public clustername specified during installation is used as the computer account name. File
sharing protocol access should always be done with this name (in order for Kerberos to work).
Authentication is provided for all supported file access protocols except NFS. AD with SFU
must be configured for access via NFS for Windows Server 2003 with AD. On Windows
Server 2008 with AD, the Identity Management for UNIX must be enabled in AD.
4.3.3 AD with SFU authentication or with Identity Management for UNIX
AD with SFU or with Identity Management for UNIX is the correct choice for clients with the
following conditions:
Customer uses Windows Server 2003 or Windows Server 2008 with AD to store user
information and user passwords.
Customer plans to use NFS.
Customer plans to use asynchronous replication.
The primary Windows group assigned to an AD user must have a GID assigned. Otherwise,
the user is denied access to the system. Each user in AD must have a valid UID and GID
assigned to be able to mount and access exports. SFU should not be added when data is
stored on the Storwize V7000 Unified.
The primary UNIX group setting in AD is not used by the Storwize V7000 Unified. The
Storwize V7000 Unified always uses the primary Windows group as the primary group for the
user. This results in new files and directories created by a user via the SMB protocol being
owned by their primary Windows group and not by the primary UNIX group. For this reason, it
is recommended that the UNIX primary group is the same as the Windows primary group
defined for the user.
When data is stored on the Storwize V7000 Unified with AD, it is difficult to add SFU later on.
This is because the UIDs and GIDs used internally by GPFS must match the UIDs and GIDs
stored in SFU. If conflicting UIDs and GIDs are stored in SFU, this is not possible so clients
should configure the Storwize V7000 Unified with AD and SFU from the beginning.
The ACLs copied from the source Storwize V7000 Unified system to the target Storwize
V7000 Unified system include UIDs and GIDs of the source Storwize V7000 Unified system,
which are inconsistent with the UIDs and GIDs of the target Storwize V7000 Unified system.
To enable all NFS users of more than one Storwize V7000 Unified to access all systems with
the same identity, the authentication schema must be changed to UID/GID in AD. Existing
users must be edited (mapping from SID to UID/GID) and new users must get the UNIX user
information added while creating the user in AD.
4.3.4 SAMBA primary domain controller authentication
NT4/SAMBA PDC is the old domain controller concept used by Windows NT and Windows
2000. This is not supported by Microsoft any more. To support this method, the open
source/Samba community developed the Samba PDC.
4.3.5 LDAP authentication
LDAP can be used in environments where Windows and UNIX clients are being used. The
Storwize V7000 Unified supports LDAP with Kerberos for SMB protocol access. LDAP is not
supported for secured NFS, FTP, HTTP, SCP protocol access.
4.3.6 Network Information Service
NIS is used in UNIX-based environments for centralized user and service management. NIS
keeps user, domain, and netgroup information. Netgroup is used to group client machine
IP/host name, which can be specified while creating NFS exports. NIS is also used for user
authentication for services like SSH, FTP, HTTP, and more. In the Storwize V7000 Unified, we
use NIS for netgroup support and ID mapping. We use the NIS default domain to resolve the
netgroup even though we support multiple NIS domains. The NIS client configuration needs
server and domain details of the NIS server.
Three different modes of NIS configuration are supported:
NIS for netgroup and AD or PDC/NT4 for authentication and AD increment ID mapping.
Used for mixed environments where there are Windows and UNIX users. In this mode, the
Storwize V7000 Unified supports both the SMB and the NFS protocol and netgroups.
NIS with ID mapping as an extension to AD or Samba PDC/NT4 and netgroup support.
Plain NIS without any authentication, just for netgroup support (only NFS).
4.4 Access control limitations and considerations
The following limitations for authentication and authorization apply.
4.4.1 Authentication limitations
Consider the following authentication limitations when configuring and managing the Storwize
V7000 Unified system.
For AD with the SFU UID/GID/SID mappings extension:
Enabling SFU for a trusted domain requires a two-way trust between the principal and the
trusted domain.
To access the Storwize V7000 Unified system, users and groups must have a valid
UID/GID assigned to them in Active Directory. Allowed range is 1 - 4294967295, both
inclusive. It is advisable to keep the lower range greater than 1024 to avoid conflict with the
CLI users. Starting the command with a lower range less than 1024 generates a warning
message and asks for confirmation. Use the --force option to override it.
For user access, the primary group on the Storwize V7000 Unified system is the Microsoft
Windows Primary group, not the UNIX primary group that is listed in the UNIX attribute tab
in the user's properties. Therefore, the user's primary Microsoft Windows group must be
assigned a valid GID.
For AD with the NIS mappings extension:
Because UNIX style names do not allow spaces in the name, the following conventions for
mapping Active Directory users and groups to NIS are implemented:
Convert all uppercase characters to lowercase characters.
Replace every space character with the underscore character. For example, an Active
Directory user named CAPITAL Name has the corresponding name capital_name on
NIS.
If Active Directory is already configured on the Storwize V7000 Unified system, you can
use only the --idMapConfig option of the cfgad Storwize V7000 Unified CLI command to
change the high value of the range. The high value of the range can be changed only to a
higher value. You cannot change the high value of the range to a lower value. You cannot
change the low value of the range, and you cannot change the range size.
For example, if you used the cfgad Storwize V7000 Unified CLI command with the
--idMapConfig option to configure Active Directory specifying a value for the
--idMapConfig option as 3000-10000:2000, you can use only the cfgad Storwize V7000
Unified CLI command with the --idMapConfig option to increase the value 10000 for the
high value of the range. You cannot decrease the value of 10000 for the high value of the
range. You cannot change the value 3000 for the low value of the range, and you cannot
change the value 2000 for the range size.
To change from NIS ID mappings to Active Directory ID mappings, or to change the ID
mapping parameters of an already existing Active Directory configuration by using the
--idMapConfig option of the cfgad Storwize V7000 Unified CLI command, either to change
the low value of the range, decrease the high value of the range, or change the range size,
you must perform the following steps in the following sequence:
a. Submit the cleanupauth Storwize V7000 Unified CLI command and do not specify the
--idmapDelete option.
b. Submit the cleanupauth Storwize V7000 Unified CLI command and do specify the
--idmapDelete option.
c. Submit the cfgad Storwize V7000 Unified CLI command with the options and values
that you want for the new Active Directory configuration.
If you do not perform the preceding steps in sequence, results are unpredictable and can include loss of data access. A sketch of this command sequence is shown after this list.
UIDs and GIDs less than 1024 are denied access for the FTP, SCP, and HTTPS protocols
for all of the supported authentication schemes other than Active Directory with SFU.
Authentication configuration commands stop and restart the services SMB, NFS, FTP,
SCP, and HTTPS. This action is disruptive for clients that are connected. Connected
clients lose their connection, and file operations are interrupted. File services resume a
few seconds after an authentication configuration command completes.
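A minimal sketch of the ID-mapping reconfiguration sequence that is described in the preceding list is shown here. Only the options that are named in this section appear; any additional cfgad options that your Active Directory environment requires are intentionally omitted, and the range values are examples only:

   # Step a: clean up the authentication configuration, keeping the ID mappings
   cleanupauth

   # Step b: clean up again, this time deleting the stored ID mappings
   cleanupauth --idmapDelete

   # Step c: reconfigure Active Directory with the new ID mapping range
   cfgad --idMapConfig 3000-20000:2000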
4.4.2 Authorization limitations
When managing authorization, the following Storwize V7000 Unified system implementation
details apply:
When a child file or child directory is created, the ACL that the file is initially assigned
depends on the ACL type, the file system settings, and the ACL of the parent directory.
Depending on these variables, the results in GPFS might be slightly different than in
Microsoft Windows. For example, if the parent directory is set to have two ACLs, for
example, full access for owner and for everyone, the Windows default is to create two
ACLs for the child: Allow full access for owner and allow full access for everyone. The
GPFS default creates six ACLs: Allow and deny ACLs for owner, group, and everyone.
The special permissions Write Data/Create File and Create Folder/Append Data cannot
be set separately for files. If either of these permissions is set, both are set. Enabling one
always enables the other, and disabling one always disables the other. For directories,
they can be set separately on condition that these access control entries (ACEs) are not
inherited by files. You can configure two separate ACEs, where the ACE that is inherited
by files has both special permissions enabled or both disabled, and another ACE that is
inherited by directories where one of the preceding special permissions is enabled and the
other disabled. In this case, the Apply onto field of the Permission Entry panel can contain
the following values:
This folder only.
This folder and subfolders.
Subfolders only.
If you attempt to specify the values: This folder, subfolders, and files; this folder and files;
or files only; the following security pop-up message will be displayed:
Unable to save permission changes on folder. The parameter is incorrect.
The BypassTraversalCheck privilege that can be used on Windows servers is not
supported on the Storwize V7000 Unified system. To read the content of a subdirectory, a
user must not only have READ permission in the ACL of this subdirectory, the user must
also have traversal permission (SEARCH in Windows, execute in POSIX) for all of the
parent directories. You can set the traverse permission in the everyone group ACE at the
share root, and inherit this privilege to all subdirectories.
ACL management can be done through NAS protocols by an authorized user.
The default ACL on a file system root directory (700 root root) prevents users from
accessing the file system.
ACL inheritance stops at file set junction points. New file sets always have the default ACL
(700 root root).
For security reasons, creating an export does not allow the setting of an owner for existing
directories. The owner can be changed only if the directory is empty. When a directory
contains files or directories or linked file sets, you cannot change the owner when creating an
export.
If the owner option is omitted when creating an export for a nonexistent directory, the directory
is created and inherits the ACL if inheritance is configured. If ACL inheritance is not
configured, a new export for a nonexistent directory is assigned the default ACL (700 root
root).
If the directory is empty, the owner can be changed by deleting and re-creating the export with
the owner option.
Using POSIX commands such as chmod overwrites any previous ACLs and creates an ACL
with entries only for owner, group, and everyone.
When you create an export, you can create multiple levels of directories if they do not yet
exist. However, if multiple directory levels are created, the owner is set only for the leaf
directory. All other directories are created as owner root and can be accessed only if an
appropriate ACL inheritance has been previously configured.
Storwize V7000 Unified automatically creates owner, group, and everyone special entries to
the ACL to support interoperability with NFS, unlike Microsoft Windows, which does not use
these special ACL entries. An inherited ACL might look different from the parent ACL because
the owner and group entries have changed. Other ACL entries are not affected by this special
behavior.
Only the root user can change ownership to a different user.
Chapter 5. Storage virtualization
This chapter contains a general description of virtualization concepts and provides an
explanation about storage virtualization in the Storwize V7000 Unified system.
5.1 User requirements that drive storage virtualization
In today's environment, there is an emphasis on an IBM Smarter Planet and a dynamic infrastructure. Thus, there is a need for a storage environment that is as flexible as application and server mobility.
In a non-virtualized storage environment, every system is an island that must be managed
separately. Storage virtualization helps to overcome this obstacle and improves the
management process of different storage systems.
You can see the importance of addressing the complexity of managing storage networks by
applying the total cost of ownership (TCO) metric to storage networks. Industry analyses
show that storage acquisition costs are only about 20% of the TCO. Most of the remaining
costs are related to managing the storage system.
When discussing virtualization, it is important to recognize that there are many ways to
achieve it. Therefore, the best possible option might vary depending on requirements, and
because of that, the path taken is usually different for each specific environment.
However, there are some general benefits that can address these client concerns, and they are described here:
Easier administration:
Simplified management: the integrated approach means fewer hardware and software layers to manage separately, and therefore less resource is needed.
Less management effort equates to lower costs and therefore cost savings.
Ideally, a single level of control for the use of advanced functions such as copy services.
Decoupling the use of advanced functions from, for example, the hardware and technology used as the storage back-end.
Decoupling the use of multiple, virtualized operating system environments from the actual server and hardware used.
Improved flexibility:
Shared buffer resources: flexible assignment of resources to multiple levels is possible as required. As needs change over time, there is the potential to automate this flexible resource assignment.
Improved resource usage:
Shared buffers and their flexible assignment can lead to fewer resources and fewer resource buffers being required.
Delayed acquisitions because of this consolidated approach.
Cost savings because of the consolidated block and file virtualization approach.
5.2 Storage virtualization terminology
Although storage virtualization is a term that is used extensively throughout the storage
industry, it can be applied to a wide range of technologies and underlying capabilities. In
reality, most storage devices can technically claim to be virtualized in one form or another.
Therefore, we must start by defining the concept of storage virtualization as used in this book.
This is how IBM defines storage virtualization:
Storage virtualization is a technology that makes one set of resources look and feel like
another set of resources, preferably with more desirable characteristics.
It is a logical representation of resources that is not constrained by physical limitations:
It hides part of the complexity.
It adds or integrates new function with existing services.
It can be nested or applied to multiple layers of a system.
When discussing storage virtualization, it is important to understand that virtualization can be
implemented at various layers within the input/output (I/O) stack. We have to clearly
distinguish between virtualization at the disk layer and virtualization at the file system layer.
The focus of this book is virtualization at the disk layer, which is more specifically referred to
as block-level virtualization, or block aggregation layer.
Virtualization devices can be inside or outside the data path; these approaches are called
in-band (symmetrical) and out-of-band (asymmetrical) virtualization, respectively:
Symmetrical: In-band appliance
The device is a storage area network (SAN) appliance that sits in the data path, and all I/O
flows through the device. This kind of implementation is also referred to as symmetric
virtualization or in-band.
The device is both target and initiator. It is the target of I/O requests from the host
perspective, and the initiator of I/O requests from the storage perspective. The redirection
is performed by issuing new I/O requests to the storage. The SAN Volume Controller
(SVC)/Storwize V7000 uses symmetrical virtualization.
Asymmetrical: Out-of-band or controller-based
The device is usually a storage controller that provides an internal switch for external
storage attachment. In this approach, the storage controller intercepts and redirects I/O
requests to the external storage as it does for internal storage. The actual I/O requests are
themselves redirected. This kind of implementation is also referred to as asymmetric
virtualization or out-of-band.
Figure 5-1 shows variations of the two virtualization approaches.
Figure 5-1 Overview of block-level virtualization architectures
In terms of the storage virtualization concept, the focus in this chapter is on block-level
storage virtualization in a symmetrical or in-band solution.
Figure 5-2 shows in-band storage virtualization on the storage network layer.
Figure 5-2 In-band storage virtualization on the storage network layer
The IBM Storwize V7000 Unified inherits its fixed block-level storage virtualization features
entirely from the SVC and Storwize V7000 product family. These features can be used
independently of the file system or enhancements that are built into the Storwize V7000
Unified based on the software stack that runs on the two file modules.
5.2.1 Realizing the benefits of Storwize V7000 Unified storage virtualization
The Storwize V7000 Unified system can manage external storage arrays and its own internal
storage.
Managing external storage in the V7000 Unified system reduces the number of separate
environments that must be managed down to a single environment and provides a single
interface for storage management.
Moreover, advanced functions such as mirroring and IBM FlashCopy are provided in this
system so there is no need to purchase them again for each new disk subsystem.
Migrating data from external storage to the Storwize V7000 Unified system can be done
easily because of the virtualization engine that this system offers. The external storage array
is connected to the Storwize V7000 Unified system, its existing logical unit numbers (LUNs)
are presented to it, and the data is then copied by using the data migration procedure.
In addition, free space does not need to be maintained and managed within each storage
subsystem separately, which further improves capacity usage.
5.2.2 Using internal physical disk drives in the Storwize V7000 Unified
The Storwize V7000 Unified recognizes its internal physical disk drives as drives and
supports Redundant Array of Independent Disks (RAID) arrays.
The RAID array can be composed of different numbers of physical disk drives depending on
the RAID level chosen, although in general, up to 16 drives can belong to one RAID array. This
RAID array/MDisk is then added to a pool layer, which manages performance and capacity;
multiple MDisks can belong to one pool. In these pools, the logical storage entities, the
volumes, are created. By default, a volume is striped across all the MDisks in its pool, and the
stripe size is known as an extent. These volumes are then mapped to external hosts to provide
storage capacity to them and are seen by the hosts as a Small Computer System Interface
(SCSI) disk. Such a volume can have underlying special properties and capabilities, such as
two independent copies with volume mirroring, or a much smaller real capacity in the case of
a thin-provisioned volume.
These are the logical steps that are required on the Storwize V7000 to set up the volumes and
make them accessible to a host (a CLI sketch follows the list):
1. Select the physical drives.
2. Create the RAID array (when done, this array is now an MDisk).
3. Add this MDisk to a pool.
4. Create the volumes in this pool.
5. Map the volumes to the external host.
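The following Storwize V7000 CLI sequence is a minimal sketch of these steps. The pool, volume, and host names, the drive IDs, and the size are invented for this example, and the exact syntax can vary by code level, so verify the commands against the CLI reference before use:

   mkmdiskgrp -name Pool1 -ext 256                        # create a storage pool with an extent size of 256 MB
   mkarray -level raid5 -drive 0:1:2:3:4:5:6:7:8 Pool1    # build a RAID array from internal drives; it becomes an MDisk in Pool1
   mkvdisk -mdiskgrp Pool1 -size 100 -unit gb -name vol01 # create a volume in the pool
   mkhost -name host01 -fcwwpn <WWPN>                     # define the external host by its Fibre Channel port
   mkvdiskhostmap -host host01 vol01                      # map the volume to the host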
Figure 5-3 shows virtualization layers using internal disks.
Figure 5-3 Virtualization layers using internal disks
Note: In the Storwize V7000 Unified, these are the steps required for external hosts
attached via Internet Small Computer System Interface (iSCSI), Fibre Channel over
Ethernet (FCoE), or via a Fibre Channel (FC) SAN to the Storwize V7000 component. It
works the same for the volumes used by the file modules to host the data for file systems
and enable host access via file protocols. Both file modules are directly attached to the
Storwize V7000 component as hosts but are not displayed as hosts in the standard host
panels in the GUI. The volumes that are used for file systems are visible in several parts of
the GUI.
5.2.3 Using external physical disk drives in the Storwize V7000 Unified
Generally, there is a dependency on the specifics of the external storage subsystem being
used and the features that are built in to it.
Therefore, these steps provide an overview of how to use external SAN-attached storage.
Our only assumption is that the storage subsystem provides RAID functionality for data
protection against physical disk drive failures.
In the external SAN-attached storage system (storage controller), there are internal logical
devices created, presented, and mapped to the Storwize V7000 as logical volumes, LUNs, or
RAID arrays. This depends on the capabilities of the specific storage system. These logical
entities are recognized by the Storwize V7000 as MDisks directly, which are then added to
storage pools in the same fashion as for MDisks based on Storwize V7000 internal disks as
described previously. Next, volumes are created in these storage pools and mapped to
external hosts, and again this process is done in the same way as before.
This process is shown in Figure 5-4.
The following steps are required on the external storage system/controller to map the external
storage to the V7000: Select the physical disks, create the RAID array, then map the entire
array to the Storwize V7000. Or, create logical volumes within the array and map these to the
Storwize V7000.
They are recognized as MDisks in the Storwize V7000 and treated as such in the next logical
configuration steps.
To set up the volumes on the Storwize V7000 and make them accessible to a host, you must
detect the MDisks from the external controller, then add the MDisks that are found to the
storage pools. Then, create volumes in the pools and map the volumes to the external hosts
connected via iSCSI or FC. See Figure 5-4.
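As a hedged sketch only (the MDisk and pool names are examples, and syntax can vary by code level), the corresponding Storwize V7000 CLI steps look similar to the following after the external LUNs have been mapped to the system:

   detectmdisk                            # scan the SAN for new MDisks presented by the external controller
   lsmdisk                                # list the discovered MDisks and verify their status
   addmdisk -mdisk mdisk8:mdisk9 Pool2    # add the new MDisks to a storage pool

Volumes are then created in the pool and mapped to hosts with mkvdisk and mkvdiskhostmap, in the same way as for internal storage.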
Important: Do not place file systems that are accessed with the Common Internet File
System (CIFS) or SMB on external storage systems. CIFS or SMB access to file system
volumes is supported only for volumes that are placed on internal storage. External
storage can be used to provide NFS, iSCSI, and FC protocols to other hosts once the
external storage is under control of the Storwize V7000 Unified.
Figure 5-4 Virtualization layers using external disks
In the previous sections, we described how the different layers interact with each other to
provide the volume entity that is to be presented to the connected hosts.
In a Storwize V7000 Unified configuration, the file system storage uses its own set of volumes
that are created in Storwize V7000 storage pools. The volumes are created when a file
system is created, which controls their naming, sizing, and number. Volumes intended for
other hosts besides the file modules need to be created individually.
5.3 Summary
Storage virtualization is no longer merely a concept or an unproven technology. All major
storage vendors offer storage virtualization products. Making use of storage virtualization as
the foundation for a flexible and reliable storage solution helps enterprises to better align
business and IT by optimizing the storage infrastructure and storage management to meet
business demands.
The IBM System Storage SAN Volume Controller, the Storwize V7000, the Storwize V3500,
the Storwize V3700, and the Storwize V7000 Unified are built upon a mature, sixth-generation
virtualization solution that uses open standards and is consistent with the Storage Networking
Industry Association (SNIA) storage model. The appliance-based in-band block virtualization
process, in which intelligence, including advanced storage functions, is migrated from
individual storage devices to the storage network, can reduce your total cost of ownership and
improve your return on investment.
At the same time, it can improve the usage of your storage resources, simplify your storage
management, and improve the availability of your applications.
Chapter 6. NAS use cases and differences: SONAS and Storwize V7000 Unified
In this chapter, we build on the network-attached storage (NAS) methods described earlier in
Chapter 4, Access control for file serving clients on page 37. We also describe some typical
use cases for NAS by using the features and functions built into the Storwize V7000 Unified.
As an add-on to this chapter, we list the major differences from the IBM Scale Out NAS solution
(SONAS), which is built by using the same software stack that has been adopted for the
Storwize V7000 Unified file access component. Therefore, most of the file access methods and
built-in functionality are similar. However, because the SONAS hardware is very different,
there are also some major differences between the two products. One of the major areas is
scalability.
6.1 Use cases for Storwize V7000 Unified
Here are some examples of use cases that use and benefit from the powerful software
features built into the Storwize V7000 Unified. There are many other possibilities, and most of
the options can be combined to build a tailored solution that fits the individual needs of a
client.
6.1.1 Unified storage with both file and block access
Sometimes there are requirements for a storage system that can handle both file access and
block access at the same time. An extra benefit then is flexibility regarding the storage
assignment: storage capacity can be moved between these two access methods as required
and as needs change over time. At the same time, simultaneous access to the data by using
both the file and block access methods should not interfere with each other in terms of
creating performance dependencies.
The Storwize V7000 Unified provides storage to both worlds in one unified system with the
flexibility mentioned and is a good solution for these requirements. It provides dedicated
interfaces for both methods of data access and allows for the shifting of storage capacity
between them:
Storwize V7000 Unified provides file access via the IP network, for example, for Server
Message Block (SMB)/Common Internet File System (CIFS), and Network File System
(NFS) exports. File access is handled by the two file modules.
Storwize V7000 Unified provides block storage access via IP for iSCSI and block access is
handled by the Storwize V7000.
Storwize V7000 Unified provides block storage access via the SAN using Fibre Channel
Protocol and block access is handled by the Storwize V7000.
Storwize V7000 Unified offers the flexibility to use separate or shared storage pools
between file and block access, in both cases internally using separate volumes for file
access versus block access.
Flexible usage and moving of storage capacity according to changing needs.
An overview of the different interfaces is shown in Figure 6-1.
Figure 6-1 Storwize V7000 Unified: Unified storage for both file and block access
6.1.2 Multi-user file sharing with centralized snapshots and backup
In many cases, there is a benefit of having a centralized storage solution for multiple users
working together, for example, in a project workgroup. Although every user has their own
home directory and data to work with, there is also a need to share files between the
members of the group and this can be set up with the appropriate share and access control
structures. In addition, it is more efficient to handle requirements such as data protection and
space management at a workgroup level rather than at an individual user level.
The Storwize V7000 Unified has the appropriate functions to enable centralized management
and protection and individual data access. It also provides enhanced scalability in a single
namespace compared to traditional single NAS filer solutions. This is shown in Figure 6-2.
Figure 6-2 Storwize V7000 Unified centralized management for multiple users sharing files
Multiple users and groups store and share files on a Storwize V7000 Unified:
Multi-user access to a single file is possible by using sophisticated General Parallel File
System (GPFS) locking mechanisms
File sharing between Windows and UNIX client environments is possible
Storwize V7000 Unified allows you to set quotas for individual users, groups, or at a share
level:
Providing granular space management as needed
Including warning levels, soft quota, and hard limits, hard quota
Storwize V7000 Unified allows for centralized snapshots, for example, for important data
of the entire workgroup:
The administrator uses general snapshot rules such as scope, frequency, and levels of
retention, which are tailored to the needs of, for example, the workgroup
It removes the need for every user to take care of their own data
It provides easy recovery of multiple file versions
A centralized backup provides data protection for the entire system, or at a share level,
which means:
Individual users do not have to define and take care of this themselves
More efficient resource usage and scheduling
File replication to a second Storwize V7000 Unified system for disaster protection:
Works at the file system level and protects the entire file system, including all file shares
6.1.3 Availability and data protection
The Storwize V7000 Unified has multiple built-in options for availability and data protection of
important data:
Clustered file modules provide high availability and redundancy against failures by using a
GPFS based clustered file system to store your data
Clustered file modules provide load balancing for file access from multiple clients
simultaneously
The Storwize V7000 Unified provides a highly available storage back-end
Data can and should be protected by using backup to tape
Disaster protection can be added by using asynchronous replication of files to a distant
site
Some of these options are illustrated in Figure 6-3.
Figure 6-3 Storwize V7000 Unified availability and data protection
6.1.4 Information life-cycle management (ILM), hierarchical storage
management (HSM), and archiving solution
It is often a requirement to manage data placement over the lifetime of a file, and to place
or move it to the appropriate storage tier for a cost-efficient solution, all preferably in an
automated fashion with minimal administrative management effort (and related costs). At the
same time, there might be legal requirements to keep certain types of files for compliance, for
an extended period of time.
The Storwize V7000 Unified is well suited to these requirements with its powerful built-in
features, and through integration with Tivoli Storage Manager for Space Management.
Policy-based information lifecycle management (ILM) and hierarchical storage management
(HSM) functionality provide a solution to these requirements, as shown in Figure 6-4.
Figure 6-4 Storwize V7000 Unified as ILM, HSM, and archiving solution
ILM and HSM provide the following abilities:
Archive applications and users store data in file shares on the Storwize V7000 Unified.
You can define policies for automated data placement, migration, and deletion by using:
A powerful Structured Query Language-like (SQL-like) policy language that is built in to
define these individual, tailored policies as required.
Placement policies define the initial placement of a file when it is first created.
Migration policies are used to move data to its appropriate storage tier over its entire
lifetime.
Static data can be kept for an extended period, for example, as defined by legal
requirements and deleted automatically.
HSM can automatically migrate rarely used files (according to defined criteria) to tape:
This process is handled by the IBM Tivoli Storage Manager HSM software.
Both migration to external tape and recall of the data on request are fully transparent.
A stub file is left in the file system where the original data was located. If there is a
request to access that data again, a transparent file recall moves the data back into the
file system.
6.2 Storwize V7000 Unified and SONAS
To describe the differences from the Storwize V7000 Unified, a better understanding of the
SONAS implementation is needed. Therefore, this section starts with a short introduction to
the SONAS solution.
6.2.1 SONAS brief overview
SONAS is a scale-out NAS implementation that is built with a focus on scalability. There is
enormous room for growth before the architectural limits are reached, and it still provides a
single namespace across the entire configuration. It is a GPFS two-tier implementation (see
Chapter 7, IBM General Parallel File System on page 65):
The nodes handling client I/O are known as interface nodes.
The nodes providing the Network Shared Disks (NSDs) and handling the back-end
storage tasks are called storage nodes.
This process is illustrated in Figure 6-5.
Figure 6-5 SONAS Overview: Two-tier architecture and independent scalability
The scalable high speed connectivity between interface nodes and storage nodes is
implemented via an internal InfiniBand network.
At the time of writing, the latest SONAS version is 1.4.2. For detailed information on the latest
version, see the SONAS Information Center Website:
http://pic.dhe.ibm.com/infocenter/sonasic/sonas1ic/index.jsp
SONAS release 1.4.2 supports up to the following configuration:
30 interface nodes
Each interface node can provide from two to eight Ethernet connections
30 storage pods
Providing up to 7200 disk drives in the storage back-end
With 3 TB hard disk drives (HDDs), this results in a 21.6 PB storage capacity
SONAS uses a centralized management concept with multiple distributed Management Node
roles. This configuration provides single access by using either a Graphical User Interface
(GUI) or Command Line Interface (CLI) to the SONAS cluster.
In general, SONAS and Storwize V7000 Unified support the same software features.
However, there are differences, such as the ones listed in 6.2.2, Implementation differences
between Storwize V7000 Unified and SONAS. Therefore, be sure to check both products'
support pages as the official references:
Support portal for SONAS:
http://www.ibm.com/support/entry/portal/Overview/Hardware/System_Storage/Networ
k_Attached_Storage_%28NAS%29/SONAS/Scale_Out_Network_Attached_Storage
Support portal for Storwize V7000 Unified:
http://www.ibm.com/storage/support/storwize/v7000/unified
6.2.2 Implementation differences between Storwize V7000 Unified and SONAS
Although both products use the same NAS software stack, there are differences because of
the different hardware implementations, scalability, and supported software features as well.
The following list reflects differences at current software release for both products and might
be subject to change in future releases:
GPFS one-tier architecture in Storwize V7000 Unified and two-tier architecture in SONAS
SONAS has dedicated, different servers, as interface nodes and storage nodes, while
Storwize V7000 Unified uses a pair of file modules
Hardware scalability is limited in Storwize V7000 Unified, for example, no independent
scalability of resources for interface nodes and storage nodes in Storwize V7000 Unified:
There is a fixed number of two file modules in the Storwize V7000 Unified
Storage capacity limits in Storwize V7000 Unified: Internally, one V7000 Control
Enclosure and up to nine V7000 Expansion Enclosures (up to 240 disk drives), plus
support for external virtualized SAN-attached storage
Local authentication is only available on Storwize V7000 Unified. For more information,
see Chapter 11, Implementation on page 139.
Real-time Compression is only available on Storwize V7000 Unified. For more information,
see Chapter 16, Real-time Compression in the IBM Storwize V7000 Unified on
page 287.
Chapter 7. IBM General Parallel File System
In this chapter, we describe the clustered file system that is built into the Storwize V7000
Unified as one of its foundations: the IBM General Parallel File System (GPFS). The content
is focused on GPFS itself and its features. In terms of the implementation inside the Storwize
V7000 Unified, there might be some differences from the capabilities that GPFS natively
provides. Check the Storwize V7000 Unified Configuration Limits and Restrictions page accordingly,
which can be found at the following link:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004227
Asynchronous replication for files, as implemented in Storwize V7000 Unified, is not a generic
GPFS function. Therefore, this feature is not described here. Refer to Chapter 8, Copy
services overview on page 75.
7.1 Overview
GPFS has been available from IBM for a long time, dating back to the first releases in the
mid-1990s. It has its roots in parallel computing requirements, where a scalable, highly
available file system was required. GPFS has parallelism for serving data to (many) clients,
as well as availability and scalability, built in by design. Therefore, it has been used for many
years in parallel computing, high-performance computing (HPC), Digital Media solutions, and
Smart Analytics, to name a few.
It also inherited functionality of other projects over time, such as the IBM SAN file system.
GPFS is part of many other IBM solution offerings as well, such as IBM Information Archive
and Smart Analytics solutions.
7.2 GPFS technical concepts and architecture
The concept of GPFS is a clustered file system built on a grid parallel architecture. Parallelism
for both host access and data transfers to storage enables its scalability and performance.
The storage entities that GPFS knows are called Network Shared Disks (NSDs), as shown in
Figure 7-1 on page 67. GPFS works with the concept of separate NSD servers and NSD
clients in a two-tier architecture. However, if only one tier is used, both NSD server and client
roles are in the same machine.
The Storwize V7000 Unified implementation uses a one-tier architecture. In contrast, the SONAS
implementation uses a two-tier architecture (as also shown in Figure 7-1).
Figure 7-1 GPFS: Examples for One Tier (Storwize V7000 Unified) and Two Tier Architecture (SONAS)
GPFS stripes all the data written across all available NSDs using the defined file system block
size as the stripe size. This way, GPFS ensures that the maximum number of NSDs is
contributing to a given I/O, avoiding performance hot spots:
Supported file system block sizes in Storwize V7000 Unified are 256 KB, 1 MB, and 4 MB.
The minimum I/O size that GPFS uses is called a sub block or fragment, which is 1/32 of the
file system block size because GPFS works with 32 sub blocks per file system block internally
(for example, 32 KB for a 1 MB block size).
Sub blocks are introduced to combine small files, or parts of files, into a single block to
avoid wasted capacity.
For virtualized NSDs presented to the GPFS layer (like NSDs provided by the Storwize V7000
Unified), it is therefore beneficial to optimize the NSD I/O characteristics according to the
GPFS I/O pattern.
From a storage subsystem perspective, we want to get full stride writes to its underlying
RAID arrays. This means that parity (for RAID 5 and RAID 6) can be calculated immediately,
without first having to read from disk, therefore avoiding the RAID write penalty. In
conjunction with GPFS, this leads to a change of the RAID presets that are built into the
Storwize V7000 Unified compared to V7000 stand-alone. The presets for the RAID arrays in
Storwize V7000 Unified aim to configure eight data disks plus the required parity disks into
one RAID array:
For RAID 5, it is an 8+P array
For RAID 6, this is an 8+P+Q array
These new presets are reflected in the sizing tools like IBM Capacity Magic and IBM Disk
Magic as well.
7.2.1 Split brain situations and GPFS
A GPFS cluster normally requires at least three cluster nodes. An uneven number is selected
on purpose by most clustering solutions to still have a quorum of cluster nodes (or other
voting members) available if one of them fails, in order to avoid a split brain situation of the
cluster. A typical split brain scenario means that there are two parts of the cluster still alive,
each one with half of the remaining cluster nodes (so that none of the two parts has a
quorum), but they cannot communicate any longer. Because of the loss of communication
between the two parts, the parts themselves cannot distinguish which one has the most
current information and should continue to operate. One solution for this is to have a tie
breaker in the configuration, and the SAN Volume Controller (SVC)/Storwize V7000 clustering
implementation uses a quorum disk for that purpose.
In the GPFS cluster implementation in Storwize V7000 Unified, it is not possible to have three
cluster nodes because there are only two file modules in the configuration. To help with a split
brain situation when the file modules have lost communication, the Storwize V7000 storage
system in the back-end acts as the tie breaker. If they lose communication, both file modules,
as cluster members, communicate with the Storwize V7000 and provide their status.
Storwize V7000 then determines which file module should continue to operate (survive) as
the GPFS cluster and sends an expelmember command to the other file module, which then
has to leave the cluster. In addition, the V7000 removes the volume mappings of the expelled
file module to guarantee data integrity for the remaining GPFS cluster.
7.2.2 GPFS file system pools and Storwize V7000 storage pools
GPFS has the internal concept of pools (in GPFS internal terminology they are called storage
pools too) and to distinguish them from the pools used in the Storwize V7000 storage layer,
we refer to them as file system pools in this book. As described in Chapter 3, Architecture
and functions on page 21, these GPFS file system pools are mapped to V7000 storage pools
within a Storwize V7000 Unified system. An overview of the different internal structures that
are involved is shown in Figure 7-2.
Figure 7-2 File system pool and storage pool concept in Storwize V7000 Unified
Regarding the mapping between file system pools and storage pools, there is one exception:
GPFS synchronous internal replication uses a one to two mapping (one file system pool to
two storage pools) as described in 7.2.6, GPFS synchronous internal replication on
page 72. Normally, there is a one to one mapping between file system pools and storage
pools, and in either case this mapping is typically established at file system creation.
For a standard file system that is not providing information lifecycle management (ILM)
functionality, there is one file system pool, which is mapped to one storage pool.
For a file system providing ILM functionality, there are multiple file system pools, for example,
system gold, silver, bronze, where each one is mapped to one storage pool, and where the
storage pools have descending tiers. That is, storage classes descend from fast/expensive to
slower/cheaper storage, from the tier mapped to the system gold file system pool to the one
mapped to the bronze file system pool.
7.2.3 File system pools in GPFS
The following standards exist in file system pools in GPFS:
There is a maximum of eight internal pools per file system.
One pool (default) is always required as the system pool.
Seven optional pools are available as user pools.
Note: To ensure there is no confusion between these two logical entities within the
Storwize V7000 Unified, we refer to the GPFS pool entity as the file system pool and to the
Storwize V7000 pool as the storage pool in this book.

For configuring ILM, the file system is created with multiple file system pools. That means,
beside the one default file system pool called system (which exists for every GPFS file
system), there are as many more file system pools of descending tiers that are defined
(and mapped to storage pools with corresponding descending tiers) as extra storage tiers.
An example of a typical hierarchy of pools of descending storage tiers/classes is gold,
silver, bronze.
In addition, an external file system pool is possible. This pool is used for offloading data as
part of a hierarchical storage management (HSM) solution managed by IBM Tivoli Storage
Manager for Space Management as the storage manager application for HSM.
GPFS itself provides a clustered file system layer that is able to support up to 256 file systems
in a native GPFS or SONAS implementation. The supported limit in a Storwize V7000 Unified
environment is 64 file systems currently.
In addition to the file system pools (with NSDs) and storage pool (with volumes) concept
shown in Figure 7-2 on page 69, there are more configuration layers involved to establish and
manage access to the data stored inside the GPFS file system.
The essential step is the definition of shares (exports) to be able to access the data from file
clients. File sets and directories provide more granularity for the management of that data
access (file sets also enable quota management and independent file sets enable snapshots
in addition), as shown in Figure 7-3.
Figure 7-3 Layers involved in managing data access in Storwize V7000 Unified
Note: The maximum number of file systems supported in the Storwize V7000 Unified
implementation is different, currently up to 64 file systems. For the most current information
for not only the file system limit, but for all maxima, see the following web page:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004227
7.2.4 GPFS file sets
A file set is a subtree of a file system namespace that in many respects behaves like a
separate file system. File sets provide the ability to partition a file system to allow
administrative operations at a finer granularity than the entire file system.
File sets in many aspects behave like a separate file system and are available in two types:
dependent file sets and independent file sets:
An independent file set has a separate inode space but shares physical storage with the
remainder of the file system.
A dependent file set shares the inode space and snapshot capability of the containing
independent file set.
When the file system is first created, only one file set exists, which is called the root file set.
The root file set contains the root directory and system files such as quota files.
File set details and differences:
The default is one root file set per file system.
Quotas and policies are supported on both dependent file sets and independent file sets.
Snapshots are only supported for independent file sets (and at the level of the entire file
system itself) because of the following reasons:
Only independent file sets provide their own inode space. Dependent file sets use the
inodes of the file system.
The GPFS limit for snapshots is 256 per file system. However, 32 are reserved for
internal use, for example, for backup and async replication. Therefore, a maximum of
224 snapshots per file system are available to the user.
A maximum of 256 snapshots are available per independent file set.
A maximum of 1000 independent file sets and 3000 dependent file sets is supported per
file system.
For more information about the differences between dependent and independent file sets,
see this web page:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.s
torwize.v7000.unified.142.doc%2Fmng_filesets_topic_welcome.html
7.2.5 GPFS parallel access and byte-range locking
GPFS uses a distributed cluster manager and various roles distributed across the cluster
nodes, both highly scalable and highly available, with transparent adaptive and self-healing
capabilities. For that purpose, all nodes or a redundant subset of nodes have equal roles and
run the same daemons. If one node fails, other available nodes can then take over its role.
GPFS allows parallel access from different client systems to a single file managed by
sophisticated locking mechanisms operating GPFS cluster wide on the level of byte-range
within a file. There is also support for opportunistic locking (oplocks), which allows client-side
caching of data. This caching can provide a performance benefit.
7.2.6 GPFS synchronous internal replication
GPFS provides optional, additional redundancy by using a synchronous, file system internal
replication, which is based on grouping of NSDs into independent failure groups:
A failure group is a logical group of NSDs with the same dependencies, for example,
failure boundaries:
Typically two independent storage systems, providing NSDs to protect the GPFS data
against the failure of an entire storage system.
Within the implementation in Storwize V7000 Unified, there is one Storwize V7000 in
the back-end, which in itself provides redundancy by design and protects against any
single failure. Also, virtualized external SAN-attached storage systems are managed
and presented by the Storwize V7000 and can also rely on the failure boundary of an
entire Storwize V7000 storage system as well.
Setting up the GPFS synchronous internal replication requires the definition of two
independent storage pools (as failure groups) per file system pool:
Resulting in a two to one mapping between two storage pools and the file system pool
Metadata is always stored in the file system pool
The replication process is fully synchronous mirroring over two sets of NSDs from the two
independent storage pools
The configurable options are to replicate metadata, data, or both between two
independent storage pools (failure groups)
7.2.7 Active Cloud Engine
GPFS provides a fast and scalable scan engine that is able to scan through all files or
subdirectories quickly, and that is used for multiple purposes. For example, various purposes
include identifying files for antivirus scanning, for ILM, and for changed files for incremental
backups. Because GPFS is designed for scalability and can grow to a very large file system
with a single name space, its scan engine is designed to be scalable as well. This is a real
competitive advantage of GPFS compared to other large clustered file systems.
Important: For applications with critical data, all non-mirrored caching options in GPFS,
which include the following, should be disabled:
Caching on the client side: Controlled via opportunistic locking (oplocks) option
Caching in file module cache or local interface node cache: Controlled via syncio option
This can be done by using the CLI command chexport for the relevant shares with the
parameters oplocks=no and syncio=yes.
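For example, for a share named projects, a command along the following lines disables both caching options. The share name is an example, and the exact way in which the CIFS options are passed to chexport should be verified against the CLI reference for your code level:

   chexport projects --cifs "oplocks=no,syncio=yes"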
Note: GPFS synchronous internal replication of metadata, data, or both, is fully supported
and configurable on Storwize V7000 Unified. The goal is to perform synchronous mirroring
between independent failure groups on two separate storage systems, which do not have
to provide redundancy. In the Storwize V7000 Unified implementation, there is one highly
available main storage subsystem with redundancy by design. This system is the Storwize
V7000 itself, managing all the disk storage in the back-end (Storwize V7000 internal disks
and external SAN-attached storage systems virtualized by the Storwize V7000).
In the GPFS implementations in both SONAS and Storwize V7000 Unified, this engine is also
called the IBM Active Cloud Engine (ACE).
The GPFS scan engine is also used to apply user-defined policies to the files stored in GPFS,
building the foundation for the ILM of files within GPFS:
Enabling ILM, automated migrations of files based on user-defined criteria
Uses a subset of Structured Query Language (SQL) as policy language
User can specify rules grouped in different policies using this policy language
Can be scripted
Policies and rules for file placement (creation), migration, and deletion
Rules within the policy are evaluated first to last, and the first one to match is executed and
determines the handling of the relevant files:
The recommendation is to add a default rule in every case, which gets applied when no
other rule matches the criteria (a simple example follows this list).
If no default rule exists and no other rule matches the defined criteria, no action is
taken unless defined otherwise.
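The following fragment is a hypothetical sketch written in this SQL-like policy language. The pool names and the 30-day criterion are illustrative assumptions, not values taken from this book:

   /* Move files that have not been accessed for 30 days to a cheaper tier */
   RULE 'age-out' MIGRATE FROM POOL 'system' THRESHOLD(80,60) TO POOL 'silver'
        WHERE (CURRENT_TIMESTAMP - ACCESS_TIME) > INTERVAL '30' DAYS

   /* Default placement rule so that every new file matches at least one rule */
   RULE 'default' SET POOL 'system'

In this sketch, the THRESHOLD(80,60) clause starts the migration when the source pool is 80% full and stops when its occupancy drops below 60%.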
For more information, see the GPFS 3.5: Advanced Administration Guide, SC23-5182-05:
http://www.ibm.com/support/docview.wss?uid=pub1sc23518205
7.2.8 GPFS and hierarchical storage management (HSM)
GPFS has support for Tivoli Storage Manager HSM built-in, which provides a way to offload
data from the file system to external storage with Tivoli Storage Manager for Space
Management. Internally, this is handled by a special file system pool that has been defined,
called the external pool. Based on the defined criteria, the GPFS policy engine identifies the
files to be moved. GPFS then moves these files into this external storage pool from which the
Tivoli Storage Manager HSM server fetches the data and stores it on a Tivoli Storage
Manager HSM supported storage device, usually a tape device.
While the data itself is offloaded, which saves space inside the file system, a so-called stub
file is left inside the file system for every file. This stub file contains all the metadata
belonging to that file that needs to be read by the scan/policy engine. That means for
policy scans, the data itself can remain on the external storage outside of the GPFS file
system because all metadata information for this file is still available via the stub file. But if a
user wants to access the file, or Tivoli Storage Manager wants to back up the file, or the
antivirus scanner wants to scan the file, it is recalled and is loaded back into the file system by
HSM. This process is all done transparently to the user.
7.2.9 GPFS snapshots
GPFS also offers a space-efficient snapshot technology, which is described in Chapter 8,
Copy services overview on page 75. This list is a summary of the main features of GPFS
snapshots as implemented in Storwize V7000 Unified:
Snapshots use pointers to data blocks based on redirect-on-write. This means a new
write coming in and even an update to an existing data block is written to a new data block
because the old data block is still contained in the snapshot. This method provides
efficient use of space because it does not use space/capacity when started. Space is only
used when data changes and new blocks are written based on redirect-on-write.
Snapshots are available for file systems and independent file sets.
Allowing a maximum of 256 snapshots of the entire file system plus 256 for independent
file sets underneath.
Reserving 32 of these snapshots for internal use, hence 224 are available for client use.
Snapshot rules allow the scheduling of snapshots if required, and retention rules for
snapshots can be defined.
The snapshot manager routine runs once per minute executing snapshot rules in
sequential order of definition.
7.2.10 GPFS quota management
As mentioned previously, you can specify quotas for file system space management:
Quotas can be set at a file set, user, or group level.
Soft quotas, a grace period, and hard quotas are supported:
When the soft quota limit is reached, a warning is sent but write access is still possible
until either the grace period expires or the hard quota limit is reached, whichever
comes first. Writing of data is then inhibited until space is freed up, for example, by
deleting files.
Default grace period is seven days.
When the hard quota limit is reached while updating a file, writing and closing the
currently open file is still possible to protect the data.
More detailed information about GPFS, including different purpose-built configurations, can
be found in the following IBM Redbooks publications:
GPFS: A Parallel File System, SG24-5165
Implementing the IBM General Parallel File System (GPFS) in a Cross Platform
Environment, SG24-7844
Chapter 8. Copy services overview
This chapter provides an overview of the Storwize V7000 Unified storage copy functions
provided by the Storwize V7000 storage subsystem and the file level copy functions provided
by the file modules. For an in-depth discussion about storage copy functions, see the
IBM Redbooks publication, Implementing the IBM Storwize V7000 V6.3, SG24-7938.
8.1 Storage copy services of the Storwize V7000 Unified
Storwize V7000 Unified provides storage in the form of logical volumes to the internal file
modules and to external storage clients. In addition, it provides the same logical
volume-based copy services as the stand-alone Storwize V7000 provides. These services
are FlashCopy, Metro Mirror, Global Mirror, and Global Mirror with Change Volumes.
8.1.1 FlashCopy for creating point-in-time copies of volumes
FlashCopy is the point-in-time copy capability of the Storwize V7000. It is used to create
instant, complete, and consistent copy from a source volume to a target volume. Often this
functionality is called time-zero copy, point-in-time copy, or snapshot copy.
Creating a copy without snapshot functionality
Without a function such as FlashCopy, to achieve a consistent copy of data for a specific
point-in-time, the I/O of the application that manipulates the data has to be quiesced for the
entire time the physical copy process takes place. In such a case, the time that the copy
process requires is defined by the amount of data to be copied and the capabilities of the
infrastructure to copy the data. Only after the copy process is finished can the application that
manipulates the data start to access the volume that was involved. Only then is it ensured that
the data on the copy target is self-consistent and identical to the data on the source for a
specified point in time.
Creating copies with FlashCopy
With FlashCopy, this process is different. FlashCopy enables the creation of a copy of a
source volume to a target volume in a very short time. Thus, the application has to be
prevented from changing the data only for a short period. For the FlashCopy function to be
executed, a FlashCopy mapping must be created; two ordinary volumes get mapped together
for the creation of a point-in-time copy.
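As an illustration (the volume and mapping names are examples, and the exact syntax can differ between code levels), a mapping is typically defined and started on the Storwize V7000 CLI with commands similar to these:

   mkfcmap -source vol01 -target vol01_fc -copyrate 0   # define the mapping; a copy rate of 0 means no background copy
   startfcmap -prep fcmap0                              # flush the cache and start the point-in-time copy

The target volume must already exist and must be the same size as the source volume before the mapping can be created.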
After the FlashCopy process is started on the mapping, the target volume represents the
contents of the source volume for the point in time when the FlashCopy was started. The
target volume does not yet contain all the data of the source volume physically. It can be seen
as a virtual copy, created by using bitmaps.
After FlashCopy has started, but before it has finished physically copying the data to the
target, the copy can be accessed in read/write mode. From that point on, data that has to be
changed on the source volume (by the applications that manipulate the source volume) is
written to the target volume beforehand. This ensures that the representation of the data on
the target volume for that point in time is valid.
It is also possible to copy all the data of the source volume to the target volume through a
background copy process. The target volume, although not fully copied yet, represents a
clone of the source volume when the relationship between the source and the target exists.
When all data has been copied to the target volume, the relationship between the volumes
can be removed, and both volumes become normal. The former target volume is now a
physical clone of the source volume for the point in time that the FlashCopy was started.
To create consistent copies of data that span multiple volumes, consistency groups can be
used. Consistency groups are sets of FlashCopy mappings, which get copied at the same
point in time, thus creating a consistent snapshot of the data across all volumes.
FlashCopy is very flexible. It is possible for a volume to be a source volume in one FlashCopy
mapping and for the volume to be the target volume in another FlashCopy mapping. This is
called Cascaded FlashCopy. Also, one volume can have the role as source volume in multiple
FlashCopy mappings with different target volumes. This is called Multiple Target FlashCopy.
Another feature of FlashCopy is the capability of incrementally updating a fully copied target
volume with only the changes that have been made to the source volume of the same
mapping. This is called Incremental FlashCopy. Another feature of FlashCopy is the
possibility to reverse the direction of the mapping, thus making it possible to restore a source
volume from a target volume while retaining the original target volume. This is called Reverse
FlashCopy. The flexibility of FlashCopy can be enhanced further with the use of
thin-provisioned volumes; this is called Space Efficient FlashCopy.
FlashCopy usage cases
FlashCopy has many uses. One obvious use is for backing up a consistent set of data without
requiring a long backup window. The application manipulating the data must ensure that the
data is consistent, and the application must be suspended for a short period. When the copy
is started, the backup application can access the target while the applications can resume
manipulating the live data. No full volume copy is needed.
One very useful case for FlashCopy is to create a full consistent copy of production data for a
given point in time at a remote location. In this case, we combine Metro Mirror/Global Mirror
and FlashCopy, and we take a FlashCopy from the Metro Mirror/Global Mirror secondary
volumes. We can take a consistent backup of our production data on the second location, or
create a clone of the data so it is available if anything should happen to our production data.
FlashCopy can also be used as a safety net for operations that make copies of data
inconsistent for longer-than-normal periods of time. For example, if Global Mirror was to get
out of synchronization, the auxiliary volume is still consistent in itself, but the process of
resynchronization renders the auxiliary volume inconsistent until it has finished. To obtain a
consistent copy of the data of the auxiliary volume while it is being synchronized, a FlashCopy
of this volume can be created.
Another use for FlashCopy is to create clones of data for application development testing or
for application integration testing. FlashCopy is also useful when a set of data has to be used
for different purposes. For example, a FlashCopy database can be used for data mining.
FlashCopy presets
The IBM Storwize V7000 storage subsystem provides three FlashCopy presets, named
Snapshot, Clone, and Backup, to simplify the more common FlashCopy operations, as shown
in Table 8-1.
Table 8-1 FlashCopy presets

Snapshot: Creates a point-in-time view of the production data. The snapshot is not intended
to be an independent copy, but is used to maintain a view of the production data at the time
the snapshot is created. This preset automatically creates a thin-provisioned target volume
with 0% of the capacity allocated at the time of creation. The preset uses a FlashCopy
mapping with 0% background copy so that only data written to the source or target is copied
to the target volume.

Clone: Creates an exact replica of the volume, which can be changed without affecting the
original volume. After the copy operation completes, the mapping that was created by the
preset is automatically deleted. This preset automatically creates a volume with the same
properties as the source volume and creates a FlashCopy mapping with a background copy
rate of 50. The FlashCopy mapping is configured to automatically delete itself when the
FlashCopy mapping reaches 100% completion.

Backup: Creates a point-in-time replica of the production data. After the copy completes, the
backup view can be refreshed from the production data, with minimal copying of data from
the production volume to the backup volume. This preset automatically creates a volume with
the same properties as the source volume. The preset creates an incremental FlashCopy
mapping with a background copy rate of 50.
8.1.2 Metro Mirror and Global Mirror for remote copy of volumes
Metro Mirror and Global Mirror are IBM branded terms for the functions Synchronous Remote
Copy (Metro Mirror) and Asynchronous Remote Copy (Global Mirror). We use the term
Remote Copy to refer to both functions where the text applies to each equally. These functions
are used to maintain a copy of logical volumes held by one Storwize V7000, Storwize V7000
Unified, or SAN Volume Controller (SVC) in another Storwize V7000, Storwize V7000 Unified,
or SVC at a remote location. This copy can be either synchronous or asynchronous. You can
utilize Global Mirror with Change Volumes functionality to support the use of low-bandwidth
links (this functionality was introduced in SVC version 6.3.0).
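To illustrate the difference (the volume and system names are examples, a partnership between the two systems must already exist, and the syntax can vary by code level), relationships are created on the CLI with commands similar to the following, where the -global flag selects Global Mirror instead of Metro Mirror:

   mkrcrelationship -master vol01 -aux vol01_dr -cluster remote_system           # Metro Mirror (synchronous)
   mkrcrelationship -master vol02 -aux vol02_dr -cluster remote_system -global   # Global Mirror (asynchronous)
   startrcrelationship rcrel0                                                    # begin copying from the master to the auxiliary volume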
Metro Mirror
Metro Mirror works by establishing a synchronous copy relationship between two volumes of
equal size. This relationship can be an intracluster relationship established between two
nodes within the same I/O group of one cluster, or an intercluster relationship, which means a
relationship between two clusters that are separated by distance. Those relationships can be
stand-alone or in a consistency group.
Metro Mirror functionality ensures that updates are committed to both the primary and
secondary volumes before sending confirmation of the completion to the server. This ensures
that the secondary volume is synchronized with the primary volume. The secondary volume is
in a read-only state, and manual intervention is required to change that access to read/write
state. The server administrator also has to mount the secondary disk so that the application
can start to use that volume.
Global Mirror
Global Mirror copy relationships work in a similar way that Metro Mirror does but by
establishing an asynchronous copy relationship between two volumes of equal size. This
relationship is mostly intended for intercluster relationships over long distances.
With Global Mirror, a confirmation is sent to the server before it has received good completion
at the secondary volume. When a write is sent to a primary volume, it is assigned a sequence
number. Mirror writes sent to the secondary volume are committed in sequential number
order. If a write is issued while another write is outstanding, it might be given the same
sequence number.
This functionality operates to maintain a consistent image at the secondary volume at all
times. It identifies sets of I/Os that are active concurrently at the primary volume, assigns
an order to those sets, and applies the sets of I/Os in that order at the secondary. If a
further host write is received for the same block while the secondary write is still active,
the new write at the secondary is delayed until the previous write completes, even though the
primary write might have already completed.
Global Mirror with Change Volumes
Global Mirror with Change Volumes is an added piece of functionality for Global Mirror that is
designed to help maintain consistency over lower-quality network links.
Change Volumes use the FlashCopy functionality, but cannot be manipulated as FlashCopy
volumes, because they are for this special purpose only. Change Volumes provide the ability to
replicate point-in-time images on a cycling period (default 300 seconds). This means that only
the state of the data at the point in time the image was taken needs to be replicated, instead
of every update that occurred during the period.
With Change Volumes, a FlashCopy mapping exists between the primary volume and the primary
Change Volume. The mapping is updated on the cycling period (60 seconds to one day). The
primary Change Volume is then replicated to the secondary Global Mirror volume at the target
site, which is in turn captured in another Change Volume on the target site. This provides an
always consistent image at the target site and protects the data from being inconsistent
during resynchronization.
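The following CLI sketch outlines how such a relationship might be set up. It is illustrative
only: the volume and relationship names are placeholders, the change volumes must already
exist, and options such as -cyclingmode and -cycleperiodseconds should be verified against the
information center for your code level.

# Create an asynchronous (Global Mirror) relationship
svctask mkrcrelationship -master app_vol -aux app_vol_dr -cluster remote_sys -global -name app_gm_rel
# Attach a change volume to the primary volume (the auxiliary change volume is attached on the remote system)
svctask chrcrelationship -masterchange app_vol_chg app_gm_rel
# Enable cycling mode and set the cycling period (default 300 seconds)
svctask chrcrelationship -cyclingmode multi app_gm_rel
svctask chrcrelationship -cycleperiodseconds 300 app_gm_rel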
Copy services interoperability between SAN Volume Controller, Storwize
V7000, and Storwize V7000 Unified
With version 6.3.0, a concept called layers was introduced to the Storwize V7000 and Storwize
V7000 Unified. Layers determine how the Storwize V7000 and Storwize V7000 Unified interact
with the SVC. Currently there are two layers, replication and storage. All devices must be at
least at the 6.3.0 code level, and the Storwize V7000 and Storwize V7000 Unified must be set
to the replication layer when in a copy relationship with the SVC.
Use the replication layer when you want to use the Storwize V7000 or the Storwize V7000
Unified with one or more SAN Volume Controllers as a remote copy partner. The storage layer is
the default mode of operation for the Storwize V7000 and is used when you want the Storwize
V7000 to present storage to an SVC.
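As a hedged example, the layer attribute can be displayed and changed from the CLI of the
Storwize V7000 or Storwize V7000 Unified; the layer typically cannot be changed while
partnerships or remote copy relationships exist, and the commands should be verified for your
code level.

# Display the system attributes, including the current layer
svcinfo lssystem
# Set the system to the replication layer before creating a partnership with an SVC
svctask chsystem -layer replication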
8.2 File system level copy services of the Storwize V7000
Unified file modules
This section provides an overview of how the Storwize V7000 Unified file modules implement
copy services. We describe the two main features, snapshots and asynchronous replication.
8.2.1 Snapshots of file systems and file sets
The Storwize V7000 Unified implements space-efficient snapshots. Snapshots enable online
backups to be maintained, providing near instantaneous access to previous versions of data
without requiring complete, separate copies or resorting to offline backups.
In the current version, the Storwize V7000 Unified can offer 256 snapshots per file system
and 256 per file set. The snapshots can be scheduled or performed by authorized users or by
the Storwize V7000 Unified administrator. SONAS snapshot technology makes efficient use
of storage by storing only block-level changes between each successive snapshot. Only the
changes made to the original file system use extra physical storage, thus reducing physical
space requirements and maximizing recoverability.
Snapshots also support integration with the Microsoft Volume Shadow Copy Service (VSS). VSS
allows you to display older versions of a file or folder through Microsoft Windows Explorer.
Snapshots are exported to Windows Server Message Block (SMB) clients by the VSS application
programming interface (API). This means that snapshot data can be accessed and copied back
through the Previous Versions dialog in Microsoft Windows Explorer.
8.2.2 Asynchronous replication
Another important feature of the Storwize V7000 Unified file module software is asynchronous
replication. In this section, we provide an overview of how asynchronous replication provides
a bandwidth-friendly replication mechanism.
Asynchronous replication is available for replicating incremental changes at the file system
level to another site. Asynchronous replication is done by using an IBM-enhanced and
IBM-supported version of the open source tool rsync. The enhancements include the ability for
more than one file module to work on the rsync transfer of the files in parallel.
The asynchronous replication is unidirectional; changes on the target site are not replicated
back. Asynchronous replication can be scheduled, and the minimum practical interval varies
with the amount of data and the number of files to be sent.
The asynchronous replication is space efficient, because it transfers only the changed blocks
of a file, not the entire file again. Resource efficiency and high performance are achieved by
using both file modules in parallel to transfer the data.
Asynchronous replication is useful for disaster tolerance and disaster recovery, in other
words, replicating incremental changes to a disaster recovery remote site. This is important
when the raw amount of data for backup and restore is so large that a tape restore at a
disaster recovery site might be infeasible from a time-to-restore standpoint.
The first step in asynchronous replication is to run a central policy engine scan; the high
performance scan engine is used for this scan. As part of the asynchronous replication, an
internal snapshot is made of both the source file system and the target file system. The next
step is to make a mathematical hash of the source and target snapshots and compare them. The
final step is to use the parallel data transfer capabilities by having both file modules
participate in the transfer of the changed blocks to the target remote file systems. The
internal snapshot at the source side ensures that the data being transmitted is consistent,
maintains integrity, and reflects a single point in time. The internal snapshot at the target
provides a fallback point-in-time capability if, for any reason, the drain of the changes from
source to target fails before it is complete.
The basic steps of Storwize V7000 Unified asynchronous replication are as follows:
- Take a snapshot of both the local and remote file systems. This ensures that you are
replicating a frozen, consistent state of the source file system.
- Collect a file path list with corresponding stat information by comparing the two snapshots
with a mathematical hash to identify changed blocks.
- Distribute the changed file list to a specified list of source interface nodes.
- Run a scheduled process that performs rsync operations on both file modules, for a given
file list, to the destination Storwize V7000 Unified. Rsync is a well-understood open source
utility that picks up the changed blocks on the source Storwize V7000 Unified file system,
streams those changes in parallel to the remote system, and writes them to the target Storwize
V7000 Unified file system.
- The snapshot at the remote Storwize V7000 Unified system ensures that a safety fallback
point is available if there is a failure in the drain of the new updates.
- When the drain is complete, the remote file system is ready for use.
- Both snapshots are automatically deleted after a successful replication run.
A simple diagram of asynchronous replication is shown in Figure 8-1.
Figure 8-1 Asynchronous replication
Asynchronous replication limitations
There are limitations that should be kept in mind when using the asynchronous replication
function:
- The asynchronous replication relationship is configured as a one-to-one relationship
between the source and target.
- The entire file system is replicated in asynchronous replication. Although you can specify
paths on the target system, you cannot specify paths on the source system.
- The source and target cannot be in the same system.
- Asynchronous replication processing on a file system can be impacted by the number of
migrated files within the file system. Asynchronous replication on a source file system
causes migrated files to be recalled and brought back into the source file system during
the asynchronous replication processing.
- File set information about the source system is not copied to the target system. The file
tree on the source is replicated to the target, but the fact that it is a file set is not
carried forward to the target system's file tree. File sets must be created and linked on the
target system before initial replication because a file set cannot be linked to an existing
folder.
- Quota information is also not carried forward to the target system's file tree. Quotas can
be set after initial replication as required, using quota settings from the source system.
- Active Directory (AD) only, and AD with NIS using Storwize V7000 Unified internal UID/GID
mapping, are not supported by asynchronous replication because the mapping tables in the
Storwize V7000 Unified system clustered trivial database (CTDB) are not transferred by
asynchronous replication. If asynchronous replication is used, the user ID mapping must be
external to the Storwize V7000 Unified system.
Considerations
For the first run of asynchronous replication, consider physically transporting the data to
the remote site and then letting replication take care of subsequent changes to the data.
Asynchronous replication is no faster than a simple copy operation. Ensure that adequate
bandwidth is available to finish replications on time.
There is no mechanism for throttling on asynchronous replication. GPFS balances the load
between asynchronous replication and other processes.
Source and target root paths that are passed as parameters must not contain a space, comma,
parenthesis, single or double quotation mark, the characters `, :, \, ?, !, or %, the escape
sequences \n, \r, or \t, or any other white space characters.
More detailed information about asynchronous replication is available in the IBM Storwize
V7000 Unified Information Center at the following website:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fmng_arepl_topic_welcome.html
Chapter 9. GUI and CLI
The primary interface for the Storwize V7000 Unified is the graphical user interface (GUI)
where all configuration and administration functions can be performed. All functions can also
be performed using the terminal-based command-line interface (CLI). A few specialized
commands are only available in the CLI, which might also be required during recovery if the
GUI is unavailable. Both methods are required for management of the cluster.
In this chapter, we demonstrate how to set up both methods of access, show how to use them,
and explain when each is appropriate.
9.1 Graphical user interface setup
Almost all of the IP addresses in the cluster have a web interface running behind them, but
each has a specific purpose.
9.1.1 Web server
Each node in the cluster has a web server running. What is presented by each of these web
servers depends on the functional status and configuration of the particular node at any given
time. All web connections use the HTTPS protocol. If a connection is attempted using HTTP,
then it is usually redirected.
Storage node canisters
Both storage node canisters in the control enclosure can be connected to on their service IP
addresses. This displays the Service Assistant (SA) panel for that node. This is a direct
connection to the node software and does not require that the cluster is active or
operational, only that the node has booted its operating system.
One of the nodes assumes the role of the config node when the cluster is active. That node
responds on the storage management IP address and presents the storage system management GUI.
Only one node does this, and there is only one such address.
File nodes
For management functions, one of the file nodes is the active management node. This node
presents the management IP address and the management GUI for the entire cluster.
Both file nodes can also be connected to with HTTPS on the other IP addresses assigned to
their interfaces. What is presented depends on the user configuration of the cluster.
The IP Report that is shown in Figure 9-1 can be accessed by using Settings → Network →
IP Report. It shows all the IPs being used to manage the file module nodes and control
enclosure nodes.
Figure 9-1 IP Report
9.1.2 Management GUI
The primary management interface for the Storwize V7000 Unified cluster is the management
IP address that is assigned to the file modules. This GUI combines all management functions
and can be used for both file and block storage management.
The storage system or control enclosure also has a management interface, which is the same as
the management GUI found on the stand-alone Storwize V7000. You can connect to it at any time,
but it provides management of the storage function only. Access to resources directly used by
the file modules is prohibited, but normal block configuration can be done. To avoid
confusion, it is suggested that you use only the full cluster GUI presented from the file
module, although there might be times during complex recovery when IBM Support asks you to
connect to this interface.
You will need to access the storage GUI during implementation to set passwords and test its
functionality.
9.1.3 Web browser and settings
To connect to the GUI, you need a workstation running an approved web browser. Generally,
any current browser is supported, but to see the current list of supported browsers go to the
Storwize V7000 Unified support website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S1004228
At the time of writing, Firefox 3.5 or higher and Internet Explorer (IE) 8.x or higher are listed as
supported.
To access the management GUI, you must ensure that your web browser is supported and
has the appropriate settings enabled. For browser settings, refer to the following web page:
http://ibm.biz/BdxFX2
9.1.4 Starting the browser connection
Start the browser application and enter the management IP address assigned to the file
modules. If you used http://<ip_address>, you are redirected to https://<ip_address>.
You are now warned that there is a security exception and you need to approve the exception
to continue. This step is normal for this type of HTTPS connection.
This now presents you with the logon page, as shown in Figure 9-2.
Figure 9-2 GUI logon
Enter your user name and password. If this is a new installation, the default is admin/admin.
Otherwise, you need to use the user ID and password that is assigned to you by your storage
administrator.
Note the box on this window labeled Low Graphics Mode. This option disables the animated
graphics on the management pages and provides a simplified graphics presentation. This is
useful if connecting remotely because it reduces traffic and improves response time. Some
users prefer to disable the animation by using this option.
With animation on, hover over the icons on the left side and the submenu choices are
presented. See Figure 9-3. Using the mouse, select the submenu to start the wanted page.
Figure 9-3 GUI animation
Alternatively, if you have disabled animation, first click the icon to show that section, then
using the pull-down icons at the top of the page, select the subheading as seen in Figure 9-4.
Figure 9-4 GUI no animation
9.2 Command-line interface setup
Using a suitable terminal client such as PuTTY, connect to the management IP address by
using Secure Shell (SSH) (port 22). You then get a login prompt, as shown in Figure 9-5.
Figure 9-5 CLI: Login prompt
If this is the first time a connection has been made from this workstation, you might be asked
to accept a security key, as shown in Figure 9-6. Click Yes to tell PuTTY to save the RSA key
for future connections.
Figure 9-6 PuTTY rsa key
Save the connection definition in PuTTY so it can easily be started in the future.
Also connect, test, and save a session to the storage management IP address. This is used
only in a recovery situation, but it is a good practice to have it tested and easy to start
beforehand. If you are accustomed to earlier code levels of SAN Volume Controller (SVC),
you will notice that the requirement to create and store a key file has been dropped.
Authentication is now by user ID and password.
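Any SSH client can be used in place of PuTTY. For example, from a UNIX or Linux workstation
you might connect interactively or run a single command as shown in this sketch; the user ID
and address are placeholders.

# Interactive CLI session to the Storwize V7000 Unified management IP address
ssh admin@<management_ip>
# Run a single CLI command without an interactive session
ssh admin@<management_ip> lsnode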
9.3 Using the GUI
All management and configuration functions for both file and block are available through the
Storwize V7000 Unified management GUI interface. For this reason, there is no need to
connect to the GUI interface of the storage module for normal operations.
When logged on to the Management GUI, the main window is displayed. The window has five
main areas:
Top action bar This area is the blue bar across the top. It has a welcome message
and includes links to Help and Information. It also has a logout link to
close all functions and log off the GUI.
Main Section Icons Down the left side of the window are a number of icons. Each
represents a main section of management. Hovering the mouse over
each icon causes it to become larger and a submenu opens to the
right. If low graphics mode was selected, you click the icon to display
the topic and choose the submenu by using the navigation.
Navigation menu Along the top of the main window, there is a menu showing the
currently displayed panel and which section and menu item it belongs
to. If the submenu has multiple choices, then it is shown as a
pull-down option that allows you to select the submenu.
Main window The current window is displayed in the right (largest) panel, which is
based on the selections made. The contents of this window vary
depending on the action being performed.
Bottom status bar At the bottom are three bars:
- The left bar indicates the current file capacity of the cluster and how
much has been used.
- The middle bar indicates the number of background tasks running.
- On the right side, the bar gives information about the health of the
cluster. Normally this bar is colored green, but changes to yellow or
red if there are exceptions. Hovering over the X at the left end opens
a list of the major components that have unhealthy status and
indicates the highest priority status on that component.
9.3.1 Menus
We describe the menus available in the sections that follow.
Home
The Home menu has only one submenu, the Overview. The Overview window displays a
graphical view of the entire cluster from a data point of view. It shows the major areas where
data is managed and each icon also gives the number of resources defined in that area. The
icons are arranged to show the relationship between each and the data flow.
The suggested task button gives a list of shortcuts to common tasks.
Clicking an icon displays a brief description of that resource at the bottom of the window.
Monitoring
The Monitoring menu displays the following submenus:
System This gives a graphical view of the cluster showing each major
component. The graphic for each component indicates its status with
colored indicators. Hovering over the links or clicking the item displays
windows giving the status and configuration details. The identify button
turns on the attention light on each component to help in locating the
physical device.
System details This option gives a tree view of each component. Clicking the entry in
the tree view displays details of that component in the right panel.
There is an action pull-down icon to select from the available actions
for that component. Each component view is unique and gives detailed
information about that component, including its status. Where
applicable, the event logs relating to that component are listed.
Events This menu option is described in detail in 15.2, Event logs on
page 256. There are two tabs that display the two independent event
logs, file and block. The view of each log can be customized and
filtered by selecting the wanted options in the filter controls at the top
of the panel. Both logs provide an action pull-down icon which acts on
the highlighted line entry in the view below. It is also possible to
right-click a log entry directly to show this action list. The choices in the
action list vary between the two logs.
Capacity This menu option gives a window with five tabs. The File Systems tab
shows a list of the file systems, their total capacity, and usage details.
Clicking each file system causes it to be included in the graph
displayed at the bottom of the window, which tracks historic usage over
time, or if wanted, the percentage. The File System Pools tab shows a
list of the file systems, their total capacity, and specific usage details. It
also shows details related to compression and thin provisioning. The
File Sets tab lists the file sets defined and gives usage metrics on each
one. The Users tab gives file usage by each user that is defined to the
cluster. The User Groups tab gives a higher level view that is based on
the groups the users belong to.
Performance The performance option has three tabs and gives a simple view of
some key performance indicators. The graphs that are shown are not
meant to provide detailed tuning information, but to show at a quick
glance the areas of immediate concern that might need further
investigation or to quickly identify a problem area during a
performance impact on the cluster. The File tab shows four graphs
and the scale can be altered using the pull-down menu on the right
side. The Block tab shows four graphs with the scale fixed to
5 minutes. The scope can be changed to show the whole cluster or
one node. The File Modules tab shows graphs and the scale can also
be altered by using the pull-down menu on the right side.
Files
All configuration actions for the file services are performed in this menu. These functions are
covered in detail in Chapter 11, Implementation on page 139. The Files menu presents
the following submenus:
File Systems Use this option to view the status and manage the file systems
configured on the cluster. Use the New File System button to create a
file system, or the Actions pull-down menu to perform management
functions on an existing one. You can determine whether the file
system is compressed or not and also see capacity information. You
can also filter whether to show NSD or storage pool details for each
file system.
Shares This option lists all shares or exports defined on the cluster, the path
for their root, and the protocol they are able to be accessed with. Use
the New Share button to create a share. This action starts a window to
enter the details. Or, use the Actions pull-down menu to manage an
existing share.
File Sets This menu option shows the defined file sets in a list detailing their
type, what the path is, in which file system, and statistical details. Use
the New File Set button to define a new file set and the Actions
pull-down menu to manage an existing one.
Snapshots In this option, you can create a snapshot, or manage an existing one
from the list displayed by using the New Snapshot and Actions
pull-down menus.
Quotas In this option, you can create a quota by using the New Quota
pull-down menu, or manage an existing one from the list displayed by
using the Actions pull-down menu.
Services The services tab is used to configure and manage the additional tools
that are provided for the file service:
- Backup selection gives a choice of which backup technology is used
to back up the file service. At the time of writing, two options are
available: IBM Tivoli Storage Manager and Network Data Management
Protocol (NDMP).
- The backup option display is technology-specific and is used to
configure the backup process.
- The antivirus selection is used to configure the external antivirus
server if antivirus scanning is being used.
Pools
Storage pools are collections of storage from which volumes are provisioned and used as block
storage by servers directly. These pools are also used by the file server to form file
systems. Resources that are owned by the file server do not show in all the GUI views, but the
capacity that they use is visible. The display for each pool also shows details related to
compression. This menu gives several views:
Volumes by Pool Clicking the pool (or MDisk group) in the left panel displays the
volumes in that group and their details in the right panel. You can use
the New Volume tab to create new volumes for block storage to servers
and the Actions tab to manage these same volumes. NSDs, the volumes
assigned to file systems, can only be monitored, for example by
examining their properties.
Internal Storage The Storwize V7000 has internal disk drives in the enclosures. This
option displays and manages these drives. Click the drive class in the
left panel to display the drives in that class.
External Storage Storwize V7000 can also manage external storage subsystems if
wanted by using the SAN connection. If any are attached, they are
managed in this option. Click the storage system controller in the left
panel to display the volumes presented in the right panel.
MDisks by Pools This option gives a different view of the pools. Here, we can see which
MDisks are in each pool.
System Migration This wizard is to assist with migrating an external storage system to be
managed by the Storwize V7000.
Volumes
The volumes are built from extents in the storage pools and presented to hosts as external
disks. There are several types of block volumes such as thin-provisioned, compressed,
uncompressed, or generic, and mirrored volumes. In this view we can create, list, and
manage these volumes:
Volumes This is a listing of all volumes.
Volumes by Pool By selecting the pool in the left panel, you can display the volumes that
are built from that pool.
Volumes by Host The hosts that are defined to the cluster are listed in the left panel. By
clicking a host, we can see which volumes are mapped to that host.
The file modules are hosts also and use block volumes (NSDs) but do
not appear as hosts.
Hosts
Each host that will access block volumes on the cluster needs to be defined. Each definition
must also include the worldwide name (WWN) or Internet Small Computer System Interface (iSCSI)
details of that host's ports. When the host is defined, volumes can be mapped to it, and they
are then visible to the ports with the WWNs listed:
Hosts This is a list of all defined hosts. Here we can add and manage these
definitions.
Ports by Host This view allows us to see the ports that are defined on each host. The
hosts are listed in the left panel. Clicking the host displays the ports in
the right panel.
Host Mappings Each mapping that shows the host and the volume that is mapped is
listed, one per line.
Volumes by Host In this view, you can select the host from the left panel and see
volumes that are mapped to it in the right panel.
Copy services
Storwize V7000 Unified provides a number of different methods of copying and replicating
data. FlashCopy is provided for instant copy of block volumes within the cluster. Remote copy
is used to copy block volumes to another location on another cluster, either synchronously
(Metro Mirror) or asynchronously (Global Mirror). File systems can be replicated to another
file system by using the File Copy Services submenu described here:
FlashCopy In this option, all the volumes in the cluster are listed. Here, we can
create and manage copies and view the status of each volume.
Consistency Groups These are used to group multiple copy operations together that need
to be controlled at the same time. In this way, the group can
be controlled by starting, stopping, and so on, with a single operation.
Additionally, the function ensures that when stopped for any reason,
the I/Os to all group members have all stopped at the same
point-in-time in terms of the host writes to the primary volumes,
ensuring time consistency across volumes.
FlashCopy Mappings This option allows you to create and view the relationship (mapping)
between the FlashCopy source and target volumes.
Remote Copy In this option, you can create remote copies and consistency groups.
You can then view and manage these.
Partnerships For a remote copy to be used, there must be a partnership that is set
up between two or more clusters. This option is used to create and
manage these partnerships.
File Copy Services Use this panel to select different methods to replicate data between
different file systems.
Access
There are a number of levels of user access to the cluster, which are managed in this option.
The access levels are divided into groups, each having a different level of access and
authority. If wanted, multiple users can be defined and their access assigned to suit the tasks
they perform:
Users This option lists the user groups in the left panel, and the users in that
group in the right panel. New users can be added to a group and
managed.
Audit Log All commands issued on the cluster are logged in this log. Even if
initiated from the GUI, most actions cause a CLI command to be run,
so this is also logged.
Local Authentication The system supports user authentication and ID mapping by using a
local authentication server for network-attached storage (NAS) data
access. Using local authentication eliminates the need for a remote
authentication service, such as Active Directory or Samba primary
domain controller (PDC), thus simplifying authentication
configuration and management.
Settings
Use the Settings panel to configure system options for event notifications, directory services,
IP addresses, and preferences that are related to display options in the management GUI:
Event Notifications This option is used to configure the alerting and logging. Here we
define the email and SNMP servers and the levels of alerting as
wanted. This is covered in detail in 15.5, Call home and alerting on
page 274.
Directory Services Directory Services defines the fundamental settings for the file server.
We need to define the DNS domain and servers. We define the
authentication method that is used by the file server and the
authentication server.
Network Protocol If HTTP is configured as an access protocol on any shares, we need to
define the HTTPS security. Web access is by HTTPS only; pure HTTP
protocol is not allowed. Here we set up the authentication method and
keys if wanted.
Network The network setup for all the interfaces in the cluster is configured
here. Use the buttons in the left panel to select the interface and view
or modify the values in the right panel:
- Public Networks defines the file access IP addresses that are
presented on the client facing interfaces. These addresses float across
the file module ports as needed.
- Service IP Addresses are for the storage enclosure only. Define a
unique address for port 1 on each node canister. This address is used
only for support and recovery.
- iSCSI defines settings for the cluster to attach iSCSI-attached hosts.
- Use the Fibre Channel panel to display the Fibre Channel
connectivity between nodes, storage systems, and hosts.
IP Report The IP Report panel displays all the IP addresses that are currently
configured on the system.
Support This option allows us to define connectivity for sending alerts to IBM
Support and allowing IBM Support to connect to the cluster. We can
also create, offload, and manage the data collections that are needed
by support.
General In this option, we can set the time and date for the cluster, enter
licensing details if needed, and perform software upgrades for the
cluster. The software process is covered in detail in 15.9, Software
on page 278.
9.4 Using the CLI
Note: We suggest using the GUI instead of the CLI. The GUI builds the CLI commands
needed and automatically includes the correct parameters that are needed. We also
suggest using the GUI to determine the best way to use CLI commands. One example is creating
a compressed volume, which requires specific parameters such as rsize and autoexpand; if
these parameters are missing or misconfigured, the volume can go offline prematurely.
Like the GUI, there is a CLI connection to the Storwize V7000 Unified management address
and also to the Storwize V7000 storage enclosure management address. All functions can be
performed on the Storwize V7000 Unified, so the only access required for normal operation is
this single CLI session. The CLI session to the storage is only needed in recovery situations,
but it is a good practice to have set it up and tested it.
The commands are unique, so storage commands can be issued on the unified CLI by using
the same syntax. Most block commands can be prefixed with svcinfo or svctask as has been
the case on SVC and Storwize V7000 previously. Where there is ambiguity, this prefix needs
to be added. This ensures that the command is unique and gets the wanted result.
For example, lsnode displays information about the file modules, as shown in Example 9-1.
Example 9-1 lsnode file module information
[kd97pt0.ibm]$ lsnode
Hostname IP Description Role Product version Connection status GPFS status CTDB status
Last updated
mgmt001st001 169.254.8.2 active management node management,interface,storage 1.4.2.0-27 OK active active
10/22/13 1:17 PM
mgmt002st001 169.254.8.3 passive management node management,interface,storage 1.4.2.0-27 OK active active
10/22/13 1:17 PM
EFSSG1000I The command completed successfully.
[kd97pt0.ibm]$
The svcinfo lsnode command displays information about the Storwize V7000 nodes, as
shown in Example 9-2.
Example 9-2 lsnode Storwize V7000 node information
[kd97pt0.ibm]$ svcinfo lsnode
id name UPS_serial_number WWNN status IO_group_id IO_group_name config_node UPS_unique_id hardware iscsi_name
iscsi_alias panel_name enclosure_id canister_id enclosure_serial_number
1 node1 5005076802002B6C online 0 io_grp0 yes 5005076802002B6C 300
iqn.1986-03.com.ibm:2145.sanjose1.node1 01-1 1 1 78G06N1
2 node2 5005076802002B6D online 0 io_grp0 no 5005076802002B6D 300
iqn.1986-03.com.ibm:2145.sanjose1.node2 01-2 1 2 78G06N1
[kd97pt0.ibm]$
The information center has detailed information about the use and syntax of all commands.
Most commands are available to all users, but some commands are dependent on the
authority level of the user ID that is logged on.
Scripting of CLI commands is supported if the scripting tool supports SSH calls. Refer to the
information center for details about generating a key and using scripting:
http://www.ibm.com/support/publications/us/library/
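As a simple illustration of scripting, the following sketch uses SSH key-based authentication
to run a few read-only commands against the management address. The user ID, key file, and
address are placeholders; the commands are the ones described later in this chapter.

#!/bin/bash
# Quick status report collected over SSH from the Storwize V7000 Unified management address
UNIFIED=admin@<management_ip>
KEY=~/.ssh/v7ku_key
ssh -i $KEY $UNIFIED lscluster    # clusters managed
ssh -i $KEY $UNIFIED lsnode       # file module status
ssh -i $KEY $UNIFIED lsfs         # file systems
ssh -i $KEY $UNIFIED lsmount      # mount status of all file systems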
Listed below are some commands that you might find useful during recovery. Always refer to
the information center for syntax and expected results.
9.4.1 File commands
lscluster Lists the clusters managed
lsnode Lists the nodes in the cluster
lsnwmgt Shows the configuration of the management ports
lsnwinterface Lists the physical client facing interfaces
lsnw Lists the networks (or subnets) defined
chnwmgt Sets or changes the addressing of the file module management ports
chrootpwd Changes the root password across all nodes. Need root logon
initnode Stops or restarts a file node
resumenode Resumes a node that has been suspended or banned
stopcluster Shuts down a cluster or node
suspendnode Suspends a node
lsfs Lists the file systems
lsmount Lists the mount status of all file systems
mountfs Used to mount a file system, only used during recovery
unmountfs Unmounts a file system
9.4.2 Block commands
svc_snap Gathers a data collection from the block storage Storwize V7000
lssystem Lists the Storwize V7000 storage system
svcinfo lsnode Lists the nodes in the storage system
lsdumps Lists dump files saved on the storage system
lsfabric Produces a list (often very long) of all the Fibre Channel paths known
to the storage system
lsmdisk Lists all the MDisks that are visible to the storage system. Useful if you
need to also see MDisks owned by the file storage, which are hidden
in the GUI
detectmdisk Rescans and rebalances Fibre Channel paths. Use with care because
this reconfigures the pathing to the currently visible paths and drops
failed paths
chsystemip Changes or sets the IP addresses of the storage system
stopsystem Allows you to stop a node or the entire storage system
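For example, during problem determination you might combine some of these read-only commands
into a quick health check of the block storage from the unified CLI (a sketch only; always
check the information center for syntax and expected results):

svcinfo lssystem     # overall storage system attributes
svcinfo lsnode       # status of both node canisters
svcinfo lsmdisk      # MDisk status, including MDisks used by the file systems
lsdumps              # dump files available for support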
Chapter 10. Planning for implementation
In this chapter, we describe the planning steps that are required to prepare for a successful
implementation of the IBM Storwize V7000 Unified system. We strongly suggest considering
all the solution requirements in advance in order to achieve the best results, rather than an ad
hoc, unplanned implementation. These requirements should be recorded early as part of the
solution planning process.
The IBM SONAS/V7000 Unified Questionnaire is a vital planning resource which should be
completed during pre-sales activities. We describe the questionnaire in section 10.1, IBM
SONAS/V7000 Unified Questionnaire.
The IBM Storwize V7000 Unified Information Center is another resource available to you during
the planning phase. You can access the planning information by pointing your web browser to
the following address:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fsvc_webplanning_21pb8b.html
10.1 IBM SONAS/V7000 Unified Questionnaire
Adequate and careful planning is a very important part of solution design and implementation.
Without proper planning in place, issues will arise during the implementation phase, and
functional, performance, and capacity problems will likely occur when the V7000 Unified is put
into production use.
The IBM SONAS/V7000 Unified Questionnaire is a very useful tool to assist you during pre-sales
activities. The IBM account team presents the client with the questionnaire in order to
collect and understand all relevant information and requirements of the solution. It is
important to take the time to respond as completely and as accurately as possible to ensure
optimal solution planning.
Figure 10-1 shows the beginning section of IBM SONAS/V7000 Unified Questionnaire.
Figure 10-1 IBM SONAS/V7000 Unified questionnaire
The questionnaire contains the following key areas:
Opportunity details
Capacity requirements
Performance requirements
Client communication protocols
Data protection
Data center requirements
Storwize V7000 Unified specific questions
Service requirements

Important: When completed, the IBM SONAS/V7000 Unified Questionnaire becomes IBM
Confidential.
We explain these key areas in more detail in the following sections.
10.1.1 Opportunity details
Figure 10-2 shows the opportunity details section in the questionnaire.
Figure 10-2 Opportunity details
When completed, this section will provide general details such as the client name,
seller/business partner details, target installation and production dates and so on.
10.1.2 Capacity requirements
This part of the questionnaire is shown in Figure 10-3.
Figure 10-3 Capacity requirements
In this section of the questionnaire, we gather information about the required capacity for
block and file I/O, the expected data growth rate, file systems, and tiering details.
The Capacity Magic modeling tool can be used to determine the configuration for the required
capacity based on the input data.
Information lifecycle management
File systems for information lifecycle management (ILM) require multiple internal file system
pools to be defined in different tiers. These tiers are then mapped to different storage pools
which should be based on storage tiers, for example, drive classes and drive technology.
You need to create a plan for the lifecycle of a file, for example based on file type, time since
last modification, or time since last access. Based on the file capacities needed for the
different tiers, the corresponding capacity in storage tiers must be provided. This determines
the type and number of disk drives to order for the back-end storage.
Determine the wanted policy definitions for the following components (an illustrative policy
sketch follows this list):
Data placement at file creation
Data migration between the tiered pools during the lifetime of the file
Data deletion after the file's specified expiration criteria are met
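To illustrate these three policy types, the following GPFS-style policy rules are a sketch
only; the pool names (system and nearline), the file name patterns, and the thresholds are
assumptions that must be adapted to the pools and lifecycle rules actually planned for the
file system.

/* Placement: new files are created in the fast pool by default */
RULE 'place_default' SET POOL 'system'
/* Migration: move files that have not been accessed for 30 days to the nearline pool */
RULE 'mig_cold' MIGRATE FROM POOL 'system' TO POOL 'nearline'
  WHERE (DAYS(CURRENT_TIMESTAMP) - DAYS(ACCESS_TIME)) > 30
/* Deletion: remove temporary files that have not been modified for a year */
RULE 'del_tmp' DELETE
  WHERE LOWER(NAME) LIKE '%.tmp'
    AND (DAYS(CURRENT_TIMESTAMP) - DAYS(MODIFICATION_TIME)) > 365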
Hierarchical storage management
With hierarchical storage management, be aware of the following factors:
Hierarchical storage management (HSM) works with Tivoli Storage Manager as the
backup method
HSM is not supported by Network Data Management Protocol (NDMP) backup
Requires software and a license for IBM Tivoli Storage Manager for Space Management
Requires an external file system pool to be configured
Requires appropriate external storage, which is supported by Tivoli Storage Manager
HSM
10.1.3 Performance requirements
We show the performance requirements section of the questionnaire in Figure 10-4.
Figure 10-4 Performance requirements
The questions in this part of the questionnaire try to determine the factors that are
important for good performance. These factors include the expected type of I/O (sequential
access versus random access), large files versus small files, the expected cache hit ratio,
the number of concurrent users, and so on. If the client plans to use storage in a VMware
environment, we also need to identify the VMware vSphere version, the number of virtual
machines in use, the number of datastores, and the number of VMware ESXi hosts.
If this is an existing environment or if a test environment is available, the workloads
experienced can be measured and projected. Tools are available at no cost to analyze workloads
and gather the necessary information about the system components, I/O patterns, and network
traffic. For Windows environments, perfmon can be used. In IBM AIX and Linux environments,
nmon is one of the options to use. Other options include traceroute, netstat, tcptrace,
tcpdump, iozone, iorate, netperf, nfsstat, and iostat.
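For example, on an AIX or Linux host you might capture workload data as shown in this sketch;
the sampling intervals and counts are placeholders and should be adapted to the measurement
window needed.

# Record nmon samples to a file every 60 seconds, 1440 samples (24 hours), for later analysis
nmon -f -s 60 -c 1440
# Display extended disk I/O statistics every 60 seconds
iostat -x 60
# Display NFS client and server statistics on hosts that use the file shares
nfsstat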
The Disk Magic modeling tool can be used to verify that the performance requirements can be
met with the system configuration determined with Capacity Magic. You can adjust drive types,
Redundant Array of Independent Disks (RAID) levels, and the number of drives required
accordingly.
10.1.4 Client communication protocols
This is the largest part of the questionnaire, and its purpose is to collect information about
the communication protocols the client intends to use. The subsections in this part are:
Network requirements
SMB/CIFS protocol requirements
Authentication requirements
NFS protocol requirements
Other client and protocol requirements
We show the first part of the client communication protocols section in Figure 10-5.
Figure 10-5 Client communication protocols (first part)
Figure 10-5 shows fields which belong to the following subsections:
Network requirements
This subsection contains fields for entering information about file I/O network parameters,
such as network speed and VLAN details.
SMB/CIFS requirements
If the client plans to use the SMB/CIFS protocol, this subsection needs to be filled out
accordingly.
Authentication requirements
In this subsection, the authentication type, the number of user domains, and other pertinent
information can be provided.
The second part of the client communication protocols section is shown in Figure 10-6.
Figure 10-6 Client communication protocols (second part)
This part shows the following subsections:
NFS protocol requirements
This subsection needs to be completed if the client plans to use NFS. Details such as the
maximum number of NFS users and the number of concurrent NFS users can be provided
here.
Other client and protocol requirements
The use of protocols such as FTP, HTTPS and SCP can be indicated here, along with
other relevant information.
Network considerations
The Storwize V7000 Unified uses Storwize V7000 as the back-end storage system for
internal storage. Therefore, all the LAN considerations for the Storwize V7000 product apply
to the back-end part as well:
In contrast to the ports on the file modules, the network ports on the Storwize V7000 are
not bonded by default.
The 1 Gb Ethernet port 1 on both node canisters is used for management access via the
Storwize V7000 cluster IP address by default:
Optionally, use port 2 to define the second management IP address for redundancy.
As with the stand-alone Storwize V7000, the management IP address is active on
port 1 of the current configuration node canister. Either one of the two node canisters in
a V7000 can act as the configuration node canister. Configuration node role changes
between the two in case of problems, changes, or during V7000 code updates.
Both 1 GbE ports can be simultaneously used and configured for iSCSI access from iSCSI
hosts.
The 10 GbE ports in V7000 models 312/324 can be configured for iSCSI access only.
Every node canister in the V7000 should be configured with a service IP address that is
accessible in the given network environment, in case the Storwize V7000 cluster
management IP address is not reachable or the cluster itself is no longer working. Then,
access via the Service Assistant interface to the individual node canisters might be
required to debug and resolve the situation. Default service IP addresses of node
canisters are 192.168.70.121 and 192.168.70.122, with a subnet mask of 255.255.255.0
and a default gateway of 192.168.70.1. These can be used if appropriate or changed to
other valid addresses on the client network.
In addition, the two file modules provide their own, different interfaces, which generate more
considerations:
All interfaces are bonded, that is, using a virtual interface bonded on two physical ports.
Therefore, the two ports belonging to one bonded interface cannot be attached to
separate networks. This configuration is described in the installation documentation as
well.
The file modules use two 1 GbE ports (bonded) for a direct cluster connection between
them.
Similar to the Storwize V7000, the two remaining 1 GbE ports can be used for both
management access and data traffic (difference: the V7000 supports iSCSI traffic only) via
TCP/IP.
The default bonds that are configured are ethX0 for data traffic on the 1 GbE ports, mgmt0 for
management traffic on the same 1 GbE ports, and ethX1 for the 10 GbE ports of the file
modules.
Note: The management communication between the Storwize V7000 and the two file
modules runs via these 1 GbE ports on the Storwize V7000. They must be
configured to be in the same subnet as the management ports on the file modules.
The file modules can optionally use 10 GbE ports for management, but the 1 GbE
ports are the default and must be used for the initial configuration.
The Storwize V7000 Unified uses only IPv4 at this time.
Note: In contrast to the stand-alone Storwize V7000, the service IP addresses of the
Storwize V7000 node canisters can no longer be changed as part of the USB key
initialization process of a Storwize V7000 Unified system. The init tool screens for Storwize
V7000 Unified allow us to only set the service IP addresses of the two file modules during
the initial installation. Therefore, we recommend as the first step, when GUI access is
available, to change the service IP addresses of the V7000 node canisters to the wanted
ones.
Default management access is via 1 GbE ports and this is required for initial installation:
Management via 10 GbE ports is optional and can be configured later.
Ensure that communication with the Storwize V7000 management IP via 1 GbE
continues to work because management of the Storwize V7000 storage system is
always via 1 GbE.
VLANs are supported and can be configured for both 1 GbE and 10 GbE after initial
installation and Easy Setup steps are complete. There is no VLAN support during initial
installation and Easy Setup.
SAN considerations
The storage area network (SAN) considerations of the Storwize V7000 Unified are similar to
the ones of a stand-alone Storwize V7000 because all the FC access-related functions are
the same. The only difference is that the Storwize V7000 Unified has only four FC ports
available on its V7000 node canisters for SAN connectivity because the other four ports are
dedicated for, and directly connected to, the two file modules.
Our recommendation is to have a redundant SAN configuration with two independent fabrics,
providing redundancy for the V7000 connections, FC host port connections, and connections
for external SAN virtualized storage systems. All connections should be evenly distributed
between both fabrics to provide redundancy in case a fabric, host, or storage adapter goes
offline.
Zoning considerations
For the Fibre Channel connectivity of the Storwize V7000 Unified, the same zoning
considerations apply as for a stand-alone V7000. The one difference is that there are only
four FC ports available (two ports per node canister: Port 3 and port 4).
Our recommendation is to create a node zone in every fabric with two of the four V7000 ports
(one per node canister: port 3 in one fabric, port 4 in the other fabric) as a means of a
redundant communication path between the two node canisters (if there is a problem with the
communication via the midplane inside the V7000).
For Storwize V7000 host attachment via Fibre Channel, create a host zone per host in each
fabric, assign the FC connections of this host in a redundant fashion, and zone them with the
V7000 node canisters to ensure redundancy.
If there is external SAN-attached storage to be virtualized, create a storage zone in each
fabric with half of the ports of the external storage system and one port per node canister in
the same fashion.
File access protocols
All data subsets need to be accessed via only one file protocol:
NFS exports: The default owner of an export is the root user. You need root user access to
the NFS client for initial access to the export and to create the directory structures and
access permissions for other users as wanted.
Note: Currently, there is an open problem when having both 1 GbE data network and
10 GbE data network on the same subnet. The Storwize V7000 Unified then responds only
on the 1 GbE interfaces. This issue will be fixed in one of the next maintenance releases.
CIFS shares: It is mandatory to specify an owner when creating a new share to be able to
access it for the first time from the client side. Otherwise, the default owner is the root
user as with NFS, but this user typically does not exist on the CIFS client side. The initial
owner that is specified is the one used for initial access from the CIFS client side to create
the directory structures and access control lists (ACLs) for all users as required. It is
important, for example, that the traverse folder right is granted so that directories below
the original home directory of the share can be accessed. When the directory structure for
other users and appropriate ACLs are defined, the necessary shares can be defined in the
Storwize V7000 Unified afterward. If their access works as needed, the initial owner can be
deleted if wanted, or the initial owner's access rights can be minimized as needed.
Multiple simultaneous exports of the same subset of data via different protocols
This is fully supported by the Storwize V7000 Unified. Most likely, this multiprotocol export
will use NFS and CIFS. The difficulty is to ensure that the access rights and ACLs that are
set are compatible from both client sides.
Authentication requirements
Decide which implementation of authentication service will be used, and whether only external
(recommended) or a mixed external and internal user ID mapping will be used.
If there is an existing authentication infrastructure already, it often includes only one
option (for example, Active Directory or Lightweight Directory Access Protocol (LDAP)), which
largely determines the choice for the Storwize V7000 Unified implementation as well.
Notes:
First time creation of a CIFS share: It is mandatory to specify an owner for first time
access when creating a CIFS share. Otherwise, only the root user has access to that share
if no other permissions are set from the client side already (which is not the case if it is
really the first time access). An owner of a share can be set or changed on the Storwize
V7000 Unified only when there is no data stored in the share yet.
For managing CIFS ACLs: The authorization setting Bypass traversal check is not
supported by Storwize V7000 Unified. Therefore, traverse folder rights must be explicitly
granted to all users (or, for example, to Everyone) that should have access to directory
structures below the level of the current directory. These users do not see any contents
when traversing directories that just have traverse folder rights.
Simultaneous export of the same data via both CIFS and NFS: Changing ACLs from
the NFS side will most likely destroy the CIFS ACLs because NFS uses the much simpler
Portable Operating System Interface (POSIX) bits for user/group/other to manage access
rights. Because CIFS provides much more sophisticated ACL management options, it is
recommended to manage ACLs for the common share on the CIFS client side.
Note: Volumes for file access (equivalent to NSDs, as seen by GPFS) are not explicitly
visible in the GUI and cannot be modified in the standard GUI windows, for example,
volumes or pools. Only volumes for block I/O can be created and modified in these GUI
windows. This is made on purpose because the file volumes get created on file system
creation and are always associated with a file system. Therefore, they should not be
manipulated separately and are hidden from the standard GUI panels involving volumes.
They are managed on the file system layer instead (typical CLI commands: chfs and
mkdisk).
The file volumes can be displayed explicitly by listing the details about the appropriate file
system in the GUI or via CLI.
The details of each of the following authentication options are described in Chapter 4,
Access control for file serving clients on page 37.
Below is a summary of the available options, from which one method needs to be selected.
Active Directory (includes Kerberos)
The following Active Directory options and considerations exist:
Standard (provides ID mapping for Windows only)
In this case, the Storwize V7000 Unified uses an internal ID mapping, both for UNIX-type
users (using a user ID/group ID scheme) and for mapping Windows security identifiers (SIDs)
to local user identifiers (UIDs) and group identifiers (GIDs)
With Services for UNIX (RFC2307 schema)
Available on domain controllers running Windows 2003 SP2 R2 and higher
This option provides full external user ID mapping for both Windows and UNIX users
inside the Active Directory server
With Services for UNIX (SFU schema)
Available on domain controllers running Windows 2000 and 2003
Similar option for older domain controllers to provide full external user ID mapping for
both Windows and UNIX users inside the Active Directory server
With Network Information Service (NIS) (netgroup support only)
Adding netgroup support via NIS
Netgroup is an option to group hosts and manage them as one group
With NIS (netgroup support and user ID mapping)
Using NIS for the UNIX user ID mapping
When using Active Directory, we recommend providing full external user ID mapping in the
Active Directory server.
Lightweight Directory Access Protocol
The following Lightweight Directory Access Protocol (LDAP) options exist:
LDAP
Secure LDAP (with Kerberos)
Secure LDAP (with Kerberos) and Secure Sockets Layer/Transport Layer Security
(SSL/TLS) encrypted communication (available via CLI only)
Important: The Storwize V7000 Unified supports only one authentication method at a
time. Changing it later is not recommended. Therefore, it is important to carefully decide
and select the method at the start.
Ensure that long-term goals are taken into account. Also, the potential use of certain
functions, like asynchronous replication and the planned future enhancements for WAN
caching (see the Statement of Direction published at the announcement of Storwize
V7000 Unified), requires external-only ID mapping. Therefore, if there is a chance that
this might be needed at some point in the future, ensure that external-only user ID
mapping is used from the start.
Secure LDAP (with Kerberos) and SSL/TLS encrypted communication provides the most
security and is therefore the recommended option in an LDAP environment.
Samba primary domain controller (NT4 mode)
The following Samba primary domain controller (PDC) options are available:
Stand-alone
With NIS (netgroup support only)
With NIS (netgroup support and user ID mapping)
Samba primary domain controller is a legacy implementation that few environments
require. However, it is still supported by the Storwize V7000 Unified.
NIS (NFS only)
NIS with netgroup support only
This implementation does not provide user-based authentication but rather client-based
(host or IP address) authentication, because all users connecting from the same NFS client
machine get access.
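As a generic illustration of a netgroup (host names are hypothetical and not from this book): an NIS netgroup is a list of (host,user,domain) triples. A netgroup named nfsclients could be defined as (client1.example.com,,) (client2.example.com,,), and an NFS export can then be restricted to @nfsclients so that only those client machines are allowed to mount it.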
Local authentication
In this release, Version 1.4, local authentication has been added. This provides the ability to
run an OpenLDAP server on the file modules and to replicate the configuration between the nodes
by using LDAP MirrorMode.
For more information, see Chapter 11, "Implementation" on page 139.
10.1.5 Data protection
We use this section to gather planning information about the following data protection
methods:
Snapshots
Asynchronous replication
Backup and restore with IBM TSM or NDMP
Antivirus
Figure 10-7 shows the data protection section of the questionnaire.
Figure 10-7 Data protection
Snapshots
Snapshots are space efficient by design: they work with pointers and use redirect on write for
updates to existing data blocks that are part of a snapshot. Therefore, the rate of change
of the data that is part of a snapshot, the frequency of creating snapshots, and the
retention period for existing snapshots determine the capacity required for snapshots. If a
snapshot is used only for backup or asynchronous replication and deleted afterwards, there is
usually no need to include significant extra capacity for snapshots in the planning. But if a
number of previous versions of files are kept in snapshots to protect against operational
failures (enabling easy file restores for users), this needs to be taken into account along with
the expected change rate for this data.
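As a rough illustration (the figures are assumptions, not measured values): if seven daily snapshots of a file system are retained and approximately 2% of the data changes each day, plan for roughly 7 x 2% = 14% of extra capacity for snapshot space on top of the live data.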
Asynchronous replication
With asynchronous replication, be aware of the following factors:
Requires a second Storwize V7000 Unified system
Not supported within a single Storwize V7000 Unified system
Requires full external user ID mapping, for example, Active Directory with Services for
UNIX (SFU)
Operates at the file system level, with a 1:1 relation between the local and remote file
system
Sequence of operations:
First, a snapshot is taken on the source file system
The source file system is scanned to identify the files and directories which were
changed (created, modified or deleted) since the last asynchronous replication
Changes are identified and replicated to the target file system
When the replication is complete, a snapshot is taken on the target file system and the
source snapshot is deleted
Timing:
Frequency is determined by the interval defined in a scheduled task for asynchronous
replication. The minimum interval that can be defined is 1 minute. The duration of one run is
determined by the time to take a snapshot, the time to scan for changed files, and the time it
takes to transfer the changes to the remote site given the available network bandwidth. Also
factored in are the time to take a snapshot at the remote site and, finally, the time to delete
the source snapshot. If the first asynchronous replication cycle has not
completed before the next scheduled asynchronous replication is triggered, the
subsequent replication does not start; this allows the first one to complete
successfully (an error is logged). After it completes, a new asynchronous replication
starts at the next scheduled replication cycle.
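As an illustration (the numbers are assumptions, not product limits): with a replication task scheduled every 15 minutes, if the cycle that starts at 08:00 is still running at 08:15, the 08:15 cycle is skipped and an error is logged. Provided that the first cycle finishes before 08:30, the next replication then starts at 08:30.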
Note: File set and share definitions, quota, and snapshot rules that are defined are not
contained within the replicated data. This information is kept only at the source and is not
transferred to the replication target.
These definitions have to be applied to the target file system as needed for a failover
scenario (which might differ from the scenario at the source).
For testing disaster recovery, the target file system can be mounted as read-only to clients
on the target side. If writes are allowed and happen to the target file system, there is a
potential data integrity issue because the source file system does not reflect these updates
(changes are tracked only at the source).
If write access is needed on the target side, for example, as part of a disaster recovery test,
it is required to create file clones for the affected data files within the target file
systems. This cannot be done by using the snapshot on the target side (which can be
accessed only as read-only) because it is not possible to create file clones from a
snapshot.
Backup and restore with IBM TSM or NDMP
With Tivoli Storage Manager and NDMP, be aware of the following factors:
Only one method is supported. Therefore, you must choose either Tivoli Storage Manager or
NDMP.
If NDMP is selected, HSM is not supported.
NDMP backup is supported by NetBackup, CommVault Simpana, EMC NetWorker, and
Tivoli Storage Manager as the Data Management Application (DMA).
The NDMP data service runs on the Storwize V7000 Unified file modules.
Different NDMP topologies are supported: two-way or three-way (local is not supported):
Two-way: The DMA and the Tape Service run on the same system.
Three-way: The DMA and the Tape Service run on different systems, whereby the
metadata information is sent to the DMA and the data is sent to the Tape Service.
If Tivoli Storage Manager is selected as the backup method, the preinstalled Tivoli Storage
Manager client on the file modules is used.
Selection of Tivoli Storage Manager also enables the option to use HSM.
Antivirus
With antivirus, be aware of the following factors:
Supported antivirus product families/access schemes are Symantec and McAfee (via
Internet Content Adaptation Protocol (ICAP) on port 1344)
Requires external scan engines
Configurable options: scan on file open, scan on file close after write, and scheduled batch
scans (also known as bulk scans)
Note: Bulk scans do not rescan HSM-migrated files. Therefore, no file recall is required.
10.1.6 Data center requirements
This section can be used to provide details about special data center requirements, such as
power, floor space, and rack requirements. See Figure 10-8 for details.
Figure 10-8 Data center requirements
10.1.7 Storwize V7000 Unified specific questions
Figure 10-9 shows the fields in this section, specific to Storwize V7000 Unified. Use this
section to provide planning details relevant to real-time compression, the use of Easy Tier,
Metro Mirror or Global Mirror, and the use of FCoE or iSCSI.
Figure 10-9 Storwize V7000 Unified specific questions
It is important to understand the requirements for using compression before enabling it:
Hardware requirements: Compression requires dedicated hardware resources within the
node, which are assigned and unassigned when compression is enabled and disabled.
When you create the first compressed volume in an I/O group, hardware resources are
assigned to compression, leaving fewer cores available for the fast-path I/O code. Therefore,
you should not create a compressed volume or file system if the CPU utilization is
consistently sustained above 25% (see the CLI example after this list).
Data type: The best candidates for data compression are data types that are not already
compressed. Do not use compression for volumes or file systems that contain data that is
compressed by nature. Selecting such data to be compressed provides little or
no savings while consuming CPU resources and generating additional I/Os. Avoid
compressing data with less than a 25% compression ratio. Data with at least a 45%
compression ratio is the best candidate for compression.
Compression ratio estimation: To estimate the compression ratio of a volume, use the
Comprestimator tool. This is a command-line host-based utility that scans the volume and
returns the compression ratio that can be achieved while using compression.
Comprestimator: Comprestimator can be used only on devices mapped to hosts as
block devices. Therefore, it cannot be used in file servers and file systems in the V7000
Unified. For more information about estimating the compression ratio of files, see
Chapter 16, Real-time Compression in the IBM Storwize V7000 Unified on page 287.
Mixture of compressible and non-compressible data in a file system - placement
policy:
When a file system contains a mixture of compressible and non-compressible data, it is
possible to create the file system with two file system pools: one for the compressible files,
configured with compression enabled, and the other for the non-compressible files. A
placement policy is configured to place the compressible files in the compressed
file system pool. The policy is based on a list of file extensions of already-compressed file types
that are defined as an exclude list; this extension list is edited manually when configuring
the policy. Using the placement policy avoids spending system resources on
non-compressible data. The policy affects only files that are created after the
policy has been applied (a sketch of such a policy is shown after this list).
For more information about the placement policy and file types, refer to Chapter 16,
Real-time Compression in the IBM Storwize V7000 Unified on page 287.
License: Compression has limited access in the current version. Therefore, a code should
be entered when configuring a new compressed file system. In order to get the code to
enable compression, contact IBM at [email protected] and a specialist will contact
you to provide the code.
Number of compressed volumes: The number of compressed volumes is limited to 200
per I/O group. This number includes compressed file systems. When a file system is
created, three compressed volumes are created for it, and they are counted in the 200
compressed volumes limitation. For example, if you created 195 compressed volumes and
then created one compressed file system, you will have 198 compressed volumes in use
and you will not be able to create another compressed file system (only two compressed
volumes will be left and it takes three). Therefore, it is important to understand this
limitation and plan the number of compressed volumes and file systems in the entire
system before using it.
Plan the number of pools that are needed when using compression: When compression
is used, the considerations about the MDisks that should be created are different. Several
items should be considered first:
Compressed and non-compressed volumes should not be stored in the same
MDisk group. When volume types are mixed, the compressed and non-compressed volumes
share cache, which might increase the response time. Therefore, it is not recommended
to create compressed volumes in an MDisk group that contains non-compressed
volumes.
Create different pools for data and metadata. To use compression, the data and
metadata should be separated because the metadata should not be compressed.
Therefore, at least two storage pools should be created. For more information about
configuration refer to Chapter 11, Implementation on page 139.
Balanced system: When creating the first compressed volume, CPU and memory
resources are allocated for compression. When the system contains more than one I/O
group, it is recommended to create a balanced system. If you create a low number of
compressed volumes, it is recommended to create them all in one I/O group. For larger
numbers of compressed volumes, the general recommendation (in systems with more
than one I/O group) is to distribute compressed volumes across I/O groups. For example,
a clustered pair of Storwize V7000 control enclosures requires 100 compressed volumes.
It is better to configure 50 volumes per I/O group, instead of 100 compressed volumes in
one I/O group. You should also ensure that the preferred nodes are evenly distributed.
IBM Easy Tier: Real-time Compression does not support Easy Tier. Easy Tier is a
performance function that will automatically migrate or move extents of a volume to, or
from, one MDisk storage tier to another MDisk storage tier. Easy Tier monitors the host I/O
activity and latency on the extents of all volumes with the Easy Tier function turned on in a
multi-tiered storage pool over a 24-hour period. Compressed volumes have a unique write
pattern to MDisks, which would have triggered unnecessary data migrations. For this
reason, Easy Tier is disabled for compressed volumes and you cannot enable it. You can
however create a compressed volume in an Easy Tier storage pool but automatic data
placement is not active.
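Regarding the 25% CPU utilization guideline in the hardware requirements item above: before creating the first compressed volume or file system, check the sustained processor utilization of the Storwize V7000 node canisters, either in the GUI performance panel or with the block CLI. The following commands are a minimal sketch; the available parameters and output columns can vary by code level, so verify them in the CLI reference:
lssystemstats
lssystemstats -history cpu_pc
The first command lists the current system statistics, including the cpu_pc value; the second shows the recent history of the CPU utilization statistic.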
For more information about Real-time Compression, see Chapter 16, Real-time
Compression in the IBM Storwize V7000 Unified on page 287.
For more information about compression technology, see Real-time Compression in SAN
Volume Controller and Storwize V7000, REDP-4859:
http://www.redbooks.ibm.com/redpieces/abstracts/redp4859.html
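The following is a minimal sketch of what an extension-based placement policy, as described in the mixture item above, can look like, written in GPFS policy rule syntax. The pool names and the extension list are assumptions for illustration only; the actual pool names, the full list of already-compressed file types, and the supported way to apply the policy on the Storwize V7000 Unified are described in Chapter 16, "Real-time Compression in the IBM Storwize V7000 Unified" on page 287.
/* Files that are already compressed by nature go to the non-compressed pool */
RULE 'precompressed' SET POOL 'silver' WHERE LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.jpg' OR LOWER(NAME) LIKE '%.mp3'
/* All other new files are placed in the compression-enabled pool */
RULE 'default' SET POOL 'compressed'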
10.1.8 Service requirements
The final section of the questionnaire can be used to document service requirement details. As
shown in Figure 10-10, this section allows you to list required system implementation and
data migration services.
Figure 10-10 Service requirements
If data migration is needed, there are different ways available to achieve it. In general, if you
need IBM assistance with the migration of data into the Storwize V7000 Unified, contact IBM
regarding the Data Migration Services offerings available that match your requirements:
http://www.ibm.com/services/us/en/it-services/data-migration-services.html
For SAN-attached block storage, a migration wizard is built into the GUI, which helps in
migrating existing data into the Storwize V7000 managed storage. This wizard is described in
more detail in Implementing the IBM Storwize V7000 V6.3, SG24-7938.
For data migration from existing file storage and network-attached storage to the Storwize
V7000 Unified, the migration has to happen at the file level so that the software is aware of all
files and up-to-date about the changes, and so that the ACLs of files and directories are maintained.
Therefore, the block level migration options built into the V7000 cannot be used for that
purpose. We recommend contacting your IBM representative to decide on the best migration
policy in this case.
10.2 Planning steps sequence
In the following sections, we list and explain the correct sequence of planning steps.
10.2.1 Perform the physical hardware planning
It is important to take into account the physical components that are needed to ensure that
everything that is required is ordered ahead of time. Consider the following components:
Storwize V7000 Unified system, cables, and connectors
Storage area network (SAN) and network switches that are required, cables, and
connectors
File access clients that are required, and I/O adapters
Fibre Channel (FC) and Internet Small Computer System Interface (iSCSI) hosts that are
required, and I/O adapters
Power and cooling requirements for all hardware involved
Plan for the lab floor space and rack layout for all the hardware identified
10.2.2 Define the environment and services needed
Plan for the environment and the services that are required:
IP addresses needed for management and service of Storwize V7000 and both File
Modules, public IP addresses to serve File I/O, and client IP addresses
Authentication service: Servers needed according to the selected method and netgroup/ID
mapping support
Time synchronization: Network Time Protocol (NTP) servers
Domain Name System (DNS): DNS servers
Copy services and async replication, including required connectivity and remote target
systems
Back up servers according to the method chosen and storage
Tivoli Storage Manager hierarchical storage management (HSM) servers and storage, if
required
Antivirus scan engines
10.2.3 Plan for system implementation
The closer you get to the actual implementation, the more important it is that you consider the
following requirements:
Define the local and remote (if needed) SAN zoning requirements
Define the network requirements for management and data access
Define the network interfaces of V7000 and file modules, including subnets and VLANs
Define the logical configuration of the system (both File and Block access)
Define the pools and LUN layout for Block access
Define the pools, exports and shares, file systems, file sets, and directory structures for file
access
Define users required for management and monitoring roles of the Storwize V7000 Unified
itself and for file-based access requiring authentication and configure them within the
authentication service/directory server
Plan for the user ID mapping method (external/mixed)
Define authorizations that are required for every file access user within the file system/file
set/directory structures
10.2.4 Plan for data migration
Based on the features of the Storwize V7000 Unified, there are two different options for data
migration:
Migrating data from existing SAN-attached storage to the Storwize V7000 Unified using
the built-in migration wizard and image mode volumes
Migrating data from existing NAS systems to the Storwize V7000 Unified using file-based
migration options
10.3 Support, limitations, and tools
Always verify your environment against the latest support information for Storwize V7000
Unified and be sure to use the latest versions of modeling tools for capacity and performance
(Capacity Magic and Disk Magic).
Determine lists of hosts and platforms to be attached and verify interoperability support and
any restrictions for the following components:
FC attachments
Network attachments and file access
iSCSI attachments
Determine your requirements and verify that they are within the capabilities and limitations of
the system. The Technical Delivery Assessment (TDA) checklist provides more useful
considerations. Use the modeling tools available (with help from your IBM Support or IBM
Business Partner Support if needed) to determine the system configuration that is able to
fulfill your capacity and performance requirements.
Here are some useful links for these purposes:
Support portal for Storwize V7000 Unified:
http://www.ibm.com/storage/support/storwize/v7000/unified
Interoperability support pages for Storwize V7000 Unified:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004228
Configuration Limits and Restrictions:
http://www.ibm.com/support/docview.wss?uid=ssg1S1004227
The general Limitations section in the information center is useful as preparation for the
planning and implementation decisions:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fadm_limitations.html
Verify the planned setup and environment using the Pre-Sales Technical Delivery
Assessment (TDA) checklist. The checklists for TDA for both Pre-Sales and
Pre-Installation can be found here (IBM and IBM Business Partner internal link only,
contact your IBM or BP support for help if you do not have access):
http://w3.ibm.com/support/assure/assur30i.nsf/WebIndex/SA986
Determine your capacity requirements including asynchronous replication for files,
snapshots, FlashCopy, Remote Copy for block I/O, and GPFS internal replication
requirements, and verify the system configuration using Capacity Magic.
Determine all the workload parameters required such as the number of I/Os per second
(IOPS), throughput in MBps, I/O transfer sizes for both file I/O and block I/O workloads,
number of clients (for file access), number of hosts (for block I/O access, both iSCSI and
FC), copy services requirements for both block I/O and file I/O. Also, verify the system
configuration using Disk Magic modeling.
The accuracy of the input data determines the quality of the output regarding the system
configuration required.
It is also important to note that there are influencing factors outside of the Storwize V7000
Unified system that can lead to a different performance experience after implementation, like
network setup and I/O capabilities of the clients used.
The IBM modelling tools for capacity (Capacity Magic) and performance (Disk Magic) can be
found at this website (IBM internal link only. Contact your IBM or IBM Business Partner
support for help with the modelling if you do not have access. IBM Business Partners have
access to these tools through IBM PartnerWorld):
http://w3.ibm.com/sales/support/ShowDoc.wss?docid=SSPQ048068H83479I86
10.4 Storwize V7000 Unified advanced features and functions
In the following sections, we describe planning considerations for the Storwize V7000 Unified
advanced functions.
10.4.1 Licensing for advanced functions
With licensing for advanced functions, be aware of the following factors:
Almost all advanced functions are included in the two base licenses required for Storwize
V7000 Unified. The following licenses are required: 5639-VM1 (V7000 Base, one license
per V7000 enclosure required) and 5639-VF1 (File Module Base, two licenses required)
Exception: External virtualization requires a 5639-EV1 license by enclosure
Exception: Remote Copy Services for block I/O access requires a 5639-RM1 license by
enclosure
Exception: Real-time Compression requires license by enclosure
10.4.2 External virtualization of SAN-attached back-end storage
With external virtualization of SAN-attached back-end storage, be aware of the following
factors:
Provides scalability beyond the Storwize V7000 limit for internal storage, which is 360 TB
currently
Maximum capacity that can be addressed is determined by Storwize V7000 extent sizes
defined at the storage pool layer, with a maximum of 2^22 extents managed
External storage is licensed by storage enclosure
Same support matrix as Storwize V7000 and the SAN Volume Controller (SVC)
10.4.3 Remote Copy Services (for block I/O access only)
Remote Copy Services (not including asynchronous file-based replication) are the same as
available with a Storwize V7000 with the same minimum code release of V6.4. They are not
applicable for Storwize V7000 volumes used for file systems.
An important consideration in the Storwize V7000 Unified is the reduced FC fabric
connections because of the required direct connections between the file modules and the
canisters.
See more details about this topic in Implementing the IBM Storwize V7000 V6.3, SG24-7938.
With Remote Copy Services, be aware of the following factors:
Remote copy partnerships are supported with other SVC, Storwize V7000, or Storwize
V7000 Unified systems (with the Storwize V7000 in the back-end of a Storwize V7000
Unified system)
Fibre Channel Protocol support only
Licensed by enclosure
SAN and SVC/Storwize V7000 Copy Services distance rules apply (maximum 80 ms per
round trip)
Needs partnerships defined to remote system (SVC/Storwize V7000/Storwize V7000
Unified)
A maximum of three partnerships at a time are supported, and that means a maximum of
four systems can be in one copy configuration. Not all topologies that are possible are
supported, for example, all four systems configured in a string A-B-C-D
Within the partnerships defined between systems, the copy services relationships are
established at a volume level as a 1:1 relationship between volumes. Each volume can be
in only one copy services relationship at a time
Consistency groups are supported
10.4.4 FlashCopy (block volumes only)
The FlashCopy implementation (not including snapshots as used for file sets and file
systems) is the same as available with a stand-alone Storwize V7000 with the same minimum
code release of V6.3. FlashCopy operations are not applicable for Storwize V7000 volumes
used for file systems.
See more details about this topic in Implementing the IBM Storwize V7000 V6.3, SG24-7938.
With FlashCopy, be aware of the following factors:
Need to take volumes and capacity needed for FlashCopies into account
All SVC/V7000 FlashCopy options are fully supported on standard Storwize V7000
volumes not used in file systems
Consistency groups are supported
10.4.5 General GPFS recommendation
Every file operation requires access to the metadata associated, therefore it is a general
GPFS recommendation to place the metadata on the fastest drive type available. This can be
achieved by creating Network Shared Disks (NSDs) (Storwize V7000 volumes associated
Chapter 10. Planning for implementation 121
with a file system) based on the fastest drive type and add them to the system file system
pool with the specific usage type of metadataonly. In addition, the usage type of the other,
slower NSDs must be set to dataonly. This ensures that only the fastest disks host the
metadata of the file system.
10.4.6 GPFS internal synchronous replication (NSD failure groups)
As described in Chapter 7, IBM General Parallel File System on page 65, this provides an
extra copy of the selected data type (data, metadata, or both) in a different storage pool.
Therefore, the pool configuration and additional capacity required needs to be taken into
account:
Synchronous replication operates within a file system and provides duplication of the
selected data type
This creates the need for additional space. When creating a new file system,
two storage pools should be defined: one for the metadata and at least one for the data.
You can also add multiple pools for the data in the same file system
Ideally, file system pools using this functionality are replicated between storage pools in
independent failure boundaries:
This independence defines the level of protection, for example, against storage
subsystem failure
This independence is compromised here because there is only one Storwize V7000
storage system managing the back-end storage
If metadata, data, or both is to be replicated between the file system pools
Defines level of protection
Capacity used must be included in planning for total file system capacity
Approximately 5 - 10% of file system capacity is used for metadata so you will need to
adjust overall capacity accordingly
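As a rough illustration (the numbers are assumptions based on the 5 - 10% figure above): for a 20 TB file system, plan approximately 1 - 2 TB for metadata. If the metadata is replicated between two file system pools, that amount is needed in each of the two pools; if the data is replicated as well, the full data capacity must also be available twice.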
10.4.7 Manage write-caching options in Storwize V7000 Unified and on client
side
There are different options to enable and disable caching within the layers inside the Storwize
V7000 Unified and also outside, for example, on the client side.
On NFS clients, the options specified with the mount command determine whether client-side
caching is allowed; hence, this can be changed for each individual mount of an export. The
Storwize V7000 Unified has no control over which option each NFS client is using. Mounting an
export with the sync parameter disables the client-side caching and ensures that the data is sent
to the Storwize V7000 Unified immediately after each update.
For CIFS clients, the Storwize V7000 Unified supports opportunistic locking (oplocks), which
enables client side caching for the CIFS clients. That means by default, the client-side
caching is granted by the Storwize V7000 Unified to every client that requests opportunistic
locking. This can be changed for every individual export and share via the chexport
command (see shaded information box below).
Inside the Storwize V7000 Unified, there can be write-caching on the file modules, managed
by the NFS and CIFS server layer. This happens by default for all open files for CIFS access.
For NFS access, the default is already set to syncio=yes. As soon as there is a sync
command or a file gets closed, the updates are written to the NSDs (volumes in the V7000
pools) immediately and are stored in the mirrored write cache of the V7000 before being destaged
to disk. This is safe because there is a second copy of the data. The caching in the file
modules can be controlled for every export or share via the chexport command options (see the
following information box).
10.4.8 Redundancy
The Storwize V7000 Unified has been designed to be highly available providing redundancy
by design. In order to achieve high availability for the data access and operations, it is
required that other parts of the environment provide redundancy as well. This is essential for
services like authentication, NTP, and DNS. There is a similar requirement for the networks to
be redundant and for the power sources of all these components as well.
If there is no redundancy at just one of these levels, there is an exposure to not being able to
continue operations when there is just a single failure in the environment. Having redundancy
at all these levels ensures that at least a double failure is necessary to create an outage to the
operations.
10.5 Miscellaneous configuration planning
In the sections that follow, we detail some of the other planning considerations.
10.5.1 Set up local users to manage the Storwize V7000 Unified system
A number of predefined roles are available to define users with different accesses and to tailor
access levels to your requirements:
Security Administrator: Administrator rights plus user management
Administrator: Full administration of the system except user management
Export Administrator: Export and share related administration only
System Administrator: System-related administration only
Storage Administrator: Storage-related administration only
Snapshot Administrator: Snapshot-related administration only
Backup Administrator: Backup and replication-related administration only
Operator: Has only read access to the system
The default user of a Storwize V7000 Unified system is admin, which has the Security
Administrator role and can manage other users
It is recommended to create other Security Administrator users as required. Optionally, you
can increase security by changing the default access, for example, by changing the password
for the user admin.
Important: For applications with critical data, all non-mirrored caching options in GPFS
should be disabled:
Caching on the client side: controlled via opportunistic locking (oplocks) option
Caching in file module cache: controlled via syncio option
This can be done by using the command-line interface (CLI) command chexport for the
relevant shares with the parameters oplocks=no and syncio=yes.
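The following is an illustrative invocation of the command mentioned in the box above. The grouping of the options under the --cifs parameter and the quoting are assumptions, not confirmed syntax; verify the exact chexport syntax in the CLI reference for your code level before using it:
chexport sharename --cifs "oplocks=no,syncio=yes"
Disabling oplocks prevents CIFS clients from caching writes locally, and syncio=yes causes the file module to write updates through instead of caching them.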
10.5.2 Define call home and event notifications
Call home requires a Simple Mail Transfer Protocol (SMTP) or email server address on the
client LAN that can forward emails to the default IBM service address. Details about the
system, client and administrator contact, and phone numbers are needed to establish contact
from IBM Support personnel in case of problems.
Event notification is supported by the following channels:
Email: Requires SMTP or email server address to be specified. Multiple levels of
notifications can be specified (for example, problems and informational events).
SNMP: Defines the IP address of the server and which kinds of events (for example, status
changes or utilization) should trigger a notification.
Syslog server: Defines IP address of server to receive information. Currently, only
information about the V7000 is sent.
10.5.3 Storage pool layout
In general, there are useful default settings, referred to as presets, built into the system, which
will be used for the automated configuration steps as offered by Easy Setup. If there are
standard performance requirements for either file I/O or block I/O workload, these can
conveniently be used, creating shared pools containing volumes for both workloads. If there is
a significant workload on either the file I/O or block I/O side, it is recommended to separate
these by using separate pools. The separate pools enable fault isolation, performance
predictability by using different physical disk drives in the back-end, and easier performance
analysis.
An additional criterion is the file protocol used to access the data: Data accessed by CIFS
clients must not be on SAN-attached, external virtualized storage. Besides the reason of
having separate failure boundaries on storage pool level, this is another reason to manage
separate storage pools for external storage and assign them only to file systems that do not
have CIFS shares defined.
Here is a checklist for the storage pool layout:
Block, file, or mixed storage/workload required:
Block workload only: No dependencies to file workloads. Use GUI or CLI to configure
No special performance requirements. Then, use the presets and best practices
built into Storwize V7000 by checking Auto-configure storage in Easy Setup
Special consideration regarding performance and placement optimization: The CLI
allows for specially-tailored configurations
File workload only: Operating outside of the general positioning of Storwize V7000 Unified,
but there might be good reasons for that:
No special performance requirements: Use Auto-configure storage in Easy Setup
and GUI to configure. This includes the presets and best practices built-in
Special consideration regarding performance and placement optimization: The CLI
allows for specially-tailored configurations
If mixed block and file workload: Plan storage layout between the two, including a manual
configuration of MDisks/Pools/Volumes as needed. Configure storage layout, first for the
file systems (generating file volumes), then block volumes. General recommendation:
Although supported, do not use mixed pools. Use separate pools for file access and block
I/O for better performance control and bottleneck analysis if required.
10.6 Physical hardware planning
Based on the results of the configuration sizing steps, determine the list of hardware items to
order. Table 10-1 on page 124 anticipates the main questions to ensure that all required areas
have been thought of. However, because of the complexity and multitude of options, this
might not be complete in every case, and specific items might need to be added as required.
Table 10-1 Checklist for required hardware (columns: Hardware area; Components/Details; Your items/numbers)
Storwize V7000 Unified configuration: Base configuration; V7000 expansions; connectivity for the different interfaces/protocols for all locations/sites involved
Network components and connectivity: Ethernet switches; 1 GbE/10 GbE and connectivity for all locations/sites involved
SAN connectivity: FC switches/directors; ports and connectivity for all locations/sites involved
Clients for File access: Server HW; 1 GbE NIC, or 10 GbE CNA, connectivity for all locations/sites involved
Hosts for FC or iSCSI access: Server HW; FC HBAs; 1 Gb and 10 Gb network cards; connectivity for all locations/sites involved
Services: Servers for NTP, DNS, Authentication, Backup, HSM, and Antivirus, all including connectivity
Miscellaneous: SAN-attached storage and connectivity; remote V7000/Storwize V7000 Unified systems for Remote Copy or Async Replication
Ensure that the required power and cooling are verified and provided as well.
10.6.1 Plan for space and layout
Physical space and layout planning considerations are as follows:
An appropriate 19-inch rack with 6U - 24U of space is required, depending on the number
of expansion enclosures to be installed. Each V7000 enclosure and each file module
measures 2U in height. The minimum configuration is one V7000 control enclosure and
two file modules with a total height of 6U.
Redundant power outlets in the rack are required to connect the two power cords per
V7000 enclosure and per file module to independent power sources. The number of power
outlets that are required ranges 6 - 24 per Storwize V7000 Unified system depending on
the number of V7000 expansion enclosures.
Regarding the physical hardware placement, layout, and connectivity, there is very
detailed information in Chapter 11, Implementation on page 139.
Two serial-attached SCSI (SAS) cables of the appropriate length are required per V7000
expansion enclosure. The individual lengths required are determined by the rack layout
and placement chosen for the V7000 control and expansion enclosures.
The Storwize V7000 Unified information center provides an overview about aspects of the
physical implementation planning. See the information center:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fsvc_installplan_22qgvs.html
The Storwize V7000 Unified hardware installation is described in the Storwize V7000 Unified
Quick Installation Guide, GA32-1056, available from this link:
http://publib.boulder.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.doc%2Fmlt_relatedinfo_224agr.html
10.6.2 Planning for Storwize V7000 Unified environment
Here is a list of the minimum prerequisites to set up and use a Storwize V7000 Unified
system. Several services are essential for operating and accessing the system and should
therefore be provided in a highly available fashion, like NTP, authentication, and DNS:
Time servers for synchronization according to the Network Time Protocol (NTP): To
guarantee common clock/time across the environment, especially between the
authentication server and the Storwize V7000 Unified system and Tivoli Storage Manager
backups. Provide two servers for redundancy.
Domain Name System servers
Required for DNS round robin for file access, provide two servers for redundancy
Required for Active Directory authentication (if this is used)
Authentication servers: Depends on the choice/decision made in Step 10.5.1, Set up local
users to manage the Storwize V7000 Unified system on page 122. Provide two servers
for redundancy.
Note: There are two independent SAS chains to connect the V7000 control enclosure to
the expansion enclosures. A symmetrical, balanced way to distribute the expansion
enclosures on both SAS chains is recommended for performance and availability. The
internal disk drives of the control enclosure belong to SAS chain 2 and therefore a
maximum of four expansion enclosures can be connected to this chain. On SAS chain 1, a
maximum of five expansion enclosures can be connected. To ensure a symmetrical,
balanced distribution, the first expansion enclosure would be connected to SAS chain 1,
the second one to SAS chain 2, the third one to SAS chain 1, and so on.
Note: All of these services are required but do not necessarily require a dedicated server.
For example, in case of Active Directory for authentication, the server can provide the NTP
and DNS service as well.
Select one of these services: Active Directory, LDAP, Samba PDC, local authentication,
or NIS
Optional:
NIS server for Active Directory or Samba PDC
Kerberos/KDC server for LDAP
Connectivity
Total of 4 x 1 GbE ports for file modules, min. 2 x 1 GbE ports for V7000
Optional: 10 GbE ports for file service (4x), or iSCSI attachment (minimum 2x)
IP addresses for management and service access
Minimum of six IP addresses required for system management and service:
1 x Storwize V7000 Unified cluster management IP
1 x V7000 cluster management IP
2 x service IP addresses for the two file modules (one each)
2 x service IP addresses for the two V7000 node canisters (one each)
Optional: 10 GbE for management of the Storwize V7000 Unified cluster and file modules
Storwize V7000 requires 1 GbE for management
Initial setup of Storwize V7000 Unified always requires 1 GbE for management and a
dedicated port. VLANs are not currently supported for initial setup using Easy Setup
VLANs are not supported for the initial setup, but can be configured later
VLAN ID of 1 must not be used
Optional: IP addresses for iSCSI if required:
A range of 1 - 4 IP addresses for 1 Gb iSCSI and 1 - 4 IP addresses for 10 Gb iSCSI
Recommended: Use minimum of 2 addresses per required interface 1 Gb or 10 Gb
IP addresses for serving file I/O to clients:
Minimum of two public IP addresses to be used to have both file modules active in
serving I/O:
For each interface used (1 GbE, 10 GbE) for file I/O
1 GbE uses ethX0 bond, 10 GbE uses ethX1 bond
All network ports on the file modules are bonded by default. Both ports must be
connected to the same subnet as documented
Note the difference: Network ports on Storwize V7000 node canisters are not
bonded
Minimum is one public IP address per interface used, but then only one file module
would serve I/O; the second file module would remain passive (a DNS round-robin example follows this list)
Important: The management and service IP addresses of a Storwize V7000 Unified must
all be on the same subnet.
All management and service IP addresses must be active and the network configured
correctly at initial installation. In case of connectivity problems between the file modules
and the Storwize V7000, the initial installation will fail.
Optional prerequisites: Only needed if these features and functions will be used:
Backup servers: Tivoli Storage Manager or NDMP, supported storage, licensed by
Tivoli Storage Manager Server Value Units
Tivoli Storage Manager HSM servers, supported storage, licensed by Tivoli Storage
Manager for Space Management
Antivirus scan engines
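As an example of the public IP address pool and DNS round-robin mentioned above (the host name and addresses are hypothetical): if the two public addresses 192.0.2.41 and 192.0.2.42 are placed in the IP address pool, both can be registered in DNS under one name, for example unified.example.com, as two address records. Clients that resolve unified.example.com receive the addresses in alternating order, which distributes the CIFS and NFS connections across both file modules.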
10.7 System implementation planning
If the Storwize V7000 Unified will be added to an existing environment, only the planning for
the physical location, power, and connectivity is required. This is because all external
services, such as time synchronization via NTP servers, DNS, and authentication using the
existing method that is set up in the environment, are already available. It is possible that an
add-on, such as SFU for an existing Active Directory infrastructure, is required.
In the same sense, it is necessary to start building the infrastructure (physical location, power,
cooling, connectivity) and the required external services (like NTP, DNS, authentication) first,
before implementing the Storwize V7000 Unified. As shown in Chapter 11, Implementation
on page 139 and Table 10-2 on page 128, the relevant, correct, and often detailed information
has to be entered during the Easy Setup wizard. Steps such as specifying the NTP servers
are mandatory and the Storwize V7000 Unified checks if there is an existing connection
during Easy Setup. If there is no response from, for example, NTP or DNS servers, the Easy
Setup will fail.
If no authentication method is defined during Easy Setup, there will be no data access from
the file client side. This is because the required protocol daemons/services start within the
Storwize V7000 Unified only after authentication is configured.
10.7.1 Configuration details and settings that are required for setup
There are many different options involved in implementing the Storwize V7000 Unified.
Therefore, it is difficult to provide a complete list of all the details required for all options and
combinations. There are also two classes of settings, optional and mandatory. This section
tries to summarize all the mandatory information needed and most of the optional areas to
have it available as a comprehensive overview when implementing the system. However,
there might be some optional areas that are not covered here in detail.
You can find a detailed description of the steps covered during the Easy Setup wizard and its
related configuration information fields in Chapter 11, Implementation on page 139.
The following tables detail the required system setup information. They start chronologically with
the information required for the init tool to prepare the USB key, followed by the information
that is requested during the Easy Setup wizard. See Table 10-2.
Table 10-2 Information for Storwize V7000 Unified setup
Step/Purpose | Entry field/information | Comment/explanation | Your Data/Selection
Init Tool: V7000 IP,
Gateway, Subnet mask
- V7000 management
IP address
- Subnet mask
- Gateway IP address
Mandatory:
- IP for the V7000
storage cluster (not
accessed directly in
normal operations)
- Gateway and Subnet
mask for entire Unified
system
Note: Service IP
addresses for the
V7000 node canisters
cannot be set here;
need access to GUI or
CLI. Recommendation:
set them first after
completing Easy Setup
and get GUI access the
first time
Init Tool: Storwize
V7000 Unified IP and
file module details
- Storwize V7000
Unified cluster
management IP
address
- File Module 1 service
IP address
- File Module 2 service
IP address
- Internal network IP
address range
Mandatory:
- IP for the Unified
cluster: Needed and
used for all
management
operations of the
Storwize V7000 Unified
- Individual service IP
for direct access to a file
module for
troubleshooting
- Internal network for
direct communication
and troubleshooting
Easy Setup:
System attributes
- System name
- NetBIOS name
- Time zone
- NTP server IP
address
- Alternate NTP server
IP address
Mandatory:
- Name of the Storwize
V7000 Unified cluster
- Name by which this
cluster is seen on the
network (SMB protocol)
and known to an AD
domain
- Continent and City:
different scheme, not
sorted by GMT+/-Xh,
see link
http://publib.boulder.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.doc%2Fappx_timezones.html
- NTP server is required
Optional:
- It is recommended to
have a backup/alternate
NTP server
Easy Setup:
Licenses
- External
virtualization (by
enclosures)
- Remote Copy (by
enclosures)
-Real-time
Compression by
enclosures
Mandatory:
- Number of storage
enclosures of
virtualized SAN storage
behind V7000
- Number of storage
enclosures used for
block I/O based Remote
Copy functions of
V7000 (via Fibre
Channel SAN)
- Number of storage
enclosures using
Real-time Compression
Easy Setup Step 4:
Support Notifications
Step 1:
- Email server IP
address
- Company name
- Customer email
- Customer telephone
number
- Off-shift telephone
number
- IBM Support email
address
Step 2:
- Enable a proxy
server to access the
Internet
Optional, but strongly
recommended:
- Storwize V7000
Unified cluster will use
this email server to
send email
- Company name to
appear in the email sent
- Customer contact to
receive email from
Storwize V7000 Unified
- Prime shift telephone
number which will be
called by IBM Support
- Off-shift telephone
number if prime shift
number is not answered
24 hours
- Leave at default: It is
the default address in
IBM for call home alerts
Step 2:
- if access via proxy
server is required, click
enable and provide the
proxy detail information
Easy Setup Step 5:
Domain Name System
- DNS domain name
- DNS servers
- DNS search
domains
Mandatory:
- Name of public
network domain
associated with
Storwize V7000 Unified
operations
- IP addresses of your
DNS servers. One is
required; more are
recommended for
availability/redundancy
Optional:
- Additional domain
names to be searched
in
Easy Setup Step 6:
Authentication
- Active Directory (AD)
- LDAP
- Samba PDC
- NIS (NFS only)
- Extended NIS
- Local authentication
Mandatory (if not
specified here, use
GUI/CLI to configure
authentication later):
- Radio buttons, choice
between AD, LDAP,
Samba PDC, and NIS
Optional:
- Extended NIS can be
chosen with AD or
Samba PDC
Authentication
- Details for Active
Directory
(only required if
choice is Active
Directory)
- Server
- User ID
- Password
- Enable Services for
UNIX (SFU)
- Domain name,
ranges, schema mode
If Extended NIS in
addition:
- Primary NIS domain
- Server map
- Enable user ID
mapping
- Domain map
- User map
- User ID range
- Group ID range
(only required if
choice is Active
Directory)
- IP address of Active
Directory server
- Administrative user ID
- Password for
administrative user ID
- Check-box; select if
support for UNIX is
required
- (only if SFU selected):
Name of domain SFU
belongs to, lower to
upper limit of the range
for user and group IDs,
SFU schema mode
used (SFU or
RFC2307)
if Extended NIS in
addition:
- Name of the primary
NIS domain
- NIS server to NIS
domain map
- Check Enable if NIS
user ID mapping to be
used. This enables the
next four topics:
- Mapping of the AD
domain to the NIS
domains
- Define how to deal
with user exceptions
(DENY, AUTO, or
DEFAULT)
- Specify user ID range
to be used with AUTO
option
- Specified group ID
range to be used with
AUTO option
Authentication
- Details for LDAP
(only required if
choice is LDAP)
- Specify one or more
LDAP servers
- Search base for
users and groups
- Bind distinguished
name (DN)
- Bind password
- User suffix
- Group suffix
- Workgroup
- Security Method
- Enable Kerberos
- Server name
- Realm
(only required if
choice is LDAP):
- IP addresses of LDAP
servers
- Search base as
defined in LDAP server
- DN as defined in the
LDAP servers
- Password for this DN
- User suffix as defined
by the LDAP server
- Group suffix as
defined by the LDAP
server
- Domain name
- If SSL or TLS is used,
a window to specify
certificate will appear. If
the setting is off, the
option for Kerberos
appears (GUI). Usage
of SSL/TLS and
Kerberos can be
configured using the
CLI
- Check box; check to
enable Kerberos
- (only if Kerberos
enabled): Name of
Kerberos server
- (only if Kerberos
enabled): Kerberos
realm
Authentication
- Details for Samba
PDC
(only required if
choice is Samba
PDC)
- Server host
- Administrative user
ID
- Administrative
password
- Domain name
- NetBIOS name
If Extended NIS in
addition:
- Primary NIS domain
- Server map
- Enable user ID
mapping
- Domain map
- User map
- User ID range
- Group ID range
(only required if
choice is Samba
PDC):
- IP address of the NT4
PDC server
- User ID with admin
authority to access the
NT4 PDC server
- Password for this user
ID
- NT4 domain name
- NT4 NetBIOS name
if Extended NIS in
addition:
- Name of the primary
NIS domain
- NIS server to NIS
domain map
- Check Enable if NIS
user ID mapping to be
used. This enables the
next four entry topics:
- Mapping of the NT4
domain to the NIS
domains
- Define how to deal
with user exceptions
(DENY, AUTO, or
DEFAULT)
- Specify user ID range
to be used with AUTO
option
- Specified group ID
range to be used with
AUTO option
Authentication
- Details for NIS (NFS
only)
(only required if
choice is NIS (NFS
only), also known
as: Basic NIS)
- Primary NIS domain
- Server Map
(only required if
choice is NIS (NFS
only), also known as:
Basic NIS):
- Name of primary NIS
domain
- NIS server to NIS
domain map
Authentication
- Details for local
authentication
- User/group name
- Password
Optional: Group ID.
Otherwise, it will be set
automatically
Easy Setup Step 8:
Configure storage
- Automatically
configure internal
storage now
Optional here in Easy
Setup, but mandatory
to be configured to have
V7000 provide storage
capacity for the
Storwize V7000 Unified
system:
- Click yes if internal
storage should be
configured as specified
in Configuration
Summary
- If not, use GUI/CLI
later to configure the
internal storage
provided by the V7000.
If external,
SAN-attached storage
is used; use its
appropriate GUI/CLI
interfaces to configure
Easy Setup Step 9: Public Networks
Entry fields: New network; Subnet; VLAN ID; Default gateway; IP address pool; Additional gateways; Interface
Comment/explanation: Optional here in Easy Setup, but mandatory to configure to enable access to data for file clients (if not configured here, use the GUI/CLI later to enable file I/O):
- Select New network to get to the next windows.
- Subnet with the network mask in CIDR syntax (that is, the number of bits reserved for the network mask); see Table 3 in http://publib.boulder.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.doc%2Fsvc_hardware_planning.html
- VLANs cannot be configured in Easy Setup; in a later step, enter the VLAN number if VLANs are to be used (note: VLAN 1 is not supported).
- IP address of the default gateway for this subnet.
- Pool of public IP addresses used to serve file I/O using DNS round-robin; the minimum is one, but at least two are needed to have both file modules serving I/O.
- Optional: add if there are additional gateways.
- Select ethX1 for the 10 GbE interface bond, or ethX0 for the 1 GbE interface bond.
10.7.2 Configuration options for file access only
In Table 10-3, we show the file access-specific configuration options.
Table 10-3 File access-specific configuration options
Step/Purpose | Entry field/information | Comment/explanation | Your Data/Selection
Client systems - 10 GbE attached
- 1 GbE attached
List systems, IP
addresses, users
IP addresses, users
Users - Local users in
Storwize V7000
Unified for
management and
monitoring
- Users by client
systems for data
access
- Create local users and
select their roles
- Create users on client
systems, specify
access rights, create
same users within
authentication servers if
applicable
Local Users Storwize
V7000 Unified:
Users for data access
by client:
File systems - Sizes and capacity
needed
- include ILM and
policies if required
ILM - Define tiered
storage pools
- Define tiered file
system pools
- Define ILM policies
File sets - Independent
- Dependent
Add more granularity:
- To define snapshot
rules and quota
- To define quota
Exports/Shares By Protocol:
- CIFS
- NFS
- HTTPS
- FTP
- SCP
Mixed exports if
required, for
example:
CIFS and NFS
- Define owner at initial
CIFS share creation
- Define extended ACLs
for CIFS if required
- Define
authorization/access
rights from the client
side
Snapshots - Creation rules
- Retention rules
- By independent file set
- By file system
Quota - By user
- By group
- By file set
For each entity
required:
- Define soft limit and
hard limit
Backup - Method
- Server IP
addresses
- Choose Tivoli Storage
Manager or NDMP
- Specify IP addresses
of backup servers
HSM - Define external file
system pool
- Define Tivoli
Storage Manager
HSM settings
Async Replication: Prepare the remote partner systems. Define the file system to be replicated; create the target file system on the remote system; define the replication settings. Define the schedule and task.
GPFS internal replication: Define multiple storage pools per file system (at a minimum for the system pool, with other data pools if required). Define whether metadata, data, or both are to be replicated.
Real-time Compression: Define separate pools for the data and the metadata.
10.7.3 Configuration options for Block I/O access only
Table 10-4 shows the block I/O specific configuration options.
Table 10-4 Block I/O specific configuration options
Step/Purpose | Entry field/information | Comment/explanation | Your Data/Selection
Hosts
- Entry field/information: Host names for FC or iSCSI; WWPNs
- Comment/explanation: Create host objects with the associated FC WWPNs or iSCSI IQN; create SAN zoning
Storage configuration by host
- Entry field/information: Capacity; storage pool layout; volumes; thin provisioning; Easy Tier
- Comment/explanation: Pool and volume layout based on the overall planning and modeling results; define thin provisioning parameters; define Easy Tier start configurations (hybrid storage pools)
Copy Services partnerships
- Entry field/information: Metro Mirror; Global Mirror; volumes and volume pairs; Consistency Groups
- Comment/explanation: To Storwize V7000, SVC, or Storwize V7000 Unified systems (V7000 to V7000 must be Fibre Channel-attached)
FlashCopy requirements
- Entry field/information: Volumes; type and options used; Consistency Groups
Chapter 11. Implementation
This chapter describes the steps to implement the Storwize V7000 Unified from hardware
setup to providing host storage. It is not expected that one resource will perform all the steps
because several different skill sets are likely to be required during the process.
11.1 Process overview
The installation, implementation, and configuration tasks are grouped into major steps in the
following processes. Each step is performed sequentially and in most cases must be
completed before the next step can begin.
A task checklist is included to help with the implementation. It ensures that all steps are completed and serves as a quick reference for experienced implementers. The checklist is also useful for planning a timeline and for identifying the resources and skills needed.
11.2 Task checklist
The major steps for installing and configuring the Storwize V7000 Unified are listed below to
give a quick overview and aid in planning. These steps are covered in detail in the following
sections. Table 11-1 shows an implementation checklist.
Table 11-1 Implementation checklist
Important: Although the intent is to provide a complete end-to-end checklist and
procedures for implementation, the latest product manuals should always be referenced.
Where possible, manual references are included.
Hardware rack and stack
- Preparation: Complete the planning checklist, including IP addresses, protocols, and server names and addresses.
- Packing slips: Check all items received against the packing lists.
- Environmentals: Confirm cooling and power, room access, and safety.
- Rack storage control enclosure: Rack mount the storage enclosure.
- Rack expansion enclosures: If any expansion enclosures are shipped, rack mount these now using the recommended layout.
- Rack file modules: Rack the two file modules.
- Cabling: Power; control enclosures; expansion enclosures (SAS cables); file modules; Ethernet.
Power On (power on and check in this order)
- Network: Switches, routers, and devices.
- Power on storage enclosures: Storage expansions, then the storage control enclosure.
- Power on file modules: Both.
Software
- If software is preinstalled, skip. Prepare for reload if required. Reinstall software if required.
Initialize
- Configure USB key: Set storage service IPs. Run the init tool and enter settings.
- Initialize the storage: Insert the key into the storage enclosure. Confirm success.
- Initialize the file modules: Insert the key into one file module. Confirm success.
Base Configuration
- Configure and connect to GUI: Set up browser access and PuTTY.
- EZ-Setup: Log on to run EZ-Setup. Complete as much of the configuration as possible with the information provided.
- Backups: Set up the scheduled backup.
Health check
- Run health checks: Confirm that the system is healthy.
Security
- Change passwords: Change the admin password on the file modules and the superuser password on the block storage.
- Create users: Create more user logons as wanted.
Storage controller (further configuration if block storage is being used)
- SAN requirements: Connect to the SAN and zone.
- Configure storage: Configure and discover any external storage being used. Discover MDisks and build pools.
Block storage
- Volumes: Configure volumes as required.
- Hosts: Define hosts and host ports to the cluster.
- Mapping: Map volumes to hosts.
- Copy services: Configure copy services: FlashCopy, inter-cluster relationships, and remote copy (Global Mirror/Metro Mirror).
File storage (configure the file storage)
- File systems: Create file systems from the pools.
- File sets: Define file sets if wanted.
- Shares: Create shares. Add authorized users to shares.
11.3 Hardware unpack, rack, and cable
11.3.1 Preparation
Ensure that you have reviewed and completed the topics described in the planning chapter. This includes the physical environment, allocation of IP addresses and names, identification and preparation of network resources (for example, Domain Name System (DNS) and Network File System (NFS) servers), and access to the online information center.
Also, ensure that all personnel are familiar with safety information.
For the following sections, see the IBM Storwize V7000 Unified Information Center for details and the latest updates at this website:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000.unified.140.doc/ifs_ichome_140.html
Tip: IBM has provided an IBM Storwize V7000 Unified Model 2073-720 Quick Start Guide:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000.unified.142.doc/ifs_bkmap_quickinst_flyer.pdf
11.3.2 Review packing slips and check components
Locate the packing slip in the shipping boxes and review it. Confirm that all the ordered
components and features have been received.
The following minimum items are needed.
Control enclosure
At a minimum, the following control enclosure components are needed:
Control enclosure (models 2076-112, 2076-124, 2076-312, or 2076-324). The last two
digits of the model number identify the number of drive slots, either 12 or 24.
Expansion enclosure (models 2076-212 or 2076-224) if ordered.
Rack-mounting hardware kit, including per enclosure:
Two rails (right and left assembly)
Two M5 x 15 Hex Phillips screws per rail (two rails)
Two M5 x 15 Hex Phillips screws per chassis
Note: Two parts of the rail kit are attached to each side of the enclosure.
Two power cords.
Drive assemblies or blank carriers (installed in the enclosure).
Verify the number of drives and the size of the drives.
Other items that are shipped with control enclosure
The following support items are shipped with the control enclosure:
Documents
Read first flyer
Quality hotline flyer
Environmental flyers
Safety notices
Limited Warranty information
License information
License Function authorization document
IBM Storwize V7000 Quick Installation Guide
IBM Storwize V7000 Troubleshooting, Recovery, and Maintenance Guide
Environmental notices CD
Software CD that contains the publication PDFs, and the information center content.
One USB key, also known as a flash drive, is located with the publications.
Additional components for control enclosures
The following additional components, if ordered, are for the control enclosures:
Fibre Channel cables, if ordered
Small form-factor pluggable (SFP) transceivers that are preinstalled in the enclosure
Longwave SFP transceivers, if ordered
Additional components for expansion enclosures
Two serial-attached SCSI (SAS) cables are needed for each expansion enclosure.
Two file modules
Each file module box contains the following components:
File module (server)
Rack-mounting hardware kit, including the following items:
Two sets of two rails (right and left assembly)
Large cable tie
Cable ties
Two sets of four M6 screws per rail (two rails)
Two sets of two 10-32 screws per chassis
Cable management support arm
Cable management arm mounting bracket
Cable management arm stop bracket
Cable management arm assembly
Note: The rail kits for the servers differ from those for the control enclosure.
Two power cords
Additional components for file modules
The following additional components come with the file modules:
Documents
Read first flyer
Quality hotline flyer
Environmental flyers
Safety notices
Limited warranty information
IBM Storwize V7000 Quick Installation Guide
IBM Storwize V7000 Troubleshooting, Recovery, and Maintenance Guide
License information
License Function authorization document
Environmental notices CD
Software CD that contains the publication PDFs, and the information center content
Small form-factor pluggable (SFP) transceivers that are preinstalled in the enclosure
Two USB keys, one for each file module
11.3.3 Confirm environmentals and planning
If not already done, you should review the planning chapter in this book. Ensure that you
understand locations for each component, that the environment has sufficient capacity in
terms of power and cooling, and that rack space is available. Also, confirm that the planning
worksheet has been completed.
Two people are needed to perform the racking of the modules. It is also recommended that
two people are used to perform the cabling because it makes the task much easier.
When racking, do not block any air vents. Usually 15 cm (6 inches) of space provides the
appropriate airflow.
Do not leave open spaces above or below an installed module in the rack cabinet. To help
prevent damage to module components, always install a blank filler panel to cover the open
space and to help ensure the appropriate air circulation.
For the following hardware installation tasks, see the online IBM Storwize V7000 Unified Information Center for the latest details or further clarification:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000.unified.140.doc/ifs_ichome_140.html
11.3.4 Rack controller enclosures
Perform these steps for the rack controller enclosures:
1. Install the rails in the rack for the controller enclosure. Allow 2U for this module.
2. At this time, you might find it easier to install the rails for the expansion enclosures and
also the file modules if they are in the same rack. This is true because there is more room
if this step is done before any modules are installed.
3. Remove the enclosure end caps by squeezing the middle of the cap and pulling.
4. From the front of the rack, with two people, lift the enclosure into position and align the
rails. Carefully slide it into the rack until it is fully seated.
5. Insert the last two screws (one each side) to secure the enclosure to the rack, then replace
the end caps.
11.3.5 Rack expansion enclosures
Repeat these steps for each expansion enclosure:
1. Install the rails in the rack for the expansion enclosure if not already done. Allow 2U.
2. Remove the enclosure end caps by squeezing the middle of the cap and pulling.
3. From the front of the rack, with two people, lift the enclosure into position and align the
rails. Carefully slide it into the rack until it is fully seated.
4. Insert the last two screws (one each side) to secure the enclosure to the rack, then replace
the end caps.
11.3.6 Rack file modules
Perform these steps for each rack file module:
1. Install the rails in the rack for the file module if not already done. Allow 2U.
2. Extend the rails fully out the front of the rack.
3. From the front of the rack, with two people, lift the enclosure into position with the front
slightly higher and align the pins at the rear. Then, lower the front and align the front pins.
Carefully slide it into the rack until it is fully seated and the latches are secure.
4. If required, insert the two screws (one each side) to secure the enclosure to the rack.
5. Install the cable management arm at the rear by using the instructions included with the
arm. This can be fitted on either side.
6. Repeat the preceding steps for the second file module.
11.3.7 Cabling
There are a number of cables that are needed to interconnect the modules of the Storwize
V7000 Unified and to provide connectivity to the network and servers. Most are required
before proceeding with the installation. Connect the cables as follows.
Power
Each module has two power cords. These should be connected to diverse power supplies
and electrically separated as much as possible, preferably to different power strips in the rack,
which in turn are powered from different distribution boards.
Control Enclosures
Follow these steps for each control enclosure:
1. Ensure that the power switches on both power supplies are turned off.
2. On one power supply, prepare the cable retention bracket, release, and extend the clip and
hold to one side.
3. Attach the power cable and ensure that it is pushed all the way in.
4. Push the retention clip onto the cable. Then, slide the clip down to fit snugly behind the
plug.
5. Tighten the fastener around the plug.
6. Route the cable neatly to the power source and connect. Dress the cable away from the
rear of the enclosure and secure any excess so it does not interfere with data cabling.
7. Repeat the preceding steps for the other power supply and install the second power cable.
Expansion enclosures (if present)
The power supplies are the same as the control enclosures. Connect the power cables, two
per enclosure using the same procedure, for each expansion enclosure.
Tip: Install the heaviest cables first (power) and the lightest cables last (fiber) to minimize
the risk of damage.
File modules
Follow these steps for each file module:
1. Attach the first power cable to the file module and ensure that it is pushed all the way in.
2. Route the cable through the cable management arm, allowing plenty of slack so that the
cable does not become tight when the arm and module are extended.
3. Connect the cable to the power source. Dress the cable and secure any excess so it does not interfere with data cabling.
4. Repeat for the second power cable and for the two power cables in the second module.
Ethernet
There are six Ethernet ports on each file module and four on each control enclosure, with an optional four more if Internet Small Computer System Interface (iSCSI) is specified. The 1 Gb ports require copper cables of at least CAT5 UTP. The 10 Gb ports are connected using multimode (MM) fiber cables with LC connectors. These are connected as follows.
File Modules
Use Figure 11-1 as a reference for plug locations. Route each cable through the cable
management arm. Connect the following cables for each file module.
Figure 11-1 File module rear
Port 1: 1 Gb 7 left <required> Internal connection between file modules. Connect a short
cable from this port to port 1 on the other file module.
Port 2: 1 Gb 7 right <required> Internal connection between file modules. Connect a short
cable from this port to port 2 on the other file module.
Port 3: 1 Gb 8 left <required> Provides management connection and optional data.
Port 4: 1 Gb 8 right <optional> Alternate management connection and optional data.
Slot4-0: 10 Gb 2 right <optional> Data connection only.
Slot4-1: 10 Gb 2 left <optional> Data connection only.
Control Enclosure
Use Figure 11-2 as a reference for plug locations. Connect the cable to the Ethernet port and
route them neatly to the rack cable management system.
Figure 11-2 Control module rear
Port 1: 1 Gb <required> Management and service connection.
Port 2: 1 Gb <optional> Alternate management connection.
SAS cables
If expansion enclosures have been installed, they are connected to the control enclosure by
using the SAS cables shipped. If no expansion enclosures are installed, then skip this step.
The control enclosure has two SAS ports on each node canister. Port 1 from each canister forms the first chain, which normally connects to the expansion enclosures racked below the control enclosure. Port 2 from each node canister forms the second chain, which normally connects to the upper expansions. The top canister always connects to the top canister of the next enclosure and the bottom to the bottom.
Connect the expansion enclosures by using Table 11-2. Refer to Figure 11-3 for port locations.
Figure 11-3 SAS cabling for three expansions
In Table 11-2, we show the SAS connections. Each row shows how a unit connects to the next unit in the chain; the upper canister port listed always connects to the upper canister of the second unit, and the lower canister port to the lower canister.
Table 11-2 SAS connections
- 1 expansion: Controller port 1 (upper and lower canisters) to Expansion 1 port 1
- 2 expansions: Controller port 2 (upper and lower canisters) to Expansion 2 port 1
- 3 expansions: Expansion 1 port 2 (upper and lower canisters) to Expansion 3 port 1
- 4 expansions: Expansion 2 port 2 (upper and lower canisters) to Expansion 4 port 1
- 5 expansions: Expansion 3 port 2 (upper and lower canisters) to Expansion 5 port 1
- 6 expansions: Expansion 4 port 2 (upper and lower canisters) to Expansion 6 port 1
- 7 expansions: Expansion 5 port 2 (upper and lower canisters) to Expansion 7 port 1
- 8 expansions: Expansion 6 port 2 (upper and lower canisters) to Expansion 8 port 1
- 9 expansions: Expansion 7 port 2 (upper and lower canisters) to Expansion 9 port 1
Fiber optic cables
The default configuration is short wave SFPs, which require MM cables with LC connectors. If
long distance is required, the SFPs can be specified as long wave, in which case it is
important to use the appropriate single mode (SM) cable.
Dress cables gently and ensure that there are no kinks or tight bends. Do not use cable ties or any other hard material to hold the cables because these cause kinks and signal loss. Hook-and-loop fastener cable wraps provide the most economical and safest method of tying cables.
File module
The use of 10 Gb Ethernet cables has already been covered in the Ethernet section. The
other fiber optic cables that are connected to the file module are Fibre Channel (FC) cables
that connect to the control enclosure.
Four FC cables are required: two from each file module, one to each node canister in the control enclosure. These are connected in a mesh pattern to provide full path redundancy.
Select short cables because the file module and control enclosure should normally be close
to each other in the same rack. Connect the cables as shown in Table 11-3.
Table 11-3 Cable connections
- File module 1: Fibre Channel slot 2, port 1 to upper canister Fibre Channel port 1
- File module 1: Fibre Channel slot 2, port 2 to lower canister Fibre Channel port 1
- File module 2: Fibre Channel slot 2, port 1 to upper canister Fibre Channel port 2
- File module 2: Fibre Channel slot 2, port 2 to lower canister Fibre Channel port 2
Refer to Figure 11-4 for connector locations.
Figure 11-4 File to control modules, Fibre Channel cables
Control enclosure
If block storage is being configured, FC connections are required to your storage area network (SAN) fabrics. Two FC ports are available on each node canister for connection to the SAN. These should be connected in a mesh arrangement to provide fabric path redundancy.
11.4 Power on and check out
We describe how to power on and check out in the topics that follow.
11.4.1 Network
Ensure that all network devices are powered on and ready, including Ethernet and SAN
switches. Access should also be available to network resources from the
Storwize V7000 Unified, including DNS, Network Time Protocol (NTP), and email server.
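A quick reachability check of those resources from a Linux or UNIX workstation on the same network can save time later. This is an illustrative sketch only; the addresses are placeholders for your own DNS, NTP, and email servers:
ping -c 3 192.0.2.53     (DNS server)
ping -c 3 192.0.2.123    (NTP server)
ping -c 3 192.0.2.25     (email/SMTP server)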
11.4.2 Power on expansions and controllers
Power up the disk storage first. Start with the expansion enclosures, then the control
enclosure, followed by the file modules. Ensure that each group of enclosures is successfully
powered on before beginning the next.
Expansion enclosures
Power on both power supplies by using the switch on each one.
Successful power-on is indicated by the following status:
Power supply LEDs, power LED on, three fault LEDs off, on both power supplies
Front left-end cap, power LED on, all others off
Canister LEDs, status LED on, fault LED off on both canisters
Drive LEDs come on as each drive becomes ready
Control enclosure
Power on both power supplies by using the switch on each one.
Successful power-on is indicated by the following status:
Power supply LEDs, power LED on, battery good LED on or flashing (if charging), four fault LEDs off, on both power supplies
Front left-end cap, power LED on, others ignore
Canister LEDs on both canisters, power status LED on, fault LED and system status
depend on current state and can be ignored at this time
Drive LEDs come on as each drive becomes ready
11.4.3 Power on file modules
The power management and support functions are running if there is AC power on one or
both of the power cables, but the main processor is not powered on. When power is applied to
the cable, the management functions start. This process takes approximately 3 minutes.
When complete, as indicated by a flashing power LED on the front of the module, the unit is
ready to be powered on.
Connect a keyboard and display to the module and a mouse if it is a new installation. Monitor
the boot process on the display.
Power on by pressing the power button on the front of the module. The power LED comes on solid and the boot process starts. If the file module has code loaded, the module boots and completion is indicated by a flashing blue attention LED. If no code is present, a message on the monitor indicates that the boot has failed.
11.5 Install the latest software
Ensure that as part of the implementation, the latest software levels are installed. In most
cases, a more recent level will be available than that shipped on the module, so an upgrade
will be required. Or, if the file modules have no software installed, then the full package can be
installed by using the following procedure. The control enclosure (Storwize V7000) will be
automatically upgraded, if required, as part of the file module upgrade.
In most cases, the Storwize V7000 Unified is preloaded and a software upgrade can be
performed when the cluster initialization and configuration is complete. In this case, you can
skip the rest of this section and go to 11.6, Initialize the system on page 154.
If the previous state of the cluster is not known or the integrity of the software is suspect,
perform the full restore procedure to begin with a clean software installation.
For details about the software structure and concurrent upgrade processes, see 11.5, Install
the latest software on page 152. Because the full DVD restore process is rarely required and
if done, is part of an initialization, it is included here.
11.5.1 Determine current firmware and code levels
Determining the current levels can be difficult because it generally requires the system to be booted and running, and a valid logon user and password to be known. If the intent is to restore the code regardless, the previous level is not important and the restore can proceed.
To determine the file module software level, connect a keyboard and display to the module.
Boot and wait for the login prompt. The initial prompt will be preceded by the host name. If
console messages have been displayed, press Enter to restore the prompt message.
login:
The default user/password is admin/admin. If the cluster has been in use previously, the user
settings might have been changed, so you will need to know a valid user login.
When logged in, issue the lsnode command. This displays the file nodes. The software level
is listed under the heading Product version. If the modules are not currently a cluster, you
might need to repeat the process on the other module. If the modules are at different software
levels, the node performing the initialization (the one you put the key in) will reload the other
node to the same level as itself.
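As a minimal illustration (the prompt shown is a placeholder; the actual host name prefix will vary), run the command after logging in:
[hostname]$ lsnode
The installed software level is shown under the Product version heading of the output.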
To determine the level of the Storwize V7000 control enclosure, connect to the service assistant directly by using the service IP address of one of the nodes, or connect to the management address followed by /service, and log on using the superuser password. If the password or IP address is not known and is not at the default, wait until the initialization step at 11.6.2, Initialize the Storwize V7000 controller on page 157 is complete. Then, you will be able to connect with the management IP address that you defined and the default superuser password of passw0rd.
11.5.2 Preparation for reload
We describe preparing to reload the code in the topics that follow.
Download and prepare software
To do a full DVD restore, you must obtain the International Organization for Standardization
(ISO) image from IBM Support.
Using a suitable application, burn a DVD from the xxx.iso file. The file is large (over 4 GB) and produces a bootable DVD that contains all the required software.
The file modules are restored individually. If time is critical, you might choose to burn two DVDs and perform the installations at the same time. However, because the process ejects the DVD after about 25 minutes, the second module can simply be started at that point, which also works well.
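A minimal sketch of burning the image on a Linux workstation follows. The device path /dev/dvd and the file name xxx.iso are placeholders, and growisofs is only one of several suitable tools:
growisofs -Z /dev/dvd=xxx.iso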
Prepare the hardware
The file modules must be installed in the rack and power connected. The power indicator on
the front of the module will be flashing to indicate the power control module is booted up and
the server is powered off and ready to start.
Connect a standard PC monitor and keyboard (or equivalent KVM tool) to the module.
The following steps are not required with a new installation and are only needed if the file
module has been redeployed or the install aborted and cleanup is required. These steps
might also be required as part of a recovery. The following steps should only be done if
required and are included here for completeness:
1. If this module has previously been used as a file module, remove the previous configuration to achieve a clean installation. It is important that the entire /persist/ directory is removed.
2. If any hardware has been replaced in the server or if the basic input/output system (BIOS)
settings are in doubt, then perform the BIOS reset procedure found in the information
center. This details the following process:
a. Power on the server using the power button on the front.
b. When the BIOS splash window is displayed, press F1 to start the BIOS setup.
c. Select Load Default Settings.
d. Then, select Boot Manager.
e. Select Add device.
f. Select Legacy only.
g. Exit the panels by using the escape key and save the configuration on exit.
3. If necessary, configure the Redundant Array of Independent Disks (RAID) controller on the
server for the mirrored disk drive that is used for the system by using the procedure in the
information center.
11.5.3 Reinstall the software
Power on the server. If you have just connected the ac power, you will need to wait a few
minutes for the power controls to initialize before being able to power on. If already on, reboot
the server by using Ctrl-Alt-Del or if in BIOS, by exiting the BIOS setup utility.
Insert the DVD in the drive. Power must be on to open the DVD drive. Therefore, as soon as
the server is powered up, load the DVD.
Watch the video monitor and wait for the DVD to boot. The software installation utility will
immediately prompt for confirmation with this message:
- To install software on a node, press the <ENTER> key.
*NOTE* - this will destroy all data on the node
Caution: This process reinstalls the software from scratch. Perform this action only if you
have determined it is required.
- Use the Function keys listed below for more information
[F1 - Main] [F5 - Rescue]
boot:
Press Enter to begin the installation process.
The utility now builds the disk partitions if needed and begins loading the Linux operating system and the Storwize V7000 Unified software. At least two reboots occur to complete the installation. The first phase copies the software to the disk; progress can be monitored on the monitor. After about 25 minutes, the tool ejects the DVD and reboots the server.
Note: After the DVD is ejected, it can be used to begin the installation process on the other server if required.
The server then boots from its disk, builds the Linux operating system from the copied components, and installs the Storwize V7000 Unified software. This phase takes about 30 minutes.
When this process is complete, the server is booted up on the operational code and is ready
for use. This boot takes less than 5 minutes.
Successful installation and preparation of the file module is indicated by the blue attention
light on the server flashing.
Repeat the process for the other file module.
11.6 Initialize the system
The next step is to initialize the system and build a cluster incorporating the Storwize V7000
storage (control enclosure and any expansion enclosures) and the two file modules.
11.6.1 Configure USB key
A USB key is shipped with the Storwize V7000 Unified. If the key is misplaced, any USB mass
storage key can be used. However, some models of keys are not recognized; therefore, it
might require trying a few to succeed. Take care not to use a large capacity key because this
can cause the key to not be recognized.
The key must be formatted with a FAT32, EXT2, or EXT3 file system on its first partition.
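If you need to reformat a replacement key on a Linux workstation, a minimal example follows. The device name /dev/sdX1 is a placeholder; identify the correct device first (for example, with lsblk) because the command erases that partition:
mkfs.vfat -F 32 /dev/sdX1     (creates a FAT32 file system on the first partition of the key)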
The shipped key is formatted and preinstalled with the initialization tool. If this is missing or
you are using a replacement key, this tool can be downloaded from the IBM Support site:
http://www.ibm.com/storage/support/storwize/v7000/unified
Follow these steps to configure the USB key:
Note: There should be a key shipped with the Storwize V7000 controller unit and one with
each file module. Use the key that is shipped with a file module because it should have the
latest version of the inittool. If the latest tool is downloaded, then any of the keys will work.
Keep the keys stored and secure for later use in case of recovery, rebuild, or redeployment.
1. Insert the key into any Windows XP or higher workstation. If the tool does not auto launch,
then open the USB key and run InitTool.exe. This launches a window, as shown in
Figure 11-5.
Figure 11-5 Init tool first window
2. Set the storage service IP addresses. The service IP addresses assigned to the Storwize
V7000 storage enclosure nodes are not considered part of the cluster configuration.
These addresses are the base addresses for the nodes and are active independent of the
cluster software. They are not set or changed by the cluster configuration. It is important to
set these addresses in case access is needed during recovery.
Use the set service assistant IP address option on the init tool to prepare the USB key.
Insert the key in the enclosure nodes to set their address. The tool creates one address.
Therefore, the procedure needs to be done twice, once for each node.
3. Select option Unified (File and Block). This universal tool is used for all Storwize V7000
installations. The first option, block system only, sets up the initialization process for
installing just a Storwize V7000 storage controller. The second option is required to
initialize both the storage controller and the file modules for a unified configuration.
USB key files: The USB key will contain the following files:
autorun.inf ........ windows auto run file
inittool.exe ....... Storwize initialization tool
4. Click Next to get the first setup options, as shown in Figure 11-6.
Figure 11-6 Init tool system settings
This IP address is the management address for the System component, which is the
Storwize V7000 storage controller. As described in the planning section, all the
management IP addresses must be in the same subnet.
5. Enter the IP address, mask, and optional gateway. Then, click Next to get the window
shown in Figure 11-7.
Figure 11-7 Init tool file settings
6. On this panel, give the IP details for the unified management interface, and also the IP
addresses for file modules 1 and 2. Select from the pull-down menu a range of addresses
that are not being used in your network. These addresses are used internally between the
modules and cannot be accessed from your network.
7. Click Next to see the instructions to use the key, as shown in Figure 11-8.
Figure 11-8 Init tool final window
8. Click Finish. Confirm that the key now has two new files, satask.txt and cfgtask.txt.
9. Eject and remove the USB key from your workstation.
11.6.2 Initialize the Storwize V7000 controller
Take the following steps to initialize the Storwize V7000 controller:
1. Ensure that both node canisters are in candidate status before continuing. If the service IP is known on either node, browse to this address and log on using the superuser password (default: passw0rd). The status of the nodes is displayed on the home window.
Or, confirm the status of the three LEDs on each canister to be on-off-flashing. Read from
the right on the top canister and from the left on the bottom one.
2. If the status is incorrect, you need to connect to the service assistant interface to resolve
this. The IP address can be set by using the USB key if not known. If the status shows as
service, then perform the action to remove from service. If the status is active, an
operational cluster is present and needs to be removed. Follow the Storwize V7000
procedures to remove the cluster or contact IBM for assistance.
3. When you have both nodes in candidate status, insert the key in any USB port on the rear
of your Storwize V7000 controller enclosure. It is better to use a USB port on the top
canister because the canister executing the initialize command becomes the first node,
node1. This helps prevent confusion later with node1 now being in slot 1. The fault LED
begins flashing. When the fault LED stops flashing, the task is complete and the key can
be removed.
Hint: In case the following steps need to be tried again, it is wise to make a copy of the
files on the key on to your workstation now so it is easier to resume from this point.
USB key files: The USB key contains the following files:
autorun.inf ........ unchanged
inittool.exe ....... unchanged
satask.txt ......... Storage command file. Contains the initialize command
cfgtask.txt ........ File module command file. Contains initialize command
4. Confirm also that the three status LEDs are on-off-on on both canisters in the enclosure, indicating that the node canisters are active in a cluster.
11.6.3 Initialize the file modules
Initialize the file modules by performing the following steps:
1. Confirm that both the file modules are booted up and ready to be initialized. This is
indicated by the flashing blue attention indicator on each file module. Both must have this
LED flashing before continuing.
2. Now insert the key into any USB port on one file module.
3. The blue LED comes on solid and remains on until the process has completed. If the blue
attention LED on the module that the key is inserted in begins flashing again, this indicates
a failure. If this happens, remove the key and interrogate the results file. Refer to the
Information Center for analysis of the error and recovery actions. Successful initialization
is indicated by both blue LEDs going off.
4. Wait for the initialization process to complete (both attention lights going off). Normal
initialization takes approximately 15 minutes. However, this time can be extended if the
other file module needs to be upgraded to the same code level (plus 1 hour) or if the
control enclosure (Storwize V7000) requires code upgrade (plus 2 hours).
5. Remove the USB key and insert the key into your workstation and perform either of these
actions:
a. Review the results file in an editor.
b. Start the InitTool program from the key (if it did not auto run). This action inspects the
results file and gives a message to say the initialization was successful.
Hint: After the command has been executed, a result file and a Secure Shell (SSH) key file are written back onto the key. You can review the result file satask_result.html to confirm that the creation of the nascluster was successful. Also, when successfully executed, the command file (satask.txt) is deleted to prevent it from accidentally being run again.
Hint: In case the following steps need to be tried again, make a copy of the files on the
key onto your workstation now so it is easier to resume from this point.
USB key files: The USB key contains the following files:
autorun.inf ........ unchanged
inittool.exe ....... unchanged
.................... If successful, satask.txt has been deleted
cfgtask.txt ........ File module command file. Contains initialize command
NAS.ppk ............ SSH key file, needed by the file module
satask_result.html . Result output from the initialize command
6. Confirm that you can access the cluster. Using the browser on a workstation that has IP
connectivity to the management ports of the Storwize V7000 Unified, go to
https://<management_port_ip>. This is the management IP that you assigned to the
Storwize V7000 Unified cluster when initializing the USB key.
Initialization of the Storwize V7000 Unified begins, and a window similar to the example in Figure 11-9 is displayed.
Figure 11-9 V7000 Unified System Initialization
USB key files: The USB key contains the following files:
autorun.inf ........ unchanged
inittool.exe ....... unchanged
.................... If successful,cfgtask.txt has been deleted
.................... NAS.ppk will be deleted when copied to File module
satask_result.html . unchanged
SONAS_result.txt ... Result output from the initialize command
11.7 Base configuration
The Storwize V7000 Unified cluster is now created and consists of two main components: the Storwize V7000 storage, consisting of a control enclosure and optional expansion enclosures, and the file server, consisting of two file modules. These can all be managed from the Storwize V7000 Unified graphical user interface (GUI), which is presented from the primary file module's management IP address.
To use the cluster, it must first be configured. A setup tool, called EZ-Setup, is provided that runs only once, the first time that the cluster is logged in to after an initialization. The steps to configure the cluster by using EZ-Setup are described next.
11.7.1 Connect to the graphical user interface
Using the browser on a workstation that has IP connectivity to the management ports of the
Storwize V7000 Unified, go to https://<management_port_ip>. You will be presented with the
logon window, as shown in Figure 11-10. Ensure that you are connected to the Storwize
V7000 Unified and not the Storwize V7000 storage control enclosure directly.
Figure 11-10 Easy Setup login
This login window leads into the Easy Setup wizard, which guides you through the base
configuration of the machine. It is important to have all the information about how you want to
configure the cluster ready before continuing.
Note: EZ-Setup will only run once and cannot be manually started. Any configuration
options that are skipped must be manually set up or changed from the appropriate
configuration panels later.
The default user is admin and the default password is admin. For this initial setup login, only admin is available. Enter the password and click Continue.
11.7.2 Easy Setup wizard
License agreement
The first windows that Easy Setup presents are for the license agreement, as shown in
Figure 11-11. Read the license agreements on each tab and then click the appropriate
button. Then, click Next to continue.
Figure 11-11 License Agreement window
Welcome Screen
After the license agreement has been processed, the Welcome screen shown in Figure 11-12 is displayed:
Tip: It is important to have all your parameters ready and preferably written out on your
planning sheet. Also, ensure that the various servers are operational and ready. Many of
the configuration processes described in the following procedures will test and confirm that
the resources you address (for example, DNS servers, authentication servers, gateways,
and so on) are reachable and operating. The setup steps might fail if they cannot be
contacted and connected to.
Figure 11-12 Welcome Screen
To proceed with the setup, click Next.
System attributes
Enter the required fields in the system attributes window, as shown in Figure 11-13.
Figure 11-13 System attributes
The following definitions describe the system attributes:
System name The name of this Storwize V7000 Unified. This name is displayed on
the screens and is the name by which you will know this cluster.
NetBIOS name The NETBIOS name by which this cluster will be seen in the network.
This name is used in the Server Message Block (SMB) protocol.
Time zone Choose the time zone from the selection that best represents where
this machine is.
Click Next. A progress window displays while configuration changes are made. Wait for Task
Completed, then close.
Verify Hardware
You are now presented with a graphical representation of the hardware. Check that all
modules and enclosures are correctly shown and that the cabling is correct. See
Figure 11-14.
Figure 11-14 Verify Hardware
Use the mouse to hover over each component to display more detail, as shown in Figure 11-17.
Figure 11-17 Hardware detail
If problems are found, attempt to resolve them now by using the options available on this window. Ensure that all expansion enclosures are included in the graphic. If components are missing, ensure that they are powered up and cabled correctly.
Click Next to continue. A task status window is displayed. Wait for Task Complete, then close.
Configure Storage
Configure Storage is where you configure your internal storage arrays. The system automatically detects the drives that are attached to it and configures them into RAID arrays. You will see a window like the example in Figure 11-15.
Figure 11-15 Configure Storage
After the storage has been added, you are asked to configure it. The system suggests a best practice configuration, which is detailed on the next window. If the default configuration is acceptable, tick the Yes box to automatically configure the storage, as shown in the example in Figure 11-18. Otherwise, clear the box to skip auto-configuration; you must then manually configure the storage later.
Note: Storwize V7000 Unified file modules and SONAS give the best performance using arrays with eight data spindles because of the IBM General Parallel File System (GPFS) block size. The automatic configuration might therefore suggest 8+P arrays. Where possible, use 8+P or 4+P arrays. For more details about this topic, see the SONAS support documentation. In a 24 drive enclosure, this conveniently gives an 8+P, 8+P, 4+P, S configuration.
Figure 11-18 Automatically configure internal storage
Click Finish. The running task is displayed, as shown in Figure 11-16.
Figure 11-16 Creating RAID Arrays
This completes the Easy Setup wizard. Notification is given by a dialog box like the example in Figure 11-19.
Figure 11-19 Easy Setup Complete
After the System Setup wizard completes, you are prompted to complete the Support Services wizard when you click Close.
What to do next?
When you click Close upon completion of the Easy Setup wizard, a pop-up asks what you would like to do next: configure NAS File Services or Service Support. See Figure 11-20.
Figure 11-20 What's next?
If you choose to click Close at this point, the Call Home Warning dialog appears with a warning message about the call home feature. You can bypass this at this time. In this example, we start with the Service Support setup. To set values, click Service Support.
Service Support Notifications
The next window, as shown in Figure 11-21 on page 167, gives the option to configure the
support notifications.
Figure 11-21 Support notifications
This wizard goes through setting up the Service IP Port and Call Home sections. Click Next to start configuring these sections.
Service IP Port
The next window asks for your service IP address information for each canister, as shown in
Figure 11-22.
Figure 11-22 Service IP Information
When this is complete for each node canister, you can start configuring the Call Home information. Click Next.
Call Home
To configure the Call Home information, you are presented with the window that is shown in Figure 11-23.
It is necessary to configure the call home feature so that IBM is made aware of any hardware issues and of the overall health of the system.
Figure 11-23 Call Home
The first field is the IP address of your email server, which needs to be accessible to the cluster and allows the cluster to send email.
Also, enter your company name, contact email address, and prime shift telephone number. The off-shift number is optional and is only required if the prime number is not answered 24 hours a day. These numbers must be phones that will be answered at any time so that IBM Support personnel can contact you in the event of the cluster calling home.
The last field should be left at the default. This is the address in IBM for call home alerts.
When you have entered all the information that allows the machine to call home, click Finish.
To use the NAS part of the Storwize V7000 Unified, you must configure that side of it. When you choose any option under the NAS file folder, you are first prompted to configure it with the dialog box shown in Figure 11-24.
Figure 11-24 Configure NAS
Click Yes. You are taken to the Welcome screen for configuring the NAS File Services section, as shown in Figure 11-25.
Figure 11-25 Welcome NAS
We will go through configuring the NTP, DNS, authentication, and public networks in this section. Click Next to continue.
Network Time Protocol server
It is essential that a Network Time Protocol (NTP) server is defined in the cluster to ensure consistent clocks in the file modules. This is needed for recovery and for resolving deadlocks. Enter the NTP server addresses in the window provided, or use the CLI:
setnwntp xxx.xxx.xxx.xxx,xxx.xxx.xxx.xxx    <= separate multiple addresses with a comma
lsnwntp
rmnwntp
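For example, with placeholder addresses only:
setnwntp 192.0.2.10,192.0.2.11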
Domain name service
Next, enter your domain name information. The process will test that the servers listed are
present and fail if they cannot be contacted.
Domain name This is the public network domain, which is appended to your cluster
name. This will typically be common to your whole enterprise. For
example, customer.com.
DNS servers IP address of your DNS server. To add a server IP to the list, click the
+ symbol, and use the X to delete an entry. Add as many DNS server
entries as wanted. At least one is required.
DNS search domains Optional. Additional domain names that should be searched.
When complete, click Next.
A progress panel will display. Wait for Task Completed, then close.
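If the wizard reports that a DNS server cannot be contacted, a quick check from any workstation with access to that server can help isolate the problem. This is an illustrative sketch only; the server address is a placeholder:
nslookup www.ibm.com 192.0.2.53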
Authentication
On this next window (shown in Figure 11-27 on page 172), you can set the method that will be
used for file access authentication and control. This is optional at this time, so this step can be
deferred and skipped. However, no file access is possible until this section has been
configured.
Figure 11-27 Authentication
Choose from the available authentication methods, as shown in Figure 11-27, by clicking the appropriate selection, then click Next.
Only one form of authentication can be active. Depending on your choice, you will be taken
through a series of setup panels as follows.
Active Directory
If you chose Active Directory, then your next window will be as shown in Figure 11-28.
Figure 11-28 Active Directory settings
Enter the following information as required:
AD Server This is the IP address of the Active Directory server.
User ID and password User ID and password that has sufficient authority to connect to the
AD server and access authentication and mapping information.
Typically, this will be the Administrator user ID, or an ID with
equivalent authority.
Enable SFU If support for UNIX is required, enable the Services for UNIX (SFU)
feature and complete the configuration box below.
Domain name This is the domain that SFU belongs to.
UID and GID range Lower to upper limits of the range of user and group IDs that will be
used by the AD server.
SFU schema The SFU schema mode being used.
Using the + symbol, add as many line items as required.
When done, click Finish. A progress window displays while the configuration process runs.
Wait for Task Completed, then close the window.
Lightweight Directory Access Protocol
For Lightweight Directory Access Protocol (LDAP), you will see Figure 11-29.
Figure 11-29 LDAP settings
Enter the required information as follows:
LDAP server This is the IP address of the LDAP server. Click the + symbol to add
extra servers if wanted.
Search base The base domain suffix.
DN The root distinguished name.
Bind password User ID and password that is required to access the LDAP server.
User/Group suffix User and Group suffix as defined by the LDAP server.
Security method Select the Secure Sockets Layer (SSL) mode that will be used. If SSL
security is used, a certificate file is needed. When this option is
selected, a new box is displayed for the Certificate. Click Browse to
locate the certificate file on your workstation.
Enable Kerberos If SSL is not used, then the option for Kerberos is displayed. Tick the
box to enable.
Kerberos name Enter the name of the server.
Kerberos realm Enter the Kerberos realm.
Key tab file Browse to the location of the Kerberos key tab file.
Samba primary domain controller
If Samba PDC was selected, the PDC configuration window is presented, as shown in
Figure 11-30.
Figure 11-30 PDC settings
Enter the required information as follows:
Server Host This is the IP address of the NT4 PDC server.
Admin ID and password The user ID and password that is used to access the NT4 server
that has administrative authority.
Domain name The NT4 Domain name.
NetBios name NT4 NetBIOS name.
NIS Basic
If using basic NIS authentication, you get the setup window as shown in Figure 11-31.
Figure 11-31 NIS Settings
The following definitions apply to NIS basic settings:
NIS domain The name of the primary NIS domain.
NIS Server/Domain The address of each server. For each server, map the supported
domains. Domains are entered as a comma-separated list for each
server. Use the + symbol to add extra servers.
Extended NIS
For Active Directory and PDC authentication, there is an option to include extended support
for NIS. If this was selected, you are also presented with a configuration settings window, as
shown in Figure 11-32.
Figure 11-32 Extended NIS settings
The following definitions apply to extended NIS settings:
Primary domain The name of the primary NIS domain.
NIS Server/Domain The address of each server. For each server, map the supported
domains. Domains are entered as a comma-separated list for each
server. Use the + symbol to add more servers.
If NIS is used for user ID mapping, tick the enable box and complete the remaining fields. If
not, then this panel is complete:
Domain map Add entries here to map the AD domain to the NIS domains. Use the +
symbol to add lines.
User map Add entries for user mapping exceptions as required.
Local Authentication
Select the Local Authentication option to configure the system with an internal authentication
mechanism in which users and groups are defined locally on this system. If this was selected,
you are also presented with the window as shown in Figure 11-33 on page 176.
Note: With Local Authentication you enter user and group information locally, which is
covered in Section 11.13.3, Create local users by using local authentication for NAS
access on page 197.
Figure 11-33 Local Authentication configuration window
Completion
When the chosen method has been configured and the processing is complete as indicated
by Task Completed, close the status window.
Public networks
You are now presented with a window to define the public networks, as shown in Figure 11-34. You might need to configure several networks because each network is tied to an Ethernet logical interface.
To create a network, click New Network, which opens a window, as shown in Figure 11-34.
Figure 11-34 New Network panel
The following definitions apply to the New Network panel fields:
Subnet The IP subnet that this definition is for. The value uses CIDR syntax, xxx.xxx.xxx.xxx/yy, where yy is the decimal mask given as the number of left-aligned bits in the mask (see the example after this list).
VLAN ID If VLANs are being used, enter the VLAN number here; otherwise, leave the field blank. Valid values are 2 - 4095. VLAN 1 is not supported for security reasons.
Default gateway The default gateway (or router) within this subnet for routing. This field is not required if all devices that need to be connected to this interface are in the same subnet.
Interface pool Using the + symbol, add IP addresses to the pool for use on this logical interface. These must be in the same subnet as entered above. A minimum of one address is required.
Additional gateways If more than one gateway (router) exists in the subnet, add the IP addresses here.
Interface Select the logical interface that this definition is assigned to.
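For example, with illustrative values only:
192.168.10.0/24     subnet 192.168.10.0 with mask 255.255.255.0 (24 one-bits in the mask)
10.1.0.0/16         subnet 10.1.0.0 with mask 255.255.0.0 (16 one-bits in the mask)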
A progress window is displayed. Wait for the completion message. Click OK when complete.
Repeat the process for each network by using the New Network selection until all required
networks have been defined. When done, click Finish.
A reboot progress window now displays indicating that the file modules are being restarted.
When complete, close the window.
This is followed by the applying settings status window, as shown in Figure 11-35.
Figure 11-35 Applying settings
Wait for the process to complete. Easy Setup is now complete. The home page for the
Storwize V7000 Unified is automatically displayed.
11.7.3 Set up periodic configuration backup
Management and configuration information is stored on the file modules in a trivial database
(TDB). It is recommended that you set up a periodic backup of the TDB at a suitable time, and
we recommend daily. This backup might be required by service personnel if the TDB
becomes lost or corrupted or in the event of a recovery:
1. Start an SSH session with the cluster management address that you set when initializing
the cluster.
2. Log on with user admin and password admin.
3. Issue the command to perform the periodic configuration backup. See Example 11-1.
Example 11-1 Schedule the daily configuration backup
[7802378.ibm]$
[7802378.ibm]$ mktask BackupTDB --minute 0 --hour 2 --dayOfWeek "*"
EFSSG0019I The task BackupTDB has been successfully created.
EFSSG1000I The command completed successfully.
[7802378.ibm]$
If you receive an error that the management service is stopped, wait a few minutes for it to
complete its startup from the recent reboot.
4. This command schedules a backup to run at 02:00 a.m. every day. You can change the
time to suit your own environment.
5. Exit the session.
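To verify later that the scheduled task exists, you can log back in over SSH and list the defined tasks. This is a minimal sketch; the management address is a placeholder, and lstask is the corresponding list command (check the CLI reference for its options):
ssh admin@<management_port_ip>
lstask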
11.8 Manual setup and configuration changes
If you are redeploying an existing configuration, or if you skipped any steps in Easy Setup, or
you need to alter any values, use the following procedures to manually set or alter these
configuration settings.
If Easy Setup was completed and all values are entered correctly, then this section is for your
reference and some of the topics can be skipped. We recommend that you set up the optional
support details to enable remote support functionality of the cluster, as detailed below in this
section.
11.8.1 System names
If you need to change the cluster name, use the following command-line interface (CLI)
command:
chsystem -name <new_name>
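For example, with a hypothetical name:
chsystem -name ITSO_V7000U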
11.8.2 System licenses
If the system licenses need to be changed, go to the window that is shown in Figure 11-36. Select Settings → General, then select Update License. Type over the values with the new ones to match the enclosure licenses being applied.
Although warning messages might be posted if the entered licenses are exceeded, IBM
works on an honesty system for licensing. Enter the value of only the license that you
purchased. See Figure 11-36.
Note: To use the compression function, you must obtain the optional IBM Real-time
Compression license.
Figure 11-36 Update license
11.8.3 Support
Use the following panels to define or update details about remote support functions and also
to gather support information.
Call Home
To define call home details, go to the window that is shown in Figure 11-37 by selecting Settings → Support. Then, click Call Home.
Figure 11-37 Call home definitions
Assist On-Site (AOS)
Assist On-site is an IBM Tivoli-based product that is used by IBM Support to securely connect to the Storwize V7000 Unified to perform problem determination and engineering actions if required. It can be configured in two ways: lights on and lights out. Lights on requires manual authorization from the GUI on site before a connection can complete. Lights out connects immediately. This feature requires access to the Internet over the HTTP and HTTPS protocols.

To configure it, go to the setup window, as shown in Figure 11-38, and select Settings → Support. Click AOS. To enable it, tick the box and then select the lights on or lights out mode. If your site uses a proxy for Internet access, enter the IP address and port details and, if required, a user ID and password. To disable the proxy, clear the Proxy server field.
Figure 11-38 AOS configuration
Support Logs
For most problems, IBM Support requests logs from the cluster. This is easily done by going to the Download Logs window, as shown in Figure 11-39. Select Settings → Support. Then, click Download Logs. From this panel, you can view the support files by clicking Show full log listing. This displays the contents of the logs directory and includes previous data collections, dumps, and other support files. These files can be downloaded or deleted from this window.
Figure 11-39 Support logs
To create and download a current support package, click Download Support Package. This
displays a window asking for the type of package, as shown in Figure 11-40 on page 182.
Select the log type as requested by IBM Support. If there is any doubt, take the full logs, but
also consult IBM Support. Then, click Download. A progress window is displayed while the
data is created and gathered. When complete, the window closes and a download option
window is displayed. Use this window to download the file to the workstation you are browsing
from. The file can be retrieved anytime later or by any user with access from the show full log
listing window.
Figure 11-40 shows the Download Support Package panel.
Figure 11-40 Download Support Package window
11.9 Network
There are a number of network adapters in the Storwize V7000 Unified. Each is configured with IP addresses, and these can be changed if required. In this section, we go through the various adapters and show where to reconfigure each of them.
11.9.1 Public Networks
These addresses are on the 10 Gb Ethernet ports and also the two client-side 1 Gb ports. These are the addresses that the hosts and clients use to access the Storwize V7000 Unified and open file shares. Go to Settings → Network. This option displays the window that is shown in Figure 11-41. Click Public Networks.
Figure 11-41 Public networks configuration
From here, you can add new network definitions and delete existing ones. Each definition is
defined to one of the virtual adapters. Refer to Public networks on page 176 for details about
configuring public networks.
11.9.2 Service ports
It is important that the service ports on the control enclosure are set to an IP address. These ports are seldom used in normal operation, but they are important when a problem occurs, so it is better to have them set up beforehand. One IP address is needed for each node canister on its port 1. This IP address shares the port and coexists with the management IP address, which is also presented on the same port, but only from the configuration node.
To set or alter these IP addresses, select Settings → Network. Click Service IP Addresses to display the configuration window, as shown in Figure 11-42. Hover the mouse over port 1 to display the settings window. If you must change it, click in the fields and type the new values. Click OK to save. You must set both canisters. Therefore, use the pull-down menu to display the other canister's ports and configuration.
Figure 11-42 Service IP
11.9.3 Internet Small Computer System Interface
If you have the feature to add the additional Ethernet ports to the control enclosure for
Internet Small Computer System Interface (iSCSI) service, you must set the addresses for
these ports. This is an optional feature and is only used for iSCSI connection to block storage.
Go to Settings → Network. Select iSCSI from the list. You get the window that is shown in Figure 11-43.
Here, you see a graphical representation of the ports. There are two diagrams, one for each
node canister. Hover over each port to display the configuration panel and enter the IP
address details. Click OK to save.
Figure 11-43 iSCSI IP addresses
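The same addresses can also be set from the block-storage CLI with the cfgportip command. The following lines are a sketch only: the node names, IP details, and port number are placeholders, and you should confirm the correct port IDs for the optional Ethernet feature on your hardware:

svctask cfgportip -node node1 -ip 192.168.20.11 -mask 255.255.255.0 -gw 192.168.20.1 3
svctask cfgportip -node node2 -ip 192.168.20.12 -mask 255.255.255.0 -gw 192.168.20.1 3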
11.9.4 Fibre Channel ports
Use the Fibre Channel panel to display the Fibre Channel connectivity between nodes, storage systems, and hosts. The results for the ports for all nodes, storage systems, and hosts can be displayed. Go to Settings → Network. Select Fibre Channel from the list.
The display in Figure 11-44 shows the results for all hosts that are attached to block volumes.
Figure 11-44 Shows Fibre Channel results for hosts
11.9.5 IP Report
The IP Report panel displays all the IP addresses that are currently configured on the system. The File Module table provides details about all the IP addresses that are currently configured to manage network-attached storage (NAS) services and public networks that clients use to connect to the system. File modules provide the services to access the file data from outside the system and provide the back-end storage and file system that store the file system data.
The Control Enclosure area shows details about all the IP addresses related to managing
block storage. The control enclosure provides the services to access the block storage for
block clients. A sample display is shown in Figure 11-45.
Figure 11-45 IP Report panel
11.10 Alerting
Storwize V7000 Unified supports several methods of alerting events and problems. The call home for IBM Support is described in 11.8.3, "Support" on page 179. Additionally, alerts can be sent to an email address and to an SNMP server. To configure alerting, go to Settings → Event Notifications.
11.10.1 Email
For email, first set up the details of your Simple Mail Transfer Protocol (SMTP) mail server on
the panel, as shown in Figure 11-46 on page 188.
Click Enable email notifications, then insert the IP address of your SMTP server. You must also enter the reply address for the email and a sender's name. These identifiers show in the email header and identify to the recipient where the email is from. Therefore, use an easily recognized name to make alerts clear.
Using the ... button, define as many subject options as needed. Then, select the wanted
subject content from the pull-down menu. Complete the optional header and footer details to
be included around the event text if wanted.
A test email can be sent at any time to confirm the settings and prove the alerting path. Enter
a valid email target and click Test Email (see Figure 11-46). We recommend that you send a
test email when complete.
Figure 11-46 Email server configuration
Next, click email recipients to enter where the emails will be sent. A list of current recipients is
displayed. You can select any line to edit or delete from the action drop-down menu. To add
an entry, click New Recipient. This displays the entry dialog, as shown in Figure 11-47.
Figure 11-47 Event recipient
Enter a name for the recipient and the email address. Tick the events box for each event type that you want this recipient to receive an alert for. Several events have a criticality level. Select the wanted threshold. The critical only threshold is best for most users because it reduces the number of emails received.
Click the reports box if reports are also required. For quota, choose the percentage of the
hard limit for the threshold.
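For reference, the block-storage CLI provides equivalent commands for the email server and recipients. This is a sketch with placeholder values only; the GUI is the recommended way to configure event notifications on the Storwize V7000 Unified:

mkemailserver -ip 10.1.1.25 -port 25
mkemailuser -address storadmin@example.com -usertype local -error on -warning off -info off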
11.10.2 SNMP
If you use an SNMP server to monitor your environment and want to receive alerts from the Storwize V7000 Unified, go to Settings → Event Notifications and click SNMP. Complete the form as shown in Figure 11-48.
Figure 11-48 SNMP server settings
Use New SNMP Server to add a server, or highlight an existing line and use Actions to edit or delete it. The add and edit pop-up window is shown in Figure 11-49.
Figure 11-49 Edit SNMP server
Enter the server IP address and port, and complete the SNMP community name. Tick the event types that are to be sent and, for each, select the severity.
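A hedged block-side CLI equivalent uses the mksnmpserver command; the address, port, and community name shown below are placeholders:

mksnmpserver -ip 10.1.1.50 -port 162 -community public -error on -warning on -info off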
11.10.3 Syslog Server
If you want to offload cluster messages to a logging server, enter the details in the panel, as shown in Figure 11-50. Go to Settings → Event Notifications. Select Syslog Server.
Figure 11-50 Syslog settings
11.11 Directory Services and Authentication
Go to Settings → Directory Services. First, set up the DNS server settings by clicking DNS in the left panel.
11.11.1 Domain Name System
In the Domain Name System (DNS) settings panel, as shown in Figure 11-51, click Edit to
make changes. First, ensure that the DNS domain name is correct. This is important with
Active Directory and is a key component with authentication. Define your DNS server and any
backup DNS servers by using the + icon to create more entries. Next, add any search
domains that are outside the primary domain name that will be involved in accessing this
cluster.
Figure 11-51 DNS settings
11.11.2 Authentication
Now, click the Authentication icon as shown in Figure 11-52. The system requires that you
determine a method of authentication for users of the system. The system supports either a remote authentication service or local authentication. Remote authentication is provided by an external server that is dedicated to authenticating users on the system. Before configuring
remote authentication on your system, ensure that the remote authentication service is set up
correctly. Local authentication is provided by an internal mechanism in the system that is
dedicated to authentication.
Figure 11-52 Authentication settings
If you must make any changes, click Edit, which starts the authentication windows, as shown in Figure 11-54 on page 193. You are asked to confirm that you want to edit these settings in a dialog box, as shown in Figure 11-53.
Figure 11-53 Change Authentication
Warning: Use extreme care when editing the authentication settings on a running cluster
because this might cause loss of access to shares.
Figure 11-54 Presented Edit window
The cluster GUI now takes you through the setup panels for authentication. These are the same panels that the Easy Setup wizard guided you through, and they are described in detail in "Authentication" on page 171. If any configuration has already been set, the panel entry fields are automatically filled with the existing values.
11.12 Health check
Check the health of the system and correct any problems before continuing; the status is shown at the bottom of the panel.

First, confirm the color of the status indicator at the bottom of the GUI window. This is a good indicator of any serious issues. If it is not green, click the indicator to get a summary of the areas that are unhealthy.

For more detail, the CLI command lshealth lists each function in the cluster and its status. Use the -r parameter to force the task to refresh the information. You can also drill down into a particular function by using the -i parameter.
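For example, the following commands, run from an SSH session to the cluster management IP, refresh the health data and then drill into one component. The component name used with -i is an illustration only; use a name that appears in your own lshealth output:

[7802378.ibm]$ lshealth -r
[7802378.ibm]$ lshealth -i <component_name>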
Next, review the event logs for the file and block storage, and review the logs for each
individual file module in system details. Ensure that there are no unfixed events.
11.13 User security
It is important to change the default passwords and define profiles for administrators of the
cluster. Change the passwords for the following user IDs:
Note: With local authentication, you enter user and group information locally, which is
covered in Section 11.13.3, Create local users by using local authentication for NAS
access on page 197.
Important: Make sure that storage pools are maintained in a green state, especially if compression is used. If a storage pool is allowed to run out of physical space, compressed volumes go offline.
admin This is the default user ID for the Storwize V7000 Unified cluster
superuser This is the default user ID for the Storwize V7000 storage enclosure.
You can also define more users as required.
At the time of writing, the root password for the file modules is widely known to IBM Support and implementation teams to assist in setup and recovery. In the future, when all needed functions have been made available through the GUI and CLI, this password will be changed; if that has already occurred, this comment can be ignored. While the password is still the widely known default, ask your IBM team to assist you in changing it. The CLI command chrootpwd changes it across both nodes.
11.13.1 Change passwords
In the following sections, we show how to change the passwords.
Change storage enclosure password
You need to log on to the Storwize V7000 storage GUI directly. Go to Access → Users. The All Users view is displayed by default. Highlight the superuser user definition and use the Actions pull-down menu or right-click to select Properties.

This action starts the user properties window, as shown in Figure 11-55. Click Change in the User's password section, enter a new password, and click OK to apply and complete.
Warning: It is important to change the default passwords on both the cluster and the
Storwize V7000 storage enclosure.
Figure 11-55 Edit user: block
Alternatively, you can use the following command:
svctask chuser -password <xxxxxxxx> superuser
Change cluster admin password
On the Storwize V7000 Unified GUI, go to Access → Users. The All Users view is displayed by default. Highlight the admin user definition and use the Actions pull-down menu or right-click to select Edit. This selection starts the Edit User window, as shown in Figure 11-56. Click Change in the User's password section, enter a new password, and click OK to apply and complete.
Figure 11-56 Edit user: Unified
Alternatively, you can use the following command:
chuser admin -p <xxxxxxxx>
11.13.2 Create cluster users
Go to Access → Users and click New User, as seen in Figure 11-57.
Figure 11-57 Cluster users
This starts the New User window, as shown in Figure 11-58. Type in the user's name and select the level of authority from the list. Set a password and type it again to confirm. Click OK to create the user.
With local authentication, you enter user and group information by using the method that is
described in Section 11.13.3, Create local users by using local authentication for NAS
access on page 197.
Figure 11-58 New User panel
Alternatively, you can use the following command:
mkuser Trevor -p passw0rd -g Administrator
11.13.3 Create local users by using local authentication for NAS access
Besides external authentication such as Active Directory or LDAP, the system supports user authentication and ID mapping by using a local authentication server for NAS data access. After you have configured local authentication on your system, you need to define the groups and users that are registered in the local authentication service to access data by using the NAS protocols that are supported by this system.
Using local authentication eliminates the need for a remote authentication service, such as
Active Directory or Samba primary domain controller (PDC), thus simplifying authentication
configuration and management. Local authentication is best used for environments where no external authentication service is present or where the number of users is relatively small. Local authentication supports up to 1000 users and 100 user groups. For configurations where all
users are in a single group, the system supports 1000 users. A single user can belong to 16
groups. For larger numbers of users and groups, remote authentication should be used to
minimize performance impacts to the system.
When creating users and groups for local authentication, ensure that user names, group names, and IDs are consistent across multiple systems in your environment. If NAS users access data from two or more systems, ensure that those users have the same user name, user ID, and primary group on each system. Consistent user and group attributes are required for using advanced functions such as IBM Active Cloud Engine and asynchronous replication. In
addition, it also provides flexibility in moving data between systems in your environment and
simplifies migration to an external Lightweight Directory Access Protocol (LDAP) server.
When managing multiple systems, the administrator should designate a primary system that
contains all users and groups. Any new user and group should be created first on the primary
system and then created on any other systems, using the same user ID or group ID that was
defined on the primary system. This practice helps ensure that a user ID or group ID is not
overloaded. When specifying user IDs and group IDs, you can have the system automatically
generate these values. However, to ensure control over these values and to minimize any
authorization problems that might be introduced over time, assign these values manually.
When creating users and groups for local authentication, the user and group names are not case sensitive. For example, if a user named John exists on the system, a new user named john cannot be created. This is a limitation of some of the supported protocols. In addition,
NAS user and group names cannot be the same as CLI users and system users.
Go to Access → Local Authentication, as shown in Figure 11-59. New local users and new local groups can be created by using this process. You can modify a current user or group by using the Actions pull-down menu or by right-clicking the user or group.
Figure 11-59 User using local authentication
The pop-up window in Figure 11-60 shows how to add a new user for NAS access.
Figure 11-60 Add new user
The pop-up window in Figure 11-61 shows how to add a new group for NAS access.
Figure 11-61 Add new group
11.14 Storage controller configuration
The Storwize V7000 storage component provides the storage arrays for the file systems. It is
necessary to configure the controllers even if not using the Storwize V7000 for block devices.
The basic configuration and initialization should have been completed by using the USB key in the preceding implementation steps. If the storage was automatically configured during the Easy Setup process and no block volumes are required, the configuration is complete and you can skip this step; otherwise, continue.
Reference the IBM Storwize V7000 Unified Information Center for your version:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000
.unified.140.doc/ifs_ichome_140.html
11.14.1 External SAN requirements
Block volumes are accessed over the Fibre Channel storage area network (SAN). The
Storwize V7000 is fitted with four Fibre Channel ports on each node. When part of the unified
cluster, two from each node are dedicated to the file module connections. The remaining two
are for connection to the SAN. As is normal practice, two parallel SAN fabrics are needed to provide redundancy and manageability. Connect one port from each node to each fabric. Zone these ports to each other in each fabric to create more inter-node links.

Host bus adapters (HBAs) on the SAN can now be zoned to the Storwize V7000 ports as required for access to the storage.
11.14.2 Configure storage
Presenting block volumes to hosts is the same for the Storwize V7000 Unified as it is for the
standard Storwize V7000. The only difference is that the Storwize V7000 Unified GUI is used
for both file and block, but the interface and functions are still the same.
For this reason, we recommend using Implementing the IBM Storwize V7000 V6.3,
SG24-7938 for guidance on configuration and management of the block storage. We provide
a summary of the steps that are involved to assist experienced users.
11.15 Block configuration
Summarized below are the steps to configure block storage volumes for access by hosts over the SAN; a sample CLI sequence follows the list:
External storage If external SAN-attached storage is being managed by the Storwize
V7000, this must be added first. The storage when installed and
connected to the SAN, needs to be configured to present volumes
(logical unit numbers (LUNs)) to the cluster. The storage should map
the volume as though the Storwize V7000 cluster were a host. The
cluster then discovers these as MDisks.
Arrays The physical drives that are fitted to the Storwize V7000 must be built
into RAID arrays. These arrays become MDisks and are visible to the
cluster when built.
MDisks The array MDisks and the external MDisks can now be defined. They
can be given descriptive names and are ready for use.
Pools Extent pools, which are used to build the volumes, must now be defined. At least one pool is required. There are various reasons why multiple pools might be used. The main reason is to separate disks into groups of the same performance. Pools were previously known as MDisk groups. Another reason could be to separate uncompressed from compressed volumes.
Volumes From the pools, the volumes that the hosts will use can be defined. A volume must get all its extents from one pool. The volume can be generic, thin-provisioned, mirrored, or compressed.
Hosts Each host that is accessing the cluster needs to be defined. This requires describing the host and giving it a name, defining the access protocols it will use based on its operating system type, and defining its FC ports. The port definition includes the port's worldwide name (WWN).
Mapping The last step is to create mappings for the volumes to the hosts. A
volume can be mapped to multiple hosts, provided the hosts support
disk sharing and are aware that the disk is shared, and hosts can see
multiple volumes.
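As a sample of the corresponding CLI sequence, the commands below provision one array, one pool, one volume, and one host mapping. All names, sizes, drive IDs, and the WWPN are placeholders, and the sequence is only a sketch; see Implementing the IBM Storwize V7000 V6.3, SG24-7938 for the complete syntax and options:

mkmdiskgrp -name pool0 -ext 256
mkarray -level raid5 -drive 0:1:2:3:4:5:6:7 pool0
mkvdisk -mdiskgrp pool0 -iogrp 0 -size 100 -unit gb -name vol_host1_01
mkhost -name host1 -fcwwpn 2100000E1E123456
mkvdiskhostmap -host host1 vol_host1_01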
11.15.1 Copy Services
Several methods of copying volumes are available to suit your business requirements. These
are covered in detail in Chapter 8, Copy services overview on page 75.
Because copy services are covered in depth in several manuals and Redbooks publications,
we summarize the setup steps here and recommend that you consult those books for concise
details. For the Storwize V7000, we recommend that you see Implementing the IBM Storwize
V7000 V6.3, SG24-7938.
Partnership
To work with block remote copy services to another cluster, you must have defined a
relationship with at least one other cluster. Partnerships can be used to create a disaster
recovery environment or to migrate data between clusters that are in different locations.
Partnerships define an association between a local clustered system and a remote system.
SAN connection The clusters must be visible to each other over the SAN network. Create zoning so that at least one port from each node can see at least one port on all the remote nodes. If you have dual fabrics, ensure that this is true on both fabrics for redundancy. It is not necessary, nor recommended, to zone all ports.
Partnership (local) After the zoning is in place, go to Copy Services → Partnerships to display the partnership list. Click New Partnership. This begins a discovery process and, provided there is at least one working path to a remote cluster, that cluster is considered a candidate. Select the partner cluster from the selection list.
You must also set the bandwidth setting. This is tunable at any time, but it is important to set it correctly. The bandwidth defines to the cluster the maximum aggregate speed that is available to the other cluster, which is the speed at which data will be sent. Do not overrate this setting, or the link will be flooded. Do not underrate it, or throughput will be affected because this setting caps the data rate. This setting applies only to data sent from this cluster; the rate for received data must be set at the remote cluster.
The cluster will show as partially configured. It must now also be
defined from the remote site to be complete.
Partnership (remote) Go to the remote cluster and perform the same operation as shown in
Partnership (local), above. The partnership must be set up from both
clusters to be operational.
The bandwidth setting on the remote cluster is an independent setting
and controls the data flow rate from the remote back to the local.
Normally, these will be the same, but they can be set differently if
wanted.
When established, the partnership is in the fully configured state.
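For reference, the same definition can be made from the CLI with the mkpartnership command, run on both clusters. The cluster name and the bandwidth value (in MBps) below are placeholders; the bandwidth can be tuned later with chpartnership:

mkpartnership -bandwidth 200 remote_cluster_name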
Remote Copy
Go to Copy Services → Remote Copy to display the list of remote copy relationships. To create a relationship, click New Relationship. The Metro Mirror and Global Mirror Copy
Services features enable you to set up a relationship between two volumes so that updates
that are made by an application to one volume are mirrored on the other volume. The
volumes can be in the same system or on two different systems.
Now select the type of mirror relationship from Metro, Global, and Global with Change
Volumes.
On the next panel, you have the choice of where the auxiliary volume is located. It is possible
to set it on the same cluster, which you might find useful for some applications.
The next panel allows you to choose the volumes and lists all eligible volumes on the
specified cluster. Use the pull-down menus to select the volume you want in this relationship.
You are then asked if the relationship is already synchronized. Normally this is no, but there
are situations during recovery and build when a relationship is being re-established between
volumes and the data has not changed on either.
The last question is whether to start the copy now. This begins the initial synchronization process, which runs in the background until complete.
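A hedged CLI equivalent is shown below with placeholder volume, relationship, and cluster names. Omit -global for Metro Mirror and include it for Global Mirror; startrcrelationship begins the background synchronization:

mkrcrelationship -master vol_db01 -aux vol_db01_dr -cluster remote_cluster_name -name rc_db01 -global
startrcrelationship rc_db01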
FlashCopy
Because FlashCopy is only available within the same cluster, there is no requirement for a partnership with another cluster. To create a copy, go to Copy Services → FlashCopy Mappings. Click New FlashCopy Mapping. This starts the window to create a mapping.
Eligible volumes are listed. Select the source and target volumes by using the pull-down
menu.
On the next window, you need to select the preset type for the copy:
Snapshot This gives a point in time copy of the volume, thin provisioned. No
background copy. Intended for temporary use to freeze the volume.
Clone A one-time use, full copy of the volume.
Backup A full, point-in-time copy of the volume that can be repeatedly
refreshed.
Using the Advanced Settings tab, you can adjust the background copy rate, set the mapping to be incremental (only changed data is recopied on a refresh), delete the mapping after completion, and set the cleaning rate.
Then, you are asked if you want to add the mapping to a consistency group.
Consistency groups
There are two main reasons for using consistency groups. By grouping a number of copies together, management actions such as start and stop can be applied to the group, reducing workload. More importantly, the cluster ensures that all members perform the action at the same point in time. This means that for a stop, the input/outputs (I/Os) completed to the remote volumes or copy targets are stopped in sync across all members of the group, which is important for system and application recovery.
To create a FlashCopy group, go to Copy Services → Consistency Groups. Select New Consistency Group to start the create window. Give the group a name, and that is all that is needed. To add a mapping to this group, either choose the group while creating the mapping, or highlight the mapping in the listing and use the actions to move it to a consistency group.
To create a remote copy group, go to Copy Services → Remote Copy. Click New Consistency Group. Give the group a name and define whether the relationships are local to local or remote. You can create new relationships to be members now, or add them later. To add a relationship, highlight the entry and use the actions to add it to a consistency group.
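The corresponding CLI commands, with placeholder names, create the groups and then add existing mappings or relationships to them:

mkfcconsistgrp -name fc_grp01
chfcmap -consistgrp fc_grp01 fcmap_db01
mkrcconsistgrp -name rc_grp01 -cluster remote_cluster_name
chrcrelationship -consistgrp rc_grp01 rc_db01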
FlashCopy mappings
A FlashCopy mapping defines the relationship between a source volume and a target volume.
The FlashCopy feature makes an instant copy of a volume at the time that it is started. To
create an instant copy of a volume, you must first create a mapping between the source
volume (the disk that is copied) and the target volume (the disk that receives the copy). The
source and target volumes must be of equal size.
A mapping can be created between any two volumes in a system. The volumes do not have
to be in the same I/O group or storage pool. When a FlashCopy operation starts, a checkpoint
is made of the source volume. No data is copied at the time a start operation occurs. Instead,
the checkpoint creates a bitmap that indicates that no part of the source volume has been
copied. Each bit in the bitmap represents one region of the source volume. Each region is
called a grain.
After a FlashCopy operation starts, read operations to the source volume continue to occur. If
new data is written to the source or target volume, the existing data on the source is copied to
the target volume before the new data is written to the source or target volume. The bitmap is
updated to mark that the grain of the source volume has been copied so that later write
operations to the same grain do not recopy the data.
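As an illustration, the following CLI sketch creates and then starts a mapping between two equally sized volumes; the volume names, mapping name, and copy rate are placeholders:

mkfcmap -source vol_db01 -target vol_db01_copy -name fcmap_db01 -copyrate 50
startfcmap -prep fcmap_db01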
File copy services
Use this function to select different methods to replicate data to and from different file
systems. The system supports two types of file system replication: replicate file system and
remote caching.
File system replication provides asynchronous replication of all file system data on one
system to another file system located remotely over an IP network. The two systems should
be separated geographically to provide data recovery and high availability. Asynchronous
replication allows one or more file systems in a Storwize V7000 Unified file name space to be
defined for replication to another Storwize V7000 Unified system over the client network
infrastructure. Files that have been created, modified, or deleted at the primary location are
carried forward to the remote system at each invocation of the asynchronous replication.
Asynchronous replication is configured in a single direction one-to-one relationship, such that
one site is considered the source of the data, and the other is the target. The replica of the file
system at the target remote location is intended to be used in read-only mode until a system
or network failure or other source file system downtime occurs. During a file system failure
recovery operation, failback is accomplished by defining the replication relationship from the
original target back to the original source.
Remote caching provides transparent data distribution among data centers and multiple
remote locations over a wide area network (WAN). Remote caching provides local access to
centrally stored files and allows users in remote locations to work with files without creating
inconsistencies. Data created, maintained, updated, and changed on the home system can
be viewed and used on a cache system located anywhere in the WAN.
11.16 File Services configuration
To create a share (or export), a number of building blocks need to be in place. We describe them here and give some examples of how to complete each step.
11.16.1 File service components
The file services function is built up in the following layers. We already completed the storage
hardware and depending on the choices during Easy Setup, the storage pools might have
also been built.
Managed disks
At the bottom is the physical storage, which is based on the Storwize V7000 storage
controller. This might include in its configuration extra expansion enclosures or external
storage systems. The physical storage is made up of RAID arrays and from these arrays,
managed disks (MDisks) are created. In the case of external storage, LUNs are created from
the arrays and these are presented to the Storwize V7000 and managed as MDisks.
If the automatically configure storage box was ticked during the Easy Setup procedure, the storage was configured with a best practice approach using all the available disks. The disks have been grouped into arrays, each array becoming a single MDisk, and all have been added to the same pool.
Pools
These MDisks are grouped into pools and the logical blocks of storage are merged together
creating a single sequential pool of blocks (or extents). The system stripes the sequence
across the MDisks to increase performance. From these pools, volumes can be created and
presented to hosts as Fibre Channel-attached SCSI LUNs or to the file modules to build a file
system on. This is the block storage layer.
Volumes
Block volumes that are to be presented as FC-attached are described in 11.15, Block
configuration on page 200. Volumes are defined for file systems by the file system build
process, so there is no need to create them. The file system build process creates a number of volumes based on the file system's size and internal requirements. Also, because
they are only used by the file system, the volumes are not visible in the GUI. Volumes were
previously known as vdisks. There are two types of volumes that the build process can use:
uncompressed and compressed volumes.
File systems
A minimum of one file system is required. This provides the base file system and global
namespace from which shares and exports can be created. Each file system therefore
comprises a structure of directories or paths that are based on a single root. Any of these
directories can be used to create a share or export and the directory name that is being
shared is known as a junction point. This file system is the IBM GPFS. A minimum of one
storage pool is required to build a file system, although a file system can use multiple pools
and multiple file systems can be created from a pool. A special type of file system can also be created that uses two storage pools: a default system pool for uncompressed data, such as metadata and incompressible data, and a compressed pool for compressible data.
File sets
To improve the manageability of the file system, GPFS has a methodology that creates subsets of the file system to allow for control of file system management at a more granular level. These are called file sets, which behave similarly to a file system. The file set's root can be anywhere in the directory structure of the parent file system, and the file set includes all files and directories under that junction point.
When creating a file set, the base directory path must exist. However, the directory (or
junction point) being defined must not exist because it will be created as part of the file set
creation process.
You must also define the file set as dependent or independent. A dependent file set shares
the same file system and inode definitions as the parent independent file set that contains it. If
set to independent, the file set has its own inode space allowing for independent
management, such as quotas.
When a file system is created, an initial file set is also created automatically at the root
directory.
Shares and exports
Shares and exports are basically the same thing; the term share is typically used for Common Internet File System (CIFS) Windows access, and export is used for UNIX access. Each export picks up a junction point in the file system. It then presents the files and subdirectories that are in that directory, as allowed by the authentication process. These directory structures and files are shared by using the selected network sharing protocols that the hosts will use to access the export.
For each export, the following components are needed:
File system The base file system as configured on the file module.
Junction point This is the directory in the file system that contains the data and
subdirectories being shared. If files already exist in the directory or
subdirectory, for security reasons, access controls must be obeyed to
create the share. If the junction does not exist, it is created.
Network protocol Choose from the available supported protocols. For example: CIFS,
NFS, and File Transfer Protocol (FTP).
Depending on the protocol, some of the following parameters will also be needed:
User This is the owner of this file set. When there are files in the file set, this
cannot be changed from the Storwize V7000 Unified. This user must
be known to the authentication server.
Access control If this is a new and empty file set, you can define the ACL controls at
export creation time.
Access control
When complete, this now presents this share to the network over the defined protocol. Before
this share can be accessed and files read or written, the access control server must be
updated. All users needing access to the export should exist. Access control of the files and
directories is defined at the authentication server. Although access can be defined at export creation time while no files exist, after files are written, the Storwize V7000 Unified administrator cannot change access control, nor define a new access to get around this restriction.
Operation and administration of the access control server is beyond the intended scope of
this book.
11.16.2 File systems examples
The following examples are given to demonstrate the processes, show the steps involved,
and show the GUI windows that should be expected. These are limited examples and do not
cover all possible configurations. Refer to Chapter 16, Real-time Compression in the IBM
Storwize V7000 Unified on page 287 for information on compressed file systems.
Create a standard uncompressed file system
Go to Files → File Systems. This displays the File Systems panel. Click New File System to create a file system, as shown in Figure 11-62.
Figure 11-62 File systems
This starts the New File System window, as shown in Figure 11-63 on page 208. There are
four options to build a file system, which are selected by clicking the appropriate icon at the
top of the page.
The Single Pool option is the most commonly used. Here, you select a single pool from the list of defined storage pools to provide the extents for this file system. You are therefore limited to the free space of that pool. After entering a name for the file system and selecting the wanted pool, you can then choose how much of the available free space will be used for this file system. Do this by adjusting the slider at the bottom of the window.

When done, click OK to build the file system. A pop-up window shows the progress of the build. The process first builds the Network Shared Disks (NSDs), as the file module knows them, or volumes, as they are known to the block storage. Typically, five NSDs are created and their size is set to make up the specified file system size.
Figure 11-63 New File System panel
The second option is to create a compressed file system. To compress data selectively, you are prompted to contact IBM at [email protected]. A specialist will contact you to provide assistance and a code to enable this function. Figure 11-64 shows an example of the dialog box in which you enter the code given to you by IBM Support.
Figure 11-64 Dialog box for entering the compression enablement code
Another option is to create a file system as a Migration-ILM. The SONAS software has a
feature that allows the file system to use multiple pools of different performance. This is used
to build a file system that automatically migrates data to the slower storage based on
predefined thresholds.
Using this feature is beyond the intended scope of this book. If you want to use it, contact your IBM representative for assistance and refer to the following SONAS documentation:
SONAS Implementation Guide and Best Practices Guide, SG24-7962
SONAS Concepts, Architecture, and Planning Guide, SG24-7963
The fourth option is Custom. This gives greater flexibility in creating the file system. The
storage can be obtained from multiple pools and you can manually define the file system
policies. This feature requires in-depth knowledge of the GPFS file system, which provides
the ability to set policies to automate management of data in a file system. Policies can be set
and run manually from the command line. If you want to use policies, seek assistance from
IBM.
Create a file set
There is no need to create file sets, but they are useful in helping with defining and managing the data shares. To work with file sets, go to Files → File Sets. The cluster automatically defines a file set, called root, at the base of each file system, as shown in Figure 11-65.
Figure 11-65 File Sets panel
More file sets can be created at any junction point in a file system. To create a file set, click
New File Set at the top of the window. This starts the New File Set window. You can use the
basic option to define the file set and its path. Or, you can use the custom option to complete
the quota and snapshot details now. These can be added later if wanted.
Tip: When browsing for the path, you can right-click a directory to add a subdirectory
under it.
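File sets can also be created from the file-module CLI with the mkfset command, which is inherited from the SONAS command set. The option syntax below is an assumption, and the file system name, file set name, and junction path are placeholders; verify the exact options in the Information Center before use:

mkfset gpfs0 projects --path /ibm/gpfs0/projects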
Shares
To manage shares, go to Files → Shares. This displays the main shares window and lists all shares on the cluster. To add a share, click New Share at the top of the window, which starts the New Share window, as shown in Figure 11-66.
A share can be accessed by using any of the supported access methods or any combination
of these. Use buttons at the top of the window to simplify the configuration entry. Use the
custom function to use multiple access methods.
All methods have the same common information:
Share name The name by which the share is known and accessed in the network.
Path The directory path to the shared directory, which includes the file
system.
Owner The user ID of the owner of this share.
Figure 11-66 New Share panel
Create a Common Internet File System share
When creating a Common Internet File System (CIFS) share, first complete the common
information as noted above. If using the custom option, click the CIFS tab and click Enable
the CIFS access control list (ACL). If creating CIFS only, this information is on the main
panel.
Set the read only, browseable, and hide objects options to the wanted choice and then click
Advanced. This action starts the window that is shown in Figure 11-67.
Tip: When browsing for the path, you can right-click a directory to add a subdirectory
under it.
There is a field here to enter a verbose comment about the share. This panel also allows you
to add the CIFS users or groups that will have access to this share and what that access is.
You can add as many entries as wanted. Security policy allows this access to be defined on a
new share. However, when data has been written to the directory (or subdirectories), then
access control can be altered only by using the Active Directory and its associated security
processes.
Figure 11-67 CIFS Advanced Settings panel
Create an NFS export
To configure a share for NFS access, click the NFS icon at the top of the New Share window.
Or, if you are using custom for multiprotocol, click the NFS tab, as shown in Figure 11-68 on
page 212. Use the Add NFS clients section to add an entry for each client ID that will have
NFS access to the share. Use the + icon to add each new entry. Set the access to read only
or not read only as wanted.
Unless you have a good reason not to, leave the Root squash option ticked (see Figure 11-68). If it is not ticked, the root user on any client system receives root-privileged access to the share, so full authority is granted. Using root squash removes root authority from a remote root connection, unless it is specifically allowed.
Figure 11-68 NFS share
Tick the Secure box if you will be using an NFS secure connection to access this share.
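Shares and exports can also be created from the file-module CLI with the mkexport command, inherited from the SONAS command set. The option syntax below is an assumption, and the share name, path, owner, and NFS client specification are placeholders; verify the exact syntax in the Information Center before use:

mkexport projects /ibm/gpfs0/projects --cifs --nfs "10.0.0.0/24(rw,root_squash)" --owner "DOMAIN\user1"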
Create an HTTP, FTP, or SCP export
If this share is already defined, use the Edit function to add any of these protocols. If this is a
new share, create a share and enter the common information as described above.
Click the Custom icon to show the options, as shown in Figure 11-69 on page 213.
Caution: Always enable root squash to prevent unauthorized access.
Now tick the HTTP, FTP, and Secure Copy Protocol (SCP) options as wanted. User access
and authentication will be the same as for the other protocols by using the configured
authentication method.
Figure 11-69 HTTP share
Chapter 12. Antivirus
This chapter describes the antivirus feature built into the IBM Storwize V7000 Unified. We
also describe how the Storwize V7000 Unified interacts with the optional external scan
engines for antivirus protection, what configuration options exist, and how to set it up.
12.1 Overview
In general, the antivirus (AV) protection is intended for Windows and Common Internet File
System (CIFS) users who require an extra level of data protection, for example, against
malicious, virus-type software. Because UNIX environments are much less exposed, typically
there is no antivirus protection required for these. Consequently, this antivirus feature of the
Storwize V7000 Unified is not supported for Network File System (NFS).
The IBM Storwize V7000 Unified is provided with an AV-Connector interface to communicate
with the external scan engines. Here is a summary of the supported options:
Antivirus protection for data in CIFS file shares only
McAfee and Symantec antivirus products are supported
Scalability and high availability of the antivirus scanning can be achieved through the
definition of multiple scan nodes
Scan on individual files: on file open command
Scan on individual files: on file close command (after creation or modification)
Scheduled scans on all files with these configurable options:
Manually (for example, after update of antivirus software itself or the known virus
signatures)
On defined schedule
The co-operation between the Storwize V7000 Unified and the external AV scan engines is
shown in Figure 12-1.
Figure 12-1 V7000 Unified intercepts I/O to enable antivirus scan first
If antivirus is configured, the AV connector in Storwize V7000 Unified intercepts the defined
I/O operations (on open for read, if configured: on close after write) on individual files and
sends a scan request to the defined external scan nodes. These use a stored file signature to
verify whether the file needs to be scanned. A scan is required if the signature of a file has changed, either because the file itself has changed (which can potentially be a sign of an infection) or because an antivirus software update has invalidated the associated signatures of all files. In this case, the scan engines perform a scan of the file to ensure that there is no infection and update the file's signature after completing the scan successfully.
12.2 Scanning individual files
For individual files, there is a mandatory scan on file open and a configurable option to scan
the file on close after write. The AV settings use an inheritance model, and this means the
settings are applied to all subdirectories of the specified path as well. For individual file scans,
the scope of files to be scanned can be configured by using an inclusion list or exclusion list.
It is also possible to specify whether access to a file is denied if it cannot be scanned. For example, access to a file can be denied if no scan nodes are currently available or if there is no further bandwidth to scan this file in parallel.
The AV connector intercepts the file open (for read) and close (after write, if configured) in
Samba:
Scan on file open for read:
Only if file has changed or virus signature has changed since the last scan
Result of last scan (signature) stored in extended attributes of a file:
Includes time of last scan, virus signature definition used
Optional: Scan on file close after write:
This is a proactive scan operation, and might improve performance of next file open for
read if no other changes occur in the meantime
Optional: Deny access if file scanning is not available (increases security)
The performance requirements for the file scans determine the required scalability: the need to scan files fast enough on open for read determines the bandwidth and the number of scan nodes that are required.
12.3 Scheduled scan
In addition to the scan operations on individual files, a scheduled scan (bulk scan) of all files in a given path can be configured. This can be an entire file export or a subdirectory tree
therein. The scan then includes all files and subdirectories of the path specified.
All these files and subdirectories in the path will then be rescanned after the AV vendor
updates its software or the AV virus signatures.
This proactive scheduled scan eliminates the need for all files to be scanned on first access, when users have requested them and are waiting for them. A scheduled scan provides
the following benefits:
Helps to mitigate impact on performance on file access after an AV virus signature update,
for example, when everyone logs in the next morning
Note: After scanning a file when it is opened, the next scan can occur only (if configured
accordingly) when the file gets closed. If there are simultaneous read and write accesses
to the same file with byte-range locking while it continues to stay open, the safety of the
individual write updates cannot be guaranteed by antivirus. A write process might write
suspicious data into the file, which a subsequent read might pick up unscanned and
unprotected while the file is still open. This is, for example, the case for a VMware VMDK
file while the virtual system is running.
The update interval of the virus signatures and the AV software determines the required scan interval, which might need to be every night
Stores the result of the scheduled scan in the extended attributes (EAs) of the files, which means there is no need to scan the file on first access anymore if its signature has not changed in the meantime
Verifies the file's signature and EAs on every file open to provide a guarantee that there has been no change to the file since the last scan
12.4 Set up and configure antivirus
In this section, we describe the steps and options available to set up the antivirus
configuration according to specific needs. The options are described using the graphical user
interface (GUI), but similarly the setup can also be done by using the command-line interface
(CLI).
The infrastructure for the antivirus scanning process is provided outside of the Storwize
V7000 Unified system.
The following prerequisites are required for this infrastructure:
Client supplies and maintains scan nodes
Client installs a supported antivirus vendor product on the scan nodes:
Symantec
McAfee
The scan nodes are attached to the client's IP network and can communicate with the V7000 Unified systems
There are other important considerations related to this topic. Availability and scalability are
achieved by provisioning multiple scan nodes:
During AV setup, a pool of scan nodes is configured to be used in the V7000 Unified
Each AV connector instance randomly chooses a scan node from this pool
If a scan node fails, it is temporarily removed from the pool:
The AV connector checks regularly to see if the node is back online
If it is back online again, it is readded to the pool of available scan nodes automatically
Important: When using hierarchical storage management (HSM) and antivirus bulk scans,
a bulk scan does not rescan files that have been migrated off the file system using HSM.
This means that no file recall is required, preserving the results of the HSM policies
defined. Scanning a file updates its last access time property.
12.4.1 Antivirus setup steps
The antivirus configuration panels can be found in the Files → Services → Antivirus panels, as shown in Figure 12-2.
Figure 12-2 Menu path for Antivirus: Files → Services
Figure 12-3 shows the available services. Antivirus is already selected but in this example, it
is not configured yet.
Figure 12-3 Available services and antivirus selection
After selecting Configure, the next step is to specify three settings according to your needs:
List of available scan nodes
Scan protocol that they are using
Global timeout for every scan
The corresponding GUI panels are shown in Figure 12-4 and Figure 12-5.
Figure 12-4 Definition of scan nodes, protocols, and timeout value
Figure 12-5 shows the selection of supported scan protocols.
Figure 12-5 Selection of supported scan protocols
After saving the scan node settings, the main window for Antivirus is started, showing three
tabs: Definitions, Scheduled Scans, and Scan Now. See Figure 12-6.
The settings for the scan nodes are not displayed on the main window. They are shown or can
be changed by using the Actions drop-down menu.
Figure 12-6 shows the main window.
Figure 12-6 Antivirus overview window showing the two main tabs: Definitions and Scheduled Scans
Selecting to add a new antivirus definition starts the next window, where you specify multiple options, as shown in Figure 12-7 on page 222:
Specify the path to be scanned
Allows you to browse for available paths
Each antivirus definition can be enabled or disabled
Enable scan of files on close after write if required
Provides extra security
Deny client access if the file cannot be validated
Provides extra security
Prevents access even if just the scan nodes are not available or not reachable on the
network
Specify which default action to take for infected files
Options are No action, Delete, or Quarantine
Definition of scope for the scan
Options are all files, include files with extensions specified (which means: scan just these),
or exclude files with extensions specified (which means: scan all others)
Figure 12-7 Create new antivirus definition: Configurable options
When a new definition is created, it is shown on the main window for Antivirus, as shown in
Figure 12-8.
Figure 12-8 List of stored antivirus definitions
Besides the antivirus definitions, there is an optional feature to define Scheduled Scans. This option can be set up in the corresponding tab, as shown in Figure 12-9 on page 223.
The following options are available for Scheduled scans:
Frequency: Once a Day, Multiple Days a Week, Multiple Days a Month, Once a Week, or
Once a Month
Time of day: Built-in presets in steps of 15 minutes, but any value can be entered
Paths to scan: Specify for which path this Scheduled scan definition will be used
Figure 12-9 shows how to define a new Scheduled scan.
Note: Because of the usual change rate for virus definitions of at least once a day for all
major vendors of antivirus software, the most common setting for this is expected to be
Once a Day.
Figure 12-9 Definition of new scheduled scan
The scheduled scans defined are listed on the main page for scheduled scans, as shown in
Figure 12-10.
Figure 12-10 List of stored scheduled scan definitions
To run an immediate scan, you have the option to use Scan Now, as shown in Figure 12-11 on page 224.
Figure 12-11 Scan Now
With the Scan Now tab, you are able to change the protocol you wish to use, and the node, port, and timeout value of the scan engine, as shown in Figure 12-12.
Figure 12-12 Scan Now Protocol
Once you have chosen the options you wish to use, click OK. You then have the option to Scan Now or Reset, as shown in Figure 12-11.
Antivirus setup is now complete. We suggest that you test the antivirus scan engine to ensure correct and expected operation.
Chapter 13. Performance and monitoring
In this chapter, we describe how to tune Storwize V7000 Unified for best performance. We go
on to explore how performance of the Storwize V7000 Unified can be monitored. Finally, we
describe the approach to identifying and rectifying performance problems.
Real-time Compression (RtC): If compression is being used or considered, we strongly
suggest that you consult Real-time Compression in SAN Volume Controller and Storwize
V7000, REDP-4859, especially the chapter on performance guidelines, available at the
following website:
http://www.redbooks.ibm.com/redpieces/abstracts/redp4859.html
13.1 Tuning Storwize V7000 Unified for performance
In the topics that follow we describe how to tune the Storwize V7000 Unified to achieve better
performance.
13.1.1 Disk and file-system configuration
In this topic we discuss disk and file-system configuration.
GPFS Block Size
Writes from applications are often smaller than the GPFS block size (which defaults to 256K).
However, for sequential accesses GPFS will combine the individual writes from an application
and send a single write request to the underlying Storwize V7000 for the full GPFS block size.
The Storwize V7000 block storage system will then stripe the data as it writes it to the
underlying MDisk. The stripe size can be configured to either 128K or 256K; the default is
256K.
One of the main objectives when deciding on the number of physical drives that make up an MDisk (also known as the array size) is whether it increases the likelihood of achieving full-stride writes. A full-stride write is one in which all the data in the disk array is written in a single chunk, ensuring the parity is calculated only using the new data. In other words, it avoids the need to read in existing data and parity for the current stripe and combine it with the new data.
Referring to the example MDisk in Figure 13-1, achieving a full-stride write requires the GPFS block size to be significantly larger than 256K; in this particular example, it needs to be 1 MB in size (that is, 8 x 128K). For this reason, in addition to the 256K default, 1 MB and 4 MB GPFS block sizes are supported.
Figure 13-1 MDisk Configuration
We recommend using a 1 MB or 4 MB GPFS block size only for sequential workloads. The reason is that a sub-block, which is defined as 1/32 of a full GPFS block, is the minimum size of data that GPFS can allocate space for. Therefore, using a 1 MB or 4 MB block size on random workloads, where GPFS is unable to combine writes, will most likely result in the sub-block being significantly larger than the size of data to be written. This will result in poor space usage, and reduce the effectiveness of caching, which will negatively impact performance.
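To put numbers on this, the sub-block sizes work out as follows:
256K block size: sub-block = 256K / 32 = 8K, so a small random write wastes at most a few kilobytes of allocation.
4 MB block size: sub-block = 4 MB / 32 = 128K, so a 4K random write still consumes a 128K allocation unit.
For sequential workloads, by contrast, GPFS coalesces application writes into full blocks, and a 4 MB block corresponds to four of the 1 MB full strides in the example MDisk of Figure 13-1, which is exactly the access pattern that benefits from the larger block size.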
Number of volumes
The number of volumes used to back the GPFS file-system can also impact performance.
This is because each volume has its own fixed queue depth, and therefore with only a small number of volumes there is a risk of I/O getting backed up within GPFS.
We recommend that if a file-system is allocated with greater than or equal to 50% of the total
MDisk Group capacity then the number of volumes created should be two times the number
of MDisks. For example, if the file-system is allocated 60% of the total MDisk Group capacity
and the MDisk Group contains 11 MDisks, then 22 volumes should be created.
If the file-system is allocated with less than 50% of the total MDisk Group capacity then the
number of volumes created should equal the number of MDisks. We recommend that a GPFS
file-system should be backed by no less than 3 volumes.
Separation of Data and Meta-Data
SSDs can achieve two orders of magnitude more I/O operations per second (IOPS)
compared to traditional spinning disk technology. For example, Near-Line SAS drives achieve approximately 100-150 IOPS, whereas enterprise-grade SSDs can deliver 10,000 IOPS.
However this extra performance comes at a financial cost, and therefore SSDs need to be
used where they offer the greatest benefit.
To make best use of SSDs Storwize V7000 provides its EasyTier functionality. This
technology exploits the fact that in general only a small percentage of the total stored data is
accessed frequently. EasyTier automatically identifies this frequently accessed data (known
as hot data), and migrates it onto the SSDs. This ensures that with only a small number of
SSDs a significant boost to performance can be achieved.
The IO originating from the Unified component is striped by GPFS onto the underlying Storwize V7000 volumes. This means it balances IO accesses, avoiding the hot and cold variation seen in traditional block workloads. Therefore, enabling EasyTier on volumes that back GPFS file-systems provides no performance benefit. Instead, it is better to reserve SSDs for the storage of meta-data.
Each file in a file-system has meta-data associated with it. This includes name, size,
permissions, creation time, as well as details of where the file data is located. The latency of
accessing this type of data can have a disproportionate effect on the overall performance of
the system, especially if the file-system contains a large number of small files. Snapshots, backup, and asynchronous replication all require the scanning of meta-data; therefore, we recommend using SSDs when making extensive use of these features.
Meta-data typically accounts for between 5-10% of the total space requirements of a
file-system, and therefore it is cost effective to place on SSDs.
SMB Configuration
If a file-system is to be accessed only using SMB protocol then the multi-protocol
interoperability support is not required. In the majority of cases disabling this support will
provide a noticeable improvement in performance.
There are three parameters that provide multi-protocol interoperability support; leases,
locking and sharemodes. Leases is used to indicate whether a SMB client is informed if
another client (using a different protocol) accesses the same file at the same time. Locking is
used to indicate whether a check is performed prior to granting a byte range lock to a SMB
client. Sharemodes is used to indicate whether the share modes are respected by other
protocols. (SMB uses share modes to enable applications to specify if simultaneous access to
files are permitted.) All three parameters can be set on the command line either during
creation of the share (using the mkexport command) or after the share has been created
(using the chexport command). For example:
chexport exportName --cifs leases=no,locking=no,sharemodes=no

Important: Spread SSDs across both of the Storwize V7000 chains, such that half are connected via port 1 and the other half are connected via port 2. This will ensure that the IO generated by the SSDs will be serviced equally by both available ports.
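If the share has not been created yet, the same leases, locking, and sharemodes options can be supplied at creation time with the mkexport command. The share name and path in this sketch are illustrative only; substitute your own values:
mkexport exportName /ibm/gpfs0/exportName --cifs leases=no,locking=no,sharemodes=no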
13.1.2 Network Configuration
We describe two network configuration considerations.
10 Gbit versus 1 Gbit interface
For best performance, it is strongly advised that clients are connected to the Storwize V7000 Unified using the 10 Gbit interface.
Bonding Modes
Storwize V7000 Unified supports a variety of bonding modes. The default is Mode 0, referred to as active-backup. In this mode, only one of the two links is active at any one time. This is the simplest of the bonding modes, and is designed solely to provide resilience in case of a link failure.
Although the other modes provide potentially higher throughput than active-backup (as they utilize both links), they are more complex in nature and have a high probability of interacting adversely with the connected Ethernet switches and associated network infrastructure. It is therefore strongly recommended to keep the default Mode 0.
13.1.3 Client Configuration
The clients connected to the Storwize V7000 Unified play a critical role in determining the
performance of the system. For example, if the client is unable to send a sufficient number of
requests to the Storwize V7000 Unified then performance will be bound by the client. Not only
is it important to ensure that there are a sufficient number of clients to generate the necessary
load, but it is also important that the clients are correctly configured.
NFS Clients
NFS clients commonly used within a UNIX environment offer many parameters that control how they connect to the NFS file server. One such parameter that has a major effect on overall performance is the use of the sync mount option.
Both the client and server side have a sync parameter. The server-side sync is enabled on the Storwize V7000 Unified. It is strongly recommended that this setting remains unchanged, as it ensures that on receiving a write or commit from the NFS client, the data is stored to the underlying Storwize V7000 prior to sending back an acknowledgement. This behavior is required when strictly adhering to the NFS protocol.
The client-side sync parameter defines whether each NFS write operation is sent immediately to the Storwize V7000 Unified or whether it remains on the client until a flush operation is performed, such as a commit or close. Setting the sync parameter comes at a significant cost to performance, and often the extra resilience is not required. Applications that demand full synchronous behavior should open files using the O_SYNC option, which forces the synchronous behavior regardless of the client-side sync parameter setting.
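As an illustration only (the server name and export path are placeholders, and the options shown are standard Linux NFS client syntax rather than anything specific to the Storwize V7000 Unified), the difference is simply whether the sync option is added at mount time:
mount -t nfs v7ku.example.com:/ibm/gpfs0/export1 /mnt/export1
mount -t nfs -o sync v7ku.example.com:/ibm/gpfs0/export1 /mnt/export1
The first form uses the default asynchronous client-side behavior; the second forces every write to be sent immediately and is usually unnecessary if applications open their critical files with O_SYNC.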
Important: The SMB settings described in "SMB Configuration" above (disabling leases, locking, and sharemodes) should not be used when the file-system is accessed via other protocols, such as NFS. Doing so can lead to data corruption.
SMB Clients
There are two main versions of the SMB protocol. Version 2 of the protocol, introduced in
2007, significantly reduces the number of separate commands and sub-commands compared
to the original protocol. This significantly reduces the frequency of communication between
the client and the file server, and therefore offers a significant performance benefit.
By default, the Storwize V7000 Unified will attempt to negotiate the use of Version 2 of the SMB protocol. However, if the client machine is unable to use this later version, the system will fall back to the earlier and less optimal Version 1. To ensure that Version 2 is used, it is essential that the client machines are running Microsoft Windows Server 2008 or later, or alternatively Microsoft Windows 7 or later.
13.2 Monitoring the performance of Storwize V7000 Unified
The performance of the Storwize V7000 Unified can be monitored through both the Graphical
User Interface and Command Line Interface. Tivoli Storage Productivity Center can also be
used to monitor performance.
13.2.1 Graphical Performance Monitoring
The Storwize V7000 Unified provides a quick-look set of graphs that show functions for file and block performance. These graphs have differing collection time frames, with the block graphs showing only the last 5 minutes of data, which is not recorded. They are intended to highlight problem areas in real time and are useful in narrowing down the search during sudden and critical performance degradations.
To display the graphs, go to Monitoring → Performance. The three tabs available for performance monitoring are File, Block, and File Modules.
The File performance graphs show metrics about client throughput, file throughput, latency,
and operations. The time frame for these graphs can be selected from minutes to a year. See
Figure 13-2 for details.
Figure 13-2 File performance
The Block performance graphs show real-time statistics that monitor CPU usage, volume,
interface, and managed disk (MDisk) bandwidth of your system and nodes. The CPU usage
graphs also show separate system and compression statistics. Each graph represents 5
minutes of collected statistics and provides a means of assessing the overall performance of
your system. See Figure 13-3 for details.
Figure 13-3 Block performance
The File Modules performance graphs show the performance metrics for each file module for
CPU, memory, and public network. You can also select numerous data types for each item
selected such as collisions, drops, errors, and packets for public network statistics and other
related data types for CPU and memory statistics. The time frame for these graphs can be
selected from minutes to a year. See Figure 13-4 for details.
Figure 13-4 File modules performance
The block storage system is continuously logging raw statistical data to files. Depending on the setting for the statistics interval (default = 5 minutes), all the statistical counters are collected and saved regularly. Only the last 15 files are kept, and the oldest ones are purged on a first-in first-out (FIFO) basis. Tivoli Storage Productivity Center and other performance tools collect these raw files. These statistics files are also collected by the support data collection, which collects all current files, that is, the last 15 saves. IBM technical support uses this data, if required, to analyze problems.
13.2.2 Command Line Interface (CLI) Performance Monitoring
In addition to the performance graphs, it is possible to monitor performance by using the CLI. The lsperfdata command provides access to a range of Storwize V7000 Unified file performance metrics, including the number of file operations performed on the cluster. Data is available over a range of time periods, ranging from seconds to a year. Use the lsperfdata command with the -l parameter to list all the performance metrics that are available.
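For example, a typical session first lists the available metrics and then requests one of them over a given period. The metric name and time period shown here are illustrative only; use the names reported by lsperfdata -l on your own system and see the command reference for the exact parameter syntax:
lsperfdata -l
lsperfdata -g cluster_throughput -t hour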
The block performance of Storwize V7000 can also be monitored using the CLI. The
command lssystemstats provides a range of metrics about the performance of the
underlying block storage, including the throughput and latency of the volumes, MDisks and
physical drives.
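For example, running lssystemstats with no parameters lists the current value of every statistic; adding a delimiter makes the output easier to parse in scripts:
lssystemstats
lssystemstats -delim :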
13.2.3 Tivoli Storage Productivity Center
IBM Tivoli Storage Productivity Center has been enhanced to include the Storwize V7000 Unified. Tivoli Storage Productivity Center collects and tracks data from the cluster and is able to provide details about demand. Some of the areas that can be drilled down into are general cluster information, nodes, pools, Network Shared Disks (NSDs), file systems, file sets, exports, and selected directories.
Configuration
Tivoli Storage Productivity Center does not use a Common Information Model (CIM) agent,
but is able to natively connect to the Storwize V7000 Unified. To configure the cluster to
communicate with TPC requires one simple step, which is to create a user ID with full
authority, for Tivoli Storage Productivity Center to use. Assign a password and give the user
ID and password details to the Tivoli Storage Productivity Center administrator. All the
remaining configuration is done from Tivoli Storage Productivity Center and the configuration
details are retrieved and built automatically.
Additionally, if specific file system scanning is required, you might need to create and export a share to the host running the Tivoli Storage Productivity Center agent.
13.3 Identifying and resolving performance problems
As shown in Figure 13-5, there are many components that need to be considered when analyzing a performance problem. At the top of the storage stack is the client application that initiates the IO requests, and the client server (also known as the host), which is responsible for sending the IO requests to the Storwize V7000 Unified. The next level is the networking,
which includes any inter-connected switches and/or routers. Issues at this level usually
present in the form of high packet loss rates observed at either the host or the Storwize
V7000 Unified. See 13.3.2, Network Issues on page 233 for more details on how to identify
and resolve issues at this layer.
Figure 13-5 Storage Stack
Performance issues might also result from the over-utilization of the resources in either the file modules or the block modules. Identifying and resolving issues at this bottom layer of the storage stack is described in Section 13.3.3, High Latencies on page 233.
13.3.1 Health Status
The health status indicator at the bottom of the Unified GUI window, the alerts, and the event logs provide the first point of investigation for a problem, which is described in the troubleshooting chapter. Using the lshealth command to examine an unhealthy system in detail can be useful. By using the -i parameter, more detail can be seen on individual components.
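As a brief sketch (the component name here is a placeholder; use a name as reported in the output of the plain lshealth command on your own system):
lshealth
lshealth -i <component>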
Important: It is important to address all issues and maintain the health status of your
system as green to prevent issues such as volumes going offline because of lack of
available physical storage.
13.3.2 Network Issues
Poor performance is often a result of an issue in the network connecting the hosts to the
Storwize V7000 Unified. Using the performance monitoring graphs, select the File Modules tab and use the drop-down Item menu to select Public network. Then use the Data Item to monitor Errors, Drops, and Collisions.
If the data indicates an issue, then the logs on the Ethernet switches and routers should be examined. In addition, if the default bonding Mode 0 (active-backup) is not being used, then it is advised to revert to this mode to determine whether that resolves the problem.
It is also useful to isolate any potential issues by connecting the hosts directly (or as directly as possible) to the Storwize V7000 Unified. If performance improves when the intermediate networking components have been excluded, then it indicates that the issue lies with how the Storwize V7000 Unified is interacting with the additional network components.
13.3.3 High Latencies
High latencies are often an indication of a bottleneck. The next step is to determine whether
the latency is within the file layer or the block layer. This is achieved by comparing the cluster
read and writes latencies, (provided as a graph on the File tab), with the read and write
latency of volumes and MDisks (provided as a graph on the Block tab). If the cluster latencies
are significantly higher than the volume and MDisk latencies then it indicates the issue is
within the file modules.
File Modules
High latencies within the file layer may be due to insufficient volumes being created, see
Section Number of volumes on page 226. Another possible cause is high utilization of the
CPUs. This can be monitored using the performance graphs. Select the File Modules tab and
use the drop-down Item menu to select CPU. Then use the Data Item to monitor System,
User, I/O Wait and Idle.
The CLI command lslog is useful for showing the activity on the system over the previous day or so. It lists the logs from the event log database and can be used to discover when activities such as backup and asynchronous replication started. It also contains information messages showing when the CPU usage on either file module crosses the 75% threshold. This indicates that the machine is under high utilization, which, if it continues, may negatively impact performance.
It is important to ensure that the resource utilization of the two file modules is balanced, with the file workload being sent equally to both nodes. Also, advanced features such as snapshots, backup, and asynchronous replication can be resource intensive. It is therefore important to ensure that they are scheduled such that the times they are running do not overlap.
Block Modules
High latencies within the block layer are often an indication that the drives are overloaded. Performance can be improved either by increasing the number of drives available or by upgrading to faster storage, such as SSDs.
Chapter 14. Backup and recovery
In this chapter, we look at the backup and recovery of the Storwize V7000 Unified
configuration data, and the host user data written to the Storwize V7000 Unified. These are
two distinctly different and unrelated areas and we cover each one separately. We also look at
the recovery processes to restore from backup.
14.1 Cluster backup
In the topics that follow, we describe Storwize V7000 Unified cluster backup.
14.1.1 Philosophy for file and block
The storage cluster is made up of two major components: the Storwize V7000 storage
subsystem, and the Storwize V7000 Unified file modules. Because each subsystem has its
own existing and established backup processes that are different in requirements, they are
backed up independently.
Storwize V7000 (block)
The primary level of backup is the knowledge that the most current copy of the configuration
and status is held by the other operational storage nodes in the cluster. Therefore, should a
node fail or at any time need to join the cluster, it receives all the required configuration and
status information from the config node.
Additionally, the config node is saving regular checkpoint data on the quorum disks. These
saves occur about every 10 minutes. The three quorum disks have several functions,
including tie-breaking in the event of a split cluster.
A full config XML file is generated every day at 01:00:00. This file is saved on the hard disk drive of the config node and is also written to the quorum disks. This file contains the hardware and storage configuration, including MDisk, volume, and host mapping details, but it does not include the storage extent mapping tables. This backup can be manually triggered at any time through the command-line interface (CLI).
File modules (file)
Again, like the storage, the Storwize V7000 Unified component relies on the knowledge that if one file module is operational and online, the other can be restored, even from a complete loss of all system data. The difference here is that the file modules are sufficiently unique that the backup is only usable on the particular module it was created from.
A backup is taken automatically at 02:22:00 every day on each file module. This backup is then packaged into a single file and copied to the other file module and stored on its hard disk drive, so that each file module has a copy of the other's backup. Previous generations of backup are kept in case of corruption or a need to step back. This backup can be manually triggered at any time through the CLI.
14.1.2 Storage enclosure backup (Storwize V7000)
The storage component is backed up independently of the file modules and uses the inherent
processes of the Storwize V7000. The backup process is designed to back up configuration
information and the current status of your system, such as cluster setup, user ID setup,
volumes, local Metro Mirror information, local Global Mirror information, managed disk
(MDisk) groups, hosts, hosts mappings, and nodes. Three files are created:
svc.config.backup.xml This file contains your current configuration data and status.
svc.config.backup.sh This file contains a record of the commands issued by the backup
process.
svc.config.backup.log This file contains the backup command output log.
If an immediate backup is wanted, such as before or after a critical or complex change or
before a support data collection, then issue this CLI command:
svcconfig backup
This starts an immediate backup process. Successful completion is indicated by a return to
the prompt without an error message. While the backup is running, any commands or
processes that might change the configuration are blocked.
A manual backup is not normally required because the task runs daily at 01:00:00. Manual
backup is only needed if changes are made or if the status must be reflected in the support
data.
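If an off-cluster copy of the configuration backup is wanted, the generated files can be copied from the configuration node with a secure-copy tool. The following line is only a sketch: the cluster address and target directory are placeholders, and the source path of the backup files can differ between code levels, so confirm it in the information center for your release:
pscp -unsafe superuser@cluster_ip:/tmp/svc.config.backup.* /local/backup/dir/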
There are high speed paths between all the nodes that form the cluster. These paths can be
over the SAN and also with Storwize V7000, there is a high speed internal bus in the
enclosure between the node canisters that also carries these paths. This enclosure link
removes the need for SAN paths between the nodes and allows the cluster to operate without
a connection to the SAN if wanted.
These links are used to maintain data integrity between the two nodes in an input/output (I/O)
group and are also used to ensure that the cluster status and configuration is known to all
nodes at all times. Any changes to the configuration, including the extent maps, are shared
over these paths. This means that any node can assume the config role and will have fully
up-to-date configuration at any time. Should a node be disconnected from the cluster (that is,
from the other nodes) for any amount of time, it cannot continue handling data. It must
therefore leave and rejoin to relearn the current configuration, thereby maintaining complete
integrity of the cluster.
The code will always try to use three MDisks as quorum drives, if at least three are available.
One will be marked as the active quorum. The Storwize V7000 will reserve an area of each
MDisk to contain the quorum data. Which MDisks are used and which one is active can be
altered if wanted. These MDisks are seen by all nodes in the cluster. The config node is
periodically writing checkpoint data to the quorums (about every 10 minutes). The quorums
also serve other functions including being a means of communication between nodes that
have become isolated so that tie-breaking can be resolved.
If a node is properly shut down, it will save hardened data and gracefully leave the cluster to be
be offline. When an offline node starts, it searches for the active cluster (that is, if any other
nodes from this cluster are already active and have formed the cluster). If so, then it will
request and be granted permission to rejoin. If no cluster exists (that is, no node responds)
and the node can see the quorums to confirm that the cluster is not active, this node will
assume it is the first alive and form the cluster.
Should a node fail, or for any reason be unable to save hardened data, then it has exited the
cluster and can never rejoin. Any node starting up that was a cluster member but has no valid
hardened data, will fail and post a permanent error, typically 578. This node must now have its
definition in the cluster deleted and be added again as a new node.
If all nodes fail without an appropriate shutdown, that is, all nodes have failed from the cluster
and no active nodes remain, the quorum data can be used to rebuild the cluster. This is a rare
situation and is known as a tier 3 recovery.
14.1.3 File module backup
A pair of 1 Gb Ethernet connections form the physical path over which a link exists between
the two file modules. This link is used for communications and to maintain integrity of the file
systems across the cluster. It also is used to resolve tie-break situations, for recovery actions,
software loading, and for transfer of backup data.
A backup process runs at 02:22:00 each day, which creates a single packaged file. A copy of the backup file is then stored on the other node, so each file module has a copy of the other node's backups. The file is given a unique name and stored on the file module in the following directory:
/var/sonas/managementnodebackup
This backup can be manually triggered at any time through the CLI:
backupmanagementnode -v
14.2 Cluster recovery
In the topics that follow, we describe Storwize V7000 Unified cluster recovery.
14.2.1 Storage enclosure recovery (V7000)
As with the other products in this family (SAN Volume Controller and Storwize V7000),
recovery of a node or cluster from backup is defined in four levels that are known as tiers.
These tiers are numbered in increasing level of impact and severity. For the Storwize V7000
Unified, the tiers are summarized as follows:
Tier 1 (T1) Recovers from single failures of hardware or software without loss of
availability or data.
Tier 2 (T2) Recovers from software failures occurring on nodes with loss of
availability but no loss of data.
Tier 3 (T3) Recovers from some double hardware failures but does potentially
involve some loss of client data.
Tier 4 (T4) Assumes the loss of all data managed by the cluster and provides a
mechanism to restore the cluster's configuration to a point where it is
ready to be restored from an off-cluster backup (for example, tape
backup).
The following descriptions are to assist you in understanding the effect of each recovery and
an overview of the processes involved. In all cases where action is required, IBM
documentation and IBM technical support guidance should be closely followed.
Tier 1
A tier 1 recovery is where the cause of the problem can be resolved by warm starting the
node. The most common trigger for this is a hardware or software error being detected by the
storage node's software, which triggers an assert. An assert is a software-initiated warm start
of the node. It does not reboot the operating system, but restarts services and resumes its
previous operational state. It also takes a dump for later analysis.
The warm start occurs quickly enough that no connectivity is lost and paths are maintained.
No data is lost and cache destages when the node is recovered. There should be no effect on
the file modules; all I/O is eventually honored. Most asserts are only observed in the event log
or by an alert being posted. Any assert should be reported to IBM Support for investigation.
Tier 2
The tier 2 recovery level is much the same as a tier 1, but the node has failed and must be re-added. Again, this process is automated, and when the storage node has rebooted, the cluster will perform recovery procedures to rebuild the node's configuration and add it back into the cluster. During this process, most configuration tasks are blocked. This recovery is normally indicated by a 1001 error code. This recovery takes about 10 minutes.
It is important to carefully follow the procedures in the information center and as directed by the maintenance procedures and IBM Support.
The Storwize V7000 storage component recovers and becomes fully operational again
without intervention. No data is lost; cache is recovered and destaged from the partner node.
Any I/O being directed to the node at the time of failure will fail. Depending on multipath driver
support on the hosts, this I/O will either be hard failed or try again to the other node in the I/O
group.
File services will need manual intervention to recover. Wait for the Storwize V7000 to
complete its recovery and a message to be displayed on the graphical user interface (GUI)
console. Call IBM Support for assistance and guidance. Start with the event log and perform
maintenance on the error. Ensure that you follow the directions carefully and also the
information center.
Recovery of the file services will likely entail reboots of the file modules one at a time and
checking of the status of file systems. Follow the guidance of IBM Support and the online
maintenance procedures. When the recovery is complete, support will ask for logs to confirm
health before advising to release the change locks.
Tier 3
Tier 3 recovery is required if there are no storage nodes remaining in the cluster. The cluster
is then rebuilt from the last good checkpoint, as stored on the quorum disks. This is a rare
situation but should it be required, direct assistance from IBM Support is needed.
The Storwize V7000 storage component can be recovered by the user by following the
procedures in the information center. IBM Support can provide assistance if needed.
When the Storwize V7000 storage component has been recovered, direct IBM Support is
needed to recover the file services. This requires collection and transmission of log files and
remote access to the CLI, which can be achieved using IBM Assist On-site (AOS) or any
acceptable process that allows support access. IBM Support will investigate the status of
various components in the file modules and repair as needed. It is important that no actions
are done onsite without support guidance. This process can take some time.
There is potential for data loss if data existed in a node's cache at the time of failure. All
access to data on this cluster is lost until the recovery is complete.
Tier 4
This rare process is unlikely and is not automatically started. It is directly driven by
IBM Support after all attempts to recover data have failed or been ruled out. All user data is
considered lost and the Storwize V7000 Unified will be reinitialized and restored to the last
known configuration.
On recovery completion, the storage component will have the MDisks, pools, and volumes
defined. All previously mapped volumes will be mapped to the host definitions. No user data
in volumes is recovered and must be restored from backup.
The file modules will be reloaded and reinitialized. The user must provide all the configuration
data used to initially build the system, which will be used to rebuild the configuration to the
same point. All file system configuration will also be reentered, including file systems, file
sets, and shares.
When file services are resumed, user data can then be restored from backup. The process
that is used depends on the backup system in use. This is covered in section 14.3, Data
backup on page 240.
14.3 Data backup
The file data stored on the cluster can be backed up using conventional means by the servers
or a server-based backup system. Alternatively, the file systems mounted in the
General Parallel File System (GPFS) file system on the cluster can be backed up directly.
Currently, two methods of data backup are supported for backing up the file systems on the
Storwize V7000 Unified: Tivoli Storage Manager and Network Data Management Protocol
(NDMP). Tivoli Storage Manager uses the IBM proven backup and restore tools. NDMP is an
open standard protocol for network-attached storage (NAS) backups.
14.3.1 Data backup philosophy
The following topics deal with the data backup philosophy.
Server
This method reads the data from the cluster using the same methodology that the servers use
to read and write the data. The data might be read by the server that wrote it, or by a
stand-alone server dedicated to backup processes that has share access to the data. These
methods are outside the intended scope of this book.
Tivoli Storage Manager
The storage agent is included with the cluster software and when enabled, runs on the file
modules. When configured, Tivoli Storage Manager can be scheduled to back up file systems
from the GPFS system running on the file modules to external disk or tape devices. This data
backup and management is controlled by Tivoli Storage Manager.
Network Data Management Protocol
A Data Management Application (DMA) is installed on an external server. When enabled and
configured, the NDMP agent runs on the file modules and communicates with the DMA.
Backup scheduling and management is controlled in the DMA. Data is prepared by the cluster
and sent to the DMA server, which then stores the backup data onto external disk or tape.
14.3.2 Tivoli Storage Manager
Tivoli Storage Manager is able to use the Active Cloud Engine (ACE), which is incorporated in
the Storwize V7000 Unified.
Warning: Do not attempt to use NDMP with HSM or Tivoli Storage Manager. This is not
supported and can cause an outage.
To configure Tivoli Storage Manager, you must enable Tivoli Storage Manager as the backup
technology and create a Tivoli Storage Manager server definition on the Storwize V7000
Unified. As the first step, select Files → Services and click Backup Selection. This displays
the window shown in Figure 14-1.
Figure 14-1 Backup Selection tab
Confirm that the Tivoli Storage Manager option is selected. If not (as shown in Figure 14-1),
use the Edit function to change it.
Figure 14-2 shows the Backup Selection tab after changing the backup technology to Tivoli
Storage Manager.
Figure 14-2 Select Tivoli Storage Manager protocol
Now click the Backup icon and this displays a message indicating that Tivoli Storage
Manager is not configured (see Figure 14-3). Click Configure.
Figure 14-3 Tivoli Storage Manager backup not configured
This starts the Tivoli Storage Manager configuration applet, which shows that there are no
definitions found. From the Actions pull-down menu, select New Definition, as shown in
Figure 14-4.
Figure 14-4 Add new definition
The New Definition window is displayed. This window has four tabs:
General
Node pairing
Script
Summary
The General tab is displayed first. Complete the Tivoli Storage Manager server and proxy
details, as shown in Figure 14-5.
Figure 14-5 New definition: General
Now click the Node Pairing tab. As we show in Figure 14-6, the two file modules are listed.
Add a prefix as required by typing the prefix in the Nodes prefix field and clicking Apply. Also, add the password by entering it in the Common password field and clicking Apply.
Figure 14-6 New definition: Node pairing
Before clicking OK, you must configure this definition to the Tivoli Storage Manager server.
Click the Script tab to see the commands required to configure Tivoli Storage Manager.
These are provided for your convenience and can be copied and pasted to the Tivoli Storage
Manager console. We show the Script tab in Figure 14-7.
Figure 14-7 New definition: Script
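The generated script consists of Tivoli Storage Manager administrative commands that register the proxy node and the file module nodes and then grant the proxy relationship. The lines below are only a rough illustration of the style of script produced; the node names, password, and domain are placeholders, and you should always run the exact commands shown on the Script tab of your own system:
register node v7kuproxy passw0rd domain=standard
register node mgmt001st001 passw0rd domain=standard
register node mgmt002st001 passw0rd domain=standard
grant proxynode target=v7kuproxy agent=mgmt001st001,mgmt002st001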
Clicking the Summary tab displays the new configuration for your review (see Figure 14-8).
Figure 14-8 New definition: Summary
When the commands shown in the Script tab have completed successfully, click OK to build
the definition.
You are now able to perform backups from Tivoli Storage Manager of the data on this cluster.
Management and usage of Tivoli Storage Manager and the processes required to back up the
data on the cluster are handled by Tivoli Storage Manager and not within the intended scope
of this book. Consult your Tivoli Storage Manager technical support for assistance with Tivoli
Storage Manager.
14.3.3 Network Data Management Protocol
Configuring the Network Data Management Protocol (NDMP) agent on the Storwize V7000
Unified requires three basic steps:
1. Create the network group definition.
2. Set the configuration values.
3. Activate the node group.
The process begins by selecting the backup technology being used. Go to Files → Services and click Backup Selection, as shown in Figure 14-9.
Confirm that the Network Data Management Protocol (NDMP) option is selected. If not, use
the edit function to change it.
Figure 14-9 Backup selection: NDMP
Next, click the Backup icon to display the backup management window. Click New NDMP
Node Group to create the network group. We show this in Figure 14-10.
Figure 14-10 New NDMP Node Group selection
This action opens up a window shown in Figure 14-11.
Ensure that the Enable NDMP session field is ticked and enter a name for the group. Next, select the file systems that will be included in this group and the default network group. The following CLI command is used to create the group:
cfgndmp ndmpg1 --create
ndmpg1 is the name of the group in our example.
Figure 14-11 New NDMP Node Group: General tab
Click the Advanced tab to show the remaining configuration parameters. Set these
parameters as detailed in Table 14-1 on page 250, and as shown in Figure 14-12.
Figure 14-12 New NDMP Node Group: Advanced tab
When complete, click OK to create the group. A pop-up window shows the progress. Upon
completion, you can review the details, as shown in Figure 14-13.
Figure 14-13 Configure NDMP node group - task completed
Click Close when you are ready to continue.
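If you prefer to work from the CLI, an equivalent group configuration can be built with a short sequence of cfgndmp commands. This sketch uses only parameters that are listed in Table 14-1, with the same example group name and an illustrative file system path:
cfgndmp ndmpg1 --create
cfgndmp ndmpg1 --addPaths /ibm/gpfs1
cfgndmp ndmpg1 --dmaPort 10000
cfgndmp ndmpg1 --userCredentials ibmuser%mypass%mypass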
Table 14-1 shows the NDMP group configuration. For each definition, the default value (where one exists), the CLI command that sets it, and any comment are listed.
Table 14-1 NDMP group configuration

NETWORK_GROUP_ATTACHED (no default)
cfgndmp ndmpg1 --networkGroup <xxx>
The group of nodes to be attached with the NDMP group <optional>.

DEFAULT_PROTOCOL_VERSION (default: 4)
cfgndmp ndmpg1 --protocol 4
Do not change this setting.

AUTHORIZED_NDMP_CLIENTS (default = null, no restrictions)
cfgndmp ndmpg1 --limitDMA 192.168.0.3
cfgndmp ndmpg1 --freeDMA

NDMP_PORT (default: 10000)
cfgndmp ndmpg1 --dmaPort 10000

DATA_TX_PORT_RANGE (no default)
cfgndmp ndmpg1 --dataTransferPortRange 10020-10025

DATA_TX_IP_ADDRS (default = null, no restrictions)
cfgndmp ndmpg1 --limitDataIP 17.0.0.0/24
cfgndmp ndmpg1 --freeDataIP

NDMP_TCP_WND_SZ (default: 160)
cfgndmp ndmpg1 --tcpSize 160

LOG_FILE_TRACE_LEVEL (default: 0)
cfgndmp ndmpg1 --logLevel 3

NDMP_USER_NAME (default: ndmp) and NDMP_PASSWORD (default: ndmp)
cfgndmp ndmpg1 --userCredentials ibmuser%mypass%mypass
Specifies user name of ibmuser and password of mypass (repeated).

FILESYSTEM_PATHS (no default)
cfgndmp ndmpg1 --addPaths /ibm/gpfs1
cfgndmp ndmpg1 --removePaths /ibm/gpfs1
Define the paths to each file system that will be included in this backup group.

ENABLE_NEW_SESSIONS (default: allow)
cfgndmp ndmpg1 --allowNewSessions
cfgndmp ndmpg1 --denyNewSessions

Prefetch settings
cfgndmpprefetch ndmpg1 -activate
Enable prefetch if wanted. NDMP must be deactivated to change prefetch status.

PF_APP_LIMIT (default: 4)
cfgndmpprefetch ndmpg1 -applimit 4
Maximum sessions using prefetch at one time (1-10). NDMP must be deactivated to change prefetch status.

PF_NUM_THREADS (default: 100)
cfgndmpprefetch ndmpg1 -numThreads 180
Threads per node to be used for prefetching (50-180). NDMP must be deactivated to change prefetch status.

This returns you to the backup tab, and now there is a line entry on the window for this new group. Highlight the line, then use the Actions pull-down menu or right-click to manage the group (as shown in Figure 14-14).
Figure 14-14 NDMP backup - actions
You can edit the group's parameters. This takes you back through the same panels as the create task. You can also activate and deactivate the NDMP group, and also stop a session. Using the View Backup Status option, you can show all currently running sessions. The View Service Status option shows you the status of the NDMP service on each node.
The following CLI command activates the group:
cfgndmp ndmpg1 --activateSnapshot
Prefetching
This feature is designed to greatly improve backup performance of NDMP with small files. The process predicts the file sequence for the NDMP group being backed up and reads the files into the node's cache. This gives the effect of a high read cache hit rate.
Prefetching can be turned on for up to ten NDMP sessions, and the threads setting is provided to allow tuning of the backup workload across the nodes based on the node's cache memory and performance. Prefetch is designed to work with small files, under 1 MB. Files over 1 MB will not be prefetched. Therefore, turning on this feature on groups with predominantly large files will have little effect.
Prefetching is disabled by default. Activating it causes any currently running NDMP sessions to cease.
Caution: Do not change the NDMP settings or prefetch configuration while a backup is being performed. This causes current backups to fail when NDMP recycles.
Caution: Activate NDMP well before a backup is scheduled to run. Activating NDMP immediately before a backup might affect the backup while NDMP does start-up housekeeping. If snapshots have expired and require deletion, this might prevent a new snapshot from being created.
Taking backups
When the NDMP sessions are configured and NDMP is active, you are able to perform the
backups. This is done by using your NDMP-compliant DMA on an external server.
Configuration and management of the DMA is beyond the intended scope of this book.
More information about managing NDMP can be found in the IBM Storwize V7000 Unified
1.4.2 Information Center:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fmng_NDMP_topic_welcome.html
14.4 Data recovery
Although we do not show a data recovery scenario, we describe the steps that are required to
perform a recovery.
For more information about recovery procedures in general, refer to the following link that
describes these topics:
User ID and system access
File module-related issues
Control enclosure-related issues
Restoring data
Upgrade recovery
See the IBM Storwize V7000 Unified 1.4.2 Information Center:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/topic/com.ibm.storwize.v7000.unified.142.doc/svc_webtroubleshooting_21pbmm.html
14.4.1 Tivoli Storage Manager
Because the Storwize V7000 Unified software contains a Tivoli Storage Manager client, the
restore of a file system must be performed on the cluster to recover from your
Tivoli Storage Manager server.
The specific configuration and implementation of Tivoli Storage Manager differs from site to
site; therefore, we describe the process generically. You will need to consult the relevant
manuals and your Tivoli Storage Manager administrator before proceeding with a restore
operation. The basic steps are as follows:
1. Ensure that there are no backups or restores currently running by issuing the command:
lsbackupfs
2. You can restore a single file system or part of it using the command:
startrestore
A file pattern must be given. Using wildcards, the whole or part of a file system can be restored, even down to a specific file. It is also possible to filter to a time stamp, which restores files as they were at that time, based on the backups available. Overwrite can be enabled or disabled. Review the command description in the information center and the examples that are given before using this command (a brief sketch follows these steps).
3. The lsbackupfs command can again be used to monitor progress.
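As a sketch only (the path and wildcard pattern are illustrative, and the exact parameter syntax should be checked in the startrestore command reference for your code level), restoring part of a file system and checking progress might look like this:
lsbackupfs
startrestore "/ibm/gpfs0/projects/*"
lsbackupfs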
Refer to the IBM Storwize V7000 Unified 1.4.2 Information Center for a full example:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fifs_rcvrtsmdata.html
14.4.2 Asynchronous data recovery
Recovering a file system with asynchronous replication requires that you configure and start a
replication relationship from the target site to the source site.
For more information and an example, refer to the IBM Storwize V7000 Unified 1.4.2
Information Center:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fifs_rcvrasyncdata.html
14.4.3 Network Data Management Protocol
Like the backup process, restoring data with NDMP is performed on the DMA server and is
outside the intended scope of this book. This process requires that the NDMP agent running
on the cluster has been defined. In a running system requiring backup of a file system or part
thereof, the agent is already ready. In the case of a full rebuild, it might be necessary to define
and configure the agent to proceed.
More information about Managing NDMP can be found in the IBM Storwize V7000 Unified
1.4.2 Information Center:
http://pic.dhe.ibm.com/infocenter/storwize/unified_ic/index.jsp?topic=%2Fcom.ibm.storwize.v7000.unified.142.doc%2Fmng_NDMP_topic_welcome.html
Chapter 15. Troubleshooting and
maintenance
In this chapter, we look at how errors and events are reported and logged, and what tools are
available to analyze and recover from a problem. We also describe the procedures to follow,
and the methodologies and tools provided with the Storwize V7000 Unified to enable IBM
Support to assist.
We also cover common maintenance procedures, including software upgrading, and address
compression-related recovery, such as recovering from a volume going offline.
15.1 Maintenance philosophy
Many events or problems that occur in your Storwize V7000 Unified environment require little or no user action. This is because the system employs a self-healing philosophy so that, where possible, automatic recovery is triggered for many events. Also, when the cause of a problem has abated, recovery procedures automatically run and the event warning is closed, such as when a storage or host path is lost and then later recovered.
When a problem occurs, an entry describing the problem, using error codes, is entered into the event log. If the event requires action and cannot be automatically resolved, it is marked as unfixed. Only unfixed events require action. Alerting can be configured to send an email or Simple Network Management Protocol (SNMP) alert, and this can be filtered by the type of event. Recovery actions are taken by running the built-in guided maintenance procedures, which the user starts from the event display.
If the problem cannot be resolved using guided procedures, the user will be prompted to call IBM for support. This is achieved by calling IBM service using the local procedure. Depending on how you set up your call home function, the Storwize V7000 Unified might also have sent an alert to IBM, and IBM Support might even call you first. The primary purpose of this call home function is to get information about a possible problem to IBM in a timely manner, and it serves as a backup for problem notification. However, it remains the client's responsibility to be aware of the status and health of their systems and to raise a call with IBM if service is required, unless special arrangements have been made beyond the standard maintenance.
All user and technical manuals are incorporated in a single interactive repository that is called
the information center. This online web-based system is described in 15.4, Information
center on page 272.
Support of the Storwize V7000 Unified primarily uses the IBM remote support model, where the first contact is with the IBM Remote Technical Services (RTS) desks in each region. This proven process gives the fastest response and immediate engagement of specialized support. If required, higher levels of support can be engaged quickly. An IBM service support representative (SSR) is dispatched to the client site only if there are actions needed that require an IBM representative.
Most of the parts in the Storwize V7000 Unified are designated as client replaceable. If such a part needs replacement, the customer-replaceable unit (CRU) will be couriered to the site and instructions given on the replacement process by the RTS specialist.
If there is a requirement for IBM Support to observe behavior or log in to perform complex procedures, the specialist will use the IBM Assist On-site product. When configured, this will connect directly to the Storwize V7000 Unified, or to a client workstation. This tool is web-based and, by using secure authorization, provides a simple remote KVM function to the cluster or to a workstation in the client's environment. It uses only normal HTTP-based ports that are typically not blocked, and it also allows selected authorized specialists within IBM to jointly access the session if needed. The client maintains control of their workstation and can observe all activity. This tool greatly speeds up resolution time by getting a specialist onto the system quickly.
15.2 Event logs
Logging in the Storwize V7000 Unified is done in the event log. An event might be a critical failure of a component that affects data access, or it might be a record of a minor configuration change. The two major components of the Storwize V7000 Unified both maintain an event log, but these logs are stored and handled independently. The format of the data in the log and the tools available to interrogate and act on the log entries also differ significantly, so they are described separately.
All log entries are tagged with a status indicating the impact or severity of the event. This
quickly highlights the importance of an event and allows for sorting and filtering for display.
Many trace and debug logs are also recorded at a low level within both components. Many of
these logs wrap and must be collected close to a problem event. The logs are unformatted
and not visible to the user. Many are collected by the data collection process Download
Support Package. If more logs must be collected or generated, IBM Technical Support will
provide instructions or connect to the device.
There are two places to look for these events. The first item to look at is the lower-right bar, which shows the overall health status of the system as green, yellow, or red. As shown in Figure 15-1, there are tabs for File and Block events. For file type events, you also need to check the status for each file module by using Monitoring → System Details → <select a File Module>, as seen in Figure 15-2 on page 258.
Figure 15-1 Block and file events
Figure 15-2 shows file module events.
Figure 15-2 File module events
15.2.1 Storwize V7000 storage controller event log (block)
To display the events, go to Monitoring → Events. Click the Block tab. This action opens the
window that is shown in Figure 15-3.
Figure 15-3 Event logs: Block
There are some options to make this window more meaningful. You can sort on the event
status. The pull-down window gives a choice of Recommended Actions, Unfixed Messages
and Alerts, and Show All. The following list describes these options:
Recommended Actions Events that need actions performed against them, usually
by running guided maintenance.
Unfixed Messages and Alerts This lists all events that are marked as unfixed.
Show All Lists all entries in the event log.
A filter option is also provided, although it is needed only if the log has become cluttered with
a high volume of events. Next to the magnifying glass is a pull-down arrow. Select the field to
filter on. Then, in the filter box, enter the value of your filter.
Details
To display the event details, use the Actions menu to select Properties. An example of a minor event
is shown in Figure 15-4.
Figure 15-4 Block event: Properties
Two time stamps are logged, the first occurrence and the last. These are read with the event
count. If the event count is one, the time stamps will be the same, so the event occurred only
once. If the exact same event occurs more than once, the count will increase and the last time
stamp will show when the last one occurred.
The sequence number uniquely identifies this event. This number increments for every new
event for the life of the cluster.
The object type, ID, and name, identify the resource that this event refers to.
The reporting node is the node that detected and logged this event.
If this event is related to another event, the root sequence number will be that of the event that triggered this one. These events should therefore be considered together during problem determination.
The event ID is a number that uniquely relates to the event being logged and can be searched
on in the manuals or support knowledge bases to obtain more detail on the event. Associated
with the event ID might be some text to give a verbose description of the event.
An error code might also be displayed and might have associated text. This identifies the error
that caused this event to be logged. All events requiring guided maintenance will have an
error code.
Status gives the type of event and its severity.
The notification type gives the priority of the event and the level of notification. The alerting
tasks in the cluster use it to determine, based on the configured rules, which alerts are
generated.
The fixed field is important. Unfixed events can cause recovery routines to not be run if locks
exist on processes as a result of an unfixed problem. Unfixed problems require action and
there should be no unfixed events in the log unless they are for a current issue.
More information might also be supplied including sense data, which will vary based on the
event.
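From the CLI, the same properties can be displayed by passing the sequence number to
lseventlog. A hedged example, where 401 is a placeholder sequence number taken from the log
listing:

   lseventlog 401

The output includes the event ID, error code, root sequence number, object information, and any
sense data for that entry.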
Actions
Highlight a log entry by left-clicking. It will backlight yellow. Then, either right-click or use the
Actions pull-down menu to display the choices. The actions are as follows:
Run Fix Procedures If the event is an error or qualifies for guided maintenance (and
therefore has an error code logged), this option will be available,
otherwise it is disabled. This is the correct action for any unfixed
problem needing maintenance. This action will start directed
maintenance procedures.
Mark as Fixed This option changes the fix status of an event to Fixed = yes. This is
useful when multiple events have been logged as a result of a problem
and the problem is being fixed using another event. It is also useful
when you are confident the cause has been resolved and there is no
need to run maintenance, but do so with caution as this action might
bypass cleanup routines. If a problem is informational and fix
procedures are not available, then use this option to remove the event
from the unfixed list.
Mark as Unfixed If an event was incorrectly marked as fixed, this will mark it as unfixed.
(A CLI equivalent for these two actions is sketched after this list.)
Filter by Date Gives a choice of start and end dates to customize the display.
Show entries within... Expands to give a choice of minutes, hours, or days previous to limit
the displayed events.
Reset Date Filter Resets the preceding Show entries within... choice.
Clear Log Use Clear Log with great caution. This will delete the event contents
from the log, which might compromise IBM Support's ability to
diagnose problems. There is no need to clear the log; there is plenty of
space to store the data and the cluster performs its own housekeeping
periodically to purge old, unimportant entries. The refresh button
simply refreshes the display.
Properties Displays the details of the event and the control fields as described
in Details on page 259.
Important: Regularly check for unfixed problems in the event log. Although high priority
events will cause alerts based on your configuration, it is possible to miss important events.
Ensure that all unfixed problems are actioned. If thin provisioning or compression is used, it
is important that any out-of-space warnings are corrected immediately, before they cause
volumes to go offline because of a lack of resources.
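The Mark as Fixed and Mark as Unfixed actions have CLI equivalents on the block storage. This
is a hedged sketch; 401 is a placeholder sequence number, and the option names should be
verified against the CLI reference for your code level:

   cherrstate -sequencenumber 401
   cherrstate -sequencenumber 401 -unfix

The first command marks the event as fixed; the second reverses it. As with the GUI action,
marking events as fixed manually bypasses the guided maintenance cleanup routines, so apply the
same caution.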
15.2.2 V7000 Unified file module event log (file)
The File event log is cluster-wide and is common to both file modules. To display the
events, go to Monitoring → Events and click the File tab, as shown in Figure 15-5.
Figure 15-5 File event
Like the block display, there are some options to make this window more meaningful. You can
sort on the event status and currency:
Current Critical/Warning Unresolved events that are status critical or warning.
Critical/Warning All events in the log that are status critical or warning.
Show All Lists all entries in the event log.
Additionally, you can filter the display to show the events generated by either or both file
modules.
Suggestion: Do not clear the event log. There is no practical gain in doing so, other than in
special cases for site security reasons.
A filter option is also provided that allows you to further reduce the display by including a
search word or phrase. Simply type the word and press Enter. A reset option clears the filter.
Actions
Unlike the block event log, there are no actions to fix or clear a specific log entry; this display is
primarily for viewing. Highlight a log entry by left-clicking. It will backlight yellow. Then, either
right-click or use the Actions pull-down menu to display the choices.
The following actions are available:
Filter by Date Gives a choice of start and end dates to customize the display.
Show entries within... Expands to give a choice of minutes, hours, or days previous to
limit the displayed events.
Reset Date Filter Resets the preceding choice.
Clear Log Clears all entries from the event log. Again, use with caution.
Properties This gives more information about the event. Also in this window is
a hyperlink to the Storwize V7000 Unified information center, which
will take you directly to a description of the error code, as seen in
Figure 15-8 on page 265.
In the topics that follow, we describe guided maintenance and the procedures therein.
15.2.3 Block
Any event that needs maintenance action has guided maintenance enabled and is set to
unfixed. To display unfixed problems, use the procedure that is detailed in 15.2.1, Storwize
V7000 storage controller event log (block) on page 258.
It is important to complete maintenance on all unfixed events in the cluster. Unfixed events
can prevent maintenance routines from running and create contention for maintenance
resources, which can prevent problems from auto recovering. Make regular checks of the
event log to ensure that all unfixed events are actioned. Of particular importance are
out-of-space conditions when compression is being used, which must be addressed before
physical storage is exhausted.
There are several ways of actioning an unfixed event:
1. Use the Run Fix Procedures action to perform guided maintenance on the event. This is
the preferred option.
2. Mark the event as fixed. This would be done if the event was informational and fix
procedures were not enabled. In this case, the event is intended to be read with another
event or is purely for your information and is marked as unfixed to gain your awareness.
Events can also be marked as fixed when there are a number of similar events and the
problem has been resolved by running maintenance on another entry. Use this option with
caution because this bypasses the guided maintenance routines, which include cleanup
and discovery processes, and this might leave a resource in a degraded or unusable state.
3. The Storwize V7000 uses a self-healing philosophy where possible. Many events, which
are known to be transient, will trigger an autorun of the maintenance procedures and will
recover. Subsequent events that clearly negate an earlier one will automatically mark the
earlier event as fixed. For example, a node offline message will remain in the log, but get
marked as fixed when a node online message for the same node is logged 10 minutes
later.
Run fix procedure
To display the block event log, select Recommended Actions from the filter pull-down menu.
If there are any events requiring guided maintenance to be run, they will be listed in the
window. The software also makes a suggestion about which event to work on first, which
can be seen in the section above the event list. This is generally based on the lowest
numbered error code. The rule of thumb is to start with the lowest number. The shortcut icon
in this window will start you straight into the guided maintenance of the recommended event.
An example of this process is shown in Figure 15-6.
Figure 15-6 Block events: Error
To manually select the event, highlight the event in the list with a left-click. Then, use the
Actions pull-down menu or right-click to see the action list and select Run This Fix
Procedure.
This will start the guided maintenance. The panels that display are unique to the error and will
vary from a single window giving information about how to resolve the error to a series of
questions requiring responses and actions to perform. For a part failure, the guided
maintenance will step through diagnosing and identifying the failed part, preparing the cluster
for its isolation, powering off components if required, and guiding you through replacement
and testing. The procedure will confirm that the original error is now cleared and put the
appropriate resources online. Finally, it will mark the event as fixed, closing the problem.
For most errors, the panels are interactive and will ask for confirmation of status or
configuration. Be careful answering these questions. Ensure that the requested state is
definitely correct before confirming. For example, for a pathing error, you might be asked if all
zoning and pathing is currently operational and as intended. If you answer yes, even though
one path has failed, then the process assumes that previously missing paths are no longer in
the configuration and will not look for them again.
In our example as shown in Figure 15-7, the guided maintenance will not take any action on
the status because it does not know the reason why it is set. Service mode would typically be
set manually by the user and therefore is set for a reason. It can also be set by a failed node,
and therefore there would be an accompanying event of higher importance. The panel that is
displayed gives details about what the state is and where to go to clear it. For this particular
error, the process leaves the event marked as unfixed, advising that it will automatically be
marked fixed, when the state clears.
Figure 15-7 Guided maintenance: 1189 error
In the example used in Figure 15-7, there are three distinctly different codes:
Event code This is the reference code (five digits) of the event and uniquely
identifies the type of event that has occurred and the verbose detail
given.
Error code An error code (four digits) is only posted if an error has occurred and
this code is used by the guided maintenance and support personnel to
repair the fault.
Node error code This code (three digits) is related to the node module itself, not the
cluster software.
15.2.4 File
The code running in the file modules does not incorporate a guided maintenance system. All
maintenance and recovery procedures are guided from the information center. Use the error
code as a starting point and search the code in the information center. The information center
gives appropriate actions for each error.
Events in the system are logged in the event log as described earlier. By viewing the event log
and using the Actions pull-down menu (or right-click) to display the properties, you can see
the event details that are shown in this example in Figure 15-8. This gives more information
about the event. Also in this window is a hyperlink to the Storwize V7000 Unified Information
Center, which will take you directly to a description of the error code.
Figure 15-8 File event: Details
In our example, this hyperlink launches the page that is shown in Figure 15-9.
Figure 15-9 Information center
Use the procedures that are outlined in the information center to resolve the problem. When
the issue has been resolved, you need to Mark Event as Resolved by using the following
procedure.
For file type events, you need to check the status for each file module by using Monitoring →
System Details → <select a File Module>. This will be the window that is used to mark the
error as being fixed, as shown in Figure 15-10.
Figure 15-10 Mark event as resolved
A specific example of resolving a volume out-of-space condition and marking the event as
resolved is covered in 15.2.5, Working with compressed volumes out of space
conditions on page 266.
15.2.5 Working with compressed volumes out of space conditions
The most important utilization metric to monitor is the usage of the physical space that is
currently used in a storage pool. This metric relates to the actual physical storage capacity
already used for storing compressed data written to the pool. It is important to ensure that
physical allocation does not go over the suggested threshold (the default is set to 80%). To
reduce the utilization, the storage pool size needs to be increased. Adding
physical capacity to the pool reduces its usage. Whenever a threshold is passed and an alert
is generated, the system points you to the procedures outlined in the information center to
resolve the problem. If a corrective action to reduce usage is not performed before the
storage pool reaches 100% utilization, Network Shared Disks (NSDs) can go offline, which
can cause the file system to go offline.
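Pool capacity can also be watched from the block CLI. The following is a minimal sketch,
assuming the lsmdiskgrp command as documented for the Storwize block storage; exact field
names can vary by code level, and the pool name shown is a placeholder:

   lsmdiskgrp
   lsmdiskgrp <pool_name_or_id>

The concise view lists each pool with its total and free capacity; the detailed view for a single
pool also shows the used and real (physical) capacity, which is the figure to compare against the
warning threshold.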
The overall steps that are required to correct an unmounted file system as a result of a
compressed volume (NSD) going offline are as follows:
1. Block Event. Run fix procedure for the NSD that went offline and follow the steps provided
by the fix procedure for the block event. The NSD offline condition must be addressed
before the file system can be mounted.
2. Start the NSD when space is available.
3. Determine if there are any stale NFS file handles.
4. Perform single node reboots to clear the stale NFS file handles.
5. Mount the file system.
Note: The steps that follow were verified during the publication of this book. However, we
suggest that the information center be used because it might contain updated steps to
resolve volume out of space conditions.
Proceed with the following detailed steps.
Run fix procedure (block)
To display the block event log, select Recommended Actions from the filter pull-down menu.
If there are any events requiring guided maintenance to be run, they will be listed in the
window. The error for a volume copy offline because of insufficient space is Error 1865. The
software also makes a recommendation about which event to work on first, which can
be seen in the section above the event list. This is generally based on the lowest numbered
error code. The rule of thumb is to start with the lowest number. The shortcut icon in this
window launches you straight into the guided maintenance of the recommended event.
An example of this is shown in Figure 15-11.
Figure 15-11 Run fix procedure for NSD offline
The guided maintenance procedure will provide guidance as to how to resolve the volume
offline condition, as seen in Figure 15-12.
Figure 15-12 Fix the problem
When the issue that caused the NSD to go offline has been resolved, you have to specify that
the problem has been fixed. This will cause the NSD to come back online and the 1865 error
will be cleared.
After the NSD is back online, the file system remains unmounted. Now you must follow the
steps that are provided by the file events.
The first step is to verify the state of the file system, as shown in Figure 15-13, which
indicates that the file system is not mounted.
Figure 15-13 File system not mounted
By viewing the file event log for the file system unmounted error and using the Actions
pull-down menu (or right-click) to display the properties, you can see the event details shown
in Figure 15-14. There is a hyperlink, Message Description, that connects to the Storwize
V7000 Unified Information Center. This link takes you directly to a description of the error
code and instructions to get the file system checked and remounted.
Figure 15-14 File system not mounted
The first page that the hyperlink will show is shown in Figure 15-15.
Figure 15-15 Event EFSSI0105C
Select the User response link to obtain details and steps that are needed to bring the file
system online again. Ensure that the steps are performed in sequence and completed. The
high-level steps are shown in Figure 15-16.
Note: The steps that follow were verified during the publication of this book. However, we
suggest that the information center be used because it might contain updated steps to
resolve a file system unmounted condition.
Figure 15-16 Checking the General Parallel File System (GPFS)
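For orientation, the checks and the final remount are typically done from the Unified CLI. The
following is only a sketch, assuming the lsfs and mountfs commands of the Storwize V7000
Unified CLI and a placeholder file system name fs0; always follow the exact sequence given in
the information center:

   lsfs
   mountfs fs0

The first command shows the file systems and whether they are mounted; the second mounts the
named file system on the file modules once the underlying NSDs are back online.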
15.3 Collect support package
The three modules of the Storwize V7000 Unified each have their own data collection
processes. Each file module collects its own log and configuration files and the storage
control enclosure also collects its own snap file. The cluster triggers these collections and
combines them into a single file so only one operation is required.
This is triggered from the cluster graphical user interface (GUI). Go to Settings →
Support → Download Logs.
This presents the download logs window, as shown in Figure 15-17.
Note: The workstation that you are using to connect to the cluster will need to have
Internet access to be able to display the information center.
Figure 15-17 Download logs
Here, you can show a listing of the packages currently stored on the cluster and you can
initiate a current data collection. If the file list is not displayed, select Show full log listing...
to see the file list. Then, select the node to view from the pull-down menu.
To collect the logs, click Download Support Package. This then asks for the type of logs
required, as seen in Figure 15-18. Choose Full logs and then click Download to obtain all
the logs. A progress window then displays. Wait for the entire task to complete and confirm it
was successful. Then, close the window.
Figure 15-18 Download logs: Full logs
Locate the resultant file in the file list. The file name includes a date-time stamp; use this to
identify the recent log. Highlight the file and either right-click or use the Action pull-down
menu, then select Download. This file can then be saved to your workstation and is ready to
be uploaded to IBM Support. Avoid renaming this file.
Caution: Each module stores only the data that it collects. The data collect file is collected
and stored by the current management node. Ensure that you are looking at the listing for
the correct node.
If compression is used, the Random Access Compression Engine (RACE) module maintains internal
diagnostic information, which is kept in memory in each of the V7000 nodes in a
compression I/O group. This information is available as part of the support package for the
block storage. The full logs contain the Standard logs plus the most recent statesave from each
node, which is fine if the system observed a failure. For all other diagnostic purposes, such as
a performance issue, the Standard logs plus new statesaves are required.
To obtain the Standard logs plus new statesaves, select Block storage only, as shown in
Figure 15-19.
Figure 15-19 Block storage only
Then, select Standard logs plus new statesaves, as shown in Figure 15-20.
Figure 15-20 Standard logs plus new statesaves
Note: The support package files remain on the file module indefinitely. It is your job to
maintain them and delete old copies. Other than possible confusion, there is no concern
about how many remain on the module. Our suggestion is to delete files only after the
problem is resolved and always keep the most recent one.
15.4 Information center
All documentation for the Storwize V7000 Unified is compiled into the online IBM Storwize
V7000 Unified Information Center. This includes installation and configuration,
troubleshooting, and administration. The information center home page is shown in
Figure 15-21 on page 273, and can be found at this website:
http://publib.boulder.ibm.com/infocenter/storwize/unified_ic/index.jsp
As seen in Figure 15-21 on page 273, the page is broken up into three main areas: the search
bar, left pane, and right pane. The left pane has three tabs at the bottom to choose the view
that is displayed: either contents (displayed by default), index, or search results. The right
pane is used to display the selected detail page. The search bar can be used to do a context
search of the entire information center by using rich search arguments.
15.4.1 Contents
This tab in the left pane displays an expandable map view of the headings and subheadings
of the information available. This view can be used to go quickly to the information and to see
all information available under a subject heading.
15.4.2 Index
In this tab, there is an alphabetical listing of keywords that can be scrolled through or
searched by using the word find box at the top of the panel.
15.4.3 Search results
The results of a search performed in the search toolbar at the top of the page are listed in this
tab. For each result the section heading matching the search argument is given along with an
abbreviated summary of the text in that section. By clicking the heading, which is a URL, the
page is displayed in the right panel. See Figure 15-21.
Figure 15-21 IBM Storwize V7000 Unified Information Center: Home
The most successful way to locate information in the information center is to construct a good
search argument for the information you want to see. Enter that into the search field in the
search bar at the top of the page. Scan the results for the section that best meets your
requirements and display each one in the right window by clicking the URL. If you want to look
at several, use your browser's functions to open the links in a new tab or window. By hovering
over the URL, a pop-up window displays showing the source of the item. Be careful of
multiple similar hits where some are marked as previous version.
15.4.4 Offline information center
For situations where internet access is not possible or poor, a downloadable version of the
information center is available. Go to the download section of the support page to locate the
link to the files. The current download can be found at the following website:
http://www-01.ibm.com/support/docview.wss?uid=ssg1S4001035
There are two modes that the offline information center can be installed in: either on your
workstation for personal use, or on a server for use by anyone. The readme file gives details
about each and how to set it up.
Note: The download is very large, over 1 GB. When installed, it requires a lot of storage
resource.
Caution: When downloaded, the information in the information center is not updated and
might not reflect the most current detail. In general, the information provided will be correct
and valid, but if in doubt, particularly with recovery procedures, consult the online system
to confirm. To maintain concurrency, the file needs to be downloaded and the offline
information center reinstalled on a regular basis.
The only way to learn what the information center offers is to use it. Spend time looking
at what information is available and how to find it. Practice with searches and locating
procedures and references. In a short time, you will be comfortable with the layout of the
information and able to quickly locate the topics you are looking for.
15.5 Call home and alerting
Although the cluster will automatically detect events and errors and log them, alerting is
needed to ensure that the user is made aware of an issue in a timely manner. The cluster
incorporates three methods of alerting. For the configuration of these services, refer to 11.8.3,
Support on page 179.
15.5.1 Simple Network Management Protocol
This agent uses the well-known Simple Network Management Protocol (SNMP), which is in
common use. Alerts are sent to the designated SNMP server. System health
reporting and error alerting are determined by the individual implementation of the SNMP
process within each enterprise and are beyond the scope of this book.
The cluster is configured with the address of the SNMP server. Additionally, the severity of
each alert type can be individually set to filter the alerts being presented. Multiple server
targets, each individually defined, can be configured.
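For the block storage component, SNMP targets can also be defined from the CLI. The following
is a hedged sketch, assuming the mksnmpserver and lssnmpserver commands of the Storwize block
CLI; the IP address and community string are placeholders, and the Unified GUI (see 11.8.3,
Support on page 179) remains the usual place to configure notification:

   mksnmpserver -ip 192.168.1.50 -community public -error on -warning on -info off
   lssnmpserver

The first command adds a trap destination with the chosen severity filtering; the second lists
the servers that are currently defined.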
15.5.2 Email
This is a versatile method of alerting, which uses a simple and well known process (email) to
send an alert. This can be to your mailbox, a group mailbox, or even a server.
The cluster is first configured to allow email alerts. The SMTP server that it will send the
emails to must be defined. Next, each recipient needs to be defined. As many email recipients
as wanted can be configured. Each definition needs the email address and can be individually
set up for the type of event to alert and the severity of each type. This gives good flexibility for
each user to receive only the alerts they are interested in.
Caution: Resist the urge to enable all levels of alerting. This will generate a high volume of
alerts because every minor change in the system will be notified. We suggest Critical only
for most users and Critical/Warning for normal monitoring.
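For the block storage component, the email alerting can also be configured from the CLI. This is
a hedged sketch, assuming the mkemailserver and mkemailuser commands of the Storwize block CLI;
the server address and recipient are placeholders, and the Unified GUI remains the preferred
configuration point:

   mkemailserver -ip 192.168.1.25 -port 25
   mkemailuser -address storage-alerts@example.com -error on -warning on -info off -usertype local

The first command defines the SMTP server; the second adds a recipient with per-severity
filtering that matches the flexibility described above.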
15.5.3 Call home
Call home is one of the IBM Support tools included to assist IBM in supporting your system.
Using the same process as the email alerting, an email is sent directly to an IBM server on
the web, which includes a brief log of the error. If your system has been properly set up in the
IBM Support system, this will generate a problem management record (PMR). A call will then
be queued to the support team for your region.
If you have not already called IBM, depending on the nature and severity of the error, IBM
might call you. This call home process gives IBM Support timely awareness of the problem
and allows them to begin analysis. It also serves as a backup for the client alerting.
Important: Although IBM might receive direct and early alerting for an error, this does not
constitute a service call. It is the client's responsibility to ensure that alerting within their
enterprise is effective and that critical events are reacted to. The client should always place a
service call, unless IBM has already called them. In most cases, the call home improves
IBM responsiveness and leads to faster analysis and recovery.
15.6 IBM Support Remote Access
There might be times during an outage that IBM Support needs to connect to the cluster. This
might be to speed up problem determination and recovery by allowing IBM to issue
commands and see responses directly. Or, there might be commands and access required
that can be performed only by IBM.
IBM uses an existing tool that is called Assist On-site (AOS), which has been used by IBM
Support for many years in support of System x and storage products. This involves a client
daemon that is loaded on the client's workstation or system and this is connected to a secure
server within IBM. IBM Support specialists can access this server internally and are able to
view the console and use the keyboard and mouse. Access within IBM is strictly controlled
and restricted. If required, when a session is established, other approved IBM specialists can
also monitor the session allowing a cooperative approach ultimately speeding up recovery.
For a Windows workstation, the client accesses a web page and registers. A small temporary
agent is downloaded, which provides the client-side daemon. Access is via the well-known ports
80 and 443 and is secure. A session-unique pass key, which is passed verbally, must be entered
to complete the connection. The client and the IBM specialist share the video,
mouse, and keyboard and the client maintains full control.
With Storwize V7000 Unified, a permanent client has been included in the software package.
This is disabled by default and must be enabled and configured by the user. There are two
modes of operation, lights on and lights out. Lights on implies that the site is manned at all
times that a connection is needed. Any connection attempt requires authorization by the client
connecting locally to the cluster GUI and approving the connection. Lights out implies the
cluster is in an unmanned (at least for some of the time) location. Connection attempts will
therefore complete without the need for local approval. Use this setting to define how you
want IBM Support to connect based on your local security policies. We suggest lights on as
providing a level of protection and awareness, provided that if needed, approval via the GUI
can be given in a timely manner.
Setup of the AOS connection is covered in Chapter 11, Implementation on page 139.
15.7 Changing parts
If a part or component of the Storwize V7000 Unified fails, your first indication is an entry in
the event log and most likely an alert, depending on your alert settings. Depending on the
nature of the failure and the impact, you might have run guided maintenance procedures
against the event (in the case of block storage). Or, you might have researched the error code
in the information center. If the procedures indicate that a part should be changed, you need
to place a call to IBM using your local procedures. If at any time during your analysis of the
problem you feel uncomfortable or the problem is pervasive, immediately call IBM for
assistance.
IBM Support will almost certainly ask you for a data collection so that they can review the
logs, interrogate other related information in the data, and research the problem against
internal problem databases. IBM Support might ask you to perform other recovery actions,
data gathering, or to execute commands. If IBM determines that a part needs replacement,
they will arrange the shipment of the part and depending on which part, give you instructions
on the replacement procedures or send a service representative to perform the actions.
The Storwize V7000 Unified is designed so that many parts can be replaced by the client.
This speeds up recovery and reduces overall cost. Almost all parts in the Storwize V7000
storage enclosure can be replaced by the client. These are typically done under the guidance
of the guided maintenance procedures, which are started against the error log. IBM terms
these parts customer replaceable units (CRUs).
If the part is not a CRU, or if more technical problem determination is required, IBM will
dispatch a service representative.
It is important to ensure that you are familiar with the safety procedures outlined in the
information center and have a good understanding of the process before changing any parts.
If at any time you feel unsure, call IBM Support for guidance.
CRU parts are typically couriered to your location. Depending on local country procedures,
this shipment will include a process to return the removed part, which you must ensure is
completed promptly.
15.8 Preparing for recovery
When a disaster strikes, time is always the most important aspect. Getting the problem
diagnosed and systems recovered is generally against the clock. Many tasks and procedures
are only ever encountered or needed at the worst time and if any of these are unprepared or
untested, this frequently leads to an outage being longer than it needed to be.
The following are a number of key items that, if prepared beforehand, greatly improve the
diagnosis and recovery time.
15.8.1 Superuser password
This is the password for the user ID superuser, which is used to log on to the Storwize V7000
storage enclosure directly. In the unified cluster, this password is seldom, if ever, used
because all functions are performed from the Unified GUI and CLI. But if certain problems
occur on the storage component, this password becomes essential and if it is not handy, will
delay service actions.
Store the password securely, but ensure that it can be quickly accessed.
15.8.2 Admin password
Although this user ID might be in regular use, the password needs to be made available at
short notice when service is required. If the user ID and password are held aside and all
users have their own IDs, a user ID and password with full authority needs to be available for
service use. Alternate methods of producing a valid ID and password need to be in place if
the authorized person is not present or available.
15.8.3 Root password
As described in the implementation section, currently the root password to the file modules is
widely known to IBM Support personnel. If this poses a risk, we suggest that the client ask
IBM to change it at installation time. If this is done, the password needs to be secured and
able to be produced on demand. Currently, a number of service and recovery procedures
require root access. If it is not available, IBM Support's ability to recover a problem will be
compromised. IBM intends to negate the need for field support to use this level of access in
the future. When this happens, this action will no longer be required.
15.8.4 Service IP addresses
An often overlooked step during implementation is to set the IP addresses for service access
to the storage nodes and the file modules. The storage node service IP addresses have
default settings, but these are unlikely to match the client's networks, needing local and direct
physical access to gain a connection. They should be set to assigned IPs in the local LAN so
they are accessible from the client's network. These addresses need to be documented and
easily accessed.
Considerable time is wasted if these addresses are not known or not in the local networks.
15.8.5 Test GUI connection to the Storwize V7000 storage enclosure
Ensure, as part of the installation, that you can connect to the Storwize V7000 storage nodes
directly, and test this regularly. You should be
able to connect to and log on to the storage system GUI over the management address and
to both the service assistant GUIs over the two service IP addresses.
Also, set up both the CLI connections and test them.
15.8.6 Assist On-site
IBM has included a tool in the software that allows simple access to the Storwize V7000
Unified. Assist On-site (AOS) needs to be configured, so ensure that this has been done and
tested before it is needed to ensure that no time is lost if there is a need for IBM Support to
connect to the cluster. This includes resolving any firewall and security issues. We encourage
you to do this configuration and testing during the implementation.
15.8.7 Backup config saves
Before any maintenance activity, do a config backup and then offload it either as files or by
doing a normal data collection. This will greatly assist support in the event of a problem and
might increase the likelihood of a successful recovery.
For the storage units, issue the CLI command:
svcconfig backup
This will gather and save a current version of the config.xml file. Then, do a data collect as
normal, offload, and save the file safely. It is wise to perform this activity at regular intervals
(for example, monthly) as a matter of housekeeping.
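A hedged sketch of this housekeeping task; the file names given are those produced on the code
levels current at the time of writing:

   svcconfig backup

This regenerates the configuration backup (svc.config.backup.xml, together with matching .sh and
.log files) on the configuration node. The files are then picked up by the normal Download
Support Package collection described in 15.3, Collect support package, which is the simplest way
to offload them.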
15.9 Software
All the software that is used in the Storwize V7000 Unified is bundled into a single package.
Each bundle is uniquely named using the IBM standard for software name and version
control. The bundle is identified by its Version.Release.Modification.Fix (VRMF) number (for
example, GA release was 1.4.0.0). These numbers increase as new code is released. The
meaning of each field is as follows:
Version Only increments for a major change to the functions and features of
the software product.
Release Increments as new functions or features are released.
Modification Increments when a function or feature is modified.
Fix This number increments for each group of program temporary fixes
(PTFs) that are released. Note that PTFs do not change function; they only fix
code problems.
15.9.1 Software package
There are five different software packages that are used with the Storwize V7000 Unified:
Full software installation DVD. This is only needed in the case of a full rebuild of a cluster.
It is a bootable DVD, which will reload all the file module software, management software,
update hardware (HW) basic input/output system (BIOS) and utilities if needed, and
update Storwize V7000 Software if required. This process initializes the system,
destroying previous configurations. This is an ISO image and can be obtained only from
IBM Support for a specific situation.
Software update. This file is used to concurrently upgrade the file module software,
management software, update HW BIOS and utilities if required and update Storwize
V7000 Software if required. This file can be downloaded from IBM and is installed by using
the upgrade tools in the Storwize V7000 Unified cluster.
Storwize V7000 control enclosure software update. This update file is included in the
Storwize V7000 Unified package and is managed and installed by the File Module
management software. It will be automatically updated if required. Do not attempt to
upgrade the storage control enclosure manually unless directed to do so by IBM Support.
File Module HW BIOS and Utilities. This firmware is managed by the Storwize V7000
Unified software and will be automatically upgraded if required. Do not attempt to manually
upgrade this firmware.
Install test utility. This small package is loaded in the same way as the normal software
update. It installs a test utility that then checks for known issues or conditions that can
potentially cause the software apply to fail. It is important to run this utility first.
Download
If your Storwize V7000 Unified has a connection to the Internet and sufficient bandwidth is
available to perform the download, log on to the management GUI and, from the upgrade
software option, download the upgrade file. If connectivity is not available or direct
download is not wanted, then download the files from the web as described here.
Web download
All software packages are downloaded from IBM Fix Central. To access this site, you need an
IBM ID. If you do not already have one, there is a link on the front page to register.
Select two packages from the list of available software. First, select and download the latest
UpgradeTestUtility. This is a small file and contains the test utility that is installed first to check
for known problems.
Then, select and download the latest Storwize V7000 Unified software bundle. This includes
the main upgrade file. Also included is the license agreement document, the release notes for
this release, and a small file with the MD5 checksum value. It is important to read the release
notes carefully. This file constitutes the read me and special instructions for this release.
To access the latest software, see the following website:
http://www.ibm.com/support/fixcentral/
In the Find product tab of Fix Central, type the following product into the Product selector
field: IBM Storwize V7000 Unified
Then, select your Release and Platform. Then, click Continue.
This will show a list of downloads for the Storwize V7000 Unified. Click the latest version to
see the downloadable files page. Scroll down and review as required the information that is
given on this page, which includes details and links for compatibility tables, interoperability,
and restrictions. At the bottom are the package download links. Choose the applicable line and
select the download option, HTTP (or FTP if available).
Read and accept the terms and conditions in the pop-up window. This action then takes you
to the download page, as shown in Figure 15-22.
Warning: Do not use the DVD to upgrade an operational system. The DVD is only used to
install and initialize a new or recovering system. Installing the software from the DVD will
destroy the existing configuration.
Figure 15-22 Software download
Ensure the package that you need is selected and click Continue. Follow the prompts to save
the package.
15.9.2 Software upgrade
There are two methods of installing software on the Storwize V7000 Unified. The normal
process is to perform an upgrade, which is applied concurrently. This involves a stepped
process of stopping and upgrading one file module, then bringing it back online. After a
settling time, the other file module is done in the same way. If required, the control enclosure
is also upgraded. This involves stopping and upgrading one node, waiting 30 minutes, then
doing the other node. All this is concurrent with the Storwize V7000 Unified operation and
client access is not interrupted.
File system access will switch between the modules and host access will continue
uninterrupted if the hosts have network connectivity to both file modules. For block access
over the SAN, all hosts must have operational multipathing and at least one path to each
node canister.
The second method involves restoring all software from DVD. This is disruptive and by design
will destroy the Storwize V7000 Unified configuration and any data stored on the system will
be lost. Unless this is a new installation or rebuild where no data exists on the system, do not
perform this procedure without direct IBM Support direction.
Important: Always read the release notes before continuing.
Important: Ensure that all LAN-connected file system hosts have confirmed access to
both file modules and all SAN-attached Fibre Channel hosts have paths to both node
canisters.
Install new software from DVD
This procedure is described in the installation procedures 11.5, Install the latest software on
page 152 and should be used only on a new system or under the direction of support during a
recovery action.
Concurrent software upgrade
The following procedures are run from the Upgrade Software option on the management GUI
of the Storwize V7000 Unified. Make the following selection:
Settings → General → Upgrade Software
You get a window as shown in Figure 15-23.
Figure 15-23 Upgrade software: Home
If your cluster has good web access, you can click Check for updates to connect to IBM and
download the latest upgrade file to the cluster. This file is over 1 GB and might take some time
to download by using this method.
Alternatively, using the procedure detailed above, connect to IBM Fix Central and download
the upgrade file to your workstation. You then must upload the file to the cluster. Click Upload
File on the page and this displays a pop-up window, as shown in Figure 15-24.
Figure 15-24 Upload file
Browse to the upgrade file that you downloaded on your workstation, then click OK.
Repeat the upload step for the test utility that you also downloaded with the code file.
The new files are opened and interrogated by the cluster code and if acceptable, are
displayed in the list. First, run the upgrade test, always using the latest version available.
Select the file. Then, right-click to select Install, or use the Actions pull-down menu, as
shown in Figure 15-25.
Hint: When the upgrade process has completed (success or fail), the file is deleted. For
this reason, you might find it beneficial to download the file to your workstation, then upload
it to the cluster, rather than use the Check for updates icon. This way, you always have a
copy of the file and need to upload it again only in the event of a failure. This also saves
time and bandwidth if you have multiple clusters. The likelihood of needing the file again on
the same machine after the upgrade has been successfully applied is very low, but this
method does offer some peace of mind.
Figure 15-25 Upgrade software: Install test utility
Wait for the utility to run and when complete, review the result in the window before closing it.
An example of a successful run is shown in Figure 15-26.
Figure 15-26 Upgrade software: Test results
Hint: You must select the file before performing an action, which is shown as highlighted
yellow. Then, the actions are available.
If there were any problems reported, they must be resolved before continuing with the
software upgrade.
When you confirm that the test utility has shown no problems, the next step is to click the
upgrade file to select it. Then, select Install, as shown in Figure 15-27.
Figure 15-27 Upgrade software: Install
A pop-up progress window displays. Wait for the task to complete, then close. This does not
indicate that the upgrade is applied, but that it has been successfully started. The upgrade
software window now shows a status window with three progress bars. The top one is the
overall progress, and the other two are for the file modules. If the new upgrade includes a
code level that is higher than that currently on the storage control enclosure, it automatically
performs an upgrade of the control enclosure first. This step takes about 90 minutes and can
be monitored by the overall progress bar. It is complete when the bar is at 33%.
When the storage is complete, the process moves on to the first file module, as shown in
Figure 15-28.
Important: While code upgrades are being applied, all data access will continue. Block
access continues to be provided over the SAN if hosts have multipathing correctly
configured with active paths to both nodes, and if hosts with file access have IP
connectivity to both file modules. It will support a failover of the IP addresses between the
file modules.
However, do not make any configuration changes to the cluster during the software
upgrade. Many tools and functions will be temporarily disabled by the apply process to
prevent changes from occurring.
Figure 15-28 Upgrade software: Progress
The file modules each take about 30 minutes to upgrade. When the first one is complete, the
process moves on and upgrades the second one. At this time, the GUI connection is lost
because the active management module is now taken down and swapped over. You
must log back on to continue monitoring the progress.
When all three modules are successfully upgraded, the process to upgrade cluster common
modules and processes begins. This takes about another 30 minutes. The upgrade is
complete when this process finishes, as shown in Figure 15-29 on page 286.
Figure 15-29 Upgrade software: Complete
Your software is now upgraded.
Chapter 16. Real-time Compression in the
IBM Storwize V7000 Unified
In this chapter, we discuss Real-time Compression on the Storwize V7000 Unified.
16.1 General considerations: Compression and block volume
compression use cases
Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859,
describes the base technology for Real-time Compression in great detail and describes the
compression use cases for compressed block volumes. Support for Real-time Compression
within the Storwize V7000 Unified started with Code Release 1.4; it uses the Real-time
Compression functionality in the Storwize V7000 back end and applies it to block volumes
that are used for file systems by the Storwize V7000 Unified. The following sections cover specific
considerations of compression in the file system context of the IBM Storwize V7000 Unified.
16.2 Compressed file system pool configurations
Using this feature to compress volumes together with the NAS feature of the Storwize V7000
Unified provides some benefits over using compression with an external host that creates a
file system on a compressed volume:
The policy language capabilities of the Storwize V7000 Unified allow a flexible way to
create compression rules that are based on file types and the path where files are placed.
The file systems graphical user interface (GUI) panel allows us to monitor the capacity of
file systems, file system pools, and the related MDiskgroups in a single view.
The creation of file systems by using the Storwize V7000 Unified GUI will ensure that best
practices are followed.
Compression in the Storwize V7000 Unified Storage system is defined per volume. Not all the
degrees of freedom resulting from this implementation should be used. This section
describes some example configurations that are applicable to different use cases.
The file system pools should be regarded as the smallest entity where compression is
enabled. So although compression is implemented at a volume level, we refer to compressed
pools or compressed file systems meaning that all data volumes of a file system, or all data
volumes of a file system pool are compressed.
16.2.1 Selectively compressed file system with two pools
This configuration is considered as best practice to create file systems that make use of
compression. The creation of two pools together with a placement policy allows placing only
compressible files to a compressed pool. This approach provides high compression savings
with best performance. This configuration is shown in Figure 16-1.
Figure 16-1 Selective compressed configuration with two file system pools
This file system consists of two file system pools. The system file system pool is based on
uncompressed volumes. The compressed pool consists of compressed volumes only. A
placement policy decides where files are placed during creation.
The placement rules employed will direct files that are known to not compress well to the
system file system pool, whereas all other files are placed in the compressed file system pool.
The placement rules are only evaluated while a file is created. Updating a file will not change
the placement. A change in a placement rule will also not work retrospectively. To correct a
faulty placement, a migration rule can be run. In 16.5, Managing compressed file systems
on page 322, some examples of migration rules are described.
Pools: File system storage pools are different from V7000 storage pools.
The Storwize V7000 storage pools are synonymous with MDiskgroups. To avoid confusion,
the following sections use the term MDiskgroup rather than storage pool.
The file system storage pools are part of a file system. The initial file system pool of a file
system is called system. Only the system pool contains the metadata of the file system, but
it can also contain data. All other file system pools contain data only. When a file volume is
created, the usage type of the volume is either data only, metadata only, or data and
metadata. Compressed volumes should be data-only volumes.
16.2.2 Configuring a selective compressed file system
This section describes the steps to create a selective compressed file system by using the
GUI compressed preset:
1. Open the Files panel and select File Systems, as shown in Figure 16-2.
Figure 16-2 Creating a File System
2. Click on New File System at the top of the GUI and select the Compressed preset as
shown in Figure 16-3 on page 291.
Figure 16-3 Compressed File System preset
3. The Restricted Support for Compression Feature pop-up as shown in Figure 16-4 will
ask for a code to enable compression. This code will be provided by
[email protected].
Figure 16-4 Compression function code
4. Complete the screen information as shown in Figure 16-5 on page 292 with the following
information:
a. File system name: <name>
b. Owner: root
c. Set the pool sizes. There is a minimum size requirement for system plus compressed
pools of 100 GB, which is enforced.
d. The system pool is used for storing the metadata and uncompressed files. The
compressed pool is used for storing the compressed files.
Figure 16-5 Compressed file system preset
We suggest that the file system pools with compressed and uncompressed files be located in
separate storage pools (MDiskgroups); however, this is not enforced because you can select the
same storage pool for both file system pools. Also, for improved performance, the system pool
that contains metadata should reside on a separate MDiskgroup; in systems that contain
solid-state disks (SSDs), the MDiskgroups that contain the SSDs can be used to provide
metadata-only volumes.
When only one MDiskgroup is chosen for the system file system pool, the metadata
replication feature is not configured. If sufficient MDiskgroups are available, it is suggested to
define two MDiskgroups for the system pool and enable metadata replication.
5. Complete the file system creation. Finally, the file system creation process is triggered by
clicking OK and confirming the summary message. The file system creation dialog will
provide the command-line interface (CLI) commands, which are called while the file
system is created.
The Custom preset, as shown in Figure 16-6 on page 293, can also be used to configure a
selectively compressed file system, but you have to configure manually some items that are
preconfigured within the Compressed preset, such as the following:
Note: Use the following steps ONLY if you wish to use a Custom preset to create a
compressed file system.
Figure 16-6 Create a file system using the Custom preset
1. With the Compressed preset, a file system pool named compressed is automatically added and
enabled for compression. When you add a second pool by using the Custom preset, it is named
silver by default; this name can be changed, and you also have to select it to be
compressed.
2. You have to enable the use of separate disks for data and metadata for the system pool
usage with the Custom preset and specify the block size which has a default of 256 KB
but can be set to 1 MB or 4 MB. The default GPFS filesystem block size is set to 256 KB
with the Compressed preset.
3. A placement policy is included with the Compressed preset whereas this needs to be
added manually with a Custom preset. The placement policy lies at the heart of how the
Storwize V7000 Unified implements compression. The default placement policy contains a
list of file extensions that are known to be highly compressed by nature and should not be
placed in a compressed pool as the data is already compressed and will not yield any
savings. The Storwize V7000 Unified uses this method to automatically place files that are
compressible into the compressed pool and files that are not compressible into the system
pool; the placement is performed when the file is written to disk. If you use a Custom
preset, you must define a placement policy to exclude uncompressible files from the
compressed pool. Example 16-1 lists file types which, at the time of writing, are known to
not compress well because they either contain compressed or encrypted data.
Example 16-1 Extensions of file types that are known to not compress well
*.7z, *.7z.001, *.7z.002, *.7z.003, *.7zip, *.a00, *.a01, *.a02, *.a03, *.a04,
*.a05, *.ace, *.arj, *.bkf, *.bz2, *.c00, *.c01, *.c02, *.c03, *.cab, *.cbz,
*.cpgz, *.gz, *.nbh, *.r00, *.r01, *.r02, *.r03, *.r04, *.r05, *.r06, *.r07,
*.r08, *.r09, *.r10, *.rar, *.sisx, *.sit, *.sitx, *.tar.gz, *.tgz, *.wba, *.z01,
*.z02, *.z03, *.z04, *.z05, *.zip, *.zix, *.aac, *.cda, *.dvf, *.flac, *.gp5,
*.gpx, *.logic, *.m4a, *.m4b, *.m4p, *.mp3, *.mts, *.ogg, *.wma, *.wv, *.bin,
*.img, *.iso, *.docm, *.pps, *.pptx, *.acsm, *.menc, *.emz, *.gif, *.jpeg, *.jpg,
*.png, *.htm, *.swf, *.application, *.exe, *.ipa, *.part1.exe, *.crw, *.cso,
*.mdi, *.odg, *.rpm, *.dcr, *.jad, *.pak, *.rem, *.3g2, *.3gp, *.asx, *.flv,
*.m2t, *.m2ts, *.m4v, *.mkv, *.mov, *.mp4, *.mpg, *.tod, *.ts, *.vob, *.wmv,
*.hqx, *.docx, *.ppt, *.pptm, *.thmx, *.djvu, *.dt2, *.mrw, *.wbmp, *.abr, *.ai,
*.icon, *.ofx, *.pzl, *.tif, *.u3d, *.msi, *.xlsm, *.scr, *.wav, *.idx, *.abw,
*.azw, *.contact, *.dot, *.dotm, *.dotx, *.epub, *.keynote, *.mobi, *.mswmm,
*.odt, *.one, *.otf, *.pages, *.pdf, *.ppsx, *.prproj, *.pwi, *.onepkg, *.potx,
*.tiff, *.!ut, *.atom, *.bc!, *.opml, *.torrent, *.xhtml, *.jar, *.xlsx, *.fnt,
*.sc2replay, *.1st, *.air, *.apk, *.cbr, *.daa, *.isz, *.m3u8, *.rmvb, *.sxw,
*.tga, *.uax, *.crx, *.safariextz, *.xpi, *.theme, *.themepack, *.3dr, *.dic,
*.dlc, *.lng, *.ncs, *.pcd, *.pmd, *.rss, *.sng, *.svp, *.swp, *.thm, *.uif,
*.upg, *.avi, *.fla, *.pcm, *.bbb, *.bik, *.nba, *.nbu, *.nco, *.wbcat, *.dao,
*.dmg, *.tao, *.toast, *.pub, *.fpx, *.prg, *.cpt, *.eml, *.nvram, *.vmsd, *.vmxf,
*.vswp
The list of extensions in this example can be used to define exclusion rules for a placement
policy. The GUI provides an editor to define the exclusion rules for file types that should not be
compressed, as shown in Figure 16-7 for the placement policy. The GUI placement editor
defines the placement rules in a non-case-sensitive way. The file attribute Extension has to
be chosen, with the operator NOT IN, for the exclusion placement policy. The list of extensions
shown in this section can be added to the GUI dialog by using cut and paste. The GUI entry
field accepts extensions with various delimiters and formats them automatically. See
Figure 16-7. A sketch of equivalent rules in policy-rule text is shown after this list.
Figure 16-7 Defining an exclusion list of files that should not be placed on the compressed pool
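In policy-language terms, the exclusion policy that the GUI generates is structured roughly as
shown in the following sketch. This is an abbreviated illustration only: the extension list is
shortened to three entries, and the complete rule contains the full list from Example 16-1. The
rule and pool names match the ones used elsewhere in this chapter:

RULE 'generatedPlacementRule1' SET POOL 'compressed'
     WHERE NOT (LOWER(NAME) LIKE '%.zip' OR
                LOWER(NAME) LIKE '%.mp3' OR
                LOWER(NAME) LIKE '%.jpg' /* ... remaining extensions from Example 16-1 ... */);
RULE 'default' SET POOL 'system';

Files whose names do not match any of the excluded extensions satisfy the NOT expression
and are placed in the compressed pool; files with excluded extensions fall through to the
default rule and are placed in the system pool.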
4. Adapt the placement policy to the individual configuration
The list of file extensions that should be excluded from compression might need adapting
for different reasons:
a. Some file types are not compressible, but they are not listed in the compression
exclusion list.
If you know that your environment contains files that are not compressible, because they
are already compressed or encrypted, they should be part of the compression exclusion
list. The Edit dialog of the Files → File System panel for the file system in question can be
used to add the missing file types to the exclusion list. Adding the missing file types
prevents the system from trying to compress file types that cannot be compressed
further. Compressing files that are already compressed can also affect performance.
b. Some files are compressible, but are listed in the compression exclusion list.
Some extensions might be used by different applications with different content, so there
might be compressible files with an extension that is part of the compression exclusion
list. In this case, the extension in question can be removed from the exclusion list by using
the Edit dialog of the Files → File System panel for the file system in question.
Saving the adapted placement policy does not move existing files to other pools. Files that
were previously misplaced can be moved by a migration policy.
16.2.3 Compression rules by file set
Placement rules can be applied to a whole file system or to a file set. A file set is a subtree of
a file system namespace that in many respects behaves like a separate file system. File sets
provide the ability to partition a file system to allow administrative operations at a finer
granularity than the entire file system.
Placement rules per file set allow different rules for parts of the file system. Shares that are
created in the directory tree below these file sets are subject to these special rules.
In the following, more complex scenario, we must create placement rules by directly entering
placement rule text. The file sets that are referenced by fileset-specific placement rules can be
either dependent or independent.
Example scenario
A placement rule for the whole file system, as in the selectively compressed configuration,
ensures that uncompressible files are excluded from the compressed pool.
The shares in the file set uncompressed do not use compression at all.
The shares in the file set special_application are used by an application that writes only two
types of files: the *.cfg files are control files and must be placed in the system pool, and the
*.img files are highly compressible and must be placed in the compressed pool. The
compressibility of these file types differs from what the built-in file placement policy in the
Compressed preset assumes, so special rules are needed to place the .cfg files in the system
pool and the .img files in the compressed pool.
Steps to create the example scenario
Take the following steps to create the example scenario:
1. Create a compressed file system with two pools
Use the configuration that is described in 16.2.1, "Selectively compressed file system with
two pools" on page 288 to create a file system and the default placement policy.
2. Create two file sets
Go to Files → File Sets and start the New Fileset dialog by clicking + New File Set in the
file sets grid. A dialog, as shown in Figure 16-8 on page 297, allows us to create file sets.
Use the Junction path browse function, choose the file system mount point, and provide a
subdirectory name, which is created automatically. The owner should be a user that
will own the path where the file set is linked in the file system tree. For Common Internet
File System (CIFS) shares, this can be the domain administrator. The file set name must
be unique per file system.
Placement rules can reference both dependent and independent file sets. In this example,
a dependent file set was chosen. See Figure 16-8.
Figure 16-8 Create file sets in the Files File Sets New File Set dialog
3. Define placement rules
Use the Edit action on the file system in the Files → File System panel to open the file
system edit dialog, which allows you to edit the placement rules.
The custom rules that we want to add cannot be configured by using the placement pool
wizard. The Policy Text editor in the GUI allows you to define the additional rules. The
three additional rules have to be added before the default placement rule text, which is
already in place because we defined it in step 1. After custom rules have been created by
using the policy text editor in the GUI, the placement policy editor cannot be used
anymore. In our case, we refer to the system pool as a target pool for a special rule, which
is not supported in the placement policy editor. However, we created the default placement
policy by using the graphic editor first, which is useful because it processes the list of known
file type extensions that do not compress well into a correct placement policy string. See
Example 16-2.
Example 16-2 Additional rules that have to be added before the default compression exclusion rule
RULE 'uncompressed' SET POOL 'system' FOR FILESET ('uncompressed');
RULE 'special_application_uncompressed' SET POOL 'system'
FOR FILESET('special_application')
WHERE NAME LIKE '%.cfg';
RULE 'special_application_compressed' SET POOL 'compressed'
FOR FILESET('special_application')
WHERE NAME LIKE '%.img';
Figure 16-9 shows the GUI Edit dialog that can be used to define more rules for the newly
created file sets.
Figure 16-9 Edit dialog for policy text
The rules are processed in the order that they are defined; therefore, order is important. The
rules that we have defined now will work in the following way:
All files that are placed in the uncompressed file set will be placed in the system pool.
All files with the extension .cfg that are written to the special_application file set will be
written to the system pool.
All files with the extension .img that are written to the special_application file set will be
written to the compressed pool.
All files that belong to the exclusion list, as specified in generatedPlacementRule1, will be
written to the system pool. This means that .img files that are not written to the
special_application file set will not be compressed.
All other files will be written to the compressed pool, because of the NOT expression in
generatedPlacementRule1.
The final default placement rule for the system pool will never be evaluated, because all
remaining files are already covered by generatedPlacementRule1.
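Putting it together, the resulting policy for this scenario is evaluated from top to bottom
roughly as in the following condensed sketch (the generated exclusion rule is abbreviated to
a single extension here; in reality it contains the full list from Example 16-1):

RULE 'uncompressed' SET POOL 'system' FOR FILESET ('uncompressed');
RULE 'special_application_uncompressed' SET POOL 'system'
     FOR FILESET('special_application') WHERE NAME LIKE '%.cfg';
RULE 'special_application_compressed' SET POOL 'compressed'
     FOR FILESET('special_application') WHERE NAME LIKE '%.img';
RULE 'generatedPlacementRule1' SET POOL 'compressed'
     WHERE NOT (LOWER(NAME) LIKE '%.zip' /* ... full exclusion list ... */);
RULE 'default' SET POOL 'system';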
If we want to start over with a new set of exclusion rules and need the placement policy editor
back, we can apply a default placement policy to the system pool in the Policy Text editor, as in
Example 16-3; the placement policy editor then becomes available again.
Example 16-3 Default placement policy
RULE 'default' SET POOL 'system';
16.2.4 Use case for compressing and uncompressing existing data
This section goes through the detailed steps to migrate data to and from a compressed file
system. The procedures guide you through adding a compressed file system pool to an
existing uncompressed file system and compressing the existing data concurrently. The
reverse procedure is also shown to uncompress existing data; this latter procedure might be
needed to disable compression. Compression is enabled when the first NSD/volume is
created and is disabled only when the last NSD/volume is uncompressed or deleted.
Compressing an existing uncompressed file system
When testing compression and its performance effect on a data set or workload, it is
sometimes a good idea to create a baseline first with a non-compressed file system. You
might also already have an existing file system with uncompressed data that needs to be
compressed.
When a file system that is not using compression is to be converted to one that uses
compression, several steps are needed after the data in the uncompressed file system has
been analyzed.
Adding a compressed pool using the GUI
1. Figure 16-10 shows a file system (FS1) that is uncompressed and has just one file system
pool, called system. File system FS1 has a default placement rule that places all data in
the system pool.
Note: A high number of file set-specific placement rules can affect performance. Although
it is possible to create thousands of file sets per file system, the management of separate
placement rules for each of them can become problematic. The first placement rule that
matches the file that is written is applied, which means that all rules must be traversed for
files that match only the default rule.
Figure 16-10 Uncompressed file system
2. Right-click on file system FS1 and select Edit File System as shown in Figure 16-11.
Figure 16-11 Edit file system
3. Select the + symbol as indicated in Figure 16-12 on page 301 to add another file system
pool.
Figure 16-12 Add a file system pool
4. Enter the desired file system pool name, such as compressed, select a storage pool,
specify the size for the pool, and select Compressed, as shown in Figure 16-13.
Figure 16-13 Add a compressed file system pool
5. A window opens asking for a code to enable the compression feature, as shown in
Figure 16-14 on page 302. The code must be obtained via [email protected]. After
entering the code, select OK to create the new compressed pool.
Figure 16-14 Compression feature
6. At this point, all data still remains in the system pool. A migration policy needs to run to
move the data that is compressible to the compressed pool. After the migration
completes, the migration policy needs to be disabled, and then a placement policy needs
to be added so that new files are directed automatically to the proper pool, depending on
whether they are compressible. The migration policy runs once and moves the
compressible data to the compressed pool. The migration can be performed by using the
GUI or the CLI. When using the GUI, the migration starts immediately when OK is
selected. The migration is concurrent, but it might cause a temporary performance impact,
in which case it might be preferable to use the CLI, where you can control when the
migration starts, for example during periods of low I/O activity at night or on a weekend.
The following substeps explain how to use the GUI or the CLI to migrate data and how to
add a placement policy to replace the current default policy.
Migrating data from the system pool to a compressed pool using the GUI
1. Right-click the file system, select Edit File System, and select the Migration Policy
tab, as shown in Figure 16-15 on page 303. Check Enable file migration. A threshold that
starts the migration at 1% and stops it at 0% indicates that the migration will move the entire
analyzed contents from the system pool to the compressed pool. The threshold can be
used to control the amount of data that is migrated, which can be important if you want to
move only a limited amount of data each time; see the sketch that follows Figure 16-15.
Select the exclusion list by the Extension attribute, using the operator NOT IN. Copy the
list of extensions as shown in Example 16-1 on page 293 and paste it into the Value
section. You can apply the migration to the whole file system or to individual file sets.
Figure 16-15 Migration Tab.
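As a rough illustration of how the thresholds could be used to limit a migration run, a
migration rule of the following form (hypothetical threshold values, abbreviated exclusion list)
starts moving data only when the system pool is more than 80% full and stops again when
its usage drops to 60%:

RULE 'migrate_partial' MIGRATE FROM POOL 'system' THRESHOLD(80,60)
     TO POOL 'compressed'
     WHERE NOT (LOWER(NAME) LIKE '%.zip' /* ... exclusion list from Example 16-1 ... */);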
2. After you select OK, the migration starts immediately. You can get better control of when
the migration starts by using the CLI. GUI control returns after the migration is complete.
The policy rule indicates that all files whose extensions are not in the exclusion list will be
moved from the system pool to the file system pool called compressed.
3. In the GUI, go to Monitoring → Capacity → File System Pools, as shown in Figure 16-16.
Verify the compression rate and the amount of data that was migrated between the pools.
Figure 16-16 Monitoring capacity
4. Immediately after the migration completes, edit the file system again and disable the
migration by deselecting Enable file migration on the Migration Policy tab. Select the
Placement Policy tab, as shown in Figure 16-17 on page 304, and select Enable file
placement. Select Add exclusion rule, which automatically adds the placement rule
that places any new files in either the compressed or the uncompressed file system pool.
Select OK to complete.
Figure 16-17 Placement policy
Migrating data from the system pool to a compressed pool using the CLI
As previously mentioned, a migration that is set up in the GUI starts as soon as you select
OK, which might not always be desired. You can use the GUI and the CLI in combination to
accomplish the migration so that it starts only when the runpolicy command is used.
The GUI policy editor can be used to build the policy text for the mkpolicy command,
because it nicely formats the long list of extensions to exclude from migration into the correct
policy language statements, as shown in Figure 16-18 on page 305. Alternatively, the CLI can
be used to create such a policy (which is cumbersome). Either way, creating the policy with
mkpolicy makes it possible to run it with the runpolicy command, which executes the
migration policy only once, at the time the command is started.
The entire command needs to be on one line, and there are two file extensions (!ut and bc!)
that need to be modified. The ! is a special UNIX shell character and must be escaped, so
these extensions become \!ut and bc\! in the command string. You can use an editor such as
Notepad to build the CLI command in combination with the policy editor, as shown below.
Using the Migration Policy tab, create the policy:
Note: Cancel out of this dialog after the migration policy text is created. Do not select OK
to run the migration from the GUI; instead, use the CLI runpolicy command.
Figure 16-18 Migration policy value
1. Select the Policy Text tab and then Edit, as shown in Figure 16-19 on page 306. Select
and copy the policy language text and use it in an editor such as Notepad to format the CLI
command. Make sure that you cancel out of the GUI. You can also use the text shown in
Example 16-4 for the mkpolicy command instead and edit it to suit your requirements.
Figure 16-19 Policy text
2. Figure 16-20 shows a Notepad sample of building the command. Note that the command
has to be on a single line; also note the proper use of double quotation marks.
Figure 16-20 Notepad CLI command
3. Run the mkpolicy CLI command as shown in Example 16-4. The name of the policy being
created is migratetocompress, and the policy migrates from the system pool to the
compressed pool all files that do not have one of the listed extensions. Because the
threshold for the migration is set low (1,0), essentially all matching data is moved when the
policy runs. You can modify the start and stop threshold percentages to limit the amount of
data that is migrated, which can be important if there is a large amount of data to be
transferred within a certain time limit. The mkpolicy command does not initiate a
migration; the migration starts only when the runpolicy command is used.
Example 16-4 CLI mkpolicy command
[7803088.ibm]$
[7803088.ibm]$ mkpolicy migratetocompress -R "RULE 'migratetocompress' migrate from pool 'system' threshold(1,0)
to pool 'compressed' where NOT (NOT (LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.7z.001' OR LOWER(NAME) LIKE
'%.7z.002' OR LOWER(NAME) LIKE '%.7z.003' OR LOWER(NAME) LIKE '%.7zip' OR LOWER(NAME) LIKE '%.a00' OR LOWER(NAME)
LIKE '%.a01' OR LOWER(NAME) LIKE '%.a02' OR LOWER(NAME) LIKE '%.a03' OR LOWER(NAME) LIKE '%.a04' OR LOWER(NAME)
LIKE '%.a05' OR LOWER(NAME) LIKE '%.ace' OR LOWER(NAME) LIKE '%.arj' OR LOWER(NAME) LIKE '%.bkf' OR LOWER(NAME)
LIKE '%.bz2' OR LOWER(NAME) LIKE '%.c00' OR LOWER(NAME) LIKE '%.c01' OR LOWER(NAME) LIKE '%.c02' OR LOWER(NAME)
LIKE '%.c03' OR LOWER(NAME) LIKE '%.cab' OR LOWER(NAME) LIKE '%.cbz' OR LOWER(NAME) LIKE '%.cpgz' OR LOWER(NAME)
LIKE '%.gz' OR LOWER(NAME) LIKE '%.nbh' OR LOWER(NAME) LIKE '%.r00' OR LOWER(NAME) LIKE '%.r01' OR LOWER(NAME)
LIKE '%.r02' OR LOWER(NAME) LIKE '%.r03' OR LOWER(NAME) LIKE '%.r04' OR LOWER(NAME) LIKE '%.r05' OR LOWER(NAME)
LIKE '%.r06' OR LOWER(NAME) LIKE '%.r07' OR LOWER(NAME) LIKE '%.r08' OR LOWER(NAME) LIKE '%.r09' OR LOWER(NAME)
LIKE '%.r10' OR LOWER(NAME) LIKE '%.rar' OR LOWER(NAME) LIKE '%.sisx' OR LOWER(NAME) LIKE '%.sit' OR LOWER(NAME)
LIKE '%.sitx' OR LOWER(NAME) LIKE '%.tar.gz' OR LOWER(NAME) LIKE '%.tgz' OR LOWER(NAME) LIKE '%.wba' OR
LOWER(NAME) LIKE '%.z01' OR LOWER(NAME) LIKE '%.z02' OR LOWER(NAME) LIKE '%.z03' OR LOWER(NAME) LIKE '%.z04' OR
LOWER(NAME) LIKE '%.z05' OR LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.zix' OR LOWER(NAME) LIKE '%.aac' OR
LOWER(NAME) LIKE '%.cda' OR LOWER(NAME) LIKE '%.dvf' OR LOWER(NAME) LIKE '%.flac' OR LOWER(NAME) LIKE '%.gp5' OR
LOWER(NAME) LIKE '%.gpx' OR LOWER(NAME) LIKE '%.logic' OR LOWER(NAME) LIKE '%.m4a' OR LOWER(NAME) LIKE '%.m4b' OR
LOWER(NAME) LIKE '%.m4p' OR LOWER(NAME) LIKE '%.mp3' OR LOWER(NAME) LIKE '%.mts' OR LOWER(NAME) LIKE '%.ogg' OR
LOWER(NAME) LIKE '%.wma' OR LOWER(NAME) LIKE '%.wv' OR LOWER(NAME) LIKE '%.bin' OR LOWER(NAME) LIKE '%.img' OR
LOWER(NAME) LIKE '%.iso' OR LOWER(NAME) LIKE '%.docm' OR LOWER(NAME) LIKE '%.pps' OR LOWER(NAME) LIKE '%.pptx' OR
LOWER(NAME) LIKE '%.acsm' OR LOWER(NAME) LIKE '%.menc' OR LOWER(NAME) LIKE '%.emz' OR LOWER(NAME) LIKE '%.gif' OR
LOWER(NAME) LIKE '%.jpeg' OR LOWER(NAME) LIKE '%.jpg' OR LOWER(NAME) LIKE '%.png' OR LOWER(NAME) LIKE '%.htm' OR
LOWER(NAME) LIKE '%.swf' OR LOWER(NAME) LIKE '%.application' OR LOWER(NAME) LIKE '%.exe' OR LOWER(NAME) LIKE
'%.ipa' OR LOWER(NAME) LIKE '%.part1.exe' OR LOWER(NAME) LIKE '%.crw' OR LOWER(NAME) LIKE '%.cso' OR LOWER(NAME)
LIKE '%.mdi' OR LOWER(NAME) LIKE '%.odg' OR LOWER(NAME) LIKE '%.rpm' OR LOWER(NAME) LIKE '%.dcr' OR LOWER(NAME)
LIKE '%.jad' OR LOWER(NAME) LIKE '%.pak' OR LOWER(NAME) LIKE '%.rem' OR LOWER(NAME) LIKE '%.3g2' OR LOWER(NAME)
LIKE '%.3gp' OR LOWER(NAME) LIKE '%.asx' OR LOWER(NAME) LIKE '%.flv' OR LOWER(NAME) LIKE '%.m2t' OR LOWER(NAME)
LIKE '%.m2ts' OR LOWER(NAME) LIKE '%.m4v' OR LOWER(NAME) LIKE '%.mkv' OR LOWER(NAME) LIKE '%.mov' OR LOWER(NAME)
LIKE '%.mp4' OR LOWER(NAME) LIKE '%.mpg' OR LOWER(NAME) LIKE '%.tod' OR LOWER(NAME) LIKE '%.ts' OR LOWER(NAME)
LIKE '%.vob' OR LOWER(NAME) LIKE '%.wmv' OR LOWER(NAME) LIKE '%.hqx' OR LOWER(NAME) LIKE '%.docx' OR LOWER(NAME)
LIKE '%.ppt' OR LOWER(NAME) LIKE '%.pptm' OR LOWER(NAME) LIKE '%.thmx' OR LOWER(NAME) LIKE '%.djvu' OR
LOWER(NAME) LIKE '%.dt2' OR LOWER(NAME) LIKE '%.mrw' OR LOWER(NAME) LIKE '%.wbmp' OR LOWER(NAME) LIKE '%.abr' OR
LOWER(NAME) LIKE '%.ai' OR LOWER(NAME) LIKE '%.icon' OR LOWER(NAME) LIKE '%.ofx' OR LOWER(NAME) LIKE '%.pzl' OR
LOWER(NAME) LIKE '%.tif' OR LOWER(NAME) LIKE '%.u3d' OR LOWER(NAME) LIKE '%.msi' OR LOWER(NAME) LIKE '%.xlsm' OR
LOWER(NAME) LIKE '%.scr' OR LOWER(NAME) LIKE '%.wav' OR LOWER(NAME) LIKE '%.idx' OR LOWER(NAME) LIKE '%.abw' OR
LOWER(NAME) LIKE '%.azw' OR LOWER(NAME) LIKE '%.contact' OR LOWER(NAME) LIKE '%.dot' OR LOWER(NAME) LIKE '%.dotm'
OR LOWER(NAME) LIKE '%.dotx' OR LOWER(NAME) LIKE '%.epub' OR LOWER(NAME) LIKE '%.keynote' OR LOWER(NAME) LIKE
'%.mobi' OR LOWER(NAME) LIKE '%.mswmm' OR LOWER(NAME) LIKE '%.odt' OR LOWER(NAME) LIKE '%.one' OR LOWER(NAME)
LIKE '%.otf' OR LOWER(NAME) LIKE '%.pages' OR LOWER(NAME) LIKE '%.pdf' OR LOWER(NAME) LIKE '%.ppsx' OR
LOWER(NAME) LIKE '%.prproj' OR LOWER(NAME) LIKE '%.pwi' OR LOWER(NAME) LIKE '%.onepkg' OR LOWER(NAME) LIKE
'%.potx' OR LOWER(NAME) LIKE '%.tiff' OR LOWER(NAME) LIKE '%.\!ut' OR LOWER(NAME) LIKE '%.atom' OR LOWER(NAME)
LIKE '%.bc\!' OR LOWER(NAME) LIKE '%.opml' OR LOWER(NAME) LIKE '%.torrent' OR LOWER(NAME) LIKE '%.xhtml' OR
LOWER(NAME) LIKE '%.jar' OR LOWER(NAME) LIKE '%.xlsx' OR LOWER(NAME) LIKE '%.fnt' OR LOWER(NAME) LIKE
'%.sc2replay' OR LOWER(NAME) LIKE '%.1st' OR LOWER(NAME) LIKE '%.air' OR LOWER(NAME) LIKE '%.apk' OR LOWER(NAME)
LIKE '%.cbr' OR LOWER(NAME) LIKE '%.daa' OR LOWER(NAME) LIKE '%.isz' OR LOWER(NAME) LIKE '%.m3u8' OR LOWER(NAME)
LIKE '%.rmvb' OR LOWER(NAME) LIKE '%.sxw' OR LOWER(NAME) LIKE '%.tga' OR LOWER(NAME) LIKE '%.uax' OR LOWER(NAME)
LIKE '%.crx' OR LOWER(NAME) LIKE '%.safariextz' OR LOWER(NAME) LIKE '%.xpi' OR LOWER(NAME) LIKE '%.theme' OR
LOWER(NAME) LIKE '%.themepack' OR LOWER(NAME) LIKE '%.3dr' OR LOWER(NAME) LIKE '%.dic' OR LOWER(NAME) LIKE
'%.dlc' OR LOWER(NAME) LIKE '%.lng' OR LOWER(NAME) LIKE '%.ncs' OR LOWER(NAME) LIKE '%.pcd' OR LOWER(NAME) LIKE
'%.pmd' OR LOWER(NAME) LIKE '%.rss' OR LOWER(NAME) LIKE '%.sng' OR LOWER(NAME) LIKE '%.svp' OR LOWER(NAME) LIKE
'%.swp' OR LOWER(NAME) LIKE '%.thm' OR LOWER(NAME) LIKE '%.uif' OR LOWER(NAME) LIKE '%.upg' OR LOWER(NAME) LIKE
'%.avi' OR LOWER(NAME) LIKE '%.fla' OR LOWER(NAME) LIKE '%.pcm' OR LOWER(NAME) LIKE '%.bbb' OR LOWER(NAME) LIKE
'%.bik' OR LOWER(NAME) LIKE '%.nba' OR LOWER(NAME) LIKE '%.nbu' OR LOWER(NAME) LIKE '%.nco' OR LOWER(NAME) LIKE
'%.wbcat' OR LOWER(NAME) LIKE '%.dao' OR LOWER(NAME) LIKE '%.dmg' OR LOWER(NAME) LIKE '%.tao' OR LOWER(NAME) LIKE
'%.toast' OR LOWER(NAME) LIKE '%.pub' OR LOWER(NAME) LIKE '%.fpx' OR LOWER(NAME) LIKE '%.prg' OR LOWER(NAME) LIKE
'%.cpt' OR LOWER(NAME) LIKE '%.eml' OR LOWER(NAME) LIKE '%.nvram' OR LOWER(NAME) LIKE '%.vmsd' OR LOWER(NAME)
LIKE '%.vmxf' OR LOWER(NAME) LIKE '%.vswp'));"
EFSSG1000I The command completed successfully.
[7803088.ibm]$
4. The lspolicy command can be used to verify that the policy exists and what the policy
contains. The lspolicy command by itself displays all existing policies as shown in
Example 16-5:
Example 16-5 lspolicy command
[7803088.ibm]$ lspolicy
Policy Name Declarations (define/RULE)
default default
FS1_generatedPolicy_2013_07_05_19_32_40 generatedPlacementRule1
FS1_generatedPolicy_2013_07_08_19_44_51 default
migratetocompress migratetocompress
migratetosystem migratetosystem
EFSSG1000I The command completed successfully.
[7803088.ibm]$
5. The lspolicy -P <policy name> command will display the contents of the entire policy as
shown in Example 16-6:
Example 16-6 lspolicy -P command
[7803088.ibm]$ lspolicy -P migratetocompress
Policy Name Declaration Name Default Declarations
migratetocompress migratetocompress N RULE 'migratetocompress' MIGRATE FROM POOL 'system' THRESHOLD(1,0) TO
POOL 'compressed' WHERE NOT (LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.7z.001' OR LOWER(NAME) LIKE
'%.7z.002' OR LOWER(NAME) LIKE '%.7z.003' OR LOWER(NAME) LIKE '%.7zip' OR LOWER(NAME) LIKE '%.a00' OR LOWER(NAME)
LIKE '%.a01' OR LOWER(NAME) LIKE '%.a02' OR LOWER(NAME) LIKE '%.a03' OR LOWER(NAME) LIKE '%.a04' OR LOWER(NAME)
LIKE '%.a05' OR LOWER(NAME) LIKE '%.ace' OR LOWER(NAME) LIKE '%.arj' OR LOWER(NAME) LIKE '%.bkf' OR LOWER(NAME)
LIKE '%.bz2' OR LOWER(NAME) LIKE '%.c00' OR LOWER(NAME) LIKE '%.c01' OR LOWER(NAME) LIKE '%.c02' OR LOWER(NAME)
LIKE '%.c03' OR LOWER(NAME) LIKE '%.cab' OR LOWER(NAME) LIKE '%.cbz' OR LOWER(NAME) LIKE '%.cpgz' OR LOWER(NAME)
LIKE '%.gz' OR LOWER(NAME) LIKE '%.nbh' OR LOWER(NAME) LIKE '%.r00' OR LOWER(NAME) LIKE '%.r01' OR LOWER(NAME)
LIKE '%.r02' OR LOWER(NAME) LIKE '%.r03' OR LOWER(NAME) LIKE '%.r04' OR LOWER(NAME) LIKE '%.r05' OR LOWER(NAME)
LIKE '%.r06' OR LOWER(NAME) LIKE '%.r07' OR LOWER(NAME) LIKE '%.r08' OR LOWER(NAME) LIKE '%.r09' OR LOWER(NAME)
LIKE '%.r10' OR LOWER(NAME) LIKE '%.rar' OR LOWER(NAME) LIKE '%.sisx' OR LOWER(NAME) LIKE '%.sit' OR LOWER(NAME)
LIKE '%.sitx' OR LOWER(NAME) LIKE '%.tar.gz' OR LOWER(NAME) LIKE '%.tgz' OR LOWER(NAME) LIKE '%.wba' OR
LOWER(NAME) LIKE '%.z01' OR LOWER(NAME) LIKE '%.z02' OR LOWER(NAME) LIKE '%.z03' OR LOWER(NAME) LIKE '%.z04' OR
LOWER(NAME) LIKE '%.z05' OR LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.zix' OR LOWER(NAME) LIKE '%.aac' OR
LOWER(NAME) LIKE '%.cda' OR LOWER(NAME) LIKE '%.dvf' OR LOWER(NAME) LIKE '%.flac' OR LOWER(NAME) LIKE '%.gp5' OR
LOWER(NAME) LIKE '%.gpx' OR LOWER(NAME) LIKE '%.logic' OR LOWER(NAME) LIKE '%.m4a' OR LOWER(NAME) LIKE '%.m4b' OR
LOWER(NAME) LIKE '%.m4p' OR LOWER(NAME) LIKE '%.mp3' OR LOWER(NAME) LIKE '%.mts' OR LOWER(NAME) LIKE '%.ogg' OR
LOWER(NAME) LIKE '%.wma' OR LOWER(NAME) LIKE '%.wv' OR LOWER(NAME) LIKE '%.bin' OR LOWER(NAME) LIKE '%.img' OR
LOWER(NAME) LIKE '%.iso' OR LOWER(NAME) LIKE '%.docm' OR LOWER(NAME) LIKE '%.pps' OR LOWER(NAME) LIKE '%.pptx' OR
LOWER(NAME) LIKE '%.acsm' OR LOWER(NAME) LIKE '%.menc' OR LOWER(NAME) LIKE '%.emz' OR LOWER(NAME) LIKE '%.gif' OR
LOWER(NAME) LIKE '%.jpeg' OR LOWER(NAME) LIKE '%.jpg' OR LOWER(NAME) LIKE '%.png' OR LOWER(NAME) LIKE '%.htm' OR
LOWER(NAME) LIKE '%.swf' OR LOWER(NAME) LIKE '%.application' OR LOWER(NAME) LIKE '%.exe' OR LOWER(NAME) LIKE
'%.ipa' OR LOWER(NAME) LIKE '%.part1.exe' OR LOWER(NAME) LIKE '%.crw' OR LOWER(NAME) LIKE '%.cso' OR LOWER(NAME)
LIKE '%.mdi' OR LOWER(NAME) LIKE '%.odg' OR LOWER(NAME) LIKE '%.rpm' OR LOWER(NAME) LIKE '%.dcr' OR LOWER(NAME)
LIKE '%.jad' OR LOWER(NAME) LIKE '%.pak' OR LOWER(NAME) LIKE '%.rem' OR LOWER(NAME) LIKE '%.3g2' OR LOWER(NAME)
LIKE '%.3gp' OR LOWER(NAME) LIKE '%.asx' OR LOWER(NAME) LIKE '%.flv' OR LOWER(NAME) LIKE '%.m2t' OR LOWER(NAME)
LIKE '%.m2ts' OR LOWER(NAME) LIKE '%.m4v' OR LOWER(NAME) LIKE '%.mkv' OR LOWER(NAME) LIKE '%.mov' OR LOWER(NAME)
LIKE '%.mp4' OR LOWER(NAME) LIKE '%.mpg' OR LOWER(NAME) LIKE '%.tod' OR LOWER(NAME) LIKE '%.ts' OR LOWER(NAME)
LIKE '%.vob' OR LOWER(NAME) LIKE '%.wmv' OR LOWER(NAME) LIKE '%.hqx' OR LOWER(NAME) LIKE '%.docx' OR LOWER(NAME)
LIKE '%.ppt' OR LOWER(NAME) LIKE '%.pptm' OR LOWER(NAME) LIKE '%.thmx' OR LOWER(NAME) LIKE '%.djvu' OR
LOWER(NAME) LIKE '%.dt2' OR LOWER(NAME) LIKE '%.mrw' OR LOWER(NAME) LIKE '%.wbmp' OR LOWER(NAME) LIKE '%.abr' OR
LOWER(NAME) LIKE '%.ai' OR LOWER(NAME) LIKE '%.icon' OR LOWER(NAME) LIKE '%.ofx' OR LOWER(NAME) LIKE '%.pzl' OR
LOWER(NAME) LIKE '%.tif' OR LOWER(NAME) LIKE '%.u3d' OR LOWER(NAME) LIKE '%.msi' OR LOWER(NAME) LIKE '%.xlsm' OR
LOWER(NAME) LIKE '%.scr' OR LOWER(NAME) LIKE '%.wav' OR LOWER(NAME) LIKE '%.idx' OR LOWER(NAME) LIKE '%.abw' OR
LOWER(NAME) LIKE '%.azw' OR LOWER(NAME) LIKE '%.contact' OR LOWER(NAME) LIKE '%.dot' OR LOWER(NAME) LIKE '%.dotm'
OR LOWER(NAME) LIKE '%.dotx' OR LOWER(NAME) LIKE '%.epub' OR LOWER(NAME) LIKE '%.keynote' OR LOWER(NAME) LIKE
'%.mobi' OR LOWER(NAME) LIKE '%.mswmm' OR LOWER(NAME) LIKE '%.odt' OR LOWER(NAME) LIKE '%.one' OR LOWER(NAME)
LIKE '%.otf' OR LOWER(NAME) LIKE '%.pages' OR LOWER(NAME) LIKE '%.pdf' OR LOWER(NAME) LIKE '%.ppsx' OR
LOWER(NAME) LIKE '%.prproj' OR LOWER(NAME) LIKE '%.pwi' OR LOWER(NAME) LIKE '%.onepkg' OR LOWER(NAME) LIKE
'%.potx' OR LOWER(NAME) LIKE '%.tiff' OR LOWER(NAME) LIKE %.!ut OR LOWER(NAME) LIKE '%.atom' OR LOWER(NAME) LIKE
%.bc! OR LOWER(NAME) LIKE '%.opml' OR LOWER(NAME) LIKE '%.torrent' OR LOWER(NAME) LIKE '%.xhtml' OR LOWER(NAME)
LIKE '%.jar' OR LOWER(NAME) LIKE '%.xlsx' OR LOWER(NAME) LIKE '%.fnt' OR LOWER(NAME) LIKE '%.sc2replay' OR
LOWER(NAME) LIKE '%.1st' OR LOWER(NAME) LIKE '%.air' OR LOWER(NAME) LIKE '%.apk' OR LOWER(NAME) LIKE '%.cbr' OR
LOWER(NAME) LIKE '%.daa' OR LOWER(NAME) LIKE '%.isz' OR LOWER(NAME) LIKE '%.m3u8' OR LOWER(NAME) LIKE '%.rmvb' OR
LOWER(NAME) LIKE '%.sxw' OR LOWER(NAME) LIKE '%.tga' OR LOWER(NAME) LIKE '%.uax' OR LOWER(NAME) LIKE '%.crx' OR
LOWER(NAME) LIKE '%.safariextz' OR LOWER(NAME) LIKE '%.xpi' OR LOWER(NAME) LIKE '%.theme' OR LOWER(NAME) LIKE
'%.themepack' OR LOWER(NAME) LIKE '%.3dr' OR LOWER(NAME) LIKE '%.dic' OR LOWER(NAME) LIKE '%.dlc' OR LOWER(NAME)
LIKE '%.lng' OR LOWER(NAME) LIKE '%.ncs' OR LOWER(NAME) LIKE '%.pcd' OR LOWER(NAME) LIKE '%.pmd' OR LOWER(NAME)
LIKE '%.rss' OR LOWER(NAME) LIKE '%.sng' OR LOWER(NAME) LIKE '%.svp' OR LOWER(NAME) LIKE '%.swp' OR LOWER(NAME)
LIKE '%.thm' OR LOWER(NAME) LIKE '%.uif' OR LOWER(NAME) LIKE '%.upg' OR LOWER(NAME) LIKE '%.avi' OR LOWER(NAME)
LIKE '%.fla' OR LOWER(NAME) LIKE '%.pcm' OR LOWER(NAME) LIKE '%.bbb' OR LOWER(NAME) LIKE '%.bik' OR LOWER(NAME)
LIKE '%.nba' OR LOWER(NAME) LIKE '%.nbu' OR LOWER(NAME) LIKE '%.nco' OR LOWER(NAME) LIKE '%.wbcat' OR LOWER(NAME)
LIKE '%.dao' OR LOWER(NAME) LIKE '%.dmg' OR LOWER(NAME) LIKE '%.tao' OR LOWER(NAME) LIKE '%.toast' OR LOWER(NAME)
LIKE '%.pub' OR LOWER(NAME) LIKE '%.fpx' OR LOWER(NAME) LIKE '%.prg' OR LOWER(NAME) LIKE '%.cpt' OR LOWER(NAME)
LIKE '%.eml' OR LOWER(NAME) LIKE '%.nvram' OR LOWER(NAME) LIKE '%.vmsd' OR LOWER(NAME) LIKE '%.vmxf' OR
LOWER(NAME) LIKE '%.vswp'))
EFSSG1000I The command completed successfully.
[7803088.ibm]$
6. Before running the runpolicy command, you can use the GUI to verify the current
information for all the pools, as shown in Figure 16-21 on page 309. The system pool
contains 4.40 GB of both compressible and non-compressible data, and the compressed
pool has 375.75 MB of used data.
Figure 16-21 Capacity
7. The CLI allows us to view the active jobs with the lsjobstatus command and to view the
migration job results with the showlog command. Example 16-7 shows the sequence of
CLI commands to initiate the migration with the runpolicy command and to monitor its
progress.
Example 16-7 Job progress
[7803088.ibm]$
[7803088.ibm]$ runpolicy FS1 -P migratetocompress
EFSSA0184I The policy is started on FS1 with JobID 58.
[7803088.ibm]$
[7803088.ibm]$ lsjobstatus
File system Job Job id Status Start time End time/Progress RC Message
FS1 runpolicy 58 running 7/12/13 11:07:07 PM IDT Job has started.
EFSSG1000I The command completed successfully.
[7803088.ibm]$
[7803088.ibm]$ showlog 58
Primary node: mgmt001st001
Job ID : 58
[I] GPFS Current Data Pool Utilization in KB and %
compressed 6912 104857600 0.006592%
system 5104896 209715200 2.434204%
[I] 156427 of 1000448 inodes used: 15.635695%.
[I] Loaded policy rules from /var/opt/IBM/sofs/PolicyFiles/policy4599440165344313344.
Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2013-07-12@20:07:08 UTC
parsed 0 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules,
0 List Rules, 0 External Pool/List Rules
RULE 'migratetocompress' migrate from pool 'system' threshold(1,0) to pool 'compressed' where NOT (NOT
(LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.7z.001' OR LOWER(NAME) LIKE '%.7z.002' OR LOWER(NAME) LIKE
'%.7z.003' OR LOWER(NAME) LIKE '%.7zip' OR LOWER(NAME) LIKE '%.a00' OR LOWER(NAME) LIKE '%.a01' OR LOWER(NAME)
LIKE '%.a02' OR LOWER(NAME) LIKE '%.a03' OR LOWER(NAME) LIKE '%.a04' OR LOWER(NAME) LIKE '%.a05' OR LOWER(NAME)
LIKE '%.ace' OR LOWER(NAME) LIKE '%.arj' OR LOWER(NAME) LIKE '%.bkf' OR LOWER(NAME) LIKE '%.bz2' OR LOWER(NAME)
LIKE '%.c00' OR LOWER(NAME) LIKE '%.c01' OR LOWER(NAME) LIKE '%.c02' OR LOWER(NAME) LIKE '%.c03' OR LOWER(NAME)
LIKE '%.cab' OR LOWER(NAME) LIKE '%.cbz' OR LOWER(NAME) LIKE '%.cpgz' OR LOWER(NAME) LIKE '%.gz' OR LOWER(NAME)
LIKE '%.nbh' OR LOWER(NAME) LIKE '%.r00' OR LOWER(NAME) LIKE '%.r01' OR LOWER(NAME) LIKE '%.r02' OR LOWER(NAME)
LIKE '%.r03' OR LOWER(NAME) LIKE '%.r04' OR LOWER(NAME) LIKE '%.r05' OR LOWER(NAME) LIKE '%.r06' OR LOWER(NAME)
LIKE '%.r07' OR LOWER(NAME) LIKE '%.r08' OR LOWER(NAME) LIKE '%.r09' OR LOWER(NAME) LIKE '%.r10' OR LOWER(NAME)
LIKE '%.rar' OR LOWER(NAME) LIKE '%.sisx' OR LOWER(NAME) LIKE '%.sit' OR LOWER(NAME) LIKE '%.sitx' OR LOWER(NAME)
LIKE '%.tar.gz' OR LOWER(NAME) LIKE '%.tgz' OR LOWER(NAME) LIKE '%.wba' OR LOWER(NAME) LIKE '%.z01' OR
LOWER(NAME) LIKE '%.z02' OR LOWER(NAME) LIKE '%.z03' OR LOWER(NAME) LIKE '%.z04' OR LOWER(NAME) LIKE '%.z05' OR
LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.zix' OR LOWER(NAME) LIKE '%.aac' OR LOWER(NAME) LIKE '%.cda' OR
LOWER(NAME) LIKE '%.dvf' OR LOWER(NAME) LIKE '%.flac' OR LOWER(NAME) LIKE '%.gp5' OR LOWER(NAME) LIKE '%.gpx' OR
LOWER(NAME) LIKE '%.logic' OR LOWER(NAME) LIKE '%.m4a' OR LOWER(NAME) LIKE '%.m4b' OR LOWER(NAME) LIKE '%.m4p' OR
LOWER(NAME) LIKE '%.mp3' OR LOWER(NAME) LIKE '%.mts' OR LOWER(NAME) LIKE '%.ogg' OR LOWER(NAME) LIKE '%.wma' OR
LOWER(NAME) LIKE '%.wv' OR LOWER(NAME) LIKE '%.bin' OR LOWER(NAME) LIKE '%.img' OR LOWER(NAME) LIKE '%.iso' OR
LOWER(NAME) LIKE '%.docm' OR LOWER(NAME) LIKE '%.pps' OR LOWER(NAME) LIKE '%.pptx' OR LOWER(NAME) LIKE '%.acsm'
OR LOWER(NAME) LIKE '%.menc' OR LOWER(NAME) LIKE '%.emz' OR LOWER(NAME) LIKE '%.gif' OR LOWER(NAME) LIKE '%.jpeg'
OR LOWER(NAME) LIKE '%.jpg' OR LOWER(NAME) LIKE '%.png' OR LOWER(NAME) LIKE '%.htm' OR LOWER(NAME) LIKE '%.swf'
OR LOWER(NAME) LIKE '%.application' OR LOWER(NAME) LIKE '%.exe' OR LOWER(NAME) LIKE '%.ipa' OR LOWER(NAME) LIKE
'%.part1.exe' OR LOWER(NAME) LIKE '%.crw' OR LOWER(NAME) LIKE '%.cso' OR LOWER(NAME) LIKE '%.mdi' OR LOWER(NAME)
LIKE '%.odg' OR LOWER(NAME) LIKE '%.rpm' OR LOWER(NAME) LIKE '%.dcr' OR LOWER(NAME) LIKE '%.jad' OR LOWER(NAME)
LIKE '%.pak' OR LOWER(NAME) LIKE '%.rem' OR LOWER(NAME) LIKE '%.3g2' OR LOWER(NAME) LIKE '%.3gp' OR LOWER(NAME)
LIKE '%.asx' OR LOWER(NAME) LIKE '%.flv' OR LOWER(NAME) LIKE '%.m2t' OR LOWER(NAME) LIKE '%.m2ts' OR LOWER(NAME)
LIKE '%.m4v' OR LOWER(NAME) LIKE '%.mkv' OR LOWER(NAME) LIKE '%.mov' OR LOWER(NAME) LIKE '%.mp4' OR LOWER(NAME)
LIKE '%.mpg' OR LOWER(NAME) LIKE '%.tod' OR LOWER(NAME) LIKE '%.ts' OR LOWER(NAME) LIKE '%.vob' OR LOWER(NAME)
LIKE '%.wmv' OR LOWER(NAME) LIKE '%.hqx' OR LOWER(NAME) LIKE '%.docx' OR LOWER(NAME) LIKE '%.ppt' OR LOWER(NAME)
LIKE '%.pptm' OR LOWER(NAME) LIKE '%.thmx' OR LOWER(NAME) LIKE '%.djvu' OR LOWER(NAME) LIKE '%.dt2' OR
LOWER(NAME) LIKE '%.mrw' OR LOWER(NAME) LIKE '%.wbmp' OR LOWER(NAME) LIKE '%.abr' OR LOWER(NAME) LIKE '%.ai' OR
LOWER(NAME) LIKE '%.icon' OR LOWER(NAME) LIKE '%.ofx' OR LOWER(NAME) LIKE '%.pzl' OR LOWER(NAME) LIKE '%.tif' OR
LOWER(NAME) LIKE '%.u3d' OR LOWER(NAME) LIKE '%.msi' OR LOWER(NAME) LIKE '%.xlsm' OR LOWER(NAME) LIKE '%.scr' OR
LOWER(NAME) LIKE '%.wav' OR LOWER(NAME) LIKE '%.idx' OR LOWER(NAME) LIKE '%.abw' OR LOWER(NAME) LIKE '%.azw' OR
LOWER(NAME) LIKE '%.contact' OR LOWER(NAME) LIKE '%.dot' OR LOWER(NAME) LIKE '%.dotm' OR LOWER(NAME) LIKE
'%.dotx' OR LOWER(NAME) LIKE '%.epub' OR LOWER(NAME) LIKE '%.keynote' OR LOWER(NAME) LIKE '%.mobi' OR LOWER(NAME)
LIKE '%.mswmm' OR LOWER(NAME) LIKE '%.odt' OR LOWER(NAME) LIKE '%.one' OR LOWER(NAME) LIKE '%.otf' OR LOWER(NAME)
LIKE '%.pages' OR LOWER(NAME) LIKE '%.pdf' OR LOWER(NAME) LIKE '%.ppsx' OR LOWER(NAME) LIKE '%.prproj' OR
LOWER(NAME) LIKE '%.pwi' OR LOWER(NAME) LIKE '%.onepkg' OR LOWER(NAME) LIKE '%.potx' OR LOWER(NAME) LIKE '%.tiff'
OR LOWER(NAME) LIKE '%.\!ut' OR LOWER(NAME) LIKE '%.atom' OR LOWER(NAME) LIKE '%.bc\!' OR LOWER(NAME) LIKE
'%.opml' OR LOWER(NAME) LIKE '%.torrent' OR LOWER(NAME) LIKE '%.xhtml' OR LOWER(NAME) LIKE '%.jar' OR LOWER(NAME)
LIKE '%.xlsx' OR LOWER(NAME) LIKE '%.fnt' OR LOWER(NAME) LIKE '%.sc2replay' OR LOWER(NAME) LIKE '%.1st' OR
LOWER(NAME) LIKE '%.air' OR LOWER(NAME) LIKE '%.apk' OR LOWER(NAME) LIKE '%.cbr' OR LOWER(NAME) LIKE '%.daa' OR
LOWER(NAME) LIKE '%.isz' OR LOWER(NAME) LIKE '%.m3u8' OR LOWER(NAME) LIKE '%.rmvb' OR LOWER(NAME) LIKE '%.sxw' OR
LOWER(NAME) LIKE '%.tga' OR LOWER(NAME) LIKE '%.uax' OR LOWER(NAME) LIKE '%.crx' OR LOWER(NAME) LIKE
'%.safariextz' OR LOWER(NAME) LIKE '%.xpi' OR LOWER(NAME) LIKE '%.theme' OR LOWER(NAME) LIKE '%.themepack' OR
LOWER(NAME) LIKE '%.3dr' OR LOWER(NAME) LIKE '%.dic' OR LOWER(NAME) LIKE '%.dlc' OR LOWER(NAME) LIKE '%.lng' OR
LOWER(NAME) LIKE '%.ncs' OR LOWER(NAME) LIKE '%.pcd' OR LOWER(NAME) LIKE '%.pmd' OR LOWER(NAME) LIKE '%.rss' OR
LOWER(NAME) LIKE '%.sng' OR LOWER(NAME) LIKE '%.svp' OR LOWER(NAME) LIKE '%.swp' OR LOWER(NAME) LIKE '%.thm' OR
LOWER(NAME) LIKE '%.uif' OR LOWER(NAME) LIKE '%.upg' OR LOWER(NAME) LIKE '%.avi' OR LOWER(NAME) LIKE '%.fla' OR
LOWER(NAME) LIKE '%.pcm' OR LOWER(NAME) LIKE '%.bbb' OR LOWER(NAME) LIKE '%.bik' OR LOWER(NAME) LIKE '%.nba' OR
LOWER(NAME) LIKE '%.nbu' OR LOWER(NAME) LIKE '%.nco' OR LOWER(NAME) LIKE '%.wbcat' OR LOWER(NAME) LIKE '%.dao' OR
LOWER(NAME) LIKE '%.dmg' OR LOWER(NAME) LIKE '%.tao' OR LOWER(NAME) LIKE '%.toast' OR LOWER(NAME) LIKE '%.pub' OR
LOWER(NAME) LIKE '%.fpx' OR LOWER(NAME) LIKE '%.prg' OR LOWER(NAME) LIKE '%.cpt' OR LOWER(NAME) LIKE '%.eml' OR
LOWER(NAME) LIKE '%.nvram' OR LOWER(NAME) LIKE '%.vmsd' OR LOWER(NAME) LIKE '%.vmxf' OR LOWER(NAME) LIKE
'%.vswp'))
[I] Directories scan: 144224 files, 8224 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 57894 1128288 57894 1128288 16 RULE 'migratetocompress' MIGRATE FROM POOL 'system'
THRESHOLD(1,0) TO POOL 'compressed' WHERE(.)
[I] Filesystem objects with no applicable rules: 94540.
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 1128288KB: 57894 of 57894 candidates;
Chose to premigrate 0KB: 0 candidates;
Already co-managed 0KB: 0 candidates;
Chose to delete 0KB: 0 of 0 candidates;
Chose to list 0KB: 0 of 0 candidates;
16KB of chosen data is illplaced or illreplicated;
Predicted Data Pool Utilization in KB and %:
compressed 1135200 104857600 1.082611%
system 3994784 209715200 1.904861%
[I] Because some data is illplaced or illreplicated, predicted pool utilization may be negative and/or
misleading!
----------------------------------------------------------
End of log - runpolicy still running
----------------------------------------------------------
EFSSG1000I The command completed successfully.
[7803088.ibm]$
[7803088.ibm]$ showlog 58
Primary node: mgmt001st001
Job ID : 58
[I] GPFS Current Data Pool Utilization in KB and %
compressed 6912 104857600 0.006592%
system 5104896 209715200 2.434204%
[I] 156427 of 1000448 inodes used: 15.635695%.
[I] Loaded policy rules from /var/opt/IBM/sofs/PolicyFiles/policy4599440165344313344.
Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP = 2013-07-12@20:07:08 UTC
parsed 0 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules,
0 List Rules, 0 External Pool/List Rules
RULE 'migratetocompress' migrate from pool 'system' threshold(1,0) to pool 'compressed' where NOT (NOT
(LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.7z.001' OR LOWER(NAME) LIKE '%.7z.002' OR LOWER(NAME) LIKE
'%.7z.003' OR LOWER(NAME) LIKE '%.7zip' OR LOWER(NAME) LIKE '%.a00' OR LOWER(NAME) LIKE '%.a01' OR LOWER(NAME)
LIKE '%.a02' OR LOWER(NAME) LIKE '%.a03' OR LOWER(NAME) LIKE '%.a04' OR LOWER(NAME) LIKE '%.a05' OR LOWER(NAME)
LIKE '%.ace' OR LOWER(NAME) LIKE '%.arj' OR LOWER(NAME) LIKE '%.bkf' OR LOWER(NAME) LIKE '%.bz2' OR LOWER(NAME)
LIKE '%.c00' OR LOWER(NAME) LIKE '%.c01' OR LOWER(NAME) LIKE '%.c02' OR LOWER(NAME) LIKE '%.c03' OR LOWER(NAME)
LIKE '%.cab' OR LOWER(NAME) LIKE '%.cbz' OR LOWER(NAME) LIKE '%.cpgz' OR LOWER(NAME) LIKE '%.gz' OR LOWER(NAME)
LIKE '%.nbh' OR LOWER(NAME) LIKE '%.r00' OR LOWER(NAME) LIKE '%.r01' OR LOWER(NAME) LIKE '%.r02' OR LOWER(NAME)
LIKE '%.r03' OR LOWER(NAME) LIKE '%.r04' OR LOWER(NAME) LIKE '%.r05' OR LOWER(NAME) LIKE '%.r06' OR LOWER(NAME)
LIKE '%.r07' OR LOWER(NAME) LIKE '%.r08' OR LOWER(NAME) LIKE '%.r09' OR LOWER(NAME) LIKE '%.r10' OR LOWER(NAME)
LIKE '%.rar' OR LOWER(NAME) LIKE '%.sisx' OR LOWER(NAME) LIKE '%.sit' OR LOWER(NAME) LIKE '%.sitx' OR LOWER(NAME)
LIKE '%.tar.gz' OR LOWER(NAME) LIKE '%.tgz' OR LOWER(NAME) LIKE '%.wba' OR LOWER(NAME) LIKE '%.z01' OR
LOWER(NAME) LIKE '%.z02' OR LOWER(NAME) LIKE '%.z03' OR LOWER(NAME) LIKE '%.z04' OR LOWER(NAME) LIKE '%.z05' OR
LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.zix' OR LOWER(NAME) LIKE '%.aac' OR LOWER(NAME) LIKE '%.cda' OR
LOWER(NAME) LIKE '%.dvf' OR LOWER(NAME) LIKE '%.flac' OR LOWER(NAME) LIKE '%.gp5' OR LOWER(NAME) LIKE '%.gpx' OR
LOWER(NAME) LIKE '%.logic' OR LOWER(NAME) LIKE '%.m4a' OR LOWER(NAME) LIKE '%.m4b' OR LOWER(NAME) LIKE '%.m4p' OR
LOWER(NAME) LIKE '%.mp3' OR LOWER(NAME) LIKE '%.mts' OR LOWER(NAME) LIKE '%.ogg' OR LOWER(NAME) LIKE '%.wma' OR
LOWER(NAME) LIKE '%.wv' OR LOWER(NAME) LIKE '%.bin' OR LOWER(NAME) LIKE '%.img' OR LOWER(NAME) LIKE '%.iso' OR
LOWER(NAME) LIKE '%.docm' OR LOWER(NAME) LIKE '%.pps' OR LOWER(NAME) LIKE '%.pptx' OR LOWER(NAME) LIKE '%.acsm'
OR LOWER(NAME) LIKE '%.menc' OR LOWER(NAME) LIKE '%.emz' OR LOWER(NAME) LIKE '%.gif' OR LOWER(NAME) LIKE '%.jpeg'
OR LOWER(NAME) LIKE '%.jpg' OR LOWER(NAME) LIKE '%.png' OR LOWER(NAME) LIKE '%.htm' OR LOWER(NAME) LIKE '%.swf'
OR LOWER(NAME) LIKE '%.application' OR LOWER(NAME) LIKE '%.exe' OR LOWER(NAME) LIKE '%.ipa' OR LOWER(NAME) LIKE
'%.part1.exe' OR LOWER(NAME) LIKE '%.crw' OR LOWER(NAME) LIKE '%.cso' OR LOWER(NAME) LIKE '%.mdi' OR LOWER(NAME)
LIKE '%.odg' OR LOWER(NAME) LIKE '%.rpm' OR LOWER(NAME) LIKE '%.dcr' OR LOWER(NAME) LIKE '%.jad' OR LOWER(NAME)
LIKE '%.pak' OR LOWER(NAME) LIKE '%.rem' OR LOWER(NAME) LIKE '%.3g2' OR LOWER(NAME) LIKE '%.3gp' OR LOWER(NAME)
LIKE '%.asx' OR LOWER(NAME) LIKE '%.flv' OR LOWER(NAME) LIKE '%.m2t' OR LOWER(NAME) LIKE '%.m2ts' OR LOWER(NAME)
LIKE '%.m4v' OR LOWER(NAME) LIKE '%.mkv' OR LOWER(NAME) LIKE '%.mov' OR LOWER(NAME) LIKE '%.mp4' OR LOWER(NAME)
LIKE '%.mpg' OR LOWER(NAME) LIKE '%.tod' OR LOWER(NAME) LIKE '%.ts' OR LOWER(NAME) LIKE '%.vob' OR LOWER(NAME)
LIKE '%.wmv' OR LOWER(NAME) LIKE '%.hqx' OR LOWER(NAME) LIKE '%.docx' OR LOWER(NAME) LIKE '%.ppt' OR LOWER(NAME)
LIKE '%.pptm' OR LOWER(NAME) LIKE '%.thmx' OR LOWER(NAME) LIKE '%.djvu' OR LOWER(NAME) LIKE '%.dt2' OR
LOWER(NAME) LIKE '%.mrw' OR LOWER(NAME) LIKE '%.wbmp' OR LOWER(NAME) LIKE '%.abr' OR LOWER(NAME) LIKE '%.ai' OR
LOWER(NAME) LIKE '%.icon' OR LOWER(NAME) LIKE '%.ofx' OR LOWER(NAME) LIKE '%.pzl' OR LOWER(NAME) LIKE '%.tif' OR
LOWER(NAME) LIKE '%.u3d' OR LOWER(NAME) LIKE '%.msi' OR LOWER(NAME) LIKE '%.xlsm' OR LOWER(NAME) LIKE '%.scr' OR
LOWER(NAME) LIKE '%.wav' OR LOWER(NAME) LIKE '%.idx' OR LOWER(NAME) LIKE '%.abw' OR LOWER(NAME) LIKE '%.azw' OR
LOWER(NAME) LIKE '%.contact' OR LOWER(NAME) LIKE '%.dot' OR LOWER(NAME) LIKE '%.dotm' OR LOWER(NAME) LIKE
'%.dotx' OR LOWER(NAME) LIKE '%.epub' OR LOWER(NAME) LIKE '%.keynote' OR LOWER(NAME) LIKE '%.mobi' OR LOWER(NAME)
LIKE '%.mswmm' OR LOWER(NAME) LIKE '%.odt' OR LOWER(NAME) LIKE '%.one' OR LOWER(NAME) LIKE '%.otf' OR LOWER(NAME)
LIKE '%.pages' OR LOWER(NAME) LIKE '%.pdf' OR LOWER(NAME) LIKE '%.ppsx' OR LOWER(NAME) LIKE '%.prproj' OR
LOWER(NAME) LIKE '%.pwi' OR LOWER(NAME) LIKE '%.onepkg' OR LOWER(NAME) LIKE '%.potx' OR LOWER(NAME) LIKE '%.tiff'
OR LOWER(NAME) LIKE '%.\!ut' OR LOWER(NAME) LIKE '%.atom' OR LOWER(NAME) LIKE '%.bc\!' OR LOWER(NAME) LIKE
'%.opml' OR LOWER(NAME) LIKE '%.torrent' OR LOWER(NAME) LIKE '%.xhtml' OR LOWER(NAME) LIKE '%.jar' OR LOWER(NAME)
LIKE '%.xlsx' OR LOWER(NAME) LIKE '%.fnt' OR LOWER(NAME) LIKE '%.sc2replay' OR LOWER(NAME) LIKE '%.1st' OR
LOWER(NAME) LIKE '%.air' OR LOWER(NAME) LIKE '%.apk' OR LOWER(NAME) LIKE '%.cbr' OR LOWER(NAME) LIKE '%.daa' OR
LOWER(NAME) LIKE '%.isz' OR LOWER(NAME) LIKE '%.m3u8' OR LOWER(NAME) LIKE '%.rmvb' OR LOWER(NAME) LIKE '%.sxw' OR
LOWER(NAME) LIKE '%.tga' OR LOWER(NAME) LIKE '%.uax' OR LOWER(NAME) LIKE '%.crx' OR LOWER(NAME) LIKE
'%.safariextz' OR LOWER(NAME) LIKE '%.xpi' OR LOWER(NAME) LIKE '%.theme' OR LOWER(NAME) LIKE '%.themepack' OR
LOWER(NAME) LIKE '%.3dr' OR LOWER(NAME) LIKE '%.dic' OR LOWER(NAME) LIKE '%.dlc' OR LOWER(NAME) LIKE '%.lng' OR
LOWER(NAME) LIKE '%.ncs' OR LOWER(NAME) LIKE '%.pcd' OR LOWER(NAME) LIKE '%.pmd' OR LOWER(NAME) LIKE '%.rss' OR
LOWER(NAME) LIKE '%.sng' OR LOWER(NAME) LIKE '%.svp' OR LOWER(NAME) LIKE '%.swp' OR LOWER(NAME) LIKE '%.thm' OR
LOWER(NAME) LIKE '%.uif' OR LOWER(NAME) LIKE '%.upg' OR LOWER(NAME) LIKE '%.avi' OR LOWER(NAME) LIKE '%.fla' OR
LOWER(NAME) LIKE '%.pcm' OR LOWER(NAME) LIKE '%.bbb' OR LOWER(NAME) LIKE '%.bik' OR LOWER(NAME) LIKE '%.nba' OR
LOWER(NAME) LIKE '%.nbu' OR LOWER(NAME) LIKE '%.nco' OR LOWER(NAME) LIKE '%.wbcat' OR LOWER(NAME) LIKE '%.dao' OR
LOWER(NAME) LIKE '%.dmg' OR LOWER(NAME) LIKE '%.tao' OR LOWER(NAME) LIKE '%.toast' OR LOWER(NAME) LIKE '%.pub' OR
LOWER(NAME) LIKE '%.fpx' OR LOWER(NAME) LIKE '%.prg' OR LOWER(NAME) LIKE '%.cpt' OR LOWER(NAME) LIKE '%.eml' OR
LOWER(NAME) LIKE '%.nvram' OR LOWER(NAME) LIKE '%.vmsd' OR LOWER(NAME) LIKE '%.vmxf' OR LOWER(NAME) LIKE
'%.vswp'))
[I] Directories scan: 144224 files, 8224 directories, 0 other objects, 0 'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 57894 1128288 57894 1128288 16 RULE 'migratetocompress' MIGRATE FROM POOL 'system'
THRESHOLD(1,0) TO POOL 'compressed' WHERE(.)
[I] Filesystem objects with no applicable rules: 94540.
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 1128288KB: 57894 of 57894 candidates;
Chose to premigrate 0KB: 0 candidates;
Already co-managed 0KB: 0 candidates;
Chose to delete 0KB: 0 of 0 candidates;
Chose to list 0KB: 0 of 0 candidates;
16KB of chosen data is illplaced or illreplicated;
Predicted Data Pool Utilization in KB and %:
compressed 1135200 104857600 1.082611%
system 3994784 209715200 1.904861%
[I] Because some data is illplaced or illreplicated, predicted pool utilization may be negative and/or
misleading!
[I] A total of 57894 files have been migrated, deleted or processed by an EXTERNAL EXEC/script;
0 'skipped' files and/or errors.
----------------------------------------------------------
End of log - runpolicy job ended
----------------------------------------------------------
EFSSG1000I The command completed successfully.
[7803088.ibm]$
8. After running the runpolicy command, you can use the GUI to verify the current
information for all the pools, as shown in Figure 16-22 on page 312. Verify that data moved
between the pools as expected.
Figure 16-22 Capacity
Enabling a placement policy after the migration completes
After the migration completes, a placement policy needs to be added so that new files are
directed automatically to the proper pool, depending on whether they are compressible.
1. This can be done in the GUI, as shown in Figure 16-23 on page 312, by editing the file
system, selecting the Placement Policy tab, and selecting Enable file placement.
Select Add exclusion rule, which automatically adds the placement rule that places any
new files in either the compressed or the uncompressed file system pool. Select OK to
complete.
Figure 16-23 Placement policy
Migrating data from the compressed pool to the system pool
It is possible to revert a selectively compressed file system to an uncompressed one while
full data access is maintained. The following steps are needed to migrate a file system that
has an uncompressed system pool and a second, compressed file system pool to a fully
uncompressed file system. The CLI is used for these steps:
1. Analyze how much physical space is needed to move all data to the system pool. The
conversion is easiest when the required uncompressed space is available as free disk
space in addition to the compressed pool. If this is not the case, a step-by-step approach
must be taken, and volumes must be removed from the compressed pool as it is emptied.
2. Create a default placement rule for the file system by using the rule RULE 'default' SET
POOL 'system'; (see Example 16-3).
3. Create and run a migration rule that will move data to the system pool.
To migrate data back to the system pool, a simple policy rule is created that moves all data
from the compressed pool to the system pool. A migration policy can be applied to a file
system, which ensures that the migration starts automatically and is executed regularly; this
is what happens if the policy is created in the GUI policy text editor. Because the migration
rule should run only once, another approach is chosen: the CLI is used to create a policy,
start a policy run, and monitor the migration of data. The migration starts only when the
runpolicy command is executed from the CLI. This procedure is shown in Example 16-8 on
page 313:
Example 16-8 Uncompressing data
# Create a policy rule - this policy is not applied to any file system
[7802378.ibm]# mkpolicy migratetosystem -R "RULE 'migratetosystem' migrate from
pool 'compressed' to pool 'system';"
EFSSG1000I The command completed successfully.
# The runpolicy command will only run the policy once on our file system
[7802378.ibm]$ runpolicy FS1 -P migratetosystem
EFSSA0184I The policy is started on FS1_test with JobID 806.
# The showlog command provides the output of policy runs
[7802378.ibm]$ showlog 806
Primary node: mgmt001st001
Job ID : 806
[I] GPFS Current Data Pool Utilization in KB and %
compressed 6531584 39845888 16.392116%
system 20494848 49283072 41.585979%
[I] 1000507 of 1200640 inodes used: 83.331140%.
[I] Loaded policy rules from
/var/opt/IBM/sofs/PolicyFiles/policy4252244384126468096.
Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP =
2012-11-15@07:39:05 UTC
parsed 0 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules,
0 List Rules, 0 External Pool/List Rules
RULE 'migratetosystem' migrate from pool 'compressed' to pool 'system'
[I] Directories scan: 923603 files, 72924 directories, 0 other objects, 0
'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 574246 4945240 574246 4945240 0 RULE 'migratetosystem' MIGRATE FROM POOL
'compressed' TO POOL 'system'
[I] Filesystem objects with no applicable rules: 422280.
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 4945240KB: 574246 of 574246 candidates;
Chose to premigrate 0KB: 0 candidates;
Already co-managed 0KB: 0 candidates;
Chose to delete 0KB: 0 of 0 candidates;
Chose to list 0KB: 0 of 0 candidates;
0KB of chosen data is illplaced or illreplicated;
Predicted Data Pool Utilization in KB and %:
compressed 1586344 39845888 3.981199%
system 25472856 49283072 51.686827%
[I] A total of 574246 files have been migrated, deleted or processed by an
EXTERNAL EXEC/script;
0 'skipped' files and/or errors.
----------------------------------------------------------
End of log - runpolicy job ended
----------------------------------------------------------
EFSSG1000I The command completed successfully.
4. Remove all volumes of the compressed pool.
After the policy run has completed and all data has been moved to the system pool, the
volumes of the compressed pool can be removed. The GUI can be used to perform this task
in the edit file system dialog. Setting the capacity of the compressed pool to 0, as shown in
Figure 16-24 on page 314, removes all compressed volumes (NSDs) and finally removes the
compressed file system pool.
Figure 16-24 Emptying the compressed pool
16.3 Capacity Planning
The capacity planning approach is determined by the file system storage pool architecture
that has been chosen. The capacity planning approaches that are described here cover a
fully compressed file system and a selectively compressed file system.
16.3.1 Planning capacity with the NAS Compression Estimation Utility
The NAS Compression Estimation Utility is a command-line, host-based utility that can be
used to estimate the expected compression ratio in a Network-Attached Storage (NAS)
environment and to plan the size of a file system. The utility uses a file type extension list
(located in the installation directory) with the corresponding compression ratios. The utility
reports (in an Excel spreadsheet, as shown in Figure 16-25) not only the expected
compression rate but also the total size before and after compression and the expected
savings. The information is presented as a total for each file extension. This information can
be used to size the file system compression pool.
The utility runs on a Windows host that has access to the target filer or share to be analyzed,
and it performs only read operations, so it has no effect on the data stored on the NAS. The
output provides insight into how much physical space is needed overall to store the
compressed data. The average compressibility of the data provides the information needed
to size and configure a fully compressed file system. The NAS Compression Estimation
Utility is currently available for download at:
http://www14.software.ibm.com/webapp/set2/sas/f/comprestimator/NAS_Compression_estimation_utility.html
Figure 16-25 NAS Compression Utility output
The utility is supported on:
Windows file servers running Windows Server 2003, 2008, 2012 and higher.
NetApp or N series filers running ONTAP 8.0.x and higher
Note: The utility updates the access time of scanned directories
Note: The utility may run for a long period of time when scanning a whole NAS or a share
with a large amount of data.
16.3.2 Installing and using the NAS Compression Estimation Utility
If you installed the NAS Compression Estimation Utility by using the Windows installation file,
the utility files are available in the folder that was selected during the installation.
By default, the files are copied to:
In Windows 64-bit:
C:\Program Files (x86)\ibm\NAS Compression Estimation Utility
In Windows 32-bit:
C:\Program Files\ibm\NAS Compression Estimation Utility
To install the NAS Compression Estimation Utility on Windows:
Run the NAS_Compression_estimation_utility.exe installation binary.
Follow the installation wizard.
16.3.3 Using NAS Compression Estimation Utility
To use the NAS Compression Estimation Utility on a Windows server:
Log in to the host by using an account with domain administrator privileges. If the target is a
CIFS mount in another domain, authenticate with a Domain Admin user from that domain so
that all files can be traversed.
Open an elevated Command Prompt with administrator rights (Run as Administrator).
Navigate to the installation folder.
Run the NAS Compression Estimation Utility by using the following syntax and the flags that
are listed in Table 16-1 on page 317.
Syntax
Using Windows:
NAS_RtC_Estimate.exe -target <unc_path> | -filer <hostname|ip> [-batch <file> [-detail]] [-casesense]
[-category] [-heartbeats] [-ignoreDirs <dir1,dir2...>] [-logFile <file>] [-loglevel <1-6>] [-threads <1-50>]
Example 16-9 shows a typical use case for the utility:
Example 16-9 Running the utility
c:\Program Files (x86)\IBM\NAS Compression Estimation Utility>NAS_RtC_Estimate --target \\9.11.235.74\oracle
Scan initiated - Version: 1.1.001.025
Username: jquintal
Command line: NAS_RtC_Estimate --target \\9.11.235.74\oracle
Processing share #1 - \\9.11.235.74\oracle
Start time: Fri Nov 08 16:15:16 2013
Total data: 19.99 GB Savings: 9.99 GB Number of files: 2 Number of folders: 1 Compression Ratio: 50 Number of errors:
0
End time: Fri Nov 08 16:15:17 2013
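If several shares need to be analyzed in one run, the --batch and --detail flags that are
described in Table 16-1 can be combined. The following is a minimal sketch only; the share
paths and the shares.txt file name are hypothetical examples:

C:\Program Files (x86)\IBM\NAS Compression Estimation Utility>type shares.txt
\\192.168.1.2\share1
\\192.168.1.2\share2\projects

C:\Program Files (x86)\IBM\NAS Compression Estimation Utility>NAS_RtC_Estimate --batch shares.txt --detail

With --detail, the utility writes a separate .csv report for each share that is listed in the file.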
16.3.4 Capacity planning for selectively compressed file systems
The sizing for the overall physical disk capacity that is needed for a selectively compressed
file system can also be done with the output of the NAS Compression Estimation Utility.
However, to configure a selectively compressed file system, more analysis steps may be
needed.
--target Specifies shares to analyze. Also can specify a local or mapped drive.
Example 1: --target \\192.168.1.2\sharename
Example 2: --target M:
--filer Specifies the whole filer by IP address or a hostname. The utility will prepare the list
of all shares and scan them one by one.
Example: --filer 192.168.1.2
--batch Configuration file that contains a list of shares to analyze. This file should contain a
list of shares in specific format, one share in each line.
Example 1: \\192.168.1.2\sharename
Example 2: \\192.168.1.2\sharename\directory
Example 3: \\hostname01\sharename
--casesense By default, the file extensions are case-insensitive.
Example: .ISO and .iso are considered as the same file extension. Set --casesense
to modify the utility to be case-sensitive.
--category Group file types by category in the final report (these are listed in the extensions .csv
file)
--detail Used only with --batch. Will create a separate .csv for each share listed in the file.
--heartbeats Specifies the frequency of the progress information update during the scan. Displays
how many files and folders are scanned every n seconds. Default value is 5.
Example: --heartbeats 2 will display progress information every 2 seconds.
--ignoreDirs Used to specify an exclude list for certain directories. By default, directories named
.snapshot, etc are skipped. Hidden folders will be scanned.
--logFile Specify the logfile path and name or only name (installation folder) for the log file, by
default utility.log is used.
--loglevel Specifies the loglevel. Default value 2, range 1-6. Levels from high to low: TRACE,
DEBUG, INFO, WARN, ERROR, FATAL.
--threads Specifies the number of concurrent running threads. Default value is 10, valid range
1-50.
Example: --threads 15
To configure a selectively compressed file system, not only must the average compression rate be known, but also the distribution of files between the compressed and uncompressed pools. For best performance, files that yield compression savings above 40% are recommended for compression; files that yield less than 40% should be evaluated for performance. This rule is also applied by the exclusion list as described in 16.2.1, Selectively compressed file system with two pools on page 288.
The average compression rate can be gathered by analyzing external storage systems with the NAS Compression Estimation Utility, as discussed above. The final step that is needed to predict the correct file system pool sizes can be done by creating a small file system with two pools and the appropriate placement rules. This file system can then be populated with sample data, and the Storwize V7000 Unified reporting provides the information that is necessary to plan the final file system. When following this procedure, do not delete files from the sample file system before the estimation is complete.
The Monitoring → Capacity → File System Pools window, as shown in Figure 16-26 on page 319, provides the metrics that are needed to complete the planning:
The File System Used capacity provides the information about how much uncompressed data was placed into the two pools by the placement policy. The ratio of the two values for the compressed and system pool should be used to scale the file system pool sizes in the New File System creation dialog.
The Real Capacity value provides the information about how much capacity on real disk was allocated. For the uncompressed pool, which is fully provisioned, this value is identical to the file system pool size. However, for the compressed pool, this value together with the File System Used value indicates how much real disk capacity was used for the compressed pool. The compression savings metric ignores zeros, which are not accounted for. Therefore, the ratio of real capacity to file system used capacity determines how much space is required on real disk relative to the file system size.
The Used block volume capacity accounts for only the disk capacity that has already been taken up by data. The Real Capacity block volume capacity accounts for the blocks on disk that are already allocated to the volume. The Used capacity is displayed only per volume, not per file system pool, and can be viewed in the Files → File Systems grid by viewing the Network Shared Disk (NSD) properties. Because the used capacity is smaller than the real capacity, it provides more accurate and even better reduction rates. The rsize value, which determines the real capacity that is allocated for a given used capacity, is set to 2% for volumes created in the GUI, so the difference is small. For practical sizing purposes, the Real Capacity value is good enough and gives a small contingency, unless the rsize has been changed to a much higher value by using the CLI.
Figure 16-26 The Monitoring Capacity File System Pools view can help with capacity planning based on a data
sample
Example 16-10 shows the calculation that is based on Figure 16-26.
Example 16-10 Calculation
Planned file system size: 2000 GB
Sample file system overall used capacity: 21.15 GB
System file system pool:
Sample system pool used capacity: 6.42 GB
File system pool capacity fraction: 6.42 GB/21.15 GB = 30.3 %
Planned system file system pool capacity: 2 TB * 30.3 % = 606 GB
Planned Physical disk for the system pool: 606 GB
Compressed file system pool:
Sample file system compressed pool used capacity: 14.62 GB
File System pool capacity fraction = 14.62 GB/21.15 GB = 69.1 %
Planned compressed file system pool capacity: 2 TB * 69.1 % = 1382 GB
Sample compressed pool Real Capacity: 4.76 GB
Reduction ratio (compression and thin provisioning): 4.76 GB/14.62 GB = 32.5 %
Physical disk for the compressed pool: 2000 GB * 69.1 % * 32.5 % = 449 GB
With the data as calculated in the example, the file system can finally be configured as shown in Figure 16-27. The expected uncompressed file system pool capacities are entered in the pool size entry fields. For the fully provisioned MDisk group, the GUI checks whether the capacity requirements are met. For the compressed pool, the GUI allows creating the compressed file system pool if at least 25% of the entered uncompressed file system size is available. This assumes compression savings of 75%, which is too optimistic for most cases. It is therefore recommended to ensure that the available space in the MDisk group that will host the compressed pool is at least as large as the estimated physical disk capacity that is needed for the compressed pool. In the above estimation, this is 449 GB.
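For example, applying this 25% minimum to the numbers from Example 16-10: the GUI would allow creating the 1382 GB compressed pool with as little as 1382 GB × 25%, or approximately 346 GB, of available space, even though the estimate above indicates that approximately 449 GB of physical capacity are actually needed for the data.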
Figure 16-27 The evaluated file system pool sizes are entered to create a file system
Notice the slider below the compressed file system pool size entry box. The slider maximum allows configuring up to four times the available physical size of the chosen storage pool.
16.4 Compression metrics for file systems
The Storwize V7000 Unified provides several metrics that are specific to file systems and compression. Because the compression feature introduces the thin provisioning concept, viewing these metrics provides insight both into the compression aspects and into the thin provisioning aspects of file system usage data.
In Figure 16-28, the perspectives of the file system, the block device thin provisioning, and the compression engine on a file system with a few files are compared.
Figure 16-28 Capacity views from the file system, block device, and compression engine perspective
The file system is aware of the uncompressed files and snapshots, clones, and metadata that
it handles. The compressibility of files and the fact that files might contain zeros is not
accounted for by the file system.
The block storage device layer handles zeros inside files in a space-efficient way, because zeros are not written to disk and are not otherwise accounted for. The thin provisioning approach assumes that all blocks that are not specifically written with non-zero data contain zeros. Therefore, zeros in files such as virtual machine images or empty database containers are not accounted for on the block device.
The Thin Provisioning Efficiency value that is shown in the Storwize V7000 Unified GUI in the Monitoring → Capacity → File System Pools view shows, per file system pool, the ratio of the file system used capacity divided by the block device uncompressed used capacity. The following examples provide scenarios with expected ratios and potential user actions:
A file system that is used as an archive, where no files have been deleted yet and files do not contain zeros, shows a thin provisioning efficiency of 100%.
A file system that was just created and filled with newly formatted database container files also shows a thin provisioning efficiency of 100%. The file system fully accounts for the file size, whereas the block device saves space on the zeros, which are also not accounted for. Theoretically, the ratio between the file system used capacity and the before compression capacity is even higher than 100%, but the efficiency value is limited to 100%.
A file system was chosen much larger than its current file system used capacity, and files are regularly created and deleted. The thin provisioning efficiency is much lower than 100% because the block device still accounts for the files that have been deleted. Potentially, this file system was chosen too large, and the file system size should be reduced to save space. As with a fully provisioned file system, the compressed pools of a file system should not be oversized, to avoid wasted space.
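As a simple illustration with hypothetical numbers (not taken from the figures in this chapter): if a file system pool reports 100 GB of file system used capacity while the block device accounts for 250 GB of uncompressed used capacity, because blocks of deleted files are still allocated, the thin provisioning efficiency is 100 GB / 250 GB = 40%. Such a value suggests that the file system, or this pool, is oversized.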
16.5 Managing compressed file systems
In the following sections, we describe areas of interest for managing compressed file
systems.
16.5.1 Adding a compressed pool to a file system
When a file system that does not use compression is to be converted to one that uses compression, several steps are needed after the analysis of the data in the uncompressed file system has shown whether the file system is a good candidate for compression. See the planning section of this chapter for approaches to analyzing the data.
Create a compressed pool
Another pool that has the compression flag activated is created by using the GUI edit dialog in Files → File Systems. The pool sizes should match the expected uncompressed capacity. At this point, it is not yet necessary to define a placement policy.
Create and run a migration policy
The compressible files in the system pool should be compressed. Therefore, a migration policy that moves the compressible files to the compressed pool has to be defined and executed. How to define a migration policy is shown in Figure 16-29 on page 323. The default list of non-compressible file types, or an adapted version that matches the system environment, as listed in Example 16-1 on page 293, can be entered in the exclusion rule for this migration policy. In this example, the policy is executed automatically because of the low thresholds: an upper threshold of 1% and a lower threshold of 0 were chosen.
The GUI policy editor was used to create the migration policy because it nicely formats the long list of extensions to exclude from migration into the correct policy language statements. Alternatively, the CLI can be used to create such a policy (which is more cumbersome) and also to run the policy by using the runpolicy command, which executes the migration policy only once, at the time the command is started. The creation and CLI-based execution of a migration run is described in 16.5.2, Making a selectively compressed file system uncompressed on page 325.
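A minimal sketch of that CLI approach follows, using the same mkpolicy, runpolicy, lsjobstatus, and showlog commands that are shown in Example 16-11 and Example 16-13. The policy name migratecompressible, the file system name fs0, and the deliberately shortened exclusion list are illustrative assumptions only; a real policy should carry the full exclusion list from Example 16-1 on page 293.
# ******* Create a migration policy that moves all files except
# ******* already-compressed file types to the compressed pool
mkpolicy migratecompressible -R "RULE 'migratecompressible' MIGRATE FROM POOL 'system' TO POOL 'compressed' WHERE NOT (LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.mp3');"
# ******* Run the policy once against the file system and monitor the job
runpolicy fs0 -P migratecompressible
lsjobstatus
showlog <job ID reported by runpolicy>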
Figure 16-29 shows the GUI migration policy editor.
Figure 16-29 GUI migration policy editor
The GUI migration policy editor can be used to define which files should not be migrated to
the compressed pool.
Because the threshold for the migration was set low, the system starts the migration job
automatically. The CLI allows us to view the active jobs with the lsjobstatus command and
also provides a view on the migration job results by using the showlog command. See
Example 16-11.
Example 16-11 View the migration policy and monitor the migration job by using the CLI
# ******* View the GUI generated policy rule for migration
# ******* The exclusion list is truncated in the example
[7802378.ibm]$ lspolicy -D converttocompressed
RULE 'generatedMigrationRule0'
MIGRATE
FROM POOL 'system'
THRESHOLD(1,0)
TO POOL 'compressed'
WHERE NOT (
(LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.7z.001'
... OR LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.docx'))
RULE 'default' SET POOL 'system'
EFSSG1000I The command completed successfully.
# ******* List current active jobs; for all jobs, use the --all keyword
[7802378.ibm]$ lsjobstatus
File system Job Job id Status Start time End
time/Progress RC Message
converttocompressed auto_migration 42 running 11/13/12 3:19:45 PM IST
EFSSG1000I The command completed successfully.
# ******* Show the detailed log of the job, the log can already be viewed
# ******* while it is still running. Log truncated in this example
[7802378.ibm]$ showlog 42
Primary node: mgmt002st001
Job ID : 42
[I] GPFS Current Data Pool Utilization in KB and %
compressed 6912 104857600 0.006592%
system 7914240 104857600 7.547607%
[I] 238008 of 12336640 inodes used: 1.929277%.
[I] Loaded policy rules from /var/mmfs/tmp/tspolicyFile.mmapplypolicy.943899.
Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP =
2012-11-13@13:19:47 UTC
parsed 1 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules,
0 List Rules, 0 External Pool/List Rules
RULE 'generatedMigrationRule0'
MIGRATE
FROM POOL 'system'
THRESHOLD(1,0)
TO POOL 'compressed'
WHERE NOT (
(LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.7z.001' OR LOWER(NAME) LIKE
'%.hqx' OR LOWER(NAME) LIKE '%.docx'))
RULE 'default' SET POOL 'system'
[I] Directories scan: 223268 files, 10751 directories, 0 other objects, 0
'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 134646 4908760 134646 4908760 0 RULE 'generatedMigrationRule0' MIGRATE FROM POOL
'system' THRESHOLD(1,0) TO POOL 'compressed' WHERE(.)
[I] Filesystem objects with no applicable rules: 99362.
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 4908760KB: 134646 of 134646 candidates;
Chose to premigrate 0KB: 0 candidates;
Already co-managed 0KB: 0 candidates;
Chose to delete 0KB: 0 of 0 candidates;
Chose to list 0KB: 0 of 0 candidates;
0KB of chosen data is illplaced or illreplicated;
Predicted Data Pool Utilization in KB and %:
compressed 4915672 104857600 4.687950%
system 3023400 104857600 2.883339%
[I] A total of 134646 files have been migrated, deleted or processed by an
EXTERNAL EXEC/script;
11 'skipped' files and/or errors.
----------------------------------------------------------
End of log - auto_migration job ended
----------------------------------------------------------

EFSSG1000I The command completed successfully.
Remove the migration policy and enable a placement policy
The migration policy can now be disabled by clearing it, and a placement policy can be defined in its place. For a compressed file system pool, a placement policy can be created just as during the initial creation of a file system with a compressed pool, as described above. The placement policy that is enabled ensures that only compressible files are added to the new pool in the future.
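A minimal sketch of such a placement policy text, assuming the pool names that are used in this chapter and a deliberately shortened exclusion list (a real policy should use the full list from Example 16-1 on page 293), consists of rules such as the following:
RULE 'placecompressible' SET POOL 'compressed' WHERE NOT (LOWER(NAME) LIKE '%.zip' OR LOWER(NAME) LIKE '%.7z' OR LOWER(NAME) LIKE '%.mp3')
RULE 'default' SET POOL 'system'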
16.5.2 Making a selectively compressed file system uncompressed
It is possible to revert a selectively compressed file system to an uncompressed file system while full data access is maintained. The following example shows the steps that are needed to migrate a file system that has an uncompressed system pool and a second, compressed file system pool to a fully uncompressed file system.
The following steps are needed to make a selectively compressed file system uncompressed:
1. Analyze how much physical space is needed to move all data to the system pool. The conversion is easiest when the needed uncompressed space is available as free disk space in addition to the compressed pool. If not, a step-by-step approach must be taken, and volumes must be removed from the compressed pool as it is emptied.
2. Create a default placement rule for the file system. See Example 16-12.
Example 16-12 Policy text to define a default placement rule for file system converted to
uncompressed
RULE 'default' SET POOL 'system';
3. Create and run a migration rule that will move data to the system pool.
To migrate data back to the system pool, a simple policy rule that moves all data from the compressed pool to the system pool is created. A migration policy can be applied to a file system, which ensures that the migration starts automatically and is executed regularly; this happens if the policy is created in the GUI policy text editor. Because the migration rule should run only once, another approach is chosen here: the CLI is used to create a policy, start a single policy run, and monitor the migration of data.
Example 16-13 shows the simple policy rule.
Example 16-13 Simple policy rule
# Create a policy rule - this policy is not applied to any file system
[7802378.ibm]# mkpolicy migratetosystem -R "RULE 'migratetosystem' migrate from
pool 'compressed' to pool 'system';"
EFSSG1000I The command completed successfully.
# The runpolicy command will only run the policy once on our file system
[7802378.ibm]$ runpolicy markus_test -P migratetosystem
EFSSA0184I The policy is started on markus_test with JobID 806.
# The showlog command provides the output of policy runs
[7802378.ibm]$ showlog 806
Primary node: mgmt001st001
Job ID : 806
[I] GPFS Current Data Pool Utilization in KB and %
compressed 6531584 39845888 16.392116%
system 20494848 49283072 41.585979%
[I] 1000507 of 1200640 inodes used: 83.331140%.
[I] Loaded policy rules from
/var/opt/IBM/sofs/PolicyFiles/policy4252244384126468096.
Evaluating MIGRATE/DELETE/EXCLUDE rules with CURRENT_TIMESTAMP =
2012-11-15@07:39:05 UTC
parsed 0 Placement Rules, 0 Restore Rules, 1 Migrate/Delete/Exclude Rules,
0 List Rules, 0 External Pool/List Rules
RULE 'migratetosystem' migrate from pool 'compressed' to pool 'system'
[I] Directories scan: 923603 files, 72924 directories, 0 other objects, 0
'skipped' files and/or errors.
[I] Summary of Rule Applicability and File Choices:
Rule# Hit_Cnt KB_Hit Chosen KB_Chosen KB_Ill Rule
0 574246 4945240 574246 4945240 0 RULE 'migratetosystem' MIGRATE FROM POOL
'compressed' TO POOL 'system'
[I] Filesystem objects with no applicable rules: 422280.
[I] GPFS Policy Decisions and File Choice Totals:
Chose to migrate 4945240KB: 574246 of 574246 candidates;
Chose to premigrate 0KB: 0 candidates;
Already co-managed 0KB: 0 candidates;
Chose to delete 0KB: 0 of 0 candidates;
Chose to list 0KB: 0 of 0 candidates;
0KB of chosen data is illplaced or illreplicated;
Predicted Data Pool Utilization in KB and %:
compressed 1586344 39845888 3.981199%
system 25472856 49283072 51.686827%
[I] A total of 574246 files have been migrated, deleted or processed by an
EXTERNAL EXEC/script;
0 'skipped' files and/or errors.
----------------------------------------------------------
End of log - runpolicy job ended
----------------------------------------------------------

EFSSG1000I The command completed successfully.
4. Remove all volumes of the compressed pool.
After the policy run has completed and all data has been moved to the system pool, the volumes of the compressed pool can be removed. The GUI can be used to perform this task in the edit file system dialog. Setting the capacity of the compressed pool to zero removes all compressed volumes and finally removes the file system pool.
Figure 16-30 shows how to remove the compressed volumes.
Figure 16-30 The compressed file system pool size is set to 0, which triggers the removal of all related
compressed volumes
16.6 Compression saving reporting
Understanding capacity in a virtualized storage system can be a challenge. Probably the best approach is to review the various layers that are involved in storing data, from different points of view:
Data: The data itself, as stored by the host operating system onto the volume.
Shares: A logical unit that the user writes the data into.
File system: A logical unit in the Storwize V7000 Unified system, composed of file system pools.
Pools: Composed of MDisks.
File system pool: The storage container of the compressed volumes.
By design, the compression technology that is implemented in the Storwize V7000 Unified is transparent to the host. It compresses the client data before writing it to disk and decompresses the data as it is read by the host. The data size therefore looks different depending on the point of view. For example, the compressed size is not visible to the user and can be seen only from the Storwize V7000 Unified point of view.
16.6.1 Reporting basic overview
The following are basic concepts of the Storwize V7000 Unified system reporting of
compressed data:
Real Capacity Real size is the amount of space from the storage pool
allocated to allow the volume data to be stored. A compressed
volume is by default a thin-provisioned volume that allows you
to allocate space on demand. When a new volume is created,
there is an option to define the actual size it creates as a
percentage of the original volume size. By default it is 2%. The
volume expands automatically according to the usage.
Used Capacity The amount of real size that is used to store data for the
volume, which is sometimes called compressed size.
Before Compression size The size of all the data that has been written to the volume
calculated as though it was written without compression. This
size is reflected on the host operating system.
Virtual capacity Virtual capacity is the volume storage capacity that is available
to a host. Real capacity is the storage capacity that is allocated
to a volume copy from a storage pool. In a fully allocated
volume, the virtual capacity and real capacity are the same. In
a thin-provisioned or compressed volume, however, the virtual
capacity can be much larger than the real capacity.
Compression Savings The percentage of capacity that is saved by compression: one minus the ratio of the compressed size to the uncompressed size.
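For example, for the volume that is shown later in Figure 16-35, the size before compression is 23.45 GB and the compressed (used) size is 6.67 GB, so the compression savings are 1 - (6.67 GB / 23.45 GB), or approximately 71.6%, which matches the 71.57% that the GUI reports.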
16.6.2 Reporting of compression in the GUI
Reporting of compressed data is available in the GUI in the following windows:
System level: Monitoring → System
File system level: Monitoring → Capacity, and Files → File Systems
Storage pool level: Pools → MDisks by Pools
Volume level: Volumes by Pool, or the file system page
System level
To view the system level, click Monitoring → System.
Figure 16-31 shows three areas of the system disks:
Compressed size
Before compression size
Virtual capacity
As you can see in Figure 16-31, by looking at the tube to the left of the Storwize V7000 Unified, the compression ratio is approximately 50%; the second area, the capacity before compression, is greater than the lower section, the compressed capacity.
Figure 16-31 Areas of the system disks
File system level
To get information at the file system level, click the Monitoring icon and choose the Capacity
option. Choose the File System Pools tab and you can see the compression saving of each
file system and the amount of used space from each file system pool.
In Figure 16-32, you can see the real capacity, the file system used capacity, and the compression savings of each file system. By clicking the plus sign to expand a file system, you can see the same values per file system pool.
In this example, in the compressed_fs file system, the real capacity of the system pool and its total capacity are equal because an uncompressed file system pool is fully allocated. These values are different for the compressed pool: the real capacity is the size that is allocated from the pool, in this case 22.19 GB.
Note: The reporting view from the IBM Storwize V7000 Unified system is different from the
reporting from the Storwize V7000. For more information about the Storwize V7000
reporting, see Real-time Compression in SAN Volume Controller and Storwize V7000,
REDP-4859:
http://www.redbooks.ibm.com/Redbooks.nsf/RedpieceAbstracts/redp4859.html?Open
Figure 16-32 Real capacity, file system used capacity, and the compression saving capacity of each file system
Storage pool level
To get information at the storage pool level, click MDisks by Pools in the Pools icon menu. This option provides the compression benefits of the entire pool.
Figure 16-33 shows the amount of used capacity and the virtual capacity.
Used size: The percentage in the shaded blue part is the amount of used space out of the allocated capacity. In this example, 1.4 TB of 2.98 TB is used, which means that approximately 47% of pool0 is used.
Virtual capacity: The numbers also present the virtual capacity, which is the sum of all volume sizes in the pool. In this example, pool0 has 4.08 TB of virtual capacity.
Figure 16-33 Amount of used capacity and the virtual capacity
Volume level
There are two methods to report at a volume level.
Method 1: Click Files → File Systems.
Expand the file system and the file system pools to see the volumes that they are composed of. See Figure 16-34.
Notice: The pool contains volumes that are used by the file system of the IBM Storwize
V7000 Unified system and VDisks of the Storwize V7000 block devices.
Note: The file system is composed of volumes. Therefore, this view includes the file
system volumes and the block device volumes that are using the same storage pool.
Figure 16-34 Volumes used capacity
The view in Figure 16-34 shows the used capacity of each volume in the file system, per file system pool.
To see more details, right-click a volume and choose Properties.
In Figure 16-35, you can see the before compression size, the real size, and the compression savings of the volume. In this example, the original data size is 23.45 GB and it was compressed to 6.67 GB. Therefore, the compression savings are 71.57%.
Figure 16-35 Before compression size, real size, and the compression savings of the volume
Ensure that the Show Details box is checked.
Method 2: Click the Pools icon and go to Volumes by Pool. The table provides the compression savings per volume in a specific pool. This view is more useful for block device volumes than for file systems.
In the graph at the upper right of Figure 16-36, you can see the volume capacity, 5.41 TB, and the virtual capacity, 10.82 TB. The volumes that are configured in this pool are overallocated; their total size is 10.82 TB.
The second bar reports the compression savings. The compressed size of the data is 4.3 GB, and the original size of the data is the compressed size plus the saved size: 4.3 + 5.3 = 9.6 GB. The compression ratio of the entire pool0 is approximately 44%, calculated as compressed size divided by total size (4.3 / 9.6). The savings ratio is approximately 55%.
Notice: The compressed data is divided equally between the volumes in the same file system pool, so the compression savings shown for one volume are reflected in the entire file system pool.
Figure 16-36 Volume capacity: 5.41 TB and the virtual capacity: 10.82 TB
16.6.3 Compression reporting by using the CLI
The CLI commands provide information at the volume and storage pool level. Therefore, to see the capacity reporting of a file system through the CLI, you need to know the names of the VDisks that the file system is composed of.
You can generate reports on compressed data by using the following CLI commands:
System level: lssystem
Specific storage pool: lsmdiskgrp [<mdisk_grp_name>]
All storage pools: lsmdiskgrp
Specific volume: lssevdiskcopy [<vdisk_name>]
All volumes in a pool: lssevdiskcopy -filtervalue
mdisk_grp_name=[<mdisk_grp_name>]
All volumes in a system: lssevdiskcopy
Example 16-14 shows the output of the lssystem command.
Consideration: The reported numbers are dynamically updated as data is sent from the Storwize V7000 cache to the Random Access Compression Engine (RACE) component. This updating causes a slight delay (typically a few seconds) between the host writes and the updated reporting.
Example 16-14 lssystem output
[7802378.ibm]$ lssystem
id 00000200A7003A05
name Violin Unified
location local
partnership
bandwidth
total_mdisk_capacity 3.3TB
space_in_mdisk_grps 3.3TB
space_allocated_to_vdisks 1.57TB
total_free_space 1.7TB
total_vdiskcopy_capacity 5.97TB
total_used_capacity 1.48TB
total_overallocation 183
total_vdisk_capacity 5.95TB
total_allocated_extent_capacity 1.58TB
[...]
compression_active yes
compression_virtual_capacity 4.58TB
compression_compressed_capacity 166.52GB
compression_uncompressed_capacity 412.73GB
[...]
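From the compression values in this output, the system-wide compression savings can be derived: 1 - (166.52 GB / 412.73 GB), or approximately 60% savings across all compressed volumes.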
Example 16-15 shows the output of the lsmdiskgrp command for a specific pool.
Example 16-15 lsmdiskgrp output
[7802378.ibm]$ lsmdiskgrp pool0
id 0
name pool0
status online
mdisk_count 2
vdisk_count 129
capacity 2.98TB
extent_size 256
free_capacity 1.58TB
virtual_capacity 4.08TB
used_capacity 1.34TB
real_capacity 1.40TB
overallocation 136
warning 80
easy_tier auto
easy_tier_status inactive
tier generic_ssd
tier_mdisk_count 0
tier_capacity 0.00MB
tier_free_capacity 0.00MB
tier generic_hdd
tier_mdisk_count 2
tier_capacity 2.98TB
tier_free_capacity 1.58TB
compression_active yes
compression_virtual_capacity 2.78TB
compression_compressed_capacity 113.89GB
compression_uncompressed_capacity 273.40GB
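Similarly, the compression savings for pool0 can be derived from this output: 1 - (113.89 GB / 273.40 GB), or approximately 58%.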
Example 16-16 shows the output of the lsmdiskgrp command for the entire node.
Example 16-16 lsmdiskgrp output for entire node
[7802378.ibm]$ lsmdiskgrp
0 pool0 online 2 129 2.98TB 256 1.58TB 4.08TB 1.34TB 1.40TB 136 80 auto inactive yes 2.78TB 113.89GB
Example 16-17 shows the lssevdiskcopy output.
Example 16-17 lssevdiskcopy -nohdr output
[7802378.ibm]$ lssevdiskcopy -nohdr -filtervalue mdisk_grp_name=pool0
0 Nitzan 0 0 pool0 400.00MB 4.63MB 24.00MB 19.38MB 1666 on 80 no yes 4.44MB
1 CSB-SYSTEM_AG_COMPRESSED 0 0 pool0 50.00GB 21.24GB 22.25GB 1.02GB 224 on 80 no yes 49.96GB
3 bosmatTest 0 0 pool0 100.00GB 15.10GB 17.11GB 2.01GB 584 on 80 no yes 28.04GB
12 VM_Compressed 0 0 pool0 34.00GB 7.09GB 7.78GB 711.68MB 436 on 80 no yes 15.56GB
18 export for esx 200 0 0 pool0 100.00GB 3.25MB 2.02GB 2.01GB 4961 on 80 no yes 0.00MB
19 New_VM_Comp 0 0 pool0 300.00GB 22.12GB 28.13GB 6.01GB 1066 on 80 no yes 48.63GB
20 New_VM_Clear 0 0 pool0 100.00GB 33.96GB 35.96GB 2.00GB 278 on 80 256 yes no 33.96GB
25 Nitzan_01 0 0 pool0 400.00MB 0.75MB 17.25MB 16.50MB 2318 on 80 256 yes no 0.75MB
26 VM_Comp_P0_0 0 0 pool0 150.00GB 5.19GB 8.20GB 3.01GB 1828 on 80 no yes 11.21GB
27 VM_Comp_P0_1 0 0 pool0 150.00GB 5.19GB 8.20GB 3.01GB 1829 on 80 no yes 11.21GB
28 VM_Comp_P0_2 0 0 pool0 150.00GB 5.20GB 8.20GB 3.01GB 1829 on 80 no yes 11.21GB
29 VM_Comp_P0_3 0 0 pool0 150.00GB 5.19GB 8.20GB 3.01GB 1829 on 80 no yes 11.21GB
30 VM_Comp_P0_4 0 0 pool0 150.00GB 5.19GB 8.20GB 3.01GB 1828 on 80 no yes 11.21GB
36 Copy Mirror for Pendulum 0 0 pool0 500.00GB 494.69MB 10.48GB 10.00GB 4768 on 80 no yes 1.31GB
50 IFS1352811948934 0 0 pool0 33.00GB 521.38MB 1.18GB 683.38MB 2804 on 80 no yes 1.57GB
59 IFS1352811949068 0 0 pool0 33.00GB 513.53MB 1.18GB 691.21MB 2804 on 80 no yes 1.56GB
60 IFS1352812042137 0 0 pool0 34.00GB 516.28MB 1.20GB 708.93MB 2841 on 80 no yes 1.55GB
85 lev_production_backup 0 0 pool0 500.00GB 17.49GB 27.50GB 10.02GB 1817 on 80 no yes 26.83GB
91 IFS1352369292204 0 0 pool0 1.00GB 3.31MB 36.48MB 33.17MB 2806 on 80 no yes 0.00MB
92 IFS1352369292340 0 0 pool0 1.00GB 3.31MB 36.48MB 33.17MB 2806 on 80 no yes 0.00MB
93 IFS1352369304265 0 0 pool0 2.00GB 3.31MB 56.96MB 53.65MB 3595 on 80 no yes 0.00MB
94 jq_vol1 0 0 pool0 5.00GB 3.25MB 118.40MB 115.15MB 4324 on 80 no yes 0.00MB
95 jq_vol2 0 0 pool0 5.00GB 8.19MB 118.40MB 110.21MB 4324 on 80 no yes 4.94MB
100 IFS1352638421928 0 0 pool0 33.00GB 972.16MB 1.61GB 681.28MB 2043 on 80 no yes 14.03GB
101 IFS1352638422076 0 0 pool0 33.00GB 972.19MB 1.61GB 681.09MB 2043 on 80 no yes 14.04GB
102 IFS1352638501197 0 0 pool0 34.00GB 988.22MB 1.65GB 701.51MB 2060 on 80 no yes 14.27GB
For further information about capacity reporting of block device disks, see Real-time
Compression in SAN Volume Controller and Storwize V7000, REDP-4859:
http://www.redbooks.ibm.com/redpapers/pdfs/redp4859.pdf
Related publications
The publications listed in this section are considered particularly suitable for a more detailed
discussion of the topics covered in this book.
IBM Redbooks
The following IBM Redbooks publications provide additional information about the topic in this
document. Note that some publications referenced in this list might be available in softcopy
only:
Implementing the IBM System Storage SAN Volume Controller V6.3, SG24-7933
Implementing the IBM Storwize V7000 V6.3, SG24-7938
Real-time Compression in SAN Volume Controller and Storwize V7000, REDP-4859
Introduction to Storage Area Networks, SG24-5470
IBM System Storage: Implementing an IBM SAN, SG24-6116
DS4000 Best Practices and Performance Tuning Guide, SG24-6363
IBM System Storage Business Continuity: Part 1 Planning Guide, SG24-6547
IBM System Storage Business Continuity: Part 2 Solutions Guide, SG24-6548
Get More Out of Your SAN with IBM Tivoli Storage Manager, SG24-6687
IBM Tivoli Storage Area Network Manager: A Practical Introduction, SG24-6848
DS8000 Performance Monitoring and Tuning, SG24-7146
Monitoring Your Storage Subsystems with TotalStorage Productivity Center, SG24-7364
Using the SVC for Business Continuity, SG24-7371
SAN Volume Controller: Best Practices and Performance Guidelines, SG24-7521
SAN Volume Controller V4.3.0 Advanced Copy Services, SG24-7574
IBM XIV Storage System: Architecture, Implementation and Usage, SG24-7659
IBM Tivoli Storage Productivity Center V4.1 Release Guide, SG24-7725
IBM SAN Volume Controller 4.2.1 Cache Partitioning, REDP-4426
Other publications
These publications are also relevant as further information sources:
IBM System Storage SAN Volume Controller: Planning Guide, GA32-0551
IBM System Storage Open Software Family SAN Volume Controller: Planning Guide,
GA22-1052
IBM System Storage SAN Volume Controller: Service Guide, GC26-7901
IBM System Storage SAN Volume Controller Model 2145-8A4 Hardware Installation
Guide, GC27-2219
IBM System Storage SAN Volume Controller Model 2145-8G4 Hardware Installation
Guide, GC27-2220
IBM System Storage SAN Volume Controller Models 2145-8F2 and 2145-8F4 Hardware
Installation Guide, GC27-2221
IBM SAN Volume Controller Software Installation and Configuration Guide, GC27-2286
IBM System Storage SAN Volume Controller Command-Line Interface Users Guide,
GC27-2287
IBM System Storage Master Console: Installation and Users Guide, GC30-4090
Multipath Subsystem Device Driver Users Guide, GC52-1309
IBM System Storage SAN Volume Controller Model 2145-CF8 Hardware Installation
Guide, GC52-1356
IBM System Storage Productivity Center Software Installation and Users Guide,
SC23-8823
IBM System Storage Productivity Center Introduction and Planning Guide, SC23-8824
Subsystem Device Driver Users Guide for the IBM TotalStorage Enterprise Storage
Server and the IBM System Storage SAN Volume Controller, SC26-7540
IBM System Storage Open Software Family SAN Volume Controller: Installation Guide,
SC26-7541
IBM System Storage Open Software Family SAN Volume Controller: Service Guide,
SC26-7542
IBM System Storage Open Software Family SAN Volume Controller: Configuration Guide,
SC26-7543
IBM System Storage Open Software Family SAN Volume Controller: Command-Line
Interface Users Guide, SC26-7544
IBM System Storage Open Software Family SAN Volume Controller: CIM Agent
Developers Reference, SC26-7545
IBM System Storage Open Software Family SAN Volume Controller: Host Attachment
Guide, SC26-7563
Command-Line Interface Users Guide, SC27-2287
IBM System Storage Productivity Center Users Guide Version 1 Release 4, SC27-2336
IBM TotalStorage Multipath Subsystem Device Driver Users Guide, SC30-4096
IBM System Storage SAN Volume Controller V5.1.0 - Host Attachment Guide, SG26-7905
IBM Tivoli Storage Productivity Center IBM Tivoli Storage Productivity Center for
Replication Installation and Configuration Guide, SC27-2337
Online resources
These websites are also relevant as further information sources:
IBM TotalStorage home page
http://www.storage.ibm.com
SAN Volume Controller supported platform
http://www-1.ibm.com/servers/storage/support/software/sanvc/index.html
Download site for Windows Secure Shell (SSH) freeware
http://www.chiark.greenend.org.uk/~sgtatham/putty
IBM site to download SSH for AIX
http://oss.software.ibm.com/developerworks/projects/openssh
Open source site for SSH for Windows and Mac
http://www.openssh.com/windows.html
Cygwin Linux like environment for Windows
http://www.cygwin.com
IBM Tivoli Storage Manager for Storage Area Networks
http://www.ibm.com/software/products/us/en/tivostormanaforstorareanetw/
Microsoft Knowledge Base Article 131658
http://support.microsoft.com/kb/131658
Microsoft Knowledge Base Article 149927
http://support.microsoft.com/kb/149927
Sysinternals home page
http://www.sysinternals.com
Subsystem Device Driver download site
http://www-1.ibm.com/servers/storage/support/software/sdd/index.html
IBM TotalStorage Virtualization home page
http://www-1.ibm.com/servers/storage/software/virtualization/index.html
SAN Volume Controller support page
http://www-947.ibm.com/systems/support/supportsite.wss/selectproduct?taskind=4&brandind=5000033&familyind=5329743&typeind=0&modelind=0&osind=0&psid=sr&continue.x=1
SAN Volume Controller online documentation
http://pic.dhe.ibm.com/infocenter/svc/ic/index.jsp
IBM Redbooks publications about SAN Volume Controller
http://www.redbooks.ibm.com/cgi-bin/searchsite.cgi?query=SVC
Help from IBM
IBM Support and downloads
ibm.com/support
IBM Global Services
ibm.com/services
Index
A
Access 11, 93
access authentication 171
Access control 29, 206
access control 14, 28, 37, 3940, 42, 59
access control entries 46
access control list 42
Access Control Lists 13
access control lists 35
access control mechanism 42
access levels 93
access protocol 94
access to files 22
Accessed Time stamp 30
ACE 46, 73, 240
ACL 13, 35, 39, 42
ACL formats 42
ACL mapping 42
ACL type 46
ACLs 30, 38
Active Cloud Engine 73, 240
active cluster 237
Active Directory 41, 44, 46, 108, 125, 172, 191
Active Directory domain 44
Active Directory ID mappings 46
Active Directory Server 39
Active Directory server 172
active mode 1718
active quorum 237
active-active 34
AD 32, 41
AD authentication 44
AD with SFU 44
adaptive 71
Additional Gateways 177
address space 11
admin password 277
air circulation 144
airflow 144
alerting 94, 187, 274
alerting tasks 260
alerts 94, 259
analyze workloads 102
animated graphics 86
Antivirus 72, 92, 216
Antivirus Definition 221
Antivirus scanner 73
Antivirus vendor product 218
AOS 180, 239, 277
API 16
Application Programming Interface 16
Archive bit 30
Arrays 200
assert 238
Assist on site 180
Assist-on-Site 275
asynchronous 12, 78
asynchronous replication 80
asynchronous scheduled process 27
attention lights 158
Audit Log 93
authenticated 38
authentication 12, 2930, 37, 172
authentication method 94
authentication methods 39
authentication support 16
authority 34, 197
authorization 29, 37
automated data placement 61
automated storage tiering 27
automatic extent level migration 26
automatic recovery 256
automatically detect events 274
auxiliary 77
auxiliary VDisk 77
AV connector 217218
availability 60
available scan nodes 219
AV-Connector 216
B
background copy 7678
rate 78
background copy rate 202
Backup 202, 205
backup 7778
backup command output log 236
backup file 238
backup process 238
backup workload 252
backups from TSM 246
bandwidth 217
bandwidth friendly 80
bandwidth setting 201
base addresses 155
Base configuration 160
base file system 205
base licences 119
Batch Scan 222
battery backup 23
bitmaps 76
blank carriers 143
block 6
block commands 95
block devices 199
block remote copy services 201
block size 67
block storage 6, 150
block-level changes 80
blocks 6
bootable DVD 153, 278
buffer resources 50
BypassTraversalCheck 47
byte range locks 32
byte-range 71
byte-range locking 34
C
cable 142
cable management system 147
cache 239
cache data 14
caching 34, 71, 121122
call home process 275
candidate 157
canister 147
canisters 23
Capacity 90
capacity 119
thin provisioned volumes 77
Capacity Magic 119
capacity requirements 119
Cascaded FlashCopy 77
case insensitive file lookup 30
centralized management concept 62
Certificate 173
chain 125
chains 147
change rate 79
change the cluster name 178
change the passwords 194
Change Volumes 76, 79
checklist 140
checkpoint 239
checkpoint data 236
child directory 46
child file 46
CIFS 8, 16
CIFS ACLs 108
CIFS share 108, 210
CIFS Windows 206
CKD 6
class 92
cleaning rate 202
clear text 39
CLI 218
CLI session 95
client access 221
Client Replaceable Unit 256
client server infrastructures 8
client server protocol 13
client side caching 30, 121
client side firewalls 18
client systems 71
Clone 202
clone 76
clones 112
close after write 217
cluster 26, 78
cluster admin password 195
cluster manager 28, 3334, 71
cluster messages 190
cluster nodes 68
cluster performance graphs 233
clustered 13, 29
Clustered File Modules 60
clustered file system 66
clustered implementation 33
Clustered Trivial Data Base 29
clustering 25, 34
code levels 152
Command Line Interface 218
command set 13
commit request 12
Common Internet File System 8
common tasks 90
Concurrency 11
concurrent access 34
concurrent read 34
concurrent users 34
config node 84, 183, 236
config role 237
config saves 278
configuration 236
configuration change 257
configuration data 235
configuration tasks 140
configure alerting 187
configure block storage 200
configure the controllers 199
Configure USB key 154
configuring 140
connecting remotely 86
connection oriented 14
connectivity 24
Consistency Groups 93, 202
consistency groups 76
consistent 7677
consistent snapshot 76
consistent state 80
Control Enclosure 147
Control enclosure 142, 150
control enclosure 150
Control Enclosures 145
control port 18
Controller Enclosure 84
controller enclosures 144
cooling 144
copy
operation 78
rate 78
copy operations 93
copy process 76
Copy Services 93
Count Key Data 6
Create a CIFS share 210
Create a file set 209
Create a file system 207
Create an HTTP, FTP, SCP export 212
Create an NFS export 211
Created Time stamp 30
credential 39
credential verification 3839
credentials 37
critical 189
critical failures 256
cross-node 28
cross-platform mapping 29
cross-protocol 28
CRU 256, 276
CTDB 29, 34
current configuration data 236
current firmware 152
Custom 209
cycling period 79
D
daemons 18, 71
DAP 41
data
production 77
data access 59
data blocks 73
data collect file 271
Data Collection 276
data collection processes 270
data collects 181
data integrity 32, 237
Data Management Application 112, 240
data mining 77
data organization 6
data placement 60
data port 18
data protection 60
data transfer 12, 34
Database Management System 6
DBMS 6
DC 41
debug 257
decrypts 39
default ACL 47
Default Gateway 177
default passwords 193
default rule 73
Delete 221
delete
FlashCopy mappings 78
delete after complete 202
deletion 61
deny access 217
deny-mode 33
dependent file set 206
dependent File Sets 71
dependent file sets 71
destage 238
destaged 239
DFS 31
dialect 14, 16
dialects 13
digital certificates 38
directory 38
Directory Access Protocol 41
directory service 40, 43
Directory Services 94
disable the animation 86
disaster recovery 80
disaster tolerance 80
disk 78
Disk Magic 103, 119
disk partitions 154
distance 78
distributed 62
distributed computing 11, 13
Distributed File System 31
distributed file system 22
distribution boards 145
DMA 112, 240
DMA server 254
DNS 34, 41, 125
DNS domain 94
DNS server 171
DNS servers 127
Domain Controller 41
domain controller 38
domain name 171
Domain Name System 34, 41, 117
DOS attributes 30
Download Logs 181
Download Support Package 271
download support package 181182
drain 81
Drive assemblies 143
drive bays 23
drive class 92
drive failures 25
drive types 27
dual fabric 201
dumps 181
duplicate transfers 32
DVD restore 152
E
EA 218
Easy Setup wizard 161
Easy Tier 26
e-mail 94
emulate 32
enclosure link 237
enclosure management address 95
enclosure nodes 155
enclosures 92
encrypt 18
encrypted 39
encrypted transfer 18
encryption algorithm 13
enforcement of ACLs 32
error alerting 274
error code 260
errors 77
Ethernet 146
event 256
event count 259
event details 259
event ID 260
event log 239, 256257
event logs 90, 193
Event Notifications 94
Events 90
events 187
exclusion list 217
Expansion enclosure 142
Expansion Enclosures 145
expansion enclosures 23, 144
export 47, 205
exports 1112, 205
EXT2 154
EXT3 154
extended attributes 217218
Extended NIS 175
extent maps 237
extent sizes 25
extents 92
external directory service 38
external disks 92
external scan engines 113
External storage 200
external storage 76, 92
EZ-Setup 160
F
failover 112
Failure 11
failure group 72
failure groups 72
FAT32 154
FB 6
FBA 6
FC 6, 2425
FCP 25
Fiber optic cables 149
Fibre Channel 6, 94
Fibre Channel Protocol 25
File 146
file 150
file access 38, 57
file access concurrency 29
file access IP addresses 94
file access protocols 8, 11, 22
file access request 38
file clients 22
file export 217
File Module 150
File module backup 237
file module software level 152
File Modules 2224, 146
file modules 25, 76, 85, 145
file offset 12
file open 217
file path list 80
file pattern 254
file recall 113, 218
file replication 93
file server 22
File Server Appliance 9
file server subsystem 2224
File Service 91
File service components 205
File Services configuration 205
file serving 7, 26
File sets 205
file sets 91, 209
file shares 183
File sharing 29
file sharing 7, 26
file sharing functions 29
file sharing protocol 17
file sharing protocols 8, 11, 22
file signature 216
file space 16
File System 6
file system 6667
file system pools 68
file system-level snapshots 27
File systems 205
file systems 7, 22
File Transfer Protocol 9, 17
file transfer protocol 19
file transfer services 26
file usage by each user 91
file versions 59
filer 59
files 9
filler panel 144
firewall 18
firewalls 12, 18
Fix 278
Fixed Block Architecture 6
fixed block storage 6
fixed field 260
FlashCopy 7677, 93, 202
create 76
creating 76
mappings 7778
operations 77
source 76
target 76
FlashCopy Mappings 93
FM 22
Frequency 222
frequency 59
FS 6
FTP 9, 11, 17, 22
FTP client 9
FTP file client 9
FTP file server 9
FTP over SSH 19
FTP server 9
FTP support 31
G
General 94
General Parallel File System 22, 65
Generic Security Services Application Program Interface
16
GID 40
Global Mirror 7678, 93
global name space 27, 34
global namespace 205
Global time-out 220
GNU General Public License 29
GPFS 22, 2729, 32, 34, 40, 42, 5961, 6566, 119,
205, 209, 240
GPFS ACLs 42
GPFS cluster 68
GPFS file sets 71
GPFS NFSv4 42
GPFS one-tier architecture 63
GPFS quota management 74
GPFS Snapshots 73
GPFS technical concepts 66
GPL 29
grace period 74
grain size 26
granular space management 59
Graphical User Interface 218
grid parallel architecture 66
group identifiers 40
GSSAPI 16
GUI 218
Guided Maintenance 262
Guided Maintenance Procedures 276
Guided Maintenance routines 262
H
Hard Disk Drives 6
hard quota 59, 74
hard quotas 74
hardened data 237
hardware 163
hardware placement 125
hash 80
hashes 38
HDD 6, 26
HDDs 24
health 193
health reporting 274
health state 33
heterogeneous file sharing 39
high latency 27
high-available 34
HIM 24
historic usage 90
home directory 59
Host Interface Module 24
Host Mappings 93
host user data 235
Hosts 92
hosts 92
HTTP 9, 11, 18
HTTP aliases 32
HTTP client 9
HTTP over SSL 18
HTTP Secure 18
HTTP server 9
HTTP/1.1 18
HTTPS 18, 22
https connection 86
HTTPS support 32
Hypertext Transfer Protocol 9, 18
I
I/O 6
I/O group 78
I/O patterns 102
IBM SAN File System 66
IBM Support Remote Access 275
ID mapping 43
ID mapping parameters 46
identity 38
IETF 12, 18
IETF standard 16
ILM 27, 73, 100
image mode 25
implementation 140
implementation checklist 140
in-band 34
inclusion list 217
inconsistent 77
increases security 217
incremental 202
incremental changes 80
Incremental FlashCopy 77
independent 206
independent File Sets 71
independent file sets 71, 73
individual files 217
infected files 221
infection 216
in-flight workloads 33
Information Archive 66
Information Lifecycle Management 27, 73, 100
inheritance model 217
init tool 155
initial file set 206
Initialise the file modules 158
initialization process 158
initialize the system 154
initiator 25
inode 206
input/output 6
install DVD 278
install utility 153
installation 140
intercluster 78
interface node 26
interface node function 33
Interface Nodes 61
interface nodes 63, 80
Interface Pool 177
internal disk drives 92
Internal Storage 92
internal storage clients 22
Internet Engineering Task Force 12
Internet SCSI 6
intracluster 78
IP addresses 142
IPv4 106
iSCSI 6, 2425, 94, 184
ISO image 152
J
junction 206
junction point 206
K
Kerberos 1213, 16, 39, 41, 43
kerberos 174
Kerberos 5 16
Kerberos API 16
Kerberos name 174
Kerberos realm 174
Kerberos server 39
key 39, 95
key file 89
keys 38, 94
L
layers 79
layers from 9
LBA 6
LBA storage 6
LDAP 32, 38, 41, 108, 173
LDAP authentication 45
LDAP database 41
LDAP entry 41
LDAP server 173
leaf directory 47
leases 30
level of notification 260
License Agreement 161
licensing details 94
lights on 180
lights out 180
Lights-on 180
Lights-out 180
Lightweight Directory Access Protocol 41
Linux 40
LIPKEY 13
listening port 17
locking 14, 30, 34
locking change 32
locking characteristics 32
locking mechanisms 14, 59, 71
locking requests 33
locking services 28
locks 32
log listing 181
logging 94
logging server 190
Logical Block Addressing 6
logical unit 6
Logical Unit Number 6
logical volume 6
Logical Volume Manager 6
logical volumes 22, 24
logs 181
logs directory 181
low bandwidth 27
Low Infrastructure Public Key Mechanism 13
low reliability 27
LU 6
LUN 6
LV 6
LVM 6
M
mailbox 274
maintenance procedures 255256
maintenance routines 262
manage shares 210
management 271
management address 95
management IP address 85
mandatory scan 217
manual authorisation 180
manual backup 237
manually triggered 236
mapped 76
mapped volumes 239
mapping 76, 93, 202
mapping details 236
mapping logical volumes 25
mappings 77
Massachusetts Institute of Technology 16, 39
MBCS 35
McAfee 113
MDisk Groups 25
MDisks 92, 205
mesh 150
mesh pattern 150
metadata 34, 7273, 113
Metro Mirror 76, 78, 93
migration 27, 61
Migration policies 61
Migration-ILM 208
mirroring 25
MIT 16
MIT Kerberos 39
Mobility 11
modelling tools 119
Modification 278
Modified Time stamp 30
module 145
mount 78
mounts 11
multipath driver 239
multipathing 280
multiple scan nodes 218
Multiple Target FlashCopy 77
N
Name resolution 29
namespace 59, 61
NAS 2, 9
NDMP 91, 246, 254
NDMP agent 240
NDMP topologies 112
Nearline 27
nested groups 42
nested mounts 28
NetBIOS 14, 16
NetBios name 174
Netgroup 45
netgroup 43
netgroups 42
Network 94, 183
network adapters 183
Network Attached Storage 2, 9
network authentication protocol 39
Network Data Management Protocol 91
network devices 151
network file sharing 27
Network File System 8, 11
Network Information Service 4243
Network Protocol 94
Network Shared Disk 66
Network Shared Disks 62
Network Storage Devices 207
Network Time Protocol 117, 125
network traffic 102
new definition 222
New File System 207
NFS 8, 11, 22, 38, 108
NFS access 211
NFS daemon 35
NFS export 38
NFS exports 45
NFS file client 8
NFS file server 8, 38
NFS file service 8
NFS protocol support 28
NFS requests 12
NFS Version 2 12
NFS Version 3 12
NFS Version 4 13
NFS version 4.1 13
NFSv2 12
NFSv3 12
NFSv4 13, 28, 42
NFSv4 ACL 28, 42
NFSv4.1 13
NIS 42, 45
NIS authentication 174
NIS domain 174
NIS ID mappings 46
No action 221
node canister 150
node canisters 237
node fail 236
node settings 220
nodes 41, 61
NSD 66
NSD clients 66
NSD grouping 72
NSD servers 66
NSDs 62, 72, 207
NT LAN Manager 16
NT4 16
NTP 117, 125, 127, 171
O
offload data 73
operational storage nodes 236
oplock 14, 33
oplocks 31, 71, 121
opportunistic locking 14, 33, 71, 121
Opportunistic locks 30
opportunistic locks 31
OSI 17
P
P2P 8
packets 14
packing slip 142
parallel 217
parallel access 71
parallel computing 66
parallel processing 13
parent directory 46
partition 154
Partnership 201
Partnerships 93
parts 276
passive mode 1718
password 39, 197
passwords 38
path 217, 221
PDC server 174
Peer-to-Peer 8
performance 12, 34, 91, 119, 217
performance profiles 27
performance requirements 27
periodic backup 177
periodic configuration backup 177
permissions 12, 42
physical volume 6
Placement policies 61
planning 140
planning worksheet 144
plug locations 147
PMR 275
point-in-time 93
point-in-time copy 27, 76
policy engine 73
policy engine scan 80
policy-based automated placement 27
pool of scan nodes 218
Pools 205
port mapper 12
Portable Operating System Interface 13
portmap 12
ports 12
Ports by Host 93
POSIX 13, 28
POSIX attributes 30
POSIX bits 42
POSIX byte range locks 32
POSIX locks 32
POSIX-compliant 29
power 144
power cords 142, 145
power outlets 125
power strips 145
Prefetch 253
Prefetching 252
presets 77
primary 78
Primary Domain Controller 43
proactive scan operation 217
Problem Management Record 275
profiles 193
protocol 25
public IP addresses 33
public network domain 171
Public Networks 94
PuTTY 88
PV 6
Q
Quarantine 221
quorum 68, 236
quorum data 237
quorum disk 68
quorum disks 239
quota 59, 91
quotas 16, 206
R
rack 142
rack space 144
racking of the modules 144
Rack-mounting hardware kit 142
RAID 6, 25
random access 6
random access mass storage 6
random port 17
read-ahead 34
ReadOnly bit 30
record updates 33
recovery 11, 276
recovery of a node 238
recovery procedures 256
recovery routines 260
recovery situations 95
Redbooks website
Contact us xvi
RedHat 6.1 22
redundancy 72, 150
Redundant Array of Independent Disks 6
redundant communication 107
redundant power supplies 23
registry 38
relationship 76, 78, 93
removing 76
Release 278
reload 152
Remote Copy 93
Remote copy 202
remote copy partner 79
Remote Copy Services 119
Remote Procedure Call 11
remote support functionality 178
remote support functions 179
Remote Technical Support 256
Replicate File Systems 93
replication 27
replication layer 79
Representational State Transfer 32
Request For Comment 11
reset packet 34
REST 32
retention 59
retention period 111
Reverse FlashCopy 77
RFC 1112
RFC 1813 12
RFC 3010 13
RFC 5661. NFSv4 13
roles 71
root 206
root directory 47
Root password 277
root squash 211
round-robin access 34
RPC 11
rpc.portmap 12
rpcbind 12
rsa key 89
rsync 27, 80
rsync transfer 80
RTS 256
rule 73
S
SA 84
SAMBA 43
Samba 29, 217
Samba PDC 44, 174
SAMBA PDC authentication 44
Samba4 41
SAN fabrics 150
SAS 2425
SAS cables 125, 147
SAS chains 125
SAS ports 147
save files 22
SBOD 23
Scalability 11
scalability 217
Scale Out Network Attached Storage 22
scale-out NAS 61
scan engine 72
scan nodes 216218
Scan protocol 220
scan request 216
scanning process 218
scope 59
SCP 9, 11, 18, 22
SCP and SFTP support 32
scripting tool 95
scripts 33
SCSI 6
search tool bar 273
secondary 7778
secret-key strong cryptography 16
sector size 6
Secure Copy Protocol 9, 18
Secure FTP 9, 19
Secure Shell 9
Secure Sockets Layer 18, 43
security 13
security identifier 40
security key 89
self healing 256, 262
self-healing 71
semantics 30
sequential 25
Server Message Block 8, 13
service access 277
Service announcement 29
Service Assistant 84
service assistant IP address 155
service call 275
Service for Unix 172
Service IP Addresses 94
Service IP addresses 155
Service ip addresses 277
service layers 6
service ports 183
services 91
Services for Unix 41, 4344
session 13
session based 38
session credential 39
SFTP 9, 11, 19, 22
SFU 41, 4344, 172
share 17, 59, 91, 205
share concept 32
share files 22, 59
share level 14
Shares 91, 210
shares 8, 14, 60, 205
Shares/Exports 206
SID 40
signature 216217
Simple FTP 19
Simple Public Key Mechanism 13
simplified graphics 86
Single Pool 207
Small Computer System Interface 6
Smart Analytics 66
SMB 8, 11, 13, 22
SMB 2 17
SMB client 39
SMB file client 8
SMB file server 8, 39
SMB file service 8
SMB file sharing function 28
SMB protocol 29, 39
SMB protocol support 29
SMB share 39
SMB Version 2 17
SMTP mail server 187
Snapshot 202
snapshot 77, 91
Snapshots 59, 111
snapshots 27
SNIA 56
SNMP 94
SNMP alert 256
SNMP protocols 274
SNMP server 187, 189
soft quota 59, 74
Soft quotas 74
software bundle 279
software components 26
software levels 152
Software package 278
software stack 57, 63
software upgrades 94
software upgrading 255
Solid State Drives 6
SONAS 22, 27, 61, 236
SONAS cluster manager 29, 33–34
SONAS file modules 236
SONAS HSM 30
SONAS Snapshots 30
SONAS software 26
source 76
sources 77
space efficient 73, 80
Space Efficient FlashCopy 77
space-efficient 79
SPKM-3 13
split brain 68
SQL 73
SSD 6, 26
SSDs 24
SSH 9, 88
SSH daemon 19
SSH File Transfer Protocol 19
SSH2 19
sshd 32
SSL 18, 43
SSL mode 173
SSL secured communication 31
SSL security 173
stateful protocol 13
stateless operation 13
stateless protocol 14
states 78
Synchronized 78
Static data 61
statistical counters 231
statistical data 231
statistics interval 231
status 236
status indicator 193
status information 236
storage agent 240
storage client 6
storage clients 24
Storage controller configuration 199
storage enclosure password 194
storage extent mapping tables 236
storage functions 25
storage layer 79
Storage Networking Industry Association 56
storage node 26
Storage Nodes 62
storage nodes 63, 84
storage pools 25, 92
storage tier 61
string 13
stripe size 67
striped 25
stripes 67
structure 7
stub file 61, 73
sub block 67
subdirectory tree 217
suggested tasks 90
Sun 11
Sun Microsystems 11
Superuser password 277
Support 94
support files 181
support notifications 167
support package 181–182
Switched Bunch Of Drives 23
Symantec 113
symbolic links 31
symmetric key cryptography 39
symmetrical 125
synchronized 77
Synchronized state 78
synchronous 12
Syslog Server 190
System 90
system attributes 162
System bit 30
System details 90
system licences 178
System Migration 92
T
T1 238
T2 238
T3 238
T4 238
tape restore 80
target 25, 7677
targets 77
TCP 11, 13
TCP/IP 14
TDA 118
TDB 177
Technical Delivery Assessment 118
terminology 6
test e-mail 187
test utility 279, 282, 284
threshold 27
ticket 39
tickle-ACK 34
tie breaker 68
tie-break 238
tie-breaking 236
tier 66
Tier 1 238
tier 1 recovery 238
Tier 2 238
tier 2 239
Tier 3 238–239
tier 3 recovery 237
Tier 4 238–239
tiering 27
tiers 26, 100, 238
time 77
time and date 94
time consistency 93
time stamps 30
timestamps 259
Time-Zero 76
Tivoli Storage Manager 91
TLS 18, 43
trace 257
transfer size 12
transferring files 9
Transmission Control Protocol 11
transparent file recall 61
transport layer 11, 13
transport layer protocols 14
Transport Layer Security 18
Transport Level Security 43
transport protocol 14
traversal permission 47
tree 11, 41
tree view 90
trees 17
Trivial DataBase 177
trusted 38
trusted networks 12
TSM 30
tuning 252
two-tier architecture 63, 66
U
UDP 11, 14
UID 14, 38–39
uncommitted data 12
unfixed events 193, 256, 260, 262
unique user IDs 38
universal tool 155
UNIX 11, 172, 206
Unix ACLs 29
Unix authentication 38–39
Unix NFS client 38
UNIX permissions 13
unpack 142
unstable writes 12
upgrade 152
upper layers 6
usage metrics 91
USB key 154
USB mass storage key 154
use cases 57
user access 93
user authentication 16, 32
User Datagram Protocol 11
user groups 93
User ID 14
user identifiers 39
user names 39
User Security 193
Users 93
utility 279
V
V7000 24
V7000 controller enclosure 23
V7000 expansion enclosures 23
V7000 storage pools 68
VDisk 76–78
Version 12, 278
virtual volumes 24
virtualized NSDs 68
virus signature 217
virus signature definition 217
VLAN ID 177
volume 6
volume mappings 68
Volume Shadow Service 80
Volume Shadow Services 30
Volumes 93, 205
volumes 92
source 78
target 77
thin-provisioned 77
Volumes by Host 92
Volumes by Pool 92
Volumes The 92
VRMF 278
vsftpd 31
VSS 30, 80
W
warm start 238
warning levels 59
web server 84
Web-based Distributed Authoring and Versioning 32
WebDAV 32
well known port 13
well known ports 18
win32 share modes 30
Windows access control semantics 29
Windows Active Directory 39
Windows authentication 38–39
Windows domain controller 39
Windows registry 39
wizard 92, 160
workload parameters 119
workload-sharing 34
write caching options 121
write operations 12
writes 78
Z
zoned 200
zoning 201
SG24-8010-01 ISBN 0738437166
INTERNATIONAL TECHNICAL SUPPORT ORGANIZATION

BUILDING TECHNICAL INFORMATION BASED ON PRACTICAL EXPERIENCE

IBM Redbooks are developed by the IBM International Technical Support Organization. Experts from IBM, Customers and Partners from around the world create timely technical information based on realistic scenarios. Specific recommendations are provided to help you implement IT solutions more effectively in your environment.

For more information: ibm.com/redbooks
Implementing the IBM Storwize V7000 Unified

Consolidates storage and file serving workloads into an integrated system

Simplifies management and reduces cost

Integrated support for IBM Real-time Compression
In this IBM Redbooks publication we introduce the
IBM Storwize V7000 Unified (V7000U). Storwize V7000 Unified is a
virtualized storage system designed to consolidate block and file
workloads into a single storage system. Advantages include simplicity
of management, reduced cost, highly scalable capacity, performance,
and high availability. Storwize V7000 Unified storage also offers
improved efficiency and flexibility through built-in solid-state drive
(SSD) optimization, thin provisioning, IBM Real-time Compression, and
nondisruptive migration of data from existing storage. The system can
virtualize and reuse existing disk systems offering a greater potential
return on investment.
Back cover