
RAID Concepts and Configuration: RAID 0, RAID 1, RAID 4, RAID 5

RAID: Redundant Array of Independent Disks


The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array to accomplish performance or redundancy goals not attainable with one large and expensive drive. This array of drives will appear to the computer as a single logical storage unit or drive.

Concept of RAID

The current RAID drivers in Linux support the following levels:

Linear mode
a) Two or more disks are combined into one physical device. The disks are appended to each other, so writing linearly to the RAID device will fill up disk 0 first, then disk 1, and so on.

b) The disks do not have to be the same size; in fact, size doesn't matter at all here. There is no redundancy in this level. If one disk crashes you will most probably lose all your data. You may, however, be able to recover some of it, since the filesystem will just be missing one large consecutive chunk of data.

c) The read and write performance will not increase for single reads and writes. But if several users use the device, you may be lucky enough that one user is effectively using the first disk while the other is accessing files which happen to reside on the second disk. If that happens, you will see a performance gain.

Linear Mode RAID Configuration


OK, so you have two or more partitions which are not necessarily the same size (but of course can be), which you want to append to each other. Set up the /etc/raidtab file to describe your setup. I set up a raidtab for two disks in linear mode, and the file looked like this:

    raiddev /dev/md0
        raid-level              linear
        nr-raid-disks           2
        chunk-size              32
        persistent-superblock   1
        device                  /dev/sdb6
        raid-disk               0
        device                  /dev/sdc5
        raid-disk               1

Spare disks are not supported here. If a disk dies, the array dies with it; there is no information to put on a spare disk. You are probably wondering why we specify a chunk-size here, when linear mode just appends the disks into one large array with no parallelism. Well, you're completely right, it's odd. Just put in some chunk size and don't worry about it any more. OK, let's create the array. Run the command

    mkraid /dev/md0

This will initialize your array, write the persistent superblocks, and start the array. If you are using mdadm, a single command like

    mdadm --create --verbose /dev/md0 --level=linear --raid-devices=2 /dev/sdb6 /dev/sdc5

should create the array. The parameters speak for themselves. The output might look like this:

    mdadm: chunk size defaults to 64K
    mdadm: array /dev/md0 started.

Have a look in /proc/mdstat. You should see that the array is running. Now you can create a filesystem, just like you would on any other device, mount it, include it in your /etc/fstab, and so on.
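To round off that last step, here is a minimal sketch of creating and mounting a filesystem on the new array; the mount point /mnt/raid and the ext2 filesystem type are assumptions, not requirements:

    # Create an ext2 filesystem on the new array
    mke2fs /dev/md0

    # Mount it, and add an fstab entry so it comes back after a reboot
    mkdir -p /mnt/raid
    mount /dev/md0 /mnt/raid
    echo '/dev/md0  /mnt/raid  ext2  defaults  0 2' >> /etc/fstab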

RAID-0 (RAID zero)

a) Also called stripe mode. The devices should (but need not) have the same size. Operations on the array will be split across the devices; for example, a large write could be split up as 4 kB to disk 0, 4 kB to disk 1, 4 kB to disk 2, then 4 kB to disk 0 again, and so on. If one device is much larger than the others, that extra space is still utilized in the RAID device, but you will be accessing this larger disk alone during writes in the high end of your RAID device. This of course hurts performance.

b) Like linear mode, there is no redundancy in this level either. Unlike linear mode, you will not be able to rescue any data if a drive fails. If you remove a drive from a RAID-0 set, the RAID device will not just miss one consecutive block of data; it will be filled with small holes all over the device. e2fsck or other filesystem recovery tools will probably not be able to recover much from such a device.

c) The read and write performance will increase, because reads and writes are done in parallel on the devices. This is usually the main reason for running RAID-0. If the busses to the disks are fast enough, you can get very close to N*P MB/sec, where N is the number of disks and P is the throughput of a single disk in MB/sec.

RAID 0 Configuration
You have two or more devices, of approximately the same size, and you want to combine their storage capacity and also combine their performance by accessing them in parallel. Set up the /etc/raidtab file to describe your configuration. An example raidtab looks like:

    raiddev /dev/md0
        raid-level              0
        nr-raid-disks           2
        persistent-superblock   1
        chunk-size              4
        device                  /dev/sdb6
        raid-disk               0
        device                  /dev/sdc5
        raid-disk               1

As in linear mode, spare disks are not supported here either. RAID-0 has no redundancy, so when a disk dies, the array goes with it. Again, you just run

    mkraid /dev/md0

to initialize the array. This should initialize the superblocks and start the raid device. Have a look in /proc/mdstat to see what's going on. You should see that your device is now running. /dev/md0 is now ready to be formatted, mounted, used and abused.
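If you prefer mdadm over the old raidtools, here is a sketch of the equivalent of the raidtab above; device names are assumed to match:

    # Create a two-disk stripe set with a 4 kB chunk size
    mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 \
          --chunk=4 /dev/sdb6 /dev/sdc5

    # Confirm the stripe set is running
    cat /proc/mdstat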

RAID 1
a) This is the first mode which actually has redundancy. RAID-1 can be used on two or more disks with zero or more spare disks. This mode maintains an exact mirror of the information on one disk on the other disk(s). Of course, the disks must be of equal size. If one disk is larger than another, your RAID device will be the size of the smallest disk.

b) If up to N-1 disks are removed (or crash), all data is still intact. If there are spare disks available, and if the system (e.g. SCSI drivers or IDE chipset etc.) survived the crash, reconstruction of the mirror will begin immediately on one of the spare disks after detection of the drive fault.

c) Write performance is often worse than on a single device, because identical copies of the data written must be sent to every disk in the array. With large RAID-1 arrays this can be a real problem, as you may saturate the PCI bus with these extra copies. This is in fact one of the very few places where hardware RAID solutions can have an edge over software solutions: with a hardware RAID card, the extra write copies of the data will not have to go over the PCI bus, since it is the RAID controller that generates the extra copy. Read performance is good, especially if you have multiple readers or seek-intensive workloads. The RAID code employs a rather good read-balancing algorithm that will simply let the disk whose heads are closest to the wanted disk position perform the read operation. Since seek operations are relatively expensive on modern disks (a seek time of 6 ms equals a read of 123 kB at 20 MB/sec), picking the disk that will have the shortest seek time does give a noticeable performance improvement.
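If you run the mirror under mdadm, you can watch the spare takeover described above in action. A minimal sketch, assuming an existing mirror /dev/md0 with member /dev/sdb6 and a configured spare:

    # Mark one member as faulty, then remove it from the array
    mdadm /dev/md0 --fail /dev/sdb6
    mdadm /dev/md0 --remove /dev/sdb6

    # If a spare was configured, /proc/mdstat now shows it being rebuilt onto
    cat /proc/mdstat

    # After replacing the disk, re-add it; it becomes the new spare
    mdadm /dev/md0 --add /dev/sdb6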

RAID 1 Configuration
You have two devices of approximately the same size, and you want the two to be mirrors of each other. You may also have additional devices, which you want to keep as standby spare disks that will automatically become part of the mirror if one of the active devices fails.

Set up the /etc/raidtab file like this:

    raiddev /dev/md0
        raid-level              1
        nr-raid-disks           2
        nr-spare-disks          0
        persistent-superblock   1
        device                  /dev/sdb6
        raid-disk               0
        device                  /dev/sdc5
        raid-disk               1

If you have spare disks, you can add them to the end of the device specification like

        device                  /dev/sdd5
        spare-disk              0

Remember to set the nr-spare-disks entry correspondingly. OK, now we're all set to start initializing the RAID. The mirror must be constructed, i.e. the contents (however unimportant now, since the device is still not formatted) of the two devices must be synchronized. Issue the

    mkraid /dev/md0

command to begin the mirror initialization. Check out the /proc/mdstat file. It should tell you that the /dev/md0 device has been started, that the mirror is being reconstructed, and give an ETA for completion of the reconstruction.

Reconstruction is done using idle I/O bandwidth, so your system should still be fairly responsive, although your disk LEDs should be glowing nicely. The reconstruction process is transparent, so you can actually use the device even while the mirror is under reconstruction. Try formatting the device while the reconstruction is running; it will work. You can also mount it and use it while reconstruction is running. Of course, if the wrong disk breaks while the reconstruction is running, you're out of luck.
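The mdadm equivalent is a single command. A sketch assuming the same device names as the raidtab above, plus /dev/sdd5 as one spare:

    # Create a two-way mirror with one hot spare
    mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 \
          --spare-devices=1 /dev/sdb6 /dev/sdc5 /dev/sdd5

    # Watch the initial synchronization progress
    watch cat /proc/mdstat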

RAID-4
a) This RAID level is not used very often. It can be used on three or more disks. Instead of completely mirroring the information, it keeps parity information on one drive and writes data to the other disks in a RAID-0-like way. Because one disk is reserved for parity information, the size of the array will be (N-1)*S, where S is the size of the smallest drive in the array. As in RAID-1, the disks should either be of equal size, or you will just have to accept that S in the (N-1)*S formula is the size of the smallest drive.

b) If one drive fails, the parity information can be used to reconstruct all data. If two drives fail, all data is lost.

c) The reason this level is not used more frequently is that the parity information is kept on one drive. This information must be updated every time one of the other disks is written to. Thus, the parity disk becomes a bottleneck if it is not a lot faster than the other disks. However, if you just happen to have a lot of slow disks and a very fast one, this RAID level can be very useful.

RAID 4 Configuration
You have three or more devices of roughly the same size, one device is significantly faster than the others, and you want to combine them all into one larger device while still maintaining some redundancy information. You may also have a number of devices you wish to use as spare disks. Set up the /etc/raidtab file like this:

    raiddev /dev/md0
        raid-level              4
        nr-raid-disks           4
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              32
        device                  /dev/sdb1
        raid-disk               0
        device                  /dev/sdc1
        raid-disk               1
        device                  /dev/sdd1
        raid-disk               2
        device                  /dev/sde1
        raid-disk               3

If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications:

        device                  /dev/sdf1
        spare-disk              0

Your array can be initialized with the

    mkraid /dev/md0

command as usual. You should see the section on special options for mke2fs before formatting the device.
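With mdadm, the same four-disk RAID-4 array can be sketched in one command; device names are assumed to match the raidtab above:

    # Create a RAID-4 array: three data disks plus one dedicated parity disk
    mdadm --create --verbose /dev/md0 --level=4 --raid-devices=4 \
          --chunk=32 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1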

RAID-5

a) This is perhaps the most useful RAID mode when one wishes to combine a larger number of physical disks and still maintain some redundancy. RAID-5 can be used on three or more disks, with zero or more spare disks. The resulting RAID-5 device size will be (N-1)*S, just like RAID-4. The big difference between RAID-5 and RAID-4 is that the parity information is distributed evenly among the participating drives, avoiding the bottleneck problem of RAID-4.

b) If one of the disks fails, all data is still intact, thanks to the parity information. If spare disks are available, reconstruction will begin immediately after the device failure. If two disks fail simultaneously, all data is lost. RAID-5 can survive one disk failure, but not two or more.

c) Both read and write performance usually increase, but it can be hard to predict by how much. Reads are similar to RAID-0 reads; writes can be either rather expensive (requiring a read before the write, in order to calculate the correct parity information) or similar to RAID-1 writes. Write efficiency depends heavily on the amount of memory in the machine and on the usage pattern of the array. Heavily scattered writes are bound to be more expensive.


RAID 5 Configuration
You have three or more devices of roughly the same size, you want to combine them into a larger device, but you still want to maintain a degree of redundancy for data safety. You may also have a number of devices to use as spare disks, which will not take part in the array until another device fails. If you use N devices where the smallest has size S, the size of the entire array will be (N-1)*S. The missing space is used for parity (redundancy) information. Thus, if any one disk fails, all data stays intact; but if two disks fail, all data is lost. Set up the /etc/raidtab file like this:

    raiddev /dev/md0
        raid-level              5
        nr-raid-disks           7
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/sda3
        raid-disk               0
        device                  /dev/sdb1
        raid-disk               1
        device                  /dev/sdc1
        raid-disk               2
        device                  /dev/sdd1
        raid-disk               3
        device                  /dev/sde1
        raid-disk               4
        device                  /dev/sdf1
        raid-disk               5
        device                  /dev/sdg1
        raid-disk               6

If we had any spare disks, they would be inserted in a similar way, following the raid-disk specifications:

        device                  /dev/sdh1
        spare-disk              0

and so on. A chunk size of 32 kB is a good default for many general-purpose filesystems of this size. The array on which the above raidtab is used consists of seven 6 GB disks, giving (N-1)*S = (7-1)*6 GB = 36 GB of usable space. It holds an ext2 filesystem with a 4 kB block size. You could go higher with both the array chunk-size and the filesystem block-size if your filesystem is either much larger or just holds very large files.

OK, enough talking. You set up the /etc/raidtab, so let's see if it works. Run the

    mkraid /dev/md0

command and see what happens. Hopefully your disks start working like mad as they begin the reconstruction of your array. Have a look in /proc/mdstat to see what's going on. If the device was successfully created, the reconstruction process has now begun. Your array is not consistent until this reconstruction phase has completed. However, the array is fully functional (except for the handling of device failures, of course), and you can format it and use it even while it is reconstructing. See the section on special options for mke2fs before formatting the array. OK, now that you have your RAID device running, you can always stop it or re-start it using the

    raidstop /dev/md0

or

    raidstart /dev/md0

commands.
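As a sketch, here is an mdadm equivalent of the raidtab above, followed by the mke2fs geometry hint the text refers to; device names are assumed, and stride = chunk-size / block-size = 32/4 = 8:

    # Create the seven-disk RAID-5 array with a left-symmetric parity layout
    mdadm --create --verbose /dev/md0 --level=5 --raid-devices=7 \
          --chunk=32 --layout=left-symmetric \
          /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

    # Align ext2 block allocation to the RAID chunks (older mke2fs: -R stride=8)
    mke2fs -b 4096 -E stride=8 /dev/md0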

With mdadm you can stop the device using

    mdadm -S /dev/md0

and bring it back by re-assembling it, e.g. with

    mdadm -A /dev/md0 /dev/sda3 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1 /dev/sdg1

Instead of putting these into init files and rebooting a zillion times to make that work, read on and get autodetection running.
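One way to avoid the reboot loop, assuming your distribution assembles arrays from /etc/mdadm.conf at boot (some systems use /etc/mdadm/mdadm.conf instead), is to record the running array there:

    # Record the running array so it can be assembled automatically
    echo 'DEVICE partitions' > /etc/mdadm.conf
    mdadm --detail --scan >> /etc/mdadm.conf

    # Later, one command assembles every array listed in the config file
    mdadm --assemble --scan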
