I recently had a customer ask me about the size of a RAID 5 array I was quoting for them. He wanted to know why the total available space was not equal to the total raw hard drive space. This is a really common question I get when it comes to any kind of RAID setup: people know the name, and that it’s good, but not much else. That’s understandable, as RAID has been around in one form or another since the 1960s and has been widely used since the late 1970s. The term RAID itself wasn’t coined until the 1980s; before that, each vendor used its own branded names. In those early days, the main goals of RAID were to improve the speed of drive access and the reliability of data storage systems.

The early days

The early IBM system, called the 7612 disk synchronizer, was used with the IBM 353 disk drives. Keep in mind these drives lived in a cabinet that was 152cm x 174cm x 74cm (60″ x 68″ x 29″) and weighed in at around 1 metric ton!

IBM 353 (RAMAC)

In this early system, the controller wrote data to the disk in 39-bit “words”. 32 of those bits were data and the other 7 were used for Error Check and Correction (ECC), which I’ll talk about in another blog post. The system used two or more platters in parallel, creating a redundant copy of the data, equivalent to our current RAID 1 mirroring. The system was able to correct up to one bit per word and detect multiple errored bits per word. While reading, the system was able to compare the data between platters. If an error was detected, the system used data from the good copy to correct the corrupted data.
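
To make single-bit correction concrete, here is a minimal sketch of a classic Hamming(7,4) code in Python. This illustrates the general technique, not IBM’s actual 39-bit word format; the bit layout and function names here are my own.

```python
# Minimal Hamming(7,4) sketch: 4 data bits are protected by 3 parity
# bits, allowing any single flipped bit to be located and corrected.
# Illustrative only; not the IBM 39-bit word format described above.

def hamming74_encode(d1, d2, d3, d4):
    """Return a 7-bit codeword laid out as [p1, p2, d1, p3, d2, d3, d4]."""
    p1 = d1 ^ d2 ^ d4          # covers positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # covers positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # covers positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(code):
    """Correct up to one flipped bit, then return the 4 data bits."""
    c = list(code)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3   # 1-based position of the bad bit
    if syndrome:
        c[syndrome - 1] ^= 1          # flip it back
    return [c[2], c[4], c[5], c[6]]

word = hamming74_encode(1, 0, 1, 1)
word[4] ^= 1                          # simulate a single-bit error
assert hamming74_decode(word) == [1, 0, 1, 1]
```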

Later on, when hard drives no longer required a forklift to install, RAID started to make use of multiple, smaller hard drives to gain better speed. Digital Equipment Corporation (DEC) did this with the HSC50 and HSC70 controllers that were part of their Digital Storage Architecture. They took the data and spread it out over multiple disks using a system similar to IBM’s, writing ECC info to each byte. The DEC system was able to achieve a 3 Mb/s transfer rate by retrieving the data in parallel from the disks, then letting the memory buffers put it all together.

Years later, when drives were smaller and less expensive than these monsters, it was still hard to get large amounts of storage for a good price. Once again, IBM took the lead, filing a patent for the technology that UC Berkeley researchers would later name RAID in the 1988 paper “A Case for Redundant Arrays of Inexpensive Disks (RAID)” by David Patterson, Garth A. Gibson, and Randy Katz. Their idea of many hard drives working together to be faster and more reliable is still the basis for RAID today.

RAID fundamentals

RAID can be broken down into seven standard levels, RAID 0 through RAID 6:
(Level descriptions adapted from http://en.wikipedia.org/wiki/RAID)

RAID 0

RAID 0 consists of striping, without mirroring or parity. The capacity of a RAID 0 volume is the sum of the capacities of the disks in the set, the same as with a spanned volume. There is no added redundancy for handling disk failures, just as with a spanned volume. Thus, failure of one disk causes the loss of the entire RAID 0 volume, with reduced possibilities of data recovery when compared to a broken spanned volume. Striping distributes the contents of files roughly equally among all disks in the set, which makes concurrent read or write operations on the multiple disks almost inevitable. The concurrent operations make the throughput of most read and write operations equal to the throughput of one disk multiplied by the number of disks. Increased throughput is the big benefit of RAID 0 versus a spanned volume.
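
Here is a rough sketch of how striping places logical blocks across the disks in the set. The round-robin layout shown is the textbook scheme; the function name is mine, not any particular controller’s.

```python
# Sketch of RAID 0 block placement: logical blocks are dealt out
# round-robin across the disks, so consecutive blocks land on
# different spindles and can be read or written concurrently.

def raid0_locate(logical_block, num_disks):
    """Map a logical block number to (disk index, block on that disk)."""
    disk = logical_block % num_disks
    offset = logical_block // num_disks
    return disk, offset

# With 4 disks, logical blocks 0..7 land on disks 0,1,2,3,0,1,2,3.
for lb in range(8):
    print(lb, raid0_locate(lb, 4))
```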

RAID 1

RAID 1 consists of mirroring, without parity or striping. Data is written identically to two (or more) drives, thereby producing a “mirrored set”. Thus, any read request can be serviced by any drive in the set. If a request is broadcast to every drive in the set, it can be serviced by the drive that accesses the data first (depending on its seek time and rotational latency), improving performance. Sustained read throughput, if the controller or software is optimized for it, approaches the sum of throughputs of every drive in the set, just as for RAID 0. Actual read throughput of most RAID 1 implementations is slower than the fastest drive. Write throughput is always slower because every drive must be updated, and the slowest drive limits the write performance. The array continues to operate as long as at least one drive is functioning.
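
In code, mirroring boils down to “write everywhere, read from any one copy”. A toy sketch follows, with in-memory lists standing in for drives; no real controller works this way, and the class and method names are my own.

```python
# Toy RAID 1 sketch: every write goes to all drives, so writes are
# only as fast as the slowest drive; a read can be served by any one
# drive, and the array survives as long as one drive is left.

class Raid1:
    def __init__(self, num_drives, num_blocks):
        self.drives = [[None] * num_blocks for _ in range(num_drives)]

    def write(self, block, data):
        for drive in self.drives:       # must update every mirror
            if drive is not None:       # skip failed drives
                drive[block] = data

    def read(self, block):
        for drive in self.drives:       # any surviving copy will do
            if drive is not None:
                return drive[block]
        raise IOError("all drives failed")

    def fail_drive(self, i):
        self.drives[i] = None

array = Raid1(num_drives=2, num_blocks=16)
array.write(3, b"hello")
array.fail_drive(0)
assert array.read(3) == b"hello"        # still readable from the mirror
```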

RAID 2

RAID 2 consists of bit-level striping with dedicated Hamming-code parity. All disk spindle rotation is synchronized and data is striped such that each sequential bit is on a different drive. Hamming-code parity is calculated across corresponding bits and stored on at least one parity drive. This level is of historical significance only; although it was used on some early machines (for example, the Thinking Machines CM-2), as of 2014 it is not used by any of the commercially available systems.

RAID 3

RAID 3 consists of byte-level striping with dedicated parity. All disk spindle rotation is synchronized and data is striped such that each sequential byte is on a different drive. Parity is calculated across corresponding bytes and stored on a dedicated parity drive. Although implementations exist, RAID 3 is not commonly used in practice.

RAID 4

RAID 4 consists of block-level striping with dedicated parity. This level was previously used by NetApp, but has now been largely replaced by a proprietary implementation of RAID 4 with two parity disks, called RAID-DP.

RAID 5

RAID 5 consists of block-level striping with distributed parity. Unlike in RAID 4, parity information is distributed among the drives. It requires that all drives but one be present to operate. Upon failure of a single drive, subsequent reads can be calculated from the distributed parity such that no data is lost. RAID 5 requires at least three disks. RAID 5 is seriously affected by the general trends regarding array rebuild time and chance of failure during rebuild. In August 2012, Dell posted an advisory against the use of RAID 5 in any configuration and of RAID 50 with “Class 2 7200 RPM drives of 1 TB and higher capacity” for business-critical data. 
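
The parity RAID 5 stores is just the XOR of the corresponding data blocks in each stripe, and XOR is what makes a rebuild possible: combining any n-1 of the n blocks (data plus parity) recovers the missing one. A small sketch of the arithmetic, with my own helper name, operating on byte strings:

```python
# RAID 5 parity sketch: parity is the XOR of the data blocks in a
# stripe. XOR any n-1 of the n blocks (data + parity) together and
# you get the missing one back; this is exactly what a rebuild does.

def xor_blocks(blocks):
    """Bytewise XOR across a list of equal-length byte strings."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            out[i] ^= byte
    return bytes(out)

stripe = [b"AAAA", b"BBBB", b"CCCC"]     # data blocks on three drives
parity = xor_blocks(stripe)              # stored on the fourth drive

# The drive holding stripe[1] dies: XOR the survivors with parity.
rebuilt = xor_blocks([stripe[0], stripe[2], parity])
assert rebuilt == stripe[1]
```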

RAID 6

RAID 6 consists of block-level striping with double distributed parity. Double parity provides fault tolerance up to two failed drives. This makes larger RAID groups more practical, especially for high-availability systems, as large-capacity drives take longer to restore. RAID 6 requires a minimum of four disks. As with RAID 5, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced. With a RAID 6 array, using drives from multiple sources and manufacturers, it is possible to mitigate most of the problems associated with RAID 5. The larger the drive capacities and the larger the array size, the more important it becomes to choose RAID 6 instead of RAID 5. RAID 10 also minimizes these problems.
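
This is also the answer to the question that opened this post: usable space depends on the level, because mirroring and parity consume raw capacity. A back-of-the-envelope sketch, assuming n identical drives of capacity c each (real controllers reserve a little more for metadata, so treat these as upper bounds):

```python
# Rough usable-capacity formulas for the common levels, assuming
# n identical drives of capacity c each. Real arrays lose a little
# more to controller metadata, so treat these as upper bounds.

def usable_capacity(level, n, c):
    if level == 0:
        return n * c          # striping only: all space is usable
    if level == 1:
        return c              # every drive holds the same copy
    if level == 5:
        return (n - 1) * c    # one drive's worth goes to parity
    if level == 6:
        return (n - 2) * c    # two drives' worth goes to parity
    raise ValueError("level not covered here")

# The customer's question: four 2 TB drives in RAID 5 give 6 TB, not 8.
print(usable_capacity(5, n=4, c=2))   # -> 6
```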

RAID in practical terms

As you can see from the list above, there are many options for a RAID system. In practical terms, however, which RAID is good for which application? What can RAID do for your system in everyday terms? I have found that RAID is best for systems that need a large amount of uptime and have a high demand on the storage array, like a Network Attached Storage (NAS) server. A NAS is a device many offices use as a backup location, central storage area, or an office/company-wide file server. A NAS needs to be there 24/7 and not lose any data; if it did, it could bring the office to a grinding halt. That makes it a perfect application for RAID!

As far as which RAID to use, the decision comes down to cost and function. Most widely available RAID controllers will have implementations of RAID 0, 1 and 5.

Most of the time I suggest RAID 5 for several reasons:

  1. longer uptime
  2. easy recovery, and
  3. simple implementation

If a hard drive fails and you have hot-swap drive bays, just pull out the failed drive and plug in a new one. The system will rebuild the array for you, and you will never lose access to your data. These systems are easy to set up via the built-in GUI on the controller cards, and depending on the system you can use commonly available drives.

However, it’s important not to be lulled into a false sense of RAID being a fix-it-all solution. RAID does have limitations, and it is only as good as the hardware it’s running on. So if you’re looking for 99.999% uptime a year, don’t build your server from that old desktop PC sitting in your parts pile with a RAID controller stuck into it.

Make sure you use server-grade hardware that has redundancy built in to the whole system, not just the storage array. This will cost you more to start with, but will save you in the long run.

On top of it all, BACK UP YOUR DATA! Even the best systems will fail at some point, and having backups ready will save you! RAID systems of all kinds are still at the mercy of mechanical devices, and those devices will fail.

You can, however, with proper maintenance, have your system running for years. Here are some of my favorite tips to make that happen:

RAID Tips and Tricks

  • Make sure you always have two spare drives for your RAID 5 array. Statistically speaking, there is roughly a 4% chance that you will have two drives fail within 10 hours of each other (see the back-of-the-envelope sketch after this list). Another trick I have found to work well is to make sure the drives are not all from the same batch. I like to install drives that are spread out in their manufacturing dates if I can; that way they are less likely to fail within the same time window. For more info on hard drive failure rates, see the study done by Carnegie Mellon University.
  • Make sure you back your data up regularly. It’s best to run a system that will do it for you automatically. If the array is your backup, make sure to create off-site backups as well.
  • Clean your server out at least once a year. No matter how hard you try to keep your server room clean, dust will collect. Dust acts as an insulator, and as it builds up it can cause components to overheat and fail. I like to schedule a once-a-year system shutdown to clean the fans and heatsinks. If the system can’t be shut down, make sure you mount it on rack rails and make sure the cables are long enough that you can open the case while the server is live to clean it out.
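
Here is the hedged back-of-the-envelope estimate promised in the first tip: the odds that a second drive fails while the array is rebuilding. The 8% annualized failure rate and 24-hour rebuild window below are placeholders of my own choosing; plug in the numbers from the Carnegie Mellon study or your own drives’ field data.

```python
# Back-of-the-envelope odds that a second drive fails while the
# array is rebuilding. Assumes independent drives with a constant
# (exponential) failure rate, which is a simplification: real drives
# from one batch fail in correlated ways, which is why the tip above
# suggests mixing manufacturing dates.
import math

def second_failure_odds(n_drives, afr, window_hours):
    """P(at least one of the n-1 surviving drives fails in the window)."""
    p_one = 1 - math.exp(-afr * window_hours / 8760)  # per-drive odds
    return 1 - (1 - p_one) ** (n_drives - 1)

# Placeholder numbers: 8-drive array, 8% annualized failure rate,
# 24-hour rebuild window.
print(round(second_failure_odds(8, 0.08, 24), 4))
```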
