Promise FastTrak S150 TX2Plus RAID Controller 15th June 2003
Feedback from our recent review of the Promise SX4000 RAID Controller card indicates that HDTach graphs are a little too confusing for most of our readers, who would like to see simpler graphs. We still believe HDTach provides the most comprehensive analysis of hard drives across their entire surface area, but instead of detailed graphs we will now be providing sustained read and write scores (ignoring the effects of burst reads and writes as well as random reads/writes - we've never encountered a hard drive fragmented badly enough to come close to random read/write figures).
The card also has a lot of exciting features. Here's what you get.
Included are two S-ATA leads, a parallel lead and a power splitter. As usual, Promise includes everything needed except for the drives themselves. It should be noted that if your S-ATA drives use serial power leads rather than Molex sockets (as Seagate S-ATA drives do) you will need separate leads; in those cases they should be included with the drive.
The card itself is quite small and has two Serial and one Parallel channel. It is aimed at those users with a parallel drive wanting to upgrade to RAID0/1 but also wishing to purchase a S-ATA drive instead of another parallel one (to stay future-proof). It is also aimed at those users upgrading to S-ATA who don't want the hassle/expense of purchasing a new motherboard with onboard S-ATA and who also want RAID capability for future expansion.
Here's a screen showing the configuration of the test system:
It should be noted that for testing purposes we managed to obtain 4 Maxtor S-ATA 250GB drives with 8MB cache and 7200rpm. These drives are aimed at the enterprise storage market and come with a 5-year warranty indicating the confidence Maxtor have in this product. For parallel drives we used 2 Maxtor ATA-133 160GB drives with 8MB cache and 7200rpm to match the specifications of the S-ATA drives as closely as possible. In practice, those purchasing this product may not be able to match as closely as this (or may not desire to if they want to purchase the fastest S-ATA drive possible) so readers will need to remember the basics of using RAID on non-matched drives. These basic rules stipulate that performance is limited to N times the speed of the slowest drive and total size is limited to N times the capacity of the smallest drive where N is the number of drives in the array. In practice these are theoretical maximums that may be limited by other factors.
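The basic rules above can be expressed as a short Python sketch. The drive figures used here are illustrative examples, not benchmark results from our test rig:

```python
# Theoretical limits of an N-drive RAID0 array built from non-matched
# drives: both capacity and throughput are bound by the weakest member.

def raid0_limits(capacities_gb, speeds_mb_s):
    """Return (max capacity, max throughput) for an N-drive RAID0 array."""
    n = len(capacities_gb)
    total_capacity = n * min(capacities_gb)   # N x the smallest drive
    total_speed = n * min(speeds_mb_s)        # N x the slowest drive
    return total_capacity, total_speed

# Example: pairing a 250GB S-ATA drive with a 160GB parallel drive
# (assumed per-drive speeds of 55 and 50 MB/sec for illustration).
capacity, speed = raid0_limits([250, 160], [55, 50])
print(capacity, speed)  # 320 (GB usable), 100 (MB/sec at best)
```

In practice, as noted above, these are theoretical maximums - the PCI bus and the Master/Slave arrangement on the parallel channel can both lower the real-world figure.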
Here are the 4 drives we managed to pull together showing the ease of connectivity to the card:
The above picture clearly illustrates how little space is taken up by the connectors of S-ATA drives. The drives used are all Maxtor 250GB S-ATA 7200rpm 8MB cache ones (that's a terabyte of storage, which may seem outrageous even by today's standards, but then we remember the remark often attributed to Bill Gates that we would never need more than 640K of memory in our PCs and must remind ourselves that no one knows what future requirements will be). The drives with the black data connectors are plugged into the Promise controller and the ones with the red connectors are plugged into the onboard RAID controller.
For comparison we will use the Silicon Image S-ATA RAID controller built in to our ASUS motherboard. We will test the FastTrak S150 TX2Plus with the following configurations:
Each test will be conducted at stripe sizes of 16K, 32K and 64K to see which is most efficient.
RAID1 (mirroring) testing will be conducted at default stripe size as we are not given the option of changing this parameter.
Let's start with some HDTach graphs to show performance across the entire 500GB of each array.
The graph on the left is the FastTrak S150 TX2Plus and the one on the right is the Silicon Image controller. Both graphs show the results of RAID0 using two S-ATA drives. Full-size graphs can be obtained by clicking on them. OK, no more complicated graphs from here on.
The serial drives are significantly faster than the parallel ones - even faster than using 4 drives although this is probably due to the limitations of having the Parallel drives as one Master and one Slave. The Silicon Image array has write speed almost 50% higher despite still being limited by the PCI bus (as we see more south bridge designs with built-in S-ATA support such as Intel's ICH5/ICH5R we will be able to overcome this hurdle). The 110MB/sec threshold seems to be a PCI bandwidth limitation and therefore a performance ceiling in this test.
Increasing the stripe size actually hurts read performance slightly but gives a significant boost to write performance.
Using a 64K stripe set actually allows the 4-drive array to come out on top on the read stakes but there is no catching the Silicon Image onboard array in terms of write performance.
RAID1 involves using half the drives to mirror the other half, duplicating all write operations but ensuring data integrity. If one of the drives fails, its contents can be reconstructed from the mirroring drive.
As expected performance is lower but those looking at this option want data integrity first and performance second. In any case the results are not too bad. The two parallel drives are quite poor in this test but this is due to the Master/Slave configuration as we only have one parallel channel causing the steep drop in performance. Again the Silicon Image array is best in this regard.
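The mirroring scheme described above can be sketched in a few lines of Python. This is a toy illustration of the RAID1 idea, not the controller's actual firmware logic:

```python
# Minimal RAID1 sketch: every write goes to both halves of the array,
# so either copy can serve reads if the other drive fails.

class Mirror:
    def __init__(self):
        self.disks = [{}, {}]        # two drives, block number -> data

    def write(self, block, data):
        for disk in self.disks:      # duplicate every write operation
            disk[block] = data

    def read(self, block, failed=None):
        for i, disk in enumerate(self.disks):
            if i != failed:          # skip a failed drive, use its mirror
                return disk[block]

m = Mirror()
m.write(7, "payload")
print(m.read(7, failed=0))  # "payload" - served from the surviving drive
```

The duplicated writes are also why RAID1 write performance trails RAID0: every operation costs two physical writes for one logical one.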
Tips for Improving Performance
Some thought needs to go into planning a RAID array. Small stripe sizes favor large numbers of files being transferred, while large sizes favor activities such as video editing. Generally a 16K stripe size works best for everyday use.

To get the best performance it is necessary to format the array using a cluster size that is a whole multiple of the stripe size. For example, if the stripe size is 16K then it is best to use 16K or 32K as the cluster size. The reason for this is that Windows sends/requests data in blocks of the cluster size, and the RAID controller allocates these to the first free drive in the array in stripe-sized chunks. So a 32K cluster would be split into two 16K blocks and sent to two disks, which is optimal for 2- or 4-disk arrays. 4K clusters, on the other hand, would have to be accumulated until 16K was ready and then sent to the first disk while the other(s) waited for data. In practice this is not too bad, as the cache on modern drives compensates, but the greater the number of drives in an array the greater the need to take such factors into consideration.
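The cluster-to-stripe mapping described above can be made concrete with a toy Python sketch (again, an illustration of the principle rather than Promise's actual allocation logic):

```python
# Toy model of how a RAID0 controller splits an incoming cluster into
# stripe-sized blocks, handing each block to the next drive in rotation.

def split_cluster(cluster_kb, stripe_kb, num_disks):
    """Return (offset, size, disk index) for each chunk of a cluster."""
    assignments = []
    for offset in range(0, cluster_kb, stripe_kb):
        disk = (offset // stripe_kb) % num_disks
        size = min(stripe_kb, cluster_kb - offset)
        assignments.append((offset, size, disk))
    return assignments

# A 32K cluster on a 16K stripe, 2-disk array: one block to each disk,
# so both drives work in parallel.
print(split_cluster(32, 16, 2))   # [(0, 16, 0), (16, 16, 1)]

# A 4K cluster on the same array: only the first disk receives data,
# leaving the second idle for that request.
print(split_cluster(4, 16, 2))    # [(0, 4, 0)]
```

The 32K-cluster case keeps every drive busy on a single request, which is why matching the cluster size to a whole multiple of the stripe size pays off.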
The Promise FastTrak S150 TX2Plus allows the addition of S-ATA drives to a system without that capability. Furthermore, by having a parallel channel in addition to the two serial ones it is possible for users to take an existing parallel drive and buy a S-ATA drive and be safe in the knowledge that the two will work together in a RAID0/1 configuration. We were quite pleased at how well a parallel and serial drive worked together in a RAID array as we were expecting more of a performance penalty and the fully loaded configuration (2 serial and 2 parallel drives) worked well despite one channel being bogged down with a master/slave configuration issue.
While it is true that overall the Promise FastTrak S150 TX2Plus was not as fast in write speeds as the onboard Silicon Image array, it should be noted that the main market for this product is those who want a cheap S-ATA solution without having to upgrade to a new motherboard. It's also cheaper for those wishing to upgrade to RAID from a single parallel drive to buy a single S-ATA drive and combine the two into an array rather than having to purchase two S-ATA drives and find another use for the then-obsolete parallel one.
A quick price search using the links at the top of this page shows that the card can be picked up for as little as £50 here in the UK, and we wouldn't hesitate to recommend it to those wanting a budget RAID solution with an upgrade to S-ATA included at the same time.
We give the Promise FastTrak S150 TX2Plus our Silver Award.
We would like to thank Promise Technology Inc. for the review sample.
All trademarks are the property of their respective owners.