Dell Compellent
SC4020 (Baby Compellent)/Compellent Enterprise Manager Pre-release Evaluation
The Compellent SC4020 is Dell's answer to the small-footprint, high-performance enterprise SAN solutions being offered by HP (3PAR), IBM (V Series), NetApp (E and EF Series), and a number of specialty storage vendors like Violin (6000 Series), as well as multinationals like Fujitsu with their Eternus Series.
Dell bought Compellent back in 2011 to round out their storage offerings in the enterprise-level SAN market. In the years since, a considerable amount of work has been done at the firmware optimization level, as well as on the management software, but little in the way of physical changes related to power or footprint optimization. The SC4020 is a near top-to-bottom redesign that applies all the lessons learned on the firmware/software front and manages to meet or even beat the footprint of HP's excellent 3PAR 7200 by eliminating the Service Processor server requirement.
Typically I go through the process of configuring any device
I test from the shipping box out. In this case Dell dispatched a storage
engineer to set us up.
While I appreciate that Dell wants to ensure the setup goes smoothly and that the configuration is done properly for a good testing environment, I'm forced to put that on the negative side of the ledger. The engineer was clearly competent; I had actually met him on a previous installation of a full-sized Compellent for another demonstration, so his comfort level with the hardware is obviously high. The bottom line was that the installation took almost three hours. By way of comparison, the very first time our company had ever attempted a configuration on a 3PAR, I went from box to operational in three hours. Subsequent installations typically take 30-40 minutes. In this case, the device was mounted, cabled, powered and networked before the engineer stepped through the door. In addition, he had to make no fewer than four calls to his engineering support for various configuration and software issues. In the end, I don't feel confident that I could provision the device without assistance, even after watching it done by a pro. That does not bode well for field installations.
So what we have here is a Fibre-fabric-connected SAN with 18 x 300 GB drives (11 15K SAS SFF and 6 10K SAS SFF) in a 2U chassis. The chassis has redundant power supplies and redundant controllers offering an array of connection options, including 10 Gb iSCSI ports and 4 x 6 Gbps SAS ports for expansion, in addition to the eight (four per controller) 8 Gbps FC ports. One thing I found interesting about the "best practice" cabling direction is that, in addition to the backplane connectivity between the controllers, Dell suggests using two of the 6 Gbps SAS ports as a redundant interconnect. I was told that it will work without that connection, but it's not recommended.
The initial impression is of a management package that is very complete, with a slick, clean interface and a logical progression of management tasks and functions.
Dell has done a fair job of building all of their recent interfaces with some common characteristics, starting with color. I'm actually a fan of the simple black and white: it has good contrast for easy readability, the focus of the moment is simple to deduce, and the lines of delineation between function buttons are clear. The left-side menu for primary task shortcuts is also evident, similar to the latest networking management and operations software we have seen from Dell. By working our way through the feature options in the interface, we can also show all the features we would expect to see in an enterprise SAN solution.
Under "Storage", the top-level menu item, we get what we would expect to see on opening a management interface: a summary page giving an overview of the status, a short history, and a storage summary, including all the objects under management and all current alerts.
As we get deeper into the menus, we find a plethora of monitoring and management actions, plus views into performance and health that are incredibly granular and configurable.
Charts can be set to custom scales based on traffic in MB/sec and performance metrics like IO/sec and latency at the volume level, the provisioned server, or even a specific port, allowing precise identification of the primary sources of resource utilization.
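That kind of per-object drill-down boils down to top-talker analysis. As a rough sketch of the idea (the sample data and field names here are my own, not anything exposed by CEM's actual interface):

```python
# Hypothetical sketch: rank storage objects by the average value of a
# metric (IO/sec, latency, etc.) to find the primary resource consumers,
# mirroring what the per-volume/per-port charts let you eyeball.

def top_talkers(samples, metric="iops", n=3):
    """Return the n objects with the highest average value for a metric.

    samples: list of dicts like {"object": "vol01", "iops": 1200, "latency_ms": 4.2}
    """
    totals, counts = {}, {}
    for s in samples:
        obj = s["object"]
        totals[obj] = totals.get(obj, 0.0) + s[metric]
        counts[obj] = counts.get(obj, 0) + 1
    averages = {obj: totals[obj] / counts[obj] for obj in totals}
    return sorted(averages.items(), key=lambda kv: kv[1], reverse=True)[:n]

samples = [
    {"object": "vol01", "iops": 1200, "latency_ms": 4.2},
    {"object": "vol02", "iops": 300,  "latency_ms": 1.1},
    {"object": "vol01", "iops": 1400, "latency_ms": 5.0},
    {"object": "vol03", "iops": 900,  "latency_ms": 2.7},
]
print(top_talkers(samples, metric="iops", n=2))
```

The same ranking works for latency or throughput by swapping the metric name, which is essentially what changing the chart's metric selector does.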
With both 15K and 10K drives available, we have the intelligent tiering options that are one of the big selling points of the Compellent line.
I haven't utilized that functionality, due to its lack of practical application in Bally systems, where moving components of the system would run counter to the prescribed storage configuration, but for static or predictable IT environments it could be of value.
There are a number of space-saving strategies that can be applied, like variably assigned redundancy.
Thin provisioning is on by default, but this is the one area where the management interface doesn't offer any obvious insight or configuration options.
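For readers unfamiliar with the concept, thin provisioning means a volume reports its full provisioned size to the host but only consumes backing store for blocks actually written. A minimal illustration of that accounting, with invented names (this is not Compellent's implementation):

```python
# Minimal illustration of thin-provisioning accounting: provisioned
# capacity is what the host sees; consumed capacity grows only as
# blocks are written. Block granularity of 1 GB is for simplicity.

class ThinVolume:
    def __init__(self, provisioned_gb):
        self.provisioned_gb = provisioned_gb
        self.written_blocks = set()

    def write(self, block_index):
        if not 0 <= block_index < self.provisioned_gb:
            raise IndexError("write past end of volume")
        self.written_blocks.add(block_index)

    @property
    def consumed_gb(self):
        # Physical space actually used on the back end
        return len(self.written_blocks)

vol = ThinVolume(provisioned_gb=500)
for block in (0, 1, 2, 99):
    vol.write(block)
print(vol.provisioned_gb, vol.consumed_gb)   # 500 provisioned, 4 consumed
```

The gap between provisioned and consumed space is exactly what an admin would want the interface to surface, which is why the lack of visibility here is worth noting.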
Provisioning to servers is a simple mapping process once the fabric is configured. In Compellent, the fabric is set so that the physical WWNs point at virtual ports, for enhanced failover properties.
Mapping is as simple as selecting the Map Volume to Server
option with the volume selected in the Storage header.
Another function that I found interesting from a management/monitoring view was CEM's capability to connect directly to vCenter and display not only the servers that were provisioned with volumes from the SC4020 but all servers in the environment at large, their status, related Datastores, their sizes, and even MPIO settings.
Replication is one of the prime features, but having only a source device, it isn't possible for me to test it. That is unfortunate, as this is another of the big selling points of the device, and it appears to be simple and robust. Another time, perhaps.
Monitoring is really just access to all the available logging, in simple text in most cases, and it is quite verbose.
Threshold Alerts are just what you would expect: various threshold values that can be configured to fit the user's environmental conditions, based on IO usage, IO/sec, percent of storage used in total or by a single volume, and numerous others, even QoS node availability for replication services.
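The underlying model is simple: each alert pairs a metric with a limit, and any reading over its limit fires. A sketch of that evaluation loop, with field names that are my own shorthand rather than CEM's configuration schema:

```python
# Sketch of threshold-alert evaluation: compare current readings against
# configured limits and collect the alerts that fire. Metric names and
# threshold values are illustrative only.

def evaluate_alerts(readings, thresholds):
    """readings: dict of current metric values, e.g. {"io_per_sec": 9500}
    thresholds: list of (metric, limit) pairs; an alert fires when the
    reading exceeds its limit."""
    return [(metric, readings[metric], limit)
            for metric, limit in thresholds
            if metric in readings and readings[metric] > limit]

thresholds = [("io_per_sec", 10000), ("pct_used", 80), ("latency_ms", 20)]
readings = {"io_per_sec": 9500, "pct_used": 83, "latency_ms": 12}
fired = evaluate_alerts(readings, thresholds)
print(fired)   # only pct_used exceeds its limit
```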
Then there is one of my favorite features, Chargeback. Dell
has incorporated a chargeback calculator right into the management interface.
This is configurable to calculate chargeback not only on allocated space but also on utilization for replays and use of the fast tier in intelligent auto tiering, charged to specific departments set by the admin, and reports can be automatically exported in PDF, HTML, XML, text, CSV or Excel. Pretty cool.
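The chargeback arithmetic itself is straightforward to sketch. The rates, field names and departments below are invented for illustration and have nothing to do with CEM's actual schema:

```python
# Hedged sketch of per-department chargeback along the lines described:
# allocated space billed at a base rate, plus a premium for capacity held
# in the fast tier. All rates and names are invented for illustration.

def chargeback(departments, base_rate_per_gb, fast_tier_premium_per_gb):
    """departments: {name: {"allocated_gb": x, "fast_tier_gb": y}}
    Returns a per-department cost report in the same currency as the rates."""
    report = {}
    for name, usage in departments.items():
        cost = (usage["allocated_gb"] * base_rate_per_gb
                + usage["fast_tier_gb"] * fast_tier_premium_per_gb)
        report[name] = round(cost, 2)
    return report

departments = {
    "finance": {"allocated_gb": 2000, "fast_tier_gb": 200},
    "slots":   {"allocated_gb": 5000, "fast_tier_gb": 1500},
}
print(chargeback(departments,
                 base_rate_per_gb=0.10,
                 fast_tier_premium_per_gb=0.25))
```

In practice the value of the built-in feature is less the arithmetic than the automated, scheduled export of the resulting report.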
The automated reports in the Reports function can be set to export custom or canned reports in all the same formats as Chargeback, saving a little administrative time and prompting regular system review.
Each of the primary menus has anywhere from one to four submenus, which can have trees extending the actionable items to dozens of possible procedures.
This is a mature, comprehensive software package that shows the features of the device to great advantage. It is laid out well and gives the administrator a multitude of options for management, monitoring and resource allocation. Add in Chargeback and the vCenter connectivity and you get a solid value-added software package.
On to performance.
I will preface by saying that the SC4020 continues Dell's approach to flash tiering and auto tiering, which may be a big factor for some organizations, but for applications in the always-on world of casinos it's really not a factor. With the new blend of write-optimized SLC SSDs and read-optimized MLC SSDs, Compellent gets SSD speed at a price point approaching that of spinning disk. I don't want to dwell too much on this technology, as it doesn't apply to our use case at this time, but the approach is pretty impressive, so here is an outline.

Auto tiering is based on a process called Data Progression. In traditional spinning-disk arrays the process runs once a day: data is marked for migration at a later time based on its active or passive nature. In the blended SSD array, Data Progression immediately migrates passive data (snapshots) to lower-tiered storage, maximizing tier 0 for active data.
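The core idea can be sketched in a few lines. This is my own toy model of the policy as described, not Dell's implementation; tier numbers and page structure are invented:

```python
# Toy model of the Data Progression idea in the blended-SSD case:
# passive pages (e.g. snapshot/replay data) are demoted out of tier 0
# immediately, while active pages are kept on (or promoted back to)
# the fast write-optimized tier. Structure is illustrative only.

def progress(pages):
    """pages: list of dicts {"id": ..., "active": bool, "tier": int}.
    Mutates pages in place: demote passive tier-0 pages, promote
    active pages from lower tiers."""
    for page in pages:
        if page["tier"] == 0 and not page["active"]:
            page["tier"] = 1            # demote snapshot/passive data
        elif page["tier"] > 0 and page["active"]:
            page["tier"] = 0            # keep hot data on the fast tier
    return pages

pages = [
    {"id": "a", "active": True,  "tier": 0},
    {"id": "b", "active": False, "tier": 0},   # a replay page
    {"id": "c", "active": True,  "tier": 1},
]
progress(pages)
print([(p["id"], p["tier"]) for p in pages])   # b demoted, c promoted
```

The difference from the daily spinning-disk cycle is simply when this loop runs: continuously on write/snapshot events rather than on a once-a-day schedule.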
If you are in the performance/price-point sweet spot to utilize this feature it could well make the difference in your storage choice. Keeping all that and the fact that we have a limited number of traditional spinning disks in our test unit in mind, let’s get some results.
To measure performance, I wanted to put some real-world workloads on it and do a direct comparison with a similarly configured competitor, the 3PAR 7200.
What we have here is a system with varying reads/writes to a SQL DB via script over a period of one hour. Both arrays host 4 VMs, which in turn host the system. One variance is that the 3PAR has a full complement of 24 15K SAS drives, while the Compellent has a mix of 10K and 15K drives. To mitigate the configuration deficiency, the guest OSs on the Compellent will run on the 10K drive set and the SQL installation and databases will run on the 15K drive set. It won't be a perfect apples-to-apples comparison, but it will get us in the realm.
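For a flavor of what such a driver script looks like, here is a minimal mixed read/write generator. This is not the script used in the review; it uses sqlite3 purely so the example is self-contained, where the actual test ran against a SQL Server database:

```python
# Minimal mixed read/write workload generator of the kind described:
# issue a configurable blend of reads and writes against a table for a
# fixed duration, then report the operation counts. Illustrative only.
import random
import sqlite3
import time

def run_mixed_workload(conn, duration_s, read_fraction=0.7):
    cur = conn.cursor()
    cur.execute("CREATE TABLE IF NOT EXISTS t (k INTEGER PRIMARY KEY, v TEXT)")
    reads = writes = 0
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        if random.random() < read_fraction:
            cur.execute("SELECT v FROM t WHERE k = ?", (random.randint(1, 1000),))
            cur.fetchall()
            reads += 1
        else:
            cur.execute("INSERT OR REPLACE INTO t (k, v) VALUES (?, ?)",
                        (random.randint(1, 1000), "x" * 64))
            writes += 1
    conn.commit()
    return reads, writes

conn = sqlite3.connect(":memory:")
reads, writes = run_mixed_workload(conn, duration_s=0.2)
print(reads, writes)
```

Varying `read_fraction` over the run is what produces the "varying reads/writes" profile; the array-side counters then show how each box absorbs the shifting mix.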
3PAR
The writes should look the same in each case
Write requests should also be the same
The 3PAR beats the Compellent in write latency by a narrow margin of 5 ms at peak. We can't discount the configuration differences here; it's likely that we would have had a draw on write latency with 24 15K drives on each side.
Read latency: the 3PAR wins this round as well, but again I think we can chalk that up to its full complement of 15K drives versus the blend of tier 1 and tier 2 drives in the Compellent.
I went on to test several scenarios with similar results. I’ll just bottom line this.
From the metrics I could gather, the Compellent is in the running from a performance standpoint, and I believe it would have been close to a draw with the 3PAR in most practical performance tests. The management software is top notch, and the tiering features are among the most progressive and innovative I've seen. Initial configuration is difficult and time consuming, and the price point is a bit of a moving target due to all the configuration options available, which is both good and bad. The Compellent SC4020 is a sound, well-thought-out, solid device. Dell has the potential to go head to head with the competition in this segment with the footprint reduction and all the value-added features that come stock with the 4020.
Most of the deficiencies I've seen can be put squarely on the "this is a BETA device" or the "not an apples-to-apples comparison" side of the board, and I would expect most will be addressed by release time.
I would consider this a viable alternative to the storage devices we currently deploy.