A customer of mine, who was already running a vSphere environment on Cisco UCS blades, asked me to expand his environment with a number of ESXi hosts that could run a VMware View environment. To run as many View desktops on each host as possible, we offered them Cisco UCS blades equipped with a Fusion-io card, the UCS 785GB MLC Fusion-io ioDrive2 to be exact.
Fusion-io called their architecture ioMemory. It was announced at the DEMO conference in 2007, and software called ioSAN was shown in 2008. In March 2009, Hewlett-Packard announced the HP StorageWorks IO Accelerator, an adaptation of Fusion-io technology specifically for HP BladeSystem C-Series servers. IBM’s Project Quicksilver, based on Fusion-io technology, showed in 2008 that solid-state technology could deliver very high IOPS. Fusion-io cards require matching firmware and drivers. This is different in many ways from how NVMe PCIe drives behave, but it is something to remember with the ioDrive series. To make sure you have the proper firmware, first create a separate folder and place the firmware file (with the .fff extension) in there.
For this customer I had already deployed a vSphere environment a year earlier, fully based on VMware Auto Deploy, and it was now time to try and add a driver to this Auto Deploy environment, something I had never done before. It wasn’t an easy road, but I wrote down all the steps and hope this helps you should you ever have to do this too.
The Fusion ioMemory SX350 PCIe application accelerator offers a scalable, high-capacity solution, with up to 2.5x to 4x the price/performance of the previous ioDrive2 PCIe card. The Fusion ioMemory SX350 PCIe series provides a cost-effective solution for read-intensive application workloads.
Download the drivers
The company Fusion IO has a very tight policy when it comes to driver downloads. You can only get access to the support site and download the drivers after registering the serial number of the Fusion IO card. This will grant you access to the support site, but only to the drivers for that specific card. Very inconvenient when you’re only the guy implementing the stuff, and especially if the cards are already built into the UCS blades, located in a datacenter far far away.
After some time on the phone with their support department, I was able to get a link to the Cisco Support site and download a big ISO file containing all drivers for the Cisco UCS blades. After unpacking the ISO and browsing through the folders, the Fusion IO driver folder only contains a shortcut to… the VMware website. And there it is, free for download, the Fusion IO driver: VMware website – Fusion IO driver
After you have downloaded the driver zip, place it in a separate directory and unpack it. In the directory in which you unpacked the zip, you will now find the following:
- iomemory-vsl-5X-&lt;version&gt;-offline_bundle-895853.zip <– this is the one we want!
- Subdir: doc
Prepare the Auto Deploy image
For the deployment we need three packages. The first is of course the ESXi image, which can be downloaded from the VMware website; the second is the Fusion IO driver (iomemory-vsl-5X-&lt;version&gt;-offline_bundle-895853.zip); and the third, very often, is the vmware-fdm VIB for VMware HA.
I’m using vc-mgmt.vanzanten.local as the vCenter Server, since that is the one running in my homelab; replace it with your own vCenter Server. Let’s get started:
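The original commands are not shown here, but with PowerCLI this step would look roughly like the following sketch. The local paths and the ESXi depot filename are assumptions; replace them with your own downloads. The vSphere-HA-depot URL is published by the vCenter Server itself.

```powershell
# Connect PowerCLI to the vCenter Server
Connect-VIServer -Server vc-mgmt.vanzanten.local

# Add the ESXi offline bundle and the Fusion IO offline bundle as software depots
# (paths and filenames are examples)
Add-EsxSoftwareDepot C:\AutoDeploy\VMware-ESXi-5.1.0-20130504001-depot.zip
Add-EsxSoftwareDepot C:\AutoDeploy\iomemory-vsl-offline_bundle-895853.zip

# The vmware-fdm (HA agent) depot is served by vCenter itself
Add-EsxSoftwareDepot http://vc-mgmt.vanzanten.local:80/vSphere-HA-depot
```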
Make sure that each of the depots you add returns a depot URL in the response, without errors.
After these three depots have been added, we can combine packages from them into a new image; we just need to find out which ESXi image we would like to use as a base. Let’s take a look:
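A sketch of how to list the available image profiles with PowerCLI; the `Select-Object` is only there to keep the output readable:

```powershell
Get-EsxImageProfile | Select-Object Name, Vendor, CreationTime | Sort-Object Name
```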
You now get a list of all the images available. For this case I pick the image “ESXi-5.1.0-20130504001-standard” and use it to create a new image profile called “ESXi-5.1.0-20130504001-std-FusionIO”, giving it a vendor ID.
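Cloning the profile could look like this sketch; the vendor string is an example, pick your own:

```powershell
New-EsxImageProfile -CloneProfile "ESXi-5.1.0-20130504001-standard" `
  -Name "ESXi-5.1.0-20130504001-std-FusionIO" -Vendor "vanzanten.local"
```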
You should get output confirming that the new image profile has been created.
Now we want to add the driver to this new profile, but for this we need to know the name of the software package that holds the Fusion IO driver. Run this to get a list of all available software packages:
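Listing the packages from all added depots is a one-liner in PowerCLI; sorting by name makes the Fusion IO entry easier to spot:

```powershell
Get-EsxSoftwarePackage | Sort-Object Name
```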
When you browse through this list, you’ll see a line for the Fusion IO driver package.
We now know that the software package is called “block-iomemory-vsl”. Let’s add it:
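Adding the driver package to the cloned profile would look roughly like this (profile name as created above):

```powershell
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.1.0-20130504001-std-FusionIO" `
  -SoftwarePackage "block-iomemory-vsl"
```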
And last but not least, let’s add the VMware HA package, which is called “vmware-fdm”:
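The same cmdlet adds the HA agent package, provided the vCenter HA depot was added earlier:

```powershell
Add-EsxSoftwarePackage -ImageProfile "ESXi-5.1.0-20130504001-std-FusionIO" `
  -SoftwarePackage "vmware-fdm"
```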
What we’ve done so far is create a new ESXi image that can be deployed to an ESXi host, but we haven’t created any deployment rules yet. I have already explained this in earlier posts, so I’ll keep it short. We’ll deploy this image to the ESXi host with IPv4 address 192.168.0.146, assign the image “ESXi-5.1.0-20130504001-std-FusionIO”, put the host in the cluster CL-MGMT and use the host profile named “ViewFusion-Hosts”.
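A sketch of such a rule with `New-DeployRule`; the rule name is an example, and the item list combines the image profile, cluster and host profile mentioned above:

```powershell
New-DeployRule -Name "Rule-ViewFusion" `
  -Item "ESXi-5.1.0-20130504001-std-FusionIO", "CL-MGMT", "ViewFusion-Hosts" `
  -Pattern "ipv4=192.168.0.146"
```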
You might get the following error when doing this:
New-DeployRule : 6/14/2013 19:56:58 New-DeployRule Could not find a trusted signer
Try the following to fix it and then run the rule creation again:
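The exact fix isn’t shown here, but a workaround commonly suggested for this error is to export the customized profile to an offline bundle with signature checking disabled, and re-add that bundle as a depot (the path is an example):

```powershell
Export-EsxImageProfile -ImageProfile "ESXi-5.1.0-20130504001-std-FusionIO" `
  -ExportToBundle -FilePath C:\AutoDeploy\ESXi-FusionIO-bundle.zip -NoSignatureCheck
Add-EsxSoftwareDepot C:\AutoDeploy\ESXi-FusionIO-bundle.zip
```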
This command will take some time to complete. After this we need to push the Deploy rule to the “Active-Set”:
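Assuming the deploy rule was created under the name "Rule-ViewFusion" (an example name), activating it is a single cmdlet; `Add-DeployRule` adds the rule to both the working set and the active set unless you specify `-NoActivate`:

```powershell
Add-DeployRule -DeployRule "Rule-ViewFusion"
```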
We’re almost done after this, just a little fix is needed:
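The original post doesn’t show the fix here, but a usual final step is to check hosts that were already provisioned against the new rule set and remediate them, along these lines:

```powershell
Get-VMHost 192.168.0.146 | Test-DeployRuleSetCompliance | Repair-DeployRuleSetCompliance
```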
When you now look at the console of an ESXi host booting, you should see the line with “block-iomemory” scroll past.
I have posted about FusionIO a couple of times: RAID vs SSD vs FusionIO and Testing FusionIO: strict_sync is too strict…. The problem was that FusionIO did not provide durability, or the results were too bad in strict mode, so I lost interest in FusionIO for a couple of months. But I should express respect to the FusionIO team: they did not ignore the problem, and recently I was told that with the latest drivers FusionIO provides durability even without strict_mode (see http://www.mysqlperformanceblog.com/2009/06/15/testing-fusionio-strict_sync-is-too-strict/#comment-676717). While I do not fully understand how it works internally (FusionIO does not have RAM on board and takes memory from the host, so there is always a cache kept in OS memory), I also do not have a reason to doubt it (but I will test that some day…). So I decided to see what IO performance we can expect from FusionIO in different modes.
First, a few words about the card itself. I have the 160GB SLC ioDrive, and a simple Google search for “FusionIO” leads you to the Dell shop (so I am not disclosing any private information here). Via that link you can get the card for $6,569.99, which gives us ~$41/GB. But we should talk about real usable space. To provide “steady” write performance, the card comes pre-formatted with only 120GB available (25% of the space is reserved for internal needs), so the real cost jumps to ~$55/GB. But even that is not enough to get maximal write throughput: if you want to get the announced 600MB/s, you need to use even less space (you will see that in the benchmark results). As you can see from the description on the Dell site, you should expect “write bandwidth of 600 MB/s and read bandwidth of 700 MB/s”; let’s see what we get in real life.
There are a lot of numbers, so let me put my conclusions first; later I will back them up with the numbers.
- Reads: you can really get 700MB/s read bandwidth, but you need 32 working threads for that. For a single thread I got 140MB/s, and for 4 threads, 446MB/s
- Writes (random writes) are a more complex story:
- - with 100GB filled on a 125GB partition, I got 316.15MB/s for 4 threads; for 1 thread, 131MB/s. However, for 8 threads the result drops to 162.96MB/s. The latency is also interesting. For 1 thread, 95% of requests get a response within 0.11ms (yes, that is 110 microseconds!), but for 8 threads it is already 0.85ms, and for 16 threads, 1.55ms (which is still very good). And something happens with 32 threads: response time jumps to 19.71ms. I think there is some serialization issue inside the driver or firmware
- - you can get about 600MB/s write bandwidth on a 16GB file with 16 threads. As the file size increases, write bandwidth drops.
- - I got strange results with sequential writes. As the number of threads increases, write bandwidth drops noticeably. Again, it is probably some serialization problem; I hope FusionIO addresses this as well
Now to the numbers and benchmarks. All details are available here. I used the sysbench fileio benchmark for the tests (the full script is available under that link). The block size is 16KB and I used directio mode.
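The exact script is behind the link above, but a sysbench (0.4-era syntax) random-write run with these settings would look roughly like the sketch below; the file size, thread count and run time are examples, not the values from the full script:

```shell
# prepare the test files once
sysbench --test=fileio --file-total-size=100G prepare

# random writes: 16KB blocks, O_DIRECT, 4 threads
sysbench --test=fileio --file-test-mode=rndwr \
  --file-total-size=100G --file-block-size=16K \
  --file-extra-flags=direct --num-threads=4 \
  --max-time=180 --max-requests=0 run

# remove the test files afterwards
sysbench --test=fileio --file-total-size=100G cleanup
```

Changing `--file-test-mode` to rndrd, seqrd or seqwr gives the read and sequential variants of the same test.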
Let me show some graphs; the numeric results are available in the full report.
For reference I also show results for RAID10 over 8 disks, 2.5″, 15K RPM each.
From this graph we see that to fully utilize the FusionIO card we need 4 or more working threads.
Again, 4 threads seem interesting here: we get a peak at 4 threads, and with more, bandwidth drops. Again I assume it is some serialization / contention issue in the driver or firmware.
With sequential reads it is the same story with 4 threads.
This is what I mentioned about sequential writes: instead of increasing, bandwidth drops when we have more than 1 working thread, and we see only 130MB/s max throughput.
Write bandwidth vs filesize:
So here I tested how file size (fill factor) affects write performance (8 working threads).
The point here is that you can really get the promised 600MB/s write throughput, or close to it, but only on small file sizes (<=32GB). As the size increases, throughput drops noticeably.
OK, to finalize the post: I think FusionIO provides really good IO read and write performance. If your budget allows it and you are fighting IO problems, it could be a good solution. Speaking about budget, $6,569.99 may look like quite a significant price (and it is), but we may also look at the consolidation factor: how many servers we can replace with a single card. A rough estimate says it may be 5-10 servers, and that many servers cost much more.
Entry posted by Vadim