DDP – Dynamic Drive Pool: the EXR redundant series was demonstrated for the first time at IBC, September 2010 in Amsterdam, and attracted a lot of attention as a fully scalable redundant shared storage system with no single point of failure.
DDP is very important for the shared storage market because it is the only SAN shared storage solution over Ethernet that enables a real-time collaborative workflow with industry-standard applications such as Pro Tools, Fairlight®, Pyramix®, Avid® Media Composer, Apple® Final Cut Pro, Grass Valley® Edius, Adobe® Premiere, and applications from Image Systems® and Autodesk®, to name just a few.
1) Ardis Technologies has developed its own metadata controller, called AVFS, which manages, among other things, the read and write requests from all connected desktops.
AVFS stands for the Ardis Virtual File System. AVFS is tightly integrated with Ardis Technologies' own block-IO-based iSCSI SAN technology in the DDP.
This allowed Ardis Technologies to make AVFS application specific. That is why Pro Tools, Fairlight, Edius and Final Cut Pro users on Mac and PC can simultaneously read from and write to the same DDP drives. DDP also includes Avid emulation software, so an Avid editor feels like they are working with Avid Unity storage, complete with the full green/red bin locking mechanism. This is also why DDP can be integrated with Cineon/DPX-type applications from Image Systems and Autodesk, to name a few.
Ardis Technologies has now taken AVFS one step further. AVFS now also analyzes whether the stream type is audio, video or film and uses an adaptive algorithm to optimize streaming for each of these types.
But it does not end there! AVFS now also analyzes which video type is being used. This comes in very handy, for example, when working with Final Cut Pro, to guarantee the smoothest operation possible.
2) The DDP offers further facilities, for example a defragmentation utility and a Workflow Manager. The defragmentation utility can be used to optimize Cineon/DPX sequences. The optional Workflow Manager can be used to set access rights per user/group at DDP drive resolution.
The model we saw at IBC in the Netherlands was the DDP32DEXR, which consists of two server heads and two DDP16DEXR storage arrays.
DDPxDEXR denotes a scalable redundant DDP storage solution with a configurable number of drives x, where x is between 16 and 1600, for a current capacity of up to 6.4 PB (read: 6.4 petabytes!).
The DDPxDEXR has no single point of failure and is intended for critical environments such as larger post-production and broadcast departments and rental companies. Bandwidth can be as high as required.
3) When capacity is the priority and bandwidth is less of a concern, a cost-effective nearline version of the DDPxDEXR can be delivered, also called the DDP Archive series.
The DDP nearline (Archive) version uses Ardis Technologies' MAID technology to minimize power usage (MAID = massive array of idle disks).
This means disks are spun down to a standby mode when they are not in use; as soon as you attempt to read from or write to those drives, they come alive again.
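The MAID idea described above can be sketched in a few lines. This is purely illustrative: the class, the timeout value and the method names are assumptions for the sake of the example, not the actual DDP implementation, which is not public.

```python
import time

IDLE_TIMEOUT = 300.0  # assumed: seconds of inactivity before spin-down

class MaidDisk:
    """Toy model of one disk in a MAID array: spin down when idle,
    spin up transparently on the next read or write."""

    def __init__(self, name):
        self.name = name
        self.spinning = True
        self.last_access = time.monotonic()

    def access(self):
        """Called on every read/write: wake the disk and refresh the timestamp."""
        if not self.spinning:
            self.spinning = True   # on real hardware this takes seconds
        self.last_access = time.monotonic()

    def maybe_spin_down(self, now=None):
        """Periodic housekeeping: power down disks idle past the timeout."""
        now = time.monotonic() if now is None else now
        if self.spinning and now - self.last_access > IDLE_TIMEOUT:
            self.spinning = False

disk = MaidDisk("array-1/slot-3")
disk.maybe_spin_down(now=disk.last_access + IDLE_TIMEOUT + 1)
assert not disk.spinning   # idle long enough, so the disk was spun down
disk.access()
assert disk.spinning       # the next access wakes it up again
```

The power saving comes from the housekeeping step running across the whole array, so that only the disks actually being touched consume full power.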
A DDP32DEXR, consisting of two DDP server heads and two DDP16DEXR redundant storage arrays, will be available to see and try out at Inter BEE or at any of the DDP Japan premiere resellers from November 2010.
4) So… how does it all work?
Well, the DDP server head contains at least one RAID card, to which the DDP16EXR arrays are attached.
The DDP16EXR is actually an upgraded version of the standard DDP16EX.
Because the DDP16EX usually comes with SATA drives, and only SAS drives have dual connections, an interposer board is added to every SATA drive and an extra expansion board is fitted at the back of the DDP16EX, making it fully redundant in this case. Hence the letter R, for redundancy.
Going back to the DDP server head: inside are two drives running 64-bit Debian Linux with a 2.6.xx kernel, mirrored in RAID 1 to provide redundancy. Each server head also has a redundant power supply.
The logic board has up to 7 PCIe slots, making it possible to fit 5 RAID cards and two quad-port 10 GbE SFP+ network cards. Let's take the DDP16DEXR as an example: knowing that up to 8 DDP16EXR arrays can be connected per RAID controller, in this case we can have up to 40 DDP16EX enclosures with 2 TB drives. That represents a lot of storage and a lot of bandwidth. Tests are currently ongoing with 3 TB drives and even larger expansion arrays.
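As a back-of-the-envelope check of what those numbers imply, here is the raw-capacity arithmetic, assuming 16 drives per DDP16EX enclosure (the figures come from the article; real usable capacity after RAID overhead will be lower):

```python
# Raw-capacity arithmetic for a fully populated server head.
# Assumptions from the article: 5 RAID controllers per head,
# 8 enclosures per controller, 16 drives per enclosure, 2 TB drives.
controllers = 5
enclosures_per_controller = 8
drives_per_enclosure = 16
drive_tb = 2

enclosures = controllers * enclosures_per_controller       # 40 enclosures
raw_tb = enclosures * drives_per_enclosure * drive_tb      # raw terabytes

print(enclosures, raw_tb)  # 40 enclosures, 1280 TB (1.28 PB) raw
```

With the 3 TB drives mentioned as being under test, the same configuration would reach 1920 TB raw.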
5) 2 x DDP16EXR arrays
Two DDP16EXR arrays connected to a RAID controller card give an aggregate bandwidth of approximately 700 MB/s; connecting 4 DDP16EXR arrays in total can increase the bandwidth to as much as 1.2 GB/s, although that figure also depends on many other factors, such as seek time, video resolution, number of users, etc. The SAS cable is also a limitation, as its maximum bandwidth is 1.2 GB/s.
Connecting more than 4 DDP16EXR boxes to the same RAID controller card will not add more bandwidth, only extra capacity. So, if more bandwidth is required, a second RAID controller card can be installed in the DDP server head, creating two separate stacks. Thanks to clever engineering by Ardis Technologies and their virtualization, those two stacks appear as a single volume, and the bandwidth is doubled in this case.
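The scaling behaviour just described can be modelled in a few lines: bandwidth grows as arrays are added to one controller but saturates at the SAS link limit, while a second controller doubles the aggregate. The per-array figure is derived from the article's numbers (700 MB/s for 2 arrays); real throughput depends on seek time, resolution, user count and so on.

```python
# Illustrative model of RAID-controller bandwidth scaling in a DDP head.
SAS_LIMIT_MBPS = 1200   # ~1.2 GB/s maximum over one SAS cable (per article)
PER_ARRAY_MBPS = 350    # assumed: ~700 MB/s for 2 arrays -> ~350 MB/s each

def controller_bandwidth(arrays):
    """Aggregate bandwidth of one RAID controller, capped by the SAS link."""
    return min(arrays * PER_ARRAY_MBPS, SAS_LIMIT_MBPS)

def head_bandwidth(arrays_per_controller, controllers):
    """Independent stacks presented as one volume add their bandwidths."""
    return controllers * controller_bandwidth(arrays_per_controller)

print(controller_bandwidth(2))   # 700 MB/s with 2 arrays
print(controller_bandwidth(8))   # still 1200 MB/s: extra arrays add capacity only
print(head_bandwidth(4, 2))      # 2400 MB/s with a second RAID controller
```

The `min()` against the SAS limit is exactly why a fifth array on the same controller adds capacity but no bandwidth, and why a second controller is the way to scale throughput.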
The second DDP server head is identical to the first one, and it takes over automatically in case the first DDP server head starts to malfunction.
6) How about the cost?
You will be very surprised by the cost, because compared with other similar storage systems on the market, DDP is very affordable. I have asked myself many times how it is possible to make DDP so inexpensive, but the reason is very simple: all software is developed in-house by Ardis Technologies, which means great savings for end users.
Data is probably the most valuable thing in every case, so for anyone who needs this kind of protection, DDP redundant systems are not only the right choice but the only choice.
If your data is important to you, there is only one choice:
DDP – redundant series