We provide object storage built on hardware and software from some of the world’s best providers, delivering a highly scalable and reliable storage solution that grows by adding compute nodes containing processors, memory, storage and networking. Object storage gives any business the flexibility to add nodes with more storage, processors or memory, and the fun part is that they don’t have to be the same model!
Today there are three main types of data storage: File (typically used in NAS applications), Block (used in RAID, iSCSI and Fibre Channel) and Object (designed to handle billions of files and scale to multi-petabyte systems).
The major cloud providers Google, Microsoft and Amazon all use object storage in their data centres, and when you sign up to use their storage it’s object based.
The drawback of an object store in the early days was its lack of system connectivity: it did not support NFS or SMB. Many storage solutions now also support the Amazon S3 API, allowing a seamless connection from the office to the cloud.
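As a minimal sketch of that S3-API connectivity, here is how an application can point an S3 client at an on-premises store instead of AWS. The endpoint URL, bucket and object names below are placeholder assumptions, and boto3 (the AWS SDK for Python) is one common client that most S3-compatible stores accept:

```python
# Sketch: talking to an S3-compatible on-prem object store.
# The endpoint URL, bucket and key below are hypothetical placeholders.

def s3_client_config(endpoint_url: str) -> dict:
    """Build keyword arguments for an S3 client aimed at a non-AWS,
    S3-compatible endpoint (drop endpoint_url to target AWS itself)."""
    return {
        "service_name": "s3",
        "endpoint_url": endpoint_url,  # on-prem gateway instead of AWS
    }

def upload(endpoint_url: str, bucket: str, key: str, path: str) -> None:
    """Upload a local file to the object store via the S3 API."""
    import boto3  # AWS SDK for Python; accepted by most S3-compatible stores
    client = boto3.client(**s3_client_config(endpoint_url))
    client.upload_file(path, bucket, key)

# Usage (hypothetical endpoint and bucket):
#   upload("https://objects.example.local:9000", "backups",
#          "2024/db.dump", "/tmp/db.dump")
```

Because the same call works against AWS S3, the office-to-cloud connection really is seamless: only the endpoint URL changes.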
By adopting these protocols, any business can now use object-based storage in the data centre to provide a highly scalable, ultra-reliable and fast storage system.
Object Storage Secret
Unlike RAID, where you have a set number of parity drives or the parity is distributed across many drives, an object store breaks the data down into blocks, distributes these across the compute nodes and then creates replicas of those blocks to keep on other nodes. A normal RAID consumes a large amount of disk space for parity and rebuild times can run into weeks; an object store consumes far less disk space and rebuild times are significantly faster. The other benefit of object is that while a failed drive is being rebuilt there is no degradation in performance, and no nail-biting while you hope the rebuild completes before another drive fails.
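As an illustrative sketch (not any particular vendor’s algorithm), the block-and-replica scheme described above can look like this; the block size, replica count and node names are arbitrary assumptions:

```python
# Toy sketch of object-store placement: chunk an object into blocks
# and keep each block's replicas on distinct nodes, so losing any one
# node never loses data. Not a real vendor algorithm.

def chunk(data: bytes, block_size: int) -> list:
    """Split the object into fixed-size blocks."""
    return [data[i:i + block_size] for i in range(0, len(data), block_size)]

def place(blocks: list, nodes: list, replicas: int = 3) -> dict:
    """Map node -> list of (block_index, block); replicas land on
    distinct nodes (requires replicas <= len(nodes))."""
    placement = {n: [] for n in nodes}
    for i, block in enumerate(blocks):
        for r in range(replicas):
            placement[nodes[(i + r) % len(nodes)]].append((i, block))
    return placement

def recover(placement: dict, failed_node: str) -> bytes:
    """Reassemble the object using every node except the failed one."""
    survivors = {}
    for node, items in placement.items():
        if node == failed_node:
            continue
        for i, block in items:
            survivors[i] = block
    return b"".join(survivors[i] for i in sorted(survivors))
```

Because every block lives on three of the four nodes, `recover` can rebuild the full object after any single node failure, and the surviving replicas keep serving reads at full speed while the lost copies are re-created in the background.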
Object Storage Performance
In the early days of an object store, most use cases were backup or archiving, as memory and hard disks were slow and the data had to be written across all the nodes. Today we have high-speed networks (10Gb/s+), DDR5 memory and NVMe flash drives, along with processors from AMD and Intel that deliver massive processing power. With this much performance, object storage can now provide sustained throughput of millions of IOPS.
Data availability is critical to running any application, from hosting a website to serving a database, and it has to perform 24x7x365. Many traditional RAID vendors adopted multiple RAID controllers so that one could handle the load should another fail, and for years this worked reliably. Sure, if a controller failed there would be some degradation in performance while a replacement was installed, but overall it worked. The issue was that as workloads increased, the RAID controllers hit their limits: they couldn’t handle the demand because of their limited memory and system bus bandwidth.
With an object store all of the above disappears. You are now deploying servers with processors, memory, storage and networking, and each node runs very clever software that handles automatic data migration, load balancing and the delicate work of managing large volumes of data. For example, you could start with 3 compute nodes and scale to 20 simply by connecting each new node to the storage network, powering it on and configuring the software to automatically add the new node(s) and distribute the data.
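A toy sketch of that automatic redistribution, assuming a simple hash-modulo placement purely to make the data movement visible (production stores use smarter schemes such as consistent hashing, which move far less data when a node joins):

```python
# Toy illustration of rebalancing when a node is added to the cluster.
# The modulo placement here is a deliberate simplification; real object
# stores use e.g. consistent hashing so only ~1/N of keys move.

import hashlib

def owner(key: str, nodes: list) -> str:
    """Deterministically map an object key to the node that stores it."""
    h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

def rebalance(keys: list, old_nodes: list, new_nodes: list) -> list:
    """Return the keys whose placement changes after scaling out."""
    return [k for k in keys if owner(k, old_nodes) != owner(k, new_nodes)]
```

Scaling from 3 nodes to 4 means some objects migrate to the new node automatically; the software does this in the background with no operator involvement beyond powering the node on.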
As the data is distributed across multiple nodes you can now achieve 99.99999% data availability. To put that into perspective, RAID is normally 99.999%, or 5×9’s, which equates to downtime of 5 minutes and 15 seconds per year, whereas with object storage at 7×9’s or higher the downtime would be just 3.16 seconds!
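The downtime figures above follow directly from the availability percentages; a short Python check, assuming a 365.25-day year:

```python
# Downtime per year implied by an availability figure, reproducing the
# 5-nines and 7-nines numbers quoted above (365.25-day year assumed).

SECONDS_PER_YEAR = 365.25 * 24 * 3600  # ~31,557,600 seconds

def nines(n: int) -> float:
    """Availability expressed as n nines, e.g. nines(5) -> 0.99999."""
    return 1.0 - 10.0 ** -n

def downtime_seconds(availability: float) -> float:
    """Expected unavailable seconds per year at a given availability."""
    return SECONDS_PER_YEAR * (1.0 - availability)

# nines(5) -> ~315.6 s/year (about 5 minutes 15 seconds)
# nines(7) -> ~3.16 s/year
```

Each extra nine cuts the annual downtime by a factor of ten, which is why the jump from 5×9’s to 7×9’s takes you from minutes to seconds.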
If you want to know more about object storage you can read our article here.
If you need a quotation or more information please contact us using the details below.