Fujitsu RX2540 M6 Server

Servers for a Data Driven World

The new Fujitsu RX2540 M6 is designed to be a high-performance, small-footprint server. It supports dual sockets, providing the ideal balance of density and scalability using the new EDSFF drives. Processing power comes from the new 3rd Generation Intel® Xeon® Scalable Processors with up to 40 cores per CPU, all housed in a dense 2U form factor.

Fujitsu RX2540 M6 Specification Overview

The Fujitsu RX2540 M6 supports up to 64 EDSFF drives of 4TB each, providing 256TB of raw storage. Alternatively, you can choose 24x 2.5″ SAS/SATA/NVMe drives of up to 30.72TB each, or 12x 3.5″ SAS/SATA drives for up to 144TB of capacity.

Choose 1 or 2 processors from the Intel® Xeon® Silver 43xx, Intel® Xeon® Gold 53xx, Intel® Xeon® Gold 63xx or Intel® Xeon® Platinum 83xx families.

Memory scales to 8TB across up to 32 DIMM slots (16 DIMMs per CPU: 8 channels with 2 slots per channel) of DDR4 3200MHz memory, or up to 12TB using Intel® Optane™ persistent memory.
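As a rough illustration of how those memory figures are reached, the short Python sketch below multiplies out the channel layout; the 256GB module size is our own assumption for the calculation and is not part of the published specification.

# Hypothetical sketch: where the 32-DIMM / 8TB DDR4 figure comes from.
cpus = 2
channels_per_cpu = 8
dimms_per_channel = 2
dimm_capacity_gb = 256                                            # assumed 256GB modules

total_dimm_slots = cpus * channels_per_cpu * dimms_per_channel    # 32 slots
max_ddr4_tb = total_dimm_slots * dimm_capacity_gb / 1024          # 8.0 TB

print(f"{total_dimm_slots} DIMM slots, up to {max_ddr4_tb:.0f}TB of DDR4")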

Visit our web page for more details: fujitsu-primergy-rx2540-m6

Fujitsu RX2530 M6 Server

Servers for a Data Driven World

The new Fujitsu RX2530 M6 is designed to be a high-performance, small-footprint server. It supports dual sockets, providing the ideal balance of density and scalability using the new EDSFF drives. Processing power comes from the new 3rd Generation Intel® Xeon® Scalable Processors with up to 40 cores per CPU, all housed in a dense 1U form factor.

Fujitsu RX2530 M6 Specification Overview

The Fujitsu RX2530 M6 supports up to 32 EDSFF drives of 4TB each, providing 128TB of raw storage. Alternatively, you can choose 10x 2.5″ SAS/SATA/NVMe drives of up to 15.36TB each, or 4x 3.5″ SAS/SATA drives of up to 12TB capacity.

Choose 1 or 2 processors from the Intel® Xeon® Silver 43xx, Intel® Xeon® Gold 53xx, Intel® Xeon® Gold 63xx or Intel® Xeon® Platinum 83xx families.

Memory scales to 4TB across up to 32 DIMM slots (16 DIMMs per CPU: 8 channels with 2 slots per channel) of DDR4 3200MHz memory, or up to 10TB using Intel® Optane™ persistent memory.

Visit our web page for more details: fujitsu-rx2530-m6-server

Drive Writes Per Day – DWPD

DWPD (Drive Writes Per Day) Explained

SSDs are all built from NAND flash, and they are priced largely according to their DWPD (Drive Writes Per Day) rating, so a consumer SSD has a far lower DWPD than an enterprise drive. DWPD is largely determined by how many bits of data each NAND cell stores.

Most of today’s storage arrays use non-volatile NAND flash memory supplied as SSD drives. There are now four types of NAND: single-level cell (SLC), multi-level cell (MLC), triple-level cell (TLC) and quad-level cell (QLC) technology.

  • SLC stores one bit per cell. It has the longest endurance but is significantly more costly to produce at higher capacities. Enterprise Class – 25 DWPD
  • MLC uses two bits per cell and is the most common type of SSD used by flash storage vendors. Enterprise Class – 10 DWPD
  • TLC uses three bits per cell. It has lower endurance but holds larger capacities and can be produced at lower cost. Consumer Class – 3 DWPD
  • QLC uses four bits per cell. These flash technologies have the lowest endurance, highest capacities and lowest cost. Consumer Class – 1 DWPD

Example
We require 10TB of flash storage.

1TB MLC-10-DWPD with 5-year warranty – £500 per drive x 10 drives = £5,000

500GB TLC-3-DWPD with 3-year warranty – £150 per drive x 20 drives = £3,000

10 drives x 1TB x 10 DWPD x 5 years x 365 days = 182.5PB can be written to the flash

20 drives x 500GB x 3 DWPD x 3 years x 365 days = 32.85PB can be written to the flash
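A quick way to sanity-check endurance figures like these is to script the arithmetic. The Python sketch below simply reproduces the formula used above (drives x capacity x DWPD x warranty days); the drive counts, capacities and DWPD ratings are the illustrative figures from this example, not vendor data.

def lifetime_writes_pb(drives, capacity_tb, dwpd, warranty_years):
    """Total data (PB) that can be written across all drives over the warranty."""
    return drives * capacity_tb * dwpd * warranty_years * 365 / 1000

mlc = lifetime_writes_pb(drives=10, capacity_tb=1.0, dwpd=10, warranty_years=5)
tlc = lifetime_writes_pb(drives=20, capacity_tb=0.5, dwpd=3, warranty_years=3)

print(f"MLC option: {mlc:.2f}PB over the warranty")   # 182.50PB
print(f"TLC option: {tlc:.2f}PB over the warranty")   # 32.85PB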

As you can see from the example, both options allow a considerable amount of data to be written during the lifetime of the SSDs, and the TLC option delivers the same 10TB for roughly 40% less money, on the understanding that after 3 years it will be worn out.

What we don’t show is drive performance, which is generally faster with fewer bits per cell.

MLC – for every write operation the drive must perform 2 erase passes and 2 writes, one for each bit

TLC – for every write operation the drive must perform 3 erase passes and 3 writes, one for each bit

Wear Levelling

Flash stores a data bit by applying an electrical current to the silicon, and this causes wear: after a certain number of program/erase cycles the flash wears out. That limit might be 10,000, 100,000 or 1,000,000 writes depending on the type of flash storage used. Manufacturers overcome this problem in several ways: sophisticated algorithms track how many times each cell has been used and automatically re-map those blocks to another portion of the flash storage, and over-provisioning sets aside extra physical flash capacity for background operations, resulting in better write performance and higher endurance.

There are three types of Wear Levelling (a simplified sketch of the dynamic approach follows this list):

  • No Wear Levelling – nothing manages cell usage, so heavily used cells simply wear out
  • Dynamic Wear Levelling – new data is written evenly across the whole of the flash
  • Static Wear Levelling – tracks how many times each cell has been written and also moves existing (static) data so that all cells wear evenly
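To make the dynamic approach concrete, here is a much-simplified Python sketch of the idea; the block count and write loop are purely illustrative and real SSD firmware is far more sophisticated. The controller tracks program/erase counts per block and always sends the next write to the least-worn free block; static wear levelling additionally relocates data that rarely changes.

# Toy model of dynamic wear levelling: writes always target the least-worn free block.
erase_counts = {block: 0 for block in range(8)}   # 8 flash blocks, all unworn
free_blocks = set(erase_counts)

def allocate_block():
    """Pick the free block with the fewest program/erase cycles."""
    block = min(free_blocks, key=erase_counts.get)
    free_blocks.remove(block)
    return block

def erase_block(block):
    """Erase a block, record the wear and return it to the free pool."""
    erase_counts[block] += 1
    free_blocks.add(block)

for _ in range(20):          # simulate 20 write/erase cycles
    erase_block(allocate_block())

print(erase_counts)          # wear is spread evenly across all 8 blocks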

Higher Capacity SSD
To achieve higher-capacity SSDs, the silicon die uses a much smaller process, perhaps 20nm or even lower. For endurance this is not so good: we can pack far more cells into a given area, but the transistors and gates are much closer together and will not provide the endurance of a lower-capacity 30nm SSD.

Summary
Flash storage arrays can provide far higher performance than a comparably priced disk subsystem. Where flash pricing differs is in how you intend to use it. If your application is write-heavy, you should consider SLC or MLC storage; if it is an even split of reads and writes, MLC or TLC can be considered.

In essence, it all comes down to workload: get this right and you could save a considerable sum of money while the flash storage delivers the performance you need!

If you would like to know more, please contact us using the details below.

Lenovo ThinkStation P920

Lenovo P920 ThinkStation – Advanced Graphic Workstation

Customise a Lenovo ThinkStation P920 for the performance you need, including class-leading I/O with support for up to 3x NVIDIA® Quadro® GP100 GPUs. Superior design features on this top-of-the-line workhorse include Flex Trays that hold up to two drives per bay for total versatility and patented Tri-Channel Cooling for enhanced reliability. Ideal for the highest-performance workflows including rendering, simulation, visualisation, deep learning and artificial intelligence across industries.

Lenovo ThinkStation P920 Features

Power to burn
The Lenovo ThinkStation P920 boasts the unbeatable performance of the latest Intel® Xeon® processors and NVIDIA® Quadro® GP100 and P6000 GPUs — an industry exclusive. That means it has the power and speed to handle your workload with ease — including the toughest ISV-certified applications.

Faster memory, bigger storage
New, faster 2666 MHz DDR4 memory — up to 2 TB — has more bandwidth and capacity than the previous generation, for a quicker response. And bigger, faster storage options include an onboard, RAID-capable M.2 PCIe solution, the capacity to handle up to 60 TB of HDD storage, and support for up to 12 drives. That means the P920 can handle even the most demanding workloads.

Unparalleled versatility
The Lenovo ThinkStation P920 features a superior modular design, including Flex Trays that hold up to two drives per bay. Configure only the components you need for the ultimate in usability and savings.

Built to last

Patented Tri-Channel Cooling ensures each component receives cool air, using a unique air baffle. The ThinkStation P920 uses fewer fans and stays cooler than the competition, so it keeps on running, with less downtime and a better bottom line.

Easy to enhance
Intuitive red touch points guide you to quick and easy component changes — even to the motherboard and swappable power supply — without having to use a single tool. And best-of-breed cable management means no stray wires or plugs, and superior serviceability.

How can we help?

As a fully accredited Lenovo partner we can provide support, configuration advice and evaluation systems for you to test in your environment.

Lenovo P920 Datasheet

Lenovo ThinkSystem DS6200

All Flash or Hybrid Storage Array

Speed, flexibility and capacity

Designed for I/O intensive applications, the Lenovo ThinkSystem DS6200 SAN array offers breakthrough performance and scale at best-in-class pricing, along with 99.999 percent availability.

The easy-to-use management interface makes complex administrative storage tasks simple, including setup in less than 15 minutes with the Rapid Deployment Wizard. The same interface is used across the DS Series family allowing for flexible IT administration.

The Lenovo ThinkSystem DS6200 offers connectivity choices and impressive storage capacities. Choose between 12Gb SAS, 8/16Gb Fibre Channel (FC), or 1/10Gb iSCSI to integrate into your existing network. The DS6200 can hold up to 24x 2.5-inch HDDs and SSDs internally, and supports up to 240 drives in total. Both LFF and SFF enclosures are supported in the same array, and up to nine expansion units can be added to the DS6200, providing a balance of performance and capacity.
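As a worked example of those limits, a fully populated small-form-factor configuration works out as 24 internal drives plus nine expansion enclosures of 24 drives each, i.e. 24 + (9 x 24) = 240 drives.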

For ultimate flexibility, the DS Series supports replication between its new 12Gb platform and first generation S Series systems. This enables you to move your high-performance workloads to the faster-performing DS Series and redeploy the older models as data lakes or replication targets.

Lenovo ThinkSystem DS6200 Features Highlights

  • First-to-market midrange performance SAN at entry-level pricing
  • Rapid Data Placement Engine provides accelerated performance and 99.999 percent availability, for “always on” data access
  • Superior cost/GB without compromising performance
  • Simple storage management for fast setup and easy administration with the Rapid Deployment Wizard
  • NEBS-compliant and ruggedized components maximize uptime
  • Supercapacitor backup provides longer duration and less servicing than battery, for greater uptime
  • Other key features include Rapid RAID Rebuild, Rapid Tiering, asynchronous replication, multiprotocol support, Active/Active controllers, and Lenovo XClarity integration

Uncompromising performance

The Lenovo ThinkSystem DS6200 delivers higher IOPS and low latency through the Rapid Data Placement Engine, at a fraction of the cost of competing solutions. Designed for speed within a cost-optimized architecture, the DS6200 delivers superior cost/GB without compromising performance. It’s equipped with storage tiering, high availability, thin provisioning, and other enterprise-class features. This provides your business with a solid foundation to grow, while delivering the best price for performance in its class.

  • 375,000 IOPS
  • SAS, iSCSI, FC, Hybrid
  • Up to 240 2.5″ SFF/3.5″ LFF

You no longer need to decide between faster access to information and lowering the cost of your IT infrastructure. The Lenovo ThinkSystem DS6200 allows you to save money while maintaining operational excellence.

How can we help?

As a fully accredited Lenovo partner we can provide support, configuration advice and evaluation systems for you to test in your environment. If you would like to know more please phone us on 01256 331614 or complete our online form.

Lenovo ThinkSystem DS6200 Datasheet

Lenovo ThinkSystem Servers

88 Performance World Records

Lenovo ThinkSystem Servers are the first range of servers designed by Lenovo. The range has been completely redesigned so that components such as power supplies, drives and trays are common across all models. In the past, each IBM/Lenovo server had different drive part numbers for the same drive! Introducing common parts across the entire range makes support and maintenance far easier and less complex.

The Lenovo ThinkSystem servers currently hold 88 Performance World Records as of November 8, 2017. These World Records are in the areas of Business Processing, Big Data Analytics, Infrastructure Virtualization, Server Side Java and General and Technical Computing.

46 of these are new #1 benchmarks and 42 are maintained as #1 benchmarks.

These world-record benchmark results were achieved with the ThinkSystem SR950 and SR650 rack servers, shown below, as well as with the ThinkSystem SD530 dense server, the SR630 and SR850 rack servers, and the SN550 blade server. All ThinkSystem servers are based on the Intel Xeon Scalable Family of processors.

Business Processing

The ThinkSystem SR950 and SR650 have five world record benchmarks for Business Processing.

TPC-E benchmarks

The Lenovo results:

  • Overall Performance and 4-Socket Price/Performance – Two world records: The Lenovo ThinkSystem SR950 delivered the best performance result ever (all servers) and the best ever 4P price/performance TPC-E benchmark result.
  • 2-Socket Performance and Overall Price/Performance – Two world records: The Lenovo ThinkSystem SR650 holds the best performance result ever on the 2P TPC-E performance and overall (all servers) price/performance benchmark result.

SAP Sales and Distribution (SAP SD) Benchmark

The Lenovo result:

  • 4-Socket Performance – World Record on Windows: The ThinkSystem SR950 delivered the best 4P performance result on Windows in the SAP Sales and Distribution Benchmark.

Big Data Analytics – 37 records

The Lenovo ThinkSystem SR950 and SR650 hold 37 world record Big Data Analytics benchmarks.

SAP HANA (BWoH)

The Lenovo result:

  • 4-Socket Performance – Six world records: The Lenovo ThinkSystem SR950 holds 6 performance world records with the 4-socket SAP HANA BWoH benchmark. This includes data load, query throughput and query runtime in two different data volumes.

STAC-M3 Shasta Suite

The Lenovo results:

  • 4-Socket performance – 16 world records: The Lenovo ThinkSystem SR950 holds 16 world records for the “big memory” STAC-M3 benchmark. This combines the SR950’s new benchmark results with those previously published on July 11, 2017 that remain world records.
  • 2-Socket performance – 15 world records: The Lenovo ThinkSystem SR650 holds 15 performance world records with the Antuco suite of the STAC-M3 benchmark.

Infrastructure Virtualization – 8 records

The Lenovo ThinkSystem SR950 and SR650 have eight world record Infrastructure Virtualization benchmarks.

SPECvirt_sc2013

  • 8-Socket performance – Three world records: The Lenovo ThinkSystem SR950 delivered world record performance, performance per watt and server performance per watt on the 8P SPECvirt_sc2013 benchmark.
  • 4-Socket performance – Two world records: The Lenovo ThinkSystem SR950 delivered two world records, for performance per watt and server performance per watt, on the 4P SPECvirt_sc2013 benchmark.
  • 2-Socket performance – Three world records: The Lenovo ThinkSystem SR650 delivered world record performance, performance per watt and server performance per watt on the 2P SPECvirt_sc2013 benchmark.

Server-side Java – 19 records

The ThinkSystem SR650 and SR950 have set 19 World Records for the SPECjbb Server-side Java benchmark.

SPECjbb 2015

The Lenovo results:

  • 8-Socket performance – Four world records: The Lenovo ThinkSystem SR950 holds 4 world records for 8P performance results for the SPECjbb2015-MultiJVM and SPECjbb2015-Distributed benchmarks.
  • 4-Socket performance – Five world records: The Lenovo ThinkSystem SR950 holds 5 world records for 4P performance results for the SPECjbb2015-MultiJVM and SPECjbb2015-Distributed benchmarks.
  • 2-Socket performance – Four world records: The Lenovo ThinkSystem SR650 holds 4 world records for 2P performance results for the SPECjbb2015-MultiJVM and SPECjbb2015-Distributed benchmarks.
  • 1-Socket performance – Six world records: The Lenovo ThinkSystem SR650 set 6 world records for 1P performance results for the SPECjbb2015-MultiJVM and SPECjbb2015-Distributed benchmarks.

General Computing – 16 records

The ThinkSystem servers have set 16 new world records in General Computing benchmarks.

SPEC CPU2006 and CPU2017

The Lenovo results:

  • 8-Socket performance – Five world records (3x SPEC CPU2017, 2x SPEC CPU2006): The Lenovo ThinkSystem SR950 holds 8P world records for compute-intensive applications with the SPEC CPU benchmarks. These include SPECspeed2017_int_base, SPECspeed2017_fp_base, SPECrate2017_fp_base, SPECint_base2006 and SPECfp_base2006.
  • 4-Socket performance – Two world records (SPEC CPU2017): The Lenovo ThinkSystem SR950 delivered two 4P world records for compute-intensive applications with the SPEC CPU2017 benchmark. These include SPECrate2017_int_base and SPECrate2017_fp_base.
  • 2-Socket performance – World record (SPEC CPU2017): The Lenovo ThinkSystem SR630 delivered a 2P world record for compute-intensive applications with the SPEC CPU2017 SPECspeed2017_fp_base benchmark.
  • 1-Socket performance – Four world records (3x SPEC CPU2017 and 1x SPEC CPU2006): The ThinkSystem SR650 and SR630 delivered four 1P world records for compute-intensive applications with the SPEC CPU benchmarks. These include SPECrate2017_int_base, SPECint_rate_base2006, SPECspeed2017_int_base and SPECspeed2017_fp_base.

How can we help?

As a fully accredited Lenovo partner we can provide support, configuration advice and evaluation systems for you to test in your environment. If you would like to know more please contact us using the details below.

https://lenovopress.lenovo.com/lp1145-lenovo-thinksystem-continues-to-lead-the-industry-in-performance

Flash Storage Array

Providing the fastest performance to applications

Today, enterprise flash storage arrays are being used in more applications and environments than ever before and are the No. 1 choice for tier 1 storage. An all-flash array is designed to provide the maximum number of IOPS for an application or applications, whilst providing continuous uptime and data availability. Flash systems can deliver over 1 million IOPS and hold many PBs of data, whilst integrating with your SAN using Fibre Channel, SAS or FCoE.

Performance is key when choosing flash storage, but another key requirement is reliability, because flash drives suffer from wear. There are three approaches to wear levelling:

No Wear Levelling – nothing manages cell usage, so heavily used cells simply wear out
Dynamic Wear Levelling – new data is written evenly across the whole of the flash
Static Wear Levelling – tracks how many times each cell has been written and also moves existing (static) data so that all cells wear evenly

Flash stores a data bit by applying an electrical current to the silicon, and this causes wear: after a certain number of program/erase cycles the flash wears out. That limit could be 10,000, 100,000 or 1,000,000 writes depending on the type of flash used.

Manufacturers overcome this problem in a number of ways, using sophisticated algorithms to work out how many times each cell has been used and then automatically re-mapping those blocks to another portion of the flash.

Flash Storage Array Technologies

Flash storage array systems are certainly taking centre stage, with many technologies vying to become the dominant leader: Intel and Micron 3D XPoint, HP and SanDisk memristor/ReRAM, IBM PCM, along with SLC, MLC, TLC, QLC, 3D NAND, SSD, PCIe and NVMe. SSD drive capacities announced by Seagate and Samsung have reached 60TB and 15TB respectively.

The most common type of enterprise flash storage uses NAND SSDs, normally MLC, which provides high capacity at a far lower cost point than SLC. Whilst MLC does not have the highest write-cycle endurance, the more sophisticated enterprise flash storage vendors take this into consideration and employ static wear levelling to ensure the maximum data life-cycle is attained and reliability remains intact.

NVMe – the flash future

NVMe is an open logical device interface specification for accessing non-volatile storage media attached directly via the PCI Express (PCIe) bus. An NVMe device is on average 5x faster than a comparable SATA SSD and 25x faster than a hard disk, whilst typical capacities today are around 1TB.

The acronym NVM stands for non-volatile memory, which is commonly flash memory that comes in the form of solid-state drives (SSDs). NVMe, as a logical device interface, has been designed from the ground up to capitalise on the low latency and internal parallelism of flash-based storage devices, mirroring the parallelism of contemporary CPUs, platforms and applications.

The main issues with NVMe are as follows:

  1. Due to the performance NVMe delivers over SSDs, today's PCIe bus can only handle a maximum of eight NVMe devices
  2. Only the latest storage arrays and servers support NVMe
  3. Not all systems support booting from NVMe

By its design, NVMe allows host hardware and software to fully exploit the levels of parallelism possible in modern SSDs. As a result, NVMe reduces I/O overhead and brings various performance improvements in comparison to previous logical-device interfaces, including multiple, long command queues, and reduced latency. (The previous interface protocols were developed for use with far slower hard disk drives (HDD) where a very lengthy delay in computer terms exists between a request and data receipt, data speeds are much slower than RAM speeds, and where disk rotation and seek time give rise to further optimization requirements.)

NVMe devices exist both in the form of standard-sized PCI Express expansion cards and as 2.5-inch form-factor devices that provide a four-lane PCI Express interface through the U.2 connector (formerly known as SFF-8639). SATA Express storage devices and the M.2 specification for internally mounted computer expansion cards also support NVMe as the logical device interface.

How can we help?

We provide flash storage array systems, support, configuration advice and evaluation systems for you to test in your environment. If you would like to know more please phone us on 01256 331614 or complete our online form.

Cloud archive for business

Learn why a cloud archive is good for business

Our article on cloud archiving demonstrates the savings that can be made by businesses that want to move critical and important data to the cloud for long-term archiving, compliance and legislation. We cover numerous topics including GDPR, CAPEX vs OPEX, unstructured data, file stubbing and cost controls. The whole article is 4,000+ words and is important reading for any business that is moving data to the cloud and wants to search, index and categorise information.

For the full article click here

NetApp AFF FAS

NetApp AFF FAS All-Flash Array

The NetApp AFF series is the world’s first and fastest enterprise all-flash storage powered by end-to-end NVMe. The AFF A800 array delivers low latency and massive throughput through a combination of NVMe SSDs and NVMe/FC connectivity, plus the most cloud-integration choices for your artificial-intelligence workflow.

World’s Fastest Flash Array

The NetApp AFF A800 array delivers ultra-low latency of below 200 microseconds and massive throughput of 300 GB/s. With a maximum storage capacity of 700PB and over 11.4 million IOPS, the A800 is the performance leader in all-flash array storage.

The NetApp AFF A-Series harnesses the power of ONTAP and OnCommand software to deliver data management and protection, high efficiency, performance, and 99.9999% availability.

Features and Software Included with ONTAP Software

  • Efficiency: FlexVol®, inline deduplication, inline compression, inline compaction, and thin provisioning
  • Availability: Active-Active HA pair and Multipath I/O
  • Data protection: RAID-DP®, RAID TEC, and Snapshot®
  • Synchronous replication for disaster recovery: MetroCluster™
  • Performance Control: Adaptive quality of service (QoS), balanced placement
  • Management: OnCommand® Workflow Automation, System Manager, and Unified Manager
  • Scalable NAS container: FlexGroup

Flash Bundle

  • All storage protocols supported (FC, FCoE, iSCSI, NFS, pNFS, SMB)
  • SnapRestore®: Back up and restore entire Snapshot copies in seconds
  • SnapMirror®: Simple, flexible replication for backup and disaster recovery
  • FlexClone®: Instant virtual copies of files, LUNs, and volumes
  • SnapCenter®: Unified, scalable platform and plug-in suite for application-consistent data protection and clone management
  • SnapManager®: Application-consistent data backup and recovery for enterprise applications

Extended Value Software (optional)

  • NVMe™ over Fibre Channel (NVMe/FC) protocol: faster and more efficient host connection than original Fibre Channel
  • OnCommand Insight: Flexible, efficient resource management for heterogeneous environments
  • SnapLock®: Compliance software for write once, read many (WORM) protected data
  • Volume Encryption (free license): Granular, volume-level, data-at-rest encryption
  • FabricPool: Automatic data tiering to the cloud

NetApp AFF Datasheet

All Flash NVMe Storage

All-flash NVMe storage currently delivers the fastest performance available. Deploying the best IT equipment is essential for businesses to remain competitive, and the compounded effects of an improper, inadequate infrastructure have measurable negative consequences.

Any business or organisation considering a data centre upgrade today should seriously be looking at the potential that all-flash NVMe storage arrays offer over all other types of storage. Traditional SAS, SATA and Fibre Channel connected storage all require an HBA card or controller that sits between the storage and the CPU; this is where NVMe storage differs. NVMe flash is addressed directly by the CPU, negating the need for an HBA card.

1. Performance – NVMe is up to 53x faster than hard disks and at least 5x faster than SATA SSDs.

2. How does it work? NVMe flash storage uses PCIe 3.0 or 4.0 to connect directly to the CPU, unlike SAS/SATA/Fibre Channel, which require an HBA (Host Bus Adapter).

What does NVMe stand for? Non-Volatile Memory Express

How can NVMe flash connect to our infrastructure? Via NVMe-oF (NVMe over Fabrics) using Fibre Channel, InfiniBand or Ethernet (RoCE and iWARP).

All flash NVMe storage performance

NVMe uses multiple PCIe bus lanes. Each lane has 2 pairs of wires to send and receive data.

PCIe 3.0 – Supports one, four, eight or sixteen lanes in a single PCIe slot, denoted as X1, X4, X8 or X16. Therefore the maximum performance for PCIe 3.0 is approximately 1GB/s per lane x 16 lanes x 2 directions = 32GB/s in a single PCIe slot.

PCIe 4.0 – Supports one, four, eight, sixteen or thirty-two lanes in a single PCIe slot, denoted as X1, X4, X8, X16 or X32. Therefore the maximum performance for PCIe 4.0 is approximately 2GB/s per lane x 32 lanes x 2 directions = 128GB/s in a single PCIe slot. PCIe 4.0 cards will work in a PCIe 3.0 slot but will operate at PCIe 3.0 speeds.
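Putting those per-lane figures into a short Python sketch makes the slot-level arithmetic easy to reproduce; the per-lane rates below are the approximate values quoted above rather than exact line rates.

PER_LANE_GBPS = {"PCIe 3.0": 1.0, "PCIe 4.0": 2.0}   # approx GB/s per lane, per direction

def slot_bandwidth_gbps(generation, lanes):
    """Approximate bidirectional slot bandwidth: per-lane rate x lanes x 2 directions."""
    return PER_LANE_GBPS[generation] * lanes * 2

print(slot_bandwidth_gbps("PCIe 3.0", 16))   # 32.0 GB/s for an x16 slot
print(slot_bandwidth_gbps("PCIe 4.0", 32))   # 128.0 GB/s for an x32 slot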

NVMe can handle 64,000 command queues with 64,000 commands per queue, whereas a SATA SSD or hard disk has only 1 command queue and can accept a maximum of 32 commands per queue. NVMe commands also require relatively few CPU cycles compared with SAS/SATA drives. Ideally the CPU should have as many cores as possible in order to sustain the transfer rates that all-flash NVMe storage provides.

Over the next 5 years, all data storage systems will be using NVMe and PCIe 4.0. With newer memory technologies such as 3D XPoint (Intel Optane) emerging, the future roadmap for NVMe-based flash arrays is very bright.

Get yourself a data storage Apache, get yourself all flash NVMe storage for your business infrastructure.

If you want to know more about NVMe, please contact us using the details below.
