The University of New Hampshire Interoperability Lab (UNH-IOL) has announced testing services, software, and an integrators list for Non-Volatile Memory Express over Fabrics (NVMe-oF). This standard expands on the NVMe standard, letting datacenters gain access to flash and solid-state drives (SSDs) over Ethernet and Fibre Channel networks. NVMe-oF lets datacenters run applications using a much larger storage pool than is possible with NVMe alone. On December 18, 2017, EE Times met with David Woolf, UNH-IOL Senior Engineer, Datacenter Technologies.
NVMe, a standardized interface for connecting PCIe SSDs, consists of a command set designed specifically for flash memory. That's as opposed to, say, SCSI, which, according to Woolf, has evolved over the years to cover more than just transferring data to and from disk drives. "If you want to implement the SCSI protocol stack, you have to look through many documents," said Woolf.
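For a sense of how small that command set is, the sketch below lists a handful of the core NVM I/O opcodes defined in the NVMe base specification. Treat it as an illustrative subset, not a complete listing; the optional commands a given drive supports will vary.

```python
# Illustrative subset of the NVMe NVM (I/O) command set opcodes from the
# NVMe base specification. A SCSI initiator, by contrast, has to cope with
# hundreds of commands spread across many documents.
NVME_IO_OPCODES = {
    0x00: "Flush",
    0x01: "Write",
    0x02: "Read",
    0x04: "Write Uncorrectable",
    0x05: "Compare",
    0x08: "Write Zeroes",
    0x09: "Dataset Management",  # e.g. deallocate (TRIM)
}

for opcode, name in sorted(NVME_IO_OPCODES.items()):
    print(f"opcode 0x{opcode:02x}: {name}")
```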
NVMe's lighter command set gives better performance than SCSI and Serial ATA (SATA) drives, which have been accessible over networks for years. The improved performance comes from lower latency because the commands are easier for a memory controller to process. When asked if NVMe's lower latency was due to software processing alone, Woolf acknowledged that PCIe's two highest per-lane data rates, 8 Gbps (Gen3) and 16 Gbps (Gen4), beat SATA's top speed of 6 Gbps. Serial Attached SCSI (SAS) tops out at 12 Gbps, so some of the speed advantage comes from the higher data rates as well.
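As a rough back-of-the-envelope comparison, the sketch below converts those raw line rates into effective throughput by accounting for each link's encoding overhead (128b/130b for PCIe Gen3/Gen4, 8b/10b for SATA and SAS). Real-world numbers also depend on lane count, protocol overhead, and the drive itself, so these are upper bounds, not benchmarks.

```python
# Back-of-the-envelope comparison of raw line rate vs. effective throughput.
# PCIe Gen3/Gen4 use 128b/130b encoding; SATA and SAS use 8b/10b.
LINKS = {
    # name: (raw line rate in Gbps, encoding efficiency)
    "SATA 3.0 (6 Gbps)":      (6.0,      8 / 10),
    "SAS-3 (12 Gbps)":        (12.0,     8 / 10),
    "PCIe Gen3 x1 (8 GT/s)":  (8.0,      128 / 130),
    "PCIe Gen3 x4 (8 GT/s)":  (4 * 8.0,  128 / 130),  # typical NVMe SSD link width
    "PCIe Gen4 x4 (16 GT/s)": (4 * 16.0, 128 / 130),
}

for name, (raw_gbps, efficiency) in LINKS.items():
    effective_gbps = raw_gbps * efficiency
    print(f"{name:<24} ~{effective_gbps:5.1f} Gbps effective "
          f"({effective_gbps / 8:.2f} GB/s)")
```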
Woolf explained that it's possible to create a small fabric of NVMe drives without network connectivity. "Locally attached drives can be NVMe-connected over PCIe. Some desktop and laptop computers have NVMe drives. They all run over a PCIe physical layer (PHY)." Thus, a server with, say, four SSDs (left side of Figure 1) might have all four internal drives connected over PCIe. The middle configuration in Figure 1 shows a rack containing 12 SSDs, all connected through PCIe, though the fabric could be larger.
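On a Linux host, one quick way to see the locally attached configuration Woolf describes is to walk sysfs. The sketch below assumes the standard Linux NVMe driver layout under /sys/class/nvme and simply reports each controller's model, transport, and address; on a locally attached drive the transport reads "pcie" and the address is the controller's PCI bus/device/function.

```python
# Minimal sketch: enumerate NVMe controllers on a Linux host via sysfs.
# Assumes the standard Linux NVMe driver layout under /sys/class/nvme.
from pathlib import Path


def read_attr(ctrl: Path, name: str) -> str:
    """Read a sysfs attribute if present, otherwise report it as unknown."""
    attr = ctrl / name
    return attr.read_text().strip() if attr.exists() else "unknown"


for ctrl in sorted(Path("/sys/class/nvme").glob("nvme*")):
    model = read_attr(ctrl, "model")
    transport = read_attr(ctrl, "transport")   # "pcie" for locally attached drives
    address = read_attr(ctrl, "address")       # PCI bus/device/function for PCIe
    print(f"{ctrl.name}: {model} (transport={transport}, address={address})")
```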
The right side of Figure 1 shows how a datacenter can now reach many more drive bays that aren't necessarily in close proximity to the server. "People were interested in gaining access to NVMe drives over a network," noted Woolf. "They're used to having access to storage over a fabric such as Fibre Channel or RDMA over Converged Ethernet (RoCE). Now, they can use NVMe drives over networks as well. The drives can be located anywhere and connected through Ethernet or Fibre Channel." A server can now talk to a spinning SCSI drive or an NVMe drive on the same Fibre Channel network, even though the drives use different PHYs. "That's where you'd really see the performance difference from NVMe's simpler protocol stack," said Woolf. "The NVMe drive will perform faster."
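In practice, a Linux host reaches those remote drives with the standard nvme-cli discover/connect flow. The sketch below wraps that flow in Python; the RDMA transport, target address, and subsystem NQN are placeholders for illustration, not values from the article, and the host needs nvme-cli and the NVMe-oF kernel modules installed.

```python
# Minimal sketch of an NVMe-oF discovery/connect flow using nvme-cli
# (the standard Linux userspace tool). The transport, target address,
# and subsystem NQN below are placeholders, not values from the article.
import subprocess

TRANSPORT = "rdma"          # RoCE and iWARP use "rdma"; Fibre Channel uses "fc"
TARGET_ADDR = "192.0.2.10"  # placeholder target IP (RFC 5737 documentation range)
TARGET_PORT = "4420"        # conventional NVMe-oF RDMA service port
SUBSYS_NQN = "nqn.2017-12.example.com:storage-pool-1"  # placeholder NQN

# Ask the target's discovery controller which subsystems it exposes.
subprocess.run(
    ["nvme", "discover", "-t", TRANSPORT, "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to one advertised subsystem; its namespaces then appear locally
# as /dev/nvmeXnY block devices, just like a locally attached PCIe drive.
subprocess.run(
    ["nvme", "connect", "-t", TRANSPORT, "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```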
Woolf explained that the reliability gains become clear when you compare NVMe-oF to Internet Small Computer Systems Interface (iSCSI). iSCSI is mapped over TCP/IP; NVMe-oF currently isn't, and is instead mapped directly onto Ethernet or Fibre Channel networks. Plain Ethernet can drop packets, which would limit reliability, but the NVMe-oF transport compensates for those losses because it uses RoCE.
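To make that layering difference concrete, the sketch below simply prints the two protocol stacks side by side. RoCE v2 (RDMA carried over UDP/IP) is shown as one common Ethernet mapping of NVMe-oF; Fibre Channel and RoCE v1 layer differently, so take this as a schematic rather than an exhaustive picture.

```python
# Side-by-side view of the two protocol stacks discussed above, top to bottom.
# RoCE v2 is shown as one common Ethernet mapping of NVMe-oF.
STACKS = {
    "iSCSI": [
        "SCSI command set", "iSCSI", "TCP", "IP", "Ethernet",
    ],
    "NVMe-oF (RoCE v2)": [
        "NVMe command set", "NVMe-oF capsules", "RDMA (RoCE v2)", "UDP/IP", "Ethernet",
    ],
}

for name, layers in STACKS.items():
    print(name)
    for layer in layers:
        print(f"  {layer}")
    print()
```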
Because they use a streamlined protocol, NVMe-oF drives can't do all of the management that SCSI can, but they can provide increased reliability and speed. Additional storage management functions could be added in future releases, but that will have to be weighed against NVMe-oF's current speed and reliability advantage. The increased performance, according to Woolf, lets NVMe run applications such as image processing, encryption, and machine learning that other storage protocols can't handle. Such applications are both processor- and storage-intensive. The storage advantages give NVMe a leg up on other protocols.
UNH-IOL has developed software for conformance testing of NVMe-oF. Products that the lab qualifies are added to its NVMe-oF integrators list, which currently consists of Mellanox, Cavium, and Toshiba. Cavium demonstrated NVMe-oF concurrently over RoCE and iWARP on its FastLinQ 45000/41000 series Ethernet NICs at the 2017 Flash Memory Summit in August. More NVMe-oF announcements will surely follow in 2018.