UNH-IOL Adds NVMe-oF Testing and Certification
The University of New Hampshire Interoperability Lab (UNH-IOL) has announced testing services, software, and an integrators list for Non-Volatile Memory Express over Fabrics (NVMe-oF). This standard expands on the NVMe standard, letting datacenters gain access to flash and solid-state drives (SSDs) over Ethernet and Fibre Channel networks. NVMe-oF lets datacenters run applications using a much larger storage pool than is possible with NVMe alone. EE Times met on December 18, 2017 with David Woolf, UNH-IOL Senior Engineer, Datacenter Technologies.

NVMe, a standardized interface for connecting PCIe SSDs, consists of a command set specifically designed for flash memory. That's as opposed to, say, SCSI, which, according to Woolf, has evolved over the years to include more than just transferring data to and from disk drives. "If you want to implement the SCSI protocol stack, you had to look through many documents," said Woolf.

NVMe's lighter command set gives better performance than SCSI and serial ATA (SATA) drives, which have been accessible over networks for years. The improved performance comes from lower latency because the commands are easier for a memory controller to process. When asked if the lower latency of NVMe was due to software processing only, Woolf acknowledged that PCIe's two highest data rates of 8 Gbps (Gen3) and 16 Gbps (Gen4) beat SATA's top speed of 6 Gbps. Serial Attached SCSI (SAS) tops out at 12 Gbps, so there are some speed advantages from the higher data rates as well (a rough throughput comparison appears below).

Woolf explained that it's possible to create a small fabric of NVMe drives without network connectivity. "Locally attached drives can be NVMe-connected over PCIe. Some desktop and laptop computers have NVMe drives. They all run over a PCIe physical layer (PHY)." Thus, a server with, say, four SSDs (left side of Figure 1) might have four internal drives all connected over PCIe. The middle configuration in Figure 1 shows a rack containing 12 SSDs, all connected through PCIe, though the fabric could be larger.

The right side of Figure 1 shows how a datacenter can now have access to many more bays that aren't necessarily in proximity to the server. "People were interested in gaining access to NVMe drives over a network," noted Woolf. "They're used to having access to storage over a fabric such as Fibre Channel or RDMA over Converged Ethernet (RoCE). Now, they can use NVMe drives over networks as well. The drives can be located anywhere and connected through Ethernet or Fibre Channel." A server can now talk to a spinning SCSI drive or an NVMe drive on the same Fibre Channel network, even though the drives use different PHYs. "That's where you'd really see the performance difference from NVMe's simpler protocol stack," said Woolf. "The NVMe drive will perform faster."

Woolf explained that you can really see the increased reliability if you compare NVMe-oF to Internet Small Computer Systems Interface (iSCSI). iSCSI is mapped over TCP/IP. NVMe-oF isn't right now, but it is mapped over Ethernet or Fibre Channel networks. The NVMe protocol runs over Ethernet, which can drop packets and thereby limit reliability. Fortunately, the NVMe protocol adds the ability to compensate for those losses because it uses RoCE.

Because they use a streamlined protocol, NVMe-oF drives can't do all of the management that SCSI can, but they can provide increased reliability and speed. Additional storage management functions could be added in future releases, but that will have to be weighed against NVMe-oF's current speed and reliability advantage.
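To put the link rates quoted above in perspective, here is a back-of-the-envelope comparison of raw interface bandwidth. The single-lane SATA/SAS ports and the x4 lane width for the NVMe SSD are assumptions (x4 is typical but was not specified by Woolf); the link-encoding overheads are the standard ones for each interface.

```python
# Back-of-the-envelope comparison of raw link bandwidth.
# Assumes a typical x4-lane NVMe SSD and single-lane SATA/SAS ports;
# real-world throughput is lower once protocol and controller overhead
# are included.

def line_rate_to_mbps(gbps, encoding_efficiency):
    """Convert a raw line rate in Gb/s to usable MB/s after link encoding."""
    return gbps * 1000 / 8 * encoding_efficiency

links = {
    "SATA 3.0 (6 Gb/s, 8b/10b)":         line_rate_to_mbps(6,  8 / 10),
    "SAS-3 (12 Gb/s, 8b/10b)":           line_rate_to_mbps(12, 8 / 10),
    "PCIe Gen3 x4 (8 GT/s, 128b/130b)":  line_rate_to_mbps(8,  128 / 130) * 4,
    "PCIe Gen4 x4 (16 GT/s, 128b/130b)": line_rate_to_mbps(16, 128 / 130) * 4,
}

for name, mbps in links.items():
    print(f"{name:36s} ~{mbps:,.0f} MB/s")
```

Real drives deliver less than these ceilings, but the gap between the interfaces is roughly what the calculation shows, which is consistent with Woolf's point that NVMe's advantage comes from both the lighter protocol and the faster physical link.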
The increased performance, according to Woolf, lets NVMe run applications such as image processing, encryption, and machine learning that other storage protocols can't handle. Such applications are both processor- and storage-intensive. The storage advantages give NVMe a leg up on other protocols.

UNH-IOL has developed software for conformance testing of NVMe-oF. Products that the lab qualifies are added to its NVMe-oF integrators list, which currently consists of Mellanox, Cavium, and Toshiba. Cavium demonstrated NVMe-oF concurrently over RoCE and iWARP on its FastlinQ 45000/41000 series Ethernet NICs at the 2017 Flash Memory Summit in August. More NVMe-oF announcements will surely follow in 2018.
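For a sense of what "using NVMe drives over networks" looks like from a Linux initiator, here is a minimal sketch that drives the standard nvme-cli utility over an RDMA (RoCE) transport. The target address, port, and subsystem NQN are placeholder assumptions, not details from the article, and the host is assumed to have nvme-cli installed and the nvme-rdma kernel module available.

```python
# Minimal sketch of discovering and connecting to an NVMe-oF target over
# RDMA (RoCE) from a Linux initiator, using the standard nvme-cli utility.
# The address, port, and subsystem NQN below are placeholders.
import subprocess

TARGET_ADDR = "192.168.0.10"                  # hypothetical RoCE target address
TARGET_PORT = "4420"                          # conventional NVMe-oF RDMA port
SUBSYS_NQN = "nqn.2017-12.example:subsys1"    # hypothetical subsystem NQN

# Ask the discovery controller what subsystems the target exposes.
subprocess.run(
    ["nvme", "discover", "-t", "rdma", "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)

# Connect to the subsystem; the remote namespace then appears locally.
subprocess.run(
    ["nvme", "connect", "-t", "rdma", "-n", SUBSYS_NQN,
     "-a", TARGET_ADDR, "-s", TARGET_PORT],
    check=True,
)
```

Once the connect step succeeds, the remote namespace shows up as an ordinary /dev/nvmeXnY block device, which is what lets the same applications run against local or fabric-attached NVMe storage.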
NVMe Will Oust SCSI by 2020
It wasn't long ago that flash storage was reserved for high-demand data only. Now all-flash array adoption is not only outpacing hybrid arrays, but arrays with NVMe look to be rapidly hitting the mainstream.

Tegile Systems is putting its stake in the ground with what it said is the first unified all-NVMe array on the market, its IntelliFlash N-series. However, the company is also giving customers the flexibility to dial up or dial down the amount of flash they want to use over the life of the array. Rob Commins, Tegile's vice president of marketing, told EE Times in a telephone interview that the new storage platform can take the form of an all-NVMe flash array, use multiple grades of flash, or act as a hybrid array with spinning disk. Tegile's management algorithm will absorb what's available to balance the density.

He said the N5000 series is a "memory-class storage array" and comes with an extensive set of data management services, including deduplication and compression for data reduction, encrypted data at rest, and complete data protection with snaps, clones, and replication. The N5000 series comes in two flavors to start: the N5200 can deliver between 23 and 46 TB raw, with 24 dual-ported, 1-DWPD PCIe SSDs, front-ended by 448 GB of DDR4 and 16 GB of NVDIMM per system. Meanwhile, the N5800 boasts 76 to 153 TB raw, with 24 dual-ported, 3-DWPD PCIe SSDs, accompanied by 3 TB of DDR4 and 32 GB of NVDIMM per system.

Commins said Tegile is taking a three-phase approach to implementing NVMe. "As the technology develops, we will put an embedded NVMe fabric in there that allows us to expand the pool of NVMe," he said. As the NVMe ecosystem matures over the next year-and-a-half to two years, he said, Tegile will expose NVMe at the front end as a full memory/flash fabric that hosts can natively connect to over a 40Gb fabric.

Tegile's broader strategy has been to offer a level of modularity to its arrays so customers aren't always having to do forklift upgrades. They can swap out drives over the life of the array, as well as controllers. In May, Tegile expanded its Lifetime Storage program to include the Lifetime Storage Controller Refresh Program, so customers can refresh their storage array controllers every three to five years as part of their maintenance contract without replacing the entire array.

Commins said that over the past two years, new crops of vendors have built NVMe storage platforms, as have incumbent vendors. "It's going to be a race between us with a full suite of data management software getting into NVMe against NVMe hardware vendors who need to build software," he said.

With only a 20 percent premium on NVMe SSDs, he said, the protocol will quickly become the de facto standard. "It's going to flip pretty fast," he said.

Eric Burgener, IDC's research director for storage, said the research firm is forecasting more revenue for NVMe SSDs than for any other interface in 2020, and that by then NVMe will have replaced SCSI. "The trend we've seen with all-flash array vendors is a rush to put a stake in the ground as to what they are doing with NVMe," he said.

IDC has segmented the market into three categories: primary storage, big data, and rack-scale flash. The latter includes vendors such as E8, which recently announced how it was taking advantage of dual ports to share NVMe SSDs in the same enclosure. "Most major enterprise storage players are pretty far along, even if they've not made public announcements," Burgener said.
Burgener said vendors are taking two different approaches to NVMe in their arrays. One is to add it piecemeal, with a roadmap that lets customers integrate NVMe devices first, followed by controllers and then the fabric to the host. The other is to ship a complete NVMe system right away. "Most enterprise workloads don't need this kind of capability yet," he said, "but some of the vendors are going to be providing it. By and large it's positioning the platform for future growth. It gives customers a warm fuzzy that their vendor is on the leading edge."

There will be a combination of things that drive the need for NVMe, including real-time big data analytics, said Burgener, which today is generally something undertaken only by large enterprises with custom applications for a specific vertical. "But we see real-time big data analytics becoming a mainstream type of workload over the course of the next three years," Burgener said. More broadly, there's going to be more value in storage appliances in terms of NVMe technology over the next few years.

Tegile started as a hybrid flash array vendor and began shifting to all-flash in late 2015, said Burgener, while continuing to make the hybrid arrays available. One of its key differentiators is a common software operating environment that runs across both of those platforms. "That OS knows what media it's talking to and takes the appropriate IO path," he said. "They implemented this in a very intelligent manner." This makes it possible to easily replicate data across hybrid arrays, SCSI all-flash arrays, and NVMe all-flash arrays. "That provides a lot of flexibility," he added.

Tegile previously used SanDisk's InfiniFlash in its arrays but is now using commodity SSDs, Burgener noted, as the company sees them as having caught up to custom flash while providing multi-sourcing options that can help drive down costs. InfiniFlash was appealing when it launched because of its density; at the time, you couldn't get 8TB SSDs. "Density doesn't seem to be a reason these days to go with a custom design," he said.
Micron Pushes Capacity Threshold in NVMe SSDs
Micron Technology unveiled its second generation of NVM Express (NVMe) SSDs at the Flash Memory Summit, using its 3D NAND to push capacities past 10TB.

In an advance telephone briefing with EE Times, Dan Florence, SSD product manager for Micron's Storage Business Unit, said the 9200 series of NVMe SSDs was built from the ground up to break the shackles of legacy hard drive interfaces. The new storage portfolio is designed to address surging data demands while maximizing data center efficiency so customers can improve their overall total cost of ownership, he said, and it is the storage foundation for the Micron SolidScale Platform, an NVMe-over-Fabrics architecture announced earlier this year, ahead of standards development.

Florence said the Micron 9200 SSD is up to 10 times faster than the fastest SATA SSDs, with transfer speeds up to 4.6 GB/s and up to one million read IOPS, making it ideal for high-performance, high-capacity use cases such as application/database acceleration, high-frequency trading, and high-performance computing. "NVMe just as an interface offers a lot of advantages over the legacy interfaces that were really built for spinning media," he said. "It cuts out a huge chunk of latency and obviously, because it sits on the PCIe bus, it offers a higher bandwidth which allows you to get much higher IOPS."

NVMe also offers better ease of use than previous iterations of PCIe storage, Florence added, which relied on a lot of custom drivers. The industry standard, which allows NVMe to be plugged into pretty much any system with any operating system, is helping to fuel its adoption.

The 9200 series offers three times the capacity of Micron's previous generation of NVMe SSDs, ranging from 1.6TB to 11TB. "This will be the first monolithic NVMe SSD that's larger than 10TB," said Florence. This allows for lower power consumption and makes the drives easier for the operating system to manage. The U.2 form factor of the new SSDs also allows for more density per server.

Depending on the use case and configuration, Micron is claiming the new NVMe SSDs outdo the fastest hard drives by 300 to 1,200 times for random performance, and offer three to seven times the random performance of the fastest SSDs. Florence said random performance has become increasingly important for activities such as online transaction processing and database applications, as they use a random IO access pattern. "A lot of different data analysis workloads are similar. The sequential is more important for data ingest where you're working with large pipes of data." This includes user-generated content, he said, as well as massive amounts of Internet of Things data.

NVMe SSDs can essentially use most of the PCIe bandwidth, said Florence, and most applications do require some level of random IO. "For a growing number of applications, the amount of data you can move and work with is what drives value. Dollar per IOPS becomes more important, and NVMe clearly leads in that area."

For this latest batch of SSDs, Micron is using third-party controllers, said Florence; Microsemi's, to be specific. In Micron's most recent quarterly update, his first as CEO, Sanjay Mehrotra said that having stronger controller and firmware capabilities, with a roadmap for both internal and external controllers, was a focus for the company, as was having a mix of system-level solutions in the NAND portfolio.
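As a rough check on how the quoted random and sequential figures relate, here is a quick calculation. The 4 KiB transfer size and the ~100K random read IOPS ballpark for a fast SATA SSD are assumptions for illustration, not numbers from Micron.

```python
# Rough arithmetic behind the quoted 9200-series figures.
# Assumptions (not from the article): 4 KiB random reads, and ~100K random
# read IOPS as a typical ballpark for a fast SATA SSD.

IOPS = 1_000_000                       # quoted random read IOPS
BLOCK_BYTES = 4 * 1024                 # assumed 4 KiB transfer size

implied_bw = IOPS * BLOCK_BYTES / 1e9  # bandwidth implied by the IOPS figure
print(f"1M random read IOPS x 4 KiB ~= {implied_bw:.1f} GB/s")
print("Quoted sequential transfer rate: 4.6 GB/s")

sata_iops = 100_000                    # assumed ballpark for a fast SATA SSD
print(f"IOPS advantage over a ~100K-IOPS SATA SSD: ~{IOPS / sata_iops:.0f}x")
```

The point of the calculation is simply that a million small random reads already saturates a bandwidth SATA could never reach, which is why "dollar per IOPS" is the metric Florence emphasizes.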
Micron's 9200 series includes its FlexCapacity firmware, which provides advanced management and optimization tools that let customers tweak the drive to take full advantage of its capabilities and extend its lifespan, said Florence, but the company will continue to segment its SSD product lines for specific workloads for those customers who just want to drop in a drive without customizations. "There is a demand for preconfigured devices."

Matthew Kimball, senior analyst for servers and storage at Moor Insights & Strategy, said the early and obvious adopters of NVMe have been data analytics, artificial intelligence, and machine learning. "But there are also HPC applications that are already taking advantage of the performance gains seen by deploying NVMe." He believes that as NVMe technology "mainstreams," there are other obvious candidates that maybe aren't considered today due to cost factors, such as server virtualization and virtual desktop infrastructure (VDI). "With VDI in particular, organizations will not only see better virtual desktop performance, but NVMe goes a long way to reducing the issues associated with boot storms – the massive amounts of users logging in at the beginning of a work shift."

Kimball said the high capacity of the Micron NVMe SSDs is significant for both the "early-ish" adopters and the larger market. "The number of devices and edge points that comprise IoT leads to a crazy amount of data being collected on a minute-by-minute basis," he said. "Sifting through, analyzing and turning that data into useful information means having to deal with datasets of unprecedented sizes."

Although 10TB and beyond may seem large today, it won't seem large tomorrow, Kimball added. For mainstream users, this storage density per drive, per server, and per rack means a big reduction in direct and indirect costs, as there will be fewer servers and racks to buy, less floor space to consume, and lower management and operational costs, he said, all of which are significant.

The flexibility of the firmware to tune drives for the unique performance characteristics and needs of an application will also be appealing for many use cases, he said. "Take HPC as an example. It's a big umbrella workload category with applications that have varied needs. To be able to tune my storage environment so that I can attack a dataset faster allows me to cut down my analysis time, sometimes significantly."

For the foreseeable future, there don't appear to be any barriers to NVMe's momentum, said Kimball. "What's important for any new technology to see adoption at scale is the openness of the technology, standards, protocols and interfaces," he said. "I can take a Micron SolidScale box populated with the 9200 NVMe drives and drop that into my existing infrastructure and know that it's going to play well."

Companies like Micron are being smart in their approach to ensure this easy adoption, Kimball said. Price can also be a big barrier to adoption, so the cost curve on new technology will have to move in a direction where a "server room on Main Street" can afford to purchase it, not just the hyperscale players. "It looks like Micron and others are going to be able to drive competitive pricing."
Does NVMe Have a Place in Industrial Embedded and IoT?
Is NVM Express (NVMe) overkill for embedded, industrial applications?

Until recently, that's been the consensus, according to Scott Phillips, vice president of marketing at Virtium. But as big players such as Intel and Micron push the interface specification forward, Phillips said, many industrial customers are approaching the company and asking about NVMe. In a telephone interview with EE Times, he said they see the potential benefits of the Internet of Things (IoT) and sensors that generate large volumes of data about operations, but don't know how to get started.

Phillips said that because Virtium's business model is one of "first in, last out," it's supporting DRAM that's been around for as long as eight years, while also having to be forward-thinking about new technologies such as the demands of IoT and the role of NVMe. Right now, the customers looking at NVMe don't need the performance it offers, but as it goes mainstream it will end up having a place in embedded industrial applications.

He said most of the focus around NVMe SSDs to date has been in the data center, with an emphasis on performance and how many millions of IOPS were possible. "They're not worrying about power, just cranking out data," Phillips said.

Power and heat must be taken into account for the fan-less designs required in the industrial embedded segment, and Phillips said the current approach of throttling drives makes sense in the data center, but industrial OEMs don't want to see those kinds of ups and downs. "They want to see a steady state," he said.

This is starting to change, however, as controllers become small enough for the M.2 form factor and power consumption is reduced in NVMe drives. Phillips said the ideal target is below 4 watts; today it's still around 5 watts and can be throttled down using firmware. Throttling higher wattages to bring them down is too dramatic a drop.

Aside from the current NVMe conundrum, there is also the ramp-up of 3D NAND, which is getting a lot of attention. But for most customers in the industrial embedded space, planar NAND meets their needs, with 2D 15nm NAND having been available for more than a year. They're looking for something that can be left in a piece of equipment without having to touch it again for years to come, said Phillips, while SATA remains "bullet proof" for their needs as well.

Phillips said the tipping point for embedded NVMe will come when applications need the sub-3ms latency it offers. But right now, those are few and far between, he added.

One potential niche application is in-flight entertainment systems offering movies, games, and Wi-Fi access. "There's a lot more going on now," Phillips said. "They're hitting those little servers in the planes a lot quicker and with a lot more different requests, so they're now concerned about lowering latency for response times."

The NVMe protocol does have the features to support embedded and IoT requirements, said Jonmichael Hands, NVM Express Inc.'s marketing committee co-chair. Ultimately, meeting the needs of industrial customers comes down to the designers of the products. Hands said the latest specification update includes features that make it well-positioned for embedded and IoT use cases. For example, NVMe 1.3 supports bootstrapping in low-resource environments, including mobile, which will allow for lower-cost NVMe devices in smaller spaces, such as the M.2 form factor.
The specification also recognizes that 3D NAND might have more density than many embedded applications require, and it supports other NVM options, said Hands. There are also a lot of thermal and power requirements in embedded, he said, but there is a "boatload" of power features in NVMe, such as autonomous power state transitions, thermal management, and near-zero-power idle states. "It's really up to implementers and vendors to decide if they want to make something specific to that embedded segment," he said.

Hands said that as IoT grows into edge analytics and autonomous driving, there will be a need for higher bandwidth and lower latencies. "That's going to be a clear transition where they're going to be requiring NVM Express," he said. "It's better to invest in a single technology in the longer term for controllers and architectures, so embedded is adopting similar designs to the client and data center."

And while SATA does have an established track record, Hands doesn't equate that with being more stable than NVMe, as the latter has been shipping since 2014. "It's just a legacy interface," he said. "People are getting serious about developing purpose-built NVMe devices for these markets."
NVMe Spec Gets Major Update
The NVM Express (NVMe) specification is getting its first major update in nearly three years, putting it on the cusp of becoming the de facto standard for SSD interfaces.

In a telephone interview with EE Times, Jonmichael Hands, NVM Express Inc.'s marketing committee co-chair, said version 1.3 of the NVMe specification for SSDs on a PCI Express (PCIe) bus adds a significant number of new features, something that hasn't been done since November 2014. This update covers one of the three core NVMe specifications; the others are the NVM Express Management Interface and NVMe-over-Fabrics specifications. The latter is not due for an update until late 2018; Micron recently announced it was working ahead of the standard.

It takes time for vendors to take advantage of new specifications and incorporate them into their products, said Hands. Devices with the NVMe 1.2 specification only began launching last fall. This two-year ramp-up is typical, he said, although there's nothing holding device makers back except the time it takes them to update products with the new features.

NVMe 1.3 encompasses 24 technical proposals, said Hands, which can be spread across three major buckets that address client, enterprise, and cloud features. The most significant step forward is improved support for virtualization, so developers can more flexibly assign SSD resources to specific virtual machines, he said. "Right now, if you want to use an NVM Express device in a virtualized environment, the hypervisor's NVMe driver has to emulate an NVMe SSD to the guest OS," Hands said. "They do this pretty well, but there's a latency hit."

And when it comes to very fast storage-class memory devices, Hands said, that latency starts to add up, since putting a raw device behind a hypervisor can significantly reduce the number of IOPS. The trick to getting the most performance from each SSD in a virtualized environment, he said, is to make it seem like the SSD is attached natively to each virtual machine. NVMe 1.3 takes advantage of the Single Root I/O Virtualization (SR-IOV) feature in PCIe to support shared storage with direct assignment. "Now you can partition and intelligently allocate resources," he said.

Hands said this offers a lot of value for companies supporting cloud environments and multi-tenancy, but to get the most value from it, developers should write this resource allocation into the software-defined storage stack. Some of the large hyperscale customers on the NVM Express board of directors are pushing for this feature, he said. The current approach is to use more, smaller SSDs for each workload so one workload does not impact the quality of service of the others.

One of the most exciting features in 1.3 is Directives, said Hands, a new framework for the host and device to exchange metadata. It is particularly well-suited for an all-flash array to support better workload optimization on each drive. SSDs are getting much larger, he said, with the average size hitting 4TB today and quickly rising. In a multi-tenant environment, this means mixing different customer workloads on a single SSD. "Inevitably it's going to hurt your endurance because you're going to have different workloads on the same drive," Hands said.
An early example of the Directives feature is Streams, which enables the host to indicate to the controller that the specified logical blocks in a write command are part of one group of associated data. This information may be used by the controller to store related data in associated locations or for other performance enhancements. Essentially, said Hands, Streams optimizes performance and improves endurance for NAND-based SSDs by using simple tagging of associated data from different tenants in cloud hosting applications (a simplified sketch of the idea appears after this article).

Among the other new features in NVMe 1.3 are enhanced debugging tools for SSDs, which until now have been the domain of the SSD vendors, said Hands, as well as more granular control over thermal throttling, based not only on the temperature of the system but also on the workload. "The host can now tell SSDs where to throttle," Hands said.

The latest NVMe specification also supports bootstrapping in low-resource environments, including mobile, said Hands, which will allow for lower-cost NVMe devices in smaller spaces. NVMe 1.3 also offers broader operations for SSD erasure that are compliant with government standards. A webcast outlining all of the new features will be held on the morning of June 28 and will be available on demand afterward.

Not unlike 3D NAND, which looks to be hitting its tipping point in 2018 with wide adoption, NVMe seems poised to become the dominant interface for SSDs by the end of the year, said Hands. And while there remain markets for SATA and SAS, few features are being added to them. "This is where NVMe pulls ahead in terms of innovation," Hands said.
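To make the Streams idea concrete, here is a toy model of stream-aware data placement. It illustrates the concept only; it is not the NVMe command format or any vendor's implementation. Writes tagged with a stream identifier are grouped into their own erase blocks, so one tenant's data can be invalidated together instead of leaving partially valid blocks that later force garbage collection.

```python
# Toy model of Streams-style data placement (illustration only; this is not
# the NVMe command set). Writes carry a stream ID, and the "controller"
# groups each stream into its own erase blocks, so deleting one tenant's
# data frees whole blocks instead of leaving partially valid blocks behind.
from collections import defaultdict

BLOCK_PAGES = 4  # pages per erase block in this toy model

class ToyFlash:
    def __init__(self):
        self.open_blocks = {}                   # stream_id -> pages being filled
        self.sealed_blocks = defaultdict(list)  # stream_id -> full blocks

    def write(self, stream_id, data):
        block = self.open_blocks.setdefault(stream_id, [])
        block.append(data)
        if len(block) == BLOCK_PAGES:           # block full: seal it
            self.sealed_blocks[stream_id].append(block)
            self.open_blocks[stream_id] = []

    def delete_stream(self, stream_id):
        # Whole blocks belong to one stream, so they can be erased outright
        # with no garbage collection of other tenants' data.
        freed = len(self.sealed_blocks.pop(stream_id, []))
        self.open_blocks.pop(stream_id, None)
        return freed

flash = ToyFlash()
for i in range(8):
    flash.write(stream_id="tenant-A", data=f"A{i}")
    flash.write(stream_id="tenant-B", data=f"B{i}")
print("Blocks freed by dropping tenant-A:", flash.delete_stream("tenant-A"))
```

In the real protocol, the stream identifier travels in the write command itself and the controller decides placement; the toy model simply shows why grouping data by stream helps endurance when tenants are mixed on one large drive.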