A new machine learning group at ARM will create accelerator cores, blocks for its CPU and GPU cores, and software to tie it all together. Exactly what the group will deliver, and when, remains under wraps.
Analysts suggest ARM could be as much as three years behind products from rivals such as Cadence, Ceva and Synopsys. ARM counters that it's still early days for emerging markets where software is rapidly evolving, and that many AI tasks are already running on its existing cores.
ARM declined to share the number of people or budget for the group, run by ARM fellow Jem Davies, best known for a decade working on ARM’s media blocks. Rene Haas, president of ARM’s intellectual property group, defined it simply as “a big team in hardware and software.”
“It’s clear we will do [machine learning] in CPUs, GPUs and special purpose cores, but we are not announcing anything yet,” Davies said in a brief one-on-one after a press event at the ARM Tech Con here.
Davies defines himself as “the special purpose computing guy” at ARM. Officially, he had been focused on computer vision until the new group was created a few weeks ago.
Both Davies and Haas made the case that AI is still in its early stages and may be overhyped.
“Workloads change. [Neural nets are] just another workload. In various descriptions they are becoming the new black,” Davies said, pointing to a varied history of specialized hardware.
“Java byte codes came and went, MP3 decode at sub-milliwatt power had a lifetime. MPEG looked like it would be general purpose, but trade-offs drove it to being a specialized chip. It’s early days,” he said.
ARM could be as much as three years behind its competition in silicon for machine learning, said Linley Gwennap, principal of the Linley Group.
“They don’t have anything close to Cadence, Ceva and Synopsys. Each of those companies has different ways of approaching machine learning,” added Mike Demler, a Linley Group analyst who just helped complete a report on the area.
Nvidia recently added multiply-accumulate arrays to its Xavier GPU, Demler noted. A year ago, rival Intel rolled out a road map based on technology it acquired from Nervana for training and Movidius for inference jobs. However, Gwennap noted Intel has yet to disclose specifics about its chips based on Nervana’s technology.
Meanwhile, at least a dozen semiconductor startups have been funded to pursue some form of AI. More than 20 software frameworks are in flight. ARM already has a machine learning library that will continue to be the basis for how its chips interface with those frameworks.
On Tuesday, a designer of ASICs for bitcoin mining announced a machine learning accelerator. Chips like Google's TPUs, already in a second generation, typically pack lots of memory around arrays of multiply-accumulate units.
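For context, the multiply-accumulate (MAC) operation these accelerators replicate in hardware is simple. Below is a minimal C sketch of a matrix-vector product built from repeated MAC steps; the function name, dimensions and data are illustrative only, not any vendor's API or design.

```c
#include <stdio.h>

/* Illustrative only: a matrix-vector product expressed as repeated
 * multiply-accumulate (MAC) steps, the operation that accelerators such
 * as Google's TPUs implement in large hardware arrays fed from on-chip
 * memory. Names and dimensions here are arbitrary. */
void matvec_mac(const float *weights, const float *input, float *output,
                int rows, int cols)
{
    for (int r = 0; r < rows; r++) {
        float acc = 0.0f;                               /* accumulator */
        for (int c = 0; c < cols; c++) {
            acc += weights[r * cols + c] * input[c];    /* one MAC */
        }
        output[r] = acc;
    }
}

int main(void)
{
    const float w[2 * 3] = {1, 2, 3, 4, 5, 6};
    const float x[3] = {0.5f, 1.0f, -1.0f};
    float y[2];

    matvec_mac(w, x, y, 2, 3);
    printf("y = [%f, %f]\n", y[0], y[1]);  /* expected: -0.5, 1.0 */
    return 0;
}
```

A hardware MAC array performs many of these inner-loop steps in parallel, which is why such chips surround the array with large amounts of local memory to keep it fed.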
“It's such early days that it's not clear what will win. We are putting a lot of resources on it, but leaving our options open,” said Haas.
The machine learning group was the big new thing out of a recent corporate reorg at ARM that created some new business units matrixed with central engineering teams, said Mike Muller, ARM's chief technologist, in an interview with EE Times.
“Machine learning was the one thing we pulled out [in the reorg] as a separate technology group. It cuts across all these lines of business in some way, and we knew we didn't want it submerged into the engineering pool that's split across -M and -A and GPU cores. There's enough to pull together [in machine learning] for one group,” Muller said.