Baidu Backs Neuromorphic IC Developer
Swiss startup aiCTX has closed a $1.5 million pre-A funding round from Baidu Ventures to develop commercial applications for its low-power neuromorphic computing and processor designs and enable what it calls "neuromorphic intelligence." It is targeting low-power, edge-computing embedded sensory processing systems.

Founded in March 2017 on advances in neuromorphic computing hardware developed at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich, aiCTX (pronounced "AI-cortex") is developing "full-stack" custom neuromorphic processors for a variety of artificial-intelligence (AI) edge-computing applications that require ultra-low-power and ultra-low-latency operation, including autonomous robots, always-on co-processors for mobile and embedded devices, wearable health-care systems, security, IoT applications, and computing at the network edge.

Dylan Muir, senior R&D engineer at aiCTX, told EE Times that the company is building end-to-end dedicated neuromorphic IP blocks, ASICs, and SoCs as full-custom computing solutions that integrate neuromorphic sensors and processors. "This approach ensures minimum size and power consumption and is fundamentally different from most other neuromorphic computing approaches that propose general-purpose solutions as a plug-and-play alternative to parts of machine-learning tool chains with conventional data paths."

He added, "We engineer spiking neural network and algorithmic solutions that implement computational neuroscience models of cortical computation. Our technology is based on over 20 years of research and development in neuromorphic models of cortical computation that started out at Caltech in the mid-'90s and are still ongoing at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich."

Baidu Ventures CEO Wei Liu said that the firm invested in aiCTX because it differs from other neuromorphic startups and corporations active in the field in having a unique technology and a product-driven focus. "They are developing complete commercial solutions, not simply designing computing fabrics," he said.

We asked Muir what that meant. He said that, at the moment, other neuromorphic solutions target desktop applications and are based on a standard clocked digital-logic design flow. In contrast, aiCTX designs are based on ultra-low-power mixed-signal analog-digital VLSI circuits, on fully asynchronous low-power hand-crafted digital designs, or on both. "We are targeting applications that require ultra-low-power (sub-mW to mW) always-on solutions for IoT edge-based computing on mobile and embedded systems that do not need to rely on the cloud," said Muir.

He added, "We are building demos around those applications and finding potential industrial partners. For example, we're partnering with a health wearable company to provide ultra-low-power on-board signal processing using our neuromorphic processors."

Muir said that the company is currently finalizing its new DynapCNN chip, a scalable, fully configurable, digital, event-driven neuromorphic processor with 1 million spiking ReLU neurons per chip for implementing spiking convolutional neural networks (sCNNs). The chip supports various types of CNN layers (such as ReLU, cropping, padding, and pooling) and network architectures (such as LeNet, ResNet, and Inception). The technology is aimed at always-on, ultra-low-power, ultra-low-latency, event-driven sensory processing applications.
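To make the event-driven idea concrete, here is a minimal, generic Python sketch of how a spiking layer can do work only when an input spike arrives. The layer sizes, threshold, and leak value are illustrative assumptions; this is not aiCTX's actual DynapCNN design.

    # Minimal, generic sketch of event-driven spiking computation (not aiCTX's
    # actual DynapCNN design): integrate-and-fire neurons update only when an
    # input event (spike) arrives, so idle inputs cost no compute.
    import numpy as np

    rng = np.random.default_rng(0)

    N_IN, N_OUT = 64, 16                     # hypothetical layer sizes
    W = rng.normal(0, 0.3, (N_IN, N_OUT))    # synaptic weights
    v = np.zeros(N_OUT)                      # membrane potentials
    THRESHOLD, LEAK = 1.0, 0.95              # illustrative parameters

    def process_event(in_idx: int) -> list[int]:
        """Handle one input spike; return indices of neurons that fire."""
        global v
        v *= LEAK                            # simple leak between events
        v += W[in_idx]                       # accumulate the weighted contribution
        fired = np.flatnonzero(v >= THRESHOLD)
        v[fired] = 0.0                       # reset neurons that spiked
        return fired.tolist()

    # Sparse input stream: only the events that actually arrive trigger any work.
    for spike in rng.integers(0, N_IN, size=10):
        out = process_event(int(spike))
        if out:
            print(f"input {spike} -> output spikes {out}")

Because all work happens inside process_event, a mostly silent sensor translates directly into mostly idle compute, which is the property the company is targeting for sub-milliwatt, always-on operation.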
Samples of the chip will be available during Q2 of 2019.

In addition, aiCTX said that it is building a new family of neuromorphic chips that combine energy efficiency with features for low-latency, real-time, end-to-end applications. The design will provide interfaces for converting analog sensory signals into spikes and for direct event-based input from dynamic-vision sensors, making the devices suitable for mobile health and robotics applications. This neuromorphic processor will tape out by the end of 2018. The first development kits, along with a software development ecosystem, are planned for release in Q3 of 2019.

A fully neuromorphic smart vision processor is also under development by a joint venture between aiCTX and neuromorphic vision systems company iniVation. This is a compact, low-cost, single-chip solution for ultra-low-power (sub-mW), ultra-low-latency (<10 ms), always-on IoT devices and edge-computing vision applications, such as gesture recognition, face or object detection, and surveillance. Samples of the smart vision processor are planned for Q4 of 2019.

In terms of business model, aiCTX is developing whole-chip solutions to demonstrate and explore potential applications but, in the long term, hopes to license and provide IP solutions. "The goal is to follow a model similar to that of Arm for the whole IoT edge-computing landscape," commented Muir. "The IP provided by aiCTX will be tailored exactly to customer and application needs for maximum efficiency."

Muir said that because the company is developing what it believes is a completely new and disruptive approach to computing, it requires developments at all levels of the hierarchy, from the basic computing devices to their design and configuration tools, the high-level algorithmic development, and the testing framework. Now that the optimal solutions and market needs are being identified, aiCTX can expand its chip design and system engineering activities, and it is starting to talk to investors about a Series A round in the next month. "Baidu Ventures' investment will help us grow our team, so we can move faster on the applications we've identified," he said.

aiCTX Vision

The company told us that its vision is to develop this technology to solve AI problems and create a whole new field of research that it calls NI, for "neuromorphic intelligence." Muir said that the landscape of computing is rapidly changing from bulky, power-hungry general-purpose computing systems to small, task-specific, low-power edge-computing embedded sensory processing systems.

The spiking sensory and neural processing systems studied at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich demonstrate that brain-inspired architectures can implement low-power computations efficiently and robustly. "The vision of the company is to exploit the know-how accumulated over the years in studying beyond-von-Neumann computational paradigms and to develop engineering solutions that have high potential in the growing IoT market," added Muir.

In particular, the last two years have seen tremendous leaps in state-of-the-art neural-network algorithms, especially in the context of application-oriented spiking neural networks.
This has paved the path for demonstrable gains in the use of neuromorphic devices for solving complex pattern recognition and classification tasks.

Muir summarized, "Our approach is to deliver not only new hardware tailored for a given application but to provide a full working solution; that means that we also develop neural-network configurations for the neuromorphic devices in-house. We have a research team within aiCTX to build full applications around our neuromorphic hardware."
Release time: 2018-11-19
Baidu Accelerator Rises in AI
China's Baidu followed in Google's footsteps this week, announcing that it has developed its own deep-learning accelerator. The move adds yet another significant player to a long list in AI hardware, but details of the chip and when it will be used remain unclear.

Baidu will deploy Kunlun in its data centers to accelerate machine-learning jobs for both its own applications and those of its cloud-computing customers. The services will compete with companies such as Wave Computing and SambaNova that aim to sell business users appliances that run machine-learning tasks.

Kunlun delivers 260 tera-operations/second while consuming 100 Watts, 30 times as powerful as Baidu's prior accelerators based on FPGAs. The chip is made in a 14nm Samsung process and consists of thousands of cores with an aggregate 512 GBytes/second of memory bandwidth.

Baidu did not disclose its architecture, but like Google's Tensor Processing Unit, it probably consists of an array of multiply-accumulate units. The memory bandwidth likely comes from use of a 2.5D stack of logic and the equivalent of two HBM2 DRAM chips.

Kunlun will come in a version for training (the 818-300 chip) and one for less computationally intensive inference jobs (the 818-100). It is aimed at use both in data centers and in edge devices such as self-driving cars. Baidu did not comment on when it will offer access to the chip as a service on its Web site or on its plans for merchant sales, if any, to third parties.

Baidu said the chip will support Baidu's PaddlePaddle AI framework as well as "common open source deep learning algorithms." It did not mention any support for the wide variety of other software frameworks. It is geared for the usual set of deep-learning jobs, including voice recognition, search ranking, natural-language processing, autonomous driving, and large-scale recommendations.

One of the few Western analysts at the Baidu Create event where Kunlun was announced on July 4 described the chip as "definitely interesting, but still [raising] lots of remaining questions."

"My sense is that they will first leverage it in their data centers and offer it via an AI service that developers can tap into… in particular, it could get optimized for Baidu's Apollo autonomous car platform," said Bob O'Donnell, chief analyst at Technalysis Research LLC.

Based on raw specs, Kunlun is significantly more powerful than the second generation of Google's TPU, which delivers 45 TFlops at 600 GB/s memory bandwidth. However, "you always have to be careful making comparisons, since Baidu apparently didn't describe what its operations are," said Mike Demler of The Linley Group. (A back-of-the-envelope comparison based on these raw figures appears at the end of this article.)

Baidu released a picture of a mock-up of its chip but no datasheet or availability details. (Image: Baidu)

Given that it's still early days for deep learning, Web giants such as Google and Baidu may use a mix of their own ASICs along with GPUs and FPGAs for some time, said Kevin Krewell of Tirias Research.

"In areas where algorithms are changing, it may still be important to use more programmable and flexible solutions like CPUs, GPUs, and FPGAs. But in other areas where the algorithms become more fixed, then ASICs can provide a more power-efficient solution," said Krewell.

Kunlun is not Baidu's first hardware initiative. Last year, it launched Duer, its own smart-speaker service, with OEM and silicon partners.

At the Beijing event this week, Baidu also announced an upgrade of its machine-learning service, called Baidu Brain 3.0, supporting 110 APIs or SDKs, including ones for face, video, and natural-language recognition.
Users of the service's EasyDL tool for creating computer-vision models include one unnamed U.S. company that is deploying it at checkout stands in more than 160 grocery stores to check for unpaid products on the bottom shelf of shopping carts.
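As promised above, the short Python sketch below turns the disclosed raw figures (260 tera-ops/s, 100 W, and 512 GB/s for Kunlun; 45 TFlops and 600 GB/s for the second-generation TPU) into ops-per-watt and ops-per-byte numbers. It assumes the headline figures are directly comparable, which, as Demler cautions, they may not be.

    # Back-of-the-envelope comparison of the raw figures quoted above. The math
    # assumes the headline numbers are directly comparable (which, as Demler
    # notes, they may not be, since the operation types were not disclosed).
    kunlun = {"ops_per_s": 260e12, "watts": 100, "mem_bw_Bps": 512e9}
    tpu_v2 = {"ops_per_s": 45e12,  "watts": None, "mem_bw_Bps": 600e9}

    def ops_per_byte(chip):
        # Arithmetic intensity the memory system must sustain to keep the
        # compute units busy (a roofline-style figure of merit).
        return chip["ops_per_s"] / chip["mem_bw_Bps"]

    print(f"Kunlun: {kunlun['ops_per_s'] / kunlun['watts'] / 1e9:.0f} GOPS per watt")
    print(f"Kunlun needs ~{ops_per_byte(kunlun):.0f} ops per byte from memory")
    print(f"TPU v2 needs ~{ops_per_byte(tpu_v2):.0f} ops per byte from memory")

By these crude measures, Kunlun would need to perform roughly 500 operations for every byte fetched from memory to stay busy, which is why dense workloads such as large matrix multiplies are the natural fit for this class of accelerator.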
Release time: 2018-07-09
Intel Pushes AI at Baidu Create
Baidu Create 2018 this week in Beijing, the second annual AI developer conference sponsored by China's Internet giant, is looking more and more like Google I/O or the Apple Worldwide Developers Conference in Silicon Valley.

Certainly, for thousands of local tech developers in China, this conference is a must. But it's also turning into an event of choice for Intel, where the company can promote its own technology, unveil partnerships, and tout the inroads it's blazing in the Chinese market.

Mobileye & Baidu's Apollo

Timed with this year's Baidu Create, Intel/Mobileye announced that Mobileye's Responsibility Sensitive Safety (RSS) model will be designed into both Baidu's open-source Project Apollo and its commercial Apollo Drive programs.

Mobileye sees its proposed RSS model as critical to providing "safety assurance of autonomous vehicle (AV) decision-making" in the era of artificial intelligence. More specifically, Mobileye recently acknowledged that because AI-based AVs operate probabilistically, they could make mistakes.

To mitigate unsafe operations by AI-driven vehicles, Mobileye said that under the RSS model it is installing two separate systems: 1) AI based on reinforcement learning, which proposes the AV's next action, and 2) a "safety layer" based on a formal deterministic system that can override an "unsafe" AV decision. (A minimal sketch of this two-layer arrangement appears at the end of this article.)

Intel told us that Baidu is the first company to "publicly" announce the adoption of the RSS model.

Now that Baidu's Apollo program "has a huge backing from the Chinese government," Egil Juliussen, IHS Markit director of research and principal analyst for automotive technology, told us, "This will make [the RSS model] real, and help speed up its adoption, initially in China."

At Baidu Create, Baidu also announced plans to adopt Mobileye's Surround Computer Vision Kit as its visual perception solution.

Apollo's impact on the global auto market

In one short year, Baidu's Apollo platform has signed more than 100 companies while making major advancements by enabling a host of new features that include telematics updates on its open AV platform.

Juliussen noted that Baidu's Apollo recruits include many technology powerhouses in the West, including Nvidia, Intel, ZF, Bosch, and Continental. "It's an impressive list," he said.

Aside from the list of tech vendors that support Baidu's open AV platform design, how significant is Apollo in the global automotive industry? By promoting an Android-like open platform among car OEMs and tier ones, can Baidu, and by extension China, really leapfrog the Western AV/EV industry?

One Chinese semiconductor industry executive, who spoke on the condition of anonymity, recently told us, "Yes, Baidu is making headway with Apollo. But here's the thing. Does Baidu make cars? Who will actually make Apollo-based cars?"

Juliussen doesn't think this is a big problem. With leading tier ones and tech companies in the West already eager to make Apollo-based modules, local Chinese automotive manufacturers will find it easy to build Apollo-based AVs/EVs, he explained.

"Initially, they will be Chinese cars for the Chinese market. But local Chinese OEMs will start exporting them in the next five years or so. First, they may be just focused on the low-end segment, which would allow them to capture only a sliver of the market. But over the next decade, they will move onto the high-end market."

Important to remember is that the Apollo program isn't just about a hardware platform. Baidu is adding a software platform on top of that, where apps will abound.
This opens the door not just to Western tech companies but also to Chinese software companies. They can add their own vision-type algorithms, Juliussen explained, allowing China to have its own AI-based autonomous vehicles using home-grown AI technologies.

AI camera, FPGAs, PaddlePaddle

Separately, Intel used Baidu Create to showcase a host of its own AI-related technologies. These include Xeye, a new AI camera powered by Intel's Movidius vision processing unit; Baidu's plan to offer workload acceleration as a service leveraging Intel FPGAs; and Baidu's deep-learning framework, PaddlePaddle. Intel says that Baidu's PaddlePaddle is now optimized for Intel Xeon Scalable processors.
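As noted above, here is a minimal illustrative sketch of the two-system RSS arrangement: a learned policy proposes a maneuver and a deterministic safety layer may veto it. The following-distance rule, thresholds, and function names are hypothetical stand-ins, not Mobileye's actual RSS formulation.

    # Illustrative sketch only: a learned policy proposes a maneuver and a
    # deterministic safety layer may override it, mirroring the two-system
    # RSS arrangement described above. The safety rule here (a minimum
    # time gap to the car ahead) is a made-up stand-in, not Mobileye's model.
    from dataclasses import dataclass

    @dataclass
    class State:
        gap_m: float        # distance to the vehicle ahead, metres
        speed_mps: float    # own speed, metres/second

    MIN_TIME_GAP_S = 2.0    # hypothetical deterministic rule

    def learned_policy(state: State) -> str:
        """Stand-in for the reinforcement-learned planner (here: always push on)."""
        return "accelerate"

    def safety_layer(state: State, proposed: str) -> str:
        """Deterministic check that can veto an unsafe proposal."""
        time_gap = state.gap_m / max(state.speed_mps, 0.1)
        if proposed == "accelerate" and time_gap < MIN_TIME_GAP_S:
            return "brake"          # override the learned proposal
        return proposed             # proposal passes the formal check

    state = State(gap_m=20.0, speed_mps=15.0)          # ~1.3 s gap: too close
    print(safety_layer(state, learned_policy(state)))  # -> "brake"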
Release time: 2018-07-05
Baidu to Release Voice Data for AI
China Web giant Baidu will make available what it claims are three of the largest data sets related to Chinese voice recognition in an effort to attract developers. Its Project Prometheus also includes a $1 million fund to invest in efforts related to voice and machine learning.

The initiative is part of DuerOS, Baidu's platform for natural-language services. Earlier this year, the Web giant, known as the Google of China, formally launched DuerOS and a variety of third-party products using it.

Baidu will gradually open three large datasets: one in far-field wake-word detection, one in far-field speech recognition, and one in what it calls multi-turn conversations. The data can be used to train new smart voice systems or services.

The wake-word data consists of about 500,000 voice clips of five to ten popular Chinese wake words. It includes the wake word used to activate DuerOS devices, "xiaodu xiaodu." The speech-recognition datasets will include thousands of hours of spoken Mandarin. The third data set is made up of thousands of dialogues across the ten domains that DuerOS currently serves.

Web giants such as Baidu typically guard the large datasets they accumulate because they are seen as part of their strategic advantage. Baidu's goal is to enable many small groups to use the data to expand Baidu's offerings and drive the technology ahead. "In the age of AI, data is the new oil," said Guoguo Chen, Baidu's principal architect for DuerOS, speaking in a press statement.

Even giants such as Amazon and Google do not yet support Chinese in their Alexa and Google Assistant products today, in part due to the complexity of the language. Interestingly, Baidu invited Björn Hoffmeister, senior manager of Amazon Machine Learning, to speak about the field at an event in Silicon Valley today where Baidu launched Prometheus. Baidu is taking a page from Facebook, which has tried to spawn open-source work among partners to gain leverage over larger rivals.

Under Project Prometheus, Baidu will work with universities and other researchers to conduct joint training, course design, and workshops. The effort is geared to attract talent to the field as well as to make Baidu a center of technical work in the area.

Baidu claims that more than 100 branded devices, from refrigerators and air conditioners to TV set-top boxes and smart speakers, currently use its DuerOS.
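As a rough illustration of how such a wake-word corpus could be used, the sketch below trains a toy nearest-centroid detector on labelled clips. The feature extraction, placeholder audio, and label set are all assumptions made for the example; a production system would use learned acoustic features and the real Baidu data rather than random noise.

    # Hypothetical sketch of how a wake-word dataset like the one described
    # (short labelled clips of a few keywords) could train a detector. The
    # feature extraction is a crude band-energy stand-in; real systems use
    # MFCCs or a learned front end, and real audio, not noise.
    import numpy as np

    rng = np.random.default_rng(1)
    KEYWORDS = ["xiaodu_xiaodu", "other"]     # labels; the dataset itself is not included

    def features(clip: np.ndarray, bands: int = 16) -> np.ndarray:
        spectrum = np.abs(np.fft.rfft(clip))
        return np.array([band.mean() for band in np.array_split(spectrum, bands)])

    # Placeholder "clips": random noise standing in for 1-second, 16 kHz audio.
    clips = rng.normal(size=(200, 16000))
    labels = rng.integers(0, len(KEYWORDS), size=200)

    # Nearest-centroid classifier: average feature vector per keyword.
    feats = np.stack([features(c) for c in clips])
    centroids = np.stack([feats[labels == k].mean(axis=0) for k in range(len(KEYWORDS))])

    def detect(clip: np.ndarray) -> str:
        distances = np.linalg.norm(centroids - features(clip), axis=1)
        return KEYWORDS[int(distances.argmin())]

    print(detect(clips[0]))   # classifies one clip; illustrative only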
Release time: 2017-11-10
Baidu's Voice Exec Speaks Out
Kun Jing wants to enable any embedded system in China to listen to and speak Mandarin. He aims to make Baidu's DuerOS a kind of Android for natural-language cloud services.

"Our goal is to have every chip maker pre-install our software," said Jing, general manager of Baidu's DuerOS group, in an interview with EE Times. "We want every device to have voice capability," he said, noting that the free DuerOS code can add value to an otherwise commodity Wi-Fi chip.

So far, ARM, Conexant, Intel, Nvidia, Qualcomm, Realtek, RDA Microelectronics, and one undisclosed chip vendor plan to support DuerOS. They are among about 100 partners that include systems, software, and content companies.

Realtek, RDA, and the unnamed chip partner will offer so-called lightweight chip sets. So far, the RDA 5981, a 40nm Wi-Fi/Bluetooth chip with an ARM Cortex-M4 processor, is the only chip shipping with the DuerOS SDK pre-installed.

Smartphones such as an HTC handset shipping now will run DuerOS on versions of Qualcomm's Snapdragon. Intel is working with Lenovo on a smart speaker that will ship later this summer.

As many as 30 DuerOS products are in the works, including smartphones, TVs, refrigerators, air conditioners, and speakers from OEMs such as Haier, HTC, Vivo, and Harman. A TV with voice-search capabilities shipped in March, and a smart speaker shipped in May.

"Right now, it's all premium partners we work with closely to port and optimize our software for their chip sets," said Jing.

Baidu officially launched DuerOS at a Beijing event on July 4 with about 100 different capabilities. It claims that its natural-language recognition has a 97 percent accuracy rate.

Despite its name, DuerOS "is not a traditional operating system, but a cloud service client that supports a wide range of OSes such as FreeRTOS, ARM Mbed, Linux and iOS," said Jing. (Amazon takes a similar approach with its Alexa voice service.)
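Jing's description of DuerOS as a cloud service client rather than an operating system boils down to a thin on-device loop that captures audio and defers the heavy processing to the cloud. The sketch below illustrates only that general pattern; the endpoint URL, payload format, and helper functions are hypothetical and are not the actual DuerOS SDK.

    # Generic sketch of the "cloud service client" pattern described above:
    # the device does little more than capture audio and hand it to a cloud
    # natural-language service. Endpoint, payload format, and helpers are
    # hypothetical placeholders, not the actual DuerOS API.
    import requests   # assumes the requests package is available

    CLOUD_ENDPOINT = "https://voice.example.com/v1/understand"   # placeholder URL

    def wake_word_detected() -> bool:
        """Stand-in for an on-device detector (often run on the Wi-Fi MCU/DSP)."""
        return True

    def record_utterance(seconds: float = 3.0) -> bytes:
        """Stand-in for microphone capture; returns raw 16 kHz, 16-bit PCM."""
        return b"\x00" * int(16000 * 2 * seconds)

    def main_loop():
        while True:
            if wake_word_detected():
                audio = record_utterance()
                # Heavy lifting (ASR, NLU, response generation) happens server-side.
                try:
                    resp = requests.post(
                        CLOUD_ENDPOINT, data=audio,
                        headers={"Content-Type": "application/octet-stream"},
                        timeout=5)
                    print("cloud reply:", resp.status_code)
                except requests.RequestException as exc:
                    print("cloud unreachable (placeholder endpoint):", exc)
            break   # single pass for the sketch; a device would loop forever

    if __name__ == "__main__":
        main_loop()

Because the client carries no language models of its own, the same thin loop can run on anything from a commodity Wi-Fi chip with a Cortex-M4 up to a smartphone, which is what makes the "pre-install on every chip" pitch plausible.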
Release time: 2017-07-17
Baidu Upgrades Neural Net Benchmark
Baidu updated its open-source benchmark for neural networks, adding support for inference jobs and for low-precision math. DeepBench provides a target for optimizing chips that help data centers build larger and, thus, more accurate models for jobs such as image and natural-language recognition.

The work shows that it's still early days for neural nets. So far, results running the training version of the spec launched last September are available only on a handful of Intel Xeon and Nvidia graphics processors.

Results for the new benchmark on server-based inference jobs should be available on those chips soon. In addition, Baidu is releasing results on inference jobs run on devices including the iPhone 6, the iPhone 7, and a Raspberry Pi board. Inference in the server has longer latency but can use larger processors and more memory than is available in embedded devices like smartphones and smart speakers. "We've tried to avoid drawing big conclusions; so far, we're just compiling results," said Sharan Narang, a systems researcher at Baidu's Silicon Valley AI Lab.

At press time, it was not clear whether Intel would have inference results for today's release, and it is still working on results for its massively parallel Knights Mill. AMD expressed support for the benchmark but has yet to release results running it on its new Epyc x86 processors and Radeon Instinct GPUs.

A handful of startups, including Cornami, Graphcore, Wave Computing, and Nervana (acquired by Intel), have plans for deep-learning accelerators. "Chip makers are very excited about this and want to showcase their results, [but] we don't want any use of proprietary libraries, only open ones, so these things take a lot of effort," said Narang. "We've spoken to Nervana, Graphcore, and Wave, and they all have promising approaches, but none can benchmark real silicon yet."

The updated DeepBench supports lower-precision floating-point operations and sparse operations for inference to boost performance. "There's a clear correlation in deep learning of larger models and larger data sets getting better accuracy in any app, so we want to build the largest possible models," said Narang. "We need larger processors, reduced-precision math, and other techniques we're working on to achieve that goal."
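DeepBench times low-level kernels such as dense and sparse matrix multiplies on real hardware. The snippet below is not DeepBench itself, just a minimal illustration of that style of measurement: timing a GEMM at single and at reduced (float16) precision, since reduced-precision inference operations are what the update adds.

    # Not DeepBench: a minimal illustration of the kind of kernel-level
    # measurement such a benchmark collects, timing a matrix multiply at
    # single precision and at reduced (float16) precision.
    import time
    import numpy as np

    M, N, K = 1024, 1024, 1024

    def time_gemm(dtype, repeats: int = 5) -> float:
        a = np.random.rand(M, K).astype(dtype)
        b = np.random.rand(K, N).astype(dtype)
        best = float("inf")
        for _ in range(repeats):
            t0 = time.perf_counter()
            _ = a @ b
            best = min(best, time.perf_counter() - t0)
        flops = 2 * M * N * K            # multiply-accumulate count
        return flops / best / 1e9        # GFLOP/s for the fastest run

    for dt in (np.float32, np.float16):
        print(f"{np.dtype(dt).name}: {time_gemm(dt):.1f} GFLOP/s")

The point of collecting such numbers across many kernel shapes and devices is exactly the comparison Narang describes: open, reproducible measurements rather than vendor-supplied peak figures.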
Release time: 2017-06-29
