IC Insights Raises 2018 IC Market Forecast from 8% to 15%
IC Insights' latest market, unit, and average selling price forecasts for 33 major IC product segments for 2018 through 2022 are included in the March Update to the 2018 McClean Report (MR18). The Update also includes an analysis of the major semiconductor suppliers' capital spending plans for this year. The biggest adjustments to the original MR18 IC market forecasts were to the memory market, specifically the DRAM and NAND flash segments. The DRAM and NAND flash market growth forecasts for 2018 have been adjusted upward to 37% for DRAM (13% shown in MR18) and 17% for NAND flash (10% shown in MR18).

The big increase in the DRAM market forecast for 2018 is primarily due to a much stronger ASP expected for this year than was originally forecast. IC Insights now forecasts that the DRAM ASP will register a 36% jump in 2018, after the DRAM ASP surged by an amazing 81% in 2017. The NAND flash ASP is forecast to increase 10% this year, after jumping by 45% in 2017. In contrast to the strong DRAM and NAND flash ASP increases, 2018 unit volume growth for these product segments is expected to be up only 1% and 6%, respectively. At $99.6 billion, the DRAM market is forecast to be by far the largest single product category in the IC industry in 2018, exceeding the expected NAND flash market ($62.1 billion) by $37.5 billion.

Figure 1 shows that the DRAM market has provided a significant tailwind or headwind for total worldwide IC market growth in four of the last five years. The DRAM market dropped by 8% in 2016, driven by a 12% decline in ASP, and the DRAM segment became a headwind to worldwide IC market growth that year instead of the tailwind it had been in 2013 and 2014. As shown, the DRAM market shaved two percentage points off total IC industry growth in 2016. In contrast, the DRAM segment boosted total IC market growth last year by nine percentage points. For 2018, the expected five-point positive impact of the DRAM market on total IC market growth is forecast to be much less significant than it was in 2017.

Figure 1
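The relationship between the ASP and unit-volume figures above and the overall market forecasts is simple compounding. As a back-of-the-envelope check (my arithmetic, not IC Insights' model), the forecast 36% DRAM ASP jump and 1% unit growth compound to roughly 37% market growth, and the 10% NAND flash ASP increase with 6% unit growth gives roughly 17%:

```python
# Back-of-the-envelope check: market growth ~= (1 + ASP growth) * (1 + unit growth) - 1
def market_growth(asp_growth: float, unit_growth: float) -> float:
    return (1 + asp_growth) * (1 + unit_growth) - 1

print(f"DRAM 2018: {market_growth(0.36, 0.01):.1%}")  # ~37.4%, consistent with the ~37% forecast
print(f"NAND 2018: {market_growth(0.10, 0.06):.1%}")  # ~16.6%, consistent with the ~17% forecast
```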
Release time: 2018-03-15
AI Helps Bots Avoid Collisions
Collision detection and tracking technologies to make collaborative robots safer around humans and other moving objects are emerging from startups and research labs. A University of California, San Diego (UCSD) team has designed a faster algorithm that helps robots avoid obstacles using machine learning, and Massachusetts Institute of Technology spinoff Humatics is developing artificial intelligence (AI)-assisted indoor radar systems so robots can precisely track human movements.

The UCSD algorithm, called Fastron, speeds and simplifies collision detection using machine learning. It classifies collisions or non-collisions of moving objects based on a model of the robot's configuration space (C-space) that uses only a small number of collision points and collision-free points. Existing collision detection algorithms are computation-intensive because they specify all the points in the 3D geometries of robots and their obstacles, and then check every point for possible collisions between two bodies. When those bodies are moving, the computational load increases drastically.

Fastron's C-space model is used as a proxy for kinematic-based collision detection. The algorithm combines a modification of the kernel perceptron learning algorithm with an active learning algorithm to reduce the number of kinematic-based collision detections. Instead of checking each point, the algorithm checks near the boundaries and classifies collisions versus non-collisions. The classification boundary between them changes as objects move, so the algorithm rapidly updates its classifier and then continues the cycle.

In simulations, the team has demonstrated that proxy collision checks can be done twice as fast as an efficient polyhedral checker, and eight times as fast as an efficient high-precision collision checker, without the need for GPU acceleration or parallel computing. In addition to use on the factory floor, one potential application is helping robotic surgical arms autonomously perform assistive tasks more safely during surgery, without interfering with either the patient's organs or surgeon-controlled robot arms.

Startup Humatics was co-founded by CEO David Mindell, who is also a professor of aerospace engineering and history of technology at MIT. Its chief product officer is Stephen Toebes, former senior vice president of product development and operations for collaborative robotics leader Rethink Robotics, maker of the Baxter and Sawyer robots. Its vice president and principal software architect is Michael Barbehenn, former vice president of software and a longtime leader at Kiva Systems/Amazon Robotics, a pioneer in mobile warehouse robots.

The company's Spatial Intelligence Platform combines a micro-location system based on inexpensive RF technology with AI-assisted analytics software. A single system can track multiple moving transponder targets with millimeter-scale precision at ranges of up to 30 meters. Multiple systems can be networked together for broader coverage, from factory work cells to entire distribution centers.

Humatics wants to create a world with small, inexpensive RF beacons that can do centimeter- and millimeter-scale absolute reference positioning: indoors, outdoors, and in all weather, said Mindell. "Collaborative robots don't really know where people are. To be incorporated into human environments they will all have to be position-navigating the things around them.
So we think the future of autonomous robots, whether they are cars, or automated robots in factories, or drones, will all be part of a connected world."  Although the company is working in small- and short-range radar, "we're ambivalent about the term radar," said Mindell. "Our Spatial Intelligence Platform is not a backscatter system, it's a secondary radar system in the way air traffic control is secondary, or beacon-to-beacon." The current system is a millimeter-accurate, 3D measurement single unit that can track a large number of mobile, small battery-powered or vehicle-powered beacons, or "pucks," on moving objects like people or other robots, and that gives millimeter-scale tracking at very high update rates, he said.  The company is building its own analytics, said Mindell. "Everything is moving around at the millimeter scale, so there's a lot of richness in this precision micro-location data." The core algorithms are basic recursive estimators, which are self-tuning and self-optimizing. As they gather reams of data they become better at analyzing motion and position, he said.  The system's hardware and software are inexpensive and scalable. The industry has driven down the cost of microwave and millimeter-wave electronics, and standard APIs for the technology let the data be used by other applications and services. The architecture is extensible, so the system can be networked throughout a large factory or other space to provide broader coverage, said Mindell. The solution will be piloted in 2018 and launched in early 2019.
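As a rough illustration of the proxy-check idea behind Fastron described above, the sketch below trains a kernel perceptron on a handful of labeled configuration-space samples and then uses the learned classifier as a cheap stand-in for exact geometric collision checks. This is a minimal, hypothetical Python rendering of the general technique (an RBF-kernel perceptron updated on misclassified points), not the UCSD team's actual implementation, and it omits Fastron's active-learning step:

```python
import numpy as np

def rbf(a, b, gamma=5.0):
    # Gaussian (RBF) kernel between two configuration-space points
    return np.exp(-gamma * np.sum((a - b) ** 2))

def train_kernel_perceptron(X, y, epochs=50, gamma=5.0):
    # X: (n, d) sampled configurations; y: +1 = collision, -1 = collision-free
    alpha = np.zeros(len(X))
    for _ in range(epochs):
        updated = False
        for i, x in enumerate(X):
            score = sum(alpha[j] * y[j] * rbf(X[j], x, gamma) for j in range(len(X)))
            if y[i] * score <= 0:      # misclassified sample: strengthen it as a support point
                alpha[i] += 1.0
                updated = True
        if not updated:                # all training samples classified correctly
            break
    return alpha

def proxy_in_collision(q, X, y, alpha, gamma=5.0):
    # Cheap proxy check: sign of the kernel expansion, no geometric test required
    score = sum(alpha[j] * y[j] * rbf(X[j], q, gamma) for j in range(len(X)))
    return score > 0

# Toy 2D C-space: configurations within 0.25 of (0.5, 0.5) collide with an obstacle
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(60, 2))
y = np.where(np.linalg.norm(X - 0.5, axis=1) < 0.25, 1, -1)
alpha = train_kernel_perceptron(X, y)
print(proxy_in_collision(np.array([0.50, 0.55]), X, y, alpha))  # expected: True (near the obstacle)
print(proxy_in_collision(np.array([0.05, 0.90]), X, y, alpha))  # expected: False (far from it)
```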
Release time: 2018-03-12
Arm GPU Gets More AI Muscle
  ARM announced four new cores for mainstream smartphones and digital TVs, two Mali GPUs and associated video and display cores for them. The news shows that Arm is, at least for now, taking a three-tier approach to machine learning and that China mobile OEMs are becoming increasingly influential.  Arm’s new Mali G52 GPU core is aimed at mid-tier smartphones and digital TVs using combinations of Cortex-A72 and -A55 CPU cores. The GPU boosts machine-learning performance up to 3.6x for ImageNet classifiers compared to its existing G51 core.  The G52 packs eight execution engines compared to four on the G51, with four lanes in each engine and each capable of up to four 8-bit integer multiply-accumulate operations per cycle. Up to four G52s can be used in an SoC, each executing up to 288 MACs/cycle.  For the low end, a new G31 core uses Arm’s Bifrost architecture and targets systems using A55 CPUs. It is Arm’s smallest core to date to support the latest OpenGL ES and Vulkan graphics APIs but provides no specific acceleration for neural nets.  The company previously announced that it is preparing dedicated neural-network acceleration cores for premium mobile systems as part of its Project Trillium.  “We may not always have a dedicated machine-learning processor in these devices,” said an Arm spokesman.  New display and video cores are targeted for use with the G52/31. The D51 display core aims to handle more jobs with significantly fewer accesses needed to external memory. The V61 video core supports 4K resolution at 60 frames/second as well as HDR10 rendering.  Arm provided no results of third-party benchmarks for the cores.  As of this year, more than a billion smartphones from China’s largest OEMs will be in use, with users outside of China doubling each year. China’s handset makers grew their share of the global pie to 945 million phones, 31% of total handset sales last year, according to stats that Arm showed from market watcher Newzoo.  For its part, Arm said that 159 licensees have shipped a total of 1.2 billion Mali GPU cores to date. The cores are currently used in half of all handsets and 80% of digital TVs, it said.  Arm’s Mali leads in the mobile GPU space with a 48% share with design wins in handsets, tablets, and TVs as well as some IoT and automotive systems, according to Jon Peddie Research. Qualcomm’s Snapdragon with its Adreno GPUs follows at 25%, and Imagination Technologies, which used to lead the sector, now is in third at 12%.
Release time: 2018-03-09
With Windows ML, Intel AI to Invade Mobile PCs
It might not be too long before your average mobile PC will feature — on its motherboard — not just CPUs and GPUs but also an embedded AI inference chip, like the Intel/Movidius Vision Processing Unit (VPU).

The first clue to this scenario unfolded in Microsoft Corp.'s launch announcement today, at its Windows Developer Day, of Windows ML, an open-standard framework for machine-learning tasks in the Windows OS. Microsoft said that it is extending Windows OS native support to the Intel/Movidius VPU. Implied in the message is that Intel/Movidius has taken a step closer to finding a home not just in embedded applications, such as drones and surveillance cameras, but also in Windows-based laptops and tablets.

In a telephone interview with EE Times, Gary Brown, director of marketing at Movidius/Intel, confirmed, "Although today's announcement isn't about that [VPU integration on a mobile PC], yes, you will see VPU migrating into a PC motherboard."

Windows ML is expected to bring Windows up to date in the fast-heating AI world. It will "dynamically determine the most suitable hardware for any given AI workload and intelligently distribute across multiple hardware types — now including Intel VPUs," according to Intel/Movidius.

Brown explained, "Our VPU can offload heavy-duty AI processing tasks such as vision, face recognition, voice, biometrics, and others from CPUs and GPUs on PCs. VPU can help free up their processing resources."

Intel/Movidius VPU goes mainstream

Kevin Krewell, principal analyst at Tirias Research, told us, "Adding native Windows support will flag developers that the Movidius VPU is going more mainstream."

However, Krewell isn't quite sure that a PC is the right home for the VPU. "I can see the VPU as a good addition to AR/VR products like a next-generation Hololens. I'm not sure if it makes sense in a PC — there's plenty of processing power in a PC, including the CPU and GPU to process video. The Movidius VPU works best where the unit is power-constrained, like drones."

He added, "Perhaps this is a first step for Microsoft extending Windows into new areas such as drones and robots."

The world of AI is expanding fast, putting Microsoft under pressure to catch up. Last year, Khronos started work on its own low-level ML framework. Like graphics APIs, this is meant to be a generic API. However, as the Windows ML announcement makes clear, Microsoft still needed one specific to its Windows OS.

Mike Demler, senior analyst at The Linley Group, observed that Windows ML looks to be like any other neural-network runtime API. However, he added, "It's about time that Microsoft caught up with Arm platforms." Windows OS native support "gives Movidius better access to the Windows laptop/tablet market," he said.

Asked what's in it for Microsoft, Demler explained, "Developers could already run machine-learning apps on Windows platforms using the CPU, GPU, or custom peripherals like [Intel/Movidius] Myriad, but Windows ML gives them a standard method." For PC users, Windows ML will "help drive the migration of machine-learning apps to client devices — PCs in this case."

What AI applications?

If an AI processor is being designed into a specific embedded system, the applications and tasks assigned to the AI chip are clear. They can do object tracking, collision avoidance in drones, or forensic analysis in surveillance cameras.
The goal of pushing AI to the edge is to enable embedded systems for "sense, assess, and decide" actions, said former Movidius CEO Remi El-Ouazzane, now an Intel vice president and general manager.

If so, what exactly are the AI applications on a PC?

Intel's Brown suggested many. "Suppose you walk into a room and there is a Windows tablet on a table. It can see you, recognize your voice or face, and help you with a variety of personal-assistant types of work, including smart music search or classifications of your photos. The vision-based AI can also help enhance images in video conferencing." Of course, it's up to app developers' imaginations to come up with new AI apps on PCs.

Demler believes that AI apps on PCs will be no different from apps on other mobile devices. "They include biometrics, AR/VR, image processing, object recognition, and others," he said.

For now, Intel's Brown acknowledged that Movidius/Intel's Myriad X is the first AI processor for Windows ML to take advantage of on mobile PCs. Asked about the timing for the Myriad X to land on a motherboard, Brown said, "Soon."

However, the deal obviously isn't exclusive to Intel/Movidius. Asked to speculate on other possible AI accelerator candidates for PCs, Demler stressed that the conversation focuses on mobile PCs. "There are plenty of Nvidia GPUs running machine-learning apps in desktop PCs. Linley [Gwennap] recently covered an AI accelerator startup called Gyrfalcon that has an AI-accelerator chip they build into a USB stick, just like Movidius. The trick is getting onto the motherboard. In China, you have AI chip companies like Cambricon, which has Lenovo as one of its investors."

Myriad X

The AI inference chip expected to be the first that Windows ML takes advantage of is Intel/Movidius's Myriad X. When the company unveiled it last summer, El-Ouazzane told us that in designing the company's third-generation VPU, "We were looking to anything that allows us to increase the performance of neural networks without increasing power."

With many more hardware acceleration blocks, the Myriad X architecture can deliver a trillion operations per second (1 TOPS) of compute performance on deep-neural-network inferences while keeping power use within a watt. The Myriad X, housed in an 8 x 9-mm package, is capable of 4 TOPS in total.

In designing the Myriad X, Movidius explained that it increased the number of its Streaming Hybrid Architecture Vector Engine (SHAVE) DSP cores from 12 [in the Myriad 2] to 16. Movidius also added a neural-compute engine with more than 20 enhanced hardware accelerators. These accelerators are designed to perform specific tasks without additional compute overhead. They include depth-mapping to extract edges (a key to drone landings, for example), a de-warping engine for sensors enabling a wider field of view, and optical flow for the super-high-performance motion estimation critical to tracking and counting people with surveillance cameras, explained El-Ouazzane at the time of the Myriad X announcement last year.
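To put the Myriad X figures above in rough perspective: sustaining about 1 TOPS of deep-neural-network compute within roughly a watt works out to on the order of 1 pJ per operation, and a hypothetical network costing 5 GOPs per frame (an assumed figure, purely for illustration) could in theory run at about 200 frames per second within that budget:

```python
# Rough energy/throughput arithmetic from the Myriad X figures quoted above
dnn_throughput_ops = 1e12      # ~1 TOPS of DNN inference compute
power_w = 1.0                  # "within a watt"
ops_per_frame = 5e9            # hypothetical network cost of 5 GOPs/frame (illustrative only)

energy_per_op_j = power_w / dnn_throughput_ops
frames_per_second = dnn_throughput_ops / ops_per_frame

print(f"Energy per operation: {energy_per_op_j * 1e12:.1f} pJ")   # ~1.0 pJ/op
print(f"Theoretical frame rate: {frames_per_second:.0f} fps")      # ~200 fps
```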
Release time: 2018-03-08
NB-IoT Raises its Volume at MWC
  SAN JOSE, Calif. — The Narrowband-IoT version of LTE for the Internet of Things took a big leap forward this week at the Mobile World Congress with reports of new chips, software, and service offerings. NB-IoT is predicted to take the lion’s share of cellular IoT connections over the next few years, growing in parallel with LTE M1 and a host of non-cellular, long-range nets led by LoRa.  Sequans Communications announced its first chip optimized for NB-IoT, leapfrogging Qualcomm. Startup Riot Micro teamed up with a software developer to show a dual-mode (NB-IoT/LTE M1) network, and Qorvo rounded out its portfolio of low-band RF chips for all low-power wide-area networks.  LPWANs will be the world’s fastest-growing connectivity technology through 2025, supporting 4 billion IoT devices by that date, according to market watcher ABI Research. For its part, Qorvo said that it saw 20% growth in the market for low-band products in 2017.  China Mobile reported at MWC that it has launched NB-IoT networks in 346 cities using chipsets from five companies — Huawei, Mediatek, Qualcomm, RDA, and ZTE. The carrier has approved for use on its network 15 NB-IoT modules using the chips, according to a report from TechInsights analysts at the event.  Goodix, a chip vendor in China known for touchscreen controllers and fingerprint sensors, announced that it will sell NB-IoT chips, using IP from its acquisition of Germany’s CommSolid GmbH. It showed a live demo of its technology on the Vodafone network in Barcelona, said TechInsights.  Separately, Cisco reported on trials with China Unicom of an NB-IoT management system, the Cisco Jasper Control Center for NB-IoT. It helps automate control for a wide range of applications from agriculture and building automation to smart metering, parking, fire control, and street lighting.  “We expect to have more than 100 million NB-IoT connections on our network by 2020,” said Xiaotian Chen, general manager of China Unicom’s IoT group, speaking in a Cisco press statement.  China’s other major carrier, China Telecom, gave an update on its aggressive deployments of NB-IoT at a U.S. version of the MWC event last year. The three China carriers are racing to carry out government mandates to deploy cellular IoT.  For its part, Sequans announced its Monarch N, a single chip optimized for LTE Cat NB1/NB2, compliant with 3GPP release 14/15. The company claims that it significantly reduces size and cost compared to its existing dual-mode chip but gave no details other than that it enables modules smaller than 10 mm2.  Sequans said that Monarch N targets markets such as industrial sensors and utility meters. The chip got praise from Verizon for possible use in its guard-band deployments.  “Sequans is a leader in LTE for IoT, and their Monarch technology was instrumental in the launch of our LTE Cat M1 network,” said Chris Schmidt, an executive director at Verizon, side-stepping the question of whether the carrier will use the NB-IoT version.  Startup Riot Micro debuted in December its NB-IoT-only chip that draws milliamps to microamps of power and could sell for well below the industry’s target of a $5 module. At MWC, it partnered with telecom software vendor Amarisoft to demo a dual-mode network using the Riot RM1000 chip and Amarisoft’s Amari LTE 100 software, presumably running on an x86 server.  For its part, Qorvo detailed a portfolio of nine low-band RF chips for any type of IoT network.  The Qorvo parts span bands from 50 to 4,200 MHz. 
They include transmit linear amplifiers, gain blocks, variable gain amplifiers, attenuators, switches, filters, duplexers, and low-noise amplifiers, optimized for low power and small size.
Release time: 2018-03-05
Qualcomm Raises Offer for NXP to $44 Billion
  SAN FRANCISCO — Qualcomm entered into a new amended acquisition agreement with NXP Semiconductors, worth $44 billion, in a move that may cause Broadcom to abandon its hostile takeover attempt of Qualcomm.  Qualcomm said Tuesday (Feb. 20) that the new deal to acquire NXP is worth $127.50 per share in cash, up from the initial $110-per-share price that the companies agreed to 16 months ago. The new deal, which was agreed upon by the boards of directors of both companies, also reduces the minimum tender offer condition to 70% of NXP shares from 80%, lowering the bar for shareholder approval that would cement the deal.  Qualcomm's acquisition of NXP, first announced in October 2016, has been cleared by most antitrust review agencies throughout the world. However, the deal is still awaiting the seal of approval from China's Ministry of Commerce. Signoff from China was reportedly expected sometime this month, although some reports have since suggested that the approval may take even longer to secure.  Thus far, the deal hasn't come anywhere close to the threshold of 80% of shares tendered that the original agreement required. However, Qualcomm said Tuesday that it has entered into binding agreements with nine NXP shareholders who collectively own more than 28% of NXP's outstanding shares.  Qualcomm was forced to sweeten the deal partly because of NXP's financial success and stock performance in the time since the proposed acquisition was announced. Several large institutional shareholders have said publicly in recent months that they would not support the acquisition at the original purchase price, which totaled roughly $38 billion. But the revised deal could cause Broadcom to withdraw its $121 billion bid to acquire Qualcomm, a proposal that Qualcomm's board has twice voted unanimously to reject.  Steve Mollenkopf, Qualcomm's CEO, said through a statement that work on the integration of the two companies continues even as the process of securing regulatory approval has plodded along slower than initially expected.  "With only one regulatory approval remaining, we are working hard to complete this transaction expeditiously," said Mollenkopf. "Our integration planning is on track, and we expect to realize the full benefits of this transaction for our customers, employees, and stockholders."  Broadcom initially offered about $104 billion to acquire Qualcomm last year in what would be one of the largest tech acquisition agreements of all time. When that offer was rejected by Qualcomm's board, Broadcom launched a hostile takeover attempt, nominating directors to Qualcomm's board in advance of Qualcomm's annual stockholder meeting next month.  Earlier this month, Broadcom raised its offer to Qualcomm to $121 billion and included in its acquisition proposal an $8 billion "reverse breakup fee" intended to compensate Qualcomm in the event that the acquisition is not cleared by regulators. Qualcomm has said that it faces substantial risk of losing licensing and product revenue if it agrees to a deal that is ultimately not completed because of regulators.  Qualcomm and Broadcom met last week to discuss Broadcom's revised offer, which was rejected by Qualcomm. While the company said that the meeting was productive and that it would be open to future talks, Qualcomm's board of directors once again rejected the deal, saying that it undervalues Qualcomm and would be risky.  
Broadcom said at the time of the revised offer that the proposal would remain in effect until Qualcomm completed its acquisition of NXP or until Qualcomm's annual stockholder meeting next month. Broadcom said Tuesday that it was "evaluating its options" after the new agreement between Qualcomm and NXP.
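As a quick sanity check of the figures above (simple arithmetic on the reported numbers, not disclosed deal terms), the $44 billion total at $127.50 per share implies roughly 345 million NXP shares, which at the original $110 per share matches the roughly $38 billion original purchase price:

```python
# Implied NXP share count and original deal value from the figures quoted above
new_total = 44e9            # amended deal value, USD
new_per_share = 127.50
old_per_share = 110.00

implied_shares = new_total / new_per_share
implied_old_total = implied_shares * old_per_share

print(f"Implied NXP shares outstanding: {implied_shares / 1e6:.0f} million")   # ~345 million
print(f"Implied original deal value: ${implied_old_total / 1e9:.1f} billion")  # ~$38 billion, as reported
```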
Release time: 2018-03-02
ST Projects Embedded AI Vision
BARCELONA — As expected, AI is the crowd magnet at this year's Mobile World Congress. As Jem Davies, vice president, fellow and general manager of the machine learning group at Arm, quipped during an interview with EE Times, "Machine learning is a bit like fleas. Everyone has got one."

Companies that had already tipped their machine-learning plans prior to the show include Arm (Project Trillium), MediaTek (P60), Ceva (PentaG) and startup GreenWaves (GAP8).

STMicroelectronics, meanwhile, broke its silence and discussed during the company's press conference Tuesday (Feb. 27) how it sees machine learning as a key to "distributed intelligence" in the embedded world. ST envisions a day when a network of tiny MCUs becomes smart enough to detect wear and tear in machines on the factory floor or find anomalies in a building, without periodically reporting sensory data back to data centers.

At its booth, ST demonstrated three tangible AI solutions: a neural network converter and code generator called STM32 CubeMX.AI, ST's own deep-learning SoC (codenamed Orlando V1), and a neural network hardware accelerator (currently under development using an FPGA) that can eventually be integrated into the STM32 microcontroller.

Asked if ST's embedded AI solutions have been developed in partnership with Arm's Project Trillium, ST's president and CEO Carlo Bozotti replied emphatically, "No. These are internally developed by ST."

Unlike many smartphone chip vendors developing an AI accelerator designed to work with a CPU and a GPU inside a handset, ST focuses on designing machine-learning solutions for embedded processors deployed in connected mesh networks. Gerard Cronin, ST's group vice president, told EE Times that ST already has neural network code that runs in software on any STM32 today. Its drawback, he explained, is that it runs too slowly for sophisticated, processing-intensive applications.

For machine-learning acceleration, ST is designing AI-specific hardware and software architectures. ST unveiled its first test chip, an ultra-energy-efficient deep convolutional neural network (DCNN) SoC. It contains eight reconfigurable DCNN accelerators and 16 DSPs. Manufactured in a 28nm FD-SOI process, it is "ultra-energy efficient," claimed Bozotti. He described it as a significant achievement for ST's R&D team. "It's a real SoC, running AlexNet at 0.5 TOPS," Bozotti said.

ST has not decided whether the SoC will be launched as is, since the company is already working on its follow-ons. But, delivering 2.9 TOPS per watt at 266 MHz, it can be used as a co-processor for ST's MCUs.

ST's ultimate AI scenario for the STM32, however, might be integrating a neural network hardware accelerator inside the MCU. The FPGA-based demo showed that it takes only a fraction of the STM32's CPU load to detect how many people are in a scene captured by an infrared camera.

Responding to the market's hunger for AI, Arm is confident it has built a better mousetrap with its CPU and GPU instruction set extensions designed specifically for machine learning. Arm is making these extensions available through an open-source license, and Davies said many companies are already using them.

Arm is planning to launch in mid-2018 what it calls a machine-learning processor capable of 3 TOPS. Davies stressed that this isn't a hardware accelerator to be used with Arm's CPU and GPU.
It is, he said, a standalone, powerful enough — and yet energy efficient — “machine-learning processor.”  “We have several hardwired blocks to run specific neural networks,” said Davies, “but this is truly a programmable AI processor. There’s no need for dynamic scheduling. Static scheduling can get you what you need.”  Asked about target markets for such an AI processor, Davies said, “Object detection, voice/messaging, and digital TV.”  Similar to ST, Arm also sees the machine-learning trend moving from the cloud to edge devices. “It’s simple, it’s a law of physics (too many edge devices), law of economics (nobody wants to pay for bandwidth), law of latency (time critical applications) and law of land (protection of privacy),” Davies said.  While agreeing on the vision for object detection, ST, as a leading MCU provider, isn’t going to wait for Arm to come up with a standalone AI processor.  Nor is MediaTek waiting for Arm. In an interview with EE Times, MediaTek president Joe Chen told us, “We are extending our NeuroPilot AI platform (bridging CPU, GPU and onboard AI accelerators) to MediaTek’s other consumer products including digital TV.”  Asked about AI in the context of digital TV, Arm’s Davies explained that the idea is somewhat similar to how Huawei is using its AI processor, Kirin 970, for beautification of one’s portrait photo. “These DTV guys are planning to use the power of AI for image enhancements in each video frame,” he said. “They are really eager to get their hands onto the AI processor.”
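The two ST figures quoted above (0.5 TOPS running AlexNet and 2.9 TOPS per watt) imply a power draw in the low hundreds of milliwatts for that workload. A quick back-of-the-envelope check (my arithmetic, not an ST-published number):

```python
# Implied power of ST's Orlando test chip from the figures quoted above
alexnet_throughput_tops = 0.5       # "running AlexNet at 0.5 TOPS"
efficiency_tops_per_watt = 2.9      # "2.9 TOPS per watt at 266 MHz"

implied_power_w = alexnet_throughput_tops / efficiency_tops_per_watt
print(f"Implied power while running AlexNet: {implied_power_w * 1000:.0f} mW")  # ~172 mW
```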
Release time: 2018-03-01
Experts Disdain Blockchain in Spain
  BARCELONA — “Trust” and “security” were the two words most oft uttered during a discussion here Monday at the Mobile World Congress entitled “IoT and the Security Blockchain,” but they were spoken — for the most part — either wishfully or in tones of outright sarcasm.  The explosion of Internet of Things (IoT) devices, said moderator Ian Hughes, an IoT analyst for 451 Research, “has created a massive ballooning of risk” to the security of systems dependent on Internet communications.  “The proliferation of IoT devices,” said Rashni Misra, Microsoft’s general manager for IoT and AI solutions, “has basically opened a new surface for attack, to an extraordinary degree.”  The message offered by a parade of experts at the Mobile World session was that security is finally an issue that big companies are taking seriously, but that the solutions today are more theoretical than actual, and they will require a measure of mutual trust (socialism) unusual among high-tech competitors (capitalists).  However, none of the experts was sanguine about the exclusively software approach, such as blockchains, which originally emerged as a decentralized transaction ledger for the crypto-currency Bitcoin. “You just don’t base all your security in software,” said analyst Seshu Madhavapeddy, Qualcomm’s vice president of IoT product management.  Speaking more positively on the subject, Paul Williamson, Arm’s vice president for IoT device IPs, touted IoT’s “huge potential to change our world” and described measures, specifically Arm’s “ground-up” hardware solution called Platform Security Architecture.  But Williamson admitted that today, IoT is a “wild West” landscape better described as the “insecurity of things.” Fellow speaker Erin Linch, vice president of corporate development at Syniverse, expanded on this theme, noting that in any given second, traffic on the public Internet includes 24,000 gigs of data, 62,000 Google searches and 2.6 million emails — each item a potential target for cyberattack.  Williamson noted that danger no longer applies to devices when they are launched. “We have to think about how devices can be managed throughout their lives in this world of IoT,” he said.  Linch, of Syniverse, emphasized the potential impact of a security breach in massive systems, like high-speed trains and hospital networks, but Jaya Baloo, chief information security officer at KPN Telecom, characterizing her company as a “customer” of security systems, took the issue down to the smallest devices.  She cited the case of Fitbit users in Somalia. Their activities were monitored and fed to the Internet by a built-in monitoring system that kept track of data like mileage run and heart-rate levels. By tuning in to the network and finding an unusual concentration of Fitbit data emanating from a remote region in East Africa, unauthorized observers correctly determined that this fitness cluster, a lot of people working out, was the location of what had been a secret military base.  Baloo noted that this breach was not a bug, nor did it require a sophisticated hack. It was a flaw intentionally built in by its designers, a “sharing” feature. “People are designing devices who don’t know enough to anticipate bugs,” lamented Baloo.  Among solutions suggested during the Mobile World session was a Blockchain IoT Registry, described by Anoop Nannra, chairman at Cisco of the Trusted IoT Alliance and head of its Blockchain Initiatives. 
He said each IoT system — such as drug delivery by drone — could be secured by “smart contracts that define a common model for IoT devices in a registry.”  He laid out a program, incorporating both hardware and software protections, for each IoT “asset” — a “smart truck,” for example — that would include a) registration, b) verification, c) transfer security, d) a secure ledger system and e) a digital wallet to pay for and get paid for services.  But this is where, said Baloo, the truck hits the road. Proposing standards, registries, alliances and trust are the easy part of Internet security, especially in the industrial realm. “We have failed at everything, at every single level,” she said. “The standards are there, but our implementation of them sucks. There’s no other way to put it.”  She offered another real-world example, in which high-tech medical devices were carefully and strictly registered to prevent a security breach. But the machines then rejected the remote software updates that they needed. It seems that if the device was opened to allow the new software, the security protocol would rescind the certification that was necessary to permit its use.  Baloo’s own company hired a team of white-hat hackers to attack its just-finished, state-of-the-art security system. The hackers discovered a flaw in the protocol standard that rendered the system vulnerable and in need of massive repairs. She added that most companies have neither the resources nor the wits to hire teams of hackers to test security quite so intensely.  The bottom line, which was left to Baloo, the final speaker, is that IoT security has a long way to go. “Defense in depth actually requires us to do just that,” she said. “Trust, but always be in a position to verify.”
Release time: 2018-02-27
AI Comes to Sensing Devices
BARCELONA — GreenWaves Technologies, a startup based in Grenoble, France, launched an apps processor designed to do image, sound and vibration AI analysis on battery-operated sensing devices. The processor, called GAP8, is built on the RISC-V and PULP open-source projects.

GreenWaves' first sample chip came back from TSMC just last week, built using the foundry's 55nm low-power process. With this brainchild in hand, the company is pitching its GAP8 processor and GAP8 software development kit this week both at Mobile World Congress here and at Embedded World in Nürnberg, Germany.

Mike Demler, senior analyst at the Linley Group, told us, "It's the first time I've seen someone add a neural engine to an MCU-class processor."

The move by the French startup illustrates how the AI frenzy is infecting even the IoT world, where most edge devices are both resource- and power-constrained.

Founded in 2014, GreenWaves didn't originally aim to design embedded AI processors. Its initial goal was to run an innovative orthogonal frequency-division multiplexing (OFDM) algorithm, known as GreenOFDM, on a processor, but the focus recently shifted. The company reset its sights on machine-learning applications, acknowledged Loïc Liétar, co-founder and CEO of GreenWaves. This pivot became inevitable, explained Liétar, when he saw "far more [market] traction" for the processor's ability to do "content understanding (image, sound, vibration)."

GreenWaves was born when two projects merged into one. Liétar was originally interested in solving the high power consumption of OFDM and was looking for an appropriate processor architecture onto which to map his algorithm. Eric Flamand, Liétar's long-time friend and now GreenWaves' CTO, was then developing an ultra-low-power processor for content understanding. After the two decided to join forces as a single startup, they leveraged Flamand's PULP-based architecture to offer both machine-learning functions and GreenOFDM.

Asked whatever happened to GreenOFDM, Liétar noted, "A couple of customers are interested in the SW modem capabilities of GAP8, albeit not for GreenOFDM, which would require the development of a specific power amplifier to deliver on its promise."

Architecture

Put simply, GreenWaves' GAP8 consists of nine RISC-V cores. One serves as a fabric controller managing peripherals and communication with the outside world. The other eight cores are organized in a cluster with shared data and instruction memory. The cluster has an integrated hardware convolution computation engine that accelerates inference calculations for convolutional neural networks (CNNs).

According to GreenWaves, the fabric controller and cluster live in separate voltage and frequency domains, so that each consumes power only when necessary. GreenWaves also used the standard RISC-V ISA extension mechanism to add instructions that boost performance for DSP-centric operations, which are frequently found in the algorithms executed on the cluster.

Liétar explained, "For most developers, GAP8 is programmed just like any MCU." When compute-intensive tasks need to be launched, they go to the cluster through the APIs of a rich compute library included in the GAP8 SDK. "A tool-driven methodology also allows trained CNNs described with an AI framework to be optimized for and ported onto GAP8," he added.

Where AI, IoT and MCU meet

GreenWaves hopes to position GAP8 in the market whirlwind where AI, IoT and MCU meet.
GreenWaves promises that GAP8 will deliver "scalable compute performance at dynamically adjustable power consumption points from 1mW to 60mW and standby and data acquisition in the range of nAs to µAs."

Asked to compare GAP8 with other neural network processors, Liétar noted that embedded vision processors and dedicated CNN processors with TFLOPS of computing power can run complex machine-learning applications. However, they consume too much power to be designed into battery-operated devices.

This reality opens a sweet spot for Liétar and GAP8, somewhere between ultra-low-power MCUs (hundreds of MOPS), such as STMicroelectronics' STM32, and high-end low-power MCUs and mid-range apps processors (several GOPS), such as Allwinner's apps processors or NXP's i.MX apps processors.

GAP8, he claimed, can prove 20 times more energy-efficient than mid-range apps processors while bringing system cost down by a factor of two to three. The goal for GAP8 is to deliver "a flexible compute engine that can accelerate a wide range of algorithms from CNN to traditional machine vision, sound or vibration analysis at an absolute low power point," he noted.

Asked about target applications for GAP8, Liétar cited embedded systems for counting people and objects in smart cities, vibration analysis for the industrial market, robotic control and navigation for consumer robotic vacuum cleaners, keyword spotting for smart speakers and object recognition for home surveillance systems.

Consider, he said, traffic lights in a smart city. With machine-learning capabilities, the traffic light can count how many cars are present at any given time. In a smart office space, management can install a system to see how many desks are free to use.

All of this raises one question: Why run such machine-learning applications on battery-operated systems? Don't traffic signals and smart offices come with their own power?

Liétar said, "It turns out that those who want to do such analysis are not usually the same people operating traffic lights or smart offices." One needs to be able to attach such an AI feature as an independent battery-operated unit to the existing infrastructure, he explained.

GreenWaves says GAP8 can do always-on face detection with a few milliwatts of power, while indoor people-counting and presence detection could be done without replacing batteries for years.

Asked about customers, Liétar said that GreenWaves has at least one customer, with whom it has been working since last fall. Since GreenWaves launched its software development kit, the feedback has been encouraging. "We have seen at least 20 customers have downloaded it, since we launched the SDK," he noted, "although we can't tell you how active they are."

Coming soon: GAPuino

GreenWaves is getting ready to roll out its GAP8 hardware development kit in April, priced at 100 euros (about $123). Included in the kit are the GAPuino board and the GAP8 SDK.

The GAPuino is an Arduino Uno-compatible master or shield board with a connector for external cameras, according to the company. It can be powered via a battery (SAF17500), a DC connector or USB.

GreenWaves also created a sensor board (in Arduino shield format) containing several sensors, including four MP34DT01 microphones, a VL53 time-of-flight sensor, an IR sensor, a pressure sensor, a light sensor, a temperature and humidity sensor, and a 6-axis accelerometer/gyroscope.

Secret sauce in ultra-low-power GAP8

To push the limits of GAP8's energy efficiency, GreenWaves has applied a "set of levers in a consistent and balanced manner," explained Liétar.
They include an extended RISC-V instruction set architecture to pack more operations into each cycle, an energy-optimal sign-off frequency, hardware synchronization, eight-core parallelization, a fast turn-on/switch-off function achieved by putting the power management unit inside the chip, a shared instruction cache and a hardware convolution engine.

More specifically, hardware synchronization is important because, for fine-grain loops, synchronization dispatched to the eight cores could burn up to 50 percent of the sequential compute time. This would drastically limit the efficiency of parallelization, said Liétar. "Doing this synchronization in HW removes this limitation," he said.

Meanwhile, having eight independent cores offers more parallelization opportunities, down to fine grain, than VLIW or GPU architectures, claimed Liétar.

GreenWaves comes with a strong academic background. Co-founder and CTO Flamand, who still has a position at ETH Zurich, is a software developer who created DSP instruction extensions for PULPino, a 32-bit RISC-V processor designed by researchers at ETH Zurich and Università di Bologna as part of the Parallel Ultra Low Power (PULP) project.

The Linley Group's Demler suspects that GreenWaves' academic background and open-source fervor might have been the key drivers for its processor design. Noting that the startup's opportunity is mostly lower cost, Demler believes it is likely to face tough competition from ST or NXP. On the higher end, GreenWaves' rivals will be companies with dedicated neural-network processors and coprocessors such as FPGA solutions, he noted.

Acknowledging GAP8's unique architecture, Demler cautioned, "Just because you're doing something different doesn't mean it's better. As a startup, they are going to have to work hard to get some meaningful design wins. To do that, they need to focus on delivering the whole-product solution for a particular market/application, not just the uniqueness of their architecture."

Asked if GreenWaves has any plans to license its embedded AI processor as IP, Liétar said, "Never. I've been in this business long enough to know that you can't really make money as an IP vendor unless you are Arm."
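The battery-life claims above are easy to sanity-check with a duty-cycling calculation. The battery capacity, standby power and duty cycle below are illustrative assumptions (the article gives neither the cell nor the workload); only the 1 mW active figure comes from GreenWaves' stated operating range:

```python
# Rough battery-life check for a duty-cycled GAP8-class node (illustrative assumptions only)
battery_mwh = 600.0        # assumed coin-cell-class energy budget; not from the article
active_mw = 1.0            # GAP8's low-end active operating point quoted above
standby_mw = 0.01          # assumed ~10 uW standby, in the spirit of the "microamps" figure
duty_cycle = 0.01          # assume the node is actively inferring 1% of the time

avg_mw = duty_cycle * active_mw + (1 - duty_cycle) * standby_mw
life_hours = battery_mwh / avg_mw
print(f"Average draw: {avg_mw * 1000:.0f} uW, estimated life: {life_hours / 8760:.1f} years")  # ~3.4 years
```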
Release time: 2018-02-27
Mobile AI Race Unfolds at MWC
BARCELONA — While Apple and Samsung, both armed with home-grown apps processors, have a lock on the premium smartphone market, MediaTek, seeking to rebound in smartphones, is rolling out its Helio P60 chipset at the Mobile World Congress.

MediaTek's plan is to re-enter the mid- to upper-tier smartphone market, where it competes with Qualcomm. MediaTek is pitching the Helio P60 as "the first SoC platform featuring a multi-core AI processing unit (mobile APU) and MediaTek's NeuroPilot AI technology."

MediaTek's move highlights a sharp shift in focus, in the industry's smartphone battle, to mobile AI. Various chip vendors are racing to make neural network engines locally available on handsets. The goal is simple. They want the AI experience (voice UIs, face unlock, AR and others) processed on client devices, faster and better, with or without a network connection.

"We only in the last year have seen the first wave of smartphone processors with embedded neural engines, and those were all in flagship processors like the Apple A11, Huawei's Kirin 970, Qualcomm's Snapdragon 835 and MediaTek's Helio X30," said Mike Demler, senior analyst at the Linley Group.

Demler said, "We're not surprised that MediaTek would add a neural engine in the lower tier, but it's interesting that they're doing it with a more powerful core than the company's flagship X30 has."

In other words, a vibrant community of mid-tier smartphone vendors, mostly Chinese handset manufacturers, appears impatient. They want to pounce on the mobile AI trend as soon as possible.

New premium

MediaTek has defined what it calls "new premium" as "devices that offer premium performance and features at a mid-range price." Finbarr Moynihan, general manager of corporate sales at MediaTek, explained to EE Times that "new premium" is where all the action is in smartphones today. Mid- to upper-tier players such as Oppo, Vivo and Lenovo are eager to close the gap with their top-tier rivals, in hopes of making a big leap in apps, features and AI.

MediaTek told us that 48 percent of global smartphone shipments in 2017 were from Chinese OEMs, largely aimed at emerging markets. MediaTek quoted a TrendForce report pointing out that brands focusing on mid-range consumers saw huge growth in 2017, with Xiaomi reporting a remarkable 76 percent increase in smartphone production and significant increases from Transsion, OPPO and Vivo.

The Helio P60 features four Arm Cortex-A73 and four Cortex-A53 processors in an octa-core CPU complex. Thanks to this big.LITTLE octa-core design, MediaTek claims a 70 percent CPU performance enhancement compared to its predecessors, the Helio P23 and Helio P30. By using a new Mali-G72 GPU that maxes out at 800 MHz, the P60 also improves GPU performance by 70 percent.

MediaTek's neural network engine

The Helio P60's claim to fame, however, is a built-in NeuroPilot AI platform that bridges the CPU, GPU and onboard AI accelerators. MediaTek's AI framework manages this heterogeneous AI-compute architecture by coordinating the computing workload across the CPU, GPU and AI accelerator within the SoC to maximize performance and energy efficiency.

MediaTek has confirmed that the P60 integrates a Cadence Vision P6 core as its AI accelerator. Compared to MediaTek's flagship Helio X30, which used a Cadence Vision P5 at 70 GMAC per second (8-bit), the Helio P60 does 280 GMAC per second.
Demler said, "So they dropped down a tier as far as the overall processor's performance, but increased neural-engine performance by 4x at the same time."

Asked to compare the performance of the Helio P60's neural network engine, Demler said, "Huawei's Kirin 970 does ~1TMAC/s (FP16), so it has 4x the neural-network performance of P60 at higher resolution. At 280GMAC/s, the P60 is a close match for Apple's A11, which does 300GMAC/s."

No AI benchmarks

Most analysts we consulted, however, agreed that the lack of a benchmark for deep-learning accelerators makes it nearly impossible to make any meaningful comparison. Calling it "a big open issue," Demler said the mobile-AI quagmire could easily lead us to "a GOPS/TOPS battle of marketing hype."

Jim McGregor, principal analyst at Tirias Research, concurred. "This is a confusing topic because there are few details and no benchmarks," he said. "MediaTek and others make it sound like these AI solutions can do anything," but such claims are usually not true, McGregor added.

For example, the Cadence Vision P6 core used in MediaTek's P60 is optimized for computer-vision applications, not general-purpose neural networks, Demler said.

As McGregor explained, "First, you need to understand what most of these AI processors are." For example, MediaTek, Apple and Huawei call their solutions "dedicated." That means they use a single IP block for AI acceleration. "In most cases, that means an IP block licensed from someone else," such as Cadence or Ceva. Such an IP block "supports a configurable neural network with some limitations," said McGregor. But "no one will tell exactly what those limitations are."

So, obviously, dropping a neural network engine inside an apps processor isn't the end of the story. As McGregor pointed out, the development and training of new neural networks still need to take place in data centers, which depend on much more powerful, high-precision processors for training.

If app developers and OEMs want to exploit the neural engines inside a smartphone apps processor, they need a software framework with hooks into the underlying hardware. "All the leading mobile-processor designers (Qualcomm, MediaTek, Huawei, Apple) now offer neural-network SDKs," Demler observed. But they all need to support popular training frameworks like Caffe and Torch, he added.

In MediaTek's case, the company offers what it calls the NeuroPilot AI SDK, a framework that lets app developers and OEMs "look down into hardware, to see how AI apps can run on CPU, GPU and dedicated AI accelerator," said MediaTek's Moynihan.

Meanwhile, app developers and OEMs also need to be able to "look up, and to see what Android Neural Networks API (Android NNAPI) says," Moynihan added. Google developed the Android NNAPI and its runtime engine for Android-based machine learning. "MediaTek's NeuroPilot SDK is fully compliant with Android NNAPI," Moynihan said.

Among the methods deployed to enable smartphone processors to run AI apps, Qualcomm appears to have a slightly different approach. McGregor said Qualcomm's solution is different because "they use multiple resources already on their chip, including the Hexagon DSP, Adreno GPU, and Kryo CPU cores."

However, he added, "With no benchmarks available, it is impossible to determine which method is better, but the Qualcomm model does offer more flexibility."

Battle for AI software

Regardless of the underlying hardware, it is, after all, the software that can truly differentiate the AI experience on any given smartphone.
McGregor said, “Right now, these applications are being targeted towards common functions on the phone, such as photography and digital assistants. However, it is often left up to third-party software developers to develop and train the model for use on the device.”  He noted, “In limited cases, some models or libraries are available. Qualcomm developed some libraries around image recognition, Samsung around photography, and I'm sure Apple is developing its own models.”  In other cases, it is up to the applications developer, which is a significant limitation, McGregor pointed out. “Not many application developers are accustomed to deep learning or have access to large data centers necessary for deep learning," he said.  The Linley Group’s Demler also sounded a note of caution on AI software development in his recent Microprocessor Report. “The diversity of processor architecture creates a challenge for developers of Android apps, because these apps must work even on devices that lack a dedicated deep learning accelerator.” On the other hand, developers of iOS apps need only support a few Apple designed processors, he noted.  Similarly, Kevin Krewell, principal analyst at Tirias Research, warned, “The biggest problem I see is that each silicon and IP vendor is doing Machine Learning differently. Arm may have the best opportunity to standardize multiple vendors on one IP.”
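Pulling together the raw throughput figures quoted in this article (8-bit GMAC/s for the MediaTek parts versus FP16 for the Kirin 970, so, as the analysts caution, this is not an apples-to-apples benchmark), the relative peak numbers work out as follows:

```python
# Relative peak MAC throughput from the figures quoted above (not a benchmark)
engines_gmacs = {
    "MediaTek Helio X30 (Vision P5)": 70,
    "MediaTek Helio P60 (Vision P6)": 280,
    "Apple A11": 300,
    "Huawei Kirin 970 (FP16)": 1000,
}
baseline = engines_gmacs["MediaTek Helio X30 (Vision P5)"]
for name, gmacs in engines_gmacs.items():
    print(f"{name}: {gmacs} GMAC/s ({gmacs / baseline:.1f}x the X30)")
```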
Release time: 2018-02-27
