AMD Closes In on NVIDIA, Securing Major Deals with Oracle and <span style='color:red'>IBM</span>
As Jiwei reported, AMD, although trailing NVIDIA in AI, has recently clinched significant deals, earning the trust of two major clients: Oracle and IBM. Oracle plans to integrate AMD’s Instinct MI300X AI chips into its cloud services, complementing its HPC GPUs. Additionally, according to TF International Securities analyst Ming-Chi Kuo, IBM is set to leverage AMD’s Xilinx FPGA solutions to handle artificial-intelligence workloads.

Oracle’s extensive cloud-computing infrastructure faces challenges due to a shortage of NVIDIA GPUs. Nonetheless, Oracle maintains an optimistic outlook: it aims to expand its deployment of the H100 chip in 2024 while considering AMD’s Instinct MI300X as a viable alternative. Oracle has decided to postpone its in-house chip effort, a project with a multi-year timeline, and shift its focus to AMD’s high-performance AI chip, the MI300X, well regarded for its impressive capabilities. Reports indicate that Oracle intends to introduce these processors into its infrastructure in early 2024.

Similarly, IBM is exploring chip options beyond NVIDIA. Its new AI inference platform relies on NeuReality’s NR1 chip, manufactured on TSMC’s 7nm process, and AMD plays a pivotal role in NeuReality’s AI solution by providing the essential FPGA chips. Foxconn is gearing up for AI server production using this technology in Q4 2023.

Kuo also pointed out that, although NVIDIA remains the dominant AI chip maker into 2024, AMD is strengthening partnerships with platform service providers/CSPs such as Microsoft and Amazon while acquiring companies like Nod.ai, positioning it to potentially narrow the AI gap with NVIDIA starting in 2025. The collaboration also suggests that AMD remains unaffected by the updated U.S. ban on shipping AI chips to China.
Release time: 2023-10-24 16:33
Macronix and <span style='color:red'>IBM</span> Continue to Collaborate on Phase Change Memory Technology
The Macronix International Co., Ltd. board of directors yesterday passed a resolution to sign a contract with IBM to continue joint development of phase-change memory (PCM). The two parties will share research-and-development expenditures; the contractual period runs from January 22 of next year to January 21, 2022, a period of three years.

The objective of this bilateral collaboration is to continue the research and development of “storage-class memory” technology. Storage-class memory, advanced by IBM in 2008, is a new class of memory between DRAM and NAND flash that can take computers a further step in processing power. At the 2014 International Electron Devices Meeting, IBM proposed the potential of PCM as a storage memory device, and the following year, at the 2015 Symposia on VLSI Technology and Circuits, Macronix and IBM jointly presented a phase-change memory sample with a capacity of 512 Mb built on a 90 nm process.

Macronix pointed out that decreasing cost is the biggest challenge in the development of PCM; however, multi-level (multi-valued) operation can be used to effectively reduce cost per bit. Multi-level operation can also dramatically increase memory capacity, even though the performance of the memory components is degraded. Nevertheless, samples made using contemporary manufacturing technologies already show excellent performance, and there is a good chance that PCM will be usable for storage-class-memory applications.
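The capacity gain from multi-level operation can be illustrated with a toy read-out model. This is a sketch only: the resistance bands below are invented for illustration and are not Macronix's or IBM's actual levels. Quantizing each cell's analog resistance into four bands stores 2 bits per cell instead of 1, doubling capacity at the cost of narrower margins between levels.

```python
# Toy multi-level-cell (MLC) readout: map an analog PCM resistance to 2 bits.
# The band boundaries are hypothetical values chosen for illustration.
LEVELS = [(0, 25_000, 0b00), (25_000, 60_000, 0b01),
          (60_000, 150_000, 0b10), (150_000, float("inf"), 0b11)]

def read_cell(resistance_ohms):
    """Quantize a measured resistance into one of four 2-bit symbols."""
    for lo, hi, bits in LEVELS:
        if lo <= resistance_ohms < hi:
            return bits
    raise ValueError("negative resistance")

# Four analog readouts yield one 8-bit word (vs. 4 bits for single-level cells).
cells = [12_000, 40_000, 90_000, 400_000]
word = 0
for r in cells:
    word = (word << 2) | read_cell(r)
print(f"{word:08b}")  # 4 cells -> 8 bits: 00011011
```

The narrower the bands, the more drift and noise in the cell resistance eat into the read margin, which is the performance trade-off the article mentions.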
Release time: 2018-12-27
<span style='color:red'>IBM</span> expands strategic partnership with Samsung to include 7nm chip manufacturing
IBM (NYSE: IBM) today announced an agreement with Samsung to manufacture 7-nanometer (nm) microprocessors for IBM Power Systems, IBM Z and LinuxONE, high-performance computing (HPC) systems, and cloud offerings.

The agreement combines Samsung’s industry-leading semiconductor manufacturing with IBM’s high-performance CPU designs. The combination is designed to drive unmatched systems performance, including acceleration, memory and I/O bandwidth, encryption and compression speed, and system scaling. It positions IBM and Samsung as strategic partners leading a new era of high-performance computing designed specifically for AI.

“At IBM, our first priority is our clients,” said John Acocella, Vice President of Enterprise Systems and Technology Development for IBM Systems. “IBM selected Samsung to build our next generation of microprocessors because they share our level of commitment to the performance, reliability, security, and innovation that will position our clients for continued success on the next generation of IBM hardware.”

Today’s announcement also expands and extends the 15-year strategic process-technology R&D partnership between the two companies which, as part of IBM’s Research Alliance, has produced many industry firsts, including the first nanosheet device innovation for sub-5nm nodes, the industry’s first 7nm test chip, and the first high-k metal gate foundry manufacturing. IBM’s Research Alliance ecosystem continues to define the leadership roadmap for the semiconductor industry.

“We are excited to expand our decade-long strategic relationship with IBM with our 7nm EUV process technology,” said Ryan Lee, Vice President of Foundry Marketing at Samsung Electronics.
“This collaboration is an important milestone for Samsung’s foundry business, as it signifies confidence in Samsung’s cutting-edge, high-performance EUV process technology.”

Samsung is a member of the OpenPOWER Foundation, a vendor ecosystem facilitating the development of IBM Power architecture-based customized servers, networking, and storage for future data centers and cloud computing. Samsung is also a member of the IBM Q Network, helping to advance the understanding of applications software in quantum computing for the industry.
Release time: 2018-12-21
 <span style='color:red'>IBM</span> Quantum Woos Fortune 500
Fortune 500 companies, academic institutions, and national research labs are signing up to use IBM's quantum computers — called IBM Q — hosted in the cloud.

JPMorgan Chase, Daimler AG, Samsung, JSR Corp., Barclays, Hitachi Metals, Honda, Nagase, Keio University, Oak Ridge National Lab, Oxford University, and the University of Melbourne are the first commercial members of the cloud-based, pay-as-you-go IBM Q Network service.

Already publicly available as the IBM Q Experience, IBM's quantum computing has been made freely available to more than 60,000 users, who have run more than 1.7 million quantum experiments resulting in more than 35 third-party scholarly publications. IBM’s open-source quantum software and developer tools are also freely available to users.

Five IBM Q Network hubs, which serve quantum-computer users worldwide via IBM Q systems, will be located at IBM Research in the United States (already in operation), Keio University in Japan, Oak Ridge National Lab in the United States (already in operation), Oxford University in the United Kingdom, and the University of Melbourne in Australia.

The IBM Q Network today sports a 20-qubit universal quantum computer — the IBM Q system — which IBM plans to upgrade to a 50-qubit system, in prototype today, that will be able to tackle non-deterministic polynomial-time-hard (NP-hard) problems impossible to solve on even the fastest supercomputers today.

Problems that foil today's supercomputers are the big attraction for the wide variety of companies signing up for the IBM Q Network service. JPMorgan Chase, for instance, plans to tackle difficult financial-industry problems such as trading strategies, portfolio optimization, asset pricing, and risk analysis.
Daimler AG will tackle difficult automotive and transportation problems, including new-material discovery using quantum chemistry, manufacturing-process optimization, vehicle routing for fleets, autonomous/self-driving-car control, and quantum-level machine learning and artificial intelligence (AI). Samsung is already working with IBM Q to identify the most important use cases for quantum computers in semiconductors and electronics. Likewise, Barclays, Hitachi Metals, Honda, and Nagase will investigate potential use cases in their respective industries of finance, materials, automotive, and chemistry.

IBM Q Consulting is also offering consultants, scientists, and industry experts to help IBM Q Network members get a head start on how quantum computing can be useful in their industries. IBM is also building an ecosystem that has already registered more than 1,500 universities, 300 high schools, and 300 private institutions worldwide to include quantum computing in their educational curricula.
Release time: 2017-12-18
<span style='color:red'>IBM</span> Power 9 Servers Target AI
IBM announced its first Linux servers to use its Power 9 processors, targeting businesses that want to accelerate machine-learning jobs. The systems are the first to use PCI Express Gen 4, as well as NVLink 2.0 to attach Nvidia GPUs and IBM’s OpenCAPI for FPGAs and other accelerators.

The company claims the systems will approach the prices of rival x86 systems while delivering greater bandwidth. However, it’s unclear what accelerators will be available for the new interconnects.

NVLink 2.0 attaches up to six Nvidia GPUs to a Power 9 system, delivering 5.6 times the bandwidth of the PCIe Gen 3 links used on x86 servers, IBM claimed. The additional bandwidth translates to roughly a 3.7-times speedup on machine-learning jobs using frameworks such as Chainer or Caffe, it said.

IBM provides optimized versions of AI frameworks that can distribute compute-intensive training jobs across hundreds of GPUs with 95 percent scaling efficiency, it added.

The company expects partners to provide FPGAs and NAND flash drives for its PCIe Gen 4 slots. It has demonstrated systems with the Mellanox Innova-2 Ethernet/FPGA card, whose general-availability date has not yet been announced.

Xilinx has a prototype FPGA working on the OpenCAPI link. The interface is based on standard serdes to ease porting for logic chips as well as future storage-class memories; however, IBM gave no other examples of chips planned for the interconnect.

“I think IBM will continue to face issues of ecosystem support. They have Nvidia, but I haven’t seen a whole lot of people rushing to OpenCAPI,” said Nathan Brookwood, principal of market watcher Insight64.

“In servers, it’s still Intel’s party, and it’s been hard for anybody else to crash that party. It’s much easier for someone like AMD: you don’t have to change a line of software, as opposed to IBM and Arm servers, which require software changes that are always problematic,” Brookwood said.
“IBM is clearly the first to use PCI Express Gen 4, but I don’t know if that’s going to make a huge difference in this generation, and next year others will be there too. So, it’s hard to see how they will make much progress against the Intel juggernaut,” he added.
Release time: 2017-12-06
<span style='color:red'>IBM</span>: Copper Interconnects Here to Stay
When aluminum interconnects became too slow for complementary metal-oxide-semiconductor (CMOS) chips at the 180-nanometer node, IBM led the way to the now universally used copper interconnects, starting in 1997.

Now, on copper's 20th anniversary, many other interconnects are being proposed to replace it, notably graphene. IBM, however, claims that slight tweaks to copper deposition will give it an enduring edge all the way to the end of the road for CMOS.

Big Blue is touting "copper forever" at the IEEE Nanotechnology Symposium this week in Albany, with more details expected to be revealed at the IEEE International Electron Devices Meeting (IEDM) in San Francisco.

"Graphene is not readily manufacturable, and furthermore end-to-end comparisons show graphene does not flow uniformly and can't achieve the low resistances of enhanced copper interconnects," IBM Fellow Dan Edelstein told EE Times in an exclusive preview of his Nanotechnology Symposium talk.

"Copper with a thin cap of cobalt is better than graphene at carrying current, and even at the smallest sizes imaginable copper interconnects are still the best solution, perhaps with cobalt, nickel, ruthenium, or another platinum-group noble metal brought in to underlay it," Edelstein said.

Initial IBM studies showed that copper had 40 percent less resistance than aluminum, resulting in an immediate 15 percent boost in processor speeds. Copper is also more durable and 100 times more reliable, according to IBM.

But in the 1990s the industry raised two big objections to the changeover to copper, both surmounted by IBM. The first was the fact that copper "poisons" silicon when it comes into direct contact with it. That was solved by encasing the copper in a tantalum-nitride/tantalum diffusion barrier on all sides.

The second was the deposition method. Aluminum interconnects had previously been fabricated by depositing an even layer on top of a dielectric with vias down to the silicon, then etching it.
Since copper had to be encased in the tantalum compound, this subtractive method was not possible. Instead, IBM came up with an additive method using the kind of electroplating employed for printed-circuit boards (PCBs).

Electroplating had never before been used on CMOS chips, and it stumped the rest of the industry until IBM shared its discovery, along with the encasing process that prevents copper from poisoning the underlying CMOS circuitry. The most complicated part of the process, however, was the dual-damascene process of electroplating inside deep trenches, enabling from seven (then) to 17 (today) metal layers to interconnect the single layer of silicon transistors on a typical planar chip. And then there was the "magic."

"We discovered that copper's 'magic' was that in the process of preparing it, trace impurities vastly improved its reliability," Edelstein told EE Times. "Our electroplated copper had minimal electromigration [the bane of interconnects in microelectronics] because of these traces of carbon, nitrogen, sulfur, chlorine, and phosphorus, all of which were present in as little as 10 parts per million."

Cyprian Uzoh, the chemist on the team (whose name in his native Nigerian language means "copper"), came up with the electroplated-copper "recipe" and said at the time of the impurities that "a little salt and pepper never hurt anybody."

"I firmly believe that the discovery of the superior, cheaper, and easier interconnection of CMOS transistors with copper instead of aluminum resulted from IBM Research's multidisciplinary expertise across chemistry, electrical engineering, and physics," Edelstein told EE Times. "Plus, we built our own PCBs, chips, and their packaging, which together gave us the expertise to discover how electroplating copper could replace aluminum. All our competitors subcontracted many of these steps, putting IBM in the unique position to solve the puzzle."
The dual-damascene process, for instance, essentially added silicon dioxide as insulation between layers while simultaneously permitting the tantalum-coated copper wires to be electroplated into the chip's trenches. These techniques depended on multidisciplinary expertise, enabling IBM to produce the first prototypes in 1997 and the first production PowerPC chips in 1998. Compared with the previous generation of 300-MHz PowerPCs, the 1998 versions saw a 33 percent boost in speed attributable to their unique copper interconnects, and the results put the rest of the industry on the trail to figure out how IBM was doing it.

"At first our competitors said that it would only last one generation, but so far it has lasted 12. And we believe that for CMOS it will last forever, except perhaps on the bottom layer next to the advanced-node silicon transistors, which may require cobalt, nickel, ruthenium, or another platinum-group noble metal," Edelstein told EE Times.
Release time: 2017-11-16
<span style='color:red'>IBM</span>'s Quantum Computer Goes Commercial
IBM's quantum computer, until now free online as IBM Q, is going commercial at the Supercomputing Conference 2017 this week in Denver.

Q's now time-proven capabilities, honed during the free trial period, will still be cloud-hosted, with a ready-to-go 20-qubit version and a 50-qubit prototype that demonstrates how to solve NP-hard (non-deterministic polynomial-time hard) problems impossible for the fastest supercomputers today.

IBM will also provide an open-source quantum information software kit (QISKit). The key to QISKit is that you don't need a quantum computer to compose and debug your quantum application software; you can first prove its correctness on a conventional computer. Once debugged, the software can be trusted to pursue its goals on NP-hard problems. In fact, IBM claims more than 60,000 users have beta-tested and debugged QISKit software across more than 1.7 million quantum application runs.

IBM will also be displaying at SC 2017 specialty programs built for simulating chemical reactions on quantum computers, for everything from new-catalyst development to drug discovery. It claims the key to its success was perfecting error-detecting fault-tolerance code for that work on prototypes with up to 56 qubits.

In more detail, IBM's Q systems can attain coherence times (the time before the quantum states decay) of over 90 microseconds, giving the 20-to-50-qubit systems time to solve extremely complex applications impossible for conventional supercomputers.

IBM launched its first free-to-try cloud-based 5-to-16-qubit quantum computer in May 2016, and just 18 months later has upgraded the IBM Q experience to 20 qubits, with 50 qubits next in line. IBM's 60,000 beta testers included 1,500 universities, 300 high schools, and 300 private-sector participants.

IBM Data Science Experience, a compiler that maps desired experiments onto the available hardware, includes worked examples of quantum applications.
It has also worked quantum-computing concepts and application-development principles into its QISKit tutorials. Besides the chemistry simulations for new-catalyst development and drug discovery, the tutorials also provide implementation details for optimization problems.

IBM describes Q as an industry-first initiative to build commercially available universal quantum-computing systems for business and science applications.
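The article's point that quantum programs can be composed and debugged on a conventional computer before running on quantum hardware can be sketched with a toy statevector simulator. This is pure Python, not QISKit; the gate implementations and the Bell-state program are illustrative, but checking the output amplitudes classically is exactly the workflow described.

```python
import math

def apply_h(state, target):
    """Apply a Hadamard to the `target` qubit (little-endian) of a statevector."""
    s = 1 / math.sqrt(2)
    new = state[:]
    for i in range(len(state)):
        if not (i >> target) & 1:          # visit each amplitude pair once
            j = i | (1 << target)
            a, b = state[i], state[j]
            new[i], new[j] = s * (a + b), s * (a - b)
    return new

def apply_cnot(state, control, target):
    """Swap amplitude pairs that differ in `target` wherever `control` is 1."""
    new = state[:]
    for i in range(len(state)):
        if (i >> control) & 1 and not (i >> target) & 1:
            j = i | (1 << target)
            new[i], new[j] = state[j], state[i]
    return new

# |00> -> H on qubit 0 -> CNOT(0 -> 1) gives the Bell state (|00> + |11>)/sqrt(2).
state = [1.0, 0.0, 0.0, 0.0]
state = apply_h(state, 0)
state = apply_cnot(state, 0, 1)
probs = [abs(a) ** 2 for a in state]
print(probs)  # ~[0.5, 0.0, 0.0, 0.5]
```

Once the measurement statistics match expectations on the classical simulator, the same circuit can be submitted to real hardware, which is the debug-then-deploy loop the QISKit tooling supports.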
Release time: 2017-11-14
<span style='color:red'>IBM</span> Demos In-Memory Massively Parallel Computing
Today’s experimental non-von Neumann computing architectures mostly make use of memristive devices modeled on the human brain; they do not separate data memory from computing hardware and thus avoid the inefficiency of von Neumann computers’ repeated load/store operations. Now IBM Research (Zurich) has demonstrated a way to mass-produce 3-D stacks of phase-change memory (PCM) that perform memristive calculations 200 times faster than von Neumann computers. The in-memory coprocessor uses algorithms that exploit the dynamic physics of phase-change memories simultaneously on myriad cells, similar to the way millions of neurons and trillions of synapses in the brain operate in parallel.

The development, which IBM will demonstrate in December at the International Electron Devices Meeting (IEDM), could return the company to the forefront of hardware innovation.

“We have demonstrated that computational primitives using non-von Neumann processors can be used to do machine-learning tasks,” IBM Fellow Evangelos Eleftheriou told EE Times. “So far, we predict a speedup of 200 times for our non-von Neumann correlation-detection algorithm compared to using state-of-the-art computing systems, but we have many other computational primitives on the way that we will demonstrate later this year.”

The new paradigm combines PCM crystallization dynamics with an acceleration methodology called in-memory computing, which loads all data into RAM instead of swapping data sets into and out of mass memory (hard drives or flash). IBM’s approach does not force the in-memory values through the von Neumann bottleneck of a central processing unit; rather, it leaves the initial-state memory values in each PCM cell and uses a specialized memory controller to perform simultaneous, parallel operations on the cells. Calculations are performed in place by harnessing the physical properties of the phase-change RAMs.
Building on crystallization-dynamics discoveries

Memristive non-von Neumann architectures work like the brain by strengthening (lowering the resistance between) memory synapses each time they are used (and, conversely, increasing the resistance over time if they are not frequently used). The pattern-recognition and other algorithms get increasingly accurate as they gain experience, “memorizing” similar data sets and “forgetting” irrelevant ones whose patterns are seldom repeated. IBM uses this technique in its all-digital neurocomputer e-brains, which are already shepherding the U.S. nuclear arsenal and piloting U.S. fighter jets.

IBM Zurich’s latest effort does not emulate brainlike algorithms such as the digital spiking used in its neurocomputer e-brains. Rather, the development builds on IBM’s discoveries about the crystallization dynamics of phase-change memories.

“What we are trying to do is make more energy-efficient processors by avoiding all the load/store operations” of a von Neumann computer, IBM research staff member Abu Sebastian said in an interview. “Today we’re showing how to use crystallization dynamics to perform unsupervised deep learning, but eventually we plan to build a coprocessor that will allow a von Neumann computer to offload all sorts of tasks it is ill-suited to perform well.”

The prototype houses 1 million in-memory cells, each performing the same deep-learning computational tasks on the unique data set loaded into it. The memristor-like use of PCM crystallization dynamics both accelerates time-to-results and eliminates power-wasting load/stores. IBM says the technology should be easily scalable both horizontally and vertically to realize 3-D non-von Neumann coprocessors that can solve tasks of almost any size.

In more detail, the PCM device uses a germanium-antimony-telluride alloy sandwiched between two electrodes.
When pulsed, the phase-change material shifts from an amorphous to a crystalline phase in easily controllable resistance steps that vary from extremely low (for 0) to extremely high (for 1) or anywhere in between (analog operation).

[Figure: Model of the phase-change material.]

Sebastian was the lead author on a paper describing the development in Nature Communications. He also leads a European Research Council project on the same topic.
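The correlation-detection primitive mentioned above can be sketched as a toy software model. This is an illustration only, not IBM's actual algorithm: each emulated PCM cell accumulates "crystallized thickness" when its assigned data stream fires together with many others, so the cells assigned to mutually correlated streams end up far more crystallized (more conductive) than the rest. The data, pulse rule, and threshold are all invented for the sketch.

```python
import random

random.seed(1)
T, N = 2000, 10
# Streams 0-4 follow a common hidden driver (correlated); 5-9 are independent.
driver = [random.random() < 0.5 for _ in range(T)]

def sample(i, t):
    if i < 5:
        return driver[t] if random.random() < 0.9 else not driver[t]
    return random.random() < 0.5

thickness = [0.0] * N   # crystallized thickness of each emulated PCM cell
for t in range(T):
    x = [sample(i, t) for i in range(N)]
    active = sum(x)
    for i in range(N):
        # Pulse cell i only when stream i fires together with many others:
        # cells of correlated streams crystallize much faster.
        if x[i] and active > N // 2:
            thickness[i] += 1.0

# Reading out the five most-crystallized cells recovers the correlated group.
correlated = sorted(sorted(range(N), key=lambda i: -thickness[i])[:5])
print(correlated)
```

In the hardware version the accumulation happens in the cell physics itself, in parallel across a million cells, which is where the claimed 200-times speedup over a load/store architecture comes from.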
Release time: 2017-10-26
<span style='color:red'>IBM</span> Watson AI XPrize Adds Wild-Card Round
The $5 million IBM Watson AI XPrize competition, which kicked off last year and will end in 2020, was the first of the XPrize contests (14 since 1995) to have a contestant-defined “open” goal rather than a predetermined objective. Now it is also the first XPrize to add a wild-card round, giving new contestants until Dec. 1 to join the 147 teams that made the first-year cut.

“The total number of teams officially registered stands at 147, out of a total of 870 team submissions recorded from more than 9,000 initial interested requests,” Amir Banifatemi, prize lead for the IBM Watson AI XPrize, told EE Times. “Given the rapid pace of artificial-intelligence breakthroughs and the possibilities that AI opens to solve grand challenges, we wanted to ensure that teams with new ideas still had the opportunity to participate.”

The original teams still in the competition hail from 22 countries: Australia, Barbados, Canada, China, the Czech Republic, Ecuador, France, Germany, Hungary, India, Israel, Italy, Japan, the Netherlands, Norway, Poland, Romania, Spain, Switzerland, the United Kingdom, the United States, and Vietnam. Their projects are being evaluated not only for their efficacy in addressing AI challenges but also for their potential social, ethical, and technological impact. Imbuing AI with the ability to understand human emotional cues, for example, could have implications beyond the AI’s cognitive-computing capabilities.

Because the goal is left to the contestants’ discretion, the teams have proposed ideas for solving problems across a wide swath of disciplines. “Energy Efficiency” projects would reduce greenhouse gases and make landfills smart at separating recyclables. “Health and Wellness” investigations look to head off mental-health problems, diagnose an infant’s crying, and improve sleep.
“Learning and Human Potential” projects aim to reinvent computer coding, personalized learning, peer-to-peer tutoring, and scalable learning to achieve universal worldwide literacy. Proposals for “Improving Society” would automatically flag “fake news” in social media and get legal information to victims at little cost. “Shelter and Infrastructure” projects aim to meld social development with satellite imagery, predict disasters, manage traffic flows in cities, and assess the structural health of buildings. “Space and New Frontiers” explorations seek to develop neurologically inspired models and automatically propose hypotheses.

“We have been impressed with the level of variety and domain focus so far. Teams are very diverse, [hailing from] startups, academia, large corporations, nonprofits, and more,” Banifatemi said. Among the “impressive” entries, he said, are AI applications to “detect crop disease in Ethiopia, detect illegal mining in Congo, model malaria-prone regions in India, predict psychiatric-medicine effectiveness, automate project management at scale, [advance] triage emergency medicine, and monitor the structural health of buildings.”

The addition of the wild-card teams aims to widen the application domains even further, but the expanded pool will still be subject to the same annual culling process. “Each year, up to 50% of teams will move to the next round, provided they reach their milestones and are selected by judges to move forward,” Banifatemi said. “Based on how many wild cards are approved to compete, we expect to have half of the total teams in competition by September 2018 moving into 2019.”

The judges will also pick out favorites over the course of the multiyear competition, distributing $500,000 in total to teams that meet their stated milestones with outstanding performance. The milestone awards will be made at the judges’ discretion rather than follow a strict policy.
The finalists in 2020 will receive $3 million for first place, $1 million for second place, and $500,000 for third place at the Grand Prize event on the TED2020 stage. The conference attendees, including the online audience, will have a say in determining the final placement of the three winners.

Banifatemi described the awards system in detail: “We will have 10 teams eligible for the milestone prizes each year, and the top three will receive cash prizes based on the judges’ assessment. The milestone prizes in 2017, 2018, and 2019 are part of the prize purse that IBM has committed to. By the final round, in 2020, three teams will have been selected from the 10 milestone-prize candidates in 2019. The judges will have already approved the top three, and the public will weigh in on the final scoring. The judges’ score and the public score will be taken into account for selecting the first-, second-, and third-place winners.”

The next major event, in December, will be the announcement of which of the currently registered teams (roughly half of the current total) will move on to the next phase. In January, the judges will announce which of the wild-card teams will be added to the roster. In January 2019, the field will be halved again, with survival dependent on the standards proposed and on the AI’s performance, scalability, and, most of all, likely worldwide impact.
Release time: 2017-10-19
<span style='color:red'>IBM</span> Uses Deep Learning to Train Raspberry Pi
Computations requiring high-performance computing (HPC) power may soon be done in the palm of your hand, thanks to work done this summer by IBM Research in Dublin, Ireland.

While scientists have come a long way in teaching machines to process images for facial recognition and to understand language for translating texts, IBM researchers focused on a different problem: how to use artificial-intelligence (AI) techniques to forecast a physical process. In this case the focus was ocean waves, traditionally modeled with physics-based simulations driven by external forces such as the rise and fall of tides and winds blowing in different directions, with the depth and physical properties of the water influencing the speed and height of the waves.

HPC is normally essential to solve the differential equations that encapsulate these physical processes and their relationships, and the expense often limits the spatial resolution, physical processes, and time scales that a real-time forecasting platform can investigate. In an interview with EE Times, IBM Research Senior Research Manager Sean McKenna said an HPC cluster of big iron has generally been the answer to the heavy computational load. IBM Research wanted to see if it could do the same work more quickly and more simply, he said.

The differential-equations approach has developed over the course of a century or more, he said. Machine learning through AI is not rule-based. "It's non-linear mapping of one input space to an output space," McKenna said. "That's what everything is in AI right now."

The researchers developed a deep-learning framework that provides a 12,000 percent acceleration over the physics-based models at comparable levels of accuracy. McKenna said the validated deep-learning framework can be used to perform real-time forecasts of wave conditions using available forecasted boundary wave conditions, ocean currents, and winds.

"The deep-learning method is more of a black box," he said.
"It's a little bit of a paradigm shift."

Deep learning isn't about physical modeling and science to figure out what's leading to a set of results; it's about using engineering to solve a problem, and being able to do it more efficiently and faster, said McKenna. "We can build a model, train that model, and put it on a more computationally efficient device," he said.

What is clear are the significant benefits. Massively reducing the computational expense means simulations can be done on a Raspberry Pi rather than on HPC infrastructure.

The deep-learning framework was trained to forecast wave conditions at a case-study site at Monterey Bay, Calif., using the physics-based Simulating WAves Nearshore (SWAN) model to generate training data for the deep-learning network. Driven by measured wave conditions, ocean currents from an operational forecasting system, and wind data, the model was run from the beginning of April 2013 to the end of July 2017, generating forecasts at three-hour intervals to provide a total of 12,400 distinct model outputs. The study expands and builds on a collaboration between IBM Research-Ireland, Baylor University, and the University of Notre Dame.

The deep-learning model has yet to be deployed to a physical device, said McKenna, but the study demonstrates that the reduction in computational expense means the simulation of a physics model could be done on a Raspberry Pi or any other low-end computing device trained by HPC.

"That opens up possibilities as to where that model can be deployed," McKenna said.

Being able to accurately forecast ocean wave heights and directions is valuable for many marine industries, which often operate in harsh environments where power and computing facilities are limited. One scenario involves a shipping company using highly accurate forecasts to determine the best voyage route in rough seas to minimize fuel consumption or travel time.
A surfer could get data localized to a specific beach to ride the best waves, said McKenna.  IBM Research's deep learning model could potentially be leveraged to use existing HPC infrastructure to train cheaper computing devices, even a smartphone, he said. “HPC resources are becoming more available in the cloud, so even if you don't own that resource you probably have access to it," he said.
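The train-expensively-once, run-cheaply-anywhere pattern described in this article can be sketched in miniature. Everything below is an invented stand-in: a toy "wave model" plays the role of the expensive SWAN solver, and a least-squares fit of hand-picked features plays the role of the deep network. The point is the workflow, not the model: generate training data with the costly simulator, fit a cheap surrogate, then evaluate the surrogate at a fraction of the cost.

```python
import math
import random

def expensive_wave_model(wind, tide):
    """Stand-in for a physics solver; imagine this being slow on real problems."""
    return 0.8 * wind ** 1.5 + 0.3 * math.sin(tide) + 0.1 * wind * math.cos(tide)

def features(wind, tide):
    """Hand-picked basis functions for the cheap surrogate (illustrative)."""
    return [1.0, wind, wind * math.sqrt(wind), math.sin(tide), wind * math.cos(tide)]

# 1. Run the expensive model once to build a training set.
random.seed(0)
train = [(random.uniform(0, 10), random.uniform(0, 6.28)) for _ in range(500)]
X = [features(w, t) for w, t in train]
y = [expensive_wave_model(w, t) for w, t in train]

# 2. Fit the surrogate by ordinary least squares (normal equations,
#    Gaussian elimination with partial pivoting).
k = len(X[0])
A = [[sum(row[i] * row[j] for row in X) for j in range(k)] for i in range(k)]
b = [sum(X[r][i] * y[r] for r in range(len(X))) for i in range(k)]
for col in range(k):
    piv = max(range(col, k), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(col + 1, k):
        f = A[r][col] / A[col][col]
        for c in range(col, k):
            A[r][c] -= f * A[col][c]
        b[r] -= f * b[col]
coef = [0.0] * k
for r in range(k - 1, -1, -1):
    coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, k))) / A[r][r]

# 3. The surrogate now forecasts without solving any physics.
def surrogate(wind, tide):
    return sum(c * f for c, f in zip(coef, features(wind, tide)))

err = max(abs(surrogate(w, t) - expensive_wave_model(w, t)) for w, t in train)
print(f"max training error: {err:.2e}")  # small: the features span the true model
```

In the IBM study the surrogate is a deep network and the fit is done on HPC hardware, but the deployed artifact is just the fitted coefficients, which is why it can run on a Raspberry Pi or a phone.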
Release time: 2017-09-29