When AI Goes Awry

Published: 2018-03-27
Author: Ameya360
Source: Semiconductor Engineering

  The race is on to develop intelligent systems that can drive cars, diagnose and treat complex medical conditions, and even train other machines.

  The problem is that no one is quite sure how to diagnose latent or less-obvious flaws in these systems—or better yet, to prevent them from occurring in the first place. While machines can do some things very well, it’s still up to humans to devise programs to train them and observe them, and that system is far from perfect.

  “Debugging is an open area of research,” said Jeff Welser, vice president and lab director at IBM Research Almaden. “We don’t have a good answer yet.”

  He’s not alone. While artificial intelligence, deep learning and machine learning are being adopted across multiple industries, including semiconductor design and manufacturing, the focus has been on how to use this technology rather than what happens when something goes awry.

  “Debugging is an open area of research,” said Norman Chang, chief technologist at ANSYS. “That problem is not solved.”

  At least part of the problem is that no one is entirely sure what happens once a device is trained, particularly with deep learning and AI and various types of neural networks.

  “Debugging is based on understanding,” said Steven Woo, vice president of enterprise solutions technology and distinguished inventor at Rambus. “There’s a lot to learn about how the brain hones in, so it remains a challenge to debug in the classical sense because you need to understand when misclassification happens. We need to move more to an ‘I don’t know’ type of classification.”
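
  Woo’s “I don’t know” classification can be illustrated with a simple confidence threshold on a model’s output distribution. The sketch below is a minimal, hypothetical example (the labels, logits, and 0.8 cutoff are invented), not a description of any speaker’s system:

```python
import numpy as np

def classify_with_abstention(logits, labels, threshold=0.8):
    """Return a label only when the softmax confidence clears the threshold.

    Anything below the threshold comes back as "I don't know" instead of
    being forced into the nearest class.
    """
    exp = np.exp(logits - np.max(logits))        # numerically stable softmax
    probs = exp / exp.sum()
    best = int(np.argmax(probs))
    if probs[best] < threshold:
        return "I don't know", probs
    return labels[best], probs

labels = ["cat", "small dog", "large dog"]
print(classify_with_abstention(np.array([2.1, 2.0, 0.3]), labels))  # ambiguous -> abstains
print(classify_with_abstention(np.array([0.2, 4.5, 0.1]), labels))  # confident -> "small dog"
```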

  This is a long way from some of the scenarios depicted in science fiction, where machines take over the world. A faulty algorithm may result in an error somewhere down the line that was unexpected. If it involves a functional safety system, it may cause harm, but in other cases it may generate annoying behavior in a machine. But what’s different with artificial intelligence (AI), deep learning (DL) and machine learning (ML) is that fixing those errors can’t be achieved just by applying a software patch. Moreover, those errors may not show up for months or years—or until there is a series of interactions with other devices.

  “If you’re training a network, the attraction is that you can make it faster and more accurate,” said Gordon Cooper, product marketing manager for Synopsys’ Embedded Vision Processor family. “Once you train a network and something goes wrong, there is only a certain amount you can trace back to a line of code. Now it becomes a trickier problem to debug, and it’s one that can’t necessarily be avoided ahead of time.”

  What is good enough?

  An underlying theme in the semiconductor world is, ‘What is good enough?’ The answer varies greatly from one market segment to another, and from one application to another. It may even vary from one function to another in the same device. For example, having an error in a game on a smart phone may be annoying, and may require a reboot, but if you can’t make a phone call you’ll probably replace the phone. With industrial equipment, the technology may be directly tied to revenue, so it might be part of a planned maintenance replacement rather than even waiting for a failure.

  For AI, deep learning and machine learning, no such metrics exist. Inferencing results are mathematical distributions, not fixed numbers or behaviors.

  “The big question is how is it right or wrong, and how does that compare to a human,” said Mike Gianfagna, vice president of marketing at eSilicon. “If it’s better than a human, is it good enough? That may be something we will never conclusively prove. All of these are a function of training data, and in general the more training data you have, the closer you get to perfection. This is a lot different from the past, where you were only concerned about whether algorithms and wiring were correct.”

  This is one place where problems can creep in. While there is an immense amount of data in volume manufacturing, there is far less on the design side.

  “For us, every chip is so unique that we’re only dealing with a couple hundred systems, so the volume of input data is small,” said Ty Garibay, CTO at ArterisIP. “And this stuff is a black box. How do you deal with something that you’ve never dealt with before, particularly with issues involving biases and ethics? You need a lot more training data for that.”

  Even the perceptions of what constitutes a bug are different for AI/DL/ML.

  “The definition of a bug changes because the capability of the algorithm evolves in the field, and the algorithm is statistical rather than deterministic,” said Yosinori Watanabe, senior architect in Cadence’s System & Verification Group. “Sometimes, one may not be able to isolate a particular output you obtain from an algorithm of this kind as a bug, because it is based on an evolving probability distribution captured in the algorithm.”

  This can be avoided by setting a clear boundary condition of acceptable behavior of the algorithm up front, said Watanabe. However, understanding those boundary conditions isn’t always so simple, in part because the algorithms themselves are in a constant state of refinement and in part because those algorithms are being used for a wide variety of applications.
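
  In practice, Watanabe’s boundary condition can be encoded as a runtime monitor that checks every inference against an agreed envelope of acceptable behavior. The sketch below is only meant to show the shape of that check; the `acceptable` predicate, the 200 m cap and the toy model are all invented:

```python
def monitor(infer, acceptable, inputs):
    """Run inference and flag any output that falls outside the agreed envelope.

    `infer` is the trained model's inference function and `acceptable` encodes
    the boundary condition agreed up front; violations are collected as
    candidate bugs rather than silently passed downstream.
    """
    violations = []
    for x in inputs:
        y = infer(x)
        if not acceptable(x, y):
            violations.append((x, y))
    return violations

# Toy boundary condition: a predicted braking distance must be positive and
# no more than 200 m for the speeds we care about (all numbers are invented).
def acceptable(speed_kmh, braking_m):
    return 0.0 < braking_m <= 200.0

def fake_model(speed_kmh):            # stand-in for a trained regressor
    return 0.01 * speed_kmh ** 2 - 5.0

print(monitor(fake_model, acceptable, [30, 60, 120, 10]))   # flags the 10 km/h case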

  Understanding the unknowns

  One of the starting points in debugging AI/ML/DL is to delineate what you do and don’t understand.

  This is simpler in machine learning than in deep learning, both of which fit under the AI umbrella, because the algorithms themselves are simpler. Deep learning represents data through multiple layers of matrix operations, where each layer takes the output of the previous layer as its input. Machine learning, in contrast, uses algorithms developed for a specific task.
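
  The structural difference is easy to see in code. A deep network is just a stack of matrix operations in which each layer consumes the previous layer’s output; the sketch below uses invented shapes and a ReLU nonlinearity purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Three stacked layers: each layer's output becomes the next layer's input.
layers = [(rng.standard_normal((8, 16)), np.zeros(16)),
          (rng.standard_normal((16, 16)), np.zeros(16)),
          (rng.standard_normal((16, 3)), np.zeros(3))]

def forward(x, layers):
    """Minimal deep-network forward pass: matrix multiply, bias, nonlinearity."""
    for w, b in layers:
        x = np.maximum(x @ w + b, 0.0)   # ReLU, chosen only to keep the sketch short
    return x

print(forward(rng.standard_normal(8), layers))   # an untrained, opaque mapping
```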

  “With deep learning, it’s more difficult to understand the decision-making process,” said Michael Schuldenfrei, CTO of Optimal+. “In a production environment, you’re trying to understand what went wrong. You can explain the model that the machine learning algorithm came from and do a lot of work comparing different algorithms, but the answer still may be different across different products. On Product A, Random Forest may work well, while on Product B, another algorithm or some combination works better. But machine learning is no good when there isn’t a lot of data. Another area that’s problematic is when you have a lot of independent variables that are changing.”

  And this is where much of the research is focused today.

  “An AI system may look at a dog and identify it as a small dog or a certain type of dog,” said IBM’s Welser. “What you need to know is what characteristics it picked up on. There may be five or six characteristics that a machine has identified. Are those the right characteristics? Or is there overemphasis of one characteristic over another? This all comes back to what are people good at versus machines.”

  The chain of events leading up to that decision is well understood, but the decision-making process is not.

  “There is this whole line of explainable AI, which is that you put some data into the system, and out pops an answer,” said Rob Aitken, an Arm fellow. “It doesn’t necessarily explain to you the precise chain of reasoning that led to the answer but it says, ‘Here are some properties of your input data that strongly influence the fact that this answer came out this way.’ Even being able to do that is helpful for a variety of contexts in that if we give AI programs or machine learning algorithms more control over making decisions, then it helps if they can explain why. Okay, you didn’t get the loan for your car, this is the particular piece of your data that flagged that. There is a flip side on security on that too. There are some attacks on machine learning algorithms that by asking the algorithm to give you responses on given sets of data, by playing with that data you can infer what its training set was. So you can learn some allegedly confidential things about the training set by selecting your queries.”
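
  One common way to surface which properties of the input data strongly influenced an answer is a feature-attribution score. The sketch below uses permutation importance from scikit-learn on a toy loan-approval model; the feature names, data, and model are invented, and real explainable-AI tooling goes considerably further:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(7)
features = ["income", "debt_ratio", "years_employed", "shoe_size"]
X = rng.normal(size=(1000, 4))
# Approval depends on income and debt ratio; the last two columns are noise.
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

model = RandomForestClassifier(random_state=0).fit(X, y)
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)

# Features whose shuffling hurts accuracy most are the ones driving decisions.
for name, score in sorted(zip(features, result.importances_mean),
                          key=lambda kv: -kv[1]):
    print(f"{name:>15}: {score:.3f}")
```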

  Training data bias plays a key role here, as well.

  “It’s a big challenge in medical data because in some areas there’s actually disagreement among the experts on how to label something, so you wind up having to develop algorithms that are tolerant of noise in the labeling,” said Aitken. “We kind of know algorithmically what it’s doing and we observe that it’s telling us stuff that seems to be useful. But at the same time we also have demonstrated to ourselves that whatever biases went into the input set pop right out of the output. Is that an example of intelligence or is that just an example of inference abuse, or something we don’t know yet?”

  What works, what doesn’t

  Once bugs are identified, the actual process of getting rid of them isn’t clear, either.

  “One way to address this is to come at it from more of a conventional side, such as support systems and optimizing memory bandwidth,” said Rupert Baines, CEO of UltraSoC. “But no one knows how these systems actually work. How do you configure a black box? You don’t know what to look for. This may be a case where you need machine learning to debug machine learning. You need to train a supervisor to train these systems and identify what’s good and bad.”

  Small variations in training data can spread, too. “The data used for training one machine might be produced by another machine,” said Cadence’s Watanabe. “The latter machine might implement different algorithms, or it might be a different instance that implements the same algorithm. For example, two machines, both of which implement an algorithm to play the game of Go, might play with each other, so that each machine produces data to be used by the other for training. The principle of debugging remains the same as above, since each machine’s behavior is verified against the boundary condition of acceptable behavior respectively.”
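
  The same check scales to the mutual-training setup Watanabe describes: two agents generate each other’s training data, and every trace is re-verified against the acceptance boundary before it is learned from. The skeleton below is entirely hypothetical; `Agent`, `play_round` and `acceptable` are placeholders, not a Go engine:

```python
import random

class Agent:
    def __init__(self, name):
        self.name = name
        self.experience = []

    def move(self, state):
        return state + random.choice([-1, 1])      # stand-in for a learned policy

    def train(self, games):
        self.experience.extend(games)              # stand-in for a learning step

def play_round(a, b, length=10):
    """Two agents alternate moves; the trace becomes training data for both."""
    state, trace = 0, []
    for turn in range(length):
        state = (a if turn % 2 == 0 else b).move(state)
        trace.append(state)
    return trace

def acceptable(trace):                              # boundary condition on behavior
    return all(abs(s) <= 10 for s in trace)

agents = (Agent("A"), Agent("B"))
for _ in range(3):
    trace = play_round(*agents)
    assert acceptable(trace), "trace violated the agreed behavior envelope"
    for agent in agents:
        agent.train([trace])
print(len(agents[0].experience), "self-play games collected per agent")
```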

  Another approach is to keep the application of AI/DL/ML narrow enough that it can be constantly refined internally. “We started with TensorFlow algorithms and quickly found out they were not adequate, so we moved over to Random Forest,” said Sundari Mitra, CEO of NetSpeed Systems. “But then what do you do? Today we do analysis and we are able to change our methodology. But how do you do that in a deep learning system that is already fabricated?”
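
  Switching algorithm families, as NetSpeed did, is usually framed as routine model selection: try several candidates on the same data and keep whichever cross-validates best. The rough sketch below uses scikit-learn; the synthetic datasets and the candidate list are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Two synthetic datasets; the second has noisier labels.
datasets = {"dataset_a": make_classification(n_samples=500, random_state=1),
            "dataset_b": make_classification(n_samples=500, flip_y=0.2, random_state=2)}

candidates = {"random_forest": RandomForestClassifier(random_state=0),
              "logistic_regression": LogisticRegression(max_iter=1000),
              "svm": SVC()}

for name, (X, y) in datasets.items():
    scores = {model_name: cross_val_score(model, X, y, cv=5).mean()
              for model_name, model in candidates.items()}
    best = max(scores, key=scores.get)
    print(name, best, round(scores[best], 3))
```

  The catch Mitra points to is that this kind of comparison is only possible while the model can still be retrained; once the network is frozen into silicon, there is nothing left to swap.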

  Progress so far

  To make matters even more confusing, all of these systems are based on training algorithms that are in an almost constant state of flux. As they are used in real-world applications, problems show up. From there the training data is revised, and the inferencing systems are analyzed and tested to see how those changes affect behavior.

  “That data involves not only how the testbench and the stimulus generation has behaved, but it also involves how the design has behaved,” said Mark Olen, a product marketing manager at Mentor, a Siemens Business. “It knows that in order to generate a good set of tests I want to do a lot of different things, and it knows that if I’ve tried to present a certain set of stimulus to the device and I’ve done it 1,000 times over the course of the day, over my simulation farm, and I’ve always gotten the same result, it knows to not do that again because I’m going to get the same result, so it has to do something different. It’s actually an application of some methods that are pretty similar to what we would call formal techniques, but it’s not formal verification in the pure sense of the way we think about property checking and assertion-based verification. It’s formal in terms of formal mathematics.”
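
  Stripped of the verification-specific machinery, the behavior Olen describes reduces to remembering which stimuli have already been exercised and what they produced, and steering generation toward something new. The sketch below is a toy stand-in (the stimulus space and `simulate` function are invented), not Mentor’s tool:

```python
import random
from collections import defaultdict

def simulate(stimulus):                 # stand-in for a simulation run
    return stimulus % 4                 # many stimuli map to the same few results

history = defaultdict(set)              # stimulus -> set of observed results

def next_stimulus(space, history):
    """Prefer stimuli that have never been tried; fall back to random choice."""
    untried = [s for s in space if s not in history]
    return random.choice(untried if untried else space)

space = list(range(20))
for _ in range(30):
    s = next_stimulus(space, history)
    history[s].add(simulate(s))

print(f"{len(history)} distinct stimuli exercised out of {len(space)}")
```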

  Olen noted that leading-edge companies have been working on this for some time. “We haven’t seen anything commercially yet, but you can imagine the Bell Labs type of customers. There are a handful of customers that have long since been on the forefront of some of this technology—developing it for themselves, not necessarily for commercial purposes,” he said.

  The path forward

  For years, debugging AI was put on the back burner while hand-written algorithms were developed and tested by universities and research houses. In the past year, all of that has changed. Machine learning, deep learning and AI are everywhere, and the technology is being used more widely even within systems where just last year it was being tested.

  That will have to change, and quickly. The idea behind some of these applications is that AI can be used for training other systems and improving quality and reliability in manufacturing, but that only works if the training data itself is bug-free. And at this point, no one is quite sure.
