Top 10 IC Design Houses’ Combined Revenue Grows 12% in 2023, NVIDIA Takes Lead for the First Time, Says TrendForce
In 2023, the combined revenue of the world’s top ten IC design houses reached approximately $167.6 billion, marking a 12% annual increase. This growth was primarily driven by NVIDIA, which saw a remarkable 105% increase in revenue, significantly boosting the overall industry. While Broadcom, Will Semiconductor, and MPS experienced only marginal revenue growth, other companies faced declines due to economic downturns and inventory reductions, says TrendForce.

Looking ahead to 2024, TrendForce predicts that with IC inventory levels returning to healthy standards and driven by the AI boom, major CSPs will continue to expand their investment in building LLMs. Additionally, AI applications are expected to penetrate personal devices, potentially leading to the introduction of AI-powered smartphones and AI PCs. Consequently, the global IC design industry’s revenue growth is expected to continue its upward trajectory.

NVIDIA, Broadcom, and AMD benefit from a surge in demand for AI

NVIDIA boosted its 2023 revenue to $55.268 billion, a 105% year-over-year increase, primarily driven by its AI GPU H100. Currently, NVIDIA captures over 80% of the AI accelerator chip market, and its revenue growth is expected to continue in 2024 with the release of the H200 and the next-generation B100/B200/GB200. Broadcom’s revenue reached $28.445 billion in 2023 (semiconductor segment only), growing by 7%, with AI chip income accounting for nearly 15% of its semiconductor solutions. Despite stable wireless communications revenue, Broadcom expects a near-double-digit decline in broadband and server storage connectivity this year.

AMD’s revenue fell by 4% to $22.68 billion in 2023 due to declining PC demand and inventory reductions, which affected most of its business segments. Only its data center and embedded businesses, boosted by the acquisition of Xilinx, grew by 17%. AMD’s AI GPU MI300 series, launched in the fourth quarter of 2023, is expected to be a major revenue driver in 2024.

Conversely, Qualcomm and MediaTek were impacted by the downturn in the smartphone market. Qualcomm’s 2023 revenue decreased by 16% YoY to $30.913 billion (QCT only) due to weak demand in the handheld device and IoT sectors, with China’s smartphone shipments hitting a decade low. However, Qualcomm is actively promoting the automotive market, expecting automotive revenues to more than double by 2030.

MediaTek’s revenue also fell in 2023, dropping 25% YoY to $13.888 billion, with declines in its smartphone, power management IC, and smart edge businesses. Nevertheless, thanks to the adoption of its Dimensity 9300 by several Chinese clients and expected growth in high-end smartphone shipments, the company predicts a return to double-digit growth for all of 2024.

Two significant changes took place in the sixth-to-tenth portion of the ranking. First, Cirrus Logic fell off the list from its last-place spot and was replaced by MPS, whose 2023 revenue rose 4% YoY to $1.821 billion thanks to its automotive, enterprise data, and storage computing businesses, offsetting declines in the communication and industrial sectors.

Second, Realtek’s revenue fell by 19% annually to $3.053 billion in 2023, dropping the company to eighth place. The decline was mainly due to a sharp decrease in PC shipments, a suspension of telecom tenders in China, and early inventory write-offs. However, after clearing inventory, Realtek saw slight improvements in PC and automotive shipments in the first quarter of 2024, ahead of its networking and consumer electronics lines. With the launch of Wi-Fi 7 in the third quarter, the restart of telecom tenders, and participation in the development of edge computing frameworks through the Arm alliance, Realtek’s revenues are poised for growth.
Release time: 2024-05-13 14:48
NVIDIA Confirms Development of “Compliance Chips” for the Chinese Market
According to IJIWEI’s report, NVIDIA recently confirmed that it is actively working on new “compliant chips” tailored for the Chinese market. However, these products are not expected to make a substantial contribution to fourth-quarter revenue.

On November 21, during NVIDIA’s earnings briefing for the third quarter of fiscal 2024, executives acknowledged the significant impact of tightened U.S. export controls on AI. They anticipated a significant decline in data center revenue from China and other affected countries and regions in the fourth quarter. The controls were noted to have a clear negative impact on NVIDIA’s business in China, and this effect is expected to persist in the long term.

NVIDIA’s Chief Financial Officer, Colette Kress, also noted that the company anticipates a significant decline in sales in China and the Middle East during the fourth quarter of the 2024 fiscal year. However, she expressed confidence that robust growth in other regions would be sufficient to offset this decline.

Kress mentioned that NVIDIA is collaborating with some customers in China and the Middle East to obtain U.S. government approval for selling high-performance products. At the same time, NVIDIA is attempting to develop new data center products that comply with U.S. government policies and do not require licenses. However, these products are not expected to contribute to fourth-quarter sales immediately.

Previous reports suggested that NVIDIA has developed a new series of computational chips, including the HGX H20, L20 PCIe, and L2 PCIe, specifically designed for the Chinese market. These chips are modified versions of the H100, ensuring compliance with relevant U.S. regulations.

As of now, Chinese domestic manufacturers have not received samples of the H20, and they may not be available until the end of this month or the middle of next month at the earliest. IJIWEI’s report indicates that insiders have revealed the possibility of further policy modifications by the U.S., a factor that NVIDIA is likely taking into consideration.
Release time: 2023-11-23 13:24
AMD Closes In on NVIDIA, Securing Major Deals with Oracle and IBM
As Jiwei reported, AMD, although trailing NVIDIA in AI, has recently clinched significant deals, earning the trust of two major clients, Oracle and IBM. Oracle plans to integrate AMD’s Instinct MI300X AI chips into its cloud services, complemented by HPC GPUs. Additionally, according to TF International Securities analyst Ming-Chi Kuo, IBM is set to leverage AMD’s Xilinx FPGA solutions to handle artificial intelligence workloads.

Oracle’s extensive cloud computing infrastructure faces challenges due to a shortage of NVIDIA GPUs. Nonetheless, Oracle maintains an optimistic outlook. It aims to expand its deployment of the H100 chip in 2024 while considering AMD’s Instinct MI300X as a viable alternative. Oracle has decided to postpone the application of its in-house chips, a project with a multi-year timeline, and is instead shifting its focus to AMD’s high-performance AI chip, the MI300X, well regarded for its impressive capabilities. Reports indicate that Oracle intends to introduce these processor chips into its infrastructure in early 2024.

Similarly, IBM is exploring chip options beyond NVIDIA. Its new AI inference platform relies on NeuReality’s NR1 chip, manufactured on TSMC’s 7nm process. AMD plays a pivotal role in NeuReality’s AI solution by providing the essential FPGA chips. Foxconn is gearing up for AI server production using this technology in Q4 2023.

Kuo also pointed out that, although NVIDIA remains the dominant AI chip manufacturer in 2024, AMD is strengthening partnerships with platform service providers and CSPs such as Microsoft and Amazon while acquiring companies like Nod.ai. This positions AMD to potentially narrow the AI gap with NVIDIA starting in 2025. The collaboration also affirms that AMD remains unaffected by the updated U.S. ban on shipping AI chips to China.
Release time: 2023-10-24 16:33
TSMC Intensifies Silicon Photonics R&D, Rumored Collaboration with Broadcom and NVIDIA
According to a report by Economic Daily, AI is driving massive demand for data transmission, and silicon photonics and co-packaged optics (CPO) have become new focal points in the industry. TSMC is actively entering this field and is rumored to be collaborating with major customers such as Broadcom and NVIDIA to jointly develop these technologies. The earliest large orders are expected to come in the second half of next year.

TSMC has already assembled a research and development team of over 200 people, aiming to seize business opportunities in the emerging market of ultra-high-speed computing chips based on silicon photonics, which are expected to arrive gradually starting next year.

Regarding these rumors, TSMC has stated that it does not comment on customer and product situations. However, TSMC holds silicon photonics technology in high regard. TSMC Vice President Douglas Yu recently stated publicly, “If we can provide a good silicon photonics integration system, it can address two key issues: energy efficiency and AI computing capability. This could be a paradigm shift. We may be at the beginning of a new era.”

Silicon photonics was a hot topic at the recent SEMICON Taiwan 2023, with major semiconductor players such as TSMC and ASE giving related keynote speeches. This surge in interest is mainly due to the proliferation of AI applications, which has raised questions about how to make data transmission faster and reduce signal latency. The traditional method of using electricity for signal transmission no longer meets these demands, and silicon photonics, which converts electrical signals into faster optical transmission, has become the highly anticipated next-generation technology for increasing high-volume data transmission speeds.

Industry reports suggest that TSMC is currently collaborating with major customers like Broadcom and NVIDIA to develop new products in the field of silicon photonics and co-packaged optics. The manufacturing process technology ranges from 45 nanometers to 7 nanometers, with mass production slated for 2025. At that point, the technology is expected to bring new business opportunities to TSMC.

Industry sources reveal that TSMC has already organized a research and development team of approximately 200 people. In the future, silicon photonics is expected to be incorporated into CPUs, GPUs, and other computing processors. By replacing electronic transmission lines with faster optical transmission internally, computing capabilities are expected to increase several tens of times compared with existing processors. Currently, this technology is still at the research and academic-paper stage, but the industry has high hopes that it will become a new driver of explosive growth for TSMC’s operations in the coming years.
Release time: 2023-09-11 14:52
Ameya360: Quest Global and NVIDIA to Develop Digital Twin Solutions for Manufacturing Industry
Quest Global is developing new services and solutions, based on the NVIDIA Omniverse Enterprise platform, to deliver the best 3D visualization, simulation, design collaboration, and digital twin solutions for the manufacturing and automotive industries.

Through this association, Quest Global aims to facilitate the transformation of traditional manufacturing processes and facilities by enabling manufacturers to augment their physical production environments with large-scale, AI- and IoT-enabled digital twin counterparts. These digital twins will enable manufacturers to optimize their manufacturing, logistics, and warehouse processes, reduce waste, and unlock operational efficiencies.

“As organizations work towards enabling their manufacturing operations with predictive analysis, operational efficiencies, and innovative automation, live digital twins of factory solutions play a vital role in achieving that. We are proud to work with NVIDIA to set up an Omniverse center of excellence, with trained engineers and NVIDIA-specific labs and infrastructure. This association is a testament to our commitment towards helping our customers pursue the next frontier of innovation and solve the world’s hardest engineering problems,” said Dushyant Reddy, Global Business Head for Hi-Tech, Quest Global.

NVIDIA Omniverse Enterprise is an end-to-end 3D simulation platform that helps organizations develop and operate physically accurate, perfectly synchronized, and AI-enabled digital twins. Building the factories of the future requires uniting disparate datasets from many 3D digital content creation (DCC) and simulation applications in full fidelity, a capability uniquely enabled by Omniverse Enterprise, then connecting to scalable AI platforms such as NVIDIA Isaac Sim for robotics simulation and Metropolis for vision AI applications.

“The industrial metaverse requires innovative simulation and AI capabilities to tackle today’s critical manufacturing and automotive challenges,” said Brian Harrison, Senior Director of Product Management for Omniverse Digital Twins at NVIDIA. “The collaboration between Quest Global and NVIDIA delivers workflow solutions and enhancements that take manufacturing and design collaboration to the next level.”

Quest Global, a long-standing Elite member of the NVIDIA Partner Network, is uniquely positioned to leverage its 3D simulation, engineering, and AI capabilities to help manufacturers quickly develop and harness digital twins of their production environments. The company plans to utilize the capabilities of Omniverse for its customers across industry sectors for product design, optimization and operation of factories of the future, simulation and training of robotics, synthetic data generation for AI training, and much more.
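Since Omniverse scenes are built on OpenUSD, the kind of factory digital twin described above can be illustrated with a minimal sketch using the pxr Python bindings. This is only a toy example: it assumes OpenUSD is installed, and the prim paths and the cycle-time attribute are hypothetical illustrations, not part of Quest Global’s or NVIDIA’s actual solution.

```python
# Minimal OpenUSD sketch of a factory digital-twin scene skeleton.
# Assumes the pxr (OpenUSD) Python bindings are installed; all names are illustrative only.
from pxr import Usd, UsdGeom, Sdf

stage = Usd.Stage.CreateNew("factory_twin.usda")  # new USD layer on disk

factory = UsdGeom.Xform.Define(stage, "/Factory")                      # root transform for the site
line = UsdGeom.Xform.Define(stage, "/Factory/AssemblyLine01")          # one production line
cell = UsdGeom.Cube.Define(stage, "/Factory/AssemblyLine01/RobotCell") # placeholder geometry

# Attach a custom attribute that live telemetry (e.g. from IoT sensors) could later update.
cycle = cell.GetPrim().CreateAttribute("cycleTimeSeconds", Sdf.ValueTypeNames.Float)
cycle.Set(42.5)

stage.SetDefaultPrim(factory.GetPrim())
stage.GetRootLayer().Save()
print(stage.GetRootLayer().ExportToString())  # dump the authored USD for inspection
```

In a real deployment, the geometry would come from imported CAD/DCC data and the attributes would be driven by plant telemetry; the sketch only shows how the scene graph and custom attributes fit together.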
Release time: 2023-02-03 11:44
Chipmaker Nvidia plunges after missing on revenue and guidance
Nvidia stock fell as much as 19 percent Thursday after the company reported earnings for the third quarter of its 2019 fiscal year, which ended on Oct. 28.

Here's how the company did:
Earnings: $1.84 per share, excluding certain items, vs. $1.71 per share as expected by analysts, according to Refinitiv.
Revenue: $3.18 billion, vs. $3.24 billion as expected by analysts, according to Refinitiv.

With respect to guidance, Nvidia said it's expecting $2.70 billion in revenue in the fiscal fourth quarter, plus or minus 2 percent, excluding certain items. That's below the Refinitiv consensus estimate of $3.40 billion. Overall, in the fiscal third quarter, Nvidia's revenue rose 21 percent year over year, according to its earnings statement.

In its fiscal second-quarter earnings, the chipmaker fell short of analyst expectations on guidance despite beating on earnings and revenue estimates. The company's cryptocurrency mining products suffered a hefty decline in that quarter, and the trend continued in the fiscal third quarter. It has become less profitable to use graphics processing units, or GPUs, for mining, according to a recent analysis by Susquehanna. To mine cryptocurrency, computers compete to solve complex math problems in exchange for a specific amount of bitcoin or ethereum. But as both currencies have sunk in value, so too has this segment of revenue for Nvidia.

"Our near-term results reflect excess channel inventory post the crypto-currency boom, which will be corrected," Nvidia CEO Jensen Huang is quoted as saying in a Thursday press release.

In the fiscal third quarter, Nvidia's revenue from original equipment manufacturers and intellectual property totaled $148 million, which was down 23 percent year over year but above the FactSet consensus estimate of $102 million. Nvidia chalked up the decline to "the absence of cryptocurrency mining" in its earnings statement. In the quarter, Nvidia took a $57 million charge related to older products because of the decrease in demand for cryptocurrency mining.

"Our Q4 outlook for gaming reflects very little shipment in the midrange Pascal segment to allow channel inventory to normalize," Nvidia's chief financial officer, Colette Kress, told analysts on a conference call after the company announced its results. It will take one to two quarters to go through the extra inventory, Huang said on the call. "This is surely a setback, and I wish we had seen it earlier," he said. Inventory issues also affect other brands, Huang said. AMD stock fell 5 percent in extended trading on Thursday.

Nvidia's gaming business segment generated $1.76 billion in revenue in the quarter, below the $1.89 billion FactSet consensus estimate. Nvidia's data center segment came in at $792 million in revenue, lower than the $821 million estimate. Revenue for the company's professional visualization business segment was $305 million, surpassing the $284 million estimate.

Nvidia, like most other tech stocks, was hit hard in October, which was the worst month for the Nasdaq Composite Index since 2008. The stock is now up 4 percent since the beginning of the year.
Release time: 2018-11-16 00:00
Nvidia Enters ADAS Market via AI-Based Xavier
Nvidia is in Munich this week to declare that it is coming after the advanced driver assistance system (ADAS) market. The GPU company is now pushing its AI-based Nvidia Drive AGX Xavier system, originally designed for Level 4 autonomous vehicles, down to Level 2+ cars.

In a competitive landscape already crowded with ADAS solutions from rival chip vendors such as NXP, Renesas, and Intel/Mobileye, Nvidia is boasting that its GPU-based automotive SoC isn't just a "development platform" for OEMs to prototype their self-driving vehicles. At the company's own GPU Technology Conference (GTC) in Europe, Nvidia announced that Volvo Cars will use the Nvidia Drive AGX Xavier for its next generation of ADAS vehicles, with production starting in the early 2020s.

NVIDIA's Drive AGX Xavier will be designed into Volvo's ADAS L2+ vehicles. Henrik Green (left), head of R&D of Volvo Cars, with Nvidia CEO Jensen Huang on stage at GTC Europe in Munich. (Photo: Nvidia)

Danny Shapiro, senior director of automotive at Nvidia, told us, "Volvo isn't doing just traditional ADAS. They will be delivering wide-ranging features of 'Level 2+' automated driving." By Level 2+, Shapiro means that Volvo will be integrating "360° surround perception and a driver monitoring system" in addition to a conventional adaptive cruise control (ACC) system and automated emergency braking (AEB) system. Nvidia added that its platform will enable Volvo to "implement new connectivity services, energy management technology, in-car personalization options, and autonomous drive technology."

It remains unclear whether car OEMs designing ADAS vehicles are all that eager for the AI-based Drive AGX Xavier, which is hardly cheap. Shapiro said that if any car OEMs or Tier Ones are serious about developing autonomous vehicles, taking an approach that "unifies ADAS and autonomous vehicle development" makes sense. The move allows carmakers to develop software algorithms on a single platform. "They will end up saving cost," he said.

Phil Magney, founder and principal at VSI Labs, agreed. "The key here is that this is the architecture that can be applied to any level of automation," he said. "The processes involved in L2 and L4 applications are largely the same. The difference is that L4 would require more sensors, more redundancy, and more software to assure that the system is safe enough even for robo-taxis, where you don't have a driver to pass control to when the vehicle encounters a scenario that it cannot handle."

Better than discrete ECUs
Another argument for the use of AGX for L2+ is that the alternative requires multiple discrete ECUs. Magney said, "An active ADAS system (such as lane keeping, adaptive cruise, or automatic emergency braking) requires a number of cores fundamental to automation. Each of these tasks requires a pretty sophisticated hardware/software stack." He asked, "Why not consolidate them instead of having discrete ECUs for each function?"

Scalability is another factor. Magney rationalized, "A developer could choose AGX Xavier to handle all these applications. On the other hand, if you want to develop a robo-taxi, you need more sensors, more software, more redundancy, and higher processor performance … so you could choose AGX Pegasus for this."

Is AGX Xavier safer?
Shapiro also brought up safety issues. He told us, "Recent safety reports show that many L2 systems aren't doing what they say they would do." Indeed, in August, the Insurance Institute for Highway Safety (IIHS) exposed "a large variability of Level 2 vehicle performance under a host of different scenarios." An EE Times story entitled "Not All ADAS Vehicles Created Equal" reported that some L2 systems can fail under any number of circumstances. In some cases, certain models equipped with ADAS are apparently blind to stopped vehicles and could even steer directly into a crash.

Nvidia's Shapiro implied that by "integrating more sensors and adding more computing power" to run robust AI algorithms, Volvo can make its L2+ cars "safer." On the topic of safety, Magney didn't necessarily agree. "More computing power doesn't necessarily mean that it is safer," he noted. "It all depends on how it is designed." Lane keeping, adaptive cruise, and emergency braking for L2 could rely on a few sensors and associated algorithms while a driver at the wheel manages events beyond the system's capabilities.

However, the story is different with a robo-taxi, explained Magney. "You are going to need a lot more … more sensors, more algorithms, some lock-step processing, and localization against a precision map," he said. "For example, if you go from a 16-channel LiDAR to a 128-channel LiDAR for localization, you are working with eight times the amount of data for both your localization layer as well as your environmental model."

Competitive landscape
But really, what does Nvidia have that competing automotive SoC chip suppliers don't? Magney, speaking from his firm VSI Labs' own experience, said, "The Nvidia Drive development package has the most comprehensive tools for developing AV applications." He added, "This is not to suggest that Nvidia is complete and a developer could just plug and play. To the contrary, there is a ton of organic codework necessary to program, tune, and optimize the performance of AV applications." However, he concluded that, in the end, "you are going to be able to develop faster with Nvidia's hardware/software stack because you don't have to start from scratch. Furthermore, you have DRIVE Constellation for your hardware-in-loop simulations where you can vastly accelerate your simulation testing, and this is vital for testing and validation."
Release time: 2018-10-11 00:00
SiFive announces first open-source RISC-V-based SoC platform with NVIDIA Deep Learning Accelerator technology
SiFive, a provider of commercial RISC-V processor IP, today announced the first open-source RISC-V-based SoC platform for edge inference applications based on NVIDIA’s Deep Learning Accelerator (NVDLA) technology.

The demo will be shown this week at the Hot Chips conference and consists of NVDLA running on an FPGA connected via ChipLink to SiFive’s HiFive Unleashed board powered by the Freedom U540, the world’s first Linux-capable RISC-V processor. The complete SiFive implementation is well suited for intelligence at the edge, where high performance with improved power and area profiles is crucial. SiFive’s silicon design capabilities and innovative business model enable a simplified path to building custom silicon on the RISC-V architecture with NVDLA.

NVIDIA open-sourced its leading deep learning accelerator over a year ago to spark the creation of more AI silicon solutions. Open-source architectures such as NVDLA and RISC-V are essential building blocks of innovation for Big Data and AI solutions.

“It is great to see open-source collaborations, where leading technologies such as NVDLA can make the way for more custom silicon to enhance the applications that require inference engines and accelerators,” said Yunsup Lee, co-founder and CTO, SiFive. “This is exactly how companies can extend the reach of their platforms.”

“NVIDIA open sourced its NVDLA architecture to drive the adoption of AI,” said Deepu Talla, vice president and general manager of Autonomous Machines at NVIDIA. “Our collaboration with SiFive enables customized AI silicon solutions for emerging applications and markets where the combination of RISC-V and NVDLA will be very attractive.”
Release time: 2018-08-21 00:00
Nvidia's GTX 1180 production should kick off with the new SK Hynix memory deal
Nvidia has signed a big chip deal with SK Hynix, putting the South Korean memory manufacturing giant in line for a large slice of Nvidia’s future financial successes with AI and data centre sales - and potentially with future GTX 1180-shaped graphics cards too.

SK Hynix is the world’s second-largest memory fab behind Samsung, and number four in the world for semiconductor sales behind TSMC, Intel, and Samsung - who is, again, in the very top spot. The memory manufacturer has already been enjoying a pretty stellar year, with huge worldwide memory demand that has seen system memory prices soar. But since the deal with Nvidia entered the public domain, SK Hynix’s stock has gone supersonic. Shares are currently valued at 95,300 KRW apiece, toppling the previous high of the year of 90,700 KRW set back in March. As a result of the Nvidia deal, this is the highest the semiconductor manufacturer’s stock has been in 17 years.

So what chips is SK Hynix providing Nvidia? From what we know of Nvidia’s next generation of graphics cards (which is still very little, despite the recent Nvidia GTX 1180 rumours), we can only assume that SK Hynix will be providing GDDR6 for Nvidia’s GTX 1180 and further next-generation graphics cards. SK Hynix confirmed that its speedy GDDR6 memory modules were available for purchase earlier in the year, and this memory standard has been the most likely candidate to power Nvidia’s next gaming cards for some time.

Nvidia’s current Volta-generation Titan V utilises 12GB of HBM2 memory (which SK Hynix also makes and presumably supplies to Nvidia in some capacity), although this on-package memory solution has been largely deemed too expensive and unnecessary for gaming cards - unfortunately, AMD’s RX Vega graphics cards had to find this out first hand.

The deal is likely the icing on the cake of what has already been an extremely successful year for SK Hynix’s financials. Hopefully this newfound memory deal indicates signs of life within the Nvidia supply chain toward large-scale production of new product, and is a good omen for gamers patiently waiting it out for new graphics cards from the green team.
Release time: 2018-05-24 00:00
Nvidia Moves Into Top 10 in Chip Sales
Nvidia cracked the list of top 10 semiconductor vendors by sales for the first time in 2017, joining Qualcomm as the only other strictly fabless chip supplier to attain that distinction last year, according to market research firm IHS Markit.

Nvidia's 2017 sales total of $8.57 billion was good enough for the company to secure the 10th position among chip vendors last year, IHS said. Ironically, Nvidia edged out fellow fabless chip supplier MediaTek of Taiwan to crack the top 10, according to Len Jelinek, director and chief analyst for technology, media and telecommunications at IHS. Qualcomm, Nvidia, and MediaTek are the only strictly fabless chip vendors ever to crack the top 10 list of chip suppliers in a calendar year. MediaTek was among the top 10 chip vendors in 2014 and again in 2016. Qualcomm first cracked the top 10 list in 2007 and has remained a fixture on the list ever since.

Overall, global semiconductor sales rose 21.7 percent in 2017 to reach a record $429.1 billion. It was the highest year-over-year growth for the industry in 14 years. Most of the industry's sales gains were driven by a blockbuster year for memory chip sales, which increased by 60.8 percent from 2016. Outside of memory, the rest of the semiconductor industry grew by 9.9 percent last year, due largely to what IHS called solid unit-sales growth and strong demand across all applications, regions, and technologies.

Craig Stice, senior director for memory and storage at IHS, said through a press statement that NAND prices, which increased throughout 2017, are expected to decline this year. "Entering 2018, the 3D NAND transition is now almost three-quarters of the total bit percent of production, and it is projected to provide supply relief for the strong demand coming from the SSD and mobile markets," Stice said. "Prices are expected to begin to decline aggressively, but 2018 could still be a record revenue year for the NAND market."
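For readers who want to sanity-check the growth figures above, a quick back-of-envelope calculation shows they hang together. The implied memory share of 2016 sales below is a derived estimate, not a figure reported by IHS:

```python
# Back-of-envelope consistency check of the reported 2017 growth figures.
# All inputs come from the article; the memory share of 2016 sales is derived, not reported.

total_2017 = 429.1          # global semiconductor sales in 2017, $B
total_growth = 0.217        # overall year-over-year growth
memory_growth = 0.608       # memory segment year-over-year growth
non_memory_growth = 0.099   # rest-of-industry year-over-year growth

total_2016 = total_2017 / (1 + total_growth)

# Blended growth: total_growth = w*memory_growth + (1 - w)*non_memory_growth,
# where w is the memory segment's share of 2016 sales. Solve for w.
w = (total_growth - non_memory_growth) / (memory_growth - non_memory_growth)

print(f"2016 base: ~${total_2016:.0f}B, implied 2016 memory share: ~{w:.0%}")
# -> 2016 base: ~$353B, implied 2016 memory share: ~23%
```

The implied memory share of roughly 23 percent of 2016 sales is consistent with memory being the segment that drove most of 2017's record growth.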
Release time: 2018-04-13 00:00