Mostly Upbeat Outlook For Chips
19 February 2019

2019 has started with cautious optimism for the semiconductor industry, despite dark clouds that dot the horizon

Market segments such as cryptocurrencies and virtual reality are not living up to expectations, the market for smartphones appears to be saturated, and DRAM prices are dropping, leading to cutbacks in capital expenditures. EDA companies are talking about sales to China being put on hold in the shadow of a trade war between the United States and China. And there appears to be a slowdown in consumer electronics, as evidenced by Apple’s recent earnings and guidance.

Alongside all of that, there are several areas to be excited about. The rapid adoption of artificial intelligence (AI) is fueling advancements in fields such as automotive and IoT. 5G, another important enabler, is being readied for significant deployment. Plus, the number of design starts is growing, spurring a resurgence in ASICs and the emergence of embedded FPGA structures.

There are some bright spots in the semiconductor market that should provide accelerated growth, helping to offset average selling price declines for memory. “In particular, the dramatic increase in investment in new domain specific processors, principally for artificial intelligence and machine learning, is very encouraging,” says Wally Rhines, CEO Emeritus, Mentor, a Siemens Business. “For many applications in pattern recognition and data analytics, traditional chip architectures are not providing adequate performance and the reduced power needed to execute the newest machine learning algorithms. As a result, there is a major acceleration in the design of new custom chips. This has also stimulated development of new chip design methodologies, many of which involve the use of artificial intelligence to improve the design process.”

But we should beware of putting too much faith in AI to fuel the industry. “It’s likely that AI will enter Gartner’s famed ‘trough of disillusionment’ for many companies,” warns Forrest. “The industry will quickly realize that AI isn’t the answer to everything and the hype will disappear somewhat. Others will quickly shift focus, ensuring that useful elements of AI that augment systems capabilities are retained, but AI will not necessarily remain central to the operation of those systems.”

Expectations have to be set. “While we all dream of AI devices or robotics that can perform loosely defined tasks like ‘do the dishes’, ‘paint the fence’ or ‘drive to Mom’s house’, the AI growth that I’m seeing for 2019 will be for simplifying routine tasks that can be automated with the addition of AI, and in particular with voice or image recognition,” says Marc Greenberg, group director of product marketing for the IP Group at Cadence. “For example, if you hate punching in the cooking time into the keypad of your microwave, then voice recognition – either standalone or through the digital assistant portal – will allow you to tell your microwave to heat your toaster pastry. We’ll see more office security systems that use facial recognition to open doors and possibly alert security if there’s an unauthorized tailgater behind an authorized user.”

AI is still in its infancy, despite all the rapid advancements. “There’s a lot of talk about the danger of AI, but that seems to be based on rather science-fictional concepts about singularity and ethics,” points out David Harold, vice president of marketing communications for Imagination. “Today, the application of AI is very much in the hands of people—and it is people, especially those with legislative powers, to whom we need to turn our attention to make sure AI is a benefit to society, not a burden.”

There are two significantly different domains for AI: learning and inferencing. Inferencing can happen either in the data center or at the edge.
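To make the distinction concrete, the minimal sketch below (not from the article; it assumes PyTorch and uses toy placeholder data) contrasts the two domains: learning, which runs many compute-heavy gradient updates and is typically done in the data center, and inferencing, which is a single forward pass that can run in the cloud or on an edge device.

```python
# Illustrative sketch only: toy model and random data stand in for real workloads.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

# --- Learning (training): repeated backpropagation, usually in the data center ---
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):
    x = torch.randn(32, 8)                 # random "training" batch
    y = torch.randint(0, 2, (32,))         # random labels
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()        # the gradient step that does the learning
    optimizer.step()

# --- Inferencing: forward pass only, deployable in the data center or at the edge ---
model.eval()
with torch.no_grad():                      # no gradients needed, far lighter compute
    sample = torch.randn(1, 8)
    prediction = model(sample).argmax(dim=1)
    print(prediction.item())
```

The asymmetry visible here (many gradient passes versus one forward pass) is why the article treats the two as separate markets with different hardware requirements.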

AI in the data center is beginning to see significant change. “Nvidia’s GPUs and Intel Xeons have dominated most data center neural network processing to date,” says Geoff Tate, CEO for Flex Logix. “Nvidia’s Tesla T4 has been the only mass volume data center inference card. Habana’s Goya is the first product to ship with price/performance characteristics markedly better than Tesla T4. In 2019 more optimized inference engines will debut and their collective penetration in the data center will accelerate, sharply reducing Nvidia’s market share. Cloud companies’ home-grown inferencing accelerators, like Amazon’s Inferentia, will add to this trend.”

This trend is likely to continue. “Intel’s data center dominance will begin to recede,” adds Tate. “Dedicated training and inferencing engines will move significant shares of neural network processing off of Xeons. Intel’s own neural network acquisitions have yet to gain any significant traction. This is critical because neural networks are a rapidly growing share of data center processing workloads, and processors are not competitive with optimized neural network chips.”

The drive toward optimized architectures also will enable inferencing at the edge. “In the next year, we will see significant progress in the development of truly adaptive, learning systems for use in the cloud and in embedded, low power, autonomous applications at the edge,” says David White, senior group director for R&D in Cadence’s Custom IC & PCB Group. “This will require a more effective combination of machine and deep learning with optimization-driven adaptation. It will also increase the focus on the verification of AI-enabled systems for industrial and transportation-related systems and environments. It is not as large of a concern for cloud-based marketing applications or image processing benchmarks, but as we integrate AI and deep learning into transportation, manufacturing, IoT or other safety critical environments, it becomes more of a concern.”

The migration to the edge will take a while. “For true AI at the edge we need to consider innovative methods of improving the packing density of transistors onto silicon chips,” points out Forrest. “We expect to see entirely new ways of constructing SoCs that have both the capacity to acquire knowledge through learning alongside the necessary reasoning skills to adapt. This is several years away.”

Source: https://semiengineering.com 
