SEMICON Taiwan 2024 centered on the theme “Breaking Limits: Powering the AI Era,” exploring innovations in technology, cross-industry collaboration, and the widespread impact of semiconductors. Key industry leaders highlighted the new opportunities and challenges presented by artificial intelligence (AI), showcasing their latest technological advancements to meet evolving market needs.
Market Opportunity for AI
TSMC’s Executive Vice President and COO, Dr. Y.J. Mii, underscored the vast potential of AI, predicting a compound annual growth rate (CAGR) of 50%. He emphasized that AI development has a highly promising future, with customer expectations driving a wide variety of applications.
Hamidou Dia, Google’s Vice President of Generative AI Solutions Architecture, forecasted that AI would create a market worth between $16 trillion and $20 trillion by 2030. He noted that AI has become an integral part of daily life, impacting the work, education, and healthcare sectors.
Microsoft Corporate Vice President Rani Borkar echoed these sentiments, projecting that AI would create $20 trillion in market opportunities over the next two decades. She pointed out that AI is driving global GDP growth and accelerating innovation, with the semiconductor market expected to surpass $1 trillion by 2030.
AI’s Technology Challenges: Performance vs. Energy Efficiency
Despite the remarkable progress AI brings to the semiconductor industry, it also introduces significant challenges, particularly concerning the demand for high performance and energy efficiency. As AI models grow in complexity, addressing these challenges becomes crucial to maintaining the industry’s role in advancing AI technologies.
The Role of Memory
Akshay Singh, Vice President of Micron Technology, highlighted how AI is advancing at an unprecedented pace, finding applications in smart vehicles, healthcare, and manufacturing. With the emergence of generative AI, the conventional architecture—where graphics processing units (GPUs) access data through central processing units (CPUs)—has become insufficient for large language model (LLM) training and inference. As a solution, high-bandwidth memory (HBM) integrated with GPUs has emerged as a new platform to enhance computing power, offering high bandwidth and capacity to enable direct data access and reduce bottlenecks.
Building on the industry’s push toward HBM, Dr. Jung-Bae Lee, President of Samsung Electronics’ memory business, provided further insight into the challenges facing the memory sector as AI scales up. Dr. Lee underscored that larger AI models are driving higher energy consumption in memory systems, making power efficiency a priority. While GPUs have seen rapid advancements, slower progress in memory bandwidth has created bottlenecks in AI development. He also emphasized the growing importance of large-capacity storage, given the increasing reliance of generative AI models on vector databases to prevent hallucinations.
At SEMICON Taiwan, Samsung introduced its strategic memory roadmap to tackle these challenges. Dr. Lee stressed that conventional memory processes alone are insufficient for the performance required by advanced HBM solutions. By integrating logic technologies into memory, Samsung offers a comprehensive turnkey solution that sets it apart in the market. This innovation has led to rising demand for Samsung’s custom HBM solutions, which reduce power consumption by approximately 66% using logic processes in the base die.
Samsung’s leadership in HBM innovation is further bolstered by its System LSI and foundry capabilities. The company also reaffirmed its commitment to collaborating with other foundries and electronic design automation (EDA) companies to address evolving customer needs.
Samsung’s product roadmap includes plans to begin mass production of HBM4 in 2025, followed by HBM4E in 2026 and HBM5 in 2027. Additionally, Samsung announced the release of DRAM products using a 10nm-class 1d process in 2026 and a groundbreaking single-digit 0a node process in 2027.
In closing, Dr. Lee emphasized the need for collaboration across the semiconductor ecosystem: “The challenges of the future are immense, and customer demands are becoming more complex and diverse. Samsung cannot tackle everything alone. We will continue to work closely with our partners to drive technological innovation and strengthen our leadership in the AI and memory markets.”
As competition in the AI memory space intensifies, SK Hynix also continues to develop innovative solutions to meet growing demands for energy efficiency, performance, and capacity. At the Heterogeneous Integration Global Summit, a pre-event of SEMICON Taiwan, Kangwook Lee, Vice President of Packaging (PKG) R&D at SK Hynix, presented the company’s latest advancements under the theme “Preparing for the AI Era with HBM and Advanced Packaging Technologies.” He detailed SK Hynix’s strategic product roadmap, highlighting the company’s focus on selecting the most optimal technology solutions tailored to customer needs.
Currently, SK Hynix utilizes mass reflow-molded underfill technology (MR-MUF) in its 8-layer HBM3 and HBM3E products, while advanced MR-MUF is used in 12-layer products. The company plans to mass-produce 12-layer HBM4 products next year, leveraging this advanced technology. For 16-layer products, SK Hynix is preparing to utilize both advanced MR-MUF and hybrid bonding technologies to ensure the best solution for each customer. Future products with over 20 layers, such as HBM5, are expected to transition toward hybrid bonding.
Hybrid bonding is an advanced packaging method that directly connects copper to copper, removing the need for solder bumps used in traditional semiconductor stacking. This reduces resistance, maximizes signal transmission efficiency, and minimizes the distance between semiconductors.
MR-MUF, first used in HBM2E, offers several advantages, including low-voltage and low-temperature bonding, as well as improved batch thermal processing. This method enhances production efficiency and reliability, using high thermal conductivity gap-fill materials and dense metal bumps to achieve over 30% better heat dissipation compared with traditional processes.
SK Hynix also highlighted the capabilities of HBM4, which supports up to 16 layers, offering a maximum capacity of 48GB and data processing speeds exceeding 1.65TB per second. From HBM4 onward, logic processes applied to the base die will improve both performance and energy efficiency.
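As a rough illustration, the quoted 1.65TB/s stack bandwidth can be translated into a per-pin data rate. The sketch below assumes the 2048-bit-wide interface that the JEDEC HBM4 direction defines per stack; only the 1.65TB/s figure comes from the presentation itself.

```python
# Back-of-the-envelope per-pin data rate for an HBM4 stack.
# Assumption: a 2048-bit interface per stack (per the JEDEC HBM4
# direction); only the 1.65 TB/s figure is quoted in the talk.
BUS_WIDTH_BITS = 2048
STACK_BANDWIDTH_TBPS = 1.65

bits_per_second = STACK_BANDWIDTH_TBPS * 1e12 * 8   # TB/s -> bits/s
per_pin_gbps = bits_per_second / BUS_WIDTH_BITS / 1e9
print(f"Per-pin rate: {per_pin_gbps:.2f} Gbps")      # Per-pin rate: 6.45 Gbps
```

Doubling the interface width to 2048 bits is what lets HBM4 exceed 1.6TB/s per stack without pushing per-pin signaling rates far beyond HBM3E levels.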
During his keynote at the SEMICON Master Forum, Juseon (Justin) Kim, President of SK Hynix, expanded on the company’s broader vision for the future of memory development. He emphasized the pressing need to overcome key challenges, such as power consumption, heat dissipation, and memory bandwidth, to elevate AI to the level of general-purpose generative AI. Among these, power consumption remains the most critical hurdle. To address this, SK Hynix is actively working with its global partners to develop high-efficiency memory solutions that balance the demand for high capacity and performance with minimal power consumption and heat generation.
To support technological advancements and meet increasing customer demand, SK Hynix has launched an aggressive expansion plan. The company’s land development project at the Yongin semiconductor cluster in South Korea is progressing smoothly, with plans to establish state-of-the-art production facilities. Additionally, SK Hynix is constructing an advanced packaging plant and R&D facility in Indiana, USA, aiming for mass production to begin by 2028.
The Role of Advanced Packaging
As market expectations for AI chips’ performance and cost efficiency continue to grow, advanced semiconductor packaging technologies are evolving to meet these needs. He Jun, Vice President of Operations and Advanced Packaging Technology Services at TSMC, noted that roughly 40% of semiconductor market demand over the past two years has come from AI and high-performance computing (HPC). Chiplet technology is increasingly crucial, providing a platform for integrating memory and logic, with 3D IC technology key to achieving that integration. AI customers are pushing for larger-scale, higher-density products: by 2027, package sizes are projected to reach 8 to 10 times the current reticle (mask) limit, while product life cycles shorten to yearly updates.
In response to strong customer demand, TSMC is rapidly expanding its advanced packaging capacity. CoWoS production capacity is expected to grow rapidly through 2026, with a compound annual growth rate exceeding 50% from 2022 to 2026. Over these four years, capacity will reach roughly five times its 2022 level, a net increase of about four times.
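The two capacity figures can be cross-checked with a quick compounding calculation, under the assumption that “fivefold” refers to 2026 capacity relative to the 2022 level:

```python
# Consistency check of the quoted CoWoS capacity figures.
# Assumption: "fivefold" means 2026 capacity is 5x the 2022 level.
growth_multiple = 5.0   # 2026 capacity relative to 2022
years = 4               # 2022 -> 2026

implied_cagr = growth_multiple ** (1 / years) - 1
print(f"Implied CAGR: {implied_cagr:.1%}")   # Implied CAGR: 49.5%

# Conversely, compounding 50% per year over four years:
print(f"1.5^4 = {1.5 ** years:.2f}")         # 1.5^4 = 5.06
```

A fivefold level thus corresponds to an annualized rate of roughly 50%, so the two quoted figures are mutually consistent.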
TSMC’s CoWoS technology comes in three forms, CoWoS-S, CoWoS-R, and CoWoS-L, each differentiated by their interposer solutions:
- CoWoS-S: Utilizes a monolithic silicon interposer and through-silicon vias (TSVs) to enable high-speed electrical signal transmission between the chip and substrate.
- CoWoS-R: Employs integrated fan-out technology (InFO) and a redistribution layer (RDL) interposer for interconnecting small chips, especially in heterogeneous integration scenarios involving HBM and SoCs. The RDL interposer has up to six copper layers with spacing as small as 4µm (2µm line width).
- CoWoS-L: Combines CoWoS-S and InFO technologies with an interposer containing local silicon interconnect (LSI) chips for flexible chip-to-chip integration, beginning with configurations like 1x SoC + 4x HBM cubes and scaling to integrate more chips.
Hou Shangyong, Director of High-Performance Packaging Integration at TSMC, mentioned that TSMC’s three CoWoS products are designed to meet various customer needs, with CoWoS-L being the best option due to its balance of cost and efficiency in high-end applications.
In the past two decades, computing power has surpassed 1 exaflop, but memory and I/O bandwidth have grown only 100-fold and 30-fold, respectively. To address these bottlenecks, the semiconductor industry is adopting co-packaged optics (CPOs) and optical I/O technologies, which integrate optical modules with processors (CPUs, GPUs, DPUs, and FPGAs). This shortens transmission distances and improves data throughput, especially in AI and HPC applications, while minimizing latency and power consumption.
The Role of Silicon Photonics
Silicon photonics plays a crucial role in enabling CPOs by embedding optical components directly on silicon wafers, offering a cost-effective and scalable solution for data centers and advanced computing. This technology is essential for closing the gap between growing computing power and the limitations of traditional interconnects.
According to SEMI research, the global silicon photonics semiconductor market is expected to reach $7.86 billion by 2030, growing at a CAGR of 25.7%. During SEMICON Taiwan, SEMI launched the Silicon Photonics Alliance, with TSMC and ASE as key advocates. This alliance includes more than 30 industry partners with plans to expand and develop the most comprehensive silicon photonics ecosystem in Taiwan, promoting collaboration and innovation in this crucial field.
Industry experts have noted that generative AI servers require massive computational power and demand high data transmission rates. Wider bandwidth and low-power consumption interfaces are essential, and silicon photonics could potentially increase data transmission rates by tenfold when applied to 3D packaging solutions.
Key Takeaways from SEMICON Taiwan
If SEMICON Taiwan made one thing clear, it is that AI is here to stay. At the same time, the industry acknowledges that several challenges remain, and that multiple paths, from novel memory and packaging structures to silicon photonics, offer viable solutions. Industry leaders emphasized the need for collaboration, innovation, and strategic roadmaps to keep pace with evolving demands. Many voices also highlighted the importance of continued innovation to power the AI era effectively, ensuring the industry can meet both current and future technological challenges.