HotI 2025 has concluded. Watch the recordings of all HotI 2025 sessions and talks on YouTube!


What’s Up with HPC Interconnects? A Q&A with HotI

By Alex Woodie. The following article originally appeared on HPCwire.

The AI revolution has exposed new bottlenecks in HPC architectures, particularly at the network level. As GPUs and other AI accelerators get faster, they’re outrunning the capability of storage to keep them fed with data. This is spurring research and development into new interconnect technology, which was the focus of the recent Hot Interconnects (now HotI) conference.

As with the annual Hot Chips Symposium, the Institute of Electrical and Electronics Engineers (IEEE) hosts an annual Hot Interconnects (or HotI) conference to showcase the latest advances in interconnect technology. This year’s event took place virtually over three days in late August 2025.

HPCwire, HotI’s media partner, had the chance to conduct an email Q&A with HotI about the recent event. Several HotI representatives participated in the conversation, including Artem Polyakov (General Chair), Sayan Ghosh (Vice Chair), Dan Pitt (Keynote/Panel Chair), and Scott Levy and Jim Dinan (Finance Chairs).

Here’s our conversation:

HPCwire: First, can you tell us more about what the Hot Interconnects conference is about?

HotI: Hot Interconnects (which we now call HotI [sounds like Hot Eye]) was established in 1993 as a sister conference of the well-known Hot Chips Symposium. Both conferences were held back-to-back, sharing the same venues. Over its multi-decade history (this year marks the 32nd edition!), HotI has been actively engaged in the establishment of high-performance communication as a field. For further details, we refer the interested reader to a nostalgic retrospective by HotI “veteran” Dan Pitt.

Today, HotI is the premier international forum for researchers and developers of state-of-the-art hardware and software architectures and implementations for high-performance interconnection networks of all scales, ranging from multi-core on-chip interconnects to those within systems, clusters, and data centers.

HPCwire: What was the focus of Hot Interconnects this year?

HotI: The focus of HotI has always been on the hottest trends of the year. In his presentation opening the conference, our General Chair, Artem Polyakov, outlined how trends in the field have shifted over the past decades.

For example, routing was heavily discussed in the early 2000s, with as many as four sessions devoted to it in 2002. Hardware offloading has periodically drawn attention, e.g., in 2006, 2014, and 2020-2022. Infrastructure- and optics-related topics have appeared in consistent multi-year bursts. Beginning in 2023, we have observed a dramatic increase in interest in topics related to Artificial Intelligence (AI).

This year, the HotI theme was dedicated to the communication software that enables systems to achieve the performance requirements of cutting-edge HPC and AI applications.

Both of our stellar keynotes, by Amin Vahdat of Google and Deepak Bansal of Microsoft (both available for viewing on YouTube), as well as talks from Cornelis Networks, Cisco, Meta, and GigaIO, delved into AI infrastructures and their supporting software.

The annual panel discussion (which you can also watch on YouTube) provided an AI software perspective on interconnect requirements and explored the LLM token economy that drives cutting-edge AI systems.

Continuing this topic, several papers in the technical program focused on various aspects of AI communication. In addition, for the first time, we held a tutorial on NVIDIA GPU communication libraries that attracted a lot of attention.

HPCwire: So there has been an “AI effect” on Interconnects!

HotI: Absolutely! The explosive growth of AI has exposed several shortcomings in interconnect performance and capabilities. Specifically, the AI boom has brought into clear focus how much faster computing has advanced than networking and interconnect capabilities. It has become apparent to everyone that moving data around is now the bottleneck to scaling AI computing. As a result, many of the technologies and systems discussed at Hot Interconnects addressed how to close the performance gap between computing and networking. We appreciate the attention we in the interconnect industry have been receiving of late, as the world is looking to us to enable further acceleration of AI. Interestingly, for many years, the market for advanced interconnect technologies was primarily HPC. Suddenly, we find that these technologies are also needed for scaling AI, which represents a much larger market.

HPCwire: Were there any new technologies that we should watch?

HotI: Hot Interconnects has always been at the forefront of showcasing contemporary, cutting-edge interconnect technologies. Many new technologies, including:

  • Cray SeaStar (2002), Gemini (2010), and Slingshot (2019, 2022);
  • PathScale InfiniPath (2005);
  • Quadrics QsNetIII (2008);
  • Fujitsu Tofu (2011) and Tofu2 (2014);
  • Intel Omni-Path (2015);
  • Atos BXI (2015);
  • Standards: RoCE (2009), CXL (2021);
  • Lightmatter Passage (2023);

have been featured at various times over the history of Hot Interconnects.

This year is no exception. Several emerging “hot interconnects,” including Ultra Ethernet (view the talk on YouTube), NVIDIA NVLink Fusion (see the YouTube presentation), and UALink, are setting new standards for growing AI and HPC needs, and a technical paper covered early adoption of UCIe for an on-package memory implementation. We are excited about the impact these technologies will have on the scale of the next generation of AI products.

Optical technologies have also been prominent over the last few years. With the newest advances reducing power to 1 pJ/bit, these technologies satisfy massive bandwidth and very low latency requirements at ultra-low cost. This year was unprecedented in the number of optics-related talks. Lightmatter presented an update on their Passage Co-Packaged Optics (CPO) solution and a paper on its impact on AI training applications, and Nubis discussed mixing active copper and CPO to improve scaling. Arista examined the power aspects of optics. Avicena presented a technical paper on the advantages of microLEDs over lasers. Celestial AI presented its Photonic Fabric CPO solution.

HPCwire: Does HotI highlight the best work?

HotI: Definitely. The HotI Technical Program Committee (TPC) comprises experts from industry, academia, and labs who have been instrumental in identifying the best research papers through meticulous peer review and discussion. Since 2024, we have reprised the best paper awards to further highlight two top papers, in the “Industry” and “Academia” categories, respectively. This year, thanks to the generosity of our sponsors, each award included a $1,000 cash prize.

The Best Industry Paper award went to “Accelerating Frontier MoE Training with 3D Integrated Optics” by Mikhail Bernadskiy et al. of Lightmatter. The authors model the benefits of 3D CPO for scale-up GPU networks, demonstrating an 8x increase in scale-up pod bandwidth using half the energy of conventional CPO, a 6x reduction in package area expansion compared to CPO, and a 2.7x increase in MoE training throughput. The Best Academic Paper award went to “Deadlock-free routing for Full-mesh networks without using Virtual Channels” by Alejandro Cano Cos et al. of Universidad de Cantabria. This work revisits the routing research of the 2000s in a contemporary setting. The authors present their Topology-Embedded Routing Algorithm, which decomposes the physical topology into two components and provides up to a 32% performance improvement.

HPCwire: What is the focus for 2026 Hot Interconnects?

HotI: Continuing a trend that has emerged over the past few years, this year featured a remarkable number of talks dedicated to optics for bridging the growing gap between computation and interconnect bandwidth. Looking ahead, we plan to focus on strategies for increasing bandwidth density while addressing the limitations of conventional electrical interconnects.