Silicon photonics has emerged as a promising solution for realizing the high performance computing (HPC) systems required in the Big Data era. With applications spanning HPC, data centers, sensing and bio-sensing, aerospace, and more, it has attracted researchers from academia and industry across many fields to explore the benefits and challenges of this technology. As an emerging area, it demands multidisciplinary collaboration and contributions, from materials science and engineering, to the realization of low-loss CMOS-compatible components, to software CAD and design tools for exploring the design space of the resulting complex devices and systems.
The North American Workshop on Silicon Photonics for High Performance Computing (SPHPC) brings together experts in silicon photonics and in high performance computing (HPC architects, interconnect architects, and HPC systems modelers) to discuss the needs of silicon-photonics-based HPC interconnects and the main challenges that must be addressed to accelerate their development. It comprises invited talks of the highest caliber from both academia and industry, spanning different disciplines. It is the event for meeting professionals in the field and for exchanging and exploring new ideas.
In general, we expect to address and discuss the following questions at SPHPC:
- Are 200G or 400G links and switches absolutely required in the coming years? Or can double- or quad-rail 100G be a satisfactory solution?
- Is the lack of interconnect bandwidth really an obstacle to HPC system scaling?
- Are parallel programmers bracing for bandwidth-scarce environments (as they are preparing for the end of Moore’s Law)? Or are they betting on progress in photonics?
- Is low $/Gb/s the only goal, or are there other metrics to optimize for, e.g. bandwidth density and energy efficiency? (A back-of-envelope sketch of these metrics follows this list.)
- Are HPC system architects well aware of the true potential, and true limitations, of silicon photonics?
- Do silicon photonics experts have a good understanding of what is and isn’t required for HPC interconnects?
- Is interconnect power consumption an important issue? Or is it dominated by the cost issue anyway?
- When will silicon photonics beat (in overall value for money) copper-based backplane links?
- Can silicon photonics ever beat copper for short-distance links, e.g. to memory?
- Can silicon photonics do better than VCSEL-based links?
- How important is bandwidth density (in Gb/s per unit of silicon area) in the context of integrated photonics?
- Can ring resonators really replace Mach-Zehnder or electro-absorption modulators (EAMs) in practice, even in challenging thermal environments?
- Is it possible to obtain high bandwidth density with Mach-Zehnder modulators or EAMs?
- Can coherent systems meet ultra-low-cost requirements? Are they a viable option for energy-efficient, low-cost links?
- Are electronic-photonic design automation (EPDA) tools ready for prime time?
- Isn’t the bottleneck in the link between the ASIC and the transceiver after all?
- How can electrical driver challenges (SerDes, TIA, coding) be better taken into account in link design?
- How can the photonics community be made more aware of these electrical driver challenges?
- Is silicon photonics fundamentally capable of beating copper for short distances? Or will EO/OE conversion overheads always be a liability?
- Is there a realistic solution for low-cost packaging and assembly of ASICs with integrated photonics?
- Is it a must to have the laser co-integrated? Or can we live with an external laser supply? Or is an external supply the best solution after all (e.g. given thermal challenges)?
- If silicon photonic links eventually beat copper in both cost and power for few-centimeter links, will that enable a computer architecture revolution?
- Is there really room for optical switching in HPC? How can optics beat electrical packet switches that show only tens of nanoseconds of latency?
- In general, what are the hottest issues and challenges? Which ones must be addressed first?
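
To ground the cost and efficiency metrics raised above ($/Gb/s, energy per bit, bandwidth density), here is a minimal back-of-envelope sketch. All numbers are hypothetical placeholders chosen only to exercise the formulas; they are not measured data for any real copper or photonic link.

```python
# Illustrative figures of merit for an interconnect link.
# Assumption: all inputs below are made-up example values.

def link_metrics(cost_usd, bandwidth_gbps, power_w, area_mm2):
    """Return ($/Gb/s, pJ/bit, Gb/s per mm^2) for a single link."""
    dollars_per_gbps = cost_usd / bandwidth_gbps          # cost efficiency
    pj_per_bit = power_w / (bandwidth_gbps * 1e9) * 1e12  # energy per bit
    gbps_per_mm2 = bandwidth_gbps / area_mm2              # bandwidth density
    return dollars_per_gbps, pj_per_bit, gbps_per_mm2

# Hypothetical 100G links: a copper backplane SerDes vs. a silicon
# photonic transceiver (placeholder numbers, not vendor data).
examples = {
    "copper backplane": dict(cost_usd=50, bandwidth_gbps=100,
                             power_w=0.5, area_mm2=2.0),
    "silicon photonics": dict(cost_usd=80, bandwidth_gbps=100,
                              power_w=0.3, area_mm2=0.5),
}
for name, args in examples.items():
    d, e, bd = link_metrics(**args)
    print(f"{name}: {d:.2f} $/Gb/s, {e:.1f} pJ/bit, {bd:.0f} Gb/s/mm^2")
```

Even this toy comparison shows why a single metric is not enough: a link can lose on $/Gb/s while winning on pJ/bit and bandwidth density, which is exactly the trade-off space the questions above probe.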