TL;DR: Exa Laboratories is building reconfigurable chips for AI that are up to 27.6x more efficient and performant than H100 GPUs (per early simulations). This could save data centers hundreds of millions to billions of dollars in annual energy costs.
Meet Elias and Prithvi from Exa. They're developing reconfigurable chips for AI that are up to 27.6x* more efficient and performant than modern H100 GPUs.
CEO, Elias Almqvist (right): Self-taught engineer who studied computer science and computer engineering at Chalmers University of Technology (before dropping out to found Exa, btw). Previously worked in embedded software and on various aerospace projects during university.
CTO, Prithvi Raj (left): Holds an MEng from the world-leading Computational Stats & ML Lab at Cambridge. During his time there, he fell in love with scientific machine learning, a field that demands bespoke neural network architectures and extreme hardware efficiency. He also interned at Microsoft as a software engineer.
The problems!
The AI industry faces critical challenges threatening its sustainable growth:
Unsustainable Energy Consumption: Modern GPUs draw 600-1000 W each, creating massive scaling problems for data centers, whose energy bills run from hundreds of millions to potentially billions of dollars per year (see the back-of-envelope sketch after this list). GPU power draw keeps climbing with each new release, while compute per die area has stayed flat for the past five years.
Exponential Compute Demand: Demand for computational power is growing rapidly with each advance in AI. Unchecked, this trend could trigger an energy crisis that stalls AI progress and costs data centers billions of dollars.
Hardware Limitations: Current fixed architectures constrain AI innovation. They lack the versatility to efficiently support the diverse AI architectures and custom neural network designs crucial for solving real-world problems.
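For a sense of scale on those energy bills, here is a back-of-envelope sketch in Python. Every input is an illustrative assumption (fleet size, power draw, electricity price, overhead), not a figure from Exa:

```python
# Back-of-envelope: annual electricity cost of a large GPU fleet.
# All inputs below are assumed for illustration, not Exa's figures.
GPUS = 100_000            # GPUs in the fleet
WATTS_PER_GPU = 700       # average draw, within the 600-1000 W range above
PRICE_PER_KWH = 0.10      # USD per kWh, a typical industrial rate
HOURS_PER_YEAR = 24 * 365
PUE = 1.5                 # assumed overhead for cooling and power delivery

kwh_per_year = GPUS * WATTS_PER_GPU / 1000 * HOURS_PER_YEAR * PUE
cost_per_year = kwh_per_year * PRICE_PER_KWH
print(f"{kwh_per_year / 1e6:,.0f} GWh/year -> ${cost_per_year / 1e6:,.0f}M/year")
# ~920 GWh/year -> ~$92M/year; a fleet a few times larger, or pricier
# power, pushes this well into the hundreds of millions.
```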
The solution.
Exa's polymorphic computing technology addresses these challenges:
Reconfigures for each AI model architecture, maximizing efficiency and versatility.
Supports diverse approaches, from transformers and GPTs to emerging architectures such as Kolmogorov-Arnold Networks (KANs); see the sketch after this list for why architectures like KANs strain fixed hardware.
Early simulations indicate potential efficiency gains of up to 27.6x over the H100 GPU.
This technology could save data centers hundreds of millions to billions of dollars in annual energy costs, significantly reducing operational expenses and environmental impact.
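To make the KAN example concrete, here is a minimal, illustrative sketch (ours, not Exa's code) of a KAN-style layer in NumPy. Where a dense layer applies a fixed multiply-accumulate per edge, a KAN evaluates a learned univariate function on every edge (approximated here with a simple linear spline), an irregular compute pattern that fixed GPU kernels handle poorly and that reconfigurable hardware could map more directly:

```python
import numpy as np

def kan_edge(x, coeffs, grid):
    """One KAN edge: a learned univariate function of a scalar input,
    approximated as a linear spline with values `coeffs` on `grid`."""
    return np.interp(x, grid, coeffs)

def kan_layer(x, edge_coeffs, grid):
    """x: (n_in,) inputs; edge_coeffs: (n_out, n_in, n_grid) spline values.
    Output j sums phi_ji(x_i) over inputs i, unlike a dense layer's
    fixed w_ji * x_i multiply-accumulate."""
    n_out, n_in, _ = edge_coeffs.shape
    out = np.zeros(n_out)
    for j in range(n_out):
        for i in range(n_in):
            out[j] += kan_edge(x[i], edge_coeffs[j, i], grid)
    return out

# Toy usage: 3 inputs -> 4 outputs, each of the 12 edges carrying its
# own 8-point spline instead of a single scalar weight.
rng = np.random.default_rng(0)
grid = np.linspace(-1.0, 1.0, 8)
coeffs = rng.normal(size=(4, 3, 8))
print(kan_layer(rng.uniform(-1.0, 1.0, size=3), coeffs, grid))
```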
🤝 Introduce the founders to anyone in the scientific machine learning space or conducting AI research, particularly those with very "cursed" model architectures.
📧 Get Exa in contact with any data center, AI research organization, or GPU cloud provider (e.g., AWS, OpenAI, Anthropic, DeepMind, Lambda).
🙌 Give the team intros to semiconductor industry professionals, particularly those interested in bringing chip manufacturing back to the US!
📝 Feel free to reach out to the founders via email; they would love to hear your feedback and answer your questions!