Nvidia CEO Predicts Generative AI and Accelerated Computing Will Redefine the Future

During his keynote at the annual Computex event in Taiwan, Nvidia CEO Jensen Huang emphasized that generative AI and accelerated computing are poised to "redefine the future."

"Today, we’re at the cusp of a major shift in computing," Huang stated. "Generative AI is reshaping industries and opening new opportunities for innovation and growth."

Nvidia has solidified its leadership in the AI sector with its high-performance GPUs, essential for businesses aiming to develop and scale generative AI applications.

Huang highlighted that AI is transforming accelerated computing, impacting both consumer-facing AI PCs and enterprise-level computing platforms in data centers. "The future of computing is accelerated," he said. "With our innovations in AI and accelerated computing, we’re pushing the boundaries of what’s possible and driving the next wave of technological advancement."

In his presentation, Huang outlined Nvidia's roadmap, showcasing the upcoming Rubin GPUs, slated for release in 2026 and following the Blackwell GPUs unveiled earlier this year. The roadmap also includes updated versions of the Blackwell and Rubin GPUs in 2025 and 2027, respectively.

Huang explained Nvidia's "one-year rhythm" strategy, allowing businesses to deploy consistently upgraded hardware that enhances power and performance as AI demands grow. "Our basic philosophy is very simple: build the entire data center scale, disaggregate and sell to you parts on a one-year rhythm and push everything to technology limits," he explained.

The new hardware is designed to cut costs for businesses running AI applications by consuming less power while delivering up to 100 times better performance. Huang noted that running OpenAI’s GPT-4 on Blackwell GPUs cuts energy consumption by a factor of 350.

Nvidia's advancements have far outpaced Moore’s Law, which predicts that the number of transistors on a chip doubles roughly every two years. According to Huang, Nvidia has achieved a 1,000x increase in AI compute in just eight years, from 19 TFLOPS on Pascal GPUs in 2016 to 20,000 TFLOPS on the new Blackwell GPUs.
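To put that comparison in perspective, a quick back-of-the-envelope calculation contrasts the claimed gain with what Moore's Law alone would predict over the same period. The TFLOPS figures come from the keynote claims above; the calculation itself is purely illustrative.

```python
# Compare Nvidia's claimed AI-compute growth against a Moore's Law baseline.
# Figures (19 TFLOPS Pascal in 2016, 20,000 TFLOPS Blackwell in 2024) are
# the keynote numbers cited above.

pascal_tflops = 19         # Pascal GPU, 2016
blackwell_tflops = 20_000  # Blackwell GPU, 2024
years = 8

actual_gain = blackwell_tflops / pascal_tflops  # roughly 1,000x
moore_gain = 2 ** (years / 2)                   # doubling every 2 years -> 16x

print(f"Claimed gain over {years} years: {actual_gain:.0f}x")
print(f"Moore's Law baseline over {years} years: {moore_gain:.0f}x")
```

Under Moore's Law alone, eight years would yield only four doublings (16x), so the claimed ~1,000x figure implies growth roughly 65 times faster than the transistor-scaling trend.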