Nvidia Expands Research Lab to Advance AI and Robotics Development

Nvidia’s research division, once a small team focused on ray tracing, has grown into a 400-person powerhouse driving advancements in artificial intelligence and robotics, contributing to the company’s rise to a $4 trillion valuation.

When Bill Dally joined Nvidia’s research lab in 2009, it had about a dozen employees primarily engaged in ray tracing, a computer graphics rendering technique. Today, the lab has over 400 researchers and plays a critical role in Nvidia’s transformation from a gaming GPU company into a leader in AI technologies.

The lab is now focusing on innovations to support robotics and physical AI. On Monday, Nvidia unveiled a suite of new AI world models, libraries, and infrastructure aimed at robotics developers.

Dally, who became Nvidia’s chief scientist, began consulting for the company in 2003 while chairing Stanford University’s computer science department. Initially planning a sabbatical, Dally was persuaded by then-research head David Kirk and CEO Jensen Huang to join Nvidia full-time.

Under Dally’s leadership, the lab quickly expanded its research scope beyond ray tracing to include circuit design and very large-scale integration (VLSI). Early efforts in specialized GPU development for AI began in 2010, well ahead of the current AI surge. Nvidia invested in both hardware and supporting software, engaging with researchers worldwide.

With a dominant position in the AI GPU market, Nvidia is now targeting robotics. “Eventually robots are going to be a huge player in the world and we want to be making the brains of all the robots,” Dally stated.

Vice President of AI Research Sanja Fidler joined Nvidia in 2018, bringing expertise in simulation models for robots. She established a Toronto-based lab working on Nvidia's Omniverse simulation platform, focusing on simulations for physical AI. Early challenges included acquiring sufficient 3D data and developing differentiable rendering technology that lets AI reconstruct 3D models from images and videos.
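The core idea behind differentiable rendering is that the image-formation step exposes gradients, so 3D parameters can be fitted to observed images by gradient descent. A toy sketch (not Nvidia's actual pipeline; the function names and the single-parameter "renderer" here are illustrative only):

```python
import math

def render_silhouette_area(radius: float) -> float:
    """A trivially 'differentiable renderer': a sphere's silhouette area."""
    return math.pi * radius * radius

def fit_radius(target_area: float, radius: float = 1.0,
               lr: float = 0.01, steps: int = 500) -> float:
    """Recover the 3D parameter (radius) that reproduces the observed image."""
    for _ in range(steps):
        area = render_silhouette_area(radius)
        # Analytic gradient of the squared error through the renderer:
        # dL/dr = 2*(area - target) * d(area)/dr, with d(area)/dr = 2*pi*r.
        grad = 2.0 * (area - target_area) * 2.0 * math.pi * radius
        radius -= lr * grad
    return radius

target = render_silhouette_area(1.5)   # pretend this came from a real image
estimate = fit_radius(target)
print(round(estimate, 3))              # converges toward 1.5
```

Real systems replace the one-line "renderer" with a full rasterizer or ray tracer whose gradients are computed automatically, and optimize thousands of shape, texture, and lighting parameters at once, but the fitting loop is the same.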

In 2021, Omniverse released GANverse3D, followed by advancements in video-to-3D processing using its Neural Reconstruction Engine in 2022. These technologies underpin Nvidia's Cosmos family of world models, announced at CES 2025.

Current efforts aim to accelerate these models for real-time applications. Fidler noted that robots could process environmental data up to 100 times faster than real time, significantly enhancing physical AI capabilities.

At the SIGGRAPH computer graphics conference on Monday, Nvidia introduced new world models for synthetic data generation, alongside updated libraries and infrastructure for robotics developers.

Despite technological progress, both Dally and Fidler emphasized that humanoid robots for home use remain years away, drawing parallels to the development timelines for autonomous vehicles. They cited advances in visual AI, generative AI, and task planning as key enablers, with further growth expected as training data volumes increase.
