AI Is Becoming More Human-Like in Its Reasoning. Nvidia Claims Its Upgraded Platform Enhances This Capability Even Further.

Nvidia CEO Jensen Huang delivered a keynote address at CES 2025, the annual consumer electronics trade show held in Las Vegas, Nevada, on January 6, 2025.

On Tuesday, Nvidia unveiled more details about its next-generation AI chip platform, Blackwell Ultra, which it claims will enhance AI’s ability to reason and act on behalf of users—advancing beyond chatbots into real-world applications.

Announced at its annual GTC conference, Blackwell Ultra builds on the widely used Blackwell chip, offering increased computing power. According to Nvidia, this upgrade enables AI models to break down complex queries into multiple steps and evaluate different possibilities more efficiently.

Since the launch of OpenAI’s ChatGPT in 2022, demand for AI chips has skyrocketed, driving Nvidia’s stock to record highs. Its hardware powers AI and cloud services for major companies like Microsoft, Amazon, and Google. However, the emergence of Chinese startup DeepSeek, whose R1 model demonstrated strong reasoning abilities at a lower cost, has raised questions about the need for expensive AI hardware. Despite this, Nvidia continues to dominate, as reflected in its January quarter earnings, which exceeded Wall Street expectations.

Nvidia aims to position its chips at the core of advanced reasoning models. The company claims that a DeepSeek R1 query that previously took 90 seconds on its older Hopper chip would take just 10 seconds with Blackwell Ultra.

Tech giants including Cisco, Dell, HP, Lenovo, and Supermicro are developing new servers powered by Blackwell Ultra. The first products featuring the platform are expected in the second half of 2025.

Nvidia’s Blackwell Ultra platform is built to support advanced AI reasoning models.

Experts say AI’s ability to reason—analyzing a question before responding—will enable it to handle more complex and nuanced queries. Instead of providing a single answer, AI with reasoning capabilities can break down a request and offer multiple, scenario-based responses.

Nvidia illustrated this with an example: a reasoning model could assist in planning a wedding seating chart, factoring in preferences like where to seat parents and in-laws while ensuring the bride sits on the left.
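The seating-chart scenario is essentially a small constraint-satisfaction problem: enumerate candidate arrangements, then keep only those that satisfy the stated preferences. Below is a minimal sketch of that idea in Python. The guest names and the mother-next-to-child preference are invented for illustration; only the "bride sits on the left" rule comes from Nvidia's example.

```python
from itertools import permutations

# Toy guest list (invented for illustration).
GUESTS = ["bride", "groom", "bride_mom", "groom_mom", "uncle"]

def valid(arrangement):
    """Check the constraints against a left-to-right seat order."""
    seat = {guest: i for i, guest in enumerate(arrangement)}
    # From Nvidia's example: the bride sits on the left (seat 0).
    if seat["bride"] != 0:
        return False
    # Invented preference: each mother sits next to her own child.
    if abs(seat["bride_mom"] - seat["bride"]) != 1:
        return False
    if abs(seat["groom_mom"] - seat["groom"]) != 1:
        return False
    return True

def solve():
    """Enumerate all seatings and return those satisfying every constraint."""
    return [list(p) for p in permutations(GUESTS) if valid(p)]

if __name__ == "__main__":
    for plan in solve():
        print(plan)
```

A reasoning model tackles such a request similarly in spirit: it decomposes the query into sub-constraints and weighs multiple candidate arrangements before answering, rather than emitting a single unexamined response.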

“These models are beginning to mimic aspects of human behavior,” said Arun Chandrasekaran, an AI analyst at Gartner.

Beyond OpenAI and DeepSeek, other major players are advancing reasoning models. Google enhanced its Gemini models last year, and Anthropic introduced Claude 3.7 Sonnet in February, incorporating hybrid reasoning.

Some experts believe reasoning models could lead to AI “agents” capable of taking actions rather than just answering questions. Companies like Google, Amazon, and Qualcomm are developing AI assistants that can, for example, book a vacation based on a user’s preferences instead of merely suggesting flight and hotel options.

“Agentic AI thrives on multitasking,” said Gene Munster, managing partner at Deepwater Asset Management. “Its ability to reason within each task will make these AI agents far more capable.”
