One immense trade show, three AI trends, two bits of advice
Having had a bit of time to process what I saw, heard and discussed at CES, I wanted to share three AI trends that I think will have a big impact on businesses and consumer behaviours – and two pieces of advice for CTOs planning their AI strategies.
Advanced reasoning capabilities – the new OS
Let’s start with new AI models from, among others, OpenAI, Google and China’s DeepSeek.
Unlike earlier LLMs – which act as text pattern finders, token generators or sophisticated indexing machines – the new generation of AI models are systems that genuinely ‘think’. They possess internal knowledge about entities, relationships and the real world, which gives them a human-like ability to reason and extract insights rather than simply regurgitate pre-learned text.
That means new AI models (like o1 from OpenAI) can analyze complex scenarios, apply logical reasoning and, most significantly, provide nuanced responses that resemble the way humans think through problems.
The result is that LLMs are no longer merely, well... language models. They are quickly transforming into end-to-end operating systems: capable of expressing knowledge about the world in meaningful, actionable ways; capable of organizing that knowledge into action plans; capable of integrating and invoking both digital and physical tools; and, essentially, capable of acting as intelligent digital and physical agents.
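To make the ‘operating system’ idea concrete, here is a minimal, illustrative sketch of an agent loop: a model proposes the next action, the system invokes a tool, and the result is fed back in. The call_model() stub, the tool names and the canned plan are hypothetical placeholders of my own, not any vendor’s actual API – the point is the structure, with the model orchestrating tools rather than just returning text.

```python
# A minimal sketch of an LLM-as-agent loop. call_model() is a hypothetical
# stand-in for whichever reasoning model your stack exposes; the tools and the
# canned plan are illustrative only.

import json

def check_inventory(sku: str) -> dict:
    """Illustrative 'digital tool' the agent can invoke."""
    return {"sku": sku, "in_stock": 42}

def schedule_delivery(sku: str, quantity: int) -> dict:
    """Another illustrative tool; in practice this would call a real system."""
    return {"sku": sku, "quantity": quantity, "status": "scheduled"}

TOOLS = {"check_inventory": check_inventory, "schedule_delivery": schedule_delivery}

def call_model(goal: str, observations: list) -> dict:
    """Hypothetical model call: returns the next action as structured output."""
    if not observations:
        return {"tool": "check_inventory", "args": {"sku": "A-100"}}
    if observations[-1].get("in_stock", 0) > 0:
        return {"tool": "schedule_delivery", "args": {"sku": "A-100", "quantity": 10}}
    return {"tool": None, "answer": "Nothing further to do."}

def run_agent(goal: str, max_steps: int = 5) -> list:
    """Loop: ask the model for an action, execute it, record the observation."""
    observations = []
    for _ in range(max_steps):
        action = call_model(goal, observations)
        if action.get("tool") is None:
            break
        result = TOOLS[action["tool"]](**action["args"])
        observations.append(result)
    return observations

if __name__ == "__main__":
    print(json.dumps(run_agent("Restock item A-100"), indent=2))
```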
For businesses, this means AI can take on complex challenges like planning, decision making and solving intricate problems. And the impact will be immense, propelling us into the next era of our digital economy where AI becomes deeply embedded and integrated into the systems that make our lives and society function.
Agentic AI
The advanced reasoning capabilities of new AI systems mean AI agents are going to be able to do more on behalf of humans.
Last week, OpenAI released Operator, an agent that can perform tasks on the web such as buying groceries or booking flights. This matters because it transforms AI from a passive ‘service’ that responds to user prompts into an active agent capable of executing real-world tasks.
Consumers can expect more devices that learn our preferences and habits and therefore anticipate our needs. Get ready to welcome more virtual assistants into our homes in the near future.
In the business environment, agentic AI will be able to work invisibly and seamlessly to automate key steps of existing processes with minimal or no input from the user. And from CES, it’s clear the race is on to monetize agentic AI at an enterprise level so that these services can be packaged, sold and deployed widely.
Physical AI
“The next frontier of AI is physical AI,” declared Nvidia CEO Jensen Huang at CES – no doubt sending shivers down the spines of many of those in the audience.
Physical AI is, in a nutshell, the integration of advanced AI into robotics to create machines which can perform complex, real-world tasks with human-like precision and adaptability. It’s an area of special interest to me, and seeing how much progress has been made in teaching AI to understand and operate in our physical world is incredibly exciting – but also daunting.
We had our first glimpse of Nvidia’s Cosmos World Model, a cloud-based platform that enables developers to create and train AI models specifically for robotics and autonomous vehicles. It does this by generating hyper-realistic synthetic training environments and data, reducing the need for expensive and extensive real-world data collection for a machine to learn how to maneuver like a human.
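For a rough feel of why synthetic data helps, here is a generic sketch – emphatically not Cosmos or any Nvidia API – of the idea behind it: randomize scene parameters so a robot policy is exposed to far more variation than real-world data collection could affordably provide. Every parameter name below is an illustrative assumption.

```python
# Generic illustration of synthetic training-data generation (not Nvidia's API):
# randomize scene parameters so thousands of varied examples are cheap to produce.

import random

def sample_synthetic_scene(seed: int) -> dict:
    """Generate one randomized training-scene description (illustrative only)."""
    rng = random.Random(seed)
    return {
        "lighting_lux": rng.uniform(50, 2000),       # dim warehouse to bright daylight
        "floor_friction": rng.uniform(0.3, 0.9),     # slippery to grippy surfaces
        "obstacle_count": rng.randint(0, 12),         # clutter the robot must avoid
        "camera_noise_std": rng.uniform(0.0, 0.05),   # simulated sensor imperfection
    }

# A real pipeline would render these scenes and train a policy on them;
# here we only show how easily large, varied datasets can be generated.
scenes = [sample_synthetic_scene(seed) for seed in range(10_000)]
print(scenes[0])
```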
While physical AI is not an area that will impact most businesses in the immediate future, I believe it’s worth noting the ambitions of Nvidia and other OEMs such as Tesla in integrating hardware and AI, because we’re now moving into an era of autonomous systems where human intervention becomes less and less necessary.
The economic incentives for tech giants to create trainable, humanoid robots are immense: an unprecedented total addressable labor market and major cost efficiencies. Governments and regulatory bodies should address these potential outcomes and make sure AI adoption benefits society as a whole.
And those two bits of advice?
With so many possibilities, it can be overwhelming trying to narrow down the best strategy to capitalize on all the potential of AI. Here are my suggestions.
1: Be laser focused on AI use cases
Success lies in identifying practical, high-impact use cases where AI works to reduce friction and enhance productivity, seamlessly integrating into workflows. A good example is the process of generating clinical trial reports, which often involves thousands of pages and extensive data aggregation. With AI, this workflow can be streamlined: pressing a button initiates the process, and the AI system gathers all relevant data, processes it and produces a high-quality draft. This kind of automation transforms labor-intensive tasks into efficient, accurate outputs, so employees can spend more time on areas that require more human ingenuity. The same principle applies to industries like retail, logistics or manufacturing, where AI can predict demand, optimize inventory and automate operational tasks. The challenge is to identify where AI can drive meaningful value rather than forcing its integration into every corner of the business.
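As a rough illustration of that ‘press a button’ workflow, here is a minimal sketch. The gather_trial_data() and summarize_with_llm() helpers are hypothetical placeholders for your own data sources and whichever reasoning model you use – this is not a real clinical pipeline, just the shape of one.

```python
# Illustrative sketch of an AI-assisted report-drafting workflow. Data sources,
# the summarize_with_llm() helper and the output format are all hypothetical.

from pathlib import Path

def gather_trial_data(trial_id: str) -> list[str]:
    """Placeholder: pull the relevant documents for a trial from internal systems."""
    return [f"Site report for {trial_id}", f"Adverse events summary for {trial_id}"]

def summarize_with_llm(section_name: str, sources: list[str]) -> str:
    """Placeholder for a call to whichever reasoning model your stack uses."""
    joined = "; ".join(sources)
    return f"## {section_name}\n\nDraft synthesized from: {joined}\n\n"

def generate_report_draft(trial_id: str, out_path: Path) -> Path:
    """The 'button press': aggregate sources, draft each section, write the file."""
    sources = gather_trial_data(trial_id)
    sections = ["Study Overview", "Safety Findings", "Efficacy Results"]
    draft = "".join(summarize_with_llm(s, sources) for s in sections)
    out_path.write_text(draft)
    return out_path

if __name__ == "__main__":
    print(generate_report_draft("TRIAL-001", Path("draft_report.md")))
```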
2: Augment user experience where it makes sense
For AI to improve the experience of your consumers, think pragmatically and focus on context and purpose. For instance, voice interfaces provide a safe and intuitive way to interact with your in-car assistant while you’re driving and focused on the road. Here, AI can improve navigation, fetch information or even perform tasks without distracting the driver. On the other hand, complex tasks requiring detailed input, like generating reports, are better suited to traditional interfaces like buttons or screens. In these cases, AI operates invisibly in the background, reducing effort and improving efficiency without overwhelming the user. That said, not every experience needs AI. Many interactions, such as scrolling through content or tapping a button to initiate a task, are natural and effective as they are. Use your common sense: don’t fix something that isn’t broken. Apply AI thoughtfully and leave existing intuitive processes untouched. In summary, aligning AI integration with clear business needs and end-user experience is critical.
Focusing on practical AI use cases is the ultimate way to achieve scalability, as each successful application creates a compound effect across operations and workflows. Your ability to address specific use cases in 2025 will help you build systems that can continuously scale and adapt to future advancements and needs.