A great cognitive transition is underway. The station is crowded: some people are boarding while others hesitate, unsure whether the destination justifies the departure.
Harvard professor and future-of-work expert Christopher Stanton recently observed that AI uptake has been extraordinarily rapid, calling it a “very fast diffusion technology.” That speed of adoption and impact is a key part of what distinguishes the AI revolution from earlier technology-driven transformations like the PC and the internet. Demis Hassabis, CEO of Google DeepMind, went further, predicting that AI will be “10 times bigger than the Industrial Revolution, and maybe 10 times faster.”
Intelligence, or at least thinking, is increasingly shared between people and machines. Some are beginning to use AI regularly in their workflows. Others are going further, weaving it into their cognitive routines and creative identities. These are the “eager adopters”: consultants fluent in prompt design, product managers retooling systems and solo founders building businesses in which AI handles everything from coding to product design to marketing.
For them, the terrain feels new but navigable, even exciting. But for many others, this moment feels strange and a little unsettling. The risk they face is not just being left behind; it is not knowing how, when or whether to invest in AI, in a future that seems deeply uncertain and in which it is hard to imagine one’s place. This double bind of AI readiness shapes how people interpret the pace, the promise and the pressure of the moment.
Is it real?
New roles and teams are forming across industries, and AI tools are restructuring workflows faster than norms and strategies can keep up. Yet the meaning of it all remains hazy, and the endgame, if there is one, remains uncertain. Still, the pace and scope of change feel like a signal. Everyone is told to adapt, but few know exactly what that means or what the change will look like. Some AI industry leaders argue that a profound shift is underway and that superintelligent machines will appear within a few years.
But others caution that, as has happened before, this AI revolution could go bust and another “AI winter” could set in. There have been two notable winters. The first came in the 1970s, brought on by the limits of computation. The second began in the late 1980s, after a wave of unmet expectations and prominent failures of “expert systems.” Both winters followed the same cycle: inflated expectations, then deep disappointment, then sharp cuts in AI funding and interest.
If today’s excitement around AI agents echoes the failed promises of expert systems, it could usher in another winter. There are, however, significant differences between then and now: compared with the expert systems of the 1980s, today’s AI enjoys far greater institutional buy-in, consumer traction and cloud computing infrastructure. There is no guarantee a new winter will not arrive, but if the industry falters this time, it will not be for lack of money or momentum. It will be because trust and reliability broke first.

The cognitive transition has begun
If the “great cognitive transition” is real, we are still in the early part of the journey. Some people are already on the train; others are unsure whether they should board, or when. Amid the uncertainty, the mood at the station grows restless, as travelers sense a change to an itinerary no one announced.
Most people are still working, but many wonder how exposed they are. The value of their work is shifting. Beneath the surface of performance reviews and company town halls, a quiet anxiety has taken hold.
Already, AI can reportedly accelerate software development by 10 to 100 times, generate much of the code for some clients and dramatically compress project timelines. Managers can now use AI to draft employee performance reviews. Even classicists and archaeologists have found value in AI, using the technology to help interpret ancient Latin inscriptions.
The eager adopters may have a sense of where they are headed and are finding traction. But for those who feel pressured, resistant or simply unexposed to AI, this moment sits somewhere between anticipation and grief. These groups are beginning to realize that they may not be able to stay in their comfort zones much longer.
For many, the question is not just about learning new tools or a new culture, but whether that culture has space for them at all. Waiting too long can feel like missing the train, with lasting consequences for a career. Even people senior in their careers who have begun using AI wonder whether their positions are under threat.
The narrative of opportunity, and the urgency attached to it, hides a more uncomfortable truth. For many, this is not a transition; it is a managed displacement. Some workers have not opted out of AI; they are discovering that the future being built does not include them. Believing in the tools is different from belonging to the system being rebuilt around them. And without a clear path to meaningful participation, “adapt or get left behind” stops sounding like advice and starts sounding like a verdict.
These tensions are precisely why this moment matters. Even as new kinds of work emerge, familiar work is beginning to recede. The signals come from the top: Microsoft CEO Satya Nadella, acknowledging the company’s job cuts in a July 2025 memo, said the shift to the AI era “might feel messy at times, but transformation always is.” And there is another layer to this unsettling reality: the technology driving this urgent transformation remains fundamentally unreliable.
Powerful but glitchy: Why AI still can’t be fully trusted
Yet for all the urgency and momentum, this increasingly pervasive technology remains glitchy, limited, oddly brittle and far from trustworthy. That creates a second layer of doubt: not just how to adapt, but whether the tools we are adapting to can be relied on. Perhaps these shortcomings should not surprise us, given that only a few years ago the output of large language models (LLMs) was largely incoherent. Now it can feel like having a PhD in your pocket: the on-demand ambient intelligence once confined to science fiction, mostly realized.
But beneath the polish, chatbots built on top of these LLMs are error-prone, forgetful and often overconfident. They still hallucinate, and we cannot fully trust their output. AI can answer with confidence, but it is not accountable; that is probably a good thing, since our knowledge and expertise are still needed. Chatbots also lack persistent memory, making it difficult to carry conversations from one session to the next.
They can lose the thread even within a session. Recently, a leading chatbot answered one of my questions with a complete non sequitur. When I pointed this out, it responded off topic again, as if our conversational thread had simply vanished.
They also do not learn, at least not in the human sense. Whether the model comes from Google, Anthropic, OpenAI or DeepSeek, its weights are frozen once it is released; its “intelligence” is fixed. Instead, continuity in a conversation with a chatbot is limited to the scope of its context window. That window is admittedly very large, and within it a chatbot can absorb information and make connections that function as in-the-moment learning, making it seem increasingly savant-like.
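To make that statelessness concrete, here is a minimal sketch of how a chat application simulates memory around a frozen model. The `generate` and `count_tokens` functions and the token budget are illustrative placeholders I am assuming for the example, not any vendor’s actual API:

```python
# Minimal sketch, not a real vendor API: how chat "memory" works when
# model weights are frozen and only the context window persists.

CONTEXT_WINDOW = 8_000  # assumed token budget, for illustration only

def count_tokens(message: dict) -> int:
    # Crude stand-in; real systems use the model's own tokenizer.
    return len(message["content"].split())

def generate(messages: list[dict]) -> str:
    # Placeholder for a call to a frozen, stateless model.
    return f"(reply based on the {len(messages)} messages currently visible)"

history: list[dict] = []

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    # The model remembers nothing between calls, so the transcript is
    # resent every turn, trimmed from the oldest end to fit the window.
    window, used = [], 0
    for msg in reversed(history):
        used += count_tokens(msg)
        if used > CONTEXT_WINDOW:
            break  # older turns fall out of "memory" entirely
        window.insert(0, msg)
    reply = generate(window)
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Summarize our plan."))
```

The model only ever sees what fits in the window on a given turn; anything trimmed out is gone, which is why a long conversation can abruptly “forget” its own beginning.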
This mix of gifts and flaws makes these systems intriguing and fascinating. But can we trust them? Research such as the 2025 Edelman Trust Barometer shows that trust in AI is split: in China, 72% of people express trust in AI, while in the U.S. that number drops to 32%. The gap underscores how public belief in AI is shaped by culture and governance as much as by technical capability. We would probably trust AI more if it hallucinated less, if it remembered, if it learned and if we understood how it works. But trust in the AI industry itself also remains elusive: there is no meaningful regulation of AI technology, and there is growing concern that ordinary people have little say in how it is developed or deployed.
Without trust, will this AI revolution flounder and bring on another winter? If so, what happens to those who invested their time, energy and careers in it? Will those who waited to embrace AI turn out to have been better off? Will the cognitive transition prove a false start?
Some prominent AI researchers warn against optimistic predictions based on AI in its current form, built primarily on the deep learning neural networks underlying LLMs; they argue that further technical breakthroughs are needed to push this approach much further. Others reject the optimistic predictions outright. Novelist Ewan Morrison regards the prospect of superintelligence as a fiction dangled to attract investor money. “It’s a fantasy,” he said, “a product of venture capital gone crazy.”
Perhaps Morrison’s skepticism is warranted. But even with their flaws, today’s LLMs already demonstrate enormous commercial utility. If the exponential progress of the past few years stopped tomorrow, the ripples from what has already been built would be felt for years to come. Yet beneath all this momentum lies something more fragile: the reliability of the tools themselves.
The gamble and the dream
For now, the exponential advances continue as companies pilot and deploy AI ever more widely. Misplaced confidence or not, the industry is determined to press forward. It could all still fall apart, especially if AI agents fail to deliver and another winter arrives. Even so, the prevailing assumption is that today’s shortcomings will be solved through better engineering. And they may be. In fact, they probably will be, at least to some extent.
The bet is that the technology will work, that it will scale, and that the productivity it enables will outweigh the disruption it produces. Success on that bet assumes that whatever we lose in human nuance, value and meaning will be compensated for by gains in efficiency and reach. That is the gamble we are making. And there is the dream: that AI becomes a broadly shared source of prosperity, lifting people up rather than shutting them out, expanding intelligence and access to opportunity rather than concentrating them.
The instability lies in the gap between the two. We are moving forward as if the gamble guaranteed the dream: a faith that acceleration will land us somewhere better and will not erode the human elements that make the destination worth reaching. But history reminds us that even winning bets can leave many people behind. The “messiness” of the current transformation is not merely an unavoidable side effect; it is a direct result of a pace that can overwhelm the capacity of people and institutions to adapt deliberately and well. For now, the cognitive transition runs as much on faith as on evidence.
The challenge is not only to build better tools, but to ask harder questions about where they are taking us. We are not simply migrating to an unknown destination; we are moving so fast that the map keeps changing as we travel, crossing a landscape that is still being drawn. Every transition carries hope. But unexamined hope can become risk. It is time to ask not only where we are headed, but who gets to belong when we arrive.
Gary Grossman is Edelman’s EVP of Technology Practice and the global lead of the Edelman AI Center of Excellence.
