Understanding where you are and how to move forwards
AI adoption isn’t a simple switch or a purely technical shift: it’s a messy middle ground between old ways of working and new possibilities. This space can be uncomfortable, filled with uncertainty, learning and resistance, but it’s also where real progress happens.
Success here isn’t about technology alone; it’s about people navigating change, reshaping roles and building trust. In the following article, Kirsti Wenn (TAU consultant and expert Transition Coach) and Rob Webster (CEO of TAU Marketing Solutions) explore how to lead teams through that messy middle, tackling the psychological barriers, the emotional journey, and the practical and technical steps needed to move from uncertainty to action.
The Psychological Journey of AI Adoption
Two theoretical frameworks can help us understand the transition to AI.
Bridges’ Transition Model focuses on the psychological process people go through during external change and is a useful guide for leaders adapting their change approach:
- Endings: Acknowledging the skills and practices that may become less relevant
- Neutral Zone (the “messy middle”): The uncomfortable period of learning and adaptation – although, with good leadership, we would argue this can be the most fun, creative and inventive space
- New Beginnings: Embracing new workflows and capabilities
With AI, the neutral zone – ‘the messy middle’ – can be particularly challenging, but it also offers potential for experimentation, creativity and innovation if teams are fully engaged. Effective leadership during this phase is crucial for AI adoption. Arguably, with the AI revolution and many other business changes, people will always be in transition, so learning to thrive in a continuous state of change is essential.
The Kübler-Ross Change Curve provides valuable insight into the emotional journey individuals experience during adoption:
- Denial: “This won’t affect me or my role”
- Anger: Frustration at disruption to established ways of working
- Bargaining: Attempts to maintain the status quo with minimal adaptation
- Depression: Confronting potential obsolescence of skills or position
- Acceptance: Experimenting with AI and beginning to integrate it into new ways of working, where it is an enhancement rather than a threat
Organisations that acknowledge and address these emotional stages will achieve more successful transitions, because they can understand the barriers and design relevant nudges to address concerns.
Potential Psychological Barriers to AI Adoption
- Identity Threat: Professionals may feel their expertise and identity are undermined by AI capabilities.
- Loss Aversion and Status Quo Bias: Preference for familiar routines over new technologies and resistance to giving up control.
- Technostress: Anxiety caused by the ongoing need to adapt to rapidly changing technologies.
- Trust Deficit: Distrust in AI systems, especially after witnessing errors or biases in outputs and being caught out by them.
- Ethical Objections: Resistance based on concerns about environmental impact, data privacy, creative exploitation, and human displacement.
The Concept of Liminal Space in AI Transition
AI adoption creates a significant “liminal space,” a threshold or transitional zone between states of being. As AI begins performing tasks previously considered uniquely human, professionals find themselves between their established identity and an emerging one. This is similar to the Neutral Zone in Bridges’ Transition Model, but it also applies at a macro level as we figure out the relationship between humans and technology.
For example, a financial analyst whose data processing is now fully automated exists in a space between their traditional role and whatever their future role might become. This “in-between” state often generates anxiety, but also creates opportunity for redefinition and growth.
Navigating the Technological Transition
Equally, in terms of technology transformation, we need to consider how to navigate this transition stage, where processes move from human-driven to automated, and identify the opportunities it presents.
From a technology perspective, we face a similar level of uncertainty about what the next era of innovation will bring. Technology transformation, like psychological transformation, requires an understanding and recognition of where we are now in order to move on successfully. We need to ensure we don’t abdicate responsibility, and therefore accountability, to AI, but rather enhance it with human experience, empathy and understanding.
By conducting an AI Health Check assessing areas such as data governance and data quality, teams can identify potential complications and risks, as well as investigate and address concerns around sustainability and ethics. Taking stock of where the organisation or team is today, and of its readiness for transformation, makes it easier to develop a robust plan: one with a clear appreciation of what can be done now, where work is needed, and how to prepare for the direction technology is heading in.
How Use Cases can Help
This is where a use-case-driven approach can be particularly useful. By breaking departments down into specific use cases and the tasks involved, we can ground AI transformation in a specific context, helping to identify the challenges and opportunities around what the team wishes to achieve. We can bring greater clarity by identifying the trends impacting a specific function or task, to understand how the needs and behaviours of all parties involved may change.
Take Search Marketing, for example. By homing in on this use case, we can consider the advances in AI that will impact the platforms, the users and the Performance Marketers themselves, all in relation to where we are today. This focuses the conversation away from the speculative and towards the practicalities of where we are and what we need to prepare for, so that we can plan effectively, moving from amorphous concepts and uncertainty to tangible next steps. We can then design and build in flexibility for different scenarios and options.
Conclusion
The “messy middle” is where uncertainty can stall progress, or spark innovation. The key is recognising that this phase isn’t a hurdle to rush through, but rather a space to engage with and learn from. Yes, it’s uncertain. Yes, it feels uncomfortable. But that’s where the real opportunities are, and where genuine creativity and innovation can be explored.
People need time to adjust, space to learn, and a reason to trust the process. The real risk isn’t the technology; it’s failing to adapt. So:
- Know where you stand. Understand your data, your workflows, your people.
- Start with what’s real. Use cases, not theory. Solve actual problems.
- Support the human journey. Change brings uncertainty: acknowledge it and lead through it.
- Stay flexible. The tech moves fast. You need an approach that can keep up.
By addressing the human side of change, confronting fears and staying grounded in practical steps, teams can turn uncertainty into momentum. Mastering this middle means not just surviving AI adoption—but shaping it.
This isn’t about having all the answers today. It’s about taking the next step, learning, adjusting, and moving again.