The State of AI - Navigating a Rapidly Shifting Landscape

Author

Ross Mason

The Emerging Mind

Setting the Scene

Artificial intelligence has always evolved in waves. Neural networks first gained traction decades ago, but the hardware simply wasn’t ready for them. As their limitations became clear, the field shifted toward other approaches—support vector machines, kernel methods, Bayesian techniques, and a range of statistical learning algorithms that dominated the late 90s and early 2000s. These methods were powerful, elegant, and mathematically grounded, and for a long time they represented the cutting edge.

But as computing power grew and data became abundant, neural networks re‑emerged—this time under the banner of deep learning. The breakthrough moment came with the introduction of the Transformer architecture, which unlocked the ability to train neural‑network‑based models on vast corpora of text. From that foundation, Large Language Models (LLMs) exploded into mainstream awareness, demonstrating capabilities that felt qualitatively different from earlier AI systems.

I’ll admit that I was initially sceptical. LLMs looked like sophisticated probabilistic engines—remarkable at predicting the next word, but surely not “intelligent” in any meaningful sense. Yet the more I’ve worked with them, the more that view has shifted. These models exhibit behaviours that resemble reasoning, abstraction, and even creativity. They surprise me in ways I didn’t expect. And the uncomfortable realisation I’ve come to is this: perhaps the gap between “predicting the next word” and “thinking” is smaller than I once believed. Or, more provocatively, perhaps humans are operating in a way closer to “predicting the next few words” than we like to admit.

Looking Back: The Last 12 Months

The past year has been one of the most intense periods of innovation I’ve seen in my career.

Key Breakthroughs

  • Model efficiency leaps: Smaller models now achieve performance that once required massive architectures, thanks to quantisation, distillation, and smarter training techniques.
  • Multimodality becoming mainstream: Models that understand and generate text, images, audio, and video are no longer research curiosities—they’re product features.
  • Agentic workflows: AI systems that can plan, take actions, and use tools have moved from demos to early production use.
  • On‑device AI: Capable models running locally on laptops and phones have shifted the privacy and latency conversation dramatically.
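To make the efficiency point concrete, here is a minimal sketch of the idea behind post-training quantisation, one of the techniques mentioned above: float weights are mapped onto small integers plus a scale factor, shrinking memory use (here, 8 bits per weight instead of 32) at the cost of a small reconstruction error. The function names are my own, and real systems use considerably more sophisticated schemes (per-channel scales, calibration data, mixed precision).

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric int8 quantisation: map floats onto [-127, 127]."""
    scale = np.max(np.abs(weights)) / 127.0
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from int8 values."""
    return q.astype(np.float32) * scale

w = np.array([0.5, -1.2, 0.03, 0.9], dtype=np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Each weight now takes 1 byte instead of 4; the rounding error
# per weight is bounded by half the scale factor.
```

The same trade-off (fewer bits, slightly noisier weights) is what lets models that once needed datacentre GPUs run on a laptop.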

Adoption Trends

  • Businesses have moved from experimentation to integration.
  • AI copilots and assistants have become standard in productivity tools.
  • Software teams increasingly use AI for code generation, testing, documentation, and architecture exploration.
  • Non‑technical teams are adopting AI for research, content creation, and workflow automation.

The shift is no longer “should we use AI?” but “how deeply should we embed it?”

Looking Forward: The Next 12 Months

Predicting where AI goes next is risky, but several trends feel likely.

Breakthroughs on the Horizon

  • More capable small models: Expect models that run locally to rival today’s cloud giants.
  • Better reasoning: Research is converging on techniques that improve planning, consistency, and multi‑step problem solving.
  • AI‑native applications: Not just adding AI to existing products, but building products that assume AI as a core capability.
  • Improved safety and controllability: More transparent models, better guardrails, and more predictable behaviour.

Adoption Trends and Barriers

  • Rapid enterprise adoption will continue, but with more scrutiny.
  • Data privacy concerns will dominate conversations. Many organisations are still uncomfortable sending sensitive data to external AI services, even with strong assurances.
  • Hybrid architectures—combining local inference, private cloud models, and selective use of public APIs—will become the norm.
  • Regulation will tighten, especially around data governance and model transparency.

For companies like ours, this means helping clients navigate a fragmented landscape: choosing the right model, the right deployment strategy, and the right balance between capability and control.
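One way to picture a hybrid architecture is as a routing policy: requests carrying sensitive data stay on local or private infrastructure, while everything else may use a public API. The sketch below is purely illustrative; the target names and the sensitivity flag are placeholders, and a real policy would involve classification, compliance rules, and per-client configuration.

```python
from dataclasses import dataclass

@dataclass
class InferenceRequest:
    prompt: str
    contains_sensitive_data: bool  # in practice, set by a classifier or policy

def route(request: InferenceRequest) -> str:
    """Choose an inference target based on data sensitivity.
    Target names are hypothetical, not real endpoints."""
    if request.contains_sensitive_data:
        return "local-model"   # on-prem or on-device inference
    return "public-api"        # external hosted model

# Sensitive payloads never leave the trust boundary:
target = route(InferenceRequest("summarise our payroll data", True))
```

The point is less the code than the shape of the decision: capability on one side, control on the other, and an explicit policy deciding where each request runs.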

Data Privacy

This is the concern I hear more than any other. Businesses are excited about what AI can do, but they’re understandably cautious about sending sensitive information to a model they don’t control. What’s interesting is that many of these same organisations already trust cloud services like SQL Azure with their most critical data—customer records, financial information, operational systems—without hesitation.

Part of the challenge is that AI feels different. It’s new, it’s powerful, and it’s not always obvious what happens to the data once it enters the model. But in reality, when using Microsoft’s enterprise AI services, the guarantees around data handling are very similar to the guarantees behind SQL Azure or other core Azure services.

Microsoft provides clear commitments for its commercial AI offerings:

  • Your data remains your data. It isn’t used to train Microsoft’s foundation models.
  • Your prompts and outputs aren’t stored for model improvement. They’re processed securely and then discarded.
  • The service runs within the same trusted Azure infrastructure that organisations already rely on for databases, storage, and application hosting.
  • Enterprise-grade isolation and compliance apply in the same way they do for other Azure services.

In other words, the data you send to an AI model hosted by Microsoft is treated with the same level of protection as the data you send to SQL Azure. The underlying technology is different, but the operational guarantees—privacy, isolation, and control—are very much aligned.

The perception gap is still real, though. AI is new territory, and trust takes time. But as more organisations understand that these systems operate within the same secure boundaries as the cloud services they already use daily, I expect this barrier to adoption to shrink significantly.

Concerns Going Forward

It would be irresponsible to ignore the challenges.

Job Displacement

AI will automate tasks that were once the domain of junior roles. Many organisations will choose AI over hiring trainees, which raises uncomfortable questions about how future talent will gain experience. The long‑term implications for the workforce—and for the pipeline of skilled professionals—are significant.

Concentration of Power

The most capable models are controlled by a small number of companies with immense resources. This centralisation risks creating dependencies that are difficult to unwind. Open‑source models help counterbalance this, but the gap between open and proprietary systems may widen before it narrows.

Data Privacy (Again, in a Broader Sense)

Even with strong guarantees from providers like Microsoft, the broader ecosystem remains uneven. Not all vendors offer the same level of transparency or protection. Organisations will need to be selective and informed about the AI services they adopt.

Closing Thoughts

The pace of change in AI has been so extraordinary that I often find myself oscillating between two very different emotions. On some days, I feel a genuine fear for my own role as a developer and technology leader — not because I doubt my ability to adapt, but because the ground is shifting faster than at any point in my career. And on other days, I’m filled with excitement at what is starting to become possible. The tools we now have at our disposal are unlocking ideas that would have been unthinkable even a year ago.

This post is just the beginning. I plan to write more on this subject over the coming months. I want to explore in detail how the role of developers is changing, what new skills and mindsets will matter, and how we can continue to deliver value in a world where AI is increasingly part of the team. I also want to outline my current thinking on how businesses can adopt AI effectively — what they should be considering when commissioning new systems, where the real opportunities lie, and what pitfalls they need to avoid.