
Dev Trends: AI, MCP and Software Engineering

The development landscape is experiencing transformative changes. From groundbreaking protocols standardising AI integrations to revolutionary shifts in how developers write code, the tech ecosystem is evolving at an unprecedented pace. Our dev team has been closely monitoring these developments, and here’s what’s capturing our attention this month. 


The Model Context Protocol: A USB-C Moment for AI 


The Model Context Protocol (MCP) has emerged as one of the most significant developments in AI infrastructure since its open-source release by Anthropic in November 2024. Think of MCP as creating a “USB-C standard” for AI—a universal connector that allows any AI model to seamlessly interface with any data source or tool, regardless of where they’re hosted. 

The protocol was designed to address the “M×N problem,” where connecting different AI models with various tools required custom code for each pairing, making it difficult to scale AI systems. Before MCP, every new data source required its own custom implementation, creating an unsustainable development burden. 
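The arithmetic behind the M×N problem is simple: M models and N tools need M×N bespoke connectors, but only M+N implementations of a shared protocol. A toy illustration, with hypothetical counts:

```python
# Illustrative arithmetic for the "MxN problem"; the counts below are
# hypothetical, not drawn from any survey.

def point_to_point(models: int, tools: int) -> int:
    """Without a shared protocol, every model-tool pairing needs its
    own custom connector."""
    return models * tools

def shared_protocol(models: int, tools: int) -> int:
    """With a shared protocol such as MCP, each side implements the
    protocol once: M clients plus N servers."""
    return models + tools

custom = point_to_point(10, 50)   # 500 bespoke integrations
shared = shared_protocol(10, 50)  # 60 protocol implementations
```

The gap widens with every model or tool added, which is why point-to-point integration stops scaling long before a protocol does.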

The adoption has been remarkable. In March 2025, OpenAI officially integrated MCP across its products, including the ChatGPT desktop app and OpenAI’s Agents SDK. Google DeepMind followed suit, confirming MCP support in upcoming Gemini models. By early 2025, developers had already created over 1,000 MCP servers for various data sources and services, demonstrating the explosive growth of this ecosystem. 


What Makes MCP Revolutionary 

MCP enables AI agents to pull real-time data from distributed environments seamlessly. This allows agents to access databases, APIs, file systems, and business tools through a common language, transforming them from static models confined to their training data into dynamic systems that can interact with live information. 
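That "common language" is JSON-RPC 2.0, on which MCP is built. The sketch below shows the general shape of a `tools/call` request a client might send to an MCP server; the tool name and arguments are hypothetical.

```python
import json

# Minimal sketch of an MCP "tools/call" request. MCP messages follow
# JSON-RPC 2.0; the tool name and arguments here are hypothetical.
def make_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    request = {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    }
    return json.dumps(request)

msg = make_tool_call(1, "query_database", {"sql": "SELECT 1"})
```

Because every server speaks the same message format, an agent that can emit this request can call any compliant tool without bespoke glue code.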

Our team has been particularly excited about the recent specification updates. The June 2025 release focused on structured tool outputs, OAuth-based authorisation, and improved security best practices. The protocol now classifies MCP servers as OAuth Resource Servers and mandates Resource Indicators to prevent token misuse—critical enhancements as organisations deploy these systems at enterprise scale. 
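The Resource Indicator requirement (RFC 8707) means a client names the specific server a token is intended for when requesting it, so a token minted for one MCP server cannot be replayed against another. A minimal sketch of the token-request body, with hypothetical values:

```python
from urllib.parse import urlencode

# Sketch of an OAuth token request carrying an RFC 8707 "resource"
# indicator, binding the issued token to one MCP server. The code
# value and server URL below are hypothetical.
def token_request_body(code: str, mcp_server: str) -> str:
    params = {
        "grant_type": "authorization_code",
        "code": code,
        "resource": mcp_server,  # the MCP server this token is for
    }
    return urlencode(params)

body = token_request_body("abc123", "https://mcp.example.com")
```

A compliant server can then reject any token whose audience does not match its own identity, which is exactly the token-misuse scenario the updated spec targets.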

MCP in Practice 

Major companies are already leveraging MCP for production use cases. Block has developed more than 60 MCP servers, while development tools like Cursor and Windsurf have integrated MCP support, allowing their AI assistants to interface with filesystems and version control securely. Anysphere, the company behind Cursor, raised $900 million at a $9 billion valuation in 2025, signalling massive investor confidence in MCP-enabled development tools. 

However, adoption comes with challenges. Security researchers discovered that hundreds of MCP servers on the web are misconfigured, exposing users to potential cyberattacks. Additionally, Asana warned users that a flaw in its MCP implementation potentially led to data exposure between instances. These incidents underscore the importance of proper security implementation as the protocol scales. 


AI Agents: From Chatbots to Autonomous Systems 

The evolution from simple chatbots to sophisticated AI agents represents one of 2025’s most significant shifts. AI agents can pull real-time information and react to changes, unlike static LLMs that can’t see past their training data cutoff. 


Agent use is most commonly reported in IT and knowledge management, with use cases like service-desk management and deep research. By industry, technology, media, telecommunications, and healthcare sectors are leading adoption. Our team has observed that successful agent deployments share common characteristics: they’re anchored to reference applications and use spec-driven development to prevent agents from going off track. 

The Rise of Agentic Workflows 

Full-blown AI workflows have risen to prominence, serving entire teams through functional or cross-functional applications. These workflows extend beyond individual tasks to coordinate complex operations across multiple systems. The trend has spawned new protocols like Agent-to-Agent (A2A) and AG-UI to support inter-agent communication. 

The economic impact is already measurable. From Q4 2022 through Q2 2025, aggregate labour productivity increased by 2.16% annually, with excess cumulative productivity growth of 1.89 percentage points since ChatGPT’s release. Industries reporting higher time savings from generative AI also experienced faster measured productivity growth, suggesting AI may already be meaningfully affecting aggregate labour productivity. 


Context Engineering: The New Prompt Engineering

As AI systems mature, we’re witnessing a shift from ad-hoc prompting to rigorous context engineering. Context engineering involves deliberately preparing and feeding structured background information to AI models so they can perform tasks reliably. It goes far beyond a clever single prompt, using planned, repeatable steps to improve model accuracy. 

The practice seeks to mitigate the non-deterministic behaviour of LLMs by feeding them essential information for precision. MCP exemplifies this trend as a standard evolving to “control” context effectively. Our dev team has found that proper context engineering dramatically improves AI output quality and reduces hallucinations. 
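As a rough illustration, context engineering replaces one free-form prompt with an explicitly assembled, ordered input. The layer names and content below are illustrative assumptions, not a prescribed format:

```python
# Minimal sketch of context engineering: the model input is assembled
# from explicit, ordered layers rather than a single ad-hoc prompt.
# Section names and example content are illustrative assumptions.
def build_context(instructions: str, documents: list[str], task: str) -> str:
    sections = [
        "## Instructions\n" + instructions,
        "## Reference documents\n" + "\n---\n".join(documents),
        "## Task\n" + task,
    ]
    return "\n\n".join(sections)

prompt = build_context(
    "Answer only from the reference documents.",
    ["Refund policy: refunds are available within 30 days."],
    "Can a customer get a refund after 45 days?",
)
```

Keeping each layer separate makes it easy to swap in fresh retrieved documents per request while the instructions stay fixed, which is where much of the reliability gain comes from.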



Software Development Transformation 

AI-Assisted Development Reaches Critical Mass 

Gartner predicts that by 2028, 90% of enterprise software engineers will use AI code assistants, up from less than 14% in early 2024. The role of developers is shifting from implementation to orchestration, focusing on problem-solving, system design, and ensuring AI tools deliver high-quality outcomes. 

AI is already writing over 55% of developers’ code, according to GitHub. Tools like GitHub Copilot, Amazon CodeWhisperer, Cursor, and Claude Code aren’t just suggesting snippets anymore: they’re refactoring codebases, optimising performance, and explaining logic in real time. 

Our team has embraced this shift strategically. We use AI for routine coding tasks while reserving human expertise for architectural decisions, security considerations, and complex problem-solving. The productivity gains have been substantial, allowing us to ship features faster while maintaining code quality. 

Low-Code/No-Code Platforms Mature 

The low-code/no-code market is projected to reach $65 billion in 2025, growing at nearly 20% annually. These platforms have evolved from simple form builders to sophisticated environments handling complex business logic and integrations. 

However, complex projects still require experienced developers to build robust, scalable, and secure systems. The most successful organisations adopt hybrid development strategies: low-code platforms for rapid prototyping and business process automation, traditional coding for performance-critical applications, and no-code tools for content management and simple integrations. 

Modern Languages Gain Momentum 

Languages like Rust, Go, and Kotlin are gaining traction by addressing specific challenges that older languages struggle with. Rust has become popular due to its focus on safety, speed, and concurrency, with major companies like Microsoft and Amazon using it for secure, scalable solutions. Its memory safety model prevents common issues without requiring garbage collection.


Security and DevSecOps Evolution 

With AI’s expanding reach, security has become paramount. In 2024, U.S. federal agencies introduced 59 AI-related regulations—more than double the number in 2023. Globally, legislative mentions of AI rose 21.3% across 75 countries. 

DevSecOps embeds security into each step of the development process, which is particularly important as embedded systems and IoT development outpaces traditional server-side software. These environments bring unique challenges, such as hardware-in-the-loop testing, that organisations are evolving their practices to address. 

The threat landscape is evolving too. AI has lowered the barrier to cybercrime, with AI-enabled attackers no longer needing technical skills to develop ransomware. Cybercriminals use AI throughout their operations, from victim profiling to automated service delivery. 


Infrastructure Investment Reaches Historic Levels 

The AI infrastructure buildout represents one of the largest capital formation cycles of this generation. Microsoft committed $80 billion for data centres globally, the largest single AI infrastructure investment announced in 2025, while Google plans to spend $85 billion in 2025 to build out data centre capacity. 

Inference costs for systems performing at GPT-3.5 levels dropped over 280-fold between November 2022 and October 2024. At the hardware level, costs declined 30% annually while energy efficiency improved 40% each year. These improvements are rapidly lowering barriers to advanced AI adoption. 

Looking Ahead 

The convergence of MCP, agentic AI, and advanced development tools is creating an ecosystem where AI systems can seamlessly interact with data, tools, and each other. Analysts predict that by 2027, at least 55% of software engineering teams will be actively building LLM-based features. 

Our dev team remains focused on balancing innovation with responsibility. We’re investing in AI-assisted development tools while maintaining rigorous code review processes. We’re exploring MCP integrations while implementing robust security controls. And we’re embracing low-code platforms for appropriate use cases while preserving our core engineering expertise for complex challenges. 

The future of software development isn’t about choosing between human developers and AI; it’s about orchestrating both to create better solutions faster. As these trends continue evolving, staying informed and adaptable will be key to maintaining competitive advantage in an increasingly AI-native world. 
