Foreword
The London tube strikes certainly made for some interesting travel today. There were quite a few stressed and disgruntled travellers outside Euston. I was very surprised to find the tube stations entirely closed, given Google was reporting only a “reduced service”. I had quite a relaxed approach to my travel today: get to London as early as I can, then figure it out. How did the Bear Grylls meme go? Improvise, adapt, overcome.
I arrived around 10:45, which sadly meant I had missed the opening keynote “Forging the Foundations for an AI-Native Era” and Simon Bobbit's talk, “Successful Practices to Elevate Business Value with Architecture”.
Presentations
The Future of Developer Productivity
Tad Travis examined how organisations establish lasting alignment between business goals and application strategies. He divided this challenge into five core elements: executive summaries that preserve institutional memory, business context cascades that link IT initiatives to top-level objectives, portfolio management using models like PACE Layering and TIME, stress testing of alignment and impact, and resource planning. The message was clear: productivity gains stem not from tools alone, but from consistent governance and clarity about why applications exist and how they deliver value.
A compelling point was the emphasis on documenting assumptions. Projects often fail not because of poor execution but because the "unspoken truths" underpinning them remain untracked. Tad suggested using simple red/yellow/green indicators for assumptions, a practical way to prevent future disputes and missed expectations. The recommendation to employ portfolio heat maps and comparative analysis over time provides a tangible method to determine whether investments are yielding returns or stagnating.
One aspect of Tad's talk I would question is the reliance on high-level frameworks like PACE Layering or TIME. While these can be useful abstractions, smaller organisations or fast-moving teams will probably find them too rigid or resource-intensive to implement regularly. Advice to "start simple" and define a minimal viable application strategy is both more actionable and more impactful for many teams. As Tad put it: "First, just lay out what's important." I firmly believe a pragmatic approach like this will deliver more impact than pursuing the perfect methodology.
Overall, the talk reinforced that developer productivity is less about accelerating code production and more about aligning technology teams' work with business value. Success comes from making strategy explicit, documenting assumptions, and maintaining alignment through continuous change.
Top 10 Reasons Why GenAI Projects Fail and How to Fix It
This session by Arun Chandrasekaran mapped the top failure reasons for generative AI projects and how to avoid them. Core themes included ruthless use-case prioritisation, favouring composite AI over LLM-only solutions, ensuring data readiness for unstructured content, building platform composability to avoid lock-in, implementing responsible AI with "guardian" validators, adopting “LLMOps” to manage short model half-lives, controlling costs at inference stage, and addressing upscaling and change management.
Two key takeaways stood out for me. First, the “Use Case Comparison Matrix” forces you to make value versus feasibility trade-offs, helping surface truly valuable "killer" applications. Second, the recommendation for a modular, model-agnostic platform with an internal sandbox empowers teams to route prompts to the smallest capable model and swap in new models without extensive rewrites. I’m really proud to say that thanks to our incredible platform team at Griffiths Waite, we’re already doing the latter. It's also validating that we cover the former in our GenAI workshops.
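The routing idea can be sketched very simply: keep a registry of models ordered by cost, estimate how demanding a prompt is, and dispatch to the cheapest model whose capability tier meets that estimate. The model names, tiers, and the difficulty heuristic below are all hypothetical illustrations, not anyone's production setup.

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str           # hypothetical model identifier
    tier: int           # capability tier: higher handles harder prompts
    cost_per_1k: float  # relative cost per 1k tokens

# Registry ordered from cheapest/smallest to most capable
REGISTRY = [
    Model("small-llm", tier=1, cost_per_1k=0.1),
    Model("mid-llm",   tier=2, cost_per_1k=1.0),
    Model("large-llm", tier=3, cost_per_1k=5.0),
]

def estimate_difficulty(prompt: str) -> int:
    """Toy difficulty heuristic; a real router might use a
    lightweight classifier or historical success rates per task type."""
    if len(prompt) > 500 or "analyse" in prompt.lower():
        return 3
    if "summarise" in prompt.lower():
        return 2
    return 1

def route(prompt: str) -> Model:
    needed = estimate_difficulty(prompt)
    # Pick the cheapest registered model whose tier meets the estimate
    for model in REGISTRY:
        if model.tier >= needed:
            return model
    return REGISTRY[-1]

print(route("Translate 'hello' to French").name)  # small-llm
```

Because the registry is just data, swapping a model in or out is a one-line change rather than a rewrite, which is the composability point Arun was making.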
The talk focussed heavily on GenAI, of course, but didn't touch on other areas of ML or AI. If your data is solid and the task is predictive or causal in nature, classic machine learning may outperform GenAI on accuracy, latency, and cost. It's important to treat GenAI as just one tool in your toolkit, not the default solution. Arun's warning that "GenAI is the hammer, and every use case looks like the nail" is particularly worth heeding!
Arun's talk left me with quite a simple decision tree I would follow if I were implementing tomorrow: select three narrow, revenue-linked use cases; conduct a bake-off between small LLMs and baseline ML; measure cost per successful outcome; and deploy the cheapest model that meets quality requirements. I'd invest first in data engineering to improve RAG quality, add lightweight validators, and maintain a simple, composable stack that allows hot-swapping models without re-architecting the entire system.
Theatre 1 & Theatre 2
I made my way over to the theatres in the corners of the exhibition space, as I intended to listen to the Huawei talk on “Delivering Enterprise AI with ROMA Connect Agentic Integration”. However, lunch was being served at the same time, which made the exhibition hall an incredibly loud and crowded space. It was definitely impacting Soroosh and Alex's delivery, which was a huge shame; credit to them for giving it their all, because I really would have struggled in that environment. It wasn't so bad for those attending, as headphones were distributed and did a good job of blocking everything else out.
I took the opportunity to grab a bite of lunch and a coffee, then secured myself a prime seat in Theatre 1 with a headset, ready for the next two talks I had planned to see.
Apps to Agentic Future: Crafting Smarter, Autonomous Workflows
Kim Sturges took the stage and presented a practical roadmap for evolving to agentic systems through a three-phase approach. Phase 1 enhances existing workflows with AI skills to improve speed and accuracy. Phase 2 deploys coordinated AI agents capable of making decisions. Phase 3 implements fully autonomous, AI-native applications that operate independently within defined guardrails. She capitalised on her opportunity to highlight ServiceNow's low-code platform, newly launched "BuildAgent," and an AI Control Tower for comprehensive governance of both third-party and native agents.
Kim gave a nuanced perspective that full autonomy isn't appropriate for all processes. High-stakes operations often benefit more from deterministic workflows enhanced with targeted AI skills than from comprehensive agent systems. Before implementing agents, organisations should establish concrete metrics such as cost per resolved task, lead time, and error rates, comparing these against robust non-agentic baselines. Kim delivered a clear rule for how AI is used within ServiceNow: deploy agents for probabilistic, adaptable tasks while maintaining traditional workflows for repeatable, rule-governed processes.
What I appreciated from Kim's talk was that she showed AI use-case examples that weren't exclusively chatbot and customer support related, which I can't believe is still a rarity in 2025… Her delivery overall was great, and the polished slides only added to this.
I was really quite perplexed that ServiceNow positioned themselves in this presentation as “a low-code platform”. Feels like this is going to be a tough pivot to achieve, even if only from a brand perspective, for such a recognised product.
How AI Agents are Changing the SDLC
Tim Rogers took us through the evolution of AI at GitHub; from basic code completions and chat functions to autonomous coding agents capable of planning, editing, testing, and iterating. He demonstrated two distinct agent modes: one operating locally within the IDE that iterates until tests pass, and another cloud-based version that works with GitHub Issues, enabling parallel and unattended execution. Tim connected the emergence of these agents to improvements in model benchmarks like SWE-bench Verified and showcased how agents are expanding across the entire software development lifecycle, from coding to review and planning.
The presentation's strengths were evident in its practical, realistic demos that avoided typical industry hype. Tim showed us compelling use cases, including using agents to clean up stale feature flags and accelerate validator creation. His conceptualisation of agents as "peer programmers" rather than mere "pair programmers" provided a framework for understanding this technological shift.
I did slightly take issue with the claim that Copilot is GitHub's "top contributor" simply because it commits the most code; is this not a superficial metric? More meaningful measurements would include defects prevented, reduced lead times, escaped-bug rates, and cost per successful change rather than just lines of code or PR counts. While agents increase throughput, they also shift bottlenecks to other areas: planning, verification, deployment, and audit. Without robust guardrails and observability, this increased velocity could potentially lead to chaos.
A quote I thought worthwhile keeping note of: "We're not at the end, but at the end of the beginning." The next phase of this evolution is going to be exciting.
LSEG in Conversation: Retooling IT and Accelerating AI Transformation
Sadly I struggled to hear a lot of this from where I was sitting, and it was a fairly packed audience. I wouldn't have included it, except Kent Walters from Retool made a really good point which I firmly agreed with: give LLMs your building blocks to create applications. I interpreted this as brand guidelines, design systems, UI libraries, SDKs, APIs, and so on. Limiting an LLM for citizen developers in this way would go some way towards ensuring consistent delivery of internally built applications.
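The simplest enforcement of that idea is a validation pass over whatever the LLM generates, checking it only uses components from the approved design system. A minimal sketch, where the component names are entirely hypothetical:

```python
# Validate LLM-generated UI against an approved component set, so
# citizen-developer output stays within the design system.
# Component names here are hypothetical examples.
APPROVED_COMPONENTS = {"AppHeader", "DataTable", "PrimaryButton", "FormField"}

def validate_components(generated: list[str]) -> list[str]:
    """Return any generated components outside the approved set."""
    return [c for c in generated if c not in APPROVED_COMPONENTS]

violations = validate_components(["AppHeader", "CustomWidget", "DataTable"])
print(violations)  # ['CustomWidget']
```

In practice you would also feed the approved set into the prompt (or tool definitions) up front, so the model is steered towards the building blocks rather than merely rejected afterwards.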
Gartner Futures Lab: Most Maverick Predictions
This was a really dynamic session led by Frank Buytendijk, so I struggled to get any detailed notes. He did a great job of making it feel more like a conversation with the audience; very much a "choose your own adventure" talk. Frank took the stage and immediately got a laugh out of everyone, and it was a pretty big crowd.
This was a new format Gartner introduced, and whilst I really enjoyed it, I think it took the audience a little while to get into it; it would probably work much better with a smaller, more intimate group.
Reframing Low Code in Enterprise Architecture
Lauren focussed on repositioning low-code from being seen as a tactical solution to a strategic component of enterprise architecture. She emphasised that in today's environment of constant change, organisations need tools that can deliver quickly, maintain compliance, and integrate with legacy systems. Low-code becomes valuable when treated as an integral part of the target architecture rather than a peripheral project.
The case study resonated well, given our work at Griffiths Waite in a similar space: a global insurer adopted Mendix as their system-of-engagement platform, maintained systems of record behind APIs, and rapidly prototyped integrations. Their reported results were impressive: similar applications delivered 70% faster and at half the cost, plus a customer application with 15 integrations built in just 9 weeks. It would be interesting to know how Everywhen arrived at these figures, however.
Strengths within their low-code implementation included their architecture-first approach, reusable UX component libraries, and realistic expectations about citizen development limitations for customer-facing applications. Their upgrade strategy appeared well-planned, and diagrams showed Mendix properly integrated into the enterprise architecture rather than simply attached as an afterthought.
With low-code I’m always wary of vendor lock-in, application sprawl, and hidden operational costs. I believe we should focus on measuring throughput, change failure rates, and runtime performance—not just development speed. Keep low-code platforms away from core systems of record, maintain design governance, and thoroughly load test all customer-facing applications.
Lauren's position is clear and I respect it: "Low-code is not a shortcut. It is an architectural choice."
Re-Wiring for the Future
The guest keynote by Dr. Michelle Dickinson was a fantastic way to close the day. Her talk was motivating; it challenged the idea of what it means to be an inventor and reminded me of the importance of being curious and asking questions.
Closing Day 1
This was my first time at a Gartner conference, and I was surprised by just how much I took away from it. Really grateful to Griffiths Waite for the opportunity to attend, and I'm looking forward to day 2!
