Day 2: Gartner Application Innovation & Business Solutions Summit

Tags
Tech
Published
September 9, 2025
Author
Warren Bickley

Foreword

Despite the late finish writing up yesterday's notes, I woke up pretty refreshed and got myself over to the InterContinental O2 for day 2 of the conference.
 

Presentations

The Future of Computing - Gartner Futures Lab

I knew after Frank's engaging session yesterday that I wouldn't want to miss this. Frank Buytendijk presented a three-horizon view of computing's future.
Horizon 1 covers federated computing in the next two to five years. Edge computing expands, spatial interfaces make a comeback, multi-agent systems dissolve traditional application boundaries, and wearables integrate AI into everyday life. However, serious constraints are emerging. Limited power, bandwidth, data sovereignty concerns, and questions about synthetic data trustworthiness all restrict scalability. Buytendijk urged focusing on more ambitious use cases beyond simple meeting summaries, while preparing for potential application rationing and prioritising energy efficiency.
Horizon 2 envisions hybrid computing over the next five to ten years. Rather than one dominant paradigm, we'll see classical, quantum, neuromorphic and photonic systems operating simultaneously. The key control point becomes orchestration across these diverse systems. Autonomous operations will expand from factories to entire logistics networks. For humans, brain-computer interfaces will transition from medical applications to commercial use.
Horizon 3 looks beyond ten years to a future with self-architecting software, biological and carbon computing, data embedded directly in physical products, and potentially even geoengineering capabilities. Two competing worldviews emerge at this stage: "hyper-machinity," which removes humans from processes for maximum efficiency, and "hyper-humanity," which integrates technology more deeply into human biology. Buytendijk's most compelling insight: "If all technical constraints are gone, only one constraint remains. What would we like to see?"
In true Gartner fashion, the horizons model provides a practical framework for technology roadmapping, with particularly actionable guidance on orchestration and distributed data management. We should advocate for open standards and auditable agents while viewing autonomy as a design choice rather than an inevitability. Success should ultimately be measured by human outcomes, not just system throughput.
 

Behind the Screens: How ITV Delivers Live Video without Fail

The session with ITV and LaunchDarkly was a really interesting insight into the challenges of delivering a large-scale streaming platform (they rather humbly pointed out that the numbers they shared pale in comparison to US counterparts). They shared a high-stakes case study about Euro 2024's England vs Netherlands match. With 3.5 million concurrent viewers streaming approximately 22 Tbps across UK ISPs, any failure would make front-page news. They maintained 100% uptime by using LaunchDarkly as a real-time control plane rather than just a feature flag tool. These flags enabled per-ISP and per-CDN routing and origin selection. Real-time telemetry measuring throughput, video starts, quality metrics and buffering triggered traffic shifts within seconds.
The platform evolved after a famous 2021 service meltdown. Since then, ITV implemented multi-origin architecture, multi-CDN routing, and human-in-the-loop controls specifically designed for sporting event traffic spikes. Love Island (the mention of which promptly garnered a laugh from the audience) presented a different challenge; its sharp pre-show surge requires more automation because human operators simply cannot react quickly enough. Their new target is 6 million concurrent viewers at roughly 40 Tbps, pushing them toward more closed-loop, automated decision systems.
ITV stressed placing every decision behind a feature flag, recommending we design isolated features that can be enabled or disabled without requiring deployments. Treat metrics as first-class inputs to your system, not just dashboard elements.
An audience member rightly raised that long-lived configuration flags become technical debt. LaunchDarkly shared methods for establishing a cleanup schedule and verifying evaluation paths before removing flags. More importantly, they shared how they extend chaos engineering drills to ISP and CDN failure domains, not just service-level testing.
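The flag-gated, telemetry-driven routing pattern ITV described could be sketched roughly as below. This is a minimal illustration of the idea, not ITV's implementation or the LaunchDarkly API; the flag names, metric names and threshold are all hypothetical:

```python
# Sketch of flag-gated, metric-driven CDN selection.
# All flag names, metric names and thresholds are hypothetical.

REBUFFER_THRESHOLD = 0.02  # shift traffic if >2% of sessions are buffering

# Stand-in for a feature-flag control plane (e.g. per-ISP routing flags).
flags = {
    "cdn-routing:isp-a": {"primary": "cdn-1", "fallback": "cdn-2"},
    "kill-switch:cdn-1": False,
}

def choose_cdn(isp: str, metrics: dict) -> str:
    """Pick a CDN for an ISP using flags plus real-time telemetry."""
    route = flags.get(f"cdn-routing:{isp}", {"primary": "cdn-1", "fallback": "cdn-2"})
    primary, fallback = route["primary"], route["fallback"]
    # Kill switch: operators can drain a CDN without a deployment.
    if flags.get(f"kill-switch:{primary}", False):
        return fallback
    # Telemetry as a first-class input: shift traffic on quality degradation.
    if metrics.get(primary, {}).get("rebuffer_ratio", 0.0) > REBUFFER_THRESHOLD:
        return fallback
    return primary

# A buffering spike on cdn-1 shifts isp-a traffic to cdn-2 on the next evaluation.
telemetry = {"cdn-1": {"rebuffer_ratio": 0.05}, "cdn-2": {"rebuffer_ratio": 0.01}}
print(choose_cdn("isp-a", telemetry))  # -> cdn-2
```

The point of the pattern is that both the routing table and the kill switch live in configuration, so an operator (or an automated loop) can redirect traffic without a deploy.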
 

Driving Standards - How Ford is Consolidating its Tech Stack in Europe

For someone like myself, whose background has predominantly been outside of enterprise technology, Ford's session did a great job of showing the real scale of enterprise estates alongside the complexity and politics of multi-region companies.
Chris Hurley of Ford Europe outlined a pragmatic consolidation of its fragmented technology estate. Their goal: a single regional stack for marketing, sales and service, fewer products, and significantly reduced business-managed IT. The driving factors are cost, risk and pace; not vanity rebuilds. They plan to migrate at natural inflection points rather than by mandate, leveraging global platforms which already exist where appropriate.
In practice, the plan involves consolidating commercial workflows on Salesforce, standardising appointments on Pega, creating a single customer entry point, and anchoring build work in GCP. Regular architecture reviews, fortnightly L2 meetings for in-room decisions, and monthly executive check-ins ensure alignment between global and regional priorities.
What seems to be working well is their focus on consistent governance rather than big-bang programmes. This approach enables quick decision-making. Retiring local one-off solutions and investing in shared skills and certifications is already reducing risk and improving time to value.
One of the key things I took away from this talk was Ford Europe's approach of being extremely direct and clear with everyone in the organisation: these are the technological constraints we want you to work within, these are the technologies we use. Whilst there's a risk that this will stifle innovation, it does reduce the complexity of large enterprise estates.
 

Celigo: Avoiding Agents of Chaos - An AI Framework for Trust

Celigo presented a practical trust model for AI, starting with the key challenges: hallucinations, confidence gaps, latency and energy costs. Rather than pursuing full autonomy, they focus on applying AI where it genuinely improves real workflows. Their examples included marketing segmentation, product copy, support and sales automation, and customer 360 views.
The core framework is built on TRiSM with three principles: limit agency to what's necessary, treat LLMs as building blocks rather than complete solutions, and train people to critically evaluate AI outputs. They break down processes into testable steps. Low-agency components gather context, while higher-agency steps handle summarisation or classification. Agent chains are permitted but kept within defined boundaries.
Their controls were specific: confidence gates and schema checks before users see results, second-model validators for output scoring and policy verification, flags for PII and history divergence, and human intervention when confidence levels drop. The strengths of this approach are clear to anyone working with AI today; decomposition is testable, guardrails are well-defined, and there's a sensible focus on workflow impact.
From another perspective, they noted that model confidence scores are often unreliable. Their solution requires golden test sets, regular offline evaluations, and established cost and energy budgets, not just simple thresholds. They also gave a pretty solid piece of advice: keep agent chains short to prevent latency issues. One of the memorable quotes from this talk was "LLMs as building blocks, not buildings", citing the industrial revolution as being a collection of advancements.
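The control pattern Celigo described, checking structure and confidence before a user ever sees a result, could be sketched as below. This is my own minimal illustration under assumed names and thresholds, not Celigo's actual implementation:

```python
# Sketch of guardrails in the style Celigo described: a schema check and a
# confidence gate before an LLM output reaches a user. Field names and the
# confidence floor are illustrative assumptions.

CONFIDENCE_FLOOR = 0.8
REQUIRED_FIELDS = {"category", "summary"}

def gate_output(output: dict, confidence: float):
    """Return (approved, result); failures escalate to human review."""
    # Schema check: malformed output never reaches the user.
    if not REQUIRED_FIELDS.issubset(output):
        return False, "escalate: missing fields"
    # Confidence gate: low-confidence results go to a human instead.
    if confidence < CONFIDENCE_FLOOR:
        return False, "escalate: low confidence"
    return True, output

ok, result = gate_output({"category": "billing", "summary": "Refund request"}, 0.93)
print(ok)  # -> True
```

In a fuller version, the confidence input would come from offline evaluation against a golden test set rather than the model's self-reported score, and a second-model validator could sit between the schema check and the gate.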
 

Crossfire: Using Low Code in the Enterprise

This was an interesting format for the conference, and the idea was quite simple: put up two representatives with opposing viewpoints and let the audience vote on some straightforward questions. They discussed the questions, then the audience's answers and whether these were predictable or explainable.
The session presented two perspectives on low code versus professional code in the GenAI era. They illustrated the evolution from custom-built survey tools to Node-RED to ready-made applications, emphasising pragmatism. While low code was once the disruptor, both approaches are now being disrupted by GenAI and code assistants.
They discussed the current position of Low-Code Application Platforms (LCAPs), reasons for growing interest, and common obstacles. Issues like cost, scalability, governance and data silos were quickly identified. They highlighted an emerging trend: professional development teams now use platforms like Mendix or OutSystems, not just citizen developers. Meanwhile, GenAI coding assistance has become standard practice for professional coding. Investment is currently flowing into three key LCAP areas: AI-assisted development, enhanced governance, and agent workflows.
The positive aspects included candid discussions about trade-offs and vendor pricing, along with clear explanations of "developer assistance" across both approaches. I particularly appreciated their control-plane perspective for agents on LCAPs, which offers a significant advantage when out-of-the-box guardrails are necessary.
An alternative viewpoint suggested treating "developer resistance" as a leadership challenge rather than an inevitability. When pilot projects include rigorous metrics, golden tests and cost budgets, most teams eventually embrace these platforms. They also recommended designing with exit strategies from day one. Where possible, keep shared APIs, data loss prevention, and lineage separate from the platform. Use LCAPs to deliver quickly, but ensure removing them later remains straightforward.
 

Future-Proof your Front-End Development with New Fundamentals

Danny Brian opened with a show of hands - "Who here comes from an engineering or development background?" - to which at least 90% of the audience raised their hand. He immediately got a laugh out of everyone by following with "the best way to punish an engineer is to promote them to manager and send them to a Gartner conference".
The talk advocated for a standards-first frontend approach to reduce technical debt and withstand technology churn. Rather than automatically choosing React or Angular, developers should leverage modern browser primitives. The presenter demonstrated a zero-dependency component that works in any host environment, making it suitable for micro frontends and web views. With AI-augmented development shortening timelines, leaders should reconsider the buy vs build vs blend equation and distinguish between development-time and runtime dependencies. A cautionary example given was that WebAssembly implementations like Blazor require heavy runtimes.
Key strengths included a compelling case that standards reduce vendor lock-in and extend longevity. The presenter took a fairly balanced view on frameworks, though it was clear there was no love lost for React: it remains highly employable and appropriate for complex applications, but developers should make deliberate choices rather than following trends blindly.
Standards alone, however, cannot address all needs. Server-side rendering, routing, accessibility, internationalisation, state management, testing and analytics still require opinionated solutions. While Web Components interoperability has improved, server-side rendering and hydration still present challenges. Micro frontends also introduce governance complexity.
I particularly liked this quote from Danny: "The only code I wrote 15 years ago that still runs is JavaScript."
 

Closing Day 2

Day 2 of the Gartner Application Innovation & Business Solutions Summit offered a wealth of insights and helped to frame some of my own thoughts and views in new ways.
I was particularly pleased with the diversity of perspectives presented. From Ford's pragmatic approach to enterprise technology consolidation to ITV's high-stakes streaming architecture, the conference successfully showcased different scales of implementation and varying approaches to similar challenges.
The consistent themes throughout the day centred on responsible AI implementation, the value of standards-first approaches, and the importance of aligning technology decisions with business outcomes.
Whilst it's quite a change from the typical events I attend, I took a lot from it and met a range of interesting people. Overall, a success!
I closed the day by taking the cable car over the river and watching some wake boarders, with a beer, whilst talking to a new friend from the conference.