
From “catching AI” to designing for it (and why that’s finally good news)

  • Writer: Ross Jordan
  • Jan 2
  • 9 min read

Updated: Jan 3



I’ve written a version of this annual blog for a few years now, and it’s becoming a slightly strange ritual. Partly because I’m always writing it at the same time I’m doing the other annual rituals (marking, planning, more marking, pretending I have a “quiet week” in January, etc.). But mostly because each year I find myself trying to describe the same thing from a slightly different angle:


We are living through a structural shift — not a tool upgrade.


And 2025 was the year that felt different in my day-to-day work: Artificial Intelligence stopped being “something you try” and started becoming something you have to design around, whether you like it or not. If you want the neatest version of that in one line, it’s probably this:


AI is moving from an app you open to an environment you inhabit.


That shift is why some of the arguments we’ve been having in universities (about detection, cheating, bans, policing) are now colliding with reality. Not because integrity doesn’t matter (it matters more than ever), but because the unit of design is changing.


That’s what I want this blog to do: pull out the recurring themes I kept circling, particularly in the second half of the year, and translate them into a set of practical (and slightly opinionated) predictions for 2026.


If you are interested in a week-by-week take on developments in AI, with an unashamed focus on UK Higher Education and Business Schools, then please see my AI Weekend Newsletter: https://thesixthwave.substack.com/


The big 2025 realisation: “AI-proofing” is a dead end


There’s a line I keep coming back to from Stephen Fitzpatrick’s piece in July, AI-Proofing Is a Myth: if the task is fundamentally “gather information → turn it into sentences → submit for evaluation”, then we’re basically asking for trouble.

That doesn’t mean “give up”. It means change the game.


In 2026, the universities that do best (academically, reputationally, culturally) will be the ones that stop asking:

  • “How do we stop students from using AI?”

…and start asking:

  • “What evidence of thinking do we actually want, and how do we design for it?”


Because the uncomfortable truth is: even when students are trying to do the “right” thing, AI can still short-circuit the learning process if it’s used as a first resort rather than a structured scaffold. Ethan Mollick’s Against “Brain Damage” is a useful framing for this: the risk isn’t “AI destroys minds”, it’s that default-use encourages cognitive outsourcing at exactly the point learning needs effort.


So for 2026, my working assumption is:

Most out-of-class work is AI-assisted. Not always, not universally, not even necessarily well, but enough that we should design with that reality in mind.


The integrity conversation is shifting from “proof” to “process” (and it has to)


This is the piece that, in my view, many teams are still slightly behind on.

In 2024/25, a significant portion of the sector's energy was invested in “detection” and “evidence”. But the emerging official tone is more nuanced, and it’s worth paying attention to.


The UK Office of the Independent Adjudicator (OIA) published a casework note on AI and academic misconduct that contains several quietly important signals:

  • Complaints they’ve received remain low (so far), but they note providers are seeing a rising incidence internally.

  • Providers should be clear about what is/isn’t acceptable and apply fair investigations.

  • The burden of proof is on the provider, not the student.

  • They explicitly emphasise understanding the limitations of detection tools and considering a range of evidence.


This aligns with where many of us are heading anyway: “prove it” is often a brittle strategy; “make the process visible” is more robust.

So when I say “process”, what do I actually mean in practice?


A simple 2026 test

If a student used AI heavily, could they still show:

  • how they framed the problem,

  • what alternatives they considered,

  • why they accepted/rejected options,

  • what changed after feedback,

  • what they believe now (and why),

  • what they can defend orally?


If yes, then you probably have an assessment that can survive ‘ambient’ AI. If no, you’re likely grading product performance, not learning performance. Which is the moment where stress levels go up, meetings multiply, and people start saying “we need a new policy”.

Which brings me to…


Policy is not the solution — but it’s part of the scaffolding


I’ve come to think we overestimate what policy can do, and underestimate what it’s for. Policy isn’t mainly there to “stop” behaviour. Policy is there to:

  • create shared language,

  • define boundaries for fairness,

  • reduce ambiguity,

  • protect staff and students when things go wrong,

  • and give you permission to redesign.


On this, the QAA framing is helpful. Not because it gives you a magic template, but because it anchors the discussion in learning and teaching rather than surveillance. If you haven’t looked at it recently, it’s worth reviewing QAA’s Generative AI resources, including: QAA: How can generative AI be used in learning and teaching?


But here’s the more important point:

Policy without assessment redesign creates compliance theatre.

And compliance theatre is exhausting, demoralising, and ultimately ineffective.


Assessment redesign isn’t one project. It’s the new operating system.


If you’ve been following my work (or sat through one of my slightly-too-animated rants about entrepreneurship education), you’ll know I’m not allergic to experimentation. I’m allergic to performative innovation. The kind that looks bold but doesn’t survive contact with students, workload, or the quality assurance machine.


In 2026, the most pragmatic assessment redesign work will focus on three things:

  1. Witnessed thinking (more of it, in more forms)

  2. Defended judgement (making students explain choices, trade-offs, ethics, context)

  3. Structured use of AI (where AI is part of the task design, not a hidden variable)


Advance HE captured a practical “starting framework” for this kind of shift in: How to team up with AI: 3 steps for assessment redesign


I’m not pretending there’s a single model that works for every discipline and every module (there isn’t). But the direction of travel is remarkably consistent across the sector:

  • less “submit the artefact”,

  • more “show me how you got there”.


Yes, that usually means more use of vivas, presentations, commentaries, prototypes, logs, portfolios, iterations, peer critique, in-class creation, and authentic constraints.

(Also, yes: it means we need to have honest conversations about workload and class size, because redesign without resourcing just moves the pain around the system.)


Research: the REF automation question is a values question, not a tech question


I don’t want to drift too far into Research Excellence Framework territory here (it’s not my natural home), but one late-year piece stuck with me because it frames the risk correctly.


The piece raises concerns about the erosion of disciplinary expertise, “metrics by stealth”, and bias amplification if AI becomes deeply embedded without responsible governance.

This matters for Business Schools because we live in the tension between:

  • measurable outputs and rankings,

  • and scholarship that is interpretive, contested, contextual, and slow.


The 2026 question isn’t “should we use AI in research workflows?” (we will).

The question is:

What do we refuse to outsource — because it is the point?

In other words, which parts of academic work are the actual purpose of being a university (the parts where human judgement is the product), and therefore shouldn’t be delegated to automation? That’s a values conversation, not a tooling conversation.


The sector is coalescing around a “long game” view (finally)


The most useful thing I read in late 2025 wasn’t a prediction about a new model.

It was the steady drumbeat from multiple people saying, in different ways:

Stop chasing features. Start building capability.


This view is reflected in the HEPI collection, 'AI and the Future of Universities' (Report 193), which explicitly encompasses teaching, assessment, research, professional services, strategy, and AI literacy. If you want the headline page: HEPI: AI and the Future of Universities (Report 193)


It shows up in the way conversations started shifting toward multi-year institutional rebuilds (policy + capability + assessment + governance + infrastructure + staff development). Not sexy. Very necessary.


Meanwhile, the tech kept moving: models are becoming more usable, not just “better”


In the background of all of this, the tools continued to evolve in a way that matters for education:

  • more “grounded” synthesis,

  • more multimodal inputs,

  • more integration into everyday workflows,

  • more model variety (and therefore more complexity in “what do we standardise?”)


One link I found useful for framing that acceleration:


If you want a concrete example of “AI becoming ambient”: Google’s post on translation upgrades (including the “headphones as translators” framing) is here: Google: Gemini capabilities + translation upgrades

When the interface disappears, policy arguments get harder, and design becomes even more central.


Quick rewind: how did my 2025 predictions age?


Before I do the “predictions are risky” bit, it’s probably worth doing what I asked everyone else to do last year: show my workings.


In my 2025 blog, a few predictions stood out (and a few of them were… ambitious). In hindsight:

  • “Focus on the user base… to build loyalty and moats” was, I’d suggest, broadly right. 2025 felt like the year the competitive edge shifted from “who has the best model” to “who has the best experience”: workflows, integrations, defaults, and switching costs. (Which is exactly why universities can’t outsource the design question to IT procurement alone.)

  • “The most significant use-case developments… will be education and scientific research” still feels directionally right, but with a caveat: in HE, the big story wasn’t a single killer app. It was the slow institutional grind of assessment redesign, integrity process, and capability-building. The unglamorous work that actually changes practice.

  • “2025 will see the rise of the AI user”: yes, and I think that is why this year’s blog is about design rather than “catching”. Once usage becomes normal(ish), pretending it’s exceptional stops working.

  • “ASI will be achieved in 2025” (Artificial Superintelligence): I’m going to (generously) mark that one as not proven (and possibly just wrong). I’m less interested now in the label and more interested in the lived experience, and I think AI companies are starting to get this too. Even without ASI, the practical consequences for how we teach, assess, and run institutions are compounding.


And my “boring processes” bonus prediction (that the opportunity to monetise and improve basic workflows remains untapped) is the one I’m most confident about continuing to watch. Because it’s where the real adoption curve often hides, and it links directly to my teaching in the form of business start-up opportunities.


My predictions for 2026 (written with the usual “predictions are risky” disclaimer)


Prediction 1: “AI-use statements” become normal, and boring

Not as a policing tool. As an academic norm. Students will increasingly be expected to describe how they used AI, the same way they reference sources and methods.


Prediction 2: Assessment will shift toward defended judgement

More vivas. More short orals. More recorded walkthroughs. More critique and iteration.

Not because it’s fashionable (or cheap!), but because it’s one of the cleanest ways to make thinking visible, and it aligns with the direction of travel on evidence and fairness.


Prediction 3: The “AI literacy” conversation becomes less about prompts and more about ethics + evaluation

The next capability gap isn’t “can you get a good answer?” It’s: can you judge whether the answer is trustworthy, biased, incomplete, or contextually wrong — and can you explain that judgement?


Prediction 4: Universities will quietly move from “AI committees” to “AI owners”

The work will consolidate under a smaller number of accountable roles: someone has to own risk, budgets, governance, and educational design outcomes, not just convene conversations (but again it needs commitment of resource).


Prediction 5: We will stop looking for “the right tool” and start building “the right workflow”

Model choice matters, but workflow design matters more.

(And if you’re in learning design, Dr Philippa Hardman’s AI Model Selection for Instructional Design is a worthwhile read.)


A few things I’m carrying into 2026 (practically)


If you only want the “so what do I do on Monday?” list, mine looks like this:

  • Assume AI is present, then design evidence of learning accordingly.

  • Put energy into process visibility, not surveillance.

  • Treat assessment redesign as a multi-year operating system change, not a one-off retrofit.

  • Keep policy simple, then do the real work in module-level design and shared exemplars.

  • Protect what is human by design (disciplinary judgement, ethics, critique, scholarship), especially where automation pressure is rising.


Closing thought (and a Doctor Who nod)


There’s a classic sci-fi trope: you go back in time and prevent the disaster before it happens. Higher Education doesn’t get to do that (in fact, nobody does). We don’t get to prevent AI; we get to design learning that survives it.


Although the AI debate sometimes flirted with Robots of Death — the anxiety of a robot-dependent civilisation where humans supervise but don’t really understand — that framing has oddly faded. (It’s another great Tom Baker story – you should really watch it. It was my Dad’s favourite too. Louise Jameson plays Leela. The Robots are divided into ‘Dum’, ‘Voc’, and ‘SuperVoc’ (Senior Managers) with numbers like the dreaded SV7 – is this where the AI companies get their ideas for new model names?)


The real risk isn’t melodrama. It’s plausibility: systems that can generate outputs that look right, sound right, and pass superficial checks… while quietly bypassing the thinking we claim to assess.

Which is why 2026 has to be the year we stop grading “plausible” and start designing for defended judgement.


Strange as it is to say, I’m more optimistic going into 2026 than I was going into 2025, because the conversation is finally moving away from panic and toward craft.

From policing to pedagogy.

From proving to designing.

From “what can AI do?” to “what should we do?”


If you want a slightly external perspective on 2026 specifically, Stanford HAI’s roundup of expert predictions is here: Stanford AI Experts Predict What Will Happen in 2026


Right. Back to marking.

 
 
 
