
AI III: Reflections & Reflexivity

  • Writer: Ross Jordan
  • Jan 3
  • 16 min read

Updated: Jan 7



This is the third successive year in which my annual blog focuses on Artificial Intelligence (AI). My reasoning is that this site’s overarching themes of pedagogy and entrepreneurship (start-ups) continue to be radically impacted by the ongoing development of AI. It feels an appropriate moment in the trilogy to reflect on what has come before and to exercise some reflexivity about changing predictions for the future. I sense a lot of early adopters (of AI) are attempting a similar untangling of the issues and the pace of development, given the major Large Language Model (LLM) providers’ recent approach of near-daily product launches as a means to lead the competition (e.g. OpenAI’s ‘The 12 Days of Christmas’ - you can click on links like these throughout the blog to access the resources mentioned or to expand on the discussion).


Last year I said we were in the early adopter phase, and I think we still are. Or at least in the early ‘admit you are using it’ phase. My prediction in last year’s blog was:



I conducted two interesting pedagogic experiments in 2024, the first on my final-year capstone module. The initial assignment is for students to propose a new business idea (value proposition) and deliver a set of 6 slides to convince the audience that the concept is worth proceeding with for the rest of the module (drawing together a team of 5 or 6 founder-peers, producing a full business plan, pitching for real investment, and, if they wish, launching for real). I made it clear that I was happy for students to use AI to support their creativity (once again citing one of my favourite academic writers in this area, Ethan Mollick: 'Automating Creativity'; for a particular example related to student entrepreneurial start-ups I recommended ‘Ideas are Dimes a Dozen: Large Language Models for Idea Generation in Innovation’).


One of the things that undergraduates often struggle with is the originality of new business ideas, alongside generating ideas with serious growth potential. My hope, which proved correct to some extent, was that engaging with AI would help students develop better business ideas. There was a marked difference from previous years in the 200 ideas pitched: far greater variety (in terms of target market, industry, technical and resource requirements, and growth opportunities), and far fewer ‘Tar Pit’ ideas (a phrase coined by the Y Combinator accelerator in the US to describe ideas that we see year-on-year and that never really go anywhere – they are either too unattractive for growth, or too technically difficult, and often both).


This is the first year where we have not had the ‘joy’ of a new coffee shop proposal or an app to support student tenancy issues, and the first year where students appeared more comfortable asking for 'big money' to support large infrastructural, environmental, or technically advanced proposals. The quality of the visuals, slides, logos, wireframes, customer personas, and pitches as a whole was significantly better, with one exception: the academic framing was poor (i.e. the application of the content from the course – sigh, I clearly need to edutain more).


My home institution takes a supportive and open approach to the use of AI in assessments, with flexibility for the academic in selecting their approach to (and approval of) the use of AI, or not. We provide a good range of support for students to consider the ethical and appropriate use of AI, and we ask students to declare its use with a simple form attached to a digital assignment submission. On the assignment I mentioned above, I would estimate (but of course cannot prove) that 75% of students used AI in some shape or form (indeed, as one student said to me, “Surely I need to declare the use of Google as an AI-enabled search engine these days?”). We received three declaration forms.


The second example is a piece of research conducted with students returning from a year on an industrial placement, where I wanted to get a sense of the realities of AI use in the workplace to inform the ongoing development of the employability stream in our courses. This involved 65 placement students (mainly in the UK) across a variety of industries, almost all working for large businesses that are key players (the sort of names you would be proud to display on your CV when applying for graduate jobs). We asked to what extent AI dominated discussion or was a key issue being considered by the company/industry (in UK Higher Education it is clearly one of the key issues being discussed). Only 15% of respondents identified AI in this category, whilst 57% said it was barely discussed, if at all. Furthermore, 53% of respondents said their company had no policies around AI use, and 42% of respondents said they used AI in the workplace. On this last point, I experienced a classic piece of researcher’s regret in not asking whether this AI use was endorsed or approved.


Nonetheless, I cite these two examples not just for interest but to support this sense that we are still in the early-adopter phase and suffer from what some have called ‘shadow’ AI use. In other words: don’t tell anybody. There are all sorts of reasons for this (not least human guilt), discussed in this piece on AI adoption: The Hidden Reasons You're Not Using AI Every Day?


The noise of the AI landscape has changed during the last year. X is still a powerful discussion resource, but there are fewer of the ‘get rich quick’ posts and more of a technical focus than before (perhaps deterring those who had used it as an excellent de facto user guide in the continued absence of instruction manuals from the LLM providers). Ethan Mollick, as mentioned earlier, is still a go-to resource for me, alongside Rowan Cheung’s daily newsletter. I would add the newsletter and free webinars from Section as a business-focused resource at a strategic level, with some specialism in certain sectors (not least Higher Education). They will want to sell you their wares, and the paid-for resources may be excellent, but as a cheapskate academic I have not tried them. Greg Isenberg is a great read on start-up development and AI, although I must say one of his predictions for 2025 worries me, as I will need to do yet another course rewrite: “AI makes start-up ideas worthless. When anyone can build anything, distribution and timing become the only things that matter”. Dr. Mushtaq Bilal has some good thoughts on using AI for academic research (especially useful for early-career researchers). Finally, Dr. Philippa (Phil) Hardman provides an excellent update on AI resources for instructional design.


This leads us to consider where the discussion (‘noise’) has taken us and to reflect further on my predictions for 2024. The first is one that stimulated a lot of debate and indeed criticism (which I take as a good thing):



You can read about Schumpeter, Creative Destruction, and Kondratiev in last year’s blog, but did we achieve Artificial General Intelligence (AGI)? Well, AGI seems to have entered the collective consciousness and the parlance of mainstream media, and yet OpenAI’s launch of o3 (just before Christmas), a reasoning model that surpassed the human threshold on the ARC-AGI (Abstraction and Reasoning Corpus) benchmark, scoring 87.5 where the human threshold is 85, was hardly front-page news despite suggestions that it shows ‘signs of life’ and emergent behaviour. Part of the reason is that we continue to see super-agile development of LLMs in public (thanks to an investor ‘rush’ and engineering-focused management teams) without enough user (customer) focus. OpenAI’s recently proposed restructure (another one!) toward a ‘Public-Benefit’ model (largely for-profit) demonstrates strategic struggles at the top, whilst any level 4 student could have pointed out the inevitable need to jump from o1 to o3 in the naming convention, to avoid an intellectual property lawsuit from a major telecoms company, as a sign of confused thinking throughout.


The LLM obsession with AGI has stimulated much of the ‘noise’ in 2024, partly in terms of definition (we are now hearing of ‘narrow AGI’, which strikes me as an oxymoron, and of soft/hard/sharp AGI), with my preferred choice(s) of definition being:




However, OpenAI has just changed its definition to a profit focus of $100bn (some distance from the original vision), and it also has a ‘pathway to AGI’ that offers more flexibility/confusion. For a summary see: 5-Levels of Super AI to Outperform Human Capability.


The moving of the goalposts is also part of the problem, along with the varied methods of measurement. If you are interested in the latest league tables, this is a good resource: https://artificialanalysis.ai/models


To achieve widespread AGI (oh dear, I just introduced another term), the suggestion in 2024 was all about ‘compute’, i.e. throw more and more chips at the challenge and we will get there (with the associated environmental concerns pushed aside). The mid-year suggestion that we were hitting a scaling wall (the pace of development decreasing) encouraged geo-political moves to restrict access to technology. This most markedly affected China, but could also be felt by the user in the UK, who experienced the frustration of new tools being trumpeted (that should probably be instruments rather than tools) only to be met by the text ‘Not available in your territory yet’ (bringing back the palpitations of an Error 404 page). Ironically, the restrictions in China prompted the development, late in the year, of DeepSeek V3: an open-source LLM trained at a far lower cost than the market leaders and, out of necessity, far more efficient in its use of compute. An entrepreneurship scholar would recognise this as a classic innovation methodology, and arguably an example of effectuation rather than the causation goal of AGI. This is a position that smaller countries, organisations, and institutions (such as universities) find themselves in – a subject to which I return at the end of this blog. Political involvement in the discourse has increased significantly (partly due to the Trump victory and the association with Elon Musk), and it is worth viewing the UN debate on AI.


If I have lost some of you here then I apologise. After the last few paragraphs you may have the sense of confusion about what is going on that most people have regarding AI. In fact, if you do understand the AI landscape fully then I urge you to get in touch with the LLM leaders, as they will pay you a lot of money: they don’t fully understand it themselves (“Anyone not confused doesn’t really understand the situation”). It is OK not to understand. Indeed, as we allow/encourage LLMs to work independently of us, and together with each other, we may never fully understand (this is the annual dystopian warning where I usually say something about Doctor Who, and interestingly – well, to me at least – there is almost as much confusion over whether Davros, the creator of the Daleks, was killed by his creations as there is over whether we have achieved AGI). For more depressing warnings see: Godfather of AI Raises the Odds of Technology Wiping Out Humanity Over the Next 30 Years.


Enough adding to the noise. Have we achieved AGI? I say yes!

For a typical discussion of the realities of this one-word answer see Haider at 'Slow Developer', with further support for ‘yes’ from Chubby on X.

And finally, the helpful reminders that “We have not achieved ‘better than any human at any task’ but what we have is ‘better than most humans at most tasks’” from Vahid Kazemi, and that “Most folks don’t have a lot of tasks that bump up against limits of human intelligence, so won’t see it (AGI)” from Ethan Mollick.


I also cite in support of my answer the AI world turning its focus toward ASI (Artificial Super Intelligence). Again, definitions vary, but here are my selections:


“A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds, not just in a specific domain but across a broad range of intellectual tasks.”


“Artificial Super Intelligence refers to AI systems that achieve abilities exceeding the most capable human minds in almost every field, including scientific creativity, general wisdom, and social skills, potentially enabling the system to continuously improve itself and innovate beyond human comprehension.” from DeepMind Blog.


The leaps in performance seen in AI over the last year, and the breaking of the scaling wall (see OpenAI's Sébastien Bubeck), lead me to my (possibly controversial) first prediction for this year.


Artificial superintelligence (ASI) will be achieved in 2025


Unsure? Yes, me too. But take a look here for where the discussion is now with Logan Kilpatrick. Or consider Sam Altman’s view that “How does superintelligence emerge? You have to look at the rate of scientific progress,” describing how advances might compound over the next few years. There is also the suggestion from Haider that we could reach ASI before the general public recognises that AGI has been achieved.

We will return to this subject in 2026.


I talked last year about the competitive environment of AI as a fascinating business case study (citing the Sam Altman fire-and-rehire saga as a unique example of the chaos of hyper-competition), and this developed further in 2024. My prediction for 2025 is:


A focus on the user base (the customers) will develop as a means to build loyalty and ‘moats’ around the leading providers


We have seen this begin in 2024 with Google’s rapid deployment of numerous tools through Google Labs (it’s a ‘lab’, so you have to wear a white coat – come on Google, this is for the end-user!), the fine-tuning of its LLM Gemini, and the introduction of NotebookLM. All of this whilst its core business of search is under threat from AI providers such as Perplexity (have you tried it? You really should; here’s a useful thought piece from Section School).

The strength in depth of an established business of Google’s size has been difficult for OpenAI to compete with, especially whilst distracted by the path to AGI/ASI.


The rise of agents has been mixed, in my view, and perhaps this is because of the inclination to keep AI use secret, as discussed earlier. OpenAI’s hope that ChatGPT would replace the app marketplace by letting users build custom workflows (confusingly also called GPTs), and Claude’s response with Artifacts, were misplaced: most users build for what they presume are very specific workflows in their environment and then do not wish to share or monetise them, in case their increased productivity is ‘seen through’ or, worse still, their employer tells them to stop for reasons of governance.


Early adopters continue to use a variety of LLMs depending on their goals (I use ChatGPT for workflow tasks, Claude for research and the comprehension of large data sets, Gemini for pedagogy-focused endeavours, and, if you are prepared to pay, Deep Research for a research focus). I know I am missing out on Grok and the open-source models (Mistral, Llama, etc.), and whilst this frustrates me, the realities of the day job – and the need to use these models extensively to get the most out of them – mean that I suspect I am a typical user in focusing on what my time allows. This is partly what the major players hope will drive users to their model of choice, alongside advancements in the memory of AI interactions (i.e. your LLM will recall all your previous conversations and data such that it becomes a ‘lifelong friend’, and no new-fangled friend could ever know as much about you, thus being disadvantaged in providing the tailored advice you have come to expect).


I think Google has stolen a march here, as its approach is more user-focused, certainly from a pedagogic perspective. If you are in this sector and have yet to try NotebookLM, I urge you to upload a transcript of a lecture you have given (perhaps adding the slides and a couple of key articles) and hit Audio Overview. The podcast generated a few minutes later will be a revelation for you (and your students). Its new interactive feature, allowing you to guest on your own podcast (the Join feature) and ask questions to direct the discussion, is another significant step and a sign of further refinements to come (hopefully including a selection of English accents). I think the timing of the podcast feature is perfect, with this medium now leading mainstream media for many users. You can access a NotebookLM-generated podcast of this blog here: AI III: Reflections & Reflexivity.

There are numerous other features within NotebookLM that provide varied ways to interact with learning material, and I have found it refreshing to listen to podcasts about my own lectures as a way to refresh the material and appreciate different understandings of my delivery (some other materials related to this blog, produced in seconds: Briefing Note, Study Guide, FAQs, Timeline). This seems a good way of supporting personal development (although I accept the critical management view that I may end up disappearing into my own vortex – there was a ruder version of this sentence). This is not to say that Google may not be trumped by a step-change advance from another provider (where are Apple in all this? Perhaps DeepSeek V3 from China is a sign; perhaps OpenAI gets better at AI-enabled understanding of human psychology and social dynamics). But for the moment, if you are new to LLMs and the signal-to-noise ratio in this blog has frustrated you, then I would say start with Gemini.


Last year I introduced a framework for thinking about AI, and I think this still stands for 2025. There are many areas in this framework that space prevents us from considering in depth this year. To mention a few significant ones: the advancement of robotics continues apace, and I think we are probably the last generation of non-robot-native humans (i.e. our successors will grow up in a world surrounded by robots at home, in the street, and at work – whatever 'at work' may end up meaning).


Until robotic interaction becomes common in the workplace (as it surely will), most of us will continue to interact with AI via our computers and, progressively, our phones or newer 'wearable' technology in this space. This year has seen advancements in AI speech and vision that have enabled keen users to interact with the technology in a more completely human way. If you have not tried voice on your LLM of choice then do give it a go, e.g. Stream Realtime in Google AI Studio. Yes, it blurs the boundaries between humans and machines, but this is something we are going to have to get used to, and I would argue it is one of the new AI literacies students will need to develop – which means that prompt engineering is already yesterday’s news. If you want to take a step further, then enabling your AI to take control of your screen and start opening browser windows or coding Python as it sees fit to complete its assigned task feels otherworldly (well, to this grey-haired man anyway – as an aside, this post on X literally struck a chord with me in terms of my own attempts at edutainment with grey hair, although the link is to some bloke called Angus – not me).


Content production has also taken a big step forward, and the threat to internet influencers, film, TV, and streaming content providers, and the traditional music industry has increased. My first industry (aeons ago) was the music industry, where I was a very average keyboard player who bought more and more equipment to try to counteract my lack of talent. Now I can type some instructions into AI and have it produce an original (new-pattern) song in the style of one of my favourite artists (e.g. Dave Gahan’s wonderful baritone) in minutes. Indeed, the grabbing hands will grab all they can (Google/Perplexity this sentence if it does not make sense to you).


Finally, developments in AI enablement of research continue to be significant, and this leads me to my next prediction for 2025:


The most significant use-case developments emerging from AI in 2025 will be in the areas of education and scientific research


That may not be seen as controversial, and I do not make these predictions specifically to stimulate controversy (as many other predictors do). However, my follow-on point may not sit comfortably. Please understand that I am not making a political point in my conclusion (if anything, I consider myself apolitical, or perhaps disillusioned with politics through age) but rather stating a genuine concern about the state of play in a particular sector in a particular country.


I do not think the UK, and specifically Higher Education in the UK, is well placed to take advantage of these developments. The signals have not been good: a ‘budget for growth’ that increases the tax burden on employers; a government asking its regulators to suggest innovations to stimulate growth in the absence of ideas of its own; the removal of the limited promises of investment in AI research; and the failure to invite Elon Musk (whatever you may think of him) to a ‘global’ summit on AI.


Within Higher Education, a drop in the number of overseas students (amongst other things) has led to a reported liquidity problem in more than 50% of institutions. The knock-on effect is widespread restructuring programmes dominating the institutional discourse at local levels and reducing the opportunity to consider the impact of AI (which could, ironically, be a cost-saver) at a strategic level. This means the use cases for AI in Higher Education administration will remain hidden, the pedagogic and research opportunities under-explored, and the governance decisions behind the curve, or possibly pushed toward a more restrictive, risk-averse application.


In other words, the US and China (alongside possibly India) will continue to lead the world in AI development (with a few countries specialising, such as France in visual media creation) and the UK will be a laggard. Take a look at this example of AI development for research and pedagogy from Stanford as a sign of US leadership: https://storm.genie.stanford.edu/, and, by way of another example, this story of an AI-focused school in Arizona: AI Educators are Coming.


Now, this is a bit too negative to finish on, so here are some hopeful suggestions as to how to address these issues (some New Year's resolutions, if you like).


The UK Government needs to engage with Higher Education to a far greater extent and in a partnership model. Universities are significant employers and drivers of the economy at local and national levels, often with a major impact in the public realm and providing a strong economic multiplier effect.


UK Higher Education Institutions need to (somehow) find the time whilst restructuring to encourage (force) this engagement with the government and influence the policy decisions.


Progressive UK Higher Education providers should appoint a Chief AI Officer to drive institutional development in this area. In the same way that a Head or Director of Sustainability has become a given, this role will become one too. It is in addition to a Chief Technology Officer, and the appointee need not be an engineer, but rather a practitioner, an academic, a user of AI.


In conclusion, my takeaway prediction for 2025, which I really hope comes true, is:


2025 will see the rise of the AI user as a focus, as we progress toward the early-majority uptake of AI


Plus, I will give you two bonus predictions:


Developments in Quantum Computing will accelerate the pace of change in AI even further,


and (at the risk of becoming a purveyor of get-rich-quick schemes, but in support of lean start-ups):


the opportunity to monetise boring, basic processes (nowhere near requiring AGI/ASI) and provide a decent user experience for the forthcoming late majority in specialised industries (including HE) remains untapped.


Many thanks for reading and see you next year, all being well.



Note: If you have got this far – well done – and many thanks for reading. In previous years I have provided an extensive list of links at the end of my blog. This year I have taken a different approach, interweaving the links in the text to make it more of a conversation. I hope this works for you. I would love feedback, especially as to where you would like me to focus the discussion in the future.

 
 
 
