Ross Jordan

AGI is the 6th Wave


My annual blog is again focused on Artificial Intelligence (AI) and addresses the challenge of writing for longevity in this field rather than adding to the noise. The blog itself is rather longer than normal (sorry) as it has been such a busy year in AI.

To achieve this, at the end of the blog I will provide curated links to workflows for early adopters (for we remain in this phase) looking for results now, in my specialisms of pedagogy and entrepreneurial start-ups.

Before we get to this, I will make predictions for 2024 and reflect upon the extraordinary year in AI that was 2023.


Predictions carry risk. Hence, they are often avoided by academics who, in many (but not all) cases, understandably reside in a world of empirical research. However, as an Entrepreneurship tutor, it would be remiss of me not to take risk. I have no idea whether there will be any reward attached to this risk; I engage in it purely in the entrepreneurial spirit of ‘paying it back’. It is worth saying that these blogs are not designed to generate passive income, build profile, or act as clickbait. They are for fun, and to stimulate thought in others and myself (be they students, teachers, or practitioners – and hopefully all three).


One of the challenges of navigating the AI landscape (for which I will later attempt to provide a framework to help) is the level of noise in commentary. As Ethan Mollick identifies ('How to use AI to do stuff'), the ‘user guides’ for AI tools are rarely from the developers of the core models and are more likely to be found in Reddit threads or, for the casual user, via X (I preferred it when it was called Twitter – that is the first of the traditional poor jokes this year). X’s algorithms are enabling a mass of ‘get rich quick’ / ‘side hustle’ / ‘top 10 AI tools you have never heard of that will change your working life forever’ type posts, which are starting to crowd out the genuine thought leaders and thus influencers (Ethan is one I would recommend following if you like my blogs: https://www.oneusefulthing.org/ ).

Therefore, to cut through the noise, here is my key prediction for 2024:


Artificial General Intelligence is the 6th Kondratiev Wave of Creative Destruction and it will be achieved this year.


For those who have not read my earlier blogs (and indeed for those who have, for a year is a long time), a quick recap to explain the basis of the prediction. An area of entrepreneurship (and economic) theory that has interested me for many years is the notion of waves of Creative Destruction, introduced by Joseph Schumpeter in 1942. This helps us to understand major shifts in approaches to business (and thus entrepreneurial opportunities and start-ups) as being triggered by, or based around, key technological and infrastructural change as enablers (and, by association, destroyers of old worlds). Schumpeter suggested naming these cycles after the Soviet economist Nikolai Kondratiev, one of several economists writing around the start of the last century who considered the drivers of cyclical change and identified technology as key (as opposed to investment or inventory, for example).

Various interpretations of past cycles include canals and then steam engines as early triggers, followed by the application of scientific knowledge, electrical power, petrochemicals, and so on. These varied views are often cited by critics of the theory as its failing, but even critics generally agree with the hypothesis that the 5th wave is that of information technology. One criticism of my prediction will be that Artificial General Intelligence (AGI) is just an extension of AI, which has been around for a long time (at least since Alan Turing's work in 1950 and the Dartmouth workshop of 1956), and is thus just a further part of information technology wave number 5. I proffer that AGI is a step beyond information technology. How we understand our engagement with it requires humans to consider some fundamental questions about their existence. For example, how will humans supervise AI systems that are much smarter than they are? This is what we will need to do to achieve what has been called a superalignment of AI and human capabilities. Getting this wrong is what stimulates the nightmarish scenarios predicted by science fiction, and which AI companies and nations are already grappling with (probably only in the consciousness of the early adopters and the casual onlookers).


Defining AGI is becoming more complex as we engage in the journey, and, partly as a result of the highly competitive business landscape, there will be lots of debate around measurement (already there is talk of weak and strong AGI). The beautiful simplicity of the Turing Test (originally called the Imitation Game by Turing in 1950) has already been achieved (ask yourself – can you tell a computer from a human? Perhaps ponder that next time you ring up your insurance company), and so new measures will be needed. I prefer a definition akin to AGI being able to achieve any intellectual task that a human being can perform, and I do not think that we will hit that this year. However, I do think we will have an AGI that can perform some seriously impressive, surprising, and probably ethically and morally challenging tasks beyond human capability this year, and, most importantly, beyond the expectation of its human prompters (thus perhaps not human masters). You could argue that (at least in terms of weak AGI) we are already there, and I suspect that within at least one AI large language model provider the ability is there, awaiting the right moment for release.


I said I would also reflect a little on the AI year just gone, and I will pick one event that was seismic from an AI and business perspective. Every day appeared to produce a new remarkable event in AI, such that the only remarkable days were the quieter ones. My preferred way to try to keep up is Rowan Cheung’s daily newsletter ( https://www.therundown.ai/ ); he summarises the year well here: 'The Biggest AI Events in 2023', and offers a nice timeline here: 'AI 2023 Timeline on X' (Note: for X links to work fully you need to be signed into X).


My key event is the firing of Sam Altman (as CEO of OpenAI) on November 17th and his subsequent re-hiring on November 22nd.


Plenty has been written on this and I look forward to the inevitable film. Information has been coming out from the board (past and present), so it could have been a personality clash, a power grab, a strategic disagreement, an issue of core values being challenged due to the original non-profit (open-source) motive conflicting with the significant Microsoft investment, or any number of other possibilities. However, I suspect it was down to governance and the challenge that the prospect of AGI places on humans as the custodians.

In last year’s blog, I referred to the classic Dr Who scene from Genesis of the Daleks, in which Tom Baker, having travelled back in time, stands in a lab with the opportunity to kill the Daleks before they are truly born by touching two wires together (if this reference means nothing to you then you really should look at some original Dr Who series – the stories are better even if the effects are not. But for your convenience and 45 seconds of your time: 'Genesis of the Daleks - YouTube').

I think those 6 days in November were the OpenAI Dalek moment, with one difference (excepting the robots, Time Lords, time travel, etc.): someone thought they ‘had the right’, and the collective decided they did not. If this sounds like a fantasy, and it may be, it is worth remembering that this is a company that has just surpassed $1.6bn in annual revenue, is reportedly discussing a $100bn fundraising round, and is due to complete its $86bn tender this month.

Other key elements of this story are that the firing of Sam Altman came just 10 days after the OpenAI developer summit, at which GPT building and a marketplace for GPTs were launched to great acclaim (effectively an AI ‘app’ market, one which could creatively destroy the existing app market), and at which Microsoft CEO Satya Nadella, as investor and partner, implied that they would do whatever Sam wants ('OpenAI Developer Summit - YouTube').

After the rehire, there was a pause to new ChatGPT sign-ups, and some have argued a throttling back of performance – a pause for breath perhaps. Sign-ups are now live again, and I am noticing no perceivable reduction in ChatGPT performance (mind you, as a mere human, how can I be sure).

So fantasy maybe, but hardly certainty. The other theory, that it was an elaborate way to avoid having to use Microsoft Teams, also has followers.


Why does all this matter? Other than as a future business case study for a university strategy module, and of course to the players embedded in the game, it probably does not matter, except that it added to the noise and thus the confusion. That leads me to my second prediction:


The early-majority users will not engage significantly with AI in 2024.


I would like to be wrong on this one, but I think that for many users (the majority) the noise is too loud and the nature of the competition too strong (too creatively destructive) to ‘buy into’. I suspect that it will get more confusing before the true victors are crowned and users become attached to one player or another. We have yet to reach a mobile-phone-equivalent landscape of Apple vs Samsung vs the rest, and AI is complicated both by the underlying large language models as a concept for the average user and by the confusing means of interacting with them (just ask a Microsoft user what they understand Co-Pilot to be, and don’t ask an Apple user – yet). It is far too early to presume that the player holding the most turnover or the highest valuation will be the winner; it could easily be an open-source or substitute-type competitor.


If you are an early adopter, how can you make sense of the current landscape and filter out some of the noise?

Here I introduce a conceptual framework I have developed to help my own thinking and daily AI research that may help you.


I have categorised different elements of the landscape to enable users to allocate new information into locations and organise their thinking. I then take a subjective view as to which elements are of most significance to my workflows (‘workflow’ being a phrase that has entered the AI language) and thus help me not to be waylaid by interesting but potentially non-productive distractions. The workflows I have chosen to focus on here are those of pedagogy and entrepreneurship from a start-up perspective. I also mention my academic research workflow, which I may explore another time, but it is not the focus of this series of blogs and is covered extensively by others (for example, I recommend following Mushtaq Bilal: 'Mushtaq Bilal on X').



Before we get to the use cases, I will talk you through the landscape model. I place the competition element at the centre (tipping my hat to Michael Porter, who I continue to refer to as Mikey in my lectures, as if we are mates – we have never met, sorry Professor) as this is driving the innovation and creative destruction/disruption.


If AGI is indeed the 6th Wave, then I am old enough to have experienced two such events. In the late 1990s and early 2000s, when the impact of the internet was still being realised, I recall being involved in investor pitches where, if the internet impact was not addressed, it would be called out (regardless of the value proposition). It feels the same with AI now, and I can envisage pitches in 2024 for barely adapted existing offers – another coffee shop / food subscription service / tourism app – where investors will say, “you have not addressed how AI will affect your idea” (which effectively means “I’m out”, to use the folk-hero parlance).

What does appear different this time (I accept it could be due to my ageing memory) is the pace of change and the level of destruction caused by technological advancement, particularly to start-ups themselves. Arguably, OpenAI’s launch of GPTs (ChatGPT’s app equivalent within ChatGPT, not to be confused with ChatGPT itself) wiped out a whole host of start-ups that had built their value proposition around providing a separate app-style service to interact with ChatGPT (although OpenAI would say they could rebuild their offer as a GPT within its service, they are then of course financially beholden to OpenAI as a supplier/provider). Therefore, if you are in the start-up world, you have to be in the AI world, but you must also be incredibly flexible in response to the competitive landscape.

It is also these enabling-type business models that have so far provided the easiest access for early adopters seeking to deliver results from workflows efficiently. This change from OpenAI has led to a resurgence of open-access models, as users and developers seek a return to the original values of OpenAI (and thus AI in general), whilst the larger players proceed down a monetising route, one that will become more aggressive as valuations rocket and the field thins out.

Moreover, this is one of the drivers (alongside governance, security, and ethics) for a surge in local language models rather than large language models.

To bring this last point to life, I will introduce a pedagogic workflow example. If you were to ask, ‘without any resource constraints, what would be the best pedagogic approach to teaching and learning subject X?’, what might your answer be? If you believe Sal Khan in one of the most-watched TED Talks of 2023, it would be one-to-one tutor-learner interaction ('How AI Could Save Not Destroy Education' - TED Talk). Could AI provide that interaction? I feel certain that AGI could do so, but AI can do it now within a more controlled environment that I suspect many teachers would be more comfortable with.

Let us say you are teaching a 12-week undergraduate module. You have content: the module guide, some online resources, and your lectures, which are most likely recorded. You could make all this available as a local data set to be interrogated by an AI bot. You could even allow it to look at your referenced data to encourage engagement with the original source documentation. You could build your own GPT within OpenAI and take advantage of its ability to ‘hear’ your lectures and ‘see’ your slides, but of course you then open up your data to the world at large and to interaction beyond your chosen data set. Alternatively, you might encode all your data as a PDF (your slides, of course, and transcripts of your lectures) and then use a simple bot such as Ask Your PDF (https://askyourpdf.com/ ) to encourage your students to interact with the material in their own way, in their own time. You could try this now.
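To make the ‘local data set’ idea a little more concrete, here is a minimal sketch in Python (emphatically not the internals of Ask Your PDF or any GPT, which I do not know) of the usual preparation step: gathering module content such as lecture transcripts and slide notes, then splitting it into overlapping chunks that a document bot could retrieve and cite. All module and file names here are hypothetical.

```python
# Sketch only: prepare a 12-week module's content as tagged, overlapping
# text chunks, the form most document Q&A tools work from internally.

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks so a passage spanning a chunk
    boundary is still retrievable from at least one chunk."""
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap  # step forward, keeping a small overlap
    return chunks

def build_corpus(documents: dict[str, str]) -> list[dict]:
    """Tag each chunk with its source document so any answer a bot gives
    can point students back to the original material."""
    corpus = []
    for source, text in documents.items():
        for i, chunk in enumerate(chunk_text(text)):
            corpus.append({"source": source, "chunk_id": i, "text": chunk})
    return corpus

# Hypothetical module content (in practice: transcripts of your recorded
# lectures plus the text extracted from your slides and module guide).
module = {
    "week01_lecture_transcript": "Today we introduce effectuation theory...",
    "week01_slides": "Slide 1: Causation versus effectuation...",
}
corpus = build_corpus(module)
```

From here, the corpus could be exported to a single PDF or uploaded to whichever bot you choose; the point is simply that the teacher, not the tool, decides what goes into the data set.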

 

At this point, some of you will be starting to see the links between the competitive box in our landscape and the ethics box, because governance is a driver of local language models, and we start to worry about copyright and intellectual property more widely (a key concern for start-ups too). Large language models rely on the open availability of data, but this raises the question of who owns that data (as exemplified by the New York Times' current legal efforts to enforce its copyright in this arena). AGI demands open data access to be most effective. Yet in a hyper-competitive environment, where the stakes are high enough for governments and groups of nation-states to try to regulate on the basis of ethics (but most likely in support of their competitive interests), this element could derail and delay AGI (and my 2024 prediction too). My sense, again, is that the speed of development is so fast, like nothing we have seen before, that it trumps the attempts at governance. However, given the ethical challenges raised by AGI, that is a concern too.

For me, the competitive box and the ethics boxes in our model are for a watching brief but not to engage with daily as they are difficult to influence but will need to be responded to in actual workflows over time.


In getting this far with the model we have addressed to some extent the box that considers AI as an entity in itself. It is present to recognise the ‘seeing’ and ‘hearing’ elements of current AI, which have escaped most people due to the focus on language (and of course, these senses tend to be translated in our understanding into language anyway). They are powerful and have knock-on impacts on all types of content providers (thus taking us back to intellectual property concerns). The only element of this blog actually generated by AI is the picture via the Microsoft Co-Pilot mobile app (as I was criticised for citing ChatGPT as a Co-Author last year!). We are likely to see an entirely AI-generated feature film this year, we have already heard AI-generated songs, and if you have not yet tried text-to-app software generation and realised you do not need to learn Python anymore, you really should.

The entity box also highlights the physicality potential in the form of robots as actors of AI, and potentially independent actors with AGI, as well as the means by which users interact with AI (relevant due to its ubiquitous potential). For example, all significant new mobile phone device releases this year are likely to emphasise AI capability, and plenty are looking beyond these rather old-fashioned devices. Elon Musk’s brain-interface start-up Neuralink is working on chips for implanting into humans, but of course, with Elon, that could be for publicity.

The current Musk offering of Grok is amusing but little more than a distraction in the competitive landscape at present (it reminds me of Grot from ‘The Fall and Rise of Reginald Perrin’, and with the same sense of humour – another aged reference, but one worth investigating; the original with Leonard Rossiter, of course. If you visit/revisit, you will note that despite Reggie’s best efforts, Grot became very successful: 'Grot - Reggie Perrin' YouTube).

Again, the entity element is a watching brief and inevitably, developments in this area are influencing start-ups and will influence pedagogy.


The AI as an Academic Discipline category is one I could easily be distracted by, as it is fascinating and my academic sensibilities tell me that I should understand how we got to this point. However, for workflows in the areas I practice in, there is little requirement to appreciate the technicalities of a deeply complex discipline in its own right (we all switch on the light switch, but we do not all know the intricacies of electricity generation).

However, it is good to understand some basic principles that can inform thinking in other areas, and the research coming out of the AI businesses is the closest we have to official user guides and a sense of what is next (e.g. this paper from OpenAI is an excellent primer on weak-to-strong AGI and the superalignment challenge, i.e. humans as masters of machines smarter than themselves: 'Weak to Strong Generalization'). An understanding of AI as being founded on neural networks similar to the human brain, rather than a more binary input-output model, helps inform the concerns around AGI and its management.

I also include measurement in this section, as this is where attempts at comparison of different models and the degree of AGI are at their most robust.

It is also here that you can find papers providing evidence to support key thinking relevant to your particular use cases. For example, in start-ups it was initially thought that AI could not be as creative as humans, and this has been disproven (again, see Ethan Mollick for a useful summary: 'Automating Creativity'; for a particular example related to student entrepreneurial start-ups, I recommend ‘Ideas are Dimes a Dozen: Large Language Models for Idea Generation in Innovation’).

It is also in this category that I include academic endeavours to identify and manage the use of AI in pedagogy. This approach enables me to focus on pedagogic workflows that presume that AI detection and use prevention are fruitless, whilst observing the efforts of institutions, governance organisations and plagiarism detection houses. 


Finally, there are the guides, or ‘how to’ resources, which I have already addressed to some extent. You will see when we get to the use cases that X is my preferred choice of guidance (albeit it is becoming more difficult because of the ‘earn $1000 a month from a faceless YouTube channel’ posts that are dominating now).

I think there are three ways to avoid the confusion of the guides (such as they are). First is to focus on key messengers (I am not claiming to be one, but I have pointed you toward my favourites), and they have probably been messaging for at least 18 months and have 100k+ followers or subscribers.

The second is to largely ignore the influence of AI, other than that driven by your industry, and wait for the winners to become apparent. There are problems with this approach. First, if your organisation does not have a defined AI strategy, it might be left behind (or the whole industry might be superseded). Second, if you are relying on your organisation's legacy choices, what if they are not the winners? (For example, if you rely on G Suite and Google’s efforts with Bard, Assistant, and DeepMind, plus Google’s rumoured intent to replace 30,000 of its employees with AI, and they don’t work out, will you be talking of them with the same fondness and regret as MySpace or Netscape?)


The third way is to look for practical use cases or workflows directly in your area of interest. It is to this that I now turn, in the form of the annual curated set of links for your delectation, and to support fast, impactful output today. If you have read this far, then well done. Moreover, it also proves that one of the workflows, which suggests attention-grabbing headlines for LinkedIn posts generated by AI from the content, has worked. Now that is what I call a work!flow.


Curated Links for Workflows


Please find below a list of links categorised using the framework I have introduced. Each link is my choice from typically a range of 5 to 10 alternatives to address workflows effectively. If there are broken links or you have a tool you use which you think is better, please let me know. Enjoy.

 

1. Entrepreneurship (Start-up)

        

  • Ideation


14 Websites for Starting an Online Business (X) (Note: for X links to work fully you need to be signed into X)

 

  • Marketing

 

  • Business Planning & Project Management

 

  • Pitching & Investors

 

2. Pedagogy


  • Course Creation

 

  • Presentations

 

  • Learning Assistants/Bots

 

  • Video/Audio & Image

 

  • Quizzes

 

  • Miscellaneous resources and interesting reading


