I’ve been reading and thinking a lot about the future of the internet in light of AI language models like ChatGPT and AI image generators, particularly with the new release of ChatGPT-4 (currently subscription-only) and its increased ‘learning’ capacity. The new version can take in information from images as well as text, and can “caption, classify, and analyze images” (Buckler, 2023). In particular, what does this next phase of internet and computer literacy look like?
We’re already seeing human job losses due to the use of AI. Right now those might be few, but they are happening. Case in point – the other day I was reading a news article with an AI-generated image as its accompanying visual. The article itself was NOT about AI, but about strategies to stay active while aging. In previous years (even a few months ago), that image would have had to be created by a human, either as a photo or an illustration, using analogue or digital means.
There are a couple of thoughts that spring to mind.
One – the article was about healthy human aging, but the image was AI-generated. There was no handsome, strong, long-haired, octogenarian surfer (with too many fingers) photographed as an ACTUAL EXAMPLE for the purposes of the article. Someone who is not able to spot an AI image might think this person is real, and may compare themselves to, or strive to be like, this completely made-up image of a person. (This has been happening with airbrushing and Photoshopping for decades. . .but those image manipulations were still done by – and/or of – people.) Two – it adds yet another human-like image that is not actually of a human to the AI-scrapeable imagery on the web. Bear with me, I’ll return to this second point in a moment.
As an internet-reading public, we are increasingly reading articles produced in whole or in part by AIs (more human job losses – or at least a shrinking of the possible work pool). While that’s problematic from the job-loss lens, most of the information the AIs are accessing is human-generated (although of varying bias, quality, veracity, and accuracy). Over time, though, AIs will probably be drawing on an increasingly AI-generated pool of information. I’m curious whether AIs accessing primarily AI-generated information will create a kind of ouroboros internet of information that doesn’t really say anything – like this:
In today’s fast-paced world, it’s important to stay ahead of the curve and think outside the box. With cutting-edge technology and innovative ideas, we can push the boundaries and unlock new possibilities. By leveraging our strengths and working together, we can achieve success and make a real impact. So let’s take a step forward and embrace the future, because the sky’s the limit!
ChatGPT, personal communication, March 24, 2023

I’m already seeing my dear, intelligent, and observant friends re-posting things like the photo (right) generated by an AI, with various captions. Sometimes it reads, “Nursing home residents in 2050”; sometimes I see it with, “when all us old ladies with tattoos are sitting around the nursing home, we will all look like wet coloring books”. While this is fun, I always wonder if the person posting has noticed that these are not actually people who have dressed up and posed for these pictures. And it occurs to me that the more these images are shared outside of their AI-generated context, on social media and elsewhere, the more they become part of the larger pool of information that AIs like ChatGPT are drawing from.
So, returning to the AI-generated image piece – with so very many images proliferating onto the web from people requesting AI-generated pictures, AND the AI now being able to analyze images, will it eventually tell me that most humans have <some non-5 number> of fingers? At what point does the internet become its own circular logic? Will we choose – are we already choosing – to limit the datasets that AI-type language and image generation software pull from?
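To make that ouroboros wonder a bit more concrete, here’s a toy sketch in Python. It’s purely hypothetical – real models aren’t trained by anything this simple – but it caricatures the loop: each “generation” republishes blends (here, plain averages) of items from the previous pool, the way a model smooths over its training data, and the pool’s variety steadily drains away:

```python
import random
import statistics

random.seed(0)

# Generation 0: "human-made" data with real variety.
pool = [random.gauss(0, 1) for _ in range(200)]

def regenerate(pool, size=200):
    """Caricature of an AI retrained on its own output: each new item
    is a blend (a plain average) of two items from the current pool."""
    return [(random.choice(pool) + random.choice(pool)) / 2 for _ in range(size)]

# Track the spread (standard deviation) of the pool across generations.
spreads = [statistics.stdev(pool)]
for _ in range(20):
    pool = regenerate(pool)
    spreads.append(statistics.stdev(pool))

print(f"generation 0 spread:  {spreads[0]:.3f}")
print(f"generation 20 spread: {spreads[-1]:.3f}")
```

After twenty rounds of blend-and-republish, the spread is a tiny fraction of the original – the pool still “says” things, but they’re increasingly the same thing, which is roughly the flavourless paragraph quoted above.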
I asked ChatGPT what it thought, and it had this to say:
The internet is not always a reliable source of information. With the rise of social media, fake news and misinformation can spread rapidly and widely, often with the help of AI-generated content. This presents a challenge for AI, as it may struggle to distinguish between accurate and false information, leading to inaccurate conclusions and decisions.
ChatGPT, personal communication, March 24, 2023
‘Fake news’ is an old term, much popularized as a glib characterization of news that a particular former US President wanted to reject and discredit. Misinformation and fake news are real things we’re wading through every day as we try to verify our sources and find reliable, credible information. It’s getting harder and harder to convince someone of a real phenomenon (the roundness of the Earth and Moon, for example) when they’re confronted with the sheer volume of contradictory ‘sources’ available on our home and work computers, and on the phones in the palms of our hands.
I feel like we’re living in a rich and complex time right now, one in which we’re “building the plane while we fly it” (for lack of a better reference: Kirkness, 2020). There are gains and losses that come with this approach, for sure. I love so many aspects of AI. I love predictive text in my emails, the way I can use conversational language to create browser searches, and that we can use AI tools to support learners in a variety of amazing ways. I want to be using, and supporting others to use, AIs in ways that are informed, transparent, and truly helpful. I want to build a better internet of well-vetted information – to keep the good parts of this behemoth of knowledge we’ve been actively (and passively, thank you Bachelor Frog and a million other memes and gifs) co-constructing since 1983.
Well, if you’ve made it this far, thanks for sticking with me. Certainly more questions than answers in today’s notes! I’d love to hear what you think – I know the topic is SO much bigger than what I’ve touched on and I’d love to continue having conversations about it all.
References:
Buckler, N. (2023). ChatGPT-4 – What do we know about the latest development? The Chainsaw. https://thechainsaw.com/business/chatgpt-4-vs-chatgpt-3-latest-development/
Kirkness, E. (2020). The art of building the plane while you fly it. Ayden Creative. https://aydencreative.com/the-art-of-building-the-plane-while-you-fly-it/
This article was written with some assumptions that aren’t true at this time. The assumption that language-generating AI models are generally using the open internet as a live dataset is not true – they are working from closed training datasets. My wonder here really is just that. . . wonder.