AI is not good or bad inherently – it is just here among us. What do we do?
Here I share some thoughts with you on AI in history, philosophy and the current digital landscape.
Why you?
You will have other thoughts, opinions and maybe concerns about AI. You will be feeling certain things about this new technology. If you’re worried about whether AI will replace people in the job market, I have no answers here on that front, alas.
Why me?
I have a PhD in the history of reading, diary writing and the self; I have an MA in philosophy, with a particular interest in philosophy of mind and metaphysics; and I currently work in the digital world of user experience design and content. I am not an AI expert in any sense. I’ve just been piling up certain concepts in my head for a while and thought it would be fun to transmit them to you.
If you’ve read this far, you’ve already invested a few seconds of time in my thoughts, and hopefully you’re encouraged and interested enough to spend another couple of minutes with them.
AI in history
I don’t know much about the history of AI as a technology beyond the basics, and I’m unfamiliar with AI in realms other than communications and the written word, so I’ll focus mainly on that aspect. What interests me is the history of information and communications technology.
A brief history of information and communications technology
I’ve recently been reading the book Nexus: A brief history of information from the Stone Age to AI by Yuval Noah Harari. As a ‘big history’ fan, I enjoyed Harari’s exploration of information as a concept and as the central way to understand various political systems throughout time, ranging from democracy to totalitarianism. Not to ride on the author’s coattails, but here is my own, slightly different version of that history, completely simplified and without the political systems angle.
At one stage long ago, humans took their own ideas, thoughts and feelings (‘human stuff’) out of their heads and transferred these onto physical objects. This could have been through drawings, symbols and/or what we understand as words. It could have also been various forms of what we understand to be art. The basic stage of this information and communication transfer is just about thoughts/feelings/ideas from our heads coming out onto some physical object in some form. That’s a broad definition, but it has bearing on my later theories of where we’re going with AI.
I’m focusing specifically on writing now. Writing I’ll just define as language being transferred, initially by hand, from our heads onto some physical substance in a way that another person can understand. Not everyone was doing this from the beginning. Over time more people did it. I would say this was the first information technology.
As we see now with recent information technologies, there was concern and fear when writing became more widespread. Plato, for example, feared (in the Phaedrus – I’m paraphrasing) that the proliferation of writing would make us stupid, because we would no longer remember things in our own heads or share ideas through oral discussion. Think of how the calculator removed the need for ‘mental maths’. Like AI today, writing was once feared for dumbing us down.
For hundreds of years, people were writing, by hand, in different ways, on different materials, and sharing all this across distances.
Next: the invention of printing in various parts of the world at different times. This new technology allowed humans to create many more copies of the stuff being transferred out of their heads onto some physical substance and share it even more easily across distance. Books came about along with a plethora of other printed materials.
Later, people started being able to record more than just writing on physical matter. They could also record sounds and images and transmit them far and wide. Other information and communication technologies manifested in the telegraph, photograph, telephone, radio, television, video and so forth.
Now we come to computers. Computers are in a different category from the rest of these technologies because they were not just about taking stuff from our heads and putting it into some physical format; they were manipulating the stuff in some way and then transmitting that out into the world.
(Sidenote: computers are also not as modern an invention as we may think. Scientists and philosophers had been theorising about and working on them for hundreds of years; see, for example, Ada Lovelace.)
In the current age, in the year 2025, computers are used throughout most parts of the world in a range of forms, for a range of purposes. From these, we have had the inevitable development of AI.
I’m not very up to speed on the history of AI itself, but my basic understanding follows. We started with computers being able to make decisions using binary code: if the computer receives 1, it does x; if it receives 0, it does y. From there the technology expanded to handle more complicated calculations, based on more complicated rules, which led to algorithms. The relatively recent breakthrough has been giving the computer loads of examples, loads of inputs, so that it learns from them and acts accordingly. This could be a very basic definition of intelligence: something comes into our heads or a computer, whether one input or many examples; some decision or choice is actively made to react somehow; and something different goes out into the world.
‘Non-human agents’
So be it. What I find most fascinating is that, from the initial human starting point of stuff being transferred from our heads onto a physical form, we now have an intermediary that takes the stuff transmitted from our heads, makes something entirely new and sends it out into the world, according to an autonomous rendering of its own.
I see computers as intermediaries. Harari sees them as ‘non-human agents’. The concept of ‘agent’ is interesting. What does an agent do, how much autonomy does it have, and how different is it from a ‘human agent’? I guess these are the big questions with AI. How will this agency, this autonomy, impact the world?
I’m not going to explore consciousness
Who or what is conscious is a massive topic and a big debate. I’m not going to go there here. My focus is more on this flow between our human heads/hearts (selves) out into the physical world. AI, at least from my awareness of Large Language Models (LLMs), participates in a similar flow.
Perhaps the key question, therefore, is one of substance, or material, or physical matter. I loved reading the philosopher David Chalmers’s book Reality+, in which he discusses virtual worlds. In the future there may be virtual worlds that operate in place of or alongside our own, and in some ways these already exist. Is this good or bad? Again, I’m not interested in passing value judgement on them, only in the fact that they could exist and perhaps simply do.
He does discuss consciousness, but also the idea of physical matter; for example, our bodies, and how they may relate to a ‘virtual world’. In the question of AI and agency, this seems a fundamental angle. Who controls substance (what I define to be physical matter)? The classic dystopian fear of physical robots taking over the planet and destroying humans has been well rehearsed.
(Sidenote: I think this dystopian framing says so much about humans – the fact that if there were other agents with power above or beyond our own, they would automatically choose to dominate us. Why do we not assume they would be cooperative or benign?)
As I’m focused on the non-virtual world, since that is where human life begins and ends (for now, in the commonplace understanding of human life), who controls substance is the primary question. I believe, for now, humans still control substance. There is a point here about humans being controlled by non-human agents and therefore being manipulated into controlling substance in a way that is not autonomous. Maybe that is where the problem would come from.
A favourite novel of mine is Frankenstein by Mary Shelley. It touches on so many themes, but notably on the intelligence and substance question. ‘Life’ was given to a human substance (parts of dead human bodies) and then it acted autonomously. But it was still a substance like our own bodies and therefore limited in its capacity by that physical format.
So, we return to the beginning
My overall thinking is about human ideas, thoughts and feelings going out into the world in some way and how this works in computers and therefore AI. The key question is the agency, or what I prefer to call autonomy, and how this will impact the physical world. I’m not concerned that ChatGPT will manifest physicality (with the caveat again that AI is much more than just LLMs, and other forms of AI may do this).
What seems to be more worrying now, in the digital communications realm, is not that algorithms are choosing to destroy us or making decisions on their own, but that we’ve set up competitive market forces that have influenced how algorithms work, tapping into some of the basest human qualities to survive. Specifically, I’m speaking of digital communications like social media, which have a huge ability to impact our thoughts, feelings and actions very directly and intimately, even more so than ‘traditional’ media forms.
In a massive twist to the story, so many of us now carry around little computers – our phones, which we are tied to throughout the day in so many ways – that the early development of transmitting thoughts, ideas and feelings from our heads out into the physical world through writing now also operates in reverse. We have this little side agent/’brain’ feeding stuff directly back into our heads during all waking hours.
It all comes down to this
It all comes down to the flow of thoughts, ideas and feelings – ‘human stuff’ (still not wanting to get into consciousness) – within the world of physical substance. How will it play out? Who knows. It’s not simply ‘good or bad’. It’s all just fascinating.