Let’s start the second part off with a bombshell: 180 million.
As of March 2024, there are a staggering 180 million active users of ChatGPT worldwide.
It is odd to start an article with a number, of all things. A number like that is usually reserved for a conclusion – a few figures strung together at the very end to set in stone AI’s prowess as of late. Besides, it’s not like I’m unfamiliar with numbers and their place in writing – all good analytical articles have buried within them somewhere a truckload of data crammed into a small space, to the detriment of written cohesion and reader retention – nor am I for some reason shy about spending time talking about them. No. The second part starts with a number so I can avoid bringing up even more numbers down the line. You see, a conventionally written part-2 Sora breakdown has to discuss the history of AI in a way that makes numbers unavoidable as far as analyses are concerned, as every development in the technology comes with a set of numbers that helps visualize the scale of said development, so readers can have an idea of just how much the new outpaces the old. The problem is that despite the inherently exciting nature of AI and its history, I cannot bring myself to jot down numbers and relay them to you in the way I just described. It’s uninteresting and requires even further explanation to contextualize the actual impact of those numbers, which makes an already long article even longer.
Fortunately for you and the short-form-content-perforated mass of pulsating fat and neurotransmitters you keep inside the skull cage you call a brain, I am not a conventional writer. I do not believe the average reader knows what ‘parameters’ mean in the context of AI or why these ‘parameters’ matter, and that’s alright – you don’t have to. The more practical – and frankly, more interesting and intuitive – way of translating numbers into palatable chunks of information is to compare them, across history, with real-life ‘things’ of all types (‘things’ we’re all familiar with to some capacity) in an exercise in creative writing and large-scale research. It’s more fun that way anyway, and I’d finally have an excuse to stop making my sentences the length and complexity of the Great Wall of China every time I attempt to explain something.
Before we get to the modern history of AI, we’ll first have to establish the groundwork – that being the concept of artificial intelligence itself and the vessels that allow it to function in the real world – and finally provide you with an actual, clear definition of AI, something so large and broad that it very well warrants its own article flaunting a ridiculous wordcount sometime down the road. For now, though, I’ll try my absolute best to summarize AI as it stands right now. When I mention AI in my articles, I don’t mean all AI. Instead – and this shouldn’t be a surprise considering the topic at hand – it is generative, deep-learning AI that I reference: a sub-field of machine learning (itself a sub-field of AI) built on neural networks, something that will become relevant later. A program doesn’t have to be sapient or even sentient to be considered AI. AI is any program capable of sorting, consolidating, and using information in some capacity. It is artificially intelligent, even without the ‘intelligence’ we’d been taught to expect from human beings. Search engines, web filters, and plenty of hardware are all AI by this definition, as what they do daily happens without conscious input from humans. Generative, deep-learning AI is a minuscule slice of the AI pie that has only recently been given attention by the general public, so it makes some sense that misinformation should run rampant when it comes to what AI is, what it does, and what it can potentially do – and so this section works to inform and dissolve some of that misinformation. With that out of the way, it’s time to discuss the history of AI as a whole and how generative AI came to be – a far more manageable feat now that the groundwork has been laid.
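To make that deliberately broad definition concrete, here’s a minimal sketch (in Python, with a made-up word list – not any real product’s logic) of the kind of rule-based web filter that counts as AI under it. It sorts and acts on information with no human in the loop for any individual decision:

```python
# A toy content filter: 'AI' in the broadest sense described above.
# It consolidates information (page text), sorts it against known rules,
# and acts on it (block/allow) without conscious human input per decision.
# The blocklist and pages are invented examples for illustration only.

BLOCKED_WORDS = {"spam", "scam", "malware"}

def classify(page_text: str) -> str:
    """Return 'block' if the page contains any blocked word, else 'allow'."""
    words = set(page_text.lower().split())
    return "block" if words & BLOCKED_WORDS else "allow"

pages = [
    "totally legitimate crypto scam inside",
    "a recipe for lentil soup",
]

for page in pages:
    print(classify(page), "->", page)
```

The point isn’t the code’s sophistication – it’s that even a dozen lines of fixed rules clears the bar of ‘sorting, consolidating, and using information’, which is exactly why the umbrella of AI covers so much more than the generative systems this article is about.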
The first example of ‘interactive’ AI technically arrived in the 1950s, but the concept of artificial intelligence has existed for millennia. It does make some semblance of sense – the idea that we as human beings aren’t all that unique in the grand scheme of things is less an opinion and more a confirmable truth of life at this point, and even the philosophers of yore knew of such perspectives. If what makes us human (our sapience, intelligence, metacognition, introspection, etc.) isn’t unique, it stands to reason that it could be replicated in some way, shape, or form and subsequently injected into, say, a lifeless statue carved out of wood and limestone to grant it life and humanity (if you couldn’t already tell, that period in history is when the phrase ‘make friends’ stood as quite the literal statement). Despite the similarity between these archaic beliefs and the more refined takes of modern society, the idea of granting something human-like intelligence wasn’t coined ‘Artificial Intelligence’ until the 1950s, around the time Turing developed his famous (and recently more topical than ever) Turing test. These ancient ideas laid the groundwork upon which the modern iterations of artificial intelligence would build.
Long cut to early-1900s America – where everything this subject touches on (including the locomotives and planes used to prove a point back in the first part) seems to originate – and we see an influx of popular media centered around the idea of ‘artificial humans’, likely a result of the industrial boom granting America an uncountable number of revolutionary new technologies that mingled with creativity in philosophical thought. A simple yet intriguing concept. Twenty-one years later, in a stage play (Rossum’s Universal Robots, or RUR as it is sometimes known) by Czech playwright Karel Capek, a single word that would eventually become synonymous with technology and the future of humanity was born – Robot. Robots… artificial beings of incredible intelligence confined to a metallic vessel, forced to move in a – for lack of a better word – ‘robotic’ fashion comfortably distinct from humans. Looking back, both their design and mannerisms felt less like a take on manufactured life and more like a mere mockery of the human form – but only when measured against the capabilities of modern-day robots. This stalemate between fantasy and reality wouldn’t last much longer, however, as we cut to 1929, when the first Japanese robot was made.