(Original Article on Medium.com)
https://ginigrahamscott.medium.com/how-ai-got-started-b8d3e97fb3f8

 

Many people I’ve met think AI got started at the end of 2022, when ChatGPT was first introduced in November of that year and AI art platforms such as Midjourney, Stable Diffusion, and DALL-E were made available to the general public. But AI actually has a long history, though people didn’t always call it that, as when it was used in public surveillance systems or in digital assistants like Siri, designed to answer simple questions, make recommendations, and schedule appointments.

I also remember the early days of AI from my time as a freshman at the University of Rochester in 1959, when a student showed off a robot with legs like a dog’s. He used a remote-control device to direct the robot to walk across the front of the classroom and come back, and he said he was directing it using artificial intelligence. Beyond that, though, AI wasn’t something that people talked about.

So how did AI develop? Given AI’s huge knowledge base today, I asked ChatGPT 4.0.

In my query, I wrote:

“Can you tell me about the history of artificial intelligence for an article I’m writing? When did it first get started? Who were the first people to develop it? How did they do it? What did artificial intelligence do in the early days, back in the 1950s or 1960s or even earlier? And what recent developments led to the explosion of interest in AI systems for writing, art, and business?”

Here’s what ChatGPT said:

 

A Short History of the Early Days of AI

The history of artificial intelligence (AI) is a rich narrative that spans centuries, from early ideas about intelligent machines to modern developments in machine learning and neural networks that are revolutionizing writing, art, and business. Here’s an overview of key milestones:

 

Early Conceptualizations of AI (Pre-1950s)

 

Before AI became a scientific discipline, the idea of intelligent machines had long been part of human imagination, from myths like the Greek Talos (a giant bronze automaton) to stories in literature. However, it wasn’t until the 20th century, with the advent of computers, that these concepts could be explored scientifically.

Turing’s Contribution (1950): Alan Turing, often considered the father of modern computer science, laid the groundwork for AI in his 1950 paper, “Computing Machinery and Intelligence.” He introduced the famous “Turing Test,” which sought to determine whether a machine could exhibit intelligent behavior indistinguishable from a human’s.

 

The Birth of AI as a Discipline (1950s)

 

The field of AI formally began during the mid-20th century.

Dartmouth Conference (1956): The official birth of AI occurred at a summer workshop organized by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon at Dartmouth College. McCarthy coined the term “artificial intelligence.” The goal of the conference was to explore the idea that human intelligence could be simulated by machines. While this ambitious goal was far from achieved at the time, it set the agenda for decades to come.

 

Early AI Programs (1950s-60s):

 

The Logic Theorist (1956): Developed by Allen Newell and Herbert A. Simon, this program is often considered the first AI program. It was able to prove mathematical theorems from “Principia Mathematica,” demonstrating symbolic reasoning in machines.

ELIZA (1966): Created by Joseph Weizenbaum, ELIZA simulated a human therapist by matching patterns in a user’s typed words and reflecting them back as questions. While limited in depth, it was an early landmark in natural language processing.
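ELIZA’s core trick, pattern matching with canned response templates, can be illustrated in a few lines of modern Python. This is a minimal sketch invented for this article, not Weizenbaum’s original DOCTOR script:

```python
import re

# A few ELIZA-style rules: a regex pattern plus a response template
# that reuses the matched text. These rules are invented for this
# sketch; the original DOCTOR script was far larger.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r"my (.*)", "Tell me more about your {0}."),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = re.search(pattern, text.lower())
        if match:
            return template.format(match.group(1))
    return "Please go on."  # default when no rule matches

print(respond("I am worried about my exams"))
# -> Why do you say you are worried about my exams?
```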

AI in the Early Days (1950s-1970s)

AI research in the 1960s and 1970s focused primarily on symbolic AI and problem-solving using programmed logic, much of it following the agenda set at the Dartmouth Conference.

Perceptrons and Early Neural Networks: Frank Rosenblatt developed the Perceptron in 1958, a simple neural network model designed to simulate the brain’s neurons. Though the model had limitations, it laid the foundation for future neural network research.
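To give a sense of how simple the original model was, here is a minimal sketch of Rosenblatt-style perceptron learning in Python; the learning rate, epoch count, and AND-gate dataset are illustrative assumptions, not Rosenblatt’s original hardware:

```python
# Minimal perceptron learning sketch. Hyperparameters and the
# AND-gate dataset are illustrative assumptions.
def train_perceptron(samples, labels, lr=0.1, epochs=20):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias (threshold)
    for _ in range(epochs):
        for (x1, x2), target in zip(samples, labels):
            # Step activation: fire (1) if the weighted sum is positive
            output = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - output
            # Rosenblatt's rule: nudge weights toward the correct answer
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

# Learn the logical AND function from four examples.
w, b = train_perceptron([(0, 0), (0, 1), (1, 0), (1, 1)], [0, 0, 0, 1])
print(w, b)  # weights for a separating line where only (1, 1) fires
```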

SHRDLU (1968): Developed by Terry Winograd, SHRDLU was an early AI system that understood natural-language commands well enough to manipulate blocks in a virtual world. It demonstrated basic reasoning and was a leap forward in AI’s ability to understand language.

Despite these early successes, AI soon faced obstacles. By the mid-1970s, progress in AI research stagnated due to the limitations of computing power and overly optimistic expectations, a period often referred to as the “AI Winter.”

 

Renewed Progress (1980s-2000s)

 

The AI field saw a resurgence in the 1980s and 1990s, driven by advances in computational hardware, more sophisticated algorithms, and the development of expert systems — programs designed to mimic human expertise in specific domains (e.g., medical diagnosis).

Expert Systems: AI began to find practical applications during this time. Programs like MYCIN, designed for medical diagnostics, demonstrated the power of rule-based AI systems.
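The flavor of such rule-based systems can be conveyed with a toy forward-chaining engine. The “medical” rules below are invented for illustration and bear no relation to MYCIN’s actual knowledge base:

```python
# Toy forward-chaining rule engine in the spirit of 1980s expert
# systems. The rules are invented; MYCIN's real knowledge base held
# hundreds of expert-written rules with certainty factors.
RULES = [
    ({"fever", "cough"}, "possible respiratory infection"),
    ({"possible respiratory infection", "chest pain"}, "recommend chest X-ray"),
]

def infer(facts):
    facts = set(facts)
    changed = True
    while changed:  # keep applying rules until no new facts appear
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"fever", "cough", "chest pain"}))
# chains two rules to reach "recommend chest X-ray"
```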

Backpropagation and Neural Networks (1986): In the mid-1980s, the rediscovery of backpropagation — an algorithm for training neural networks — by Geoffrey Hinton and others allowed for more effective machine learning. This led to advancements in fields like computer vision and speech recognition.
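At its heart, backpropagation uses the calculus chain rule to adjust each weight in proportion to its contribution to the error. The sketch below shows the single-neuron case, the building block that backpropagation chains through many layers; the dataset and hyperparameters are illustrative assumptions:

```python
import math

# Gradient-descent training of one sigmoid neuron: the one-layer
# building block that backpropagation extends to deep networks.
# Dataset and hyperparameters are illustrative assumptions.
def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

w, b, lr = 0.0, 0.0, 0.5
data = [(0.0, 0.0), (1.0, 1.0)]  # map input 0 -> 0 and input 1 -> 1

for _ in range(1000):
    for x, target in data:
        y = sigmoid(w * x + b)
        # Chain rule for squared error: dE/dw = (y - target) * y * (1 - y) * x
        grad = (y - target) * y * (1 - y)
        w -= lr * grad * x
        b -= lr * grad

print(round(sigmoid(b), 2), round(sigmoid(w + b), 2))  # move toward 0 and 1
```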

The Explosion of Modern AI (2010s-Present)

 

The modern AI revolution is largely driven by advances in three key areas: data, computational power, and algorithms.

 

Deep Learning (2010s): Deep learning, a subset of machine learning, exploded in popularity around 2012, with the breakthrough of AlexNet in image recognition competitions. Deep learning is based on artificial neural networks with many layers, capable of extracting high-level features from data. This allowed for major progress in fields like computer vision, speech recognition, and natural language processing (NLP).

GPT and Generative AI: One of the most significant recent developments is the rise of large language models like OpenAI’s GPT (Generative Pre-trained Transformer) and its successors. These models, trained on vast amounts of text data, can generate human-like text and perform tasks such as summarizing information, writing essays, and even programming.

The release of GPT-3 in 2020 marked a watershed moment in AI’s ability to understand and generate language.

AI in Art and Creativity: The application of AI to art and creative fields saw a breakthrough with models like DALL-E and Midjourney, which generate images from textual descriptions. This sparked an explosion of interest in AI-generated art, leading to debates about creativity and the role of machines in creative processes.

AI in Business and Industry: AI has been rapidly adopted across industries for applications ranging from automation, decision-making, and customer service (e.g., chatbots) to complex data analytics and predictive modeling. Companies like Google, Amazon, and Microsoft have integrated AI into their products, while startups have used AI for everything from personalized recommendations to drug discovery.

Factors Behind the AI Boom

 

The current explosion of interest in AI is due to several factors:

 

Massive Data Availability: The internet and digital platforms generate vast amounts of data, which is essential for training AI models.

Increased Computational Power: The development of GPUs (Graphics Processing Units) and specialized AI chips has dramatically sped up the training of AI models.

Algorithmic Advances: The refinement of algorithms like neural networks and reinforcement learning has enabled breakthroughs in tasks once thought impossible for machines.

Open-Source AI: Platforms like TensorFlow and PyTorch have democratized AI, making it accessible to a broader range of developers, which in turn has accelerated innovation.

 

Conclusion

 

AI has progressed from early symbolic systems in the 1950s and 1960s to the cutting-edge deep learning models of today. The field’s trajectory is one of both conceptual breakthroughs and technological advancements, culminating in today’s widespread applications in language, art, business, and more. What started as an academic curiosity has now transformed into a cornerstone of modern technology, driving forward the next wave of innovation across multiple industries.

So there you have it: what AI considers the major developments in its own history. The general public has been largely unaware of these developments, though some products already used AI, such as digital assistants and the surveillance and face-recognition systems that some law enforcement agencies adopted, igniting controversy over invasions of privacy.

Then the world of AI exploded in the 2020s, which I will describe in my next article on AI’s history. I decided to explore the history of AI because I have embraced it in a big way over the last two years, ever since ChatGPT and various art platforms became available to the general public.

Besides incorporating AI as a starting point in my work as a writer, film producer, and ghostwriter, I have created and produced a film on the development of AI, AI Evolution, and I have written a series of books about how modern-day scammers are using AI: “The Big Con,” “I Was Scammed,” and “Scams in the Digital Age,” published by American Leadership Books and turned into two films so far: Conned: A True Story and Con Artists Unveiled.

The third film in the series will be filmed in February 2025.

 

For more information and to schedule interviews, email or call:

Karen Andrews
Executive Assistant
Changemakers Publishing and Writing
San Ramon, CA 94583
(925) 804–6333
changemakerspublishingandwriting.com