How does AI work? And What Are the Different Types of AI?
Here at the dawn of 2023, you've probably heard it said: "AI might not replace people, but people who don't learn how to use AI will be replaced." Ever since November 2022, when ChatGPT washed over us like a wave from the future, that line has rung true: the time to sink or swim is now.
Forms of artificial intelligence have saturated industries for decades, but ChatGPT, the latest large language model chatbot from OpenAI, may be the biggest thing in tech since the smartphone. Its applications are seemingly endless and its speed is downright spooky. Simply put, AI is here to stay as an integral part of efficient, scalable performance in almost any industry.
The question is, how did AI get to this place? How does it work? And where, exactly, might all this be going?
The History of AI (Briefly)
The idea of artificial intelligence, that machines, computers, and software can behave like the human mind to remember, learn, and apply reason, has lived in our collective psyche for over a century. But it was Alan Turing's 1950 paper (yes, he of the Benedict Cumberbatch portrayal), Computing Machinery and Intelligence, that outlined the possibility that machines could make their own decisions. The problem was that the computers of the day weren't ready to handle the commands, rules, and data needed to implement that decision-making.
Over the course of the next 40 years or so, AI developed in fits and starts. Investment spikes, more sophisticated algorithms, and advances in processing power would ramp up progress, but every breakthrough inevitably came with a slew of complications and missteps. By the 1980s, however, new algorithms were beginning to form the foundation of modern AI. Advances like deep learning techniques and the Japanese government-funded Fifth Generation Computer Project (FGCP) meant that, by the '90s, things were ready to take off.
Since the 1990s and 2000s, the underlying algorithms have stayed fairly similar, but AI capabilities have exploded. That's because we can finally process enough information to train these artificial intelligence models efficiently and comprehensively. Basically, the proliferation of data and computing power has let AI learn from a relative waterfall of information. The result is AI tools that operate at nearly every intersection of business, technology, and data.
So How Does AI Actually Work?
Although there are many different applications of artificial intelligence, a few basic approaches to AI, with plenty of intersections and overlaps between them, underpin myriad functions.
Machine learning set us on the path to the complex, sprawling, high-powered AI applications taking over today’s news cycles. Through algorithms–sets of rules that provide step-by-step instructions for solving a problem or completing a computation–programs can automatically identify patterns and increase their understanding through data analysis. This allows machines to essentially learn from their past experiences and change their behavior accordingly.
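The "learn from past experience" idea can be made concrete with a toy sketch. This is a hypothetical nearest-centroid classifier, not any production system: the model's entire "knowledge" is a pair of averages computed from labeled examples, and feeding it new examples changes those averages, and therefore its future behavior.

```python
# A minimal sketch of "learning from data": a nearest-centroid classifier.
# The only thing the program "knows" is what it computed from past examples.

def train(examples):
    """examples: list of (value, label) pairs. Returns the mean per label."""
    sums, counts = {}, {}
    for value, label in examples:
        sums[label] = sums.get(label, 0.0) + value
        counts[label] = counts.get(label, 0) + 1
    return {label: sums[label] / counts[label] for label in sums}

def predict(centroids, value):
    """Classify a new value by whichever class average it sits closest to."""
    return min(centroids, key=lambda label: abs(centroids[label] - value))

# "Past experience": short messages were chats, long ones were emails (toy data).
data = [(12, "chat"), (8, "chat"), (15, "chat"),
        (120, "email"), (200, "email"), (95, "email")]
model = train(data)
print(predict(model, 20))    # short message -> "chat"
print(predict(model, 150))   # long message -> "email"
```

Add more labeled examples and the centroids shift, which is the whole point: the program's behavior changes with its data, not with its code.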
Neural networks, the foundation of "deep learning" and a subset of machine learning, loosely recreate the neural pathways and synapses of the human brain. One neuron does little on its own, but billions of connections combine to perform complex tasks. When an input enters the network, each neuron computes an output from the signals it receives and, depending on that value, fires it along to the next layer. This lets the system recognize patterns, reach a decision, and produce a final output.
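The "weighted sum, then fire" behavior of a layer can be sketched in a few lines. The weights and layer sizes below are made up purely for illustration; real networks learn them from data rather than having them hand-picked.

```python
import math

def sigmoid(x):
    # Squashes a neuron's weighted input into (0, 1) -- its "firing" strength.
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    # Each neuron takes a weighted sum of every input, adds its bias,
    # and passes the result through the activation before firing onward.
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# Hand-picked weights: 2 inputs -> 2 hidden neurons -> 1 output neuron.
hidden = layer([1.0, 0.0], weights=[[0.5, -0.4], [0.3, 0.8]], biases=[0.1, -0.2])
output = layer(hidden, weights=[[1.2, -0.7]], biases=[0.05])
print(round(output[0], 3))
```

Stacking more `layer` calls is all it takes to go deeper; the interesting part, covered next, is how the weights get adjusted.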
Whether or not that output is correct is another question, which is why neural networks are often supervised. Supervised learning is a machine-learning approach in which humans label sets of training data, giving the algorithm a correct answer for every example. Researchers check the model's outputs against those answers until the learning algorithm performs reliably in unfamiliar situations. Finally, neural networks improve their accuracy through backpropagation, which works the output error backward through the network and adjusts the weight of each connection to reduce it.
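Supervised learning plus backpropagation can be shown end to end on the classic XOR problem, a labeled data set a single neuron famously cannot learn. This is a bare-bones illustrative sketch, not a production training loop: a tiny 2-2-1 network whose weights start random, with the error pushed backward after every example.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Labeled training data (supervised learning): inputs paired with answers.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# One hidden layer of 2 neurons, one output neuron; last weight is the bias.
w_h = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_o = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w_h]
    o = sigmoid(w_o[0] * h[0] + w_o[1] * h[1] + w_o[2])
    return h, o

def loss():
    # Total squared error against the human-provided labels.
    return sum((forward(x)[1] - y) ** 2 for x, y in data)

before = loss()
lr = 0.5
for _ in range(5000):
    for x, y in data:
        h, o = forward(x)
        # Backpropagation: push the output error backward through the
        # network, nudging each weight in the direction that shrinks it.
        d_o = (o - y) * o * (1 - o)
        for j in range(2):
            d_h = d_o * w_o[j] * h[j] * (1 - h[j])
            w_h[j][0] -= lr * d_h * x[0]
            w_h[j][1] -= lr * d_h * x[1]
            w_h[j][2] -= lr * d_h
        w_o[0] -= lr * d_o * h[0]
        w_o[1] -= lr * d_o * h[1]
        w_o[2] -= lr * d_o

print(before, "->", loss())  # error shrinks as training proceeds
```

The checking-against-correct-answers step from the paragraph above is the `loss()` comparison; backpropagation is just the mechanism that turns that error into weight updates.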
Deep learning builds on neural networks to behave in increasingly complex ways. Technically, any neural network with more than three layers qualifies as a deep learning algorithm. These systems depend far less on human intervention and can handle very large data sets. And while they can learn from labeled data, they can also take unlabeled data, apply the patterns they've already learned, and sort unfamiliar inputs on their own.
AI Without Machine Learning (GOFAI)
GOFAI–Good Old-Fashioned AI–doesn’t incorporate machine learning. Humans input rules into a program to produce expert systems. They don’t train themselves or incorporate new information, but they do run processes so humans don’t have to carry them out. In the right situations, these are incredibly efficient. Most chatbots, for instance, don’t learn on their own, but engage huge audiences through prearranged rules and processes. IT teams also use GOFAI to tackle common problems that can be addressed through simple step-by-step solutions.
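A GOFAI-style chatbot can be sketched in a dozen lines. The rules and canned answers below are invented for illustration; the point is that every behavior is written in by a human, checked in order, and never changes without a human editing the rules.

```python
# A minimal sketch of a rule-based (GOFAI) chatbot: no training data,
# no learning -- just human-written rules applied in a fixed order.

RULES = [
    ("hours",  "We're open 9am-5pm, Monday through Friday."),
    ("price",  "Plans start at $10/month; see our pricing page."),
    ("refund", "Refunds are available within 30 days of purchase."),
]
FALLBACK = "Sorry, I didn't catch that. Try asking about hours, price, or refunds."

def reply(message):
    text = message.lower()
    for keyword, answer in RULES:
        if keyword in text:      # fire the first rule whose keyword matches
            return answer
    return FALLBACK              # no rule matched: apologize or hand off

print(reply("What are your hours?"))
print(reply("Do you offer refunds?"))
print(reply("Tell me a joke"))
```

Within its rules it is fast, cheap, and perfectly predictable, which is exactly why this style still powers so many customer-facing chatbots and IT runbooks.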
These AI approaches, among others, have generated a mass of tools that span nearly every industry. From facial recognition software, to manufacturing processes, to patient-monitoring systems, AI helps organizations become more efficient, address problems, and analyze data even more effectively than humans can. The newest one sweeping the tech space (and pretty much every other space) is–of course–ChatGPT.
What is ChatGPT?
ChatGPT is essentially a chatbot front end for an impressive large language model (LLM), GPT-3.5. GPT-3, the language model from OpenAI that really changed the game, came around in 2020, but not in a user-friendly form. After a period of dedicated reinforcement learning from human feedback (RLHF) to fine-tune the model, it became the conversational juggernaut it is today.
ChatGPT isn't perfect, though. The information it trained on isn't completely up to date, it sometimes fabricates quotes and cites sources imperfectly, and its answers aren't always correct. It also can't really produce new information that isn't already reflected somewhere across its trillions of data points. It's fallible, but that doesn't make it any less amazing, and its ability to replicate human inflections and speech patterns is more than a little spellbinding. As an efficient content-generation engine, whether you're writing, programming, or working in other formats, it's a downright powerhouse.
Companies that leverage it and other large language models day in and day out have a dramatic efficiency edge, and software developers can use it to make their own tools more powerful. Halda, for instance, a company that specializes in web content personalization, uses a large language model to build personalized content feeds for each web visitor. That integration streamlines the personalization process so you can build personalized content at scale. It's just one of the ways companies like Halda are leveraging AI and LLMs to create meaningful content and software.
Halda personalizes web content through AI. Learn more about this process here, or explore more articles about AI, ChatGPT, and how they’re changing the way we work.