A Brief History of Artificial Intelligence
The idea of artificial intelligence is thousands of years old, with Greek mythology containing references to giant bronze automatons built by the god Hephaestus. Over the following millennia, philosophers, mathematicians, religious scholars, and authors wrote extensively about the idea and its potential moral and practical ramifications.
Despite all this intellectual attention, AI had to wait until the early 1950s to come into being. These first artificial intelligences were rudimentary by today’s standards and could only play simple games like checkers or solve basic chess endgame problems.
Within just 20 years these basic artificial intelligences were supplanted by increasingly capable models. In the mid-1970s, an AI was even developed that could diagnose bacterial infections in humans and offer therapeutic suggestions. This AI, called MYCIN, was regarded with skepticism by the medical community and was never used in practice, despite achieving a better diagnosis rate than non-specialist physicians.
In 1997, Deep Blue defeated Garry Kasparov, then the reigning world chess champion, proving that artificial intelligences could beat humans at complex tasks. This accomplishment may seem trivial now – but it is worth remembering that the very first chess-playing AI had been created only 40 years prior.
Since Deep Blue’s victory, AI has mastered other domains, including self-driving vehicles (NASA’s early Mars rovers Spirit and Opportunity both used AI to navigate autonomously), the Chinese board game Go (Google DeepMind’s AlphaGo), and image identification (Google Lens). While identifying pictures of cats and playing board games might not sound impressive, the complexity of these tasks may surprise you.
After each player has moved just three times in chess, the game can already have unfolded in roughly 120 million different ways, and after five moves apiece that number jumps to 69,352,859,712,417. A typical game of chess is thought to offer 10^120 possible variations – a number that is mind-bogglingly huge, given that the entire visible universe is thought to contain only about 10^80 atoms.
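Python’s arbitrary-precision integers make it easy to sanity-check scales like these. A quick sketch, using the two commonly cited estimates from above:

```python
# Commonly cited estimates: ~10^120 possible chess games vs.
# ~10^80 atoms in the visible universe.
chess_games = 10 ** 120
atoms_in_universe = 10 ** 80

# Even if every atom in the universe held its own universe's worth of
# atoms, we would still fall short of the number of possible games.
ratio = chess_games // atoms_in_universe
print(ratio == 10 ** 40)        # True
print(len(str(ratio)) - 1)      # 40 zeroes follow the leading 1
```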
The game of Go laughs at the child-like simplicity of chess, with each game of Go presenting on the order of 10^360 possible variations – a level of complexity that simply doesn’t mesh with our human understanding of scale. What makes AlphaGo Zero’s mastery of this game even more remarkable is that the AI was never shown human games or provided with strategies – starting from nothing but the rules, it taught itself how to play over the course of 40 days, soundly defeating previous AI models and all human opponents it faced.
And while Google Lens’ typical tasks of reading QR codes, translating text, and identifying dog breeds may not seem that impressive, by 2020 the program’s AI could recognize 15 billion items from a picture alone. In 2017 that number was just 250,000 – so it is clear that this technology is developing at breakneck speed.
With 70 years of experience creating ever more powerful artificial intelligences, it should come as no surprise that the latest versions are almost scarily capable.
The Different Types of Artificial Intelligence
When we talk about AI, it is easy to refer to these systems as a collective – unified in the sense that all of them are computer-based intelligences. In reality, there are several distinct methods used to create AIs, and each comes with its own advantages, disadvantages, and capabilities.
In brief, here are a few common types of AIs:
- Rule-based systems: These AI systems use a predefined set of rules or decision trees to solve problems. They are particularly effective for tasks with a limited number of possibilities and clear rules.
Example: The chess-playing AI, Deep Blue.
- Expert systems: A type of rule-based system designed to mimic human expert decision-making in a specific domain. These systems rely on a knowledge base of facts and a set of rules or heuristics to solve problems.
Example: The early medical diagnostic AI, MYCIN.
- Artificial Neural Networks (ANN): Inspired by the structure and function of the human brain, ANNs consist of interconnected nodes (neurons) that process information and learn from data.
Example: Image identification AIs like Google Lens.
- Deep Learning: A subtype of ANNs that uses many layers of neurons to learn complex patterns and representations in data.
Example: OpenAI’s ChatGPT and Google’s Bard are chatbots which utilize deep learning.
- Reinforcement Learning (RL): An approach to AI where the intelligence learns to make decisions by interacting with its environment, receiving feedback in the form of rewards or penalties.
Example: Google DeepMind’s Go-playing AI AlphaGo Zero used reinforcement learning to master the game through pure self-play, without human game data or guidance.
Ultimately, this list is just scratching the surface and many implementations of artificial intelligence use a combination of methods rather than relying on just one.
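To make the first category concrete, here is a toy rule-based system in Python. The rules and the medical-flavored domain are invented for illustration only – real systems like MYCIN used hundreds of expert-authored rules – but the core mechanism is the same: check conditions in order and fire the first rule that matches.

```python
# A toy rule-based system: each rule pairs a condition with a conclusion,
# and the engine returns the conclusion of the first rule that matches.
RULES = [
    (lambda s: s["fever"] and s["cough"], "possible flu"),
    (lambda s: s["fever"] and not s["cough"], "possible infection"),
    (lambda s: not s["fever"], "likely not serious"),
]

def diagnose(symptoms):
    for condition, conclusion in RULES:
        if condition(symptoms):
            return conclusion
    return "no rule matched"

print(diagnose({"fever": True, "cough": True}))   # possible flu
print(diagnose({"fever": False, "cough": True}))  # likely not serious
```

The appeal of this approach is transparency – you can always trace exactly which rule fired – but it only works when the domain’s rules can be written down explicitly, which is why the field moved toward learning-based methods.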
What Makes ChatGPT Special
By this point it should be obvious that artificial intelligence is not new. It has been with us for over 70 years and has seen commercial use in a variety of fields for the past 40.
However, if the breathless reporting about ChatGPT and other Large Language Models is anything to go by, something has changed. ChatGPT, as its name suggests, is built on an artificial intelligence model known as a Generative Pre-trained Transformer.
Transformers are a subset of deep learning, which is itself a subset of artificial neural networks. “Pre-trained” refers to the process by which ChatGPT’s model was trained on a vast body of text – in this case containing hundreds of billions of words. And “generative” refers to the model’s capacity to create new content.
The corpus of text that ChatGPT is trained on is where the model gets the descriptor “Large Language Model,” or LLM. There is no agreed-upon definition of an LLM – what counts as “large,” and even what sort of AI underpins the model, varies from model to model – although most, like ChatGPT, are transformers.
In the case of ChatGPT and other LLMs, the new content is text – but generative AI models can also create images (DALL-E) or even music (Jukebox). This is a radical departure from earlier types of AI, which were programmed to perform very specific tasks like playing chess or diagnosing illness. ChatGPT and AIs like it are capable of creating novel, human-like responses to user queries.
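The word “generative” can be made concrete with a toy model. The sketch below is a bigram language model – vastly simpler than a transformer, and the tiny corpus is invented for illustration – but it shows the same basic loop an LLM runs: predict a plausible next token from what came before, append it, repeat.

```python
import random
from collections import defaultdict

# "Training": count which word follows which in a tiny corpus.
corpus = "the cat sat on the mat and the cat ran".split()
next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

# "Generation": repeatedly sample a plausible next word, just as an LLM
# repeatedly samples a plausible next token (at a vastly larger scale).
def generate(start, length):
    words = [start]
    for _ in range(length):
        candidates = next_words.get(words[-1])
        if not candidates:  # dead end: no word ever followed this one
            break
        words.append(random.choice(candidates))
    return " ".join(words)

print(generate("the", 5))
```

Scale this idea up from counting word pairs to a transformer with billions of parameters trained on hundreds of billions of words, and you arrive – very roughly – at how models like ChatGPT produce text.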
OK, But Is It Any Good?
When ChatGPT first hit the scene in November 2022, it was met with considerable fanfare, but also derision and mockery. The results it produced often sounded awkward and unnatural – leading many to dismiss the technology as nothing more than a silly toy. Criticisms aside, the New York Times reported that this version of ChatGPT could write prose with a fluency approaching that of humans.
The claims of fluency were debatable, but even if the text it produced couldn’t yet replace a human writer, ChatGPT proved to be a powerful brainstorming tool and could be used to quickly generate interesting ideas. Whether you were looking for novel business concepts, a catchy marketing slogan, or a premise for a science fiction novel, ChatGPT could help.
Note: While ChatGPT was publicly released in 2022, OpenAI has been releasing generative AIs since 2018.
This first public release, built on a model referred to as GPT-3.5, was superseded by GPT-4 just four months after ChatGPT was made available to the public. It is this latest iteration that truly merits the attention – and sometimes hyperbole – that the press has been lavishing on AI.
GPT-4 writes uncannily good copy – easily rivaling the average person’s writing abilities and delivering results in a fraction of the time.
Other AIs like Midjourney, a generative AI that creates images, have proven capable of making striking and award-winning works of digital art. Even photography isn’t safe – with an AI-generated ‘photograph’ created with DALL-E 2 recently winning a category at the Sony World Photography Awards. The results speak for themselves – AI can already go toe to toe with human creativity.
The Contenders For AI Supremacy
OpenAI is probably the company getting the most attention right now, with its ChatGPT and DALL-E generative AIs having reached the market before most of their competitors. However, OpenAI is by no means alone in the field, and other major tech companies are racing to release offerings of their own.
Microsoft – The Money Behind OpenAI
You may wonder where Microsoft finds itself in this battle – and in fact it is sitting squarely behind OpenAI and its powerful ChatGPT system. In 2019 Microsoft invested $1 billion in OpenAI, and it doubled down with a $10 billion pledge – and a 49% stake in the company – as of January 2023.
Microsoft has already added ChatGPT functionality to its Bing search engine, as well as its GitHub Copilot AI which helps programmers create and edit code.
Alphabet’s AI Entry: Google Bard
Previously, Alphabet, the parent company of Google, could arguably have been considered the industry leader in AI, with Google Lens offering robust image identification that was hard to rival. Google’s answer to ChatGPT, called Bard, was first shown to the world in February 2023 – but was dealt a serious setback when the AI made an embarrassing gaffe during the release event.
Access to Bard is currently limited to those who sign up for a waitlist. Like Microsoft’s Bing chat, it functions as a search engine crossed with a generative AI.
Amazon’s Additions to AWS: CodeWhisperer & Bedrock
Amazon’s AI offerings are not as well known as ChatGPT or Bard, but the company’s foray into generative AI goes back to June 2022. CodeWhisperer is part of Amazon Web Services (AWS) and, like GitHub’s Copilot, is designed to help programmers write and edit code.
In April 2023, Amazon announced Bedrock, another AWS service that will allow customers to build their own generative AI applications using Amazon’s cloud services.
BLOOM – An Open Source AI Alternative
While most of the contenders in the generative AI scene are well-established tech giants, these mega corporations don’t have a monopoly on the technology. BLOOM, or the BigScience Large Open-science Open-access Multilingual Language Model, is a free Large Language Model that was built collaboratively by over 1,000 AI researchers.
BLOOM is unique in that it is free to use and open source – although it does require significantly more setup than its corporate alternatives.
How Generative AI Is Currently Being Used
The possible uses for generative AI are potentially endless – particularly as the technology continues to develop. However, that does not mean that its uses today are purely limited to the hypothetical or recreational. Some early adopters have found generative AI systems to be helpful for several everyday tasks including:
Content & Copy Creation
From Facebook ads to blog posts, ChatGPT and its AI-brethren are proving themselves more than capable of quickly churning out compelling copy. The results often require a little massaging to achieve the voice that matches your desired brand identity, but even with this extra step, AI usually shaves valuable time off of the process.
Programming Assistance
Generative AIs have proven themselves to be powerful programming aids – quickly writing code snippets in dozens of programming languages. As with content writing, the results may not be perfect right out of the box, but they often require only a little adjustment to yield workable code.
Email Drafting
For many small businesses, responding to emails is quotidian drudgery that simply must get done. AI can slash the amount of time you spend crafting emails and can reliably produce professional-sounding messages.
Research & Summarization
While ChatGPT and other Large Language Models often work from data that may not be entirely up to date, they are surprisingly capable of summarizing an incredibly wide range of topics. If your subject matter is newer than the underlying language model, most generative AIs will let you input the raw data or research material and will quickly provide you with a coherent synthesis.
Meeting Summaries
While transcription software is nothing new, generative AI models have made it easy to generate meeting summaries. Instead of combing line by line through a transcript, Google’s Bard and similar services can now create a concise summary of a videoconferencing event.
Image Mockups & Prototypes
LLM-based AIs aren’t the only useful generative models out there. DALL-E and other image-generating AIs make it easy for designers and artists to quickly mock up prototypes and previews for clients. While the images may not fully capture the essence of the finished product, they often get close enough for clients to give feedback before the artist commits to the lengthy process of creating the final product.
The Limitations of Today’s Artificial Intelligence Models
It is undeniable that generative AIs have tremendous potential, but it is important to recognize that the technology is still quite new. Some of its problems will likely be solved with time, while others may prove to be intractable issues fundamental to the models themselves.
Regardless of whether these problems can be solved, it is vital for consumers to know the limitations of generative AI models so that their shortcomings can be recognized and avoided.
Lack of Context
While AI models may seem to understand our world, they don’t actually know how the world works. Instead, they reproduce patterns of text that mimic those in their training data to simulate understanding. This can result in confident but nonsensical output, sometimes referred to as hallucinations.
This issue can seem innocuous but can produce disturbing results. For example, an image recognition model may reliably identify an unmodified stop sign, yet small alterations to the sign can cause the model to misidentify it completely. A human wouldn’t be fooled by the changes, but an artificial intelligence doesn’t actually know what a stop sign is and can be misled by seemingly trivial differences.
Inconsistency Over Time
As it stands today, if you ask ChatGPT, Bard, or any other AI model a question more than once, you shouldn’t expect the same answer. In fact, you may get a radically different answer from day to day!
If you’re trying to write copy for birthday cards or fortune cookies, the lack of consistency may be a good thing! However, if you are producing a white paper on a new product that your company is developing, this lack of consistency may result in content that is contradictory or confusing.
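This run-to-run variation is less a bug than a design choice: most chatbots sample from a probability distribution over possible next tokens rather than always picking the single most likely one. The toy sketch below contrasts the two strategies; the words and probabilities are invented for illustration.

```python
import random

# Invented next-token distribution for a prompt like "The sky is ..."
tokens = ["blue", "clear", "overcast", "falling"]
weights = [0.6, 0.25, 0.1, 0.05]

def greedy():
    # Always pick the most likely token: deterministic, same answer every run.
    return tokens[weights.index(max(weights))]

def sampled():
    # Sample in proportion to probability: different runs, different answers.
    return random.choices(tokens, weights=weights, k=1)[0]

print(greedy())                          # always "blue"
print({sampled() for _ in range(100)})   # typically several different words
```

Turning the sampling “temperature” down makes outputs more repeatable, but most chat products keep some randomness on by default to make responses feel varied and natural – hence the day-to-day inconsistency.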
Dependent on Training Data
The most well-known AI models are trained with vast amounts of data which they pull from to create unique outputs. However, any biases or shortcomings contained within the training data will ultimately be reflected in the results. Some may argue that given a sufficiently large training dataset these biases should be negligible, but that is a risky assumption made riskier by the fact that most companies do not disclose their training data.
Ethical and Legal Concerns
While the idea of producing more content, more quickly is enticing to anyone whose business involves publishing – AI generated works are not without their ethical and legal considerations.
First and foremost, passing off computer-generated text as human-authored is morally questionable. The major tech website CNET recently discovered that its readers were disturbed and unhappy to learn that a considerable amount of its content had been produced by AI. In the face of reader backlash, CNET stopped publishing AI-generated content and returned to the time-honored tradition of human-powered writing.
Another issue is related to the first, although it has less to do with reader satisfaction and more to do with responsibility: who is responsible for the content generated by an AI? If biases in the training data result in biased outputs, is the company that created the AI model at fault, or the company that used the model to create the content?
Compounding the issue of responsibility is the fact that many generative AIs do not cite their sources – instead simply presenting the user with output stated as fact. While in many cases the AIs do get their facts right, this is by no means universally true, and it often takes additional legwork to verify their claims.
A practical consideration: who owns the copyright to AI-generated content? Copyright is usually framed in terms of authorship – but when an AI model creates the content, the lines of ownership blur. For its part, OpenAI’s terms of service state that “Subject to your compliance with these Terms, OpenAI hereby assigns to you all its right, title and interest in and to Output.” However, OpenAI makes no guarantee that ChatGPT’s output is unique – meaning that multiple users can receive the same output and each has the legal right to use it.
Another issue related to copyright and ownership pertains to the training data these AI models were built upon. The billions of books, blogs, tweets, images, and more that form the foundation of generative AI models were produced by people who never gave permission for their works to be used in this manner. OpenAI claims that its use of copyrighted material falls under fair use – but the matter has yet to be decided in court.
These and other issues are going to be the subject of additional scrutiny as the technology becomes more mainstream. Right now, it is unclear how these questions will be resolved.
What’s Coming In the Near Future
We said at the outset of this article that we would not speculate on the unknown. However, there are a few clear signs from major tech companies about upcoming products which will rely on these new AI technologies.
Google Is Bringing Bard Into Search
Currently, Bard is only available to those who sign up for Google’s waitlist, but it is fairly clear that it will become widely available sooner than later. There are still major questions to be answered about how this will impact Google’s advertising business and what changes this will cause for website traffic and SEO.
ChatGPT Plugins Are on the Way
OpenAI has announced that it will make ChatGPT plugins available, and it is already working with several major companies, including Expedia, Instacart, Shopify, and Slack. It is too early to say exactly how these integrations will work, but expect to see AI-enhanced chatbots hit the market in the near future.
Is AI Development Slowing?
The past year has seen incredibly rapid progress in terms of what AI is capable of, but we’re beginning to see some signs that this development will slow. OpenAI has announced that it is not currently working on GPT-5, although updates to GPT-4 are in the works. Sam Altman, the CEO of OpenAI, has also stated that the future of generative AI doesn’t lie in increasing the size of the training databases but instead will need to come from improvements to AI architecture and model design.
In addition to technological limitations, AI developers may need to contend with legislative hurdles in the near future. In March 2023, Italy temporarily banned ChatGPT, and the European Union is now considering legislation that would regulate AI. It’s hard to believe that this particular genie can be put back in the bottle, but regulations could further slow the industry’s development.
The AI-Powered Revolution Has Already Begun
Whether or not your business is currently in a place to take advantage of AI, it is impossible to deny that this technology is here to stay. It is easy to understand where all of the breathless hype related to the industry is coming from as AI models are positioned to be as disruptive to business as the development of the internet was.
The next few years promise to be interesting as developments in the field continue, businesses find new use cases for AI, and governments grapple to understand and regulate the industry.