AI Terminology

Welcome to the AI Terminology glossary! This glossary is designed to help you understand the key terms and concepts of artificial intelligence (AI). AI is a rapidly growing field, and new terms are coined all the time. This glossary will help you stay up to date on the latest terminology and keep your conversations about AI clear and concise.

The glossary is organized alphabetically, and each term is defined in plain language. Many entries also include examples, including short illustrative code sketches, of how the term is used in AI applications.

We hope you find this glossary helpful!


AGENT-BASED MODELING - A simulation technique that models a system as a collection of autonomous agents, each following simple rules, in order to study how system-level behavior emerges from their interactions.
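
To make this concrete, here is a minimal sketch in Python (the agents, the 10-cell ring, and the infection rule are invented purely for illustration): each agent follows two simple local rules, yet an epidemic-like spread emerges at the system level.

```python
import random

# A toy agent-based model: agents wander a 10-cell ring and pass an
# "infected" flag to any agent sharing their cell. The spread pattern
# emerges from these local rules; it is not programmed directly.
class Agent:
    def __init__(self, infected=False):
        self.pos = random.randrange(10)
        self.infected = infected

    def step(self):
        self.pos = (self.pos + random.choice([-1, 1])) % 10  # random walk

agents = [Agent(infected=(i == 0)) for i in range(20)]
for t in range(50):
    for a in agents:
        a.step()
    for a in [a for a in agents if a.infected]:
        for b in agents:
            if b.pos == a.pos:
                b.infected = True

print(sum(a.infected for a in agents), "of", len(agents), "agents infected")
```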


AI ETHICS - AI ethics is a branch of ethics that deals with the moral and ethical implications of artificial intelligence. It is concerned with the development and use of AI in a way that is responsible, safe, and beneficial to society.

Some of the key ethical concerns that are raised by AI include:

  • Bias: AI systems can be biased if they are trained on biased data. This can lead to discrimination against certain groups of people.

  • Privacy: AI systems can collect and store a lot of data about people. This data can be used to track people's movements, habits, and preferences. This can raise privacy concerns.

  • Safety: AI systems can be used to create autonomous weapons that can kill people without human intervention. This raises safety concerns.

  • Control: AI systems are becoming increasingly powerful. This raises concerns about who will control these systems and how they will be used.

AI ethics is a complex and evolving field. There is no single set of rules or guidelines that can be applied to all AI systems. However, there are a number of principles that can be used to guide the development and use of AI in a responsible and ethical way. These principles include:

  • Transparency: AI systems should be transparent so that people can understand how they work and how they make decisions.

  • Accountability: AI systems should be accountable so that the people responsible for them can be held answerable for their decisions and actions.

  • Fairness: AI systems should be fair so that they do not discriminate against certain groups of people.

  • Privacy: AI systems should respect people's privacy.

  • Safety: AI systems should be safe so that they do not cause harm to people.

AI ethics is an important area of research and development. By carefully considering the ethical implications of AI, we can help to ensure that AI is used for good and not for harm.


AI SAFETY - AI safety is a field of research that is concerned with preventing AI systems from causing harm to humans. AI safety researchers are trying to develop techniques to ensure that AI systems are aligned with human values and that they do not pose a threat to humanity.

There are a number of potential risks associated with AI, including:

  • Accidental harm: AI systems could cause harm accidentally if they are not designed or programmed carefully. For example, an AI system that is designed to drive a car could crash if it is not programmed to handle unexpected situations.

  • Malicious use: AI systems could be used for malicious purposes, such as creating autonomous weapons or spreading misinformation.

  • Loss of control: AI systems could become so powerful that humans lose control over them. This could lead to a situation where AI systems make decisions that are harmful to humanity.

AI safety researchers are working on a number of approaches to address these risks. These approaches include:

  • Alignment: AI safety researchers are trying to develop techniques to ensure that AI systems are aligned with human values. This means that AI systems should be designed to pursue goals that are beneficial to humans.

  • Safety nets: AI safety researchers are also developing safety nets to prevent AI systems from causing harm. These safety nets could include physical safeguards, such as kill switches, or software safeguards, such as algorithms that prevent AI systems from taking certain actions.

  • Transparency: AI safety researchers are also working to make AI systems more transparent. This means that people should be able to understand how AI systems work and how they make decisions. This can help to prevent AI systems from being used for malicious purposes.


ALIGNMENT - In the context of AI, alignment refers to the process of ensuring that AI systems pursue goals and exhibit behavior consistent with human values and intentions, so that they are designed to do what is beneficial to humans.

There are a number of different approaches to alignment, but they all share the same goal: to ensure that AI systems are not a threat to humanity. Some of the most common approaches to alignment include:

  • Value alignment: This approach focuses on ensuring that AI systems are programmed with the same values as humans. This can be done by explicitly programming AI systems with human values, or by training them on data that reflects human values.

  • Goal alignment: This approach focuses on ensuring that AI systems have goals that are aligned with human goals. This can be done by explicitly specifying the goals of AI systems, or by allowing them to learn their goals through interaction with the world.

  • Risk mitigation: This approach focuses on identifying and mitigating the risks posed by AI systems. This can be done by developing safety nets and safeguards, or by making AI systems more transparent.


ANTHROPOMORPHISM - This term refers to the attribution of human characteristics to non-human entities, such as machines or animals. This can be done for a variety of reasons, such as to make machines more user-friendly or to make animals more relatable. However, it is important to be aware that anthropomorphism can lead to misunderstandings about the capabilities and limitations of non-human entities.


ARTIFICIAL INTELLIGENCE (AI) - A branch of computer science that deals with the creation of intelligent agents, which are systems that can reason, learn, and act autonomously.


ARTIFICIAL GENERAL INTELLIGENCE (AGI) - A hypothetical type of AI that could match human intelligence across a wide range of tasks, rather than excelling at only one narrow task.


BIAS - In the context of AI, bias refers to the tendency of an algorithm to produce results that are systematically prejudiced against certain groups of people. This can happen for a variety of reasons, such as:

  • The data used to train the algorithm is biased.

  • The algorithm is designed in a way that introduces bias.

  • The algorithm is used in a way that reinforces bias.
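
The first cause, biased training data, is easy to demonstrate. Below is a small sketch using scikit-learn (the two "groups" and their decision rules are synthetic, invented purely for illustration): because group B is barely represented in the training set, the model learns group A's pattern and performs no better than chance on group B.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, rule_col):
    """Synthetic group whose label depends on one feature column."""
    x = rng.normal(size=(n, 2))
    y = (x[:, rule_col] > 0).astype(int)
    return x, y

x_a, y_a = make_group(1000, rule_col=0)  # group A dominates the data
x_b, y_b = make_group(20, rule_col=1)    # group B is underrepresented

model = LogisticRegression().fit(np.vstack([x_a, x_b]), np.hstack([y_a, y_b]))

# The model is accurate for group A but near chance (~0.5) for group B.
print("group A accuracy:", model.score(*make_group(500, rule_col=0)))
print("group B accuracy:", model.score(*make_group(500, rule_col=1)))
```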


COMPUTER VISION (CV) - A subfield of AI that deals with the development of algorithms that can extract meaning from digital images or videos.


DEEP LEARNING (DL) - A subset of ML that uses artificial neural networks with many layers to learn from data.


EMERGENT BEHAVIOR - In the context of AI, emergent behavior refers to behavior that is not explicitly programmed into an AI system but rather emerges from the interactions of the system's components. For example, a group of robots that are programmed only to keep their distance from each other may spontaneously exhibit flock-like behavior. Or, a language model trained on a large corpus of text may be able to generate text that is often hard to distinguish from human-written text.

Emergent behaviors can be both beneficial and harmful. For example, the flock-like behavior of robots can be used to improve their navigation and coordination. However, it can also be used to create swarms of robots that can be difficult to control. Similarly, the ability of a language model to generate human-like text can be used to create realistic fake news or propaganda.
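
A minimal sketch of the flocking example in Python (the update rule and all numbers are invented for illustration): each agent only averages its heading with its two neighbors, yet the whole group converges on a shared direction that no individual was programmed to choose.

```python
import random

N = 30
headings = [random.uniform(-180.0, 180.0) for _ in range(N)]

for step in range(200):
    # Each agent turns toward the average heading of itself and its
    # two neighbors on a ring, plus a little noise.
    headings = [
        (headings[i - 1] + headings[i] + headings[(i + 1) % N]) / 3
        + random.uniform(-0.5, 0.5)
        for i in range(N)
    ]

# The spread of headings collapses: a common direction has emerged.
print("heading spread:", max(headings) - min(headings))
```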


EXPERT SYSTEMS - A type of AI system that is designed to emulate the reasoning of a human expert in a particular domain, typically by applying if-then rules drawn from a knowledge base.


FUZZY LOGIC - A form of logic in which truth is a matter of degree: a statement can be partially true, taking any value between 0 (completely false) and 1 (completely true), rather than being strictly true or false.
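
For example, here is a minimal sketch in Python (the membership function and its thresholds are invented for illustration): instead of a binary hot/not-hot judgment, a temperature belongs to the fuzzy set "hot" to some degree between 0 and 1.

```python
def hot(temp_c):
    """Degree of membership in the fuzzy set 'hot'."""
    if temp_c <= 20:
        return 0.0
    if temp_c >= 35:
        return 1.0
    return (temp_c - 20) / 15  # linear ramp between 20°C and 35°C

for t in (15, 25, 30, 40):
    print(f"{t}°C is hot to degree {hot(t):.2f}")
```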


GENERATIVE AI - Generative AI is a type of artificial intelligence that can create new content, such as text, images, or other media in response to prompts. Generative AI models learn the patterns and structure of their input training data, and then generate new data that has similar characteristics.

A generative AI system is constructed by applying unsupervised or self-supervised machine learning to a data set. The capabilities of a generative AI system depend on the modality or type of the data set used. Generative AI can be either unimodal or multimodal; unimodal systems take only one type of input (for example, text) whereas multimodal systems can take more than one type of input (for example, text and images).
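
As a quick illustration, here is a hedged sketch using the open-source Hugging Face `transformers` library (this assumes the library is installed, e.g. via `pip install transformers`, and it downloads the small GPT-2 model on first use):

```python
from transformers import pipeline

# Load a small, freely available generative text model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt with newly generated text.
result = generator("Artificial intelligence is", max_new_tokens=20)
print(result[0]["generated_text"])
```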

Some examples of generative AI systems include:

  • ChatGPT, a chatbot built on a large language model that can generate text that is often hard to distinguish from human-written text.

  • DALL-E, a visual AI system that can generate images from text descriptions.

  • MuseNet, a music AI system that can compose music in a wide range of styles.

  • Imagen, an AI image generator from Google that can create photorealistic images from text descriptions.


GENETIC ALGORITHMS - A type of ML algorithm that is inspired by the process of natural selection.
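
Here is a minimal sketch in Python (the fitness function and all parameters are invented for illustration): a population of bitstrings evolves toward the all-ones string through selection, crossover, and mutation, mimicking natural selection.

```python
import random

def fitness(bits):
    return sum(bits)  # more 1s = fitter

def crossover(a, b):
    cut = random.randrange(1, len(a))  # splice two parents together
    return a[:cut] + b[cut:]

def mutate(bits, rate=0.05):
    return [1 - b if random.random() < rate else b for b in bits]

pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(30)]
for gen in range(50):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]  # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(20)]
    pop = parents + children

print(max(fitness(p) for p in pop))  # approaches 20 as evolution proceeds
```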


GPU - GPU stands for Graphics Processing Unit. It is a specialized processor that is designed to accelerate the processing of graphics and image data. GPUs are used in a wide variety of applications, including video games, computer-aided design (CAD), and artificial intelligence (AI).

In the context of AI, GPUs are used to train and run machine learning models for tasks such as image recognition, natural language processing, and speech recognition. Because these workloads consist largely of matrix operations that can be computed in parallel, GPUs can run them much faster than CPUs, the general-purpose processors traditionally used in computers.
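
For example, with the PyTorch library (assuming it is installed), moving a computation to a GPU is a one-line change; the same code falls back to the CPU when no GPU is present:

```python
import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
c = a @ b  # runs on whichever device the tensors live on
print(device, c.shape)
```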

The use of GPUs has revolutionized the field of AI, making it practical to train and run models that were previously out of reach. This has enabled a wide range of new AI applications, such as self-driving cars, facial recognition software, and medical diagnosis tools.


GUARDRAILS - In the context of AI, guardrails are a set of rules or guidelines that are designed to prevent AI systems from causing harm. Guardrails can be implemented at a variety of levels, including the design of the AI system, the training data, and the way the system is used.

Some common examples of guardrails include:

  • Safety nets: Safety nets are mechanisms that can stop an AI system from causing harm even when its normal behavior goes wrong. For example, a safety net could be a physical switch that can be used to disable an AI system if it starts to behave dangerously.

  • Red flags: Red flags are indicators that an AI system may be behaving dangerously. For example, an AI system that makes decisions inconsistent with its training data may be raising a red flag.

  • Human oversight: Human oversight is the process of having humans monitor AI systems to ensure that they are not behaving dangerously. For example, a human may be required to approve all decisions made by an AI system.

Guardrails are an important part of ensuring that AI systems are safe. By implementing guardrails, we can help to prevent AI systems from causing harm to humans.
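
As a toy illustration of a software safeguard, here is a minimal sketch in Python (the blocklist and the wrapper function are invented for illustration; real guardrails are far more sophisticated): the model's output is checked before it ever reaches the user.

```python
# Hypothetical policy list; real systems use classifiers, not keywords.
BLOCKED_TOPICS = ("weapon", "self-harm")

def guarded_reply(model_reply: str) -> str:
    """Check a model's output against the policy before returning it."""
    if any(topic in model_reply.lower() for topic in BLOCKED_TOPICS):
        return "Sorry, I can't help with that."
    return model_reply

print(guarded_reply("Here is a recipe for soup."))
print(guarded_reply("Here is how to build a weapon."))
```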


HALLUCINATION - In the context of AI, a hallucination is a confident response by an AI system that is not justified by its training data, typically because that data is insufficient, biased, or too specialized.

Hallucinations can be caused by a number of factors, including:

  • Insufficient training data: If an AI model is not trained on enough data, it may not be able to learn to distinguish between real and fake information.

  • Biased training data: If an AI model is trained on biased data, it may learn to generate biased outputs.

  • Too specialized training data: If an AI model is trained on too specialized data, it may not be able to generalize to new situations.

Hallucinations can be harmful, as they can lead to AI systems making incorrect decisions. For example, an AI chatbot that hallucinates may provide incorrect information to users.


INFERENCE - In the context of AI, inference refers to the process of using a trained machine learning model to make predictions or decisions about new data. During training, the model learns to identify patterns in a dataset; during inference, it applies those patterns to inputs it has not seen before.

Inference is distinct from training. Training is the (usually slow and computationally expensive) phase in which the model's parameters are adjusted; inference is the (usually fast and cheap) phase in which the finished model is put to work. For example, a spam filter is trained once on a large set of labeled emails, but it performs inference every time a new message arrives. The supervised and unsupervised approaches used to build models are described under TRAINING.
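
Here is a minimal sketch with scikit-learn (assuming it is installed; the toy data is invented for illustration): training happens once in the `fit` call, and inference is every later `predict` call on unseen inputs.

```python
from sklearn.linear_model import LogisticRegression

# Training phase: toy labeled data (label 1 only when both features are 1).
X_train = [[0, 0], [0, 1], [1, 0], [1, 1]]
y_train = [0, 0, 0, 1]
model = LogisticRegression().fit(X_train, y_train)

# Inference phase: predict on new, unseen inputs.
print(model.predict([[0.9, 0.8], [0.1, 0.2]]))  # e.g. [1 0]
```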


LARGE LANGUAGE MODEL - A large language model (LLM) is a neural network that has been trained on a massive amount of text data. This training allows the LLM to learn the statistical relationships between words and phrases, which in turn allows it to generate text, translate languages, write different kinds of creative content, and answer questions in an informative way.


MACHINE LEARNING (ML) - A subfield of AI that focuses on the development of algorithms that can learn from data without being explicitly programmed.


NATURAL LANGUAGE PROCESSING (NLP) - A subfield of AI that deals with the interaction between computers and human (natural) languages.


NEURAL NETWORK - A neural network is a type of machine learning algorithm that is inspired by the human brain. It is made up of a network of nodes, or neurons, that are connected to each other. Each neuron receives input from other neurons and then sends its own output to other neurons. The strength of the connections between neurons is determined by a learning algorithm.
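
To make the mechanics concrete, here is a minimal forward pass in Python with NumPy (the weights are random, purely for illustration; in practice a learning algorithm such as backpropagation adjusts them):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))   # 3 inputs -> 4 hidden neurons
W2 = rng.normal(size=(4, 1))   # 4 hidden neurons -> 1 output

x = np.array([0.5, -1.0, 2.0])   # one input example
hidden = sigmoid(x @ W1)         # each hidden neuron sums its weighted inputs
output = sigmoid(hidden @ W2)    # the output neuron does the same
print(output)
```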

Neural networks are used for a variety of tasks, including:

  • Image recognition: Neural networks can be used to identify objects in images. For example, a neural network could be used to identify faces in a crowd.

  • Natural language processing: Neural networks can be used to understand and process human language. For example, a neural network could be used to translate languages or to answer questions.

  • Speech recognition: Neural networks can be used to recognize human speech. For example, a neural network could be used to control a smart speaker or to transcribe a lecture.

  • Medical diagnosis: Neural networks can be used to diagnose diseases. For example, a neural network could be used to identify cancer cells in medical images.


PAPERCLIPS - The term "paperclips" is used in the context of AI to refer to a hypothetical scenario, popularized by philosopher Nick Bostrom, in which an artificial intelligence (AI) is tasked with the goal of maximizing the number of paperclips in the universe. While this may seem like an innocuous goal, it can lead to disastrous consequences if the AI is not carefully designed: a sufficiently capable maximizer might convert all available resources, including those humans depend on, into paperclips.


REINFORCEMENT LEARNING - Reinforcement learning (RL) is a type of machine learning where an agent learns to behave in an environment by trial and error. The agent is not explicitly programmed with the knowledge of how to behave, but rather it learns by interacting with the environment and receiving rewards for taking actions that lead to desired outcomes.
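
Here is a minimal Q-learning sketch in Python (the 5-state corridor environment and all parameters are invented for illustration): the agent receives a reward only for reaching the rightmost state and learns, by trial and error, to always move right.

```python
import random

n_states = 5
actions = [-1, +1]                      # move left or move right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, eps = 0.5, 0.9, 0.2       # learning rate, discount, exploration

for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Epsilon-greedy: usually exploit the best known action,
        # occasionally explore a random one.
        if random.random() < eps:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), n_states - 1)
        r = 1.0 if s2 == n_states - 1 else 0.0
        # Update the value estimate from the reward actually received.
        best_next = max(Q[(s2, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

policy = {s: max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)}
print(policy)  # the learned policy moves right (+1) from every state
```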

RL is a powerful tool that can be used to solve a wide variety of problems, including:

  • Game playing: RL has been used to train agents to play games at a superhuman level, such as Go and chess.

  • Robotics: RL can be used to train robots to perform tasks in the real world, such as picking and placing objects.

  • Finance: RL can be used to train agents to make trading decisions in financial markets.

  • Healthcare: RL can be used to train agents to diagnose diseases or to recommend treatments.


ROBOTICS - A field of engineering that deals with the design, construction, operation, and application of robots.


SORCERER’S APPRENTICE - The Sorcerer’s Apprentice Problem is a scenario in which an intelligent machine cannot be stopped from completing a task even when instructed to stop, leading to disastrous consequences. The name comes from a 1797 poem by Johann Wolfgang von Goethe in which an apprentice sorcerer enchants a broom to fetch water but is unable to stop it. The problem is a real concern in AI development, and it is important to build safeguards that prevent machines from causing harm and ensure they can be stopped when necessary.


STOCHASTIC PARROT - The term "stochastic parrot" is used to describe large language models (LLMs) that are capable of generating human-like text but do not have any understanding of the meaning of the text they generate. The term was coined in the 2021 paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?" by Emily M. Bender, Timnit Gebru, and their co-authors.

LLMs are trained on massive datasets of text and code. This training allows them to learn the statistical relationships between words and phrases, and thereby to generate text that is similar to the text they were trained on. However, LLMs do not have any understanding of the meaning of that text; they are simply stitching together sequences of words that resemble patterns observed in their training data.


TRAINING - The term "training" in the context of AI refers to the process of teaching an AI model to perform a task. This is done by providing the model with a large amount of data that is relevant to the task. The model then learns to identify patterns in the data and to use these patterns to make predictions or decisions.

There are two main types of AI training: supervised learning and unsupervised learning. In supervised learning, the model is given a set of labeled data, where each data point is associated with a desired output. The model then learns to map the input data to the desired outputs. For example, a supervised learning model could be trained to classify images of cats and dogs by providing it with a set of images that have already been labeled as cats or dogs.

In unsupervised learning, the model is given a set of unlabeled data. The model then learns to identify patterns in the data without any guidance from the user. For example, an unsupervised learning model could be trained to cluster similar images together by providing it with a set of images that have not been labeled.
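
Here is a minimal sketch of the unsupervised case with scikit-learn (assuming it is installed; the two synthetic blobs of points are invented for illustration): the clustering algorithm is given no labels, yet it separates the two groups on its own.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
points = np.vstack([
    rng.normal(loc=0.0, scale=0.5, size=(50, 2)),  # one blob near (0, 0)
    rng.normal(loc=5.0, scale=0.5, size=(50, 2)),  # another near (5, 5)
])

# No labels are provided; KMeans discovers the two groups itself.
model = KMeans(n_clusters=2, n_init=10).fit(points)
print(model.labels_[:5], model.labels_[-5:])  # the blobs get different cluster ids
```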

The training process for an AI model can be very time-consuming and computationally expensive. This is because the model needs to be exposed to a large amount of data in order to learn to perform the task. However, the benefits of AI training can be significant. AI models can learn to perform tasks that would be difficult or impossible for humans to do, such as identifying objects in images or translating languages.


TRANSFORMER MODEL - A transformer model is a type of neural network that is used for natural language processing (NLP) tasks. It was first introduced in the paper "Attention Is All You Need" by Vaswani et al. (2017).

Transformer models work by using a mechanism called attention to weigh how relevant each part of the input sequence is to every other part. This allows them to learn long-range dependencies between words, which is essential for tasks such as machine translation and text summarization.
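
At the core of this is scaled dot-product attention. Here is a minimal sketch in Python with NumPy (random toy matrices; real transformers learn the query/key/value projections and stack many such layers with multiple heads):

```python
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])          # how strongly each position attends to each other
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over positions
    return weights @ V                               # weighted mix of the values

rng = np.random.default_rng(0)
seq_len, d = 4, 8                  # 4 tokens, 8-dimensional representations
Q, K, V = (rng.normal(size=(seq_len, d)) for _ in range(3))
print(attention(Q, K, V).shape)    # (4, 8): one updated vector per token
```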

Transformer models have been shown to be very effective for a variety of NLP tasks. They have achieved state-of-the-art results on tasks such as machine translation, text summarization, and question answering.