Understanding Artificial Intelligence Terminology
The world of artificial intelligence (AI) is complex and often shrouded in technical jargon. To help navigate this landscape, we’ve compiled a glossary of key terms and concepts commonly used in AI discussions.
Artificial General Intelligence (AGI)
AGI refers to AI systems that are capable of performing most tasks at least as well as humans. Definitions of AGI vary among experts and organizations. For instance, OpenAI CEO Sam Altman describes AGI as “the equivalent of a median human that you could hire as a co-worker.” OpenAI’s charter defines it as “highly autonomous systems that outperform humans at most economically valuable work.” Google DeepMind views AGI as “AI that’s at least as capable as humans at most cognitive tasks.”
AI Agent
An AI agent is a tool that uses AI technologies to perform complex tasks on behalf of users, going beyond the capabilities of a basic AI chatbot. Examples include filing expenses, booking tickets, or maintaining code. The term implies an autonomous system that may draw on multiple AI models or tools to carry out multistep tasks.
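For illustration only, the sketch below shows the loop most agent designs share: a model proposes an action, a tool executes it, and the result is fed back until the task is done. The `ask_model` function, the tool names, and the canned plan are hypothetical stand-ins, not any particular product’s API.

```python
# Hypothetical agent loop; ask_model stands in for a call to a language model.
def ask_model(goal, history):
    # A real agent would query an LLM with the goal and the results so far;
    # here we return a canned plan so the sketch runs on its own.
    plan = [("search_flights", "NYC to SFO"), ("book_ticket", "flight 42"), ("done", None)]
    return plan[len(history)]

TOOLS = {
    "search_flights": lambda query: f"3 flights found for {query}",
    "book_ticket": lambda ref: f"booked {ref}",
}

def run_agent(goal):
    history = []
    while True:
        action, arg = ask_model(goal, history)
        if action == "done":
            return history
        result = TOOLS[action](arg)       # the agent invokes a tool, not just the chat model
        history.append((action, result))  # feed the outcome back for the next step

print(run_agent("Book me a ticket to San Francisco"))
```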
Chain of Thought
Chain-of-thought reasoning in AI involves breaking down complex problems into smaller, intermediate steps to improve the quality of the end result. This approach is particularly useful in logic or coding contexts, where it enhances the accuracy of AI outputs.
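In practice this often comes down to how the model is prompted or how the problem is decomposed. The snippet below is a hypothetical illustration of the difference between asking for an answer directly and asking the model to work through intermediate steps; the prompts are examples only, not a specific product’s API.

```python
# Two ways to pose the same question to a language model (prompts only, no API call).
direct_prompt = "A train leaves at 9:40 and the trip takes 2 h 35 min. When does it arrive? Answer only."

chain_of_thought_prompt = (
    "A train leaves at 9:40 and the trip takes 2 h 35 min. When does it arrive?\n"
    "Work through the problem step by step: first add the hours, then the minutes,\n"
    "then state the final arrival time."
)
# The second prompt nudges the model to produce intermediate steps
# (9:40 + 2 h = 11:40; 11:40 + 35 min = 12:15), which tends to reduce arithmetic slips.
```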
Deep Learning
Deep learning is a subset of machine learning that uses multi-layered artificial neural networks (ANNs) to identify complex patterns and correlations in data. Inspired by the human brain’s structure, deep learning algorithms can identify important data characteristics without human intervention. However, they require large datasets and significant computational resources.
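To make “multi-layered” concrete, here is a minimal forward pass through a two-hidden-layer network in NumPy. The layer sizes and random weights are arbitrary choices for illustration; a real deep learning system would also include a training procedure (backpropagation) to adjust those weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

# Arbitrary layer sizes: 8 input features -> 16 -> 16 -> 1 output.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2, b2 = rng.normal(size=(16, 16)), np.zeros(16)
W3, b3 = rng.normal(size=(16, 1)), np.zeros(1)

def forward(x):
    h1 = relu(x @ W1 + b1)   # first hidden layer
    h2 = relu(h1 @ W2 + b2)  # second hidden layer
    return h2 @ W3 + b3      # output layer

x = rng.normal(size=(4, 8))  # a batch of 4 examples with 8 features each
print(forward(x).shape)      # (4, 1): one output per example
```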
Diffusion
Diffusion in AI refers to a technique inspired by physics, where data is gradually “destroyed” by adding noise until it’s unrecognizable. The AI system then learns to reverse this process, effectively generating new data by recovering it from noise. This technique is at the heart of many art-, music-, and text-generating AI models.
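The “destruction” half of the process is simple to write down. The sketch below, using NumPy and a made-up noise schedule, shows how a clean sample is progressively mixed with Gaussian noise; a real diffusion model then trains a network to predict and remove that noise step by step.

```python
import numpy as np

rng = np.random.default_rng(0)

x0 = rng.normal(size=(64,))            # stand-in for a clean data sample (e.g. image pixels)
betas = np.linspace(1e-4, 0.02, 1000)  # hypothetical noise schedule over 1000 steps
alpha_bar = np.cumprod(1.0 - betas)    # cumulative "signal kept" at each step

def add_noise(x0, t):
    """Forward diffusion: blend the clean sample with Gaussian noise at step t."""
    noise = rng.normal(size=x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

print(add_noise(x0, 10))    # early step: still mostly signal
print(add_noise(x0, 999))   # final step: essentially pure noise
# A trained diffusion model runs this in reverse, starting from noise and
# repeatedly denoising to produce a new sample.
```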
Distillation
Distillation is a technique used to transfer knowledge from a large AI model (the “teacher”) to a smaller model (the “student”). By training the student model on the outputs of the teacher model, developers can create more efficient models with minimal loss of performance.
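One common way to do this is to train the student on the teacher’s softened output probabilities rather than only on hard labels. The PyTorch loss below follows the widely used formulation from the knowledge-distillation literature; the temperature and mixing weight are illustrative choices, not fixed values.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    """Blend ordinary cross-entropy with a term that matches the teacher's soft targets."""
    soft_teacher = F.softmax(teacher_logits / T, dim=-1)       # teacher probabilities at temperature T
    soft_student = F.log_softmax(student_logits / T, dim=-1)   # student log-probabilities at temperature T
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Toy usage with random logits for a batch of 4 examples and 10 classes.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.tensor([1, 3, 5, 7])
print(distillation_loss(student, teacher, labels))
```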
Fine-tuning
Fine-tuning involves further training an AI model on specialized data to optimize its performance for specific tasks or domains. This approach is commonly used by AI startups to adapt large language models to their particular needs.
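Mechanically, fine-tuning is simply more training, usually with a smaller learning rate and a narrower dataset. The PyTorch sketch below uses a toy model and synthetic “domain” data purely to show the shape of the loop; real fine-tuning would start from an actual pre-trained model and task-specific examples.

```python
import torch
from torch import nn

# Stand-in for a pre-trained model; in practice you would load real pre-trained weights.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))

# Synthetic "specialized" data standing in for a domain-specific dataset.
x = torch.randn(128, 16)
y = torch.randint(0, 2, (128,))

# A small learning rate is typical, so fine-tuning nudges rather than overwrites prior knowledge.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```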
Generative Adversarial Network (GAN)
A GAN is a machine learning framework that uses two neural networks in competition with each other to generate realistic data. One network generates outputs, while the other evaluates them, leading to improved performance over time.
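A minimal sketch of that competition, on a toy one-dimensional dataset, might look like the following; the network sizes and training details are arbitrary choices for illustration.

```python
import torch
from torch import nn

# The generator maps random noise to fake samples; the discriminator scores real vs. fake.
G = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))
D = nn.Sequential(nn.Linear(1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, 1) * 0.5 + 3.0   # toy "real" distribution centered near 3

for step in range(200):
    # 1) Train the discriminator to tell real samples from generated ones.
    fake = G(torch.randn(64, 8)).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator.
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(G(torch.randn(5, 8)).squeeze())  # samples should drift toward the real distribution
```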
Hallucination
In AI, “hallucination” refers to the generation of incorrect or fabricated information. This is a significant problem, especially for general-purpose AI models, as it can lead to misleading or dangerous outputs.
Inference
Inference is the process of using a trained AI model to make predictions or draw conclusions from new data. While training is required to develop the model, inference can be performed on various hardware, from smartphones to specialized AI accelerators.
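In code, inference is just a forward pass through a model whose weights are already fixed. The PyTorch snippet below uses a tiny untrained network as a stand-in for a trained model, simply to show the pattern of switching to evaluation mode and disabling gradient tracking.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))  # stand-in for a trained model
model.eval()                       # switch layers such as dropout to inference behavior

new_data = torch.randn(5, 16)      # data the model has never seen
with torch.no_grad():              # no gradients needed, so this is much cheaper than training
    predictions = model(new_data).argmax(dim=-1)

print(predictions)                 # one predicted class per input example
```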
Large Language Model (LLM)
LLMs are the AI models that power popular AI assistants such as ChatGPT and Google’s Gemini. These models process and generate human-like language based on the patterns they’ve learned from vast amounts of text data.
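To show what generating language from learned patterns looks like in code, here is a short example using the open-source Hugging Face `transformers` library with the small GPT-2 model as a stand-in for far larger systems; assistants like ChatGPT and Gemini expose different, proprietary interfaces.

```python
# pip install transformers torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Artificial intelligence is"
inputs = tokenizer(prompt, return_tensors="pt")

# The model repeatedly predicts the most likely next token, 20 times in a row.
output_ids = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```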
Neural Network
A neural network is a multi-layered algorithmic structure inspired by the human brain. It’s the foundation of deep learning and generative AI tools. The rise of graphics processing units (GPUs) has enabled the development of more complex neural networks, leading to breakthroughs in various AI applications.
Training
Training is the process of feeding data into an AI model to enable it to learn patterns and generate useful outputs. While some AI systems are rules-based and don’t require training, most modern AI models rely on training to develop their capabilities.
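Stripped to its core, training is a loop that repeatedly compares the model’s output with the desired output and adjusts the model’s parameters to shrink the gap. The NumPy example below fits a single weight and bias to synthetic data with gradient descent; real systems do the same thing at vastly larger scale.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data following a hidden pattern: y = 3x + 1, plus a little noise.
x = rng.uniform(-1, 1, size=200)
y = 3 * x + 1 + rng.normal(scale=0.1, size=200)

w, b = 0.0, 0.0          # the model's parameters (weights), starting from nothing
lr = 0.1                 # learning rate

for step in range(500):
    pred = w * x + b
    error = pred - y
    # Gradients of the mean squared error with respect to w and b.
    w -= lr * 2 * np.mean(error * x)
    b -= lr * 2 * np.mean(error)

print(round(w, 2), round(b, 2))  # should end up close to 3 and 1
```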
Transfer Learning
Transfer learning involves using a pre-trained AI model as a starting point for a new, related task. This approach can save development time and resources by leveraging knowledge gained during previous training.
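A common pattern is to take a network pre-trained on one large dataset, freeze most of its layers, and retrain only a new output layer for the new task. The sketch below shows this with torchvision’s ResNet-18, pre-trained on ImageNet, being repurposed for a hypothetical 5-class problem.

```python
# pip install torch torchvision
import torch
from torch import nn
from torchvision import models

# Load a network pre-trained on ImageNet (the "previous" task).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained layers so their learned features are reused, not retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a fresh one for the new 5-class task.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the new layer's parameters will be updated during training.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
print(sum(p.numel() for p in model.parameters() if p.requires_grad), "trainable parameters")
```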
Weights
In AI training, weights are numerical parameters that determine the importance of different features in the training data. They are adjusted during the training process to optimize the model’s performance.