AI for Altruism
AI Lexicon

Core AI and Machine Learning Concepts

Artificial Intelligence (AI)
The field of computer science focused on creating systems capable of performing tasks that typically require human intelligence, such as perception, reasoning, and language understanding. AI spans subfields including machine learning, robotics, and generative modeling.


Machine Learning (ML)
A subset of AI where algorithms learn from data to make predictions or decisions without being explicitly programmed. ML models identify patterns in data and improve performance over time through experience.


Deep Learning
A branch of ML that uses neural networks with many layers to learn complex patterns and representations. Deep learning powers major advances in image recognition, speech, and natural language understanding.


Supervised Learning
A method where models learn from labeled datasets: examples paired with known correct outputs. This approach is used for classification, regression, and other prediction tasks.
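
As an illustration (not part of the original lexicon), a minimal supervised-learning sketch using scikit-learn; the iris dataset supplies the labeled examples, and any small classifier would do.

# Minimal supervised-learning sketch: fit a classifier on labeled examples.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)          # features and their known correct labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # a simple classification model
model.fit(X_train, y_train)                # learn from the labeled training set
print(model.score(X_test, y_test))         # accuracy on held-out, unseen examples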


Unsupervised Learning
A method for finding structure in unlabeled data. Algorithms identify clusters, patterns, or anomalies without explicit guidance.
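
For example, a short clustering sketch with scikit-learn (illustrative, not prescribed by the lexicon): k-means groups unlabeled points without any target labels.

# Minimal unsupervised-learning sketch: cluster unlabeled data with k-means.
import numpy as np
from sklearn.cluster import KMeans

X = np.random.rand(200, 2)                 # unlabeled 2-D points
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.labels_[:10])                 # cluster assignment for each point
print(kmeans.cluster_centers_)             # the discovered cluster centers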


Reinforcement Learning (RL)
A learning paradigm where an agent interacts with an environment, receiving rewards or penalties to guide behavior. It’s widely used in robotics, games, and adaptive control systems.
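
A hedged sketch of one common RL method, the tabular Q-learning update; the states, actions, and reward below are toy placeholders, not a real environment.

# Tabular Q-learning update sketch:
# Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))        # value estimate for each state-action pair
alpha, gamma = 0.1, 0.99                   # learning rate and discount factor

def q_update(state, action, reward, next_state):
    """Apply one Q-learning update after observing a single transition."""
    target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (target - Q[state, action])

# Example transition: action 1 in state 0 yields reward 1.0 and moves to state 2.
q_update(state=0, action=1, reward=1.0, next_state=2)
print(Q[0])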


Transfer Learning
Reusing knowledge from one task or dataset to improve performance on another related task. This reduces the amount of new data required to train effective models.


Overfitting
When a model memorizes training data, including noise, rather than learning general patterns. Overfit models perform poorly on unseen data.


Bias (Algorithmic Bias)
Systematic errors in model predictions caused by skewed or unrepresentative training data. Managing bias is key to ensuring fairness and ethical AI use.

Generative AI and Language Models

Generative AI
AI systems that can create new content such as text, images, or music by learning the underlying patterns of data. They differ from discriminative models, which classify or predict rather than generate.


Large Language Model (LLM)
An AI model trained on massive text datasets to understand and produce human-like language. Examples include GPT, Claude, and Gemini, which can write, summarize, and reason through natural prompts.


Large Multimodal Model (LMM)
A model capable of processing multiple types of input such as text, images, and audio to generate contextually appropriate outputs. LMMs can describe images, answer visual questions, or link textual and visual data.


Foundation Model
A large, general-purpose model trained on diverse data and adaptable to many downstream tasks through fine-tuning or prompting. 


Prompt Engineering
The practice of crafting effective prompts to guide AI models toward desired outputs. Good prompts provide clear instructions, context, and format expectations.


Fine-Tuning
Further training a pre-trained model on domain-specific data to adapt it for specialized use. Fine-tuning refines a model’s general knowledge for targeted applications.


Zero-Shot Learning
The ability of a model to perform new tasks it hasn’t explicitly been trained for, using only a task description in the prompt.


Few-Shot Learning
Training or prompting a model with only a handful of examples to perform a new task. This is valuable when labeled data is limited.
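
As a sketch of the prompting side (the reviews and labels are made up for illustration), a few-shot prompt simply lists a handful of worked examples before the new input; the model is expected to continue the pattern.

# Few-shot prompt sketch: a handful of labeled examples followed by the new input.
examples = [
    ("The service was wonderful and fast.", "positive"),
    ("My order arrived broken and late.",   "negative"),
    ("Great value, I would buy it again.",  "positive"),
]
new_input = "The packaging was damaged and support never replied."

prompt = "Classify the sentiment of each review as positive or negative.\n\n"
for text, label in examples:
    prompt += f"Review: {text}\nSentiment: {label}\n\n"
prompt += f"Review: {new_input}\nSentiment:"   # the model completes this line

print(prompt)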


In-Context Learning
A model’s capacity to learn dynamically from examples within the prompt itself, without changing its internal parameters.


Hallucination
When a language model generates confident but false or fabricated information. It occurs because the model predicts plausible text patterns rather than verified facts.


RAG (Retrieval-Augmented Generation)
Combining an AI model with an external information source so it can retrieve and integrate relevant knowledge into its responses. This improves factual accuracy and reduces hallucination.
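
A minimal retrieval sketch, assuming a hypothetical embed() function that maps text to vectors (for example, from an embedding model); the retrieved passages would then be inserted into the model's prompt so its answer is grounded in them.

# Retrieval-augmented generation sketch: fetch the most relevant passages,
# then prepend them to the prompt so the model can ground its answer.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve(query_vec, doc_vecs, docs, k=3):
    """Return the k documents whose embeddings are most similar to the query."""
    scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

# embed() is a hypothetical placeholder for any text-embedding model;
# docs and doc_vecs would come from the organization's knowledge base.
# context = "\n".join(retrieve(embed(question), doc_vecs, docs))
# prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"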


Transformer
The neural network architecture underlying most modern AI models. Its “attention mechanism” enables parallel processing and the ability to capture long-range relationships in data.


GPT (Generative Pre-trained Transformer)
A family of models using the transformer architecture to generate text by predicting the next token. They are pre-trained on diverse text and can be fine-tuned for specific applications.

Model Architectures and Techniques

Distillation
Training a smaller “student” model to mimic the outputs of a larger “teacher” model. This produces lightweight, efficient models suitable for deployment.
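
One common formulation (an illustrative PyTorch sketch, not the only approach) trains the student to match the teacher's softened output distribution through a KL-divergence loss.

# Knowledge-distillation loss sketch: the student mimics the teacher's soft predictions.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scaling by T^2 keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs, reduction="batchmean") * temperature**2

# Example with random logits standing in for real model outputs.
loss = distillation_loss(torch.randn(4, 10), torch.randn(4, 10))
print(loss.item())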


Generative Adversarial Network (GAN)
A model framework where a generator creates data and a discriminator evaluates it. The two networks compete, resulting in highly realistic synthetic outputs.


Variational Autoencoder (VAE)
A model that learns compressed representations of data, which can be sampled to generate new, similar examples. VAEs are often used in image synthesis and anomaly detection.


Diffusion Model
A generative approach that learns to turn random noise into coherent data by reversing a gradual noising process. It underlies many state-of-the-art image and video generators.


Attention Mechanism
A neural network component that focuses computational resources on the most relevant parts of input data. It’s key to the power and interpretability of transformer models.
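
As a sketch in plain NumPy (assuming the standard scaled dot-product formulation), attention weights each value by how well its key matches the query.

# Scaled dot-product attention sketch: softmax(Q K^T / sqrt(d_k)) V
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                           # query-key match strengths
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)   # softmax over the keys
    return weights @ V                                        # weighted mix of the values

Q = np.random.rand(4, 8)   # 4 query positions, dimension 8
K = np.random.rand(6, 8)   # 6 key/value positions
V = np.random.rand(6, 8)
print(attention(Q, K, V).shape)   # (4, 8)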

Model Evaluation and Reliability

Precision
The percentage of positive predictions that are correct. High precision means few false positives.


Recall (Sensitivity)
The percentage of actual positives correctly identified by the model. High recall means few missed detections.


F1 Score
The harmonic mean of precision and recall, providing a balanced measure of both.
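
A small worked example tying precision, recall, and F1 together (the counts are made up for illustration):

# Precision, recall, and F1 from confusion-matrix counts (illustrative numbers).
tp, fp, fn = 80, 20, 40            # true positives, false positives, false negatives

precision = tp / (tp + fp)         # 80 / 100 = 0.80: few false positives
recall = tp / (tp + fn)            # 80 / 120 ~= 0.67: some positives were missed
f1 = 2 * precision * recall / (precision + recall)   # harmonic mean ~= 0.73

print(precision, round(recall, 2), round(f1, 2))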


Explainable AI (XAI)
Methods for making AI decisions transparent and understandable. XAI builds trust and helps diagnose errors and bias.


Calibration
Ensuring that a model’s predicted probabilities align with actual outcome frequencies. A well-calibrated model’s 70% predictions should be right about 70% of the time.
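
One simple check (a sketch; real calibration analyses vary in their binning and metrics) compares average predicted probability with the observed frequency of positives inside each probability bin.

# Calibration check sketch: within each probability bin, compare the average
# predicted probability with the observed fraction of positive outcomes.
import numpy as np

def calibration_table(probs, outcomes, n_bins=5):
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (probs >= lo) & (probs < hi)
        if mask.any():
            rows.append((lo, hi, probs[mask].mean(), outcomes[mask].mean()))
    return rows   # (bin_low, bin_high, mean predicted prob, observed frequency)

# Well-calibrated predictions: about 70% of the 0.7 predictions should come true.
probs = [0.7] * 10 + [0.2] * 5
outcomes = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0] + [0, 0, 1, 0, 0]
for row in calibration_table(probs, outcomes):
    print(row)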


Uncertainty Quantification
Assessing the confidence or reliability of AI outputs. Quantifying uncertainty is essential for safe deployment.

Ethical and Responsible AI

Alignment
Ensuring AI systems behave in ways that are beneficial and consistent with human values and intentions. This includes technical, ethical, and governance considerations.


Algorithmic Fairness
The pursuit of equitable AI systems that do not discriminate across groups. Fairness metrics and transparency tools help measure and mitigate bias.


RLHF (Reinforcement Learning from Human Feedback)
A method for improving AI models using human judgments of quality or appropriateness. It aligns AI outputs with human values and expectations.


Privacy-by-Design
Building privacy safeguards into systems from the outset. This includes data minimization, purpose limitation, and security controls.


Differential Privacy
A mathematical technique ensuring that the inclusion or exclusion of any individual’s data minimally affects analysis results, protecting individual privacy.
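
A classic mechanism for achieving this (an illustrative sketch, not the only method) adds Laplace noise scaled to the query's sensitivity divided by the privacy budget epsilon.

# Laplace mechanism sketch: releasing a noisy count under epsilon-differential privacy.
import numpy as np

def noisy_count(true_count, epsilon, sensitivity=1.0):
    """Adding or removing one person changes a count by at most 1 (the sensitivity)."""
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Smaller epsilon -> more noise -> stronger privacy, lower accuracy.
print(noisy_count(true_count=412, epsilon=0.5))
print(noisy_count(true_count=412, epsilon=5.0))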


Provenance
The record of where data and model outputs come from, how they’ve been processed, and by whom. Provenance supports transparency and accountability.


Dual Use
The potential for AI technologies to be used for both beneficial and harmful purposes. Responsible governance includes assessing and mitigating misuse risks.

Data and Infrastructure

Data Pipeline
An automated sequence that collects, processes, and delivers data for analysis or model training. Reliable pipelines are key to scalable AI systems.


ETL (Extract, Transform, Load)
The process of gathering data from sources, cleaning and structuring it, and loading it into storage systems or models.
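
As a sketch with pandas (the file name and column names are hypothetical placeholders):

# ETL sketch: extract from a CSV, transform it, and load the cleaned result.
# "donations.csv" and its columns are hypothetical placeholders.
import pandas as pd

raw = pd.read_csv("donations.csv")                        # Extract
clean = (
    raw.dropna(subset=["amount"])                         # Transform: drop incomplete rows
       .assign(amount=lambda d: d["amount"].astype(float))
       .query("amount > 0")
)
clean.to_csv("donations_clean.csv", index=False)          # Load (here, a cleaned file for downstream use)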


Data Lake
A centralized repository for storing large volumes of raw, unstructured, and structured data for future use.


MLOps (Machine Learning Operations)
Practices that automate and manage the lifecycle of machine learning models, from training and testing to deployment and monitoring.


Docker/Containerization
Packaging applications and their dependencies so they run consistently across environments. Containers simplify scaling and deployment.


Cloud Computing
Delivering computing resources over the internet for flexible, scalable access to storage, processing, and analytics.


Edge Computing
Performing computation close to where data is generated to reduce latency and preserve privacy.


API (Application Programming Interface)
A standard way for software systems to communicate and exchange data securely and efficiently.
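
For example, a small client call with the Python requests library; the endpoint URL, parameters, and the assumption that it returns a JSON list are all hypothetical placeholders.

# API call sketch: request data from a (hypothetical) HTTP endpoint and parse the JSON reply.
import requests

response = requests.get(
    "https://api.example.org/v1/projects",   # hypothetical endpoint
    params={"status": "active"},
    timeout=10,
)
response.raise_for_status()                  # fail loudly on HTTP errors
for project in response.json():              # assumes the endpoint returns a JSON list
    print(project)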

AI for Altruism (A4A) is a 501(c)(3) nonprofit organization.  

Copyright © 2025 AI for Altruism, Inc. - All Rights Reserved.

