Understand key AI governance terms and concepts with straightforward definitions to help you navigate the Enzai platform.
The obligation of AI system developers and operators to ensure their systems are designed and used responsibly, adhering to ethical standards and legal requirements.
The process of ensuring AI systems' goals and behaviors are aligned with human values and intentions.
The systematic evaluation of AI systems to assess compliance with ethical standards, regulations, and performance metrics.
The adherence of AI systems to applicable laws, regulations, and ethical guidelines throughout their lifecycle.
The extent to which the internal mechanics of an AI system can be understood and interpreted by humans.
The framework of policies, processes, and controls that guide the ethical and effective development and use of AI systems.
The understanding of AI concepts, capabilities, and limitations, enabling informed interaction with AI technologies.
The continuous observation and analysis of AI system performance to ensure reliability, safety, and compliance.
The process of identifying, assessing, and mitigating risks associated with AI systems.
The principle that AI systems should be open and clear about their operations, decisions, and data usage.
Techniques that manipulate AI models by introducing deceptive inputs to cause incorrect outputs.
Bias that occurs when an algorithm produces results that are systematically prejudiced due to erroneous assumptions in the machine learning process.
The use of algorithms to manage and regulate societal functions, potentially impacting decision-making processes.
A type of AI that possesses the ability to understand, learn, and apply knowledge in a generalized way, similar to human intelligence.
The simulation of human intelligence processes by machines, especially computer systems, including learning, reasoning, and self-correction.
A training algorithm used in neural networks that adjusts weights by propagating errors backward from the output layer to minimize loss.
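The mechanics are easiest to see on a tiny network. A minimal sketch (illustrative sizes, seed, and learning rate; not a production training loop), fitting one hidden layer to a single sample:

```python
import numpy as np

# Backpropagation sketch: a one-hidden-layer network fit to a single
# sample by gradient descent. All sizes and hyperparameters are
# arbitrary illustrative choices.
rng = np.random.default_rng(0)
x = np.array([0.5, -0.2])        # input sample
y = np.array([1.0])              # target
W1 = rng.normal(size=(2, 3))     # input -> hidden weights
W2 = rng.normal(size=(3, 1))     # hidden -> output weights
lr = 0.1                         # learning rate

for _ in range(500):
    # Forward pass
    h = np.tanh(x @ W1)          # hidden activations
    y_hat = h @ W2               # linear output
    loss = 0.5 * np.sum((y_hat - y) ** 2)

    # Backward pass: propagate the error from output toward the input
    d_out = y_hat - y                    # dL/dy_hat
    dW2 = np.outer(h, d_out)             # gradient w.r.t. W2
    d_h = (W2 @ d_out) * (1 - h ** 2)    # chain rule through tanh
    dW1 = np.outer(x, d_h)               # gradient w.r.t. W1

    # Weight update in the negative-gradient direction
    W2 -= lr * dW2
    W1 -= lr * dW1

print(loss)  # shrinks toward 0 over the iterations
```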
A machine learning approach where the model is trained on the entire dataset at once, as opposed to incremental learning.
The process of comparing AI system performance against standard metrics or other systems to assess effectiveness.
The phenomenon where AI systems exacerbate existing biases present in the training data, leading to increasingly skewed outcomes.
An evaluation process to detect and mitigate biases in AI systems, ensuring fairness and compliance with ethical standards.
The process of identifying biases in AI models by analyzing their outputs and decision-making processes.
Techniques applied during AI development to reduce or eliminate biases in models and datasets.
An AI system whose internal workings are not transparent or interpretable, making it difficult to understand how decisions are made.
A method in AI and statistics used to determine cause-and-effect relationships, helping to understand the impact of interventions or changes in variables.
A supervised learning technique in machine learning where the model predicts the category or class label of new observations based on training data.
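A toy illustration of classification, using a nearest-centroid rule with made-up labels and feature values:

```python
from math import dist

# Training data: (feature_vector, class_label) pairs, invented for illustration
train = [([1.0, 1.2], "cat"), ([0.9, 1.0], "cat"),
         ([3.0, 3.1], "dog"), ([3.2, 2.9], "dog")]

# "Train": compute one centroid (mean feature vector) per class
labels = {lbl for _, lbl in train}
centroids = {}
for label in labels:
    pts = [f for f, lbl in train if lbl == label]
    centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]

def predict(features):
    # Assign the class whose centroid is closest in feature space
    return min(centroids, key=lambda lbl: dist(features, centroids[lbl]))

print(predict([1.1, 1.1]))  # -> cat
```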
Systematic patterns of deviation from norm or rationality in judgment, which can influence AI decision-making if present in training data.
A subset of AI that simulates human thought processes in a computerized model, aiming to solve complex problems without human assistance.
The total amount of mental effort being used in working memory, considered in AI so that systems are designed not to overwhelm users.
A structured set of guidelines and best practices that organizations follow to ensure their AI systems meet regulatory and ethical standards.
The potential for legal or regulatory sanctions, financial loss, or reputational damage an organization faces when it fails to comply with laws, regulations, or prescribed practices.
A field of AI that trains computers to interpret and process visual information from the world, such as images and videos.
The change over time in the statistical properties of the target variable a model is trying to predict, leading to model degradation.
A range of values, derived from sample statistics, that is likely to contain the value of an unknown population parameter, used in AI to express uncertainty.
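For example, a 95% interval for a mean under the normal approximation (the sample values below are illustrative):

```python
from statistics import mean, stdev
from math import sqrt

sample = [4.1, 3.9, 4.3, 4.0, 4.2, 3.8, 4.1, 4.0]
m = mean(sample)
se = stdev(sample) / sqrt(len(sample))  # standard error of the mean
z = 1.96                                # ~95% coverage under normality
ci = (m - z * se, m + z * se)
print(ci)  # an interval around the sample mean of 4.05
```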
A process to determine whether an AI system meets specified requirements, standards, or regulations, often involving testing and certification.
An AI system's ability to continuously learn and adapt from new data inputs without human intervention, improving over time.
The extent to which humans can direct, influence, or override the decisions and behaviors of an AI system.
A model validation technique for assessing how the results of a statistical analysis will generalize to an independent dataset.
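A minimal k-fold split (k = 3) in plain Python; in practice a real estimator would be trained and scored on each fold:

```python
def k_fold_indices(n, k):
    # Partition indices 0..n-1 into k contiguous, near-equal folds
    folds = []
    start = 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

data = list(range(10))
for test_idx in k_fold_indices(len(data), 3):
    train_idx = [i for i in data if i not in test_idx]
    # ...train on train_idx, evaluate on test_idx...
    print(len(train_idx), len(test_idx))
```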
The practice of protecting systems, networks, and programs from digital attacks, crucial in safeguarding AI systems against threats.
The change in model input data over time, which can lead to model performance degradation if not monitored and addressed.
The branch of ethics that evaluates data practices with respect to the moral obligations of gathering, protecting, and using personally identifiable information.
The overall management of data availability, usability, integrity, and security in an enterprise, ensuring that data is handled properly throughout its lifecycle.
The policy-based management of data flow throughout its lifecycle: from creation and initial storage to the time it becomes obsolete and is deleted.
The principle of collecting only the data that is necessary for a specific purpose, reducing the risk of misuse or breach.
The aspect of information technology that deals with the ability to control what data is shared and with whom, ensuring personal data is handled appropriately.
The process of safeguarding important information from corruption, compromise, or loss, ensuring compliance with data protection laws and regulations.
The condition of data based on factors such as accuracy, completeness, reliability, and relevance, crucial for effective AI model performance.
The physical or geographic location of an organization's data, which can have implications for compliance with data protection laws.
The concept that data is subject to the laws and governance structures of the nation in which it is collected, stored, or processed.
An individual whose personal data is collected, held, or processed, particularly relevant in the context of data protection laws like GDPR.
The process of removing or obscuring personal identifiers from data sets, making it difficult to identify individuals, used to protect privacy.
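A simple sketch of one approach: drop direct identifiers and replace them with a keyed pseudonym so records can still be linked. The field names and secret key below are hypothetical:

```python
import hashlib
import hmac

SECRET_KEY = b"rotate-me"  # hypothetical key, kept separately from the data

def deidentify(record):
    # Remove direct identifiers from the record
    out = {k: v for k, v in record.items() if k not in {"name", "email"}}
    # Keyed pseudonym: stable for linkage, but not reversible without the key
    out["pid"] = hmac.new(SECRET_KEY, record["email"].encode(),
                          hashlib.sha256).hexdigest()[:12]
    return out

row = {"name": "Ada", "email": "ada@example.com", "age": 36}
print(deidentify(row))
```

Note that pseudonymization alone may not defeat re-identification; quasi-identifiers such as age can still single people out in small datasets.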
A subset of machine learning involving neural networks with multiple layers, enabling the modeling of complex patterns in data.
A system for publicly sharing information about a dataset by describing patterns of groups within the dataset while withholding information about individuals.
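A common building block is the Laplace mechanism: add noise calibrated to the query's sensitivity and the privacy budget ε. The counts and parameters below are illustrative:

```python
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) variate
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

rng = random.Random(42)
true_count = 128       # exact answer to "how many rows match?"
sensitivity = 1.0      # one person changes the count by at most 1
epsilon = 0.5          # privacy budget: smaller = more private, noisier
noisy = true_count + laplace_noise(sensitivity / epsilon, rng)
print(round(noisy, 2))  # released value: close to 128, but randomized
```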
In AI, the unfair treatment of individuals or groups arising from biases in data or algorithms, leading to unequal outcomes.
A machine learning approach where training data is distributed across multiple devices or locations, and models are trained collaboratively without sharing raw data.
A technique in machine learning where a model trained in one domain is adapted to work in a different but related domain.
The continuous process of identifying and evaluating risks in real-time, allowing for timely responses to emerging threats in AI systems.
The analysis of data at the edge of the network, near the source of data generation, reducing latency and bandwidth usage.
A machine learning paradigm where multiple models are trained and combined to solve the same problem, improving overall performance.
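A minimal ensemble sketch: majority voting over three weak "models". The stub classifiers are hypothetical stand-ins for real trained models:

```python
from collections import Counter

# Three deliberately crude rule-based "models" (hypothetical stubs)
def model_a(text): return "spam" if "win" in text else "ham"
def model_b(text): return "spam" if "free" in text else "ham"
def model_c(text): return "spam" if "!!!" in text else "ham"

def ensemble_predict(text):
    # Majority vote: the most common prediction wins
    votes = [m(text) for m in (model_a, model_b, model_c)]
    return Counter(votes).most_common(1)[0][0]

print(ensemble_predict("win a free phone"))  # two of three vote spam
```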
The process of identifying and linking records that refer to the same real-world entity across different datasets.
The practice of designing, developing, and deploying AI systems in a manner that aligns with ethical principles and values, ensuring fairness, accountability, and transparency.
The process of systematically evaluating AI systems to ensure they comply with ethical standards and do not cause harm.
A formal recognition that an AI system adheres to established ethical standards and guidelines.
The framework of policies, procedures, and practices that ensure AI systems are developed and used responsibly and ethically.
Structured sets of principles and guidelines designed to guide the ethical development and deployment of AI systems.
The practice of intentionally probing systems for vulnerabilities to identify and fix security issues, ensuring the robustness of AI systems.
A systematic evaluation process to identify and address the ethical implications and potential societal impacts of AI systems before deployment.
The potential for an AI system to cause harm due to unethical behavior, including bias, discrimination, or violation of privacy.
A set of guidelines developed by the European Commission's High-Level Expert Group on AI to promote trustworthy AI, focusing on human agency, technical robustness, privacy, transparency, diversity, societal well-being, and accountability.
Methods used to interpret and understand the decisions made by AI models, such as LIME, SHAP, and saliency maps.
While both aim to make AI decisions understandable, explainability focuses on the reasoning behind decisions, whereas interpretability relates to the transparency of the model's internal mechanics.
AI systems designed to provide human-understandable justifications for their decisions and actions, enhancing transparency and trust.
Machine learning models designed to provide clear and understandable explanations for their predictions and decisions.
Quantitative measures (e.g., demographic parity, equalized odds) used to evaluate how fair an AI model’s predictions are across groups.
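Demographic parity, for instance, compares positive-prediction rates across groups; a gap of zero indicates parity. The data below is synthetic:

```python
# (group, model_prediction) pairs; 1 = positive outcome, data invented
preds = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
         ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

def positive_rate(group):
    # Fraction of group members the model predicts positively
    rows = [p for g, p in preds if g == group]
    return sum(rows) / len(rows)

gap = positive_rate("A") - positive_rate("B")
print(gap)  # 0.75 - 0.25 = 0.5; 0 would indicate demographic parity
```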
When an AI model incorrectly predicts a negative class for an instance that is actually positive (Type II error).
When an AI model incorrectly predicts a positive class for an instance that is actually negative (Type I error).
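Both error types fall out of a simple comparison of labels against predictions (the arrays below are illustrative):

```python
# 1 = positive class, 0 = negative class; values invented for illustration
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 1, 0, 1, 0, 0, 0, 0]

# False positives: predicted positive, actually negative (Type I error)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
# False negatives: predicted negative, actually positive (Type II error)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
print(fp, fn)  # -> 1 2
```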
The ability of an AI system to continue operating correctly even when some components fail or produce errors.
Creating, selecting, or transforming raw dataset attributes into features that improve the performance of machine learning models.
The process of mapping raw data (e.g., text, images) into numerical representations (features) suitable for input into ML algorithms.
Identifying and selecting the most relevant features for model training to reduce complexity and improve accuracy.
A decentralized ML approach where models are trained across multiple devices or servers holding local data, without sharing raw data centrally.
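A minimal federated-averaging (FedAvg-style) sketch: each client fits a deliberately trivial one-parameter model on its own data, and only the fitted weights are averaged centrally. Client data is made up:

```python
# Each client's private (x, y) observations; raw data never leaves the client
clients = [
    [(1.0, 2.1), (2.0, 3.9)],
    [(1.0, 1.9), (3.0, 6.2)],
]

def local_fit(data):
    # Closed-form least squares for y = w*x, computed locally on one client
    num = sum(x * y for x, y in data)
    den = sum(x * x for x, y in data)
    return num / den

# The server averages only the model weights, not the underlying data
global_w = sum(local_fit(d) for d in clients) / len(clients)
print(round(global_w, 3))  # averaged slope across clients
```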
A process where AI outputs are fed back as inputs, which can amplify model behavior—for better (reinforcement learning) or worse (bias reinforcement).
Adapting a pre-trained AI model to a specific task or dataset by continuing training on new data, often improving task-specific performance.
Mathematically proving that AI algorithms comply with specified correctness properties, often used in safety-critical systems.
Using AI techniques (e.g., anomaly detection, pattern recognition) to identify and prevent fraudulent activities in finance, insurance, etc.
Ensuring AI systems operate safely under all conditions, especially in industries like automotive or healthcare, often via redundancy and checks.
A logic system that handles reasoning with approximate, rather than binary true/false values—useful in control systems and uncertainty handling.
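For example, membership degrees in [0, 1] replace hard true/false, with min/max as the usual fuzzy AND/OR operators. The thresholds below are arbitrary illustrative choices:

```python
def warm(temp_c):
    # Piecewise-linear membership: 0 at 15°C or below, 1 at 25°C or above
    return max(0.0, min(1.0, (temp_c - 15) / 10))

def humid(pct):
    # Membership rises linearly from 40% to 80% relative humidity
    return max(0.0, min(1.0, (pct - 40) / 40))

t, h = 22.0, 60.0
comfortable = min(warm(t), humid(h))  # fuzzy AND
either = max(warm(t), humid(h))       # fuzzy OR
print(comfortable, either)  # -> 0.5 0.7
```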