Large Language Model

A deep learning model trained on vast text corpora that can perform tasks like text generation, translation, and summarization, often requiring governance around bias and misuse.

Definition

Transformer-based architectures (e.g., GPT, BERT) with hundreds of millions to trillions of parameters, pre-trained on diverse, Internet-scale data. They excel at few-shot learning but can also propagate societal biases or generate harmful content. Governance must therefore include bias audits, usage monitoring, content filtering, and clear policies distinguishing allowable from prohibited prompts to mitigate misuse.
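The few-shot learning mentioned above is typically exercised by packing labeled examples directly into the prompt, so the model infers the task from context rather than from fine-tuning. A minimal sketch of prompt construction (the sentiment task and example reviews are hypothetical; no model call is made):

```python
# Few-shot prompting: in-context examples teach the model the task format.
EXAMPLES = [
    ("The battery dies within an hour.", "negative"),
    ("Setup took thirty seconds and it just works.", "positive"),
]

def few_shot_prompt(query: str) -> str:
    """Assemble an instruction, example/label pairs, and the new query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in EXAMPLES:
        lines.append(f"Review: {text}\nSentiment: {label}")
    lines.append(f"Review: {query}\nSentiment:")
    return "\n".join(lines)

print(few_shot_prompt("Crashes every time I open it."))
```

The prompt ends mid-pattern ("Sentiment:"), so the model's most likely continuation is the missing label; this is the core few-shot mechanic, independent of any particular provider API.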

Real-World Example

A marketing team uses GPT-style models to draft ad copy. Before deployment, they run bias checks on generated text (e.g., for gendered language), enforce profanity filters, and impose usage quotas to prevent confidential data leaks, ensuring the model enhances creativity without introducing reputational or legal risk.
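Checks like those in the example can be automated as a pre-release gate. A minimal sketch, assuming a simple keyword-based screen (the word lists, quota, and `screen_copy` helper are illustrative placeholders, not a production bias audit):

```python
# Illustrative pre-release screen for generated ad copy: surface gendered
# terms for human review, hard-block profanity, and check a usage quota.
GENDERED_TERMS = {"chairman", "salesman", "manpower"}  # placeholder list
PROFANITY = {"damn"}                                   # placeholder list
DAILY_QUOTA = 100                                      # generations per user per day

def screen_copy(text: str, usage_today: int) -> dict:
    """Return governance signals for one piece of generated text."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return {
        "bias_flags": sorted(words & GENDERED_TERMS),  # flag for review, don't auto-block
        "blocked": bool(words & PROFANITY),            # hard stop
        "quota_ok": usage_today < DAILY_QUOTA,
    }

result = screen_copy("Our chairman approved the new campaign.", usage_today=3)
```

In practice the keyword sets would be replaced by classifier-based bias and toxicity scoring, but the gate structure (review flags vs. hard blocks vs. rate limits) stays the same.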