Γ‰thique Updated 2026-04

AI Safety

Definition

AI Safety is the field focused on ensuring AI systems are safe, reliable and don't cause unintended harm.

Frequently Asked Questions

Why is AI Safety important?
LLMs can generate harmful content, be manipulated through prompt injection, or make biased decisions. AI Safety seeks to prevent these risks.
Who works on AI Safety?
Anthropic (Claude's creator) was explicitly founded for AI Safety. OpenAI, Google DeepMind and Meta also have dedicated teams.