BasicAgent
AI Risk Assessment
How to run an AI risk assessment for LLM and ML systems with clear criteria and evidence.
An AI risk assessment evaluates the potential harm, likelihood, and exposure of an AI system before and after release.
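The harm × likelihood × exposure framing above can be sketched as a simple scoring function. This is an illustrative sketch only: the 1–5 rating scale, the multiplicative combination, and the tier cut-offs are all assumptions for demonstration, not a prescribed standard.

```python
# Hypothetical sketch: combine harm, likelihood, and exposure
# into a single score and map it to a risk tier. The scales and
# thresholds are illustrative assumptions, not a standard.

def risk_score(harm: int, likelihood: int, exposure: int) -> int:
    """Each factor rated 1 (low) to 5 (high); score is their product."""
    for factor in (harm, likelihood, exposure):
        if not 1 <= factor <= 5:
            raise ValueError("factors must be rated 1-5")
    return harm * likelihood * exposure

def risk_tier(score: int) -> str:
    """Map a 1-125 score onto coarse tiers (cut-offs are assumptions)."""
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

score = risk_score(harm=4, likelihood=3, exposure=5)
print(score, risk_tier(score))  # 60 high
```

A multiplicative score keeps any single low factor from being washed out by high ones; an additive scheme is an equally defensible choice.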
Assessment checklist
- system purpose and user impact
- data sensitivity and privacy exposure
- model failure modes and safety risks
- monitoring and rollback readiness
Outputs to keep
- risk tier and rationale
- required controls and approvals
- evaluation plan with metrics
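The three outputs above can be kept as one structured record per system, so tier, rationale, controls, and evaluation plan stay together. A minimal sketch, assuming nothing about your tooling; the field names and example values are hypothetical, not a prescribed schema.

```python
# Illustrative record of the assessment outputs listed above.
# Field names and example values are assumptions for demonstration.
from dataclasses import dataclass, field

@dataclass
class RiskAssessment:
    system: str
    risk_tier: str                  # e.g. "low", "medium", "high"
    rationale: str                  # why this tier was assigned
    required_controls: list[str] = field(default_factory=list)
    approvals: list[str] = field(default_factory=list)
    evaluation_metrics: list[str] = field(default_factory=list)

record = RiskAssessment(
    system="support-chat-llm",
    risk_tier="medium",
    rationale="Handles customer PII; no autonomous actions.",
    required_controls=["PII redaction", "human review queue"],
    approvals=["security", "legal"],
    evaluation_metrics=["refusal rate", "PII leak rate"],
)
print(record.risk_tier)  # medium
```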
Related pages
- AI risk assessment framework: /ai-risk-assessment-framework/
- AI risk management: /ai-risk-management/
- AI governance framework: /ai-governance-framework/
Build narrative
Follow a coherent path from thesis to lab notes to proof-of-work, rather than publishing isolated pages.
Step 1
Intelligence systems office
The strategic map for what is being built and why.
Step 2
Lab notes
Publish build footprints and progression logs as proof-of-work.
Step 3
Control surface
Governance and monitoring architecture for operational reliability.
Step 4
Private alignment
Convert insight into execution with scoped collaboration.