AI Risk Assessment
How to run an AI risk assessment for LLM and ML systems with clear criteria and evidence.
An AI risk assessment evaluates an AI system's potential harm, the likelihood of that harm occurring, and the scale of user exposure, both before and after release.
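As an illustration, these three factors are often combined into a coarse scoring matrix. The sketch below is a hypothetical example, not a prescribed method; the 1-3 scales and tier thresholds are assumptions.

```python
# Hypothetical risk scoring sketch: combine harm, likelihood, and exposure
# (each rated 1-3) into a coarse tier. Scales and thresholds are
# illustrative assumptions, not a standard.

def risk_tier(harm: int, likelihood: int, exposure: int) -> str:
    """Return a coarse risk tier from three 1-3 ratings."""
    score = harm * likelihood * exposure  # ranges from 1 to 27
    if score >= 18:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example: severe harm (3), likely (2), broad exposure (3) -> "high"
print(risk_tier(harm=3, likelihood=2, exposure=3))
```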
Assessment checklist
- system purpose and user impact
- data sensitivity and privacy exposure
- model failure modes and safety risks
- monitoring and rollback readiness
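One way to keep checklist answers auditable is to record each item together with its supporting evidence. The structure below is a minimal sketch with assumed field names and example values, not a required schema.

```python
# Minimal sketch of an auditable checklist record.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ChecklistItem:
    question: str                                       # checklist item being assessed
    answer: str                                         # the assessor's finding
    evidence: list[str] = field(default_factory=list)   # links to docs or eval runs

# Hypothetical example entry
assessment = [
    ChecklistItem(
        question="Data sensitivity and privacy exposure",
        answer="Prompts may contain PII; retention is limited by policy.",
        evidence=["/docs/data-retention-policy"],        # illustrative path
    ),
]
```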
Outputs to keep
- risk tier and rationale
- required controls and approvals
- evaluation plan with metrics
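These outputs can be kept in a single structured record so the rationale, approvals, and evaluation metrics stay attached to the tier decision. The sketch below uses assumed field names and example values.

```python
# Minimal sketch of an assessment output record.
# Field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AssessmentOutput:
    risk_tier: str                                              # e.g. "medium"
    rationale: str                                              # why this tier was assigned
    required_controls: list[str] = field(default_factory=list)  # controls to put in place
    approvals: list[str] = field(default_factory=list)          # sign-offs required
    eval_metrics: dict[str, float] = field(default_factory=dict)  # metric -> target

# Hypothetical example record
output = AssessmentOutput(
    risk_tier="medium",
    rationale="Limited PII exposure; rollback available within minutes.",
    required_controls=["human review of flagged outputs", "rate limiting"],
    approvals=["security review", "privacy review"],
    eval_metrics={"harmful-output rate": 0.01, "PII leak rate": 0.0},
)
```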
Related pages
- AI risk assessment framework: /ai-risk-assessment-framework/
- AI risk management: /ai-risk-management/
- AI governance framework: /ai-governance-framework/