AI Ethics Checker
Provides systematic ethical review frameworks for evaluating AI systems before deployment. Covers bias detection across protected attributes, fairness metric selection, transparency and explainability requirements, privacy impact assessment, data consent verification, accountability structures, and compliance with AI regulations and frameworks (EU AI Act, NIST AI RMF, local laws).
Usage
Describe the AI system under review: what it does, who it affects, what data it uses, and where it will be deployed. Specify the decision domain (hiring, lending, healthcare, content moderation, etc.) and any applicable regulations. This skill produces a structured ethical assessment with identified risks, severity ratings, and concrete mitigation recommendations.
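The "structured ethical assessment" described above could be represented as a simple record type. This is a minimal sketch; the class and field names (EthicalRisk, EthicalAssessment, severity levels) are illustrative assumptions, not a fixed schema.

```python
# Hypothetical sketch of a structured assessment record; field names and
# severity levels are illustrative assumptions, not a fixed schema.
from dataclasses import dataclass, field

@dataclass
class EthicalRisk:
    category: str      # e.g. "bias", "privacy", "transparency"
    description: str
    severity: str      # "low" | "medium" | "high" | "critical"
    mitigation: str    # concrete recommendation

@dataclass
class EthicalAssessment:
    system_name: str
    decision_domain: str                  # e.g. "hiring", "lending"
    applicable_regulations: list[str] = field(default_factory=list)
    risks: list[EthicalRisk] = field(default_factory=list)

    def high_severity_risks(self) -> list[EthicalRisk]:
        """Risks that warrant mitigation before deployment."""
        return [r for r in self.risks if r.severity in ("high", "critical")]

# Example assessment for a hypothetical hiring system.
assessment = EthicalAssessment(
    system_name="resume-screener-v2",
    decision_domain="hiring",
    applicable_regulations=["EU AI Act"],
    risks=[
        EthicalRisk("bias", "Lower selection rate for women", "high",
                    "Re-weight training data; add counterfactual tests"),
        EthicalRisk("transparency", "No candidate-facing explanation", "medium",
                    "Add feature-attribution summaries to rejection notices"),
    ],
)
```

Grouping risks under one record makes it easy to sort by severity or filter by category when producing the final report.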
Examples
- "Audit a resume screening model for gender and racial bias using disparate impact analysis and counterfactual testing"
- "Create an AI ethics checklist for a credit scoring system that must comply with the EU AI Act high-risk requirements"
- "Evaluate a content recommendation algorithm for filter bubble effects, manipulation risks, and child safety concerns"
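The disparate impact analysis mentioned in the first example is commonly operationalized with the four-fifths rule: each group's selection rate divided by a reference group's rate, flagging ratios below 0.8. A minimal sketch, assuming outcomes arrive as (group, selected) pairs:

```python
# Sketch of disparate impact analysis via the four-fifths rule, assuming
# model outcomes are available as (group, was_selected) pairs.
from collections import defaultdict

def disparate_impact_ratios(outcomes, reference_group):
    """Each group's selection rate divided by the reference group's rate."""
    selected = defaultdict(int)
    total = defaultdict(int)
    for group, was_selected in outcomes:
        total[group] += 1
        selected[group] += int(was_selected)
    ref_rate = selected[reference_group] / total[reference_group]
    return {g: (selected[g] / total[g]) / ref_rate for g in total}

# Toy outcomes: men selected 3/4 (rate 0.75), women 2/4 (rate 0.50).
outcomes = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", True), ("women", False), ("women", False), ("women", True),
]
ratios = disparate_impact_ratios(outcomes, "men")
# Under the four-fifths rule, ratios below 0.8 flag potential adverse impact;
# here women's ratio is 0.50 / 0.75 ≈ 0.67, so the group is flagged.
flagged = {g: r for g, r in ratios.items() if r < 0.8}
```

A passing four-fifths check is not proof of fairness; it is a screening heuristic, which is why the example pairs it with counterfactual testing.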
Guidelines
- Test for bias across protected attributes, including race, gender, age, disability, religion, and national origin
- Use multiple fairness metrics (demographic parity, equalized odds, predictive parity) since they can conflict
- Require model explainability proportional to decision impact: higher stakes demand clearer explanations
- Document training data provenance including collection methods, consent, and known limitations
- Implement human oversight mechanisms for high-stakes automated decisions with clear appeal processes
- Conduct regular bias audits post-deployment since data drift can introduce new disparities over time
- Publish model cards documenting intended use, limitations, and evaluation results for transparency
- Engage affected communities in the review process, not just internal stakeholders and engineers
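The conflict between fairness metrics noted in the guidelines can be made concrete: a model can satisfy demographic parity (equal selection rates) while violating equalized odds (unequal true-positive rates). A minimal sketch on toy data, assuming labels, predictions, and group membership are available as parallel lists:

```python
# Sketch showing that demographic parity and equalized odds can conflict,
# assuming parallel lists of true labels, predictions, and group membership.

def selection_rate(preds, groups, g):
    """Fraction of group g that the model selects (predicts 1)."""
    sel = [p for p, grp in zip(preds, groups) if grp == g]
    return sum(sel) / len(sel)

def true_positive_rate(labels, preds, groups, g):
    """Fraction of group g's true positives that the model selects."""
    pos = [p for y, p, grp in zip(labels, preds, groups) if grp == g and y == 1]
    return sum(pos) / len(pos)

labels = [1, 0, 1, 1, 0, 1, 0, 0]
preds  = [1, 0, 1, 0, 0, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity: both groups are selected at rate 0.5, gap is 0.
dp_gap = abs(selection_rate(preds, groups, "a")
             - selection_rate(preds, groups, "b"))
# Equalized odds (TPR component): group a's TPR is 2/3, group b's is 1.0,
# so the same predictions fail this metric despite passing parity.
tpr_gap = abs(true_positive_rate(labels, preds, groups, "a")
              - true_positive_rate(labels, preds, groups, "b"))
```

Because such conflicts are mathematically unavoidable when base rates differ, the choice of which metric to prioritize is itself an ethical decision that should be documented in the assessment.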