Authors recommend guidelines for designing and deploying audits of opaque AI systems to mitigate algorithmic harms to users and society.
WASHINGTON – A new report, “AI Audit-Washing and Accountability,”
finds that auditing could be a robust means of holding AI systems
accountable, but that today’s auditing regimes are not yet adequate to
the task. The report assesses the effectiveness of various auditing
regimes and proposes guidelines for creating trustworthy auditing
systems.
Various
government and private entities rely on, or have proposed, audits as a
way of ensuring that AI systems meet legal, ethical, and other standards.
The report finds that audits can indeed provide an agile co-regulatory
approach, one in which governments and private entities together
ensure societal accountability for algorithmic systems through private
oversight.
But the “algorithmic audit” remains ill-defined and inexact, whether
applied to social media platforms or to AI systems generally. There is a
significant risk that inadequate audits will obscure, rather than expose,
problems with algorithmic systems. A poorly designed or executed audit
is at best meaningless and at worst excuses the very harms it claims to
mitigate.
Inadequate audits, or audits conducted without clear standards, provide
false assurance of compliance with norms and laws, “audit-washing”
problematic or illegal practices. As with green-washing and
ethics-washing before it, audit-washing lets the audited entity claim
credit without doing the work.
The paper identifies the core specifications needed for
algorithmic audits to serve as a reliable AI accountability mechanism:
- “Who” conducts the audit: clearly defined auditor qualifications, conditions for data access, and guardrails for internal audits;
- “What” the audit covers: its type and scope, including the audited system’s position within a larger sociotechnical system;
- “Why” the audit is being conducted: whether to verify
narrow legal standards or to pursue broader ethical goals, a distinction
essential for comparing audits and weighing their costs; and
- “How” the audit standards are determined: an
important baseline for developing audit certification mechanisms
and for guarding against audit-washing.
Algorithmic audits have the potential to make twenty-first-century
technology more reliable and more innovative, much as financial
audits transformed the way businesses operated in the twentieth
century. Audits will take different forms, whether within a single
sector or across sectors, especially for systems that pose the highest
risk. Making AI accountable and trusted is key to ensuring that
democracies remain centers of innovation while shaping technology to
democratic values.
But as algorithmic audits are encoded into law or adopted voluntarily
as part of corporate social responsibility, the audit industry must
arrive at shared understandings and expectations of audit goals and
procedures. This paper outlines those shared understandings so that
truly meaningful algorithmic audits can take their rightful place in AI
governance frameworks.
Read the full paper at the German Marshall Fund.