Widespread use of AI in the workplace, little oversight: A recent survey of employers showed that nearly 80% use artificial intelligence in employment decisions such as recruitment and hiring. Despite such prevalence, and despite recognized risks of bias involved with such uses of AI, “there is no standard for AI audits currently,” according to Mona Sloane, an AI expert at the New York University Center for Responsible AI.
Regulation on the horizon: New York City will become the first U.S. jurisdiction to impose notice and audit requirements on AI decision-making tools, with those requirements going into effect this spring. Meanwhile, the European Union is considering an even more comprehensive AI law that would require audits of AI use in the workplace. Finally, the EEOC has created an AI and Algorithmic Fairness Initiative through which it will focus on AI-related bias.
The race for audit standards: Both employers and AI vendors are now scrambling to establish industry standards for bias audits ahead of the NYC law's effective date, rather than waiting for regulators to dictate how such audits must be conducted. Notably, the NYC law places responsibility on employers, rather than the vendors behind the AI tools, to show proof of audit. Multiple employers may rely on the same audit, however, if it includes their data. Although some vendors have taken it upon themselves to conduct audits for compliance, the question of how employers and vendors should share responsibility for bias auditing remains open.
Outlook: The use of AI in the workplace, like data privacy, is a clear example of the law playing catch-up with an already widely established practice. Whether future regulation reflects the practical realities of AI in employment decisions remains to be seen, particularly while industry standards remain lacking.