The widespread adoption of artificial intelligence (AI) has led to countless innovations, from automating tasks and detecting fraud to helping treat disease. At the same time, AI has the potential to reinforce biases in decisions about whom we incarcerate, how we allocate healthcare resources, and more.
Regulators in the United States have started to regulate the use of AI in high-impact contexts. For those already following global developments, this should come as no surprise. We’ve seen legislation tabled or enacted in the European Union via the EU AI Act and in Canada via the Artificial Intelligence and Data Act. What differentiates the US from the rest is the fragmented nature of legislation in this space.
Recently, we’ve seen federal, state, and city-level legislation focused on regulating the use of AI in different contexts. For example, New York City has passed legislation regulating the use of automated employment decision tools. California’s Fair Employment and Housing Council has published draft modifications to existing employment regulations that would impose liability on companies or third-party agencies administering AI tools with a discriminatory impact. Illinois and Colorado have also introduced legislation in an attempt to mitigate the potential biases that the use of AI carries. At the federal level, lawmakers have introduced the Algorithmic Accountability Act, which would require organizations to assess the impact of automated systems and would create transparency requirements about when and how automated decisions are used.
Another recent example is Washington D.C.’s proposed Stop Discrimination by Algorithms Act, which aims to promote transparency and accountability by requiring covered entities to notify individuals about how their personal information is used and to audit algorithmic determination practices for discriminatory processing or impact, and by prohibiting adverse algorithmic decision-making based on protected traits. Some argue that such legislation is duplicative and that governments should instead focus on enhancing existing anti-discrimination laws and clarifying how they apply to AI. Others counter that these regulations do more than prevent bias: they also protect consumers and encourage transparency through disclosure requirements and mandated system audits, baking due diligence into the design, development, and deployment of AI.
Over the coming months, AI vendors and organizations in the US leveraging AI tools will face a unique and evolving set of regulations to navigate, on top of laws governing the use of personal information, which is often an input for AI training and development. These regulations do converge, however, on several practices that organizations looking to get ahead can focus on: testing data and algorithms for bias, conducting impact assessments for high-risk systems, developing ongoing monitoring and oversight practices, and implementing disclosure and notice where appropriate.
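For readers wondering what testing for bias can look like in practice, below is a minimal, hypothetical sketch in Python of one widely used statistical check: comparing each group’s selection rate against the highest-rate group’s, the “impact ratio” associated with the informal four-fifths rule in US employment contexts (and the kind of metric contemplated by bias audits under NYC’s automated employment decision tool law). The data, function names, and 0.8 threshold here are illustrative assumptions for this post, not requirements drawn from any statute discussed above, and a real audit would be considerably more involved.

    from collections import defaultdict

    def impact_ratios(outcomes):
        """Compute selection rates and impact ratios per group.

        `outcomes` is a list of (group, selected) pairs, where `selected`
        is True if the automated system produced a favorable decision.
        The impact ratio compares each group's selection rate to the
        highest-rate group's; values below 0.8 are often flagged under
        the informal "four-fifths rule" used in US employment contexts.
        """
        totals, selected = defaultdict(int), defaultdict(int)
        for group, was_selected in outcomes:
            totals[group] += 1
            if was_selected:
                selected[group] += 1

        rates = {g: selected[g] / totals[g] for g in totals}
        best = max(rates.values())
        return {g: {"selection_rate": rate, "impact_ratio": rate / best}
                for g, rate in rates.items()}

    # Illustrative data only: synthetic hiring-tool outcomes by group.
    sample = [("A", True)] * 40 + [("A", False)] * 60 \
           + [("B", True)] * 25 + [("B", False)] * 75
    for group, stats in impact_ratios(sample).items():
        flag = " <- review" if stats["impact_ratio"] < 0.8 else ""
        print(f"Group {group}: rate={stats['selection_rate']:.2f}, "
              f"ratio={stats['impact_ratio']:.2f}{flag}")

A check like this is only one input to a compliance program; pairing it with documented impact assessments, ongoing monitoring, and appropriate disclosure addresses the other convergences noted above.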
Not sure where to get started? INQ has extensive expertise in regulatory compliance, data privacy, and AI governance. For more information, contact Carole Piovesan or Michael Pascu.