Consumer-facing businesses are adopting artificial intelligence (AI) at a rapid pace, and attention is increasingly turning to how the technology should be governed over the long run. The recent executive order from the Biden administration has added pressure to develop new measurement protocols for building and deploying advanced AI systems.
Explainability has emerged as a key focus for AI providers and regulators, serving as a pillar of AI governance. The goal is to empower individuals affected by AI systems to understand and challenge the outcomes, particularly where bias is suspected. The picture grows more complicated, however, with recent systems built on far more complex models, such as OpenAI's GPT-4 and Google DeepMind's cancer screening models.
While simpler algorithms, such as those used for car loan approvals, are relatively easy to explain, advanced AI systems rest on models so complex that their behavior is difficult to articulate, even as they deliver powerful benefits. The question arises: should we limit the deployment of these only partially explainable technologies, or can we capture their benefits while minimizing harm?
Even U.S. lawmakers grappling with AI regulation are recognizing the limits of explainability, prompting a shift toward an AI governance approach that prioritizes measurable outcomes rather than explainability alone.
Drawing on the medical field's approach to novel therapies, the AI community is exploring new frameworks to assess the long-term safety and efficacy of AI systems. The classical randomized controlled trial is a poor fit for systems that learn continuously, but related concepts like A/B testing, widely used in product development, offer a workable alternative.
A/B testing randomly assigns users to groups that are treated differently, so the impact of a specific feature can be measured against a control or benchmark. Applied over the last 15 years in product development, it has proven effective for iteratively testing changes to technology, allowing companies to quantify both business benefits and potential harms.
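To make this concrete, here is a minimal sketch of what an A/B test for a single feature change might look like. It is purely illustrative: the conversion metric, group sizes, and underlying rates are hypothetical assumptions, and the significance check is a standard two-proportion z-test rather than any particular company's methodology.

```python
import math
import random

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates
    between control (A) and treatment (B)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical experiment: each user is randomly assigned to control or
# treatment, and we record whether they converted (1) or not (0).
random.seed(42)
control   = [1 if random.random() < 0.10 else 0 for _ in range(5_000)]
treatment = [1 if random.random() < 0.12 else 0 for _ in range(5_000)]

lift, z, p = two_proportion_z_test(sum(control), len(control),
                                   sum(treatment), len(treatment))
print(f"observed lift: {lift:+.3%}, z = {z:.2f}, p = {p:.4f}")
```

Because assignment is random, any difference between the arms can be attributed to the feature itself rather than to pre-existing differences between users, and the same machinery works whether the measured outcome is a business metric or a harm metric.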
Measuring AI safety effectively means running experiments. A large bank, for example, assessed the fairness of a new pricing algorithm for personal lending products this way: by comparing outcomes between treatment and control groups, the bank could quantitatively evaluate the algorithm's impact on different populations and establish accountability for the system.
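The article does not detail the bank's methodology, so the sketch below is a hypothetical illustration of the underlying idea: within each experiment arm, compare an outcome (here, approval rate) across demographic groups, then check whether the new algorithm widens the between-group gap relative to the control. The record format, group labels, and data are all assumptions.

```python
from collections import defaultdict

# Hypothetical lending records: (group, arm, approved), where `arm` is the
# experiment assignment ("control" = incumbent pricing model, "treatment" =
# new algorithm) and `group` is the population attribute being audited.
records = [
    ("group_x", "control", 1), ("group_x", "control", 0),
    ("group_x", "treatment", 1), ("group_x", "treatment", 1),
    ("group_y", "control", 1), ("group_y", "control", 1),
    ("group_y", "treatment", 0), ("group_y", "treatment", 1),
    # ... in practice, thousands of randomized applications per cell
]

def approval_rates(records):
    """Approval rate for every (group, arm) cell."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, arm, approved in records:
        totals[(group, arm)] += 1
        approvals[(group, arm)] += approved
    return {cell: approvals[cell] / totals[cell] for cell in totals}

rates = approval_rates(records)
for arm in ("control", "treatment"):
    gap = abs(rates[("group_x", arm)] - rates[("group_y", arm)])
    print(f"{arm}: between-group approval gap = {gap:.2%}")
# If the treatment gap materially exceeds the control gap, the new
# algorithm is widening disparity relative to the incumbent system.
```

The control arm matters here: it separates disparities introduced by the new algorithm from those already present in the existing process, which is exactly the accountability question an outcomes-based approach is meant to answer.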
In conclusion, while the quest for explainability remains important, measurement frameworks derived from healthcare and proven in the tech industry offer a quantitative, tested path toward ensuring that AI not only works as intended but, most importantly, is safe.