
Responsible AI in Government and Vendors Providing Tools Testing for Bias

This IDC Perspective highlights examples of how the federal government is weighing in on responsible and ethical AI, and it profiles tools from a subset of vendors that test for bias in AI systems, including Accenture Federal Services, IBM, and SAS. There are several critical steps that agencies should take to ensure responsible and ethical AI. Many vendors are developing tools and techniques that test for and detect unintended consequences such as gender, racial, and ethnic bias in AI software. "Software that detects bias is a nascent field of research for many AI vendors, and there is no silver bullet that will automatically address bias and fairness issues," says Adelaide O'Brien, research director, IDC Government Insights. "Neither the machine nor your vendor can go it alone when guarding against bias — solutions require agency vigilance."
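
As an illustration of the kind of check these tools automate, the sketch below computes the disparate impact ratio, a standard fairness metric reported by toolkits such as IBM's AI Fairness 360. This is a minimal, self-contained example rather than any vendor's actual API; the loan-approval data and function names are hypothetical.

```python
# Illustrative only: a minimal disparate-impact check of the kind that
# fairness toolkits automate. Data and function names are hypothetical.
from typing import Sequence

def disparate_impact(outcomes: Sequence[int], groups: Sequence[str],
                     privileged: str, unprivileged: str) -> float:
    """Ratio of favorable-outcome rates:
    P(y = 1 | unprivileged group) / P(y = 1 | privileged group).

    Ratios below roughly 0.8 are commonly flagged under the
    "four-fifths rule" used in U.S. employment-discrimination guidance.
    """
    def favorable_rate(group: str) -> float:
        group_outcomes = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(group_outcomes) / len(group_outcomes) if group_outcomes else float("nan")

    return favorable_rate(unprivileged) / favorable_rate(privileged)

# Hypothetical loan-approval outcomes (1 = approved) for two groups.
outcomes = [1, 1, 0, 1, 0, 1, 0, 0, 1, 0]
groups   = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups, privileged="A", unprivileged="B")
print(f"disparate impact: {ratio:.2f}")  # 0.40 / 0.60 ~= 0.67, below 0.8 -> flag for review
```

A flagged ratio is a signal for human review, not a verdict; as the quote above notes, agencies still need their own vigilance to judge whether a disparity reflects bias in the model or in the underlying data.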



Table of Contents

Executive Snapshot
Situation Overview
    The Federal Government Is Weighing in on Responsible and Ethical AI
    Vendors Are Providing Tools Testing for Bias in AI Systems
        Accenture Fairness Tool
        IBM
            Diversity in Faces
            IBM AI Fairness 360 Open Source Toolkit
            IBM Watson OpenScale
        SAS
Advice for the Technology Buyer
Learn More
    Related Research
    Synopsis
