Responsible AI in Government and Vendors Providing Tools and Testing for Bias
This IDC Perspective examines how the federal government is weighing in on responsible and ethical AI, and it profiles tools from a subset of vendors that test for bias in AI systems, including Accenture Federal Services, IBM, and SAS. It also outlines several critical steps that agencies should take to ensure responsible and ethical AI. Many vendors are developing tools and techniques that test for and detect unintended consequences such as gender, racial, and ethnic bias in AI software. "Software that detects bias is a nascent field of research for many AI vendors, and there is no silver bullet that will automatically address bias and fairness issues," says Adelaide O'Brien, research director, IDC Government Insights. "Neither the machine nor your vendor can go it alone when guarding against bias — solutions require agency vigilance."