
IDC PeerScape: Practices for Securing AI Models and Applications
Description
This IDC PeerScape describes best practices for securing AI models and applications. "Cybersecurity vendors are securing their AI applications and models by protecting APIs, monitoring model inputs and outputs, and proactively looking for weaknesses," said Michelle Abraham, research director, Security and Trust at IDC. "They have well-thought-out protections in place using existing security technologies as well as new technologies designed for GenAI."
Please Note: Extended description available upon request.
Table of Contents
7 Pages
- IDC PeerScape Figure
- Executive Summary
- Peer Insights
- Practice 1: Protect APIs and Connections to AI Infrastructure as Well as the AI Infrastructure Itself
- Challenge
- Examples
- Broadcom
- CrowdStrike
- IBM
- Trend Micro
- Guidance
- Practice 2: Use Verified and Tested Foundation Models
- Challenge
- Examples
- Broadcom
- Cisco
- CrowdStrike
- IBM
- Trend Micro
- Guidance
- Practice 3: Monitor Model Inputs and Outputs to Detect and Respond to Attacks Against AI
- Challenge
- Examples
- IBM
- Cisco
- Splunk
- Guidance