James Ding
Sep 26, 2025 19:58
Explore why Common Vulnerabilities and Exposures (CVE) should focus on frameworks and applications rather than AI models, according to NVIDIA’s insights.
The Common Vulnerabilities and Exposures (CVE) system, a globally recognized standard for identifying security flaws in software, is under scrutiny concerning its application to AI models. According to NVIDIA, the CVE system should primarily focus on frameworks and applications rather than individual AI models.
The CVE system, maintained by MITRE and supported by CISA, assigns unique identifiers and descriptions to vulnerabilities, facilitating clear communication among developers, vendors, and security professionals. However, as AI models become integral to enterprise systems, the question arises: should CVEs also cover AI models?
AI models introduce failure modes such as adversarial prompts, poisoned training data, and data leakage. These resemble vulnerabilities but do not align with the CVE definition, which focuses on weaknesses violating confidentiality, integrity, or availability guarantees. NVIDIA argues that the vulnerabilities typically reside in the frameworks and applications that utilize these models, not in the models themselves.
Proposed CVEs for AI models generally fall into three categories: model behaviors mistaken for vulnerabilities, flaws that actually live in the surrounding software, and models genuinely compromised during training.
AI models, due to their probabilistic nature, exhibit behaviors that can be mistaken for vulnerabilities. However, these are often typical inference outcomes exploited in unsafe application contexts. For a CVE to be applicable, a model must fail its intended function in a way that breaches security, which is seldom the case.
Vulnerabilities often originate in the surrounding software environment rather than in the model itself. Adversarial attacks, for example, manipulate inputs to produce misclassifications; the exploitable weakness is the application's failure to detect or constrain such queries, not a flaw in the model. Similarly, data leakage typically stems from overfitting and calls for system-level mitigations.
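As a rough illustration of what a system-level mitigation can look like, an application can refuse to act on predictions it cannot trust rather than treating every model output as authoritative. The sketch below is hypothetical: the stub `classify` function and the 0.9 confidence threshold are illustrative choices, not part of NVIDIA's article.

```python
def classify(text: str) -> tuple[str, float]:
    """Stub standing in for any probabilistic model.

    Returns a (label, confidence) pair; real models expose
    confidence via softmax scores, logprobs, or similar.
    """
    if "ignore previous" in text:
        return ("benign", 0.55)  # suspicious input, shaky prediction
    return ("benign", 0.97)


def guarded_classify(text: str, threshold: float = 0.9) -> str:
    """Application-layer guard around inference.

    The mitigation lives here, in the calling application,
    not inside the model: low-confidence outputs are rejected
    instead of being acted upon.
    """
    label, confidence = classify(text)
    if confidence < threshold:
        raise ValueError("low-confidence prediction rejected")
    return label
```

The point is architectural: if an attacker can steer the model into an unreliable region, the defense belongs in the code that decides what to do with the prediction.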
One exception where CVEs could be relevant is when poisoned training data results in a backdoored model. In such cases, the model itself is compromised during training. However, even these scenarios might be better addressed through supply chain integrity measures.
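One concrete supply chain integrity measure is pinning a cryptographic digest of the published model artifact and refusing to load anything that does not match. This is a minimal sketch under that assumption; the file name and digest are placeholders, and real deployments would layer signing (e.g., Sigstore-style attestations) on top.

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Hash the file in chunks so large model weights never
    need to fit in memory at once."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_model(path: Path, pinned_digest: str) -> None:
    """Refuse to load a model whose bytes differ from the pinned
    release digest -- a tampered or backdoored artifact fails here,
    before it ever reaches the inference stack."""
    actual = sha256_of(path)
    if actual != pinned_digest:
        raise RuntimeError(f"model digest mismatch: {actual}")
```

Verification of this kind addresses the poisoned-artifact case as a distribution problem, which is where NVIDIA suggests the remediation belongs.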
Ultimately, NVIDIA advocates for applying CVEs to frameworks and applications where they can drive meaningful remediation. Enhancing supply chain assurance, access controls, and monitoring is crucial for AI security, rather than labeling every statistical anomaly in models as a vulnerability.
For further insights, you can visit the original source on NVIDIA’s blog.