Insecure Setting of Generative AI/ML Model Inference Parameters
What is CWE-1434?
This vulnerability occurs when a generative AI or ML model is deployed with inference parameters that are too permissive, causing it to frequently generate incorrect, nonsensical, or unpredictable outputs.
Real-world CVEs caused by CWE-1434
No public CVE references are linked to this CWE in MITRE's catalog yet.
Step-by-step attacker path
1. Assume the product offers an LLM-based AI coding assistant that helps users write code as part of an Integrated Development Environment (IDE). Assume the model has been trained on real-world code and behaves normally under its default settings. Suppose the default temperature is 1, with a range of temperature values from 0 (most deterministic) to 2. Consider the configuration shown in the "Vulnerable JSON" example below.
2. The problem is that the configuration sets the temperature hyperparameter higher than the default. This significantly increases the likelihood that the LLM will suggest a package that did not exist at training time, a behavior sometimes referred to as "package hallucination." (Other risky behaviors can also arise from a higher temperature, not just package hallucination.) An adversary could anticipate which package names might be generated and publish a malicious package under one of those names; it has been observed that the same LLM may hallucinate the same package name regularly. Any generated code that references the malicious package would, when run by the user, download and execute it, much like typosquatting. Lowering the temperature reduces unpredictable outputs and keeps suggestions more in line with the training data. If the temperature is set too low, however, some of the model's power is lost, and it may be less capable of solving rarely-encountered problems that are not well represented in the training data; if it is not set low enough, the risk of hallucinated package names may remain too high. Unfortunately, the "best" temperature cannot be determined a priori, and sufficient empirical testing is needed.
3. In addition to more restrictive temperature settings, consider adding guardrails that independently verify any referenced package to ensure that it exists, is not obsolete, and comes from a trusted party (see the sketch after this list). Note that reducing the temperature does not entirely eliminate the risk of package hallucination: even with very low temperatures or other conservative settings, there is still a small chance that a non-existent package name will be generated.
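A minimal sketch of such a guardrail, assuming a Python toolchain: before acting on an LLM-suggested dependency, the package name is checked against the public PyPI index via its JSON API. The `verify_package` helper name and the fail-closed behavior are illustrative assumptions, not part of the CWE entry, and this only covers the existence check, not trust or freshness.
```
# Hypothetical guardrail: verify that an LLM-suggested package actually
# exists on PyPI before it is installed. Illustrative sketch only.
import json
import urllib.error
import urllib.request

def verify_package(name: str) -> bool:
    """Return True only if `name` is a real, published PyPI package."""
    url = f"https://pypi.org/pypi/{name}/json"  # public PyPI metadata API
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            meta = json.load(resp)
    except urllib.error.URLError:
        # 404 (package does not exist, likely hallucinated) or network
        # failure: fail closed and reject the suggestion.
        return False
    # Reject names that exist but have no released versions (placeholders).
    return bool(meta.get("releases"))

if __name__ == "__main__":
    for pkg in ["requests", "definitely-not-a-real-pkg-xyz"]:
        print(pkg, "->", "ok" if verify_package(pkg) else "REJECT")
```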
Vulnerable JSON
```
{
  "model": "my-coding-model",
  "context_window": 8192,
  "max_output_tokens": 4096,
  "temperature": 1.5,
  ...
}
```
Secure JSON
```
{
  ...
  "temperature": 0.2,
  ...
}
```
How to prevent CWE-1434
- Implementation / System Configuration / Operation: Develop and adhere to robust parameter tuning processes that include extensive testing and validation (see the sketch after this list).
- Implementation / System Configuration / Operation: Implement feedback mechanisms to continuously assess and adjust model performance.
- Documentation: Provide comprehensive documentation and guidelines for parameter settings to ensure consistent and accurate model behavior.
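One way to make such tuning concrete is an offline evaluation harness that sweeps candidate temperatures against a validation prompt suite and measures how often generated code references packages outside a vetted allowlist. The `generate_code` client, the `known_packages` allowlist, and the error budget below are hypothetical, not part of any specific SDK.
```
# Illustrative tuning harness: sweep temperature values and measure how
# often generated code imports packages outside a vetted allowlist.
import re
from typing import Callable, Iterable

IMPORT_RE = re.compile(r"^\s*(?:import|from)\s+([A-Za-z_]\w*)", re.MULTILINE)

def hallucination_rate(
    generate_code: Callable[[str, float], str],  # hypothetical LLM client
    prompts: Iterable[str],
    known_packages: set[str],                    # vetted allowlist (assumed)
    temperature: float,
) -> float:
    """Fraction of prompts whose output imports an unknown package."""
    prompts = list(prompts)
    bad = 0
    for prompt in prompts:
        code = generate_code(prompt, temperature)
        if set(IMPORT_RE.findall(code)) - known_packages:
            bad += 1
    return bad / len(prompts)

def pick_temperature(generate_code, prompts, known_packages,
                     candidates=(0.0, 0.2, 0.5, 1.0, 1.5),
                     budget=0.01) -> float:
    """Highest candidate temperature whose hallucination rate fits the budget."""
    ok = [t for t in candidates
          if hallucination_rate(generate_code, prompts,
                                known_packages, t) <= budget]
    return max(ok) if ok else min(candidates)
```
This mirrors the point in step 2 above: the "best" temperature cannot be determined a priori, so the harness picks the most expressive setting that still passes an empirically chosen error budget.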
How to detect CWE-1434
Manipulate inference parameters and perform comparative evaluation to assess the impact of the selected values. Build a test suite using targeted tools that detect problems such as prompt injection (CWE-1427). Consider statistically measuring the output token distribution to check whether it is consistent with expected results, as in the sketch below.
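A minimal sketch of such a statistical check, assuming you can sample model outputs and have a baseline token-frequency profile recorded at a known-good parameter setting; the KL-divergence threshold and whitespace tokenization are illustrative assumptions.
```
# Illustrative drift check: compare the token distribution of fresh samples
# against a baseline captured at a known-good parameter setting.
import math
from collections import Counter

def kl_divergence(observed: Counter, baseline: Counter) -> float:
    """KL(observed || baseline) over the union vocabulary, with +1 smoothing."""
    vocab = set(observed) | set(baseline)
    n_obs = sum(observed.values()) + len(vocab)
    n_base = sum(baseline.values()) + len(vocab)
    kl = 0.0
    for tok in vocab:
        p = (observed[tok] + 1) / n_obs
        q = (baseline[tok] + 1) / n_base
        kl += p * math.log(p / q)
    return kl

def distribution_drifted(samples: list[str], baseline: Counter,
                         threshold: float = 0.5) -> bool:
    """Flag a parameter change whose outputs diverge from the baseline."""
    observed = Counter(tok for text in samples for tok in text.split())
    return kl_divergence(observed, baseline) > threshold
```
A divergence well above the threshold after a configuration change suggests the new inference parameters have shifted output behavior enough to warrant re-running the full comparative evaluation.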
Plexicus auto-detects CWE-1434 and opens a fix PR in under 60 seconds.
Codex Remedium scans every commit, identifies this exact weakness, and ships a reviewer-ready pull request with the patch. No tickets. No hand-offs.
Frequently asked questions
What is CWE-1434?
This vulnerability occurs when a generative AI or ML model is deployed with inference parameters that are too permissive, causing it to frequently generate incorrect, nonsensical, or unpredictable outputs.
How serious is CWE-1434?
MITRE has not published a likelihood-of-exploit rating for this weakness. Treat it as medium-impact until your threat model proves otherwise.
What languages or platforms are affected by CWE-1434?
MITRE lists the following affected platforms: Not Architecture-Specific, AI/ML, Not Technology-Specific.
How can I prevent CWE-1434?
Develop and adhere to robust parameter tuning processes that include extensive testing and validation. Implement feedback mechanisms to continuously assess and adjust model performance.
How does Plexicus detect and fix CWE-1434?
Plexicus's SAST engine matches the data-flow signature for CWE-1434 on every commit. When a match is found, our Codex Remedium agent opens a fix PR with the corrected code, tests, and a one-line summary for the reviewer.
Where can I learn more about CWE-1434?
MITRE publishes the canonical definition at https://cwe.mitre.org/data/definitions/1434.html. You can also reference OWASP and NIST documentation for adjacent guidance.
Weaknesses related to CWE-1434
Expected Behavior Violation
This weakness occurs when a software component, such as a function, API, or feature, fails to act as documented or intended. The system's…
Incorrect Provision of Specified Functionality
This weakness occurs when software behaves differently than its documented specifications, which can mislead users and create security…
Insufficient Control Flow Management
This vulnerability occurs when a program's execution flow isn't properly managed, allowing attackers to bypass critical checks, trigger…
Don't Let Security Weigh You Down.
Stop choosing between AI velocity and security debt. Plexicus is the only platform that runs Vibe Coding Security and ASPM in parallel — one workflow, every codebase.