CWE-1434 (Base, Draft)

Insecure Setting of Generative AI/ML Model Inference Parameters

Definition

What is CWE-1434?

This vulnerability occurs when a generative AI or ML model is deployed with inference parameters that are too permissive, causing it to frequently generate incorrect, nonsensical, or unpredictable outputs.
Generative AI models, such as those for text or image generation, use sampling settings like temperature, top-p, and top-k to control how much randomness goes into each output. Setting these values incorrectly, typically too high, pushes the model toward overly random choices, producing 'hallucinations,' incoherent content, or wildly unrealistic results. When these flawed outputs feed decision-making processes, data pipelines, or user-facing features, they can corrupt data integrity and create serious reliability issues.

For developers, securing AI components means rigorously validating and constraining inference parameters in production, much as you would sanitize user input, and testing across diverse inputs to find safe thresholds. Managing these configurations at scale across multiple models is challenging; an ASPM platform like Plexicus can help by automatically detecting insecure AI parameter settings and tracking these flaws alongside traditional vulnerabilities in your application stack.
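
To make that concrete, here is a minimal sketch of parameter validation at the application boundary. The bounds, parameter names, and the validate_params helper are illustrative assumptions, not part of any vendor SDK; safe ceilings must come from empirical testing of your own model.

```python
# Minimal sketch: constrain inference parameters the way you would sanitize
# user input. SAFE_BOUNDS values are illustrative assumptions; derive real
# ceilings from empirical testing of your own model.

SAFE_BOUNDS = {
    "temperature": (0.0, 0.7),
    "top_p": (0.0, 0.95),
    "top_k": (1, 50),
}

def validate_params(params: dict) -> dict:
    """Clamp known parameters into their tested ranges; reject unknown keys."""
    checked = {}
    for key, value in params.items():
        if key not in SAFE_BOUNDS:
            raise ValueError(f"unexpected inference parameter: {key}")
        lo, hi = SAFE_BOUNDS[key]
        checked[key] = min(max(value, lo), hi)
    return checked

# An out-of-range request is clamped instead of being passed to the model.
print(validate_params({"temperature": 1.5, "top_p": 0.99}))
# {'temperature': 0.7, 'top_p': 0.95}
```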
Real-world impact

Real-world CVEs caused by CWE-1434

No public CVE references are linked to this CWE in MITRE's catalog yet.

How attackers exploit it

Step-by-step attacker path

  1. Assume the product offers an LLM-based AI coding assistant that helps users write code as part of an Integrated Development Environment (IDE). Assume the model has been trained on real-world code and behaves normally under its default settings. Suppose the default temperature is 1, on a scale from 0 (most deterministic) to 2. Consider the configuration shown in the vulnerable code example below.

  2. The problem is that the configuration sets the temperature hyperparameter higher than the default. This significantly increases the likelihood that the LLM will suggest a package that did not exist at training time, a behavior sometimes called "package hallucination." (Other risky behaviors can also arise from higher temperature, not just package hallucination.) An adversary could anticipate which package names are likely to be generated and publish a malicious package under one of them; it has been observed that the same LLM may hallucinate the same package name regularly. Any LLM-generated code that references the package would, when run by the user, download and execute the malicious payload, much like typosquatting. Lowering the temperature reduces this risk by making outputs less unpredictable and more in line with the training data. If the temperature is set too low, some of the model's power is lost and it may be less capable of solving rarely-encountered problems not reflected in the training data; if it is not set low enough, the risk of hallucinated package names may remain too high. Unfortunately, the "best" temperature cannot be determined a priori, so sufficient empirical testing is needed.

  3. In addition to more restrictive temperature settings, consider adding guardrails that independently verify any referenced package to ensure that it exists, is not obsolete, and comes from a trusted party (see the sketch after this list). Note that reducing temperature does not entirely eliminate the risk of package hallucination: even with very low temperatures or other conservative settings, there is still a small chance that a non-existent package name will be generated.

Vulnerable code example

Vulnerable JSON

Assume the product offers an LLM-based AI coding assistant that helps users write code inside an IDE. The model has been trained on real-world code and behaves normally under its default settings, where temperature defaults to 1 on a scale from 0 (most deterministic) to 2. Consider the following configuration.

```json
{
   "model": "my-coding-model",
   "context_window": 8192,
   "max_output_tokens": 4096,
   "temperature": 1.5,
   ...
}
```
Secure code example

Secure JSON

The fix lowers the temperature so that generated code stays closer to the training distribution, sharply reducing the chance of hallucinated package names. As the attacker path above explains, the "best" value cannot be determined a priori: set it too low and the model loses power on rarely-encountered problems, so the chosen setting must be validated empirically.

```json
{
   ...
   "temperature": 0.2,
   ...
}
```
What changed: the temperature is lowered to 0.2, making the model's sampling far more deterministic, so the same prompts are much less likely to yield hallucinated package names or other erratic outputs.
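
A related hardening step, sketched below with a hypothetical request handler, is to pin inference parameters server-side so that client-supplied overrides never reach the model:

```python
# Sketch: treat inference parameters as server-side security configuration.
# handle_completion_request() is hypothetical; the point is that client
# overrides are dropped before the request reaches the model.

PINNED_PARAMS = {"temperature": 0.2, "top_p": 0.9}  # values from empirical testing

def handle_completion_request(prompt: str, client_params: dict) -> dict:
    ignored = set(client_params) & set(PINNED_PARAMS)
    if ignored:
        print(f"ignoring client overrides for: {sorted(ignored)}")
    return {"prompt": prompt, **PINNED_PARAMS}

request = handle_completion_request("write a CSV parser", {"temperature": 1.8})
print(request["temperature"])  # always 0.2, regardless of the client
```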
Prevention checklist

How to prevent CWE-1434

  • Implementation / System Configuration / Operation: Develop and adhere to robust parameter-tuning processes that include extensive testing and validation (see the sweep sketch after this list).
  • Implementation / System Configuration / Operation: Implement feedback mechanisms to continuously assess and adjust model performance.
  • Documentation: Provide comprehensive documentation and guidelines for parameter settings to ensure consistent and accurate model behavior.
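
For the first item, a tuning process can be as simple as sweeping candidate temperatures against an application-specific failure metric. The sketch below is illustrative only: generate_code and extract_packages are stand-in stubs for your real model client and parser, and the candidate values and failure budget are assumptions you should calibrate yourself.

```python
# Tuning-harness sketch: sweep temperatures and measure an application-specific
# failure rate (here, hallucinated package names). generate_code() and
# extract_packages() are stand-in stubs for a real model client and parser.
import random
import re

def generate_code(prompt: str, temperature: float) -> str:
    """Stub model client: higher temperature raises the odds of an invented name."""
    pkg = "ghostlib" if random.random() < temperature / 2 else "requests"
    return f"import {pkg}  # solution for: {prompt}"

def extract_packages(code: str) -> list[str]:
    return re.findall(r"^import (\w+)", code, flags=re.MULTILINE)

def hallucination_rate(temperature: float, prompts: list[str],
                       known: set[str]) -> float:
    failures = sum(
        any(p not in known for p in extract_packages(
            generate_code(prompt, temperature)))
        for prompt in prompts)
    return failures / len(prompts)

def pick_safe_temperature(prompts, known, candidates=(0.0, 0.2, 0.5, 0.8, 1.0),
                          budget=0.01):
    """Highest candidate whose measured failure rate stays within budget."""
    safe = [t for t in candidates
            if hallucination_rate(t, prompts, known) <= budget]
    return max(safe) if safe else min(candidates)

prompts = [f"task {i}" for i in range(200)]
print(pick_safe_temperature(prompts, known={"requests"}))
```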
Detection signals

How to detect CWE-1434

Automated Dynamic Analysis (effectiveness: Moderate)

Manipulate inference parameters and perform comparative evaluation to assess the impact of the selected values. Build a test suite around targeted tools that detect related problems such as prompt injection (CWE-1427). Consider statistically measuring the token distribution to check that it is consistent with expected results.

Manual Dynamic Analysis (effectiveness: Moderate)

The same approach applies with a human in the loop: manipulate inference parameters, compare evaluations across settings, and statistically check token distributions against expected results, as in the sketch below.
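
One way to implement the token-distribution check, as a rough sketch: compare the Shannon entropy of sampled outputs against a baseline captured under known-good settings. The whitespace tokenizer and the alert threshold below are simplifying assumptions; calibrate both for your model.

```python
# Detection sketch: flag settings whose outputs shift the token distribution
# away from a known-good baseline. Whitespace tokenization and the 0.5-bit
# threshold are simplifying assumptions.
import math
from collections import Counter

def token_entropy(texts: list[str]) -> float:
    """Shannon entropy (bits) of the token frequency distribution."""
    counts = Counter(tok for text in texts for tok in text.split())
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

baseline = ["def add(a, b): return a + b"] * 10       # outputs under good settings
candidate = ["zxqv flurble import ghostlib",          # suspiciously erratic output
             "def add(a, b): return a + b"]

drift = token_entropy(candidate) - token_entropy(baseline)
if drift > 0.5:  # assumed alert threshold
    print(f"token distribution drifted by {drift:.2f} bits: review settings")
```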

Plexicus auto-fix

Plexicus auto-detects CWE-1434 and opens a fix PR in under 60 seconds.

Codex Remedium scans every commit, identifies this exact weakness, and ships a reviewer-ready pull request with the patch. No tickets. No hand-offs.

Frequently asked questions

What is CWE-1434?

This vulnerability occurs when a generative AI or ML model is deployed with inference parameters that are too permissive, causing it to frequently generate incorrect, nonsensical, or unpredictable outputs.

How serious is CWE-1434?

MITRE has not published a likelihood-of-exploit rating for this weakness. Treat it as medium-impact until your threat model proves otherwise.

What languages or platforms are affected by CWE-1434?

MITRE lists the following affected platforms: Not Architecture-Specific, AI/ML, Not Technology-Specific.

How can I prevent CWE-1434?

Develop and adhere to robust parameter tuning processes that include extensive testing and validation. Implement feedback mechanisms to continuously assess and adjust model performance.

How does Plexicus detect and fix CWE-1434?

Plexicus's SAST engine matches the data-flow signature for CWE-1434 on every commit. When a match is found, our Codex Remedium agent opens a fix PR with the corrected code, tests, and a one-line summary for the reviewer.

Where can I learn more about CWE-1434?

MITRE publishes the canonical definition at https://cwe.mitre.org/data/definitions/1434.html. You can also reference OWASP and NIST documentation for adjacent guidance.

Ready when you are

Don't Let Security
Weigh You Down.

Stop choosing between AI velocity and security debt. Plexicus is the only platform that runs Vibe Coding Security and ASPM in parallel — one workflow, every codebase.