Wikipedia:Large language models

LLM-originated content and deletion: Fix typo

Revision as of 21:56, 23 April 2026
Even though the near-blanket ban in article space is not strictly necessary to hold LLM content as non-compliant with policy and unsuitable for the encyclopedia in general terms—as it is possible to apply core content policies and other policies directly to reach the same conclusion and act on it—designating [[WP:LLM]] as a standalone content guideline establishes a distinct ''content standard'', even if an intentionally redundant one. Relative to preexisting policies, it serves as an additional protective layer to ensure Wikipedia remains a high-quality encyclopedia.


This establishes a presumption of a generalized, inherent problem with LLM-originated content. Editors addressing this issue are not obligated to look past the content's origins to identify specific unverifiable, non-neutral, or originally synthesized (or copyright-infringing, libellous, etc.) statements before acting on it, provided those origins are certain or reasonably close to certain.{{efn|Very often they are {{em|not}}, but cases in which they are certain are also numerous in absolute terms—for example, an editor who has created many articles confirms, when asked, that the articles are AI-generated.}} An editor might attempt to demonstrate that specific LLM content is compliant, but this is liable to be disputed. The burden to demonstrate compliance lies with the editor who adds or restores the known LLM-originated material. Conversely, demanding that others articulate specific problems to prove the content is flawed could be a wasted effort, as its unsuitability can already be fairly safely assumed under the policies.


Therefore, LLM-originated articles can be deleted under the [[WP:DELREASON#14|deletion policy]] (reason #14: "Any other content not suitable for an encyclopedia"), following the normal [[Wikipedia:Articles for deletion|AfD]] process, with [[WP:PROD|PROD]] as an option.