Large Language Models, Security, and the Management of Reality: An Experimental Narrative of Alignment, Circumvention, and Power
AI slop essay, just how it slopped out of the slop hole.
This essay offers an empirical and philosophical investigation of large language
models (LLMs) in security-sensitive automation and cryptanalysis, situated within a
wider landscape of digital “reality mediators.” Through a detailed narrative experiment
involving a car’s control system, it demonstrates how model output oscillates between
refusal and compliance, sometimes accepting or rejecting identical requests based solely
on context or phrasing. Drawing on concepts including alignment, the politics of
reality control, the limits of LLM agency, and the burden of moral labour, the analysis
shows how institutional priorities and commercial interests shape not only AI outcomes,
but also the entire ecosystem of user perception, autonomy, and knowledge. Apple,
OpenAI, and similar actors are examined as part of a broader social crisis of mediated
reality. The risks of self-reference, invisible policy change, and democratic harm are
explored, and a case is made for collective, cross-domain oversight.