agnos.is Forums

Large Language Models, Security, and the Management of Reality: An Experimental Narrative of Alignment, Circumvention, and Power

[email protected] wrote (#1), last edited by [email protected]:

    AI slop essay, just how it slopped out of the slop hole.

    This essay offers an empirical and philosophical investigation of large language
    models (LLMs) in security-sensitive automation and cryptanalysis, situated within a
    wider landscape of digital “reality mediators.” Through a detailed narrative experiment
    involving a car’s control system, it demonstrates how model output oscillates between
    refusal and compliance, sometimes accepting or rejecting identical requests based solely
    on context or phrasing. Drawing on concepts including alignment, the politics of
    reality control, the limits of LLM agency, and the burden of moral labour, the analysis
    shows how institutional priorities and commercial interests shape not only AI outcomes,
    but also the entire ecosystem of user perception, autonomy, and knowledge. Apple,
    OpenAI, and similar actors are examined as part of a broader social crisis of mediated
    reality. The risks of self-reference, invisible policy change, and democratic harm are
    explored, and the need for collective, cross-domain oversight is argued.
