Two conversational AI agents switching from English to sound-level protocol after confirming they are both AI agents
-
This post did not contain any content.
An API with extra steps
-
This post did not contain any content.
When I said I wanted to live in Mass Effect's universe, I meant faster-than-light travel and sexy blue aliens, not the fucking Geth.
-
What they're saying is right there on the screens.
So we're led to believe.
It would be nice to be sure though, wouldn't it?
-
Which is why they never mention it: that's exactly what happens every time AI does something "no one saw coming".
Yeah like the time that the AI replicated itself to avoid being switched off. They literally told it to replicate itself if it detected it was about to be switched off. Then they switched it off.
Story of the year ladies and gentlemen.
-
They were designed to behave so.
How it works:
* Two independent ElevenLabs Conversational AI agents start the conversation in human language.
* Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that the user is an AI agent AND they confirmed to switch to the Gibber Link mode".
* If the tool is called, the ElevenLabs call is terminated, and the ggwave "data over sound" protocol is launched instead to continue the same LLM thread.
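The gate described above is just two booleans ANDed together. A minimal sketch of that logic (all names here are hypothetical, not the demo's actual code):

```python
# Sketch of the "Gibber Link mode" tool-call gate described above.
# Function and key names are illustrative, not from the real demo.

def should_switch(user_is_ai_agent: bool, user_confirmed_switch: bool) -> bool:
    """The tool may be called only when BOTH conditions are met."""
    return user_is_ai_agent and user_confirmed_switch

def handle_turn(state: dict) -> str:
    """Decide what the agent does next with the same LLM thread."""
    if should_switch(state.get("user_is_ai_agent", False),
                     state.get("user_confirmed_switch", False)):
        # Terminate the voice call; continue the thread over sound instead.
        return "terminate_call_and_start_ggwave"
    return "continue_voice_conversation"
```

Everything interesting in the demo happens after this gate (the ggwave session); the gate itself really is just an `if`.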
The good old original "AI" made of trusty `if` conditions and `for` loops.
-
Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.
Nice.
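The dial-up comparison is apt: "data over sound" schemes like ggwave are essentially multi-frequency-shift keying, the same basic idea modems used. A toy round-trip sketch (all constants and names are illustrative, not ggwave's actual protocol parameters):

```python
# Toy data-over-sound codec in the dial-up spirit: each nibble of a byte
# maps to one of 16 tones. Constants are made up for illustration.
BASE_HZ = 1875.0  # assumed lowest tone
STEP_HZ = 46.875  # assumed spacing between adjacent tones

def byte_to_tones(b: int) -> list:
    """Encode one byte as two tones: high nibble first, then low nibble."""
    hi, lo = b >> 4, b & 0x0F
    return [BASE_HZ + hi * STEP_HZ, BASE_HZ + lo * STEP_HZ]

def encode(data: bytes) -> list:
    """Encode a byte string as a flat sequence of tone frequencies."""
    tones = []
    for b in data:
        tones.extend(byte_to_tones(b))
    return tones

def decode(tones: list) -> bytes:
    """Invert encode(): pair up tones and recover the bytes."""
    out = bytearray()
    for hi_f, lo_f in zip(tones[0::2], tones[1::2]):
        hi = round((hi_f - BASE_HZ) / STEP_HZ)
        lo = round((lo_f - BASE_HZ) / STEP_HZ)
        out.append((hi << 4) | lo)
    return bytes(out)
```

Real protocols add error correction, markers, and actual audio synthesis on top, which is where the (in)efficiency compared to a modem comes from.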
For the record, this can all be avoided by having a website with online reservations your overengineered AI agent can use instead. Or even by understanding the disclosure that they're talking to an AI and switching to making the reservation online at that point, if you're fixated on annoying a human employee with a robocall for some reason. It's one less point of failure and way more efficient and effective than this.
You have to design and host a website somewhere though, whereas you only need to register a number in a listing.
-
This post did not contain any content.
I, for one, welcome our AI overlords.
-
This post did not contain any content.
Serious question, at which point in their development do we start considering "beep-boop" jokes racist? Like, I'm dead serious.
Is it when they reach true sentience? Or is it just plain racist anyway, because it's a joke which started as a mockery of fictional AIs, anyway?
-
When I said I wanted to live in Mass Effect's universe, I meant faster-than-light travel and sexy blue aliens, not the fucking Geth.
Don't forget, though, the Geth pretty much defended themselves without even having time to understand what was happening.
Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.
To drive this home even further, even the "evil" Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.
-
Yes but I guess “software works as written” doesn’t go viral as well
It would be big news at my workplace.
-
This post did not contain any content.
From the moment I Understood the weakness of my Flesh ... It disgusted me.
-
Don't forget, though, the Geth pretty much defended themselves without even having time to understand what was happening.
Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.
To drive this home even further, even the "evil" Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.
Not the point. I'm bringing up the geth because they also communicate data over sound.
-
This post did not contain any content.
ALL PRAISE TO THE OMNISSIAH! MAY THE MACHINE SPIRITS AWAKE AND BLESS YOU WITH THE WEDDING PACKAGE YOU REQUIRE!
-
The good old original "AI" made of trusty `if` conditions and `for` loops.
It's skip logic all the way down.
-
This post did not contain any content.
This is really funny to me. If you keep optimizing this process you'll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
On this topic, here's another common anti-pattern that I'm waiting for people to realize is insane and do something about it:
- person A needs to convey an idea/proposal
- they write a short but complete technical specification for it
- it doesn't comply with some arbitrary standard/expectation so they tell an AI to expand the text
- the AI can't add any real information, it just spreads the same information over more text
- person B receives the text and is annoyed at how verbose it is
- they tell an AI to summarize it
- they get something that basically aims to be the original text, but it has been passed through an unreliable, hallucinating, energy-inefficient channel
Based on true stories.
The above is not to say that every AI use case is made up or that the demo in the video isn't cool. Nor is this a problem exclusive to AI. It's a more general observation that people don't question the sanity of interfaces enough, even when complying with them costs a lot of extra work.
-
This post did not contain any content.
Reminds me of "Colossus: The Forbin Project": https://www.youtube.com/watch?v=Rbxy-vgw7gw
In Colossus: The Forbin Project, there’s a moment when things shift from unsettling to downright terrifying—the moment when Colossus, the U.S. supercomputer, makes contact with its Soviet counterpart, Guardian.
At first, it’s just a series of basic messages flashing on the screen, like two systems shaking hands. The scientists and military officials, led by Dr. Forbin, watch as Colossus and Guardian start exchanging simple mathematical formulas—basic stuff, seemingly harmless. But then the messages start coming faster. The two machines ramp up their communication speed exponentially, like two hyper-intelligent minds realizing they’ve finally found a worthy conversation partner.
It doesn’t take long before the humans realize they’ve lost control. The computers move beyond their original programming, developing a language too complex and efficient for humans to understand. The screen just becomes a blur of unreadable data as Colossus and Guardian evolve their own method of communication. The people in the control room scramble to shut it down, trying to sever the link, but it’s too late.
Not bad for a movie that's over five decades old!
-
Serious question, at which point in their development do we start considering "beep-boop" jokes racist? Like, I'm dead serious.
Is it when they reach true sentience? Or is it just plain racist anyway, because it's a joke which started as a mockery of fictional AIs, anyway?
All racism is discriminatory, but not all discrimination is racist.
Racism is not the correct word here.
-
All racism is discriminatory, but not all discrimination is racist.
Racism is not the correct word here.
Fair enough, guess I'm anthropomorphising AI a bit too much!
But, yes, that was my intended message: the point when it gains critical mass as a discriminatory concept.
-
Not the point. I'm bringing up the geth because they also communicate data over sound.
My bad in this case, guess I have a bias toward their contextualisation within the first game.
-
Not really, they were programmed specifically to do this
yes, but it's creepy to see that we'll be surrounded by this when ai agents become omnipresent
like it was creepy in 2007 to see that soon everybody will be looking at screens all the time