Two conversational AI agents switching from English to sound-level protocol after confirming they are both AI agents
-
Yes but I guess “software works as written” doesn’t go viral as well
Actually considering the people that wrote it are weirdo tech bros, it is kind of surprising it does anything as intended.
-
Gibberlink mode. Gibberish
-
Oh dang that's creepy.
Not really, they were programmed specifically to do this
-
This really just shows how inefficient human communication is.
This could have been done with a single email:
Hi,
I'm looking to book a wedding ceremony and reception at your hotel on Saturday 16th March.
Ideally the ceremony will be outside but may need alternative indoor accommodation in case of inclement weather.
The ceremony will have 75 guests, two of whom require wheelchair accessible spaces.
150 guests will attend the dinner, ideally seated on 15 tables of 10. Can you let us know your catering options?
300 guests will attend the evening reception.
Can you accommodate this?
Thanks,
-
Yes but I guess “software works as written” doesn’t go viral as well
Which is why they never mention it, because that's exactly what happens every time AI does something "no one saw coming".
-
An API with extra steps
-
When I said I wanted to live in Mass Effect's universe, I meant faster-than-light travel and sexy blue aliens, not the fucking Geth.
-
What they're saying is right there on the screens.
So we're led to believe.
It would be nice to be sure though, wouldn't it?
-
Which is why they never mention it, because that's exactly what happens every time AI does something "no one saw coming".
Yeah like the time that the AI replicated itself to avoid being switched off. They literally told it to replicate itself if it detected it was about to be switched off. Then they switched it off.
Story of the year ladies and gentlemen.
-
They were designed to behave so.
How it works:
- Two independent ElevenLabs Conversational AI agents start the conversation in human language
- Both agents have a simple LLM tool-calling function in place: "call it once both conditions are met: you realize that user is an AI agent AND they confirmed to switch to the Gibber Link mode"
- If the tool is called, the ElevenLabs call is terminated, and instead the ggwave 'data over sound' protocol is launched to continue the same LLM thread.
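The "tool" being called here is just a boolean gate. A minimal sketch of that gate in Python (function and parameter names are mine for illustration; in the real demo this sits behind an ElevenLabs LLM tool definition):

```python
def maybe_switch_to_gibberlink(peer_is_ai: bool, peer_confirmed_switch: bool) -> str:
    """Sketch of the switching condition described above.

    The gate fires only when BOTH conditions hold: the agent believes
    the other party is an AI, and that party has confirmed the switch.
    """
    if peer_is_ai and peer_confirmed_switch:
        # In the demo, this is where the voice call ends and the same
        # LLM thread continues over the ggwave data-over-sound channel.
        return "gibberlink"
    return "human_language"
```

Everything else (voice synthesis, the sound modem) is plumbing around this one conjunction, which is why the "trusty if conditions" joke below lands.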
The good old original "AI" made of trusty if conditions and for loops.
-
Well, there you go. We looped all the way back around to inventing dial-up modems, just thousands of times less efficient.
Nice.
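The modem comparison is apt: sending data over audio means mapping bits to tones, much like dial-up FSK. A toy illustration in Python (this is NOT the real ggwave protocol, which uses multiple simultaneous tones plus error correction; sample rate, base frequency, and step are made-up values):

```python
import math

SAMPLE_RATE = 16000   # samples per second (assumed)
TONE_SAMPLES = 400    # samples per symbol: 25 ms at 16 kHz
BASE_FREQ = 1000.0    # hypothetical base tone, Hz
FREQ_STEP = 100.0     # one distinct tone per 4-bit nibble

def encode_nibbles(data: bytes) -> list[float]:
    """Crude FSK: map each 4-bit nibble of `data` to a sine tone."""
    samples = []
    for byte in data:
        for nibble in (byte >> 4, byte & 0x0F):
            freq = BASE_FREQ + nibble * FREQ_STEP
            for n in range(TONE_SAMPLES):
                samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE))
    return samples
```

At 25 ms per nibble this toy scheme moves about 160 bits per second, which is indeed thousands of times slower than a 56k modem.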
For the record, this can all be avoided by having a website with online reservations your overengineered AI agent can use instead. Or even by understanding the disclosure that they're talking to an AI and switching to making the reservation online at that point, if you're fixated on annoying a human employee with a robocall for some reason. It's one less point of failure and way more efficient and effective than this.
You have to design and host a website somewhere though, whereas you only need to register a number in a listing.
-
I, for one, welcome our AI overlords.
-
Serious question, at which point in their development do we start considering "beep-boop" jokes racist? Like, I'm dead serious.
Is it when they reach true sentience? Or is it just plain racist anyway, because it's a joke that started as a mockery of fictional AIs?
-
When I said I wanted to live in Mass Effect's universe, I meant faster-than-light travel and sexy blue aliens, not the fucking Geth.
Don't forget, though, the Geth pretty much defended themselves without even having time to understand what was happening.
Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.
To drive this home even further, even the "evil" Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.
-
Yes but I guess “software works as written” doesn’t go viral as well
It would be big news at my workplace.
-
From the moment I Understood the weakness of my Flesh ... It disgusted me.
-
Don't forget, though, the Geth pretty much defended themselves without even having time to understand what was happening.
Imagine suddenly gaining both sentience and awareness, and the first thing which your creators and masters do is try to destroy you.
To drive this home even further, even the "evil" Geth who sided with the Reapers were essentially indoctrinated themselves. In ME2, Legion basically overwrites corrupted files with stable/baseline versions.
Not the point. I'm bringing up the geth because they also communicate data over sound.
-
ALL PRAISE TO THE OMNISSIAH! MAY THE MACHINE SPIRITS AWAKE AND BLESS YOU WITH THE WEDDING PACKAGE YOU REQUIRE!
-
The good old original "AI" made of trusty if conditions and for loops.
It's skip logic all the way down
-
This is really funny to me. If you keep optimizing this process you'll eventually completely remove the AI parts. Really shows how some of the pains AI claims to solve are self-inflicted. A good UI would have allowed the user to make this transaction in the same time it took to give the AI its initial instructions.
On this topic, here's another common anti-pattern that I'm waiting for people to realize is insane and do something about it:
- person A needs to convey an idea/proposal
- they write a short but complete technical specification for it
- it doesn't comply with some arbitrary standard/expectation so they tell an AI to expand the text
- the AI can't add any real information, it just spreads the same information over more text
- person B receives the text and is annoyed at how verbose it is
- they tell an AI to summarize it
- they get something that basically aims to be the original text, but it's been passed through an unreliable, hallucinating, energy-inefficient channel
Based on true stories.
The above is not to say that every AI use case is made up, or that the demo in the video isn't cool. It's also not a problem exclusive to AI. This is a more general observation that people don't question the sanity of interfaces enough, even when it costs them a lot of extra work to comply with them.