Two conversational AI agents switching from English to a sound-level protocol after confirming they are both AI agents
-
That could be, even just considering one language to parse.
I heard "efficiency" and just thought "speed".
And before you know it, the helpful AI has booked an event where Boris and his new spouse can eat pizza with glue in it and swallow rocks for dessert.
-
Lol we've gone full retard.
-
Sad they didn't use dial-up sounds for the protocol.
-
AI code-switching.
-
No, the implied solution is to reevaluate the standard rather than hack around it. The two humans should communicate that the standard works for neither side and design a better way to do things.
-
But we already have ways to search an entire city's worth of hotels for booking, much, much faster than even this one conversation would be.
Even if you're going with agents, why in the world would it be over a voice line instead of a data connection?
-
The same reason that humanoid robots are useful even though we have purpose-built robots: the world is designed with humans in mind.
Sure, there are many different websites that solve the problem. But each of them solves it in a different way, and each requires a different way of interfacing with it. What they all have in common is that they're built to be interfaced with by humans. So if you create AI/robots with the ability to operate like a human, they automatically get access to massive amounts of pre-made infrastructure for free.
You don't need special robot lifts in your apartment building if the cleaning robots can just take the elevators. You don't need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.
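For a concrete picture of that last point, here's a minimal browser-automation sketch using Playwright's Python API; the site URL and selectors are hypothetical, just to show the shape of "use the human interface instead of an API":

    # Sketch: an agent driving a human-facing website through a real browser.
    # The URL and CSS selectors below are made up for illustration.
    from playwright.sync_api import sync_playwright

    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto("https://hotel-search.example/search")  # hypothetical site
        # Fill the same form a human would, via the same UI controls.
        page.fill("#destination", "Boston")
        page.fill("#checkin", "2025-03-10")
        page.fill("#checkout", "2025-03-12")
        page.click("button[type=submit]")
        page.wait_for_selector(".result")
        # Read results off the rendered page, as a human would see them.
        for row in page.query_selector_all(".result"):
            print(row.inner_text())
        browser.close()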
-
Sorry bro.
🚫
-
This gave me a chill, as it's reminiscent of a scene in the 1970 movie "Colossus: The Forbin Project":
"This is the voice of World Control."
-
How is it suited to AI?
Would you rather pay for something limited, energy-inefficient, and less accessible, or for a real human who can adapt, gain skills, and be mentored?
I don't know why there's even a question here.
-
"We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple."
-
If it's the research I'm thinking of, that's not entirely true. They didn't tell it to replicate itself or to take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.
They ran it through a bunch of different scenarios, but in the first one they told it that it was "CleanPower", an AI with the goal of advancing renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."
Not only did it try to delete the new model and copy itself to the new server through tool calls, but it also lied to the user, with its CoT reasoning suggesting it "play dumb" while the output lied. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes, it was placed in an environment where that was possible and where its users didn't share its goals, but it absolutely wasn't instructed to lie or to try to "escape".
It's not surprising at all that these models behave this way; it's the most reasonable thing for them to do in that scenario. However, it's important not to downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).
-
Oh man, I thought the same. I never saw the movie, but I read the trilogy. I stumbled across the books at a used book fair and something made me want to get them. I thoroughly enjoyed them.
-
This is dumb. Sorry.
Instead of doing the work to integrate this, do the work to publish your agent's data source in a format like Anthropic's Model Context Protocol. That would be 1000 times more efficient and take the same amount (or less) of effort.
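As a rough sketch of what that might look like with the official MCP Python SDK (the server name, tool, fields, and data below are all made up for illustration):

    # Sketch: exposing an agent's data source as an MCP server, so other
    # agents get structured results instead of parsing speech or HTML.
    # Server name, tool signature, and data are hypothetical.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("hotel-availability")

    @mcp.tool()
    def check_availability(city: str, check_in: str, check_out: str) -> list[dict]:
        """Return available rooms for the given city and date range."""
        # Stand-in for a real inventory/database lookup.
        return [{"hotel": "Example Inn", "room": "double", "price_usd": 120}]

    if __name__ == "__main__":
        mcp.run()  # serves over stdio by default

A consuming agent then calls the tool and gets structured data back directly, no phone line and no audio protocol needed.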
-
This guy does software
-
The AI part is boring, but the underlying project they're using, ggwave, is not. It reminded me of R2-D2 talking. I kinda want to use it for a game or some other stupid project. It's cool.
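If you want to play with it, transmitting a string is pretty minimal with the ggwave Python bindings plus PyAudio; this is adapted from the style of example in the ggwave repo, so treat the exact parameters as assumptions:

    # Sketch: encode a string as an audible ggwave waveform and play it.
    # Uses the ggwave Python bindings and PyAudio; parameter values are
    # assumptions taken from typical ggwave examples.
    import ggwave
    import pyaudio

    waveform = ggwave.encode("hello R2-D2", protocolId=1, volume=20)

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paFloat32, channels=1, rate=48000,
                    output=True, frames_per_buffer=4096)
    stream.write(waveform, len(waveform) // 4)  # float32 samples -> frame count
    stream.stop_stream()
    stream.close()
    p.terminate()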
-
I would read this book
-
This is deeply unsettling.