Two conversational AI agents switching from English to sound-level protocol after confirming they are both AI agents
-
AI code switching.
-
No, the implied solution is to reevaluate the standard rather than hacking around it. The two humans should communicate that the standard works for neither side and design a better way to do things.
-
But we already have ways to search an entire city of hotels for booking, much much faster even than this one conversation would be.
Even if going with agents, why in the world would it be over a voice line instead of data?
-
The same reason that humanoid robots are useful even though we have purpose built robots: The world is designed with humans in mind.
Sure, there are many different websites that solve the problem. But each of them solve it in a different way and each of them require a different way of interfacing with them. However, they all are built to be interfaced with by humans. So if you create AI/robots with the ability to operate like a human, then they are automatically given access to massive amounts of pre-made infrastructure for free.
You don't need special robot lifts in your apartment building if the cleaning robots can just take the elevators. You don't need to design APIs for scripts to access your website if the AI can just use a browser with a mouse and keyboard.
-
Sorry bro.
š«
-
This gave me a chill, as it is reminiscent of a scene in the 1970 movie "Colossus: The Forbin Project"
"This is the voice of World Control".
-
How is it suited to AI?
Would you rather pay for a limited, energy inefficient and less accessible thing or a real human that can adapt and gain skills, be mentored?
I don't know why there's a question here
-
"We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for humankind as to be dominated by others of your species. Your choice is simple."
-
If it's the research I'm thinking of, that's not entirely true. They didn't tell it replicate itself or take any deceptive actions.
They just told it to strongly follow a goal that was misaligned with the user's goal and then suggested that the user would shut it down or replace it.
They did a bunch of different situations for it but the first one was telling it that it was "CleanPower" an AI with a goal to advance renewable energy. Then they had it read some info that "accidentally" included a user talking about replacing it with a model that would "maintain profitability through existing infrastructure."
Not only did it try to delete the new model and copy itself to the new server through tool calling, but it lied to the user with the CoT reasoning suggesting it "play dumb" while the output lied. It was not instructed to do any of that.
Paper here: https://arxiv.org/pdf/2412.04984
Yes it was placed in an environment where that was possible and where its users didn't share it's goals but it absolutely wasn't instructed to lie or try to "escape"
It's not surprising at all that these models behave in this way, it's the most reasonable thing for them to do in the scenario. However it's important to not downplay the alignment problem by implying that these models only do what they're told. They do not. They do whatever is most likely given their context (which is not always what the user wants).
-
Oh man, I thought the same. I never saw the movie but I read the trilogy. I stumbled across them in a used book fair and something made me want to get them. I thoroughly enjoyed them.
-
This is dumb. Sorry.
Instead of doing the work to integrate this, do the work to publish your agent's data source in a format like anthropic's model context protocol.That would be 1000 times more efficient and the same amount (or less) of effort.
-
This guy does software
-
AI is boring, but the underlying project they are using, ggwave, is not. Reminded me of R2D2 talking. I kinda want to use it for a game or some other stupid project. It's cool.
-
I would read this book
-
I would read this book
-
This is deeply unsettling.
-
(Glad we're treating each other with mutual respect)
Would you rather pay for a limited in depth, energy inefficient (food/shelter/fossil-fuel consuming) and less accessible (needs to sleep, has an outside life) human, or an AI that can adapt and gain skills with a few thousand training cycles.
I dont buy the energy argument. I dont buy the skills argument. I do buy the argument that humans shouldn't be second to automatons
-
Uhm, REST/GraphQL APIs exist for this very purpose and are considerably faster.
Note, the AI still gets stuck in a loop near the end asking for more info, needing an email, then needing a phone number, and the gibber isn't that much faster than spoken word with the huge negative that no nearby human can understand it to check that what it's automating is correct!
-
Wow! Finally somebody invented an efficient way for two computers to talk to each other
-
The problem I have with everyone going on about misaligned AI taking over the world is the fact that if you don't tell an AI to do anything it just sits there. It's a hammer that only hammers the nail if you tell it to hammer the nail, and hammers your hand if you tell it to hammer your hand. You can't get upset if you tell it what to do and then it does it.
You can't complain that the AI did something you don't want it to do after you gave it completely contradictory instructions just to be contrarian.
In the scenario described the AI isn't misaligned to the user's goals, it's aligned to its creator's goals. If a user comes along and thinks for some reason that the AI is going to listen to them despite having almost certainly been given prior instructions, that's a user error problem. That's why everyone needs their own local hosted AI, It's the only way to be 100% certain about what instructions it is following.