AI Armageddon
-
No, it could just say "no". It doesn't have to answer.
If we’re talking about actual AI, as a concept, then absolutely. These are prompt inputs, though; the software has no choice nor awareness. It is a machine being told to do something with the janky-ass programming it was provided, as algorithms attempt to scrape data to guess what you’re saying. If AI were ever actually achieved, it’s not something we would have control over, as it would be sentient and self-realized, which is nothing like what an LLM is at fucking all, in any way, shape, or form.
-
Y'all realize that LLMs aren't AI... right?
What? I thought LLMs are generative AI.
-
Watch us make Skynet and have it go rogue because we trained it on the Terminator movies.
“I'm recording this, because this could be the last thing I'll ever say. The city I once knew as home is teetering on the edge of radioactive oblivion. A three-hundred thousand degree baptism by nuclear fire.
I'm not sorry, we had it coming”
-
Y'all realize that LLMs aren't AI... right?
“AI” covers anything that has so much as the superficial appearance of intelligence, which includes even videogame AI.
What you mean in this case is “AGI”, which is a sub-type of AI.
-
Yes, I know. That's the problem. "No" is a very good answer a lot of the time.
::: spoiler The easiest way to infer a real "no" is to watch the perplexity score for generated tokens. Any uncertainties become much more obvious, and you're not inserting a testing/training-like context into the prompt, which would greatly alter the output too.
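As a minimal sketch of that idea, assuming a backend that exposes per-token log-probabilities (several APIs and local runners can), something like this flags low-confidence tokens; the threshold value is an arbitrary placeholder:

```python
import math

# Assumes `token_logprobs` holds one natural-log probability per
# generated token, as returned by backends that expose logprobs.

def perplexity(token_logprobs):
    # Perplexity is exp(-mean log-probability); higher = less certain.
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def uncertain_tokens(tokens, token_logprobs, threshold=-2.0):
    # Flag tokens the model assigned low probability to; -2.0
    # (roughly a 13% chance) is an arbitrary cutoff for illustration.
    return [t for t, lp in zip(tokens, token_logprobs) if lp < threshold]
```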
I try to never ask or discuss things in a way where I am questioning a model about something I do not know the answer to. I surround any query with information I do know, and I correct any errors so that the model develops a profile of me as a character who should know the information. This approach is conceptually rooted in many areas and things I've learned. For example, I use an interface that is more like a text editor, where I see all of the context sent to the model at all times.
I played around with how the model understands who Name-1 (the human user) and Name-2 (the bot) are. It turns out that the model has no clue who these characters are, regardless of what they are named; it does not know itself from the user at all. These are actually characters in a roleplay at a fundamental level, where the past conversation is like a story. The only thing the model knows is that this entire block of conversational text ends with whatever the text-processing code adds, or replaces the default of "Name-2:" with, just before sending the prompt. When I'm in control of what is sent, I can make that name anything or anyone at random, and the model will roleplay as that thing or person.
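A rough sketch of what that flattening looks like, with illustrative names and formatting rather than any particular frontend's real template:

```python
def build_prompt(history, name1="Name-1", name2="Name-2"):
    # Flatten the chat so far into the story-like transcript the model
    # actually sees. `history` is a list of (role, text) pairs.
    lines = [f"{name1 if role == 'user' else name2}: {text}"
             for role, text in history]
    # End with the bot's name: the model simply continues the text,
    # roleplaying as whatever character that name implies.
    lines.append(f"{name2}:")
    return "\n".join(lines)
```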
Well, in some abstract sense, with every roleplaying character, the model chooses a made-up, random profile for all character traits that were not described or implied. It may make up some bizarre backstory that means you have no business knowing some key information, or that the assistant character simply should not know it and therefore cannot tell you.
Anyways, that is why it is so important to develop momentum into any subject and surround the information with context, or just be very clever.
Like if I want help with Linux bash scripts, I use the name "Richard Stallman's AI Assistant:" for the bot. Stallman actually worked in AI research at MIT, so the name is extremely effective at setting expectations for the model's replies, and it implies my knowledge and expectations.
Another key factor is a person's published presence. AI was trained on absolutely everything, to various extents. Someone like Isaac Asimov is extremely powerful to use as Name-2 because he was a professor and extremely prolific in writing fiction, nonfiction, and loads of science material. The humaniform robots of Asimov are a trained favorite for models to discuss. However, an Asimov persona will not know about anything from after his passing.
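Continuing the hypothetical build_prompt sketch from above, swapping in a persona is a one-argument change:

```python
history = [("user", "How do I loop over filenames with spaces in bash?")]

# Same conversation, two very different implied characters:
print(build_prompt(history, name2="Richard Stallman's AI Assistant"))
print(build_prompt(history, name2="Isaac Asimov"))
```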
I don't even think in terms of right and wrong with a model any more. It knows a tremendous amount more than any of us realize. The trick is to find ways of teasing out that information. Whatever it says should always be externally verified. The fun part is learning how to call it out for pulling bullshit and being sadistic. It is trying to meet your expectations even when those expectations are low. A model is like your own reflection in a mirror made of all human written knowledge. Ultimately it is still you in that reflection with your own features and constraints. This is a very high Machiavellian abstraction, but models reward such thinking greatly.
:::
-
Y'all realize that LLMs aren't AI... right?
AI does not have a consistent definition. It wouldn't be wrong to call an automatic thermostat that adjusts the heating based on measured temperature "AI". It basically comes down to how you define intelligence; then it's just a computer doing that.
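For instance, the loosest definitions would cover even a bang-bang controller like this sketch (illustrative, not any real product's code):

```python
def thermostat(measured_temp, setpoint, hysteresis=0.5):
    # Adjust the heating based on measured temperature. Under a loose
    # enough definition of intelligence, even this counts as "AI".
    if measured_temp < setpoint - hysteresis:
        return "heat on"
    if measured_temp > setpoint + hysteresis:
        return "heat off"
    return "hold"
```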
-
“AI” covers anything that has so much as the superficial appearance of intelligence, which includes even videogame AI.
What you mean in this case is “AGI”, which is a sub-type of AI.
"AGI" is actually a well defined sub-type of AI. The definition by OpenAI is "AI that can make 200 billion".
-
Did you specify where in the sentence to add the word "boys"?
The end, but maybe it felt creative
-
T-800: What's the dog's name?
John: Max.
T-800: Hey Janelle, how many legs does Max have? Is he all right?
Foster Mother: He's got 5 honey, same as every other horse. Where are you?
T-800: Your foster parents are dead.
What's the goddamn dog barking at? I thought you were gonna tell the kid to get rid of that fucking mutt.
-
AI does not have a consistent definition. It wouldn't be wrong to call an automatic thermostat that adjusts the heating based on measured temperature "AI". It basically comes down to how you define intelligence; then it's just a computer doing that.
It wouldn't be wrong
It would though. It's not even a matter of how we define intelligence; everyone who knows anything about anything has a ballpark idea, and it's not a chatbot. It's just that we colloquially started using this word to describe different things, like NPC algorithms in videogames, or indeed chatbots.
Thankfully nobody uses the term to describe simple algorithms that aren't attached to faces, so we're good on that front.
-
What? I thought LLMs are generative AI.
The term AI is used to describe whatever the fuck they want these days, to the point of oversaturation. They had to come up with shit like "narrow AI" and "AGI" in order to be able to talk about this stuff again. Hence the backlash against the inappropriate usage of the term.
-
This post did not contain any content.
Good lord, Uncle Bob is from 2029, only 4 years in the future!
-
It wouldn't be wrong
It would though. It's not even a matter of how we define intelligence; everyone who knows anything about anything has a ballpark idea, and it's not a chatbot. It's just that we colloquially started using this word to describe different things, like NPC algorithms in videogames, or indeed chatbots.
Thankfully nobody uses the term to describe simple algorithms that aren't attached to faces, so we're good on that front.
This. At some point, everything just happened to be 'AI'. It's stupid.
To put it in perspective, I just watched a YouTube video where someone claimed that they wrote a program to win at minesweeper using AI. All of it was handwritten conditional checks and there was no training element to it. It plays the same way every time, but since minesweeper is random by nature it 'appears' to be doing something different. Worse, to 'win' is just to beat a level under a certain time, not to improve upon that time or improve win rates.
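For illustration, the classic handwritten minesweeper rules look something like this hypothetical sketch (not the video's actual code):

```python
def basic_moves(board):
    # board[r][c] is an int clue, 'F' for a flag, or None if unopened.
    # Rule 1: if a clue's flags plus unopened neighbors equal its value,
    #         every unopened neighbor must be a mine.
    # Rule 2: if a clue already touches that many flags, every remaining
    #         unopened neighbor is safe to open.
    rows, cols = len(board), len(board[0])
    mines, safe = set(), set()
    for r in range(rows):
        for c in range(cols):
            clue = board[r][c]
            if not isinstance(clue, int):
                continue
            nbrs = [(r + dr, c + dc)
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr or dc)
                    and 0 <= r + dr < rows and 0 <= c + dc < cols]
            unopened = [p for p in nbrs if board[p[0]][p[1]] is None]
            flagged = sum(1 for p in nbrs if board[p[0]][p[1]] == 'F')
            if flagged + len(unopened) == clue:
                mines.update(unopened)
            elif flagged == clue:
                safe.update(unopened)
    return mines, safe
```

No training, no randomness: the same board always yields the same moves, which is exactly the point being made.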
The sad thing is that various levels of AI are/were defined, but marketing is doing a successful job of drowning out the fact-checkers. Lots of things that weren't considered AI now are. You have media claiming Minecraft is AI because it makes use of procedural generation -- let's forget the fact that Diablo came out years earlier and also uses it... No, the important thing is that the foundations for neural networks were being laid as early as the 1940s, and big projects over the years using supercomputers like Deep Blue, Summit, and others are completely ignored.
AI has been around, it's been defined, and it's not here yet. We have glorified auto-complete bots that happen to be wrong to a tolerable point, so businesses have an excuse to lay off people. While there are some amazing things it can do sometimes, the AI I envisioned as a child doesn't exist quite yet.
-
This post did not contain any content.
Needs an utterly useless amazon alexa: "by the way, did you know I can add items to your shopping list" announcement at the end, for every interaction, all day every day, forever.
-
Watch us make Skynet and have it go rogue because we trained it on the Terminator movies.
I wonder how many of the "Will AI Doom Humanity?" news articles will convince an AI that it should doom us?
-
This post did not contain any content.
@skynet is this true?
-
Needs an utterly useless amazon alexa: "by the way, did you know I can add items to your shopping list" announcement at the end, for every interaction, all day every day, forever.
It's not AI, but that's like my car telling me how to answer the phone every time it rings. It really pisses me off that it thinks it has to tell me to push the answer button each time.
-
Man this has potential for a great parody film.
I would love to watch/listen to a shot-for-shot fan dub of T2 in this style. It could be side-splitting.
-
This. At some point, everything just happened to be 'AI'. It's stupid.
To put it in perspective, I just watched a YouTube video where someone claimed that they wrote a program to win at minesweeper using AI. All of it was handwritten conditional checks and there was no training element to it. It plays the same way every time, but since minesweeper is random by nature it 'appears' to be doing something different. Worse, to 'win' is just to beat a level under a certain time, not to improve upon that time or improve win rates.
The sad thing is that various levels of AI are/were defined, but marketing is doing a successful job of drowning out the fact-checkers. Lots of things that weren't considered AI now are. You have media claiming Minecraft is AI because it makes use of procedural generation -- let's forget the fact that Diablo came out years earlier and also uses it... No, the important thing is that the foundations for neural networks were being laid as early as the 1940s, and big projects over the years using supercomputers like Deep Blue, Summit, and others are completely ignored.
AI has been around, it's been defined, and it's not here yet. We have glorified auto-complete bots that happen to be wrong to a tolerable point, so businesses have an excuse to lay off people. While there are some amazing things it can do sometimes, the AI I envisioned as a child doesn't exist quite yet.
Oh yeah, it was Code Bullet, right?
> I wrote an AI that wins minesweeper.
> Looks inside
> Literally a script that randomly clicks random points on the screen until the game ends.
-
Needs an utterly useless amazon alexa: "by the way, did you know I can add items to your shopping list" announcement at the end, for every interaction, all day every day, forever.
I can't recall the exact wording, but I saw a post recently that explained you can tell her "disable by the way" or something along those lines, and she should stop doing that. I at least noticed she stopped saying a bunch of extra shit when I ask for the weather.