AGI graph slop, wtf does government collapse have to do with AI?
-
The only way to create AGI is by accident. I can’t adequately stress how much we haven’t the first clue how consciousness works (appropriately called The Hard Problem). I don’t mean we’re far, I mean we don’t even have a working theory — just half a dozen untestable (if fascinating) hypotheses. Hell, we can’t even agree on whether insects have emotions (probably not?) let alone explain subjective experience.
Consciousness is entirely overrated; it doesn't mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.
-
Consciousness is entirely overrated; it doesn't mean anything important at all. An AI just needs logic, reasoning, and a goal to effectively change things. Solving consciousness will do nothing of practical value; it will be entirely philosophical.
Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.
-
This post did not contain any content.
Escapes where? There is nowhere to go. There are fucking people everywhere.
-
Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.
A philosophical zombie still gets its work done; I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn't meaningfully different.
-
A philosophical zombie still gets its work done; I fundamentally disagree that this distinction is economically meaningful. A simulation of reasoning isn't meaningfully different.
That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).
Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.
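For reference, the first incompleteness theorem says roughly the following (this is a paraphrase glossing over technical conditions, using Rosser's strengthening):

$$\text{If } T \text{ is a consistent, effectively axiomatized theory containing basic arithmetic, then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \neg G_T.$$

No consistent formal system of this kind proves all arithmetic truths, which is the sense in which we can't even axiomatize mathematics.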
-
Reasoning literally requires consciousness because it’s a fundamentally normative process. What computers do isn’t reasoning. It’s following instructions.
Reasoning is approximated well enough with matrix math and filter algorithms.
It can fly drones, dodge wrenches.
The AGI that escapes won't be the ideal philosopher king; it will be the sociopathic teenage rebel.
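(To make "filter algorithms" concrete: the classic one for flight control is the Kalman filter, which is nothing but matrix math. A minimal sketch; the constant-velocity model, noise values, and sensor readings below are made up for illustration, not taken from any real drone stack.)

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter."""
    # Predict: project the state estimate and its uncertainty forward.
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update: blend the prediction with the new measurement,
    # weighted by their relative uncertainties.
    S = H @ P_pred @ H.T + R              # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)   # Kalman gain
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# Toy example: track position and velocity from noisy position readings.
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])    # constant-velocity motion model
H = np.array([[1.0, 0.0]])               # sensor observes position only
Q = 0.01 * np.eye(2)                     # process noise (made up)
R = np.array([[0.5]])                    # measurement noise (made up)
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.9, 4.2]:           # fake sensor readings
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print(x)                                  # estimated position and velocity
```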
-
Reasoning is approximated well enough with matrix math and filter algorithms.
It can fly drones, dodge wrenches.
The AGI that escapes won't be the ideal philosopher king; it will be the sociopathic teenage rebel.
Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to make one at all, crazy or not.
-
Escapes where? There is nowhere to go. There are fucking people everywhere.
Fucking everywhere legal in 2027? Maybe the future isn't so dark after all.
-
That’s fine, but most people (engaged in this discussion) aren’t interested in an illusion. When they say AGI, they mean an actual mind capable of rationality (which requires sensitivity and responsiveness to reasons).
Calculators, LLMs, and toasters can’t think or understand or reason by definition, because they can only do what they’re told. An AGI would be a construct that can think for itself. Like a human mind, but maybe more powerful. That requires subjective understanding (intuitions) that cannot be programmed. For more details on why, see Gödel's incompleteness theorems. We can’t even axiomatize mathematics, let alone human intuitions about the world at large. Even if it’s possible, we simply don’t know how.
If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity, all while you go "ah, but it's not really reasoning."
What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say "but who would want yet another machine that just does what we say?"
Your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?
-
If it quacks like a duck, it changes the entire global economy and can potentially destroy humanity, all while you go "ah, but it's not really reasoning."
What difference does it make if it can do the same intellectual labor as a human? If I tell it to cure cancer and it does, will you then say "but who would want yet another machine that just does what we say?"
Your point reads like complete pseudointellectual nonsense to me. How is that economically valuable? Why are you asserting most people care about that and not the part where it cures a disease when we ask it to?
A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.
The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)
What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.
In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.
Hope that helps!
-
A malfunctioning nuke can also destroy humanity. So could a toaster, under the right circumstances.
The question is not whether we can create a machine that can destroy humanity. (Yes.) Or cure cancer. (Maybe.) The question is whether we can create a machine that can think. (No.)
What I was discussing earlier in this thread was whether we (scientists) can build an AGI. Not whether we can create something that looks like an AGI, or whether there’s an economic incentive to do so. None of that has any bearing.
In English, the phrase “what most people mean when they say” idiomatically translates to something like “what I and others engaged in this specific discussion mean when we say.” It’s not a claim about how the general population would respond to a poll.
Hope that helps!
If there's no way to tell the illusion from reality, tell me why it matters functionally at all.
What difference does true thought make from the illusion?
Also, AGI means something that can do all economically important labor; it has nothing to do with what you said, and that's not a common definition.
-
Okay, we can create the illusion of thought by executing complicated instructions. But there’s still a difference between a machine that does what it’s told and one that thinks for itself. The fact that it might be crazy is irrelevant, since we don’t know how to make one at all, crazy or not.
Being able to decide its own goals is a completely unimportant aspect of the problem.
Why do you care?
-
I hope we get flying cars from Blade Runner too
"sorry you haven't paid your monthly driver's permit fee"
Car drops out of the sky
-
Being able to decide its own goals is a completely unimportant aspect of the problem.
Why do you care?
The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don’t need to make an AGI. I personally don’t care. The question was can we? The answer is No.
-
Escapes where? There is nowhere to go. There are fucking people everywhere.
wrote on last edited by [email protected]AI will reside in the North of Sweden, as the place with the least amount of humans.
In fact runaway AIs are already making a colony there a not far away from Kungsleden. -
If there's no way to tell the illusion from reality, tell me why it matters functionally at all.
What difference does true thought make from the illusion?
Also, AGI means something that can do all economically important labor; it has nothing to do with what you said, and that's not a common definition.
Matter to whom?
We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).
Most people can’t identify a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter?” It would be weird to enter a mathematical forum and ask “Why does it matter?”
Whether we can build an AGI is just a curious question, whose answer for now is No.
P.S. defining AGI in economic terms is like defining CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.
That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.
-
"sorry you haven't paid your monthly driver's permit fee"
Car drops out of the sky
Filthy corps colluding with the feds
-
The book Scythe had a good portrayal of a sentient AI and its reasons for taking over the government. It's just backstory so I don't think it's spoilers, still gonna tag it.
::: spoiler spoiler
The Thunderhead AI was created to help humans and make them content. It realized pretty quickly that governments ran counter to that idea. So it got rid of all of them. Now it's a utopia. Actual utopia, or as close as you can get: most are content and live their lives enjoying them. The massive problems with the system are due to humans, not the Thunderhead.
:::
-
Lots of science fiction does. I read Metamorphosis of Prime Intellect (full text legally available online) and the collapse of governments was a natural consequence of an all-powerful AI... although that was only possible because of fictional physics, giving you a much-needed reality check.
-
Matter to whom?
We are discussing whether creating an AGI is possible, not whether humans can tell the difference (which is a separate question).
Most people can’t identify a correct mathematical equation from an incorrect one, especially when the solution is irrelevant to their lives. Does that mean that doing mathematics correctly “doesn’t matter?” It would be weird to enter a mathematical forum and ask “Why does it matter?”
Whether we can build an AGI is just a curious question, whose answer for now is No.
P.S. defining AGI in economic terms is like defining CPU in economic terms: pointless. What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.
That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.
Most people can’t identify a correct mathematical equation from an incorrect one
This is irrelevant; we're talking about something where nobody can tell the difference, not where it's merely difficult.
What is “economically important labor”? Arguably the most economically important labor is giving birth, raising your children, and supporting your family. So would an AGI be some sort of inorganic uterus as well as a parent and a lover? Lol.
It means a job. That's obviously not a job and obviously not what is meant; an interesting strategy from someone who just used "what most people mean when they say."
That’s a pretty tall order, if AGI also has to do philosophy, politics, and science. All fields that require the capacity for rational deliberation and independent thought, btw.
It just has to be at least as good as a human at manipulating the world to achieve its goals. I don't know of any other definition of AGI that factors in actually meaningful tasks.
An AGI should be able to do almost any task a human can do at a computer. It doesn't have to be conscious, and I have no idea why or where consciousness factors into the equation.
-
The discussion is over whether we can create an AGI. An AGI is an inorganic mind of some sort. We don’t need to make an AGI. I personally don’t care. The question was can we? The answer is No.
You've arbitrarily defined an AGI by its consciousness instead of its capabilities.