Any Day Now
-
Listen to yourself. Thinking I'd start the reduction with unemployed people, as if there isn't another class of people who do less for society. Try to think outside of the boomer education you received and those indoctrination Sundays.
Ok, so you are advocating for mass murder, is that right?
-
You can turn off AI/datacenters. If you try to turn off humans on a large scale, it's called genocide.
If you try to turn off humans on a large scale, it's called genocide.
No it's called being ugly.
-
No I just want less people and less a.i. lol
Less capitalism and bullshit projects like water intensive crops in deserts.
-
My monkey brain keeps hearing about non-linear progress, and my thinking keeps getting stuck here:
Besides that, since you insist on being fearful: why AI of all things and not a handful of rich assholes who actually make our lives hard every damn day?
I don't think you understand how dangerous a system is that can correlate every factor of every human's past activity and use it to create manipulative profiles for every human who has ever logged in to anything.
-
Yes, it's not linear. The progress of GenAI in the past 2 years is logarithmic at best if you compare it with the boom of 2019-2023 (GPT-2 to GPT-4 in text, DALL-E 1 to 3 in images). The big companies trained their networks on all of the internet and ran out of training data; if you compare GPT-4 to GPT-5 it's pretty obvious. Unless there's a significant algorithmic breakthrough (which is looking less and less likely), at least text-based AI is not going to have another order-of-magnitude improvement for a long time. Sure, it can already replace like 10% of devs who are doing boring JS stuff, but replacing at least half of the dev workforce is a pipe dream of the C-suite for now.
Up until last week I worked for a stupidly big consumer data company, and our in-house AI tools were not LLMs; they only used an LLM as a secondary interface. And let me tell you, none of you are ready for this.
The problem with current LLMs is confabulation, and it is not solvable; it's inherent in what an LLM is. But the results I was generating were not from publicly available LLMs or LLM services. They came from expert systems trained only on the pertinent datasets. These do not confabulate, because they are not word-guessing algorithms.
Think of it like wolfram alpha for human behavior
People look at LLMs as the public face of AI but they aren't even close to the most important.
-
I don't think you understand how dangerous a system is that can correlate every factor of every human's past activity and use it to create manipulative profiles for every human who has ever logged in to anything.
Ooh, scary once again. First, let me give you some credit and take the description at face value:
How is this omnipotent system going to be created? By humans, who err? By current LLMs, which dream up names of libraries and functions? And most importantly, how is it going to become capable of manipulating "anyone to do anything" when even "I" do not always know what it would take for me to do some arbitrary X? That's true for almost all humans, save sages/buddhas etc. (can't deny they are possible, so count them as existing).
Your proposed threat looks like a conspiracy theory. Some conspiracy theories have turned out to be true, but that's no reason to believe any given one.
-
imagine AI replacing investors and CEOs
-
You can turn off AI/datacenters. If you try to turn off humans on a large scale, it's called genocide.
Less people is less data; less data is less data centers. You're just treating the symptoms. Restricted breeding solves it.
-
Ok, so you are advocating for mass murder, is that right?
I wouldn't call it murder.
-
imagine AI replacing investors and CEOs
Random machines replacing random machines