Brian Eno: “The biggest problem about AI is not intrinsic to AI. It’s to do with the fact that it’s owned by the same few people”
-
For some reason the megacorps have got LLMs on the brain, and they're the worst "AI" I've seen. There are other types of AI that are actually impressive, but the "writes a thing that looks like it might be the answer" machine is way less useful than they think it is.
Most LLMs for chat, pictures, and clips are magical and amazing, for about 4-8 hours of fiddling. Then they lose all entertainment value.
As for practical use, the things can't do math, so they're useless at work. I write better emails on my own, so I can't imagine being so lazy and socially inept that I'd need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually click anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.
When I say this, I usually get some angsty reply from a techbro-AI-cultist-singularity-head who starts whinging about how it's reshaped their entire life, but in some deeply niche way that is completely irrelevant to the average working adult.
-
"Biggest" maybe. But it's not the only relevant problem. I think AI is gonna pan out like social media did, which is to say it's gonna be a shit show for society. And that would be the same no matter who owned it.
-
The AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want ("fuck copyright, and especially fuck the natural resources"), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech), and who, like all big-wig biz boys, want all the money.
I don't have a problem with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.
-
And those people want to use AI to extract money and to lay off people in order to make more money.
That’s “guns don’t kill people” logic.
Yeah, the AI absolutely is a problem. For those reasons, along with it being wrong a lot of the time and its ridiculous energy consumption.
Yeah, the AI absolutely is a problem.
AI is not a problem by itself; the problem is that most of the people who make workplace decisions about these things don't understand what they're talking about, and understand even less what the technology is actually capable of.
My impression is that AI now is what blockchain was some years ago: the solution to every problem, which was of course false.
-
Ollama is FOSS; SD has a proprietary but permissive, source-available license, but it is not what most people would associate with "open source".
Fair, it may not be strictly FOSS, but I think my point still stands. If people are worried about AI being owned by "the elite", they can just run Ollama.
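For what "just run Ollama" looks like in practice, here's a minimal sketch of querying a locally running Ollama server over its HTTP API. It assumes you've already installed Ollama and pulled a model (e.g. `ollama pull llama3`); the model name and prompt are placeholders.

```python
# Minimal sketch: query a locally running Ollama server (default port 11434).
# Assumes Ollama is installed and a model has been pulled, e.g. `ollama pull llama3`.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",  # any model you've pulled locally (placeholder)
        "prompt": "Explain what a FOSS license is in one sentence.",
        "stream": False,    # return one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])  # the model's reply, generated on your own hardware
```

Everything runs on your own machine; no cloud account or API key involved, which is the whole point.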
-
More broadly, I would expect UBI to trigger a golden age of invention and artistic creation because a lot of people would love to spend their time just creating new stuff without the need to monetise it but can't under the current system.
I don't know nearly enough history to be an expert on this subject, but I've heard that one of the causes of the Enlightenment was that peasants and the poor could afford to spend time learning and creating rather than subsistence farming.
-
Most LLMs for chat, pictures, and clips are magical and amazing, for about 4-8 hours of fiddling. Then they lose all entertainment value.
As for practical use, the things can't do math, so they're useless at work. I write better emails on my own, so I can't imagine being so lazy and socially inept that I'd need help writing an email asking for tech support or outlining an audit report. Sometimes the web summaries save me from clicking a result, but I usually click anyway because the things are so prone to very convincing hallucinations. So yeah, utterly useless in their current state.
When I say this, I usually get some angsty reply from a techbro-AI-cultist-singularity-head who starts whinging about how it's reshaped their entire life, but in some deeply niche way that is completely irrelevant to the average working adult.
The delusional maniacs are going to be surprised when they ask the Super AI "how do we solve global warming?" and the answer is "build lots of solar, wind, and storage, and change infrastructure in cities to support walking, biking, and public transportation".
-
I'm aware of this, but it's still mostly just something for people to speculate on. Something people buy, sit on, and then hopefully sell at a profit.
Bitcoin was supposed to be a decentralized money alternative, but the number of people actually buying things with crypto is negligible.
And honestly, even if it were actually used for that, the power consumption would still be something to discuss.
Yes, most people buy, sit on, and then hopefully sell at a profit.
However, there are a large number of devs building useful things (supply chain, money transfer, digital identity). Most are as good as, but not yet better than, incumbent solutions.
My main challenge is the energy misconception. Since the switch to proof of stake, the Ethereum network runs on the energy equivalent of a single wind turbine.
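As a rough sanity check on that claim, with assumed ballpark figures (post-merge network draw estimates of around 2-3 MW continuous; a ~3 MW onshore turbine at a ~35% capacity factor):

```python
# Rough sanity check of the "single wind turbine" claim.
# Both inputs are ballpark assumptions, not measured values.
eth_network_mw = 2.6          # assumed continuous draw of post-merge Ethereum
turbine_nameplate_mw = 3.0    # assumed nameplate rating of one onshore turbine
capacity_factor = 0.35        # typical share of nameplate actually delivered

turbine_avg_mw = turbine_nameplate_mw * capacity_factor  # ~1 MW average output
turbines_needed = eth_network_mw / turbine_avg_mw

print(f"Average turbine output: {turbine_avg_mw:.2f} MW")
print(f"Turbines needed: {turbines_needed:.1f}")  # ~2-3, same order of magnitude
```

So "a single turbine" is at worst off by a small factor, and either way it's orders of magnitude below proof-of-work mining.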
-
"Biggest" maybe. But it's not the only relevant problem. I think AI is gonna pan out like social media did, which is to say it's gonna be a shit show for society. And that would be the same no matter who owned it.
Both AI and social media are a shit show because they're owned by a few people.
Unironically, the best social media is FetLife. Not that it's perfect by any means, far from it, but it is designed to facilitate bringing people together.
-
Two intrinsic problems with the current implementations of AI are that they are insanely resource-intensive and require huge training sets. Neither of those is directly a problem of ownership or control, though both favor larger players with more money.
If gigantic amounts of capital weren't available, then the focus would be on improving the models so they don't need GPU farms running off nuclear reactors plus the sum total of all posts on the Internet ever.
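To put a rough number on "insanely resource-intensive": a common back-of-envelope rule puts training compute at about 6 FLOPs per parameter per training token. A quick sketch with purely illustrative figures for a large model:

```python
# Back-of-envelope training cost using the common ~6 * params * tokens FLOPs rule.
# All figures below are illustrative assumptions, not measurements of any real run.
params = 70e9            # assumed model size: 70B parameters
tokens = 15e12           # assumed training set: 15T tokens
flops = 6 * params * tokens                  # ~6.3e24 FLOPs total

gpu_flops_per_s = 4e14   # assumed sustained throughput of one modern accelerator
gpu_seconds = flops / gpu_flops_per_s
gpu_years = gpu_seconds / (3600 * 24 * 365)

print(f"Total training compute: {flops:.1e} FLOPs")
print(f"Single-GPU time: ~{gpu_years:.0f} GPU-years")  # hence thousands of GPUs for months
```

Which is why frontier training runs happen on clusters of thousands of accelerators drawing megawatts: exactly the capital barrier described above.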
-
I don't really agree that this is the biggest issue; for me, the biggest issue is power consumption.
Large power consumption only happens because someone is willing to dump lots of capital into it so they can own it.
-
Large power consumption only happens because someone is willing to dump lots of capital into it so they can own it.
Oh you're right, let me just tally up all the days where that isn't the case...
carry the 2...
don't forget weekends and holidays...
Oh! It's every single day. It's just an always and forever problem. Neat.
-
Sure, but despite all the crypto bros' assurances to the contrary, the only real-world applications for it are buying drugs, paying ransoms, and getting scammed. Which means that any non-zero amount of energy is too much energy.
There are some use cases below, and none use proof of work.
https://hbr.org/2022/01/how-walmart-canada-uses-blockchain-to-solve-supply-chain-challenges
-
Oh you're right, let me just tally up all the days where that isn't the case...
carry the 2...
don't forget weekends and holidays...
Oh! It's every single day. It's just an always and forever problem. Neat.
It's nothing of the sort. If nobody had the capital to scale it through more power, then the research would be more focused on making it efficient.
-
Dunno. I've tried the generative-music part (not the LLM kind), and I think if I spent a few more years of weekly migraines on it, I'd get better.
You mean in the same way that learning an instrument takes time and dedication?
-
The AI business is owned by a tiny group of technobros who have no concern for what they have to do to get the results they want ("fuck copyright, and especially fuck the natural resources"), who want to be personally seen as the saviours of humanity (despite not being the ones who invented and implemented the actual tech), and who, like all big-wig biz boys, want all the money.
I don't have a problem with AI tech in principle, but I hate the current business direction and what the AI business encourages people to do and use the tech for.
Well, I'm on board with "fuck intellectual property". If OpenAI doesn't publish the weights, then all their datacenters get visited by the killdozer.
-
100%. People treat AI like some all-knowing god figure. It can and will be manipulated just like every other social media site or search engine.
That's why we need the weights, right now! Before they figure out how to do this. It will happen, but at least we can prevent backsliding from what we have now.
-
...in the same way that someone who's read a lot of books can make money by writing their own.
I hate to be the one to break it to you, but AIs aren't actually people. Companies claiming that they're "this close to AGI" doesn't make it true.
The human brain is an exception to copyright law. Outsourcing your thinking to a machine that doesn't actually think makes this something different, and it should therefore be treated differently.
-
No, but humans have differences in scale also. Should a person gifted with hyper-fast reading and writing ability be given less opportunity than a writer who takes a year to read a book and a decade to write one? Imo if the argument comes down to scale, it's kind of a shitty argument. Is the underlying principle faulty or not?
Part of my point is that a lot of everyday rules do break down at large scale. Like, 'drink water' is good advice - but a person can still die from drinking too much water. And having a few people go for a walk through a forest is nice, but having a million people go for a walk through a forest is bad. And using a couple of quotes from different sources to write an article for a website is good; but using thousands of quotes in an automated method doesn't really feel like the same thing any more.
That's what I'm saying. A person can't physically read billions of books, or do the statistical work to put them together to create a new piece of work from them. And since a person cannot do that, no law or existing rule currently takes that possibility into account. So I don't think we can really say that a person is 'allowed to' do that. Rather, it's just an undefined area. A person simply cannot physically do it, and so the rules don't have to consider it. On the other hand, computer systems can now do it. And so rather than pointing to old laws, we have to decide as a society whether we think that's something we are ok with.
I don't know what the 'best' answer is, but I do think we should at least stop to think about it carefully; because there are some clear downsides that need to be considered - and probably a lot of effects that aren't as obvious which should also be considered!
-
Reminds me of "Biotech Is Godzilla". The Sepultura version, of course.