Brian Eno: “The biggest problem about AI is not intrinsic to AI. It’s to do with the fact that it’s owned by the same few people”
-
Idk if it’s the biggest problem, but it’s probably top three.
Other problems could include:
- Power usage
- Adding noise to our communication channels
- AGI fears if you buy that (I don’t personally)
Power usage
I'm generally a huge eco guy, but on power usage in particular I view this largely as a government failure. We have had access to incredible energy resources that the government has chosen not to develop or has effectively dismantled.
It reminds me a lot of how recycling has been pushed so hard onto the general public instead of government laws on plastic usage and waste disposal.
It's always easier to wave your hands and blame "society" than it is to hold the actual wealthy and powerful accountable.
-
I see the "AI is using up massive amounts of water" being proclaimed everywhere lately, however I do not understand it, do you have a source?
My understanding is this probably stems from people misunderstanding data center cooling systems. Most of these systems are closed loop so everything will be reused. It makes no sense to "burn off" water for cooling.
-
This post did not contain any content.
wrong. it's that it's not intelligent. if it's not intelligent, nothing it says is of value. and it has no thoughts, feelings or intent. therefore it can't be artistic. nothing it "makes" is of value either.
-
No, Brian Eno, there are many open LLMs already. The problem is people like you who have accumulated too much and now control all the markets/platforms/media.
Totally right that there are already super impressive open source AI projects.
But Eno doesn't control diddly, and it's odd that you think he does. And I assume he is decently well off, but I doubt he is super rich by most people's standards.
-
This post did not contain any content.
Either the article editing was horrible, or Eno is wildly uninformed about the world. Creation of AIs is NOT the same as social media. You can't blame a hammer for some evil person using it to hit someone in the head, and there is more to 'hammers' than just assaulting people.
-
I see the "AI is using up massive amounts of water" being proclaimed everywhere lately, however I do not understand it, do you have a source?
My understanding is this probably stems from people misunderstanding data center cooling systems. Most of these systems are closed loop so everything will be reused. It makes no sense to "burn off" water for cooling.
data centers are mainly air-cooled, and two innovations contribute to the water waste.
the first one was "free cooling", where instead of using a heat exchanger loop you just blow (filtered) outside air directly over the servers and out again, meaning you don't have to "get rid" of waste heat, you just blow it right out.
the second one was increasing the moisture content of the incoming air with what are basically giant carburettors in the air stream. the wetter the air, the more heat it can carry away from the servers.
so basically we now have data centers designed like cloud machines.
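To put a rough number on why evaporative cooling consumes water, here is a back-of-envelope sketch. The assumptions are illustrative, not figures from any real data center: all waste heat is rejected by evaporating water, using the latent heat of vaporization of water (about 2.26 MJ/kg).

```python
# Back-of-envelope estimate of water consumed by evaporative cooling.
# Assumptions (illustrative only):
#   - all waste heat is removed by evaporating water
#   - latent heat of vaporization of water ~ 2.26 MJ/kg
#   - 1 kg of water ~ 1 litre

LATENT_HEAT_MJ_PER_KG = 2.26  # energy absorbed per kg of water evaporated
MJ_PER_KWH = 3.6              # 1 kWh = 3.6 MJ

def water_litres_per_kwh(latent_heat=LATENT_HEAT_MJ_PER_KG):
    """Litres of water evaporated to absorb 1 kWh of waste heat."""
    return MJ_PER_KWH / latent_heat

print(f"{water_litres_per_kwh():.2f} L of water per kWh of heat rejected")
```

Under these idealized assumptions it works out to roughly 1.6 L per kWh, so a facility drawing megawatts continuously can evaporate a lot of water. The water isn't destroyed, but it leaves as vapor rather than circulating in a closed loop, which is why "closed loop, everything is reused" doesn't describe these designs.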
-
This post did not contain any content.
AI will become one of the most important inventions humankind has ever made. Apply it to healthcare, science, and finance, and the world will become a better place, especially in healthcare.
Hey artists, writers: you cannot stop intellectual evolution. AI is here to stay. All we need is a proven way to differentiate real art from AI art. An invisible watermark that can be scanned to reveal its true "raison d'être".
Sorry for going off topic, but I agree that AI should be more open to verification for using copyrighted material. Don't expect compensation though.
-
So far, the result seems to be "it's okay when they do it"
Yeah... Nothing to see here, people, go home, work harder, exercise, and don't forget to eat your vegetables. Of course, family first and god bless you.
-
I wasn't being pedantic. It's a very fucking important distinction.
If you want to say "unethical" you say that. Law is an orthogonal concept to ethics. As anyone who's studied the history of racism and sexism would understand.
Furthermore, it's not clear that what Meta did actually was unethical. Ethics is all about how human behavior impacts other humans (or other animals). If a behavior has a direct negative impact that's considered unethical. If it has no impact or positive impact that's an ethical behavior.
What impact did OpenAI, Meta, et al have when they downloaded these copyrighted works? They were not read by humans--they were read by machines.
From an ethics standpoint that behavior is moot. It's the ethical equivalent of trying to measure the environmental impact of a bit traveling across a wire. You can go deep down the rabbit hole and calculate the damage caused by mining copper and laying cables but that's largely a waste of time because it completely loses the narrative that copying a billion books/images/whatever into a machine somehow negatively impacts humans.
It is not the copying of this information that matters. It's the impact of the technologies they're creating with it!
That's why I think it's very important to point out that copyright violation isn't the problem in these threads. It's a path that leads nowhere.
Just so you know, still pedantic.
-
Idk if it’s the biggest problem, but it’s probably top three.
Other problems could include:
- Power usage
- Adding noise to our communication channels
- AGI fears if you buy that (I don’t personally)
Could also put up:
- Massive numbers of people are exploited in order to train various AI systems.
- Machine learning apps that create text or images from prompts are supposed to be supplementary but businesses are actively trying to replace their workers with this software.
- Machine learning image generation currently has diminishing returns for training as we pump exponentially more content into them.
- Machine learning text and image generated content self-poisons its generator's sample pool, greatly diminishing the ability of these systems to learn from real-world content.
There's actually a much longer list if we expand to other AI systems, like the robot systems we're currently training for use in automated warfare. There's also the angle of these image and text generation systems being used for political manipulation and scams. There are a lot of terrible problems created by this tech.
-
The issue I see is that they are using the copyrighted data, then making money off that data.
...in the same way that someone who's read a lot of books can make money by writing their own.
-
This is an interesting argument that I've never heard before. Isn't the question more about whether AI-generated art counts as a "derivative work", though? I don't use AI at all, but from what I've read, they can generate work that includes watermarks from the source data. Would that not strongly imply that these are derivative works?
If you studied loads of classic art then started making your own would that be a derivative work? Because that's how AI works.
The presence of watermarks in output images is just a side effect of the prompt and its similarity to training data. If you ask for a picture of an Olympic swimmer wearing a purple bathing suit and it turns out that only a hundred or so images in the training match that sort of image--and most of them included a watermark--you can end up with a kinda-sorta similar watermark in the output.
It is absolutely 100% evidence that they used watermarked images in their training. Is that a problem, though? I wouldn't think so since they're not distributing those exact images. Just images that are "kinda sorta" similar.
If you try to get an AI to output an image that matches someone else's image nearly exactly... is that the fault of the AI or the end user, specifically asking for something that would violate another's copyright (with a derivative work)?
-
Oh, and it also hallucinates.
Oh, and people believe the hallucinations.
-
This post did not contain any content.
like most of the money
-
This post did not contain any content.
The biggest problem with AI is the damage it’s doing to human culture.
-
Either the article editing was horrible, or Eno is wildly uninformed about the world. Creation of AIs is NOT the same as social media. You can't blame a hammer for some evil person using it to hit someone in the head, and there is more to 'hammers' than just assaulting people.
Eno does strike me as the kind of person who could use AI effectively as a tool for making music. I don’t think he’s team “just generate music with a single prompt and dump it onto YouTube” (AI has ruined study lo fi channels) - the stuff at the end about distortion is what he’s interested in experimenting with. That is a possibility, even if in effect all that’s going to happen is music execs thinking they can replace songwriters and musicians with “hey siri, generate a pop song with a catchy chorus” while talentless hacks inundate YouTube and bandcamp with shit.
-
The problem with AI is that it pirates everyone’s work and then repackages it as its own and enriches the people that did not create the copyrighted work.
This is where "universal basic income" comes into play
-
Like Sam Altman who invests in Prospera, a private "Start-up City" in Honduras where the board of directors pick and choose which laws apply to them!
The switch to Techno-Feudalism is progressing far too much for my liking.
Techno-Feudalism
I'll say it, yet again. It's just feudalism. "Techno-Feudalism" has nothing different enough to it to differentiate it as even a sub-type of feudalism. It's just the same thing all over again, using technological advances to improve the ability to monitor and impose control over the populace. Historical feudalists also leveraged technology to cement their rule (plate armor, cavalry, crossbows, cannon, mills, control of literacy, etc).
-
The problem with AI is that it pirates everyone’s work and then repackages it as its own and enriches the people that did not create the copyrighted work.
That's what all artists have done since the dawn of time.