OpenAI declares AI race “over” if training on copyrighted works isn’t fair use
-
In all of your replies, however, you fail to provide a single example. Are they writing code for you, or creating shitty art for you?
I have used them in a wide variety of ways, from general knowledge seeking to specific knowledge seeking, writing code, and generating audio, images, and video. I use them most days, if not essentially every day. What examples would you like me to provide? Tell me and I will provide them.
-
I've used them and have yet to get a fully correct result on anything I've asked beyond the absolute basics. I always have to go in and correct some aspect of whatever it shits out. Scraping every bit of data they can get their hands on is only making the problem worse.
-
Who are these international and ideological competitors? What happens if they develop it further than for-profit corporations do?
To me it seems like for-profit corporations themselves are our international and ideological adversaries.
-
I fail to see the significance of not being dominant in bullshit generation, which is OpenAI's specialty.
Non-LLM machine learning is more interesting, but "write me a poem about how you're my loving ai waifu" is just not a strategic resource.
-
I have conflicting feelings about this whole thing. If you are selling the result of training like OpenAI does (and every other company), then I feel like it’s absolutely and clearly not fair use. It’s just theft with extra steps.
On the other hand, what about open source projects and individuals who aren’t selling or competing with the owners of the training material? I feel like that would be fair use.
What keeps me up at night is if training is never fair use, then the natural result is that AI becomes monopolized by big companies with deep pockets who can pay for an infinite amount of random content licensing, and then we are all forever at their mercy for this entire branch of technology.
The practical, socioeconomic, and ethical considerations are really complex, but all I ever see discussed are these hard-line binary stances that would only have awful corporate-empowering consequences, either because they can steal content freely or because they are the only ones that will have the resources to control the technology.
Japan already passed a law that explicitly allows training on copyrighted material. And many other countries just wouldn’t care. So if it becomes a real problem the companies will just move.
I think they need to figure out a middle ground where we can extract value from the for profit AI companies but not actually restrict the competition.
-
Technological advances are supposed to improve people's lives. Allow them to work less and enjoy things more often.
It's why we invented the wheel. It's why we invented better weapons to hunt with.
"Tech for tech's sake" is enjoying the technology and ignoring its impact on people's lives.
When a society creates a massive body of information accessible to all, trains new technology on data created by that society, and then a small subset of that society steals that data and uses it to profit themselves and themselves alone, I don't know what else to call that but exploitation.
Advances in AI should make our lives better. Not worse. Because of our economic model, we have decided that technological advances no longer benefit everyone, but hurt a majority of the population for the profits of a few.
-
The fact that you can't distinguish between being against something vs. being against a double-standard is insane to me.
-
This is basically a veiled admission that OpenAI are falling behind in the very arms race they started. Good, fuck Altman. We need less ultra-corpo tech bro bullshit in prevailing technology.
-
I'm not an American, but losing in that area internationally might be way worse than fighting over training data.
Maybe not paying full copyright rates, but I agree they should compensate the IP holders.
I don't believe there's a future in AI at all.
-
It is the shittiest tech. If you think this bullshit will actually lead to AGI, something that wouldn't be shit, you don't know much about LLMs or are incredibly delusional.
-
So you believe there is no protection for creators at all and removing copyright will help them?
I believe that the protection copyright provides is proportionate to how much you can spend on lawyers. So, no protection for the smallest creators, and little protection for smaller creators against larger corporations.
I support extreme copyright reform, though I doubt it should be completely removed.
-
No amigo, it's not fair if you're profiting from it in the long run.
-
Good. I hope this is what happens.
- LLM algorithms can be maintained and sold to corpos to scrape their own data so they can use them for in house tools, or re-sell them to their own clients.
- Open Source LLMs can be made available for end users to do the same with their own data, or scrape what's available in the public domain for whatever they want, so long as they don't re-sell it
- Altman can go fuck himself
-
These fuckers are the first ones to send tons of lawyers after you whenever you republish or use any of their IP. Fuck these idiots.
-
Do you promise?!?!
-
I understand your frustration, but it's a necessary thing we must do. Because if it's not us, well then it will be someone else and that could literally be devastating.
Who is "us", in this scenario?
-
The billionaires are the ones with the resources to develop this tech. We could nationalize it, but then people would complain about that too for different reasons.
Having unchecked, unelected power to allocate resources and change the rules doesn't sit well with most people, I'd say.