What's Lemmy's opinion on AI in the fediverse?
-
I meant that in the future they'll implement it; I haven't found one yet. Also, corporations will enter the fediverse, and they might not explicitly say they're a corporation.
Ooh, I thought you were saying you currently know of one that will implement them, but you meant that eventually an instance will implement algos.
Algorithms aren't always bad. I think the biggest problem with Lemmy is that it doesn't have good algorithms or search/indexing.
-
I already made some people mad by suggesting that I would make my computer run an Ollama model. I suggested that they make a counter-AI bot to find accounts that don't disclose they're bots. What's Lemmy's opinion of AI coming into the fediverse?
Has AI improved where it has been implemented? I don't think it has.
-
Has AI improved where it has been implemented? I don't think it has.
I'd say yes. It's hard to program language processing by hand, plus it helps you get information pretty fast.
-
I already made some people mad by suggesting that I would make my computer run an Ollama model. I suggested that they make a counter-AI bot to find accounts that don't disclose they're bots. What's Lemmy's opinion of AI coming into the fediverse?
If it added value then I wouldn't be opposed. But I don't see what value AI could possibly add to a social network. Some specific fields, like researchers combing through large data sets, have benefitted from AI. Every other place it's been shoehorned into has suffered for it.
If you see a problem and realize AI could address it, then that's fantastic. If you're coming at it from the other direction and looking for problems then you're going to waste everyone's time.
-
If it added value then I wouldn't be opposed. But I don't see what value AI could possibly add to a social network. Some specific fields, like researchers combing through large data sets, have benefitted from AI. Every other place it's been shoehorned into has suffered for it.
If you see a problem and realize AI could address it, then that's fantastic. If you're coming at it from the other direction and looking for problems then you're going to waste everyone's time.
AI actually makes it so computers can process language. I had two problems: one is tracking police based on where they are, and the other is detecting whether a post is a live stream post. It's hard to program abstract concepts like that, so you get the LLM to make the determination. Beats going through all the data yourself and figuring out the edge cases.
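A minimal sketch of that LLM-as-classifier idea, assuming a local Ollama server on its default port; the model name, prompt wording, and example post are placeholders, not anything the thread specifies:

```python
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"  # default local Ollama endpoint
MODEL = "llama3"  # placeholder; use whatever model you've pulled locally

def is_livestream_post(title: str, body: str) -> bool:
    """Ask a local LLM to decide whether a post is announcing a live stream."""
    prompt = (
        "Answer with only YES or NO.\n"
        "Is the following post announcing or linking to a live stream?\n\n"
        f"Title: {title}\nBody: {body}"
    )
    resp = requests.post(
        OLLAMA_URL,
        json={"model": MODEL, "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    answer = resp.json()["response"].strip().upper()
    return answer.startswith("YES")

if __name__ == "__main__":
    print(is_livestream_post("Going live in 10 minutes!", "Come hang out on my stream."))
```

You'd still want to spot-check its calls on real posts, since the model's YES/NO judgment is only as good as the prompt and the model.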
-
I stopped caring what other people think. Especially when they can't say why they're mad
I think they're mad that you want to make a spam bot?
-
I think they're mad that you want to make a spam bot?
In this case that's probably it, but I mean in general. If you calmly ask an angry person whether alternative X is okay and they berate you, accuse you, or talk in a way that makes no sense, then you just ignore them.
-
In general, if it isn't open source in every sense of the term, GPL license, all weights and parts of the model, and all the training data and training methods, it's a non-starter for me.
I'm not even interested in talking about AI integration unless it passes those initial requirements.
Scraping millions of people's data and content without their knowledge or consent is morally dubious already.
Taking that data and using it to train proprietary models with secret methodologies, locking it behind a pay wall, then forcing it back onto consumers regardless of what they want in order to artificially boost their stock price and make a handful of people disgustingly wealthy is downright demonic.
Especially because it does almost nothing to enrich our lives. In its current form, it is an anti-human technology.
Now all that being said, if you want to run it totally on your own hardware, to play with and help you with your own tasks, that's your choice. Using it in a way that you have total sovereignty over is good.
There are totally open efforts like IBM Granite. Not sure what is SOTA these days.
There are some diffusion models like that too.
Problem is there’s a performance cost, and since LLMs are so finicky and hard to run, they’re not very popular so far.
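For anyone who wants to try the fully local route, here's a rough sketch using Hugging Face transformers; the Granite model ID is an assumption (check the ibm-granite org on Hugging Face for current releases), and the smaller models will still be slow on CPU:

```python
# pip install transformers torch
from transformers import pipeline

# Model ID is an assumption; see https://huggingface.co/ibm-granite for current releases.
generator = pipeline("text-generation", model="ibm-granite/granite-3.0-2b-instruct")

result = generator(
    "Explain in one sentence why open model weights matter:",
    max_new_tokens=60,
    do_sample=False,  # deterministic output for a quick sanity check
)
print(result[0]["generated_text"])
```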
-
I already made some people mad by suggesting that I would I would make by computer run an ollama model. I suggested that they make a counter AI bot to find these accounts that don't disclose they're bots. What's lemmy opinion of Ai coming into fediverse?
I’m sympathetic.
But… What exactly would you use them for? Spam detection would be quite expensive, and in other cases it's basically a writing assistant for a human response.
-
I’m sympathetic.
But… What exactly would you use them for? Spam detection would be quite expensive, and in other cases it's basically a writing assistant for a human response.
If you're talking about counter-AI measures, I'm curious whether they exist, and I want to implement them in a bot that makes human-like responses. As for the AI, I'm curious whether it can pass the Turing test.
-