Why is Big Tech hellbent on making AI opt-out? | As Microsoft, Apple, and Google switch the tech on by default, what happened to asking for permission first?
-
[email protected] replied to [email protected]
Use it and then explain how much of a waste of time it was to get the wrong results.
-
[email protected] replied to [email protected]
Odds on the opt out actually opting you out instead of pretending to?
-
[email protected] replied to [email protected]
Correct. It's about metrics. They're making AI opt-out because they desperately need to pump user-engagement numbers, even if those numbers don't mean anything.
It's all for the shareholders. Big tech has been, for a while now, chasing a new avenue for meteoric growth, because that's what investors have come to expect. So they went all in on AI, to the tune of billions upon billions, and came crashing, hard, into the reality that consumers don't need it and enterprise can't use it;
Transformer models have two fatal flaws: the hallucination problem - to which there is still no solution - makes them unsuitable for enterprise applications, and their cost per operation makes them unaffordable for retail customer applications (i.e., a chatbot that gives you synonyms while you write is the sort of thing people will happily use, but won't pay $40 a month for).
So now the C-suites are standing over the edge of the burning trash fire they pushed all that money into, knowing that at any moment their shareholders are going to wake up and shove them into it too. They've got to come up with some kind of proof that this investment is paying off. They can't find that proof in sales, because no one is buying, so instead they're going to use "engagement"; shove AI into everything, to the point where people basically wind up using it by accident, then use those metrics to claim that everyone loves it. And then pray to God that one of those two fatal flaws will be solved in time to make their investments pay off in the long run.
-
[email protected] replied to [email protected]
No, that just plays into their hands. If you complain that it sucks, you're just "using it wrong".
It's better to not use it at all so they end up with dogshit engagement metrics, and the exec who approved the spend has to explain to the board why they wasted so much money on something their employees clearly don't want or need to use.
Remember, they won't show the complaints, just the numbers, so those numbers have to suck if you really want the message to get through.
-
[email protected] replied to [email protected]
This!
-
[email protected] replied to [email protected]
"coPilot, what is the Sunk Cost Fallacy?"
-
[email protected] replied to [email protected]
lmao my workplace encourages use / exploration of LLMs when useful, but that's stupid
-
[email protected] replied to [email protected]
Because they are trash that wish to force their garbage on us. Next question.
-
[email protected] replied to [email protected]
They have to put it into everything and get people and apps to depend on it before the AI bubble pops, so that after the pop it's too difficult to remove or break the dependency. As long as it's in there, they can charge subscriptions and fees for it.
-
[email protected] replied to [email protected]
lmao when have tech companies ever given a semblance of a shit about consent. It’s an industry that has deep roots in misogynistic nerds and drunk frat bros
-
[email protected] replied to [email protected]
I heard Windows was adding a "snapshot" feature and guessed that's what they were doing, but uhh.. is there really "AI Screens" hardware doing this too?
-
[email protected] replied to [email protected]
The reason is probably that the indexing takes a lot of time. You can't just turn AI on and expect it to work; you need to turn it on and wait for it to dig through all your data and index it for fast retrieval.
-
[email protected] replied to [email protected]
what happened to asking permission first?
Masculine energy, I guess.
-
[email protected] replied to [email protected]
Data collection. And hoping to stumble upon consistently useful use cases yet-unknown.
-
[email protected] replied to [email protected]
AI is data hungry. If they opt you in, they also get more data from you automatically.
Even if everyone opts out on day 2, they get a crap ton of free stuff in the meantime.
-
[email protected] replied to [email protected]
Because very few people are ever going to turn an optional feature on, whether they would ultimately like it or not. They need to be shown it. If they hate it, they will turn it off.
-
[email protected] replied to [email protected]
They can be offered the choice.
Opt-out is always bad because it is meant to exploit users who aren't aware that a certain feature can be turned off. And even among those who are, not everyone is confident digging through settings.
-
[email protected] replied to [email protected]
Just because you brought up Copilot, I think people need to see this
-
[email protected] replied to [email protected]
Not yet on PC, but the iPhone 16 already has a chip for it, and the new Copilot-certified laptops will have it as well. macOS already has the capability with the daemon mentioned in the video.
The era of surveillance is making huge progress every week.