Google’s ‘Secret’ Update Scans All Your Photos
-
GrapheneOS folks have a real love for the word "misinformation". That's not you under there, Daniel, is it?
-
Why do you need machine learning for detecting scams?
Is someone in 2025 trying to help you out of the goodness of their heart? No. Move on.
-
Yes, absolutely, and regularly, and without shame.
But not usually about technical stuff.
-
What's over-engineered about it?
-
Because that's where I got the info from first? Grow up
-
There's another one mentioned in the comments
-
If you want to talk money, then it's in businesses' best interest that their users' money gets spent on their products, not scammed away through the use of their products.
Second, machine learning can detect patterns in ways a human can't. In some circles I've read that the programmers themselves can't trace in the code how the end result gets spat out, only that the inputs guide it. And while scammers can circumvent any carefully laid-down anti-spam, anti-scam, or anti-virus rules in traditional software, a learning algorithm will be orders of magnitude harder to bypass. Or easier. Depends on the algorithm.
-
Carriers don't care. They're selling your data, and they don't care how it's used. Google is selling you a phone. Apple held the market for a long time by being the phone with some of the best security. As an Android user, that makes me want to switch phones, not carriers.
-
not only refused to restore the account, but still insisted he was a pedophile producing child pornography, despite the cops and doctors and every other authority involved insisting he wasn't and that the images were medically necessary, and refused to even let him get a backup of all his family pictures, emails, etc.
-
So, kinda like free anti-malware software that just scans without doing anything to solve the problem.
-
I don't know the point of the first paragraph... scams are bad? Yes? Does anyone not agree? (I guess scammers.)
For the second, we're talking in the wild abstract, so I feel comfortable pointing out that every automated system humanity has come up with so far has pulled in our own biases, and since AI models are trained by us, this should be no different. Second, if the models are fallible, you can't talk about success without talking about false positives. I don't care if it blocks every scammer out there if it also blocks a message from my doctor. Until we have data on how well these new algorithms agree with the desired outcomes, it's pointless to claim they're better at X.
-
And you’ll again inconvenience a human slightly as they look at a pixelated copy of a picture of a cat or some noise.
No cops are called, no accounts closed
-
That's what you don't use, which wasn't what they asked, right?
-
Have you even read the article you posted? It mentions these posts by GrapheneOS
-
I guess the app then downloads the required models
-
True or not, one can avoid the whole issue by using your phone as a phone, maybe to send texts, with location, mic, and camera switched off permanently, and all the other apps deleted or disabled. Sure, Google will still know you called your SO daily and your Mom once a week (NOT ENOUGH!), and that you were supposed to pick up the dry cleaning last night (did you?). Meh. If that's what floats the Surveillance Society's boat, I'm not too worried.
-
did they make it so after people started removing it?
-
The scaling attack specifically can make a photo sent to you look innocent to you but malicious to the reviewer; see the link above.
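The trick is that a downscaler only samples a small fraction of the source pixels, so an attacker can overwrite exactly those pixels with a hidden image while the rest of the photo stays innocent. A minimal sketch, assuming a naive stride-based nearest-neighbour downscaler (real resamplers sample at different points or average blocks, but the principle is the same):

```python
import numpy as np

SRC, DST = 256, 64
FACTOR = SRC // DST  # downscale by 4x in each dimension

def naive_downscale(img: np.ndarray) -> np.ndarray:
    """Nearest-neighbour downscale: keep every FACTOR-th pixel."""
    return img[::FACTOR, ::FACTOR]

cover = np.full((SRC, SRC), 255, dtype=np.uint8)   # innocent: all white
payload = np.zeros((DST, DST), dtype=np.uint8)     # hidden: all black

attack = cover.copy()
# Overwrite exactly the pixels the downscaler will sample.
attack[::FACTOR, ::FACTOR] = payload

# At full size, only 1 in 16 pixels differs, so the image still
# looks essentially white to a human viewer...
print(attack.mean())                    # ~239 out of 255

# ...but the downscaled copy the reviewer sees is entirely the payload.
print(naive_downscale(attack).mean())   # 0.0
```

Against a real resampler (e.g. Pillow's NEAREST or area averaging) the payload pixels would have to be placed at, or weighted toward, that resampler's specific sampling points, which is what the published scaling-attack papers work out in detail.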
-
Google says that SafetyCore “provides on-device infrastructure for securely and privately performing classification to help users detect unwanted content”.
Cheers, Google, but I'm a capable adult and able to do this myself.