Everything you say to your Echo will be sent to Amazon starting on March 28.
-
People are saying don't get an Echo, but this is the tip of the iceberg. My coworkers' cell phones are eavesdropping. My neighbors' doorbells record every time I leave the house. Almost every new vehicle mines us for data. We can avoid some of the problem, but we cannot avoid it all. We need a bigger, more aggressive solution if we are going to have a solution at all.
How about regulation? Let's start with saying that data about me belongs to me, not to whoever collected it, as is currently the case.
-
I have always told people to avoid Amazon.
They have doorbells to watch who comes to your house and when.
Indoor and outdoor security cameras to monitor when you go outside, for how long, and why.
They acquired Roomba, whose robots not only map out your house but also have little cameras in them, giving another angle to monitor you in the more personal areas that indoor cameras might not see.
They have the Alexa products, meant to record you at all times for their own purposes.
Why do you think an Amazon Prime subscription comes with free cloud storage, free video streaming, and free music? They are categorizing you in the most efficient and accurate way possible.
Boycott anything Amazon touches
They backed out of the Roomba deal. Now iRobot is going down the shitter.
-
Want to set up a more privacy-friendly solution?
Have a look at Home Assistant! It’s a great open-source smart home platform that recently released a local voice assistant (so requests are not processed in the cloud). It’s pretty neat!
I've seen something about this pop up occasionally on my feed, but it's usually a conversation I'm nowhere close to understanding lol
Could you recommend any resources for a complete noob?
-
No way! The microphones you put all over your house are listening to you? What a shocker!
If you bought these, this is on you. Trash them now.
-
I didn't even know this was a feature. My understanding has always been that Echo devices work as follows:
1. Store a constant small buffer of the past few seconds of audio.
2. Locally listen for the wake word (typically "Alexa") using onboard hardware. (This is why you cannot use arbitrary wake words.)
3. Upon hearing the wake word, send the buffer from step 1 along with any fresh audio to the cloud to process what was said.
4. Act on what was said. (Turn lights on or off, play Spotify, etc.)
Unless they made some that were able to do step 3 entirely locally, I don't see this as a big deal. They still have to do step 4 remotely.
Also, while they may be "always recording", they don't transmit everything. The buffer is only there so that if you say "Alexaturnthelightsoff" really fast, it has a better chance of getting the full sentence.
I'm not trying to defend Amazon, and I don't necessarily think this is great news or anything, but it doesn't seem like too big of a deal unless they made a lot of devices that can parse all speech locally and I just didn't know.
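To make that flow concrete, here's a minimal Python sketch of the scheme described above. It's purely illustrative: the microphone stream, wake-word check, and "cloud" call are made-up placeholders, not anything Amazon actually ships, but it shows why the rolling buffer exists and why audio only leaves the device after the wake word.

```python
# Minimal sketch of the buffer + wake-word flow described above.
# NOT Amazon's code: the mic, wake-word check, and "cloud" are fake placeholders.
from collections import deque

BUFFER_CHUNKS = 4  # pretend each chunk is ~1 second of audio

def mic_stream():
    """Simulated microphone: one labeled chunk per 'second' of audio."""
    yield from ["silence", "silence", "alexa", "turn the lights off", "silence"]

def is_wake_word(chunk: str) -> bool:
    """Stand-in for the local, on-device wake-word detector (step 2)."""
    return chunk == "alexa"

def send_to_cloud(chunks: list[str]) -> str:
    """Stand-in for cloud speech processing (step 3)."""
    return f"cloud parsed: {' '.join(chunks)} -> intent: lights_off"

ring = deque(maxlen=BUFFER_CHUNKS)   # step 1: only the last few chunks are kept
capturing = False
captured: list[str] = []

for chunk in mic_stream():
    ring.append(chunk)                           # always buffering locally
    if not capturing and is_wake_word(chunk):    # step 2: purely local check
        capturing = True
        captured = list(ring)                    # include the buffered lead-in audio
    elif capturing:
        captured.append(chunk)
        if chunk == "silence":                   # crude end-of-utterance detection
            print(send_to_cloud(captured))       # step 3: audio finally leaves the device
            capturing = False                    # step 4 (acting on the intent) omitted
```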
-
People are saying don't get an Echo, but this is the tip of the iceberg. My coworkers' cell phones are eavesdropping. My neighbors' doorbells record every time I leave the house. Almost every new vehicle mines us for data. We can avoid some of the problem, but we cannot avoid it all. We need a bigger, more aggressive solution if we are going to have a solution at all.
My clunky old bike ain't listening to shit bro. Neither is my Android phone running a custom ROM.
-
Want to set up a more privacy-friendly solution?
Have a look at Home Assistant! It’s a great open-source smart home platform that recently released a local voice assistant (so requests are not processed in the cloud). It’s pretty neat!
I have one big frustration with that: your voice input has to be understood PERFECTLY by the speech-to-text (STT) engine.
If you have a "To Do" list and say "Add cooking to my To Do list", it will do it! But if the STT system understood:
- Todo
- To-do
- to do
- ToDo
- To-Do
- ...
the system will say it couldn't find that list. The same goes for the names of your lights, asking for the time, and so on, and you have very little control over this.
HA Voice Assistant either needs to find a PERFECT match, or you need to be running a full-blown LLM as the backend, which honestly works even worse in many ways.
They recently added the option to use an LLM as a fallback only, but on most people's hardware that means a big chunk of requests take a suuuuuuuper long time to get a response.
I do not understand why there's no option to just fall back to the most similar command on an imperfect match, using something like the Levenshtein distance.
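As a concrete example of what I mean, here's a small Python sketch (the list names are made up, and this is obviously not how HA's matching is actually implemented) that maps whatever the STT engine produced onto the closest known name:

```python
# Sketch of fuzzy matching via Levenshtein distance.
# Hypothetical list names; not Home Assistant's actual internals.

def levenshtein(a: str, b: str) -> int:
    """Minimum number of single-character edits turning a into b."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            current.append(min(
                previous[j] + 1,               # deletion
                current[j - 1] + 1,            # insertion
                previous[j - 1] + (ca != cb),  # substitution
            ))
        previous = current
    return previous[-1]

def closest_match(heard: str, known_names: list[str], max_distance: int = 3) -> str | None:
    """Return the known name most similar to the STT output, or None if nothing is close."""
    normalized = heard.lower().replace("-", " ").strip()
    best = min(known_names, key=lambda name: levenshtein(normalized, name.lower()))
    return best if levenshtein(normalized, best.lower()) <= max_distance else None

# All of these STT spellings resolve to the one real list name.
lists = ["To Do", "Shopping"]
for heard in ["Todo", "To-do", "to do", "ToDo", "To-Do"]:
    print(heard, "->", closest_match(heard, lists))
```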
-
I didn't even know this was a feature. My understanding has always been that Echo devices work as follows:
1. Store a constant small buffer of the past few seconds of audio.
2. Locally listen for the wake word (typically "Alexa") using onboard hardware. (This is why you cannot use arbitrary wake words.)
3. Upon hearing the wake word, send the buffer from step 1 along with any fresh audio to the cloud to process what was said.
4. Act on what was said. (Turn lights on or off, play Spotify, etc.)
Unless they made some that were able to do step 3 entirely locally, I don't see this as a big deal. They still have to do step 4 remotely.
Also, while they may be "always recording", they don't transmit everything. The buffer is only there so that if you say "Alexaturnthelightsoff" really fast, it has a better chance of getting the full sentence.
I'm not trying to defend Amazon, and I don't necessarily think this is great news or anything, but it doesn't seem like too big of a deal unless they made a lot of devices that can parse all speech locally and I just didn't know.
It was an unadvertised feature, only available in the US and only in English.
-
Maybe I misread the actual text, but it sounds like the exact opposite, that it's going to auto-delete what you say.
-
They could literally just leave the feature on the device, but then you can't force your users to send you all their data, voices, thoughts, and firstborns.
Fuck Amazon, fuck Bezos
-
If anyone remembers the Mycroft Mark II Voice Assistant Kickstarter and was disappointed when development challenges and patent trolls caused the company's untimely demise, know that hope is not lost for a FOSS/OSHW voice assistant insulated from Big Tech.
FAQ: OVOS, Neon, and the Future of the Mycroft Voice Assistant
Disclaimer: I do not represent any of these organizations in any way; I just believe in their mission, wish them all the success in getting there, and want to help by spreading the word.
-
I have one big frustration with that: your voice input has to be understood PERFECTLY by the speech-to-text (STT) engine.
If you have a "To Do" list and say "Add cooking to my To Do list", it will do it! But if the STT system understood:
- Todo
- To-do
- to do
- ToDo
- To-Do
- ...
the system will say it couldn't find that list. The same goes for the names of your lights, asking for the time, and so on, and you have very little control over this.
HA Voice Assistant either needs to find a PERFECT match, or you need to be running a full-blown LLM as the backend, which honestly works even worse in many ways.
They recently added the option to use an LLM as a fallback only, but on most people's hardware that means a big chunk of requests take a suuuuuuuper long time to get a response.
I do not understand why there's no option to just fall back to the most similar command on an imperfect match, using something like the Levenshtein distance.
Because it takes time to implement. It will come.
-
Who pays for Alexa?
Plenty of people I know have gotten the little Echo Dots, or the bigger versions with larger speakers, for Christmas or birthdays. Technically they didn't spend money, but their friends and family did.
-
Want to set up a more privacy-friendly solution?
Have a look at Home Assistant! It’s a great open-source smart home platform that recently released a local voice assistant (so requests are not processed in the cloud). It’s pretty neat!
Home Assistant is amazing, but it is not yet an alternative to Alexa; the voice assistant is still in development and far from usable. It's impossible for me to remember the specific wording Assist demands, and the voice-to-text is wrong something like nine times out of ten. And that's after giving up on terrible locally hosted models and trying out their cloud option, which obviously is a huge privacy hole, but even then it was slow and inaccurate. It's a mystery to me how the FOSS community is so far behind on voice; Siri and Google Assistant started working offline years ago, and they run directly on a mobile device.
-
This is legal, even in the US?