Mozilla is Introducing 'Terms of Use' to Firefox | Also about to go into effect is an updated privacy notice
-
Building a browser from scratch costs well over a million dollars in development. I don't think they'd be able to achieve it without sponsors.
-
I'm not saying they shouldn't seek funding, but maybe not from companies that literally hosted and sold Nazi t-shirts.
-
What's that saying about sitting at a table with a Nazi?
-
Also, it turns out that drying up donations for privacy-protecting browsers means there's less demand for them
Or, hear me out, that former donors don't trust them anymore!
But also, a lot of people don't want to donate when their contribution would be an immeasurably small amount next to the CEO's unimaginably huge pay, money that could instead significantly boost development.
Personally, that's a big reason I'd rather support smaller projects, or even ones the size of Bitwarden.
-
they have to cut something, for sure. THEY HAVE TO REDUCE THE CEO'S PAY BY A MEASLY 20% AND FUND DEVELOPMENT FROM THAT!!!
or by even more.
-
and then, "uh, we are removing the URL bar in the next version because our statistics say nobody uses it!!"
-
soon! they can come any year now!
-
Cough cough. It's true that the biggest cost is salaries: $17,097,933. But $10 million of that goes to the C-suite, and $4 million to the contractors who do the actual work.
https://assets.mozilla.net/annualreport/2024/b200-mozilla-foundation-form-990-public-disclosure-ty23.pdf
Just look into the books.
-
calm down
sweet summer child
I know these tactics, they're designed to goad me into an emotive response so I lose the argument!
They're not an argument in themselves, and your smugness is distasteful. Your interlocutor is treating you with more respect than you are showing in return.
-
LibreWolf is annoying in that it doesn't work on my Mac with VPN split tunneling, a seemingly known issue they haven't fixed.
-
I feel like everything is getting corroded; the capitalists are wearing everything down
-
PAID ONLY BY A RELATED FOR-PROFIT
Conveniently missed note above
The remainder of the executive team is paid what appears to be a fairly reasonable salary.
-
lol. Are you for real? You think the Firefox development team can be funded by 20% of the CEO’s salary?!
-
The great part of open source is forking.
And forked it will be: Firefox will be forked into a version with a different name that doesn't have this shit, and then the name Firefox will fade into history as a once-great product that formed the basis of a different great product.
Fork you, Mozilla
-
Yes, I was admittedly tired when I responded to this thread, and seeing such long-winded responses was quite annoying to me.
But I wasn't trying to goad them, I was just exhausted at having to spend so much time and energy just to make my point, which seemed relatively non-controversial to me when I originally posted it.
-
I’m not. Apologies if I was unclear, but I was specifically referencing the fact that you were saying AI was going to accelerate to the point that it replaces human labor. I was simply stating that I would prefer a world in which human labor is not required for humans to survive, and we can simply pursue other passions, if such a world were to exist as a result of what you claim is happening with AI. You claimed AI will get so good it replaces all the jobs.
I'm sorry, but you seem to have misinterpreted what I was saying. I never claimed that AI would get so good it replaces all jobs.
I stated that the potential consequences were extremely concerning, without necessarily specifying what those consequences would be. One consequence is the automation of various forms of labor, but there are many other social and psychological consequences that are arguably more worrying.
Cool, I would enjoy that, because I don’t believe that jobs are what gives human lives meaning, and thus am fine if people are free to do other things with their lives.
Your conception of labor is limited. You're only taking into account jobs as they exist within a capitalist framework. What if AI was statistically proven to be better at raising children than human parents? What if AI was a better romantic partner than a human one? Can you see how this could be catastrophic for the fabric of human society and happiness? I agree that jobs don't give human lives meaning, but I would contend that a crucial part of human happiness is feeling that one is a valued, contributing member of a community or family unit.
The automation of labor is not even remotely comparable to the creation of a technology whose explicit, sole purpose is to cause the largest amount of destruction possible.
If you actually understood my point, you wouldn't be saying this. The intended purpose of the creation of a technology often turns out to be completely different from the actual consequences. We intended to create fire to keep warm and cook food, but it eventually came to be used to create weapons and explosives.
-
I’m sorry, but you seem to have misinterpreted what I was saying. I never claimed that AI would get so good it replaces all jobs. I stated that the potential consequences were extremely concerning, without necessarily specifying what those consequences would be. One consequence is the automation of various forms of labor, but there are many other social and psychological consequences that are arguably more worrying.
My apologies, I'm simply quite used to people arguing against AI using specifically the automation of jobs as their primary concern, and assumed that it was a larger concern of yours when it came to the "consequences" of AI as a concept.
If you actually understood my point, you wouldn’t be saying this. The intended purpose of the creation of a technology often turns out to be completely different from the actual consequences.
Obviously, but the statistical probability of a thing being used for bad purposes, especially in a way that outweighs the benefit of the technology itself, is always higher for a thing designed to be harmful from the start, as opposed to something started with good intentions. That doesn't mean a thing created to be harmful can't do or cause a good thing later on, but it's much less likely to than something designed to help people as its original goal.
We intended to create fire to keep warm and cook food, but it eventually came to be used to create weapons and explosives.
Had we not invented our uses of fire, would we have any of the comforts, standard of living, and capabilities that we do now? Would we be able to feed as many people as we do, keep our food safe and prevent it from spoiling, keep ourselves from dying in the winter, etc? Fire has brought a larger benefit than it has harms.
We intended to use the printing press to spread knowledge and understanding, but it ultimately came to spread hatred and fear.
While some media is used to spread hatred and fear, a much worse scenario is one in which no media can be spread at the same scale, and information dissemination is instead entirely reliant on word of mouth. This means extremely delayed knowledge of current events, an overall less informed population, and all the issues that come along with disseminating knowledge through a literal game of telephone. Things get lost, mixed up, falsified, and so on, and the ability to disseminate knowledge quickly can make those things much less likely.
Will they still happen? Sure. But I'd prefer a well-informed world that is sometimes subjected to misinformation, fear, and hate, to a world where all information is spread via ever-changing word of mouth, where information can't be easily fact-checked, shared, or researched, and where rumors can very frequently hold the same validity as fact for extended periods of time without anyone even being capable of checking if they're real.
The printing press has brought a larger benefit than it has harms. Do you see the pattern here?
And again, nuclear weapons have been used twice in wartime. Guns, swords, spears, automobiles, man-made famines, aeroplanes, and literally hundreds of other technologies have killed more human beings than nuclear weapons have.
Just because nuclear weapons make a big boom doesn’t make them more destructive than other technologies.
Cool, I never once stated that Nukes were more deadly than any of these other examples provided. I only stated that I don't believe that AI is more dangerous than nukes, in contrast to your original statement.
Nuclear fission has also provided one of the cleanest sources of energy we possess,
Nuclear fission research was taking place before the idea of using it for a deadly bomb was even a thing. The development of nuclear bombs came afterwards.
What if AI was statistically proven to be better at raising children than human parents? What if AI was a better romantic partner than a human one? Can you see how this could be catastrophic for the fabric of human society and happiness? I agree that jobs don’t give human lives meaning, but I would contend that a crucial part of human happiness is feeling that one is a valued, contributing member of a community or family unit.
A few points on this one. Firstly, just because a technology can be used, I don't necessarily think it should. If a tool is better than humans at something (let's say AI becomes good enough to automate all woodworkers with physical woodworking robots adapted for any task) I'll still support allowing humans to do that thing if it brings them joy. (People could simply still do woodworking, and I could get a table from one of them instead of from the AI, just because I feel like it.) The use of any technology after it's developed is not an inevitability, even if it's an option.
Secondly, I personally believe in doing what I can to maximize overall human happiness. If AI was better at raising children, but people still wanted to enjoy raising children, and we didn't see any demonstrable negative outcomes from having humans raise children instead of AI, then I would support whatever mechanism the parents preferred based on what they think would make them more happy, raising a child, or not.
If AI was a better romantic partner, in the sense that people broadly preferred AI to real people, and there wasn't evidence that such a trend increasing would make people broadly more unhappy, or unsatisfied with life, then I'd support it, because it wouldn't be doing any harm.
Ask yourself why you consider such things to be bad in the first place. Is it because you personally wouldn't enjoy those things? Cool, you wouldn't have to. And if society broadly didn't enjoy those things, then nobody would use them in the first place. You're presupposing both that society would develop and use AI for those purposes, but also not actually prefer using them, in which case they wouldn't be a replacement, because no society would choose to implement them.
This is like saying "what if we gave everyone IV drips that gave them dopamine all the time, but this actually destroyed the fabric of society and everyone was less happy with it?" Great, then nobody will use the IVs because they make them less happy than not using the IVs.
This entire argument assumes two contradictory things: That society will implement a thing to replace people because it's better, and they'd prefer to use it, but also that society will not prefer to use it because it will make them less happy. You can't have both.
As far as I can tell, all three of your initial retorts about the relative danger of nuclear weapons are basically incoherent word salads. Even if I were to concede your arguments regarding the relative dangers of AI (which I am absolutely not going to do, although you did make some good points), you would still be wrong about your initial statement because you clearly overestimated the relative danger of nuclear weapons.
Your only argument here for why AI would be relatively more dangerous is... "it could be." Simply stating that in the future, it may get good enough to do X or Y, and because that's undesirable to you, therefore the technology as it exists now will obviously do those things if allowed to progress.
Do you have any actual evidence or reason to believe that AI will do these things? That it will ever even be possible for it to do X or Y, that society would simultaneously willingly implement it while also not wanting it to be implemented because it harms them, or that the current trajectory of the industry even has a chance of driving the development of technologies that would ever be capable of those things?
Right now, the primary developments in "AI" are just better LLMs, which are just word probability predictors. Sure, they're getting better at predicting the probability of words, but how would that lend itself to practically, say, raising a child?
I essentially dismantled your position from both sides, and yet you refuse to concede even a single inch of ground, even on the more obvious issue of nuclear weapons only being responsible for a relatively paltry number of deaths.
And how many people has AI killed to date? Oh wait, fewer than nuclear bombs? Just because nukes haven't yet been responsible for a huge number of deaths, while AI might be in the future, stating that AI is possibly more dangerous than nuclear bombs must be correct!
You're making arguments from two completely different points in time. You're saying that because nukes haven't yet killed as many people as you think AI will in the future, they are therefore less dangerous (even while nukes still pose a constant threat that, given the right circumstances, could cause a chain reaction of deaths in the future). Unless you can substantiate your claim with some form of evidence that shows AI is likely to do any of these dangerous things on our current trajectory, you're arguing current statistics against a wholly unsubstantiated, imagined future, and then saying you're correct because in what you think the future will be like, AI will actually be doing all these bad things that make it worse than nukes.
Substantiate why you think AI will ever even get to that point, and also be implemented in a way that damages society, instead of just assuming the worst case scenario and assuming it's likely.
-
Cheers mate, have a good one.