Save The Planet
-
I have llama 3.2 on my phone and it's really funny because it's so low powered and dumb but so sweet.
it's like a little friend to talk to when I don't have Internet. he's a lil stupid but he got the spirit
That's cute!
-
We're going away, folks, and nothing of any true value will be lost, except all the species that did live in homeostasis with the Earth, which we're taking with us in our species' avarice-induced murder-suicide.
Carlin had some good material, but this is an absolutely stupid mindset. We can cause an extreme level of ecological damage. Will the planet eventually recover? Quite possibly. But that's not a certainty, and in the meantime we're triggering a mass extinction precisely because irresponsible humans figure there's no way we can hurt the Earth and it's self-important hubris to think that we can.
But the time we're living through and the time we're heading into are all the proof we should need that it's actually hubris to assume our actions have no meaningful impact.
-
Worse is Google that insists on shoving a terrible AI-based result in your face every time you do a search, with no way to turn it off.
I'm not telling these systems to generate images of cow-like girls, but I'm getting AI shoved in my face all the time whether I want it or not. (I don't).
I am trying to understand what Google's motivation for this even is. Surely it is not profitable to be replacing their existing, highly lucrative product with an inferior alternative that eats up way more power?
-
Carlin had some good material, but this is an absolutely stupid mindset. We can cause an extreme level of ecological damage. Will the planet eventually recover? Quite possibly. But that's not a certainty, and in the meantime we're triggering a mass extinction precisely because irresponsible humans figure there's no way we can hurt the Earth and it's self-important hubris to think that we can.
But the time we're living through and the time we're heading into are all the proof we should need that it's actually hubris to assume our actions have no meaningful impact.
We do have an impact but the earth will 100% be ok when we are dead and gone eventually. A million years ain't shit to the earth.
-
Carlin had some good material, but this is an absolutely stupid mindset. We can cause an extreme level of ecological damage. Will the planet eventually recover? Quite possibly. But that's not a certainty, and in the meantime we're triggering a mass extinction precisely because irresponsible humans figure there's no way we can hurt the Earth and it's self-important hubris to think that we can.
But the time we're living through and the time we're heading into are all the proof we should need that it's actually hubris to assume our actions have no meaningful impact.
We humans are a virus…a parasite and the earth will be better off once we are extinct.
-
Carlin had some good material, but this is an absolutely stupid mindset. We can cause an extreme level of ecological damage. Will the planet eventually recover? Quite possibly. But that's not a certainty, and in the meantime we're triggering a mass extinction precisely because irresponsible humans figure there's no way we can hurt the Earth and it's self-important hubris to think that we can.
But the time we're living through and the time we're heading into are all the proof we should need that it's actually hubris to assume our actions have no meaningful impact.
Immediate to Short-Term (Days to Centuries)
- Hours to weeks: Power grids fail; nuclear reactors melt down without maintenance[11].
- Months to decades: Urban areas flood as drainage systems fail; buildings decay from weather and plant growth[6][11].
- 100–300 years: Steel structures collapse; concrete buildings crumble[5][7]. Most cities become overgrown forests[6].
Medium-Term (Thousands of Years)
- 1,000 years: Visible surface structures (e.g., roads, monuments) are buried or eroded. Plastics fragment but persist chemically[5][7].
- 10,000–250,000 years: Nuclear isotopes (e.g., plutonium-239) remain detectable in sediments and ice cores[7]. Mining tunnels fill with sediment but leave identifiable "industrial fossils"[7].
- 500,000 years: Microplastics and polymer layers in ocean sediments endure[5][10].
Long-Term (Millions of Years)
- 1–7 million years: Fossils of humans and domesticated animals persist. Geological strata show elevated carbon levels and mass extinction markers[4][8]. Deep mines and landfills remain as distinct layers[7][10].
- 50–100 million years: Continental drift subducts surface evidence; satellites decay or drift into space[3][10]. Only deep geological traces (e.g., mine shafts, isotope ratios) might endure[3][10].
- 250 million years: Next predicted mass extinction eradicates all mammals, including any remaining human traces[9].
Near-Permanent Traces
- Space artifacts: Lunar landers, Mars rovers, and Voyager probes persist for billions of years[3][10].
- Radio signals: Human broadcasts travel through space indefinitely at light speed[5].
Key Factors
- Detection likelihood: Aliens or future species could find traces for 100+ million years via deep geological analysis or space exploration[5][10].
- Total erasure: Requires Earth's destruction (e.g., solar expansion in 5 billion years)[10].
Citations:
[1] Human extinction https://en.wikipedia.org/wiki/Human_extinction
[2] What If Humans Suddenly Went Extinct? https://www.youtube.com/watch?v=yuOKTZISXhc
[3] How long would it take for all traces of humans to be gone? https://www.reddit.com/r/answers/comments/1azu120/how_long_would_it_take_for_all_traces_of_humans/
[4] What would happen to Earth if humans went extinct? https://www.livescience.com/earth-without-people.html
[5] How long before all human traces are wiped out? https://www.newscientist.com/lastword/2215950-how-long-before-all-human-traces-are-wiped-out/
[6] Vanishing Act: What Earth Will Look Like 100 Years After Humans Disappear - Brilliantio https://brilliantio.com/if-people-dissapeared-what-will-happen-to-earth-in-100-years/
[7] If humans became extinct, how long would it take for all ... https://www.sciencefocus.com/science/if-humans-became-extinct-how-long-would-it-take-for-all-traces-of-us-to-vanish
[8] Nature will need up to five million years to fill the gaps caused by man-made mass extinctions, study finds https://www.independent.co.uk/climate-change/news/mass-extinctions-five-million-years-nature-mammals-crisis-animal-plants-pnas-aarhus-a8585066.html
[9] Humans Will Go Extinct on Earth in 250 Million Years; Mass Extinction Will Occur Sooner if Burning Fossil Fuels Continues [Study] https://www.sciencetimes.com/articles/49951/20240430/humans-will-go-extinct-earth-250-million-years-mass-extinction.htm
[10] How long would it take for all evidence of humanity to be ... https://worldbuilding.stackexchange.com/questions/153618/how-long-would-it-take-for-all-evidence-of-humanity-to-be-erased-from-earth
[11] What Would Happen If Every Human On Earth Just Disappeared? https://www.scienceabc.com/humans/life-like-humans-suddenly-disappeared.html
-
I am trying to understand what Google's motivation for this even is. Surely it is not profitable to be replacing their existing, highly lucrative product with an inferior alternative that eats up way more power?
Their motivation is always ads. The AI response is longer and takes time to read, so you spend more time looking at their ads. If the answer is sufficient, you might not even click away to a search result.
AI is potentially a huge bonanza for search sites, letting them suck up the ad revenue that used to go to the search results.
-
Are you interpreting my statement as being in favour of training AIs?
I'm interpreting your statement as "the damage is done so we might as well use it"
And I'm saying that using it causes them to train more AIs, which causes more damage.
-
I'm interpreting your statement as "the damage is done so we might as well use it"
And I'm saying that using it causes them to train more AIs, which causes more damage.
I agree with your second statement. You have misunderstood me. I am not saying the damage is done so we might as well use it. I am saying people don't understand that it is the training of AIs which is directly power-draining.
I don't understand why you think that my observation that people are ignorant about how AIs work is somehow an endorsement that we should use AIs.
-
This post did not contain any content.
I wish cloud genAI services were pay-per-use, priced at no less than the electricity cost per run. It would turn the overblown hype over ChatGPT clones into a question of whether it's really worth the operating cost. Right now it's all VC-funded.
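As a rough illustration of what per-run electricity pricing could look like, here's a minimal sketch; the per-query energy, electricity price, and usage figures are all placeholder assumptions, not real numbers from any provider.

```python
# Hypothetical back-of-envelope: what a query would cost if billed at
# electricity cost alone. All constants are assumptions for illustration.
KWH_PER_QUERY = 0.003        # assumed electricity used to serve one query
PRICE_PER_KWH_USD = 0.15     # assumed electricity price
QUERIES_PER_MONTH = 1_000    # assumed usage by one heavy user

cost_per_query = KWH_PER_QUERY * PRICE_PER_KWH_USD
monthly_cost = cost_per_query * QUERIES_PER_MONTH
print(f"${cost_per_query:.4f} per query, ${monthly_cost:.2f} per month")
```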
-
There's no functional difference aside from usage and scale, which is my point.
I find it interesting that the only actual energy calculations I see from researchers are for the training and the things that go along with the training, rather than the usage per actual request after training.
People then conflate training energy costs with normal usage costs without data to back it up. I don't have the data either, but I do have what I can do and see on my side.
I'm not sure that's true. If you look up things like "tokens per kWh" or "tokens per second per watt" you'll get results from people measuring their power usage while running specific models on specific hardware. This is mainly for consumer hardware, since it's people looking to run their own AI servers who are posting about it, but it sets an upper bound.
The AI providers are tight-lipped about how much energy they use for inference and how many tokens they complete per hour.
You can also infer a bit by doing things like looking up the power usage of a 4090, then looking at the tokens-per-second performance someone is getting from a particular model on a 4090 (people love posting their tokens-per-second numbers every time a new model comes out), and extrapolating from that.
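To make that extrapolation concrete, here's a minimal sketch; the power draw, generation speed, and response length are placeholder assumptions, not measured figures.

```python
# Hypothetical estimate of inference energy from a GPU's power draw and
# observed generation speed. All constants are assumptions for illustration.
GPU_POWER_WATTS = 450.0     # assumed full-load draw of an RTX 4090
TOKENS_PER_SECOND = 30.0    # assumed generation speed for some local model
RESPONSE_TOKENS = 500       # assumed length of a typical response

joules_per_token = GPU_POWER_WATTS / TOKENS_PER_SECOND
wh_per_response = joules_per_token * RESPONSE_TOKENS / 3600.0  # 1 Wh = 3600 J
print(f"{joules_per_token:.1f} J/token, ~{wh_per_response:.2f} Wh per {RESPONSE_TOKENS}-token response")
```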
-
I agree with your second statement. You have misunderstood me. I am not saying the damage is done so we might as well use it. I am saying people don't understand that it is the training of AIs which is directly power-draining.
I don't understand why you think that my observation that people are ignorant about how AIs work is somehow an endorsement that we should use AIs.
I guess.
It still smells like an apologist argument to be like "yeah but using it doesn't actually use a lot of power".
I'm actually not really sure I believe that argument either, though. I'm pretty sure that inference is hella expensive. When people talk about training, they don't talk about the cost to train on a single input, they talk about the cost for the entire training. So why are we talking about the cost to infer on a single input?
What's the cost of running training, per hour? What's the cost of inference, per hour, on a similarly sized inference farm, running at maximum capacity?
-
And when it did it also altered the results, making them worse, because it was trying to satisfy "fuck" as part of your search.
Well fuck...
-
Worse is Google that insists on shoving a terrible AI-based result in your face every time you do a search, with no way to turn it off.
I'm not telling these systems to generate images of cow-like girls, but I'm getting AI shoved in my face all the time whether I want it or not. (I don't).
Someone posted here a while ago - if you use the URL https://www.google.com/search?q=%25s&udm=14 it doesn't include the AI search. I've updated my Google search links to use that instead of the base Google URL.
-
This post did not contain any content.
Meanwhile I'm downtown in my city cleaning windows in office buildings that are 75% empty, but the heat or AC is blasting on completely empty floors and most of the lights are on.
-
It's actually because small trucks were regulated out of the US market. Smaller vehicles have more stringent mileage standards that trucks aren't able to meet. That forces companies to make all their trucks bigger, because bigger vehicles are held to a different standard.
So the people who want or need a truck are pushed to buy a larger one.
They can meet them. But the profit margin is slimmer than if they use the giant frame.
-
I guess.
It still smells like an apologist argument to be like "yeah but using it doesn't actually use a lot of power".
I'm actually not really sure I believe that argument either, though. I'm pretty sure that inference is hella expensive. When people talk about training, they don't talk about the cost to train on a single input, they talk about the cost for the entire training. So why are we talking about the cost to infer on a single input?
What's the cost of running training, per hour? What's the cost of inference, per hour, on a similarly sized inference farm, running at maximum capacity?
Maybe you should stop smelling text and try reading it instead.
Running an LLM in deployment can be done locally on one's machine, on a single GPU, and in this case is like playing a video game for under a minute. OpenAI's models are larger by a factor of 10 or more, so it's maybe like playing a video game for 15 minutes (obviously this varies based on the response to the query).
It makes sense to measure deployment usage marginally, based on its queries, for the same reason it makes sense to measure the environmental impact of a car in terms of hours or miles driven. There's no natural way to do this for training, though. You could divide the training cost by the number of queries, to amortize it across its actual usage, which would make it seem significantly cheaper, but this comes with the unintuitive property that the amortization weight goes down as more queries are made, so it's unclear exactly how much of the cost of training should be assigned to a given query. It might make more sense to talk in terms of the expected number of total queries over a model's lifetime deployment.
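A minimal sketch of that amortization point; every number here is a hypothetical placeholder, not a figure for any real model.

```python
# Hypothetical amortization of one-off training energy across lifetime queries.
TRAINING_ENERGY_KWH = 1_000_000.0   # assumed one-off training cost
INFERENCE_KWH_PER_QUERY = 0.002     # assumed marginal energy per query
LIFETIME_QUERIES = 10_000_000_000   # assumed total queries before retirement

amortized_training = TRAINING_ENERGY_KWH / LIFETIME_QUERIES  # kWh per query
total_per_query = amortized_training + INFERENCE_KWH_PER_QUERY
print(f"training share: {amortized_training:.6f} kWh/query")
print(f"inference:      {INFERENCE_KWH_PER_QUERY:.6f} kWh/query")
print(f"total:          {total_per_query:.6f} kWh/query")
# The training share shrinks as LIFETIME_QUERIES grows, which is exactly why
# the per-query attribution of training energy is ambiguous.
```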
-
Someone posted here a while ago - if you use the URL https://www.google.com/search?q=%25s&udm=14 it doesn't include the AI search. I've updated my Google search links to use that instead of the base Google URL.
You can also use alternatives like Startpage and Ecosia, which use Google results, I believe.
-
This post did not contain any content.
Laughs in Total Recall
-
I'm really OOTL when it comes to AI GHG impact. How is it any worse than crypto farms, or streaming services?
How do their outputs stack up to traditional emitters like agriculture and industry? I need a measuring stick.
How is it any worse than crypto farms, or streaming services?
These two things are so different.
Streaming services are extremely efficient; they tend to be encode-once and decode-on-user's-device. Video was for a long time considered a tough thing to serve, so engineers put tons of effort into making it efficient.
Cryptocurrency is literally designed to be as wasteful as possible while still being feasible. "Proof-of-work" (how Bitcoin and many other currencies operate) means that mining algorithms must burn as much computation as they can get away with, doing pointless operations just to prove they tried. It's an abomination.
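For anyone unfamiliar with what proof-of-work actually computes, here's a minimal sketch (a simplified illustration, not actual Bitcoin code): miners hash the block data with different nonces until the digest falls below a target, and every failed attempt is computation spent purely to prove effort.

```python
import hashlib

def mine(block_data: bytes, difficulty_bits: int) -> int:
    """Return the first nonce whose SHA-256 digest has `difficulty_bits` leading zero bits."""
    target = 2 ** (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(block_data + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1  # each failed hash is work done only to prove effort

# Higher difficulty means exponentially more wasted hashes before success.
print(mine(b"example block", difficulty_bits=20))  # ~1 million hashes on average
```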