Excel Developers & Microsoft Copilot.
-
Just once I want to see a scenario where an LLM is the better tool for everyday computing. Maybe I'm just bad at technology, but every task I've tried has been more effort for a worse result.
It's pretty helpful for coding, when used responsibly
It's probably going to cripple the industry though. Junior developers are just not getting brought on
-
…peak technology, gentlemen!
That is different: it's because you're interacting with token-based models, which never see individual characters. There has been new research on feeding byte-level data to LLMs to solve exactly this. LLMs being bad at actual numerical calculation is a separate problem from that.
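A tiny sketch of the difference (assuming the Hugging Face transformers library; GPT-2 here is just an example tokenizer, and the exact splits vary by model):

```python
# Sketch of what a token-based model actually "sees", assuming the
# Hugging Face transformers library; GPT-2 is only an example tokenizer
# and the exact splits differ per model.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
word = "strawberry"

# Subword tokens: the model gets a few opaque chunks, not letters,
# which is why counting the r's is an unnatural task for it.
print(tokenizer.tokenize(word))

# Byte-level input exposes every character as its own unit, which is
# what the byte-level research is about feeding to models directly.
print(list(word.encode("utf-8")))
```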
It would be best to couple an LLM to a tool-calling system for rudimentary numerical calculations. Right now the only way to do that is to cook up a Python script with HF transformers and a finetuned model; I am not aware of any commercial model doing this. (And this is not what Microshit is doing.)
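Roughly the shape that takes (a minimal sketch only: the JSON tool format, the `model_generate` hook, and the `safe_eval` helper are placeholders I made up, not any particular library's API):

```python
# Minimal sketch of coupling an LLM to a calculator tool.
# Everything here is illustrative: the JSON tool format, the
# model_generate() hook (stand-in for e.g. an HF transformers
# pipeline with a tool-use finetuned model), and safe_eval().
import ast
import json
import operator

# Whitelist of arithmetic operators so we never eval() arbitrary code.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression emitted by the model."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -_eval(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        raise ValueError(f"disallowed expression: {expr!r}")
    return _eval(ast.parse(expr, mode="eval"))

def answer(question: str, model_generate) -> str:
    # Ask the model to emit a tool call instead of doing the math itself.
    prompt = (
        'If the question needs arithmetic, reply only with JSON like '
        '{"tool": "calculate", "expression": "<expr>"}.\n'
        f"Question: {question}"
    )
    reply = model_generate(prompt)
    try:
        call = json.loads(reply)
        if isinstance(call, dict) and call.get("tool") == "calculate":
            return str(safe_eval(call["expression"]))
    except (json.JSONDecodeError, ValueError, KeyError):
        pass
    return reply  # fall back to the raw model output

# Stubbed "model" so the sketch runs on its own:
fake_model = lambda _: '{"tool": "calculate", "expression": "81.3 + 121.3"}'
print(answer("What is 81.3 + 121.3?", fake_model))
```

The point is that the model only decides *what* to compute; the actual arithmetic happens in ordinary code, which is the part LLMs are bad at.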
-
i mean, if it sucks at this, why put it in lol
(rhetorical question, it’s to please investors, i know)
One of the absolute best uses for LLMs is to generate quick summaries of massive amounts of data. It is pretty much the only use case where, if the model doesn't overflow and become incoherent immediately [1], it is extremely useful.
But nooooo, this is luddite.ml; saying anything good about AI gets you burnt at the stake.
Some of y'all would've lit the fire under Jan Hus if you lived in the 15th century
[1] This is more of a concern for local models with smaller parameter counts and running quantized. For premier models it's not really much of a concern.
-
cross-posted from: https://reddthat.com/post/48301764
Source: Mastodon.
Switch to pandas
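Something like this (a sketch; the file name, sheet name, and column labels are invented for illustration, not taken from the actual spreadsheet):

```python
# Sketch: compute the figures deterministically instead of asking Copilot.
# The file name, sheet name, and column labels below are made up.
import pandas as pd

df = pd.read_excel("accounts_2025.xlsx", sheet_name="Summary")

# Exact arithmetic on the actual cells -- nothing gets hallucinated.
print(df[["Item", "2025 ($ millions)"]])
print("Total:", df["2025 ($ millions)"].sum())
```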
-
Let's jam a thing that is frequently wrong into absolutely everything!
it's not only wrong, but also incredibly expensive to run wrong.
-
…peak technology, gentlemen!
remember when Kirk had to outsmart an AI using paradoxes? could have asked it about strawberries
-
it's not only wrong, but also incredibly expensive to run wrong.
"The next version will be totally better bro, just trust me bro, it will all work out bro, gimme another billion dollars bro."
-
"The next version will be totally better bro, just trust me bro, it will all work out bro, gimme another billion dollars bro."
I used to call it "investor scams", then they coined "vaporware". 99% of AI advancement is pure investor scam.
-
Even better. They are incapable of discerning correlation vs causation, which is why they give completely illogical and irrelevant information.
Turns out pattern recognition means dogshit when you don't know how anything works, and never will.
Beyond laughing at it being dumb or making silly pictures I don't really care about, the only actually useful thing I've found for this wave of AIs is basically: "pretend you're an expert in whatever field you're being asked about, talking to a moderately less experienced professional, and give a very brief description of the topic, focusing on what the user can look up on their own".
As an example, I asked it about designing some gears for a project. It told me I used a word wrong and the more precise term would give me better search results, defined a handful of terms I'd run into, and told me to buy a machinery handbook or get a design table since the parameters are all standardized.
The current approach isn't going to replace thinking for yourself, but pattern recognition can do a good job seeing that questions about X often end up relating to A, B, and C.
Oh, and I also got Google's to only respond as though it's broken, and it made it really fun to try to figure out the news through its cryptic gibberish. A solid hour of amusement, and definitely worth several billion dollars of other people's money.
-
cross-posted from: https://reddthat.com/post/48301764
Source: Mastodon.
| Item | 2025 ($ millions) |
| --- | --- |
| Non Cash Flow Expenses | (81.3) |
| Operating Profit (Loss) | (121.3) |
| Profit / (Loss) after Tax | (70.6) |
| Something that would look pretty believable in this spreadsheet if it had the label "Taxation Credit / (Charge)" | (0.8) |
| Net Debt | (27.1) |