Okay seriously this technology still baffles me.
-
[email protected] replied to [email protected] last edited by
What’s this “if” nonsense? I loaded up a light model of it, and already have put it to work.
-
[email protected] replied to [email protected] last edited by
Look at it another way: people think this is the start of an actual AI revolution, as in full-blown AGI, or close to it, or at least something very capable.
I think the bigger threat of revolution (and counter-revolution) is that of open-source software. People who don't know anything about FOSS have been told for decades that software is a tool you need, and that it's only available through innovative, superhuman-intelligent CEOs graciously offering us the opportunity to buy it.
Imagine if everyone finds out that those CEOs are actually the ones stifling progress and development, while manipulating markets to further enrich themselves and whichever partners align with that goal. Not to mention defrauding the countless investors who thought they were holding rocket-ship money that was actually snake oil.
All while another country did that collectively and just said, "here, it's free. You can even take the code and use it how you personally see fit, because if this thing really is that thing, it should be a tool anyone can access. Oh, and all you other companies, your code is garbage btw. Ours runs on a potato by comparison."
I'm just saying, the US has already shown it will go to extreme lengths to keep its citizens from thinking too hard about how its economic model is fucking them while the rich guys just move on to the next thing they'll sell us.
-
[email protected] replied to [email protected] last edited by
Have you actually read my text wall?
Even o1 (which AFAIK is roughly on par with R1-671B) wasn't really helpful for me. I often (actually, all the time) need correct answers to complex problems, and LLMs just aren't capable of delivering that.
I still need to try out whether it's possible to train it on my/our codebase, such that it's at least usable as something like GitHub Copilot (which I also don't use, because it just isn't reliable enough and too often generates bugs). Also, I'm a fast typist: by the time the answer is there and I've parsed/read/understood the code, I'd already have written a better version.
-
[email protected] replied to [email protected] last edited by
So, an unreliable boilerplate generator that you need to debug?
Right, I've seen that it's somewhat nice for quickly generating bash scripts etc.
It can certainly generate quick-and-dirty scripts as a starter. But the code quality is often subpar (and often incorrect), which triggers my perfectionism to make it better, at which point I should've written it myself...
But I agree that it can often serve well for exploration, and sometimes you learn new stuff (at least if you weren't already an expert in the topic, and you should always validate whether it's correct).
Actual programming in e.g. Rust is a catastrophe with LLMs, though (more common languages like JS work better).
-
[email protected] replied to [email protected] last edited by
I use C# and PS/CMD for my job. I think you're right. It can create a decent template for setting things up, but it trips on its own dick with anything more intricate than simple two-step commands.
-
[email protected] replied to [email protected] last edited by
If you blindly ask it questions without grounding resources, you're going to get nonsense eventually, unless the questions are really simple.
They aren't infinite knowledge repositories. The training method is lossy when it comes to memory, just like our own memory.
Give it documentation or some other context, and it can summarize pretty well and even link things across documents or other sources.
The problem is that people are misusing the technology, not that the tech has no use or merit, even if it's just from an academic perspective.
-
[email protected] replied to [email protected] last edited by
Ahh. It’s overconfident neckbeard stuff then.
-
[email protected] replied to [email protected] last edited by
You're just trolling, aren't you? Have you used AI while coding for an extended period and then tried going without it for a while?
I currently don't miss it... Keep in mind that you still have to check whether all the code is correct, etc. Writing code isn't what usually takes much of my time... it's debugging, and finding architecturally sound, good solutions to the problem. And AI is definitely not good at that (even if you're not that experienced).
-
[email protected] replied to [email protected] last edited by
Yes, I have tested that use case multiple times. It performs well enough.
A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.
-
[email protected] replied to [email protected] last edited by
Yes, I know, I tried all kinds of inputs, ways to query it, including full code-bases etc.
Long story short: I'm faster just not caring about AI (at the moment).
As I said somewhere else here, I have a theoretical background in this area.
Though speaking of it, I think I really need to try training or fine-tuning a DeepSeek model on our code-bases, to see whether that makes it a good alternative to the dumb GitHub Copilot (which I've also disabled, because it produces a looot of garbage that I don't want to waste my attention on...). Maybe it's now finally possible to use at least for completion, when it knows details about the whole code-base (not just the snapshots GitHub Copilot sees).
-
[email protected] replied to [email protected] last edited by
As you're being unkind all the time, let me be unkind as well:
A calculator also isn’t much help, if the person operating it fucks up. Maybe the problem in your scenario isn’t the AI.
If you can effectively use AI for your problems, maybe they're too repetitive, and actually just dumb boilerplate.
I'd rather solve problems that require actual intelligence (e.g. doing research, solving math problems, thinking about software architecture, solving problems efficiently), and I don't even want to deal with problems that require writing a lot of repetitive code, which AI may be (and often isn't) of help with.
I have yet to see efficiently generated Rust code that autovectorizes well, without a lot of allocations etc. I always get triggered by the insanely bad code quality of AI that doesn't even really understand what allocations are... Argh, I could go on...
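To illustrate the allocation point with a minimal, hypothetical sketch (the function names are made up, and this is the kind of pattern I mean, not code from any actual model output): the first version collects into a temporary Vec on the heap, while the second stays on the iterator chain, giving the compiler one tight allocation-free loop that it can typically autovectorize:

```rust
// Sum of squares of even numbers, two ways.

fn sum_sq_evens_alloc(xs: &[i64]) -> i64 {
    // Collects into a temporary Vec: one heap allocation per call,
    // and two passes over the data.
    let evens: Vec<i64> = xs.iter().copied().filter(|x| x % 2 == 0).collect();
    evens.iter().map(|x| x * x).sum()
}

fn sum_sq_evens_iter(xs: &[i64]) -> i64 {
    // Pure iterator chain: no allocation, compiles to a single tight loop.
    xs.iter().copied().filter(|x| x % 2 == 0).map(|x| x * x).sum()
}

fn main() {
    let data: Vec<i64> = (1..=10).collect();
    // Both compute the same thing; only the second avoids the heap.
    assert_eq!(sum_sq_evens_alloc(&data), sum_sq_evens_iter(&data));
    println!("{}", sum_sq_evens_iter(&data)); // 4 + 16 + 36 + 64 + 100 = 220
}
```

Both are correct, but in my experience the models keep reaching for the first shape, and that's exactly the kind of thing that wrecks performance in hot loops.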
-
[email protected] replied to [email protected] last edited by
I think you’re severely underestimating the amount of white-collar work that is just boilerplate, and understating how well AI can quickly produce a workable first draft. Or maybe you just aren’t writing good prompts.