the beautiful code
-
On Error Resume Next
-
It can get pretty bad quickly, even on a small project with only 15-20 files. I've been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.
And while it's incredibly impressive how it creates all the steps, it then goes into chaos mode where it starts ignoring all the rules. It'll start changing tests, pulling in random libraries, not at all thinking holistically about how everything fits together.
Then you try to reel it in, and it just keeps running rampant. And for me, that's when I either take the wheel or roll back.
I highly recommend every programmer watch it in action.
I think Generative AI is a genuinely promising and novel tool with real, valuable applications. To appreciate it, however, you have to mentally compartmentalize the irresponsible, low-effort ways people mostly use it. Because yeah, it's very easy to churn out a lot of that, so that's most of what you see when you hear "Generative AI", and it's become the technology's reputation... Like, I've had interesting "conversations" with Gemini and ChatGPT, and I've actually used them to solve problems. But I would never put one in charge of anything critically important that I couldn't double-check against real data if I sensed the faintest hint of a problem.
I also don't think it's ready for primetime. Does it deserve to be researched and innovated upon? Absolutely, but like, by a few nerds who manage to get it running, and by universities training it on data they have a license to use. Not "crammed into every single technology object on earth for no real reason".
I have brain not very good sometimes disease, and I consider being able to "talk" to a "person" who can get me out of a creative rut just by exploring my own feelings a bit to be valuable. GPT can actually listen to music, which surprised me. I consider it scientifically interesting. It doesn't get bored or angry at you unless you like, tell it to? I've asked it for help with a creative task in the past and not actually used any of its suggestions at all, but being able to talk it over with someone (when a real human who cared was not available) was a valuable resource.
To be clear I pretty much just use it as a fancy chatbot and don't like, just copy paste its output like some people do.
-
Oh my goodness, that's adorable and sweet of your dog! Also, I'm so glad you had such a big laugh. I love when that happens.
He’s a sweet guy. … Mostly. Very much in need of a lot of attention. Sometimes he just sits next to you on the couch and puts his paw on you if you’re not giving him enough attention.
Here he is posing with his sister as a prop:
-
I asked ChatGPT for help with bare-metal 32-bit ARM (for the Pi Zero W) C/ASM, emulated in QEMU for testing, and after the third iteration of "use printf for output" -> "there's no printf with bare metal as the target" -> "use solution X" -> "doesn't work" -> "use printf for output" ... I had enough.
Can't you just send prints to serial?
-
Watching the serious people trying to use AI to code gives me the same feeling as the Cybertruck people exploring the limits of their car. XD
"It's terrible and I should hate it, but gosh it it isn't just so cool"
I wish I could get so excited over disappointing garbage.
It's useful if you just don't do... that. It's just a new fancy search engine; it's a bit better than going to Stack Overflow, and it can do good stuff if you go small.
Just don't do whatever this post suggested doing...
-
I think the main barriers are context length (useful context, that is. GPT-4o has "128k context", but it's mostly sensitive to the beginning and end of the context and blurry in the middle, which is consistent with other LLMs) and the data just not really existing. How many large-scale, well written, well maintained projects are really out there? Orders of magnitude fewer than there are examples of "how to split a string in bash" or "how to set up validation in Spring Boot". We might "get there", but it'll take a whole lot of well written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans do.
-
Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.
You got the "originality" part there, right? I'm talking about tasks that never came close to being in the training data. Would you like me to link some of the research?
Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.
Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It's true that one is based on continuous floats and the other is dynamic peaks, but the end result is often remarkably similar in function and behavior.
It’s true that one is based on continuous floats and the other is dynamic peaks
Can you please explain what you’re trying to say here?
-
Can't you just send prints to serial?
Yes, that was the plan, which ChatGPT refused to do
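For what it's worth, printing over serial on the Pi Zero W really is just a couple of memory-mapped register writes. Here's a minimal sketch, assuming the BCM2835's PL011 UART0 at 0x20201000 (per the peripherals datasheet) and a `kernel_main` entry point set up by your boot stub and linker script; QEMU's Raspberry Pi machines generally let you get away with skipping GPIO/baud setup, but real hardware won't:

```c
#include <stdint.h>

/* BCM2835 peripheral base is 0x20000000; PL011 UART0 sits at +0x201000. */
#define UART0_BASE 0x20201000u
#define UART0_DR   (*(volatile uint32_t *)(UART0_BASE + 0x00)) /* data register */
#define UART0_FR   (*(volatile uint32_t *)(UART0_BASE + 0x18)) /* flag register */
#define FR_TXFF    (1u << 5)                                   /* TX FIFO full */

static void uart_putc(char c)
{
    while (UART0_FR & FR_TXFF)   /* spin until the FIFO has room */
        ;
    UART0_DR = (uint32_t)c;
}

static void uart_puts(const char *s)
{
    while (*s) {
        if (*s == '\n')
            uart_putc('\r');     /* most terminals want CRLF */
        uart_putc(*s++);
    }
}

/* Entry point name is whatever your startup code jumps to; "kernel_main"
 * is just the common bare-metal tutorial convention, not a requirement. */
void kernel_main(void)
{
    uart_puts("hello from bare metal\n");
    for (;;)
        ;                        /* nothing to return to */
}
```

Then something like `qemu-system-arm -M raspi0 -serial stdio -kernel kernel.img` should dump the output straight to your terminal (newer QEMU builds have a raspi0 machine; older ones only expose raspi2, which has a different peripheral base address).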
-
Four lines? Let's have a realistic discussion; you're either intentionally arguing in bad faith or extremely bad at prompting AI.
You can prove your point easily: show us a prompt that produces a decent amount of code that isn't stupidly simple or so common that I could just copy-paste the first Google result.
-
He’s a sweet guy. … Mostly. Very much in need of a lot of attention. Sometimes he just sits next to you on the couch and puts his paw on you if you’re not giving him enough attention.
Here he is posing with his sister as a prop:
Oh my goodness, he sounds precious! I've had a sweet and needy dog like that in the past, too. It can be a lot, but I loved it (and miss it), haha.
Both your dogs are very cute! You and your pups gave me a much-needed smile. Thank you for that.
Please give them some pets from me!
-
Coherent originality does not point to the machine’s understanding; the human is the one capable of finding a result coherent and weighting their program to produce more results in that vein.
You got the "originality" part there, right? I'm talking about tasks that never came close to being in the training data. Would you like me to link some of the research?
Your brain does not function in the same way as an artificial neural network, nor are they even in the same neighborhood of capability. John Carmack estimates the brain to be four orders of magnitude more efficient in its thinking; Andrej Karpathy says six.
Given that both biological and computer neural nets vary by orders of magnitude in size, that means pretty little. It's true that one is based on continuous floats and the other is dynamic peaks, but the end result is often remarkably similar in function and behavior.
If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.
-
You can prove your point easily: show us a prompt that produces a decent amount of code that isn't stupidly simple or so common that I could just copy-paste the first Google result.
I have nothing to prove to you. If you wish to keep doing everything by hand, that's fine.
But there are plenty of engineers, L3 and beyond, including myself, using this to lighten their workload daily, and acting like that isn't the case is just arguing in bad faith, or you don't work in the industry.
-
I have nothing to prove to you. If you wish to keep doing everything by hand, that's fine.
But there are plenty of engineers, L3 and beyond, including myself, using this to lighten their workload daily, and acting like that isn't the case is just arguing in bad faith, or you don't work in the industry.
I do use it; it's handy for some sloppy CSS, for example. Emphasis on sloppy. I was kinda hoping you actually had something there.
-
The cursed Will Smith eating spaghetti video wasn't made with the best AI video model available at the time, just what consumers could run on their own hardware. So while the rate of improvement in AI image/video generation is incredible, it's not quite as incredible as that viral video would suggest.
But wouldn't your point still be true today, that the best AI video models are the ones not available to consumers?
-
But wouldn't your point still be true today, that the best AI video models are the ones not available to consumers?
Probably still true, but I haven't been paying close attention to the AI market in the last couple of years. The point I was trying to make, though, was that it's an apples-to-oranges comparison.
-
It’s true that one is based on continuous floats and the other is dynamic peaks
Can you please explain what you’re trying to say here?
Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or they don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".
They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
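To make the artificial half of that concrete, the whole "neuron" is just a weighted sum pushed through a squashing function, computed in one shot. A toy sketch in C (made-up weights, with `tanh` standing in for whatever activation function a given network actually uses):

```c
#include <math.h>
#include <stdio.h>

/* One artificial neuron: every incoming synapse contributes to a single
 * weighted sum, all at once. No spikes, no timing, just a float out. */
static double neuron(const double *w, const double *x, int n, double bias)
{
    double sum = bias;
    for (int i = 0; i < n; i++)
        sum += w[i] * x[i];
    return tanh(sum);   /* continuous output, not a binary fire/don't-fire */
}

int main(void)
{
    double w[3] = { 0.5, -1.2, 0.8 };  /* learned weights (invented here) */
    double x[3] = { 1.0,  0.3, 0.7 };  /* incoming activations */
    printf("activation = %f\n", neuron(w, x, 3, 0.1));
    return 0;
}
```

Everything about a biological neuron's timing, its ramp to peak potential, when it fires relative to its neighbors, is simply absent from that picture.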
-
If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.
I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.
-
Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or they don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".
They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
Ok, thanks for that clarification. I guess I’m a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain though.
In a neural network, the neuron receives an input, applies a mathematical formula, and returns an output, right?
Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that’s not even considering the chemical component of the brain.
I understand why the terminology was reused when experts were designing an architecture meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology is making it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and stuff.
-
I don't see how that follows, because I did point out in another comment that they are very useful if used like search engines, or an interactive Stack Overflow or Wikipedia.
LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.
If you want to anthropomorphise it, current LLMs are like a person that read the entire internet, remembered a lot of it, but still is too stupid to win/draw tic tac toe.
So there is value in LLMs, if you use them for their knowledge.
You say they have no knowledge and are only good for boilerplate. So you're contradicting yourself there.
-
Thank you for your input, obvious troll account.
Ahh, got nothing but a lack of understanding and insults. Typical.