the beautiful code
-
I have nothing to prove to you; if you wish to keep doing everything by hand, that's fine.
But there are plenty of engineers, L3 and beyond, including myself, using this to lighten their workload daily, and acting like that isn't the case is either arguing in bad faith or a sign you don't work in the industry.
I do use it; it's handy for some sloppy CSS, for example. Emphasis on sloppy. I was kinda hoping you actually had something there.
-
The cursed Will Smith eating spaghetti video wasn't made with the best AI video model available at the time, just with what consumers could run on their own hardware. So while the rate of improvement in AI image/video generation is incredible, it's not quite as incredible as that viral video would suggest.
But wouldn't your point still be true today: that the best AI video models are the ones not available to consumers?
-
But wouldn't your point still be true today: that the best AI video models are the ones not available to consumers?
It probably is still true, but I've not been paying close attention to the AI market in the last couple of years. The point I was trying to make, though, was that it's an apples-to-oranges comparison.
-
It’s true that one is based on continuous floats and the other is dynamic peaks
Can you please explain what you’re trying to say here?
Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or they don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".
They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
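For anyone picturing the artificial side of that comparison, here's a minimal sketch of what a single artificial neuron does, in plain Python with made-up numbers: the incoming activations are combined in one step as a weighted sum plus a bias and pushed through an activation function, and that single computation is the whole "firing".

```python
# Minimal sketch of one artificial neuron: a weighted sum of its inputs,
# plus a bias, squashed by an activation function. All values are plain
# floats and the output is computed in a single step -- no notion of time.

def relu(x: float) -> float:
    # A common activation function: negative inputs are clipped to zero.
    return max(0.0, x)

def neuron_output(inputs: list[float], weights: list[float], bias: float) -> float:
    # "Incoming synapse" activations are multiplied by their weights and summed.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    # The result can be any floating point number, not just "fired"/"not fired".
    return relu(weighted_sum + bias)

# Example: three incoming activations, three weights, one bias (all invented).
print(neuron_output([0.2, -1.5, 0.7], [0.8, 0.1, -0.3], bias=0.05))
```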
-
If you would like to link some abstracts you find in a DuckDuckGo search that’s fine.
I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.
-
Both have neurons with synapses linking them to other neurons. In the artificial case, synapse activation can be any floating point number, and outgoing synapses are calculated from incoming synapses all at once (there's no notion of time; it's not dynamic). Biological neurons are binary: they either fire or they don't. During a firing cycle they ramp up to a peak potential and then drop down in a predictable fashion. But it's dynamic; they can peak at any time, and downstream neurons can begin to fire "early".
They do seem to be equivalent in some way, although AFAIK it's unclear how at this point, and the exact activation function of each brain neuron is a bit mysterious.
Ok, thanks for that clarification. I guess I'm a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain, though.
In a neural network, the neuron receives an input, applies a mathematical function, and returns an output, right?
Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that's not even considering the chemical component of the brain.
I understand why terminology was reused when experts were designing an architecture that was meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology makes it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and stuff.
-
I don't see how that follows, because I did point out in another comment that they are very useful if used like search engines or an interactive Stack Overflow or Wikipedia.
LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.
If you want to anthropomorphise it, current LLMs are like a person who read the entire internet and remembered a lot of it, but is still too stupid to win or draw tic-tac-toe.
So there is value in LLMs, if you use them for their knowledge.
You say they have no knowledge and are only good for boilerplate. So you're contradicting yourself there.
-
Thank you for your input, obvious troll account.
Ahh, got nothing but a lack of understanding and insults. Typical.
-
On one line of code, you say?
*search & replaces all line breaks with spaces*
Fired for not hitting the line quota that even junior devs manage to hit.
-
Ok, thanks for that clarification. I guess I'm a bit confused as to why a comparison is being drawn between neurons in a neural network and neurons in a biological brain, though.
In a neural network, the neuron receives an input, applies a mathematical function, and returns an output, right?
Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that's not even considering the chemical component of the brain.
I understand why terminology was reused when experts were designing an architecture that was meant to replicate the architecture of the brain. Unfortunately, I feel like that reuse of terminology makes it harder for laypeople to understand what a neural network is and what it is not, now that those networks are part of the zeitgeist thanks to the explosion of LLMs and stuff.
Agreed. They started out trying to make artificial neurons, but then made something totally different. The fact we see the same biases and failure mechanisms emerging in them, now that we're measuring them at scale, is actually a huge surprise. It probably says something deep and fundamental about the geometry of randomly chosen high-dimensional function spaces, regardless of how they're implemented.
Like you said, we have no understanding of what exactly a neuron in the brain is actually doing when it fires, and that's not even considering the chemical component of the brain.
I wouldn't say none. What the axons, dendrites and synapses are doing is very well understood down to the molecular level - so that's the input and output part. I'm aware knowledge of the biological equivalents of the other stuff (ReLU function and backpropagation) is incomplete. I do assume some things are clear even there, although you'd have to ask a neurologist for details.
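For readers who haven't run into the "other stuff" being referred to, here's a rough, purely illustrative sketch (not anyone's actual training code) of the ReLU function and a single backpropagation-style weight update for one neuron with a squared-error loss; all the numbers are invented:

```python
# Illustrative only: one neuron, one training example, squared-error loss.
# Forward pass, then a single gradient-descent weight update ("backprop"
# for something this tiny is just the chain rule applied once).

def relu(x: float) -> float:
    return max(0.0, x)

def relu_grad(x: float) -> float:
    # Derivative of ReLU: 1 for positive inputs, 0 otherwise.
    return 1.0 if x > 0 else 0.0

inputs = [0.5, -0.2, 0.9]      # incoming activations (made up)
weights = [0.1, 0.4, -0.3]     # current synapse weights (made up)
bias = 0.0
target = 1.0                   # what we wanted the neuron to output
learning_rate = 0.1

# Forward pass
pre_activation = sum(i * w for i, w in zip(inputs, weights)) + bias
output = relu(pre_activation)

# Backward pass: chain rule from loss = 0.5 * (output - target)**2
# back to each weight, then a small step against the gradient.
d_loss_d_output = output - target
d_output_d_pre = relu_grad(pre_activation)
for idx, x in enumerate(inputs):
    d_loss_d_weight = d_loss_d_output * d_output_d_pre * x
    weights[idx] -= learning_rate * d_loss_d_weight

print(weights)
```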
-
It can become pretty bad quickly, even with a small project of only 15-20 files. I've been using the Cursor IDE, building out flow charts & tests manually, and just seeing where it goes.
And while it's incredibly impressive how it creates all the steps, it then goes into chaos mode where it starts ignoring all the rules. It'll start changing tests, start pulling in random libraries, not at all thinking holistically about how everything fits together.
Then you try to reel it in, and it continues to go rampant. And for me, that's when I either take the wheel or roll back.
I highly recommend every programmer watch it in action.
Is there a chance that's right around the time the code no longer fits into the LLM's input window of tokens? The basic technology doesn't actually have a long-term memory of any kind (at least outside of the training phase).
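One way to sanity-check that hunch is to estimate how many tokens a project occupies versus the model's context window. This is only a sketch: the chars-per-token ratio is a crude rule of thumb, and the window size and file suffixes below are placeholders, not any specific model's or project's values.

```python
# Rough sketch: estimate whether a project still fits in a model's context
# window. The 4-chars-per-token ratio is a crude heuristic, and the window
# size below is a placeholder, not any specific model's limit.
from pathlib import Path

CHARS_PER_TOKEN = 4              # rough rule of thumb; real tokenizers vary
CONTEXT_WINDOW_TOKENS = 128_000  # placeholder value

def estimate_project_tokens(root: str, suffixes=(".py", ".ts", ".css")) -> int:
    total_chars = 0
    for path in Path(root).rglob("*"):
        if path.is_file() and path.suffix in suffixes:
            total_chars += len(path.read_text(errors="ignore"))
    return total_chars // CHARS_PER_TOKEN

tokens = estimate_project_tokens(".")
print(f"~{tokens} tokens; fits in window: {tokens < CONTEXT_WINDOW_TOKENS}")
```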
-
You say they have no knowledge and are only good for boilerplate. So you're contradicting yourself there.
I didn't say they have no knowledge, quite the opposite. Here's a quote from the comment you answered:
LLMs are extremely knowledgeable (as in they "know" a lot) but are completely dumb.
There is a subtle difference between intelligent and knowledgeable. LLMs know a lot, in the sense that they can remember a lot of things, but they are dumb in the sense that they are completely unable to draw conclusions and put that knowledge into action by any means other than spitting back out what they once learned.
That's why LLMs can tell you all about the game theory of tic-tac-toe but can't draw or win that game consistently.
So knowing a lot and still being dumb is not a contradiction.
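If you want to try the tic-tac-toe point yourself, a perfect-play reference takes only a few lines of minimax. Here's a bare-bones sketch you could use to compare a model's moves against optimal play; the board is just a list of 9 cells holding "X", "O", or None:

```python
# Bare-bones minimax for tic-tac-toe, usable as a reference for checking
# whether a model's moves are ever worse than optimal play.
WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # Returns (score, move) from `player`'s point of view: +1 win, 0 draw, -1 loss.
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    free = [i for i, cell in enumerate(board) if cell is None]
    if not free:
        return 0, None
    best_score, best_move = -2, None
    opponent = "O" if player == "X" else "X"
    for move in free:
        board[move] = player
        score, _ = minimax(board, opponent)
        board[move] = None
        score = -score  # the opponent's best result is our worst
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move

# Perfect play from both sides is a draw, so this prints a score of 0
# plus one of the optimal opening moves.
print(minimax([None] * 9, "X"))
```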
-
Is there a chance that's right around the time the code no longer fits into the LLM's input window of tokens? The basic technology doesn't actually have a long-term memory of any kind (at least outside of the training phase).
Was my first thought as well. These things really need to find a way to store a larger context without ballooning past the VRAM limit.
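For a sense of why longer context balloons memory: during generation the model keeps key/value cache entries for every layer and every token seen so far, so the cache grows linearly with context length. A back-of-envelope sketch, with model dimensions invented purely for illustration:

```python
# Back-of-envelope KV-cache size: 2 (keys and values) * layers * tokens
# * hidden size * bytes per element. The model shape below is made up
# purely for illustration, and real models vary (e.g. grouped-query
# attention shrinks the cache considerably).
def kv_cache_gib(layers: int, hidden_size: int, context_tokens: int,
                 bytes_per_element: int = 2) -> float:  # 2 bytes = fp16
    total_bytes = 2 * layers * context_tokens * hidden_size * bytes_per_element
    return total_bytes / 2**30

# Hypothetical 32-layer model with a 4096-wide hidden state:
for tokens in (8_000, 32_000, 128_000):
    print(tokens, "tokens ->", round(kv_cache_gib(32, 4096, tokens), 1), "GiB")
```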
-
Was my first thought as well. These things really need to find a way to store a larger context without ballooning past the VRAM limit.
The thing being, it's kind of an inflexible blackbox technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common sense stuff that people were chasing for decades inside a computer, but it's ironically still beyond our reach to fully use.
From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we're back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.
-
The thing being, it's kind of an inflexible blackbox technology, and that's easier said than done. In one fell swoop we've gotten all that soft, fuzzy common sense stuff that people were chasing for decades inside a computer, but it's ironically still beyond our reach to fully use.
From here, I either expect that steady progress will be made in finding more clever and constrained ways of using the raw neural net output, or we're back to an AI winter. I suppose it's possible a new architecture and/or training scheme will come along, but it doesn't seem imminent.
I feel like, the way investments are currently being made, coming up with something new is almost impossible. Most of the hardware is designed with LLMs in mind.
-
I actually was going to link the same one I always do, which I think I heard about through a blog or talk. If that's not good enough, it's easy to devise your own test and put it to an LLM. The way you phrased that makes it sound like you're more interested in ignoring any empirical evidence, though.
That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.
-
That’s unreal. No, you cannot come up with your own scientific test to determine a language model’s capacity for understanding. You don’t even have access to the “thinking” side of the LLM.
You can devise a task it couldn't have seen in the training data, I mean. Building a comprehensive argument out of such tasks requires a lot more work and time.
You don’t even have access to the “thinking” side of the LLM.
Obviously, that goes for the natural intelligences too, so it's not really a fair thing to require.