I wonder if this was made by AI or a shit programmer
-
This post did not contain any content.
-
This post did not contain any content.
AI just enables the shit programmers to create a greater volume of shit
-
This post did not contain any content.
I wonder if their data is poisoned by below-average devs. I mean, if your test subjects are at or below average and they lost 20% efficiency, imagine what it does to a good dev.
-
This post did not contain any content.
Believe it or not, a lot of hacking is more like this than you think.
-
This post did not contain any content.
I remember when a senior developer where I worked got tired of connecting to the servers to check their configuration, so they added a public-facing REST endpoint that just dumped the entire active config, including credentials and secrets.
That was a smaller slip-up than exposing a database like that (they just forgot that the config contained secrets), but it's still funny that it happened.
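A minimal sketch of how an endpoint like that comes about (hypothetical Express code; the route and config keys are assumptions, not the actual system):

```typescript
import express from "express";

// A running config typically mixes harmless settings with secrets.
const config = {
  port: 8080,
  logLevel: "info",
  dbPassword: process.env.DB_PASSWORD, // secret
  apiKey: process.env.API_KEY,         // secret
};

const app = express();

// Convenient for checking a server's configuration from anywhere...
// ...but it serializes *everything*, credentials included.
app.get("/debug/config", (_req, res) => {
  res.json(config);
});

app.listen(config.port);
```

The safer version would allowlist the handful of non-sensitive keys instead of dumping the whole object, and would sit behind authentication.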
-
AI just enables the shit programmers to create a greater volume of shit
I'll tape this to my office door.
-
I wonder if their data is poisoned by below-average devs. I mean, if your test subjects are at or below average and they lost 20% efficiency, imagine what it does to a good dev.
Not a below-average dev necessarily, but when posting code examples on the internet, people often try to get a point across. Like: how do I solve X? Here is code that solves X perfectly; the rest of the code is total crap, ignore that and focus on the X part. Because it's just an example, it doesn't really matter. But when it's used to train an LLM, it's all just code. It doesn't know which parts are important and which aren't.
And this becomes worse when small bits of code are included in things like tutorials. Those get copy-pasted all over the place: forums, social media, Stack Overflow, etc. So they're weighted way more heavily. And the part where the tutorial said "Warning, this code is really bad and insecure, it's just an example to show this one thing" gets lost in the shuffle.
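To make that concrete, here's a hypothetical tutorial-style snippet of the kind that ends up in training data (the schema and names are invented): the lesson being taught is "how to run a query", and everything around it is exactly the careless demo code the scraped warning no longer covers.

```typescript
import sqlite3 from "sqlite3";

const db = new sqlite3.Database(":memory:");

function getUser(name: string): void {
  // String interpolation into SQL: fine for a throwaway demo, a textbook
  // SQL injection once it's copy-pasted into real code.
  db.all(`SELECT * FROM users WHERE name = '${name}'`, (err, rows) => {
    if (err) throw err; // "just an example", so no real error handling
    console.log(rows);
  });
}

db.serialize(() => {
  db.run("CREATE TABLE users (name TEXT)");
  db.run("INSERT INTO users VALUES ('alice'), ('bob')");
  getUser("nobody' OR '1'='1"); // injection: matches every row in the table
});
```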
Same thing when a commonly used pattern in a framework gets replaced because the framework now does a little more itself, so the pattern isn't needed anymore. The LLM will just continue with the old pattern, even though there's often a good reason it got replaced (security issues, for example). And if the new and old versions aren't compatible with each other, you're in for a world of hurt trying to use an LLM.
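One real instance of that pattern churn (my example, not the poster's): Node.js deprecated the old Buffer constructor for exactly this kind of security reason, yet the old pattern lingers in years of scraped code.

```typescript
// In older Node versions, new Buffer(n) returned *uninitialized* memory that
// could leak previous heap contents, and new Buffer(x) silently changed
// behavior depending on whether x was a number or a string. Both led to
// real vulnerabilities, hence the safe replacements below.
const size = 16;

const legacy = new Buffer(size);     // deprecated pattern an LLM may still emit
const zeroed = Buffer.alloc(size);   // safe replacement: zero-filled
const copied = Buffer.from("hello"); // safe replacement: copies the input

console.log(legacy.length, zeroed, copied.toString());
```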
And now with AI slop flooding all of these places where they used to get their data, it just becomes worse and worse.
These are just some of the reasons why using an LLM for coding is probably a really bad idea.
-
Believe it or not, a lot of hacking is more like this than you think.
Social engineering is probably 95% of modern attack vectors. And that's not even unexpected; some highly regarded computer scientists and security researchers concluded as much more than a decade ago.
-
Social engineering is probably 95% of modern attack vectors. And that's not even unexpected; some highly regarded computer scientists and security researchers concluded as much more than a decade ago.
When the technical side reaches a certain level of security, the humans become the weakest link.
-
Believe it or not, a lot of hacking is more like this than you think.
Many years ago, I discovered that my then-employer's "home built" e-commerce system had all user and admin passwords displayed in plaintext at home/admin/passwords.
When I brought this to the attention of leadership, they called the “developer” in and he said “oh, well, that’s IP locked, so no one on the web can access it!” When I pulled it up on my phone, he insisted my phone was on the work WiFi, despite it being clearly verifiable that was not the case. (The same work WiFi that had an open public connection, which is the one my phone would have been on, if it were on it…)
He did fix that, but many other issues remained. Eventually a new COO hired someone competent as his 'backup', replaced our website, and finally suggested he pursue other employment opportunities before he could no longer do so voluntarily. (There was concern he might sabotage things.)
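For what it's worth, an "IP lock" in setups like that often amounts to something like this hypothetical sketch (Express; the route, header handling, and addresses are all assumptions for illustration), which keeps nobody out:

```typescript
import express from "express";

const app = express();
const OFFICE_IP = "203.0.113.10"; // stand-in for the office's public address

app.get("/admin/passwords", (req, res) => {
  // X-Forwarded-For is client-controlled unless a trusted proxy overwrites it,
  // so any visitor can spoof their way past a check like this.
  const ip =
    (req.headers["x-forwarded-for"] as string | undefined) ??
    req.socket.remoteAddress;
  if (ip !== OFFICE_IP) {
    res.status(403).send("Forbidden");
    return;
  }
  // And even when the "lock" holds, the page itself is the real bug:
  res.send("alice: hunter2<br>bob: password1"); // plaintext passwords
});

app.listen(3000);
```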
-
This post did not contain any content.
Even the best models fine-tuned for coding were still trained on both good and bad examples of programming from humans. And since it's not AGI but probability generating the code, you're going to get crap programming logic depending on how often such things were used and suggested by humans to other humans. Googling for an answer on how to code something pulls up all sorts of answers from many sources, but reading through them, many are terrible. An LLM doesn't know that; it just knows that humans liked some answers better than others, so GIGO.
-
Even the best models fine-tuned for coding were still trained on both good and bad examples of programming from humans. And since it's not AGI but probability generating the code, you're going to get crap programming logic depending on how often such things were used and suggested by humans to other humans. Googling for an answer on how to code something pulls up all sorts of answers from many sources, but reading through them, many are terrible. An LLM doesn't know that; it just knows that humans liked some answers better than others, so GIGO.
Gorilla In Gorilla Out?
-
Gorilla In Gorilla Out?
Fantastic for building BaaS apps
-
Giraffe In Giraffe Out
-
Social engineering is probably 95% of modern attack vectors. And that's not even unexpected; some highly regarded computer scientists and security researchers concluded as much more than a decade ago.
I work in security and I kinda doubt this. There are plenty of issues just like the one outlined here that would be much easier to exploit than social engineering. Social engineering costs a lot more than a `GET /secrets.json`.
There is good reason to be concerned about both, but 95% sounds way off and makes it sound like companies should allocate significantly more time to defending against social engineering, when they should first make sure social engineering is the easiest way to exploit their system. I can tell you from about a decade of experience that it typically isn't.
-
This post did not contain any content.
Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production. Especially when related to sensitive information. Where was the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system or an insufficient system to test features and define security requirements.
-
Fantastic for building BaaS apps
Bullshit as a Service?
-
Giraffe In Giraffe Out
Gorilla In Giraffe Out
That would be the real trick.
-
Bullshit as a Service?
Bananas as a Service