I wonder if this was made by AI or a shit programmer
-
Believe it or not, a lot of hacking is more like this than you think.
Many years ago, I discovered that my then-employer’s “home built” e-commerce system had all user and admin passwords displayed in plaintext at home/admin/passwords.
When I brought this to the attention of leadership, they called the “developer” in and he said “oh, well, that’s IP locked, so no one on the web can access it!” When I pulled it up on my phone, he insisted my phone was on the work WiFi, despite it being clearly verifiable that was not the case. (The same work WiFi that had an open public connection, which is the one my phone would have been on, if it were on it…)
He did fix that, but many other issues remained. Eventually a new COO hired someone competent as his ‘backup’, replaced our website and finally suggested he pursue other employment opportunities before he could no longer voluntarily pursue them. (There was concern he might sabotage.)
-
This post did not contain any content.
Even the best models fine-tuned for coding still have training that was based on both good and bad examples of programming from humans. And since it's not AGI but using probability to generate the code, you're going to get crap programming logic depending on how often such things were used and suggested by humans to other humans. Googling for an answer on how to code something pulls up all sorts of answers from many sources, but reading through them, many are terrible. An LLM doesn't know that; it just knows that humans liked some answers better than others, so GIGO.
-
Even the best models fine-tuned for coding still have training that was based on both good and bad examples of programming from humans. And since it's not AGI but using probability to generate the code, you're going to get crap programming logic depending on how often such things were used and suggested by humans to other humans. Googling for an answer on how to code something pulls up all sorts of answers from many sources, but reading through them, many are terrible. An LLM doesn't know that; it just knows that humans liked some answers better than others, so GIGO.
Gorilla In Gorilla Out?
-
Gorilla In Gorilla Out?
Fantastic for building BaaS apps
-
Giraffe In Giraffe Out
-
Social engineering is probably 95% of modern attack vectors. And that's not even unexpected: some highly regarded computer scientists and security researchers concluded this more than a decade ago.
I work in security and I kinda doubt this. There are plenty of issues just like what is outlined here that would be much easier to exploit than social engineering. Social engineering costs a lot more than GET /secrets.json.
There is good reason to be concerned about both, but 95% sounds way off and makes it sound like companies should allocate significantly more time to defend against social engineering, when they should first try to ensure social engineering is the easiest way to exploit their system. I can tell you from about a decade of experience that it typically isn't.
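To put numbers on "costs a lot more": a probe like that is a few lines of script and a few seconds of runtime. A minimal sketch, assuming the requests library and entirely made-up target and paths:

```python
# Minimal sketch: check whether a site exposes commonly leaked files.
# The target URL and paths are hypothetical examples.
import requests

TARGET = "https://example.com"  # placeholder target
COMMON_LEAKS = ["/secrets.json", "/.env", "/config.json", "/.git/config"]

for path in COMMON_LEAKS:
    resp = requests.get(TARGET + path, timeout=5)
    if resp.status_code == 200:
        print(f"Exposed: {TARGET}{path} ({len(resp.content)} bytes)")
```

Compare that to the hours of research and rapport-building a convincing social engineering attempt takes.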
-
This post did not contain any content.
Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production. Especially when related to sensitive information. Where was the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system or an insufficient system to test features and define security requirements.
-
Fantastic for building BaaS apps
Bullshit as a Service?
-
Giraffe In Giraffe Out
Gorilla In Giraffe Out
That would be the real trick.
-
Bullshit as a Service?
Bananas as a Service
-
This post did not contain any content.
What was the BASE_URL here? I’m guessing that’s like a profile page or something?
So then you still first have to get a URL to each profile? Or is this like a feed URL?
-
What was the BASE_URL here? I’m guessing that’s like a profile page or something?
So then you still first have to get a URL to each profile? Or is this like a feed URL?
It's a public firebase bucket
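For context on why that's so bad: Firebase Storage buckets sit behind a public REST endpoint, so if the security rules allow unauthenticated reads, anyone can enumerate and download the contents. A minimal sketch, with a made-up bucket name:

```python
# Minimal sketch: list objects in a publicly readable Firebase Storage bucket
# via its REST endpoint. The bucket name is a placeholder.
import requests

BUCKET = "some-app.appspot.com"  # hypothetical bucket name
resp = requests.get(f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o", timeout=10)

if resp.ok:
    for item in resp.json().get("items", []):
        # Each object name can be fetched directly by appending ?alt=media
        print(item["name"])
else:
    print("Listing denied:", resp.status_code)
```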
-
It's a public firebase bucket
Oh Jesus
-
I work in security and I kinda doubt this. There are plenty of issues just like what is outlined here that would be much easier to exploit than social engineering. Social engineering costs a lot more than GET /secrets.json.
There is good reason to be concerned about both, but 95% sounds way off and makes it sound like companies should allocate significantly more time to defend against social engineering, when they should first try to ensure social engineering is the easiest way to exploit their system. I can tell you from about a decade of experience that it typically isn't.
https://www.infosecinstitute.com/resources/security-awareness/human-error-responsible-data-breaches/
You're right. It's 74%.
https://www.cybersecuritydive.com/news/clorox-380-million-suit-cognizant-cyberattack/753837/
It's way easier to convince someone that you are just a lost user who needs access than it is to try to probe an organization's IT security from the outside.
This is only going to get worse with the ability to replicate others' voices and images. People already consistently fall for text message and email social engineering. Now someone just needs to build a model off a CSO doing interviews for a few hours and then call their phone explaining there has been a breach. Sure, 80% of good tech professionals won't fall for it, but the other 20% who just got hired out of their league and are fearing for their jobs will immediately do what they are told, especially if the breach is elaborate enough to convince them it's an internal security thing.
-
Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production. Especially when related to sensitive information. Where was the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system or an insufficient system to test features and define security requirements.
I found a bad programmer!
-
Believe it or not, a lot of hacking is more like this than you think.
Shodan lists hundreds of thousands of publicly accessible security cameras.
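If you've never tried it, the Shodan API makes that trivially easy to see for yourself. A rough sketch using the official Python client (pip install shodan); the API key and query string are placeholders:

```python
# Rough sketch: count and sample publicly indexed webcam-style hosts on Shodan.
# Requires your own API key; the query is just one common example.
import shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key
results = api.search("webcamxp")

print("Total results:", results["total"])
for match in results["matches"][:5]:
    print(match["ip_str"], match["port"])
```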
-
I remember when a senior developer where I worked was tired of connecting to the servers to check their configuration, so they added a public-facing REST endpoint that just dumped the entire active config, including credentials and secrets.
That was a smaller slip-up than exposing a database like that (he just forgot that the config contained secrets), but still funny that it happened.
That's not a "senior developer." That's a developer that has just been around for too long.
Secrets shouldn't be in configurations, and developers shouldn't be mucking around in production, nor with production data.
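If a config-dump endpoint really has to exist for debugging, the usual mitigation is to pull secrets from the environment and redact anything secret-looking before serialising. A rough sketch with made-up config keys:

```python
# Rough sketch: dump a config for debugging while redacting secret-looking keys.
# The config contents and key names are made-up examples.
import os
import re

SECRET_KEY_PATTERN = re.compile(r"password|secret|token|api_key", re.IGNORECASE)

config = {
    "db_host": "db.internal",
    "db_password": os.environ.get("DB_PASSWORD", ""),  # secret lives in the environment
    "feature_flags": {"new_checkout": True},
}

def redact(value):
    """Recursively replace values of secret-looking keys with a placeholder."""
    if isinstance(value, dict):
        return {
            k: "***REDACTED***" if SECRET_KEY_PATTERN.search(k) else redact(v)
            for k, v in value.items()
        }
    return value

print(redact(config))  # {'db_host': 'db.internal', 'db_password': '***REDACTED***', ...}
```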
-
This post did not contain any content.
This reminds me of how I showed a friend and her company how to get databases from the BLS, and it's basically all just text files with URLs. "What API did you call? How did you scrape the data?"
Nah man, it's just... there. As government data should be. They called it a hack.
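For anyone curious, the BLS time-series data really is flat, tab-delimited text under download.bls.gov. A minimal sketch; the exact file path and column layout are one example from memory, and BLS asks that scripts identify themselves with a contact address in the User-Agent:

```python
# Minimal sketch: download one of the BLS flat files (CPI-U current data).
# The path is an example; BLS wants a User-Agent with contact info.
import requests

URL = "https://download.bls.gov/pub/time.series/cu/cu.data.0.Current"
headers = {"User-Agent": "example-script ([email protected])"}  # placeholder contact

resp = requests.get(URL, headers=headers, timeout=30)
resp.raise_for_status()

# Tab-delimited columns: series_id, year, period, value, footnote_codes
for line in resp.text.splitlines()[:5]:
    print(line)
```

No API keys, no scraping, just files.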
-
Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production. Especially when related to sensitive information. Where was the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system or an insufficient system to test features and define security requirements.
I can tell you exactly what happened. "Hey Claude, I need to configure and set up a DB with Firebase to store images from our application," then they promptly hit shift+tab and went off to browse Reddit.
Nothing was tested. Nothing was verified. They let the AI do its thing, checked in on it after an hour or so, and once it was done it was add all, commit -m "done", push origin master. AI doesn't implement security stuff. There was zero security here.
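Even one unauthenticated smoke test in CI would have caught it: hit the storage and database endpoints with no credentials and fail the build if anything comes back. A hypothetical pytest sketch; the bucket, project, and collection names are placeholders:

```python
# Hypothetical smoke tests: fail if Firebase endpoints answer unauthenticated reads.
# Bucket, project, and collection names are placeholders.
import requests

BUCKET = "some-app.appspot.com"
PROJECT = "some-app"

def test_storage_listing_requires_auth():
    resp = requests.get(
        f"https://firebasestorage.googleapis.com/v0/b/{BUCKET}/o", timeout=10
    )
    assert resp.status_code in (401, 403), "Storage bucket is publicly listable"

def test_firestore_read_requires_auth():
    url = (
        f"https://firestore.googleapis.com/v1/projects/{PROJECT}"
        "/databases/(default)/documents/users"
    )
    resp = requests.get(url, timeout=10)
    assert resp.status_code in (401, 403), "Firestore collection is publicly readable"
```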