I wonder if this was made by AI or a shit programmer
-
Bananas as a Service
bananas in pyjamas
-
Does anyone have a source for this?
The original article is paywalled (registration-walled, really); this summary is not.
404 Media reported that 4chan users claimed to be sharing personal data and selfies from Tea after discovering an exposed database.
-
What was the BASE_URL here? I'm guessing that's like a profile page or something?
So then you still first have to get a URL to each profile? Or is this like a feed URL?
Possibly from the decompiled APK. 404 Media reported that they found the same URL as the posted one in the APK (archive link).
-
who'd have thought that javascript and client side programming was incredibly susceptible to security flaws and deeply unsafe
-
who'd have thought that javascript and client side programming was incredibly susceptible to security flaws and deeply unsafe
As much as I dislike JavaScript, it isn't responsible for this. The person (or AI) and their stupidity is.
-
whether it's telling the truth
"whether the output is correct or a mishmash"
"Truth" implies understanding that these don't have, and because of the underlying method the models use to generate plausible-looking responses based on training data, there is no "truth" or "lying" because they don't actually "know" any of it.
I know this comes off probably as super pedantic, and it definitely is at least a little pedantic, but the anthropomorphism shown towards these things is half the reason they're trusted.
That and how much ChatGPT flatters people.
Yeah, it has no notion of being truthful. But we do, so I was bringing in a human perspective there. We know what it says may be true or false, and it's natural for us to call the former "telling the truth", but as you say we need to be careful not to impute to the LLM any intention to tell the truth, any awareness of telling the truth, or any intention or awareness at all. All it's doing is math that spits out words according to patterns in the training material.
-
who'd have thought that javascript and client side programming was incredibly susceptible to security flaws and deeply unsafe
who'd have thought that being a shitty programmer, rather than javascript, was what made things incredibly susceptible to security flaws and deeply unsafe
-
Disabling directory indexing and using UUIDs for the filenames would make the directory effectively inviolable even if the address were publicly available.
Sounds like a good case for brute forcing the filenames. Just do the proper thing and don't leave your cloud storage publicly accessible.
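To illustrate the UUID point: a version-4 UUID carries 122 random bits, so guessing a valid filename by brute force is computationally infeasible, unlike guessing sequential or name-based keys. This is a minimal sketch of the idea, not the app's actual storage code; the function name is made up for illustration:

```python
import uuid

def storage_key(extension: str) -> str:
    """Generate an unguessable object name for an uploaded file.

    uuid4() draws 122 random bits, so even with the bucket URL known,
    enumerating valid names is infeasible (~5.3e36 possibilities).
    """
    return f"{uuid.uuid4()}{extension}"

key = storage_key(".jpg")
print(key)
```

This only works alongside disabled directory listing, of course; a random name is useless if the server happily enumerates the bucket's contents for anyone who asks.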
-
Sounds like a good time
-
Social engineering is probably 95% of modern attack vectors. And that's not even unexpected, some highly regarded computer scientists and security researchers concluded this more than a decade ago.
The percentage is closer to 75% than 95%.
-
Disabling directory indexing and using UUIDs for the filenames would make the directory effectively inviolable even if the address were publicly available.
Security through obscurity never works.
-
Believe it or not, a lot of hacking is more like this than you'd think.
If I was a hacker, I would just get a job as a night cleaning person at corporate office buildings. And then just help myself to the fucking post-it notes with usernames and passwords on them.
-
Yeah, once you get the LLM's response you still have to go to the documentation to check whether it's telling the truth and the APIs it recommends are current. You're no better off than if you did an internet search and tried to figure out who's giving good advice, or just fumbled your own way through the docs in the first place.
You're no better off than if you did an internet search and tried to figure out who's giving good advice, or just fumbled your own way through the docs in the first place.
These have their own problems, IME. Often the documentation (if it exists) won't tell you how to do something, or it's really buried, or inaccurate. Sometimes the person posting StackOverflow answers didn't actually try running their code, and it doesn't run without errors. There are a lot of situations where an LLM will somehow give you better answers than these options. It's inconsistent, and the reverse is true too, but the most efficient approach is to use all of these options situationally and as backups to each other.
-
This reminds me of how I showed a friend and her company how to get databases from BLS, and it's basically all just text files with URLs. "What API did you call? How did you scrape the data?"
Nah man, it's just... there. As government data should be. They called it a hack.
ah yes, the forbidden curl hack
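The "hack" really is just an HTTP GET: many BLS datasets are tab-separated flat files at stable URLs, no API key or scraping required. A rough sketch, assuming a generic tab-separated file; the sample data and function names here are illustrative, not a real BLS endpoint or schema:

```python
import urllib.request

def fetch_text(url: str) -> str:
    """Download a public flat file -- this is the entire 'hack'."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

def parse_tsv(text: str) -> list[list[str]]:
    """Split a tab-separated flat file into rows of fields."""
    return [line.split("\t") for line in text.splitlines() if line.strip()]

# Demonstrated on inline sample data, since the real file URLs vary by series:
sample = "series_id\tyear\tvalue\nCUUR0000SA0\t2024\t310.3"
rows = parse_tsv(sample)
```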
-
who'd have thought that being a shitty programmer, rather than javascript, was what made things incredibly susceptible to security flaws and deeply unsafe
No, it must be JavaScript that is the problem
principal_skinner.jpg.exe
-
No, it must be JavaScript that is the problem
principal_skinner.jpg.exe
Microsoft Defender identified malware in this executable.
-
Peak Vibe Coding results.
while True:
Jesus Christ
-
If I was a hacker, I would just get a job as a night cleaning person at corporate office buildings. And then just help myself to the fucking post-it notes with usernames and passwords on them.
-
As much as I dislike JavaScript, it isn't responsible for this. The person (or AI) and their stupidity is.
When I tried making a website with the Gemini CLI, it deadass used string interpolation for SQL queries, so anything is possible.
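For anyone unfamiliar with why string interpolation in SQL is so bad: attacker-controlled input becomes part of the query itself, while a parameterized query treats it as an inert value. A minimal sketch using Python's built-in sqlite3 (the table and input are made up for the demo):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

user_input = "nobody' OR '1'='1"

# Unsafe: the input rewrites the WHERE clause and matches every row.
unsafe = f"SELECT * FROM users WHERE name = '{user_input}'"
rows_unsafe = conn.execute(unsafe).fetchall()

# Safe: the ? placeholder binds the input as a literal value, not SQL.
rows_safe = conn.execute(
    "SELECT * FROM users WHERE name = ?", (user_input,)
).fetchall()
```

`rows_unsafe` comes back with the whole table; `rows_safe` comes back empty, because no user is literally named `nobody' OR '1'='1`.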
-
while True:
Jesus Christ
You know that's not the Tea code, but the downloader, right?