agnos.is Forums

I wonder if this was made by AI or a shit programmer

Programmer Humor · 170 posts, 93 posters
• [email protected] wrote (#17), replying to [email protected]:

  > This post did not contain any content.

  Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production, especially when sensitive information is involved. Where were the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system, or an insufficient one, for testing features and defining security requirements.
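
  As a concrete illustration of the kind of guardrail being described, here is a minimal sketch (the bucket URL is hypothetical, not taken from the post) of a CI smoke test that fails the pipeline if a Firebase Storage bucket still answers unauthenticated listing requests:

      # Sketch only: "example-app.appspot.com" is a made-up bucket name.
      import requests

      PUBLIC_LIST_URL = "https://firebasestorage.googleapis.com/v0/b/example-app.appspot.com/o"

      def test_bucket_rejects_anonymous_listing():
          # Locked-down storage rules should answer an unauthenticated listing
          # request with 401/403, never 200 plus a JSON object listing.
          resp = requests.get(PUBLIC_LIST_URL, timeout=10)
          assert resp.status_code in (401, 403), (
              f"bucket is listable without auth (HTTP {resp.status_code})"
          )

  Run under pytest as part of the pipeline, a check like this turns the "dumb mistake" into a failed build instead of a breach.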

• [email protected] wrote (#18), replying to [email protected]:

  > Fantastic for building BaaS apps

  Bullshit as a Service?

• [email protected] wrote (#19), replying to [email protected]:

  > Giraffe In Giraffe Out

  Gorilla In Giraffe Out

  That would be the real trick.

        • J [email protected]

          Bullshit as a Service?

          rayquetzalcoatl@lemmy.worldR This user is from outside of this forum
          rayquetzalcoatl@lemmy.worldR This user is from outside of this forum
          [email protected]
          wrote on last edited by
          #20

          Bananas as a Service 🙂

• [email protected] wrote (#21), replying to [email protected]:

  > This post did not contain any content.

  What was the BASE_URL here? I’m guessing that’s like a profile page or something?

  So then you still first have to get a URL to each profile? Or is this like a feed URL?

            • J [email protected]

              What was the BASE_URL here? I’m guessing that’s like a profile page or something?

              So then you still first have to get a URL to each profile? Or is this like a feed URL?

              lena@gregtech.euL This user is from outside of this forum
              lena@gregtech.euL This user is from outside of this forum
              [email protected]
              wrote on last edited by
              #22

              It's a public firebase bucket
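
  For context on why that alone is the whole story: if the storage rules allow public reads, enumerating and downloading the bucket takes nothing but unauthenticated GET requests. A minimal sketch, assuming a hypothetical bucket name:

      # Sketch only: "example-app.appspot.com" is a made-up bucket name.
      # No API key or login is needed when the rules allow public reads.
      import requests

      BASE_URL = "https://firebasestorage.googleapis.com/v0/b/example-app.appspot.com/o"

      # Listing the bucket is a single GET; the response is JSON with an "items" array.
      listing = requests.get(BASE_URL, timeout=10).json()

      for item in listing.get("items", []):
          name = item["name"]
          # Each object can then be fetched directly with ?alt=media.
          media_url = f"{BASE_URL}/{requests.utils.quote(name, safe='')}?alt=media"
          print(name, media_url)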

• [email protected] wrote (#23), replying to [email protected]:

  > It's a public firebase bucket

  Oh Jesus

                • Q [email protected]

                  I work in security and I kinda doubt this. There are plenty of issues just like what is outlined here that would be much easier to exploit than social engineering. Social engineering costs a lot more than GET /secrets.json.

                  There is good reason to be concerned about both, but 95% sounds way off and makes it sound like companies should allocate significantly more time to defend against social engineering, when they should first try to ensure social engineering is the easiest way to exploit their system. I can tell you from about a decade of experience that it typically isn't.

                  K This user is from outside of this forum
                  K This user is from outside of this forum
                  [email protected]
                  wrote on last edited by
                  #24

                  https://www.infosecinstitute.com/resources/security-awareness/human-error-responsible-data-breaches/

                  You're right. It's 74%.

                  https://www.cybersecuritydive.com/news/clorox-380-million-suit-cognizant-cyberattack/753837/

                  It's way easier to convince someone that you are just a lost user who needs access than it is to try to probe an organization's IT security from the outside.

                  This is only going to get worse with the ability to replicate other's voices and images. People already consistently fall for text message and email social engineering. Now someone just needs to build a model off a CSO doing interviews for a few hours and then call their phone explaining there has been a breach. Sure, 80% of good tech professionals won't fall for it, but the other 20% that just got hired out of their league and are fearing for their jobs will immediately do what they are told, especially if the breach is elaborate enough to convince them it's an internal security thing.

                  • Q [email protected]

                    Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production. Especially when related to sensitive information. Where was the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system or an insufficient system to test features and define security requirements.

                    P This user is from outside of this forum
                    P This user is from outside of this forum
                    [email protected]
                    wrote on last edited by
                    #25

                    I found a bad programmer!

                    • S [email protected]

                      Believe it or not a lot of hacking is more like this than you think.

                      M This user is from outside of this forum
                      M This user is from outside of this forum
                      [email protected]
                      wrote on last edited by
                      #26

                      Shodan lists 100'000s of publicly accessible security cameras.

                      • I [email protected]

                        I remember when a senior developer where i worked was tired of connecting to the servers to check its configuration, so they added a public facing rest endpoint that just dumped the entire active config, including credentials and secrets

                        That was a smaller slip-up than exposing a database like that (he just forgot that the config contained secrets) but still funny that it happened

                        P This user is from outside of this forum
                        P This user is from outside of this forum
                        [email protected]
                        wrote on last edited by
                        #27

                        That's not a "senior developer." That's a developer that has just been around for too long.

                        Secrets shouldn't be in configurations, and developers shouldn't be mucking around in production, nor with production data.
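
  A minimal sketch of that separation (the names are illustrative, not anyone's actual code): non-sensitive settings live on the config object and can safely be dumped by a diagnostics endpoint, while secrets are read from the environment or a secret manager only at the point of use:

      # Illustrative only; AppConfig and DB_PASSWORD are made-up names.
      import os
      from dataclasses import asdict, dataclass

      @dataclass
      class AppConfig:
          db_host: str = "db.internal"
          db_port: int = 5432
          debug: bool = False

      CONFIG = AppConfig()

      def dump_config() -> dict:
          # Safe to expose for debugging: no secret ever lives on the config object.
          return asdict(CONFIG)

      def db_password() -> str:
          # Fetched from the environment at the point of use, so "dump the config"
          # has nothing sensitive to leak.
          return os.environ["DB_PASSWORD"]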

• [email protected] wrote (#28), replying to [email protected]:

  > This post did not contain any content.

  This reminds me of how I showed a friend and her company how to get databases from BLS, and it's basically all just text files with URLs. "What API did you call? How did you scrape the data?"

  Nah man, it's just... there. As government data should be. They called it a hack.
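
  A small sketch of what "it's just there" looks like in practice. The file path below is one example from BLS's public time-series directory and may not be the dataset they used; the point is that the "database" is a tab-separated text file behind a plain URL:

      # Example path only; BLS publishes many flat files under
      # https://download.bls.gov/pub/time.series/
      import requests

      URL = "https://download.bls.gov/pub/time.series/cu/cu.data.0.Current"

      resp = requests.get(URL, headers={"User-Agent": "example-fetch/1.0"}, timeout=30)
      resp.raise_for_status()

      lines = resp.text.splitlines()
      print(lines[0])           # header row: series id, year, period, value, ...
      for line in lines[1:6]:   # a few data rows are enough to make the point
          print(line.split("\t"))

  No API keys, no scraping, no "hack": an HTTP GET and a split on tabs.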

                          • Q [email protected]

                            Not a big fan of the wording here. Plenty of skilled programmers make dumb mistakes. There should always be systems in place to ensure these dumb mistakes don't make it to production. Especially when related to sensitive information. Where was the threat model and the system in place to enforce it? The idea that these problems are caused by "shit programmers" misses the real issue: there was either no system or an insufficient system to test features and define security requirements.

                            R This user is from outside of this forum
                            R This user is from outside of this forum
                            [email protected]
                            wrote on last edited by
                            #29

                            I can tell you exactly what happened. "Hey Claude, I need to configure and setup a DB with Firebase to store images from our application." and then promptly hit shift+tab and then went to go browse Reddit.

                            nothing was tested. nothing was verified. They let the AI do its thing they checked in on it after an hour or so. once it was done it was add all, commit -m "done", push origin master. AI doesn't implement security stuff. there was zero security here.

                            • T [email protected]

                              Not below average dev necessarily, but when posting code examples on the internet people often try to get a point across. Like how do I solve X? Here is code that solves X perfectly, the rest of the code is total crap, ignore that and focus on the X part. Because it's just an example, it doesn't really matter. But when it's used to train an LLM it's all just code. It doesn't know which parts are important and which aren't.

                              And this becomes worse when small little bits of code are included in things like tutorials. That means it's copy pasted all over the place, on forums, social media, stackoverflow etc. So it's weighted way more heavily. And the part where the tutorial said: "Warning, this code is really bad and insecure, it's just an example to show this one thing" gets lost in the shuffle.

                              Same thing when an often used pattern when using a framework gets replaced by new code where the framework does a little bit more so the same pattern isn't needed anymore. The LLM will just continue with the old pattern, even though there's often a good reason it got replaced (for example security issues). And if the new and old version aren't compatible with each other, you are in for a world of hurt trying to use an LLM.

                              And now with AI slop flooding all of these places where they used to get their data, it just becomes worse and worse.

                              These are just some of the issues why using an LLM for coding is probably a really bad idea.

                              D This user is from outside of this forum
                              D This user is from outside of this forum
                              [email protected]
                              wrote on last edited by
                              #30

                              Didn't expect this much. I don't think about tuto example being weighted heavier. This make sense.

                              • P [email protected]

                                I found a bad programmer!

                                F This user is from outside of this forum
                                F This user is from outside of this forum
                                [email protected]
                                wrote on last edited by
                                #31

                                I found someone who hasn't yet made their big dumb mistake. Give it time.

                                • F [email protected]

                                  I found someone who hasn't yet made their big dumb mistake. Give it time.

                                  P This user is from outside of this forum
                                  P This user is from outside of this forum
                                  [email protected]
                                  wrote on last edited by
                                  #32

                                  I've dodged the bullet for 20 years, now. I guess i had better get cracking

                                  • T [email protected]

                                    Not below average dev necessarily, but when posting code examples on the internet people often try to get a point across. Like how do I solve X? Here is code that solves X perfectly, the rest of the code is total crap, ignore that and focus on the X part. Because it's just an example, it doesn't really matter. But when it's used to train an LLM it's all just code. It doesn't know which parts are important and which aren't.

                                    And this becomes worse when small little bits of code are included in things like tutorials. That means it's copy pasted all over the place, on forums, social media, stackoverflow etc. So it's weighted way more heavily. And the part where the tutorial said: "Warning, this code is really bad and insecure, it's just an example to show this one thing" gets lost in the shuffle.

                                    Same thing when an often used pattern when using a framework gets replaced by new code where the framework does a little bit more so the same pattern isn't needed anymore. The LLM will just continue with the old pattern, even though there's often a good reason it got replaced (for example security issues). And if the new and old version aren't compatible with each other, you are in for a world of hurt trying to use an LLM.

                                    And now with AI slop flooding all of these places where they used to get their data, it just becomes worse and worse.

                                    These are just some of the issues why using an LLM for coding is probably a really bad idea.

                                    F This user is from outside of this forum
                                    F This user is from outside of this forum
                                    [email protected]
                                    wrote on last edited by
                                    #33

                                    Yeah, once you get the LLM's response you still have to go to the documentation to check whether it's telling the truth and the APIs it recommends are current. You're no better off than if you did an internet search and tried to figure out who's giving good advice, or just fumbled your own way through the docs in the first place.

                                    • F [email protected]

                                      Yeah, once you get the LLM's response you still have to go to the documentation to check whether it's telling the truth and the APIs it recommends are current. You're no better off than if you did an internet search and tried to figure out who's giving good advice, or just fumbled your own way through the docs in the first place.

                                      kayohtie@pawb.socialK This user is from outside of this forum
                                      kayohtie@pawb.socialK This user is from outside of this forum
                                      [email protected]
                                      wrote on last edited by
                                      #34

                                      whether it's telling the truth

                                      "whether the output is correct or a mishmash"

                                      "Truth" implies understanding that these don't have, and because of the underlying method the models use to generate plausible-looking responses based on training data, there is no "truth" or "lying" because they don't actually "know" any of it.

                                      I know this comes off probably as super pedantic, and it definitely is at least a little pedantic, but the anthropomorphism shown towards these things is half the reason they're trusted.

                                      That and how much ChatGPT flatters people.

                                      • S [email protected]

                                        Believe it or not a lot of hacking is more like this than you think.

                                        4 This user is from outside of this forum
                                        4 This user is from outside of this forum
                                        [email protected]
                                        wrote on last edited by
                                        #35

                                        I think that’s less about “hacking” and more about modern day devs being overworked by their hot-shit team lead and clueless PMs and creating “temporary” solutions that become permanent in the long run.

                                        This bucket was probably something they set up early in the dev cycle so they could iterate components without needing to implement an auth system first and then got rushed into releasing before it could be fixed. That’s almost always how this stuff happens; whether it’s a core element or a rushed DR test.

• [email protected] wrote (#36), replying to [email protected]:

  > Social engineering is probably 95% of modern attack vectors. And that's not even unexpected, some highly regarded computer scientists and security researchers concluded this more than a decade ago.

  This has been the case for 40+ years. Humans are almost always the weakest link.
