Do you actually audit open source projects you download?
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
I do sometimes, when I know the tech stack. (I wonder if GitHub Copilot could help in other situations?)
For example, I've been learning more about FreshRSS and Wallabag (especially now that Pocket is shutting down).
In any case, with open source, I trust that someone looks at it.
-
I know lemmy hates AI but auditing open source code seems like something it could be pretty good at. Maybe that's something that may start happening more.
I'm writing a paper on this, actually. Basically, it's okay-ish at it, but has definite blind spots. The most promising route is to have AI use a traditional static analysis tool, rather than evaluate the code directly.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
If it's a project with a couple hundred thousand downloads a week, then no; I trust that it's been looked at by more savvy people than myself.
If it's a niche project that barely anyone uses, or comes from a source I consider less reputable, then I will skim it.
-
'AI' as we currently know it is terrible at this sort of task. It's not capable of understanding the flow of the code in any meaningful way, and tends to raise entirely spurious issues (see the problems the curl author has had with being overwhelmed by them, for example). It also won't spot actually malicious code that's been included with any sort of care, nor would it find intentional behaviour that would be harmful or counterproductive in the particular scenario you want to use the program.
Having actually worked with AI in this context alongside github/azure devops advanced security, I can tell you that this is wrong. As much as we hate AI, and as much as people like to (validly) point out issues with hallucinations, overall it's been very on-point.
-
I'm writing a paper on this, actually. Basically, it's okay-ish at it, but has definite blind spots. The most promising route is to have AI use a traditional static analysis tool, rather than evaluate the code directly.
That seems to be the direction the industry is headed in. GHAzDO and competitors all seem to be converging on using AI as a force-multiplier on top of the existing solutions, and it works surprisingly well.
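To make that concrete, the pattern is roughly "analyzer finds, LLM triages". A minimal sketch, assuming Bandit as the static analyzer and a stub for the LLM call (this isn't GHAzDO's actual pipeline, just the shape of the idea):

```python
# Minimal sketch: let a traditional static analyzer find the issues,
# and use the LLM only to triage/explain its findings.
# Assumes Bandit (a Python static analyzer); ask_llm() is a stub.
import json
import subprocess

def run_bandit(path: str) -> list[dict]:
    # "bandit -r <path> -f json" scans a Python tree and emits JSON findings.
    proc = subprocess.run(
        ["bandit", "-r", path, "-f", "json", "-q"],
        capture_output=True, text=True,
    )
    return json.loads(proc.stdout).get("results", [])

def ask_llm(prompt: str) -> str:
    # Placeholder: wire this up to whatever model/API you actually use.
    raise NotImplementedError

def triage(path: str) -> str:
    findings = run_bandit(path)
    summary = "\n".join(
        f"{f['filename']}:{f['line_number']} [{f['issue_severity']}] {f['issue_text']}"
        for f in findings
    )
    prompt = (
        "These findings came from a static analyzer. For each one, say whether "
        "it looks like a real vulnerability or a false positive, and why:\n\n"
        + summary
    )
    return ask_llm(prompt)
```

The LLM never has to find anything itself; it only explains and ranks what the deterministic tool already flagged, which is where it seems to add the most value.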
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
If I can read it in around an afternoon, and it's not a big enough project that I can safely assume many other people have already done so, then I will!
But I don’t think it qualifies as “auditing”, for now I only have a bachelor’s in CS and I don’t know as much as I’d like about cybersecurity yet.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
It depends on the provenance of the code and who (if anyone) is downstream.
A project that's packaged in multiple distros is more likely to be reliable than a project that only exists on github and provides its own binary builds.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
Depends on the project and how long it has been around.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
Some, yes. I'm currently using HyDE for Hyprland and I've been tinkering with almost every script that holds the project together.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
Nah. My security is entirely based on vibes and gambling
-
Having actually worked with AI in this context alongside github/azure devops advanced security, I can tell you that this is wrong. As much as we hate AI, and as much as people like to (validly) point out issues with hallucinations, overall it's been very on-point.
Could you let me know what sort of models you're using? Everything I've tried has basically been so bad it was quicker and more reliable to do the job myself. Most of the models can barely write boilerplate code accurately and securely, let alone anything even moderately complex.
I've tried to get them to analyse code too, and that's hit and miss at best, even with small programs. I'd have no faith at all that they could handle anything larger; the answers they give would be confident and wrong, which is easy to spot with something small, but much harder to catch with a large, multi process system spread over a network. It's hard enough for humans, who have actual context, understanding and domain knowledge, to do it well, and I've, personally, not seen any evidence that an LLM (which is what I'm assuming you're referring to) could do anywhere near as well. I don't doubt that they flag some issues, but without a comprehensive, human, review of the system architecture, implementation and code, you can't be sure what they've missed, and if you're going to do that anyway, you've done the job yourself!
Having said that, I've no doubt that things will improve; programming languages have well-defined syntaxes, so they should be some of the easiest types of text for an LLM to parse and build a context from. If that can be combined with enough domain knowledge, a description of the deployment environment, and a model that's actually trained and tuned for code analysis and security auditing, it might be possible to get similar results to humans.
-
For personal use? I never do anything that would qualify as "auditing" the code. I might glance at it, but mostly out of curiosity. If I'm contributing then I'll get to know the code as much as is needed for the thing I'm contributing, but still far from a proper audit. I think the idea that the open-source community is keeping a close eye on each other's code is a bit of a myth. No one has the time, unless someone has the money to pay for an audit.
I don't know whether corporations audit the open-source code they use, but in my experience it would be pretty hard to convince the typical executive that this is something worth investing in, like cybersecurity in general. They'd rather wait until disaster strikes, then pay more.
My company only allows downloads from official sources, verified publishers, signed where we can. This is enforced by only allowing the repo server to download stuff and only from places we've configured. In general those go through a process to reduce the chances of problems and mitigate them quickly.
We also feed everything through a scanner to flag known vulnerabilities and unacceptable licenses (rough sketch of that kind of check at the end of this comment).
If it's fully packaged installable software, we have security guys that take a look at it. I have no idea what they do or whether it counts as an audit.
I'm actually going round in circles with this one developer. He needs an open source package and we already cache it on the repo server in several form factors, from reputable sources… but he wants to run a random GitHub component which downloads an unsigned tar file from an untrusted source.
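For anyone curious, the known-vulnerability part is the easy bit to reproduce at home. A rough sketch using the public OSV database (illustrative only, not our actual scanner, and it ignores the license side entirely):

```python
# Rough sketch: ask the OSV database whether a given package version
# has any published advisories. Illustrative only.
import requests

def known_vulns(name: str, version: str, ecosystem: str = "PyPI") -> list[str]:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=10,
    )
    resp.raise_for_status()
    return [v["id"] for v in resp.json().get("vulns", [])]

# Run this over every pinned dependency and fail the build if anything comes back.
```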
-
I know lemmy hates AI but auditing open source code seems like something it could be pretty good at. Maybe that's something that may start happening more.
I'm actually planning to do an evaluation of an AI code review tool to see what it can do. I'm somewhat optimistic that it could do this better than it can code.
I really want to sic it on this one junior programmer who doesn't understand that you can't just commit AI-generated slop and expect it to work. On this last code review, after over 60 pieces of feedback, I gave up on the rest and left it at: he needs to understand when AI-generated slop needs help.
AI is usually pretty good at unit tests, but this was so bad. It randomly started using a different mocking framework; it actually mocked entire classes and somehow thought that was a valid way to test them. Tests wasted on non-existent constructors, no negative tests, tests that didn't verify anything. Worst of all, there were so many compile errors, yet he thought that was fine.
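To give a flavour of what I mean, a made-up illustration (not his actual code): a "test" that mocks the very class under test and asserts nothing, next to one that actually verifies behaviour.

```python
# Made-up example of the anti-pattern vs. a useful test (pytest-style).
from unittest import mock
import pytest

class Discounts:
    def price_after_discount(self, price: float, percent: float) -> float:
        if not 0 <= percent <= 100:
            raise ValueError("percent must be between 0 and 100")
        return price * (1 - percent / 100)

def test_useless():
    # Replaces the class under test with a mock, so no real code runs,
    # and nothing is asserted - this passes no matter what the class does.
    fake = mock.MagicMock(spec=Discounts)
    fake.price_after_discount(100, 10)

def test_useful():
    # Exercises the real implementation, including a negative case.
    d = Discounts()
    assert d.price_after_discount(100.0, 10.0) == pytest.approx(90.0)
    with pytest.raises(ValueError):
        d.price_after_discount(100.0, 150.0)
```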
-
Nah. My security is entirely based on vibes and gambling
Hell yeah brother!
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
Depends. For a known project like curl I won't, because I know it's fine, but if it's a new project I've just heard about, I do audit the source, and if I don't know the language it's in, I ask someone who does.
-
Could you let me know what sort of models you're using? Everything I've tried has basically been so bad it was quicker and more reliable to do the job myself. Most of the models can barely write boilerplate code accurately and securely, let alone anything even moderately complex.
I've tried to get them to analyse code too, and that's hit and miss at best, even with small programs. I'd have no faith at all that they could handle anything larger; the answers they give would be confident and wrong, which is easy to spot with something small, but much harder to catch with a large, multi process system spread over a network. It's hard enough for humans, who have actual context, understanding and domain knowledge, to do it well, and I've, personally, not seen any evidence that an LLM (which is what I'm assuming you're referring to) could do anywhere near as well. I don't doubt that they flag some issues, but without a comprehensive, human, review of the system architecture, implementation and code, you can't be sure what they've missed, and if you're going to do that anyway, you've done the job yourself!
Having said that, I've no doubt that things will improve; programming languages have well-defined syntaxes, so they should be some of the easiest types of text for an LLM to parse and build a context from. If that can be combined with enough domain knowledge, a description of the deployment environment, and a model that's actually trained and tuned for code analysis and security auditing, it might be possible to get similar results to humans.
It's just whatever is built into Copilot.
You can do a quick and dirty test by opening Copilot chat, asking it something like "outline the vulnerabilities found in the following code, with the vulnerabilities listed underneath it. Outline any other issues you notice that are not listed here.", and then pasting the code and the discovered vulns.
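For example, a throwaway snippet with a couple of planted issues (made up, not from any real project) that you could paste in alongside that prompt:

```python
# Deliberately bad example code to use as the test input.
import sqlite3

def get_user(conn: sqlite3.Connection, username: str):
    # Query built by string interpolation -> SQL injection.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def check_password(stored: str, supplied: str) -> bool:
    # Plain comparison of secrets (and presumably a plaintext-stored password).
    return stored == supplied
```

A decent answer should call out the injectable query and the password handling without much prompting.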
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
If it looks sketchy I'll look at it and not trust the binaries. I'm not going to catch anything subtle, but if it sets up a reverse shell, I can notice that shit.
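Even something as crude as grepping for the classic patterns catches the blatant stuff. A quick sketch (the pattern list is just a handful of obvious examples, nothing exhaustive):

```python
# Crude sketch: flag lines in a source tree that match classic
# reverse-shell / "phone home" patterns. Pattern list is illustrative only.
import pathlib
import re

SUSPICIOUS = re.compile(
    r"/dev/tcp/|bash -i|nc -e|socket\.connect\(|curl .*\|\s*sh"
)

def scan(root: str) -> None:
    for path in pathlib.Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for lineno, line in enumerate(text.splitlines(), 1):
            if SUSPICIOUS.search(line):
                print(f"{path}:{lineno}: {line.strip()}")

# e.g. scan("./some-project")
```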
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
I don't have the know-how to do so, so I go off of what others have said about it. It's at least got a better chance of being safe than closed source software, where people are FULLY guessing at whether it's safe or not, whereas here at least one person without ties to the creator has pored over it.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
No, so I only use well-known, widely used open source programs. If I'm doing a code review, I'm getting paid to do it.
-
The question is simple. I wanted to get a general consensus on if people actually audit the code that they use from FOSS or open source software or apps.
Do you blindly trust the FOSS community? I am trying to get a rough idea here. Sometimes audit the code? Only on mission critical apps? Not at all?
Let's hear it!
I rely on Debian repo maintainers to do this for me