Do you actually audit open source projects you download?
-
The question is simple: I want to get a general consensus on whether people actually audit the code they use from FOSS / open source software or apps.
Do you blindly trust the FOSS community? I'm trying to get a rough idea here. Do you sometimes audit the code? Only for mission-critical apps? Not at all?
Let's hear it!
-
Packaged products ready to use? No.
Libraries which I use in my own projects? I at least have a quick look at the implementation, often a more detailed analysis if issues pop up.
-
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
-
I generally look over the project repo and site to see if there are any flags raised like those I talk about here.
Beyond that, I glance over the codebase, check that it's maintained, and look for certain signs, like tests and (for apps with a web UI) the main template files, to see whether care has been taken not to include random analytics or external files by default. I get a feel for the quality of the code and maintenance while doing this. I generally wouldn't do a full audit or anything, though. With modern software it's hard to fully track and understand a project, especially when it relies on many other dependencies. There's always an element of trust, and that's the case regardless of whether the software is FOSS or not. It's just that FOSS provides more opportunities for folks to see the code when needed/desired.
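The template check can be as simple as a quick script that flags externally hosted scripts; a rough sketch (the template directory and pattern are placeholder assumptions, not a real tool I use):

```python
# Flag externally hosted <script> tags in a project's HTML templates.
# The directory is a placeholder; point it at wherever the app keeps templates.
import re
from pathlib import Path

TEMPLATE_DIR = Path("project/templates")
EXTERNAL_SRC = re.compile(r"""<script[^>]+src=["'](https?://[^"']+)""", re.IGNORECASE)

for template in TEMPLATE_DIR.rglob("*.html"):
    text = template.read_text(errors="ignore")
    for match in EXTERNAL_SRC.finditer(text):
        # Anything pulled from a third-party host by default is worth a closer
        # look: analytics, telemetry beacons, CDN-hosted scripts, etc.
        print(f"{template}: loads {match.group(1)}")
```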
-
Having gone through the approval process at a large company to add an open source project to its whitelist, I can say it was surprisingly easy. They mostly wanted to know numbers: how long has it been around, when was the last update, number of downloads, what does it do, etc. They mostly just wanted to make sure it was still being maintained.
In their eyes it's no different from closed source software, which they also don't audit. There might also have been an antivirus scan run against the code, but that seemed more like a checkbox than something that would actually help.
-
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
This is one of the few things that AI could potentially actually be good at. Aside from the few people on Lemmy who are entirely anti-AI, most people just don't want AI jammed willy-nilly into places where it doesn't belong to do things poorly that it's not equipped to do.
-
Aside from the few people on Lemmy who are entirely anti-AI
Those are silly folks lmao
most people just don't want AI jammed willy-nilly into places where it doesn't belong to do things poorly that it's not equipped to do.
Exactly, fuck corporate greed!
-
I trust the community, but not blindly. I trust those who have a proven track record, and I proxy that trust through them whenever possible. I trust the standards and quality of the Debian organization, and by extension I trust the packages they maintain and curate. If I have to install something from source that is outside a major distribution, my trust is reduced. I might do some cursory research on the history of the project and the people behind it, and I might look closer at the code. Or I might not.
A lot of software doesn't require much trust. If a web app running as its own limited user on a well-secured and up-to-date VPS or VM turns out, in the unlikely event, to be a malicious backdoor, it's simply an annoyance and it gets purged; confined to its own limited user, there's not that much it can do and it can't really hide. If I'm off the beaten track with something that requires a bit more trust, something security related, something I'm going to run as root, or something that's going to be a core part of my network, I'll go further. Maybe I "audit" in the sense that I check the bug tracker and look for CVEs to understand how seriously they take potential security issues.
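That CVE check doesn't have to be elaborate; a minimal sketch querying the OSV.dev vulnerability database (the package name and ecosystem are placeholders, not something I actually run as-is):

```python
# Ask OSV.dev for known advisories (CVEs, GHSAs, ...) affecting a package.
# Package name and ecosystem below are placeholders.
import requests

query = {"package": {"name": "example-package", "ecosystem": "PyPI"}}
resp = requests.post("https://api.osv.dev/v1/query", json=query, timeout=10)
resp.raise_for_status()
vulns = resp.json().get("vulns", [])

for v in vulns:
    # Each entry has an id, a short summary, and the affected version ranges.
    print(v["id"], "-", v.get("summary", "no summary"))
print(f"{len(vulns)} known advisories on record")
```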
Yeah, if that malicious software I ran, which I didn't think required a lot of trust, happens to have snuck in a bunch of 0-day exploits, gets root access, gets into the rest of my network, and starts injecting itself into my hardware persistently, then I'm going to have a really bad day, probably followed by a really bad year. That's a given. It's a risk that is always present; I'm a single guy homelabbing a bunch of fun stuff, I'm no match for a sophisticated and likely targeted nation-state-level attack, and I'm never going to be. On the other hand, if I get hacked and ransomwared along with 10,000 other people from some compromised project that I trusted a little too much, at least I'll consider myself in good company, give the hackers credit where credit is due, and try to learn from the experience. But I will say they'd better be really sneaky, pull off their attack quickly, and make it very sophisticated, because I'm not stupid either and I pay pretty close attention to changes to my network, and to any new software I'm running in particular.
-
I run projects inside Docker on a VM away from important data. It allows me to test and restrict access to specific things of my choosing.
It works well for me.
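For what it's worth, that kind of containment can also be scripted; a minimal sketch using the Docker SDK for Python, where the image name, paths, and limits are all placeholders and a real app may need looser settings:

```python
# Start a container that only sees what it is explicitly given: read-only
# root filesystem, no extra capabilities, a non-root user, one data dir.
import docker

client = docker.from_env()

container = client.containers.run(
    "some-selfhosted-app:latest",  # placeholder image
    detach=True,
    read_only=True,                # immutable root filesystem
    user="1000:1000",              # don't run as root inside the container
    cap_drop=["ALL"],              # drop all Linux capabilities
    mem_limit="512m",
    ports={"8080/tcp": 8080},      # expose only what's needed
    volumes={"/srv/app-data": {"bind": "/data", "mode": "rw"}},
)
print(container.short_id, "is up, isolated from everything else on the host")
```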
-
Those are silly folks lmao
Eh, I kind of get it. OpenAI's malfeasance with regard to energy usage, data theft, and the aforementioned rampant shoe-horning (maybe "misapplication" is a better word) of the technology has sort of poisoned the entire AI well for them, and to them it doesn't feel (and honestly isn't) necessary enough to be worth considering ways it might be done ethically.
I don't agree with them entirely, but I do get where they're coming from. Personally, I think once the hype dies down and the corporate and VC money gets out of it, it can finally settle into a more reasonable steady state and the money can actually go into truly useful implementations of it.
-
I do not. But then again, I don’t audit the code of the closed source software I use either.
-
Nah, not really... most of the time I'm at least doing a light metadata check: who are the maintainer and main contributors, have any trusted folks starred the repo, how active is development and how frequent are releases, search the issues for "vulnerability"/"CVE" and see how contributors communicate on those, previous CVE track record.
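Roughly the kind of thing I mean, sketched against the public GitHub API (the repo name is a placeholder, and unauthenticated requests are rate-limited):

```python
# Light metadata check: activity, maintainers, release cadence, CVE chatter.
import requests

REPO = "owner/project"  # placeholder: the repo being evaluated
API = f"https://api.github.com/repos/{REPO}"

repo = requests.get(API, timeout=10).json()
print("Stars:", repo.get("stargazers_count"))
print("Last push:", repo.get("pushed_at"))
print("Open issues:", repo.get("open_issues_count"))

# Who actually maintains it: top contributors by commit count.
for c in requests.get(f"{API}/contributors", timeout=10).json()[:5]:
    print(c["login"], c["contributions"])

# Release cadence: the most recent tagged releases.
for r in requests.get(f"{API}/releases", timeout=10).json()[:5]:
    print(r["tag_name"], r["published_at"])

# How security reports are handled: issues mentioning CVEs.
issues = requests.get(
    "https://api.github.com/search/issues",
    params={"q": f"repo:{REPO} cve in:title,body"},
    timeout=10,
).json()
print("Issues mentioning CVE:", issues.get("total_count"))
```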
As for real code audits... if I did those, I could only ever use a handful of programs; never mind the thought of fully auditing the whole Linux kernel before I trust it.
Focusing on "mission critical" apps feels pretty useless imho, because it doesn't really matter which of the thousands of programs on your system executes malicious code, no?
Like sure, the app you use for handling super sensitive data might be secure and audited... then you get fucked by some obscure compression library silently loaded by a bunch of your programs.
-
No, I pretty much only look at the number of contributors (more is better)
-
A full code audit is very time consuming; it's impossible to audit all the software someone uses. However, if I know nothing about a project, I take a quick look at the code to see whether it follows best practices and to make some assumptions about the code quality. The problem is that I can't do this if I'm unfamiliar with the programming language the project is written in, so in most cases I try to avoid such projects.
-
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
Daniel Stenberg claims that the curl bug reporting system is effectively DDoSed by AI tools wrongly reporting various issues. Doesn't seem like a good feature in a code auditor.
-
Well, my husband’s workplace does audit the code they deploy, but they have a big problem with contractors just downloading random shit and putting it on production systems without following proper review, in violation of policy.
The phrase "fucking Deloitte" is a daily occurrence.
-
Lol. I download a library or program to do a task because I would not be able to code it myself (to that kind of production level, at least). Of course I'm not gonna be able to audit it! You need twice the IQ to debug a piece of software as you needed to write it in the first place.
-
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
Lots of things seem like they would work until you try them.
-
OpenAI's malfeasance with regard to energy usage, data theft,
I mean, that's why I call them silly folks: all of that is still attributable to the corporate greed we all hate. But I've also seen them shit on research work and papers just because "AI". Soo yea lol
-
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
It wouldn't be good at it; at most it would be a little patch over unaudited code.
In the end it would just be an AI-powered antivirus.
-
I know Lemmy hates AI, but auditing open source code seems like something it could be pretty good at. Maybe that's something that will start happening more.
'AI' as we currently know it is terrible at this sort of task. It's not capable of understanding the flow of the code in any meaningful way, and it tends to raise entirely spurious issues (see the problems the curl author has with being overwhelmed, for example). It also won't spot actually malicious code that's been included with any sort of care, nor would it find intentional behaviour that would be harmful or counterproductive in the particular scenario in which you want to use the program.