agnos.is Forums


GitHub Actions radicalized me

Programmer Humor · programmerhumor · 40 Posts · 27 Posters · 134 Views
  • [email protected]

    I mean, what's the point of your tests if they fail. It's already bad enough that one of our tests is flaky. To be fair, I am working at a company that does a lot of system safety, and a lot of our stuff isn't just tested, it's mathematically proven.

    [email protected] wrote (#12):

    Shit! You got deadlines and managers or customers piling in? Yeah, the tests don't pass, but who cares! The code works… probably! Ship it!

    • [email protected]

      Context: https://github.com/orgs/community/discussions/44490

      [email protected] wrote (#13):

      This is dumb as fuck.

      • [email protected]

        Context: https://github.com/orgs/community/discussions/44490

        [email protected] wrote (#14, edited):

        We have a few non-required checks here and there - mostly because you need an admin to list a check as required, and that can be annoying to do. And we still get code merged in occasionally that fails those checks. Hell, I have merged in code that fails the checks. Sometimes checks take a while to run, and there is this nice merge-when-ready button in GH. But it will gladly merge your code in once all the required checks have passed, ignoring any non-required checks.

        And it is such a useful button to have, especially in a large codebase with lots of developers - just merge in the code when it is ready and avoid forgetting about things for a few hours and possibly having to rebase and run all the checks again because of some minor merge conflict...

        But GH required checks are just broken for large codebases as well. We don't always want to run every check on every code change. We don't need to run all unit tests when only documentation has changed. But required checks are all or nothing. They need to return something or else you cannot merge at all (though this might apply to external checks more than GH Actions, maybe). I really wish there were a "require all checks that run to pass" option plus an "at least one check must run" option. Or if external checks could tell GH when they are required or not. Either way, there is a lot of room for improvement on the GH PR checks.
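
The "don't run all unit tests when only documentation has changed" point can be handled inside the job itself. Below is a rough Python sketch of that kind of path-based gating; every path, rule, and command is invented for illustration, and it does not change how GitHub's required checks behave.

```python
# Hypothetical sketch: skip the test suite when a change only touches docs.
# Assumes a git checkout with history for the target branch and pytest as
# the runner; the "docs/" prefix and "*.md" rule are invented examples.
import subprocess
import sys


def changed_files(base: str = "origin/main") -> list[str]:
    """List files changed relative to the target branch."""
    out = subprocess.run(
        ["git", "diff", "--name-only", f"{base}...HEAD"],
        check=True, capture_output=True, text=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def docs_only(files: list[str]) -> bool:
    """True if every changed file is documentation."""
    return bool(files) and all(
        f.startswith("docs/") or f.endswith(".md") for f in files
    )


if __name__ == "__main__":
    if docs_only(changed_files()):
        print("Docs-only change, skipping tests.")
        sys.exit(0)
    sys.exit(subprocess.call(["pytest", "-q"]))
```

The catch the post describes still applies: if the job is marked required and simply never reports a status, GitHub blocks the merge anyway.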

        • [email protected]

          Have you tried rerunning them all day until they pass? 😄

          [email protected] wrote (#15):

          Would you look at that - the pipeline is green now! Quick everybody, merge your stuff while it's stable (/s) (sadly a true story tho)

          • F [email protected]

            Ha, losers - tests can't fail if you don't have any tests.

            [email protected] wrote (#16, edited):

            Why write tests when you should be writing more features the business needs now but will never use?

            • N [email protected]

              We have a few non-required checks here and there - mostly because you need an admin to list a check as required, and that can be annoying to do. And we still get code merged in occasionally that fails those checks. Hell, I have merged in code that fails the checks. Sometimes checks take a while to run, and there is this nice merge-when-ready button in GH. But it will gladly merge your code in once all the required checks have passed, ignoring any non-required checks.

              And it is such a useful button to have, especially in a large codebase with lots of developers - just merge in the code when it is ready and avoid forgetting about things for a few hours and possibly having to rebase and run all the checks again because of some minor merge conflict...

              But GH required checks are just broken for large codebases as well. We don't always want to run every check on every code change. We don't need to run all unit tests when only documentation has changed. But required checks are all or nothing. They need to return something or else you cannot merge at all (though this might apply to external checks more than GH Actions, maybe). I really wish there were a "require all checks that run to pass" option plus an "at least one check must run" option. Or if external checks could tell GH when they are required or not. Either way, there is a lot of room for improvement on the GH PR checks.

              [email protected] wrote (#17, edited):

              There are definitely ways to run partial testing suites on modified code only. I feel like much of what you're complaining about is an already solved problem.

              • [email protected]

                Context: https://github.com/orgs/community/discussions/44490

                [email protected] wrote (#18):

                If you only write tests for things that won't fail, you're doing it wrong. Are you anticipating some other feature coming soon? Write a failing test for it. Did you find untested code that might run soon with a little work? Write a test for it. Did a nonessential feature break while adding an essential feature? Let the test fail and fix it later.

                • [email protected]

                  Why write tests when you should be writing more features the business needs now but will never use?

                  [email protected] wrote (#19):

                  A bug? No problem, we'll just fix it in the next release. Loop for eternity.

                  • [email protected]

                    Context: https://github.com/orgs/community/discussions/44490

                    [email protected] wrote (#20):

                    This just sounds like "my frontend-only changes shouldn't be impacted by some dumbass breaking the backend two commits ago", which seems reasonable.

                    • [email protected]

                      Context: https://github.com/orgs/community/discussions/44490

                      [email protected] wrote (#21):

                      Bro just crash the CI because the linter found an extra space bro trust me bro this is important. Also Unit tests are optional.

                      • Z [email protected]

                        If you only write tests for things that won't fail, you're doing it wrong. Are you anticipating some other feature coming soon? Write a failing test for it. Did you find untested code that might run soon with a little work? Write a test for it. Did a nonessential feature break while adding an essential feature? Let the test fail and fix it later.

                        [email protected] wrote (#22):

                        Eww, no. You're doing tests wrong. The point of tests is to understand whether changes to the code (or dependencies) break any functionality. If you have failing tests, it makes this task very difficult and time consuming for people who need it most, i.e. people new to the project. "Is this test failing because of something I've done? <half an hour of debugging later> Oh, it was broken before my changes too!". If you insist on adding broken tests, mark them as "expected to fail" at least, so that they don't affect the overall test suite result (and when someone fixes the functionality they have to un-mark them as expected to fail), and the checkmark is green. You should never merge PRs/MRs which fail any tests - it is an extremely bad habit and forms a bad culture in your project.
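
The "expected to fail" marking mentioned above maps to pytest's xfail marker, assuming a Python project with pytest (the thread never names a language or test runner); a minimal, self-contained sketch:

```python
# Minimal sketch of pytest's "expected to fail" marker. The function under
# test is a stand-in defined here only so the example is self-contained.
import pytest


def export_csv(records):
    # Placeholder for a feature that is known to be broken / unimplemented.
    raise NotImplementedError


@pytest.mark.xfail(reason="CSV export not implemented yet", strict=True)
def test_export_csv_includes_header_row():
    rows = export_csv([{"id": 1, "name": "Ada"}])
    assert rows[0] == "id,name"
```

With strict=True, the suite goes red as soon as the test unexpectedly starts passing, which forces whoever fixed the functionality to remove the marker - the un-marking step the post describes.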

                        • [email protected]

                          Context: https://github.com/orgs/community/discussions/44490

                          [email protected] wrote (#23):

                          The real problem is merging before waiting for that one slow CI pipeline to complete

                          • V [email protected]

                            There are definitely ways to run partial testing suites on modified code only. I feel like much of what you're complaining about is an already solved problem.

                            [email protected] wrote (#24):

                            It can be finicky to set up and mistakes can be made easily. Often you have to manually replicate the entire internal dependency tree of your project in the checks so that there are no false positive test results. There are some per-language solutions, and there's Nix which is almost built for this sort of thing, but both come with drawbacks as well.
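
In practice, "replicating the internal dependency tree in the checks" often ends up as a hand-maintained mapping from source paths to test targets, which is exactly where the false positives come from once it goes stale. A hedged Python sketch of that idea - every path prefix and suite name below is invented:

```python
# Hypothetical path-to-test-suite mapping used to pick which suites to run.
# Every prefix and suite name is invented; a real project has to keep this
# table in sync with its actual dependency graph, which is where it goes wrong.
DEPENDENCY_MAP = {
    "libs/auth/": ["tests/auth", "tests/api"],   # api depends on auth
    "libs/billing/": ["tests/billing"],
    "services/api/": ["tests/api"],
}
FALLBACK = ["tests"]  # when in doubt, run the full suite


def suites_for(changed_files: list[str]) -> list[str]:
    suites: set[str] = set()
    for path in changed_files:
        matched = [t for prefix, targets in DEPENDENCY_MAP.items()
                   if path.startswith(prefix) for t in targets]
        if not matched:
            return FALLBACK  # unknown file: be conservative
        suites.update(matched)
    return sorted(suites) or FALLBACK


if __name__ == "__main__":
    print(suites_for(["libs/auth/token.py", "services/api/routes.py"]))
    # -> ['tests/api', 'tests/auth']
```

The per-language tools and Nix mentioned above derive this mapping from the real build graph instead of hand-maintaining it, which is the trade-off the post alludes to.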

                            • V [email protected]

                              There are definitely ways to run partial testing suites on modified code only. I feel like much of what you're complaining about is an already solved problem.

                              [email protected] wrote (#25):

                              Yeah, there are ways to run partial tests on modified code only. But they interact poorly with GH required checks. https://github.com/orgs/community/discussions/44490 goes into a lot more detail on similar problems people are having with GH Actions - though our problem is with external CI/CD tools that report back to GH. It does look like they have updated the docs linked in that discussion, so maybe something has recently changed with GH Actions - but I bet the problem still exists for external tooling.

                              • Q [email protected]

                                The real problem is merging before waiting for that one slow CI pipeline to complete

                                [email protected] wrote (#26):

                                One problem is GH's merge-when-ready button. It will merge while there are still tests running unless those tests are required. It would be much better if the auto-merge took into account all checks and not just the required ones.
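
What the post is asking for can be approximated outside GitHub with the REST API: poll until every check run on the PR head - required or not - has completed successfully, then merge. A hedged sketch using the Checks and Pulls endpoints; owner, repo, PR number and token are placeholders, and pagination and error handling are omitted:

```python
# Hedged sketch of the behaviour the post wants from "merge when ready":
# wait until *every* check run on the PR head has completed successfully,
# then merge. OWNER/REPO/PR_NUMBER and the token are placeholders.
import time

import requests

OWNER, REPO, PR_NUMBER = "example-org", "example-repo", 123
API = "https://api.github.com"
HEADERS = {
    "Authorization": "Bearer YOUR_TOKEN_HERE",
    "Accept": "application/vnd.github+json",
}


def all_checks_green(sha: str) -> bool:
    r = requests.get(
        f"{API}/repos/{OWNER}/{REPO}/commits/{sha}/check-runs", headers=HEADERS
    )
    r.raise_for_status()
    runs = r.json()["check_runs"]
    # Note: "skipped"/"neutral" conclusions would stall this sketch forever;
    # a real tool would treat them as acceptable.
    return bool(runs) and all(
        run["status"] == "completed" and run["conclusion"] == "success"
        for run in runs
    )


pr = requests.get(
    f"{API}/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}", headers=HEADERS
).json()
head_sha = pr["head"]["sha"]

while not all_checks_green(head_sha):
    time.sleep(60)  # keep waiting for the slow, non-required checks too

requests.put(
    f"{API}/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/merge",
    headers=HEADERS,
    json={"merge_method": "squash"},
).raise_for_status()
```

This only looks at check runs; external CI systems that report through the older commit status API would also need the combined status endpoint (/repos/{owner}/{repo}/commits/{ref}/status).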

                                • B [email protected]

                                  Eww, no. You're doing tests wrong. The point of tests is to understand whether changes to the code (or dependencies) break any functionality. If you have failing tests, it makes this task very difficult and time consuming for people who need it most, i.e. people new to the project. "Is this test failing because of something I've done? <half an hour of debugging later> Oh, it was broken before my changes too!". If you insist on adding broken tests, mark them as "expected to fail" at least, so that they don't affect the overall test suite result (and when someone fixes the functionality they have to un-mark them as expected to fail), and the checkmark is green. You should never merge PRs/MRs which fail any tests - it is an extremely bad habit and forms a bad culture in your project.

                                  [email protected] wrote (#27):

                                  You're both right. You're both wrong.

                                  • You write tests for functionality before you write the functionality.
                                  • You code the functionality so the tests pass.
                                  • Then, and only then, the test becomes a regression test and is enabled in your CI automation.
                                  • If the test ever breaks again the merge is blocked.

                                  If you only write tests after you've written the code then the test will test that the code does what the code does. Your brain is already polluted and you're not capable of writing a good test.

                                  Having tests that fail is fine, as long as they're not part of your regression tests.
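
The list above is plain test-first development. Assuming Python again (the thread never names a language), the sequence compresses to something like this illustrative sketch, with every name made up:

```python
# Compressed test-first sketch: the test states the spec and fails first,
# then the function is written until it passes, and only then is the test
# enabled in CI as a regression test. Names are illustrative only.

# Step 1: the test, written before any implementation exists.
def test_slugify_lowercases_and_hyphenates():
    assert slugify("GitHub Actions Radicalized Me") == "github-actions-radicalized-me"


# Step 2: the implementation, written until the test above passes.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

# Step 3 is process rather than code: the now-passing test joins the CI
# suite, and any future change that breaks it blocks the merge.
```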

                                  • B [email protected]

                                    Eww, no. You're doing tests wrong. The point of tests is to understand whether changes to the code (or dependencies) break any functionality. If you have failing tests, it makes this task very difficult and time consuming for people who need it most, i.e. people new to the project. "Is this test failing because of something I've done? <half an hour of debugging later> Oh, it was broken before my changes too!". If you insist on adding broken tests, mark them as "expected to fail" at least, so that they don't affect the overall test suite result (and when someone fixes the functionality they have to un-mark them as expected to fail), and the checkmark is green. You should never merge PRs/MRs which fail any tests - it is an extremely bad habit and forms a bad culture in your project.

                                    [email protected] wrote (#28, edited):

                                    There are two different things mentioned here, which I feel I need to clarify:

                                    First, what you said about merging / creating a PR with broken tests. Absolutely you shouldn't do that, because you should only merge once the feature is finished. If a test doesn't work, then either it's testing for the wrong aspect and should be rewritten, or the functionality doesn't work 100% yet, so the feature isn't ready to get merged. Even if you're waiting for some other feature to get ready, because you need to integrate it or something, you're still waiting, so the feature isn't ready.

                                    At the same time, the OP's point about tests being supposed to fail at first isn't too far off the mark either, because that's precisely how TDD works. If you're applying that philosophy (which I personally condone), then that's exactly what you do: Write the test first, checking for expected behaviour (which is taken from the specification), which will obviously fail, and only then write the code implementing that behaviour.

                                    But, even then, that failing test should be contained to e.g. the feature branch you're working on, never going in a PR while it's still failing.

                                    Once that feature has been merged, then yes, the test should never fail again, because that indicates a new change having sabotaged some area of that feature. Even if the new feature is considered "essential" or "high priority" while the old feature is not, ignoring the failure is one of the easiest ways to build up technical debt, so you should damn well fix that now.

                                    • K [email protected]

                                      There are two different things mentioned here, which I feel I need to clarify:

                                      First, what you said about merging / creating a PR with broken tests. Absolutely you shouldn't do that, because you should only merge once the feature is finished. If a test doesn't work, then either it's testing for the wrong aspect and should be rewritten, or the functionality doesn't work 100% yet, so the feature isn't ready to get merged. Even if you're waiting for some other feature to get ready, because you need to integrate it or something, you're still waiting, so the feature isn't ready.

                                      At the same time, the OP's point about tests being supposed to fail at first isn't too far off the mark either, because that's precisely how TDD works. If you're applying that philosophy (which I personally condone), then that's exactly what you do: Write the test first, checking for expected behaviour (which is taken from the specification), which will obviously fail, and only then write the code implementing that behaviour.

                                      But, even then, that failing test should be contained to e.g. the feature branch you're working on, never going in a PR while it's still failing.

                                      Once that feature has been merged, then yes, the test should never fail again, because that indicates a new change having sabotaged some area of that feature. Even if the new feature is considered "essential" or "high priority" while the old feature is not, ignoring the failure is one of the easiest ways to build up technical debt, so you should damn well fix that now.

                                      [email protected] wrote (#29, edited):

                                      I concede that on a feature branch, before a PR is made, it's ok to have some failing tests, as long as the only tests failing are related to that feature. You should squash those commits after the feature is complete so that no commit has a failing test once it's on master.

                                      (I'm also a fan of TDD, although for me it means Type-Driven Development, but I digress...)

                                      • W [email protected]

                                        You're both right. You're both wrong.

                                        • You write tests for functionality before you write the functionality.
                                        • You code the functionality so the tests pass.
                                        • Then, and only then, the test becomes a regression test and is enabled in your CI automation.
                                        • If the test ever breaks again the merge is blocked.

                                        If you only write tests after you've written the code then the test will test that the code does what the code does. Your brain is already polluted and you're not capable of writing a good test.

                                        Having tests that fail is fine, as long as they're not part of your regression tests.

                                        [email protected] wrote (#30, edited):
                                        • You write tests for functionality before you write the functionality.
                                        • You code the functionality so the tests pass.
                                        • Then, and only then, the test becomes a regression test and is enabled in your CI automation.
                                        • If the test ever breaks again the merge is blocked.

                                        I disagree. Merging should be blocked on any failing test. No commit should be merged to master with a failing test. If you want to write tests first, then do that on a feature branch, but squash the commits properly before merging. Or add them as disabled first and enable after the feature is implemented. The enabled tests must always pass on every commit on master.

                                        • Q [email protected]

                                          The real problem is merging before waiting for that one slow CI pipeline to complete

                                          [email protected] wrote (#31):

                                          GitLab has a feature where you can set it to auto-merge when and if the CI completes successfully.
