agnos.is Forums

LLVM

Programmer Humor · 49 posts, 19 posters
[email protected] wrote:

    Yeah, I think Go's compiler is so fast partially because it doesn't use LLVM

[email protected] wrote (#30):

    TinyGo isn’t that much slower and it uses LLVM

    • P [email protected]

      cool new languages

      COBOL

[email protected] wrote (#31, last edited by [email protected]):

      I guess I should have put a /s but I thought it was pretty obvious. The 68 in Algol 68 is 1968. COBOL is from 1959. Modula-2 is from 1977.

      My point exactly was that all the hot new languages are built with LLVM while the “new” language options on GCC are languages from the 50’s, 60’s, and 70’s.

      I am not even exaggerating. That is just what the projects look like right now.

      • K [email protected]

        that’s just how they are made.

        Can confirm, even the little training compiler we made at Uni for a subset of Java (Javali) had a backend and frontend.

        I can't imagine trying to spit out machine code while parsing the input without an intermediary AST stage. It was complicated enough with the proper split.

[email protected] wrote (#32):

        I have built single pass compilers that do everything in one shot without an AST. You are not going to get great error messages or optimization though.
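For the curious, the one-shot approach can be sketched in a few lines (a toy example, not from any real compiler): instructions are emitted the moment each operand or operator is parsed, so no AST ever exists.

```rust
// Single-pass compiler sketch: parses arithmetic like "1+2*3" and emits
// stack-machine instructions during parsing -- no AST is ever built.
// Toy code for illustration, not any particular compiler.

#[derive(Debug, PartialEq)]
enum Op { Push(i64), Add, Mul }

struct Compiler<'a> {
    src: &'a [u8],
    pos: usize,
    code: Vec<Op>, // instructions appear here as a side effect of parsing
}

impl<'a> Compiler<'a> {
    fn new(src: &'a str) -> Self {
        Compiler { src: src.as_bytes(), pos: 0, code: Vec::new() }
    }

    fn peek(&self) -> Option<u8> { self.src.get(self.pos).copied() }

    // expr := term ('+' term)*
    fn expr(&mut self) {
        self.term();
        while self.peek() == Some(b'+') {
            self.pos += 1;
            self.term();
            self.code.push(Op::Add); // emitted immediately after its operands
        }
    }

    // term := number ('*' number)*
    fn term(&mut self) {
        self.number();
        while self.peek() == Some(b'*') {
            self.pos += 1;
            self.number();
            self.code.push(Op::Mul);
        }
    }

    fn number(&mut self) {
        let mut n = 0i64;
        while let Some(c) = self.peek() {
            if !c.is_ascii_digit() { break; }
            n = n * 10 + (c - b'0') as i64;
            self.pos += 1;
        }
        self.code.push(Op::Push(n));
    }
}

// Tiny stack VM to check the emitted code actually works.
fn run(code: &[Op]) -> i64 {
    let mut stack = Vec::new();
    for op in code {
        match op {
            Op::Push(n) => stack.push(*n),
            Op::Add => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a + b); }
            Op::Mul => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a * b); }
        }
    }
    stack.pop().unwrap()
}

fn main() {
    let mut c = Compiler::new("1+2*3");
    c.expr();
    println!("{}", run(&c.code)); // 1 + (2*3) = 7
}
```

The error-message point is visible even here: by the time a `+` is consumed, the left operand's source location is already gone, which is exactly why diagnostics and optimization suffer without an intermediate representation.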

        • L [email protected]

          I guess I should have put a /s but I thought it was pretty obvious. The 68 in Algol 68 is 1968. COBOL is from 1959. Modula-2 is from 1977.

          My point exactly was that all the hot new languages are built with LLVM while the “new” language options on GCC are languages from the 50’s, 60’s, and 70’s.

          I am not even exaggerating. That is just what the projects look like right now.

[email protected] wrote (#33):

          If Algol68 is from 1968, shouldn't Modula-2 be from 1898?

[email protected] wrote:

            Garbage collection is analyzing the heap and figuring out what can be collected. Reference counting requires the code to increment or decrement a counter and frees memory when the counter hits zero. They’re fundamentally different approaches. Also reference counting isn’t necessarily automatic, Objective-C had manual reference counting since day one.

[email protected] wrote (#34):

            It's still mentioned as one of the main approaches to garbage collection in the garbage collection Wikipedia article.

            • D [email protected]

              It's a post rust language.

              By your definition any automatic memory management is garbage collection, including rust!

              Did you think rust doesn't free up memory for you? That would be the biggest memory leak in history! No! Rust does reference counting, it just makes sure that that number is always one! What did you think the borrow checker was for?

              In roc, because the platform is in charge of memory management, it can optimise, so that a web server can allocate an arena for each client, a game loop can calculate what it needs in advance etc etc.

              But like I say, they do a lot of work on avoiding cache misses and branch mispredictions, which are their own source of "stop the world while I page in from main memory" or "stop the pipeline while I build a new one". If it was doing traditional garbage collection, that would be an utterly pointless microoptimisation.

              Rust isn't a religion. Don't treat it like one.

              When it was very new a bunch of C programmers shit on its ideas and said C was the only real systems programming language, but rust, which was pretty much Linear ML dressed up in C style syntax came from hyper weird functional programming language to trusted systems programming language. Why? Because it does memory management sooooo much better than C and is just about as fast. Guess what roc is doing? Memory management soooooo much better than C, and sooooo much less niggly and hard to get right than the borrow checker and is just about as fast.

              Plenty of beginners program in rust by just throwing clone at every error the borrow checker sends them, or even unsafe! Bye bye advantages of rust, because it was hard to please. Roc calculates from your code whether it needs to clone (eg once for a reference to an unmodified value, each time for an initial value for the points in a new data structure), and like rust, frees memory when it's not being used.

              Rust does manual cloning. Roc does calculated cloning. Rust wins over C for memory safety by calculating when to free rather than using manual free, totally eliminating a whole class of bugs. Roc could win over rust by calculating when to clone, eliminating a whole class of unnecessary allocation and deallocation. Don't be so sure that no one could do better than rust. And the devXP in rust is really poor.

[email protected] wrote (#35, last edited by [email protected]):

              Did you think rust doesn’t free up memory for you? That would be the biggest memory leak in history! No! Rust does reference counting, it just makes sure that that number is always one! What did you think the borrow checker was for?

              There is no runtime garbage collection in Rust. Given a legal program, it can detect where free-type instructions are needed at compile time, and adds them. From there on it works like C, but with no memory leaks or errors because machines are good at being exactly correct. If you want to say that's just a reference counting algorithm that's so simple it's not there, sure, I guess you can do that.

              Roc has runtime overhead to do garbage collection, it says so right on their own page. It might be a post-Rust language but this feels like the same conversation I've had about D and... I can't even remember now. Maybe Roc is a cool, innovative language. It's new to me. But, it doesn't sound like it's doing anything fundamentally new on that specific part.

              Edit: Reading your follow up to the other person, it sounds like it has both a Rust-style compile time algorithm of some sort, and then (reference count-based) garbage collection at run time for parts of the program that would just be illegal in Rust.
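A sketch of what "detects where free-type instructions are needed at compile time, and adds them" looks like in practice (toy code, with a hypothetical `Resource` type): the drop call sites are fixed in the binary, so deallocation happens at deterministic points rather than whenever a collector decides to run.

```rust
use std::cell::RefCell;

// A log of drop events, to make the compiler-inserted frees visible.
thread_local! {
    static LOG: RefCell<Vec<&'static str>> = RefCell::new(Vec::new());
}

struct Resource(&'static str);

impl Drop for Resource {
    fn drop(&mut self) {
        // No collector runs; rustc inserted this call at a fixed point.
        LOG.with(|l| l.borrow_mut().push(self.0));
    }
}

fn main() {
    let _a = Resource("a");
    {
        let _b = Resource("b");
    } // _b's drop happens exactly here, at the closing brace
    LOG.with(|l| assert_eq!(*l.borrow(), vec!["b"])); // _a not yet freed
} // _a is freed here, when main's scope ends
```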

              • B [email protected]

                To be fair, the drop/dealloc "pause" is very different from what people usually mean when they say "garbage collection pause", i.e. stop-the-world (...or at least a slice of the world).

[email protected] wrote (#36):

                Yeah, it might be better, I don't actually know. It's not as novel as OP maybe thinks it is, though.

[email protected] wrote:

                  Garbage collection is analyzing the heap and figuring out what can be collected. Reference counting requires the code to increment or decrement a counter and frees memory when the counter hits zero. They’re fundamentally different approaches. Also reference counting isn’t necessarily automatic, Objective-C had manual reference counting since day one.

[email protected] wrote (#37):

                  "Garbage collection" is ambiguous, actually; reference counting is traditionally considered a kind of "garbage collection". The type you're thinking of is called "tracing garbage collection," but the term "garbage collection" is often used to specifically mean "tracing garbage collection."

                  • C [email protected]

                    Yeah, it might be better, I don't actually know. It's not as novel as OP maybe thinks it is, though.

[email protected] wrote (#38):

                    That's fair; Python, Swift, and most Lisps all use or have previously used reference-counting. But the quoted sentence isn't wrong, since it said no "garbage collection pauses" rather than "garbage collection."

                    • C [email protected]

                      Did you think rust doesn’t free up memory for you? That would be the biggest memory leak in history! No! Rust does reference counting, it just makes sure that that number is always one! What did you think the borrow checker was for?

                      There is no runtime garbage collection in Rust. Given a legal program, it can detect where free-type instructions are needed at compile time, and adds them. From there on it works like C, but with no memory leaks or errors because machines are good at being exactly correct. If you want to say that's just a reference counting algorithm that's so simple it's not there, sure, I guess you can do that.

                      Roc has runtime overhead to do garbage collection, it says so right on their own page. It might be a post-Rust language but this feels like the same conversation I've had about D and... I can't even remember now. Maybe Roc is a cool, innovative language. It's new to me. But, it doesn't sound like it's doing anything fundamentally new on that specific part.

                      Edit: Reading your follow up to the other person, it sounds like it has both a Rust-style compile time algorithm of some sort, and then (reference count-based) garbage collection at run time for parts of the program that would just be illegal in Rust.

[email protected] wrote (#39):

                      Roc has runtime overhead to do garbage collection, it says so right on their own page.

I was sceptical about your assertion because the language authors made a design decision not to do garbage collection. So I did a Google search for garbage on roc-lang.org to try to find evidence of your claim. It doesn't say it does garbage collection. It does say overhead, but you're talking about it like it's a big slow thing that takes up time and makes thread pauses, when it's a small thing like array bounds checking. You do believe in array bounds checking, don't you?

                      So no, that's not what it says and you're using the phrase garbage collection to mean a much wider class of things than is merited. Garbage collection involves searching the heap for data which has fallen out of scope and freeing that memory up. It's slow and it necessitates pausing the main thread, causing unpredictably long delays. Roc does not do this.

                      Here's what the website actually says on the topic.

                      https://www.roc-lang.org/fast

                      Roc is a memory-safe language with automatic memory management. Automatic memory management has some unavoidable runtime overhead, and memory safety based on static analysis rules out certain performance optimizations—which is why unsafe Rust can outperform safe Rust. This gives Roc a lower performance ceiling than languages which support memory unsafety and manual memory management, such as C, C++, Zig, and Rust.

                      Just in case you missed it, that was unsafe rust that lacks the overheads. If you're advocating for using unsafe to gain a tiny performance benefit, you may as well be writing C, or zig, which at least has some tools to cope with all that stuff.

                      https://www.roc-lang.org/fast

                      When benchmarking compiled Roc programs, the goal is to have them normally outperform the fastest mainstream garbage-collected languages (for example, Go, C#, Java, and JavaScript)

                      Just in case you missed it, roc is not in the list of garbage collected languages.

                      https://www.roc-lang.org/platforms

                      The bigger benefit is tailoring memory management itself based on the domain. For example, nea is a work-in-progress Web server which performs arena allocation on each request handler. In Roc terms, this means the host's implementation of malloc can allocate into the current handler's arena, and free can be a no-op. Instead, the arena can be reset when the response has been sent.

                      In this design, heap allocations in a Web server running on nea are about as cheap as stack allocations, and deallocations are essentially free. This is much better for the server's throughput, latency, and predictability than (for example) having to pay for periodic garbage collection!

                      Summary: roc doesn't have the performance disadvantages of garbage collected languages because it's not a garbage collected language.
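The arena pattern that excerpt describes can be sketched as a toy bump allocator (an illustration of the idea only, not nea's actual code):

```rust
// Toy bump-allocator "arena", the per-request pattern the Roc docs
// describe: allocation is a pointer bump, free is a no-op, and the whole
// arena is reset in one step when the response has been sent.
struct Arena {
    buf: Vec<u8>,
    used: usize,
}

impl Arena {
    fn with_capacity(n: usize) -> Self {
        Arena { buf: vec![0; n], used: 0 }
    }

    // "malloc": bump a cursor. No per-object bookkeeping at all.
    fn alloc(&mut self, n: usize) -> Option<&mut [u8]> {
        if self.used + n > self.buf.len() { return None; }
        let start = self.used;
        self.used += n;
        Some(&mut self.buf[start..start + n])
    }

    // "free" for the whole request: reset the cursor, keep the buffer.
    fn reset(&mut self) {
        self.used = 0;
    }
}

fn main() {
    let mut arena = Arena::with_capacity(1024);
    // Handle a "request": several allocations, each O(1).
    arena.alloc(100).unwrap();
    arena.alloc(200).unwrap();
    assert_eq!(arena.used, 300);
    // "Response sent": everything is reclaimed at once.
    arena.reset();
    assert_eq!(arena.used, 0);
}
```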

                      • C [email protected]

                        I don't know whatever that language is doing is called, but it's not reference counting. It's doing some kind of static code analysis, and then it falls back to reference counting.

                        If you call that reference counting, what stops you from calling garbage collectors reference counting too? They certainly count references! Is the stack a reference count too? It keeps track of all the data in a stack frame, some of it might be references!

[email protected] wrote (#40):

                        Garbage collection is pausing the main thread while you go searching the heap for memory to free up. It's slow and unpredictable about when it'll happen or how long it'll take. That's a very different process indeed and roc doesn't do it.

                        Whether you call it static reference counting or not, when roc chooses in-place mutation it's because it would have satisfied the borrow checker. It can do a wider class of such things when stuff goes out of scope. There's a webserver platform that does arena allocation, often swerving cache misses as a result, but crucially frees the entire arena in one step. Freeing up all the tiny little bits of memory for lots of individual stuff as you go along as rust would do would be far slower.

                        Calling that kind of thing garbage collection is I think very misleading indeed.

Optimising memory management for each problem domain or platform gives you real efficiencies.

                        • C [email protected]

                          I don't know what you read on my reply. But your reply makes no sense.

                          Let me rephrase it if you prefer:

Claiming that Rust's borrow checker is reference counting is hugely misleading, since the borrow checker was made specifically to prevent the runtime cost of garbage collection and reference counting while still being safe.

                          To anyone unaware, it may read as "rust uses reference counting to avoid reference counting, but they just call it borrow checking". Which is objectively false, since rust's solution doesn't require counting references at runtime.

                          I don't know what mutable string or any of the other rant has to do with reference counting. Looks like you're just looking to catch a "rust evangelist" in some kind of trap. Without even reading what I said.

[email protected] wrote (#41):

                          A boolean is a non-negative integer with a maximum of one, often literally, but I see that calling the borrow checker a static reference counter with a maximum of one is frustrating you in the same way that you calling roc's reference counting a garbage collector is frustrating me.

                          The string example is because the thing you're calling runtime overhead is cheap compared to freeing up a string that's been extended even just a couple of times. It's not a trap. It's an example where freeing the string itself could be considerably more expensive than the DEC c and BRZ that you're calling overhead.

                          It's a bit hypocritical to tell me off for not reading what you said when you haven't bothered to figure out the relevance of the memory management examples I gave and just dismissed them out of hand as "rant" and a "trap".

                          • B [email protected]

                            That's fair; Python, Swift, and most Lisps all use or have previously used reference-counting. But the quoted sentence isn't wrong, since it said no "garbage collection pauses" rather than "garbage collection."

[email protected] wrote (#42, last edited by [email protected]):

                            Yes, I read or interpreted that wrong at first.

                            • D [email protected]

                              Roc has runtime overhead to do garbage collection, it says so right on their own page.

I was sceptical about your assertion because the language authors made a design decision not to do garbage collection. So I did a Google search for garbage on roc-lang.org to try to find evidence of your claim. It doesn't say it does garbage collection. It does say overhead, but you're talking about it like it's a big slow thing that takes up time and makes thread pauses, when it's a small thing like array bounds checking. You do believe in array bounds checking, don't you?

                              So no, that's not what it says and you're using the phrase garbage collection to mean a much wider class of things than is merited. Garbage collection involves searching the heap for data which has fallen out of scope and freeing that memory up. It's slow and it necessitates pausing the main thread, causing unpredictably long delays. Roc does not do this.

                              Here's what the website actually says on the topic.

                              https://www.roc-lang.org/fast

                              Roc is a memory-safe language with automatic memory management. Automatic memory management has some unavoidable runtime overhead, and memory safety based on static analysis rules out certain performance optimizations—which is why unsafe Rust can outperform safe Rust. This gives Roc a lower performance ceiling than languages which support memory unsafety and manual memory management, such as C, C++, Zig, and Rust.

                              Just in case you missed it, that was unsafe rust that lacks the overheads. If you're advocating for using unsafe to gain a tiny performance benefit, you may as well be writing C, or zig, which at least has some tools to cope with all that stuff.

                              https://www.roc-lang.org/fast

                              When benchmarking compiled Roc programs, the goal is to have them normally outperform the fastest mainstream garbage-collected languages (for example, Go, C#, Java, and JavaScript)

                              Just in case you missed it, roc is not in the list of garbage collected languages.

                              https://www.roc-lang.org/platforms

                              The bigger benefit is tailoring memory management itself based on the domain. For example, nea is a work-in-progress Web server which performs arena allocation on each request handler. In Roc terms, this means the host's implementation of malloc can allocate into the current handler's arena, and free can be a no-op. Instead, the arena can be reset when the response has been sent.

                              In this design, heap allocations in a Web server running on nea are about as cheap as stack allocations, and deallocations are essentially free. This is much better for the server's throughput, latency, and predictability than (for example) having to pay for periodic garbage collection!

                              Summary: roc doesn't have the performance disadvantages of garbage collected languages because it's not a garbage collected language.

[email protected] wrote (#43, last edited by [email protected]):

                              Just in case you missed it, that was unsafe rust that lacks the overheads.

                              It says some overheads. It's different overheads, because Rust does not have reference counting garbage collection, even when safe.

                              Either you should go back and read what I said about reference counting being a runtime garbage collecting algorithm, or I think we're just done. Why say more if it's ignored anyway?

                              I don't think I'm the zealot here.

                              • C [email protected]

                                Just in case you missed it, that was unsafe rust that lacks the overheads.

                                It says some overheads. It's different overheads, because Rust does not have reference counting garbage collection, even when safe.

                                Either you should go back and read what I said about reference counting being a runtime garbage collecting algorithm, or I think we're just done. Why say more if it's ignored anyway?

                                I don't think I'm the zealot here.

[email protected] wrote (#44):

                                Well if you're calling any form of automatic memory management garbage collection, then it's only C that doesn't have garbage collection.

                                Rust does have explicit reference counting with Rc<T> and Arc<T>.

                                I'm trying to explain to you that static analysis that limits references to one can be done in a similar way without the limit of one (especially with the assumption of immutability) whilst retaining in-place mutation where the count really is one. It upsets you when I try to explain that it's a generalisation of the borrow checker (without the programmer pain) by calling the borrow checker a static (compile time) reference counter with a limit of one. I'm making a comparison. But don't be surprised if a lot of programming languages implement their boolean variables as an unsigned int with a maximum of one.
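For what it's worth, the runtime counter behind `Rc<T>` is directly observable in Rust's standard library, which makes the comparison concrete (a minimal sketch):

```rust
use std::rc::Rc;

fn main() {
    // Rc<T> is Rust's opt-in runtime reference counting: clone() increments
    // the count, drop() decrements it, and the value is freed at zero.
    let a = Rc::new(String::from("shared"));
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a);       // an INC/DEC-style counter bump at runtime
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b);                     // counter back to one; nothing freed yet
    assert_eq!(Rc::strong_count(&a), 1);
    // When `a` goes out of scope the count hits zero and the String is freed.
}
```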

                                If roc does the equivalent of putting a call to drop where there were two or three references that fell out of scope rather than one, in what sense is that more overhead than rust calling drop when one reference went out of scope? Rust is still "garbage collecting" the references that turned up on the RHS of assignment statements as it goes along.

                                The overhead we're talking about with reference counting is like DEC r BRZ. It's like array bounds checking. Yes, it's an overhead, but no, it's not worth making a big deal about it if you get to allocate arrays of sizes unknown at compile time or you get to make multiple references without messing with keywords and reference symbols, fighting the borrow checker all day long or manually adding clones.

                                It says some overheads. It’s different overheads,

                                What? Overheads are overheads. Either they're small and useful like roc's reference counting when it turns out to need to be at runtime or array bounds checking, or rust calling drop when some variable falls out of scope, or they're big, stop the main thread at random points and take a long time, like garbage collection in garbage collected languages like java.

                                Why say more if it’s ignored anyway?

                                I know - I wrote a whole bunch of stuff and this other person just ignored every single nuance and explanation and kept saying the same thing again and again without trying to understand a new thing they didn't know about before, just repeating their favourite criticisms of other programming languages whether they applied or not. Oh wait, that was you.

                                I don’t think I’m the zealot here.

                                Interesting.

                                • D [email protected]

                                  A boolean is a non-negative integer with a maximum of one, often literally, but I see that calling the borrow checker a static reference counter with a maximum of one is frustrating you in the same way that you calling roc's reference counting a garbage collector is frustrating me.

                                  The string example is because the thing you're calling runtime overhead is cheap compared to freeing up a string that's been extended even just a couple of times. It's not a trap. It's an example where freeing the string itself could be considerably more expensive than the DEC c and BRZ that you're calling overhead.

                                  It's a bit hypocritical to tell me off for not reading what you said when you haven't bothered to figure out the relevance of the memory management examples I gave and just dismissed them out of hand as "rant" and a "trap".

[email protected] wrote (#45):

I haven't read 90% of your comment, since it's outside the topic of the discussion. The "trap" is trying to argue with me about something I haven't even mentioned.

                                  • L [email protected]

                                    GCC is adding cool new languages too!

                                    They just recently added COBOL and Modula-2. Algol 68 is coming in GCC 16.

[email protected] wrote (#46):
                                    BEGIN    
                                        BEGIN
                                            Wow, 
                                            Modula 2! 
                                        END;    
                                        I remember Modula 2.
                                    END.
                                    
                                    • L [email protected]

                                      I guess I should have put a /s but I thought it was pretty obvious. The 68 in Algol 68 is 1968. COBOL is from 1959. Modula-2 is from 1977.

                                      My point exactly was that all the hot new languages are built with LLVM while the “new” language options on GCC are languages from the 50’s, 60’s, and 70’s.

                                      I am not even exaggerating. That is just what the projects look like right now.

[email protected] wrote (#47):

                                      I had my suspicions that that's what you were going for, I just thought I'd make it obvious.

                                      • L [email protected]

                                        I have built single pass compilers that do everything in one shot without an AST. You are not going to get great error messages or optimization though.

[email protected] wrote (#48):

                                        Oh! Okay, that's interesting to me! What was the input language? I imagine it might be a little more doable if it's closer to hardware?

                                        I don't remember that well, but I think the object oriented stuff with dynamic dispatch was hard to deal with.

                                        • L [email protected]

                                          I guess I should have put a /s but I thought it was pretty obvious. The 68 in Algol 68 is 1968. COBOL is from 1959. Modula-2 is from 1977.

                                          My point exactly was that all the hot new languages are built with LLVM while the “new” language options on GCC are languages from the 50’s, 60’s, and 70’s.

                                          I am not even exaggerating. That is just what the projects look like right now.

[email protected] wrote (#49):

I would guess those languages are added for preservation and compatibility reasons, which is also important.
