agnos.is Forums

I've been working on an internal project for my job - a quarterly report on the most bleeding edge use cases of AI, and the stuff achieved is genuinely really impressive.

Technology · 22 Posts · 12 Posters
  • F [email protected]

    I've been working on an internal project for my job - a quarterly report on the most bleeding edge use cases of AI, and the stuff achieved is genuinely really impressive.

    So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

    The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.

    If you don't know how to do - for example - a Fourier transform - you lack the skills to use the tools effectively. That's no one's fault, not everyone needs that knowledge, but it does explain the gap between promise and delivery. It can only help you do what you already know how to do, faster.

    Same for coding: if you understand what your code does, it's a helpful tool for unsticking part of a problem, but it can't write the whole thing from scratch.

    [email protected] wrote (#10):

    LLMs could be useful for translation between programming languages. I recently asked one to generate server code from client code written in a different language, and the LLM-generated code was spot on!
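
    A minimal sketch of that kind of workflow, assuming the OpenAI Python SDK and a placeholder model name; the prompt pattern is the point, not the specific vendor:

        # Sketch: ask an LLM to generate matching server code from existing client code.
        # Assumes the OpenAI Python SDK (pip install openai, v1+) and OPENAI_API_KEY set in the
        # environment; the model name is a placeholder - any chat-capable model works the same way.
        from openai import OpenAI

        client_code = """
        // TypeScript client the generated server must satisfy
        export async function getUser(id: number): Promise<{ id: number; name: string }> {
            const res = await fetch(`/api/users/${id}`);
            return res.json();
        }
        """

        llm = OpenAI()
        response = llm.chat.completions.create(
            model="gpt-4o",  # placeholder
            messages=[{
                "role": "user",
                "content": (
                    "Here is a TypeScript API client:\n"
                    + client_code
                    + "\nWrite a Python Flask server implementing the matching endpoint."
                ),
            }],
        )
        # The output still needs review and tests - it is plausible text, not verified code.
        print(response.choices[0].message.content)

    Reviewing the generated server still requires knowing both languages, which is exactly the gap described above.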

    • F [email protected]

      I've been working on an internal project for my job - a quarterly report on the most bleeding edge use cases of AI, and the stuff achieved is genuinely really impressive.

      So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

      The answer is the chatbot. If you have the technical nous to program machine learning tools, they can accomplish truly stunning things at speeds not seen before.

      If you don't know how to do - for example - a Fourier transform - you lack the skills to use the tools effectively. That's no one's fault, not everyone needs that knowledge, but it does explain the gap between promise and delivery. It can only help you do what you already know how to do, faster.

      Same for coding: if you understand what your code does, it's a helpful tool for unsticking part of a problem, but it can't write the whole thing from scratch.

      [email protected] wrote (#11):

      Exactly - I find AI tools very useful and they save me quite a bit of time, but they're still tools. Better at some things than others, but the bottom line is that they're dependent on the person using them. Plus the more limited the problem scope, the better they can be.

      • M [email protected]

        Exactly - I find AI tools very useful and they save me quite a bit of time, but they're still tools. Better at some things than others, but the bottom line is that they're dependent on the person using them. Plus the more limited the problem scope, the better they can be.

        [email protected] wrote (#12):

        Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can't tell.

        • W [email protected]

          Yes, but the problem is that a lot of these AI tools are very easy to use, but the people using them are often ill-equipped to judge the quality of the result. So you have people who are given a task to do, and they choose an AI tool to do it and then call it done, but the result is bad and they can't tell.

          [email protected] wrote (#13):

          True, though this applies to most tools, no? For instance, I'm forced to sit through horrible presentations because someone was given a task to do, they created a PowerPoint (badly) and gave a presentation (badly). I don't know if this is inherently a problem with AI...

          • R [email protected]

            So why is the AI at the top end amazing yet everything we use is a piece of literal shit?

            Merely calling an LLM "AI" shows how unqualified you are to comment on the "successes".

            [email protected] wrote (#14):

            What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. Anyone with equivalent undergrad statistics, calculus and algebra can read them; you don't need a qualification, you could just Google each term you're unfamiliar with.

            While I understand your objection to the nomenclature, in this particular context all major AI-production houses, including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA), count LLMs as part of their AI collateral.

            • M [email protected]

              LLMs could be useful for translation between programming languages. I recently asked one to generate server code from client code written in a different language, and the LLM-generated code was spot on!

              [email protected] wrote (#15):

              I remain skeptical of using LLMs alone for this, but it might be relevant: DARPA is looking into using them for C-to-Rust translation. See the TRACTOR program.

              • F [email protected]

                What are you talking about? I read the papers published in mathematical and scientific journals and summarize the results in a newsletter. Anyone with equivalent undergrad statistics, calculus and algebra can read them; you don't need a qualification, you could just Google each term you're unfamiliar with.

                While I understand your objection to the nomenclature, in this particular context all major AI-production houses, including those only using them as internal tools to achieve other outcomes (e.g. NVIDIA), count LLMs as part of their AI collateral.

                [email protected] wrote (#16):

                The mechanism of machine learning based on training data, as used by LLMs, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable.
                Labeling this as "AI" is misleading at best, and in practice directly undermines democracy and freedom, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.

                People who condone the use of the term "AI" for this kind of statistical approach are naive at best, snake oil vendors, or outright enemies of humanity.
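
                A toy illustration of the purely statistical mechanism described above (a minimal sketch, not anything any vendor ships): a word-level bigram model picks each next word only from counts of what followed the previous word, so the output can look locally fluent while meaning nothing.

                    # Toy word-level bigram model: next-word prediction as pure counting,
                    # with no understanding of what the words mean.
                    import random
                    from collections import Counter, defaultdict

                    corpus = (
                        "the model predicts the next word from counts "
                        "the model has no idea what the words mean "
                        "the output looks fluent because the statistics are good"
                    ).split()

                    # Count which words follow which.
                    bigrams = defaultdict(Counter)
                    for prev, nxt in zip(corpus, corpus[1:]):
                        bigrams[prev][nxt] += 1

                    # Generate by repeatedly sampling a follower in proportion to its count.
                    word = "the"
                    generated = [word]
                    for _ in range(12):
                        followers = bigrams.get(word)
                        if not followers:
                            break
                        word = random.choices(list(followers), weights=list(followers.values()))[0]
                        generated.append(word)

                    print(" ".join(generated))  # locally plausible, globally meaningless

                An LLM is enormously larger and conditions on far more context, but the core operation being objected to here is still this kind of statistical prediction.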

                • L [email protected]

                  Not this again... LLMs are a subset of ML, which is a subset of AI.

                  AI is very, very broad and all of ML fits into it.

                  [email protected] wrote (#17):

                  No, and if you label statistics as AI, you contribute to the destruction of civil rights by lying to people.

                  • R [email protected]

                    The mechanism of machine learning based on training data, as used by LLMs, is at its core statistics without contextual understanding; the output is therefore only statistically predictable, not reliable.
                    Labeling this as "AI" is misleading at best, and in practice directly undermines democracy and freedom, because the impressively intelligent-looking output leads naive people to believe the software knows what it is talking about.

                    People who condone the use of the term "AI" for this kind of statistical approach are naive at best, snake oil vendors, or outright enemies of humanity.

                    [email protected] wrote (#18):

                    Can you name a company that has produced an LLM and doesn't refer to it generally as part of "AI"?

                    Can you name a company that produces AI tools and doesn't have an LLM as part of its "AI" suite of tools?

                    • F [email protected]

                      Can you name a company that has produced an LLM and doesn't refer to it generally as part of "AI"?

                      Can you name a company that produces AI tools and doesn't have an LLM as part of its "AI" suite of tools?

                      [email protected] wrote (#19):

                      How do those examples not fall into the category "snake oil vendor"?

                      • R [email protected]

                        How do those examples not fall into the category "snake oil vendor"?

                        [email protected] wrote (#20):

                        What would they have to produce to not be snake oil?

                        • F [email protected]

                          What would they have to produce to not be snake oil?

                          [email protected] wrote (#21):

                          Wrong question. "What would they have to market it as?" -> LLMs / machine learning / pattern recognition

                          • R [email protected]

                            Wrong question. "What would they have to market it as?" -> LLMs / machine learning / pattern recognition

                            [email protected] wrote (#22):

                            Wouldn't you just take issue with whatever the new name for it was instead? "Calling it pattern recognition is snake oil, it has no cognition", etc.
