agnos.is Forums


Brian Eno: “The biggest problem about AI is not intrinsic to AI. It’s to do with the fact that it’s owned by the same few people”

Technology · 157 Posts · 90 Posters
  • sturgist@lemmy.caS [email protected]

    Oh, and it also hallucinates.

    This is arguably a feature depending on how you use it. I'm absolutely not an AI acolyte. It's highly problematic in every step. Resource usage. Training using illegally obtained information. This wouldn't necessarily be an issue if people who aren't tech broligarchs weren't routinely getting their lives destroyed for this, and if the people creating the material being used for training also weren't being fucked....just capitalism things I guess. Attempts by capitalists to cut workers out of the cost/profit equation.

    If you're using AI to make music, images or video... you're depending on those hallucinations.
    I run a Stable Diffusion model on my laptop. It's kinda neat. I don't make things for a profit, and now that I've played with it a bit I'll likely delete it soon. I think there's room for people to locally host their own models, preferably trained with legally acquired data, to be used as a tool to assist with the creative process. The current monetisation model for AI is fuckin criminal....

    [email protected] replied (#24):

    Tell that to the man who was accused by Gen AI of having murdered his children.

    [email protected] wrote (#25):

      They're not illegally harvesting anything. Copyright law is all about distribution. As much as everyone loves to think that when you copy something without permission you're breaking the law the truth is that you're not. It's only when you distribute said copy that you're breaking the law (aka violating copyright).

      All those old school notices (e.g. "FBI Warning") are 100% bullshit. Same for the warning the NFL spits out before games. You absolutely can record it! You just can't share it (or show it to more than a handful of people but that's a different set of laws regarding broadcasting).

      I download AI (image generation) models all the time. They range in size from 2GB to 12GB. You cannot fit the petabytes of data they used to train the model into that space. No compression algorithm is that good.

      The same is true for LLM, RVC (audio models) and similar models/checkpoints. I mean, think about it: If AI is illegally distributing millions of copyrighted works to end users they'd have to be including it all in those files somehow.

      Instead of thinking of an AI model as a collection of copyrighted works, think of it more like a rough sketch of a mashup of copyrighted works. Like if you asked a person to make a Godzilla-themed My Little Pony and what you got was that person's interpretation of what Godzilla combined with MLP would look like. Every artist would draw it differently. Every author would describe it differently. Every voice actor would voice it differently.

      Those differences are the equivalent of the random seed provided to AI models. If you throw something at a random number generator enough times you could--in theory--get the works of Shakespeare. Especially if you ask it to write something just like Shakespeare. However, that doesn't mean the AI model literally copied his works. It's just making its best guess (it's literally guessing! That's how it works!).
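      The seed point above can be sketched in a few lines. Everything here is hypothetical (the function, the word lists), just to show that the same prompt with a different seed yields a different "interpretation", deterministically:

      ```python
      import random

      # Hypothetical toy "generator": same prompt, different seed,
      # different output -- like different artists drawing the same mashup.
      def sketch(prompt: str, seed: int) -> str:
          rng = random.Random(seed)  # per-call RNG, so results are reproducible
          style = rng.choice(["watercolor", "ink", "pixel art", "oil paint"])
          mood = rng.choice(["menacing", "cheerful", "brooding", "playful"])
          return f"{prompt}, {style}, {mood}"

      # Identical seed -> identical output; a new seed -> (usually) a new take.
      print(sketch("Godzilla-themed My Little Pony", seed=1))
      print(sketch("Godzilla-themed My Little Pony", seed=2))
      ```

      The same seed always reproduces the same output, which is roughly why image tools expose the seed as a parameter.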

    • [email protected]: This post did not contain any content.
      [email protected] wrote (#26):

        COO > Return.

    • [email protected] wrote:

          But the people with the money for the hardware are the ones training it to put more money in their pockets. That's mostly what it's being trained to do: make rich people richer.

          [email protected] replied (#27):

          This completely ignores all the endless (open) academic work going on in the AI space. Loads of universities have AI data centers now and are doing great research that is being published out in the open for anyone to use and duplicate.

          I've downloaded several academic models and all commercial models and AI tools are based on all that public research.

          I run AI models locally on my PC and you can too.

          • Guest wrote:

            Yah, I'm an AI researcher, and with the weights released for DeepSeek anybody can run an enterprise-level AI assistant. To run the full model natively, it does require $100k in GPUs, but if one had that hardware it could easily be fine-tuned with something like LoRA for almost any application. Then that model can be distilled and quantized to run on gaming GPUs.

            It's really not that big of a barrier. Yes, $100k in hardware is, but from a non-profit entity perspective that is peanuts.

            Also, adding a vision encoder for images to DeepSeek would not be theoretically that difficult, for the same reason. In fact, I'm working on research right now that finds GPT-4o and o1 have similar vision capabilities, implying it's the same first-layer vision encoder and then textual chain-of-thought tokens are read by subsequent layers. (This is a very recent insight as of last week by my team, so if anyone can disprove that, I would be very interested to know!)
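            The "distilled and quantized" step mentioned above can be illustrated with a minimal sketch. Real toolchains (llama.cpp's quantizers, GPTQ, etc.) are far more sophisticated; this only shows the core trade of precision for memory by mapping float weights to int8:

            ```python
            # Minimal symmetric int8 quantization sketch: one float scale plus
            # one signed byte per weight (~4x smaller than float32 storage).
            def quantize(weights):
                scale = max(abs(w) for w in weights) / 127 or 1.0  # guard all-zero case
                q = [round(w / scale) for w in weights]
                return q, scale

            def dequantize(q, scale):
                return [v * scale for v in q]

            w = [0.12, -0.5, 0.33, 1.0]
            q, scale = quantize(w)
            restored = dequantize(q, scale)
            # restored is close to w; each error is within one quantization step (scale)
            ```

            Models quantized this way fit in far less VRAM, at the cost of a small, bounded rounding error per weight, which is why distilled checkpoints run on gaming GPUs.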

            [email protected] replied (#28):

            Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (like Sam Altman is purporting without evidence)? Or is it just likely that they came upon the same logical solution?

            Not that it matters, of course! Just curious.

            • [email protected]: This post did not contain any content.
              [email protected] wrote (#29):

              Ollama and Stable Diffusion are free, open-source software. Nobody is forcing anybody to use ChatGPT.

              • [email protected] wrote:

                Tell that to the man who was accused by Gen AI of having murdered his children.

                [email protected] replied (#30):

                Ok? If you read what I said, you'll see that I'm not talking about using ChatGPT as an information source. I strongly believe that using LLMs as a search tool is incredibly stupid, for exactly this kind of reason: they are so very confident even when relaying inaccurate or completely fictional information.
                What I was trying to say, and I get that I may not have communicated that very well, was that Generative Machine Learning Algorithms might find a niche as creative process assistant tools. Not as a way to search for publicly available information on your neighbour or boss or partner. Not as a way to search for case law while researching the defence of your client in a lawsuit. And it should never be relied on to give accurate information about what colour the sky is, or the best ways to make a custard using gasoline.

                Does that clarify things a bit? Or do you want to carry on using an LLM in a way that has been shown to be unreliable, at best, as some sort of gotcha...when I wasn't talking about that as a viable use case?

                • [email protected]: This post did not contain any content.
                  [email protected] wrote (#31):

                  AI has a vibrant open source scene and is definitely not owned by a few people.

                  A lot of the data to train it is only owned by a few people though. It is record companies and publishing houses winning their lawsuits that will lead to dystopia. It's a shame to see so many actually cheering them on.

                  • [email protected] wrote:

                    Ok? If you read what I said, you'll see that I'm not talking about using ChatGPT as an information source. I strongly believe that using LLMs as a search tool is incredibly stupid, for exactly this kind of reason: they are so very confident even when relaying inaccurate or completely fictional information.
                    What I was trying to say, and I get that I may not have communicated that very well, was that Generative Machine Learning Algorithms might find a niche as creative process assistant tools. Not as a way to search for publicly available information on your neighbour or boss or partner. Not as a way to search for case law while researching the defence of your client in a lawsuit. And it should never be relied on to give accurate information about what colour the sky is, or the best ways to make a custard using gasoline.

                    Does that clarify things a bit? Or do you want to carry on using an LLM in a way that has been shown to be unreliable, at best, as some sort of gotcha...when I wasn't talking about that as a viable use case?

                    [email protected] replied (#32):

                    lol. I was just saying in another comment that Lemmy users 1. assume a level of knowledge in the person they are talking to or interacting with that may or may not be present in reality, and 2. are often intentionally mean to the people they respond to, so much so that they seem to take offence on purpose at even the most innocuous of comments. And here you are, downvoting my valid point, which is that regardless of whether we view it as a reliable information source, that's what it is being marketed as and results like this harm both the population using it, and the people who have found good uses for it. And no, I don't actually agree that it's good for creative processes as assistance tools and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it and Generative AI does not have that benefit and therefore is problematic.

                    • [email protected] wrote:

                      They're not illegally harvesting anything. Copyright law is all about distribution. As much as everyone loves to think that when you copy something without permission you're breaking the law the truth is that you're not. It's only when you distribute said copy that you're breaking the law (aka violating copyright).

                      All those old school notices (e.g. "FBI Warning") are 100% bullshit. Same for the warning the NFL spits out before games. You absolutely can record it! You just can't share it (or show it to more than a handful of people but that's a different set of laws regarding broadcasting).

                      I download AI (image generation) models all the time. They range in size from 2GB to 12GB. You cannot fit the petabytes of data they used to train the model into that space. No compression algorithm is that good.

                      The same is true for LLM, RVC (audio models) and similar models/checkpoints. I mean, think about it: If AI is illegally distributing millions of copyrighted works to end users they'd have to be including it all in those files somehow.

                      Instead of thinking of an AI model as a collection of copyrighted works, think of it more like a rough sketch of a mashup of copyrighted works. Like if you asked a person to make a Godzilla-themed My Little Pony and what you got was that person's interpretation of what Godzilla combined with MLP would look like. Every artist would draw it differently. Every author would describe it differently. Every voice actor would voice it differently.

                      Those differences are the equivalent of the random seed provided to AI models. If you throw something at a random number generator enough times you could--in theory--get the works of Shakespeare. Especially if you ask it to write something just like Shakespeare. However, that doesn't mean the AI model literally copied his works. It's just making its best guess (it's literally guessing! That's how it works!).

                      [email protected] replied (#33):

                      The problem with being like… super pedantic about definitions, is that you often miss the forest for the trees.

                      Illegal or not, it seems pretty obvious to me that people saying “illegal” in this thread and others probably mean “unethical”… which is pretty clearly true.

                      • [email protected] wrote:

                        Dead Internet theory has never been a bigger threat. I believe that’s the number one danger - endless quantities of advertising and spam shoved down our throats from every possible direction.

                        [email protected] replied (#34):

                        We’re pretty close to it; most videos on YouTube and most websites exist purely so some advertiser can pay that person for a review or recommendation.

                        • Guest wrote:

                          And yet, he released his latest album exclusively on Apple Music.

                          [email protected] replied (#35): (no content)
                          [email protected] wrote (#36):

                            Well, the harvesting isn’t illegal (yet), and I think it probably shouldn’t be.

                            It’s scraping, and it’s hard to make that part illegal without collateral damage.

                            But that doesn’t mean we should do nothing about these AI fuckers.

                            In the words of Cory Doctorow:

                            Web-scraping is good, actually.

                            Scraping against the wishes of the scraped is good, actually.

                            Scraping when the scrapee suffers as a result of your scraping is good, actually.

                            Scraping to train machine-learning models is good, actually.

                            Scraping to violate the public’s privacy is bad, actually.

                            Scraping to alienate creative workers’ labor is bad, actually.

                            We absolutely can have the benefits of scraping without letting AI companies destroy our jobs and our privacy. We just have to stop letting them define the debate.
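                            Doctorow's distinction is about what scraping is used for, not its mechanics. For what it's worth, the conventional etiquette layer on top is tiny; here is a minimal sketch using only Python's standard library (the rules string and URLs are made up):

                            ```python
                            from urllib import robotparser

                            # Hypothetical robots.txt content; even when scraping itself is fine,
                            # a courteous scraper still checks the site's stated preferences.
                            rules = """\
                            User-agent: *
                            Disallow: /private/
                            """

                            rp = robotparser.RobotFileParser()
                            rp.parse(rules.splitlines())

                            print(rp.can_fetch("*", "https://example.com/articles/1"))    # allowed
                            print(rp.can_fetch("*", "https://example.com/private/data"))  # disallowed
                            ```

                            Note that robots.txt is a voluntary convention, not an access control: it expresses the scrapee's wishes, which is exactly the axis Doctorow is arguing along.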

                            • [email protected] wrote:

                              lol. I was just saying in another comment that lemmy users 1. Assume a level of knowledge of the person they are talking to or interacting with that may or may not be present in reality, and 2. Are often intentionally mean to the people they respond to so much so that they seem to take offense on purpose to even the most innocuous of comments, and here you are, downvoting my valid point, which is that regardless of whether we view it as a reliable information source, that's what it is being marketed as and results like this harm both the population using it, and the people who have found good uses for it. And no, I don't actually agree that it's good for creative processes as assistance tools and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it and Generative AI does not have that benefit and therefore is problematic.

                              [email protected] replied (#37):

                              and here you are, downvoting my valid point

                              Wasn't me actually.

                              valid point

                              You weren't really making a point in line with what I was saying.

                              regardless of whether we view it as a reliable information source, that's what it is being marketed as and results like this harm both the population using it, and the people who have found good uses for it. And no, I don't actually agree that it's good for creative processes as assistance tools and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it and Generative AI does not have that benefit and therefore is problematic.

                              This is a really valid point, and if you had taken the time to actually write this out in your first comment, instead of "Tell that to the guy that was expecting factual information from a hallucination generator!" I wouldn't have reacted the way I did. And we'd be having a constructive conversation right now. Instead you made a snide remark, seemingly (personal opinion here, I probably can't read minds) intending it as an invalidation of what I was saying, and then being smug about my taking offence to you not contributing to the conversation and instead being kind of a dick.

                              • [email protected]: This post did not contain any content.
                                [email protected] wrote (#38):

                                The problem with AI is that it pirates everyone’s work and then repackages it as its own and enriches the people that did not create the copyrighted work.

                                [email protected] wrote (#39):

                                  And also it's using machines to catch up to living creation and evolution, badly.

                                  A bit similar to how the Soviet system was trying to catch up to the in-no-way-virtuous, but living and vibrant, Western societies.

                                  That's expensive, and that's bad, and that's inefficient. The only subjective advantage is that power is all it requires.

                                  • [email protected] wrote:

                                    The problem with AI is that it pirates everyone’s work and then repackages it as its own and enriches the people that did not create the copyrighted work.

                                    [email protected] replied (#40):

                                    I mean, it's our work; the result should belong to the people.

                                    • [email protected] wrote:

                                      and here you are, downvoting my valid point

                                      Wasn't me actually.

                                      valid point

                                      You weren't really making a point in line with what I was saying.

                                      regardless of whether we view it as a reliable information source, that's what it is being marketed as and results like this harm both the population using it, and the people who have found good uses for it. And no, I don't actually agree that it's good for creative processes as assistance tools and a lot of that has to do with how you view the creative process and how I view it differently. Any other tool at the very least has a known quantity of what went into it and Generative AI does not have that benefit and therefore is problematic.

                                      This is a really valid point, and if you had taken the time to actually write this out in your first comment, instead of "Tell that to the guy that was expecting factual information from a hallucination generator!" I wouldn't have reacted the way I did. And we'd be having a constructive conversation right now. Instead you made a snide remark, seemingly (personal opinion here, I probably can't read minds) intending it as an invalidation of what I was saying, and then being smug about my taking offence to you not contributing to the conversation and instead being kind of a dick.

                                      [email protected] replied (#41):

                                      Not everything has to have a direct correlation to what you say in order to be valid or add to the conversation. You have a habit of ignoring parts of the conversation going around you in order to feel justified in whatever statements you make regardless of whether or not they are based in fact or speak to the conversation you're responding to and you are also doing the exact same thing to me that you're upset about (because why else would you go to a whole other post to "prove a point" about downvoting?). I'm not going to even try to justify to you what I said in this post or that one because I honestly don't think you care.

                                      It wasn't you (you claim), but it could have been and it still might be you on a separate account. I have no way of knowing.

                                      All in all, I said what I said. We will not get the benefits of Generative AI if we don't 1. deal with the problems that are coming from it, and 2. stop trying to shoehorn it into everything. And that's the discussion that's happening here.

                                      • [email protected] wrote:

                                        Would you say your research is evidence that the o1 model was built using data/algorithms taken from OpenAI via industrial espionage (like Sam Altman is purporting without evidence)? Or is it just likely that they came upon the same logical solution?

                                        Not that it matters, of course! Just curious.

                                        Guest replied (#42):

                                        Well, OpenAI has clearly scraped everything that is scrapeable on the internet, copyrights be damned. I haven't actually used DeepSeek very much, so I can't make a strong analysis, but I suspect Sam is just mad they got beat at their own game.

                                        The real innovation that isn't commonly talked about is the invention of Multi-head Latent Attention (MLA), which is what drives the dramatic performance increases in both memory (59x) and computation (6x) efficiency. It's an absolute game changer, and I'm surprised OpenAI hasn't released their own MLA model yet.

                                        While on the subject of stealing data, I have long been of the strong opinion that there is no such thing as copyright when it comes to training data. Humans learn by example, and all works are derivative of those that came before, at least to some degree. Thus, if humans can't be accused of using copyrighted text to learn how to write, then AI shouldn't be either. Just my hot take that I know is controversial outside of academic circles.

                                        • [email protected] wrote:

                                          But the people with the money for the hardware are the ones training it to put more money in their pockets. That's mostly what it's being trained to do: make rich people richer.

                                          [email protected] replied (#43):

                                          But you can make this argument for anything that is used to make rich people richer. Even something as basic as pen and paper is used every day to make rich people richer.

                                          Why attack the technology if it's the rich people you are against, and not the technology itself?
