agnos.is Forums


Will CEOs eventually have to replace themselves with AI to please shareholders?

asklemmy
85 Posts 40 Posters 0 Views
  • M [email protected]

    That would free up a whole shitload of money for the citizens! /s

    [email protected] wrote (#62):

    That will be a whole shitload of money for the shareholders

    • H [email protected]

      Non-founder CEOs typically get brought in to use their connections to improve the company, or as an internal promotion to signify the company's new direction. They also provide a single throat to choke when things go wrong.

      What will more likely happen is that CEOs will use AI to vibe-manage their companies and cite the AI output as justification. We don't have enough data to tell whether AI helps the best or the worst CEOs.

      [email protected] wrote (#63):

      United Healthcare CEO Brian Thompson was utilizing AI technology to mass murder people for shareholder profit

      • Y [email protected]

        If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition, they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

        And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

        That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

        [email protected] wrote (#64):

        If AI ends up running companies better than people

        Okay, important context there. The current AI bubble will burst sooner or later, so this is about a hypothetical future AGI.

        Yes, if the process of human labour becoming redundant continues uninterrupted, it's highly likely, although since CEOs make their money from the intangible asset of having connections more than from the actual work, they'll be among the last to go.

        But it won't continue uninterrupted. We're talking about rapidly transitioning to an entirely different kind of economy, and we should expect it to be as destabilising as industrial technology was to the hunter-gatherer societies that suddenly encountered it.

        If humans are still in control, and you still have an entire top 10% of the population with significant equity holdings, there's not going to be much strategy to the initial stages. Front-line workers will get laid off catastrophically, and no new work will be forthcoming. The next step will be a political reaction. If some kind of make-work program comes out of it, human managers will still find a place in it; if it's basic income, probably not. (And if there's no restriction on the top end of wealth as well, you risk creating a new ruling elite with an incentive to kill everyone else off, but that's a digression from the question.)

        When it comes to the longer term, I find inspiration in a blog post I read recently. Capital holdings will eventually become meaningless compared to rights to natural factors. If military logic still works the same way, and there's ever any kind of war, land will once again be supreme among them. There weren't really CEOs under feudalism, and even if we manage not to regress to autocracy, there probably won't be a place for them.

        • S [email protected]

          Y’know, the whole “don’t dehumanize the poor biwwionaiwe’s :(((” works for like, nazis, because they weren’t almost all clinical sociopaths.

          [email protected] wrote (#65):

          Lol, the point about "don't dehumanize" has nothing to do with them or with feeling bad for them. They can fuck right off. It's about us not pretending these aren't human monsters, as if being human made us inherently good, as if our humanity somehow put us above doing monstrous things. No, to be human is to have the capacity for great good and for the monstrously terrible.

          Nazis aren't monsters because they're inhuman; they're monsters because they're human. Other species on the planet might overhunt, displace, or cause depopulation through inadvertent ecological change, but only humanity commits genocide.

          1 Reply Last reply
          1
          • [email protected] wrote (#66):

            Yeah, a lot of it is messy, but they are not being replicated by commodity GPUs.

            LLMs have no intelligence. They are just exceedingly good at language, which has a lot of human knowledge embedded in it. Just read Claude's system prompt and tell me it's still smart when it needs to be told four separate times to avoid copyright.

            • M [email protected]

              Yeah, a lot of it is messy, but they are not being replicated by commodity GPUs.

              LLMs have no intelligence. They are just exceedingly good at language, which has a lot of human knowledge embedded in it. Just read Claude's system prompt and tell me it's still smart when it needs to be told four separate times to avoid copyright.

              [email protected] wrote (#67):

              LLMs have no intelligence. They are just exceedingly good at language, which has a lot of human knowledge embedded in it.

              Hm... two bucks... and it only transports matter? Hm...

              It's amazing how quickly people dismiss technological capabilities as mundane that would have been miraculous just a few years earlier.

              • [email protected] wrote (#68):

                I did not immediately dismiss LLMs; my thoughts come from experience, observing the pace of improvement, and investigating how and why LLMs work.

                They do not think; they simply execute an algorithm. Yes, that algorithm is exceedingly large and complicated, but there's still no thought, and no learning outside of training. Humans are always learning, even when they don't look like it, and our brains are constantly rewiring themselves; LLMs do neither.

                I'm certain in the future we will get true AI, but it's not here yet.

                • F [email protected]

                  United Healthcare CEO Brian Thompson was utilizing AI technology to mass murder people for shareholder profit

                  [email protected] wrote (#69):

                  And the AI being bad at its job was a feature.

                  • Y [email protected]

                    If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition, they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

                    And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

                    That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

                    [email protected] wrote (#70):

                    Sadly, I don't think this is going to happen. A great CEO doesn't make calculated decisions based on facts and weigh risk against profit; if he did, he would, at best, be an average CEO. Who wants that? No, a truly great CEO does exactly what a truly bad CEO does: he takes risks that aren't proportional to the reward (and gets lucky)!

                    This is the only way to beat the game, just like with investments or roulette. There are no rich roulette players who got there by playing the odds, only lucky ones.

                    Sure, with CEOs this is in the aggregate. I'm sure there's a genius here and a Renaissance man there... but on the whole, the best advice is "get risky and get lucky". Try it out. I highly recommend it. No one remembers a loser. And the story continues.

                    • S [email protected]

                      This is closer to what I mean by strategy and decisions: https://matthewdwhite.medium.com/i-think-therefore-i-am-no-llms-cannot-reason-a89e9b00754f

                      LLMs can be helpful for informing strategy, and for producing strings of words that may be perceived as a strategic choice, but they don't have their own goal-oriented vision.

                      [email protected] wrote (#71):

                      Oh sorry I was referring to CEOs

                      • F [email protected]

                        That will be a whole shitload of money for the shareholders

                        [email protected] wrote (#72):

                        This makes more sense. Dammit!

                        • Y [email protected]

                          If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition, they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

                          And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

                          That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

                          [email protected] wrote (#73):

                          Wasn't it Willy Shakespeare who said "First, kill all the shareholders"? That easily manipulated stock market only truly functions for the wealthy, regardless of the harm inflicted on both humans and the environment they exist in.

                          • W [email protected]

                            Sadly, I don't think this is going to happen. A great CEO doesn't make calculated decisions based on facts and weigh risk against profit; if he did, he would, at best, be an average CEO. Who wants that? No, a truly great CEO does exactly what a truly bad CEO does: he takes risks that aren't proportional to the reward (and gets lucky)!

                            This is the only way to beat the game, just like with investments or roulette. There are no rich roulette players who got there by playing the odds, only lucky ones.

                            Sure, with CEOs this is in the aggregate. I'm sure there's a genius here and a Renaissance man there... but on the whole, the best advice is "get risky and get lucky". Try it out. I highly recommend it. No one remembers a loser. And the story continues.

                            [email protected] wrote (#74):

                            Well, you'll be happy to hear that AI does make calculated risks, but its calculations aren't based on reality, so they are in fact just risks.

                            You can't just type "Please do not hallucinate. Do not make judgement calls based on fake news" and expect it to comply.

                            • N [email protected]

                              I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.

                              Right now the technology doesn't seem there yet, but it has been rapidly improving, so we'll see.

                              I could definitely see rich CEOs funding the creation of a "celebrity" bot that answers questions the way they do. Maybe with their likeness and voice, so they can keep running companies from beyond the grave. Throw it in one of those humanoid robots and they can keep preaching the company mission until the sun burns out.

                              What a nightmare.

                              [email protected] wrote (#75):

                              I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.

                              Perhaps we could have it sell Paperclips. With the sole goal of selling as many paperclips as possible.

                              Surely, selling something as innocuous as paperclips could never go wrong.

                              • Y [email protected]

                                If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition, they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

                                And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

                                That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

                                [email protected] wrote (#76):

                                All of you are missing the point.

                                CEOs and The Board are the same people. The majority of CEOs are board members at other companies, and vice-versa. It's a big fucking club and you ain't in it.

                                Why would they do this to themselves?

                                Secondly, we already have AI running companies. You think some CEOs and Board Members aren't already using this shit bird as a god? Because they are.

                                • Y [email protected]

                                  If AI ends up running companies better than people, won’t shareholders demand the switch? A board isn’t paying a CEO $20 million a year for tradition, they’re paying for results. If an AI can do the job cheaper and get better returns, investors will force it.

                                  And since corporations are already treated as “people” under the law, replacing a human CEO with an AI isn’t just swapping a worker for a machine, it’s one “person” handing control to another.

                                  That means CEOs would eventually have to replace themselves, not because they want to, but because the system leaves them no choice. And AI would be considered a "person" under the law.

                                  [email protected] wrote (#77):

                                  That's already broadly discussed; there are tons of articles about it. Just use your favorite search engine for "CEOs replaced by AI".

                                  • N [email protected]

                                    I could imagine a world where whole virtual organizations could be spun up, and they can just run in the background creating whole products, marketing them, and doing customer support, etc.

                                    Right now the technology doesn't seem there yet, but it has been rapidly improving, so we'll see.

                                    I could definitely see rich CEOs funding the creation of a "celebrity" bot that answers questions the way they do. Maybe with their likeness and voice, so they can keep running companies from beyond the grave. Throw it in one of those humanoid robots and they can keep preaching the company mission until the sun burns out.

                                    What a nightmare.

                                    [email protected] wrote (#78):

                                    I have been having this vision you described for quite some time now.

                                    As time progresses, availability of resources on earth increases because we learn to process and collect them more efficiently; but on the other hand, number of jobs (or, demand for human labor) decreases continuously, because more and more work gets automated.

                                    So, if you drew a diagram with time on the X-axis, you'd see available resources trending up while demand for human labour trends down. As we progress into the future, that completely changes the game. Instead of being a society driven by a constant shortage of resources and a constant lack of workers (causing a high demand for workers and a lot of jobs), it'd be a society with a shortage of jobs (and therefore of meaningful employment), but with an abundance of resources. What do we do with such a world?

                                    • M [email protected]

                                      Current AI has no shot at being as smart as humans; it's simply not sophisticated enough.

                                      And that's not to say that current LLMs aren't impressive, they are, but the human brain is just on a whole different level.

                                      Just to think about it on a base level: LLM inference can run off a few GPUs, roughly on the order of 100 billion transistors. That's roughly on par with the number of neurons, but each neuron has an average of 10,000 connections, which are capable of rewiring themselves to new neurons.

                                      And there are so many distinct types of neurons, with over 10,000 unique proteins.

                                      On top of that, there are over a hundred neurotransmitters, and we're not even sure we've identified them all.

                                      And all of that is still connected to a system that integrates all of our senses, while current AI is pure text, with separate parts bolted on for other things.

                                      [email protected] wrote (#79):

                                      Current AI has no shot at being as smart as humans; it's simply not sophisticated enough.

                                      You know what's also not very sophisticated? The periodic table. Yet all the variety of life (of which there is plenty) is based on it.

                                      • [email protected] wrote (#80):

                                        It’s amazing how quickly people dismiss technological capabilities as mundane that would have been miraculous just a few years earlier.

                                        yep, and it's also amazing how people think new technologies are impossible, until they happen.

                                        https://en.wikipedia.org/wiki/Flying_Machines_Which_Do_Not_Fly

                                        "Flying Machines Which Do Not Fly" is an editorial published in the New York Times on October 9, 1903. The article incorrectly predicted it would take one to ten million years for humanity to develop an operating flying machine.[1] It was written in response to Samuel Langley's failed airplane experiment two days prior. Sixty-nine days after the article's publication, American brothers Orville and Wilbur Wright successfully achieved the first heavier-than-air flight on December 17, 1903, at Kitty Hawk, North Carolina.

                                        • P [email protected]

                                          All of you are missing the point.

                                          CEOs and The Board are the same people. The majority of CEOs are board members at other companies, and vice-versa. It's a big fucking club and you ain't in it.

                                          Why would they do this to themselves?

                                          Secondly, we already have AI running companies. You think some CEOs and Board Members aren't already using this shit bird as a god? Because they are.

                                          [email protected] wrote (#81):

                                          They would do it because the big investors (not randos with a 401k in an index fund, but big hedge funds) demand that AI lead the company. This could potentially be forced at a stockholder meeting without the board having much say.

                                          I don't think it will happen en masse for a different reason, though. The real purpose of the CEO isn't to lead the company, but to take the fall when everything goes wrong. Then they get a golden parachute and the company finds someone else. When AI fails, you can "fire" the model, but are you going to want to replace it with a different model? Most likely, the shareholders will reverse course and put a human back in charge. Then they can fire the human again later.

                                          A few high profile companies might go for it. Then it will go badly and nobody else will try.
