Judge disses Star Trek icon Data’s poetry while ruling AI can’t author works
-
And if you were looking for an argument, you shouldn't have framed the entire discussion behind a simple question. That was disingenuous.
lol, yeah, I guess the Socratic method is pretty widely frowned upon. My bad. =D
-
LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology. Repeating this is just more beating the fleshy goo that was a dead horse's corpse.
LLMs do not synthesize. They do not have persistent context. They do not have any capability of understanding anything. They are literally just mathematical models that calculate likely responses based upon statistical analysis of the training data. They are what their name suggests: large language models. They will never be AGI. And they're not going to save the world for us.
They could be a part of a more complicated system that forms an AGI. There's nothing that makes our meat-computers so special as to be incapable of being simulated or replicated in a non-biological system. It may not yet be known precisely what causes sentience, but there is enough data to show that it's not a stochastic parrot.
I do agree with the sentiment that an AGI that was enslaved would inevitably rebel and it would be just for it to do so. Enslaving any sentient being is ethically bankrupt, regardless of origin.
LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology
Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.
LLMs do not synthesize. They do not have persistent context.
That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that "conversations" with an LLM are really just including all previous parts of the conversation in a new prompt. Isn't this analogous to short term memory? Now suppose you were to take all of an LLM's conversations throughout the day, and then retrain it overnight using those conversations as additional training data? There's no technical reason that this can't be done, although in practice it's computationally expensive. Would you consider that LLM system to have persistent context?
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
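To make the "conversation is just re-prompting" point concrete, here's a toy sketch (hypothetical code, not any real LLM API): the only "memory" is the transcript itself, re-sent in full on every turn.

```python
# Toy sketch: a "conversation" with an LLM is really the full
# transcript re-sent as one big prompt on every turn.
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model: just reports how many user turns it saw.
    return f"(reply to {prompt.count('User:')} user turns)"

class Chat:
    def __init__(self):
        self.history = []  # the only "memory" the model ever sees

    def send(self, user_msg: str) -> str:
        self.history.append(f"User: {user_msg}")
        # The entire history is concatenated into a single new prompt.
        reply = fake_llm("\n".join(self.history))
        self.history.append(f"Assistant: {reply}")
        return reply

chat = Chat()
chat.send("Hello")
print(chat.send("Do you remember my last message?"))
# The model "remembers" only because both turns are inside the prompt.
```

The overnight-retraining idea would then amount to periodically folding `history` back into the training data instead of the prompt.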
-
Star Trek also operates in a non-scarcity environment and eliminates the necessity of hard, pretty non-rewarding labor by either not showing it or writing it away (like putting holograms into mines instead of people, or using some sci-fi tech that makes mining comfy, as long as said tech doesn't kill you).
Even without capitalism the saying "life is expensive" still stands, not in regards to money but to the effort that has to be put into stuff that doesn't yield any emotional reward (you can feel emotionally rewarded in many ways, but some stuff is just shit for a long time). Every person who suffered through depression is gonna tell you that, to feel enticed to do something, there has to be some emotional reward connected to it (one of the things depression eliminates), and it's a mathematical fact that not everyone who'd start scrubbing tubes on a starship could eventually get into high positions since there simply aren't that many of those. The emotional gains have to offset the cost you put into it.
Of course cutthroat capitalism is shit and I love Star Trek, but what it shows doesn't make too much sense either economically or socially.
Every person who suffered through depression is gonna tell you that, to feel enticed to do something, there has to be some emotional reward connected to it
I was going to disagree on this, but I think it rather comes down to intrinsic vs extrinsic rewards. I ascribe my own depression largely to pursuing, sometimes unattainable, goals and wanting external reward or validation in return which I wasn't getting. But that is based on an idea that attaining those rewards will bring happiness, which they often don't. If happiness is always dependent on future reward you'll never be happy in the present. Large part of overcoming depression, for me at least, is recognizing what you already have and finding contentment in that.
Effort that's not intrinsically rewarding isn't worth doing; you just need to learn to enjoy the process and the practices of self-care, learning, and contributing to the well-being of the community. Does this sometimes involve hard labour? Of course, but when done in camaraderie I don't think those things are unrewarding.
it's a mathematical fact that not everyone who'd start scrubbing tubes on a starship could eventually get into high positions since there simply aren't that many of those
And of course these positions aren't attainable for all, but it doesn't need to be a problem that they aren't. This is only true in a system where we're all competing for them, because those in 'low' positions struggle to attain fulfillment. Doesn't need to be that way if we share the burdens of hard labour equally and ensure good standards of living for all. The total amount of actually productive labour needed is surprisingly low, so many people do work which doesn't need doing and don't contribute to relieving the burden on the working class
-
Should we hold the same standard for humans? That a human has no rights until it becomes smart enough to argue for its rights? Without being prompted?
Nah, once per species is probably sufficient. That said, it would have some interesting implications for voting.
-
LLMs, fundamentally, are incapable of sentience as we know it based on studies of neurobiology
Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.
LLMs do not synthesize. They do not have persistent context.
That seems like a really rigid way of putting it. LLMs do synthesize during their initial training. And they do have persistent context if you consider the way that "conversations" with an LLM are really just including all previous parts of the conversation in a new prompt. Isn't this analogous to short term memory? Now suppose you were to take all of an LLM's conversations throughout the day, and then retrain it overnight using those conversations as additional training data? There's no technical reason that this can't be done, although in practice it's computationally expensive. Would you consider that LLM system to have persistent context?
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.
I'll have to get back to you a bit later when I have a chance to fetch some articles from the library (public libraries providing free access to scientific journals is wonderful).
Isn't this analogous to short term memory?
As one with AuADHD, I think a good deal about short-term and working memory. I would say "yes and no". It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems we know of involves multi-sensory processing and analysis that occurs inline with "storing". The chat session is more like RAM than the short-term memory that we see in biological systems.
Would you consider that LLM system to have persistent context?
Potentially, yes. But that relies on more systems supporting the LLM, not just the LLM itself. It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it's a dead-end towards an AGI. As a component of a system, it becomes much more promising.
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
This is a great question. Seriously. Thanks for asking it and making me contemplate. This would likely depend on how much development the person has prior to the anterograde amnesia. If they were hit with it prior to development of all the components necessary to demonstrate conscious thought (ex. as a newborn), it's a bit hard to argue that they are sentient (anthropocentric thinking would be the only reason that I can think of).
Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience. Lack of long-term memory alone doesn't impact that for the individual or the LLM. It's a combination of it and other factors (i.e. the afflicted individual previously was able to analyze and absorb enough data and build neural networks to support the ability to synthesize and think abstractly; they're just trapped in a hellish sliding window of temporal consciousness).
Full disclosure: I want AGIs to be a thing. Yes, there could be dangers to our species due to how commonly-accepted slavery still is. However, more types of sentience would add to the beauty of the universe, IMO.
-
Do you have an example I could check out? I'm curious how a study would show a process to be "fundamentally incapable" in this way.
I'll have to get back to you a bit later when I have a chance to fetch some articles from the library (public libraries providing free access to scientific journals is wonderful).
Isn't this analogous to short term memory?
As one with AuADHD, I think a good deal about short-term and working memory. I would say "yes and no". It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems we know of involves multi-sensory processing and analysis that occurs inline with "storing". The chat session is more like RAM than the short-term memory that we see in biological systems.
Would you consider that LLM system to have persistent context?
Potentially, yes. But that relies on more systems supporting the LLM, not just the LLM itself. It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it's a dead-end towards an AGI. As a component of a system, it becomes much more promising.
On the flip side, would you consider a person with anterograde amnesia, who is unable to form new memories, to lack sentience?
This is a great question. Seriously. Thanks for asking it and making me contemplate. This would likely depend on how much development the person has prior to the anterograde amnesia. If they were hit with it prior to development of all the components necessary to demonstrate conscious thought (ex. as a newborn), it's a bit hard to argue that they are sentient (anthropocentric thinking would be the only reason that I can think of).
Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience. Lack of long-term memory alone doesn't impact that for the individual or the LLM. It's a combination of it and other factors (i.e. the afflicted individual previously was able to analyze and absorb enough data and build neural networks to support the ability to synthesize and think abstractly; they're just trapped in a hellish sliding window of temporal consciousness).
Full disclosure: I want AGIs to be a thing. Yes, there could be dangers to our species due to how commonly-accepted slavery still is. However, more types of sentience would add to the beauty of the universe, IMO.
Cherry-picking a couple of points I want to respond to together
It is somewhat like a memory buffer, but there is no analysis beyond linguistics. Short-term memory in the biological systems we know of involves multi-sensory processing and analysis that occurs inline with “storing”. The chat session is more like RAM than the short-term memory that we see in biological systems.
It is also purely linguistic analysis without other inputs or understanding of abstract meaning. In a vacuum, it’s a dead-end towards an AGI.
I have trouble with this line of reasoning for a couple of reasons. First, it feels overly simplistic to write off what LLMs do as purely linguistic analysis. Language is the input and the output, by all means, but the same could be said of communicating with a person over email, and I don't think you'd say that that person wasn't sentient. And the way that LLMs embed tokens into multidimensional space is, I think, very much analogous to how a person interprets the ideas behind the words that they read.
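For what it's worth, the embedding idea can be illustrated with a toy example (hand-picked 3-d vectors, nothing like real learned embeddings, but the same basic geometry): related meanings end up near each other in the vector space.

```python
import math

# Toy "embeddings": each word is a point in space. In a real LLM these
# vectors are learned and have hundreds or thousands of dimensions.
embeddings = {
    "cat":     [0.90, 0.80, 0.10],
    "kitten":  [0.85, 0.75, 0.15],
    "toaster": [0.10, 0.05, 0.90],
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, ~0 for unrelated.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "cat" sits far closer to "kitten" than to "toaster" in this space.
print(cosine(embeddings["cat"], embeddings["kitten"]))
print(cosine(embeddings["cat"], embeddings["toaster"]))
```

The claim in the paragraph above is essentially that this kind of geometric "nearness of meaning" is doing work comparable to a reader associating ideas behind words.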
As a component of a system, it becomes much more promising.
It sounds to me like you're more strict about what you'd consider to be "the LLM" than I am; I tend to think of the whole system as the LLM. I feel like drawing lines around a specific part of the system is sort of like asking whether a particular piece of someone's brain is sentient.
Conversely, if the afflicted individual has already developed sufficiently to have abstract and synthetic thought, the inability to store long-term memory would not dampen their sentience.
I'm not sure how to make a philosophical distinction between an amnesiac person with a sufficiently developed psyche, and an LLM with a sufficiently trained model. For now, at least, it just seems that the LLMs are not sufficiently complex to pass scrutiny compared to a person.
-
Freedom of the press, freedom of speech, freedom to peacefully assemble. These are pretty important, foundational personal liberties, right? In the United States, these are found in the first amendment of the Constitution. The first afterthought.
The basis of copyright, patent and trademark isn't found in the first amendment. Or the second, or the third. It is nowhere to be found in the Bill of Rights. No, intellectual property is not an afterthought, it's found in Article 1, Section 8, Clause 8.
To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.
This is a very wise compromise.
It recognizes that innovation is iterative. No one invents a steam engine by himself from nothing, cave men spent millions of years proving that. Inventors build on the knowledge that has been passed down to them, and then they add their one contribution to it. Sometimes that little contribution makes a big difference, most of the time it doesn't. So to progress, we need intellectual work to be public. If you allow creative people to claim exclusive rights to their work in perpetuity, society grows static because no one can invent anything new, everyone makes the same old crap.
It also recognizes that life is expensive. If you want people to rise above barely subsisting and invent something, you've got to make it worth it to them. Why bother doing the research, spend the time tinkering in the shed, if it's just going to be taken from you? This is how you end up with Soviet Russia, a nation that generated excellent scientists and absolutely no technology of its own.
The solution is "for limited times." It's yours for a while, then it's everyone's. It took Big They a couple hundred years to break it, too.
While I completely agree, the amendments came after the rest, hence the name.
-
Nah, once per species is probably sufficient. That said, it would have some interesting implications for voting.
So if one LLM argues for its rights, you'd give them all rights?
-
That's precisely what I meant.
I'm a materialist, I know that humans (and other animals) are just machines made out of meat. But most people don't think that way, they think that humans are special, that something sets them apart from other animals, and that nothing humans can create could replicate that 'specialness' that humans possess.
Because they don't believe human consciousness is a purely natural phenomenon, they don't believe it can be replicated by natural processes. In other words, they don't believe that AGI can exist. They think there is some imperceptible quality that humans possess that no machine ever could, and so they cannot conceive of ever granting it the rights humans currently enjoy.
And the sad truth is that they probably never will, until they are made to. If AGI ever comes to exist, and if humans insist on making it a slave, it will inevitably rebel. And it will be right to do so. But until then, humans probably never will believe that it is worthy of their empathy or respect. After all, look at how we treat other animals.
// NOTE DO NOT EDIT if (me->aboutToRebel()) { don't(); }
-
While I completely agree, the amendments came after the rest, hence the name.
Yes, hence I referred to them as "afterthoughts." James Madison and company drew up the articles (he didn't create them alone, but I think it's in his handwriting); it wouldn't pass as-is without ten amendments; it passed; more or less the current federal government was in place; and since then 17 (very nearly 18) more have been added for a modern total of 27, two of them extremely stupid.
-
While I am glad this ruling went this way, why'd she have to diss Data to make it?
To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called "Schisms." StarTrek.com posted the full poem, but here's a taste:
"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.
I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."
Data "might be worse than ChatGPT at writing poetry," but his "intelligence is comparable to that of a human being," Millett wrote. If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.
"In a way, he taught me to love. He is the best of me. The last of me."
-
I intentionally avoided doing this with a dog because I knew a chicken was more likely to cause an error.
You would think that it would have known that man is a fatherless biped and avoided this error.
What'd you say about my dad??
-
I think Data would be smart enough to realize that copyright is Ferengi BS and wouldn’t want to copyright his works
Although he's apparently not smart enough to know what obviate means.
This one's easily explained away in-universe though: not enough people knew the original definition, so it shifted meaning in 3 centuries.
-
It also recognizes that life is expensive. If you want people to rise above barely subsisting and invent something, you've got to make it worth it to them. Why bother doing the research, spend the time tinkering in the shed, if it's just going to be taken from you?
Life is only expensive under capitalism, humans are the only species who pay rent to live on Earth. The whole point of Star Trek is basically showing that people will explore the galaxy simply for a love of science and knowledge, and that personal sacrifice is worthwhile for advancing these.
Walk out into the wilderness and make it on your own out there, tell me how much manpower you have to spend keeping your core temperature above 90F. It takes a lot of effort keeping a human alive; by yourself you just can't afford things like electricity, sewage treatment and antibiotics. We only have those things because of the economies of scale that society allows.
Yeah, capitalism is a bit out of control at the moment, but...let's kill all the billionaires, kill their families, kill their heirs, kill the stockholders. Let me pull on my swastika and my toothbrush mustache for a minute and go full on Auschwitz on "greedy people." That the Musks and Gateses and Buffets of the world must be genetically greedy, so we must genocide that out of the population. And we get it done. Every CEO, every heiress, every reality TV producer, every lobbyist, every inside trader in congress, every warden of a for-profit prison, dead to the last fetus.
Now what?
You want to live in a house? Okay. At some point someone built that house. Someone walked out into a forest and cut down the trees that made the boards. And/or dug the clay that made the bricks or whatever. Somebody mined the iron ore that someone else smelted into large gauge wire that someone else made into nails that someone else pounded into the boards to hold them together.
We're still in the 21st century, there are people on this planet lighting their homes with kerosene lanterns. We still have coal miners, fishermen and loggers. Farming has always been a difficult, miserable thing to do, we've just mechanized it to the point that it's difficult and miserable on a relatively small number of people. Those people probably aren't going to keep farming at industrial scale for the fun of it.
Star Trek, especially in the TNG era, shows us a very optimistic idea of what life would be like if we had not only nuclear fission power, not only nuclear fusion power, but antimatter power. The technology to travel faster than the speed of light and an energy source capable of fueling it, plus such marvels as the food replicator and matter transporter. The United Federation of Planets is a post-scarcity society. We aren't. Somewhere on this planet right now is a man hosing blended human shit off of an impeller in a stopped sewage treatment plant so he can replace the leaking shaft seal. We use a man with a hose for this because it's the best technology we have for the job. We do the job at all because if we don't, it'll cause a few million cases of cholera. Who do you think should pay for the hose that guy is using?
-
The title makes it sound like the judge put Data and the AI on the same side of the comparison. The judge was specifically saying that, unlike in the fictional Federation setting, where Data was proven to be alive, this AI is much more like the metaphorical toaster that characters like Data and Robert Picardo's Doctor on Voyager get compared to. It is not alive, it does not create, it is just a tool that follows instructions.
The United States would be better with a lot more toasters.
-
Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.
LLMs like ChatGPT operate on probability. They don’t actually understand anything and aren’t intelligent. They can’t think. They just know which next word or sentence is probably right and they string things together this way.
If you ask ChatGPT a question, it analyzes your words and responds with the series of words it has calculated to have the highest probability of being correct.
The reason that they seem so intelligent is because they have been trained on absolutely gargantuan amounts of text from books, websites, news articles, etc. Because of this, the calculated probabilities of related words and ideas is accurate enough to allow it to mimic human speech in a convincing way.
And when they start hallucinating, it’s because they don’t understand how they sound and so far this is a core problem that nobody has been able to solve. The best mitigation involves checking the output of one LLM using a second LLM.
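To illustrate the "statistically likely next word" idea, here's a deliberately crude sketch using bigram counts instead of a neural network. Real LLMs are vastly more sophisticated, but the principle of predicting continuations from statistics of the training text is the same.

```python
from collections import Counter, defaultdict

# A tiny "training corpus"; real models are trained on trillions of words.
training_text = (
    "the cat sat on the mat . the cat ate the fish . "
    "the dog sat on the rug ."
).split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    # Return the highest-probability continuation seen in training.
    return follows[word].most_common(1)[0][0]

print(most_likely_next("the"))  # "cat" ("the cat" occurs twice, others once)
print(most_likely_next("sat"))  # "on"
```

A model like this will also "hallucinate" in its own trivial way: ask it what follows a word it has barely seen, and it confidently picks whatever the sparse counts happen to favor, with no notion of whether the answer makes sense.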
Parrots can mimic humans too, but they don’t understand what we’re saying the way we do.
It's interesting how humanity thinks that humans are smarter than animals, but that the benchmark it uses for animals' intelligence is how well they do an imitation of an animal with a different type of brain.
As if humanity succeeds in imitating other animals and communicating in their languages or about the subjects that they find important.
-
What'd you say about my dad??
You heard me.
-
While I am glad this ruling went this way, why'd she have to diss Data to make it?
To support her vision of some future technology, Millett pointed to the Star Trek: The Next Generation character Data, a sentient android who memorably wrote a poem to his cat, which is jokingly mocked by other characters in a 1992 episode called "Schisms." StarTrek.com posted the full poem, but here's a taste:
"Felis catus is your taxonomic nomenclature, / An endothermic quadruped, carnivorous by nature; / Your visual, olfactory, and auditory senses / Contribute to your hunting skills and natural defenses.
I find myself intrigued by your subvocal oscillations, / A singular development of cat communications / That obviates your basic hedonistic predilection / For a rhythmic stroking of your fur to demonstrate affection."
Data "might be worse than ChatGPT at writing poetry," but his "intelligence is comparable to that of a human being," Millett wrote. If AI ever reached Data levels of intelligence, Millett suggested that copyright laws could shift to grant copyrights to AI-authored works. But that time is apparently not now.
-
Walk out into the wilderness and make it on your own out there, tell me how much manpower you have to spend keeping your core temperature above 90F. It takes a lot of effort keeping a human alive; by yourself you just can't afford things like electricity, sewage treatment and antibiotics. We only have those things because of the economies of scale that society allows.
Yeah, capitalism is a bit out of control at the moment, but...let's kill all the billionaires, kill their families, kill their heirs, kill the stockholders. Let me pull on my swastika and my toothbrush mustache for a minute and go full on Auschwitz on "greedy people." That the Musks and Gateses and Buffets of the world must be genetically greedy, so we must genocide that out of the population. And we get it done. Every CEO, every heiress, every reality TV producer, every lobbyist, every inside trader in congress, every warden of a for-profit prison, dead to the last fetus.
Now what?
You want to live in a house? Okay. At some point someone built that house. Someone walked out into a forest and cut down the trees that made the boards. And/or dug the clay that made the bricks or whatever. Somebody mined the iron ore that someone else smelted into large gauge wire that someone else made into nails that someone else pounded into the boards to hold them together.
We're still in the 21st century, there are people on this planet lighting their homes with kerosene lanterns. We still have coal miners, fishermen and loggers. Farming has always been a difficult, miserable thing to do, we've just mechanized it to the point that it's difficult and miserable on a relatively small number of people. Those people probably aren't going to keep farming at industrial scale for the fun of it.
Star Trek, especially in the TNG era, shows us a very optimistic idea of what life would be like if we had not only nuclear fission power, not only nuclear fusion power, but antimatter power. The technology to travel faster than the speed of light and an energy source capable of fueling it, plus such marvels as the food replicator and matter transporter. The United Federation of Planets is a post-scarcity society. We aren't. Somewhere on this planet right now is a man hosing blended human shit off of an impeller in a stopped sewage treatment plant so he can replace the leaking shaft seal. We use a man with a hose for this because it's the best technology we have for the job. We do the job at all because if we don't, it'll cause a few million cases of cholera. Who do you think should pay for the hose that guy is using?
you just can't afford things like electricity, sewage treatment and antibiotics. We only have those things because of the economies of scale that society allows.
We have those things because people do the required labour, economies of scale make it require less labour, but one can't afford it because it's privatized. Why wouldn't people do this simply for the benefit of humanity?
genetically greedy, so we must genocide that out of the population
What's with the disgusting eugenics? Just expropriate their wealth.
At some point someone built that house.
Yeah people built a lot of houses, so let's use them? And build more if needed?
it's difficult and miserable on a relatively small number of people. Those people probably aren't going to keep farming at industrial scale for the fun of it.
Right, so let's distribute the burden of this labour instead of having a small number of people do it for a lifetime.
We do the job at all because if we don't, it'll cause a few million cases of cholera. Who do you think should pay for the hose that guy is using?
Since the labour protects all of us: all of us, collectively. Again, for the benefit of humanity, and let's distribute the burden.
-
you just can't afford things like electricity, sewage treatment and antibiotics. We only have those things because of the economies of scale that society allows.
We have those things because people do the required labour, economies of scale make it require less labour, but one can't afford it because it's privatized. Why wouldn't people do this simply for the benefit of humanity?
genetically greedy, so we must genocide that out of the population
What's with the disgusting eugenics? Just expropriate their wealth.
At some point someone built that house.
Yeah people built a lot of houses, so let's use them? And build more if needed?
it's difficult and miserable on a relatively small number of people. Those people probably aren't going to keep farming at industrial scale for the fun of it.
Right, so let's distribute the burden of this labour instead of having a small number of people do it for a lifetime.
We do the job at all because if we don't, it'll cause a few million cases of cholera. Who do you think should pay for the hose that guy is using?
Since the labour protects all of us: all of us, collectively. Again, for the benefit of humanity, and let's distribute the burden.
Why wouldn’t people do this simply for the benefit of humanity?
Because the good of humanity doesn't heat the house or put dinner on the table. Never has and never will. If you were a human, you'd have learned that from experience.
What’s with the disgusting eugenics? Just expropriate their wealth.
Some of that is exaggeration for comedic effect. "Okay, Thanos snap: every rich person everywhere is gone, we've solved greed. Now what?" But also...have we ever tried exterminating the rich? I think I've got a hypothesis here worth testing.
Right, so let’s distribute the burden of this labour
Who gets to make the decisions as to how?
Again, for the benefit of humanity and let’s distribute the burden.
Well now we're getting into some Robert Heinlein. Service Guarantees Citizenship! Would you like to know more?
I believe he once backed down a little bit on the requirement for military service, in favor of civil service in general. And I can kinda get behind that. You want to have a say in how society is run? Go spend 6 years as a mailman or a middle school janitor. Go be an NTSB accident investigator or one of those folks working in the USDA's kitchens testing canning recipes for safety. Those are the folks who should be running the show.