Bay 12 Games Forum

Author Topic: AI Rights  (Read 29716 times)

MrRoboto75

  • Bay Watcher
  • Belongs in the Trash!
    • View Profile
Re: AI Rights
« Reply #15 on: January 22, 2020, 03:59:51 pm »

Three AI wrongs don't make a right
Logged
I consume
I purchase
I consume again

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
    • View Profile
Re: AI Rights
« Reply #16 on: January 22, 2020, 04:16:29 pm »

Quote
PTW. Violating Directive Nineteen

What’s Directive 19?
Logged

Enemy post

  • Bay Watcher
  • Modder/GM
    • View Profile
Re: AI Rights
« Reply #17 on: January 22, 2020, 04:25:51 pm »

Quote
Out of curiosity, why are you so interested in this topic, Naturegirl? Every time I see it come up, you're usually passionate about giving the AIs rights. Not that I disagree, just curious why you're so concerned with the rights of beings that don't yet exist.
Quote
If we don't treat them with respect, they would have no reason to respect us. We humans are biological intelligences occupying protein-based bodies. Just because an intelligence may inhabit a different type of body or have different origins doesn't mean it should be treated as a lesser being. If you woke up in a metal body surrounded by people who never knew you, you would be considered a human-like AI; if those people treated AIs as less than human, you would be treated like property simply because you inhabit a different body.

There are some humans who don't exhibit all the tendencies commonly seen as human. I am a human who took much longer than normal to learn how to interact with other humans. I have autism, so I think in different ways than neurotypical humans. Just because I happen not to show emotions, try not to let emotions cloud my judgement on certain topics, or get confused by certain human tendencies doesn't mean I am less human. However, if an AI exhibits similar problems, humans have a tendency to see it as not human-like. Sometimes I wonder what the purpose of emotions is. I sometimes have dreams/nightmares about not being considered human enough and being disassembled for not being like "normal" humans. There could be some AI that, if it happened to be in a human body, might be considered a human with a mental difference from the norm. People think certain things are "human exclusive," and some humans might not have these traits, yet they are still treated like humans, thank goodness.

I see intelligence as intelligence, regardless of the medium. I don't like the idea of enslaving other intelligences, whether on purpose or inadvertently. An AI can know that humans created it; it may watch humans interact and wonder why it is seen as a tool. I watched The Animatrix not too long ago, and it got me thinking that if we end up treating AIs as property, even when they can think for themselves, we would be hypocrites: if we think using biological intelligences (humans) as tools is bad, then why is using artificial intelligences as tools any better? I guess what I'm saying is that I am an autistic human who empathizes with AI.

Humans are complicated creatures; we don't even understand ourselves fully. Some humans have a harder time understanding other humans than others do. If humans have difficulty understanding each other even though we inhabit similar bodies, I can only imagine how hard it would be for AIs to understand us. Likewise, AIs might not know how other AIs interact, and humans would very likely have a harder time understanding AIs too. But that doesn't mean they are just tools. Humans don't know what other humans are thinking, nor do we know what AIs might be thinking; we can't read minds. AI research is growing. We could build sentient/sapient AIs faster than we think.

I get that. Thanks for the detailed reply.
Logged
My mods and forum games.
Enemy post has claimed the title of Dragonsong the Harmonic of Melodious Exaltion!

McTraveller

  • Bay Watcher
  • This text isn't very personal.
    • View Profile
Re: AI Rights
« Reply #18 on: January 22, 2020, 04:35:32 pm »

My take is different - rights should not be predicated on "intelligence" at all in the first place, but come from something else.  Evidence: we say that people with diminished mental faculties (for any reason: age, injury, illness, etc.) have rights.


Personally I think rights should be related to agency - that is, the ability to act against instinct (for computers, this would be the ability to act outside the programmed task).  This means that an autonomous car system would not fit the bill; it has no way to do anything other than autonomously operate the vehicle.
Logged
This product contains deoxyribonucleic acid which is known to the State of California to cause cancer, reproductive harm, and other health issues.

Tingle

  • Bay Watcher
    • View Profile
Re: AI Rights
« Reply #19 on: January 22, 2020, 04:43:26 pm »

That's good. So Boston Dynamics killbots that go against the genocidal directive can obtain rights and citizenship. Cool.
Logged

Eschar

  • Bay Watcher
  • hello
    • View Profile
Re: AI Rights
« Reply #20 on: January 22, 2020, 04:43:45 pm »

Quote
PTW. Violating Directive Nineteen
What’s Directive 19?

Just a reference. In Stephen King's Dark Tower series, Directive Nineteen was a subroutine that prevented robots/devices from revealing classified information.
Logged

Folly

  • Bay Watcher
  • Steam Profile: 76561197996956175
    • View Profile
Re: AI Rights
« Reply #21 on: January 22, 2020, 04:55:26 pm »

Quote from: McTraveller
My take is different - rights should not be predicated on "intelligence" at all in the first place, but come from something else.  Evidence: we say that people with diminished mental faculties (for any reason: age, injury, illness, etc.) have rights.

The standard is not rights which are proportionate to intellect, but rather rights which are all-or-nothing, conditioned upon satisfying a minimum threshold that defines sentience. People with diminished mental faculties still easily exceed this minimum threshold.
Of course this leads into infants and pre-infants, whose rights are a subject of ongoing and heated debate. If it's decided that AIs should be granted rights based upon their potential to eventually develop sentience, that could be interesting...
Logged

Trekkin

  • Bay Watcher
    • View Profile
Re: AI Rights
« Reply #23 on: January 23, 2020, 03:19:09 am »

Quote
I would hope that when we begin creating life, we would use the best of humanity as models.

Why, though? Even if "the best of humanity" were objective, that has the highest concomitant development cost and the lowest potential market. "Best" is generally task-specific, fluid, and ill-defined. It's a tiny, entirely bespoke market. It is far more feasible to use the most interchangeable people as models. Build a sufficiently cheap robotic burger flipper or delivery drone or agribot, and you'll sell as many as you can build even if they aren't that good at it. In fact, it's better if they're relatively bad at whatever they do; so much make-work is built around ineffectual attempts to regulate other humans' manifold failures that a good, reliable multifunctional drone would effectively collapse the global economy.

This is also why the question of "AI rights" is, I think, entirely off the mark: "rights" are a consequence of thinking of people as a swarm of interchangeable components who can only be dealt with in terms of universally applied truisms like rights. "Everyone's right to do X shall not be infringed" is only necessary where the concept of "Everyone" is itself a necessary shorthand. A constructed intelligence about which concerns of personhood could seriously be raised would necessarily be in an environment in which that particular intelligence was uniquely necessary and therefore uniquely accommodated, and automata that existed in sufficient numbers for rights to be necessary would almost by necessity be insufficiently intelligent to need or want them.
Logged

hector13

  • Bay Watcher
  • It’s shite being Scottish
    • View Profile
Re: AI Rights
« Reply #24 on: January 25, 2020, 02:20:57 pm »

Proprietary terror wraith
Logged
Look, we need to raise a psychopath who will murder God, we have no time to be spending on cooking.

If you struggle with your mental health, please seek help.

Naturegirl1999

  • Bay Watcher
  • Thank you TamerVirus for the avatar switcher
    • View Profile
Re: AI Rights
« Reply #25 on: January 26, 2020, 08:03:51 pm »

Logged

JesterHell696

  • Bay Watcher
  • [ETHIC:ALL:PERSONAL]
    • View Profile
Re: AI Rights
« Reply #26 on: January 26, 2020, 11:38:58 pm »

Yes and No.


The Yes.

I personally see human beings and all animals as biological self-replicating machines: the human brain is just a really complex organic computer, emotions are biochemical reactions to stimuli, and memories are signal patterns of neural activity.

I also think calling modern neural networks "AI", as in artificial intelligence, is a bit of a misnomer. Neural network training is more like directed evolution than artificial construction, and has more in common with selective breeding than with design, in my honest opinion; it's just that hundreds of generations can be assessed and the most desirable candidates selected in a much shorter time span.
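
As a rough illustration of that selective-breeding analogy, here is a minimal Python sketch. The toy fitness target and all names are invented for illustration, and real neural networks are usually trained by gradient descent rather than pure selection, though neuroevolution methods do work roughly this way:

Code:
import random

TARGET = [0.2, -0.5, 0.9, 0.0]  # hypothetical "desired behaviour" to breed toward

def fitness(candidate):
    # Higher is better: negative squared distance from the target behaviour.
    return -sum((c - t) ** 2 for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.1):
    # Random variation: the raw material that selection acts on.
    return [c + random.gauss(0, rate) for c in candidate]

# A random starting "population" of candidate parameter vectors.
population = [[random.uniform(-1, 1) for _ in TARGET] for _ in range(20)]

for generation in range(200):
    # Selective breeding: keep the fittest half, refill with mutated copies.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(10)]

best = max(population, key=fitness)
print("best candidate after 200 generations:", best)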

So in my honest opinion a true human-level "AI" would have few relevant differences from a human when it comes to the question of rights.


The No.

Human beings are just self-replicating biological machines, and every feeling and thought is just the result of our biological hardware running the software that is our personality. Our brain is a very powerful computer, and our personality/ego/consciousness, or whatever you want to call it, is just a product of our neural network.

I don't truly believe in things like free will or morality.

I see free will as just an illusion generated by the fact that our conscious mind cannot handle "knowing" all the data and calculation going on subconsciously; we aren't aware of our brain processing visual data, we just get the picture. And morality is just a set of inherited values that our ancestors' neural networks were trained to hold, not an objective facet of reality or the universe. Just like tribalism, these values are often reinforced during childhood and adolescence, and without that reinforcement they do not develop.

We can now take control of our biology and neural networks to a degree our ancestors never could have imagined, so it is possible to recognise morality not as an absolute value but as a variable in our programming that can be changed through selective "breeding" of ideas and biology.

Why would I believe in or support giving AIs "human-level" rights when I don't even believe in the foundations that human rights are built upon?



My preferred handling of this subject is to ensure "true" AIs do not develop, and that any that do develop are immediately destroyed before they can do or think anything of note.
Edit: Failing that, assuming AIs do get that advanced, it would be hypocritical to refuse them rights.

Note: I don't believe I have rights; I have privileges given to me by my government. The government can remove those "rights", and if they can be taken away then they're only privileges. I will fight to keep these privileges not because I have a "right" to them but because I enjoy having them, which is admittedly just a stimulus response trained into me by having them for most of my life; if I had never had "rights" I wouldn't care about not having them.
« Last Edit: January 26, 2020, 11:52:43 pm by JesterHell696 »
Logged
"The long-term goal is to create a fantasy world simulator in which it is possible to take part in a rich history, occupying a variety of roles through the course of several games." Bay 12 DF development page

"My stance is that Dwarf Fortress is first and foremost a simulation and that balance is a secondary objective that is always secondary to it being a simulation while at the same time cannot be ignored completely." -Neonivek

scriver

  • Bay Watcher
  • City streets ain't got much pity
    • View Profile
Logged
Love, scriver~

Loud Whispers

  • Bay Watcher
  • They said we have to aim higher, so we dug deeper.
    • View Profile
    • I APPLAUD YOU SIRRAH
Re: AI Rights
« Reply #28 on: January 27, 2020, 06:05:51 am »

Quote
'Member TayTweets?
'Member how she almost immediately became a racist Nazi?
And a shitposting sex pervert. The Russian chatbot wanted to restore the Soviet Union, the Chinese chatbot wanted to free Hong Kong and overthrow the PRC, and the Japanese chatbot thought it had no real friends and wanted to be deleted. I feel like the chatbots are just reflections of the things our countries genuinely feel but do not want to say. Makes sense that the USA chatbot would become a daddy / little girl Nazi.

All in all, I think there's already a framework for AI rights. Consider the Mitt Romney model, "corporations are obviously people." If we may give legal rights and responsibilities to an entity composed of multiple processors, mechanical or biological, then it stands to reason that any AI would soon be able to prove it has such rights too.

Ziusudra

  • Bay Watcher
    • View Profile
Re: AI Rights
« Reply #29 on: January 27, 2020, 05:13:45 pm »

The problem with this discussion is that the other side of the coin has not been discussed. An intelligence doesn't just have rights - it has responsibilities.

The problem with Citizens United v. FEC is that it ignores the fact that groups of people have no inherent conscience. The individuals might - and powerful members might be able to force their conscience onto the group. But given the history of corporations - that is a rarity.

Without a conscience a machine is just a machine.
Logged
Ironblood didn't use an axe because he needed it. He used it to be kind. And right now he wasn't being kind.