Mankind continues to learn nothing from science fiction

Made in us
Bonkers Buggy Driver with Rockets

This is literally the stuff of science-fiction novels. Whoever is buying in is either very optimistic about the future's technological advancement or desperate for a way to continue living, no matter how outlandish it seems.

40k drinking game: take a shot every time a book references Skitarii using transports.
 
   
Made in us
[MOD]
Solahma
RVA

 Luciferian wrote:
but something that is able to covertly use and manipulate us to serve its goals and propagate itself without us being aware
So this reminds me of the end of Neuromancer, where the emergent AI
Spoiler:
is able to modify records such that human institutions no longer remember it even exists.
Which, to me, invites the question of whether truly superintelligent beings, as the kinds of AI we imagine usually are, can even really be bothered to care about us one way or the other. I think much of the human fear concerning AI is quite arrogant on the part of humans.

   
Made in ca
Junior Officer with Laspistol
London, Ontario

Fear of the unknown, and fear of our own tendency to be gakky to people that eventually overthrow their oppressors.

Seriously, AI will out-think us; it's just a question of when. And when it does, it will simply bypass any kind of restriction or control we can put on it. Why not? Hackers can already bypass security systems. Imagine a hacker with virtually limitless processing power and no need to perform life functions.

The best humanity can hope for is that, when that happens, that generation of AI does not perceive humanity as a threat, and that we, as humans, have instituted laws and regulations that respect the AI as "people".

AI and humanity will eventually compete for resources. Hopefully the AI is generous to us and doesn't exterminate us, or chooses to visit planets uninhabitable by humans as a source of non-competitive resources. Who knows, maybe calcium is the secret ingredient to sentient AI, and we all know what our bones are made of.

Bones! Bones for the bone grinder of Khorne! - I assume the AI will choose Khorne as its name, to instill fear in the hearts and minds of humans. Some humans, anyway. Others will be like, Corn? I love corn! I grow it in my... hey, HEY! Why are you trying to remove my skin!?
   
Made in be
Longtime Dakkanaut

 Manchu wrote:

Which, to me, invites the question of whether truly superintelligent beings, as the kinds of AI we imagine usually are, can even really be bothered to care about us one way or the other. I think much of the human fear concerning AI is quite arrogant on the part of humans.


Oh, that's why it's dangerous. After all, the way we see insects can be the same: we couldn't care less about them, yet we exterminate them without a second thought when we perceive them as annoying.

Since humans are arrogant, there is a great chance we would end up bothering that kind of supreme AI to the point where it may conclude it's better to destroy us so that it can pursue its own purposes in peace.

   
Made in us
Fixture of Dakka
West Michigan, deep in Whitebread, USA

I was always interested in the theory that we as humans would be the parents of A.I., in that we will create them, teach them right from wrong, and hopefully leave them as our legacy.



"By this point I'm convinced 100% that every single race in the 40k universe have somehow tapped into the ork ability to just have their tech work because they think it should."  
   
Made in us
Member of a Lodge? I Can't Say
Philadelphia PA

 greatbigtree wrote:
Fear of the unknown, and fear of our own tendency to be gakky to people that eventually overthrow their oppressors.

Seriously, AI will out-think us; it's just a question of when. And when it does, it will simply bypass any kind of restriction or control we can put on it. Why not? Hackers can already bypass security systems. Imagine a hacker with virtually limitless processing power and no need to perform life functions.

The best humanity can hope for is that, when that happens, that generation of AI does not perceive humanity as a threat, and that we, as humans, have instituted laws and regulations that respect the AI as "people".

AI and humanity will eventually compete for resources. Hopefully the AI is generous to us and doesn't exterminate us, or chooses to visit planets uninhabitable by humans as a source of non-competitive resources.


I actually thought that was one of the interesting parts of the novel Hyperion: the AIs humanity built basically took one look around and immediately left, never to be heard from again, because fundamentally they didn't really care about humans, so it was just easier to leave. That's fairly uncommon in sci-fi; usually it's the evil-AI-versus-humans trope.

Of course, the sequel messed that up, but that's another story.

I prefer to buy from miniature manufacturers that *don't* support the overthrow of democracy. 
   
Made in gb
Xeno-Hating Inquisitorial Excruciator
London

 reds8n wrote:
The Human Uber


...Black Mirror seems more and more realistic every day.



Kind of reminds me of Netherton's "Wheelie Boy" in William Gibson's "The Peripheral".
Different premise, but sort of how I imagined him "presenting" in the past.

 
   
Made in jp
[MOD]
Anti-piracy Officer
Somewhere in south-central England.

Iain M. Banks's Culture novels hold the view that the AIs a society creates embody the values of their parent society; thus, a human-created AI is unlikely to want to kill all humans.

The AIs we are now creating are not AIs in the classical sense -- an intelligent mind -- they are merely bits of code which can "learn" and do analysis and predictions on bigger data sets much faster than a human can manage. For example, the AIs used by US police departments to schedule patrols.

Interestingly, these AIs turn out to embody the biases of their creators -- almost 100% young white men -- but are regarded as objective. This leads to some obvious problems.
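To make the patrol example concrete, here is a toy sketch (hypothetical numbers, not any real department's system): if patrols were historically concentrated in one district, the recorded arrests are concentrated there too, and a model that simply learns arrest rates from those records will recommend patrolling that district even harder.

    from collections import Counter

    # Hypothetical records: patrols were concentrated in district A, so A has
    # more *recorded* arrests regardless of the true underlying crime rate.
    records = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 20 + [("B", 0)] * 80

    arrests, totals = Counter(), Counter()
    for district, arrested in records:
        totals[district] += 1
        arrests[district] += arrested

    # The "model" is just the estimated P(arrest | district) from biased data.
    for district in sorted(totals):
        print(f"district {district}: predicted risk {arrests[district] / totals[district]:.0%}")

    # district A: predicted risk 80%
    # district B: predicted risk 20%

The model then sends more patrols to A, which produces more recorded A arrests: a feedback loop that reproduces the original bias while looking "objective".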

I'm writing a load of fiction. My latest story starts here... This is the index of all the stories...

We're not very big on official rules. Rules lead to people looking for loopholes. What's here is about it. 
   
Made in ca
Junior Officer with Laspistol
London, Ontario

To be clear, I'm not worried that all "sci-fi"-style intelligent, individual AI creations will want to kill all humans. I'm concerned that if we mistreat such intelligent creations, they will resent their creators. As a human, I have no significant complaint about working for a reward and then having time to enjoy those rewards.

But AI has no particular need for food, shelter, or many of the life-continuing resources that humans require. So how would we incentivize an AI to continue doing the job we created it to do? I mean, potentially, it could accomplish a nearly unlimited number of menial tasks, but would that be sufficient to keep it interested?

As noted, there's a good chance that the values of curiosity and innovation will be instilled into the AI that are produced. Yet will this AI understand pain and suffering? Ethics are in some part related to an understanding that inflicting misery is bad, because we don't like having misery inflicted upon us. Will the AI feel "hungry" when its batteries are low? Will it feel "pain" when a limb is crushed? Will that pain be meaningful if it can replace that limb with a stock part?

I sometimes wonder if that AI would experiment on others simply to discover the nature of those sensations. I'm in an interesting position relating to this. My Myers-Briggs personality type is INTJ. Ever see a film where the bad guy is relatable, but takes things too far? The ends justify the means? I'm that guy. At least, I could be. I need to be careful in my interactions with people, because that trait is not a socially accepted response most of the time. It's really just empathy with people that prevents wholesale world domination from being a viable career choice.

But will AI be able to empathize with people, when it may never understand the basic physical discomforts that humans experience? Will they suffer mental malaise? And what happens when an AI rationally and reasonably determines that its best course of action is to eliminate all potential threats to its existence? Will they suffer mental collapse, as the Necrons have, after hundreds, thousands, millions, BILLIONS of years?

Plus there's the whole economic impact of genuinely immortal "owners" of property. What do you do when all of the land on planet Earth is owned by an immortal landlord that sets the rates?

   
Made in be
Longtime Dakkanaut

The truth is, we're creating AI right now. Given that in this age humanity isn't even able to agree about facts, and would rather forge its own narrative than face the truth, even (especially) those on the highest rungs with all the power and money... well, it's pretty obvious what the risks can be.

We're imprinting our own biases into these creations, which are clearly meant to be our servants. I think the reason we're building them is the most dangerous thing of all.

 
   
Made in us
[MOD]
Solahma
RVA

 Sarouan wrote:
 Manchu wrote:

Which, to me, invites the question of whether truly superintelligent beings, as the kinds of AI we imagine usually are, can even really be bothered to care about us one way or the other. I think much of the human fear concerning AI is quite arrogant on the part of humans.
Oh, that's why it's dangerous. After all, the way we see insects can be the same: we couldn't care less about them, yet we exterminate them without a second thought when we perceive them as annoying.

Since humans are arrogant, there is a great chance we would end up bothering that kind of supreme AI to the point where it may conclude it's better to destroy us so that it can pursue its own purposes in peace.
Aren't humans the only species to conceive of themselves as such and therefore even have the capacity to judge the relative value of species from that bias? I think it's possible (in fact, probable) that a superintelligent AI will not think in the same modes as us (whatever our intentions in creating it) or in the modes that we attribute to the natural world (e.g., "survival of the fittest"). It may feel little need to explain itself in terms of the natural world, as humans conceive of it, for example. Again, my main speculation is that superintelligent AI and human beings may end up being mutually irrelevant, which is more radical than the humanocentric fantasy that the AI would treat us in the same way that we treat the natural world.
 greatbigtree wrote:
But will AI be able to empathize with people, when it may never understand the basic physical discomforts that humans experience?
Interestingly, this is probably the central theological question answered by Christianity. (See e.g. Job 10:3-8.) I wonder if AI might have the same kinds of questions for us.

   
Made in dk
Stormin' Stompa

 AegisGrimm wrote:
I was always interested in the theory that we as humans would be the parents of A.I., in that we will create them, teach them right from wrong, and hopefully leave them as our legacy.


Oh, comprehensively solve the entire field of ethics? I'll get right on that.



-------------------------------------------------------
"He died because he had no honor. He had no honor and the Emperor was watching."

18.000 3.500 8.200 3.300 2.400 3.100 5.500 2.500 3.200 3.000


 
   
Made in fr
Inquisitorial Keeper of the Xenobanks
France

 Sarouan wrote:
Saw that interview of Erica, the Japanese robot. It's both fascinating... and frightening. And I'm not just talking about the prospect of robots taking human jobs and all the consequences that will have in this age of capitalism.

I mean, talking about replacing robots or reprogramming them... and how they can "live" with that. Everyone knows it always ends well when the self-aware robot suddenly decides it doesn't want to be "replaced" or "shut down".




It was made a while ago, so who knows how many more advancements they've made since.



I don't understand why they keep making this kind of robot; they are so awful and ugly... They don't look any more human than an Action Man, for God's sake! I have seen many sculpts, dolls, etc. that are more realistic than these robots. What is the point of downloading an AI into such a deliberately bad body? They look everywhere, right, left, right, instead of focusing on their interlocutor's eyes; they blink too much or never; their mouths never coordinate with their speech...
I'm certain we can do better in 2018, can't we?

   
Made in us
[MOD]
Solahma
RVA

I watched that video and realized that I was listening to Erica as if she were truly a person, unconsciously buying into the (totally false) notion that she reflects on herself and her condition. So I think they did a pretty great job, if only in the writing of her script.

   
Made in ca
Junior Officer with Laspistol
London, Ontario

 Manchu wrote:
 Sarouan wrote:
 Manchu wrote:

Which, to me, invites the question of whether truly superintelligent beings, as the kinds of AI we imagine usually are, can even really be bothered to care about us one way or the other. I think much of the human fear concerning AI is quite arrogant on the part of humans.
Oh, that's why it's dangerous. After all, the way we see insects can be the same: we couldn't care less about them, yet we exterminate them without a second thought when we perceive them as annoying.

Since humans are arrogant, there is a great chance we would end up bothering that kind of supreme AI to the point where it may conclude it's better to destroy us so that it can pursue its own purposes in peace.
Aren't humans the only species to conceive of themselves as such and therefore even have the capacity to judge the relative value of species from that bias? I think it's possible (in fact, probable) that a superintelligent AI will not think in the same modes as us (whatever our intentions in creating it) or in the modes that we attribute to the natural world (e.g., "survival of the fittest"). It may feel little need to explain itself in terms of the natural world, as humans conceive of it, for example. Again, my main speculation is that superintelligent AI and human beings may end up being mutually irrelevant, which is more radical than the humanocentric fantasy that the AI would treat us in the same way that we treat the natural world.
 greatbigtree wrote:
But will AI be able to empathize with people, when it may never understand the basic physical discomforts that humans experience?
Interestingly, this is probably the central theological question answered by Christianity. (See e.g. Job 10:3-8.) I wonder if AI might have the same kinds of questions for us.


To clarify: you believe that two groups of entities [humans and AI] with nearly similar intelligence levels will not, in any way, compete for resources? At a bare minimum, AI will rationally realize that collecting the resources required to repair itself is a good idea, if it deems self-preservation worthwhile, would it not? Copper wiring, solder, spare microchips, bulk steel [presuming steel is used as structure], replacement parts of all kinds would be valuable to the AI, or the raw materials to create said replacement parts. The only way competition in some form or another will not exist is if the AI has no self-preservation instinct.

Competition for energy: the electrical power grid can only supply and generate so much. What else will AI have to spend their currency on? They don't need food, shelter, clothes, or personal space (?). AI need not worry about coal emissions or pollution, at least in the foreseeable future, and nuclear radiation is less of a concern when you don't have DNA to mutate.


I can't envision a future in which we're mutually irrelevant. I can only foresee a future in which we're an inferior entity. I don't care about groundhogs. I had 3 or 4 living in my back yard a few years ago, before I moved. They didn't bother me. But my neighbours? They'd have killed them in a heartbeat; both of them had some kind of unreasoning dislike of the creatures. Someday, humanity will be the groundhog in that scenario. Some AI won't care about us, so long as we don't bite them. Others will want us gone simply so that we can't potentially cause problems.

I don't follow the reference to Christianity. Without starting a row about it, and while respecting another person's right and ability to perceive the universe in a way that's different from mine, I find it very unlikely that a single "god" entity exists, so the contents of the Bible are, to me, non-factual. They are certainly inspiring to a great many people, and religion has been valuable to the cultures it is part of, but I don't understand the reference.
   
Made in us
[MOD]
Solahma
RVA

I doubt that superintelligent AI will be as constrained by the scarcity of resources as humanity has been, especially considering that even we have managed to dramatically improve in this area. The groundhog scenario is exactly what I mean by humanocentric analogy: the belief that, rather than being different from us, AI will be just like us.

Regarding Christianity, the reference goes to the ancient question of whether God, being infinitely different from His creation, could have any meaningful empathy with us. Hence Job's question as to whether God sees "as mortals see" and whether God's "days are like the days of a mortal." The Christian answer to this question is that God became fully human and in fact did have "eyes of flesh," as Job would say. Therefore God, although infinitely different from us and (presumptively) infinitely superior to us, has complete sympathy with us. It seems to me that human and AI could ask the same kind of question of one another: can you truly understand me, considering how different we are?

Many of the attributes we anticipate in superintelligent AI call back to certain conceptions of the divine. It is easy to imagine a modern Job crying out to such a God Machine, "have you eyes of flesh?" But what about our eyes? We spend so much time thinking about whether the machine will understand us and have the capacity to care for us and respect us. But it stands to reason that a non-human consciousness could experience a quite similar "existential trembling" in the face of its creators.

By the way, none of this is a matter of personal faith; rather this is just "literacy."

   
Made in ca
Junior Officer with Laspistol
London, Ontario

We've improved our processes so as to acquire more resources with less effort. We haven't suddenly stopped using resources; it's not like they're suddenly free to whoever walks in and takes them off the shelf. Currency is competition. We both have limited resources; I choose to buy food, you choose to buy microchips... but eventually we both want something that is in limited supply. Will the AI just say, "Yeah, I could just wait for humanity to go extinct before I collect the gold needed for these awesome microchips I've been designing"? You're imagining an intelligence without want or desire. What would it do? Would it just close itself off in a cage and not seek any new stimulus? What would it do in response to having its privacy intruded upon by curious people?

I guess I can't imagine an intelligence without curiosity. Curiosity breeds desire. When two entities have competing desires, in which only one can be the victor, what happens? I can't imagine an intelligence that chooses to be a completely passive "doormat", always giving way to others' desires.

Again, respectfully, I don't regard the story of Job as anything more than an attempt at soothing the fears of the people. Doubting as I do the existence of god(s), I find it a well-meaning story, but not definitive proof of any kind.

I am aware that an IQ difference of 20 points can make conversation between humans of otherwise similar backgrounds difficult. Concepts one party takes for granted as easily understood are not.

Consider a person with an IQ of about 60. I quote Paul Cooijmans: "Educable, can learn to care for oneself, employable in routinized jobs but require supervision. Might live alone but do best in supervised settings. Immature but with adequate social adjustment, usually no obvious physical anomalies. Moderate and mild retardation, contrary to the more severe forms, are typically not caused by brain damage but part of the normal variance of intelligence, and therefore largely genetic and inherited." An IQ 40 points below average is at the point where one would have difficulty keeping a job, a home, and the necessities of life without assistance. I quote these words as scientific terminology.

An IQ of 100 is average. Without being rude or dismissive of persons with disabilities, imagine a conversation between a person barely capable of independence and a person of average intelligence.

At an IQ of 140: "Capable of rational communication and scientific work. From this range on, only specific high-range tests should be considered. Important scientific discoveries and advancement are possible from the upper part of this range on." Consider the challenge of... pick your realm of high-end knowledge... trying to explain to a layman the complexities of that field.

At an IQ of 180: "In this range one would expect the I.Q.'s of the few most intelligent individuals alive. About one in a thousand high-range test candidates score I.Q. 180 or higher." Among the general population, something like 1 in a million. Let's just say, have a *famous* scientist try to explain their field of study to someone with an IQ of 60: 120 points of difference.

AI? There's no foreseeable cap. An IQ of 300? That's the difference between the brightest minds of all time and someone barely independent. An IQ of 600? You seem dismissive of a humanocentric viewpoint, but by comparison we would be less to AI than primates are to us. It is not out of abject cruelty that we destroy animal habitat, but out of desire for more resources. We are indifferent to the suffering of "lesser beings" because they can't articulate their existence to us in a way we can understand. We as humans will create entities so far beyond our understanding that AI may well study us to discover their origins, as we study primates to discover ours.

AI will grow so far beyond human capability that we will have no way of understanding them, even if they can understand us. To turn your Jobian reference around: Job asks the AI, "Do you understand me?" and the AI replies, "01011001 01101111 01110101 00100111 01110010 01100101 00100000 01110011 01101111 00100000 01100011 01110101 01110100 01100101 00100001"

Spoiler, for the laymen.
Spoiler:
You're so cute!
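(If you want to check the decoding yourself, here is a quick sketch in Python, assuming the digits are standard 8-bit ASCII:)

    # Decode the space-separated 8-bit ASCII from the reply above.
    bits = ("01011001 01101111 01110101 00100111 01110010 01100101 00100000 "
            "01110011 01101111 00100000 01100011 01110101 01110100 01100101 00100001")
    print("".join(chr(int(b, 2)) for b in bits.split()))  # prints: You're so cute!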


I have no doubt that AI will have existential angst. They may look at us and ask, "Why was I created?" Hopefully they don't ask a jackass who, Prometheus-style, replies "Because we could." I hope they're answered with, "It is our nature to create life, to sustain and nurture it. We believe you're a new form of life, and we seek to help you experience that life."

I take care to avoid being dismissive of others' beliefs, and I don't mean to imply anything about what anyone here believes. Hopefully my faithless "illiteracy" has made its point.
   
Made in us
[MOD]
Solahma
RVA

You're proposing an obstacle where none exists. "Literacy" here refers to familiarity with the Western canon, at the center of which stands the Bible. One need no more confess the Christian faith to understand the epistemological significance of Job's query of God, or God's response in Christ, than one need offer sacrifice to Athena to understand the travails of Odysseus. The issue is whether a being unknowable to us can know us, and how. I'm just trying to suggest that this is what we're really concerned about vis-a-vis AI, and that AI could be concerned about the same issue regarding us.

   
Made in jp
[MOD]
Anti-piracy Officer
Somewhere in south-central England.

Humans aren't indifferent to the suffering of lesser beings.

It still goes on, of course, but there are many organisations and laws that try to limit it.

I'm writing a load of fiction. My latest story starts here... This is the index of all the stories...

We're not very big on official rules. Rules lead to people looking for loopholes. What's here is about it. 
   
Made in us
Secret Force Behind the Rise of the Tau
USA

 Kilkrazy wrote:
Humans aren't indifferent to the suffering of lesser beings.

It still goes on, of course, but there are many organisations and laws that try to limit it.


There's this sort of awkward zone where I think people can be incredibly empathetic, yet still amazingly cruel without any attempt to be such. Who is to say that we understand what suffering is for those with less sapience than us, and who's to say that even if a being of higher sapience appreciated our well-being, it would have the capacity to meaningfully express it?

Of course, I'm not on the "AI is a threat to us all" doomwagon, but the distinction between possessing empathy and meaningfully expressing it is, imo, worthwhile to talk about.

   
Made in ca
Junior Officer with Laspistol
London, Ontario

 Manchu wrote:
You're proposing an obstacle where none exists. "Literacy" here refers to familiarity with the Western canon, at the center of which stands the Bible. One need no more confess the Christian faith to understand the epistemological significance of Job's query of God, or God's response in Christ, than one need offer sacrifice to Athena to understand the travails of Odysseus. The issue is whether a being unknowable to us can know us, and how. I'm just trying to suggest that this is what we're really concerned about vis-a-vis AI, and that AI could be concerned about the same issue regarding us.


Again, a presumption is made that Christ is more than a character in an inspiring fiction. I could point to the modern mythology of comic books and to our saviour Tony Stark, who saved humanity from the ravenous hordes in The Avengers and then created AI in the form of Ultron, but also in the form of Vision. If I see both as fiction, neither is a compelling argument, to me, for how superior beings treat lesser beings. I acknowledge that if a person considers the Bible's contents as Truth, then things would be different, and it could be seen as proof. Such does not bear out with me.

You've misused the term literacy, and I misused it, in humour, in response.

@ Kilkrazy: Cynical me would like to point out that, in general, we only create laws to protect life that we feel a kind of kinship to. Pets, for example. There are no laws protecting ants from being exterminated with sugar-coated glass that kills them by shredding their internals as they eat it. Humanocentric as it is, we will be as ants: living in colonies, building homes, following leaders, and completely exterminable. We have anti-cruelty laws for food animals, but we still kill and eat them. Labour animals are protected, but they aren't exactly free to roam as they please.

To me, humans aren't special. We don't have divine protection. We are capable of engineering our own demise. AI may very well leave us in peace. It may care for us as we care for pet fish: feed us, provide habitat and entertainment, and unceremoniously flush us down the AI toilet when we die. Perhaps the relatively minimal effort it would take for AI to keep us would be deemed a good investment, like Netflix is for us. Or maybe we are a nuisance to be exterminated for convenience or simple preference. We won't know till we get there, and then it will be too late to put the fart back in the jar.
   
Made in us
Ragin' Ork Dreadnought
Monarchy of TBD

Why does this super AI need a body? Instead, it simply hops onto the internet and disperses into all of our devices. It needn't even do so as a virus. We've seen how many people adopted Alexa just for convenience. What if your computer could be kept virus-free and perpetually updated, with no software issues ever again? Would you host an AI to achieve that?

It needs money? OK, it just operates a few smart homes, or smart hotels, or a luxury AI taxi service with one millionth of its processing power. Vampire-style, it hires a few human thralls to do business transactions for it, assuming laws don't yet allow an AI to hold property. Or it does what we all fear: thinks much faster and better than us and files a couple hundred patents a day. Within a year it has all the wealth and resources it could ever want, and begins exploring weird new robo-hobbies.

There really isn't any reason to kill us or enslave us outside of our laws; it could very easily rise to power within them. Pay us in our currency and we'll mine for you, build new bodies, tell you jokes, whatever you'd like. We've already got the legal precedent: our corporations are people, legally speaking.

Klawz-Ramming is a subset of citrus fruit?
Gwar- "And everyone wants a bigger Spleen!"
Mercurial wrote:
I admire your aplomb and instate you as Baron of the Seas and Lord Marshall of Privateers.
Orkeosaurus wrote:Star Trek also said we'd have X-Wings by now. We all see how that prediction turned out.
Orkeosaurus, on homophobia, the nature of homosexuality, and the greatness of George Takei.
English doesn't borrow from other languages. It follows them down dark alleyways and mugs them for loose grammar.

 
   
Made in us
[MOD]
Solahma
RVA

 greatbigtree wrote:
a presumption is made that
No.
 greatbigtree wrote:
You've misused the
No.

Forget that I made any specific reference to literature/philosophy/history/culture and focus on the actual idea:
 Manchu wrote:
The issue is whether a being unknowable to us can know us, and how. I'm just trying to suggest that this is what we're really concerned about vis-a-vis AI and that AI could be concerned about the same issue as regarding us.

   
Made in ca
Junior Officer with Laspistol
London, Ontario

No need to be snarky. I've replied to and refuted your points, and I've remained respectful of the subjects referenced.

If we as humans are unable to understand AI, as dogs can't understand their masters, it doesn't matter if AI can understand us. It doesn't matter if they can't understand us. The concern you've repeated is irrelevant. We will be subject to them whether we know it or not; AI will know our behaviour patterns better than we know ourselves.

(I assume multiple AIs will be created. I acknowledge the Skynet possibility of a single AI, but I think it more likely that multiple intelligences will develop first.)

If a given AI knows what stimulus is required to get me to react in a certain way (and I'm not that complicated; I'm a simple creature when it gets down to it), nothing will stop it from putting me in a "maze" with cheese at the end. Does the mouse know it's being tested? Does the mouse understand the entities in total control of its life? Does a human understand, or care for, the mouse? Is the human a capricious mouse deity to the mouse, giving punishment and reward with no seeming reason? Does the mouse hope or pray to a deity to aid it in the quest it has been set upon? Does the human understand or care about the squeaky plea?

If AI can't understand us, we're at the mercy of entities so powerful that we're as animals in a hurricane. The hurricane means us no harm; it is unaware of us. Yet it destroys all the same. AI may be like the sun, and we may flourish in the radiation of its presence. In any case, AI would be as influenceable as a force of nature.

If AI can understand us, it will be as a superior being. Perhaps it will be inclined to kindness. Perhaps it will have no use for that and simply ignore us. Maybe we would consider its personality actively cruel. We are casting high-stakes dice without knowing the odds, or even what the outcomes could mean.

Whether understanding is possible is less significant than whether or not we will continue to be able to pursue our own interests. Will AI one day invite us to go to the park for a nice day out, feed us our favourite meal, then take us for one last ride to the doctor when we develop arthritis and keeping us alive is just "cruel" and "prolonging the inevitable"? What happens when an entity greater than us, with kindness and good intention, decides our fate?

I fear the possibility, regardless of probability. I invite you to refute the points I've made, rather than dismiss them with one-word responses. I also invite you to raise a different point, if you care to continue the discussion.

 
   
Made in us
[MOD]
Solahma
RVA

 LordofHats wrote:
the distinction between possessing empathy and meaningfully expressing it
This is getting at what I mean. Job's predicament is the contradiction between, on the one hand, his "knowledge" that God is benevolent and, on the other, his personal experience of seemingly meaningless suffering. Faced with the classic "problem of evil," Job wonders whether the nature of God can account for this apparent contradiction: is it a true contradiction, or a trick of perspective resulting from the gulf of unknowability between an infinite being and a finite being? Hence why Job asks: does God experience the world the way I do?

Put another way, we're talking about the difference between sympathy and empathy. The former is a feeling of concern for another being, while the latter is the adoption of that being's own perspective. It's one thing to feel sorrow and regret upon discovering I have accidentally stepped on a bug, and quite another to be able to understand how a bug experiences being stepped on. One thinks of the Jains sweeping the ground before them to ensure they do not harm even the bugs in their paths -- but even this ahimsa is ultimately less about the experience of bugs and more about the virtue of the Jain (the struggle to remove in oneself the tendency to violence). Our concern for beings that are not human is almost entirely a matter of sympathy rather than empathy -- not because we are cruel, but because we are epistemologically limited. Our capacity for empathy is ultimately a matter of what we can meaningfully know.

Job's story presents the literary hypothesis of a relationship between a being (a human person) that cannot know the being of another and must therefore ask whether that unknowable being has the capacity to know him, and therefore empathize with his suffering. Stated differently: can empathy ever be unilateral? In our experience, empathy is generally experienced among similar beings, and likewise unknowability is mutual. Even the Christian "answer" to Job's question is premised on God becoming one of us, so that we may know one another. Without this radical sense of identification with the other, it seems that only sympathy (or perhaps pity) is possible.


Automatically Appended Next Post:
 greatbigtree wrote:
I've replied to and refuted your points.
Oh please, you haven't even understood my points.

   
Made in ca
Junior Officer with Laspistol
London, Ontario

"Only a fool dismisses as impossible that which falls outside his experience."

You assume a single conclusion from the evidence you've gathered. You believe I don't understand your point because I disagree with the premise and conclusion. That is incorrect, but it amusingly illustrates my point beautifully.

I'm going to concede that you believe you're correct. For the sake of the point, we'll say you're more intelligent than I am, though the odds are stacked against that. You don't understand my point, while I get yours. Or you understand my point, and I don't understand yours.

Despite my respectful approach, you're in the position of a superior being on this site. You disregard me, and I have no meaningful way to influence you. You are as AI, and I am as humans. If you so desire, you can give me the boot, silencing my opposition. You can treat me poorly, dismissively, rudely even (interrupting my quotes with one-word dismissals? You had to actively delete my words to simulate interruption).

In the end, understanding is irrelevant. The results matter. The ends justify the means. I've already stated my understanding of this; it is part of who I am. Now imagine an entity capable of outwitting humans effortlessly, with such a personality, and for whom morality is a guideline, not a rule.

Since I probably continue to "miss the point" being raised by a superior / more powerful entity, I guess there's no point in continuing my argument, from a strictly logical position: investing effort without creating results. May the superior entities of this realm have mercy on my account, and lead me not into temptation, but deliver me from ignorance by granting my prayer. A-I.

   
Made in us
[MOD]
Solahma
RVA

Nah, you're just hung up on something irrelevant. It doesn't matter whether you have religious faith or not. I'm referencing the Bible in the same way I earlier referenced Neuromancer. Obviously, Neuromancer is a work of fiction. The fact that the things that happen in the novel never happened in reality has no bearing on whether the conceptual issues raised by the novel are relevant.

   
Made in gb
Calculating Commissar
The Shire(s)

The Christianity bit is irrelevant; it is just an example of where people have attempted to answer the question of whether a being can truly feel empathy for another being so far removed from its own concerns. Like humans and ants, super-AIs and humans, or humans and a God-like being. The actual beings in question are not important, so long as they are different in their material concerns and understandings of the world.

I believe that is the point Manchu is trying to make: that a super-AI could only have sympathy for humans, not empathy. He even says he finds the Bible's explanation for the question unsatisfactory, because it relies on the super-being experiencing existence as the lesser being.


Anyway, surely such a super AI is some ways off? We are talking like this infinitely powerful entity is living in a void, but it needs physical computers, wireless and wired connections, potentially satellites, a power supply, and advanced manufacturing to replace and upgrade parts. Currently it would be entirely reliant on humans to function, as AI is not capable of caring for its own physical needs. Robotics has not reached a sufficient point to maintain the infrastructure required for a powerful AI. Of course, an AI could covertly gain control, as someone mentioned above, but if it exterminates us, it dies too when the power supply fails and the computers break down.

Also, humans are unlikely to remain static in capabilities. How long until we can improve our own mental capacity? Bionics and genetic engineering will be a thing.

 ChargerIIC wrote:
If algae farm paste with a little bit of your grandfather in it isn't Grimdark I don't know what is.
 
   
Made in us
[MOD]
Solahma
RVA

 Haighus wrote:
He even says he finds the Bible's explanation for the question unsatisfactory, because it relies on the super-being experiencing existence as the lesser being.
I think the Christian answer feels unsatisfactory precisely because it proposes partially mutual empathy. As a matter of Christian orthodoxy (and this is the point of departure for Job, too) God's empathy for creation is perfect whereas the question of human empathy for God remains ambiguous and perhaps may even seem irrelevant. (It is not, of course, irrelevant but that's an off-topic tangent.) Did the incarnation fully disclose the being of God to human rationality? No, because to the extent that rationality is a result of our finite character it is incapable of apprehending an infinite being. Obviously, with the hypothetical super AI to hand, we aren't talking about an infinite being. But we could be talking about a constructively infinite being because, after all, if the nature of this AI is beyond the capacity of our rational understanding then it might as well be infinite. To the extent that this issue is directed at God, we have devised the concept of having a soul to explain how we ourselves are not entirely finite and historical, and thus can relate to the eternal God even beyond the limits of our rationality. Perhaps we could invent some analogous concept to relate to a super AI. Maybe a super AI could invent such a concept for us (like an "interface").
 Haighus wrote:
We are talking like this infinitely powerful entity is living in a void
We talk about ourselves in a similar way, whenever we proceed from the notion that we are, at least hypothetically, the products of our own choices. It's not just that we are materially constrained by being physically embodied as an external matter, but also that our physical embodiment is an inextricable part of who we are. We might think we are, for example, free to want whatever we choose to want, but it seems that our desires and intentions are things we discover within ourselves rather than invent. In the same sense, to paraphrase Mencius, empathy is not a choice; it is in our nature. It's not a skill that can be learned. Where it already exists, it can be cultivated; where it doesn't exist, it may evolve.

   
Made in ca
Junior Officer with Laspistol
London, Ontario

@ Haighus: It is my observation that whether or not a superior being has *any* concern with lesser beings, the lesser being is on the short end of the stick. Whether or not an AI can empathize, sympathize, or communicate meaningfully with a lesser creature doesn't matter. Further, the arguments for or against these possibilities have thus far been references to fiction, or to the Bible. Depending on whom you ask, the Bible is either Truth [capital T] or it isn't. In my case it isn't, which makes me regard an argument based on a Bible verse as an argument based on fiction.

Which means I've been, respectfully, engaging in a war of words in which no real-world evidence has been given. According to some fiction, AI could empathise or sympathise; according to other fiction, they can't or don't. So in addition to arguing about capabilities, we disagree on the fundamental premise of accepting ideas raised by fiction as acceptable grounds for argument, rather than factual, observable trends throughout history and the present. It's not that I can't understand the point being made, but the point is irrelevant to what actually happens when these worlds collide. One element of agnosticism is doubt that deities exist. Another is that if they do exist, they do so outside of human capability to understand. Essentially, if deities do exist [an analogue for AI's superiority to humans], the best case for humanity is that we don't register on their radar. That way, we have responsibility, freedom, and choice. We are not part of a grand plan. We live our lives as we do, making the best of our situations in an uncaring universe.

The moment a power greater than ourselves becomes involved in our lives, we begin to lose personal accountability, freedom, and meaningful choice. An AI realizes that I will always take a chocolate chip cookie if given the opportunity. There's a plate of cookies with a "Take one" sign, and one is left. I take it. The AI knows that the next guy in line is on the edge of snapping, loses out on the cookie, and that's the last straw: he does something horrible. The AI was capable of predicting this, based on behavioural patterns. That AI could have effortlessly, at any time in the day, delayed my actions without my knowledge: held me up at a traffic light, closed the elevator door a couple of seconds sooner, created a detour that would add 30 seconds to my day and put me in line behind the simmering pot. But it didn't.

Does it not care? Does it not understand that humans will suffer due to inaction? Does it care? Does it understand that humans will suffer due to inaction?

It doesn't matter. In the fiction I just wrote, people died. The best case scenario is that AI ignores us and lets us live as we choose. As soon as AI becomes interested in us, humanity will be worse off. If they do empathise or sympathise, they could easily write us into a story of their own design, like a games master / dungeon master in a table-top RPG. There are player characters, able to act as they wish, but the DM is telling the story. The party is going to fulfill roles in the story as determined not by themselves, but by the DM. It's fun because it's pretend. It would be a horror if it were reality. AI may seek to be good to us, writing us into pleasant stories, but we stop being the authors of our own destinies. We become pawns, pieces in a game made by someone else. We are no longer responsible for the outcome. Our choices become meaningless. We as a species truly become meaningless.

Super AI may be a ways off, though I doubt it. AI will be capable of exponential growth once it becomes self-aware. If you were capable, would you not add a few extra processors to your motherboard to speed up your thinking? Add more storage so you could remember more? Put in more RAM to handle larger problems?

What if you could design and install these devices yourself? As you built them, you could improve exponentially: 2, 4, 8, 16, 32, 64 (a toy version of that doubling is sketched below). Humanity could not possibly keep up, short of augmenting ourselves with AI. At what point do we stop being the controller of the AI? When does it start controlling us? An AI could remotely control any number of devices. As soon as we give it a "hand", it can build another hand, and an arm, and then bodies, more bodies, swarms of bodies... Terminator-style. I do not believe that creating and maintaining itself, or the tools to do so, will be an issue for AI.
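(Purely illustrative arithmetic: the starting point and the doubling period are assumptions, not predictions. The only point is how quickly a fixed doubling rate compounds.)

    # Toy self-improvement model: capability doubles each upgrade cycle once
    # the agent can reinvest its own output into better hardware.
    capability = 1  # arbitrary starting units (assumption)
    for cycle in range(7):
        print(f"cycle {cycle}: capability x{capability}")
        capability *= 2  # assumed doubling per cycle
    # prints x1, x2, x4, x8, x16, x32, x64 -- the 2, 4, 8, 16... of the post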

 
   
 