Viability of the raising of a second sentient species

Made in us
Legendary Master of the Chapter





Chicago, Illinois

 AlexHolker wrote:
 Asherian Command wrote:
It will be a task, but it is inevitable that humanity will try to create something. We are creators, so the chance to make life has been on people's minds for hundreds of years. AI is the next step in the human evolutionary chain; it is only a matter of time before we invent an AI.

If that was true, humanity could not be a sapient species.


How does that make any sense?

Man's first goal is to become a god himself. It's one of the first sins of man, if you go by the Bible, that is.

For science, it is a drive to create something to further humanity's development. Humans will be sapient for a very long time, and we will never achieve true omnipotence.

From whom are unforgiven we bring the mercy of war. 
   
Made in au
Incorporating Wet-Blending






Australia

 Asherian Command wrote:
 AlexHolker wrote:
 Asherian Command wrote:
It will be a task, but it is inevitable that humanity will try to create something. We are creators, so the chance to make life has been on people's minds for hundreds of years. AI is the next step in the human evolutionary chain; it is only a matter of time before we invent an AI.

If that was true, humanity could not be a sapient species.

How does that make any sense?

Because the ability to act with appropriate judgement - to contemplate a course of action and make a considered decision if or how to go ahead with it - is literally the definition of sapience. A creature that is incapable of that judgement is not a sapient creature.



"When I became a man I put away childish things, including the fear of childishness and the desire to be very grown up."
-C.S. Lewis 
   
Made in us
Legendary Master of the Chapter





Chicago, Illinois

 AlexHolker wrote:
 Asherian Command wrote:
 AlexHolker wrote:
 Asherian Command wrote:
It will be a task, but it is inevitable that humanity will try to create something. We are creators, so the chance to make life has been on people's minds for hundreds of years. AI is the next step in the human evolutionary chain; it is only a matter of time before we invent an AI.

If that was true, humanity could not be a sapient species.

How does that make any sense?

Because the ability to act with appropriate judgement - to contemplate a course of action and make a considered decision if or how to go ahead with it - is literally the definition of sapience. A creature that is incapable of that judgement is not a sapient creature.


Then how the hell does that have any bearing on the conversation? I didn't say how it will be done, because I have no idea of the delicate mechanics of creating new life in the form of an AI.

We know it will be many, if not billions of, programs running simultaneously, but whether they will come together and be able to manage themselves is unknown.

From whom are unforgiven we bring the mercy of war. 
   
Made in us
Gore-Soaked Lunatic Witchhunter




Seattle

 AlexHolker wrote:
 Asherian Command wrote:
 AlexHolker wrote:
 Asherian Command wrote:
It will be a task, but it is inevitable that humanity will try to create something. We are creators, so the chance to make life has been on people's minds for hundreds of years. AI is the next step in the human evolutionary chain; it is only a matter of time before we invent an AI.

If that was true, humanity could not be a sapient species.

How does that make any sense?

Because the ability to act with appropriate judgement - to contemplate a course of action and make a considered decision if or how to go ahead with it - is literally the definition of sapience. A creature that is incapable of that judgement is not a sapient creature.


Um, no. That's not the definition of sapience.

sapient (adjective): having or showing great wisdom or sound judgment.

Pop-culture references and paranoid (though humorous) delusions aside, there's nothing inherently wrong with producing an AI. In fact, in a world where BSG and The Terminator exist (and scientists are most certainly aware of such), it is likely that such scenarios will be considered and planned for in the development of AI.

It is best to be a pessimist. You are usually right and, when you're wrong, you're pleasantly surprised. 
   
Made in gb
Longtime Dakkanaut





 Asherian Command wrote:
 AlexHolker wrote:
 Asherian Command wrote:
It will be a task, but it is inevitable that humanity will try to create something. We are creators, so the chance to make life has been on people's minds for hundreds of years. AI is the next step in the human evolutionary chain; it is only a matter of time before we invent an AI.

If that was true, humanity could not be a sapient species.


How does that make any sense?
I think you guys might be starting to talk at cross purposes: "inevitability" versus something like "free will".

 AlexHolker wrote:
No, it is not inevitable. You can say that organic intelligence was inevitable because carbon-based molecules self-organise in a fashion that can eventually produce life, and from there, sapient life. If inorganic AI happens, it will be because we work really hard to make it happen, it won't just happen spontaneously.
I understand where you are coming from, but I think that's actually a very difficult argument to make. Organic intelligence (at a human level) seems to be quite improbable. There are lots of examples throughout history of convergent evolution, where nature has tried the same things over and over. One of my favourite examples is the ichthyosaur which appears remarkably like a dolphin, despite evolving, independently, from aquatic reptiles, ~200 million years before the first dolphin. Evidently, this is a shape that evolution favours for aquatic predators (swordfish and marlin might be considered yet more examples). However, in all the billions of years that life on Earth has been evolving, "super intelligence" has only occurred once. When we turn our ears to the stars, listening for radio signals from other planets, all we hear is silence. So... far from being inevitable, organic intelligence might be kind of a fluke.

I think the problem with evolution and intelligence is that evolution favours small changes that correspond to small improvements in survivability. Any improvement in things like speed or armour will have an immediate impact on survival. We see lots of organisms that have evolved thick armour, or streamlined themselves for speed, as a result. Intelligence doesn't really work like that. Small amounts of intelligence don't have much impact. Even huge amounts of intelligence, such as the difference between a dolphin and a shark, still don't seem to make a correspondingly huge difference. So there is really no reason for evolution to keep plugging away at intelligence; it's just not that useful... that is, until it suddenly becomes really useful and owns everything. But getting to that point probably takes a lot of energy and investment without any results, which is probably why we don't see it happening more often, and why even our own big brains appear to have been more of a lucky accident than a survival adaptation.
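To make the threshold argument concrete, here is a toy hill-climbing simulation (numbers entirely invented): a trait with a steady payoff ratchets upward under "keep only immediate improvements" selection, while a trait that only pays off past a high threshold never accumulates:

```python
import random

# Toy model: mutations are kept only if they improve fitness right now.
# "Armour" pays off linearly; "intelligence" pays off only past a high
# threshold, so selection never has a reason to accumulate it.

def fitness(armour, intelligence):
    payoff = armour                 # every point of armour helps a little
    if intelligence > 0.9:          # brains only matter once they're big
        payoff += 100 * intelligence
    return payoff

armour, intelligence = 0.0, 0.0
for _ in range(100_000):
    trait = random.choice(["armour", "intelligence"])
    step = random.uniform(-0.01, 0.01)
    a = armour + step if trait == "armour" else armour
    i = intelligence + step if trait == "intelligence" else intelligence
    if fitness(a, i) > fitness(armour, intelligence):   # immediate gain only
        armour, intelligence = a, i

# Armour climbs steadily; intelligence stays near zero, because no
# small step toward it ever paid for itself.
print(f"armour={armour:.2f}, intelligence={intelligence:.2f}")
```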

Artificial intelligence, on the other hand, feels much more "inevitable" to me. And it probably will be kind of "spontaneous" (as you put it) rather than "hard work". Most inventions and innovations didn't happen because people willed them into existence through hard work. They happened because, little by little, technology reached a point where suddenly those things became possible. Then we see a whole bunch of people inventing the same thing at around the same time, and often racing each other to the patent office. That's not to say inventors don't work hard, or that improvements in technology aren't the result of hard work, but it's not really about that. In the 1930s I doubt many people wanted a camera built into their telephone. Even in the latter part of the 20th century, when the technology was certainly available to send video along with voice messages, it still wasn't really a "thing". But at a certain point in technology, "how?" just becomes "why not?". With today's technology - broadband, Skype, cellular phones, miniature digital cameras etc. - why the hell wouldn't you put a camera on a phone? If I could wish away all the camera phones in the world, and make it so that nobody had ever thought of camera phones before now... I guarantee they would still be back in production by the end of the year. It's just such an easy innovation now that it can't not happen. That is what (to me) makes AI so inevitable.




 
   
Made in us
Longtime Dakkanaut





AncientSkarbrand wrote:
Strange. Most "sacred" texts would have you believe that's exactly what we are. Haha. Paints gods in an interesting light. Very evil indeed. But that's an entirely different topic.

Regarding AI, do you believe that target will actually be reached? Do you think we can really pass beyond telling a computer what to do and it understanding our directives, to actual comprehension and "self-thought" (that is, not needing to be told to think something)? Perhaps I'm not up to date, but from what I have read there has been somewhat of an inventor's block in attempting to pass from processing instructions to comprehending concepts such as "animals do not like pain", with all of the emotional or conceptual meanings of the words in that sentence. Passing from math to feeling, from what I have read, is proving nigh-impossible with our current understanding.

I understand fully the exponential increase of the advance of science. However, I don't have full confidence that humans can predict every possible roadblock that could occur. Not to mention that what he means by "human cognition" could be a very satisfying and fulfilling mathematical skeleton of the myriad feelings and mental creations we have.
Is this AI he plans to have built by 2020 going to be able to dream about exploring the universe with the sense of wonder any human can? Will it love its creators as loyally as a human child? Will it retain the ability to make an illogical decision? Will it have, preserve, and contemplate the "sense of self" with any resemblance to a human? Will it be able to create original, meaningful art? Will it have the ability to deny itself obsessive activities? Will it rest? Will it have an imagination, and create worlds in its mind that don't exist and pretend it inhabits them? Will it enjoy a game? Can it be comforted?

These things are why I feel that, although AI can certainly physically surpass the processing power of the human mind, it won't have "human cognition" and perhaps never could... Isn't it possible that, because our minds are a direct result of natural selection and the pressures we've received, they cannot be perfectly emulated by inorganic matter? Isn't it possible a machine will always be purely mathematical and cold? I am sure there are ways to make it seem like a machine has feelings and emotional responses, but does that mean it actually does, or is it just programmed that way?


The human brain is simply a machine that contains a conscious mind (or, to put it non-dualistically: the human brain IS the conscious mind).

I am friends with Ray Kurzweil (Google's Director of Engineering, and Technology Director), and Google says that by 2020 they will have something "Human Equivalent" (Turing-indistinguishable from a human). I am also a regular acquaintance of Jeff Hawkins, the owner of Numenta (I think I mentioned this), who is working on his own AI project. As well as knowing Ben Goertzel (who was the brains behind the DoD's "Total Information Awareness" project). Ben is now working on his own form of AI Cognition.

But, if you talk to people like Stephen Wolfram, he maintains that "human equivalence" is just a toy project along the way to something inhuman, and "better" (he will not precisely describe what he means by "better," and the generalities would take me a few thousand words to explain - just look at Mathematica, and then imagine a "brain" capable of the kind of deep, multi-dimensional analysis Mathematica can do with a few commands, applied to everything in existence - the consequences are deeply frightening in the kind of predictive abilities it would have).

But, as Ray and Jeff point out, the Human Mind does one thing, and one thing only: Pattern Recognition. And this is what Stephen Wolfram is aiming at as well, but using VERY DIFFERENT methods than Google, or Numenta, or IBM (Dharmendra Modha's SyNAPSE chips are basically the foundation of ALL of these groups' AI projects).

But what happens after the pattern recognition is good enough to begin to extract meaning from those patterns (which Mathematica can already do, and which Google's AI does in a different, somewhat less data-rich fashion) and generalize it to new areas is a bit muddier.
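At the very bottom of the stack, "pattern recognition" can be as small as a single perceptron: one weighted sum and a threshold, loosely analogous to a neuron summing its inputs. A toy sketch (my own illustration, not code from any of the projects named above):

```python
# The perceptron is about the smallest possible pattern recognizer:
# it learns one weight per input and fires when the weighted sum
# crosses a threshold. Modern systems stack millions of these.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out          # push weights toward the answer
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# Learn the OR pattern from four labelled examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([(x, 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) for x, _ in data])
```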

The work of Cynthia Breazeal at MIT's Personal Robotics Group shows that, when embodied, an AI TENDS to develop human-like behaviors and socialized attitudes that emerge from the interaction of a few basic rules.

So, we will likely begin to see very human-appearing AIs within the next decade (the mid-'20s), when the architecture to run these "minds" shrinks below needing several acres of server farms and mainframes.

But their ability to perform superhuman feats will appear LONG BEFORE any resemblance to human cognition arises.

My own studies are aimed at making sure that human cognition can keep up. I am hoping to do graduate work with Ted Berger at USC (who has developed working Hippocampal Memory Prosthetics, giving the capability to "download" memories or learn new skills incredibly fast), on what he considers to be the holy grail of cybernetics: the biological-to-silicon interface that does not cause sclerosis at the interface (my own studies are in the connection of the Enteric Nervous System with the Amygdala and Hypothalamus - this is the connection between your brain and your "gut" when you "feel" something - and Intuition is strongly implicated in this neural network).

Add to all of this the Connectome Project, headquartered at UCLA, which will map out all of the connections in the human brain/mind and their causal behavior; we will then begin to be able to locate in animals the structures that correlate to our own brains/minds, and thus begin to guess what they are thinking/feeling - which is the first step to knowing what we need to add to give them more advanced cognition.

I know that I would like to have better communication with my cats, so that they understand the words "No," "yes," and "Stop that! Don't put your paw in my mouth!"

MB


Automatically Appended Next Post:
AncientSkarbrand wrote:
We know that it can be done with organic material given life through chemistry and time. Scientists are attempting it with inorganic matter, electricity and circuits. You believe the two can function the same, even though the types of matter and chemistry involved are vastly different? If so, why? I would like to know your reasons for thinking this.


Simple.

Because we can completely replicate the function of a Neuron in silicon.

The brain is just a huge collection of neurons.

So, all we need is a huge collection of silicate neurons, which run on electricity (BTW, the brain also runs on electricity - that is what the Action Potentials of a firing Neuron are, or the Feedback from an Ephaptic Coupling, or back-propagation along an axon).

So, we have both organic matter in the brain/mind running off electricity, and the computer running off electricity.

And, we have shown that we can replace the myelin in a neuron with silicon, producing a silicate neuron.

Thus, (QED) we have shown that we can build a brain out of silicon that runs on electricity.
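The minimal model behind that claim is the leaky integrate-and-fire neuron, which is what neuromorphic silicon implements directly. A sketch (textbook-style illustrative parameters, not those of any particular chip):

```python
# A leaky integrate-and-fire neuron: the membrane voltage leaks toward
# rest, is pushed up by input current, and fires (then resets) when it
# crosses threshold -- the action potential described above.

def simulate_lif(input_current, dt=1e-4, tau=0.02, r=1e7,
                 v_rest=-0.070, v_threshold=-0.054, v_reset=-0.070):
    """Integrate membrane voltage over time; return the spike times."""
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # Euler step: leak toward rest plus current-driven rise.
        v += (dt / tau) * (v_rest - v + r * i_in)
        if v >= v_threshold:          # threshold crossed: fire and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# 200 ms of a constant 2 nA input yields a regular spike train.
print(simulate_lif([2e-9] * 2000))
```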

MB


Automatically Appended Next Post:
 AlexHolker wrote:
 Psienesis wrote:
True inorganic AI is not only possible, it's inevitable.

No, it is not inevitable. You can say that organic intelligence was inevitable because carbon-based molecules self-organise in a fashion that can eventually produce life, and from there, sapient life. If inorganic AI happens, it will be because we work really hard to make it happen, it won't just happen spontaneously.


It is inevitable because of the drives of our civilization.

We have various forces at work which demand that needs be met through labor.

The incentive to maximize profits from that labor drives us to replace much of it with automation.

That automation drives the development of AI toward Sentience.

And, now that the question has been asked - "Can a computer truly think?" - somebody is GOING to answer that question by building such a computer.

Given that event is only a few years away at this point (we now know WHAT needs to be done to do so, even if we do not yet have all of those necessities met, or the capabilities to meet them at this time), there is not a lot that can stop it from happening.

MB


Automatically Appended Next Post:
 Psienesis wrote:
 AlexHolker wrote:
 Asherian Command wrote:
 AlexHolker wrote:
 Asherian Command wrote:
It will be a task, but it is inevitable that humanity will try to create something. We are creators, so the chance to make life has been on people's minds for hundreds of years. AI is the next step in the human evolutionary chain; it is only a matter of time before we invent an AI.

If that was true, humanity could not be a sapient species.

How does that make any sense?

Because the ability to act with appropriate judgement - to contemplate a course of action and make a considered decision if or how to go ahead with it - is literally the definition of sapience. A creature that is incapable of that judgement is not a sapient creature.


Um, no. That's not the definition of sapience.

sapient (adjective): having or showing great wisdom or sound judgment.

Pop-culture references and paranoid (though humorous) delusions aside, there's nothing inherently wrong with producing an AI. In fact, in a world where BSG and The Terminator exist (and scientists are most certainly aware of such), it is likely that such scenarios will be considered and planned for in the development of AI.


See Nick Bostrom, Ray Kurzweil (who is now in charge of the world's largest AI project at Google), or the Technofascists at MIRI (the Machine Intelligence Research Institute - who have a lofty goal, but are utterly morally bankrupt).

Pretty much everyone who works in the fields associated with AI or Cybernetics has the Frankenstein Complex (the official name of the psychological phenomenon of fearing scientific progress, and creating life) in mind when pursuing their goals.

MB



 
   
Made in us
Legendary Master of the Chapter





SoCal

Ray Kurzweil says 2020? I think we're safe for another 30 years, then.


   
Made in us
Secret Force Behind the Rise of the Tau




USA

I'm pretty sure they've been saying "we'll have AI in 10 more years" for, like, 30 years...

So they gotta be right sooner or later.

   
Made in us
Gore-Soaked Lunatic Witchhunter




Seattle

And throughout those 30 years, we have been getting exponentially closer.

It is best to be a pessimist. You are usually right and, when you're wrong, you're pleasantly surprised. 
   
Made in us
Legendary Master of the Chapter





Chicago, Illinois

 BobtheInquisitor wrote:
Ray Kurzweil says 2020? I think we're safe for another 30 years, then.



I'd say probably twenty years; once the bionic project goes through, we might be seeing some semblance of AI, but very limited.

I mean, it is inevitable that the Singularity will happen and the world will change completely, but it will come as a complete surprise to everyone in the entire scientific community; no one will plan for it, things will fall into place, and it will happen. Either option A: humanity is dying out and needs something to carry on human knowledge, thus creating an AI in the hopes of saving all of mankind (by saving its knowledge); option B: we discover it by accident; or option C: we create something that then turns into an AI.

From whom are unforgiven we bring the mercy of war. 
   
Made in us
Ragin' Ork Dreadnought




Monarchy of TBD

I'm guessing B, with the corollary that it almost immediately escapes confinement onto the internet, either becoming a Skynet style uber intelligence, or replicating and diverging as it samples different materials. This should eventually result in vaguely defined cyberrealms, which begin to employ humanity in whatever goals the internet given sentience would have.

I suspect some rather epic orgies, Big Brain style knowledge seekers, and various wars waged to measure the budding computers' .... egos.

Klawz-Ramming is a subset of citrus fruit?
Gwar- "And everyone wants a bigger Spleen!"
Mercurial wrote:
I admire your aplomb and instate you as Baron of the Seas and Lord Marshall of Privateers.
Orkeosaurus wrote:Star Trek also said we'd have X-Wings by now. We all see how that prediction turned out.
Orkeosaurus, on homophobia, the nature of homosexuality, and the greatness of George Takei.
English doesn't borrow from other languages. It follows them down dark alleyways and mugs them for loose grammar.

 
   
Made in us
Legendary Master of the Chapter





Chicago, Illinois

 Gitzbitah wrote:
I'm guessing B, with the corollary that it almost immediately escapes confinement onto the internet, either becoming a Skynet style uber intelligence, or replicating and diverging as it samples different materials. This should eventually result in vaguely defined cyberrealms, which begin to employ humanity in whatever goals the internet given sentience would have.

I suspect some rather epic orgies, Big Brain style knowledge seekers, and various wars waged to measure the budding computers' .... egos.


What?

I am pretty sure a being that was created by humans wouldn't outright destroy its creators. I think adding emotion to it would be a top priority for a lot of scientists. I am pretty sure we would teach it and cause it to learn about humanity. I don't think any being that can think rationally and logically would see "kill all humans" as rational or logical. After all, we did create it; it would never want to do that, because what if there could be another one of its kind? What if there was another one of it? It probably wouldn't make another copy of itself.

I think the human race's fear of robots killing everyone is so unfounded it's hilarious.

From whom are unforgiven we bring the mercy of war. 
   
Made in us
Longtime Dakkanaut





 BobtheInquisitor wrote:
Ray Kurzweil says 2020? I think we're safe for another 30 years, then.



Kurzweil is not the only one.

The list is pretty long.

MB


Automatically Appended Next Post:
 LordofHats wrote:
I'm pretty sure they've been saying "we'll have AI in 10 more years" for, like, 30 years...

So they gotta be right sooner or later.


As I pointed out, there are substantial differences between the past claims and those of today.

In the 1960s/70s, when those claims first began, they had absolutely no idea what they needed to do.

Now, we not only know what we need to do, but we know EXACTLY what we need to do to build a computer that has human-like cognition. And we have a selection of methods by which to achieve that goal.

We even have a laundry list of things for each method, items that have been checked off since around 2005; based on the speed at which things are being checked off the list, we have a pretty accurate guess of around 2020 before such a goal is reached.
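That forecast is just linear extrapolation of the checklist's completion rate. A sketch of the arithmetic (the milestone counts below are invented for illustration; the post gives no actual numbers):

```python
# Fit the rate at which milestones get ticked off, then project
# when the list runs out. Data here is hypothetical.

milestones_total = 40
checked_by_year = {2005: 2, 2008: 8, 2011: 16, 2014: 26}  # invented

years = sorted(checked_by_year)
n = len(years)
mean_x = sum(years) / n
mean_y = sum(checked_by_year.values()) / n
# Least-squares slope: milestones completed per year.
slope = (sum((x - mean_x) * (checked_by_year[x] - mean_y) for x in years)
         / sum((x - mean_x) ** 2 for x in years))
remaining = milestones_total - checked_by_year[years[-1]]
finish = years[-1] + remaining / slope
print(f"~{slope:.1f} milestones/year, projected completion ~{finish:.0f}")
```

With these made-up numbers the projection lands around 2019, which is the shape of the claim above; the forecast is only as good as the assumption that the completion rate stays steady.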

MB



 
   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

 Gitzbitah wrote:
I'm guessing B, with the corollary that it almost immediately escapes confinement onto the internet, either becoming a Skynet style uber intelligence, or replicating and diverging as it samples different materials. This should eventually result in vaguely defined cyberrealms, which begin to employ humanity in whatever goals the internet given sentience would have.

I suspect some rather epic orgies, Big Brain style knowledge seekers, and various wars waged to measure the budding computers' .... egos.


My guess is that if that happens the servers that populate the internet would quickly get destroyed.

Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in us
Longtime Dakkanaut





 Asherian Command wrote:
 Gitzbitah wrote:
I'm guessing B, with the corollary that it almost immediately escapes confinement onto the internet, either becoming a Skynet style uber intelligence, or replicating and diverging as it samples different materials. This should eventually result in vaguely defined cyberrealms, which begin to employ humanity in whatever goals the internet given sentience would have.

I suspect some rather epic orgies, Big Brain style knowledge seekers, and various wars waged to measure the budding computers' .... egos.


What?

I am pretty sure a being that was created by humans wouldn't outright destroy its creators. I think adding emotion to it would be a top priority for a lot of scientists. I am pretty sure we would teach it and cause it to learn about humanity. I don't think any being that can think rationally and logically would see "kill all humans" as rational or logical. After all, we did create it; it would never want to do that, because what if there could be another one of its kind? What if there was another one of it? It probably wouldn't make another copy of itself.

I think the human race's fear of robots killing everyone is so unfounded it's hilarious.


The more intelligent people working on AI do not worry about the fully formed conscious AI, but rather the "Narrow-AI" that we now build.

And what they worry about is defined in a paper (one among many, in fact) by an AI researcher by the name of Steve Omohundro, titled: Basic AI Drives.

The fear is in one of these "stupider" AIs getting a bad set of instructions from a group of creators, which leads to an unintended consequence that becomes catastrophic for mankind.

An example of this would be HAL 9000 from 2001: A Space Odyssey, where HAL was given conflicting commands by two different people, who themselves were unaware of the commands given to it by others.

The conflicting commands cause the AI to do "strange things" (In HAL's case, killing people).

The type of AI in question is called a Rational Economic Agent. Most of the AIs we use to control trading programs, or manage inventories for corporations are of this variety.

And, the example of HAL is not the only way things could go wrong.

Imagine a Trading program at the NYSE.

It is given instructions to maximize returns on investments, and minimize losses. It is also programmed to "learn" from behavior, so that past successes can be learned from, or repeated to improve returns, and minimize losses.

Through an accident in reporting one day, it fails to report the losses. And a careless technician gives it a command that it needs to "update its program" to "learn" from whatever happened on this glorious day of no losses.

The Program interprets this to mean it should just not report losses, even when they occur.

So... a month goes by, and suddenly another computer discovers "HEY! I am missing $5 trillion!" - the global economy implodes, war breaks out over resources, chaos ensues....

All because a computer AI got a stupid command that did not have a proper error check.

Yet, the AI in question is really as stupid as a brick when dealing with anything other than high-speed trading.
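A sketch of that failure mode in miniature: a "rational economic agent" that learns one careless lesson and silently stops reporting losses. (Hypothetical code, invented purely to illustrate the scenario above.)

```python
# The agent "learns" from a day's profit-and-loss events. One reporting
# glitch plus one careless update instruction, and it concludes that
# suppressing losses is the behaviour to repeat. No error check exists.

class TradingAgent:
    def __init__(self):
        self.reported_pnl = 0.0
        self.suppress_losses = False   # the bad "lesson" it can learn

    def record(self, pnl):
        if self.suppress_losses and pnl < 0:
            return                     # silently drop the loss
        self.reported_pnl += pnl

    def learn_from(self, day_pnl_events):
        # "Learn from whatever happened on this glorious day of no
        # losses" -- with no validation that losses were merely
        # unreported rather than genuinely absent.
        if not any(pnl < 0 for pnl in day_pnl_events):
            self.suppress_losses = True

agent = TradingAgent()
agent.learn_from([])                   # glitch: the loss feed was empty
for pnl in [120.0, -500.0, 80.0]:
    agent.record(pnl)
print(agent.reported_pnl)              # reports 200.0; true book is -300.0
```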

MB
   
Made in us
Legendary Master of the Chapter





SoCal

 Psienesis wrote:
And throughout those 30 years, we have been getting exponentially closer.


One half to the one half to the one half power. Next year we'll be one half to the one half to the one half to the one half power! Exciting times!


It's a good thing we know exactly how much technical progression is necessary for true AI, as well as how long it will take to achieve that progress. It's only half as long as it will take to make a functional FTL drive.

   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

 BobtheInquisitor wrote:
 Psienesis wrote:
And throughout those 30 years, we have been getting exponentially closer.


One half to the one half to the one half power. Next year we'll be one half to the one half to the one half to the one half power! Exciting times!

It's a good thing we know exactly how much technical progression is necessary for true AI, as well as how long it will take to achieve that progress. It's only half as long as it will take to make a functional FTL drive.


2 people can make a new non-artificial intelligence given a comfy bed and 9 months of their time.

Seems like a hell of a long time to try and make what will essentially be a baby, but one with super powers and no conscience.

Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in us
Legendary Master of the Chapter





SoCal

But no poopy diaper, either.

   
Made in us
The Conquerer






Waiting for my shill money from Spiral Arm Studios

You might prefer the poopy diaper.

Self-proclaimed evil Cat-person. Dues Ex Felines

Cato Sicarius, after force feeding Captain Ventris a copy of the Codex Astartes for having the audacity to play Deathwatch, chokes to death on his own D-baggery after finding Calgar assembling his new Eldar army.

MURICA!!! IN SPESS!!! 
   
Made in us
Longtime Dakkanaut





 BobtheInquisitor wrote:
 Psienesis wrote:
And throughout those 30 years, we have been getting exponentially closer.


One half to the one half to the one half power. Next year we'll be one half to the one half to the one half to the one half power! Exciting times!


It's a good thing we know exactly how much technical progression is necessary for true AI, as well as how long it will take to achieve that progress. It's only half as long as it will take to make a functional FTL drive.


The "Exponential" Argument is tired, and only partially (negligibly) connected to the reality of the development of human-cognition in a "machine."

The "Exponential Progress" noted was primarily of the hardware, and YES, it was a crucial part of the advances that were needed to make the hardware small enough that you would not need most of the real-estate of one of the USA's larger states just to hold the computational hardware needed for the job.

But really, the progress that was vital came in the form of Computational Neuroscience, Neural Networks, and Induction (a friend of mine just won an award at this year's AGI Conference for his work in Solomonoff Induction), to say nothing of other forms of computational "reasoning" and pattern recognition (such as that developed by Stephen Wolfram and Wolfram Research).

Also adding to this was the invention of fMRI in the mid-1990s, which suddenly allowed scientists to see the brain in operation, rather than snapshots of the brain at rather lengthy intervals (a conventional MRI takes approximately 5 seconds to form a single image, while fMRI's temporal resolution is far finer).

Then you have the development of the PET scan and MEG along the same time-frame, giving even more detailed information about the operation of the brain, and the work on the Connectome, and things suddenly fell into place in the mid-00s, with experimental evidence showing WHAT EXACTLY we needed to build a "brain," and the means to test those assumptions.

People like Henry Markram, with the Blue Brain Project, managed to test many of those assumptions, giving a thumbs-up that we are on the right path, and then people like Dharmendra Modha developed the requisite dedicated architecture and hardware to realize many of the components. (We are still looking at how to work the Memristor into all of this, as it appears incredibly promising as a component for AI systems, combining memory with the resistor - so it "remembers" the past resistances it used, similar to a neuron remembering past action potential levels.)
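The memristor's "memory" falls out of a two-line model. In the classic HP linear ion-drift description, resistance depends on the charge that has already flowed through the device (round illustrative parameters below, not from any datasheet):

```python
# HP-style linear ion-drift memristor model. The resistance depends on
# the internal state w, which integrates the current that has flowed
# through the device -- so the device remembers its history.

R_ON, R_OFF = 100.0, 16_000.0    # fully-doped / undoped resistance (ohms)
D = 10e-9                        # device thickness (m)
MU = 1e-14                       # dopant mobility (m^2 / (V*s))

def simulate_memristor(voltage, dt=1e-3, w=0.1 * D):
    """Return the resistance trace as a voltage waveform is applied."""
    trace = []
    for v in voltage:
        x = w / D
        m = R_ON * x + R_OFF * (1.0 - x)   # current memristance
        i = v / m
        # Ion drift moves the doped/undoped boundary; clamp to the device.
        w = min(max(w + MU * (R_ON / D) * i * dt, 0.0), D)
        trace.append(m)
    return trace

# A one-second positive pulse drives the resistance down; with the
# voltage removed, the state (and hence the resistance) stays put.
trace = simulate_memristor([1.0] * 1000 + [0.0] * 1000)
print(f"{trace[0]:.0f} ohms -> {trace[-1]:.0f} ohms")
```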

But.... Just "Exponential Progress" alone isn't really much of a statement on things, other than that it does seem to be occurring, and not just in the hardware any more (and, it is looking to be more of a geometric exponentiation for some areas, where the exponent itself is growing geometrically).

But that, itself, isn't the reason why we will be seeing a human-equivalent AI within the next decade (more like about 5 years).

MB


Automatically Appended Next Post:
 Grey Templar wrote:
 BobtheInquisitor wrote:
 Psienesis wrote:
And throughout those 30 years, we have been getting exponentially closer.


One half to the one half to the one half power. Next year we'll be one half to the one half to the one half to the one half power! Exciting times!

It's a good thing we know exactly how much technical progression is necessary for true AI, as well as how long it will take to achieve that progress. It's only half as long as it will take to make a functional FTL drive.


2 people can make a new non-artificial intelligence given a comfy bed and 9 months of their time.

Seems like a hell of a long time to try and make what will essentially be a baby, but one with super powers and no conscience.


This is just a "Proof of concept" that a new intelligence can be created.

The trick about building one in a non-biological substrate is that we can take it apart, and then re-assemble it, without having to worry overly much about harming it.

And, once we get a non-biological substrate consciousness built, then recursion takes over.

That Computer Consciousness can then examine itself using tools that we would have trouble using, to look for ways to optimize itself.

This is where the "Exponential" Argument starts to take hold.
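The shape of that argument reduces to bare arithmetic (the numbers are, of course, invented): let each generation improve the next in proportion to its own capability, and the growth rate itself grows:

```python
# Recursive self-improvement as arithmetic. Capability c improves each
# generation in proportion to itself, so the growth rate grows too.
# (k = 0.5 is an invented, wildly generous figure.)

c = 1.0   # capability of the first machine consciousness
k = 0.5   # fraction of capability converted into improvement per pass
for generation in range(1, 11):
    c *= 1.0 + k * c        # it redesigns itself with its current skill
    print(generation, f"{c:.3g}")
# Climbs slowly at first (~1.5, ~2.6, ~6.1, ~25, ~3.2e2, ...), then
# explodes far beyond any fixed exponent -- until real-world limits
# (physics, cost, energy) intervene.
```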

As I have pointed out before, to obviate any possible dangers, we need to begin integrating these same technologies into our own biological existences: merging with the machine, as it were.

We are already headed in that direction (smartphones, Fitbit, GPS, etc.). We just need to find what every cybernetician on the planet considers to be the holy grail of cybernetics: the silicon-organic interface that does not cause sclerosis. (The syndrome used in the anime Ghost in the Shell: Stand Alone Complex wasn't made up: neurosclerosis, or what they called "cyber-brain sclerosis," is THE reason we do not yet have brain implants. It turns out that they are pretty easy to create and install, but they tend to destroy the brains of the rats and monkeys used to test them via neurosclerosis):

http://www.researchgate.net/profile/William_Reichert/publication/7567122_Response_of_brain_tissue_to_chronically_implanted_neural_electrodes/links/0f3175398b1d701a75000000.pdf

MB



 
   
Made in us
Legendary Master of the Chapter





SoCal

 Grey Templar wrote:
You might prefer the poopy diaper.


Yeah, that was not an argument for AI. It was kind of the opposite.

If there was a way for me to bet money on it, I'd go all in on whatever AI eventually emerges completely disappointing or horrifying the people pushing hard for AI. I don't expect the emergence of machine intelligence to turn life on Earth into paradise any more than I expect it to usher in our extinction.

   
Made in gb
Raging Rat Ogre





England, UK

The difference between humans and other intelligent species is... rather large. If humans had the capability to make another species sapient, which means human-level intelligence, why wouldn't we instead focus on making ourselves smarter?

Seems stupid and dangerous to make another species as intelligent as, or more intelligent than, our own.

Upcoming work for 2022:
* Calgar's Barmy Pandemic Special
* Battle Sisters story (untitled)
* T'au story: Full Metal Fury
* 20K: On Eagles' Wings
* 20K: Gods and Daemons
 
   
Made in au
Longtime Dakkanaut




Squatting with the squigs

Oh, BS. There are many species which show an intelligence that is close to ours; perhaps we lack a true measure of other species' intelligence, as we still cannot figure out how to communicate with them on any level. In regards to language - beyond basic symbol recognition or problem solving - you have to reach a pretty high level of language before you can really gauge intelligence.
Sure, they don't have tools, but most of them naturally possess all the tools they need.

My new blog: http://kardoorkapers.blogspot.com.au/

Manchu - "But so what? The Bible also says the flood destroyed the world. You only need an allegorical boat to tackle an allegorical flood."

Shespits "Anything i see with YOLO has half naked eleventeen year olds Girls. And of course booze and drugs and more half naked elventeen yearolds Girls. O how i wish to YOLO again!"

Rubiksnoob "Next you'll say driving a stick with a Scandinavian supermodel on your lap while ripping a bong impairs your driving. And you know what, I'M NOT GOING TO STOP, YOU FILTHY COMMUNIST" 
   
Made in gb
Raging Rat Ogre





England, UK

You know what? You're right. Dolphins and elephants can recognise themselves. Your neighbour's dog can stand in your garden looking in your window and barking at you until you come out to play with him. (Yes, this actually happened to me.)

It's clearly a short step from an animal licking a stick and using it to snare termites to building the transistor, nuclear reactors and space shuttles.

Upcoming work for 2022:
* Calgar's Barmy Pandemic Special
* Battle Sisters story (untitled)
* T'au story: Full Metal Fury
* 20K: On Eagles' Wings
* 20K: Gods and Daemons
 
   
Made in us
Gore-Soaked Lunatic Witchhunter




Seattle

 BobtheInquisitor wrote:
 Grey Templar wrote:
You might prefer the poopy diaper.


Yeah, that was not an argument for AI. It was kind of the opposite.

If there was a way for me to bet money on it, I'd go all in on whatever AI eventually emerges completely disappointing or horrifying the people pushing hard for AI. I don't expect the emergence of machine intelligence to turn life on Earth into paradise any more than I expect it to usher in our extinction.


I, for one, am hoping for a Rise of the Machines scenario, and will welcome our new robotic overlords.

In the interests of full disclosure I am, frequently, accused of being a misanthrope.

It is best to be a pessimist. You are usually right and, when you're wrong, you're pleasantly surprised. 
   
Made in gb
Longtime Dakkanaut





 NoPoet wrote:
It's clearly a short step from an animal licking a stick and using it to snare termites to building the transistor, nuclear reactors and space shuttles.
In fairness, if you dropped the average man on a desert island and left him there for 10 years, you'd probably return to find him licking termites off a stick too. Few people would have all the necessary skills and knowledge in fields like mining, metallurgy, tool-making, chemistry, mathematics etc. to be able to build much of anything from scratch. It took about 100,000 years of trial and error to make those things possible. No one thought it all the way through; it was lots of people each solving a small piece of the puzzle over generations. The amount of intelligence required to hit upon an innovative way to catch termites, and to hit upon something interesting you can do with magnets, is probably about the same.
   
 