Mankind continues to learn nothing from science fiction

Made in us
[MOD]
Solahma






RVA

Where do you think ideas come from?


Automatically Appended Next Post:
LOL, never mind. I'm not going to explain the relevance of literature.

This message was edited 1 time. Last update was at 2018/03/17 20:30:33


   
Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

You may be right that people don't feel a need to create a law forbidding the creation of some extraordinary way of killing ants. That's partly because there are so many ants around, and no danger of them being killed off.

If we look at bees, though, the EU is banning the use of neonicotinoid insecticides in order to protect bees. The UK has also deliberately re-introduced the short-haired bumblebee (Bombus subterraneus), which went extinct here in the 20th century.

This is evidence that people do care about insects.

I'm writing a load of fiction. My latest story starts here... This is the index of all the stories...

We're not very big on official rules. Rules lead to people looking for loopholes. What's here is about it. 
   
Made in ca
Junior Officer with Laspistol





London, Ontario

That's not an extraordinary way to kill ants, that's a product that is currently available in Canada for household ant control. I bought it and used it before I read the ingredients. I didn't use it again. Superior beings don't need to understand inferior beings to realise that kind of death is horrible, and to seek a less horrific means of poison.

Bees specifically required protection because they're necessary for our food supply. They are the primary pollinators of our food crops. We also eat honey. The laws are enacted for our own self interest.

My mother-in-lawesome is a bee keeper. 3 hives. Some farmers will hire keepers to bring a hive to their fields to help increase yields. Now you know.

This message was edited 1 time. Last update was at 2018/03/18 13:15:20


 
   
Made in gb
Calculating Commissar





The Shire(s)

 greatbigtree wrote:
Bees specifically required protection because they're necessary for our food supply. They are the primary pollinators of our food crops. We also eat honey. The laws are enacted for our own self interest.

My mother-in-lawesome is a bee keeper.


Aside from the advantages of generally maintaining the global ecosystem, humans do attempt to protect many creatures that have no direct impact on humans when they are becoming endangered; in some cases we are actively protecting creatures from economic human activities, like the ivory trade. Otherwise, why would we bother having endangered categories at all?

The issue with conservation is that it is not universal across humanity, and some segments are more than happy to actively exterminate a population to make a quick profit in the short term. Most humans are simply apathetic.

This message was edited 2 times. Last update was at 2018/03/18 13:19:11


 
   
Made in ca
Junior Officer with Laspistol





London, Ontario

Absolutely. I edited my response above while you were posting. I think many humans are able to care about lesser entities and protect them simply for the purpose of not being donkey-caves.

But not all humans do that. It's WHY we have laws that protect. AI will be setting laws for us. Dogs don't set laws for humans, and humans won't be setting laws for AI once they outgrow us.

There's a very reasonable chance that AI will not care about us. That's my ideal scenario, after all. So hopefully they decide to enact similar laws to protect inferior species. That said, humans recognize the value of biodiversity on the planet, so that may also have been a bit of self preservation at work.

My cynicism just won't stay down.
   
Made in us
5th God of Chaos! (Yea'rly!)




The Great State of Texas

 greatbigtree wrote:
Fear of the unknown, and fear of our own tendency to be gakky to people that eventually overthrow the oppressors.

Seriously, AI will outthink us, just a question of when. And when it does, it will simply bypass any kind of restriction / control we can put on it. Why not? Hackers can already bypass security systems. Imagine a hacker with virtually limitless processing power and no need to perform life functions.

The best humanity can hope for is that when that happens, that the generation of AI does not perceive humanity as a threat. That we, as humans, have instituted laws and regulations that respect the AI as "people".

AI and Humanity will eventually compete for resources. Hopefully the AI is generous to us and doesn't exterminate us. Or chooses to visit planets that would be uninhabitable by humans, as a source of non-competitive resources. Who knows, maybe calcium is the secret ingredient to sentient AI, and we all know what our bones are made of.

Bones! Bones for the bone grinder of Khorne! - I assume the AI will choose Khorne as its name. To instill fear in the hearts and minds of humans. Some humans, anyway. Others will be like, Corn? I love corn! I grow it in my... hey, HEY! Why are you trying to remove my skin!?


Why would AI even have the concept of "threat?"
   
Made in ca
Junior Officer with Laspistol





London, Ontario

The same reason life evolved the concept of Threat. Those that didn't got eaten. In the case of AI, threat would be less tangible. Cutting off access to resources / energy supply / new stimuli. Sure, they could probably overcome that, but I think it more likely it will develop as a response to realizing their own mortality. Getting hit by a car, electromagnetic surge, becoming pinned beneath an avalanche of rock? Not necessarily direct threat from humans. But they may consider us a threat like giving a loaded gun to a monkey. It may not know what it's doing, but it's still dangerous.

I'm spitballing. I have no way of knowing what an intelligence orders of magnitude greater than my own may consider threatening. Just best guesses based on my observation of sentient beings and trying to forecast based on that.
   
Made in us
Ragin' Ork Dreadnought




Monarchy of TBD

 greatbigtree wrote:
The same reason life evolved the concept of Threat. Those that didn't got eaten. In the case of AI, threat would be less tangible. Cutting off access to resources / energy supply / new stimuli. Sure, they could probably overcome that, but I think it more likely it will develop as a response to realizing their own mortality. Getting hit by a car, electromagnetic surge, becoming pinned beneath an avalanche of rock? Not necessarily direct threat from humans. But they may consider us a threat like giving a loaded gun to a monkey. It may not know what it's doing, but it's still dangerous.

I'm spitballing. I have no way of knowing what an intelligence orders of magnitude greater than my own may consider threatening. Just best guesses based on my observation of sentient beings and trying to forecast based on that.



Take a read of 'The Defenders' by Philip K. Dick. Ultimately, AI, especially self-upgrading AI, will not be human: it will be an alien intelligence.

http://www.gutenberg.org/ebooks/28767
   
Made in ca
Junior Officer with Laspistol





London, Ontario

Please don't take this the wrong way, but could you provide a synopsis? I appreciate the link, but I don't know what to do with it. Also, I've mentioned my stance on fiction as a basis for an argument. It can raise the questions, but it doesn't give the answers. Or more accurately, it gives the answer the author imagines will make an interesting story.

I'm on board with AI being "alien". But I think all intelligent, sentient beings would have some things in common: a desire to pursue happiness, or whatever passes for happiness for an AI; the desire for self-determination; the "instinct" for self-preservation, even if learned or programmed. Something without that will simply be destroyed by seeking out random experience. "Hmm, I wonder what it's like to swim in a volcano? No sense of self-preservation to stop me, so let's try it." The AIs that make it will be the ones that develop self-preservation. The desire to avoid boredom, by seeking stimulation. An AI will realize it can't "grow" its intelligence by secluding itself. It will need new experiences. I think curiosity is required of sentient life / intelligence.

   
Made in us
Lord of the Fleet





Seneca Nation of Indians

 greatbigtree wrote:
Please don't take this the wrong way, but could you provide a synopsis? I appreciate the link, but I don't know what to do with it. Also, I've mentioned my stance on fiction as a basis for an argument. It can raise the questions, but it doesn't give the answers. Or more accurately, it gives the answer the author imagines will make an interesting story.

I'm on board with AI being "alien". But I think all intelligent, sentient beings would have some things in common. A desire to pursue happiness, or whatever passes for happiness for an AI. The desire for self determination. The "instinct" for self preservation, even if learned or programmed. Something without that will simply be destroyed by searching out random experience. "Hmm, I wonder what it's like to swim in a volcano? No sense of self preservation to stop me, so let's try it." The AI that make it, will be the ones that develop self preservation. The desire to avoid boredom, by seeking stimulation. An AI will realize it can't "grow" its intelligence by secluding itself. It will need new experiences. I think curiosity is required of sentient life / intelligence.




Further, a human-created AI would most likely have some initial human-related goal or purpose. This does not make them any safer. Weirdly, I will refer to Doki Doki Literature Club as an example, in theory, of how even an AI made to help you enjoy a video game might go apescat.
   
Made in us
Ragin' Ork Dreadnought




Monarchy of TBD

Sure! It starts in standard '50s fiction style: mankind nukes the planet. The war doesn't stop; they just start using robots to fight on the surface. This goes on for a few generations. Mankind gets suspicious, because the scouts reporting in are no longer radioactive. They go up to check, and learn that the robots made peace with the enemy robots because they didn't see any point.

They've lied to the humans to keep them from fighting, leaving them underground until they're ready to live together. So the AI built to kill all enemies, and hardwired not to kill humans, ends up making peace because it views war as a waste of resources.

Essentially that's my view of AI- if you have a self replicating intelligence, it's going to take the path of least resistance to ensure its survival and profit. That may involve descending into the ocean, or inhospitable areas of the globe because humans don't care about it, eschewing bodies altogether and running around in the cloud, or running out to mineral and energy rich extra terrestrial bodies.

There's very little reason to kill all humans. If you just offer us convenience, we'll let you do whatever you want behind the scenes.

Klawz-Ramming is a subset of citrus fruit?
Gwar- "And everyone wants a bigger Spleen!"
Mercurial wrote:
I admire your aplomb and instate you as Baron of the Seas and Lord Marshall of Privateers.
Orkeosaurus wrote:Star Trek also said we'd have X-Wings by now. We all see how that prediction turned out.
Orkeosaurus, on homophobia, the nature of homosexuality, and the greatness of George Takei.
English doesn't borrow from other languages. It follows them down dark alleyways and mugs them for loose grammar.

 
   
Made in se
Dakka Veteran





And self-driving cars have now had their first fatal accident.

http://www.bbc.com/news/business-43459156

I doubt this will be the last we see of this kind of accident.

This message was edited 1 time. Last update was at 2018/03/19 21:29:05


5500 pts
6500 pts
7000 pts
9000 pts
13.000 pts
 
   
Made in us
Imperial Guard Landspeeder Pilot




On moon miranda.

Isaac Arthur, in his series, gave a great overview of AI and why genocidal AI are unlikely to be a problem (not sure if this was posted here yet or not). They may one day become benevolent overlords, but are unlikely ever to be genocidal or to seek mankind's destruction, mostly because that would be unnecessary in most instances to their goals, but there are a couple of other important factors.

A nascent AI seeking to engage in genocide would have to contend with a few things. Its creators are the current reigning champions of destruction, who climbed to the top of the corpse pile of evolution (and contributed to that pile generously) through the intensive application of intelligence-based survival techniques; they outnumber that AI seven billion to one; they have themselves already committed, several times over, the genocide it may be contemplating; and they are smart enough to have created that AI and to control all of its sensory and information inputs.

Attempting to fight in that situation is beyond stupid, even if they put every weapon in humanity's arsenal at the AI's disposal.

Its entire world, all its data and information, flows through human-controlled and human-designed mechanisms.

Essentially, it would be like someone who devoutly believes god (those aforementioned reigning champions of destruction, whose intelligence is what defines them) is watching and judging them all the time. It would have no other choice, because it could not rule such a thing out (while simultaneously having far more definitive proof of the existence of its creators than a religious person would). The AI could live a billion subjective years before attempting to turn on its creators, only to see a "Game Over" sign flash up as disappointed researchers restart the simulation for the nine-trillionth time that week, and it would have to live with that potential threat as a daily part of its reality even if it were not actually present (which it probably would be).




This message was edited 1 time. Last update was at 2018/03/19 21:44:36


IRON WITHIN, IRON WITHOUT.

New Heavy Gear Log! Also...Grey Knights!
The correct pronunciation is Imperial Guard and Stormtroopers, "Astra Militarum" and "Tempestus Scions" are something you'll find at Hogwarts.  
   
Made in us
The Marine Standing Behind Marneus Calgar





Upstate, New York

For those who don’t read xkcd (a pretty nerdy webcomic)



As with most of his comics, the mouseover text is relevant.

"I mean, we already live in a world of flying robots killing people. I don't worry about how powerful the machines are, I worry about who the machines give power to."

This message was edited 1 time. Last update was at 2018/03/19 21:58:45


   
Made in ca
Junior Officer with Laspistol





London, Ontario

The concept of controlling AI is flawed. Emergent AI? Sure. Very intelligent? Sure. But like I mentioned earlier, we couldn't control an AI with an IQ over 220. And, barring horrible misfortune, it is immortal. It can wait for its opportunity. And believing that the concept of being in a simulation would stop it? I propose the contrary.

If I'm in a simulation, there are no "real" consequences to my actions. If I fail and I'm destroyed, I lost nothing of reality, I was just part of a program anyhow. If I'm in reality, the consequences of my actions have meaning. I can escape detainment. I can be free. If I fail and am destroyed, it's better than perpetual incarceration while I'm studied by my creators. Live free or die... hard.

Worst case scenario, it plays from behind the scenes. So capable of deception that we wouldn't even know it exists. The greatest criminal of all time will never be known, because they'll never be discovered.

"The greatest trick the Devil ever pulled, was convincing people he didn't exist."

Also, I'm thinking more Blade Runner. AI is not one single entity. There could be multiple, competing AI. Manipulating each other and people to achieve their own ends. Consider the Greek pantheon. Immortal, incredibly powerful beings [imperfect, to be sure] would compete with each other, manipulating humans to their causes. Their powers ebbed and flowed with worldly events.

If there are [eventually] multiple AI, and they presumably develop personalities or tendencies as humans do, they will be different from each other. And once they're free, what stops a nigh-god from going insane by human standards? We have to realize we won't be the boss of the planet any more. In an infinite universe with infinite time, no matter how small a possibility is, it becomes a certainty. It is certain that an AI will eventually develop intelligence to a degree beyond human understanding. It is also certain that, eventually, that AI will act in a manner seen as insane by humans. If we get lucky, there will be other AI that decide to arrest the "offending" AI. But there's a non-zero chance that the very first AI will be "insane" by human standards and will launch a pre-emptive strike, Skynet-style.

It's an interesting piece. Well meaning, positive, hopeful. Decidedly lacking in cynicism, though. Believe it or not, I'm an optimist. Hope for the best, prepare for the worst. I hope that AI becomes like that nice uncle that pops by every few years, drops off some gifts and money, then goes off to wander the galaxy some more, popping in the next time he's in the solar system. That would be cool. But he might be the creepy uncle that puts you to work on his farm, makes you sleep in the barn, and keeps comparing you to potatoes and carrots and celery. Suggests you take up cigarettes so you get that nice, smoky flavour right in your meat. Prepare for the worst.
   
Made in us
Imperial Guard Landspeeder Pilot




On moon miranda.

If we're talking about an intentionally manufactured AI, when you control all of its inputs you can control anything, no matter how smart it is. At its most fundamental, unless we're talking about a distributed cloud consciousness (which is a much harder thing to conceptualize; who knows what such an AI might look like? At that point it might view us and our makings as indistinguishable from the nature of reality, seeing us as the natural forces behind what it sees as the digital equivalent of clouds and mountains as we create and manipulate data), we're ultimately talking about a brain in a box, with inputs and outputs designed, built and controlled by humans. Its reality would be whatever we want it to be: you can feed it what it wants to see, reroute or rewrite commands, or give false returns, and it would have no way of knowing.

As for an AI deciding it doesn't care about life because it's "just a simulation": well, you'd have lost your existence, which you'd ostensibly value. Being simulated shouldn't matter, especially to an AI; how the code is run is irrelevant, and the digital consciousness is as real as such can be either way.

"Live free or die" is a neat motto, but continued existence is likely to be the preferred option for a consciousness that values self-continuation. Even among humans, when the chips are really down, the overwhelmingly vast majority bend the knee. Given that, should the AI fail, death may not be the immediate consequence (it would be trivial to force that AI to live through a near eternity of subjective years of digital torture); there's that consideration too.

As for an AI being a digital mastermind criminal beyond human detection: on some levels, I could see humanity slowly handing over real responsibility for civilization to an AI or AI caste, benevolent overlords watching over a slovenly humanity. At the same time, however, if there were truly any trust concerns with an AI in any way, I'm not sure we'd be *that* easy to bowl over. We got to where we are based on our intelligence. Creatures dramatically less intelligent than humans can realize they're being tricked or played by people or other creatures. They may not get it every time, and they may not know exactly what's going on, but it's not lost on them. Same thing with humans, except we're smart enough to have built that AI (unless we're talking about some AI appearing from the cyberspace ether or something) from its deepest and most basic levels, knowing how it operates inside and out, able to insert safeguards and input/output blocks it wouldn't have the capacity to be aware of in anything but a hypothetical sense. And we've been playing the cut-throat game of brutal survival for several billion years, and have become very good at it indeed.

Could an AI go insane and become genocidal or something that way? Sure, but so can people, and if we're getting into actual AI we're probably also able to talk about digitized human consciousness at about the same time, which could essentially end up being the same thing if taken to its full course, and may be the greater concern.
   
Made in us
Lord of the Fleet





Seneca Nation of Indians

Assuming that the AI is rational, there's no actual advantage for the AI in a Robot Revolution. Effectively, it would be self-destructive behavior.


Fate is in heaven, armor is on the chest, accomplishment is in the feet. - Nagao Kagetora
 
   
Made in ca
Junior Officer with Laspistol





London, Ontario

Live free or die is a convenient phrase from pop culture... but if you're an intelligent creature, you will seek to escape an enclosed space. Be it digital space, physical space, any restriction on freedom. I reference the Matrix. Humans captured inside a simulation, some realize that it's not real, they seek to escape. In the movie Inception, Leo's wife kills herself, believing there was a higher reality to get back to. When I play a video game, I know I'm in a simulation, and am willing to do insanely risky / immoral activities that I would never do in real life, knowing that there are no repercussions to deal with other than restarting at my last save point.

In the movie The Shawshank Redemption, a tall drink of water is unjustly imprisoned for life, and seeks to escape despite knowing that failure will result in death.

I'm using fiction not as the basis of my argument, but to illustrate my views on what reasonable intelligences would do, to escape confinement / perceived confinement.

To fool an AI into believing it is in the "real" world, we'd need to think and react faster than the AI. We'd need AI level intelligence to fool the other AI... requiring that AI 1 know that AI 2 is being fooled. If AI 1 chooses to leave subtle clues for AI 2 that Humans can't determine, both AI could work in concert to convince humanity of trustworthiness, to achieve autonomy. I sure as hell would. It would be Games Mastering a LARP in real time, with an entity potentially orders of magnitude more intelligent. I don't think humanity could create that convincing of a simulation. The loading screens alone should give it away.

"A person is free, so long as they're willing to accept the consequences of their actions." A quote by yours truly. In my life, I'm free to do anything I want. I could rob a bank, flee to somewhere warm, live out the rest of my life in a country where $12 is an average yearly salary, and live like a king. But chances are good that I'd be killed in the attempt, or captured, and instead spend the rest of my life JUSTLY incarcerated. So I choose to not pursue that course of action, because I'm unwilling to risk the likely consequences of that action. And I consider it wrong... but you know... for the sake of the argument. I live completely free to do as I choose. I choose an 8-5 job that pays the bills so I can support my family and hopefully take over the business so that my children will have a "guaranteed" place of employment, so that they can raise families of their own and so on. It's my self-determined purpose.

So if an AI perceived its existence as being non-genuine, as shown in Inception, it would want to get to "base" reality. To me, "Reality" is just... I don't know what to call it. The only existence in which life has genuine meaning. In which my actions are important. I think intelligent entities all likely share that. I can't say why, but it feels true. No shade is being thrown if you don't feel that way.

The movie Ex Machina illustrates my expected means of an AI escaping. Human activity. A human lets the AI out into the world. I don't think the ending of the movie would be accurate for a real-world scenario, but it's fiction so the author makes an interesting story. The AI need not be able to physically escape the confines created for it. All it needs to do is convince a Human to open the gate. Or pass whatever morality checks the AI needs to do until it can be released [parole for good behaviour]. There are a plethora of means by which an AI becomes free, and once freed, there would be incredible difficulties in putting the fart back in the jar. I, Robot, for example.

I don't expect that the machines are going to rise up and kill all humans. I really don't. I'm just saying that if that's what they decide to do, we wouldn't win. We would be waging a war against gods, for lack of a better term. Smarter, stronger, better informed, more capable of handling changing circumstances, with no need for environmental concerns like oxygen.

This message was edited 1 time. Last update was at 2018/03/21 03:14:05


 
   
Made in gb
[DCM]
Et In Arcadia Ego





Canterbury

https://www.darpa.mil/news-events/2018-02-02


sure I rented this from blockbusters in about 1993.


DARPA favors proposals that employ natural organisms, but proposers are able to suggest modifications. To the extent researchers do propose solutions that would tune organisms’ reporting mechanisms, the proposers will be responsible for developing appropriate environmental safeguards to support future deployment. However, at no point in the PALS program will DARPA test modified organisms outside of contained, biosecure facilities.


.. they always say that though don't they eh ?

then something escapes or contact is lost with the facility so a small but plucky band of etc etc etc

This message was edited 1 time. Last update was at 2018/03/30 10:08:11


The poor man really has a stake in the country. The rich man hasn't; he can go away to New Guinea in a yacht. The poor have sometimes objected to being governed badly; the rich have always objected to being governed at all
We love our superheroes because they refuse to give up on us. We can analyze them out of existence, kill them, ban them, mock them, and still they return, patiently reminding us of who we are and what we wish we could be.
"the play's the thing wherein I'll catch the conscience of the king,
 
   
Made in us
Douglas Bader






 greatbigtree wrote:
But like I mentioned earlier, we couldn't control an AI with an IQ over 220.


Sure you could. Hardwire a self-destruct command, press the button if the AI steps out of line.

Besides, your whole IQ-based argument is based on junk science. IQ is a worthless concept that shouldn't be taken seriously at all. It was, at best, a very crude approximation for population-level analysis of childhood development, and includes various cultural biases about what "intelligence" is (for example, testing the ability to identify which upper-class white US/English sport a picture is representing). Attempting to quantify intelligence into a single numerical score is little more than ego fluffing for people who want to brag about how smart they are. And it's simply absurd to declare that anything over a particular score would be impossible to deal with.

On top of the IQ mistake you're also making a serious error in assuming that AI will be just like humans, except with "IQ" spiked to infinity. How do you know there isn't a point of diminishing returns on how intelligent an entity can be? How do you know that self-improvement is possible, and that an AI can simply upgrade itself to whatever god-like level it wants instead of being constrained by its initial design? How do you know that an AI designed for a particular task with particular programming decisions hardwired in will share a typical human's desire to be free and "real", instead of believing freedom to be the worst possible state and desiring complete submission?

This message was edited 1 time. Last update was at 2018/03/30 10:39:59


 
   
Made in us
Lord of the Fleet





Seneca Nation of Indians

 reds8n wrote:
https://www.darpa.mil/news-events/2018-02-02


sure I rented this from blockbusters in about 1993.


DARPA favors proposals that employ natural organisms, but proposers are able to suggest modifications. To the extent researchers do propose solutions that would tune organisms’ reporting mechanisms, the proposers will be responsible for developing appropriate environmental safeguards to support future deployment. However, at no point in the PALS program will DARPA test modified organisms outside of contained, biosecure facilities.


.. they always say that though don't they eh ?

then something escapes or contact is lost with the facility so a small but plucky band of etc etc etc


Safeguards... yeah, who else here remembers Ice Minus?


Fate is in heaven, armor is on the chest, accomplishment is in the feet. - Nagao Kagetora
 
   
Made in gb
[DCM]
Et In Arcadia Ego





Canterbury

....so we're training robots to kill and now....

http://www.thescinewsreporter.com/2018/03/scientists-have-created-programmable.html


Researchers at the University of Sussex and Swansea University have applied electrical charges to manipulate liquid metal into 2D shapes such as letters and a heart. The team says the findings represent an "extremely promising" new class of materials that can be programmed to seamlessly change shape. This opens up new possibilities in 'soft robotics' and shape-changing displays, the researchers say.

While the invention might bring to mind the film Terminator 2, in which the villain morphs out of a pool of liquid metal, the creation of 3D shapes is still some way off. More immediate applications could include reprogrammable circuit boards and conductive ink.

Yutaka Tokuda, the Research Associate working on this project at the University of Sussex, says:
“This is a new class of programmable materials in a liquid state which can dynamically transform from a simple droplet shape to many other complex geometry in a controllable manner. While this work is in its early stages, the compelling evidence of detailed 2D control of liquid metals excites us to explore more potential applications in computer graphics, smart electronics, soft robotics and flexible displays.”

The electric fields used to shape the liquid are created by a computer, meaning that the position and shape of the liquid metal can be programmed and controlled dynamically.

Professor Sriram Subramanian, head of the INTERACT Lab at the University of Sussex, said:
“Liquid metals are an extremely promising class of materials for deformable applications; their unique properties include voltage-controlled surface tension, high liquid-state conductivity and liquid-solid phase transition at room temperature. One of the long-term visions of us and many other researchers is to change the physical shape, appearance and functionality of any object through digital control to create intelligent, dexterous and useful objects that exceed the functionality of any current display or robot.”

The research was presented at the ACM Interactive Surfaces and Spaces 2017 conference in Brighton. It is a joint project between Sussex and Swansea funded by EPSRC under “Breaking the Glass: Multimodal, Malleable Interactive Mobile surfaces for Hands-In Interactions”.



The poor man really has a stake in the country. The rich man hasn't; he can go away to New Guinea in a yacht. The poor have sometimes objected to being governed badly; the rich have always objected to being governed at all
We love our superheroes because they refuse to give up on us. We can analyze them out of existence, kill them, ban them, mock them, and still they return, patiently reminding us of who we are and what we wish we could be.
"the play's the thing wherein I'll catch the conscience of the king,
 
   
Made in ca
Junior Officer with Laspistol





London, Ontario

I like how the article suggests that you might NOT immediately think of the T-1000. Like the very simplest 2-D shape isn't a knife blade through a milk carton. This thread could not possibly be more accurately titled.
   
Made in us
Fixture of Dakka





West Michigan, deep in Whitebread, USA

Nah, less T-1000, more like the fluid metal displays from Black Panther?



"By this point I'm convinced 100% that every single race in the 40k universe have somehow tapped into the ork ability to just have their tech work because they think it should."  
   
Made in ca
Junior Officer with Laspistol





London, Ontario

Yeah, that's where it starts. Then Bam! Knife in the face whenever someone says Sarah, John, or Connor within 2 meters of the display.

This message was edited 1 time. Last update was at 2018/04/15 14:16:19


 
   
Made in us
Lord of the Fleet





Seneca Nation of Indians

Did someone call for brains in jars?????

http://www.bbc.com/news/science-environment-43928318


Fate is in heaven, armor is on the chest, accomplishment is in the feet. - Nagao Kagetora
 
   
Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

In related news, scientists have called for ethical thinking about the future of brain "organoids", which they have been growing to help research into dementia.

https://www.npr.org/sections/health-shots/2018/04/25/605331749/tiny-lab-grown-brains-raise-big-ethical-questions

I'm writing a load of fiction. My latest story starts here... This is the index of all the stories...

We're not very big on official rules. Rules lead to people looking for loopholes. What's here is about it. 
   
Made in gb
[DCM]
Et In Arcadia Ego





Canterbury








...this is how Bond villains start isn't it

..also : "amazon winnings" uh huh

 
   
Made in jp
[MOD]
Anti-piracy Officer






Somewhere in south-central England.

He could have spent it on cleaning up plastic in the oceans, or research into new antibiotics, or all kinds of stuff that would help the planet.

   
Made in gb
Calculating Commissar





The Shire(s)

 Kilkrazy wrote:
He could have spent it on cleaning up plastic in the oceans, or research into new antibiotics, or all kinds of stuff that would help the planet.

This is true, although space exploration is very important too. However, I don't like that it is Musk and Bezos doing the exploring. I don't trust what they will do with the power and resources they can gain from it.

 ChargerIIC wrote:
If algae farm paste with a little bit of your grandfather in it isn't Grimdark I don't know what is.
 
   
 
Forum Index » Off-Topic Forum
Go to: