After a lot of positive feedback on the discussion of robots, AI and the future of our world below, I thought I'd publish some more links and thoughts on similar subjects, and point out some opportunities to read and discuss further. If my last post inspired you, consider joining London Futurists, who host regular meetups to discuss these kinds of topics.
Last week's session featured David Orban (another Hungarian): entrepreneur, thought leader on global tech, and Advisor to Singularity University. He is the founder of Network Society Research, and talked about the exponential technologies that are emerging today, and what it means that these are commonly decentralised and organised in networks. This will affect our Energy, Health, Food, Education, Finance, Entrepreneurship, Manufacturing, Security and Decision-making (policy) sectors - which at the moment are mostly very hierarchical and centralised in nature. You can read his book "Network Society - The Coming Phase Change In the World Around You And How You Can Thrive Through It" for free - as well as his new book on AI and the Technological Singularity, "Something New". Both are offered via his consulting firm Futuroid, which makes its living helping clients understand all of the above, and how they can cope and thrive. Cool!
Seeing as you probably missed David's talk with the Futurists last week - you can watch it here on YouTube. Thanks for the link, David ;)!
I've got my eye on the upcoming discussion in March with Robin Hanson, Professor of economics and research associate at the Future of Humanity Institute at Oxford University (an institute that will also come up in later posts). He wrote the book "The Age of Em - Work, Love, and Life when Robots Rule the Earth", upon which his talk will be centred. For those of you who don't know, an 'em' is short for 'emulation', as in brain emulation. This stems from the hypothesis that the first truly smart robot will emerge from literally constructing a model of all of the human brain's connections, running it on a fast computer and - voila - getting a robot brain. I think this may be a little simplified, and it presumes that our brain's form of intelligence is the only kind that could be created on a computer.
All the more reason to attend the talk and ask some probing questions! Professor Hanson is an expert in physics, computer science and economics, and at the very least will paint an interesting prediction of what a world dominated by Ems would look like - especially how they would displace humans in most jobs. He postulates that Em descendants would potentially reject many of the values we hold dear - and force us to question common assumptions of moral progress - simply because they would think so differently. If you think about it, this isn't so different from how our farmer and forager ancestors might regard the way we run our lives and the world now.
Lastly, here ("Will Humans Be Around In a Billion Years or Trillion") is an extremely interesting article about a whole group of people researching the most likely ways humans may become extinct, and how we can prevent them.
The work actually stems from the Effective Altruism movement, but applied to maximising impact on humans who do not yet exist, i.e. our descendants. It's a kind of hyper-rationalism applied to saving the lives of the most people - 'Doing Good Better' - projected forward to extend the possibility of life to the millions of humans to come. It's apparently one of the dilemmas that keeps some of their top academics up at night: do I dedicate my life to trying to save the lives of humans alive today, or do I dedicate it to eradicating possible routes to human extinction, saving humanity itself? It's mind-boggling.
Their research takes place at the Future of Humanity Institute mentioned above, and they define themselves as a 'research collective tasked with pondering the long-term fate of human civilisation.' The philosopher Nick Bostrom is its Director. Interestingly, Bostrom is not really concerned with natural extinction risks. He reckons modern humans are so populous and geographically diverse that we would out-survive most natural disasters. The extinction risks that pose the most alarming threat to these thinkers are those for which there are no geological case studies - and no human track record of survival. These risks arise from human technology: a force capable of introducing entirely new phenomena into the world.
A large part of their time is spent working out how AI may out-accelerate, out-power or out-smart us, and what may happen after the Technological Singularity. The intellectual project is to try to predict these by-nature unpredictable technologies beyond the immediate horizon, and to mitigate or avert these potential extinction risks before it's too late.
It's a very long article, but one quote stood out to me especially, and again relates directly to this theme of trying to predict the negative impacts of AI, and prepare psychologically:
"To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane — something strong, but indifferent."
‘The basic problem is that the strong realisation of most motivations is incompatible with human existence,’ Dewey told me. ‘An AI might want to do certain things with matter in order to achieve a goal, things like building giant computers, or other large-scale engineering projects. Those things might involve intermediary steps, like tearing apart the Earth to make huge solar panels. A superintelligence might not take our interests into consideration in those situations, just like we don’t take root systems or ant colonies into account when we go to construct a building.’
It's a very interesting point to remember when you are trying to get your head around what superintelligence would look like - and not to get confused by 'Em'-like forecasts. Even if superintelligence were born from exactly emulating the connections of a human brain, empathy and wisdom would not by any means be by-products. The final sentence also shifts the perspective a little - especially when you consider that we have essentially been acting in exactly the same way towards our Earthly companions and ecological networks, with pretty destructive, and often irrevocable, results. Leaf-cutter ants are sometimes described as building the second most complex societies after humans. But perhaps human societies will in turn become second to AI.
Are we about to become the ants on the bottom of AI's shoes?