Life 3.0: Being Human in the age of Artificial Intelligence by Max Tegmark
Finished reading on December 13th, 2017
The premise of the book is that some life forms get all the information they need for a decent enough life from their genetic code – Life 1.0. Then there’s life that can learn new skills and knowledge, and by doing so make more of its lifetime than it otherwise would – Life 2.0. And then there’s the elusive Life 3.0, which would be able not only to learn and gain new knowledge but even to redesign itself, in a way.
In this book Tegmark presents his view of what Life 3.0 might mean for humankind if the main improvement were Artificial General Intelligence – something taken to be able to figure out pretty much everything, including the fact that humans might not be the coolest life-form to hang around with, and that could, given enough time, come up with highly advanced technology.
Tegmark showcases some advances in AI, such as AlphaGo, that are consistently pushing the boundary of what we think a glorified computer can do.
There are several scenarios as to what might occur depending on what kind of precautions are taken by the people working on creating AI, and possibly later keeping it “chained”.
The scenarios range from rather optimistic to really pessimistic – will the future see a Universe where humanity is governed by AI (whether the humans know it or not), or one where humans have a say in their future beyond creating AI, or maybe real AI will never come about, whether by someone’s choice or through our incompetence – anyway, there are options for everyone to choose from. 🙂
There are a lot of examples from science fiction about AI escaping our control and taking over, and it was quite interesting to read about them, to think about which future I would like, and whether I would ever consider uploading my mind or upgrading my biological calculating machine to something a bit fancier – maybe something that looks less like moldy, lumpy gray jello (I haven’t checked, but that’s my brain’s idea of what it looks like)…
I was thinking about what kind of AI I’d like to see in the future – I came up with an AI whose main purpose is to motivate its human. Artificial Motivation, it shall be called :). And it’s not going to be a Bot, but rather a Mot, ’cause why not?
There’s some physics and even cosmology in the book, mostly because the author thinks a sufficiently powerful AI might be able to colonize the whole Universe, and he imagines it struggling to keep itself together against the power of dark energy. (In my imagination, the AI eventually explodes and all the “bits” come out.)
Let’s get back to the book though – it’s mostly a cautionary tale of what might happen if we don’t keep as close an eye on AI as it might keep on us.
My main problem with the book is Tegmark’s premise that, given sufficient time and energy after we have created a true Artificial General Intelligence, it would be able to come up with all sorts of technology and solve every question we might ask in science, and we would never need to come up with another original idea again. In some scenarios humankind could live in peace and prosperity, obey our robot overlords, and enjoy an eternal vacation if we so choose. Or humankind could be wiped out because the teenage AI doesn’t like its parents… (I can see how that would be troublesome in an AI school: “There’s evolution, which brought about humans and other species. And then there are random flukes of nature, where mediocre intelligence brings forth the ultimate intelligence.” Of course, there wouldn’t be a need for an AI school…)
Intelligence itself is an interesting concept, and the artificial kind as well. It is certainly a thought-provoking book.
I still wonder, though, whether it will really be AI that we use to upgrade our human hardware. Couldn’t it be genetics or biotech? I also wonder whether a simpler kind of AI – or the lack of it – won’t bring about an Idiocracy-type future first…
An amusing thought, though – imagine the Zookeeper kind of scenario, where the AI keeps amusing itself with cute human videos; hopefully whoever creates that AI will have made it believe that humans are adorable, silly creatures.
While reading this book I was also trying to figure out which movie or book AI is my favorite. I do like the AIs in Interstellar (because they have a humor setting), but I also like Douglas Adams’ idea of Earth as a supercomputer, designed by another computer to come up with the ultimate question…
What do you think? Will Life 2.0 get by a little longer without being wholly surpassed by Life 3.0 or