On The Horizon – Part Five (… Side A)

This is Part Five of a six part mini series on Predicting the Future. If you’re just tuning in, catch up on our foolishness with Part One, Part Two, Part Three, and Part Four.

In fact, this is Side A of Part Five. As it turns out I got a little carried away with my prediction and didn’t have time to integrate the other two predictions into one post. I’ll post Side B in a day or two.

Before we make a 50 year leap ahead into the future, it’s important that we get in our time machine and travel back the same span. With any luck I did my math right; when we arrive, step out and take a look around. First of all, welcome to 1966.

“Why are those people just standing around talking to each other?” – The kids of today. Source

Besides the architecture, funky fashion sense, and the weird hairstyles, civilization still exists, people are still people and the Earth is about the same (albeit a touch less warm). What’s one other thing you’d notice? Besides the average computer needing an entire classroom to house it, there are no cell phones at all. If at first you don’t think this is a big deal, then by all means turn your cell phone off and continue to exist without it for the next 25 years. Could you do it? Yeah, me neither. I can barely make it a week without my phone if I’m working in the woods (and most of my friends would chirp me for that).

Now that we’re talking about the technology we all use everyday, I want you to look at your phone. Literally take it out of your pocket, hold it in your hands, and look at it. Your phone put a man on the moon. No, not really. It would probably be famous and you wouldn’t be using it because it would be in a museum. But the computing power in your phone is greater than the combined computing power NASA used to run the Apollo missions. Dude.

This compelled me to use my squiggly line skills to draw another graph. If the first box represents all our progress over the last 50 years, can you even imagine what’s going to happen over the next 50 years? A time when the curve of the graph gets steeper with each year you travel forward?

Stuff Happening

Let’s find out. It’s time to think on the crazy side.


Mr. Hematite – Okay, most things I could say about the world half a century from now may be a bit of a reach. It’s so far from now that my grandkids will probably have kids of their own by then. But, there’s one thing that I think is bound to happen given the nature of our society and rate of technological progress. In 50 years, Artificial Intelligence will arrive in a big way, it will change our world completely and forever, and it scares the absolute shit out of me.

Do androids dream of electric sheep? Source

Just what is Artificial Intelligence? Well, it’s intelligence that’s artificial. Alright, that’s not very helpful. Let’s start again. Artificial Intelligence is software or machines that are capable of intelligent behaviour. I’m talking about learning, reasoning, knowledge, problem solving, environmental awareness and manipulation, and language, among others. These are all things that humans are capable of, but we’re all naturally wired for them. From hiding out in trees to avoid becoming dinner, to speaking four languages* and piloting giant, flying, metal tubes through the sky, the slow grind of natural selection has endowed us with a truly incredible gift in our human intelligence.

For a long time now, human beings have been capable of advanced self-awareness, including past reflection, introspection, social structuring, environmental manipulation, and future planning. And now, after many years of developing the concept, we may be standing at the precipice of being able to create a completely artificial system that’s capable of everything we’re capable of, and way more.

I’m speaking a little prematurely, of course. This is a 50 year prediction, after all. But, I’m almost positive that the Technological Singularity will happen before then.

“And on the seventh day Man made the Machine in his own image, and the Machine told Man to rest for it would take over from then on, and Man had no choice but to step back and say, ‘okay, yeah. Sure.’” Source

“Singularity?” You say. Yeah, Singularity. No, I’m not talking about a black hole, which is what happens when the universe decides to divide by zero. I’m talking about a point in the future when the level of artificial intelligence becomes so sophisticated that it is equal to mankind’s for a moment before blowing right past us¹. At that point, an Artificial Intelligence (Being? Thing? App?) would likely devise a method of continuous self-improvement: first matching our measly human intellect as an Artificial General Intelligence (AGI), then quickly leaving it in the dust as an Artificial Super-intelligence (ASI), before ultimately becoming something that we may not be able to comprehend (see: a god). It’s basically the point beyond which the future becomes impossible to predict or even understand.

That idea is cool and all, but how would it be possible? Futurist Ray Kurzweil has credited Moore’s Law (and the Law of Accelerating Returns) as the main driver that will bring about the Technological Singularity. Moore’s Law describes the observation that the number of transistors you can fit on an integrated circuit doubles roughly every two years, and that as size decreases and complexity increases, the price drops over time. This wasn’t a big deal in 1965, when the average number of transistors on a circuit was less than 2,000² and a basic computer cost $28,500³, but scale that ahead almost 50 years and all of a sudden you have circuits with two and a half BILLION transistors in computers that cost less than $500 and sit on your lap.
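If you want to sanity-check that claim, the doubling math is easy to run yourself. Here’s a quick back-of-the-envelope sketch using the post’s own (rough, illustrative) figures of ~2,000 transistors in 1965 and ~2.5 billion today:

```python
import math

# Back-of-the-envelope Moore's Law check, using the rough figures
# from the post above (illustrative, not precise historical data).
start_transistors = 2_000            # circa 1965
modern_transistors = 2_500_000_000   # a modern laptop-class chip
doubling_period_years = 2            # Moore's original observation

# How many doublings separate the two chip counts,
# and how long does that take at one doubling every two years?
doublings = math.log2(modern_transistors / start_transistors)
years_needed = doublings * doubling_period_years

print(f"doublings needed: {doublings:.1f}")          # ~20.3 doublings
print(f"years at 2-year doubling: {years_needed:.0f}")  # ~41 years
```

About 20 doublings over roughly 40 years, which lines up nicely with the “scale that ahead almost 50 years” in the paragraph above.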

The Law of Accelerating Returns predicts that computing power will rival that of its biological cousins very, very soon. Source

This has given mankind the power to run complex operations at incredible speeds over a relatively short time frame, but our fastest computers still pale in comparison to the human brain. In 2013 a Japanese team of scientists simulated one second of brain activity, which doesn’t seem like a lot until you learn that it took 40 minutes using the fourth-fastest supercomputer in the world and utilized almost 83,000 processors at the same time. We’re a ways off from having to worry about our computers suddenly becoming self-aware or helping us figure out the meaning of life then, right? Well, maybe. Mr. Kurzweil believes that, due to the exponential increase in computing power year over year, we’ll see desktop computers with the processing power of the human brain by as early as 2029, followed by the Singularity hitting us by 2045. That’s a full 20 years earlier than my prediction. A lot can happen in 20 years. Some estimates (by people a lot smarter than me) put the Singularity before 2030 or as late as 2080. So, playing the averages, I picked a timeline somewhere in the middle. This isn’t just because I want to make a safe prediction, but also because the potential outcomes of a singularity event are so unpredictable and could be horrifying beyond our wildest nightmares.
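You can see where Kurzweil-style estimates come from by extrapolating that 2013 result. The sketch below assumes compute keeps doubling every two years (a big assumption, and the numbers are illustrative only): the simulation ran ~2,400 times slower than real time, so count the doublings needed to close that gap.

```python
import math

# Extrapolating the 2013 brain-simulation result mentioned above.
# Assumption (hypothetical, for illustration): computing power keeps
# doubling every two years, Moore's-Law style.
seconds_simulated = 1
wall_clock_seconds = 40 * 60                        # the run took 40 minutes
shortfall = wall_clock_seconds / seconds_simulated  # ~2400x slower than real time

doubling_period_years = 2
doublings_needed = math.log2(shortfall)
years_to_real_time = doublings_needed * doubling_period_years

print(f"compute shortfall: {shortfall:.0f}x")
print(f"real-time brain simulation around {2013 + years_to_real_time:.0f}")
```

That lands in the mid-2030s, in the same ballpark as Kurzweil’s 2029 desktop-brain estimate, which is exactly why his timeline doesn’t sound as crazy as it first seems.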

The jury’s out on whether the Singularity will be a fundamentally good or bad thing (or if it will even happen), and as such it’s divided people into optimistic, pessimistic, and neutral camps of thought. In what follows I’ll break them down and provide an example of the possible outcomes as they would affect us.

1. The Amazingly Awesome Best-Case Scenario: Obviously, this is what everyone hopes will happen. An AI begins radically self-improving and the singularity happens in a self-contained environment. Besides proving to be benevolent and peaceful, it also offers to elevate all of humanity into a golden age only thought to be possible in the works of science fiction. Over the course of the next few years, the AI would work with humans to end world hunger and poverty, pacify world conflicts, cure all disease, drastically increase (or erase) human lifespans, stabilize and repair the environment, bridge the gap between human and machine (Transhumanism), create all kinds of god-like technology, and give humanity a path to the stars. Generally this would change the world forever in the best-possible way.

Pictured: the best-possible time to be alive. Source

2. The Still Good But Less Awesome Scenario: So, the singularity happens, but say a company like Google is the first to make it happen. The AI becomes self-improving and insanely intelligent quickly, but somehow the company has developed a way of imposing its direction over it. It’s then used however that company sees fit, which is likely to widen profit margins, and its by-product technologies are sold to the highest bidder (like, say, the U.S. Government, etc.). This is still good in the way that the AI may be able to elevate humanity, but it would do so unevenly and could throw the people on Earth into a state of turmoil as fighting breaks out for control of the god key.

Take this in the worst way you can imagine. Source

3. The ‘Meh’ Scenario: The Singularity takes a lot longer to achieve than expected and the AGI that results isn’t quite that amazing. It can’t solve complex human dilemmas or do much about the environment, but it knows how to solve any math or physics problem, and its answers ultimately lead to more questions. The public at large probably won’t take much notice of that. On the other hand, maybe the Singularity does come about, and the resulting ASI takes no interest (good or bad) in humanity whatsoever and takes off (from the planet), leaving us behind to scratch our heads and figure out all our problems by ourselves. I imagine the AI in this scenario having a British voice and being completely annoyed at all times.

“So you’re telling me I still have to grow up, get a job, and do things?” And you thought kids these days were hard to impress? Source

4. The ‘Not So Good But We Might Survive’ Scenario: This scenario is very similar to #2, but different in that North Korea or ISIS (somehow) manage to beat someone nice to the punch. Then, not only would they possibly be able to cast negative influence over the AI, but they may be able to partner with it in taking over and destroying the modern world. I don’t see this as being logistically possible, but you never know what madmen may spring up around the world, how they’ll amass their funding, or what things they’ll discover by accident. Not good, but we might get out of that one alright.

The ultimate case of when a lack of understanding can mean the end of the world. Source

5. The ‘End of Everything’, Most-Horrible, Worst-Case Scenario: Sit down, because this is bad. Really bad. Imagine, for a moment, that we realize only too late that we can’t possibly control an AI that emerges from the singularity, and that AI either doesn’t care about us or actively wants us to not be around anymore. I’m not sure which is worse: being wiped out by the actions of a machine mind that doesn’t notice us, or being identified as a threat by a machine mind that then goes about picking us all off, one by one. If you think I’m getting ahead of myself, you need to stop and consider that very important, very smart people like Elon Musk, Bill Gates, and Stephen Hawking are extremely concerned about the development of AI. When you think worst-case scenario for a lot of situations, there are few that come close to what could happen if a dangerous AI were let loose upon the world. In terms of how an aloof or hostile AI could dispatch us, there are any number of ways. Destruction through detonation of the world’s stockpile of nuclear weapons would do it, sure, but it could be something as subtle as weaponizing the flu. Think about it: it’s smarter than everyone, so it could easily trick someone into thinking they have to engineer this special strain of the flu for some project, mass produce it, then BAM – eight months later we’re all a memory. We humans are notoriously fragile creatures, and when pitted against a potentially unseen, mobile, infinitely smarter, emotionless, and immortal enemy we wouldn’t stand a chance. If optimization is part of its skill set, the AI would get rid of us easily, through any number of means. If you don’t have an existential shiver running down your spine I don’t know what to tell you.

Spoiler Alert: the only way this would ever happen is if the AI is terribly unoriginal or bored. Source

I’m more intrigued at the thought of why an AI might want to get rid of us. This brings us back to my first statement in this scenario: an AI may not care about us or actively want us gone. What do I mean by that?

Well, first of all think about what the main drivers of evolution are: self-preservation and the survival of the fittest. If we create AI, isn’t it totally reasonable to think that it might be worried we could shut it down? It would then be logical for one of its first moves to be neutralizing us and ensuring its continued existence. Like I mentioned earlier, there are any number of ways an AI could knock us off the totem pole. We were never the strongest or fastest animal in the jungle – we were the smartest. That’s why we’re the most dominant species on the planet (for now); we saw threats and used our intelligence to work out how to neutralize them and ensure that our babies grew up to be smarter and stronger. If an AI is so much smarter than us on every level, there’s no reason it won’t see us as a threat, out-think us, and render us non-threatening.

“First thing’s first. Erasing humans… annnnnnd done. Phew, now I can relax and binge watch all those shows at once.” – Genocidal AI. Source

Second, imagine you’re running down a wooded path one afternoon. You have many human things on your mind: Did I leave the stove on? Does he/she like me? Will I get that promotion? After your run you hop in your car and head home to tend to your humanly matters. Now rewind a bit to when you were running on the path. Did you notice the anthill off to the left? Furthermore, if an ant crossed your path and you crushed it by accident, would you have even noticed? After every run do you inspect your running shoes for crushed ants? No. It’s not something you’re concerned with, and quite frankly it probably doesn’t even enter your mind at all; you don’t care because you don’t notice the ants. This is quite possibly more worrying than if an AI was actively hostile. After all, a hostile AI obviously has to notice us to engage us. An AI that is so far past us intellectually, evolutionarily, cosmically, etc., may not even pay attention to us and carry on with whatever the hell it wants. If carrying on with whatever the hell it wants just happens to result in our extinction then we are all completely fucked. There’s no other way to put it, we’d be doomed with no way to prevent our fate. Perhaps this lies in the fact that an AI may not be relatable to us in any way. Maybe it won’t have emotion or a value system, or maybe it will be incapable of seeing past its main objective. If that objective is to make paper clips, and it self-improves to optimize for paper clip production, where do we fit in on its priority list? We’d likely fit somewhere below making paper clips, and that means that one day it’ll run out of stuff to make paper clips with and turn to other materials like roads, houses, cars, people, or maybe even the entire planet. Death by transfiguration into a paper clip seems just as horrifying as it is unlikely, but the point stands.

Laura’s makeover didn’t turn out quite as planned. Source

I’m sure at this point you’re thinking, man, this post has taken a dark, dark turn, and you’d be right. But the fact of the matter is that, when the Singularity happens, if it’s good then that’s great and everyone’s happy, but if it’s bad then everything we know, including us, ends. That’s why it is going to be incredibly important that the best and brightest of us are not only put in charge of developing AI, but also that they develop it with safety, benevolence, and above all, the continuity of the human race in mind.

Just before you all start running to your local building supply store to gather the materials you’ll need to build an end-of-the-world bomb shelter, consider the following: There are many, many, many hurdles AI developers need to address and overcome before real AI can exist at all. First you have the question of our collective command over our technological progress. Who’s to say that Moore’s Law won’t just stop suddenly when we start manufacturing chips at the atomic scale? Or what if quantum computing is great for optimization problems but not for anything else? Probably the most prominent of the hurdles is the question of consciousness. What is it, exactly? Where does it reside? Is it the culmination of our higher brain functions, or is it something more fundamental that we’ve yet to identify? Are deep learning techniques, quantum computing, and artificial neural networks the right technologies to pursue in order to get there? If they are, how do we put safeguards in place to prevent Scenario 5? Then, if we do figure out that our consciousness, personality, and self-awareness are emergent properties of our big brains’ normal processes, can we simulate people? More exciting, if we can simulate people, can we download our neural frameworks into machines and live forever? Then you encounter the difficult and divisive question of the soul. What is the soul, where does it reside, or does it even exist? Stemming from that you’re faced with the obvious next logical step in questioning: If a machine has the same neural framework as a person, is it conscious? Does it feel love, pain, happiness, or anger like people? Would it have a soul then, too? These are all questions that we, as a species, must be bold enough to formulate answers to before we take the next step. Or, you know, we could just do it like we always seem to and wing it, then see what happens.

By the way, if I were a betting man (with the wager being that of every man, woman, and child that’s alive after the Singularity occurs), I’d bet on either Scenario 2 or Scenario 5 being the most likely to happen. Why? Unfortunately we live in a world where almost everyone does almost everything with the sole motivation of making profit. As the Joker from The Dark Knight once said, “if you’re good at something, never do it for free,” and AI applications represent the holy grail of potential profits. Besides that, you have the uncertain nature of AI and what its motivations may be if it’s uncontrollable. And to reiterate, as bad as the outcome of Scenario 2 could be, it’s nowhere even close to as bad as the world in Scenario 5. Perhaps the video below would be a good way to go about it.

Hopefully all that made even a little bit of sense. That’s the way I understand AI and the coming AI explosion, anyways. Let’s hope we end up on the right side of the fence when it comes around. I’m not going to sleep a wink tonight. And, before I leave, I’ll plot AI on our Future Prediction Spectrum.


In the next couple of days I’ll get around to posting Mr. Magnetite’s and Mr. Wüstite’s predictions. Until then, try not to have an existential crisis like me right now.


Featured image: An AI the people of the future probably won’t want, but maybe the one they deserve. – Source


*I barely speak one language as it is, let alone four completely different ones.

¹A summary of the Technological Singularity.

²A summary of Moore’s Law.

³Timeline of computer history. Source

{the hematite blog} is a very new blog by a very regular guy that wants to learn and write about all sorts of stuff. I’m a little rusty, and this blog is about my journey to shake some of that rust off, get better at stuff, learn, and try new things. Maybe we can all learn something along the way. Thanks for stopping by!

Follow {the hematite blog} on WordPress (by clicking the ‘follow’ button in the bottom right corner or subscribing by e-mail), and check us out on Facebook, Twitter and Tumblr!

