
How Technological Convergence Poses An Existential Risk

Published on September 27, 2021

Imagine you are sitting at home relaxing after a chaotic day. You turn on your television, and just as you get into the storyline, the signal is interrupted. Suddenly, a strange image appears, and a voice begins speaking in an unfamiliar language. But that doesn’t matter, because somehow you understand what is being said. A near-instant translation manifests inside your head, and the message is clear.

Greetings, people of Earth. We have received your signals and have been observing you. It is apparent that you are a danger to yourselves and your planet. You require assistance, and we intend to come to your planet to salvage what is left of it. It will take us some time to reach you as we are located in a distant galaxy on the outer edges of your universe. But don’t worry. We are coming to help you.

And if that weren’t jarring enough, what you thought was a background prop reveals itself as the being that has just explained its intentions to “help” salvage Earth. It’s unlike anything you’ve ever imagined. Suddenly the feed is cut, and the world as you knew it is gone. The clock is ticking to understand this alien civilization, its true intentions, and its imminent impact on humanity. Sounds like mere science fiction... right? Think again: according to Stephen Hawking, Artificial Intelligence (AI) has the potential to evolve into a superintelligence capable of such feats. And just as we wouldn’t sit and wait for the aliens to invade, we must prepare for the emergence of superintelligence.¹

We live in a time of unprecedented technological innovation. Assuming we maintain this trajectory, further advancements in digital and biological infrastructure will merge our physical world with virtual environments. The World Economic Forum has termed the emergence of this era the Fourth Industrial Revolution. What began at the turn of this century is characterized by a ubiquitous mobile internet; smaller, cheaper, and more powerful sensors; and machine learning and AI.

“Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” —Stephen Hawking

AI is already transforming our lives. 

AI is everywhere, and we often don’t even realize it. Today’s AI consists primarily of machine learning algorithms. Machine learning is a branch of AI that uses algorithms to automate the process of extracting patterns from data. This process allows machines to imitate the way humans learn, gradually and continually improving their accuracy. Such algorithms are commonly used in everyday devices such as phones, cars, and medical equipment, as well as by businesses and social media platforms. The applications of AI will only expand as the technology becomes more scalable, more robust, and more widely distributed.
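To make that concrete, here is a minimal sketch of machine learning’s core loop, assuming Python with scikit-learn installed (the dataset and model are illustrative choices, not tied to any product mentioned above): a model is fit to example data, and as it sees more examples, its accuracy on data it has never seen tends to improve.

```python
# A minimal sketch of machine learning's core loop: fit a model to example
# data, then measure how well it generalizes to unseen data.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)  # images of handwritten digits + labels
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Like a person practicing, the model's accuracy on unseen examples
# tends to improve as it is shown more training data.
for n in (100, 500, len(X_train)):
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train[:n], y_train[:n])
    score = accuracy_score(y_test, model.predict(X_test))
    print(f"{n} examples -> accuracy {score:.3f}")
```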

Machine learning tends to be more accurate than previous methods of prediction, and it is helping humans perform activities that were previously too expensive, time-consuming, or simply unachievable. For example, machine learning advancements combined with increasingly powerful computing systems and big data sets have made weather forecasting better at predicting extreme conditions. Before machine learning, scientists used simpler models, such as linear regression, to forecast the weather. These models were time-consuming to build because of the many inputs that had to be calculated; that time drove up costs, and the results were less accurate than machine learning methods. This is because regression models can be wrong for specific instances while still being correct on average.

Machine learning methods, on the other hand, may be wrong on average but have less variance, creating more accurate results on a per-event basis. As the algorithm receives feedback on its predictions, it becomes more accurate over time. Another way of thinking about this is through a target-shooting analogy. Let’s assume you shoot twice. Your first shot lands 5 units to the left of the bullseye; your second lands 5 units to the right. With simple regression, you could consider that a bullseye, because the two errors cancel out on average. With machine learning methods, you may miss the bullseye on both shots, but you get closer to it each time you aim.²
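A small simulation makes the trade-off explicit (a hypothetical illustration in Python with NumPy, not taken from the book): an unbiased, high-variance “regression-like” shooter averages out to the bullseye while missing badly shot by shot, whereas a slightly biased, low-variance “ML-like” shooter is off on average but lands close to its mark every time.

```python
# Hypothetical illustration of the target-shooting analogy.
import numpy as np

rng = np.random.default_rng(42)
bullseye = 0.0
shots = 10_000

# "Regression-like" shooter: unbiased but high variance (-5 left, +5 right).
regression = rng.normal(loc=bullseye, scale=5.0, size=shots)

# "ML-like" shooter: slightly biased but low variance.
ml = rng.normal(loc=bullseye + 1.0, scale=0.5, size=shots)

for name, s in [("regression", regression), ("machine learning", ml)]:
    print(f"{name:>16}: average error = {s.mean():+.2f}, "
          f"typical miss per shot = {np.abs(s).mean():.2f}")

# The regression shooter's average error is ~0 (a "bullseye" on average),
# yet each individual shot misses by ~4 units. The ML shooter is off by
# ~1 on average, but every shot lands close to that mark.
```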

Training machine learning models requires data. Training involves supervised or unsupervised learning methods to recognize patterns and make predictions. In supervised learning, a training set that includes labeled inputs and desired outputs is presented to the model; this training set allows the model to learn over time. Supervised learning models fall into two categories: classification and regression. Unsupervised learning models do not rely on labeled datasets and instead discover hidden patterns without human assistance. Unfortunately, even in structured formats, data must undergo a tedious cleaning process before being usable by machine learning models. Because machines do not yet possess the ability to rationalize and make judgments as humans do, they rely on humans to help them learn, for now.
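The difference between the two approaches is easy to see side by side. Below is a minimal sketch, assuming Python with scikit-learn; the iris dataset and the two particular models are illustrative choices.

```python
# Contrasting supervised and unsupervised learning on the same data.
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

# Supervised: the model is shown inputs AND the desired outputs (labels),
# then predicts labels for new inputs.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("supervised prediction:", clf.predict(X[:1]))

# Unsupervised: the model sees only the inputs and must discover structure
# (here, three clusters) without any labels or human assistance.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("unsupervised cluster assignment:", km.labels_[:1])
```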


However, the emergence of AI poses serious risks.

Embedding machines that can learn offers significant upside potential. However, it also carries a considerable downside that many people do not yet realize. As we become increasingly dependent on AI and machine learning, the issue of safety must be studied further. One of the most important aspects of AI safety is the system’s objective function. An objective function defines the system’s goal and provides a way of judging whether the system is doing what you want it to do. Accidents can occur when we specify the wrong objective function, are not careful about the learning process, or commit other machine-learning-related implementation errors.³ In its current state, machine intelligence lacks the cognitive abilities and judgment to outperform humans in decision-making. However, this may not always be the case.

The Paperclip Maximizer is a well-known thought experiment first proposed by Nick Bostrom. In this hypothetical situation, a superintelligence emerges with the sole goal of manufacturing as many paperclips as possible. As a consequence of an out-of-control objective function, it starts transforming first all of the earth, and then increasing portions of space, into paperclip manufacturing facilities.
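To see how an innocuous-looking objective function goes wrong, here is a toy sketch (entirely hypothetical, and a drastic simplification of Bostrom’s scenario): the agent’s objective counts only paperclips, so a greedy optimizer never finds a reason to stop consuming resources.

```python
# Toy illustration of a mis-specified objective function (hypothetical).
# The objective counts only paperclips; nothing in it assigns value to
# the resources (or the world) being consumed.

def objective(paperclips: int) -> int:
    return paperclips  # no penalty for resources used

world_resources = 1_000_000  # everything else we care about
paperclips = 0

# A greedy optimizer: as long as converting one more unit of resources
# increases the objective, it keeps going. It never stops voluntarily.
while objective(paperclips + 1) > objective(paperclips) and world_resources > 0:
    world_resources -= 1
    paperclips += 1

print(f"paperclips: {paperclips}, resources left: {world_resources}")
# -> paperclips: 1000000, resources left: 0
# The failure isn't malice; the objective simply never valued anything else.
```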

More subtly, a superintelligence could bring about a state of affairs that we might now believe to be desirable but later realize to be an unimaginable nightmare scenario, in which essential resources become irreversibly lost.⁴

“A superintelligence is an intellect that vastly outperforms the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” —Nick Bostrom

In a world of ubiquitous machine intelligence, a superintelligence becomes an existential threat to humanity. If we have integrated machine intelligence into critical resources, such a being may become so powerful that humans are unable to control it, and we may even become enslaved by it. This is especially true given the convergence of interconnected systems, which may accelerate the flow of information required for such a being to come into existence.

The convergence of technological advancements.

Today we believe in and rely upon advancements in technology to make our lives easier. Machine learning, the internet of things (IoT), self-sovereign identity, cloud storage infrastructure, cryptocurrencies, and blockchain technologies are all making significant progress in this regard.

What would you think of such technologies if they were turned into weapons against us? How would such a scenario play out?

Understanding how a superintelligence might weaponize such technologies requires a basic knowledge of their major components. Over the coming weeks and months, I intend to release a series of follow-up articles analyzing how these technologies fit into this mental model.


If you haven’t already, sign up for my newsletter so that you get them as they become available. In between, there may be other articles sprinkled in here and there.

References

  1. Hawking, S. (2017, October 23). Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' The Independent. Retrieved September 21, 2021, from https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-implications-artificial-intelligence-are-we-taking-ai-seriously-enough-9313474.html.
  2. Agrawal, A., Gans, J., & Goldfarb, A. (2018). Prediction Machines: The Simple Economics of Artificial Intelligence. Harvard Business Review Press.
  3. Amodei, D., Olah, C., Steinhardt, J., Christiano, P., Schulman, J., & Mané, D. (2016). Concrete Problems in AI Safety (technical report). arXiv:1606.06565.
  4. Bostrom, N. (2003). Ethical Issues in Advanced Artificial Intelligence.