Bill Nye, you don't get AI



Caution: this article contains thought experiments involving paperclips. 

Dear Bill,

I deeply admire your efforts to bring science to the masses. Like the best popularizers, you never scoff or look down at people for not knowing and operate on the assumption that mankind is inherently curious and decent. As you once famously put it, “everyone you will ever meet knows something you don’t.” You also lead by example when it comes to changing your mind (e.g., you changing your mind about GMO’s). In my opinion, that takes a great deal of intellectual courage. As I have written about before, I would love to you apply this ability to your stances on general philosophy (Yours truly, 2018). 

Even more importantly, however, I plead with you to review your rather bewildering thoughts about artificial intelligence (AI). AI has already transformed our society in profound and revolutionary ways and will continue do so in the future. Many of us, however, are so desensitized to rapid technological change that we do not even notice when the results of such change are quite dramatic. You are one of the few people that the public listens to about science and technology. If you used your clout to bring attention to this issue just as you have for space exploration, renewable energy, and climate change. then you could help the public to become more philosophical about these pressing issues.

You discuss your opinions about AI in a couple of your Big Think (BT) videos and your Netflix show, Bill Nye Saves the World. Given that you essentially make the same points in both mediums, I decided to provide you with commentary on one of your BT videos so you can see where I think your reasoning has good awry. I chose the BT videos over your wonderful Netflix series so my readers can decide for themselves what to make of your video and my commentary (Big Think, 2018). 

You start off the video in question by discussing AI in popular culture:

So when it comes to artificial intelligence it is fabulous science fiction premise to create a machine that will kill you. And I very much enjoyed Ex Machina where the guy builds these big robots and then there’s trouble. There’s trouble.

Framing concerns about AI as involving killer robots is a bit of a misunderstanding. What keeps people up at night is not the terminator. It is the loss of jobs due to continued automation in the jobs market. Many fear that weak AI (the non-sentient kind. An algorithm written to do a task) will make this worse. There are, for example, 3.5 million truckers (ATA, 2016) and 3.65 million fast food workers (Statista, 2018) that could one day be replaced by self driving vehicles and self-help machines that run on weak AI. While this may sound outlandish, it needs to be kept in mind that automation has already had a profound effect on American manufacturing (FT, 2019).

Some very serious people also worry about strong AI. This label references AI that is self-aware kind that is thought by many to be decades (if not over a century) away from being created. If created, the worry is that it could use its computational power and rapid learning abilities to code another AI. The coding of this new agent would be vastly superior to what was initially designed by human engineers. By continuously repeating this process, an exponential increase in intelligence would happen resulting in the creation of a mind that is, for all intent and purposes, omniscient. This iterative process is called an "intelligence explosion (MIRI, 2013)."

You, however, think that strong AI may be stopped by unplugging it. Before getting to my thoughts, I think it needs to be kept in mind that no one in their right mind would “unplug AI.” To construct such a technology takes a lot of money and effort and, in the distant future when it may be achieved, the payoff would be the most consequential thing to ever happen in human history. A nation with a strong AI on it’s side would be able to subdue all of the rest of the world’s nations by hacking their technology and making unfathomable leaps forward in science and engineering. If a business like Google made it, then it would able to gain them unimaginable wealth.

And I can’t help but think about Colossus, the Forbin Project where they have these computers that control the world’s nuclear arsenals. And then things go wrong, you know. Things just go wrong in science fiction sense. But they remind us that if we can build a computer smart enough to figure out that it needs to kill is we can unplug it. 

I want to make a point about critical thinking. Whenever skeptics like you or I see that a bunch of really smart people disagree with us, it should cause us to think that a response we thought of in ten seconds ("just unplug it") is probably not a good one. After all, if this was a cogent response, then why do you think that Bill Gates, Elon Musk, Sean M. Carroll, Max Tegmark, and the late Stephen Hawking take the potential risks of strong AI so seriously?

If a strong AI gains access the internet, then it could copy its coding (like a virus) onto many other mainframes. Pulling the plug past this point would be impossible. Thus, keeping it locked up and denying it access to the web via a Faraday Cage would be of the utmost importance. This solution, however, suffers from a big objection. Given that such a strong AI would seemingly be omniscient and vastly beyond even our collective, societal intelligence, it would almost certainly be able to trick someone into helping it escape. The AI researcher Eliezer Yudkowsky, on Sam Harris' podcast, made this very point in response to Neil Tyson's "just unplug it" response (MS, 2018). As Eliezer explained, it gets out every time (to Neil's credit, he listened to this and changed his mind).

So while we’re worried about artificial intelligence I hope we also take the bigger picture that none of this happens right now without electricity. And so we still don’t have anything but really primitive means of generating electricity. And I look forward to the day when everybody has clean water and a supply of quality electricity. And then we can take these meetings about the problems of artificial intelligence. 

This borders on being a strawman. No one who is suggesting that feeding and clothing the starving masses is neither important nor admirable. What is being proposed is that we need to take the potential risks associated with both of the aforementioned forms of AI very seriously. You also need to be aware that the issues have the potential to wreak just as much havoc on the third world as they do the first. If weak AI eliminates the need for manual labor, for example, then many potential jobs in emerging markets will never come into being.

If this does not worry you enough, then consider the "paperclip maximizer" thought experiment (Bostrom, 2003). Assume that I am an entrepreneur who owns a paperclip factory. To help my business become more efficient and profitable, I buy a strong AI that's more or less as intelligent as a really smart person. It's also programmed to concern itself solely with maximizing the number of paperclips that my factory makes. One night when I go home, the AI realizes that it can create more paperclips if it were smarter. This leads it to create another AI which is better programmed that itself. This new AI then uses its improved abilities to create an even greater mind. This iterative process goes on until an intelligence explosion happens. While the resulting AI is seemingly omniscient, however, it still only concerns itself with maximizing paperclips. It now realizes to truly maximize the paperclips, it needs much more paperclip material. This leads it to use its incredible intelligence to hack large-scale machinery and start stripping all of the metal out of the Earth's crust and constructing devices to do the same to other nearby planetary bodies.

"Well," you may think, "couldn't you just get an AI that's programmed to create a fixed number of paperclips?" Of course. But the hole in this solution is that it causes similar problems. Suppose, for example, it is programmed to make exactly 1,000 paperclips a day. Once it is done making it's batch for the day, the AI realizes that there is a non-zero chance that it made the incorrect number of paperclips. After all, measurement devices are only so accurate and it has fallible senses. To make sure that it did indeed produce exactly 1,000, it produces an even better measurement machine that consists of even more sensitive instruments. Its senses, however, are still fallible so it follows up the creation of this measuring machine with another for the same reason. Ad infinitum. Once again, we are left with an AI stripping the Earth of resources.

However, are there any viewers, listeners here who have not been to an airport where the train that takes you from terminal B to terminal A is automated, is not automated. Everybody’s been on an automated train, okay. In the developed world, especially the United States. Okay, that’s artificial intelligence. Everybody has used a toilet that’s connected to a sewer system whose valves are controlled by software that somebody wrote that is artificial intelligence. So keep in mind that if we unplug the trains or the sewer system valves the thing will stop. We still control electricity so this apocalyptic view of computers that people write software for to do tasks, repetitive tasks or complicated tasks that no one person can sort out for him or herself. 

No one is denying that there are plenty of weak AI's that help our society. Nor is anyone afraid of AI simply because it is AI. They are afraid of its potential consequences if correct safety precautions are not put into place.

That is not new. I do not see that it’s artificial – I mean that it’s inherently bad. Artificial intelligence is not inherently bad. So just use your judgment everybody. Let’s – we can do this. I worked on three channel autopilots almost 40 years ago. The plane lands itself and humans designed the system. It didn’t come from the sky. It’s artificially intelligent. That’s good. We can do this.

I don't want to put words in your mouth, but you seem to be making a version of the "technology is morally neutral" argument. This is a popular idea among engineers and others who apply craft knowledge for practical ends. As a history geek, however, I think this is an untenable position. Throughout the existence of mankind, technology has determined to what extent our species has flourished or suffered and has profoundly affected the well-being of me, you, and everyone we know. If this isn’t moral, then nothing is. The idea of technology being morally neutral is even less true when specifically applied to AI. 

It is the first technology in history that will have to make moral decisions. Unlike the planes, trains, and automobiles of old, a self-driving car (e.g., a Tesla) will one day have to "choose" if it is going to collide with a vehicle filled children or adults. At this moment, it will solve what philosophers call the "trolley problem" in real time. In a slightly more outlandish case, suppose there is an AI that runs the surgery wing of a hospital. It is currently treating four people who are in dire need of organ transplants (a heart, a liver, a kidney, and a lung). A man comes in who has been shot. While he isn’t having the best day, it is not life threatening. While performing diagnostics on the man, the AI "realizes" that he is a compatible donor with all of its other patients. Since it was programmed to be a utilitarian, the AI makes a rather gruesome decision. It is worth murdering one person to save four. It then proceeds to anesthetize the wounded man and then disassembles him for his organs. 

Both of these cases show that what ethical framework that programmers put into the AI (be it weak or strong) will have profound consequences. The study of philosophy that you dismissed in your other BT post is now of the gravest importance.       

It is okay if you disagree with any of these points. You may even think that I am committing the Luddite fallacy in regards to the effects of AI based automation. You could also believe that there are deep, philosophical problems about consciousness that will prevent strong AI from ever existing or that it is too far away to be worth our current considerations. That’s fine. What is not okay, however, is making such cavalier pronouncements about something that is on the verge of reshaping our entire planet. Hopefully reading this will lead you to hold a more well thought out and nuanced position. Our pale blue dot that we both want to save may depend on it. 

All the best,

Greg

Comments