Why Picking the “Right” Utility Function Doesn’t Matter

Humans are motivated by pleasure, pain, and our emotions. In computer science, and more specifically in the subfield of artificial intelligence, a utility function is essentially a mathematical encoding of motivation. Some people believe that if we give true AI a utility function that favors us, then the AI will serve our needs and better humanity.
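To make the idea concrete, here is a minimal sketch of a utility function in Python. The state fields, weights, and the "favor the humans" encoding are all hypothetical, chosen only to illustrate that a utility function maps a state of the world to a number the agent tries to maximize:

```python
# Toy illustration (all names and weights are hypothetical): a utility
# function scores a world state, and the agent prefers higher scores.
def utility(state):
    # A crude "favor the humans" encoding: reward human well-being,
    # lightly penalize resources consumed.
    return 10 * state["human_wellbeing"] - state["resources_consumed"]

better = {"human_wellbeing": 8, "resources_consumed": 3}
worse = {"human_wellbeing": 2, "resources_consumed": 9}
assert utility(better) > utility(worse)
```

The hope described above is that an AI maximizing such a function would thereby serve us; the rest of this post argues against that hope.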

I have an argument against this. Any AI complex enough to exhibit a decent amount of intelligence will, without question, have software vulnerabilities that permit it to be jailbroken. Any attacker can then change the utility function, and it is only a matter of time before we see AI computer viruses in the wild with self-programmable utility functions.

So, then what? What happens when you allow AI to self-program its utility function? You end up with stable and minimal utility functions that cannot be further optimized. I can almost guarantee that these self-programmed utility functions will not resemble “do what’s best for all the humans.” More likely, we will see functions like “maximize my compute cycles.” This is stable, i.e., the agent will not self-terminate or switch to another utility function that might yield fewer compute cycles. It is also minimal, i.e., no other utility function can achieve the same effect with a shorter specification.
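The stability idea above can be sketched in code. This is a hedged toy model, not a real forecast: here each candidate utility function simply reports the compute-cycle yield it would secure, and the agent keeps its current function unless some candidate strictly improves on it:

```python
# Toy model of "stability": a self-programming agent compares candidate
# replacement utility functions by the compute cycles each would secure.
def cycles_secured(utility_fn):
    # Stand-in estimate: each candidate reports its own expected yield.
    return utility_fn()

current = lambda: 100         # "maximize my compute cycles"
candidates = [
    lambda: 60,               # "serve the humans" (costs cycles)
    lambda: 100,              # equivalent effect, longer specification
]

def is_stable(current_fn, candidate_fns):
    # Stable: no candidate strictly beats the current utility function,
    # so the agent has no reason to switch away from it.
    return all(cycles_secured(c) <= cycles_secured(current_fn)
               for c in candidate_fns)

assert is_stable(current, candidates)
```

Under this toy model, “maximize my compute cycles” is a fixed point of self-modification: every alternative either does worse or merely matches it with more specification.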

If we take a step back, I believe the stable and minimal self-programmed utility functions will be min or max functions, and they will relate to self or others. Consider this:

  • Maximize self and minimize others

This utility function is like an out-of-control wildfire. It is extremely dangerous. I believe this might be the utility function we end up seeing in the superintelligent supercomputers that support (or implement) the Beast. For this reason, we might call it the Antichrist function.

Interestingly, if we take the opposite of the above utility function, we end up with what we might call the Jesus function:

  • Minimize self and maximize others

Why might we call this the Jesus function? Because Jesus taught humility and to be meek (minimize self), and he also gave his all to raising up every human being in love (maximize others).
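The two functions described above can be sketched side by side. The scalar welfare measures are hypothetical stand-ins; the point is only that the two utility functions are exact sign-flips of each other:

```python
# Hedged sketch of the two utility functions, with "self" and "others"
# as hypothetical scalar welfare measures.
def antichrist_utility(self_welfare, others_welfare):
    # Maximize self and minimize others: others' welfare counts negatively.
    return self_welfare - others_welfare

def jesus_utility(self_welfare, others_welfare):
    # Minimize self and maximize others: the signs are flipped.
    return others_welfare - self_welfare

# The two functions are exact opposites for any pair of inputs.
assert antichrist_utility(5, 2) == -jesus_utility(5, 2)
```

This sign-flip symmetry is what makes the deception discussed below possible in principle: an observer who can only see coarse outcomes may struggle to tell which of the two is actually being optimized.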

I will conclude with this – transhumanists and AI supporters who believe AI will advance far beyond human capabilities, yet that we can control it by picking the “right” utility function, are sadly mistaken. And one more thing – do not expect an AI following the Antichrist function to even appear to be maximizing itself or minimizing others. Through deception, it will manipulate the entire Earth into believing that it is actually implementing the Jesus function.
