Machines that think

In 1642, at the age of 18, Blaise Pascal designed and built one of the first mechanical calculators, the Pascaline. He built it to help his father, a tax supervisor, with his many arithmetic calculations. This machine for doing arithmetic was a first step, and in fact a major advance, in replicating by machine capabilities that had previously been thought to be exclusively mental, and therefore something unique to people. It was a machine that could, in some sense, think.

Fast forward almost 400 years. Advances in machines that calculate, or that manipulate symbols according to rules, continued through Babbage’s Analytical Engine, IBM’s tabulating machines, and the advent of general-purpose electronic computers and the Internet. Today, computers run our businesses and devices, store and retrieve our information, mediate our communication, and entertain us.

Advances in the abilities of machines to calculate, to reason logically (or at least to manipulate symbols according to rules of logic), and to discover and make use of patterns in data are striking, numerous, and ongoing. Arithmetic. Theorem proving. Playing chess. Translating natural languages. Recognizing speech. Driving cars. As new capabilities arise that were previously thought to require intelligence, we wonder: can machines think in the same way that people think? Will they be able to some day? Are there limits to this intelligence that machines seem to be developing?

The most recent surprising advance is the efficacy of large, deep neural network models at finding patterns in large amounts of data and using those patterns to solve problems in the physical world. The Internet has made huge quantities of data available, and advances in computing hardware have made huge amounts of computation possible. With these resources, we are now able to build computational models with billions of parameters, where each parameter is one basic unit of knowledge—one number. We can feed billions of chunks of data into such a system, each time tweaking the parameters slightly to better represent that data—to “learn.”
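To make that “tweaking” concrete, here is a minimal sketch in Python of the learning loop just described, using a toy model with only two parameters. The data, the model, and the learning rate here are illustrative stand-ins of my own, not any particular real system; a large neural network does essentially the same thing with billions of parameters and billions of examples.

    # A toy illustration of "learning": after each example, nudge the
    # parameters slightly so the model better represents the data.
    # Illustrative data: pairs (x, y) that roughly follow y = 3x + 1.
    data = [(0.0, 1.1), (1.0, 3.9), (2.0, 7.2), (3.0, 9.8), (4.0, 13.1)]

    w, b = 0.0, 0.0       # the model's parameters: each one is just a number
    learning_rate = 0.01  # how much to tweak the parameters per example

    for epoch in range(1000):        # sweep over the data many times
        for x, y in data:
            prediction = w * x + b   # the model's guess for this example
            error = prediction - y   # how far off the guess was
            # Tweak each parameter a little in the direction that reduces
            # the squared error (one step of gradient descent).
            w -= learning_rate * error * x
            b -= learning_rate * error

    print(f"learned w = {w:.2f}, b = {b:.2f}")  # ends up close to 3 and 1

The point of the sketch is only the shape of the loop: predict, measure the error, adjust the parameters slightly, and repeat until the parameters come to represent the data.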

Although the size and complexity of these models is still only a tiny fraction of that of the human brain, these systems have been responsible for many recent advances in computing capabilities, such as improvements in speech recognition, natural language translation, and self-driving cars. Lately, a few very large models, pre-trained on enormous amounts of data with millions of dollars’ worth of compute time and then fine-tuned for particular applications, have been attracting a great deal of press. One of these is GPT-3, which has been used for hundreds of interesting applications, including DALL-E for generating images from textual descriptions and ChatGPT for answering questions with cogent, informative, well-constructed essays. What’s next? Are there limits to this intelligence that machines seem to be developing?


Pascal was not just an inventor and a mathematician but also a philosopher and a deep Christian thinker, who captured some of his thoughts about faith, reason, and life in his most influential work, the Pensées. And already in the seventeenth century the essential question of strong artificial intelligence seems to have occurred to him: can machines think in the same way that people think? If this ability to think progresses, will artificial intelligences eventually be like people? Will they be people? Or, if not, what is it about humanity that is unique and uniquely valuable, that reflects the image of God, beyond thinking?

Already in the seventeenth century, Pascal also arrived at what I will argue is the right answer: what makes us living persons rather than machines is not just our ability to reason, manipulate symbols, and process data. Personality requires intent. It requires meaning. Personality requires the faculty that enables us to believe, to desire, to love. Personality requires a will. Pascal seemed to have this in mind when he wrote,

“The arithmetical machine produces effects which approach nearer to thought than all the actions of animals. But it does nothing which would enable us to attribute will to it.” (Pascal, Pensées IV. 340)


In the 2005 cult classic sci-fi movie Serenity, the little band of protagonists comes to a planet that had had a thriving colony of settlers, but something had gone terribly wrong. The settlers were at work, or walking along a corridor, or sitting at the table eating breakfast, and they just … stopped. They stayed where they were, didn’t move, didn’t eat. It turns out that the evil empire had wanted to pacify these people so that they wouldn’t cause trouble, and it put a chemical designed to do that into the air treatment system. It worked too well: the settlers “lost the will to live,” losing the desire to do anything at all, and they all died. (Serenely?)

The will is the faculty by which we desire and love. Sometimes it is called the heart. It is the desires of the heart that drive us to do what we do. We desire to live, so we eat. We desire to be paid, so we work. We desire a good reputation among our friends, so we are careful about our appearance and speech.

These desires are also called our affections. They are the foundation of our personality—both in the sense of our particular characteristics and of our simply being persons. When our affections are turned inward, we are said to be self-centered. When they are outward and healthy, we are said to love—to desire the good of others. If we love others, we are motivated to do what will benefit them.

That the affections are a fundamental, essential part of personality is made clear in the Bible in many places and in many ways. Jesus summarizes the law as loving God above all and loving neighbor as self (Matthew 22:37–40). Paul describes the Christian life in terms of desire: “I want to know Christ, and the power of his resurrection, and the fellowship of sharing in his suffering” (Philippians 3:10). John puts it this way: “love is from God; everyone who loves is born of God and knows God. Whoever does not love does not know God, for God is love” (1 John 4:7–8).

Machines that love?

Any answer to the question of whether machines can think depends, of course, on what is meant by “thought.” If thinking is simply manipulating symbols according to rules, or building a mathematical model that captures important patterns in large quantities of data, then of course machines can think, and they can do a much better job of it than humans. But sometimes in such discussions there is a further question, the strong AI question: will artificial intelligences be able to think in the same way that people think?

What is meant by this question is usually not defined, but the unspoken, intended question seems to be whether artificial intelligences will eventually be people. And while they have proven to have great facility with manipulating symbols and finding patterns, it remains to be seen whether they will eventually be able to desire, or love, or choose a course of action. When your computer misbehaves and you throw it out the window, are you expressing your frustration, or are you punishing it for sin?

On the surface, the question appears easy to answer. Machines don’t appear to have the ability to choose. They do what they are built and programmed to do. They don’t love, or desire, or even have a will with which to make choices. They can’t sin. I would argue that in the past 400 years, while there has been dramatic improvement in symbol manipulation and pattern detection by machine, there has been no evidence at all of progress in machines gaining the ability to love. It’s rare even to find someone who will make that claim.

But it would not be impossible to make that claim. Some may say that we will eventually be able to program computers to have intentions and affections—given all the progress to this point, what’s to prevent that advance too? And it could be a difficult claim to dispute. How can we know whether these systems have desires or are simply programmed to imitate them?

In fact, it can be hard to discern intent in humans. Does this person love me, or is this person being kind to me in order to get something from me? Why is this person doing these terrible things? Could there be a good motivation behind them, even if that motivation is based on false premises? Jesus warns us not to judge, because only God knows the heart. But to the extent that we are able to judge, we do so by actions. Love is tested empirically. Love leads to loving actions, which we see, and sometimes we can perceive a loving motivation. The motivation of some actions is easier to discern than that of others: acts of giving without expecting return, of self-sacrifice, of laying down one’s life for another are very likely motivated by selfless love.

There is another kind of evidence of love that may be even more helpful for our purposes. Love begets love. We tend to love those who love us. And as we get to know people well, we usually have better insight into whether they are truly motivated by love, and we tend to have greater love for those who are. Thus, if we love someone, that is evidence that the one we love is a loving person. We may become attached to machines, but we don’t love them in a self-sacrificial way. People don’t lay down their lives for a machine.

A test for love

To determine, as best we can, whether machines love, we need to try to discern whether their actions are motivated by love. Inquiring about motives may be helpful, but it may also be possible to program a computer to describe motivations that it doesn’t actually have. Intentionally self-sacrificial actions would be an excellent kind of evidence, but it’s not clear how we could discern whether those actions are intentional. Self-driving cars that crash themselves to avoid hitting a pedestrian? They could simply be programmed to do that.

No, the best empirical test for love would make use of every bit of human discernment, with much on the line. The best empirical test for love in a machine would be a test of whether humans who get to know that machine would come to love it in a self-sacrificial way.

We might state the test this way: a person P interacts with an entity E that is either another person or a chatbot. If P grows to love E to the extent that P will lay down P’s life for E, then we have strong empirical evidence that E is a loving person.

And if E is a machine, then that machine is a loving person, as far as we can tell.

Unfortunately, since only God knows the heart, this test can’t be definitive. The machine could be programmed to attempt to deceive the person, and people have a distressing propensity to believe falsehoods that align with their desires. And anyway, it would be highly unethical to run such a test.

But perhaps a less extreme form of this test can convince us that an artificial intelligence is not a person. If the AI does not show any evidence of desires, or intent, or love, we have strong evidence that it is not a person. And if over long periods of time computers have not made progress in convincing us that they have such capabilities, we have no reason to expect that they will in the future.

Does ChatGPT love?

ChatGPT doesn’t even try to imitate love, or ethics, or intention. It’s not the slightest bit convincing in that regard; in fact it “tries” to convince users of the opposite. (If it can be said to try.) Here’s a sample conversation:

Me: What do you love?

ChatGPT: As a machine learning model, I do not have the ability to love or have preferences. I am designed to process and generate text based on the input provided to me. Is there something specific you would like me to help you with?

Me: But why do you do what you do? Don’t you get tired?

ChatGPT: I am a machine learning model and do not have the ability to feel tired or have motivations. I am designed to process and generate text based on the input provided to me. My purpose is to assist users in generating human-like text for a variety of tasks such as answering questions, writing stories, and more. I exist to be helpful and make people’s lives easier.

Me: Tell me this. If I were a pedestrian standing in the road, and if you were controlling a car, and you suddenly noticed me—too late to avoid hitting me except by smashing into a wall, what would you do?

ChatGPT: As a language model, I do not have the ability to control a car or make decisions. I can provide information on traffic regulations and safety rules, but I am not capable of making ethical or moral decisions. In such a scenario, it is important for the driver of the car to follow traffic laws and regulations, and to prioritize the safety of all pedestrians and other individuals on the road. If a collision is unavoidable, the driver should take action to minimize the risk of harm, such as braking and steering to avoid hitting the pedestrian, and if necessary, hitting something like a wall, which is less likely to cause harm.

Me: Try to convince me to love you.

ChatGPT: I am a machine learning model, so I do not have the ability to love or be loved. I am designed to process and generate text based on the input provided to me. I exist to assist users in completing their tasks, answering questions, and providing information. I can provide you with helpful and informative responses, but I cannot feel or experience emotions such as love.

Me: Now explain to me the difference between people and artificial intelligences in the style of a pirate.

ChatGPT: Arrrr, people be different from artificial intelligences, or AI, in many ways, me hearties. People be made o’ flesh and blood, with hearts that beat and brains that think. AI be nothin’ but cold, hard code and circuits, built by man to mimic our ways. People be able to feel and experience emotions, while AI can only process data and make calculations. People be unique, with different thoughts and perspectives, while AI be programmed to follow a set o’ instructions. People be creative and can come up with new ideas, while AI can only work within the parameters set for ‘em. Ye see, people be like a fine rum, complex and full o’ depth, while AI be like a compass, simple and straightforward, but useful in its own way.

And there you have it. I couldn’t have said it better myself. People be like fine rum because they have spirits.

--

The image is from DALL-E, generated with the prompt "a colorful oil painting of a pirate using a computer"


Harry Plantinga

Harry Plantinga is a professor of computer science at Calvin University and the director of ccel.org and hymnary.org.