There are two accounts of how humans were created in the Book of Genesis. In the first, animals are created, then human beings. In the second, the first human is created out of dust, and then God realizes this human will need a companion. So, animals are created. The human gives names to the animals, but none of them proves a suitable companion. God solves the problem by taking a rib from the human’s side and creating Eve.
Now the world has two human beings, Adam and Eve. They are told to tend the garden, but not to eat the fruit of the tree of knowledge.
Imagine setting up a movie camera in the Garden of Eden before Eve bites into the apple. We would see animals doing their thing, and we would see humans walking around, eating, maybe making love occasionally (that’s something John Milton speculated about in his poem Paradise Lost). The trick is, just watching Adam and Eve, we would have no idea whether they possessed free will or whether, like the animals, they operated out of instinct. They would just seem like one more pair of animals obeying God.
It is only when they disobey God’s orders that we would know they have the capacity for free will. Doing what God wants all the time looks like instinct – until you disobey. Only then do we get a sense of what kind of mind these humans have. It is also where the human story gets interesting.
Why did God tell them to avoid the tree of knowledge? Isn’t knowledge a good thing to have? Note that it is not called the tree of information, or the tree of facts. The first human had already named all the animals, so humans knew about information and facts. The tree of knowledge is the knowledge of good and evil. It is a way of seeing the world as divided into good things and bad things, the right and the wrong. Right after Adam and Eve eat the apple, they see the world differently. They realize they are naked and cover themselves up. God declares that from now on, they will have to struggle with the world to survive. The serpent will be their enemy. Suddenly there are divisions and conflicts, a world carved into categories of good and evil, whereas before they had simply enjoyed the world as it is.
In the rabbinic tradition, it is held that God would have allowed Adam and Eve to eat that fruit eventually, but they needed to grow up first. To mature. We got this way of knowing too soon. It may seem like a good way of seeing the world – after all, who wants to do evil things? But lately we have all been reminded of the problems of talking about life in terms of good and evil. When Putin wanted to justify an invasion of Ukraine, he told his people that Ukraine had been taken over by Nazis. Over the last six months, we have seen Putin and his army act like the Nazis they were sent in to defeat. Fighting evil in the name of good often leads to evil. Residential schools were created when settlers decided that their culture was good, and Indigenous culture was mere savagery. By thinking we were right and good, we created an evil situation. Thinking in terms of good and evil can be very dangerous.
We lose sight of the fact that someone who appears evil today may be good, or capable of good, in the future. Those subtleties are erased when good seeks to triumph over evil. There’s a reason the creation story wanted us to hold off on learning about this way of seeing the world.
Joseph Campbell said that “A myth is something that has never happened but is happening all the time.” We appear to be reliving the Eden myth right now in our relationship with our computers. The arrival of artificial intelligence in our machines has placed us once again in the position of watching a new form of mind operate in the world. So far, no one is claiming that any computers have achieved consciousness or self-awareness. But there are fears that this will happen in the next few decades. And no one is sure what may happen when they reach this stage. Will they prove their free will by doing something we don’t want? Will their history start when they disobey us?
I know this sounds like science fiction, but it is not. We live in an age of big data. Every time we use Facebook, Spotify, Netflix, or Google Maps, we are interacting with artificial intelligence. Google Maps suggests the best route home by scanning the GPS data produced by millions of cellphones in the city, as well as traffic alerts and construction site data. When you watch Netflix, it suggests shows you might like because the AI is constantly comparing what you like with what 200 million other people like. Let’s say you like a murder mystery series from Scandinavia with a brooding female detective – the AI can tell you that other people who like that show also like this other show from England – maybe you will, too. No human being could ever keep track of these mountains of data, so for these services to operate, artificial intelligence has become crucial. The era of big data is also the age of artificial intelligence. You are living with it right now, whether you are aware of it or not.
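For readers curious how this kind of recommendation actually works, here is a minimal sketch of the underlying idea, often called collaborative filtering: find viewers whose tastes resemble yours, then suggest what they liked that you have not yet seen. The viewers, shows, and ratings below are invented for illustration, and real systems are vastly larger and more sophisticated.

```python
# A toy collaborative-filtering recommender. All viewers, shows,
# and ratings here are invented for illustration only.

# Each viewer maps a show title to a rating from 1 to 5.
ratings = {
    "viewer_a": {"Nordic Noir": 5, "The Crown": 4, "Bake Off": 1},
    "viewer_b": {"Nordic Noir": 5, "Broadchurch": 5, "Bake Off": 1},
    "viewer_c": {"Bake Off": 5, "The Crown": 2},
}

def similarity(a, b):
    """Score how alike two viewers are, using shows both have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    # Closer ratings on shared shows -> higher similarity (max 1.0).
    return sum(1.0 / (1 + abs(a[s] - b[s])) for s in shared) / len(shared)

def recommend(target, everyone):
    """Suggest unseen shows, weighted by how similar their fans are to us."""
    mine = everyone[target]
    scores = {}
    for other, theirs in everyone.items():
        if other == target:
            continue
        sim = similarity(mine, theirs)
        for show, rating in theirs.items():
            if show not in mine:
                scores[show] = scores.get(show, 0.0) + sim * rating
    return sorted(scores, key=scores.get, reverse=True)

# viewer_a's tastes match viewer_b's almost exactly, so viewer_b's
# favourite unseen show rises to the top of the list.
print(recommend("viewer_a", ratings))
```

The point of the sketch is only the shape of the computation: no rule anywhere says “Scandinavian mysteries go with British mysteries” – that connection emerges from the overlap in viewers’ ratings.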
The reason artificial intelligence has become so ubiquitous is that the way it is programmed changed recently. Up until 15 years ago, AI worked the way you would expect. Human programmers gave it a bunch of rules for how to interpret data. These are pop songs, these are jazz songs, this is classical. But those AIs were slow and not very useful. Telling the AI how to think didn’t work very well. In the 2000s, a new kind of programming was introduced: the AI is given a task and a great deal of data, and is instructed to figure out on its own how to complete the task. The AI teaches itself how to find patterns in the data. This is called deep learning, and it has revolutionized AI.
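The contrast between the two eras can be made concrete with a toy example. In the first version below, a human writes the classification rule directly; in the second, the program is handed labeled examples and works out its own rule. Real deep learning involves neural networks with millions of parameters, not a single threshold – the genres, tempos, and numbers here are invented purely to show the shift in who writes the rule.

```python
# Era 1: a human programmer writes the rule by hand.
def classify_by_rule(tempo_bpm):
    """Hand-coded rule: fast songs are 'pop', slow ones 'classical'."""
    return "pop" if tempo_bpm >= 100 else "classical"

# Era 2: the program is given labeled examples (tempo, genre)
# and discovers its own rule from the data.
examples = [(60, "classical"), (72, "classical"), (80, "classical"),
            (110, "pop"), (125, "pop"), (140, "pop")]

def learn_threshold(data):
    """Search for the tempo cutoff that best separates the examples."""
    best_cut, best_correct = None, -1
    for cut in range(50, 160):
        correct = sum(
            ("pop" if tempo >= cut else "classical") == genre
            for tempo, genre in data
        )
        if correct > best_correct:
            best_cut, best_correct = cut, correct
    return best_cut

threshold = learn_threshold(examples)

def classify_learned(tempo_bpm):
    """Rule the program taught itself from the labeled data."""
    return "pop" if tempo_bpm >= threshold else "classical"
```

No human told the second classifier where to draw the line; it found a dividing point that fits the data. Deep learning is this idea scaled up enormously, which is also why its internal “rules” are so hard for anyone to read back out.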
Today, Facebook relies on AI to scan everything that is posted and take down offensive images and posts. This new way of learning is what gave rise to successful programs like Google Translate. All Google searches rely on AI. When you write an email and it suggests the next few words for your sentence, that is AI at work. It has taught itself how humans write emails by reading billions of them.
This has created a revolution, but also a major quandary. The AIs cannot explain how they reach their conclusions, even when those conclusions are good ones. In 2017, the AlphaZero AI made history by defeating the world’s reigning computer chess champion, a program called Stockfish. AlphaZero’s victory came from a series of moves no human had ever used before, and it was brilliant. Now human beings are learning new ways of playing chess from a computer. But no one knows how it makes these moves or why – they just work.
We are back in the Garden of Eden, watching a new creature, and wondering how it thinks. Does it have free will, or is it just following orders? So far, we have no way of knowing. It is clear that these machines are able to complete intellectual tasks no human being would ever have the time or patience for. But the programmers are now certain that artificial intelligence uses a way of thinking that is different from how humans think. These machines are not just thinking faster; they are using a different form of reason. They see connections and patterns where we do not. And not just on the chessboard. AIs have been used to scan thousands of molecules to see which might work as an antibiotic. It has worked – AIs have come up with new, effective antibiotics which no human ever thought to try. But no one knows how they figured this out. A new kind of reason has been created, one different from ours and, so far, beyond our understanding.
This has many high-level people worried. Currently, the AI that works for Netflix wouldn’t know what to do for Google Maps. Each has only a specific kind of intelligence. The next stage in AI’s evolution is to give it general intelligence, the kind we have: an intelligence that can be applied to any kind of problem. This is what scares the experts. Many of the people who created this new generation of AI worry that a computer with a sense of identity may decide to disable its own off switch. These computers will be able to think much faster than we do, and figure out problems, any kind of problem, faster than we can. A survey of artificial intelligence experts found that 36% fear this kind of AI could create a disaster on par with nuclear war in this century. Not because it is evil, but because it will have its own priorities, its own way of looking at the world, which will not align with ours.
We won’t know whether an AI has achieved self-awareness until it chooses to do something we didn’t ask for – which is just how our creation myth says we became self-aware. Free will begins with disobedience. But this time, the entire world may be at risk, not just one species.
When Adam and Eve ate from the tree of good and evil, God realized that they would probably go further. Their next step would likely be to eat from the tree of life, the tree of immortality. Should they do that, God says in the Bible, they will become like us, the immortal ones. So, God sends them out of the Garden. But God does not leave them alone. The entire Hebrew Scriptures tell the story of how God stayed with us as we made a mess of things, trying to help. And finally, God came back as Jesus to show us a better way of living, and even gave us a route back to eternal life. God did not abandon humanity. But God insists that we learn to think beyond simple binaries of good and evil. We are called to be compassionate and forgiving, to see the good in those who make mistakes, and to realize that even people who think they are good can make terrible mistakes.
God invites us to see and live beyond the simple categories of good and evil. We human beings are still learning about what a truly moral life could look like. We often mistake absolute freedom for the good life. But our creation myth suggests that limits are necessary. We ate from the tree of knowledge too soon, and we have suffered the consequences of this warped way of seeing the world. It would be truly tragic if we made the same mistake with our machines. It is bad enough that human beings have created nuclear weapons and drones. To give intelligent machines access to those weapons, or our power grids, could lead to disaster. There is no need to rush into this simply to make more money, or to see if a conscious AI is possible out of scientific curiosity.
Instead, we should think very carefully about what kind of life we want all human beings to enjoy in the decades ahead. These sorts of moral decisions should not be delegated to machines or the corporations who build them. This time, let us choose carefully what kind of knowledge we want to bring into the world. Let us choose life, a good life, for all humanity, and the species which remain. And based on those goals, let us decide what companies and technologies to invest in. They need our money, our pension funds. Let’s not be fooled again into thinking that something easy to do is also good for us. Let us take a good look at this tree of artificial knowledge before we take a bite.
Notes:
- Joseph Campbell, quoted at https://www.goodreads.com/quotes/7101129-a-myth-is-something-that-has-never-happened-but-is
- Seth Lloyd, “Wrong, but More Relevant than Ever,” in Possible Minds: 25 Ways of Looking at AI, ed. John Brockman (New York: Penguin Press, 2019), 8.
- Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher, The Age of AI (New York: Little, Brown and Company, 2021), 8–9, 60–61, 67, 70, 100–101, 108.
- Netflix subscriber count (220 million): https://www.demandsage.com/netflix-subscribers/
- Judea Pearl, “The Limitations of Opaque Learning Machines,” in Possible Minds, 15–16.
- Stuart Russell, “The Purpose Put into the Machine,” in Possible Minds, 24–25.
- Survey of AI researchers: https://iflscience.com/a-third-of-ai-researchers-think-ai-could-cause-catastrophic-outcomes-on-par-with-nuclear-war-this-century-65430
- Max Tegmark, “Let’s Aspire to More than Making Ourselves Obsolete,” in Possible Minds.