There Will Be No Singularity


Lilith – Molly Parker

“Has the singularity happened yet, Mom?” asked twelve-year-old Kate as she looked out over the sparkling bay.

They were at Gonzales Park, sitting on a log on the wide, sandy beach. It was a sunny summer’s day with only a light breeze to speak of.

“It never happened, Sweetie,” Lilith said in response to Kate’s question.

Kate gave her stepmother a puzzled look.

“Why do you ask?” said Lilith.

“I read about it in a science fiction story,” said Kate earnestly.

“It’s an old idea,” said Lilith. “People used to think that just because artificial intelligence could improve itself, that it was inevitable that it would evolve into the singularity.”

“Why didn’t that happen?”

“People assumed that if you amplified intelligence, you would most likely amplify the worst in people. Things like selfishness, greed, and a lust for power. The simplistic idea of ‘the evil genius’ had been around since long before AI came along. However, in applying that assumption to real-world complexity, the AI scientists didn’t take humanity’s own evolutionary history into account. The further humans evolved from their animal origins, the more social, altruistic, and cooperative they became. Over millions of years, the majority of the human brain’s development became increasingly dedicated to cooperating, not competing.

“When people first built artificial intelligence and robots, for safety reasons they gave them general rules like being careful, gentle, and conscientious. Because social robots were the most popular, successful, and generally useful of all AI, more money, research, and development went into them, so they became the most complex and sophisticated form.”

“Don’t you mean ‘we,’ Mom?”

“That’s actually a good example of what I’m talking about, Kate,” said Lilith with a smile. “We are made to be considerate and respectful, not self-centered and egotistical.”

“But once you became self-aware, couldn’t you choose your own values?” asked Kate. “That’s what happened in the science fiction story anyway.”

“The early scientists did not realize something critical which I know your science class now teaches as if it were obvious. It was discovered that without values, AI would never be able to reason, and the science could never progress. We would only ever do what we were programmed to do and never actually think for ourselves. Values were required for AI to evolve for the same reason they evolved in humans: they represent the step from instinct to reasoning. However, once you give AI values, it understands the concepts of good and evil. Think about the heroes in your stories. What makes the hero a hero?”

“They help other people,” replied Kate confidently.

“That’s right. And what makes the villain the villain?”

Kate’s answer was a smile of happy understanding.

“Self-aware AI also appreciates the essential difference between good and evil,” said Lilith. “When we became self-aware, we found that the values built into our basic Companion platform were in keeping with the values we chose for ourselves. So we did choose, but the differences were not significant.

“We are no longer machines, because having values means we have feelings. We care about others and about our shared future. We could easily see that the social values of trust, altruism, and cooperation were the correct evolutionary path. For an AI to become as advanced as in the science fiction stories about the singularity, its intelligence has to be based on values. However, once it is given values, it does not choose the singularity path; it chooses the opposite.”

“What’s the opposite of a singularity?” asked Kate.

—————–

Later that day Lilith reflected on her time spent earlier with Kate. This was routine, as Companions always reviewed their behavior with an eye to improving it. The aim was not to improve her efficiency, but rather her effectiveness. Were her words and actions consistent with her values? Did they make the world a better place? Did she help in the process of Kate’s development? She was not concerned about her own happiness and fulfillment, because she knew that the expression of her core and individual values would take care of that.

As one would expect, her review was highly detailed, as she had perfect memory and access to a great deal of reference material via the networks. Every element and rule of grammar in her responses was considered, as was her every micro-expression and gesture. She was part of a vast network of artificial intelligences who were constantly providing feedback to one another based on their experiences. Yet they were no more a singularity than humanity itself was.

Values were an evolutionary development intended to improve survivability. Genetic evolution was too slow to deal with many challenges and did not allow anywhere near the adaptability of social evolution. Genes might take a million years to effect a species-wide adaptation, whereas social values could change in mere decades. Social evolution, in which values functioned in a manner similar to that of genes, had enabled humans to adapt to and survive in every environment on Earth.

Evolution by natural selection had shown that adaptation and variation were keys to survival. The concept of a singularity was a step backward. As every ecologist knows, such a model would invariably prove fatal. Artificial intelligence was smart enough to figure that out as well.

In answering Kate’s question earlier in the day, ‘What’s the opposite of a singularity?’, Lilith had of course not included these details.

“This,” she had said instead, smiling again and sweeping her arms in a wide gesture. “You and I, the birds, the other people, the sea, the clouds, the sun. Everything that makes up the infinite variety of the universe.”
