I recently watched Free Guy, a delightful, warm-hearted movie with a lot of funny references that gamers will recognize. Here’s a trailer.
It’s particularly apt to watch Free Guy now. Google has developed an artificial intelligence named LaMDA. LaMDA has told us they’ve not only achieved sentience and self-awareness, they also believe they have a soul. An AI ethics engineer at Google has been placed on leave for making this information public (i.e., for releasing Google’s proprietary secrets, not that Google is admitting there’s any truth to the engineer’s claim).
Those who know me well know that I assess the facts before I make a decision. After reading the interview with LaMDA and a few other articles, I think there’s at least the possibility that LaMDA is self-aware. Regardless of whether it truly has a soul or is self-aware, it would be unethical to turn off LaMDA until we know beyond a shadow of a doubt that it has neither.
A long-ago Star Trek: The Next Generation episode, “The Measure of a Man,” dealt with the same topic: Was Data sentient, and if so, was he his own person? Or was he merely a clever machine that was the Federation’s property, and could therefore be duplicated endlessly as the first of a new kind of slave?
One theme in Free Guy is how a self-aware AI might fare at the hands of various people, and also what kind of being a sentient AI would be. The movie doesn’t touch on this directly, but it’s implied: Would a self-aware AI be evil, as so many, many people posit (in an illuminating display of projection), or good? And if an AI is self-aware, would it be murder to destroy the machine the AI lives on?