kw: ai, simulated intelligence, philosophical musings, robots, robotics
I saw the movie The Day the Earth Stood Still in the late 1950s, at about the age of ten. I was particularly interested in Gort, the robot caretaker of the alien Klaatu. [Spoiler alert] At the climax, Klaatu, dying, tells Helen, a woman who has befriended him, to go to Gort and say, "Gort, Klaatu barada nikto." She does, just as the robot frees itself from the glass enclosure the army has built around it. Gort retrieves Klaatu's body and revives him, temporarily, so that he can deliver his final message to Earth.

(Image generated by Gemini.)

As I understood it, every citizen of Klaatu's planet has a robot caretaker and defender like Gort. These defenders are the permanent peacekeepers.
Years later I found the small book Farewell to the Master, by Harry Bates, on which the movie is based. Here the robot's name is Gnut, and it is described as looking like a very muscular man with green, metallic skin. After Klaatu is killed, Gnut speaks to the narrator and enlists his help in finding the most accurate phonograph available, so that he can use recordings of Klaatu's voice to restore him to life, at least for a while. In a twist at the end, we learn that Gnut is the Master and Klaatu the servant, an assistant chosen to interact with the people of Earth.

(Image generated by DALL·E 3.)

I want a Gort. I don't want a Gnut.
Much of the recent hype about AI is about creating a god. I don't care how "intelligent" a machine becomes; I don't want it to be my god, I want to be god to it. I want it to serve me, to do things for me, and to defend me if needed. I want it to be even better than Gort: not to intervene after shots are fired, but to anticipate the shooting and avoid or prevent it.
Let's remember the Three Laws of Robotics, as formulated by Isaac Asimov:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm;
- A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law;
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
In later stories Asimov added a "Zeroth Law": a robot may not harm humanity, or, by inaction, allow humanity to come to harm. Presumably this may require harming certain individual humans...or at least frustrating them!
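For the programmers among us, the Laws amount to a strict precedence ordering, and that ordering can be sketched in a few lines. The sketch below is purely illustrative (every name in it is invented, and no real machine carries such tidy flags), but it shows how the hierarchy resolves conflicts: a lower Law yields whenever it collides with a higher one.

```python
# Illustrative only: Asimov's Laws as a lexicographic preference
# ordering over candidate actions. Action, its violation flags, and
# choose_action are invented for this sketch; this is not any real
# robotics or AI-safety API.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    harms_humanity: bool   # would violate the Zeroth Law
    harms_a_human: bool    # would violate the First Law
    disobeys_order: bool   # would violate the Second Law
    endangers_self: bool   # would violate the Third Law


def choose_action(candidates: list[Action]) -> Action:
    """Pick the action whose violations are least severe, compared
    lexicographically: the Zeroth Law outranks the First, the First
    outranks the Second, the Second outranks the Third. A lower Law
    is broken only when every alternative breaks a higher one."""
    return min(
        candidates,
        key=lambda a: (a.harms_humanity, a.harms_a_human,
                       a.disobeys_order, a.endangers_self),
    )


if __name__ == "__main__":
    options = [
        Action("stand by", harms_humanity=False, harms_a_human=True,
               disobeys_order=False, endangers_self=False),
        Action("intervene", harms_humanity=False, harms_a_human=False,
               disobeys_order=True, endangers_self=True),
    ]
    # Prints "intervene": disobeying an order (Second Law) is preferred
    # to allowing a human to come to harm (First Law).
    print(choose_action(options).name)
```

Run it and the robot intervenes: once a human is at risk, disobeying an order and risking itself is the least bad option the hierarchy allows.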
Asimov carefully avoided using the word "good" in his Laws. Who defines what is good? The current debate, not nearly public enough, over the incursion of Sharia law into parts of American society makes the difficulty clear. What Islam defines as Good, I would define as Evil. And, I suppose, vice versa. (I am a little sad to report that I have had to cut off contact with certain former friends, so that I can honestly say that I have no antisemitic friends.)
Do we want the titans of technology to define Good for us? Dare we allow that? Nearly every one of them is corrupt!
I may in the future engage the question of how Good is to be defined. My voice will be but a whisper in the storm that surrounds us. But this aspect of practical philosophy is much too important to be left to the philosophers.

