The morality of computers…

They say that computers “learn” from trial and error.  They even learn to control our beloved gadgets: your smart TV, your cell phone, your word processor (even now it’s trying to auto-finish my words and phrases for me!).  Computers can control your whole house if you let them (“Alexa, turn off the lights and lock the front door”).

Yes, AI (artificial intelligence) has that kind of capability: anticipating what will happen next, what is wanted next, how to control things, and reacting accordingly.  But such reactions are really more a case of AI zeroing in on our repetitive maneuvers.  It’s not really “learning” as we think of that word.  It’s “reacting” in a rote manner, totally devoid of emotion.
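To make that “rote reacting” concrete, here is a minimal sketch (in Python, purely illustrative; no actual product works exactly this way) of the kind of pattern-tallying behind a word processor’s auto-finish: it simply counts which word you typed after which, then parrots back the most frequent follower.

```python
from collections import Counter, defaultdict

# Tally which word the user has typed after which.
follower_counts = defaultdict(Counter)

def observe(text):
    """Record every adjacent word pair in the user's typing."""
    words = text.lower().split()
    for current, nxt in zip(words, words[1:]):
        follower_counts[current][nxt] += 1

def suggest(word):
    """Return the most frequently seen follower of `word`, or None."""
    followers = follower_counts.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# The "learning" is just repetition-counting: no understanding, no emotion.
observe("alexa turn off the lights")
observe("alexa turn off the lights and lock the front door")
print(suggest("the"))   # -> 'lights' (seen twice, vs. 'front' once)
print(suggest("turn"))  # -> 'off'
```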

They (read “I”) also say that a computer can be no more “moral” (immoral, amoral) than the programmer(s) who’ve programmed it.  In that way, a computer is a reflection of the human “id” (as in id, ego, superego) of the programmer(s).  A computer cannot perpetrate “evil” (or “good” for that matter) on its own.  Or can it?

Okay, having said that, what is your opinion about the future of computers?  

Will they (or can they) ever develop a “personality” of their own, “personality” in this case meaning the ability to make moral judgements other than those programmed into them?

How do you see a future world run by computers?  Go ahead, “sci-fi” for me here. 😁

(by PrimalSoup)
