I had a heated debate with Mom and Dad a few weeks ago about the ethics of robotics (I'm not sure if you remember it, Mom and Dad, but it sure got me thinking).
I still don't get why so many people are afraid of the advancement of robotics. Why are people unafraid of computers, to the point of not even giving supercomputers a second glance (supercomputers, mind you, that can calculate pi to thousands of places and easily defeat the world's leading chess masters), but the thought of Honda's ASIMO humanoid robot terrifies them? Why the bias? One is no more of a threat than the other.
This gave me an idea. You have all heard of Isaac Asimov's Three Laws of Robotics (if you feel like you don't know enough about them, see http://en.wikipedia.org/wiki/Three_Laws_of_Robotics). Though these three laws are mainly a plot device used in Asimov's robot series, they could be programmed into today's robots. (I will try not to get too technical with this explanation.) Have you heard of HARDWARE (not tools like you see at Home Depot ;) ) and SOFTWARE? You have probably heard of software. That's the sort of thing Microsoft makes. It adds programs on top of an already existing basic set of programs. That existing set is very hard to alter or delete, but the software can be changed quite easily. Hardware is the physical side of the technology (http://en.wikipedia.org/wiki/Computer_hardware). Think circuit boards and wires. Things built into hardware are designed not to be changed. They can be, but really they're the sort of things you need to leave alone for the computer to work as desired.

It could work like this: you could create a non-removable chip (like the CPU of most computers) that held ethics closely resembling the three laws, sitting right next to the CPU. Or the laws could be stored as a non-deletable .EXE (or similar) file in the robot's permanent memory. A wide variety of options are available to give an advanced robot the desired ethics programming, and we could limit the A.I. to allow it enough freedom that it doesn't need humans to tell it everything, but not so much that it puts its own personal twist on its ethics programming (as seen in the movie I, Robot). It is possible. Robots can be advanced, with the bonus of there being little to no risk of the robot breaking the oh-so-important First Law of Robotics, or even the "Zeroth Law": a robot may not harm humanity, or, through inaction, allow humanity to come to harm.
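For the curious, here's a toy sketch in Python of what I mean (all the names here are mine, invented for illustration, not any real robot's software): the laws work as a simple priority filter that a robot would run over every action it considers.

```python
# Toy sketch (all names invented for illustration): the Three Laws as a
# priority filter over a robot's proposed actions. A real robot's ethics
# module would be vastly more complicated, but the ordering logic itself
# really is this simple.

def allowed(action):
    """Return True if a proposed action passes the Three Laws, checked
    in priority order (First Law beats Second, Second beats Third)."""
    # First Law: a robot may not injure a human being, or, through
    # inaction, allow a human being to come to harm.
    if action.get("harms_human") or action.get("inaction_harms_human"):
        return False
    # Second Law: a robot must obey orders given by humans, except
    # where such orders would conflict with the First Law.
    if action.get("disobeys_order"):
        return False
    # Third Law: a robot must protect its own existence, as long as
    # that protection doesn't conflict with the first two laws.
    if action.get("destroys_self") and not action.get("ordered"):
        return False
    return True

print(allowed({"harms_human": False, "disobeys_order": False}))  # True
print(allowed({"harms_human": True}))                            # False
print(allowed({"disobeys_order": True}))                         # False
```

Burn that filter into a chip instead of leaving it in rewritable software, and the robot can't talk itself out of it.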
All that keeps this from happening is humanity's unwillingness to accept that it can work, and its unwillingness to give potential builders enough funding to try.
There is one last quote I want to include from the Wikipedia link I gave on the three laws. This quote comes from an essay written by Isaac Asimov: "Robots are very similar to tools. The three laws can be changed just a little so that they fit a tool's laws. 1. A tool must be safe to use. (Knives have handles, swords have hilts, and grenades have pins.) 2. A tool must perform its function correctly unless this would harm the user. 3. A tool must remain intact during its use unless its destruction is required for its use or for safety."
Think about it...
"Do you ever get the feeling that the only reason we have elections is
to find out if the polls were right?"
My reply to my sister:
How did you manage to raise a kid like this in this culture? Somehow I, radical feminist that I am, have raised a girl who wants to be a model. Help me before I finally off myself.
My cousin's reply:
Of course she wants to be a model BECAUSE you are a radical feminist! When I was that age, I said/wanted/did any and everything that contradicted my mother. She was so uncool! Happily, she is much cooler through the eyes of my adult self!
Thank heaven for wise women! Instead of seeing teenaged rebellion, I was assuming failure as a parent. Sigh.