On January 28 to 29, 2015, a seminar-workshop entitled “Robotics in Focus” was conducted at our school, Limay National High School. Teachers from the Math, Science, and TLE (Technology and Livelihood Education) Departments were the participants in the two-day event. They were divided into two groups so that they could be accommodated in the venue and, at the same time, the speaker could give enough attention to their questions.
The seminar-workshop had several objectives, and one of those was to discuss the background of robotics. From the discussion, I learned that there are three laws that robots have to follow. This set of rules was devised by the science fiction writer Isaac Asimov. The three laws are as follows: First, a robot may not injure a human being or, through inaction, allow a human being to come to harm. Second, a robot must obey the orders given to it by human beings except where such orders would conflict with the First Law. And lastly, a robot must protect its own existence as long as such protection does not conflict with the First or Second Law. After the discussion, the speaker asked, “Is it the robot who will follow or obey the rules?” And then I suddenly realized that robots do not have minds of their own; they cannot move or feel on their own. So it is the person behind the robot who must follow the rules.
All of a sudden, questions started to bombard my mind. What if the person building or developing a robot has evil motives? What if a scientist or developer programs the robot to kill? What if the artificial intelligence (AI) installed in robots allows them to create other robots? Would it be the end of the world? Is it possible that a “Terminator” could exist, just like in the movie? Would the three laws of robotics protect us from the dangers and threats brought about by human intelligence?
Now, after several decades, we are moving closer to the day when we will have robots, or more accurately the artificial intelligence that runs them, that are versatile and flexible enough to choose different courses of behavior. Indeed, it may only be a matter of time before machine intelligence surpasses human capacities in all the ways imaginable, including power, speed, and even physical reach. Alarmingly, the margin of error will be remarkably small. If a super artificial intelligence is poorly programmed or vague about human needs, it could lead to a catastrophe. We need to ensure that AI is safe if we are to survive its advent.
By: Mrs. Cristina C. Samaniego | Teacher III | Limay National High School | Limay, Bataan