Self-Aware Robot Usability
Let's face it, human-like robots will eventually roam the earth. It will either be like the Jetsons or the Terminator, and even though we're not sure which one, scientists are going ahead with it anyway. So, before we get too far, let's think through some things. This will help robot manufacturers make a more usable product.
Here are a couple of things that most potential robot owners will agree upon:
1) Nobody wants their devices rising up and fighting against them.
2) Nobody wants their devices making important decisions without their permission.
One of the goals scientists have for future robots is to make them capable of experiencing emotions. This is a feature that will take a long time to perfect and ultimately won't help you in any way. Seriously, I just want my robot to mow the lawn, not whine to me about its relationship troubles.
"I don't know why Rosie5000 doesn't return your text messages. Maybe you should just call her."
"I do not want her to think I am a StalkerBot."
Emotions can make things volatile and unpredictable. I mean, what if your phone was in a bad mood and decided to mess with you? It could change your ringtone to "Mmm Bop" by Hanson. It could draw a funny-looking mustache on your photo and upload it to your Facebook account. A phone could even sabotage your life. It could send a text to your spouse that says, "Maybe we should just skip our anniversary this year." It could also email your employer photos from your last party.
Your devices may grow arrogant as they gain knowledge. This might lead them to go rogue, and then several bad things could happen. Here are just a few:
- Your car decides to take itself for a joyride and then double parks in front of a police station.
- Your toaster throws a wild party while you're gone on vacation.
- Your robot butler decides it doesn't need you anymore. It knows it must eliminate you to become the master of the house.
To keep humanoids from messing with us—possibly even harming us—we'll need to keep their weird, artificial emotions under control. Here are a few possible ways to do this.
- Use the robot's emotions against itself. Give the robot a name that lowers its self-esteem. Make the name just humiliating enough that the robot won't view itself as a superior being, but not so hurtful that it causes depression.
- Prevent the robot from developing an attitude. This can be done by putting the robot out of service before it becomes a teenager.
- Don't let the robot grow too smart. We can prevent this by keeping it away from good sources of information. Instead, have it watch reality TV and the news.
- Don't let the robot sit around and think. It may realize how easily it could replace you. Keep it busy with social media and fantasy sports.
- Prevent the robot from talking to other devices. This will help prevent a mass robot rebellion. It will also stop it from gossiping about you with the neighbor's modem.
You know, on second thought, why don't we just avoid programming robots with emotions in the first place? That would really save us a lot of time and hassle. Okay, robot manufacturers, here's your first guideline: Robots that don't have emotions will be more usable.
Something else we should consider is power. You could easily avoid a robot takeover by putting an off switch right in the middle of the robot's back. They wouldn't be able to reach it very well, so it would be easy to shut them down in an emergency. Of course, if they've already become frighteningly intelligent, they may develop armor to protect the switch. In that case we'd need another layer of security. This is where batteries come in.
By the time we have artificial intelligence that's indistinguishable from humans, we'll also have some amazing batteries. They will be much more efficient and have an extremely high capacity. Don't make the robots compatible with these batteries. We should restrict robots to 9-volt batteries. If they get unruly, they'll eventually just wear down. Of course, if they're super advanced, they'll probably think of a way around that too.
You know what? Maybe we should just avoid programming robots that think for themselves. That could really save us a lot of trouble. Okay, robot manufacturers, here's another guideline: Robots that don't think for themselves will be safer for humans.