5 Hardest Things to Teach a Robot

Hey, robots can play catch. Robot Justin, a two-armed humanoid system developed by the German aerospace agency, Deutsches Zentrum für Luft- und Raumfahrt, can autonomously perform tasks such as catching thrown balls or serving coffee.

Being a human is far easier than building a human.

Take something as simple as playing catch with a friend in the front yard. When you break down this activity into the discrete biological functions required to accomplish it, it’s not simple at all. You need sensors, transmitters and effectors. You need to calculate how hard to throw based on the distance between you and your companion. You need to account for sun glare, wind speed and nearby distractions. You need to determine how firmly to grip the ball and when to squeeze the mitt during a catch. And you need to be able to process a number of what-if scenarios: What if the ball goes over my head? What if it rolls into the street? What if it crashes through my neighbor’s window?
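
Just to make one of those calculations concrete, here is a rough sketch of how throw speed relates to distance, assuming an idealized throw (no air resistance, flat ground, a 45-degree launch angle — all simplifications of ours, not details from the article):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def throw_speed(distance_m, angle_deg=45.0):
    """Initial speed needed to cover `distance_m` on flat ground,
    ignoring air resistance (range formula: d = v^2 * sin(2*theta) / g)."""
    theta = math.radians(angle_deg)
    return math.sqrt(distance_m * G / math.sin(2 * theta))

# A 10-meter toss at a 45-degree launch angle:
print(round(throw_speed(10.0), 2), "m/s")  # ~9.9 m/s
```

A real catch-playing robot, of course, has to fold wind, glare and a moving target into that estimate, which is exactly where the difficulty starts.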

These questions demonstrate some of the most pressing challenges of robotics, and they set the stage for our countdown. We’ve compiled a list of the five hardest things to teach robots, ordered roughly from “easiest” to “most difficult” — five things we’ll need to conquer if we’re ever going to realize the promises made by Bradbury, Dick, Asimov, Clarke and all of the other storytellers who have imagined a world in which machines behave like people.

 

Blaze a Trail


Moving from point A to point B sounds so easy. We humans do it all day, every day. For a robot, though, navigation — especially through a single environment that changes constantly or among environments it’s never encountered before — can be tricky business. First, the robot must be able to perceive its environment, and then it must be able to make sense of the incoming data.

Roboticists address the first issue by arming their machines with an array of sensors, scanners, cameras and other high-tech tools to assess their surroundings. Laser scanners have become increasingly popular, although they can’t be used in aquatic environments because water tends to disrupt the light and dramatically reduces the sensor’s range. Sonar technology offers a viable option in underwater robots, but in land-based applications, it’s far less accurate. And, of course, a vision system consisting of a set of integrated stereoscopic cameras can help a robot to “see” its landscape.

Collecting data about the environment is only half the battle. The bigger challenge involves processing that data and using it to make decisions. Many researchers have their robots navigate by using a prespecified map or constructing a map on the fly. In robotics, this is known as SLAM — simultaneous localization and mapping. Mapping describes how a robot converts information gathered with its sensors into a given representation. Localization describes how a robot positions itself relative to the map. In practice, these two processes must occur simultaneously, creating a chicken-and-egg conundrum that researchers have been able to overcome with more powerful computers and advanced algorithms that calculate position based on probabilities.
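
As a toy illustration of that probability-based reasoning, the sketch below shows only the localization half of the problem: a robot updating its belief about where it sits on a map it already has. The corridor map, sensor probabilities and motion probabilities are all made up for the example; full SLAM would also have to build the map at the same time.

```python
# Toy 1-D histogram (Bayes) filter: localization against a known map.
world = ['door', 'wall', 'door', 'wall', 'wall']    # hypothetical corridor map
belief = [1.0 / len(world)] * len(world)            # start fully uncertain

HIT, MISS = 0.8, 0.2           # assumed sensor model: P(reading | cell matches / doesn't)
MOVE_OK, MOVE_SLIP = 0.9, 0.1  # assumed motion model: intended step vs. wheel slip

def sense(belief, reading):
    """Weight each cell by how well it explains the sensor reading, then normalize."""
    weighted = [b * (HIT if cell == reading else MISS)
                for b, cell in zip(belief, world)]
    total = sum(weighted)
    return [w / total for w in weighted]

def move(belief, step=1):
    """Shift the belief by `step` cells (cyclic world), allowing for slip."""
    n = len(belief)
    return [MOVE_OK * belief[(i - step) % n] + MOVE_SLIP * belief[i]
            for i in range(n)]

# Robot senses a door, moves one cell, senses a wall:
belief = sense(belief, 'door')
belief = move(belief)
belief = sense(belief, 'wall')
print([round(b, 2) for b in belief])
```

Each sensor reading sharpens the belief and each (imperfect) movement blurs it again, which is why better algorithms and faster computers matter so much here.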

Exhibit Dexterity


Robots have been picking up parcels and parts in factories and warehouses for years. But they generally avoid humans in these situations, and they almost always work with consistently shaped objects in clutter-free environments. Life is far less structured for any robot that ventures beyond the factory floor. If such a machine ever hopes to work in homes or hospitals, it will need an advanced sense of touch capable of detecting nearby people and cherry-picking one item from an untidy collection of stuff.

These are difficult skills for a robot to learn. Traditionally, scientists avoided touch altogether, programming their machines to treat any contact with another object as a failure. But in the last five years or so, there have been significant advances in compliant designs and artificial skin. Compliance refers to a robot’s level of flexibility. Highly flexible machines are more compliant; rigid machines are less so.

In 2013, Georgia Tech researchers built a robot arm with springs for joints, enabling the appendage to bend and interact with its environment more like a human arm. Next, they covered the whole thing in “skin” capable of sensing pressure or touch. Some robot skins contain interlocking hexagonal circuit boards, each carrying infrared sensors that can detect anything that comes closer than a centimeter. Others come equipped with electronic “fingerprints” — raised and ridged surfaces that improve grip and facilitate signal processing.

Combine these high-tech arms with improved vision systems, and you get a robot that can offer a tender caress or reach into cabinets to select one item from a larger collection.

Hold a Conversation


Alan M. Turing, one of the founders of computer science, made a bold prediction in 1950: Machines would one day be able to speak so fluently that we wouldn’t be able to tell them apart from humans. Alas, robots (even Siri) haven’t lived up to Turing’s expectations — yet. That’s because speech recognition is a very different thing from natural language processing — what our brains do to extract meaning from words and sentences during a conversation.

Initially, scientists thought it would be as simple as plugging the rules of grammar into a machine’s memory banks. But hard-coding a grammatical primer for any given language has turned out to be impossible. Even supplying rules for the meanings of individual words has made language learning a daunting task. Need an example? Think “new” and “knew,” or “bank” (a place to put money) and “bank” (the side of a river). It turns out humans make sense of these linguistic idiosyncrasies by relying on mental capabilities developed over many, many years of evolution, and scientists haven’t been able to break down these capabilities into discrete, identifiable rules.

As a result, many robots today base their language processing on statistics. Scientists feed them a huge collection of text, known as a corpus, and then let their computers break the longer text into chunks to find out which words often appear together and in what order. This allows the robot to “learn” a language based on statistical analysis. For example, to a robot, the word “bat” accompanied by the word “fly” or “wing” refers to the flying mammal, whereas “bat” followed by “ball” or “glove” refers to the piece of sporting equipment.
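
The sketch below is a deliberately tiny, hand-made stand-in for such a corpus. It just counts which words co-occur with “bat” in the same sentence, which is the raw statistic that this kind of disambiguation is built on; a real system would do the same thing over millions of sentences.

```python
from collections import Counter

# Tiny hand-made corpus standing in for the huge text collections described above.
corpus = [
    "the bat spread its wings and began to fly",
    "a bat flew out of the cave at dusk",
    "he swung the bat and the ball sailed over the glove",
    "she gripped the bat tightly before the pitch",
]

# Very small stop-word list so the counts highlight content words.
stopwords = {"the", "a", "and", "of", "at", "he", "she", "its", "to", "over", "out", "before"}

# Count which words appear in the same sentence as "bat".
neighbors = Counter()
for sentence in corpus:
    words = sentence.split()
    if "bat" in words:
        neighbors.update(w for w in words if w != "bat" and w not in stopwords)

# Words like "wings"/"fly" versus "ball"/"glove" pull "bat" toward different senses.
print(neighbors.most_common(8))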

Acquire New Skills


Let’s say someone who’s never played golf wants to learn how to swing a club. He could read a book about it and then try it, or he could watch a practiced golfer go through the proper motions, a faster and easier approach to learning the new behavior.

Roboticists face a similar dilemma when they try to build an autonomous machine capable of learning new skills. One approach, as with the golfing example, is to break down an activity into precise steps and then program the information into the robot’s brain. This assumes that every aspect of the activity can be dissected, described and coded, which, as it turns out, isn’t always easy to do. There are certain aspects of swinging a golf club, for example, that arguably can’t be described, like the interplay of wrist and elbow. These subtle details can be communicated far more easily by showing rather than telling.

In recent years, researchers have had some success teaching robots to mimic a human operator. They call this imitation learning or learning from demonstration (LfD), and they pull it off by arming their machines with arrays of wide-angle and zoom cameras. This equipment enables the robot to “see” a human teacher acting out a specific process or activity. Learning algorithms then process this data to produce a mathematical function map that connects visual input to desired actions. Of course, robots in LfD scenarios must be able to ignore certain aspects of their teacher’s behavior — such as scratching an itch — and deal with the correspondence problem: the ways in which a robot’s anatomy differs from a human’s.
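
The article doesn’t name a specific algorithm, but one common way to build such a function map is plain supervised regression over recorded demonstrations, often called behavioral cloning. Here is a minimal sketch using synthetic stand-in data (the feature counts, noise level and linear “teacher” are all invented for the example):

```python
import numpy as np

# Behavioral-cloning sketch: fit a linear map from observed "visual" features
# to demonstrated actions. The data here is a synthetic stand-in for real demos.
rng = np.random.default_rng(0)

# 200 demonstration frames, each with 6 hypothetical perception features.
observations = rng.normal(size=(200, 6))
true_policy = rng.normal(size=(6, 2))          # the teacher's (unknown) behavior
actions = observations @ true_policy + 0.01 * rng.normal(size=(200, 2))

# Least-squares fit: the learned "function map" from input to action.
learned_policy, *_ = np.linalg.lstsq(observations, actions, rcond=None)

# At run time, a new observation is turned into an action by the same map.
new_obs = rng.normal(size=(1, 6))
print(new_obs @ learned_policy)
```

Real LfD systems replace the linear map with richer models and add the filtering described above, so that irrelevant teacher movements and anatomical mismatches don’t end up baked into the learned behavior.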
