Space exploration, robot partners require new algorithms for safety, says IEEE Fellow

A Tigershark unmanned aerial vehicle. Prof. Panos Tsiotras says better controls will lead to more autonomy. Source: Georgia Tech

As on Earth, robots are increasingly seen as necessary aides to humans in space exploration. Not only are unmanned probes exploring the moon, other planets, and beyond, but autonomous systems are also expected to be partners in helping people maintain systems and return to the moon. To do these things safely, new algorithms must be developed, said Panagiotis Tsiotras, a professor at the Georgia Institute of Technology.

Tsiotras, who is an IEEE Fellow and the David and Andrew Lewis Chair of the Guggenheim School of Aerospace Engineering at Georgia Tech, is working on fail-safe integration of machine learning and related components with aerospace systems. He is working to improve perception so that space-based and other robots can have better situational awareness. In addition, Tsiotras is researching optimized controls so that robots can better identify paths.
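
The article does not detail how that path identification works, but the underlying problem is a classic search over a map of the environment. As a rough, hypothetical illustration (not Tsiotras's method), the sketch below runs a minimal A* search on a small occupancy grid; the grid, unit step costs, and Manhattan heuristic are assumptions chosen for brevity.

```python
# Minimal A* path search on a 2-D occupancy grid -- an illustrative sketch only,
# not Tsiotras's algorithm. Grid layout, costs, and heuristic are assumptions.
import heapq

def astar(grid, start, goal):
    """grid: list of rows, 0 = free, 1 = obstacle; start/goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    open_set = [(0, start)]                    # entries are (f = g + h, cell)
    g = {start: 0}                             # best known cost from start
    parent = {}

    def h(cell):                               # Manhattan-distance heuristic
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:                        # reconstruct path back to start
            path = [cur]
            while cur in parent:
                cur = parent[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                new_g = g[cur] + 1
                if new_g < g.get(nxt, float("inf")):
                    g[nxt] = new_g
                    parent[nxt] = cur
                    heapq.heappush(open_set, (new_g + h(nxt), nxt))
    return None                                # no path found

# Example: plan around a small obstacle wall
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```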

“My research in the past few years has been in autonomous systems for ground, aerial, and space applications,” he told The Robot Report. “The problems are the same for all of them — trying to develop strong decision-making controls.”

Applying lessons from autonomous vehicles

“With autonomous vehicles, we’re using the techniques of others. I’m interested in how we can make vehicles behave more naturally in traffic,” said Tsiotras. “It’s easy to get in a vehicle that’s safe, but it does not feel natural. Self-driving cars may go slowly or keep a lot of distance from other vehicles. People have different styles and desires.”

Prof. Panos Tsiotras. Source: Georgia Tech

“We want to make vehicles behave more naturally, closer to the way human drivers behave, through driver cloning,” he said. “But one challenge is that everyone thinks they drive well.”

“Determining intent is important not just for detecting behavior and planning, but also for higher-level thinking,” Tsiotras explained. “Some self-driving cars might be more aggressive — without breaking any laws. Sometimes detecting driver intent is easy, if people indicate it with turn signals. Sometimes they don’t, and at an intersection with a flashing light, people may make eye contact, nod, or wave hands to indicate who should merge. Autonomous vehicles should also be able to see that and figure it out.”

Making vehicles and robots behave more like humans is important in mixed environments, he said. “In the future, if all vehicles are autonomous and have a way of ‘handshaking’ over a network, then the problem would be solved,” said Tsiotras. “To figure out how a person drives in his or her own vehicle, systems can observe and adjust their own behavior accordingly. A system can learn whether someone is timid or aggressive, and it can determine how much to intervene and compensate as needed.”
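
Driver cloning as described here amounts to imitation learning: fit a model that maps the observed driving state to the actions a particular driver actually takes. The sketch below is a deliberately minimal, hypothetical version using a linear regressor; the feature names and numbers are placeholders for illustration, not anything drawn from Tsiotras's research.

```python
# Minimal behavior-cloning sketch (not Tsiotras's system): fit a model that maps
# observed driving state to the steering/throttle a particular driver chose.
# Features, values, and the choice of regressor are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Ridge

# Logged state features per timestep: [speed m/s, gap to lead car m, lane offset m]
X = np.array([[12.0, 30.0,  0.1],
              [15.0, 18.0,  0.0],
              [ 8.0, 45.0, -0.2]])
# The driver's recorded actions at those timesteps: [steering, throttle]
y = np.array([[ 0.00, 0.30],
              [ 0.02, 0.10],
              [-0.05, 0.50]])

policy = Ridge(alpha=1.0).fit(X, y)   # "clone" the driver with a simple regressor

# Later, the cloned policy proposes an action for a new state, which can be
# compared against what the live driver is doing.
new_state = np.array([[14.0, 20.0, 0.05]])
print(policy.predict(new_state))      # predicted [steering, throttle]
```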

AI could lead to customized self-driving cars, says Tsiotras

The next step in developing and applying artificial intelligence to vehicles is to account for manufacturers’ requirements and consumer perceptions, Tsiotras said.

“Automotive manufacturers sometimes have features that are hidden from drivers, such as drive-by-wire and traction control,” he noted. “If they work too well, people won’t even know they’re there. Transparency is good but can be difficult for the business case.”

“We’ve started investigating this. It’s not clear whether people would do the same thing as passengers in intelligent vehicles as they would as drivers who know what they’re doing,” added Tsiotras. “That’s where driver cloning can help.”

“The idea is that the vehicle is always observing how someone is driving so it has the information and is able to detect whether the driver is not behaving properly in a particular situation. Perhaps they’re tired or having a bad day,” he said. “This would be a more proactive version of driver assist for an extra level of safety. Toyota uses the term ‘guardian angel under the hood.’”
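
One simple way to picture this “guardian angel” style of monitoring — an assumption about the general approach, not Toyota's or Tsiotras's implementation — is to keep running statistics of a driver's own behavior and flag samples that fall far outside that driver's norm.

```python
# A minimal sketch of proactive driver monitoring (illustrative only): track a
# driver's recent lane-keeping signal and flag readings far outside their norm.
from collections import deque
import statistics

class DriverMonitor:
    def __init__(self, window=200, z_threshold=3.0):
        self.history = deque(maxlen=window)   # recent lane-offset samples
        self.z_threshold = z_threshold

    def update(self, lane_offset):
        """Return True if this sample deviates strongly from the driver's norm."""
        flagged = False
        if len(self.history) >= 30:           # wait until a baseline exists
            mean = statistics.fmean(self.history)
            std = statistics.pstdev(self.history) or 1e-6
            flagged = abs(lane_offset - mean) / std > self.z_threshold
        self.history.append(lane_offset)
        return flagged

monitor = DriverMonitor()
# Steady driving, then a sudden lane drift at the end (synthetic numbers)
for t, offset in enumerate([0.02, -0.01, 0.03] * 20 + [0.9]):
    if monitor.update(offset):
        print(f"sample {t}: unusual lane drift -- consider assisting")
```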

But how close are today’s vehicles to such autonomous and advanced driver-assist systems (ADAS)? “We’re not even close to complete autonomy,” Tsiotras replied. “Such systems are slowly getting into vehicles. Trucking on highways is the easier problem, but we want the same level of autonomy in a downtown business district, where there are pedestrians, construction, and bicycles.”


Robots in space exploration

Another aerospace project that Tsiotras is working on for NASA involves detecting asteroids. “There is a lot of interest in small bodies for mining, and we’re sending spacecraft without astronauts to observe,” he said. “They have to be autonomous robots because teleoperation is not an option at that distance.”

Unmanned probes have been able to perceive asteroids and identify places to land. They could autonomously use cameras and other sensors to determine their properties such as shape and mass, Tsiotras said.
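
As a back-of-the-envelope illustration of one such property estimate — not a description of any specific mission's software — a probe that measures the gravitational acceleration it experiences at a known range can invert Newton's law to get a rough mass. The numbers below are hypothetical.

```python
# Rough mass estimate from measured gravitational pull at a known standoff range.
# Assumes a roughly spherical, isolated body; numbers are hypothetical.
G = 6.674e-11                      # gravitational constant, m^3 kg^-1 s^-2

def mass_from_acceleration(accel_m_s2, range_m):
    """Invert a = G * M / r^2, so M = a * r^2 / G."""
    return accel_m_s2 * range_m**2 / G

# Hypothetical measurement: 2e-7 m/s^2 of pull at a 5 km standoff
print(f"estimated mass: {mass_from_acceleration(2e-7, 5_000):.2e} kg")
```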

“We won’t see completely autonomous controls — we’re adding increasing levels for different phases of a mission,” he said. “Take, for example, sending a robot to Mars to collect some rocks. Typically, a human operator on Earth would get pictures of a rock on the horizon and tell the robot where to go. After the signal delay, the robot would go find it and collect it. It could autonomously navigate the terrain and decide how to achieve its goal. At another level, the operator could just say, ‘Go find interesting geological formations.’”

“Another example is that you could have a level of autonomy in deciding when and where to land. Pinpoint landing on Mars must compensate for uncertainties in atmospheric entry, which is important for future missions in which we’ll send supplies ahead of humans,” said Tsiotras. “We could choose a general area in the desert and maybe have the lander’s cameras evaluate landing zones and autonomously do a lateral diversion to avoid rocks, as opposed to landing with balloons. Proximity is important for human landing zones, and a geologist might want to land a robotic rover near a certain formation.”
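
The hazard-avoidance step he describes can be pictured as a search over a hazard map within the lander's divert envelope. The sketch below is an illustrative assumption, not a flight algorithm: given per-cell hazard scores derived from camera data and a nominal target, it chooses the safest cell the lander can still reach.

```python
# Illustrative hazard-relative landing-site selection (an assumption about the
# general approach, not a flight algorithm).
import math

def pick_landing_cell(hazard, nominal, max_divert_cells):
    """hazard: 2-D list of hazard scores (higher = worse); nominal: (row, col)."""
    best, best_score = nominal, float("inf")
    for r, row in enumerate(hazard):
        for c, score in enumerate(row):
            # Only consider cells within the lateral divert range of the target
            if math.dist((r, c), nominal) <= max_divert_cells and score < best_score:
                best, best_score = (r, c), score
    return best

# Hypothetical 4x4 hazard map: the nominal target (1, 1) sits among rocky cells
hazard_map = [[0.9, 0.8, 0.2, 0.1],
              [0.7, 0.9, 0.3, 0.1],
              [0.4, 0.3, 0.2, 0.2],
              [0.1, 0.1, 0.1, 0.3]]
print(pick_landing_cell(hazard_map, nominal=(1, 1), max_divert_cells=2))
```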


Tsiotras considers flying cars and robotaxis

Tsiotras’ research into decision making for autonomous systems also applies to aerial taxis, which are being developed worldwide.

“The main issue with these applications for connecting within metropolitan areas is that they need to be reliable. Because they’re operating in environments with humans rather than on Mars, the stakes are much higher,” he said. “It’s not just precision but also robustness and reliability, so you need to have redundant sensors and actuators, and integration is a challenge.”
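
Redundancy at the sensor level often comes down to voting. As a minimal, generic example — not tied to any particular air-taxi design — a median voter across triplicated sensors produces a consensus value and flags an outlier for fault handling.

```python
# Minimal sketch of a standard redundancy pattern: median voting across
# triplicated sensors. Sensor names, values, and tolerance are illustrative.
import statistics

def vote(readings, disagree_tol=0.5):
    """Return the median reading and whether any sensor disagrees strongly."""
    consensus = statistics.median(readings)
    fault = any(abs(r - consensus) > disagree_tol for r in readings)
    return consensus, fault

# Three redundant altimeters; the third has drifted
altitude, fault_flag = vote([120.2, 120.4, 131.0])
print(altitude, fault_flag)   # 120.4 True -- fly on the consensus, flag the outlier
```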

“Like autonomous ground vehicles, these systems have to operate under many weather conditions,” observed Tsiotras. “Robots and drones are increasingly different from previous generations on factory floors. Systems have to adapt and learn on their own because nobody can pre-program them for every eventuality.”


Better-than-human performance expected

“The question is, at what point will the public accept loss of human life from machine error?” Tsiotras said. “Should a robot or autonomous vehicle be as good as a human, or 10 times better? We have 30,000 automotive deaths per year in the U.S. today, but if an autonomous vehicle kills one human, it makes the news. It’s a tall order, trying to make machines superhuman for safety, but developers must be very careful, or the public will turn against this technology.”

“Fortunately, machines are pretty good at pattern recognition,” he said. “With the right sensors and machine learning algorithms, autonomous systems can recognize a bicycle, a soccer ball, or a child running after that ball. It’s really about context and judgment, which aren’t so easy. This is related to what I was saying about detecting human intent.”
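
The gap between recognition and judgment can be made concrete with a toy rule layered on top of detections. Everything below is hypothetical, meant only to show how combining detections (a ball plus a nearby person) might raise the vehicle's caution beyond what either detection implies on its own.

```python
# Toy "context and judgment" rule over hypothetical perception outputs;
# the labels, distances, and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str     # e.g. "person", "bicycle", "sports ball"
    x: float       # lateral distance from the vehicle's lane center, meters

def caution_level(detections):
    """Combine individual detections into a coarse, context-aware caution level."""
    ball_near_road = any(d.label == "sports ball" and abs(d.x) < 4.0 for d in detections)
    person_nearby = any(d.label == "person" and abs(d.x) < 8.0 for d in detections)
    if ball_near_road and person_nearby:
        return "brake early: a child may chase the ball into the road"
    if ball_near_road or person_nearby:
        return "reduce speed and widen margins"
    return "normal driving"

scene = [Detection("sports ball", 3.0), Detection("person", 6.5)]
print(caution_level(scene))
```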

“Humans take years of experience to walk or drive well; we may have to let machines mature and observe the world for some time,” said Tsiotras. “Most automakers and technology companies are working on Level 3 autonomy right now. In my opinion, they need to go in some more structured steps, such as long-haul transportation or HOV-type [high-occupancy vehicle] lanes.”

“Just make sure self-driving cars can operate reliably in large numbers,” he added. “A more prudent approach to different environments and conditions, such as nighttime or a blizzard, would be useful. People like to drive but get bored on highways.”

“It’s a very exciting time — I envy my students,” Tsiotras concluded. “They’re always complaining that things seem difficult, but these are cool problems. Enabling technologies and talent are coming together to attack problems that seemed unsolvable 15 years ago. With the coalescence of control and signal theory, processing, AI, and modern robotics, it’s a good time to be a student.”
