Robots Are Coming To Make Friends With Soldiers

AI-powered robots are set to protect their human teammates in the very near future.

Researchers at Carnegie Mellon University's Robotics Institute and the United States Army Research Laboratory have developed a new technique that lets them quickly teach robots new traversal behaviors.

Moreover, the technique requires minimal human oversight.

Using the new technique, researchers can now enable various robot platforms to navigate different environments autonomously.

All the while, the robots can carry out the actions that a human working alongside them would normally expect of such platforms in specific situations.

Researchers have published all the experiments they carried out as part of the study and presented the report at the IEEE (Institute of Electrical and Electronics Engineers) International Conference on Robotics and Automation in Brisbane, Australia.

John Rogers and Maggie Wigness, two researchers from the United States Army Research Laboratory, engaged hundreds of conference attendees in face-to-face discussions during an interactive presentation that lasted more than two and a half hours.

According to Maggie Wigness, the research team working on the project had many goals in mind.

One of the goals they wanted to achieve through autonomous robotics research was to provide soldiers on the ground with reliable, autonomous robot teammates.

Wigness also mentioned that if a given robot acted as a soldier’s teammate, then the soldier could accomplish the given task much faster.

In the process of doing so, the soldier could obtain more detailed situational awareness.

Furthermore, Wigness said, soldiers could use robotic teammates in different situations as initial investigators.

Robots could act as initial investigators, scouting potentially harmful scenarios first.

Hence, they could keep soldiers further out of harm's way.

To achieve such usefulness, according to Wigness, robots must be able to use their learned intelligence to perceive, reason and make correct decisions across a variety of tasks.

Wigness also mentioned that the team's research mainly focused on how robotic intelligence could be learned from a small number of human demonstrations.

Additionally, the process that robots use to learn things is pretty fast.

Moreover, the process requires a minimal amount of human demonstration.

This makes such a robot ideal because it can make use of various learning techniques on the fly.

Such an ability to learn quickly and without much human input could prove very useful in the field where a given mission’s requirements are always changing.

Carnegie Mellon University and United States Army Research Laboratory researchers initially focused their investigation on learning robot traversal behaviors with respect to the robot's visual perception of the terrain and the objects present in the environment.

More precisely, researchers taught the robot how to navigate from one point to another in a given environment.

All the while, researchers also taught the robot how to stay near the very edge of the given road.

Moreover, researchers also taught the robot to traverse the terrain covertly, using nearby buildings as cover.

Researchers who worked on the project said that, when given various tasks for a particular mission, they could activate the most appropriate learned traversal behavior while the robot was already operating on the mission.
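To make the idea concrete, here is a minimal sketch of what such behavior switching might look like in software, assuming each learned behavior is stored as a named vector of reward weights. The behavior names and weight values are illustrative assumptions; the article does not describe the researchers' actual implementation.

```python
import numpy as np

# Illustrative only: learned traversal behaviors stored as named
# reward-weight vectors over terrain features (values are made up).
LEARNED_BEHAVIORS = {
    "stay_near_road_edge": np.array([0.9, -0.4, 0.1]),
    "covert_traverse": np.array([-0.2, 0.3, 0.8]),
}

def activate_behavior(name):
    """Return the reward weights for the requested traversal behavior."""
    if name not in LEARNED_BEHAVIORS:
        raise KeyError(f"no learned behavior named {name!r}")
    return LEARNED_BEHAVIORS[name]

# Mid-mission, the operator or planner can switch behaviors on the fly:
weights = activate_behavior("covert_traverse")
```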

How did the researchers go about ensuring that?

Well, first, researchers successfully leveraged inverse optimal control.

What is inverse optimal control?

It is more commonly referred to as inverse reinforcement learning.

Inverse reinforcement learning is a class of machine learning.

The technique, when provided with a known and accurate optimal policy, seeks to recover the underlying reward function.

In the case of robots helping soldiers accomplish mission tasks, a human (a soldier on the battlefield) has to demonstrate the optimal policy first.

How can the human do that?

The human can do that by driving the robot along the trajectory that best represents the behavior the robot needs to learn and then emulate.

These trajectories then act as exemplars.

Researchers relate these exemplars to the visual object and terrain features present in the environment.

In this way, researchers are able to make the robot learn a reward function with respect to those environment features.
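As a rough illustration, here is a minimal feature-matching sketch of inverse reinforcement learning, assuming the reward is a linear function of a few terrain features. The feature names, map cells and update rule are assumptions for the example, not the researchers' actual algorithm.

```python
import numpy as np

def feature_counts(trajectory, feature_map):
    """Sum the terrain feature vectors of every map cell a trajectory visits."""
    return np.sum([feature_map[cell] for cell in trajectory], axis=0)

def irl_step(weights, demo_traj, robot_traj, feature_map, lr=0.1):
    """One gradient-style update: nudge the linear reward weights so the
    human demonstration out-scores the robot's own current rollout."""
    grad = feature_counts(demo_traj, feature_map) - feature_counts(robot_traj, feature_map)
    return weights + lr * grad

# Made-up features per map cell: (grass, road, near_building).
feature_map = {
    "A": np.array([1.0, 0.0, 0.0]),  # grassy cell
    "B": np.array([0.0, 1.0, 0.0]),  # road cell
    "C": np.array([0.0, 0.0, 1.0]),  # cell beside a building
}

# A human drives the robot along B -> C (hug the road, then take cover);
# the robot's own rollout wandered through the grass instead.
w = np.zeros(3)
w = irl_step(w, demo_traj=["B", "C"], robot_traj=["A", "A"], feature_map=feature_map)
print(w)  # road and building features gain weight, grass loses it
```

Repeating such updates until the robot's feature counts match the demonstration's yields weights under which the demonstrated trajectory scores best, which is the essence of recovering a reward function from an exemplar.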

It is true that similar research already exists in the field of autonomous robotic platforms, but what the United States Army Research Laboratory has done is unique in several respects.

First, according to Wigness, the operating scenarios and challenges the researchers had to focus on at the United States Army Research Laboratory were quite distinct from those of most other research projects.

Moreover, the research team at the United States Army Research Laboratory sought to create robotic systems that were not only intelligent but could also operate reliably in various warfighter environments.

What does that mean for the robot?

It means the robot has to find a way to operate up to a given standard in a highly unstructured and possibly very noisy environment.

Moreover, researchers need to be able to teach the robot how to do everything described above with relatively little a priori knowledge of the current environment and its state.

In other words, researchers working at the United States Army Research Laboratory had a very different problem statement than the majority of other researchers working in the field.

This, according to Wigness, enabled the United States Army Research Laboratory to make a big impact in the field of autonomous robotic systems research.

The techniques the researchers used at the United States Army Research Laboratory were robust in the sense that they could handle noise and let the robot learn useful tasks from a relatively small amount of data.

Researchers had to do this because of the way that they defined their problem.

Wigness also told reporters that the team's preliminary autonomous robotics research had helped the team itself and other researchers demonstrate the feasibility of rapidly learning an encoding of robotic traversal behaviors.

Wigness further added that as the team pushed its robotic systems research onto the next level, the team would also begin to focus more heavily on other complex behaviors.

These complex behaviors would, in all likelihood, require the robotic platform to learn from more than a single source of features, i.e., from more than visual perception alone.

Additionally, Wigness mentioned that the team had a flexible learning framework.

With such a framework, the team could make use of whatever a priori intel about a given environment happened to be available.

Such intel could consist of useful information regarding areas on the battlefield which adversaries could easily see or areas which are known to contain infrastructure for reliable communication.

Of course, such additional information may only be relevant to specific mission scenarios.

Researchers believe that if they provided the robot with the means to learn with respect to all of the above-defined features, it would greatly enhance the mobile robot platform’s intelligence.

The team of researchers is also looking at ways to further explore how they could transfer such a type of robotic behavior learning to various other mobile platforms.

It is true that, to date, researchers have only evaluated their techniques with small, unmanned Clearpath Husky robots.

This particular robot has a relatively low visual field of view with respect to the ground below it.

Wigness mentioned that if researchers found a way to transfer such technology to other, larger robotic platforms, that would introduce completely new perception viewpoints along with different maneuvering capabilities.

Furthermore, according to Wigness, the ability to learn encoded robotic behaviors that researchers can then easily transfer to other robotic platforms could prove extremely valuable, especially when dealing with an entire team of heterogeneous robotic platforms.

In such a case, one robotic platform could learn the required behavior and share it, instead of each platform learning the same behavior individually.
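The article gives no implementation details, but at the software level such a transfer could be as simple as serializing the learned reward weights once and loading them on every other platform; the file name and weight values below are purely hypothetical.

```python
import numpy as np

# Purely hypothetical transfer step: one platform saves the reward
# weights it learned, and the rest of the heterogeneous team loads them.
learned_weights = np.array([-0.2, 0.3, 0.8])  # e.g., a covert-traversal behavior
np.save("covert_traverse.npy", learned_weights)

# On another platform, with its own sensors and maneuvering stack:
weights = np.load("covert_traverse.npy")
```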

The Robotics Collaborative Technology Alliance (RCTA), sponsored by the United States Army Research Laboratory, funded this research project.

The Robotics Collaborative Technology Alliance aims to bring together academic, industrial and government institutions to address the research and development (R&D) required to enable the swift and effective deployment of future military vehicles.

More specifically, future unmanned robotic ground vehicle systems.

These systems could range in size from ground combat robotic vehicles to man-portable systems.

Rogers recently told reporters that the United States Army Research Laboratory had positioned itself to collaborate actively with the other members of the Robotics Collaborative Technology Alliance, leveraging the efforts of the country's top academic researchers to help solve Army problems.

Rogers further said that this specific research effort represented the synthesis of many components of the Robotics Collaborative Technology Alliance with the laboratory's own research.

Without that synthesis, Rogers said, the work would not have been possible.

In other words, the fact that teams from academia and the United States Army Research Laboratory worked so closely helped everybody to achieve their desired results.

Ultimately though, the research performed in the field of robotic platforms could prove critical for the future performance of the US Army on the battlefield.

This is where soldiers would have no other choice but to rely on robots.

And the more reliable the robots are, the more confidence human soldiers will have in them to assist in successfully executing their missions.

Rogers also commented that new capabilities allowing the Next Generation Combat Vehicle (NGCV) to maneuver effectively and autonomously at operational tempo on the battlefield of the near future would give rise to new and powerful warfare tactics.

All the while, these new tactics would reduce the risk of harm coming to the soldier.

He also mentioned that if the Next Generation Combat Vehicle encountered unforeseen battle conditions that required some form of teleoperation, the researchers' current approach could prove very useful in helping the vehicle learn autonomously how to handle such conditions on the battlefield of the future.

Of course, AI and machine learning are not just for one type of security.

The future of warfare is not only limited to physical battlefields.

If the previous US presidential election is anything to go by, countries are now waging wars against each other through the web.

The common term for such attacks is cyber attacks.

Recently, many in the security industry have named artificial intelligence as a hot new prospect for providing better protection.

But some believe it could be a dangerous gamble.

How come?

Well, there is little doubt that artificial intelligence and machine learning can help guard against many types of cyber attacks.

However, hackers aren't just sitting idly by either.

They now have the tools to foil various different security algorithms.

How can they do that?

Well, they can do that by targeting the very data that these security algorithms train on.

Hypothetically speaking, a hacker could change the warning flags that a typical security algorithm would look for.

Walk around the exhibition floor of any Black Hat cybersecurity conference in Las Vegas and you are bound to find several companies making huge claims about how their engineers are working around the clock, using artificial intelligence and machine learning techniques to make the whole world a much safer place than before.

However, some security experts worry that security vendors are not paying sufficient attention to the risks associated with these methods and the consequences of relying so heavily on these technologies.

Raffael Marty, who works at the security firm Forcepoint, has warned that what is happening at the moment is a little concerning to him and, in a few cases, even feels dangerous.

Of course, the security industry going to great lengths in using algorithms to improve their products is hardly surprising.

In all fairness, the security industry is currently facing a genuine tsunami of cyber attacks.

These cyber attacks are expected to increase even further as the number of devices hooked up to the internet looks set to explode.

Simultaneously though, there is a huge gap between the number of skilled workers required and the number of skilled workers available.

There is little doubt that the increased use of artificial intelligence and machine learning to automate threat detection, as well as the response to those threats, is going to lift some of the burden off the shoulders of security employees.

Some believe that these new techniques may also help security companies identify online threats much more efficiently than existing software approaches.

The data is always in danger.

A few security experts, including Marty, mentioned at the Las Vegas Black Hat security conference that many security firms are now rolling out artificial intelligence and machine-learning-based security products not because the market needs them, but because the firms feel they must in order to attract customers who have already bought into the machine learning and AI hype cycle.

Moreover, according to these experts, there is also a danger that security firms will simply overlook the ways in which AI and machine learning techniques may create a false sense of online security.

A lot of security products that claim to take advantage of developments in the field of machine learning and AI use the supervised learning technique.

For this technique to work, security firms have to first select and then label huge data sets.

These are the data sets on which the algorithms train.

For example, an algorithm can train on a data set in which different pieces of code are tagged as either clean or malware.
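As a toy illustration of that supervised setup, the sketch below trains a classifier on labeled code samples. The feature vectors (imagine counts of suspicious API calls), the labels and the choice of a scikit-learn decision tree are all assumptions made for the example.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy training set: each code sample reduced to a numeric feature vector
# (e.g., counts of suspicious API calls); labels are 0 = clean, 1 = malware.
X_train = [
    [0, 1, 0],  # clean sample
    [5, 0, 3],  # malware sample
    [1, 2, 0],  # clean sample
    [7, 1, 4],  # malware sample
]
y_train = [0, 1, 0, 1]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)

# The trained model now tags unseen code samples as clean or malware.
print(clf.predict([[6, 0, 2]]))  # [1] -> flagged as malware
```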

Marty recently told the media that one big risk security firms take in rushing their products to market is using incomplete training data.

What does that mean exactly?

It means that security firms use data that has not been thoroughly scrubbed of anomalous data points.

Another problem is that hackers who manage to get access to a security firm's systems could corrupt the training data by simply switching the labels.

Doing so would mean that the algorithm in charge of protecting the system would tag some malware code as clean code.
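Continuing the toy example above, the sketch below shows how an attacker with write access to the training set could flip the labels so that the model learns exactly the wrong lesson. The data is again fabricated for illustration.

```python
from sklearn.tree import DecisionTreeClassifier

# Same toy feature vectors as before; labels 0 = clean, 1 = malware.
X_train = [[0, 1, 0], [5, 0, 3], [1, 2, 0], [7, 1, 4]]
y_train = [0, 1, 0, 1]

# An attacker with access to the training data flips every label,
# so the malware samples are presented to the learner as clean code.
poisoned_y = [1 - label for label in y_train]

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, poisoned_y)

# The poisoned model now tags an obviously malware-like sample as clean.
print(clf.predict([[6, 0, 2]]))  # [0] -> waved through as clean
```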

In fact, it is not even necessary for the bad guys to tamper with the data at all.

There is another way.

That way involves working out some of the features the security model keys on.

By studying how the product behaves, they can learn a lot about the model it uses to flag a piece of code as malware.

If they succeed in working out that model, they can strip the problematic features from their own malware so that the security algorithm no longer flags it.
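Here is a minimal sketch of that evasion idea, using a toy detector deliberately built so that a single feature drives the malware verdict. The attacker probes the model feature by feature and zeroes out whichever one triggers detection; real-world evasion is far messier, but the principle is the same.

```python
from sklearn.tree import DecisionTreeClassifier

# Toy detector: here only the first feature separates clean from malware,
# a deliberate simplification so a single change suffices to evade it.
X_train = [[0, 1, 2], [5, 0, 2], [1, 2, 3], [7, 1, 3]]
y_train = [0, 1, 0, 1]
clf = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

malware = [6, 0, 3]
print(clf.predict([malware]))  # [1] -> flagged as malware

# Probe the model feature by feature; zero out the first feature whose
# removal flips the verdict, mimicking an attacker rewriting their code.
evasive = list(malware)
for i in range(len(evasive)):
    probe = list(evasive)
    probe[i] = 0
    if clf.predict([probe])[0] == 0:
        evasive = probe
        break

print(clf.predict([evasive]))  # [0] -> the rewritten malware slips past
```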

 

Zohair

Zohair is currently a content crafter at Security Gladiators and has been involved in the technology industry for more than a decade. He is an engineer by training and, naturally, likes to help people solve their tech related problems. When he is not writing, he can usually be found practicing his free-kicks in the ground beside his house.
