US Army Research Laboratory (ARL) Assessment of Energy-Efficient and Model-Based Control

Abstract

The US Army Research Laboratory’s (ARL’s) Robotics Collaborative Technology Alliance is a program intended to change robots from tools that soldiers use into teammates with which soldiers can work. One desired ability of such a teammate is the ability to operate in an energy-efficient manner on a variety of surfaces. To develop such a teammate, alliance researchers developed planning algorithms that incorporate knowledge of the vehicle’s steering and control system. These algorithms adapt their navigation to different types of terrain, learning appropriate parameter values by conducting a brief set of trial maneuvers, and are intended to enable the robot to operate in a manner that is more energy efficient. In June of 2016, ARL researchers conducted an assessment of this technology by comparing this planning algorithm to a traditional minimum-distance planning algorithm. This assessment found an overall improvement in energy efficiency, which was clearly visible when the systems operated on grass, but unclear when the systems operated on asphalt. Overall, the results suggest that the energy-efficient planner does have the potential to plan a more energy-efficient path.

The Robotic Platform

The robotic platform is a Clearpath Husky, a skid-steered vehicle equipped with three Mac Minis used for hardware control, planning, environmental mapping, and communication with the researchers. Sensory information comes from two Bumblebee 2 stereo cameras, and linear Hall-effect sensors measure wheel speed.

  FIG 1. Clearpath Husky. The upper camera (A) was used for visual odometry, and the lower (B) was not used at all. The lidar (C) was used for obstacle avoidance.


Motion Planning on Learned Terrains

Equipped with one of three possible motion planners, the Husky navigates two kinds of terrain commonly encountered in the field: asphalt and grass. The field is littered with cardboard towers that act as obstacles in its map. Each motion planner computes an optimal path that the robot attempts to follow in the real world, replanning as it moves to avoid collisions with obstacles.
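The contrast between the minimum-distance planner and the energy-efficient planner can be illustrated with a small sketch. The grid, the A* search, and the per-terrain energy factors below are all hypothetical (the report does not publish the planners' internals or learned parameter values); the sketch only shows the core idea that weighting path cost by a terrain-dependent energy factor can make the planner detour around expensive terrain such as grass.

```python
import heapq

def astar(grid, start, goal, cost_fn):
    """4-connected grid A* with a pluggable per-cell traversal cost.

    grid    -- list of strings; 'a' = asphalt, 'g' = grass, '#' = obstacle
    cost_fn -- maps a terrain character to the cost of entering that cell
    """
    rows, cols = len(grid), len(grid[0])
    frontier = [(0.0, start)]          # (f = g + h, cell)
    g = {start: 0.0}
    came = {start: None}
    while frontier:
        _, cur = heapq.heappop(frontier)
        if cur == goal:                # reconstruct path, start -> goal
            path = []
            while cur is not None:
                path.append(cur)
                cur = came[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                new_g = g[cur] + cost_fn(grid[nr][nc])
                if new_g < g.get((nr, nc), float('inf')):
                    g[(nr, nc)] = new_g
                    came[(nr, nc)] = cur
                    # Manhattan distance is admissible: every step costs >= 1
                    h = abs(nr - goal[0]) + abs(nc - goal[1])
                    heapq.heappush(frontier, (new_g + h, (nr, nc)))
    return None

# Illustrative energy factors only; in the actual system such values would be
# learned from the trial maneuvers described in the abstract.
ENERGY = {'a': 1.0, 'g': 3.5}

grid = ["aaggaa",      # a grass strip blocks the direct route along row 0
        "aaggaa",
        "aaaaaa"]
start, goal = (0, 0), (0, 5)

# Minimum-distance planner: every cell costs the same.
dist_path = astar(grid, start, goal, lambda cell: 1.0)
# Energy-efficient planner: grass costs more, so it detours over asphalt.
energy_path = astar(grid, start, goal, lambda cell: ENERGY[cell])
```

Under these assumed factors the distance planner drives straight through the grass strip, while the energy planner takes a longer all-asphalt detour whose total energy cost is lower, mirroring the qualitative behavior the assessment was designed to measure.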

  FIG 2. Representative obstacle course layout. The goal position is marked by an orange cone.


Publication

C. Lenon, M. Childers, M. Harper, C. Ordonez, N. Gupta, J. Pace, R. Kopinsky, A. Sharma, E. Collins, and J. Clark, “US Army Research Laboratory (ARL) Assessment of Energy-Efficient and Model-Based Control,” ARL-TR-8042, June 2017.