A structure engulfed in flames poses extreme danger to fire-fighting personnel as well as to any people trapped inside. A companion robot assisting the fire-fighters could help speed up the search for humans while reducing risk to the fire-fighters. However, robots operating in these environments must cope with very low visibility caused by heavy smoke, debris, and unstructured terrain. This paper develops an audio classification algorithm to identify sounds relevant to fire-fighting, such as people in distress (baby cries, screams, coughs), structural failure (wood snapping, glass breaking), fire, fire trucks, and crowds. The outputs of the classifier are then used as alerts for the fire-fighter or to modify the configuration of a robot capable of navigating unstructured terrain. The approach extracts an array of features from audio recordings and employs a single hidden layer, feed-forward neural network for classification. The network's simple structure allows it to run on limited hardware while achieving an overall classification accuracy of 85.7%.
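To make the pipeline concrete, the following is a minimal sketch of the two stages the abstract describes: a feature extractor over an audio frame and a single hidden layer, feed-forward classifier. The specific features (RMS energy, zero-crossing rate, spectral centroid), the class list, the hidden-layer size, and the random placeholder weights are all illustrative assumptions, not the paper's trained model.

```python
import numpy as np

# Sound classes named in the paper's task description.
CLASSES = ["baby_cry", "cough", "scream", "crowd", "fire",
           "fire_truck", "glass_break", "wood_snap"]

def extract_features(signal, sr=16000):
    """Toy feature vector: RMS energy, zero-crossing rate, and a
    normalised spectral centroid (stand-ins for the paper's feature array)."""
    rms = np.sqrt(np.mean(signal ** 2))
    zcr = np.mean(np.abs(np.diff(np.sign(signal)))) / 2.0
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sr)
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    return np.array([rms, zcr, centroid / (sr / 2.0)])

class OneHiddenLayerNet:
    """Single hidden layer, feed-forward classifier, matching the
    architecture the abstract names; weights here are random placeholders."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0.0, 0.5, (n_hidden, n_out))
        self.b2 = np.zeros(n_out)

    def forward(self, x):
        h = np.tanh(x @ self.W1 + self.b1)      # hidden activation
        logits = h @ self.W2 + self.b2
        e = np.exp(logits - logits.max())       # numerically stable softmax
        return e / e.sum()

# Classify one second of synthetic audio standing in for a recording.
sr = 16000
t = np.linspace(0.0, 1.0, sr, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t)
x = extract_features(signal, sr)
net = OneHiddenLayerNet(n_in=x.size, n_hidden=16, n_out=len(CLASSES))
probs = net.forward(x)
label = CLASSES[int(np.argmax(probs))]
```

A forward pass of this size is a few hundred multiply-adds, which is consistent with the abstract's point that the simple network structure suits limited onboard hardware.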
The Robotic Platform
Using an onboard mobile audio classifier, the robot is trained to distinguish specific sounds to aid rescue efforts: a baby crying, a cough, a scream, crowd noise, fire, a fire truck, glass shattering, and wood snapping. Each stimulus is mapped to a corresponding behavior: the robot sends a search-and-rescue signal, reduces its speed of motion, increases its speed, sends an alert signal to evacuate, or continues the search as normal. In the figure above, (a) the robot hears crowd noise, reduces its speed, and retracts its legs; (b) when it detects fire, it increases its speed and reconfigures its legs to improve mobility over uneven ground.
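The stimulus-to-behavior mapping can be sketched as a simple dispatch table. The class names, behavior labels, and the particular pairings below are illustrative assumptions based on the examples in the text (crowd noise slows the robot, fire speeds it up and reconfigures the legs), not the paper's exact assignments or API.

```python
# Hypothetical mapping from classifier output to robot behavior; the
# pairings are inferred from the text's examples and are illustrative only.
BEHAVIORS = {
    "baby_cry":    "send_search_and_rescue_signal",
    "cough":       "send_search_and_rescue_signal",
    "scream":      "send_search_and_rescue_signal",
    "crowd":       "reduce_speed_and_retract_legs",
    "fire":        "increase_speed_and_reconfigure_legs",
    "glass_break": "send_evacuation_alert",
    "wood_snap":   "send_evacuation_alert",
    "fire_truck":  "continue_search",
}

def respond(detected_class):
    """Return the behavior for a detected sound class, defaulting to the
    normal search pattern when the label is unrecognized."""
    return BEHAVIORS.get(detected_class, "continue_search")
```

Keeping the mapping in a table rather than in control-flow code makes it easy to retune responses without touching the classifier or the locomotion logic.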