Mathieu Kemp, Principal Engineer
To explain what machine autonomy is, let’s start with automatic control. Automatic control is software and hardware that lets a machine regulate a quantity. A good example is the cruise control in your car, which maintains speed; another is a thermostat, which maintains temperature. Automatic control is everywhere: in your toaster, your appliances, your car, every airplane, every computer chip, every military drone, every quad-rotor, every unmanned ground vehicle, and every unmanned underwater vehicle. It is also everywhere in nature, where biologists call it autonomic regulation: it is what keeps your body temperature, blood sugar level, blood sodium level, and so on constant.
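To make the regulation idea concrete, here is a minimal sketch of the feedback loop a cruise control might run: measure the quantity, compare it to the setpoint, and push back against the error. The gains, drag model, and time step below are invented for illustration and are not any real vehicle's parameters.

```python
# Toy proportional feedback loop, the kind of regulation a cruise control
# or thermostat performs. All constants here are made up for illustration.

def proportional_control(setpoint, measurement, gain=0.5):
    """Return a control action proportional to the error."""
    error = setpoint - measurement
    return gain * error

speed = 20.0    # current speed, m/s
target = 25.0   # desired cruise speed, m/s
dt = 0.1        # time step, s

for _ in range(100):
    throttle = proportional_control(target, speed)
    speed += throttle * dt          # toy plant: speed follows throttle
    speed -= 0.01 * speed * dt      # toy drag slowing the car down

# After enough steps, speed settles just below the target: a pure
# proportional controller leaves a small steady-state error against drag.
```

The residual error is why practical controllers add integral and derivative terms (PID), but the regulation principle is the same.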
Autonomy is different: the goal is not to regulate, but to deal with the real world. In autonomous driving, for example, the goal is to have a machine drive safely whether it’s in downtown Chicago or on Route 66, on dry or icy roads, in daytime or at night, with or without a human. A simple way to put the difference: automatic control is what keeps the car going straight, and autonomy is what keeps it from crashing into a wall.
So how do we get there? The Society of Automotive Engineers (SAE) recently published a taxonomy of driving automation to help manufacturers focus their investments. It defines six levels of autonomy: No Automation, Driver Assistance, Partial Automation, Conditional Automation, High Automation, and Full Automation. Full Automation will be achieved when the vehicle can drive under any condition, with or without a human on board. Car manufacturers are investing heavily in this technology, and so are chip manufacturers.
Autonomy is extremely challenging because it asks completely different questions. What am I seeing? Is it an obstacle? Will I damage it? Will it damage me? Should I slow down? Accelerate? Can I accelerate fast enough? Will I slip? Is that car too close? These are all “contextual” questions, but because the goal is to create working technology, what five years ago belonged to philosophy must now be turned into practical solutions. We need mathematical tools to “understand” any situation and to encode “consequences”. We need to describe all of it mathematically, and we can’t just go in and code every single case, because the real world is unstructured.
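One way to give a flavor of “encoding consequences” mathematically is a hazard cost that answers “is that car too close?” as a number rather than a rule. The functional form and every constant below are invented purely for illustration; they are not a method from this lab.

```python
# Toy illustration of encoding a "consequence" as math: a hazard cost
# that grows as the gap to an obstacle shrinks relative to how far the
# machine needs to stop. All constants are invented for illustration.

import math

def hazard_cost(distance_m, speed_ms, braking_ms2=7.0):
    """Higher cost = riskier situation. Purely illustrative."""
    stopping_distance = speed_ms**2 / (2 * braking_ms2)
    margin = distance_m - stopping_distance
    # Cost approaches 1 as the safety margin vanishes, and decays
    # toward 0 when the margin is comfortably large.
    return math.exp(-margin / 5.0)

# Same 30 m gap, different speeds: the faster machine sees a much
# higher cost, because its stopping distance eats the margin.
slow = hazard_cost(30.0, 10.0)
fast = hazard_cost(30.0, 20.0)
```

The point is not this particular formula but that a single continuous number can rank situations by risk, which is what a planner can actually optimize over.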
The focus of this lab is a subset of machine autonomy we call machine awareness, and in particular machine self-preservation: what kind of autonomy does a machine need to survive in the world? This is particularly relevant for the marine sciences, where funding is limited and expensive assets are regularly deployed in an unstructured world with severe communication limitations.
Our basic thesis is that self-preservation requires three elements. First, the machine needs to predict its “state” (the quotes will be explained shortly). Second, it needs a way of computing a “harm-to-self” function that depends on that state, i.e. a cost function. Third, it needs a mechanism for avoiding hazards. So far, this is classical fault detection/isolation/prognostication/recovery; see the diagram below. The research twist for us is that, given the complexity of the world, the state, the cost function, and the hazard-avoidance mechanism must all emerge from the machine’s own experience. And unlike nature, we don’t have 3.5 billion years to evolve solutions, so we’d better roll up our sleeves…
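The three elements above can be sketched as one tiny loop: a forward model predicts the state, a harm-to-self function scores it, and the action with the lowest predicted harm is chosen. Every model, name, and threshold here is a hypothetical placeholder for exposition, not the lab’s actual approach (which, as noted, must be learned from experience rather than hand-coded).

```python
# Sketch of the three elements of self-preservation described above.
# All models and constants are hypothetical placeholders.

def predict_state(state, action, dt=0.1):
    """Element 1: toy forward model -- position advances with velocity,
    velocity changes with the commanded acceleration."""
    position, velocity = state
    return (position + velocity * dt, velocity + action * dt)

def harm_cost(state, wall_position=10.0):
    """Element 2: harm-to-self grows as the machine nears a wall."""
    position, velocity = state
    gap = wall_position - position
    if gap <= 0:
        return 1e6  # crashed: enormous harm
    return velocity**2 / gap  # fast and close is worse than slow and far

def choose_action(state, candidate_actions=(-2.0, 0.0, 2.0)):
    """Element 3: avoid hazard by picking the lowest predicted harm."""
    return min(candidate_actions,
               key=lambda a: harm_cost(predict_state(state, a)))

# A machine 8 m along at 5 m/s, with a wall at 10 m: braking (-2.0)
# yields the lowest predicted harm among the candidate actions.
best = choose_action((8.0, 5.0))
```

In our setting the twist is precisely that `predict_state` and `harm_cost` cannot be written down like this; they have to emerge from the machine’s own experience.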