- The Australian Centre for Robotic Vision, the world’s first research centre specialising in robotic vision, is bringing us ever closer to the lifestyle showcased in the 1960s cartoon The Jetsons
- The Centre’s world-leading researchers are already giving robots the vision, understanding and hand-eye coordination to solve problems
29 September 2017: While many of the gadgets featured on The Jetsons, the futuristic cartoon launched in 1962, are now commonplace (think smart watches, video calls, flatscreen TVs and drones), we’re still a long way off Rosie, the hard-working robotic maid, according to one of the world’s leading robotic vision experts. Or are we?
While we’ve made great strides in robotic evolution, the biggest challenge is enabling robots to see where they are so that they can carry out tasks in increasingly uncontrolled environments. The Australian Centre for Robotic Vision, headquartered at QUT, is a collaboration between researchers at QUT, the University of Adelaide, the Australian National University and Monash University, and brings together the world’s top researchers to unlock this critical challenge.
“Enabling a robot to compare what it ‘sees’ with a database of images, and to translate that situation into tasks, involves a mind-bogglingly complex combination of algorithms, advanced 3D geometry and machine learning,” the Centre’s Ian Reid says.
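The matching step Reid describes can be pictured, in vastly simplified form, as nearest-neighbour search over feature vectors. The sketch below is purely illustrative: the scene names, three-dimensional vectors and Euclidean distance are invented stand-ins (real systems use learned descriptors with thousands of dimensions).

```python
import math

# Hypothetical database: each known scene reduced to a tiny feature vector.
DATABASE = {
    "kitchen":  [0.9, 0.1, 0.3],
    "corridor": [0.2, 0.8, 0.4],
    "lab":      [0.5, 0.5, 0.9],
}

def closest_match(query):
    """Return the database scene whose feature vector is nearest the query."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(DATABASE, key=lambda name: dist(DATABASE[name], query))

# A camera view that resembles the stored "kitchen" vector matches it.
print(closest_match([0.85, 0.15, 0.25]))
```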
The Centre is making great progress in giving the next generation of robots the vision and understanding to help solve real global challenges. It is now working on the next evolution of Harvey, the capsicum-picking robot developed by the Australian Centre for Robotic Vision last year; however, there’s still a lot of work to be done before we can enjoy the luxury of our very own Rosie.
“There’s a fundamental disconnect between what we roboticists say and what the public perceives,” Ian says.
“Robots used for search and rescue, for example, are little more than mechanical platforms that are remote controlled. Even in industries like mining, the trucks may be considerably smarter, but they’re still tied to a fixed environment. The robots we have at the moment in factories are simple and highly effective machines relying on absolute precision but they don’t work outside the factory.”
“At the heart of the problem is what we call robotic vision – how to use a camera to guide a robot to carry out tasks in increasingly uncontrolled environments.”
So how soon will we see robots that move, understand and make decisions just like Rosie?
“There is a significant difference between the closed world of the factory floor and the open environment of the real world where there can be no guarantees that the robot has actually seen everything,” Ian says. “For that we need to be able to understand uncertainty, to decide how confident we are in what the computer perceives.”
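One common way to act on the uncertainty Reid describes is to have the perception system report a confidence alongside its answer, and defer when that confidence is too low. The sketch below is an assumption-laden illustration, not the Centre's method: the logits, labels and 0.8 threshold are invented for the example.

```python
import math

def softmax(logits):
    """Turn raw classifier scores into probabilities that sum to 1."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def decide(logits, labels, threshold=0.8):
    """Act only when the most likely label clears the confidence threshold;
    otherwise admit uncertainty rather than guess."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best]
    return "uncertain"

labels = ["capsicum", "leaf", "stem"]
print(decide([4.0, 1.0, 0.5], labels))   # clear winner: acts
print(decide([1.0, 0.9, 0.8], labels))   # near-tie: defers
```

The point of the threshold is exactly Reid's: in the open world the robot must know when it has not seen enough to be sure.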
This means that “seeing” robots will slowly enter our lives, working first in semi-open environments where there are fewer variables.
Ian’s colleague Rob Mahony is prepared to make some predictions.
“The pieces will be in place within five years and robots will become more common in semi-structured environments,” he says. And once the technology is on the ground, you will see companies begin to exploit it. “You’ll see the big money industries move first,” he says, “with mining, already a leader, rapidly expanding the role of robots.”
“In rich countries like Japan, where there are also demographic challenges, you will see a big increase in social robotics – in aged care, robotic companions and robotic pets,” Mahony predicts. “Probably within 15 years, robots that can move, understand and make decisions will be a major part of our lives.”