We help our customers transform industries and enable new ones through automation and human augmentation.
An interesting area in robotics and automation at the moment is fulfilment in the retail space, especially when considering the impact of reverse logistics and omnichannel.
Today, fulfilment is generally partially automated. State-of-the-art fulfilment centres already have plenty of automation in the tasks they perform. Pioneers like Amazon and Ocado, for example, use robots to bring goods to people – this can be up to 40 times more efficient than having a person walk around a store.
But although the transport is automated, bin picking is not, so there’s always a human in the loop.
External pressures are encouraging businesses to increase automation where possible, and new technology is making more tasks amenable to automation.
In fulfilment, or in any robotics system, we can break the process down into three broad steps.
- We have sense, where the robot determines the state of the environment around it using vision, or other electronic means. This can be a single sensor or a combination of several different sensors.
- Then there is the compute step, where the sensor data is analysed to decide what to do based on the information it has gathered. This can be simple or extremely complex.
- Finally, there’s an actuation step, where the robot acts on that decision – moving an object, for example.
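The three steps above can be sketched as a simple control loop. This is a minimal illustration only – the sensor and actuator interfaces here are hypothetical, not a real robot API:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    object_id: str       # what the vision system thinks it is seeing
    position_mm: tuple   # where the depth sensor places it (x, y, z)

def sense() -> Observation:
    # In a real system this would fuse camera, depth and touch data.
    return Observation(object_id="bin_42", position_mm=(120.0, 85.0, 30.0))

def compute(obs: Observation) -> tuple:
    # Decide what to do; here we simply target the sensed position.
    return obs.position_mm

def actuate(target_mm: tuple) -> str:
    # Command the arm; a real driver would return status codes instead.
    return f"moving gripper to {target_mm}"

# One pass through the sense-compute-actuate loop.
obs = sense()
target = compute(obs)
print(actuate(target))
```

Real systems run this loop continuously, with the sensing step re-checking the world after every action.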
I’ll look at each of these in turn, to see what challenges they hold for increasing automation in this space.
So, one of the secrets of robotics – most robots are pretty dumb. The ones you see in car production lines will move a car door from one place to another in a fraction of a second with millimetre accuracy – but they’ll do that whether or not there’s a person in the way. And one reason they’re dumb is a lack of sensing. It can be surprisingly hard for a robot to sense as effectively as a person can. The traditional approach is to make sensing unnecessary – control the environment so you know what’s coming.
This may work in a factory, but it’s no good in a fulfilment situation. The objects are too varied, the environment less controlled. So you need to sense, and the ideal solution is to use many different sensors, just like people do – a combination of vision to determine what you’re picking, depth sensing to work out where it is, and touch to confirm you’ve got it, for example.
All these things exist; the hard part is figuring out a cost-effective combination that works for your particular problem.
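As an illustration of how the modalities work together, a pick routine might let vision propose the object, depth localise it, and touch confirm the grasp. The fusion rule and thresholds below are made up for the example:

```python
def confirm_pick(vision_conf: float, depth_z_mm: float, touch_pressure_kpa: float) -> bool:
    """Hypothetical fusion rule: accept a pick only when all three
    sensing modalities agree the object is really in the gripper."""
    seen = vision_conf > 0.8             # vision is confident about the object
    in_range = 0.0 < depth_z_mm < 500.0  # depth says the object is reachable
    held = touch_pressure_kpa > 1.5      # touch confirms contact force
    return seen and in_range and held

print(confirm_pick(0.95, 120.0, 3.2))  # all three agree -> True
print(confirm_pick(0.95, 120.0, 0.0))  # no touch signal -> False
```

The point is not the thresholds but the structure: no single sensor is trusted on its own.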
Again, here, most robot arms are fairly unsophisticated – they play back exactly the same actions over and over again, so very little computation is required. More innovative robots can deal with semi-structured challenges – where there is variation in the objects or clutter in the environment – as long as the sensing is good enough. More complex warehouse systems need to co-ordinate across multiple robots too.
It’s clear that AI is a disruptive technology here, but it’s not a solution to all problems. For the right task it can be amazing, beating even the best humans, but that often requires gathering lots of training data. There are solutions to the training-data problem – our AI team has had a lot of success with transfer learning and synthetic data, for example.
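Synthetic data, for instance, means generating labelled training examples rather than collecting them by hand. A toy sketch of the idea, with illustrative parameter ranges rather than a real rendering pipeline:

```python
import random

def synthesize_examples(base_label: str, n: int, seed: int = 0):
    """Toy synthetic-data generator: jitter pose and lighting parameters
    around a nominal object so a classifier sees realistic variation.
    The ranges here are illustrative, not from a production pipeline."""
    rng = random.Random(seed)
    examples = []
    for _ in range(n):
        examples.append({
            "label": base_label,                  # label comes for free
            "rotation_deg": rng.uniform(0, 360),  # object orientation
            "scale": rng.uniform(0.9, 1.1),       # apparent size variation
            "brightness": rng.uniform(0.7, 1.3),  # lighting variation
        })
    return examples

data = synthesize_examples("pcb_high_copper", 3)
print(len(data), data[0]["label"])
```

Because every example is generated, the label is known exactly – which is precisely the expensive part of hand-collected datasets.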
When it comes to actuation, it’s really hard to match the flexibility of a human hand, so actuators need to be designed to match the things they’re picking, and unlike software, you can’t just download a new upgrade.
The best actuators are designed in tandem with the sensing. On one project we had to manipulate an object that was dirty. The original plan was a very accurate gripper that selected a precise point on the edge of the object to grip from. The problem was, it wasn’t fast enough. In the end, we used a much faster algorithm that gave a rougher location, but we completely changed the form of the end gripper so that it didn’t need a precise location. That wouldn’t have been possible if the gripper and the sensing hadn’t been designed together.
Example project – classifying similar-looking objects
This is a system we developed that sorts PCBs into different buckets, so they can be recycled. Crudely put, the more copper in a PCB, the more valuable it is, so if you can put all the high-value ones in one place you get a better, more predictable return. Similarly, if there are expensive chips (e.g. microprocessors) you want to spot those and remove them to be recycled separately.
This technology is interesting for reverse logistics as well as recycling, as you’re identifying slightly different objects among visual noise, and classifying them based on small changes.
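The sorting logic itself can be very simple once the hard perception work is done. A crude sketch of the routing decision described above – the thresholds and category names are invented for illustration:

```python
def classify_pcb(copper_fraction: float, has_large_chips: bool) -> str:
    """Illustrative sorting rule: boards with expensive chips are routed
    for component recovery; otherwise sort by estimated copper content.
    The 25% threshold is made up for this example."""
    if has_large_chips:
        return "component_recovery"
    if copper_fraction > 0.25:
        return "high_value"
    return "low_value"

print(classify_pcb(0.30, False))  # high copper, no chips -> high_value
print(classify_pcb(0.10, True))   # expensive chips present -> component_recovery
```

In practice the inputs to a rule like this come from the vision system – estimating copper content and spotting chips among visual noise is where the real difficulty lies.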
Example project – soft robotics and slip sensing
Soft robotics is a growing field of interest, and it can be particularly effective when trying to manipulate a large range of irregularly shaped objects.
That’s where our experience in Adaptive Robotics comes into play: we have developed a capacitive sensor that can detect touch, gripping force and slip, all with the same low-cost, three-fingered end effector. The effector senses the object in three ways:
- Touch: the measured quantity is the change in pressure.
- Gripped object elasticity: we vary the pressure in the air chamber according to a known modulation signal and measure the response.
- Slip: measured by tracking the change in contact point along the sensor array.
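As a rough sketch of that third measurement, slip can be detected by tracking the pressure-weighted contact point along the sensor array between frames. The array values and threshold below are illustrative, not real sensor data:

```python
def contact_point(pressures):
    """Pressure-weighted centroid of the contact along the sensor array."""
    total = sum(pressures)
    if total == 0:
        return None  # no contact
    return sum(i * p for i, p in enumerate(pressures)) / total

def detect_slip(frame_a, frame_b, threshold=0.5):
    """Flag slip when the contact point moves more than `threshold`
    sensor elements between consecutive frames."""
    a, b = contact_point(frame_a), contact_point(frame_b)
    if a is None or b is None:
        return False
    return abs(b - a) > threshold

held    = [0, 2, 8, 2, 0]  # contact centred on element 2
slipped = [0, 0, 2, 8, 2]  # contact has moved one element along
print(detect_slip(held, slipped))  # contact point shifted -> True
```

Detecting slip early like this lets the controller increase gripping force before the object is dropped.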
The future for automation
First, a note of caution – automation isn’t always the answer. It’s an expensive thing to deploy, so it has a long payback period. Typically, innovative robotics is a development process rather than buying off-the-shelf parts, so you need to amortise the development cost across the deployment – 5 years is often a good payback time.
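A back-of-the-envelope model shows why amortisation across the deployment matters. All the figures below are invented for illustration:

```python
def payback_years(dev_cost: float, deploy_cost_per_site: float,
                  sites: int, annual_saving_per_site: float) -> float:
    """Simple payback model: total up-front cost divided by total
    annual saving, with the development cost spread over every site."""
    total_cost = dev_cost + deploy_cost_per_site * sites
    return total_cost / (annual_saving_per_site * sites)

# One site bears the whole development cost...
print(round(payback_years(1_000_000, 200_000, 1, 300_000), 1))   # 4.0 years
# ...but across ten sites the same development amortises quickly.
print(round(payback_years(1_000_000, 200_000, 10, 300_000), 1))  # 1.0 years
```

This is why multi-site deployment, or partnering with a manufacturer, is so often part of the business case.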
In fact, the drivers we’re seeing across a range of industries are not about trying to save money – they’re more about automating jobs that businesses just can’t get the labour to do. And let’s be clear, these aren’t highly skilled jobs, these aren’t jobs people aspire to or want their children to do, so if there’s a better way to earn a living, people will take it. And this is true across sectors – we’re seeing a strong push towards automation in agriculture, for example, not to save money, but because of labour shortage.
It’s clear that automation will expand. But these technologies are not off the shelf, so they will only appear when the business case makes sense. This can mean deployment across multiple sites or partnering with a manufacturer.
But the technology is there – we’re not going to see human-shaped robots walking down the street, but more tasks will be automated. For companies that are willing to invest in development, new automation is a year or two from market.
We’re not going to lose people completely. More of the dull, repetitive tasks may be pushed on to robots, but there’ll always be a person doing the last 5% or 1% of the job.
Robotics is really exciting at the moment; there is a lot changing. Companies with the courage to spend money can be really disruptive, and the prize is massive.