Imagine you’re picking a strawberry. First, you identify your target and move your hand towards it. As you move your arm, you can feel how it’s arranged in space and roughly where your hand is, even with your eyes closed (a sense known as proprioception). Your eyes are open now though, and you use them to help home in on the target – you compare where your hand is with where it should be and adjust to compensate.

Once you get closer, the strawberry is no longer visible – perhaps hidden behind your hand, or a leaf. But you know roughly where it is, so you can feel around until you find it. You grip the strawberry with just enough force to lift it without slipping as you tug it off the stalk. If you get it wrong, because the strawberry is unexpectedly wet or the stalk unexpectedly strong, you feel it slipping through your fingers and tighten your grasp to hold it securely. Maybe you give it a little squeeze to see whether it’s firm and ripe, or soft and rotten.

A simple enough task, but one that’s surprisingly hard to automate.

The problem with robots

Robots and automation have replaced human labour in many tasks requiring strength, speed and precision, from building cars to electronic assembly. A robot can do the same thing, in the same way, time and time again, without stopping or tiring. But humans still routinely outperform robots on tasks that can’t be specified precisely, where executing the same series of motions every time is a flaw rather than an advantage.

Fruit-picking is one example; assembling an order in an internet retailer’s warehouse is another. These are dull, repetitive tasks that exact a considerable toll on the health of the people performing them, and they should be a prime target for automation. But doing a good job requires flexibility and judgement – which are hard to program into a robot! Even in cases where a fruit-picking robot can compete with a human worker on job performance (speed, yield, and so on), the robot is often many times more expensive than a human. In large part, this is simply because conventional robots are designed to solve a different class of problem. A factory robot needs to be strong, fast and precise. Strength and speed require heavy structural members and large, powerful motors.

As for precision – robots rely entirely on position sensing, equivalent to human proprioception. High-resolution encoders measure the angle of each joint, and a simple geometrical calculation gives the position of the end effector in 3D space. For this method to work accurately, the robot’s structure must not bend appreciably under the forces exerted by the payload, so it must be very rigid, which adds weight. To achieve sub-millimetre accuracy, industrial robots are heavy, bulky and power-hungry – much more so than you might expect given their payload and speed specifications.
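
To make that geometrical calculation concrete, here’s a minimal sketch of forward kinematics for a two-link arm moving in a plane. It’s an illustrative simplification (real industrial arms have six or more joints, and the link lengths here are invented), but it shows how encoder readings turn into an end effector position:

```python
import math

def forward_kinematics(theta1, theta2, l1=0.4, l2=0.3):
    """End effector position of a two-link planar arm.

    theta1, theta2: joint angles in radians, as read from the encoders.
    l1, l2: link lengths in metres (illustrative values, not taken
    from any real robot).
    """
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# A 30-degree shoulder angle and a 45-degree elbow angle:
x, y = forward_kinematics(math.radians(30), math.radians(45))
```

The calculation trusts the link lengths implicitly: if the arm bends under load, the computed position is simply wrong. That’s why rigidity – and hence weight – is so central to this design philosophy.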

What can be done?

A design philosophy centred on precisely sensing the end effector position works well for an industrial robot which executes the same task again and again. But, as the strawberry example shows, being able to perform a precise motion doesn’t help if you don’t know precisely where your target is, and all that weight and cost is superfluous. High-precision designs are a liability for taking robots out of the factory and on to the farm.

As engineers, we can often find inspiration in biological systems. That’s particularly true in cases like this, where humans can achieve far better performance than existing robot systems. What are we missing? Think back to the example we started with. Human proprioception is far less accurate than a robot’s position sense, but it doesn’t matter. We use a combination of several senses (sight, touch and proprioception) to perform manipulation tasks more precisely and dextrously than we could with any one alone.

One approach to making a more flexible robot is combining vision with a less accurate position sensor. This is necessary and important work, but it doesn’t solve every problem. If the line of sight to the target is obscured (by a leaf, or by part of the robot itself), we must fall back on position sensors and modelling. It’s not always easy to tell visually whether you have a secure grip on an object (particularly something with an irregular shape), which can lead to objects being dropped. Finally, processing visual data quickly enough to control an actuator is costly in terms of computation and power consumption.
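
As a rough illustration of the vision-plus-position approach – not a description of any particular system – a single step of an image-based correction loop might look like the sketch below, with the occlusion fallback made explicit:

```python
import numpy as np

def visual_servo_step(gripper_px, target_px, gain=0.002):
    """One step of a simple image-based correction loop (illustrative).

    gripper_px, target_px: (u, v) pixel coordinates of the gripper and
    the target, assumed to come from some upstream detector. Returns a
    small velocity command in the camera frame, or None when either
    detection is missing (e.g. a leaf blocks the view), in which case
    the robot must fall back on position sensing and modelling.
    """
    if gripper_px is None or target_px is None:
        return None  # occluded: fall back on proprioception
    error = np.asarray(target_px, float) - np.asarray(gripper_px, float)
    return gain * error  # proportional controller on the pixel error
```

Even this toy version hints at where the cost lies: the expensive part isn’t the arithmetic here, it’s producing gripper_px and target_px from camera frames quickly enough to close the loop.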

Touch sensing robotics

Cambridge Consultants has built a technology demonstrator to see how machine touch can help. It is a soft actuator with touch and slip sensors embedded in the gripping surface of each of its three fingers. The fingers are moulded from silicone and are hollow – there’s a fluid chamber running up the centre. By pumping air in and out of the chamber, we can flex the finger and apply force.
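
The demonstrator’s actual control law isn’t described here, so the following is a hypothetical sketch only: one way to regulate grip is a simple per-finger pressure loop that admits more air when the finger presses too lightly and bleeds air when it presses too hard.

```python
def update_chamber_pressure(target_force, measured_force, pressure,
                            k_p=0.5, p_min=0.0, p_max=150.0):
    """One control step for a pneumatic soft finger (hypothetical).

    target_force, measured_force: desired and sensed grip force (N).
    pressure: current chamber pressure (kPa).
    k_p, p_min, p_max: illustrative gain and safe pressure limits.
    """
    error = target_force - measured_force
    pressure += k_p * error  # pump air in if too light, out if too hard
    return min(max(pressure, p_min), p_max)  # clamp to a safe range
```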

A soft actuator is well suited to manipulating irregularly shaped or fragile objects (like fruit) because it will naturally wrap around an object to spread force over it. The finger surface is food-safe and cleanable – and the fingers are cheap, so if they become damaged or worn, they can simply be thrown away and replaced.

Our sensors are very low-cost piezoelectric parts mounted onto a flexible PCB and moulded into the finger. When they deform in reaction to a changing force, they output a small current. An object touching the gripping surface results in a spike in the measured current. More subtly, we use the same sensors to identify vibrations generated when an object slips against the gripping surface – just like when you drag a wet finger across a pane of glass. This technique allows us to pick up very small amounts of slippage with relatively widely spaced sensors, so it’s ideal for continuously adjusting grip strength to ensure an object is held securely.
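
To make the slip-detection idea concrete, here’s an illustrative sketch of how slip vibrations might be separated from ordinary contact events in the piezo signal. The demonstrator’s filter design and thresholds aren’t published, so every parameter below is a guess:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def detect_slip(piezo, fs=10_000, band=(200.0, 2_000.0),
                window=256, threshold=0.05):
    """Flag slip in a sampled piezo trace (illustrative parameters).

    Contact events appear as low-frequency spikes, while slip produces
    broadband vibration, so we band-pass the signal to reject the slow
    force changes and then threshold the short-window RMS energy.
    """
    sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
    vib = sosfilt(sos, np.asarray(piezo, dtype=float))
    energy = np.convolve(vib ** 2, np.ones(window) / window, mode="same")
    return np.sqrt(energy) > threshold  # per-sample slip flag
```

When the flag trips, the grip controller can respond by nudging the chamber pressure upwards – closing the loop between touch sensing and the soft actuator.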

The future

Our demonstrator is a step along the road to a smart end effector that can react to changes in its environment – for example, by adjusting its grip strength in response to a slipping object, as we’ve discussed. We’re also investigating using the same sensors to measure an object’s firmness, whether to gauge the ripeness of fruit or to help identify what is being held.

But the most important gains could come from using data from a touch-sensitive end effector as part of the robot’s overall control algorithm, complementing position sensing and machine vision. By combining all three senses we could build a robot for imprecise manipulation with better performance than the current state of the art. And by relaxing requirements on position sensing and vision, it could end up with a lower price tag too, making it competitive for many applications which are currently ill-suited to automation.

Author
Saajan Chana
Principal Embedded Systems Engineer

Saajan is a systems engineer with a background in embedded software and wide experience of multi-disciplinary sensor development, ranging from low-cost consumer products to high-value industrial sensors.