It’s harvest time in Europe. A season of celebration, when we’re reminded of romantic, pastoral scenes of farmers gathering in their crops, their activities characterised by honest, laborious and essentially manual toil. The reality, of course, is starkly different. Modern agriculture is a fiercely competitive business, and technology is driving vital change and delivering transformative automation. 

Improving end-to-end efficiency through the integration of technology

Such innovation has the potential to reduce manual workload and augment limited workforces. It enables more land to be seeded, treated and harvested in less time. More crops can be monitored and measured more frequently – and in more detail – than would ever be practical by hand. Here at Cambridge Consultants, we’re drawing on the latest developments in artificial intelligence and machine learning to help our agricultural clients prosper at harvest time, and all year round. 

This article is the last in a series – following ploughing, planting and growing – exploring closed-loop control. This is a crucial requirement on the road through automation to autonomy in farming. With the harvest theme in mind, I’m going to discuss how the three key aspects – sensing, feedback and control – can be applied to day-to-day farming. Let’s start with control. 

Controlling complex machines 

Combine harvesters are out in the fields, gathering the crops. They’re complex machines with many parameters that need to be adjusted – or controlled – to ensure optimum efficiency and the largest crop yield. Examples include blade height and rotor speed. Control is an important factor in other forms of harvesting too. 

Arguably, fruit and vegetable picking requires an even greater level of control in order to harvest produce without damaging it or the plant. And fruit can be hard to reach. For these reasons, fruit and veg picking is most often done by hand. An interesting adjacent question is: can a robot ever have the dexterity and sensitivity of touch required to pick delicate fruit? We’ll get to that – bear with me. 

Sensing and feedback 

Feedback is the ability to adjust control in response to an external parameter – corn height, for example. This naturally goes hand in hand with sensing, as you can’t feed back and correct without measuring both the external parameter and the current state of the machine – in this context, some form of harvester or robotic picker. Today, sensing and feedback is often a laborious manual process that needs to be repeated to ensure consistent equipment performance. 

If the sensing can be automated, the control can adapt in real time, through feedback, to changing conditions. For broad-acre farming, this automation removes the need for repeated manual calibration and ensures machinery is always harvesting as effectively as possible. For picking applications, sensing and feedback are fundamental to reaching around leaves, avoiding certain objects (not least people) and applying just enough pressure to grasp without damaging. Of course, with improved sensing comes the ability to better select produce for harvest – an essential requirement for consistent quality control. 
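To make that loop concrete, here’s a minimal sketch in Python. Everything in it is illustrative: read_crop_height and set_blade_height are hypothetical stand-ins for real sensor and actuator interfaces, and the gain is arbitrary.

```python
# Minimal closed-loop control skeleton: sense, compare against a target,
# correct. The sensor/actuator functions passed in are hypothetical
# placeholders for real hardware interfaces.

TARGET_CLEARANCE_M = 0.15  # desired gap between blade and top of crop
GAIN = 0.5                 # how aggressively to correct each cycle

def control_step(read_crop_height, set_blade_height, current_blade_m):
    """Run one iteration of the sense-feedback-control loop."""
    crop_height_m = read_crop_height()             # sense the field
    target_m = crop_height_m - TARGET_CLEARANCE_M  # desired machine state
    error = target_m - current_blade_m             # feedback
    new_blade_m = current_blade_m + GAIN * error   # control correction
    set_blade_height(new_blade_m)
    return new_blade_m
```

Run repeatedly, the loop keeps the blade tracking the crop as conditions change – which is precisely the repeated manual calibration it replaces.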

Advanced sensing 

Sensors come in a variety of forms, from tiny strain gauges to global satellite tracking systems. Advanced machine vision systems are increasingly prevalent across industry sectors because they can be used to assess a wide variety of situations. Just like your eyes can. 

Traditionally, machine vision systems were limited to production line environments where lighting conditions could be well controlled. The background is simple, the number and range of objects is limited, and objects are constrained to lie in a limited range of poses. With that much control over the camera scene, you can develop simple algorithms that detect certain object features with a high signal-to-noise ratio. 
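To illustrate that ‘controlled scene’ approach, here’s a sketch of the classic recipe – fixed threshold plus contour detection – assuming OpenCV 4 and a hypothetical image of a single back-lit part:

```python
import cv2

# Classic production-line vision: with controlled lighting and a plain
# background, a fixed threshold is often enough to separate the object.
# "part.png" is a hypothetical image of one back-lit part.
image = cv2.imread("part.png", cv2.IMREAD_GRAYSCALE)
assert image is not None, "expects a real image on disk"

_, mask = cv2.threshold(image, 128, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

# In a high signal-to-noise scene the largest contour is the part.
part = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(part)
print(f"Object at ({x}, {y}), size {w}x{h}")
```

Point the same code at a cluttered, sunlit field and it falls apart immediately – which is exactly the limitation described above.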

Nowadays, deep convolutional neural networks (CNNs) can be trained to detect and measure objects from a range of perspectives, and to be robust across a wide variety of environmental conditions. That puts them ahead of traditional algorithms, which must be carefully crafted to detect specific features of specific objects. 

Smart machine vision systems now have the potential to be used outdoors. When attached to farming equipment, they can detect and locate produce. They are adept at assessing ripeness, growth and disease in all weather conditions, and at all times of day. Vision systems can also monitor the farming equipment itself and measure its position relative to its environment. The only catch is that CNN-based machine vision systems need to be trained for a specific task on a large amount of data covering the full spectrum of conditions. 

The lack of commercially available, application-specific, labelled datasets is the factor currently limiting the availability of smart farming machine vision systems. But things are changing, and it is easier than ever to attach a vision system to existing farming equipment and gather the data required to train a CNN to automate a monitoring task. 

At Cambridge Consultants, we have developed many CNN machine vision systems for both indoor and outdoor farming applications. Projects typically begin with gathering suitable data to target a specific application, such as identifying and marking weeds, pests or disease for treatment. Our CNN ‘semantic segmentation’ algorithm can then be trained with this data and customised for a range of industrial and farming applications. We’ve produced a demonstration that can label green weeds on green grass for treatment – a traditionally hard machine vision problem – and it works out in the real world under a wide range of lighting conditions. 
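To give a feel for the shape of such a system – and this is a toy stand-in, not our production algorithm – here’s a per-pixel classifier in PyTorch, with architecture, image size and data all chosen purely for illustration:

```python
import torch
import torch.nn as nn

# Toy semantic segmentation model: a few convolutions mapping an RGB
# image to per-pixel weed/not-weed scores. Illustrative only.
class TinySegmenter(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_classes, 1),   # per-pixel class scores
        )

    def forward(self, x):
        return self.net(x)

model = TinySegmenter()
images = torch.randn(4, 3, 128, 128)         # stand-in for field photos
masks = torch.randint(0, 2, (4, 128, 128))   # stand-in labelled masks
loss = nn.CrossEntropyLoss()(model(images), masks)
loss.backward()                              # gradients for one training step
```

The real work, as ever, is in the labelled data rather than the dozen lines of model code.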

Advanced feedback and control 

It’s hard for me to talk about feedback without mentioning the letters P, I and D. A full discussion of their meaning is beyond the scope of this article, but suffice it to say that PID controllers have a huge variety of applications wherever you need to adjust a control to meet a certain criterion. Environment control systems use them, for example, to maintain the ideal temperature within indoor growing facilities. 

On a recent project to develop an autonomous agricultural rover, I used a PID controller on the heading to keep the robot precisely following a path without colliding with the crop. PID controllers work well when measurements are direct and independent, and a given action has a direct effect on a single parameter. But for complex, interrelated systems, PID control doesn’t work well. The harvest process itself, for example, must maximise revenue from produce while minimising cost – which means complex control of process variables such as speed, quality control and waste production. Fortunately, recent advances in deep learning can overcome this limitation, enabling closed-loop control to be used in complex systems and bringing automation benefits that have yet to be realised. 
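For the curious, here’s what that heading loop looks like in skeleton form – a minimal sketch, not the actual rover code, and the gains are illustrative:

```python
# Minimal PID controller. The output combines the present error (P),
# its accumulated history (I) and its current trend (D).
class PID:
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

heading_pid = PID(kp=1.2, ki=0.05, kd=0.3)   # illustrative gains
# Each control cycle: error = desired heading minus measured heading,
# and the output becomes the steering command.
steer = heading_pid.update(error=0.1, dt=0.02)
```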

We are working on two interrelated areas of AI that can produce natural, environment-sensitive feedback and control close to what you would expect from a human: imitation learning and reinforcement learning. Both techniques can be used to train robotic systems to replace or augment intensive manual labour in farming, and hence enable more food to be produced at lower cost. 

Our imitation learning work was inspired by a great paper on teaching a robot hand to pick up a fish. It’s an interesting problem, and closely analogous to fruit picking. There’s no obvious solution – fish are floppy, slippery and easily crushed, and can lie in a pile in many different ways. The best way to learn to pick is through experience, and a really good way to learn quickly is by copying someone – usually a human. 

In imitation learning, you show the AI model – via a machine vision system – many examples of how you would pick up an object in lots of different scenarios. The model learns not only to repeat those actions, but also to generalise to situations it hasn’t seen before. Once trained, you can replicate the AI model and have multiple robots do the work of a single person. 
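In its simplest form – behavioural cloning – that amounts to supervised learning on recorded demonstrations. Here’s a minimal PyTorch sketch, with observation and action dimensions chosen purely for illustration:

```python
import torch
import torch.nn as nn

# Behavioural cloning: fit a policy that maps what the camera sees to
# the action a human demonstrated. Dimensions and data are placeholders.
policy = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),   # 64-dim vision features (assumed)
    nn.Linear(128, 7),               # 7 arm/gripper commands (assumed)
)
optimiser = torch.optim.Adam(policy.parameters(), lr=1e-3)

demo_obs = torch.randn(256, 64)      # stand-in for recorded observations
demo_act = torch.randn(256, 7)       # stand-in for demonstrated actions

for _ in range(100):                 # supervised training on the demos
    optimiser.zero_grad()
    loss = nn.MSELoss()(policy(demo_obs), demo_act)
    loss.backward()
    optimiser.step()
```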

As with CNN vision systems, generating or acquiring an appropriate dataset to train a control model via imitation learning can be a hurdle. We have shown that training can be done in a virtual environment. This means the system can be trained on more machine-generated scenarios, more quickly and easily, than one trained through manual demonstration. Ultimately, this results in a more robust system and cheaper development. 

As a demonstration, we have developed an AI-controlled robot arm that was trained through imitation learning in a virtual environment built with the Unity game engine. When random toys are placed in front of the arm, it picks them up and puts them in a box. The demonstration is deliberately contrived, but the underlying technology can be adapted to other difficult picking problems, such as packing crates with food produce. 

The way to reinforce learning 

With reinforcement learning, you let the AI system attempt a task many times, often in a virtual environment. After every attempt it is scored: positively for good behaviour – like touching or picking up fruit – and negatively for bad behaviour – colliding with objects or squashing the fruit. Eventually, like a human, the system learns strategies to achieve the task in different situations. 
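The scoring itself is just a reward function. Here’s a toy version – the event names are hypothetical, and a real system would derive them from simulator state at every step:

```python
# Toy reward function of the kind used in reinforcement learning.
def reward(events):
    r = 0.0
    if events.get("touched_fruit"):
        r += 1.0     # encourage gentle contact
    if events.get("picked_fruit"):
        r += 10.0    # the behaviour we actually want
    if events.get("collision"):
        r -= 5.0     # discourage hitting objects
    if events.get("squashed_fruit"):
        r -= 10.0    # damaged produce is worse than none
    return r

# Over thousands of simulated attempts, the learning algorithm adjusts
# its policy to maximise the total reward per episode.
```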

Reinforcement learning is not just for robotic arms. We’re currently working on a mobile robot that navigates a dynamic environment, naturally avoiding people going about their business as well as inanimate objects that might appear in its path. (The system gets scored very badly for hitting people, by the way.) On the farm, such a system could work cooperatively with humans, carrying produce across a field from picker to packer. 

I find all of these recent advances in AI really exciting. They have the potential to improve farming equipment to an extent not seen since the invention of the tractor. Cambridge Consultants is at the forefront of these developments, and we’re passionate about turning them into competitive advantage for our clients. Please drop me an email to discuss any aspect of the topic, and to explore in more detail how to reap those benefits. 

Author
Joe Smallman
Technical lead

Joe develops sensors and algorithms to extract useful information from complex physical systems. His specialties include machine vision systems and robotics.