Data Science

Train as you fight. Why Computer Vision needs good data

AI for image analysis needs good visual material. Our expert Laura Fink explains why and how, using an example from agriculture.

AI and Computer Vision in agriculture

Many powerful machine learning techniques in AI are so-called black-box algorithms. This means that they do not offer direct access to what has been learned – they are usually too complex for that. As a result, data collection mistakes with a strong effect on the outcome can go unnoticed. I will show why it is important to address the explainability of machine learning algorithms and how doing so helps to develop better solutions. To illustrate the argument, I will use an agricultural setting: the goal is plant species recognition.

Common wheat or black grass?

In agriculture, it can be advantageous to identify the plants growing in a field at an early stage: Plants compete strongly for nutrients and water in their early germination phase. Thus, identifying and removing undesirable wild plants will strengthen the growth of the desired cultivated plants. AI in agriculture is therefore a technology of the future.

The research group “Computer Vision and Biosystems Signal Processing” at Aarhus University offers a public database on the Internet that contains, among other things, photos of twelve different wild and cultivated plants that are frequently found in Danish fields. We can use this data to develop an algorithm for identifying these plants. However, if you approach this task naively, you might introduce a non-obvious error that will lead the algorithm to fail in the field or greenhouse. What exactly is this error?

At first glance, the result looks quite good!

For a small amount of data, say about 3,800 images, it is common to use “transfer learning” to train an artificial neural network. With this method, an already pre-trained network such as ResNet is adapted to a new task by fine-tuning individual neural connections. In this way, the large amount of training data that neural networks usually need to yield good results can be avoided.
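As a rough illustration, such a transfer-learning setup could look like the following sketch in PyTorch. Freezing the entire pre-trained backbone and training only a new classification head is just one common variant; the choice of ResNet-50, the learning rate and the variable names are illustrative assumptions, not the exact configuration used for the plant data.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 12  # twelve wild and cultivated plant species

# Load a ResNet that has already been pre-trained (here on ImageNet).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Freeze the pre-trained backbone so that only the new layers are adapted.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with a fresh one for our 12 species.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# Only the parameters of the new head are handed to the optimizer.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```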

If we now apply this method to our plant images and test it on 1,600 images the algorithm has never seen before, ResNet predicts the correct species in 80 % of the cases. This might sound promising at first, but it remains unclear which characteristics in the images were decisive for classifying the individual plants. For this reason, we cannot blindly trust the seemingly good result and simply apply our algorithm to actual agricultural purposes.
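Measuring the accuracy on such held-out images could look roughly like this; the `val_loader` (a PyTorch DataLoader over the unseen images) is an assumed ingredient, not part of the original experiment.

```python
import torch

def evaluate(model, val_loader, device="cpu"):
    """Top-1 accuracy on a set of images the model has never seen."""
    model.eval()
    correct, total = 0, 0
    with torch.no_grad():
        for images, labels in val_loader:
            images, labels = images.to(device), labels.to(device)
            predictions = model(images).argmax(dim=1)
            correct += (predictions == labels).sum().item()
            total += labels.size(0)
    return correct / total

# accuracy = evaluate(model, val_loader)  # roughly 0.80 in the setting described above
```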

Trust is good – but control is better

Today, tools like SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-Agnostic Explanations) help to debug machine learning algorithms and to explain their predictions. In our plant example, using LIME shows very quickly that the Artificial Neural Network did not derive its knowledge from the shape and color of the plants, but – oh dear – from the size and nature of the pebbles in the background of the image!
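A minimal sketch of such a LIME analysis for a single photo is shown below; `image` (one photo as a NumPy array) and `predict_fn` (a function that returns class probabilities for a batch of images) are assumed to come from the trained model and are not spelled out here.

```python
from lime import lime_image
from skimage.segmentation import mark_boundaries

# predict_fn: takes a batch of H x W x 3 images (NumPy arrays) and
# returns the class probabilities predicted by the trained network.
explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,             # a single photo as an H x W x 3 NumPy array
    predict_fn,
    top_labels=1,
    num_samples=1000,  # number of perturbed images used to fit the local model
)

# Highlight the superpixels that contributed most to the top prediction.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0],
    positive_only=True,
    num_features=5,
    hide_rest=False,
)
highlighted = mark_boundaries(temp / 255.0, mask)
# If the highlighted regions are pebbles rather than leaves, the network
# has learned the background instead of the plant.
```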

Why did this happen? – The human influence

If you look at how the data was collected, it is easy to see that the width and height of the photos increase as the plants grow. Thus, the pebbles appear smaller in the background of more recent photos. In the early germination phases, however, some plant species were photographed more often than others. The algorithm therefore learned that an image with very large pebbles probably shows “loose silky-bent” rather than “maize”. Such “unconscious biases” in machine learning are often difficult to find and, in our showcase, have led the algorithm to fail miserably.
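A simple sanity check for this kind of bias is to look at the image metadata per species before training. The following sketch assumes a hypothetical folder layout with one sub-directory per species and PNG files; the path and naming are illustrative and not part of the original analysis.

```python
from collections import defaultdict
from pathlib import Path

from PIL import Image

DATA_DIR = Path("plant-seedlings")  # assumed layout: one sub-folder per species

# Collect the image resolutions per species. If the resolutions differ
# systematically between classes, the network can exploit them (or the
# apparent pebble size they imply) instead of the plants themselves.
sizes = defaultdict(list)
for img_path in DATA_DIR.glob("*/*.png"):
    with Image.open(img_path) as img:
        sizes[img_path.parent.name].append(img.size)  # (width, height)

for species, dims in sorted(sizes.items()):
    widths = [w for w, _ in dims]
    print(f"{species:25s} n={len(dims):4d} "
          f"width min/mean/max = {min(widths)}/{sum(widths) / len(widths):.0f}/{max(widths)}")
```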

Train as you fight!

How can you do better? The military saying “Train as you fight!” also applies to the use of AI. First of all, it is important that the data collected corresponds to what we can expect in real use or production. Anyone who dreams of differentiating between cultivated and wild plants in the field with the help of machine learning needs image data that has not been obtained in a research laboratory or under artificial conditions, but directly in the field.

What conditions are to be expected in the field?

But how can we help the algorithm evaluate the image data correctly and make AI work in agriculture? It is very important to capture in the photos the diversity that we expect to see in the field. For example, the light conditions vary with season and weather, and the soil can be rich in clay, sand or humus, which also affects the color and texture of the plants. Neighboring plants may overlap, moss may appear in the background, and so on.

In addition, the way in which photos are taken may vary. For example, the distance between the camera and the plant is not always the same and the sharpness of the image cannot be assumed to be constant.
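During training, some of this variation can at least be simulated with data augmentation – as a complement to, not a substitute for, collecting data under realistic conditions. The sketch below uses torchvision transforms; the parameter values are purely illustrative assumptions.

```python
from torchvision import transforms

# Augmentations that roughly mimic field conditions: varying camera distance,
# orientation, lighting and focus. The exact parameter values are illustrative.
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.6, 1.0)),        # camera distance varies
    transforms.RandomHorizontalFlip(),
    transforms.RandomRotation(degrees=180),                      # seedlings have no fixed orientation
    transforms.ColorJitter(brightness=0.4, contrast=0.3,
                           saturation=0.3, hue=0.05),            # lighting and soil color
    transforms.GaussianBlur(kernel_size=5, sigma=(0.1, 2.0)),    # imperfect focus
    transforms.ToTensor(),
])
```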

Why is diversity needed in data?

Considering the diversity of the real world during data collection allows the AI algorithm to concentrate on the essentials and extract only the really important features from the data. How well such algorithms can work is demonstrated by the high quality and explainability that Artificial Neural Networks achieve in object recognition tasks, e.g. on the images of the large visual database ImageNet. Its images currently serve as a benchmark for research and development of modern AI algorithms in the area of computer vision, and it may serve as a good example for AI in agriculture.

Copyright Information:
Plant Seedlings Dataset
© 2014 Mads Dyrmann, Peter Christiansen, University of Southern Denmark, and Aarhus University
The images and annotations are distributed under the Creative Commons BY-SA license.
If you use this dataset in your research or elsewhere, please cite/reference the following paper:
PAPER: A Public Image Database for Benchmark of Plant Seedling Classification Algorithms

 

Laura Fink