Pl@ntNet Crops: merging citizen science observations and structured survey data to improve crop recognition for agri-food-environment applications (2024)

Specialized social networks have an unprecedented ability to scale and match observations and expertise. Nowadays, the collection of photos and observations is often combined with powerful algorithms for various classification tasks (e.g. [1, 2]). Methodologies have been designed that efficiently channel citizen science into environmental monitoring to inform policy processes, e.g. the monitoring of the Sustainable Development Goals [3]. The synergy between expertise and citizen science is also explored in the domain of land cover and land use assessments. Citizen scientists have helped to assess the land available for biofuel production [4]. Bayas et al [5] investigated how the 'crowd' could contribute to in-situ land cover and use observations to complement the European Union's (EU) Land Use and Coverage Area frame Survey (LUCAS). For example, while LUCAS is carried out every three to four years, citizen scientists could make annual observations. Conversely, the geographic scope of LUCAS is EU-wide, avoiding the biases that may occur in volunteer-driven observation schemes. Large-scale biodiversity and protected area monitoring are increasingly benefiting from citizen science observations [6]. Novice, amateur, and expert botanists have been collecting and revising millions of geo-tagged and time-stamped photographs of plant species with apps such as Flora Incognita [7], iNaturalist [8], and Pl@ntNet [9]. Besides contributing to biodiversity monitoring, these apps intrinsically motivate volunteers, as they function as modern-day floras. After careful expert-based quality control procedures, these observations can contribute to platforms such as the Global Biodiversity Information Facility (www.gbif.org/).

Synergies between demand and supply of collection capacity and expertise are at the core of several recent and successful examples in the agri-food-environment domain (e.g. [10]). Farmers are also actively participating in such activities. Although there is a long history of public engagement in agriculture, specifically through extension work with beneficial bidirectional flows of information, the term citizen science has rarely been applied to this [11]. Farmers and extension workers can benefit from crop disease expertise networks to access information on treatments [12]. Farmer citizen scientists have participated in on-farm trials, providing insights into variety adaptation and selection across large geo-climatic scales [13]. Farmers have also been engaged in documenting temporal trends in farmland biodiversity and relating these to agricultural practices [14]. The need to account for the environmental performance of farming, but also an increasing disconnect between people and their food, warrants exploring the interfaces between food systems and citizen science [11].

Legacy data and photos can also improve underlying identification models, provided proper tools are available to facilitate integration and annotation. The massive digitization of herbarium specimens and annotation of specimen images is one such example [15]. Pl@ntNet provides the possibility to integrate specific flora previously collected by (amateur) botanists using dedicated semi-automatic tools. In this letter, we explore the use of legacy in-situ expert photos taken and labelled during past EU-wide LUCAS surveys. Taking advantage of the existing Pl@ntNet species identification algorithms and its mobile and web functionality, we present a new application in Pl@ntNet focusing on 'Crops'. Besides user-provided pictures of crops, the application is enriched with suitable LUCAS cover photos of crops. Enabling this synergy required aligning both data sources, as detailed in this letter. Ultimately, combining user-collected photos and LUCAS-surveyed photos and observations should improve the ability to recognize crops with Pl@ntNet and with deep learning algorithms in general. These developments could give rise to various applications in the agri-food-environment domain. Detailed objectives are to:

  • (a) Present the Pl@ntNet Crops application focusing on cultivated crops.

  • (b) Demonstrate how, after legend matching and inference with the existing Pl@ntNet algorithms, the Crops application is enriched with LUCAS cover photos.

  • (c) Demonstrate the synergy of voluntarily collected and LUCAS-surveyed photos and observations for developing deep learning models to recognize crops.

  • (d) Suggest potential uses of the Pl@ntNet Crops application and openly published crop recognition models in the agri-food-environment domain.

2.1. Pl@ntNet

Pl@ntNet is an existing smartphone and web-based application that allows plant species to be identified from images [16]. More broadly, Pl@ntNet is a collaborative system operating since 2010 [9] and involving a large community of expert and amateur contributors worldwide [17]. Available in 36 languages and in 200+ countries, between 200 000 and 600 000 users access Pl@ntNet each day. Experiencing nearly exponential growth since its inception, more than 200 million plant occurrences have been recorded with Pl@ntNet around the world. Pl@ntNet provides generic functionality to recognize plants that can be used by anyone, but also dedicated applications that focus on specific flora in e.g. natural parks, from herbaria, or on activities with an educational character. Pl@ntNet is straightforward to use. After selecting a flora, a user takes up to four close-up photos of the organs of a plant following Pl@ntNet guidelines. The user specifies the so-called view, i.e. whether the photo shows e.g. a flower, fruit, leaf, or bark. In return, Pl@ntNet provides a ranked list of the most probable predicted species. Each predicted species is illustrated with comparison pictures, grouped by view; these may also include a 'habitat' view for photos that were not close-ups but rather provided a landscape perspective. See figure 1 on the left side for typical photos submitted to Pl@ntNet. Additional information and links to descriptive web pages of the species are also provided.

[Figure 1. Typical photos submitted by Pl@ntNet users (left) and typical LUCAS cover photos of crops (right), grouped by classification probability class (L1, L2, L3).]
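For developers, the same identification workflow can be scripted against Pl@ntNet's public web API. The following is a minimal sketch assuming the documented v2 identify endpoint, a personal API key, and placeholder image paths; endpoint, parameter, and response field names follow the public API documentation (https://my.plantnet.org/) but should be treated as assumptions to check against the current docs.

```python
# Minimal sketch: querying the public Pl@ntNet identification API with
# organ-tagged photos. The endpoint, parameters, and response fields
# follow the public API documentation; the API key and image paths are
# placeholders.
import requests

API_URL = "https://my-api.plantnet.org/v2/identify/all"  # 'all' = generic flora
API_KEY = "your-api-key-here"  # placeholder, obtained from my.plantnet.org

files = [
    ("images", open("wheat_flower.jpg", "rb")),
    ("images", open("wheat_leaf.jpg", "rb")),
]
data = {"organs": ["flower", "leaf"]}  # one view label per submitted photo

resp = requests.post(API_URL, params={"api-key": API_KEY}, files=files, data=data)
resp.raise_for_status()

# The response contains a ranked list of candidate species with scores.
for result in resp.json()["results"][:5]:
    name = result["species"]["scientificNameWithoutAuthor"]
    print(f"{name}: {result['score']:.3f}")
```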

2.2. LUCAS cover photos

LUCAS has collected in-situ data on land use and cover across the EU in 2006, 2009, 2012, 2015 and 2018. During those surveys, observations are made at approximately 300 000 points for a sample that is statistically representative with respect to land cover. Along with data on a set of relevant variables, photos are taken in the four cardinal directions from the point. A detailed methodological description, as well as the harmonised data and photos, are published in [18]. Besides the photos mentioned above, previously unpublished 'cover' photos were also taken. The protocol specified that these cover photos 'should be taken at a close distance, so that the structure of leaves can be clearly seen, as well as flowers or fruits'. During the five campaigns, 874 646 LUCAS cover photos were collected, of which 242 476 were of crops. After a two-step anonymization process relying on computer vision and visual inspection (detailed in [19]), these photos will be published in 2022. The Crops app ingests the cover photos of crops. See supplementary figure 1 and figure 1 on the right side for typical LUCAS cover photos of crops. As can be seen, these contain full plant views, but also more distant views, fences that may block the view, and occasionally objects used to mark the point.

2.3. Pl@ntNet crop recognition models

Pl@ntNet is an automated visual identification system that integrates various novel approaches to handling and classifying imagery [9]. It is built on the combination of a deep learning image classification model and a generalist content-based image retrieval method (for details see [9, 16]). Besides computer vision based identification, the Pl@ntNet framework is itself synchronized with observations that are validated by a network of expert botanists. This allows the recognition performance to improve with an ever increasing amount of training data. Pl@ntNet datasets and algorithms are also benchmarked in community-steered identification challenges such as LifeCLEF [1]. This type of long-term evaluation makes it possible to quickly detect major advances in performance and to accelerate their integration into operational production systems such as Pl@ntNet.

2.4. Model performance and including LUCAS photos in the app

First, before including the LUCAS crop photos in the application, the accuracy of the Pl@ntNet algorithm in identifying crops is evaluated on the photos contributed by citizen scientists. When training Pl@ntNet's recognition algorithm, a randomly selected fraction of the expert-evaluated data is removed from the training set in order to evaluate the performance of the crop identification. Second, to decide which LUCAS cover photos of crops can be included in the application, considerations are made regarding the probability of the prediction and the accuracy of the crop identification, obtained by comparing the Pl@ntNet classification with the LUCAS label. This requires matching the LUCAS legend labels with the Pl@ntNet species reference list. Legend matching is also needed for the future use of the photos to train and improve the current Pl@ntNet species identification algorithm. In the Pl@ntNet procedure, while users need to specify the type of view when submitting a photo, the algorithm also classifies the view (e.g. flower, leaf, bark, habitat) before classifying the species, as sketched below. After inclusion in the Crops app, the established expert-driven Pl@ntNet quality control mechanism continues to take place.
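The order of this two-step inference can be illustrated with a generic sketch; `view_model` and `species_model` are hypothetical stand-ins for trained classifiers, not Pl@ntNet's actual implementation.

```python
# Generic sketch of the two-step inference described above: the type of
# view is classified first, then the species. Both models are
# hypothetical stand-ins for trained classifiers.
def identify(image, view_model, species_model):
    view = view_model.predict(image)              # e.g. 'flower', 'leaf',
                                                  # 'bark', 'habitat'
    species_probs = species_model.predict(image)  # {species: probability}
    ranked = sorted(species_probs.items(), key=lambda kv: -kv[1])
    return view, ranked[:5]                       # view and top-5 species
```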

3.1. Model performance

A total of 218 cultivated crop species are included in the Pl@ntNet Crops application (see supplementary table 1 and the Crops application). Along with scientific names, common names translated into 36 languages are also provided. Species covered include major crops (maize, wheat, rice, yam), but also vegetables (asparagus, eggplant), tree crops (olives, coconut), fruits (kiwi, apple), nuts (hazelnuts, pistachio), spices (cinnamon, black pepper), and cover or N-fixing crops (clover). Pl@ntNet users have already contributed 605 242 crop photos (as of 16 November 2021) from around the world to the Crops application. Of these, the classification of 260 610 has also been evaluated by experts. The training dataset for the model was selected from these evaluated observations.

Before including the LUCAS cover photos, the performance of the current Pl@ntNet Crops algorithm is evaluated on the randomly selected validation set that was removed from the training set. This set contains 2654 images and makes it possible to measure the average identification performance on unseen crop images. As reported in table 1, the correct species is returned in first position 89% of the time, and among the top 5 predicted species 97% of the time. This performance was considered satisfactory for the algorithm used in the Pl@ntNet Crops application. A threshold on the prediction probability of the algorithm will determine whether a LUCAS cover photo can be included in the app.

Table 1. Performance evaluation of Pl@ntNet's crop recognition application.

Mean average precision | Top-1 accuracy | Top-5 accuracy
0.927                  | 0.891          | 0.972
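For reference, the metrics in table 1 can be computed from a model's ranked outputs as in the sketch below; the probability array and labels are randomly generated placeholders, not the actual validation data.

```python
# Sketch: top-k accuracy and mean average precision from per-image
# species probabilities. With a single correct species per image,
# average precision reduces to the reciprocal rank of the true label.
import numpy as np

def top_k_accuracy(probs, labels, k):
    top_k = np.argsort(probs, axis=1)[:, -k:]  # k most probable species
    return np.mean([labels[i] in top_k[i] for i in range(len(labels))])

def mean_average_precision(probs, labels):
    order = np.argsort(-probs, axis=1)  # species ranked by probability
    ranks = np.array([np.where(order[i] == labels[i])[0][0] + 1
                      for i in range(len(labels))])
    return np.mean(1.0 / ranks)

rng = np.random.default_rng(0)
probs = rng.random((2654, 218))           # placeholder prediction scores
labels = rng.integers(0, 218, size=2654)  # placeholder ground truth

print(f"Top-1: {top_k_accuracy(probs, labels, 1):.3f}")
print(f"Top-5: {top_k_accuracy(probs, labels, 5):.3f}")
print(f"MAP:   {mean_average_precision(probs, labels):.3f}")
```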

3.2. Legend matching and species list

Pl@ntNet uses up-to-date Latin taxonomy to classify at species level. The LUCAS legend level 3, which is used to label the LUCAS cover photos, contains a mix of family and species names with synonyms and alternative spellings that needed to be semantically mapped to the Pl@ntNet reference species list (see supplementary table 1). The 218 cultivated crops in the application were matched to 36 LUCAS legend level 3 classes; in code, this many-to-one matching amounts to a lookup table (see the sketch below). Following the LUCAS class definition, the frequency distribution across the different crop types is shown in supplementary figure 2. This reveals the long-tail problem [9]: a large number of LUCAS observations with photos exists for only a few crop species, the major crops. However, if the Pl@ntNet observations are classified following the LUCAS legend, a large group of observations falls into the 'other fruit trees and berries' class. Of course, a much richer differentiation exists at species level for the Pl@ntNet photos. This is illustrated in figure 2, where each bar indicates a Pl@ntNet species, the size of the bar the number of Pl@ntNet citizen scientist observations for that species, and the colour the LUCAS legend level 3 class. Unfortunately, when the original LUCAS label is used to evaluate the accuracy of the Pl@ntNet classification of the LUCAS cover photos, similar species are grouped together (e.g. within 'dry pulses').

[Figure 2. Number of Pl@ntNet citizen scientist observations per species (one bar per species), coloured by LUCAS legend level 3 class.]
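The sketch below shows a few illustrative entries of such a lookup table from Pl@ntNet species to LUCAS legend level 3 codes; the codes reflect the published LUCAS legend, but the full 218-species mapping is given in supplementary table 1.

```python
# Sketch: many-to-one mapping from Pl@ntNet species (Latin taxonomy) to
# LUCAS legend level 3 classes. The entries are illustrative; the
# complete mapping covers 218 species and 36 LUCAS classes.
SPECIES_TO_LUCAS = {
    "Zea mays L.": "B16",             # maize
    "Triticum aestivum L.": "B11",    # common wheat
    "Olea europaea L.": "B81",        # olive groves
    "Malus domestica Borkh.": "B71",  # apple fruit
    "Pisum sativum L.": "B41",        # dry pulses
    "Vicia faba L.": "B41",           # dry pulses (same LUCAS class)
}

def to_lucas_class(species):
    """Return the LUCAS level 3 code for a Pl@ntNet species, if matched."""
    return SPECIES_TO_LUCAS.get(species)

# Evaluating a Pl@ntNet prediction against the coarser LUCAS label: two
# distinct species in the same class count as a correct classification.
print(to_lucas_class("Vicia faba L.") == "B41")  # True
```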

3.3. Spatial and temporal distribution of Pl@ntNet and LUCAS photos

Pl@ntNet users have already contributed 605 242 crop photos (as of 16 November 2021) to the Crops application. Approximately 400 000 photos contain GPS geolocation information, and 260 610 of them are both located in Europe and validated by experts. The potential contribution of LUCAS cover photos of crops to the Pl@ntNet application amounts to a total of 242 476 photos and observations taken across the EU following the LUCAS classification (see supplementary figure 3 mapping the distribution of LUCAS cover photos across the EU for seven crops). The type of view classified by the algorithm (table 2) highlights the difference between Pl@ntNet user and LUCAS surveyor provided photos. While most Pl@ntNet user photos are recognized as having a view of flowers or leaves, most LUCAS photos are classified as habitat type, i.e. a full plant view or a more distant photo of a cropped field. This is despite the LUCAS protocol, which specified that pictures be taken at a close distance.

Table 2. Type of views of Pl@ntNet user and LUCAS cover pictures.

                                | Total   | Flower  | Fruit  | Leaf    | Bark   | Habitat | Other
Pl@ntNet user (n)               | 605 242 | 231 669 | 62 541 | 260 005 | 17 099 | 26 666  | 7262
Pl@ntNet user (%)               |         | 38      | 10     | 43      | 3      | 4       | 1
LUCAS cover total (n)           | 242 476 | 24 401  | 25 333 | 67 488  | 1692   | 98 234  | 25 328
LUCAS cover total (%)           |         | 10      | 10     | 28      | 1      | 41      | 10
LUCAS cover inference ≥ 0.5 (n) | 70 170  | 12 516  | 7662   | 23 193  | 55     | 26 235  | 509
LUCAS cover inference ≥ 0.5 (%) |         | 18      | 11     | 33      | 0      | 37      | 1

The relative contribution of Pl@ntNet and LUCAS observations to the Crops app is evaluated by comparing the 260 610 evaluated Pl@ntNet observations and the 242 476 potential LUCAS observations. Geographically, the relative contribution of LUCAS to Pl@ntNet observations across Europe is especially pronounced in Eastern Europe and away from populated coastal and metropolitan areas (see figure 3). Following the launch of Pl@ntNet in 2010 and its direct uptake in the existing French Tela Botanica network (www.tela-botanica.org), it is no surprise that most Pl@ntNet user observations have been made in France. The LUCAS cover photos are distributed more uniformly geographically; a drawback is that generally only a single cover photo is taken at a LUCAS point. Some of these LUCAS points will have been visited only once, while others may have been revisited during all five past surveys [18].

[Figure 3. Relative geographic contribution of LUCAS and Pl@ntNet observations across Europe, and the temporal distribution of the observations over the year.]

Summarized across a year, the temporal distribution of observations contributed by Pl@ntNet users and LUCAS surveyors is shown in figure 3. For Pl@ntNet this encompasses the period from 2010 to 2021, while for LUCAS it covers the five survey years. Clearly, most photos are provided during the growing season, when Pl@ntNet users are out and about, and when the LUCAS surveys are carried out. However, the contribution of Pl@ntNet users is more stable throughout the year, while the LUCAS surveys clearly peak during the campaign periods. These campaigns are also more or less staggered according to phenological development from South to North. Since 2018 the number of evaluated Pl@ntNet observations of cultivated crops has increased dramatically, reaching more than 80 000 in 2020, illustrating one of the strengths of citizen science.

3.4. Including LUCAS cover photos

Several considerations are made when deciding whether to include LUCAS cover photos. These include the probability of the prediction, the accuracy of the prediction against the original LUCAS classification of the crop on the photo (distinguishing 36 classes), but also the added value the photos provide. Considerations include maximizing the total number of photos that could be added to the app, minimizing the risk of confusion with the fuzzy LUCAS legend, and increasing the number of photos for species that have few or no photos associated with them. Since human expert review is part of the Pl@ntNet workflow, erroneous classifications may be corrected. Figure 4 quantifies this. Using the legend matching, the accuracy of the species classified with the highest probability by Pl@ntNet is calculated (upper left panel of figure 4). The accuracy (against the independent LUCAS labels) of the classification increases with the probability threshold, and even the lowest Pl@ntNet probabilities correspond to an accuracy of 0.62. Figure 1 illustrates this further: LUCAS cover photos with a classification probability ≥ 0.9 (L1) are included in the app, while photos predicted with a probability < 0.9 but ≥ 0.4 (L2) and < 0.4 (L3) are excluded. LUCAS cover photos classified with the lowest probability (L3) include photos that are very distant (B81), have little contrast (B11), and depict harvested bare soil (B21), agricultural plastic (B45), and flattened crops (B51).

[Figure 4. Accuracy of the top Pl@ntNet prediction against the LUCAS labels as a function of the probability threshold (upper left), number of LUCAS cover photos that could be included per threshold (upper right), and number of photos added per species (boxplot, lower right).]

There are 112 445 LUCAS cover photos that could be included. The reduction from 242 476 occurs for two major reasons, as condensed in the sketch below. First, 63 878 were rejected because a classification could not be made by Pl@ntNet, e.g. for many of the photos with a habitat view, or where an imposing object (e.g. a marker or notebook) blocks the view. Second, for 66 153 of the 178 598 photos for which a prediction could be made, the first predicted species is not part of the Crops reference list of 218 crops. This can be due to various reasons: again, the wider view of the habitat photos does not correspond to the close-up view usually provided to the algorithm, which may hamper the classification, but there may also be a lack of training data for some crop species in the reference list. Logically, the number of photos that could be included decreases with an increasing probability threshold (upper right panel). A significant number of species could benefit from including the LUCAS cover photos. However, with an increasing probability threshold, that number also decreases dramatically, from 133 to 26 species. Finally, the number of photos added to each species is specified in the boxplot in the lower right panel.
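The filtering and thresholding described in this section can be condensed into a short sketch; the per-photo record structure and function name are hypothetical, not the production pipeline.

```python
# Sketch of the LUCAS cover photo inclusion logic described above:
# (1) drop photos for which no prediction could be made, (2) drop photos
# whose top prediction is outside the 218-species Crops reference list,
# (3) keep photos whose top prediction meets the probability threshold,
# and measure accuracy against the coarser LUCAS legend label.
THRESHOLD = 0.5  # the compromise value adopted in the text

def select_for_inclusion(predictions, reference_list, species_to_lucas,
                         threshold=THRESHOLD):
    included, correct = [], 0
    for p in predictions:  # p is a hypothetical per-photo record (dict)
        if p["top_species"] is None:                # no prediction made
            continue
        if p["top_species"] not in reference_list:  # not a listed crop
            continue
        if p["probability"] < threshold:            # below the threshold
            continue
        included.append(p)
        # Accuracy is judged at the coarser LUCAS legend level.
        if species_to_lucas.get(p["top_species"]) == p["lucas_label"]:
            correct += 1
    accuracy = correct / len(included) if included else 0.0
    return included, accuracy
```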

Following the considerations specified above, a reasonable compromise was found by setting the probability threshold for inclusion of LUCAS cover photos at ≥ 0.5. Hereby, 70 170 LUCAS photos, classified with an accuracy of 0.9 against the LUCAS labels, are added to the Crops app, enriching 101 species. The LUCAS cover photos currently contributing to the app can be found here: https://identify.plantnet.org/partners/lucas-survey. In the near future, the photos below the 0.5 threshold will be visually evaluated, as these LUCAS cover photos, having been the most difficult to predict, could improve the Pl@ntNet algorithms the most.

3.5. Pl@ntNet Crops and application context

The interface and functionality of the mobile Pl@ntNet Crops application are illustrated in supplementary figure 4. The app includes more than 842 320 photos and more than 675 000 observations (as of 19 October 2022) and is available as an Android and iOS app as well as through a web interface (https://identify.plantnet.org/eu-crops). Various application contexts can be envisaged; we consider three here. For more examples of other Pl@ntNet activities, please explore https://identify.plantnet.org/ and related literature.

3.5.1. Collecting in-situ observations

The increasing volume of freely accessible Earth Observation (EO) data, through e.g. the Copernicus program, has not been matched with proportional amounts of in-situ, reference, and training data, hampering novel applications. Using automated workflows, the Pl@ntNet Crops app can contribute to collecting in-situ data for such EO applications. Since the app is global and includes 218 species cultivated around the world, this may be particularly relevant for data-poor environments. In food insecure regions, in-situ crop type data are generally missing [20]. In those regions, the combination of in-situ data with EO data is crucial to better inform early warning systems and guide humanitarian responses, to ensure market transparency and stability, and to inform national agricultural policies [21]. The use of citizen science tools such as the Pl@ntNet Crops app could also encourage individual farmers or surveyors to report on crops grown on specific parcels without the need for expertise in geo-spatial technologies. Crop type recognition on geo-referenced images for in-situ data collection using computer vision has been successfully tested [20, 22]. Since the position of the camera may not be in the field, smart ways are needed to link the precise position of the crop on the photo to the corresponding field. Pictures are taken directly in the field in campaigns such as those done for the Copernicus4GEOGLAM component (https://land.copernicus.eu/global/about-copernicus4geoglam) of the Copernicus Global Land Service. However, these protocols do not necessarily ensure a smooth integration in computer vision and machine learning applications. Metrics on the positional accuracy of the in-situ data collection will be extracted by comparison against spatially explicit parcel-level farmers' declarations of crops (from e.g. France and the Netherlands), as gathered in the context of the Integrated Administration and Control System of the Common Agricultural Policy (CAP).
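As a rough illustration of such a positional check, the sketch below tests whether a photo's GPS location falls within a tolerance buffer of a declared parcel using shapely; the parcel geometry, coordinates, and the 20 m tolerance are assumptions for the example.

```python
# Sketch: checking the positional plausibility of an in-situ crop photo
# against a parcel-level crop declaration (e.g. from CAP/IACS data).
# Coordinates are in a metric CRS; geometry and tolerance are illustrative.
from shapely.geometry import Point, Polygon

parcel = Polygon([(0, 0), (120, 0), (120, 80), (0, 80)])  # declared parcel
declared_crop = "Zea mays L."

photo_location = Point(125, 40)  # photo taken from the field margin
predicted_crop = "Zea mays L."   # top prediction for the photo

# Surveyors often photograph from roads or field margins, so allow a
# tolerance buffer around the parcel boundary.
TOLERANCE_M = 20.0
position_ok = parcel.buffer(TOLERANCE_M).contains(photo_location)
distance_m = photo_location.distance(parcel)

print(f"within tolerance: {position_ok}, distance to parcel: {distance_m:.1f} m")
print(f"crop label agrees: {predicted_crop == declared_crop}")
```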

3.5.2. CAP

Automation and new technologies can reduce the administrative burden and strengthen evidence provision for the European Union's CAP. To receive specific subsidies, farmers are requested by the responsible administration to provide a georeferenced photo proving that they grow certain crops (see [23]), e.g. crops supported by voluntary coupled support, as well as cover, catch, or N-fixing crops. The deep learning model developed as part of the Pl@ntNet Crops application could be ported elsewhere and implemented to take advantage of this automation. For instance, should a photo taken by a farmer result in a classification with low probability (below a threshold accepted by the administration), he/she would be immediately alerted to pay attention to the protocol and retake the photo, e.g. from a better viewpoint or distance to the plant, to ensure a sufficient probability of a correct crop classification. The foreseen benefits are threefold: (a) for the farmer, ensuring that the duty of providing evidence was fulfilled correctly and avoiding the extra effort of returning to the field to retake the photo at a later date; (b) for the administration handling the subsidies, receiving high quality proof captured according to the protocol, enabling automatic processing and thus decreasing the need for costly expert photo evaluation; (c) for further algorithm development, the misclassified first photo could be added to the training set to improve the classification probability. Such approaches are already implemented by certain administrations to improve the environmental performance of the CAP. To receive support in Thuringia, Germany, farmers need to evidence the maintenance of environmentally sensitive and biodiverse grasslands by documenting the occurrence of at least six key species out of a list using Flora Incognita (https://marswiki.jrc.ec.europa.eu/wikicap/images/5/57/09_Detection_of_individual_plants.pdf). These approaches improve the monitoring of practices, but also directly contribute to near real-time monitoring of ecological patterns and can thus help to assess progress towards the EU biodiversity targets in managed agricultural land.
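A minimal sketch of this feedback loop is given below, assuming a deployed classifier that returns a (species, probability) pair and an administration-defined acceptance threshold; `classify_crop` and the threshold value are hypothetical.

```python
# Sketch of the geotagged-photo feedback loop described above: if the
# classification probability falls below an administration-defined
# threshold, the farmer is alerted to retake the photo, and the hard
# case can be queued as future training data.
ADMIN_THRESHOLD = 0.7  # illustrative value, set by the administration

def check_evidence_photo(photo_path, declared_crop, classify_crop):
    species, probability = classify_crop(photo_path)
    if probability < ADMIN_THRESHOLD:
        # Immediate feedback while the farmer is still in the field.
        return {
            "accepted": False,
            "message": ("Low confidence: retake the photo closer to the "
                        "plant, with leaves or flowers clearly visible."),
            "queue_for_training": photo_path,  # difficult case, useful later
        }
    return {
        "accepted": species == declared_crop,
        "species": species,
        "probability": probability,
    }
```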

3.5.3. Food system awareness raising

Engaging citizen scientists in food system research continues to gain momentum and now covers diverse applications [24]. At the same time, Ryan et al [11] note that the term citizen science has rarely been applied to a long history of public engagement in agriculture and food science. Consumers increasingly care about the origin and sustainability of the food they buy. In the context of the EU's Farm to Fork strategy, citizen science activities with consumers or schools could use the Crops app to explore local agricultural produce and, in the process, gain knowledge about the crops cultivated in their surroundings, the sustainability of the production process, and the interaction with the local environment. The transparency of the production chain for agricultural commodities produced outside of Europe (oil palm, cocoa, coffee, etc.) is also of concern to consumers. If geolocated crop type is combined with additional information, the development of targeted applications could improve the monitoring of environmental externalities such as the biodiversity impact [25] of e.g. traditional and agro-forestry-based systems. These developments could strengthen the relationship between producers and consumers and encourage agronomic and environmental awareness, with positive impacts on food systems, especially at the production stages. Nevertheless, Mourad et al [26] found that many citizen and farmer science projects tend to focus on academic research outcomes rather than on sustainable solutions in practice. Here we propose that image recognition apps and deep learning models originating in the citizen science domain can also more indirectly feed workflows and policies implementing sustainable farm practices.

3.6. Lessons learned

Opportunistic use was made of legacy LUCAS cover photos. There was enough overlap between the visual requirements of the Pl@ntNet algorithms and the LUCAS cover photos that 31% of the photos could be included in the app after a first inference. Inclusion of the LUCAS cover photos, which are predominantly of the 'habitat' type, improves the Pl@ntNet algorithm for this type of view. For 70 170 legacy LUCAS observations, the thematic detail was improved by distinguishing 218 species as opposed to the original 36 classes. As expert evaluation improves the training set and thus the crop identification, LUCAS cover photos will increasingly be added as the algorithm improves.

Photos and observations that are voluntarily collected with apps such as Pl@ntNet and by statistically representative surveys such as LUCAS have different but complementary strengths. Now that the legends are matched, LUCAS surveyors could (voluntarily) use the app for the crop classification task, especially if they are not agronomists. While generally only a single photo is taken of a crop at a LUCAS point (similar to the one-shot problem: very few images per species and type of view, see [9]), multiple photos of the same species, and of different organs, can be used for identification by Pl@ntNet. By combining photos from volunteers and surveys, as done here, increasingly rich training datasets are created. While noise in the imagery is a drawback, a positive aspect is the diversity of such datasets. Over time, this should lead to the development of more robust identification models, which are needed for real-world applications.

The LUCAS protocol was not built for use in computer vision and machine learning. Taking a photo of a cropped field with many closely spaced individuals is different from taking a close-up photo of an organ of a single plant. Furthermore, even though a protocol for close-up photos was specified, rather heterogeneous imagery is present in the LUCAS cover set. Following on from these experiences, we have improved the instructions for the 2022 LUCAS survey protocol. Background markers are no longer included during the acquisition of LUCAS cover photos, to reduce noise in the photos for further computer vision applications. Furthermore, in the training of surveyors, emphasis was placed on the need for photos taken at a close distance, with sufficient contrast, and in which individual crops can be distinguished. Direct use of the app, with requirements on the probability of the classification, will be suggested for the next LUCAS survey. The cover photos of the LUCAS 2022 survey will also be included in the app once available. Besides photos of crops, the LUCAS cover photo-dataset also includes photos of trees (leaves) and grasses. These may be included within other Pl@ntNet floras. Other LUCAS legacy photos, such as those taken in the four cardinal directions, may prove useful in computer vision challenges related to land cover and use and landscape analysis [27].

A proliferation of plant recognition apps has appeared in recent years (e.g. LeafSnap, PictureThis, PlantStory, Seek). Jones [28] compared ten automated image recognition apps for identifying British flora; overall, Pl@ntNet was a mid-level performer in that comparison. Otter et al [29] concluded that such apps cannot yet be trusted to identify toxic plants, with two out of three apps tested (including Pl@ntNet) correctly predicting about half of the plants. The extension service of Michigan State University (USA) evaluated eight different apps, with Pl@ntNet ranking second (www.canr.msu.edu/news/plant-identification-theres-an-app-for-that-actually-several). Clearly, the results of such evaluations depend strongly on the species chosen for the test set. In the future, a stronger differentiation may appear between specialised systems and those oriented towards more common species.

One specific complication for certain crops is that varieties of the same species can look very different. For example, Brassica oleracea L. includes cultivars such as cabbage, broccoli, cauliflower, kale, and Brussels sprouts. In Pl@ntNet, being one species, these cultivars are all in the same class, notwithstanding their different visual appearances. Traditionally in the LifeCLEF challenge [30], the leading activity benchmarking new algorithms for automated plant recognition, accuracy is evaluated at species, genus, and family level. The particular difficulty of visually distinct varieties of the same crop species has not yet been solved. Different phenological stages provide another challenge [22]. In fact, plant identification can be difficult even for a trained botanist, and benchmarked datasets are needed to improve algorithms (see for example the public dataset created by Pl@ntNet on anemones, https://plantnet.org/en/2021/03/30/a-plntnet-dataset-for-machine-learning-researchers/).

A new Pl@ntNet app including more than 842k photos (as of 19 October 2022) of 218 crop species from around the world is presented. The application and underlying algorithms are built on voluntarily collected photos of crops, enriched by legacy LUCAS survey cover photos of crops. The comparative strengths of merging such networked collaborative and survey-based approaches relate to the temporal and geographic representativeness of the sampling. The application can benefit various agri-food-environment activities and can contribute to the further development of computer vision algorithms to recognize crops in photos.

This project has received funding from the European Union's Horizon 2020 research and innovation program under Grant Agreement No. 863463 (Cos4Cloud project).

No new data were created or analysed in this study.


References
