Fotoherkenning Paddenstoelen: een vloek of een zegen? (Photo recognition of mushrooms: a curse or a blessing?)

Coolia 2020(3)

In recent years there has been an explosion in the availability of smartphone apps that can help with mushroom identification in the field. The approaches vary widely, ranging from apps that identify mushrooms automatically using Artificial Intelligence (AI) and automated image recognition, through apps that require the user to work through traditional dichotomous or multi-access keys, to apps that offer only a collection of images without a clear system for identifying a species of interest.

The Coolia article appears to be related to the paper "Artificial Intelligence for plant identification on smartphones and tablets".

Related documents--------------------------------------------------------------------------------------
Bachelor's thesis: Magic Mushroom App: recognizing edible mushrooms with Deep Learning, in Python

Deep Shrooms: classifying mushroom images (Python)

Shroomnet: an artificial neural network for the identification of mushroom species

Artificial Intelligence for plant identification on smartphones and tablets


Apps for mushroom identification---------------------------------------------------------------------

Danish Svampeatlas


iNaturalist Seek

Google Lens

Posted by optilete, 5 July 2020, 15:02


Nice article

Posted by ahospers about 2 years ago

Vision Model Updates

iNaturalist currently uses vision models in two main places:
1) a private web-based API used by the website and the iNaturalist iOS and Android apps, and
2) within the recently updated Seek app.

When Seek 2.0 was released in April, it included a different vision model than we were using on the web. At that time the web-based model was a third-generation model we started using in early 2018. That web-based model was trained with the idea it would be run on servers, and servers can be configured to have far more computing power than a mobile device. As a result that model was far too large to be run on mobile devices.

Early this year, with an updated Seek in mind, we started another training run with two main goals:
-shrinking the file size of the model, and
-allowing it to recommend taxonomic ranks other than species (e.g. families, genera, etc.).

The mobile version of the model needs to be small in terms of file size to minimize the amount of data app users would need to download. Smaller models can also be used by more devices as they need fewer resources to run (e.g. memory, battery), and can generate results faster, which is important for Seek's real-time camera vision results. These models take a lot of time and money to train, so we also wanted a model that could be simultaneously trained to produce a large web-based version and a smaller version for use in mobile devices.

Unfortunately, shrinking the file size like this slightly decreased model accuracy compared to the larger web-based version (kind of similar to image compression), and we found that was an unavoidable tradeoff. We take this into account when processing the model results, and on average for a similar error rate, the mobile version might recommend a taxon at a higher taxonomic rank than the web-based version. The taxon results we show to users shouldn't be less accurate, but they may be less specific.
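The "less specific but not less accurate" behavior can be sketched as follows. This is a hypothetical illustration, not iNaturalist's actual post-processing: the taxonomy, scores, and thresholds are invented. The idea is that per-species scores are summed up the taxonomic tree, and the most specific taxon whose aggregated score clears a confidence threshold is recommended; a less confident model needs a higher bar and therefore tends to stop at a higher rank.

```python
# Hypothetical sketch of rank roll-up when post-processing vision results.
# The taxonomy and threshold values are invented for illustration; the real
# iNaturalist pipeline is not public in this form.

PARENT = {  # child -> parent in a toy taxonomy
    "Amanita muscaria": "Amanita",
    "Amanita pantherina": "Amanita",
    "Amanita": "Amanitaceae",
    "Amanitaceae": "Agaricales",
}

def recommend(scores, threshold):
    """Return the most specific taxon whose aggregated score >= threshold.

    scores: dict mapping species name -> model score.
    Each species' score is added to all of its ancestors; among the taxa
    that clear the threshold, the deepest one is recommended.
    """
    # Aggregate each species' score into every ancestor.
    agg = dict(scores)
    for taxon, score in scores.items():
        node = taxon
        while node in PARENT:
            node = PARENT[node]
            agg[node] = agg.get(node, 0.0) + score

    def depth(node):
        # Depth in the toy tree: number of ancestors above the node.
        d = 0
        while node in PARENT:
            node = PARENT[node]
            d += 1
        return d

    passing = [t for t, s in agg.items() if s >= threshold]
    return max(passing, key=depth) if passing else None

scores = {"Amanita muscaria": 0.55, "Amanita pantherina": 0.30}
# With a strict threshold only the genus clears the bar; with a looser
# one the species itself does.
print(recommend(scores, threshold=0.8))  # -> Amanita
print(recommend(scores, threshold=0.5))  # -> Amanita muscaria
```

Under this scheme the genus-level recommendation is no less likely to be correct than the species-level one, it simply says less, which mirrors the tradeoff described above.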

More Species Represented
We wanted the model to include more species data, even for species that don't have enough photos to be recognized at species level. Some species have so few photos that, if we trained on that small set, the model likely would not have enough information to reliably recognize them.

Our 2018 model only included taxa at species rank. We set a threshold for the number of photos, and species below the threshold were not included. We could still recommend higher taxa by doing some post-processing of results, but the model itself would only assign scores to species. In our latest training run we allowed the photos from species under the threshold to be rolled up into their ancestor taxa until the threshold was reached, and we allowed the model to assign scores to these non-species nodes. This allows more species to be represented in this newer model, sometimes at the genus level, with their photos pooled with those of other under-threshold species in the same genus. Now, instead of knowing nothing about these species, the model can at least identify the genus or family.
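The roll-up of under-threshold species can be sketched roughly like this. It is a hypothetical illustration with an invented threshold and toy data, not the actual training code: species with enough photos keep their own node, while the photos of under-threshold species are pushed up to the nearest ancestor whose pooled photo count reaches the threshold.

```python
# Hypothetical sketch of rolling under-threshold species up into ancestor
# taxa before training. Names, counts, and the threshold are invented.

PARENT = {  # child -> parent in a toy taxonomy
    "Russula emetica": "Russula",
    "Russula rosea": "Russula",
    "Russula fragilis": "Russula",
    "Russula": "Russulaceae",
}

def build_training_nodes(photo_counts, threshold):
    """Decide which taxon node each species' photos are trained under."""
    # Species with enough photos keep their own node.
    assignment = {sp: sp for sp, n in photo_counts.items() if n >= threshold}

    # Pool the photos of under-threshold species into each of their ancestors.
    under = {sp: n for sp, n in photo_counts.items() if n < threshold}
    pooled = {}
    for sp, n in under.items():
        node = sp
        while node in PARENT:
            node = PARENT[node]
            pooled[node] = pooled.get(node, 0) + n

    # Each under-threshold species trains the nearest ancestor whose pooled
    # photo count reaches the threshold (or the root of the tree otherwise).
    for sp in under:
        node = sp
        while node in PARENT and pooled.get(node, 0) < threshold:
            node = PARENT[node]
        assignment[sp] = node
    return assignment

counts = {"Russula emetica": 120, "Russula rosea": 60, "Russula fragilis": 50}
print(build_training_nodes(counts, threshold=100))
# Russula emetica keeps its own node; the photos of the two under-threshold
# species together reach the threshold at the genus node Russula.
```

In this sketch the model would then assign scores to the genus node Russula directly, so the two rare species are at least represented at genus level rather than being dropped entirely.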

Posted by optilete about 2 years ago

Does this contain all the articles? I think you also had a German-language article back then.

Posted by ahospers almost 2 years ago

With < hr > you get a nice line.

Posted by ahospers almost 2 years ago

Nice piece. I was just testing it on plants, but that wasn't a good test. This one is much better.

Posted by ahospers over 1 year ago

Which test is good (url) and which is bad (url)?

Posted by optilete over 1 year ago
