18.24. Species Recognition with Model 7 (July 2021) in iNaturalist (TensorFlow 2)

At the moment iNaturalist is already working on the sixth version of the Computer Vision model,
for which 18 million photos were set aside in September 2020; with these, some 35,000 species worldwide can be recognised.
The approach is the same as for model 5, only with far more photos, because many more species on iNaturalist
now have 2,000 photos. In the past more than 2,000 photos per species were sometimes used, but the extra computing power does not pay off in accuracy.
In total, training the model will take 210 days, so it should be ready in the spring of 2021.
Besides training the same model with more photos and more species, the current system is at the same time being compared
with "TensorFlow 2, Xception vs Inception", which will probably train this same model not in 210 days but in 60 days.
If this new TensorFlow 2 Xception-vs-Inception setup works well, a new model could even be delivered as early as winter 2021.
New hardware was ordered for this training, but due to COVID it has not yet been installed.
The current model contains 25,000 of the 300,000 species that have been observed on iNaturalist.
https://www.inaturalist.org/blog/42626-we-passed-300-000-species-observed-on-inaturalist#comments

How is it decided whether a species is included in the model?
A species is included in the training if it has 100 observations with photos, of which at least 50 carry a Research Grade community ID (actually, that's really verifiable + would-be-verifiable-if-not-captive; escaped and cultivated species are also included in the model). So the training does not use Research Grade photos exclusively.
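As a rough sketch, that inclusion rule could look like the function below. This is an illustration only; the field names (has_photo, quality_grade) and the helper itself are assumptions, not iNaturalist's actual code.

    # Hypothetical sketch of the inclusion rule described above.
    # Field names are illustrative, not iNaturalist's real schema.
    def included_in_training(species_observations):
        """At least 100 photo observations, of which at least 50
        have a Research Grade community ID."""
        with_photo = [o for o in species_observations if o["has_photo"]]
        research_grade = [o for o in with_photo if o["quality_grade"] == "research"]
        return len(with_photo) >= 100 and len(research_grade) >= 50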

Roughly, the old versions were:
Waarneming.nl: the model is revised at most 3 times per year.

18C Computer Vision Artificial Knowledge Links

24. Species Recognition with Model 5 (Spring 2020) in iNaturalist (TensorFlow 2)
Waarneming.nl

  1. December 2017: photos from before 2017
  2. December 2019: photos from before 2018
  3. December 2020: photos from before 2019

    Roughly, the old versions were:

    1. May 2017 Model 1, 2-20 photos per species (20170512)
    2. Aug 2017 Model 2, 40 photos per species
    3. Jan 2018 Model 3, 40 photographers per species
    4. Feb 2019 Model 4
    5. Sep 2019 Model 5, <1000 photos per species
    6. Mar 2020 Model 6, TensorFlow 2, 25,000 leaves
    7. Mar 2021 Model 7, TensorFlow 2, over 38,000 + 25,000 leaves
    8. Mar 2022 Model 1.0, TensorFlow 2, over 38,000 + 25,000 leaves
    9. Aug 2022 Model 1.1, TensorFlow 2, over 38,000 + 25,000 leaves
    10. Sep 2022 Model 1.2, TensorFlow 2, over 38,000 + 25,000 leaves
    11. Oct 2022 Model 1.3, TensorFlow 2, over 38,000 + 25,000 leaves
    12. Nov 2022 Model 1.4, TensorFlow 2, over 38,000 + 25,000 leaves

    Reference observations that are used (5000 or 40)

    https://groups.google.com/forum/#!topic/inaturalist/K9nJOC0Cjss

    Training

    The previous model (v1.2) replaced a model (v1.1) trained on data exported in April, so there was a 4-month interval between these data exports (the interval between A and B).

    Training Set 1

    This group contains identified observations that meet the following criteria:

    1. The observation has a taxon assigned (a species, genus, or family)
    2. The observation has no flags
    3. The observation passes all quality metrics except possibly wild / naturalized; these are the items listed in the DQA (Data Quality Assessment)

    Validation Set 1

    During training, this group of photos is used to monitor progress: a test or exam that the model in training has to pass. The requirements for this validation set are the same as for Training Set 1, but it contains only 5% of the number of photos.
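    The 95/5 split described above could be made roughly as follows. This is a minimal sketch under stated assumptions; the exact split procedure iNaturalist uses is not documented here.

        import random

        def split_train_validation(observations, validation_fraction=0.05, seed=42):
            """Hold out ~5% of qualifying observations as the validation set.
            Both sets obey the same criteria; only the size differs."""
            pool = list(observations)
            random.Random(seed).shuffle(pool)
            n_val = int(len(pool) * validation_fraction)
            return pool[n_val:], pool[:n_val]  # (training set, validation set)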

    Test Set 1

    After training has finished, this group of photos is used to check whether the model works well. It contains only
    photos with a Community Taxon, i.e. photos that are very likely correct because several people have added an identification to the observation.
    The notable point is that less certain photos may take part in the training, while the testing is done against highly certain observations.
    See also https://forum.inaturalist.org/t/identification-quality-on-inaturalist/7507
    To avoid ending up with too many species that have too few photos, not many restrictions are placed on the photos. In the future the requirements may become stricter, for example excluding the following (a filter sketch follows the list):

    1. Photos from new users
    2. CID'd obs: observations with only a Computer Vision ID
    3. Vision-based IDs
    4. Photos with IDs by users with X maverick IDs
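    A stricter test-set filter along those lines could look like this sketch. All field names and the maverick threshold (the unspecified "X" above) are illustrative assumptions, not iNaturalist's actual implementation.

        def eligible_for_test_set(obs, max_maverick_ids=3):
            """Keep only community-confirmed observations and drop the four
            categories listed above. Field names are hypothetical."""
            if obs.get("community_taxon") is None:       # must have a Community Taxon
                return False
            if obs.get("user_is_new"):                   # 1. photos from new users
                return False
            if obs.get("only_vision_based_ids"):         # 2./3. CV-only or vision-based IDs
                return False
            if obs.get("maverick_id_count", 0) > max_maverick_ids:  # 4. maverick IDs
                return False
            return True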

    The computer vision model itself cannot be downloaded, but perhaps an API will come later. You can train your own model with https://www.kaggle.com/c/inaturalist-challenge-at-fgvc-2017

    Cropping Photos, Order, Best Photo First

    Although it is not said very often on iNaturalist, cropping a photo is a good way to get better results.
    The model also does not yet really take geographic data into account. In the past enormous numbers of Californian species were suggested, but over the course of the models that has decreased.

    Best Photo First
    Besides cropping, it is very sensible to put your best photo first, because the model only uses the first photo of an observation to suggest a species.
    The location accuracy of a photo taken outside the iNat app is usually lower than when you use iNat's own app. In the app you can also zoom in by spreading your fingers, so you don't need the crop functionality. The model does not use the time of season (acorns and chestnuts in autumn, migratory birds in spring and autumn, no summer birds such as the Common Swift in winter) or species distribution data: Alpenroses are not restricted to the Alps.

    In 2017 the number of recognised species was 20,000 and now it is still... 20,000?

    https://www.inaturalist.org/pages/help#cv-taxa
    FWIW, there's also discussion and some additional charts at https://forum.inaturalist.org/t/psst-new-vision-model-released/10854/11
    https://forum.inaturalist.org/t/identification-quality-on-inaturalist/7507
    https://www.pyimagesearch.com/2017/03/20/imagenet-vggnet-resnet-inception-xception-keras/
    https://www.inaturalist.org/posts/31806-a-new-vision-model#activity_comment_5763380

    Neural Networks (specifically, VGG16) pre-trained on the ImageNet dataset with Python and the Keras deep learning library.

    The pre-trained networks inside of Keras are capable of recognizing 1,000 different object categories, similar to objects we encounter in our day-to-day lives with high accuracy.

    Back then, the pre-trained ImageNet models were separate from the core Keras library, requiring us to clone a free-standing GitHub repo and then manually copy the code into our projects.

    This solution worked well enough; however, since my original blog post was published, the pre-trained networks (VGG16, VGG19, ResNet50, Inception V3, and Xception) have been fully integrated into the Keras core (no need to clone down a separate repo anymore) — these implementations can be found inside the applications sub-module.

    Because of this, I’ve decided to create a new, updated tutorial that demonstrates how to utilize these state-of-the-art networks in your own classification projects.

    Specifically, we’ll create a special Python script that can load any of these networks using either a TensorFlow or Theano backend, and then classify your own custom input images.

    To learn more about classifying images with VGGNet, ResNet, Inception, and Xception, just keep reading.
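    In the spirit of that tutorial, here is a minimal, self-contained sketch using tf.keras (the quoted article predates TensorFlow 2, so the module paths differ slightly); "bird.jpg" is a placeholder path.

        import numpy as np
        from tensorflow.keras.applications import xception
        from tensorflow.keras.preprocessing import image

        # Downloads the ImageNet weights on first use.
        model = xception.Xception(weights="imagenet")

        # Xception expects 299x299 RGB input; preprocess_input scales pixels to [-1, 1].
        img = image.load_img("bird.jpg", target_size=(299, 299))
        x = xception.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

        # Print the top-3 ImageNet labels with their probabilities.
        for _, label, prob in xception.decode_predictions(model.predict(x), top=3)[0]:
            print(f"{label}: {prob:.3f}")

    Swapping in VGG16, ResNet50, or InceptionV3 only changes the imported module and the expected input size (224x224 for VGG/ResNet, 299x299 for Inception/Xception).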

    = = = = = = = = = = = = = = = = =
    https://www.youtube.com/watch?v=xfbabznYFV0

    https://towardsdatascience.com/xception-from-scratch-using-tensorflow-even-better-than-inception-940fb231ced9

    Xception: Implementing from scratch using Tensorflow
    Even better than Inception
    Convolutional Neural Networks (CNN) have come a long way, from the LeNet-style, AlexNet, VGG models, which used simple stacks of convolutional layers for feature extraction and max-pooling layers for spatial sub-sampling, stacked one after the other, to Inception and ResNet networks which use skip connections and multiple convolutional and max-pooling blocks in each layer. Since its introduction, one of the best networks in computer vision has been the Inception network. The Inception model uses a stack of modules, each module containing a bunch of feature extractors, which allow them to learn richer representations with fewer parameters.
    Xception paper — https://arxiv.org/abs/1610.02357
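    The depthwise separable convolutions the article builds on are available directly in Keras as SeparableConv2D. Below is a rough sketch of a middle-flow-style Xception block (simplified: two separable convolutions instead of the paper's three; shapes follow the paper's 19x19x728 middle flow).

        import tensorflow as tf
        from tensorflow.keras import layers

        def xception_block(x, filters=728):
            """Residual block of depthwise separable convolutions,
            loosely following Xception's middle flow."""
            residual = x
            for _ in range(2):
                x = layers.ReLU()(x)
                x = layers.SeparableConv2D(filters, 3, padding="same", use_bias=False)(x)
                x = layers.BatchNormalization()(x)
            return layers.Add()([x, residual])  # skip connection

        inputs = tf.keras.Input(shape=(19, 19, 728))
        outputs = xception_block(inputs)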

    = = = = = = = = = = = = = = = = = = = = =
    https://towardsdatascience.com/review-xception-with-depthwise-separable-convolution-better-than-inception-v3-image-dc967dd42568
    In this story, Xception [1] by Google, which stands for "Extreme version of Inception", is reviewed. With a modified depthwise separable convolution, it is even better than Inception-v3 [2] on both the ImageNet ILSVRC and JFT datasets. Though it is a 2017 CVPR paper which was just published last year, it had already had more than 300 citations when I was writing this story. (Sik-Ho Tsang @ Medium)
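    The parameter savings behind that claim are easy to verify. For a 3x3 convolution at Xception's middle-flow width of 728 channels:

        # Parameter counts for a 3x3 convolution mapping 728 -> 728 channels.
        k, c_in, c_out = 3, 728, 728
        regular = k * k * c_in * c_out            # full convolution: 4,769,856
        separable = k * k * c_in + c_in * c_out   # depthwise + 1x1 pointwise: 536,536
        print(regular / separable)                # ~8.9x fewer parameters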

    = = = = = = = = = = = = = = = = = = = = = = = = = = = =
    https://laptrinhx.com/xception-from-scratch-using-tensorflow-even-better-than-inception-212761016/
    Species Recognition with Model 5 (Spring 2020) in iNaturalist (TensorFlow 2, (24))
    https://www.youtube.com/watch?v=xfbabznYFV0

    Literature

  4. https://forum.inaturalist.org/t/suggest-id-for-sounds/18115/12

  5. https://forum.inaturalist.org/t/recognize-sounds-automatically/3527/

  6. The iNat vision model itself is not available via public API and it’s not available to download. https://forum.inaturalist.org/t/where-to-find-inat-vision-model/12341

  7. https://www.inaturalist.org/blog/31806-a-new-vision-model

  8. https://www.inaturalist.org/pages/computer_vision_demo

  9. https://www.inaturalist.org/blog/54236-new-computer-vision-model (almost 25,000 to over 38,000 + 25,000 leaves)

  10. https://forum.inaturalist.org/t/computer-vision-update-july-2021/24728

  11. https://forum.inaturalist.org/t/better-use-of-location-in-computer-vision-suggestions/915/47


    @sbushes Great question, the iNat team were just talking about that this morning. The challenge with showing the badge on higher level taxa is user interpretation. We make suggestions at the leaf level and also using what we call a common ancestor - rolling scores up the tree to find a higher level suggestion with a higher combined score. I know you've been actively investigating ML and our vision system for a while, but in case you haven't seen this video, Ken-ichi explains our process in his keynote at TDWG last year: https://www.youtube.com/watch?v=xfbabznYFV0 - the relevant content starts at about the 16 minute mark.

    In that context, how to interpret a badge on a family? Would it mean that the family itself is in the model as a leaf node (i.e. none of its children are in the model)? Or perhaps would it mean that the family has children that are represented in the model? What if some but not all children are in the model? What if not all children are known for a family? Would it be fair to say that it is represented in the model, even if all known children are in the model?

    Definitely open to suggestions, but the gist is that we wanted a label for species that are in/out of the model that's easy for all users to interpret in the context of "why am I getting suggestions for this species?" or more commonly "why is this stupid vision system not suggesting the obviously correct choice of X?"

    As for the cutoff date, I have a lot of updates to IDs planned on a genus where I am seeing 10-20% error rate in RG observations. I was hoping to get the identification updates in before the next model starts, but don't know if I will get to it. Probably depends on when in August you get started.
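    The "common ancestor" roll-up described in the quote above can be sketched roughly as follows. The tree representation, threshold, and tie-breaking are illustrative assumptions; iNaturalist's actual scoring is explained in the linked TDWG talk.

        def rolled_up_suggestion(leaf_scores, parent_of, threshold=0.8):
            """Sum leaf scores up the taxonomic tree and return the most
            specific taxon whose combined score clears the threshold.
            leaf_scores: {taxon: vision score}; parent_of: {taxon: parent or None}."""
            combined = dict(leaf_scores)
            for taxon, score in leaf_scores.items():
                parent = parent_of.get(taxon)
                while parent is not None:
                    combined[parent] = combined.get(parent, 0.0) + score
                    parent = parent_of.get(parent)

            def depth(t):  # distance from the root, to prefer deeper taxa
                d = 0
                while parent_of.get(t) is not None:
                    t, d = parent_of[t], d + 1
                return d

            candidates = [(t, s) for t, s in combined.items() if s >= threshold]
            return max(candidates, key=lambda ts: (depth(ts[0]), ts[1]), default=None)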


    Published January 12, 2021 11:57 PM by ahospers

    Comments

    Corresponding author: Ken-ichi Ueda (kueda@inaturalist.org)
    Received: 29 Sep 2020 | Published: 01 Oct 2020
    Citation: Ueda K-i (2020) An Overview of Computer Vision in iNaturalist. Biodiversity Information Science and
    Standards 4: e59133. https://doi.org/10.3897/biss.4.59133

    --

    Abstract
    iNaturalist is a social network of people who record and share observations of biodiversity.
    For several years, iNaturalist has been employing computer vision models trained on
    iNaturalist data to provide automated species identification assistance to iNaturalist
    participants. This presentation offers an overview of how we are using this technology, the
    data and tools we used to create it, challenges we have faced in its development, and
    ways we might apply it in the future.
    Presenting author
    Ken-ichi Ueda
    Presented at
    TDWG 2020
    https://www.youtube.com/watch?v=xfbabznYFV0

    Posted by ahospers over 3 years ago

    https://forum.inaturalist.org/t/better-use-of-location-in-computer-vision-suggestions/915/41

    I just wanted to give a quick update on functionality changes to better use location in CV suggestions.

    On iNaturalist, we currently use location data to boost visually similar species that are also seen nearby, but we don’t do anything to demote visually similar species that aren’t seen nearby.
    Ken-ichi gives a good overview of this in this talk he recently gave to TDWG:
    https://www.youtube.com/watch?v=xfbabznYFV0

    We’ve learned from model evaluation experiments that demoting visually similar species provides better predictions on average than our current approach of not doing that. But we’ve held off because we want iNaturalist to also work well in situations where location data might not help, such as a garden filled with ornamentals or a remote location without much nearby data to draw from.

    We’re currently working on altering the CV suggestions on iNaturalist so that by default it will demote visually similar species that aren’t seen nearby. But there will be a new toggle to have the CV ignore location data to accommodate these situations where location data doesn’t help (e.g. gardens, captivity).
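    That default-demote-plus-toggle behavior could be sketched as a simple reweighting step. The factor values and function shape are illustrative assumptions, not iNaturalist's implementation.

        def adjust_scores_with_location(vision_scores, seen_nearby,
                                        use_location=True, boost=1.5, demote=0.5):
            """Boost visually similar species seen nearby, demote the rest;
            the toggle disables location entirely (e.g. gardens, captivity)."""
            if not use_location:
                return dict(vision_scores)
            return {
                species: score * (boost if species in seen_nearby else demote)
                for species, score in vision_scores.items()
            }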

    We’re rolling this out in Android first as part of a new more elaborate ‘species chooser’ which we’re currently testing internally and hope to have in beta some time in the next month. Why Android? That’s just where we have the development resources right now. Once we’ve figured out how to make it work there, we’ll move on to changing the default/adding the toggle on the website and getting it on to the iOS app in some form.

    On Seek, we currently don’t incorporate location data into the CV suggestions because Seek suggestions work offline and doing so requires getting location data on-device (there are a few exceptions related to the camera roll and older versions of iOS which use ‘online’ CV and thus location from the server). We’ve recently made progress on getting location data incorporated into the offline Seek CV suggestions (we have a working Android version) but we don’t yet have a release date. When this update is released, Seek will work in the same way as our plan for iNaturalist: i.e. demote species not seen nearby by default and have an option to ignore location data.

    Thanks for bearing with us and your patience. We hope these features will help towards reducing the number of wrong IDs suggested by the CV and will thus help alleviate identifier burnout.

    https://forum.inaturalist.org/t/better-use-of-location-in-computer-vision-suggestions/915/41

    Posted by ahospers about 3 years ago

    https://forum.inaturalist.org/t/suggest-id-for-sounds/18115/12 Scope: Part of what makes iNat so awesome is that all species in the tree of life are candidates for observation and identification, and all identifications hang off the tree of life. All the hard work of sorting and grinding out the taxonomy pays off when an observation gets an identification that’s attached to a real species label instead of a generic tag like “tree” or “bug.” The vision model we’re training now knows about roughly 30,000 leaf taxa (mostly species), and because of how it is deployed it can make predictions about parent or inner nodes as well, which represent another 25,000 higher ranking taxa. I believe the birdvox “fine” model can classify a few dozen different species, and the other two projects can only identify one or two. https://forum.inaturalist.org/t/suggest-id-for-sounds/18115/12

    Posted by ahospers over 2 years ago
