
DOI: 10.1055/a-1866-2943
Novel Method for Three-Dimensional Facial Expression Recognition Using Self-Normalizing Neural Networks and Mobile Devices
Neuartige Methode zur 3-dimensionalen Mimikerkennung durch den Einsatz von selbstnormalisierenden neuronalen Netzen und mobilen Geräten
Abstract
Introduction To date, most approaches to facial expression recognition rely on two-dimensional images, although more advanced approaches based on three-dimensional data exist. The latter, however, require stationary apparatus and therefore lack portability and scalability of deployment. Since human emotions, intentions, and even diseases may manifest in distinct facial expressions or changes therein, a portable yet capable solution is needed. Because three-dimensional data on facial morphology carry superior informative value and certain syndromes present with specific facial dysmorphisms, such a solution should allow portable acquisition of true three-dimensional facial scans in real time. In this study we present a novel solution for the three-dimensional acquisition of facial geometry data and the recognition of facial expressions from it. The technology presented here requires only a smartphone or tablet with an integrated TrueDepth camera and enables real-time acquisition of the facial geometry and its categorization into distinct facial expressions.
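For orientation, the TrueDepth camera is exposed to apps through Apple's ARKit framework, which delivers per-frame facial geometry together with a set of named blendshape coefficients, each in the range [0, 1]. The following minimal Python sketch illustrates what one captured frame of such data might look like; the coefficient values are invented, only a handful of ARKit's roughly 52 coefficients are shown, and the helper function is purely illustrative.

    import numpy as np

    # Invented excerpt of one captured frame: ARKit-style blendshape
    # coefficients, each in [0, 1] (ARKit provides about 52 of these).
    frame = {
        "browInnerUp": 0.12,
        "eyeBlinkLeft": 0.03,
        "eyeBlinkRight": 0.04,
        "jawOpen": 0.38,
        "mouthSmileLeft": 0.71,
        "mouthSmileRight": 0.69,
    }

    def to_feature_vector(frame: dict) -> np.ndarray:
        """Order the coefficients by name so every frame maps to the same layout."""
        return np.array([frame[key] for key in sorted(frame)], dtype=np.float32)

    print(to_feature_vector(frame))  # fixed-length input vector for a classifier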
Material and Methods Our approach consisted of two parts. First, training data were acquired by asking a cohort of 226 medical students to adopt defined facial expressions while their facial morphology was captured by our specially developed app running on iPads placed in front of them. The facial expressions to be shown by the participants were “disappointed”, “stressed”, “happy”, “sad”, and “surprised”. Second, the acquired data were used to train a self-normalizing neural network. The set of all factors describing the facial expression at a given point in time is referred to as a “snapshot”.
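The abstract does not specify the network topology, so the following is only a plausible TensorFlow/Keras sketch of a self-normalizing classifier over such snapshots, built from the standard self-normalizing ingredients (SELU activation, LeCun-normal initialization, alpha dropout); the layer widths, dropout rate, and 52-dimensional input are assumptions, while the five output classes match the expressions listed above.

    import tensorflow as tf

    NUM_COEFFICIENTS = 52  # assumed size of one snapshot (ARKit blendshapes)
    NUM_CLASSES = 5        # disappointed, stressed, happy, sad, surprised

    # SELU + LeCun-normal initialization keep activations near zero mean and
    # unit variance; AlphaDropout is the dropout variant that preserves this.
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(NUM_COEFFICIENTS,)),
        tf.keras.layers.Dense(128, activation="selu", kernel_initializer="lecun_normal"),
        tf.keras.layers.AlphaDropout(0.1),
        tf.keras.layers.Dense(128, activation="selu", kernel_initializer="lecun_normal"),
        tf.keras.layers.AlphaDropout(0.1),
        tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer=tf.keras.optimizers.Adam(),
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    # model.fit(snapshots, labels, epochs=400, validation_split=0.1)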
Results In total, over half a million snapshots were recorded during the study. The network achieved an overall accuracy of 80.54% after 400 epochs of training; on the test set, an overall accuracy of 81.15% was determined. Recall varied by snapshot category, ranging from 74.79% for “stressed” to 87.61% for “happy”. Precision showed similar results, with “sad” achieving the lowest value (77.48%) and “surprised” the highest (86.87%).
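To clarify how these figures relate: overall accuracy is the normalized trace of the confusion matrix, recall (sensitivity) divides each diagonal entry by its row sum (all snapshots truly in that class), and precision (positive predictive value) divides it by its column sum (all snapshots predicted as that class). The numbers in the sketch below are invented solely to show the arithmetic, not the study's actual counts.

    import numpy as np

    # Invented confusion matrix (rows: true class, columns: predicted class),
    # class order: disappointed, stressed, happy, sad, surprised.
    cm = np.array([
        [80,  5,  3,  8,  4],
        [ 7, 75,  4,  9,  5],
        [ 2,  3, 88,  3,  4],
        [ 9,  8,  2, 77,  4],
        [ 3,  4,  4,  3, 86],
    ])

    accuracy  = np.trace(cm) / cm.sum()        # overall accuracy
    recall    = np.diag(cm) / cm.sum(axis=1)   # per-class sensitivity
    precision = np.diag(cm) / cm.sum(axis=0)   # per-class positive predictive value
    print(accuracy, recall, precision)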
Conclusions The present work demonstrates that respectable results can be achieved even with data sets that pose certain challenges. Through various measures, already incorporated into an optimized version of our app, we expect the training results to become significantly more accurate in the future. A follow-up study with the new version of our app, which incorporates the suggested alterations and adaptations, is currently being conducted. We aim to build a large, open database of facial scans, not only for facial expression recognition but also for disease recognition and for monitoring treatment progress.
Keywords
facial expression recognition - self-normalizing neural networks - facial geometry - disease recognition
Publication History
Submitted: 8 March 2022
Accepted after revision: 26 May 2022
Article published online: 21 July 2022
© 2022. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution-NonDerivative-NonCommercial License, permitting copying and reproduction so long as the original work is given appropriate credit. Contents may not be used for commercial purposes, or adapted, remixed, transformed or built upon. (https://creativecommons.org/licenses/by-nc-nd/4.0/).
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany