
DOI: 10.1055/s-0044-1800758
Best Paper Selection
- Appendix: Summary of Best Papers Selected for the IMIA Yearbook 2024, Sensors, Signals, and Imaging Informatics Section
- Appendix: Queries used for PubMed and Scopus
Appendix: Summary of Best Papers Selected for the IMIA Yearbook 2024, Sensors, Signals, and Imaging Informatics Section
Li Z, Fan Q, Bilgic B, Wang G, Wu W, Polimeni JR, Miller KL, Huang SY, Tian Q.
Diffusion MRI data analysis assisted by deep learning synthesized anatomical images (DeepAnat).
Med Image Anal. 2023 May;86:102744.
doi: 10.1016/j.media.2023.102744.
Diffusion MRI is a useful neuroimaging tool for non-invasive mapping of human brain microstructure and structural connections. This study proposes to synthesize high-quality T1-weighted anatomical images directly from diffusion data using convolutional neural networks, including a U-Net and a hybrid generative adversarial network (GAN). The accuracy of brain segmentation is found to be slightly higher for the U-Net than for the GAN, and the synthesized T1 images, as well as the results of brain segmentation and the comprehensive diffusion analysis tasks, are highly similar to those obtained from native T1 data. The efficacy of the approach is further validated on a larger dataset of 300 elderly subjects provided by the UK Biobank. As highlights of this work, we emphasize the quantitative and systematic evaluation. The U-Nets were trained and validated on different databases, which supports the generalizability of the results. Furthermore, the study demonstrates the practical feasibility of the approach, which could benefit a wide range of clinical and neuroscientific applications.
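As an editorial illustration of the architecture family used here, the following minimal PyTorch sketch shows a small 3D U-Net-style synthesis network mapping multi-channel diffusion-derived volumes to a T1-weighted volume. The depth, channel counts, and choice of input channels are illustrative assumptions, not the published DeepAnat configuration.

# Minimal 3D U-Net-style synthesis sketch (illustrative, not DeepAnat itself).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3D convolutions with ReLU, as in a standard U-Net stage.
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
    )

class TinyUNet3D(nn.Module):
    def __init__(self, in_channels=7, out_channels=1):
        # in_channels: e.g., a b0 image plus diffusion-derived maps (assumption).
        super().__init__()
        self.enc1 = conv_block(in_channels, 16)
        self.enc2 = conv_block(16, 32)
        self.pool = nn.MaxPool3d(2)
        self.bottleneck = conv_block(32, 64)
        self.up2 = nn.ConvTranspose3d(64, 32, kernel_size=2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv3d(16, out_channels, kernel_size=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))  # skip connection
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))  # skip connection
        return self.head(d1)  # synthesized T1-weighted volume

# Example: one 64^3 patch with 7 diffusion-derived input channels.
net = TinyUNet3D()
t1_hat = net(torch.randn(1, 7, 64, 64, 64))  # -> (1, 1, 64, 64, 64)

The skip connections (torch.cat) let the decoder recover fine anatomical detail from the encoder, which matters when the synthesized volumes are to be used for segmentation.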
Chen Y, Lu X, Xie Q.
Collaborative networks of transformers and convolutional neural networks are powerful and versatile learners for accurate 3D medical image segmentation.
Comput Biol Med. 2023 Sep;164:107228.
doi: 10.1016/j.compbiomed.2023.107228.
Since segmentation of medical images is very often a necessary step in any image-based study, a method that can perform robust 3D segmentation is of utmost relevance. This paper proposes a network called TC-CoNet, which encodes spatial feature information and captures multi-scale objects by leveraging the strengths of two powerful architectures, Transformers and convolutional neural networks. TC-CoNet was tested on five challenging medical image segmentation tasks: multi-organ CT segmentation (aorta, gallbladder, spleen, left kidney, right kidney, liver, pancreas, and stomach); cardiac segmentation (right ventricular cavity, left ventricular myocardium, and left ventricular cavity); brain tumor segmentation; left atrium segmentation; and lung tumor segmentation. In thorough experiments, the proposed network outperformed state-of-the-art approaches on the Dice similarity coefficient (DSC), Hausdorff distance (HD), and HD95 evaluation metrics, making it a promising solution for medical image segmentation. In addition, the code for TC-CoNet is freely available on GitHub, making this study reproducible. A minimal illustration of the general Transformer-CNN pairing follows.
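The sketch below combines a convolutional stem (local features, downsampling) with a standard Transformer encoder layer applied to the flattened voxel tokens (global context). It illustrates the general design pattern only; all dimensions are assumptions and this is not the published TC-CoNet architecture.

# Hybrid CNN + Transformer stage sketch (illustrative, not TC-CoNet itself).
import torch
import torch.nn as nn

class ConvTransformerStage(nn.Module):
    def __init__(self, in_ch=1, embed_dim=64, n_heads=4):
        super().__init__()
        # Convolutional stem: local spatial features with 2x downsampling.
        self.conv = nn.Sequential(
            nn.Conv3d(in_ch, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.InstanceNorm3d(embed_dim),
            nn.GELU(),
        )
        # Transformer layer: global self-attention over voxel tokens.
        self.attn = nn.TransformerEncoderLayer(
            d_model=embed_dim, nhead=n_heads, dim_feedforward=128,
            batch_first=True,
        )

    def forward(self, x):
        f = self.conv(x)                       # (B, C, D, H, W) local features
        b, c, d, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)  # (B, D*H*W, C) voxel tokens
        tokens = self.attn(tokens)             # global context
        return tokens.transpose(1, 2).reshape(b, c, d, h, w)

stage = ConvTransformerStage()
feats = stage(torch.randn(1, 1, 32, 32, 32))  # -> (1, 64, 16, 16, 16)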
Luo M, Yang X, Wang H, Dou H, Hu X, Huang Y, Ravikumar N, Xu S, Zhang Y, Xiong Y, Xue W, Frangi AF, Ni D, Sun L.
RecON: Online learning for sensorless freehand 3D ultrasound reconstruction.
Med Image Anal. 2023 Jul;87:102810.
doi: 10.1016/j.media.2023.102810.
While sensorless freehand 3D ultrasound reconstruction using deep networks offers significant advantages such as a large field of view, good resolution, affordability, and ease of use, existing methods primarily rely on basic scanning strategies with minimal inter-frame variation. In this study, the authors introduce a novel online learning framework for freehand 3D ultrasound reconstruction under complex scanning strategies that include diverse scanning velocities and poses. First, they propose a motion-weighted training loss to regularize frame-by-frame scan variations and better address the negative impact of uneven inter-frame velocity. Second, the online learning process is driven by local-to-global pseudo-supervision, exploiting both frame-level contextual consistency and path-level similarity constraints to improve inter-frame transformation estimation. Third, a feasible differentiable reconstruction approximation is developed to enable end-to-end optimization of the online learning process. Experimental results demonstrate that the freehand 3D ultrasound reconstruction system outperforms current methods on two large simulated datasets and one real dataset. Additionally, the proposed framework was validated on clinical scan videos, confirming its effectiveness and generalizability.
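The motion-weighted loss can be illustrated with a short sketch: per-frame pose errors are weighted by the magnitude of the ground-truth inter-frame motion, so fast or uneven sweep segments contribute proportionally more to the training signal. The exact weighting below is an assumption for illustration, not the authors' published formulation.

# Motion-weighted pose regression loss sketch (illustrative formulation).
import torch

def motion_weighted_loss(pred_poses, gt_poses, eps=1e-6):
    # pred_poses, gt_poses: (N_frames, 6) rigid transforms per frame
    # (3 translations + 3 rotations), relative to the previous frame.
    motion = gt_poses.norm(dim=1)            # per-frame motion magnitude
    weights = motion / (motion.sum() + eps)  # emphasize large inter-frame motion
    per_frame_err = (pred_poses - gt_poses).abs().mean(dim=1)  # L1 per frame
    return (weights * per_frame_err).sum()

pred = torch.randn(50, 6, requires_grad=True)
gt = torch.randn(50, 6)
loss = motion_weighted_loss(pred, gt)
loss.backward()  # gradients flow to the pose estimator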
Ouzar Y, Djeldjli D, Bousefsaf F, Maaoui C.
X-iPPGNet: A novel one stage deep learning architecture based on depthwise separable convolutions for video-based pulse rate estimation.
Comput Biol Med. 2023 Mar;154:106592.
doi: 10.1016/j.compbiomed.2023.106592.
Pulse rate (PR) is a crucial marker for assessing an individual's health and requires routine monitoring to identify various health issues. Electrocardiography and photoplethysmography (PPG) are the primary methods used to measure heart rate, and both rely on contact sensors attached to the body. With the growing demand for long-term health monitoring, non-contact PR estimation using imaging photoplethysmography (iPPG), which analyzes subtle changes in skin color, is gaining significant attention. The authors propose a novel spatio-temporal end-to-end network, X-iPPGNet, for instantaneous PR estimation directly from facial video recordings. Experimental results demonstrate high performance under various conditions, including head movements, facial expressions, and different skin tones, outperforming existing methods on three popular benchmark datasets. Notably, X-iPPGNet integrates iPPG signal extraction and pulse rate prediction into a single step, making it more suitable for real-time measurement and for tracking sharp PR fluctuations. The proposed method also performs well in less-constrained scenarios.
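The depthwise separable convolutions named in the title factor a standard convolution into a per-channel spatio-temporal filter followed by a 1x1x1 channel-mixing convolution, which cuts parameters and computation. The minimal PyTorch block below illustrates the idea; channel counts are illustrative assumptions, not the X-iPPGNet configuration.

# Depthwise separable 3D convolution sketch (illustrative building block).
import torch
import torch.nn as nn

class DepthwiseSeparableConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        # groups=in_ch => one spatio-temporal filter per input channel.
        self.depthwise = nn.Conv3d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        # 1x1x1 convolution recombines the channels.
        self.pointwise = nn.Conv3d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

# Example: a short facial video clip, (batch, channels, frames, H, W).
block = DepthwiseSeparableConv3d(3, 32)
out = block(torch.randn(1, 3, 16, 64, 64))  # -> (1, 32, 16, 64, 64)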
Appendix: Queries used for PubMed and Scopus
PubMed – Sensors
( (“2023/01/01”[DP] : “2023/12/31”[DP]) AND Journal Article [pt] AND English[lang] AND hasabstract[text] NOT Bibliography[pt] NOT Comment[pt] NOT Editorial[pt] NOT Letter[pt] NOT News[pt] NOT Review[pt] NOT Case Reports[pt] NOT Published Erratum[pt] NOT Historical Article[pt] NOT legislation[pt] NOT “clinical trial”[pt] NOT “evaluation studies”[pt] NOT “technical report”[pt] NOT “Scientific Integrity Review”[pt] NOT “Systematic Review”[pt] NOT “Retracted Publication”[pt] ) AND ( ( “sensor”[TI] OR “sensors”[TI] OR “sensing”[TI] ) AND ( “vital sign”[TI] OR “vital signs”[TI] OR “biological signal”[TI] OR “biological signals”[TI] OR “biological parameter”[TI] OR “biological parameters”[TI] OR “physiological parameter”[TI] OR “physiological parameters”[TI] OR “physiological signal”[TI] OR “physiological signals”[TI] OR “blood pressure”[TI] OR “temperature”[TI] OR “heart rate”[TI] OR “heartbeat”[TI] OR “heartbeats”[TI] OR “pulse rate”[TI] OR “respiration rate”[TI] OR “respiratory rate”[TI] OR “breathing rate”[TI] OR “ECG”[TI] OR “electrocardiography”[TI] OR “electrocardiogram”[TI] OR “menstrual cycle”[TI] OR “oxygen”[TI] OR “oximetry”[TI] OR “glucose”[TI] OR “end-tidal”[TI] OR “emg”[TI] OR “electromyography”[TI] OR “electromyogram”[TI] OR “ppg”[TI] OR “photoplethysmography”[TI] OR “photoplethysmogram”[TI] OR “pcg”[TI] OR “phonocardiography”[TI] OR “phonocardiogram”[TI] OR “bcg”[TI] OR “ballistocardiography”[TI] OR “ballistocardiogram”[TI] OR “scg”[TI] OR “seismocardiography”[TI] OR “seismocardiogram”[TI] OR “eog”[TI] OR “electrooculography”[TI] OR “electrooculogram”[TI] OR “eda”[TI] OR “electrodermal activity”[TI] OR “GSR”[TI] OR “Galvanic skin response”[TI] OR “eeg”[TI] OR “electroencephalogram”[TI] OR “bci”[TI] OR “brain computer interface”[TI] ) NOT ( “review”[TI] OR “survey”[TI] OR “conference”[ta] ) ) AND ( “medic*”[TIAB] OR “biomed*”[TIAB] OR “biologic*”[TIAB] )
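For readers who wish to re-run such searches, a query string like the one above can be submitted programmatically, for example via Biopython's Entrez interface. The snippet below is a tooling suggestion of ours, not part of the original search protocol; the e-mail address is a placeholder and the query is shortened for illustration.

# Executing a PubMed query via Biopython's Entrez module (illustrative).
from Bio import Entrez

Entrez.email = "your.name@example.org"  # NCBI requires a contact address; placeholder

query = '("2023/01/01"[DP] : "2023/12/31"[DP]) AND Journal Article[pt] AND English[lang]'
handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(record["Count"])   # total number of matching records
print(record["IdList"])  # PMIDs of the first 100 hits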
Scopus – Sensors
TITLE ( ( “sensor” OR “sensors” OR “sensing” ) AND ( “vital sign” OR “vital signs” OR “biological signal” OR “biological signals” OR “biological parameter” OR “biological parameters” OR “physiological parameter” OR “physiological parameters” OR “physiological signal” OR “physiological signals” OR “blood pressure” OR “temperature” OR “heart rate” OR “heartbeat” OR “heartbeats” OR “pulse rate” OR “respiration rate” OR “respiratory rate” OR “breathing rate” OR “ECG” OR “electrocardiography” OR “electrocardiogram” OR “menstrual cycle” OR “oxygen” OR “oximetry” OR “glucose” OR “end-tidal” OR “emg” OR “electromyography” OR “electromyogram” OR “ppg” OR “photoplethysmography” OR “photoplethysmogram” OR “pcg” OR “phonocardiography” OR “phonocardiogram” OR “bcg” OR “ballistocardiography” OR “ballistocardiogram” OR “scg” OR “seismocardiography” OR “seismocardiogram” OR “eog” OR “electrooculography” OR “electrooculogram” OR “eda” OR “electrodermal activity” OR “GSR” OR “Galvanic skin response” OR “eeg” OR “electroencephalogram” OR “bci” OR “brain computer interface” ) AND NOT ( “review” OR “survey” ) ) AND TITLE-ABS ( “application” ) AND PUBDATETXT ( “January 2023” OR “February 2023” OR “March 2023” OR “April 2023” OR “May 2023” OR “June 2023” OR “July 2023” OR “August 2023” OR “September 2023” OR “October 2023” OR “November 2023” OR “December 2023” ) AND LANGUAGE ( english ) AND SUBJAREA ( medi ) AND SRCTYPE ( j ) AND DOCTYPE ( ar ) AND NOT DOCTYPE ( re )
PubMed – Signals
( (“2023/01/01”[DP] : “2023/12/31”[DP]) AND Journal Article [pt] AND English[lang] AND hasabstract[text] NOT Bibliography[pt] NOT Comment[pt] NOT Editorial[pt] NOT Letter[pt] NOT News[pt] NOT Review[pt] NOT Case Reports[pt] NOT Published Erratum[pt] NOT Historical Article[pt] NOT legislation[pt] NOT “clinical trial”[pt] NOT “evaluation studies”[pt] NOT “technical report”[pt] NOT “Scientific Integrity Review”[pt] NOT “Systematic Review”[pt] NOT “Retracted Publication”[pt] ) AND ( ( “biosignal”[TI] OR “biomedical signal”[TI] OR “physiological signal”[TI] OR “ecg”[TI] OR “electrocardiography”[TI] OR “electrocardiogram”[TI] OR “emg”[TI] OR “electromyography”[TI] OR “electromyogram”[TI] OR “ppg”[TI] OR “photoplethysmography”[TI] OR “photoplethysmogram”[TI] OR “pcg”[TI] OR “phonocardiography”[TI] OR “phonocardiogram”[TI] OR “bcg”[TI] OR “ballistocardiography”[TI] OR “ballistocardiogram”[TI] OR “scg”[TI] OR “seismocardiography”[TI] OR “seismocardiogram”[TI] OR “eog”[TI] OR “electrooculography”[TI] OR “electrooculogram”[TI] OR “eda”[TI] OR “electrodermal activity”[TI] OR “Respiration”[TI] OR “Blood Pressure”[TI] OR “eeg”[TI] OR “electroencephalogram”[TI] OR “bci”[TI] OR “brain computer interface”[TI] ) AND ( “processing”[TI] OR “analytics”[TI] OR “analysis”[TI] OR “analyse”[TI] OR “analyze”[TI] OR “analysing”[TI] OR “analyzing”[TI] OR “enhancement”[TI] OR “enhancements”[TI] OR “segmentation”[TI] OR “feature extraction”[TI] OR “feature selection”[TI] OR “classification”[TI] OR “clustering”[TI] OR “measurement”[TI] OR “quantification”[TI] OR “registration”[TI] OR “recognition”[TI] OR “reconstruction”[TI] OR “interpretation”[TI] OR “retrieval”[TI] OR “augmentation”[TI] OR “data mining”[TI] OR “computer-assisted”[TI] OR “computer-aided”[TI] OR “artificial intelligence”[TI] OR “machine learning”[TI] OR “deep learning”[TI] OR “neural network”[TI] OR “computer vision”[TI] OR “autoencoder”[TI] OR “auto-encoder”[TI] OR “Botzmann”[TI] OR “U-net”[TI] OR “support vector machine”[TI] OR “SVM”[TI] OR “random forest”[TI] ) NOT ( “review”[TI] OR “survey”[TI] OR “conference”[ta] ) ) AND ( “medical informatics”[MH] )
Scopus – Signals
TITLE ( ( “signal” OR “biosignal” OR “biomedical signal” OR “physiological signal” OR “ecg” OR “electrocardiography” OR “electrocardiogram” OR “emg” OR “electromyography” OR “electromyogram” OR “ppg” OR “photoplethysmography” OR “photoplethysmogram” OR “pcg” OR “phonocardiography” OR “phonocardiogram” OR “bcg” OR “ballistocardiography” OR “ballistocardiogram” OR “scg” OR “seismocardiography” OR “seismocardiogram” OR “eog” OR “electrooculography” OR “electrooculogram” OR “eda” OR “electrodermal activity” OR “Respiration” OR “Blood Pressure” OR “eeg” OR “electroencephalogram” OR “bci” OR “brain computer interface” ) AND ( “processing” OR “analytics” OR “analysis” OR “analyse” OR “analyze” OR “analysing” OR “analyzing” OR “enhancement” OR “enhancements” OR “segmentation” OR “feature extraction” OR “feature selection” OR “classification” OR “clustering” OR “measurement” OR “quantification” OR “registration” OR “recognition” OR “reconstruction” OR “interpretation” OR “retrieval” OR “augmentation” OR “data mining” OR “computer-assisted” OR “computer-aided” OR “artificial intelligence” OR “machine learning” OR “deep learning” OR “neural network” OR “computer vision” OR “autoencoder” OR “auto-encoder” OR “Botzmann” OR “U-net” OR “support vector machine” OR “SVM” OR “random forest” ) AND NOT ( “review” OR “survey” ) ) AND PUBDATETXT ( “January 2023” OR “February 2023” OR “March 2023” OR “April 2023” OR “May 2023” OR “June 2023” OR “July 2023” OR “August 2023” OR “September 2023” OR “October 2023” OR “November 2023” OR “December 2023” ) AND LANGUAGE ( english ) AND SUBJAREA ( medi ) AND SRCTYPE ( j ) AND DOCTYPE ( ar ) AND NOT DOCTYPE ( re )
PubMed – Imaging
( (“2023/01/01”[DP] : “2023/12/31”[DP]) AND Journal Article [pt] AND English[lang] AND hasabstract[text] NOT Bibliography[pt] NOT Comment[pt] NOT Editorial[pt] NOT Letter[pt] NOT News[pt] NOT Review[pt] NOT Case Reports[pt] NOT Published Erratum[pt] NOT Historical Article[pt] NOT legislation[pt] NOT “clinical trial”[pt] NOT “evaluation studies”[pt] NOT “technical report”[pt] NOT “Scientific Integrity Review”[pt] NOT “Systematic Review”[pt] NOT “Retracted Publication”[pt] ) AND ( ( “image”[TI] OR “imaging”[TI] OR “video”[TI] OR “X-ray”[TI] OR “X ray”[TI] OR “radiography”[TI] OR “orthopantomography”[TI] OR “fluoroscopy”[TI] OR “angiography”[TI] OR “tomography”[TI] OR “CT”[TI] OR “magnetic resonance”[TI] OR “MRI”[TI] OR “echocardiography”[TI] OR “sonography”[TI] OR “ultrasound”[TI] OR “endoscopy”[TI] OR “arthroscopy”[TI] OR “bronchoscopy”[TI] OR “colonoscopy”[TI] OR “cystoscopy”[TI] OR “laparoscopy”[TI] OR “nephroscopy”[TI] OR “laryngoscopy”[TI] OR “funduscopy”[TI] OR “thermography”[TI] OR “photography”[TI] OR “arthroscopy”[TI] OR “microscopy”[TI] ) AND ( “processing”[TI] OR “analytics”[TI] OR “analysis”[TI] OR “analyse”[TI] OR “analyze”[TI] OR “analysing”[TI] OR “analyzing”[TI] OR “enhancement”[TI] OR “enhancements”[TI] OR “segmentation”[TI] OR “feature extraction”[TI] OR “feature selection”[TI] OR “classification”[TI] OR “clustering”[TI] OR “measurement”[TI] OR “quantification”[TI] OR “registration”[TI] OR “recognition”[TI] OR “reconstruction”[TI] OR “interpretation”[TI] OR “retrieval”[TI] OR “augmentation”[TI] OR “data mining”[TI] OR “computer-assisted”[TI] OR “computer-aided”[TI] OR “artificial intelligence”[TI] OR “machine learning”[TI] OR “deep learning”[TI] OR “neural network”[TI] OR “computer vision”[TI] OR “autoencoder”[TI] OR “auto-encoder”[TI] OR “Botzmann”[TI] OR “U-net”[TI] OR “support vector machine”[TI] OR “SVM”[TI] OR “random forest”[TI] ) NOT ( “review”[TI] OR “survey”[TI] OR “conference”[ta] ) ) AND ( “medical informatics”[MH] )
Scopus – Imaging
TITLE ( ( “image” OR “imaging” OR “video” OR “X-ray” OR “X ray” OR “radiography” OR “orthopantomography” OR “fluoroscopy” OR “angiography” OR “tomography” OR “CT” OR “magnetic resonance” OR “MRI” OR “echocardiography” OR “sonography” OR “ultrasound” OR “endoscopy” OR “arthroscopy” OR “bronchoscopy” OR “colonoscopy” OR “cystoscopy” OR “laparoscopy” OR “nephroscopy” OR “laryngoscopy” OR “funduscopy” OR “thermography” OR “photography” OR “arthroscopy” OR “microscopy” ) AND ( “processing” OR “analytics” OR “analysis” OR “analyse” OR “analyze” OR “analysing” OR “analyzing” OR “enhancement” OR “enhancements” OR “segmentation” OR “feature extraction” OR “feature selection” OR “classification” OR “clustering” OR “measurement” OR “quantification” OR “registration” OR “recognition” OR “reconstruction” OR “interpretation” OR “retrieval” OR “augmentation” OR “data mining” OR “computer-assisted” OR “computer-aided” OR “artificial intelligence” OR “machine learning” OR “deep learning” OR “neural network” OR “computer vision” OR “autoencoder” OR “auto-encoder” OR “Botzmann” OR “U-net” OR “support vector machine” OR “SVM” OR “random forest” ) AND NOT ( “review” OR “survey” ) ) AND PUBDATETXT ( “January 2023” OR “February 2023” OR “March 2023” OR “April 2023” OR “May 2023” OR “June 2023” OR “July 2023” OR “August 2023” OR “September 2023” OR “October 2023” OR “November 2023” OR “December 2023” ) AND LANGUAGE ( english ) AND SUBJAREA ( medi ) AND SRCTYPE ( j ) AND DOCTYPE ( ar ) AND NOT DOCTYPE ( re )
No conflict of interest has been declared by the author(s).
Publication History
Article published online:
08 April 2025
© 2024. The Author(s). This is an open access article published by Thieme under the terms of the Creative Commons Attribution License, permitting unrestricted use, distribution, and reproduction so long as the original work is properly cited. (https://creativecommons.org/licenses/by/4.0/)
Georg Thieme Verlag KG
Rüdigerstraße 14, 70469 Stuttgart, Germany