Vitor Fortes Rey

(AG Lukowicz, TU Kaiserslautern)
hosted by the PhD Program in CS @ TU KL

"Translating Videos into Synthetic Training Data for Wearable Sensor-Based Activity Recognition Systems"

(MPI-SWS talk in cooperation with the Department of Computer Science)

Human activity recognition (HAR) using wearable sensors has benefited much less from recent advances in Deep Learning than fields such as computer vision and natural language processing. This is, to a large extent, due to the lack of large-scale (compared to computer vision) repositories of labeled training data for sensor-based HAR tasks. For example, ImageNet has images for around 100,000 categories (based on WordNet), with on average 1,000 images per category (up to 100,000,000 samples in total). The Kinetics-700 video activity data set has 650,000 video clips covering 700 different human activities (over 1,800 h in total). By contrast, the total length of all sensor-based HAR data sets in the popular UCI machine learning repository is less than 63 h, with around 38 h of that consisting of simple modes of locomotion such as walking, standing, or cycling. In our research we aim to facilitate the use of online videos, which exist in ample quantities for most activities and are much easier to label than sensor data, to simulate labeled wearable motion sensor data. In previous work we already demonstrated some preliminary results in this direction, focusing on very simple, activity-specific simulation models and a single sensor modality (acceleration norm). In this work, we show how we can train a regression model on generic motions for both accelerometer and gyroscope signals and then apply it to videos of the target activities to generate synthetic Inertial Measurement Unit (IMU) data (acceleration and gyro norms) that can be used to train and/or improve HAR models. We demonstrate that systems trained on simulated data generated by our regression model come within around 10% of the mean F1 score of a system trained on real sensor data. Furthermore, we show that by either including a small amount of real sensor data for model calibration, or simply by leveraging the fact that we can in general generate much more simulated data from video than we can collect real sensor data, the advantage of real sensor data can eventually be equalized.
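
As a rough illustration of the pipeline described in the abstract (not the authors' actual code), the sketch below regresses per-frame acceleration and gyro norms from windows of 3D joint trajectories, as an off-the-shelf pose estimator would extract them from video. The architecture, joint count, and window length are illustrative assumptions.

```python
# Minimal sketch, assuming pose windows of shape (batch, window, n_joints, 3)
# extracted from video; all names and hyperparameters are illustrative.
import torch
import torch.nn as nn

class Pose2IMUNorm(nn.Module):
    """Regress acceleration and gyro norms from a window of 3D joint positions."""
    def __init__(self, n_joints: int = 13, window: int = 50):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_joints * 3, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, 2, kernel_size=1),  # 2 outputs per frame: |acc|, |gyro|
        )

    def forward(self, pose):                   # pose: (batch, window, n_joints, 3)
        x = pose.flatten(2).transpose(1, 2)    # -> (batch, n_joints*3, window)
        return self.net(x).transpose(1, 2)     # -> (batch, window, 2)

# Training-loop sketch: fit on generic motions where video and real IMU
# recordings are synchronized (random tensors stand in for real data here).
model = Pose2IMUNorm()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

pose_batch = torch.randn(8, 50, 13, 3)   # placeholder pose windows
imu_norms = torch.randn(8, 50, 2)        # placeholder aligned |acc|, |gyro|
for _ in range(10):
    opt.zero_grad()
    loss = loss_fn(model(pose_batch), imu_norms)
    loss.backward()
    opt.step()
```

Once trained on synchronized video and IMU recordings of generic motions, such a model can be run over pose sequences from unlabeled activity videos to produce synthetic, labeled IMU norms for training HAR systems.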

Adriano Lucieri

(Smart Data and Knowledge Services, DFKI)

"AI-based Decision Support for Healthcare Applications – Towards Explainable and Secure Solutions"

AI and Deep Learning have achieved remarkable results across medical areas such as dermatology, ophthalmology, and radiology. However, the application of state-of-the-art methods is still impeded by a lack of trust in the algorithms, linked to the opacity of their decision-making. In this talk, I will introduce Deep Learning-based models for the classification of skin cancer, along with a framework that combines different approaches to explaining their decisions. The framework focuses on human-understandable decision justification through concept-based explanations, tailored to clinical settings as well as medical research.
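
To make the idea of concept-based explanation concrete, here is a minimal sketch in the spirit of concept activation vectors (TCAV, Kim et al. 2018); the framework presented in the talk is more elaborate, and all dimensions and data below are placeholders.

```python
# Minimal TCAV-style sketch, assuming activations and gradients taken at
# some intermediate layer of a trained skin-lesion classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

def concept_activation_vector(acts_concept, acts_random):
    """Fit a linear boundary between activations of concept examples
    (e.g. dermoscopic images showing a clinical concept) and random
    examples; its normal vector is the concept direction (CAV)."""
    X = np.concatenate([acts_concept, acts_random])
    y = np.concatenate([np.ones(len(acts_concept)), np.zeros(len(acts_random))])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    v = clf.coef_[0]
    return v / np.linalg.norm(v)

def tcav_score(grads_class, cav):
    """Fraction of inputs whose class-logit gradient (w.r.t. the same
    layer's activations) points along the concept direction, i.e. how
    often the concept pushes the prediction towards the class."""
    return float(np.mean(grads_class @ cav > 0))

# Placeholder activations/gradients at a 512-dimensional layer.
rng = np.random.default_rng(0)
cav = concept_activation_vector(rng.normal(size=(100, 512)),
                                rng.normal(size=(100, 512)))
print(tcav_score(rng.normal(size=(50, 512)), cav))
```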

Time: Monday, 06.12.2021, 16:00
