Lecture 04: Presenting an HCI Researcher
Finding an HCI Researcher
To find the perfect candidate for this lecture, I used a classic technique: I went on Google Scholar, searched for "Human-Computer Interaction", and checked the authors of the most cited papers.
That's how I came across Gregory D. Abowd: he is one of the coauthors of the second edition of the "bible" of HCI, "Human-Computer Interaction".
Presenting Gregory D. Abowd and explaining why I chose him
Gregory D. Abowd was born on September 12th, 1964 in the USA. He is a computer scientist best known for his work on ubiquitous computing and technologies for autism. Since 1994, he has been a professor in the School of Interactive Computing at the Georgia Institute of Technology.
He is an Oxford graduate in the field of Computation and moved straight into research and teaching as soon as his studies ended. He mostly publishes in the areas of Human-Computer Interaction, Ubiquitous Computing, Software Engineering, and Computer-Supported Cooperative Work, but he is particularly known for his work in ubiquitous computing, where he has made contributions in automated capture and access, context-aware computing, and smart home technologies.
Abowd's research primarily has an applications focus: he has worked to develop systems for health care, education, the home, and individuals with autism, and that is mainly why I chose him. Being quite sensitive to issues around disabilities, I believe that work in this area is either too limited or not publicized enough. So when renowned researchers actually use their skills and their audience to serve people who are too often marginalized, it earns nothing but my respect.
As you may have guessed from the introduction, Dr. Abowd is a very successful researcher, with over 53,284 citations on Google Scholar. Alongside the bible of HCI, he published "Towards a better understanding of context and context-awareness" and "A conceptual framework and a toolkit for supporting the rapid prototyping of context-aware applications", each of which has gathered more than 4,000 citations.
What is he working on now?
I looked through Dr. Abowd's latest publications and found the following one (25/07/2020): "IMUTube: Automatic extraction of virtual on-body accelerometry from video for human activity recognition".
In this paper, Dr. Abowd and his co-authors address the lack of large-scale, labeled datasets for on-body sensor-based human activity recognition (HAR), which impedes progress in the creation of new models. Such datasets are scarce because collecting them is hard, time-consuming, expensive, and error-prone, and this scarcity is too great an obstacle for the research to really blossom.
To address this problem, they introduce IMUTube, an automated processing pipeline that integrates existing computer vision and signal processing techniques to convert videos of human activity into virtual streams of IMU data. These virtual IMU streams represent accelerometry at a wide variety of locations on the human body.
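To make the idea more concrete, here is a minimal sketch of the general principle behind virtual accelerometry: if a pose tracker gives us the 3D trajectory of a body joint across video frames, twice differentiating that trajectory yields an acceleration signal. This is only an illustration of the concept, not the authors' actual pipeline; the function name and the synthetic wrist trajectory are hypothetical, and a real system such as IMUTube would also have to handle things like sensor orientation, gravity, and camera motion.

```python
import numpy as np

def virtual_accelerometry(joint_positions, fps):
    """Approximate on-body accelerometry from a 3D joint trajectory.

    joint_positions: (T, 3) array of one joint's 3D positions in meters
                     (e.g. the wrist), as estimated from video by a pose tracker.
    fps: video frame rate in Hz.

    Returns a (T-2, 3) array of acceleration estimates in m/s^2, obtained
    by twice differentiating the position signal with finite differences.
    """
    dt = 1.0 / fps
    velocity = np.diff(joint_positions, axis=0) / dt   # first derivative: (T-1, 3)
    acceleration = np.diff(velocity, axis=0) / dt      # second derivative: (T-2, 3)
    return acceleration

# Hypothetical usage: a synthetic 5-second wrist trajectory at 30 fps.
fps = 30
t = np.linspace(0, 5, 5 * fps)
wrist = np.stack([np.sin(t), np.cos(t), 0.1 * t], axis=1)  # (150, 3)
virtual_imu = virtual_accelerometry(wrist, fps)
print(virtual_imu.shape)  # (148, 3)
```

Under this simple view, any video in which a person's body can be tracked becomes a potential source of labeled sensor data, which is exactly what makes the approach attractive for HAR.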
The virtually generated IMU data improves the performance of a variety of models on known HAR datasets. The authors' goal is thus to reduce the overall cost of data generation for HAR as much as possible, and they describe their results as extremely promising. They conclude that this is only the beginning and that the future might be bright for HAR.