The Three A’s of Wearable and Ubiquitous Computing: Activity, Affect, and Attention

Van Laerhoven, Kristof (2021) The Three A’s of Wearable and Ubiquitous Computing: Activity, Affect, and Attention. Frontiers in Computer Science, 3. ISSN 2624-9898


Abstract

A long-standing challenge in wearable and ubiquitous computing has been to bridge the interaction gap between users and their manifold computers. Contextual cues that we as humans easily perceive and interpret in face-to-face interactions, such as noticing whether someone is bored, stressed, busy, or fascinated, remain largely beyond computers in everyday life. The first message of this article is that much of the research of the past decades aiming to alleviate this context gap between computers and their users has clustered into three fields, each modeling human users in a different observable category (alphabetically ordered): Activity, Affect, and Attention. A second important point is that the research fields aiming for machine recognition of these three A’s have thus far had only limited overlap, but are bound to converge both in methodology and from a systems perspective. A final point concludes with a call to action: a consequence of such a merger between the three A’s is the need for a more consolidated way of performing solid, reproducible research studies. These fields can learn from each other’s best practices, and their interaction can both lead to overarching benchmarks and establish common data pipelines.

The opportunities are plentiful. As early as 1960, J. C. R. Licklider regarded the symbiosis between human and machine as a flourishing field of research to come: “A multidisciplinary study group, examining future research and development problems of the Air Force, estimated that it would be 1980 before developments in artificial intelligence make it possible for machines alone to do much thinking or problem solving of military significance. That would leave, say, 5 years to develop man-computer symbiosis and 15 years to use it. The 15 may be 10 or 500, but those years should be intellectually the most creative and exciting in the history of mankind.” (Licklider, 1960). Advances in machine learning, deep learning, and sensor research in recent years have shown that computers have mastered many problem domains: they have improved immensely at tasks such as spotting objects in camera footage or inferring our vital signs from miniature sensors placed on our skin. Keeping track of what the system’s user is doing (Activity), how they are feeling (Affect), and what they are focusing on (Attention) has proven a much more difficult task. No sensor can directly measure even one of these A’s, and there are thus far no models for them that facilitate their machine recognition. This makes the three A’s an ideal “holy grail” to aim for, likely for the upcoming decade. The automatic detection of a user’s Activity, Affect, and Attention is on the one hand more specific than the related research field of context awareness (Schmidt et al., 1999), yet challenging and well-defined enough to spur (and require) multi-disciplinary, high-quality research. As Figure 1 shows, the ultimate goal is a more descriptive and accurate model of the computer’s user, as sensed through wearable or ubiquitous technology.
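To make the recognition problem concrete, the following is a minimal sketch, not taken from the article, of the kind of wearable sensing pipeline the text alludes to: raw accelerometer-like samples are cut into sliding windows, reduced to simple statistical features, and assigned an activity label by a nearest-centroid rule. All function names, parameters, and the synthetic data are illustrative assumptions, not the author's method.

```python
# Illustrative sketch of a wearable activity-recognition pipeline:
# synthetic accelerometer data -> sliding windows -> features -> label.
import math
import random
import statistics

def windows(signal, size=50, step=25):
    """Split a 1-D signal into overlapping fixed-length windows."""
    return [signal[i:i + size] for i in range(0, len(signal) - size + 1, step)]

def features(win):
    """Per-window mean and standard deviation: a common baseline feature set."""
    return (statistics.mean(win), statistics.stdev(win))

rng = random.Random(0)
# Synthetic data: "resting" is low-variance, "walking" is high-variance.
rest = [rng.gauss(1.0, 0.05) for _ in range(500)]   # ~1 g, little motion
walk = [rng.gauss(1.0, 0.50) for _ in range(500)]   # same mean, strong motion

def centroid(feats):
    """Mean feature vector of a set of windows (one centroid per class)."""
    return tuple(statistics.mean(dim) for dim in zip(*feats))

centroids = {
    "rest": centroid([features(w) for w in windows(rest)]),
    "walk": centroid([features(w) for w in windows(walk)]),
}

def classify(feat):
    """Assign the activity label of the nearest class centroid."""
    return min(centroids, key=lambda c: math.dist(feat, centroids[c]))

sample = [rng.gauss(1.0, 0.05) for _ in range(50)]  # unseen low-motion window
print(classify(features(sample)))                   # prints "rest"
```

The same window-feature-classifier skeleton applies, with different sensors and features, to recognizing Affect (e.g. from heart-rate variability) and Attention (e.g. from gaze or motion stillness), which is one reason the three fields are positioned to share data pipelines and benchmarks.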

Item Type: Article
Subjects: Impact Archive > Computer Science
Depositing User: Managing Editor
Date Deposited: 14 Feb 2023 06:20
Last Modified: 15 Mar 2024 12:11
URI: http://research.sdpublishers.net/id/eprint/304
