Publications

2024

From Walls to Windows: Creating Transparency to Understand Filter Bubbles in Social Media

In

NORMalize 2024: The Second Workshop on the Normative Design and Evaluation of Recommender Systems, co-located with the ACM Conference on Recommender Systems 2024 (RecSys 2024)

Workshop

Date

October 18, 2024

Authors

Luka Bekavac, Kimberly Garcia, Jannis Strecker, Simon Mayer, and Aurelia Tamò-Larrieux

Abstract

Social media platforms play a significant role in shaping public opinion and societal norms. Understanding this influence requires examining the diversity of content that users are exposed to. However, studying filter bubbles in social media recommender systems has proven challenging, despite extensive research in this area. In this work, we introduce SOAP (System for Observing and Analyzing Posts), a novel system designed to collect and analyze data from very large online platforms (VLOPs) to study filter bubbles at scale. Our methodology aligns with established definitions and frameworks, allowing us to comprehensively explore and log filter bubble data. From an input prompt referring to a topic, our system is capable of creating and navigating filter bubbles using a multimodal LLM. We demonstrate SOAP by creating three distinct filter bubbles in the feed of social media users, revealing a significant decline in topic diversity after as little as 60 minutes of scrolling. Furthermore, we validate the LLM analysis of posts through inter- and intra-reliability testing. Finally, we open-source SOAP as a robust tool for facilitating further empirical studies on filter bubbles in social media.
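
The decline in topic diversity that SOAP reveals can be illustrated with a simple information-theoretic measure. The sketch below computes the Shannon entropy of the topic labels assigned to posts in two scrolling windows; the topics, window contents, and the use of entropy as the diversity measure are illustrative assumptions, not code or metrics taken from SOAP.

import math
from collections import Counter

def topic_entropy(topic_labels):
    # Shannon entropy (bits) of a window of topic labels;
    # lower values indicate a less diverse feed.
    counts = Counter(topic_labels)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Hypothetical topics of posts seen early vs. after an hour of scrolling.
early_window = ["sports", "politics", "music", "cooking", "sports", "travel"]
late_window = ["fitness", "fitness", "fitness", "nutrition", "fitness", "fitness"]

print(topic_entropy(early_window))  # high entropy: diverse feed
print(topic_entropy(late_window))   # low entropy: emerging filter bubble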

Text Reference

Luka Bekavac, Kimberly Garcia, Jannis Strecker, Simon Mayer, and Aurelia Tamò-Larrieux. 2024. From Walls to Windows: Creating Transparency to Understand Filter Bubbles in Social Media. In NORMalize 2024: The Second Workshop on the Normative Design and Evaluation of Recommender Systems. 12 pages.

Link to Published Paper Download Paper Link to Code

Reader-aware Writing Assistance through Reader Profiles

In

34th ACM Conference on Hypertext and Social Media (HT '24)

Conference

Date

September 10, 2024

Authors

Ge Li, Danai Vachtsevanou, Jérémy Lemée, Simon Mayer, and Jannis Strecker

Abstract

Establishing rapport between authors and readers of scientific texts is essential for supporting readers in understanding texts as intended, facilitating socio-discursive practices within disciplinary communities, and helping in identifying interdisciplinary links among scientific writings. We propose a Reader-aware Congruence Assistant (RaCA), which supports writers in creating texts that are adapted to target readers. Similar to user-centered design, which is based on user profiles, RaCA features reader-centered writing through reader profiles that are dynamically computed from information discovered through academic search engines. Our assistant then leverages large language models to measure the congruence of a written text with a given reader profile, and provides feedback to the writer. We demonstrate our approach with an implemented prototype that illustrates how RaCA exploits information available on the Web to construct reader profiles, assesses writer-reader congruence, and offers writers color-coded visual feedback accordingly. We argue that our approach to reader-oriented scientific writing paves the way towards more personalized interaction of readers and writers with scientific content, and discuss how integration with Semantic Web technologies and Adaptive User Interface design can help materialize this vision within an ever-growing Web of scientific ideas, proof, and discourse.
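
RaCA relies on large language models for the congruence assessment itself; as a rough, non-LLM stand-in for that step, the sketch below scores the overlap between a reader profile and a draft sentence with a plain cosine similarity over term counts. The profile terms, example sentence, and threshold are hypothetical.

import math
from collections import Counter

def term_vector(text):
    # Bag-of-words term counts as a crude profile/text representation.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical reader profile derived from a target reader's prior publications.
reader_profile = term_vector("semantic web linked data knowledge graphs agents")
draft_sentence = term_vector("we publish agent affordances as linked data on the web")

score = cosine(reader_profile, draft_sentence)
print("congruent" if score > 0.2 else "low congruence", round(score, 2))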

Text Reference

Ge Li, Danai Vachtsevanou, Jérémy Lemée, Simon Mayer, and Jannis Strecker. 2024. Reader-aware Writing Assistance through Reader Profiles. In 34th ACM Conference on Hypertext and Social Media (HT ’24), September 10–13, 2024, Poznań, Poland. ACM, New York, NY, USA, 7 pages.

Link to Published Paper Download Paper Link to Code

Personalized Reality: Challenges of Responsible Ubiquitous Personalization

In

ABIS 2024 - 28th International Workshop on Personalization and Recommendation at Mensch und Computer 2024

Workshop

Date

September 01, 2024

Authors

Jannis Strecker, Simon Mayer, and Kenan Bektaş

Abstract

The expanding capabilities of Mixed Reality and Ubiquitous Computing technologies enable personalization to be increasingly integrated with physical reality in all areas of people's lives. While such ubiquitous personalization promises more inclusive, efficient, pleasurable, and safer everyday interaction, it may also entail serious societal consequences such as isolated perceptions of reality or a loss of control and agency. We present this paper to initiate a discussion towards the responsible creation of ubiquitous personalization experiences that mitigate these harmful implications while retaining the benefits of personalization. To this end, we present the concept of Personalized Reality (PR) to describe a perceived reality that has been adapted in response to personal user data. We provide avenues for future work, and list open questions and challenges towards the creation of responsible PR experiences.

Text Reference

Jannis Strecker, Simon Mayer, and Kenan Bektaş. 2024. Personalized Reality: Challenges of Responsible Ubiquitous Personalization. In Proceedings of Mensch und Computer 2024 – Workshopband, Gesellschaft für Informatik e.V. (MuC'24). 5 pages. https://doi.org/10.18420/muc2024-mci-ws11-200

BibTex Reference
@inproceedings{strecker2024a,
title = {{Personalized Reality: Challenges of Responsible Ubiquitous Personalization}},
booktitle = {Mensch Und {{Computer}} 2024 \textendash{} {{Workshopband}}},
author = {Strecker, Jannis and Mayer, Simon and Bekta{\c s}, Kenan},
year = 2024,
publisher = {{Gesellschaft f\"ur Informatik e.V.}},
doi = {10.18420/muc2024-mci-ws11-200},
langid = {english}
}

Link to Published Paper Download Paper

ABIS 2024 - International Workshop on Personalization and Recommendation

In

ABIS 2024 - 28th International Workshop on Personalization and Recommendation at Mensch und Computer 2024

Workshop

Date

September 01, 2024

Authors

Thomas Neumayr, Enes Yigitbas, Mirjam Augstein, Eelco Herder, Laura Stojko, and Jannis Strecker

Abstract

ABIS is an international workshop, organized by the SIG on Adaptivity and User Modeling in Interactive Software Systems of the German Gesellschaft für Informatik. For more than 25 years, the ABIS Workshop has been a highly interactive forum for discussing the state of the art in personalization, user modeling, and related areas. ABIS 2024’s focus will be on the topics of personalization and recommendation within the areas of Computer-Supported Cooperative Work (CSCW) (i.e., support of individuals who work organized in groups), Cross-Reality (XR) Interaction (e.g., transitions inside the reality-virtuality continuum), and/or making sense of sensory data for personalization purposes. To discuss such questions, our workshop aims to bring together researchers and practitioners who are interested in the general personalization domain, and/or in our SIG’s current focus. Our goal is to identify current issues and future directions of research and foster future development of the discipline and collaborations.

Text Reference

Thomas Neumayr, Enes Yigitbas, Mirjam Augstein, Eelco Herder, Laura Stojko, and Jannis Strecker. 2024. ABIS 2024 - International Workshop on Personalization and Recommendation. In Proceedings of Mensch und Computer 2024 – Workshopband, Gesellschaft für Informatik e.V. (MuC'24). 3 pages. https://doi.org/10.18420/muc2024-mci-ws11-107

Link to Published Paper Download Paper

Towards new realities: implications of personalized online layers in our daily lives

In

i-com - Journal of Interactive Media

Journal

Date

June 18, 2024

Authors

Eelco Herder, Laura Stojko, Jannis Strecker, Thomas Neumayr, Enes Yigitbas, and Mirjam Augstein

Abstract

We are currently in a period of upheaval, as many new technologies are emerging that open up new possibilities to shape our everyday lives. Particularly, within the field of Personalized Human-Computer Interaction we observe high potential, but also challenges. In this article, we explore how an increasing amount of online services and tools not only further facilitates our lives, but also shapes our lives and how we perceive our environments. For this purpose, we adopt the metaphor of personalized ‘online layers’ and show how these layers are and will be interwoven with the lives that we live in the ‘human layer’ of the real world.

Text Reference

Eelco Herder, Laura Stojko, Jannis Strecker, Thomas Neumayr, Enes Yigitbas, and Mirjam Augstein. 2024. Towards new realities: implications of personalized online layers in our daily lives. i-com (June 2024). https://doi.org/10.1515/icom-2024-0017

BibTex Reference
@article{herder2024,
title = {Towards New Realities: Implications of Personalized Online Layers in Our Daily Lives},
shorttitle = {Towards New Realities},
author = {Herder, Eelco and Stojko, Laura and Strecker, Jannis and Neumayr, Thomas and Yigitbas, Enes and Augstein, Mirjam},
year = {2024},
month = jun,
journal = {i-com},
publisher = {Oldenbourg Wissenschaftsverlag},
issn = {2196-6826},
doi = {10.1515/icom-2024-0017},
urldate = {2024-06-18},
abstract = {We are currently in a period of upheaval, as many new technologies are emerging that open up new possibilities to shape our everyday lives. Particularly, within the field of Personalized Human-Computer Interaction we observe high potential, but also challenges. In this article, we explore how an increasing amount of online services and tools not only further facilitates our lives, but also shapes our lives and how we perceive our environments. For this purpose, we adopt the metaphor of personalized `online layers' and show how these layers are and will be interwoven with the lives that we live in the `human layer' of the real world.},
langid = {english}
}

Link to Published Paper Download Paper

NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System

In

Proceedings of the ACM on Human-Computer Interaction (ETRA)

Journal

Date

May 28, 2024

Authors

Aleksandar Slavuljica, Kenan Bektaş, Jannis Strecker, and Simon Mayer

Abstract

Humans only recognize a few items in a scene at once and memorize three to seven items in the short term. Such limitations can be mitigated using cognitive offloading (e.g., sticky notes, digital reminders). We studied whether a gaze-enabled Augmented Reality (AR) system could facilitate cognitive offloading and improve object retrieval performance. To this end, we developed NeighboAR, which detects objects in a user's surroundings and generates a graph that stores object proximity relationships and user's gaze dwell times for each object. In a controlled experiment, we asked N=17 participants to inspect randomly distributed objects and later recall the position of a given target object. Our results show that displaying the target together with the proximity object with the longest user gaze dwell time helps recalling the position of the target. Specifically, NeighboAR significantly reduces the retrieval time by 33%, number of errors by 71%, and perceived workload by 10%.
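
The proximity-and-dwell-time structure described above can be sketched as a small graph from which the retrieval cue, i.e. the neighboring object the user gazed at longest, is selected. The object names, dwell times, and selection rule below are illustrative assumptions rather than the NeighboAR implementation.

# Nodes hold gaze dwell times (seconds); edges connect objects detected near each other.
dwell_time = {"mug": 1.2, "stapler": 4.7, "notebook": 0.8, "charger": 2.3}
proximity = {
    "mug": ["stapler", "notebook"],
    "stapler": ["mug", "charger"],
    "notebook": ["mug"],
    "charger": ["stapler"],
}

def retrieval_cue(target):
    # Return the neighboring object with the longest gaze dwell time,
    # which is displayed together with the target to aid recall.
    neighbors = proximity.get(target, [])
    return max(neighbors, key=lambda obj: dwell_time[obj]) if neighbors else None

print(retrieval_cue("mug"))  # -> "stapler"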

Text Reference

Aleksandar Slavuljica, Kenan Bektaş, Jannis Strecker, and Simon Mayer. 2024. NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System. Proc. ACM Hum.-Comput. Interact. 8, ETRA, Article 225 (May 2024), 19 pages. https://doi.org/10.1145/3655599

BibTex Reference
@article{slavuljica2024,
author = {Slavuljica, Aleksandar and Bekta\c{s}, Kenan and Strecker, Jannis and Mayer, Simon},
title = {NeighboAR: Efficient Object Retrieval using Proximity- and Gaze-based Object Grouping with an AR System},
year = {2024},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3655599},
doi = {10.1145/3655599},
abstract = {Humans only recognize a few items in a scene at once and memorize three to seven items in the short term. Such limitations can be mitigated using cognitive offloading (e.g., sticky notes, digital reminders). We studied whether a gaze-enabled Augmented Reality (AR) system could facilitate cognitive offloading and improve object retrieval performance. To this end, we developed NeighboAR, which detects objects in a user's surroundings and generates a graph that stores object proximity relationships and user's gaze dwell times for each object. In a controlled experiment, we asked N=17 participants to inspect randomly distributed objects and later recall the position of a given target object. Our results show that displaying the target together with the proximity object with the longest user gaze dwell time helps recalling the position of the target. Specifically, NeighboAR significantly reduces the retrieval time by 33\%, number of errors by 71\%, and perceived workload by 10\%.},
journal = {Proc. ACM Hum.-Comput. Interact.},
volume = {8},
number = {ETRA},
articleno = {225},
numpages = {19},
keywords = {augmented reality, cognitive offloading, eye tracking, object detection, human augmentation, mixed reality, working memory, visual search}
}

Link to Published Paper Download Paper Link to Code

ShoppingCoach: Using Diminished Reality to Prevent Unhealthy Food Choices in an Offline Supermarket Scenario

In

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)

Conference

Date

May 11, 2024

Authors

Jannis Strecker, Jing Wu, Kenan Bektaş, Conrad Vaslin, and Simon Mayer

Abstract

Non-communicable diseases, such as obesity and diabetes, have a significant global impact on health outcomes. While governments worldwide focus on promoting healthy eating, individuals still struggle to follow dietary recommendations. Augmented Reality (AR) might be a useful tool to emphasize specific food products at the point of purchase. However, AR may also add visual clutter to an already complex supermarket environment. Instead, reducing the visual prevalence of unhealthy food products through Diminished Reality (DR) could be a viable alternative: We present ShoppingCoach, a DR prototype that identifies supermarket food products and visually diminishes them dependent on the deviation of the target product’s composition from dietary recommendations. In a study with 12 participants, we found that ShoppingCoach increased compliance with dietary recommendations from 75% to 100% and reduced decision time by 41%. These results demonstrate the promising potential of DR in promoting healthier food choices and thus enhancing public health.
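
One plausible way to operationalize the "deviation of the target product's composition from dietary recommendations" and map it to the strength of a diminishing overlay is sketched below. The nutrient limits, scoring rule, and opacity mapping are hypothetical and are not taken from the ShoppingCoach prototype.

# Hypothetical recommended maxima per 100 g.
RECOMMENDED_MAX = {"sugar_g": 5.0, "salt_g": 0.3, "sat_fat_g": 1.5}

def deviation_score(product):
    # Mean relative exceedance over the recommended maxima (0 = compliant).
    ratios = [max(0.0, product[k] / RECOMMENDED_MAX[k] - 1.0) for k in RECOMMENDED_MAX]
    return sum(ratios) / len(ratios)

def diminish_opacity(product, max_opacity=0.8):
    # Stronger deviation -> more strongly diminished product rendering.
    return min(max_opacity, 0.4 * deviation_score(product))

chocolate_bar = {"sugar_g": 47.0, "salt_g": 0.2, "sat_fat_g": 18.0}
print(round(diminish_opacity(chocolate_bar), 2))  # -> 0.8 (capped)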

Text Reference

Jannis Strecker, Jing Wu, Kenan Bektaş, Conrad Vaslin, and Simon Mayer. 2024. ShoppingCoach: Using Diminished Reality to Prevent Unhealthy Food Choices in an Offline Supermarket Scenario. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3613905.3650795

BibTex Reference
@inproceedings{10.1145/3613905.3650795,
author = {Strecker, Jannis and Wu, Jing and Bekta\c{s}, Kenan and Vaslin, Conrad and Mayer, Simon},
title = {ShoppingCoach: Using Diminished Reality to Prevent Unhealthy Food Choices in an Offline Supermarket Scenario},
year = {2024},
isbn = {9798400703317},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3650795},
doi = {10.1145/3613905.3650795},
abstract = {Non-communicable diseases, such as obesity and diabetes, have a significant global impact on health outcomes. While governments worldwide focus on promoting healthy eating, individuals still struggle to follow dietary recommendations. Augmented Reality (AR) might be a useful tool to emphasize specific food products at the point of purchase. However, AR may also add visual clutter to an already complex supermarket environment. Instead, reducing the visual prevalence of unhealthy food products through Diminished Reality (DR) could be a viable alternative: We present ShoppingCoach, a DR prototype that identifies supermarket food products and visually diminishes them dependent on the deviation of the target product’s composition from dietary recommendations. In a study with 12 participants, we found that ShoppingCoach increased compliance with dietary recommendations from 75\% to 100\% and reduced decision time by 41\%. These results demonstrate the promising potential of DR in promoting healthier food choices and thus enhancing public health.},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
articleno = {288},
numpages = {8},
keywords = {diminished reality, extended reality, food choices, health informatics, nutrition and health},
location = {Honolulu, HI, USA},
series = {CHI EA '24}
}

Teaser Video

Link to Published Paper Download Paper

QR Code Integrity by Design

In

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)

Conference

Date

May 11, 2024

Authors

Luka Bekavac, Simon Mayer, and Jannis Strecker

Abstract

As QR codes become ubiquitous in various applications and places, their susceptibility to tampering, known as quishing, poses a significant threat to user security. In this paper we introduce SafeQR codes that address this challenge by introducing innovative design strategies to enhance QR code security. Leveraging visual elements and secure design principles, the project aims to make tampering more noticeable, thereby empowering users to recognize and avoid potential phishing threats. Further, we highlight the limitations of current user-education methods in combating quishing and propose different attacker models tailored to address quishing attacks. In addition, we introduce a multi-faceted defense strategy that merges design innovation with user vigilance. Through a user study, we demonstrate the efficacy of ’Integrity by Design’ QR codes. These innovatively designed QR codes significantly raise user suspicion in case of tampering and effectively reduce the likelihood of successful quishing attacks.

Text Reference

Luka Bekavac, Simon Mayer, and Jannis Strecker. 2024. QR Code Integrity by Design. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3613905.3651006

BibTex Reference
@inproceedings{10.1145/3613905.3651006,
author = {Bekavac, Luka Jure Lars and Mayer, Simon and Strecker, Jannis},
title = {QR Code Integrity by Design},
year = {2024},
isbn = {9798400703317},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3651006},
doi = {10.1145/3613905.3651006},
abstract = {As QR codes become ubiquitous in various applications and places, their susceptibility to tampering, known as quishing, poses a significant threat to user security. In this paper we introduce SafeQR codes that address this challenge by introducing innovative design strategies to enhance QR code security. Leveraging visual elements and secure design principles, the project aims to make tampering more noticeable, thereby empowering users to recognize and avoid potential phishing threats. Further, we highlight the limitations of current user-education methods in combating quishing and propose different attacker models tailored to address quishing attacks. In addition, we introduce a multi-faceted defense strategy that merges design innovation with user vigilance. Through a user study, we demonstrate the efficacy of ’Integrity by Design’ QR codes. These innovatively designed QR codes significantly raise user suspicion in case of tampering and effectively reduce the likelihood of successful quishing attacks.},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
articleno = {274},
numpages = {9},
keywords = {QR code based phishing, QR codes, phishing susceptibility, privacy, quishing},
location = {Honolulu, HI, USA},
series = {CHI EA '24}
}

Teaser Video

Link to Published Paper Download Paper

GlassBoARd: A Gaze-Enabled AR Interface for Collaborative Work

In

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)

Conference

Date

May 11, 2024

Authors

Kenan Bektaş, Adrian Pandjaitan, Jannis Strecker, and Simon Mayer

Abstract

Recent research on remote collaboration focuses on improving the sense of co-presence and mutual understanding among the collaborators, whereas there is limited research on using non-verbal cues such as gaze or head direction alongside their main communication channel. Our system – GlassBoARd – permits collaborators to see each other’s gaze behavior and even make eye contact while communicating verbally and in writing. GlassBoARd features a transparent shared Augmented Reality interface that is situated in-between two users, allowing face-to-face collaboration. From the perspective of each user, the remote collaborator is represented as an avatar that is located behind the GlassBoARd and whose eye movements are contingent on the remote collaborator’s instant eye movements. In three iterations, we improved the design of GlassBoARd and tested it with two use cases. Our preliminary evaluations showed that GlassBoARd facilitates an environment for conducting future user experiments to study the effect of sharing eye gaze on the communication bandwidth.

Text Reference

Kenan Bektaş, Adrian Pandjaitan, Jannis Strecker, and Simon Mayer. 2024. GlassBoARd: A Gaze-Enabled AR Interface for Collaborative Work. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3613905.3650965

BibTex Reference
@inproceedings{10.1145/3613905.3650965,
author = {Bekta\c{s}, Kenan and Pandjaitan, Adrian and Strecker, Jannis and Mayer, Simon},
title = {GlassBoARd: A Gaze-Enabled AR Interface for Collaborative Work},
year = {2024},
isbn = {9798400703317},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3650965},
doi = {10.1145/3613905.3650965},
abstract = {Recent research on remote collaboration focuses on improving the sense of co-presence and mutual understanding among the collaborators, whereas there is limited research on using non-verbal cues such as gaze or head direction alongside their main communication channel. Our system – GlassBoARd – permits collaborators to see each other’s gaze behavior and even make eye contact while communicating verbally and in writing. GlassBoARd features a transparent shared Augmented Reality interface that is situated in-between two users, allowing face-to-face collaboration. From the perspective of each user, the remote collaborator is represented as an avatar that is located behind the GlassBoARd and whose eye movements are contingent on the remote collaborator’s instant eye movements. In three iterations, we improved the design of GlassBoARd and tested it with two use cases. Our preliminary evaluations showed that GlassBoARd facilitates an environment for conducting future user experiments to study the effect of sharing eye gaze on the communication bandwidth.},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
articleno = {181},
numpages = {8},
keywords = {CSCW, augmented reality, eye tracking, gaze, non-verbal cues, presence, remote collaboration},
location = {Honolulu, HI, USA},
series = {CHI EA '24}
}

Teaser Video

Link to Published Paper Download Paper Link to Code

Gaze-based Opportunistic Privacy-preserving Human-Agent Collaboration

In

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)

Conference

Date

May 11, 2024

Authors

Jan Grau, Simon Mayer, Jannis Strecker, Kimberly Garcia, and Kenan Bektaş

Abstract

This paper introduces a novel system to enhance the spatiotemporal alignment of human abilities in agent-based workflows. This optimization is realized through the application of Linked Data and Semantic Web technologies and the system makes use of gaze data and contextual information. The showcased prototype demonstrates the feasibility of implementing such a system, where we specifically emphasize the system’s ability to constrain the dissemination of privacy-relevant information.

Text Reference

Jan Grau, Simon Mayer, Jannis Strecker, Kimberly Garcia, and Kenan Bektaş. 2024. Gaze-based Opportunistic Privacy-preserving Human-Agent Collaboration. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3613905.3651066

BibTex Reference
@inproceedings{10.1145/3613905.3651066,
author = {Grau, Jan and Mayer, Simon and Strecker, Jannis and Garcia, Kimberly and Bektas, Kenan},
title = {Gaze-based Opportunistic Privacy-preserving Human-Agent Collaboration},
year = {2024},
isbn = {9798400703317},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3651066},
doi = {10.1145/3613905.3651066},
abstract = {This paper introduces a novel system to enhance the spatiotemporal alignment of human abilities in agent-based workflows. This optimization is realized through the application of Linked Data and Semantic Web technologies and the system makes use of gaze data and contextual information. The showcased prototype demonstrates the feasibility of implementing such a system, where we specifically emphasize the system’s ability to constrain the dissemination of privacy-relevant information.},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
articleno = {176},
numpages = {6},
keywords = {Human-Agent-Collaboration, Koreografeye, Privacy-Preserving, Solid},
location = {Honolulu, HI, USA},
series = {CHI EA '24}
}

Teaser Video

Link to Published Paper Download Paper

AuctentionAR - Auctioning Off Visual Attention in Mixed Reality

In

Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24)

Conference

Date

May 11, 2024

Authors

Adrian Pandjaitan, Jannis Strecker, Kenan Bektaş, and Simon Mayer

Abstract

Mixed Reality technologies are increasingly interwoven with our everyday lives. A variety of powerful Head Mounted Displays have recently entered consumer electronics markets, and more are under development, opening new dimensions for spatial computing. This development will likely not stop at the advertising industry either, as first forays into this area have already been made. We present AuctentionAR which allows users to sell off their visual attention to interested parties. It consists of a HoloLens 2, a remote server executing the auctioning logic, the YOLOv7 model for image recognition of products which may induce an advertising intent, and several bidders interested in advertising their products. As this system comes with substantial privacy implications, we discuss what needs to be considered in future implementation so as to make this system a basis for a privacy preserving MR advertising future.
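
The abstract does not specify the auction mechanism used by the remote server; as a hedged illustration, the sketch below allocates a single attention slot with a sealed-bid second-price (Vickrey) auction, a common choice for advertising slots. The bidder names and bid values are hypothetical.

def second_price_auction(bids):
    # Highest bidder wins the slot but pays the second-highest bid.
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

# Hypothetical bids submitted after YOLOv7 detects a product of interest.
bids = {"brand_a": 0.12, "brand_b": 0.09, "brand_c": 0.15}
print(second_price_auction(bids))  # -> ('brand_c', 0.12)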

Text Reference

Adrian Pandjaitan, Jannis Strecker, Kenan Bektaş, and Simon Mayer. 2024. AuctentionAR - Auctioning Off Visual Attention in Mixed Reality. In Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA ’24), May 11–16, 2024, Honolulu, HI, USA. ACM, New York, NY, USA, 6 pages. https://doi.org/10.1145/3613905.3650941

BibTex Reference
@inproceedings{pandjaitan2024,
author = {Pandjaitan, Adrian and Strecker, Jannis and Bekta{\c{s}}, Kenan and Mayer, Simon},
title = {{AuctentionAR} - {Auctioning} Off Visual Attention in Mixed Reality},
year = {2024},
isbn = {9798400703317},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3613905.3650941},
doi = {10.1145/3613905.3650941},
booktitle = {Extended Abstracts of the 2024 CHI Conference on Human Factors in Computing Systems},
location = {Honolulu, HI, USA},
series = {CHI EA '24}
}

Teaser Video

Link to Published Paper Download Paper

Gaze-enabled activity recognition for augmented reality feedback

In

Computers and Graphics

Journal

Date

March 16, 2024

Authors

Kenan Bektaş, Jannis Strecker, Simon Mayer, and Kimberly Garcia

Abstract

Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they provide insights into user attention, intentions, and activities, and allow novel interaction methods based on this information. However, in physical environments, the implications of using gaze-enabled AR for human activity recognition have not been explored in detail. In an experimental study with the Microsoft HoloLens 2, we collected gaze data from 20 users while they performed three activities: Reading a text, Inspecting a device, and Searching for an object. We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved up to 89.6% activity-recognition accuracy. Based on the recognized activity, our system—GEAR—then provides users with relevant AR feedback. Due to the sensitivity of the personal (gaze) data GEAR collects, the system further incorporates a novel solution based on the Solid specification for giving users fine-grained control over the sharing of their data. The provided code and anonymized datasets may be used to reproduce and extend our findings, and as teaching material.
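
The recognition pipeline described above (gaze features fed to SVM, Random Forest, or Extremely Randomized Trees classifiers) can be sketched with scikit-learn as follows. The feature matrix is random placeholder data rather than the published GEAR dataset, so the printed accuracy is at chance level; on real gaze features the paper reports up to 89.6%.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

# Placeholder gaze features (e.g., fixation durations, saccade amplitudes, dispersion);
# GEAR extracts such features from HoloLens 2 eye-tracking data.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))    # 200 windows x 12 gaze features
y = rng.integers(0, 3, size=200)  # 3 activities: read, inspect, search

clf = ExtraTreesClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())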

Text Reference

Kenan Bektaş, Jannis Strecker, Simon Mayer, and Kimberly Garcia. 2024. Gaze-enabled activity recognition for augmented reality feedback. Computers & Graphics (March 2024), 103909. https://doi.org/10.1016/j.cag.2024.103909

Link to Published Paper Download Paper Link to Code

2023

MR Object Identification and Interaction: Fusing Object Situation Information from Heterogeneous Sources

In

Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies

Journal

Date

September 27, 2023

Authors

Jannis Strecker, Khakim Akhunov, Federico Carbone, Kimberly García, Kenan Bektaş, Andres Gomez, Simon Mayer, and Kasim Sinan Yildirim

Abstract

The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection -- including through visual object detection, fiducial markers, relative localization, or absolute spatial referencing -- are available, each of these suffers from drawbacks that limit their applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular to allow extending and upgrading systems that are constructed accordingly. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle of arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array that is mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both, the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can form a starting point to spawn the integration of diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.
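
A minimal sketch of the fusion step under simplifying assumptions: each visually detected bounding box is converted to an azimuth in the camera frame and matched to the BLE tag whose estimated angle of arrival is closest. The field of view, detections, and AoA values below are hypothetical and do not come from the BLEARVIS code.

def bbox_azimuth(center_x, image_width, horizontal_fov_deg=64.7):
    # Approximate horizontal angle of a detection relative to the camera axis.
    return ((center_x / image_width) - 0.5) * horizontal_fov_deg

def match_detections_to_tags(detections, tag_aoa, max_diff_deg=10.0):
    # Pair each CV detection with the BLE tag whose AoA is angularly closest.
    matches = {}
    for name, center_x, image_width in detections:
        angle = bbox_azimuth(center_x, image_width)
        tag, aoa = min(tag_aoa.items(), key=lambda kv: abs(kv[1] - angle))
        if abs(aoa - angle) <= max_diff_deg:
            matches[name] = tag
    return matches

# Two visually identical devices, disambiguated by their BLE tags' AoA estimates.
detections = [("device_left", 320, 1920), ("device_right", 1600, 1920)]
tag_aoa = {"ble_tag_A": -22.0, "ble_tag_B": 21.0}
print(match_detections_to_tags(detections, tag_aoa))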

Text Reference

Jannis Strecker, Khakim Akhunov, Federico Carbone, Kimberly García, Kenan Bektaş, Andres Gomez, Simon Mayer, and Kasim Sinan Yildirim. 2023. MR Object Identification and Interaction: Fusing Object Situation Information from Heterogeneous Sources. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 7, 3, Article 124 (September 2023), 26 pages. https://doi.org/10.1145/3610879

BibTex Reference
@article{10.1145/3610879,
author = {Strecker, Jannis and Akhunov, Khakim and Carbone, Federico and Garc\'{\i}a, Kimberly and Bekta\c{s}, Kenan and Gomez, Andres and Mayer, Simon and Yildirim, Kasim Sinan},
title = {MR Object Identification and Interaction: Fusing Object Situation Information from Heterogeneous Sources},
year = {2023},
issue_date = {September 2023},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
volume = {7},
number = {3},
url = {https://doi.org/10.1145/3610879},
doi = {10.1145/3610879},
abstract = {The increasing number of objects in ubiquitous computing environments creates a need for effective object detection and identification mechanisms that permit users to intuitively initiate interactions with these objects. While multiple approaches to such object detection -- including through visual object detection, fiducial markers, relative localization, or absolute spatial referencing -- are available, each of these suffers from drawbacks that limit their applicability. In this paper, we propose ODIF, an architecture that permits the fusion of object situation information from such heterogeneous sources and that remains vertically and horizontally modular to allow extending and upgrading systems that are constructed accordingly. We furthermore present BLEARVIS, a prototype system that builds on the proposed architecture and integrates computer-vision (CV) based object detection with radio-frequency (RF) angle of arrival (AoA) estimation to identify BLE-tagged objects. In our system, the front camera of a Mixed Reality (MR) head-mounted display (HMD) provides a live image stream to a vision-based object detection module, while an antenna array that is mounted on the HMD collects AoA information from ambient devices. In this way, BLEARVIS is able to differentiate between visually identical objects in the same environment and can provide an MR overlay of information (data and controls) that relates to them. We include experimental evaluations of both, the CV-based object detection and the RF-based AoA estimation, and discuss the applicability of the combined RF and CV pipelines in different ubiquitous computing scenarios. This research can form a starting point to spawn the integration of diverse object detection, identification, and interaction approaches that function across the electromagnetic spectrum, and beyond.},
journal = {Proc. ACM Interact. Mob. Wearable Ubiquitous Technol.},
month = {sep},
articleno = {124},
numpages = {26},
keywords = {computer vision, detection, identification, mixed reality}
}

Short Demo Video

Link to Published Paper Download Paper Link to Code

Sharing Personalized Mixed Reality Experiences

In

ABIS 2023 - 27th International Workshop on Personalization and Recommendation at Mensch und Computer 2023

Workshop

Date

September 03, 2023

Authors

Jannis Strecker, Simon Mayer, and Kenan Bektas

Abstract

Nowadays, people encounter personalized services predominantly on the Web using personal computers or mobile devices. The increasing capabilities and pervasiveness of Mixed Reality (MR) devices, however, prepare the ground for personalization possibilities that are increasingly interwoven with our physical reality, extending beyond these traditional devices. Such ubiquitous, personalized MR experiences bring the potential to make our lives and interactions with our environments more convenient, intuitive, and safer. However, these experiences will also be prone to amplify the known beneficial and, notably, harmful implications of personalization. For instance, the loss of shared world objects or the nourishing of "real-world filter bubbles" might have serious social and societal consequences as they could lead to increasingly isolated experienced realities. In this work, we envision different modes for the sharing of personalized MR environments to counteract these potential harms of ubiquitous personalization. We furthermore illustrate the different modes with use cases and list open questions towards this vision.

Text Reference

Jannis Strecker, Simon Mayer, and Kenan Bektas. 2023. Sharing Personalized Mixed Reality Experiences. In P. Fröhlich and V. Cobus (Eds.): Mensch und Computer 2023 – Workshopband, September 3–6, 2023, Rapperswil (SG), Switzerland. https://doi.org/10.18420/muc2023-mci-ws12-263

BibTex Reference
@inproceedings{strecker2023a,
title = {Sharing {{Personalized Mixed Reality Experiences}}},
booktitle = {Mensch Und {{Computer}} 2023 \textendash{} {{Workshopband}}},
author = {Strecker, Jannis and Mayer, Simon and Bekta{\c s}, Kenan},
year = 2023,
publisher = {{Gesellschaft f\"ur Informatik e.V.}},
doi = {10.18420/muc2023-mci-ws12-263},
langid = {english}
}

Link to Published Paper Download Paper

GEAR: Gaze-enabled augmented reality for human activity recognition

In

2023 Symposium on Eye Tracking Research and Applications (ETRA ’23)

Conference

Date

May 30, 2023

Authors

Kenan Bektaş, Jannis Strecker, Simon Mayer, Kimberly Garcia, Jonas Hermann, Kay Erik Jenss, Yasmine Sheila Antille, and Marc Elias Solèr

Abstract

Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved an up to 98.7% activity-recognition accuracy. On the HL2, we provided users with an AR feedback that is relevant to their current activity. We present the components of our system (GEAR) including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets which can be used as teaching material in graduate courses or for reproducing our findings.

Text Reference

Kenan Bektaş, Jannis Strecker, Simon Mayer, Kimberly Garcia, Jonas Hermann, Kay Erik Jenss, Yasmine Sheila Antille, and Marc Elias Solèr. 2023. GEAR: Gaze-enabled augmented reality for human activity recognition. In 2023 Symposium on Eye Tracking Research and Applications (ETRA ’23), May 30–June 02, 2023, Tubingen, Germany. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3588015.3588402

BibTex Reference
@inproceedings{10.1145/3588015.3588402,
author = {Bekta\c{s}, Kenan and Strecker, Jannis and Mayer, Simon and Garcia, Kimberly and Hermann, Jonas and Jen\ss{}, Kay Erik and Antille, Yasmine Sheila and Sol\`{e}r, Marc},
title = {GEAR: Gaze-enabled augmented reality for human activity recognition},
year = {2023},
isbn = {9798400701504},
publisher = {Association for Computing Machinery},
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3588015.3588402},
doi = {10.1145/3588015.3588402},
abstract = {Head-mounted Augmented Reality (AR) displays overlay digital information on physical objects. Through eye tracking, they allow novel interaction methods and provide insights into user attention, intentions, and activities. However, only few studies have used gaze-enabled AR displays for human activity recognition (HAR). In an experimental study, we collected gaze data from 10 users on a HoloLens 2 (HL2) while they performed three activities (i.e., read, inspect, search). We trained machine learning models (SVM, Random Forest, Extremely Randomized Trees) with extracted features and achieved an up to 98.7\% activity-recognition accuracy. On the HL2, we provided users with an AR feedback that is relevant to their current activity. We present the components of our system (GEAR) including a novel solution to enable the controlled sharing of collected data. We provide the scripts and anonymized datasets which can be used as teaching material in graduate courses or for reproducing our findings.},
booktitle = {Proceedings of the 2023 Symposium on Eye Tracking Research and Applications},
articleno = {9},
numpages = {9},
keywords = {attention, augmented reality, context-awareness, human activity recognition, pervasive eye tracking},
location = {Tubingen, Germany},
series = {ETRA '23}
}

Demo Video

Link to Published Paper Download Paper Link to Code

2022

SOCRAR: Semantic OCR through Augmented Reality

In

12th International Conference on the Internet of Things (IoT22)

Conference

Date

October 01, 2022

Authors

Jannis Strecker, Kimberly García, Kenan Bektaş, Simon Mayer, and Ganesh Ramanathan

Abstract

To enable people to interact more efficiently with virtual and physical services in their surroundings, it would be beneficial if information could more fluently be passed across digital and non-digital spaces. To this end, we propose to combine semantic technologies with Optical Character Recognition on an Augmented Reality (AR) interface to enable the semantic integration of (written) information located in our everyday environments with Internet of Things devices. We hence present SOCRAR, a system that is able to detect written information from a user’s physical environment while contextualizing this data through a semantic backend. The SOCRAR system enables in-band semantic translation on an AR interface, permits semantic filtering and selection of appropriate device interfaces, and provides cognitive offloading by enabling users to store information for later use. We demonstrate the feasibility of SOCRAR through the implementation of three concrete scenarios.
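
As an illustrative sketch rather than the SOCRAR implementation, text recognized by OCR could be contextualized by looking it up in a small RDF knowledge graph (here via rdflib) that links written labels to Web of Things endpoints. The ontology terms, device data, and matching rule are hypothetical.

from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
TURTLE = """
@prefix ex: <http://example.org/> .
ex:coffeeMachine ex:hasLabel "ECAM 23.460" ;
    ex:hasControlEndpoint <http://example.org/td/coffee-machine> .
ex:roomThermostat ex:hasLabel "Thermostat 23" ;
    ex:hasControlEndpoint <http://example.org/td/thermostat-23> .
"""

g = Graph()
g.parse(data=TURTLE, format="turtle")

def endpoints_for_ocr_text(ocr_text):
    # Return control endpoints of devices whose label occurs in the OCR result.
    hits = []
    for device, _, label in g.triples((None, EX.hasLabel, None)):
        if str(label) in ocr_text:
            hits.extend(str(ep) for ep in g.objects(device, EX.hasControlEndpoint))
    return hits

print(endpoints_for_ocr_text("De'Longhi ECAM 23.460 descaling required"))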

Text Reference

Jannis Strecker, Kimberly García, Kenan Bektaş, Simon Mayer, and Ganesh Ramanathan. 2022. SOCRAR: Semantic OCR through Augmented Reality. In Proceedings of the 12th International Conference on the Internet of Things (IoT ’22), November 7–10, 2022, Delft, Netherlands. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3567445.3567453

BibTex Reference
@inproceedings{10.1145/3567445.3567453,
author = {Strecker, Jannis and Garc\'{\i}a, Kimberly and Bekta\c{s}, Kenan and Mayer, Simon and Ramanathan, Ganesh},
title = {SOCRAR: Semantic OCR through Augmented Reality}, 
year = {2023}, 
isbn = {9781450396653}, 
publisher = {Association for Computing Machinery}, 
address = {New York, NY, USA},
url = {https://doi.org/10.1145/3567445.3567453},
doi = {10.1145/3567445.3567453},
abstract = {To enable people to interact more efficiently with virtual and physical services in their surroundings, it would be beneficial if information could more fluently be passed across digital and non-digital spaces. To this end, we propose to combine semantic technologies with Optical Character Recognition on an Augmented Reality (AR) interface to enable the semantic integration of (written) information located in our everyday environments with Internet of Things devices. We hence present SOCRAR, a system that is able to detect written information from a user’s physical environment while contextualizing this data through a semantic backend. The SOCRAR system enables in-band semantic translation on an AR interface, permits semantic filtering and selection of appropriate device interfaces, and provides cognitive offloading by enabling users to store information for later use. We demonstrate the feasibility of SOCRAR through the implementation of three concrete scenarios.},
booktitle = {Proceedings of the 12th International Conference on the Internet of Things},
pages = {25–32},
numpages = {8},
keywords = {Augmented Reality, Knowledge Graph, Optical Character Recognition, Ubiquitous Computing, Web of Things},
location = {Delft, Netherlands},
series = {IoT '22}
}

Teaser Video

Link to Published Paper Download Paper

EToS-1: Eye Tracking on Shopfloors for User Engagement with Automation

In

Workshop on Engaging with Automation co-located with CHI22

Workshop

Date

April 27, 2022

Authors

Kenan Bektas, Jannis Strecker, Simon Mayer, and Markus Stolze

Abstract

Mixed Reality (MR) is becoming an integral part of many context-aware industrial applications. In maintenance and remote support operations, the individual steps of computer-supported (cooperative) work can be defined and presented to human operators through MR headsets. Tracking of eye movements can provide valuable insights into a user’s decision-making and interaction processes. Thus, our overarching goal is to better understand the visual inspection behavior of machine operators on shopfloors and to find ways to provide them with attention-aware and context-aware assistance through MR headsets that increasingly come with eye tracking (ET) as a default feature. Toward this goal, in two industrial scenarios, we used two mobile eye tracking devices and systematically compared the visual inspection behavior of novice and expert operators. In this paper we present our preliminary findings and lessons learned.

Text Reference

Kenan Bektas, Jannis Strecker, Simon Mayer, and Markus Stolze. 2022. EToS-1: Eye Tracking on Shopfloors for User Engagement with Automation. In Proceedings of the Workshop on Engaging with Automation co-located with the ACM Conference on Human Factors in Computing Systems (CHI 2022), April 30, 2022, New Orleans, LA, USA. https://www.alexandria.unisg.ch/266339

Link to Published Paper Download Paper