Social Acceptability in HCI

A Survey of Methods, Measures, and Design Strategies

With the increasing ubiquity of personal, embodied and embedded devices, social acceptability of human-machine interactions has gained relevance and growing interest from the HCI community. For our CHI 2020 paper, we analyzed research practices around social acceptability in Human-Computer Interaction (HCI). Our analysis of methods, measures and design strategies considered works between 2000 and 2018. It lays the groundwork for a more nuanced evaluation of social acceptability, the development of best practices, and a future research agenda. The annotated data set is available on GitHub.

  • Marion Koelle, Swamy Ananthanarayan, and Susanne Boll.
  • Social Acceptability in HCI: A Survey of Methods, Measures, and Design Strategies.
  • In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, ACM, 2020.
  • With the increasing ubiquity of personal devices, social acceptability of human-machine interactions has gained relevance and growing interest from the HCI community. Yet, there are no best practices or established methods for evaluating social acceptability. Design strategies for increasing social acceptability have been described and employed, but so far not been holistically appraised and evaluated. We offer a systematic literature analysis (N=69) of social acceptability in HCI and contribute a better understanding of current research practices, namely, methods employed, measures and design strategies. Our review identified an unbalanced distribution of study approaches, shortcomings in employed measures, and a lack of interweaving between empirical and artifact-creating approaches. The latter causes a discrepancy between design recommendations based on user research, and design strategies employed in artifact creation. Our survey lays the groundwork for a more nuanced evaluation of social acceptability, the development of best practices, and a future research agenda.

  • DOI
  •     PDF
  •    Data Set

Below, we provide a continuously updated literature collection around social acceptability in HCI, comprising both the works included in the survey paper and works from 2019 onwards. The goal of this collection is to serve as a related-work directory and to provide newcomers to the topic with a starting point. Know of a paper that is still missing? Please feel free to reach out!

Literature Collection

Related work directory and starting point. As creators of this webpage, we do not hold the rights for these works. Please contact the authors directly.

2021

  • 2021
  • Pia von Terzi, Stefan Tretter, Alarith Uhde, Marc Hassenzahl, and Sarah Diefenbach
  • Technology-Mediated Experiences and Social Context: Relevant Needs in Private Vs. Public Interaction and the Importance of Others for Positive Affect
  • In: Frontiers in Psychology, 1 September 2021.
  • Technologies, such as smartphones or wearables, take a central role in our daily lives. Making their use meaningful and enjoyable requires a better understanding of the prerequisites and underpinnings of positive experiences with such technologies. So far, a focus had been on the users themselves, that is, their individual goals, desires, feelings, and acceptance. However, technology is often used in a social context, observed by others or even used in interaction with others, and thus shapes social dynamics considerably. In the present paper, we start from the notion that meaningful and/or enjoyable experiences (i.e., wellbeing) are a major outcome of technology use. We investigate how these experiences are further shaped by social context, such as potential spectators. More specifically, we gathered private (while being alone) and public (while other people are present) positive experiences with technology and compared need fulfillment and affective experience. In addition, we asked participants to imagine a change in context (from private to public or public to private) and to report the impact of this change on experience. Results support the idea of particular social needs, such as relatedness and popularity, which are especially relevant and better fulfilled in public than in private contexts. Moreover, our findings show that participants experience less positive affect when imaginatively removing the present others from a formerly public interaction, i.e., when they imagine performing the same interaction but without the other people present. Overall, this underlines the importance of social context for Human-Computer Interaction practice and research. Practical implications relate to product development, e.g., designing interactive technologies that can adapt to context (changes) or allow for context-sensitive interaction sets. We discuss limitations related to the experimental exploration of social context, such as the method of data collection, as well as potential alternatives to address those limitations, such as diary studies.

  • DOI
  • 2021
  • Laxmi Pandey, Khalad Hasan, and Ahmed Sabbir Arif.
  • Acceptability of Speech and Silent Speech Input Methods in Private and Public.
  • In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI'21).
  • Silent speech input converts non-acoustic features like tongue and lip movements into text. It has been demonstrated as a promising input method on mobile devices and has been explored for a variety of audiences and contexts where the acoustic signal is unavailable (e.g., people with speech disorders) or unreliable (e.g., noisy environment). Though the method shows promise, very little is known about peoples’ perceptions regarding using it. In this work, first, we conduct two user studies to explore users’ attitudes towards the method with a particular focus on social acceptance and error tolerance. Results show that people perceive silent speech as more socially acceptable than speech input and are willing to tolerate more errors with it to uphold privacy and security. We then conduct a third study to identify a suitable method for providing real-time feedback on silent speech input. Results show users find an abstract feedback method effective and significantly more private and secure than a commonly used video feedback method.

  • DOI
  • 2021
  • Jarrod Knibbe, Rachel Freire, Marion Koelle, and Paul Strohmeier.
  • Skill-Sleeves: Designing Electrode Garments for Wearability.
  • In: Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '21).
  • Many existing explorations of wearables for HCI consider functionality first and wearability second. Typically, as the technologies, designs, and experiential understandings develop, attention can shift towards questions of deployment and wearability. To support this shift of focus we present a case study of the iterative design of electrode sleeves. We consider the design motivations and background that led to the existing, prototype EMS sleeves, and the resultant challenges around their wearability. Through our own design research practice, we seek to reveal design criteria towards the wearability of such a sleeve, and provide designs that optimise for those criteria. We contribute (1) new electrode sleeve designs, which begin to make it practicable to take EMS beyond the lab, (2) new fabrication processes that support rapid production and personalisation, and (3) reflections on criteria for wearability across new eTextile garments.

  • DOI
  •     PDF
  •     Page
  • 2021
  • Alarith Uhde and Marc Hassenzahl.
  • Towards a Better Understanding of Social Acceptability.
  • In: Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems (CHI'21).
  • Social contexts play an important role in understanding acceptance and use of technology. However, current approaches used in HCI to describe contextual influence do not capture it appropriately. On the one hand, the often used Technology Acceptance Model and related frameworks are too rigid to account for the nuanced variations of social situations. On the other hand, Goffman’s dramaturgical model of social interactions emphasizes interpersonal relations but mostly overlooks the material (e.g., technology) that is central to HCI. As an alternative, we suggest an approach based on Social Practice Theory. We conceptualize social context as interactions between co-located social practices and acceptability as a matter of their (in)compatibilities. Finally, we outline how this approach provides designers with a better understanding of different types of social acceptability problems and helps finding appropriate solutions.

  • DOI
  •     PDF
  • 2021
  • Arathi Sethumadhavan, Josh Lovejoy, and David Mondello.
  • A Framework for Evaluating Social Acceptability of Spatial Computing Devices.
  • In: interactions 28, 3 (May - June 2021).
  • Spatial computing devices are designed to integrate digital information more directly within a wearer's cognitive processes than traditional computing devices. They do this via techniques such as holographic projection, immersive spatial audio, haptic feedback, artificial intelligent assistants that are audible only to the wearer, and direct manipulation of synthetic content. While such information can enhance the wearer's environmental or contextual awareness, the use of these devices in social interactions can also contribute to a reduction in social presence for both device wearers and bystanders.

  • DOI
  •     PDF
  • 2021
  • Christian Nordstrøm Rasmussen, Minna Pakanen, and Marianne Graves Petersen.
  • Designing Socially Acceptable Light Therapy Glasses for Self-managing Seasonal Affective Disorder.
  • In: Proceedings of the Augmented Humans Conference 2021 (AHs'21).
  • This paper presents an autobiographical design process aiming towards an aesthetically pleasing light-therapy device that can be worn on the body and integrated into the everyday life of the wearer. The design (fig. 1) is meant for easing seasonal affective disorder symptoms through longer and less bright light therapy sessions (e.g., 2 hours with max 2500 Lux brightness) than with a typical light box. Glasses were chosen as a form factor as they allow the user to accomplish other tasks, such as eating breakfast, commuting, or working, while undergoing light therapy. The resulting design is moderately small and easy to integrate into everyday life through a classical eyeglasses look and adjustable lighting output. The first author's initial experiences in real-life use, however, indicate that the design is not completely socially acceptable, as the light output puts him too much on display.

  • DOI
  •     Page
  • 2021
  • Jan Ole Rixen, Teresa Hirzle, Mark Colley, Yannick Etzel, Enrico Rukzio, and Jan Gugenheimer.
  • Exploring Augmented Visual Alterations in Interpersonal Communication.
  • In: Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems (CHI'21).
  • Augmented Reality (AR) glasses equip users with the tools to modify the visual appearance of their surrounding environment. This might severely impact interpersonal communication, as the conversational partners will no longer share the same visual perception of reality. Grounded in color-in-context theory, we present a potential AR application scenario in which users can modify the color of the environment to achieve subconscious benefits. In a consecutive online survey (N=64), we measured the user’s comfort, acceptance of altering and being altered, and how it is impacted by being able to perceive or not perceive the alteration. We identified significant differences depending on (1) who or what is the target of the alteration, (2) which body part is altered, and (3) which relationship the conversational partners share. In light of our quantitative and qualitative findings, we discuss ethical and practical implications for future devices and applications that employ visual alterations.

  • DOI
  •     PDF
  •     Video
  • 2021
  • Chloe Eghtebas, Francisco Kiss, Marion Koelle, and Paweł Woźniak.
  • Advantage and Misuse of Vision Augmentation – Exploring User Perceptions and Attitudes using a Zoom Prototype.
  • In: Proceedings of the Augmented Humans Conference 2021 (AHs'21).
  • Consequences that deter adoption, such as asymmetrical encounters between wearers and bystanders, need to be explored in order to make Ubiquitous Augmented Reality (UAR) acceptable. In our work we outline how social perception is based on a Head-Mounted Display's (HMD) capability, appearance, and the role of the wearer. We fixed the device capability to zooming in AR and explored the privacy implications in 12 interviews through a prototype with the mocked ability to "super-humanly" zoom in on targets. Next, we used the resulting themes to survey 100 participants to explore augmented zoom more deeply, while permuting the device appearance across three form factors (contact lenses, glasses, and helmet) and the role of the wearer based on level of involvement in an abstracted scenario transpiring in a public space. Our results showed that explicit visibility of an AR system provides social translucence, as it is rated least likely to cause misuse but also perceived as least likely to confer an advantage.

  • DOI
  • 2021
  • Jacob Logas, Georgianna Lin, Kelsie Belan, Advait Gogate, and Thad Starner.
  • Conversational Partner’s Perception of Subtle Display Use for Monitoring Notifications.
  • In: Proceedings of the Augmented Humans Conference 2021 (AHs'21).
  • We examine whether the gaze direction of a user reveals the use of a subtle display during a face-to-face conversation with a partner who is not initially aware of the display. We measure twelve participants’ perceptions of a casual conversational partner’s engagement between a control condition of no notification and notifications displayed behind the participant’s head at 0, 10, and 20 degrees to the right of the conversational partner’s line of sight. No differences in reported conversational engagement were found. However, once the presence of the display was revealed, engagement scores went down over all conditions compared to the prior uninformed variant of the experiment. Still, no difference was found between the control and the subtle display conditions, and informed participants were only 40% accurate on average in detecting the use of the display. In a second study comparing subtle display user engagement with smartwatch user engagement, six participants rated a conversational partner more distracted when the partner used a smartwatch to monitor notifications than when the partner used a display secretly mounted behind the participant’s head. Participants in both studies did not realize the presence of the display until it was revealed. These results suggest that eye movement when using a subtle display detracts less from the conversational experience than the use of a smartwatch.

  • DOI

2020

  • 2020
  • Alex Olwal, Kevin Balke, Dmitrii Votintcev, Thad Starner, Paula Conn, Bonnie Chinh, and Benoit Corda.
  • Wearable Subtitles: Augmenting Spoken Communication with Lightweight Eyewear for All-day Captioning.
  • In: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology.
  • Mobile solutions can help transform speech and sound into visual representations for people who are deaf or hard-of-hearing (DHH). However, where handheld phones present challenges, head-worn displays (HWDs) could further communication through privately transcribed text, hands-free use, improved mobility, and socially acceptable interactions. Wearable Subtitles is a lightweight 3D-printed proof-of-concept HWD that explores augmenting communication through sound transcription for a full workday. Using a low-power microcontroller architecture, we enable up to 15 hours of continuous use. We describe a large survey (n=501) and three user studies with 24 deaf/hard-of-hearing participants which inform our development and help us refine our prototypes. Our studies and prior research identify critical challenges for the adoption of HWDs which we address through extended battery life, lightweight and balanced mechanical design (54 g), fitting options, and form factors that are compatible with current social norms.

  • DOI
  •     PDF
  •     Video
  • 2020
  • Kyungjun Lee, Daisuke Sato, Saki Asakawa, Hernisa Kacorri, and Chieko Asakawa.
  • Pedestrian Detection with Wearable Cameras for the Blind: A Two-way Perspective.
  • In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
  • Blind people have limited access to information about their surroundings, which is important for ensuring one's safety, managing social interactions, and identifying approaching pedestrians. With advances in computer vision, wearable cameras can provide equitable access to such information. However, the always-on nature of these assistive technologies poses privacy concerns for parties that may get recorded. We explore this tension from both perspectives, those of sighted passersby and blind users, taking into account camera visibility, in-person versus remote experience, and extracted visual information. We conduct two studies: an online survey with MTurkers (N=206) and an in-person experience study between pairs of blind (N=10) and sighted (N=40) participants, where blind participants wear a working prototype for pedestrian detection and pass by sighted participants. Our results suggest that both of the perspectives of users and bystanders and the several factors mentioned above need to be carefully considered to mitigate potential social tensions.

  • DOI
  •     PDF
  • 2020
  • Yanan Wang, Judith Amores, and Pattie Maes.
  • On-Face Olfactory Interfaces.
  • In: Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems.
  • On-face wearables are currently limited to piercings, tattoos, or interactive makeup that aesthetically enhances the user, and have been minimally used for scent-delivery methods. However, on-face scent interfaces could provide an advantage for personal scent delivery in comparison with other modalities or body locations since they are closer to the nose. In this paper, we present the mechanical and industrial design details of a series of form factors for on-face olfactory wearables that are lightweight and can be adhered to the skin or attached to glasses or piercings. We assessed the usability of three prototypes by testing with 12 participants in a within-subject study design while they were interacting in pairs at a close personal distance. We compare two of these designs with an "off-face" olfactory necklace and evaluate their social acceptance, comfort as well as perceived odor intensity for both the wearer and observer.

  • DOI
  •     PDF
  •     Page
  •     Video
  • 2020
  • Oliver Alonzo, Lisa Elliot, Becca Dingman, and Matt Huenerfauth.
  • Reading Experiences and Interest in Reading-Assistance Tools Among Deaf and Hard-of-Hearing Computing Professionals.
  • In: Proceedings of the 22nd International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '20).
  • Automatic Text Simplification (ATS) software replaces text with simpler alternatives. While some prior research has explored its use as a reading assistance technology, including some empirical findings suggesting benefits for deploying this technology among particular groups of users, relatively little work has investigated the interest and requirements of specific groups of users of this technology. In this study, we investigated the interests of Deaf and Hard-of-Hearing (DHH) individuals in the computing industry in ATS-based reading assistance tools, motivated by prior work establishing that computing professionals often need to read about new technologies in order to stay current in their profession. Through a survey and follow-up interviews, we investigate these DHH individuals’ reading practices, current techniques for overcoming complicated text, and their interest in reading assistance tools for their work. Our results suggest that these users read relatively often, especially in support of their work, and they were interested in tools to assist them with complicated texts. This empirical contribution provides motivation for further research into ATS-based reading assistance tools for these users, prioritizing which reading activities users are most interested in seeing application of this technology, as well as some insights into design considerations for such tools.

  • DOI
  • 2020
  • Rufat Rzayev, Susanne Korbely, Milena Maul, Alina Schark, Valentin Schwind, and Niels Henze.
  • Effects of Position and Alignment of Notifications on AR Glasses during Social Interaction.
  • In: Proceedings of the 11th Nordic Conference on Human-Computer Interaction (NordiCHI'20).
  • Notifications are one of the smartphones’ key features. However, notifications can be disruptive, especially during social interaction. Augmented reality (AR) glasses can embed notifications directly into the user’s field of view and enable reading them while being engaged in a primary task. However, for efficient notification presentation using AR glasses, it is necessary to understand how notifications should be displayed without negatively affecting social interaction. Therefore, we conducted a study with 32 participants (16 pairs) using AR glasses to investigate how to display notifications during face-to-face communication. We compared center and top-right positions for notifications while aligning them relative to the user’s field of view or with the conversation partner. We found significant effects of notification position and alignment on how notifications are perceived using AR glasses during face-to-face communication. Insights from our study inform the design of applications for AR glasses that support displaying digital notifications.

  • DOI
  •     PDF
  • 2020
  • Victor Cheung and Audrey Girouard.
  • Exploring Acceptability and Utility of Deformable Interactive Garment Buttons.
  • In: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (CHI EA '20).
  • Wearable devices have received tremendous interest in the fitness and personal assistance sectors. Yet most are still worn as auxiliary hardware, falling short in ubiquity and convenience. We examine the potential of a novel deformable wearable device that embeds interactive technologies into garment buttons, and seek to enhance the form factor of buttons to incorporate deformation and motion as inputs. We surveyed garment buttons in everyday clothing to inform an exploratory study, where we investigated social acceptance and elicited interaction gestures using mockups. Our results indicate people mostly prefer smaller sizes, and regard sleeves as the most comfortable area to operate and look at when seen by others. Based on our findings, we discuss potential contexts of use, possible applications, and future work.

  • DOI
  •     PDF
  • 2020
  • Çağlar Genç, Ashley Colley, Markus Löchtefeld, and Jonna Häkkilä.
  • Face Mask Design to mitigate Facial Expression Occlusion.
  • In: Proceedings of the 2020 International Symposium on Wearable Computers (ISWC '20).
  • The COVID-19 pandemic dictated that wearing face masks during public interactions was the new norm across much of the globe. As the masks naturally occlude part of the wearer's face, the part of communication that occurs through facial expressions is lost, which could reduce acceptance of mask wear. To address the issue, we created two face mask prototypes incorporating simple expressive display elements and evaluated them in a user study. Aiming to explore the potential for low-cost solutions suitable for large-scale deployment, our concepts utilized bi-state electrochromic displays. One concept, the Mouthy Mask, aimed to reproduce the image of the wearer's mouth, whilst the Smiley Mask was symbolic in nature. The smart face masks were considered useful in public contexts to support short socially expected rituals. Generally, a visualization directly representing the wearer's mouth was preferred to an emoji-style visualization. As a contribution, our work presents a stepping stone towards productizable solutions for smart face masks that potentially increase the acceptability of face mask wear in public.

  • DOI
  •     PDF
  •     Video
  • 2020
  • Hui-Shyong Yeo, Juyoung Lee, Andrea Bianchi, Alejandro Samboy, Hideki Koike, Woontack Woo, and Aaron Quigley.
  • WristLens: Enabling Single-Handed Surface Gesture Interaction for Wrist-Worn Devices Using Optical Motion Sensor.
  • In: Proceedings of the Augmented Humans International Conference (AHs '20).
  • WristLens is a system for surface interaction from wrist-worn wearable devices such as smartwatches and fitness trackers. It enables eyes-free, single-handed gestures on surfaces, using an optical motion sensor embedded in a wrist-strap. This allows the user to leverage any proximate surface, including their own body, for input and interaction. An experimental study was conducted to measure the performance of gesture interaction on three different body parts. Our results show that directional gestures are accurately recognized but less so for shape gestures. Finally, we explore the interaction design space enabled by WristLens, and demonstrate novel use cases and applications, such as on-body interaction, bimanual interaction, cursor control and 3D measurement.

  • DOI
  •     PDF
  •     Video
  • 2020
  • Wouter van der Woude, Daniel Tetteroo, and Rong-Hao Liang.
  • Presley: Designing Non-Obtrusive Tactile Rhythmic Wearable Devices for Improving Speech Fluency.
  • In: Companion Publication of the 2020 ACM Designing Interactive Systems Conference (DIS' 20 Companion).
  • People who stutter often lack self-esteem and self-efficacy caused by self-stigma. Current speech fluency devices mainly focus on the efficiency of increasing fluency, but seldom address the psychological factors that people experience in everyday life. In this paper, we present a work-in-progress on designing non-obtrusive tactile rhythmic feedback devices that are wearable, readily available, yet unnoticeable by others. We review the background and related work, and reflect on the early experiences of an experiential prototype with persons who do and do not stutter. Based on the results, we inform the future design of socially acceptable speech fluency devices.

  • DOI
  •     Video

2019

  • 2019
  • Mona Hosseinkhani Loorak, Wei Zhou, Ha Trinh, Jian Zhao, and Wei Li.
  • Hand-Over-Face Input Sensing for Interaction with Smartphones through the Built-in Camera.
  • In: Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '19).
  • This paper proposes using the face as a touch surface and employing hand-over-face (HOF) gestures as a novel input modality for interaction with smartphones, especially when touch input is limited. We contribute InterFace, a general system framework that enables the HOF input modality using advanced computer vision techniques. As an exemplar of the usage of this framework, we demonstrate the feasibility and usefulness of HOF with an Android application for improving the single-user and group selfie-taking experience through providing appearance customization in real-time. In a within-subjects study comparing HOF against touch input for single-user interaction, we found that HOF input led to significant improvements in accuracy and perceived workload, and was preferred by the participants. Qualitative results of an observational study also demonstrated the potential of the HOF input modality to improve the user experience in multi-user interactions. Based on the lessons learned from our studies, we propose a set of potential applications of HOF to support smartphone interaction. We envision that the affordances provided by this modality can expand the mobile interaction vocabulary and facilitate scenarios where touch input is limited or even not possible.

  • DOI
  •     PDF
  • 2019
  • Ella Dagan, Elena Márquez Segura, Ferran Altarriba Bertran, Miguel Flores, Robb Mitchell, and Katherine Isbister.
  • Design Framework for Social Wearables.
  • In: Proceedings of the 2019 on Designing Interactive Systems Conference (DIS '19).
  • Wearables are integrated into many aspects of our lives, yet, we still need further guidance to develop devices that truly enhance in-person interactions, rather than detract from them by taking people's attention away from the moment and one another. The value of this paper is twofold: first, we present an annotated portfolio of 'social wearables', namely technology designs worn on the body that augment co-located interaction. The design work described can serve as inspiration for others. Then we propose a design framework for social wearables grounded in prior work, as well as our own design research, that can help designers to ideate by raising valuable questions to begin their inquiry with and use to evaluate their designs. We illustrate the evaluative value of this framework through two social wearable designs, each tested in the appropriate social setting.

  • DOI
  • 2019
  • Ilyena Hirskyj-Douglas, Mikko Kytö, and David McGookin.
  • Head-mounted Displays, Smartphones, or Smartwatches? -- Augmenting Conversations with Digital Representation of Self.
  • In: Proceedings of the ACM on Human-Computer Interaction 3, CSCW, Article 179 (November 2019).
  • Technologies that augment face-to-face interactions with a digital sense of self have been used to support conversations. That work has employed one homogenous technology, either 'off-the-shelf' or with a bespoke prototype, across all participants. Beyond speculative instances, it is unclear what technology individuals themselves would choose, if any, to augment their social interactions; what influence it may exert; or how use of heterogeneous devices may affect the value of this augmentation. This is important, as the devices that we use directly affect our behaviour, influencing affordances and how we engage in social interactions. Through a study of 28 participants, we compared head-mounted display, smartphones, and smartwatches to support digital augmentation of self during face-to-face interactions within a group. We identified a preference among participants for head-mounted displays to support privacy, while smartwatches and smartphones better supported conversational events (such as grounding and repair), along with group use through screen-sharing. Accordingly, we present software and hardware design recommendations and user interface guidelines for integrating a digital form of self into face-to-face conversations.

  • DOI
  •     PDF
  • 2019
  • Chuang-Wen You, Ya-Fang Lin, Elle Luo, Hung-Yeh Lin, and Hsin-Liu (Cindy) Kao.
  • Understanding Social Perceptions towards Interacting with On-skin Interfaces in Public.
  • In: Proceedings of the 23rd International Symposium on Wearable Computers (ISWC '19).
  • Wearable devices have evolved towards intrinsic human augmentation, unlocking the human skin as an interface for seamless interaction. However, the non-traditional form factor of these on-skin interfaces, as well as the gestural interactions performed on them may raise concerns for public wear. These perceptions will influence whether a new form of technology will eventually be accepted, or rejected by society. Therefore, it is essential for researchers to consider the societal implications of device design. In this paper, we investigate the third person perceptions of a user's interactions with an on-skin touch sensor. Specifically, we examine social perceptions towards the placement of the on-skin interface in different body locations, as well as gestural interactions performed on the device. The study was conducted in the United States and Taiwan to examine cross-cultural attitudes towards device usage. The results of this structured examination offer insight into the design of on-skin interfaces for public use.

  • DOI
  •     PDF
  • 2019
  • Pouya Eghbali, Kaisa Väänänen, and Tero Jokela.
  • Social Acceptability of Virtual Reality in Public Spaces: Experiential Factors and Design Recommendations.
  • In: Proceedings of the 18th International Conference on Mobile and Ubiquitous Multimedia (MUM '19).
  • With the latest advancements in Virtual Reality (VR), the possible use of VR devices in public and social contexts has increased. Since the use of VR typically requires wearing a Head-Mounted Display (HMD), the user is not able to see others - the spectators - present in the same context. This may lead to a decrease of social acceptability of VR by both the users and the spectators. We conducted a field experiment to explore the experiential factors of VR users (N=10) and spectators of VR use (N=30). We found the experiential factors for the users to be adjustment of interaction, uninterruptable immersion, un-intrusive communication, freedom to switch between realities, sense of safety, physical privacy, shared experience, and sense of belonging. For the spectators, the main factors are shared experience, enticing curiosity, feeling normal, and sense of safety. We then ran three sessions with user experience (UX) experts (N=9) to create a set of design recommendations for socially acceptable VR. The resulting ten recommendations provide a holistic view on designing acceptable experiences for VR in public spaces.

  • DOI
  •     PDF
  • 2019
  • Valentin Schwind, Niklas Deierlein, Romina Poguntke, and Niels Henze.
  • Understanding the Social Acceptability of Mobile Devices Using the Stereotype Content Model.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'19).
  • Understanding social perception is important for designing mobile devices that are socially acceptable. Previous work not only investigated the social acceptability of mobile devices and interaction techniques but also provided tools to measure social acceptance. However, we lack a robust model that explains the underlying factors that make devices socially acceptable. In this paper, we consider mobile devices as social objects and investigate if the stereotype content model (SCM) can be applied to those devices. Through a study that assesses combinations of mobile devices and group stereotypes, we show that mobile devices have a systematic effect on the stereotypes' warmth and competence. Supported by a second study, which combined mobile devices without a specific stereotypical user, our results suggest that mobile devices are perceived stereotypically by themselves. Our combined results highlight mobile devices as social objects and the importance of considering stereotypes when assessing social acceptance of mobile devices.

  • DOI
  •     PDF
  • 2019
  • Julie R. Williamson, Mark McGill, and Khari Outram.
  • PlaneVR: Social Acceptability of Virtual Reality for Aeroplane Passengers.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'19).
  • Virtual reality (VR) headsets allow wearers to escape their physical surroundings, immersing themselves in a virtual world. Although escape may not be realistic or acceptable in many everyday situations, air travel is one context where early adoption of VR could be very attractive. While travelling, passengers are seated in restricted spaces for long durations, reliant on limited seat-back displays or mobile devices. This paper explores the social acceptability and usability of VR for in-flight entertainment. In an initial survey, we captured respondents' attitudes towards the social acceptability of VR headsets during air travel. Based on the survey results, we developed a VR in-flight entertainment prototype and evaluated this in a focus group study. Our results discuss methods for improving the acceptability of VR in-flight, including using mixed reality to help users transition between virtual and physical environments and supporting interruption from other co-located people.

  • DOI
  •     PDF
  • 2019
  • Marion Koelle, Torben Wallbaum, Wilko Heuten, and Susanne Boll.
  • Evaluating a Wearable Camera's Social Acceptability In-the-Wild.
  • In: Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems, p. LBW1222. ACM, 2019.
  • With increasing ubiquity, wearable technologies are becoming part of everyday life where they may cause controversy, discomfort and social tension. Particularly, body-worn "always-on" cameras raise social acceptability concerns as their form factors hinder bystanders from inferring whether they are "in the frame". Screen-based status indicators have been suggested as a remedy, but not evaluated in-the-wild. Simultaneously, best practices for evaluating social acceptability in field studies are rare. This work contributes to closing both gaps. First, we contribute results of an in-the-wild evaluation of a screen-based status indicator testing the suitability of the "displayed camera image" design strategy. Second, we discuss methodical implications for evaluating social acceptability in the field, and cover lessons learned from collecting hypersubjective self-reports. We provide a self-critical, in-depth discussion of our field experiment, including study-related behavior patterns and prototype fidelity. Our work may serve as a reference for field studies evaluating social acceptability.

  • DOI
  • 2019
  • Pablo Gallego Cascón, Denys J.C. Matthies, Sachith Muthukumarana, and Suranga Nanayakkara.
  • ChewIt. An Intraoral Interface for Discreet Interactions.
  • In: Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems.
  • Sensing interfaces relying on head or facial gestures provide effective solutions for hands-free scenarios. Most of these interfaces utilize sensors attached to the face or placed inside the mouth, making them either obtrusive or limited in input bandwidth. In this paper, we propose ChewIt -- a novel intraoral input interface. ChewIt resembles an edible object that allows users to perform various hands-free input operations, both simply and discreetly. Our design is informed by a series of studies investigating the implications of shape, size, and location for comfort, discreetness, maneuverability, and obstructiveness. Additionally, we evaluated potential gestures that users could use to interact with such an intraoral interface.

  • DOI
  •     PDF
  •     Page
  • 2019
  • Qiushi Zhou, Joshua Newn, Benjamin Tag, Hao-Ping Lee, Chaofan Wang, and Eduardo Velloso.
  • Ubiquitous Smart Eyewear Interactions using Implicit Sensing and Unobtrusive Information Output.
  • In: Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (UbiComp/ISWC '19 Adjunct).
  • Premature technology, privacy, intrusiveness, power consumption, and user habits are all factors potentially contributing to the lack of social acceptance of smart glasses. After investigating the recent development of commercial smart eyewear and its related research, we propose a design space for ubiquitous smart eyewear interactions that maximises interactivity with minimal obtrusiveness. We focus on implicit and explicit interactions enabled by the combination of miniature sensor technology, low-resolution displays and simplistic interaction modalities. Additionally, we present example applications outlining future development directions. Finally, we aim at raising awareness of designing for ubiquitous eyewear with implicit sensing and unobtrusive information output abilities.

  • DOI

2018

  • 2018
  • Fouad Alallah, Ali Neshati, Yumiko Sakamoto, Khalad Hasan, Edward Lank, Andrea Bunt, and Pourang Irani.
  • Performer vs. Observer: Whose Comfort Level Should We Consider when Examining the Social Acceptability of Input Modalities for Head-worn Display?
  • In: Proceedings of the ACM Symposium on Virtual Reality Software and Technology (VRST'18).
  • The popularity of head-worn display (HWD) technologies such as Virtual Reality (VR) and Augmented Reality (AR) headsets is growing rapidly. To predict their commercial success, it is essential to understand the acceptability of these new technologies, along with new methods to interact with them. In this vein, the evaluation of the social acceptability of interactions with these technologies has received significant attention, particularly from the performer's (i.e., user's) viewpoint. However, little work has considered social acceptability concerns from the observers' (i.e., spectators') perspective. Although HWDs are designed to be personal devices, interacting with their interfaces is often quite noticeable, making them an ideal platform to contrast performer and observer perspectives on social acceptability. Through two studies, this paper contrasts performers' and observers' perspectives on the social acceptability of interactions with HWDs under different social contexts. Results indicate similarities as well as differences in acceptability, and advocate for the importance of including both perspectives when exploring the social acceptability of emerging technologies. We provide guidelines for understanding social acceptability specifically from the observers' perspective, thus complementing our current practices used for understanding the acceptability of interacting with these devices.

  • DOI
  •     PDF
  • 2018
  • Fouad Alallah, Ali Neshati, Nima Sheibani, Yumiko Sakamoto, Andrea Bunt, Pourang Irani, and Khalad Hasan.
  • Crowdsourcing vs Laboratory-Style Social Acceptability Studies?: Examining the Social Acceptability of Spatial User Interactions for Head-Worn Displays.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'18).
  • The use of crowdsourcing platforms for data collection in HCI research is attractive in their ability to provide rapid access to large and diverse participant samples. As a result, several researchers have conducted studies investigating the similarities and differences between data collected through crowdsourcing and more traditional, laboratory-style data collection. We add to this body of research by examining the feasibility of conducting social acceptability studies via crowdsourcing. Social acceptability can be a key determinant for the early adoption of emerging technologies, and as such, we focus our investigation on social acceptability for Head-Worn Display (HWD) input modalities. Our results indicate that data collected via a crowdsourced experiment and a laboratory-style setting did not differ at a statistically significant level. These results provide initial support for crowdsourcing platforms as viable options for conducting social acceptability research.

  • DOI
  •     PDF
  • 2018
  • Mauro Avila Soto and Markus Funk.
  • Look, a Guidance Drone! Assessing the Social Acceptability of Companion Drones for Blind Travelers in Public Spaces.
  • In: Proceedings of the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'18).
  • Using assistance technology always comes with the challenge of social acceptability. While an accessibility device might have great implications for a person with disabilities, it might come with unpleasant social implications. In this paper, we want to assess the social implications of flying companion quadcopters for navigating persons with visual impairments. We conducted an acceptability study with 15 sighted and 5 visually impaired participants and report on the results.

  • DOI
  •     PDF
  • 2018
  • Christine Dierk, Sarah Sterman, Molly Jane Pearce Nicholas, and Eric Paulos.
  • HäirIÖ: Human Hair As Interactive Material.
  • In: Proceedings of the International Conference on Tangible, Embedded, and Embodied Interaction (TEI'18).
  • Human hair is a cultural material, with a rich history displaying individuality, cultural expression and group identity. It is malleable in length, color and style, highly visible, and embedded in a range of personal and group interactions. As wearable technologies move ever closer to the body, and embodied interactions become more common and desirable, hair presents a unique and little-explored site for novel interactions. In this paper, we present an exploration and working prototype of hair as a site for novel interaction, leveraging its position as something both public and private, social and personal, malleable and permanent. We develop applications and interactions around this new material in HäirIÖ: a novel integration of hair-based technologies and braids that combine capacitive touch input and dynamic output through color and shape change. Finally, we evaluate this hair-based interactive technology with users, including the integration of HäirIÖ within the landscape of existing wearable and mobile technologies.

  • DOI
  •     PDF
  •     Page
  • 2018
  • Marion Koelle, Swamy Ananthanarayan, Simon Czupalla, Wilko Heuten, and Susanne Boll.
  • Your Smart Glasses' Camera Bothers Me!: Exploring Opt-in and Opt-out Gestures for Privacy Mediation.
  • In: Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI'18).
  • Bystanders have little say in whether they are being recorded by "always-on" cameras. One approach is to use gestural interaction to enable bystanders to signal their preference to camera devices. Since there is no established gestural vocabulary for this use case, we explored gestures to explicitly express consent (Opt-in) or disapproval (Opt-out) in a particular recording. We started with a gesture elicitation study, where we invited 15 users to envision potential Opt-in and Opt-out gestures. Subsequently, we conducted a large-scale online survey (N=127) investigating ambiguity, representativeness, understandability, social acceptability, and comfort of a subset of gestures derived from the elicitation study. Our results indicate that it is feasible to find gestures that are suitable, understandable, and socially acceptable. Gestures should be illustrative, complementary, and extendable (e.g., through sequential linkage) to account for more granular control, as well as not be beset with common meaning. Moreover, we discuss ethicality and legal implications in the context of GDPR.

  • DOI
  •     PDF
  • 2018
  • DoYoung Lee, Youryang Lee, Yonghwan Shin, and Ian Oakley.
  • Designing Socially Acceptable Hand-to-Face Input.
  • In: Proceedings of the ACM Symposium on User Interface Software and Technology (UIST'18).
  • Wearable head-mounted displays combine rich graphical output with an impoverished input space. Hand-to-face gestures have been proposed as a way to add input expressivity while keeping control movements unobtrusive. To better understand how to design such techniques, we describe an elicitation study conducted in a busy public space in which pairs of users were asked to generate unobtrusive, socially acceptable hand-to-face input actions. Based on the results, we describe five design strategies: miniaturizing, obfuscating, screening, camouflaging and re-purposing. We instantiate these strategies in two hand-to-face input prototypes, one based on touches to the ear and the other based on touches of the thumbnail to the chin or cheek. Performance assessments characterize time and error rates with these devices. The paper closes with a validation study in which pairs of users experience the prototypes in a public setting and we gather data on the social acceptability of the designs and reflect on the effectiveness of the different strategies.

  • DOI
  •     PDF
  • 2018
  • Halley Profita, Abigale Stangl, Laura Matuszewska, Sigrunn Sky, Raja Kushalnagar, and Shaun K. Kane.
  • "Wear It Loud": How and Why Hearing Aid and Cochlear Implant Users Customize Their Devices.
  • In: ACM TRANS ACCESS COMPUT, 11(3), 13:1--13:32.
  • We investigate the role of aesthetic customization in managing sociocultural issues of assistive technology (AT) use. First, we examined an online forum dedicated to customized hearing aids and cochlear implants to understand the breadth of activity occurring in this space. Next, we conducted a series of interviews to understand motivational factors and sociocultural outcomes related to expressive AT. We found that community members discussed customization tools and techniques, shared their customizations, and provided each other with encouragement and support. Community members customized their devices as a means of self-expression that demonstrated the wearer's fashion sense, revealed favorite sports teams and characters, and marked holidays and personal milestones. We also found that aesthetic customization worked on multiple levels to create personal and meaningful relationships with one's AT and with other AT users, and also to manage societal expectations regarding hearing loss. Our findings may inform the design of assistive technologies that better support personalization, customization, and self-expression.

  • DOI
  •     PDF
  • 2018
  • Valentin Schwind, Jens Reinhardt, Rufat Rzayev, Niels Henze, and Katrin Wolf.
  • Virtual Reality on the Go?: A Study on Social Acceptance of VR Glasses.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'18).
  • Virtual reality (VR) glasses enable users to be present in an environment while their physical body is located in another place. Recent mobile VR glasses enable users to be present in any environment they want, at any time and physical place. Still, mobile VR glasses are rarely used. One explanation is that it is not considered socially acceptable to immerse oneself in another environment in certain situations. We conducted an online experiment that investigates the social acceptance of VR glasses in six different contexts. Our results confirm that social acceptability depends on the situation. In bed, in the metro, or in a train, mobile VR glasses seem to be acceptable. However, while being surrounded by other people where social interaction between people is expected, such as in a living room or a public cafe, the acceptance of mobile VR glasses is significantly reduced. Whether one or two persons wear glasses seems to have a negligible effect. We conclude that the social acceptability of VR glasses depends on the situation and is lower when the user is supposed to interact with surrounding people.

  • DOI
  •     PDF
  • 2018
  • Aleksandra Taniberg, Lars Botin, and Kashmiri Stec.
  • Context of Use Affects the Social Acceptability of Gesture Interaction.
  • In: Proceedings of the Nordic Conference on Human-Computer Interaction (NordiCHI'18).
  • We use a Wizard-of-Oz design to investigate the effects of physical context on the social acceptability of touchless (3D) gesture interaction for pairs of mature users (age 30+) controlling a sound system in a living room environment. As part of this, we also investigate how the production of the gesture set varies with respect to physical context. Participants took turns being host (user) and guest (observer) in two conditions: "easy" and "hard". We find a tendency for social acceptability to be higher for the "easy" setup compared to "hard" and for hosts to rate the experience higher than guests. We also find a tendency for gesture size to decrease across sessions, though gestures in the "hard" setup tend to be larger than those in the "easy" setup.

  • DOI
  •     PDF

2017

  • 2017
  • Jun Gong, Lan Li, Daniel Vogel, and Xing-Dong Yang.
  • Cito: An Actuated Smartwatch for Extended Interactions.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'17).
  • We propose and explore actuating a smartwatch face to enable extended interactions. Five face movements are defined: rotation, hinging, translation, rising, and orbiting. These movements are incorporated into interaction techniques to address limitations of a fixed watch face. A 20-person study uses concept videos of a passive low-fidelity prototype to confirm the usefulness of the actuated interaction techniques. A second 20-person study uses 3D rendered animations to assess social acceptability and perceived comfort for different actuation dynamics and usage contexts. Finally, we present Cito, a high-fidelity proof-of-concept hardware prototype that investigates technical challenges.

  • DOI
  •     PDF
  •     Page
  • 2017
  • Christian Holz and Edward J. Wang.
  • Glabella: Continuously Sensing Blood Pressure Behavior Using an Unobtrusive Wearable Device.
  • In: PROC ACM IMWUT, 1(3), 58:1--58:23.
  • We propose Glabella, a wearable device that continuously and unobtrusively monitors heart rates at three sites on the wearer’s head. Our glasses prototype incorporates optical sensors, processing, storage, and communication components, all integrated into the frame to passively collect physiological data about the user without the need for any interaction. Glabella continuously records the stream of reflected light intensities from blood flow as well as inertial measurements of the user’s head. From the temporal differences in pulse events across the sensors, our prototype derives the wearer’s pulse transit time on a beat-to-beat basis. Numerous efforts have found a significant correlation between a person’s pulse transit time and their systolic blood pressure. In this paper, we leverage this insight to continuously observe pulse transit time as a proxy for the behavior of systolic blood pressure levels—at a substantially higher level of convenience and higher rate than traditional blood pressure monitors, such as cuff-based oscillometric devices. This enables our prototype to model the beat-to-beat fluctuations in the user’s blood pressure over the course of the day and record its short-term responses to events, such as postural changes, exercise, eating and drinking, resting, medication intake, location changes, or time of day. During our in-the-wild evaluation, four participants wore a custom-fit Glabella prototype device over the course of five days throughout their daytime job and regular activities. Participants additionally measured their radial blood pressure three times an hour using a commercial oscillometric cuff. Our analysis shows a high correlation between the pulse transit times computed on our devices with participants’ heart rates (mean r = 0.92, SE = 0.03, angular artery) and systolic blood pressure values measured using the oscillometric cuffs (mean r = 0.79, SE = 0.15, angular-superficial temporal artery, considering participants’ self-administered cuff-based measurements as ground truth). Our results indicate that Glabella has the potential to serve as a socially-acceptable capture device, requiring no user input or behavior changes during regular activities, and whose continuous measurements may prove informative to physicians as well as users’ self-tracking activities.

  • DOI
  •     PDF
  •     Page
  • 2017
  • Marion Koelle, Abdallah El Ali, Vanessa Cobus, Wilko Heuten, and Susanne Boll.
  • All About Acceptability?: Identifying Factors for the Adoption of Data Glasses.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'17).
  • Innovations often trigger objections before becoming widely accepted. This paper assesses whether a familiarisation over time can be expected for data glasses, too. While user attitudes towards those devices have been reported to be prevalently negative [14], it is still unclear to what extent this initial, negative user attitude might impede adoption. However, in-depth understanding is crucial for reducing barriers early in order to gain access to potential benefits from the technology. With this paper we contribute to a better understanding of factors affecting data glasses adoption, as well as current trends and opinions. Our multiple-year case study (N=118) shows, against expectations, no significant change towards a more positive attitude between 2014 and 2016. We complement these findings with an expert survey (N=51) investigating prognoses and challenges, and discussing the relevance of social acceptability. We elicit and contrast a controversial spectrum of expert opinions, and assess whether initial objections can be overcome. Our analysis shows that while social acceptability is considered relevant for the time being, utility and usability are more valued for long-term adoption.

  • DOI
  •     PDF
  • 2017
  • Kiyoshi Murata, Andrew A. Adams, Yasunori Fukuta, Yohko Orito, Mario Arias-Oliva, and Jorge Pelegrin-Borondo.
  • From a Science Fiction to Reality: Cyborg Ethics in Japan.
  • In: COMPUT SOC, 47(3), 72--85.
  • This study deals with young people's attitudes towards and social acceptance of "cyborg technology" including wearables and insideables (or implantable devices) to enhance human ability in Japan as part of the international research project on cyborg ethics, taking Japanese socio-cultural characteristics surrounding cyborg technology into consideration. Those subjects were investigated through questionnaire surveys of Japanese university students, which were conducted in November and December 2016. The survey results demonstrated respondents' relatively low resistance to using wearables and insideables to improve human physical ability and intellectual power. On the other hand, the morality of insideables was questioned by respondents. In various aspects, statistically significant differences in attitudes towards the technologies between genders were detected.

  • DOI
  • 2017
  • Uran Oh, Lee Stearns, Alisha Pradhan, Jon E. Froehlich, and Leah Findlater.
  • Investigating Microinteractions for People with Visual Impairments and the Potential Role of On-Body Interaction.
  • In: Proceedings of the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'17).
  • For screenreader users who are blind or visually impaired (VI), today's mobile devices, while reasonably accessible, are not necessarily efficient. This inefficiency may be especially problematic for microinteractions, which are brief but high-frequency interactions that take only a few seconds for sighted users to complete (e.g., checking the weather or for new messages). One potential solution to support efficient non-visual microinteractions is on-body input, which appropriates the user's own body as the interaction medium. In this paper, we address two related research questions: How well are microinteractions currently supported for VI users? How should on-body interaction be designed to best support microinteractions for this user group? We conducted two studies: (1) an online survey to compare current microinteraction use between VI and sighted users (N=117); and (2) an in-person study where 12 VI screenreader users qualitatively evaluated a real-time on-body interaction system that provided three contrasting input designs. Our findings suggest that efficient microinteractions are not currently well-supported for VI users, at least using manual input, which highlights the need for new interaction approaches. On-body input offers this potential and the qualitative evaluation revealed tradeoffs with different on-body interaction techniques in terms of perceived efficiency, learnability, social acceptability, and ability to use on the go.

  • DOI
  •     PDF
  • 2017
  • Jeni Paay, Jesper Kjeldskov, Dimitrios Raptis, Mikael B. Skov, Ivan S. Penchev, and Elias Ringhauge.
  • Cross-device Interaction with Large Displays in Public: Insights from Both Users' and Observers' Perspectives.
  • In: Proceedings of the Australasian Computer-Human Interaction Conference (OzCHI'17).
  • Using a mixture of physical gestures and one's smartphone is a convenient way for people to engage and interact with large displays in public. Yet, one of the challenges of cross-device interactions is to design techniques that encourage participation. This paper presents a study of people using four different cross-device interaction techniques in a public setting, to identify how both users and observers feel about the device actions and bodily gestures required to interact with a large display using smartphones. We collected both direct feedback and observational data of users' and observers' attitudes and reactions to using these techniques in public. We identified five key factors influencing people's experience of interacting while being observed by others: Familiarity, Social Acceptability, Purpose, Easiness and Playfulness. We argue that it is important to consider observer attitudes when designing cross-device interactions for large displays in public, to encourage the participation of passers-by.

  • DOI
  •     PDF

2016

  • 2016
  • Julian Frommel, Katja Rogers, Thomas Dreja, Julian Winterfeldt, Christian Hunger, Maximilian Bär, and Michael Weber.
  • 2084 Safe New World: Designing Ubiquitous Interactions.
  • In: Proceedings of the Annual Symposium on Computer-Human Interaction in Play (CHI PLAY'16).
  • This paper investigates a concept for highly ubiquitous game interactions in pervasive games. Pervasive gaming is increasingly popular, but steadily improving mobile and ubiquitous technologies (e.g. smartwatches) have yet to be utilised to their full potential in this area. For this purpose, we implemented 2084 Safe New World; a pervasive game that allows particularly ubiquitous gameplay through micro interactions of varying duration. In a lab study, different interaction techniques based on gestures and touch input were compared on two mobile devices regarding usability and game input observability. A second study evaluated the player experience under more realistic circumstances; in particular, it examined how well the game can be integrated into everyday life, and tested boundaries of social acceptance of ubiquitous interactions in a pervasive spy game.

  • DOI
  •     PDF
  • 2016
  • Yi-Ta Hsieh, Antti Jylhä, Valeria Orso, Luciano Gamberini, and Giulio Jacucci.
  • Designing a Willing-to-Use-in-Public Hand Gestural Interaction Technique for Smart Glasses.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'16).
  • Smart glasses suffer from obtrusive or cumbersome interaction techniques. Studies show that people are not willing to publicly use, for example, voice control or mid-air gestures in front of the face. Some techniques also hamper the high degree of freedom of the glasses. In this paper, we derive design principles for socially acceptable, yet versatile, interaction techniques for smart glasses based on a survey of related work. We propose an exemplary design, based on a haptic glove integrated with smart glasses, as an embodiment of the design principles. The design is further refined into three interaction scenarios: text entry, scrolling, and point-and-select. Through a user study conducted in a public space we show that the interaction technique is considered unobtrusive and socially acceptable. Furthermore, the performance of the technique in text entry is comparable to state-of-the-art techniques. We conclude by reflecting on the advantages of the proposed design.

  • DOI
  •     PDF
  • 2016
  • Norene Kelly and Stephen Gilbert.
  • The WEAR Scale: Developing a Measure of the Social Acceptability of a Wearable Device.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'16).
  • The factors affecting the social acceptability of wearable devices are not well understood, yet they have a strong influence on whether a new wearable succeeds or fails. Factors uniquely affecting wearable acceptability as compared to other technology include manners, moral codes, the symbolic communication of dress, habits of dress, fashion, context of use, form, and aesthetics. This paper describes the development of the WEarable Acceptability Range (WEAR Scale), designed to predict acceptance of a particular wearable. First, the construct "social acceptability of a wearable" was defined using literature and an interview study. Second, the WEAR Scale's item pool was composed, and reviewed by experts. Third, the resulting scale was administered to sample respondents along with validation measures. The data will be evaluated for reliability and validity, and the scale's length will be adjusted, culminating in a validated WEAR Scale useful to both industry and academia.

  • DOI
  •     PDF
  • 2016
  • Katsutoshi Masai, Yuta Sugiura, Masa Ogata, Kai Kunze, Masahiko Inami, and Maki Sugimoto.
  • Facial Expression Recognition in Daily Life by Embedded Photo Reflective Sensors on Smart Eyewear.
  • In: Proceedings of the International Conference on Intelligent User Interfaces (IUI'16).
  • This paper presents a novel smart eyewear that uses embedded photo reflective sensors and machine learning to recognize a wearer's facial expressions in daily life. We leverage the skin deformation when wearers change their facial expressions. With small photo reflective sensors, we measure the proximity between the skin surface on a face and the eyewear frame where 17 sensors are integrated. A Support Vector Machine (SVM) algorithm was applied for the sensor information. The sensors can cover various facial muscle movements and can be integrated into everyday glasses. The main contributions of our work are as follows. (1) The eyewear recognizes eight facial expressions (92.8% accuracy for one time use and 78.1% for use on 3 different days). (2) It is designed and implemented considering social acceptability. The device looks like normal eyewear, so users can wear it anytime, anywhere. (3) Initial field trials in daily life were undertaken. Our work is one of the first attempts to recognize and evaluate a variety of facial expressions in the form of an unobtrusive wearable device.

  • DOI
  •     PDF
  • 2016
  • Daniel Pohl and Carlos Fernandez de Tejada Quemada.
  • See What I See: Concepts to Improve the Social Acceptance of HMDs.
  • In: Proceedings of the IEEE Virtual Reality Conference (VR'16).
  • Virtual Reality is reaching into the consumer space. Mobile virtual reality solutions are nowadays widely available and affordable for many smartphones: adding a case with attached lenses around the phone creates a head-mounted display. Using these in public places, or at social gatherings where the head-mounted display is passed around, can lead to problems regarding social acceptance, as the surrounding people are not aware of what the virtual reality user is seeing and doing. We address this problem by adding a second, front-facing screen to the head-mounted display. We build and evaluate two prototypes for this usage.

  • DOI
  •     PDF
  • 2016
  • Halley Profita, Reem Albaghli, Leah Findlater, Paul Jaeger, and Shaun K. Kane.
  • The AT Effect: How Disability Affects the Perceived Social Acceptability of Head-Mounted Display Use.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'16).
  • Wearable computing devices offer new possibilities to increase accessibility and independence for individuals with disabilities. However, the adoption of such devices may be influenced by social factors, and useful devices may not be adopted if they are considered inappropriate to use. While public policy may adapt to support accommodations for assistive technology, emerging technologies may be unfamiliar or unaccepted by bystanders. We surveyed 1200 individuals about the use of a head-mounted display in a public setting, examining how information about the user's disability affected judgments of the social acceptability of the scenario. Our findings reveal that observers considered head-mounted display use more socially acceptable if the device was being used to support a person with a disability.

  • DOI
  •     PDF
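
Several of the eyewear systems above, notably the smart eyewear by Masai et al., classify facial expressions by feeding an array of photo-reflective proximity readings to a Support Vector Machine. As a rough illustration of that kind of pipeline (not the authors' implementation; the 17-channel frames here are synthetic and all shapes and parameters are assumptions), a minimal scikit-learn sketch:

```python
# Sketch of an SVM pipeline for sensor-based expression eyewear:
# 17 proximity readings per frame, one of 8 expression labels.
# The data below is synthetic, purely for illustration.
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_sensors, n_classes = 800, 17, 8

# Synthetic stand-in for skin-proximity readings: each expression
# class shifts the sensor baseline slightly.
y = rng.integers(0, n_classes, size=n_frames)
X = rng.normal(size=(n_frames, n_sensors)) + y[:, None] * 0.5

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

With real sensor data, per-wearer and per-session calibration would also be needed; the accuracy drop the paper reports between one-time use (92.8%) and use across three days (78.1%) suggests session effects matter.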

2015

  • 2015
  • Barrett Ens, Tovi Grossman, Fraser Anderson, Justin Matejka, and George Fitzmaurice.
  • Candid Interaction: Revealing Hidden Mobile and Wearable Computing Activities.
  • In: Proceedings of the ACM Symposium on User Interface Software and Technology (UIST'15).
  • The growth of mobile and wearable technologies has often made it difficult to understand what people in our surroundings are doing with their technology. In this paper, we introduce the concept of candid interaction: techniques for providing awareness about our mobile and wearable device usage to others in the vicinity. We motivate and ground this exploration through a survey on current attitudes toward device usage during interpersonal encounters. We then explore a design space for candid interaction through seven prototypes that leverage a wide range of technological enhancements, such as Augmented Reality, shape memory muscle wire, and wearable projection. Preliminary user feedback of our prototypes highlights the trade-offs between the benefits of sharing device activity and the need to protect user privacy.

  • DOI
  •     PDF
  • 2015
  • Jonna Häkkilä, Farnaz Vahabpour, Ashley Colley, Jani Väyrynen, and Timo Koskela.
  • Design Probes Study on User Perceptions of a Smart Glasses Concept.
  • In: Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM'15).
  • Until today, mobile computing has been very much confined to conventional computing form factors, i.e. laptops, tablets and smartphones, which have achieved de facto design standards in outlook and shape. However, wearable devices are emerging, and especially glasses are an appealing form factor for future devices. Currently, although companies such as Google have productized a solution, little user research and design exploration has been published on either the user preferences or the technology. We set ourselves to explore the design directions for smart glasses with user research grounded use cases and design alternatives. We describe our user research utilizing a smart glasses design probe in an experience sampling method study (n=12), and present a focus group based study (n=14) providing results on perceptions on alternative industrial designs for smart glasses.

  • DOI
  •     PDF
  • 2015
  • Marion Koelle, Matthias Kranz, and Andreas Möller.
  • Don't Look at Me That Way!: Understanding User Attitudes Towards Data Glasses Usage.
  • In: Proceedings of the International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'15).
  • Data glasses do carry promising potential for hands-free interaction, but also raise various concerns amongst their potential users. In order to gain insights into the nature of those concerns, we investigate how potential usage scenarios are perceived by device users and their peers. We present results of a two-step approach: a focus group discussion with 7 participants, and a user study with 38 participants. In particular, we look into differences between the usage of data glasses and more established devices such as smart phones. We provide quantitative measures for scenario-related social acceptability and point out factors that can influence user attitudes. Based on our quantitative and qualitative results, we derive design implications that might support the development of head-worn devices and applications with an improved social acceptability.

  • DOI
  •     PDF
  • 2015
  • Zhihan Lv, Alaa Halawani, Shengzhong Feng, Shafiq Ur Réhman, and Haibo Li.
  • Touch-less Interactive Augmented Reality Game on Vision-based Wearable Device.
  • In: Personal and Ubiquitous Computing, 19(3–4), 551–567.
  • There is an increasing interest in creating pervasive games based on emerging interaction technologies. In order to develop touch-less, interactive and augmented reality games on a vision-based wearable device, a touch-less motion interaction technology is designed and evaluated in this work. Users interact with the augmented reality games with dynamic hand/foot gestures in front of the camera, which triggers the interaction event to interact with the virtual object in the scene. Three primitive augmented reality games with eleven dynamic gestures are developed based on the proposed touch-less interaction technology as proof. Finally, a comparative evaluation is conducted to demonstrate the social acceptability and usability of the touch-less approach, running on a hybrid wearable framework or on Google Glass, along with assessments of workload, user emotions, and satisfaction.

  • DOI
  •     PDF
  • 2015
  • Jennifer Pearson, Simon Robinson, and Matt Jones.
  • It's About Time: Smartwatches As Public Displays.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'15).
  • Current uses of smartwatches are focused solely around the wearer's content, viewed by the wearer alone. When worn on a wrist, however, watches are often visible to many other people, making it easy to quickly glance at their displays. We explore the possibility of extending smartwatch interactions to turn personal wearables into more public displays. We begin opening up this area by investigating fundamental aspects of this interaction form, such as the social acceptability and noticeability of looking at someone else's watch, as well as the likelihood of a watch face being visible to others. We then sketch out interaction dimensions as a design space, evaluating each aspect via a web-based study and a deployment of three potential designs. We conclude with a discussion of the findings, implications of the approach and ways in which designers in this space can approach public wrist-worn wearables.

  • DOI
  •     PDF
  • 2015
  • Halley Profita, Nicholas Farrow, and Nikolaus Correll.
  • Flutter: An Exploration of an Assistive Garment Using Distributed Sensing, Computation and Actuation.
  • In: Proceedings of the International Conference on Tangible, Embedded, and Embodied Interaction (TEI'15).
  • Assistive technology (AT) has the ability to improve the standard of living of those with disabilities, however, it can often be abandoned for aesthetic or stigmatizing reasons. Garment-based AT offers novel opportunities to address these issues as it can stay with the user to continuously monitor and convey relevant information, is non-invasive, and can provide aesthetically pleasing alternatives. In an effort to overcome traditional AT and wearable computing challenges including, cumbersome hardware constraints and social acceptability, we present Flutter, a fashion-oriented wearable AT. Flutter seamlessly embeds low-profile networked sensing, computation, and actuation to facilitate sensory augmentation for those with hearing loss. The miniaturized distributed hardware enables both textile integration and new methods to pair fashion with function, as embellishments are functionally leveraged to complement technology integration. Finally, we discuss future applications and broader implications of using such computationally-enabled textile wearables to support sensory augmentation beyond the realm of AT.

  • DOI
  •     PDF
  •     Page
  • 2015
  • Ying-Chao Tung, Chun-Yen Hsu, Han-Yu Wang, Silvia Chyou, Jhe-Wei Lin, Pei-Jung Wu, Andries Valstar, and Mike Y. Chen.
  • User-Defined Game Input for Smart Glasses in Public Space.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'15).
  • Smart glasses, such as Google Glass, provide always-available displays not offered by console and mobile gaming devices, and could potentially offer a pervasive gaming experience. However, research on input for games on smart glasses has been constrained by the available sensors to date. To help inform design directions, this paper explores user-defined game input for smart glasses beyond the capabilities of current sensors, and focuses on the interaction in public settings. We conducted a user-defined input study with 24 participants, each performing 17 common game control tasks using 3 classes of interaction and 2 form factors of smart glasses, for a total of 2448 trials. Results show that users significantly preferred non-touch and non-handheld interaction over using handheld input devices, such as in-air gestures. Also, for touch input without handheld devices, users preferred interacting with their palms over wearable devices (51% vs 20%). In addition, users preferred interactions that are less noticeable due to concerns with social acceptance, and preferred in-air gestures in front of the torso rather than in front of the face (63% vs 37%).

  • DOI
  •     PDF
  • 2015
  • Surbhit Verma, Himanshu Bansal, and Keyur Sorathia.
  • A Study for Investigating Suitable Gesture Based Selection for Gestural User Interfaces.
  • In: Proceedings of the International Conference on HCI (IndiaHCI'15).
  • With gestural interfaces becoming increasingly popular across multiple platforms and devices, there is a need to investigate suitable gestural inputs, their effectiveness, and the acceptance of such interaction techniques. In this paper, we present a study investigating four gestural input methods for menu selection, (i) Grabbing, (ii) Pointing, (iii) Hand motion in a 2D plane, and (iv) Body movement in 3D space, and their performance and acceptance for gesture-controlled user interfaces using a game, "Treasure Hunt". The study was conducted with two different sets of 20 participants each, in public and private environments. We investigated the gestures for participants' ranking preferences, social acceptance (user and spectator), and positive-negative affect after experiencing each menu selection method. Each selection method was also evaluated for its usability by collecting error rates and task completion times. These parameters were analyzed to explore trends within and between environments. We observed that Grabbing was the most preferred method in both public and private environments; however, it took significantly more time to complete the task. Pointing and Hand motion in a 2D plane produced the most errors in both environments. Higher social acceptance was found in the public environment than in the private environment. We also present subjective analyses and discuss them in detail.

  • DOI
  •     PDF
  • 2015
  • Cheng-Yao Wang, Wei-Chen Chu, Po-Tsung Chiu, Min-Chieh Hsiu, Yih-Harn Chiang, and Mike Y. Chen.
  • PalmType: Using Palms As Keyboards for Smart Glasses.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'15).
  • We present PalmType, which uses palms as interactive keyboards for smart wearable displays, such as Google Glass. PalmType leverages users' innate ability to pinpoint specific areas of their palms and fingers without visual attention (i.e. proprioception), and provides visual feedback via the wearable displays. With wrist-worn sensors and wearable displays, PalmType enables typing without requiring users to hold any devices and does not require visual attention to their hands. We conducted design sessions with 6 participants to see how users map QWERTY layout to their hands based on their proprioception. To evaluate typing performance and preference, we conducted a 12-person user study using Google Glass and Vicon motion tracking system, which showed that PalmType with optimized QWERTY layout is 39% faster than current touchpad-based keyboards. In addition, PalmType is preferred by 92% of the participants. We demonstrate the feasibility of wearable PalmType by building a prototype that uses a wrist-worn array of 15 infrared sensors to detect users' finger position and taps, and provides visual feedback via Google Glass.

  • DOI
  •     PDF
  •     Page
  • 2015
  • Martin Weigel, Tong Lu, Gilles Bailly, Antti Oulasvirta, Carmel Majidi, and Jürgen Steimle.
  • iSkin: Flexible, Stretchable and Visually Customizable On-Body Touch Sensors for Mobile Computing.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'15).
  • We propose iSkin, a novel class of skin-worn sensors for touch input on the body. iSkin is a very thin sensor overlay, made of biocompatible materials, and is flexible and stretchable. It can be produced in different shapes and sizes to suit various locations of the body such as the finger, forearm, or ear. Integrating capacitive and resistive touch sensing, the sensor is capable of detecting touch input with two levels of pressure, even when stretched by 30% or when bent with a radius of 0.5cm. Furthermore, iSkin supports single or multiple touch areas of custom shape and arrangement, as well as more complex widgets, such as sliders and click wheels. Recognizing the social importance of skin, we show visual design patterns to customize functional touch sensors and allow for a visually aesthetic appearance. Taken together, these contributions enable new types of on-body devices. This includes finger-worn devices, extensions to conventional wearable devices, and touch input stickers, all fostering direct, quick, and discreet input for mobile computing.

  • DOI
  •     PDF
  •     Page
  • 2015
  • Sang Ho Yoon, Ke Huo, Vinh P. Nguyen, and Karthik Ramani.
  • TIMMi: Finger-worn Textile Input Device with Multimodal Sensing in Mobile Interaction.
  • In: Proceedings of the International Conference on Tangible, Embedded, and Embodied Interaction (TEI'15).
  • We introduce TIMMi, a textile input device for mobile interactions. TIMMi is worn on the index finger to provide a multimodal sensing input metaphor. The prototype is fabricated on a single layer of textile where the conductive silicone rubber is painted and the conductive threads are stitched. The sensing area comprises of three equally spaced dots and a separate wide line. Strain and pressure values are extracted from the line and three dots, respectively via voltage dividers. Regression analysis is performed to model the relationship between sensing values and finger pressure and bending. A multi-level thresholding is applied to capture different levels of finger bending and pressure. A temporal position tracking algorithm is implemented to capture the swipe gesture. In this preliminary study, we demonstrate TIMMi as a finger-worn input device with two applications: controlling music player and interacting with smartglasses.

  • DOI
  •     PDF
  •     Page
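
Resistive wearable sensors like TIMMi above are typically read through voltage dividers and then discretized into input levels. The following minimal sketch illustrates that readout chain; the resistor values, threshold levels, and function names are made up for illustration, not taken from the paper:

```python
# Sketch of a voltage-divider readout plus multi-level thresholding,
# as used for resistive textile sensors. All constants are illustrative.

V_IN = 3.3        # supply voltage (V)
R_FIXED = 10_000  # fixed divider resistor (ohms)

def divider_voltage(r_sensor: float) -> float:
    """Voltage across the sensor in a simple divider: Vout = Vin * Rs / (Rs + Rf)."""
    return V_IN * r_sensor / (r_sensor + R_FIXED)

def bend_level(r_sensor: float, thresholds=(0.8, 1.5, 2.8)) -> int:
    """Map the analog reading to a discrete bend level (0 = straight)."""
    v = divider_voltage(r_sensor)
    return sum(v > t for t in thresholds)

# The sensor's resistance rises as the finger bends further.
for r in (2_000, 5_000, 15_000, 120_000):
    print(f"R={r:>7} ohm  V={divider_voltage(r):.2f}  level={bend_level(r)}")
```

In practice the thresholds would be fit per sensor from calibration data, which is the role of the regression analysis the TIMMi abstract describes.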

2014

  • 2014
  • David Ahlström, Khalad Hasan, and Pourang Irani.
  • Are You Comfortable Doing That?: Acceptance Studies of Around-device Gestures in and for Public Settings.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'14).
  • Several research groups have demonstrated advantages of extending a mobile device's input vocabulary with in-air gestures. Such gestures show promise but are not yet being integrated onto commercial devices. One reason for this might be the uncertainty about users' perceptions regarding the social acceptance of such around-device gestures. In three studies, performed in public settings, we explore users' and spectators' attitudes about using around-device gestures in public. The results show that people are concerned about others' reactions. They are also sensitive and selective regarding where and in front of whom they would feel comfortable using around-device gestures. However, acceptance and comfort are strongly linked to gesture characteristics, such as, gesture size, duration and in-air position. Based on our findings we present recommendations for around-device input designers and suggest new approaches for evaluating the social acceptability of novel input methods.

  • DOI
  •     PDF
  • 2014
  • Lucy E. Dunne, Halley Profita, Clint Zeagler, James Clawson, Scott Gilliland, Ellen Yi-Luen Do, and Jim Budd.
  • The Social Comfort of Wearable Technology and Gestural Interaction.
  • In: Proceedings of the IEEE Engineering in Medicine and Biology Society (EMBC'14).
  • The “wearability” of wearable technology addresses the factors that affect the degree of comfort the wearer experiences while wearing a device, including physical, psychological, and social aspects. While the physical and psychological aspects of wearing technology have been investigated since early in the development of the field of wearable computing, the social aspects of wearability have been less fully-explored. As wearable technology becomes increasingly common on the commercial market, social wearability is becoming an ever-more-important variable contributing to the success or failure of new products. Here we present an analysis of social aspects of wearability within the context of the greater understanding of wearability in wearable technology, and focus on selected theoretical frameworks for understanding how wearable products are perceived and evaluated in a social context. Qualitative results from a study of social acceptability of on-body interactions are presented as a case study of social wearability.

  • DOI
  • 2014
  • Euan Freeman, Stephen Brewster, and Vuokko Lantz.
  • Towards Usable and Acceptable above-device Interactions.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'14).
  • Gestures above a mobile phone would let users interact with their devices quickly and easily from a distance. While both researchers and smartphone manufacturers develop new gesture sensing technologies, little is known about how best to design these gestures and interaction techniques. Our research looks at creating usable and socially acceptable above-device interaction techniques. We present an initial gesture collection, a preliminary evaluation of these gestures and some design recommendations. Our findings identify interesting areas for future research and will help designers create better gesture interfaces.

  • DOI
  •     PDF
  • 2014
  • Jaeyeon Jung and Matthai Philipose.
  • Courteous Glass.
  • In: Proceedings of the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp'14).
  • Small and always-on, wearable video cameras disrupt social norms that have been established for traditional hand-held video cameras, which explicitly signal when and which subjects are being recorded to people around the camera-holder. We first discuss privacy-related social cues that people employ when recording other people (as a camera-holder) or when being recorded by others (as a bystander or a subject). We then discuss how low-fidelity sensors such as far-infrared imagers can be used to capture these social cues and to control video cameras accordingly in order to respect the privacy of others. We present a few initial steps toward implementing a fully functioning wearable camera that recognizes social cues related to video privacy and generates signals that can be used by others to adjust their privacy expectations.

  • DOI
  •     PDF
  • 2014
  • Andrés Lucero and Akos Vetek.
  • NotifEye: Using Interactive Glasses to Deal with Notifications While Walking in Public.
  • In: Proceedings of the Conference on Advances in Computer Entertainment Technology (ACE'14).
  • In this paper we explore the use of interactive eyewear in public. We introduce NotifEye, an application that allows a person to receive social network notifications on interactive glasses while walking on a busy street. The prototype uses a minimalistic user interface (UI) for interactive glasses to help people focus their attention on their surroundings and supports discreet interaction by using a finger rub pad to take action on incoming notifications. We studied pragmatic and hedonic aspects of the prototype during a pedestrian navigation task in a city center. We found that, despite the potential risk of overwhelming people with information, participants were able to keep track of their surroundings as they dealt with incoming notifications. Participants also positively valued the use of a discreet device to provide input for interactive glasses. Finally, participants reflected on their (evolving) perception of interactive glasses, indicating that glasses should become smaller, more comfortable to wear, and somewhat of a fashion accessory.

  • DOI
  •     PDF
  • 2014
  • Uran Oh and Leah Findlater.
  • Design of and Subjective Response to On-body Input for People with Visual Impairments.
  • In: Proceedings of the International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS'14).
  • For users with visual impairments, who do not necessarily need the visual display of a mobile device, non-visual on-body interaction (e.g., Imaginary Interfaces) could provide accessible input in a mobile context. Such interaction provides the potential advantages of an always-available input surface, and increased tactile and proprioceptive feedback compared to a smooth touchscreen. To investigate preferences for and design of accessible on-body interaction, we conducted a study with 12 visually impaired participants. Participants evaluated five locations for on-body input and compared on-phone to on-hand interaction with one versus two hands. Our findings show that the least preferred areas were the face/neck and the forearm, while locations on the hands were considered to be more discreet and natural. The findings also suggest that participants may prioritize social acceptability over ease of use and physical comfort when assessing the feasibility of input at different locations of the body. Finally, tradeoffs were seen in preferences for touchscreen versus on-body input, with on-body input considered useful for contexts where one hand is busy (e.g., holding a cane or dog leash). We provide implications for the design of accessible on-body input.

  • DOI
  •     PDF
  • 2014
  • Marcos Serrano, Barrett M. Ens, and Pourang P. Irani.
  • Exploring the Use of Hand-to-Face Input for Interacting with Head-worn Displays.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'14).
  • We propose the use of Hand-to-Face input, a method to interact with head-worn displays (HWDs) that involves contact with the face. We explore Hand-to-Face interaction to find suitable techniques for common mobile tasks. We evaluate this form of interaction with document navigation tasks and examine its social acceptability. In a first study, users identify the cheek and forehead as predominant areas for interaction and agree on gestures for tasks involving continuous input, such as document navigation. These results guide the design of several Hand-to-Face navigation techniques and reveal that gestures performed on the cheek are more efficient and less tiring than interactions directly on the HWD. Initial results on the social acceptability of Hand-to-Face input allow us to further refine our design choices, and reveal unforeseen results: some gestures are considered culturally inappropriate and gender plays a role in selection of specific Hand-to-Face interactions. From our overall results, we provide a set of guidelines for developing effective Hand-to-Face interaction techniques.

  • DOI
  •     PDF
  • 2014
  • Martin Weigel, Vikram Mehta, and Jürgen Steimle.
  • More Than Touch: Understanding How People Use Skin As an Input Surface for Mobile Computing.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'14).
  • This paper contributes results from an empirical study of on-skin input, an emerging technique for controlling mobile devices. Skin is fundamentally different from off-body touch surfaces, opening up a new and largely unexplored interaction space. We investigate characteristics of the various skin-specific input modalities, analyze what kinds of gestures are performed on skin, and study what are preferred input locations. Our main findings show that (1) users intuitively leverage the properties of skin for a wide range of more expressive commands than on conventional touch surfaces; (2) established multi-touch gestures can be transferred to on-skin input; (3) physically uncomfortable modalities are deliberately used for irreversible commands and expressing negative emotions; and (4) the forearm and the hand are the most preferred locations on the upper limb for on-skin input. We detail on users' mental models and contribute a first consolidated set of on-skin gestures. Our findings provide guidance for developers of future sensors as well as for designers of future applications of on-skin input.

  • DOI
  •     PDF
  •     Page

2013

  • 2013
  • Zhihan Lv, Alaa Halawani, Muhammad Sikandar Lal Khan, Shafiq Ur Réhman, and Haibo Li.
  • Finger in Air: Touch-less Interaction on Smartphone.
  • In: Proceedings of the International Conference on Mobile and Ubiquitous Multimedia (MUM'13).
  • In this paper we present a vision-based intuitive interaction method for smart mobile devices. It is based on markerless finger gesture detection, which attempts to provide a 'natural user interface'. No additional hardware is necessary for real-time finger gesture estimation. To evaluate the strengths and effectiveness of the proposed method, we design two smartphone applications: a circle menu application, which provides the user with graphics and the smartphone's status information, and a bouncing ball game, a finger-gesture-based bouncing ball application. The users interact with these applications using finger gestures through the smartphone's camera view, which trigger the interaction event and generate activity sequences for interactive buffers. Our preliminary user study evaluation demonstrates the effectiveness and social acceptability of the proposed interaction approach.

  • DOI
  •     PDF
  • 2013
  • Suranga Nanayakkara, Roy Shilkrot, Kian Peen Yeo, and Pattie Maes.
  • EyeRing: A Finger-worn Input Device for Seamless Interactions with Our Surroundings.
  • In: Proceedings of the Augmented Human International Conference (AH'13).
  • Finger-worn interfaces remain a vastly unexplored space for user interfaces, despite the fact that our fingers and hands are naturally used for referencing and interacting with the environment. In this paper we present design guidelines and implementation of a finger-worn I/O device, the EyeRing, which leverages the universal and natural gesture of pointing. We present use cases of EyeRing for both visually impaired and sighted people. We discuss initial reactions from visually impaired users which suggest that EyeRing may indeed offer a more seamless solution for dealing with their immediate surroundings than the solutions they currently use. We also report on a user study that demonstrates how EyeRing reduces effort and disruption to a sighted user. We conclude that this highly promising form factor offers both audiences enhanced, seamless interaction with information related to objects in the environment.

  • DOI
  •     PDF
  •     Page
  • 2013
  • Masa Ogata, Yuta Sugiura, Yasutoshi Makino, Masahiko Inami, and Michita Imai.
  • SenSkin: Adapting Skin As a Soft Interface.
  • In: Proceedings of the ACM Symposium on User Interface Software and Technology (UIST'13).
  • We present a sensing technology and input method that uses skin deformation estimated through a thin band-type device attached to the human body, the appearance of which seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that enables users to know which part of the skin they are touching in order to issue commands. The user, having found an acceptable area before beginning the input operation, can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands to sense three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that quantitative tangential force applied to the skin, such as that of the forearm or fingers, is measured. An infrared (IR) reflective sensor is used since its durability and inexpensiveness make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal force applied to the skin dimension to be sensed. The input command is learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.

  • DOI
  •     PDF
  •     Page
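The command-recognition step SenSkin describes can be illustrated with a toy classifier. The paper trains a Support Vector Machine on force readings from the two armbands; the sketch below substitutes a simpler nearest-centroid classifier so it needs no external libraries, and the sensor channels, commands, and readings are all invented for illustration:

```python
# Toy sketch of SenSkin-style command recognition (hypothetical data).
# The paper uses an SVM; a nearest-centroid classifier stands in here.
import math

def centroid(samples):
    """Mean feature vector of a list of sensor readings."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def train(labelled_samples):
    """labelled_samples: {command: [reading, ...]} -> {command: centroid}."""
    return {cmd: centroid(reads) for cmd, reads in labelled_samples.items()}

def classify(model, reading):
    """Return the command whose centroid is nearest to the reading."""
    return min(model, key=lambda cmd: math.dist(model[cmd], reading))

# Hypothetical 4-channel IR readings (two sensors per armband).
training = {
    "volume_up":   [[0.9, 0.1, 0.2, 0.1], [0.8, 0.2, 0.1, 0.1]],
    "volume_down": [[0.1, 0.9, 0.1, 0.2], [0.2, 0.8, 0.2, 0.1]],
}
model = train(training)
print(classify(model, [0.85, 0.15, 0.15, 0.1]))  # -> volume_up
```

An SVM would replace the centroid lookup with a learned decision boundary, but the train/classify pipeline over per-channel force features is the same shape.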
  • 2013
  • Halley Profita, James Clawson, Scott Gilliland, Clint Zeagler, Thad Starner, Jim Budd, and Ellen Yi-Luen Do.
  • Don't Mind me Touching my Wrist: a Case Study of Interacting with On-body Technology.
  • In: Proceedings of the International Symposium on Wearable Computers (ISWC'13).
  • Wearable technology, specifically e-textiles, offers the potential for interacting with electronic devices in a whole new manner. However, some may find the operation of a system that employs non-traditional on-body interactions uncomfortable to perform in a public setting, impacting how readily a new form of mobile technology may be received. Thus, it is important for interaction designers to take into consideration the implications of on-body gesture interactions when designing wearable interfaces. In this study, we explore the third-party perceptions of a user's interactions with a wearable e-textile interface. This two-prong evaluation examines the societal perceptions of a user interacting with the textile interface at different on-body locations, as well as the observer's attitudes toward on-body controller placement. We performed the study in the United States and South Korea to gain cultural insights into the perceptions of on-body technology usage.

  • DOI
  •     PDF
  • 2013
  • Mikko J. Rissanen, Owen Noel Newton Fernando, Horathalge Iroshan, Samantha Vu, Natalie Pang, and Schubert Foo.
  • Ubiquitous Shortcuts: Mnemonics by Just Taking Photos.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'13).
  • Ubiquitous Shortcuts is an image processing based method for making and using mnemonics set onto the real world using smartphones or other computing systems. The mnemonics can be created by taking photos of the user's vicinity and by binding them onto command sequences. The mnemonic is triggered every time a similar photo is taken. Our method uses natural feature matching algorithms and end-user programming approaches. The mnemonics can be concatenated into more complex command sequences. Thus, limited user input is realized by just taking photos with a camera embedded into a finger-ring, which enables rapid, subtle and socially acceptable user interaction. Our method can be used as a semi-automatic way of achieving location- and context-sensitive services, activity recognition or tangible interaction.

  • DOI
  •     PDF
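The mnemonic-triggering idea reduces to a similarity lookup. The paper matches natural image features (local descriptors); the toy sketch below stands in for that by reducing each photo to a set of feature tokens and firing the bound command sequence when set similarity to a stored mnemonic crosses a threshold. All feature sets, commands, and the threshold are invented:

```python
# Toy sketch of Ubiquitous Shortcuts-style mnemonic triggering.
# Real systems match local image descriptors (e.g. SIFT/ORB features);
# here each photo is reduced to a set of feature tokens for illustration.

def jaccard(a, b):
    """Set similarity in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

def trigger(mnemonics, photo_features, threshold=0.6):
    """mnemonics: [(feature_set, command_sequence)]. Return the command
    sequence of the best-matching mnemonic above threshold, else None."""
    best_cmd, best_sim = None, threshold
    for features, commands in mnemonics:
        sim = jaccard(features, photo_features)
        if sim >= best_sim:
            best_cmd, best_sim = commands, sim
    return best_cmd

mnemonics = [
    ({"f1", "f2", "f3", "f4"}, ["unlock_door"]),
    ({"f7", "f8", "f9"}, ["lights_on", "play_radio"]),
]
print(trigger(mnemonics, {"f7", "f8", "f9", "f10"}))  # -> ['lights_on', 'play_radio']
```

Concatenating mnemonics, as the abstract describes, would amount to appending the returned command sequences in the order the photos are taken.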
  • 2013
  • Julie R. Williamson, Stephen Brewster, and Rama Vennelakanti.
  • Mo! Games: Evaluating Mobile Gestures In the Wild.
  • In: Proceedings of the International Conference on Multimodal Interfaces (ICMI'13).
  • The user experience of performing gesture-based interactions in public spaces is highly dependent on context, where users must decide which gestures they will use and how they will perform them. In order to complete a realistic evaluation of how users make these decisions, the evaluation of such user experiences must be completed "in the wild." Furthermore, studies need to be completed within different cultural contexts in order to understand how users might adopt gesture differently in different cultures. This paper presents such a study using a mobile gesture-based game, where users in the UK and India interacted with this game over the span of 6 days. The results of this study demonstrate similarities between gesture use in these divergent cultural settings, illustrate factors that influence gesture acceptance such as perceived size of movement and perceived accuracy, and provide insights into the interaction design of mobile gestures when gestures are distributed across the body.

  • DOI
  •     PDF

2012

  • 2012
  • Gilles Bailly, Jörg Müller, Michael Rohs, Daniel Wigdor, Sven Kratz, Lisa G. Cowan, Nadir Weibel, William G. Griswold, Laura R. Pina, and James D. Hollan.
  • ShoeSense: A New Perspective on Gestural Interaction and Wearable Applications.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'12).
  • When the user is engaged with a real-world task it can be inappropriate or difficult to use a smartphone. To address this concern, we developed ShoeSense, a wearable system consisting in part of a shoe-mounted depth sensor pointing upward at the wearer. ShoeSense recognizes relaxed and discreet as well as large and demonstrative hand gestures. In particular, we designed three gesture sets (Triangle, Radial, and Finger-Count) for this setup, which can be performed without visual attention. The advantages of ShoeSense are illustrated in five scenarios: (1) quickly performing frequent operations without reaching for the phone, (2) discreetly performing operations without disturbing others, (3) enhancing operations on mobile devices, (4) supporting accessibility, and (5) artistic performances. We present a proof-of-concept, wearable implementation based on a depth camera and report on a lab study comparing social acceptability, physical and mental demand, and user preference. A second study demonstrates a 94-99% recognition rate of our recognizers.

  • DOI
  •     PDF
  • 2012
  • Masa Ogata, Yuta Sugiura, Hirotaka Osawa, and Michita Imai.
  • iRing: Intelligent Ring Using Infrared Reflection.
  • In: Proceedings of the ACM Symposium on User Interface Software and Technology (UIST'12).
  • We present the iRing, an intelligent input ring device developed for measuring finger gestures and external input. iRing recognizes rotation, finger bending, and external force via an infrared (IR) reflection sensor that leverages skin characteristics such as reflectance and softness. Furthermore, iRing allows using a push and stroke input method, which is popular in touch displays. The ring design has potential to be used as a wearable controller because its accessory shape is socially acceptable, easy to install, and safe, and iRing does not require extra devices. We present examples of iRing applications and discuss its validity as an inexpensive wearable interface and as a human sensing device.

  • DOI
  •     PDF
  •     Page

2011

  • 2011
  • Thorsten Karrer, Moritz Wittenhagen, Leonhard Lichtschlag, Florian Heller, and Jan Borchers.
  • Pinstripe: Eyes-free Continuous Input on Interactive Clothing.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'11).
  • We present Pinstripe, a textile user interface element for eyes-free, continuous value input on smart garments that uses pinching and rolling a piece of cloth between your fingers. The input granularity can be controlled in a natural way by varying the amount of cloth pinched. Pinstripe input elements physically consist of fields of parallel conductive lines sewn onto the fabric. This way, they can be invisible, and can be included across large areas of a garment. Pinstripe also addresses several problems previously identified in the placement and operation of textile UI elements on smart clothing. Two user studies evaluate ideal placement and orientation of Pinstripe elements on the users' garments as well as acceptance and perceived ease of use of this novel textile input technique.

  • DOI
  •     PDF
  •     Page
  • 2011
  • Roisin McNaney, Stephen Lindsay, Karim Ladha, Cassim Ladha, Guy Schofield, Thomas Ploetz, Nils Hammerla, Daniel Jackson, Richard Walker, Nick Miller, and Patrick Olivier.
  • Cueing for Drooling in Parkinson's Disease.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'11).
  • We present the development of a socially acceptable cueing device for drooling in Parkinson's disease (PD). Sialorrhea, or drooling, is a significant problem associated with PD and has a strong negative emotional impact on those who experience it. Previous studies have shown the potential for managing drooling by using a cueing device. However, the devices used in these studies were deemed unacceptable by their users due to factors such as hearing impairment and social embarrassment. We conducted exploratory scoping work and high fidelity iterative prototyping with people with PD to get their input on the design of a cueing aid and this has given us an insight into challenges that confront users with PD and limit device usability and acceptability. The key finding from working with people with PD was the need for the device to be socially acceptable.

  • DOI
  •     PDF
  • 2011
  • Kristen Shinohara and Jacob O. Wobbrock.
  • In the Shadow of Misperception: Assistive Technology Use and Social Interactions.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'11).
  • Few research studies focus on how the use of assistive technologies is affected by social interaction among people. We present an interview study of 20 individuals to determine how assistive technology use is affected by social and professional contexts and interactions. We found that specific assistive devices sometimes marked their users as having disabilities; that functional access took priority over feeling self-conscious when using assistive technologies; and that two misperceptions pervaded assistive technology use: (1) that assistive devices could functionally eliminate a disability, and (2) that people with disabilities would be helpless without their devices. Our findings provide further evidence that accessibility should be built into mainstream technologies. When this is not feasible, assistive devices should incorporate cutting edge technologies and strive to be designed for social acceptability, a new design approach we propose here.

  • DOI
  •     PDF
  • 2011
  • Elena Vildjiounaite, Julia Kantorovitch, Vesa Kyllönen, Ilkka Niskanen, Mika Hillukkala, Kimmo Virtanen, Olli Vuorinen, Satu-Marja Mäkelä, Tommi Keränen, Johannes Peltola, Jani Mäntyjärvi, and Andrew Tokmakoff.
  • Designing Socially Acceptable Multimodal Interaction in Cooking Assistants.
  • In: Proceedings of the International Conference on Intelligent User Interfaces (IUI'11).
  • A cooking assistant is an application that needs to find a trade-off between providing efficient help to the users (e.g., reminding them to stir a meal if it is about to burn) and avoiding users' annoyance. This trade-off may vary in different contexts, such as cooking alone or in a group, cooking a new or a known recipe, etc. The results of the user study presented in this paper show which features of a multimodal interface users perceive as socially acceptable or unacceptable in different situations, and how this perception depends on the user's age.

  • DOI
  • 2011
  • Julie R. Williamson, Andrew Crossan, and Stephen Brewster.
  • Multimodal Mobile Interactions: Usability Studies in Real World Settings.
  • In: Proceedings of the International Conference on Multimodal Interfaces (ICMI'11).
  • This paper presents a study that explores the issues of mobile multimodal interactions while on the move in the real world. Because multimodal interfaces allow new kinds of eyes-free and hands-free interactions, usability while moving through different public spaces becomes an important issue in user experience and acceptance of multimodal interaction. This study focuses on these issues by deploying an RSS reader that participants used during their daily commute every day for one week. The system allows users on the move to access news feeds eyes-free through headphones playing audio and speech, and hands-free through wearable sensors attached to the wrists. The results showed participants were able to interact with the system on the move and became more comfortable performing these interactions as the study progressed. Users were also far more comfortable gesturing on the street than on public transport, which was reflected in the number of interactions and the perceived social acceptability of the gestures in different contexts.

  • DOI
  •     PDF
  • 2011
  • Julie Rico Williamson.
  • Send Me Bubbles: Multimodal Performance and Social Acceptability.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'11).
  • The use of performance as the focus of interaction provides the opportunity for exploratory and individual experiences but can also put users in an uncomfortable position. This paper presents an initial user study of a mobile remote awareness application in which users can control their own fish in a virtual fish tank using multimodal input from an external sensing device, where the input styles are created and performed by participants in an open ended sensing model. The study was designed in order to better understand the issues of performance when audience members are both casual passersby and familiar others watching remotely. Additionally, this study investigated the creation of performances and the effects of props when used in different social settings. The study involved pairs of participants interacting with the system in both public and private locations over repeated sessions. The results of this study show how users created and interpreted performances as well as how their consideration of passersby influenced their experiences.

  • DOI
  •     PDF
  • 2011
  • Tetsuya Yamamoto, Tsutomu Terada, and Masahiko Tsukamoto.
  • Designing Gestures for Hands and Feet in Daily Life.
  • In: Proceedings of the International Conference on Advances in Mobile Computing and Multimedia (MoMM'11).
  • In wearable computing environments, people handle various kinds of information anytime and anywhere with a worn computer. In such situations, gestures are a powerful input method because they require no physical device to touch and allow the user to input quickly. However, there are various restrictions on gesture input in daily life: gestures must be socially acceptable, because the user has to perform unusual movements in a crowd, and gestures must be flexible, because the user cannot gesture when the hand used for gesturing is holding a bag. In this paper, we clarify the restrictions on gesture interfaces in daily life, then propose practical gestures for selecting simple menu items with hands and feet.

  • DOI
  •     PDF
  • 2011
  • Kamer Ali Yuksel, Sinan Buyukbas, and Serdar Hasan Adali.
  • Designing Mobile Phones Using Silent Speech Input and Auditory Feedback.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'11).
  • In this work, we propose a novel design for a basic mobile phone, focused on the essence of mobile communication and connectivity, based on a silent speech interface and auditory feedback. This assistive interface takes the advantages of voice control systems while discarding disadvantages such as background noise, privacy concerns and social acceptance. The proposed device utilizes low-cost and commercially available hardware components. Thus, it would be affordable and accessible to the majority of users, including disabled, elderly and illiterate people.

  • DOI
  •     PDF

2010

  • 2010
  • Calkin S. Montero, Jason Alexander, Mark T. Marshall, and Sriram Subramanian.
  • Would you do that?: Understanding Social Acceptance of Gestural Interfaces.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'10).
  • With gesture-based interactions in mobile settings becoming more popular, there is a growing concern regarding the social acceptance of these interaction techniques. In this paper we begin by examining the various definitions of social acceptance that have been proposed in the literature to synthesize a definition that is based on how the user feels about performing a particular interaction as well as how the bystanders perceive the user during this interaction. We then present the main factors that influence gestures' social acceptance, including culture, time, interaction type and the user's position on the innovation adoption curve. Through a user study we show that an important factor in determining social acceptance of gesture-based interaction techniques is the user's perception of others' ability to interpret the potential effect of a manipulation.

  • DOI
  •     PDF
  • 2010
  • Julie Rico and Stephen Brewster.
  • Gesture and Voice Prototyping for Early Evaluations of Social Acceptability in Multimodal Interfaces.
  • In: Proceedings of the International Conference on Multimodal Interfaces and the Workshop on Machine Learning for Multimodal Interaction (ICMI-MLMI'10).
  • Interaction techniques that require users to adopt new behaviors mean that designers must take into account social acceptability and user experience; otherwise, the techniques may be rejected by users as too embarrassing to do in public. This research uses a set of low cost prototypes to study social acceptability and user perceptions of multimodal mobile interaction techniques early on in the design process. We describe 4 prototypes that were used with 8 focus groups to evaluate user perceptions of novel multimodal interactions using gesture, speech and nonspeech sounds, and gain feedback about the usefulness of the prototypes for studying social acceptability. The results of this research describe user perceptions of social acceptability and the realities of using multimodal interaction techniques in daily life. The results also describe key differences between young users (18-29) and older users (70-95) with respect to evaluation and approach to understanding these interaction techniques.

  • DOI
  •     PDF
  • 2010
  • Julie Rico and Stephen Brewster.
  • Usable Gestures for Mobile Interfaces: Evaluating Social Acceptability.
  • In: Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI'10).
  • Gesture-based mobile interfaces require users to change the way they use technology in public settings. Since mobile phones are part of our public appearance, designers must integrate gestures that users perceive as acceptable for public use. This topic has received little attention in the literature so far. The studies described in this paper begin to look at the social acceptability of a set of gestures with respect to location and audience in order to investigate possible ways of measuring social acceptability. The results of the initial survey showed that location and audience had a significant impact on a user's willingness to perform gestures. These results were further examined through a user study where participants were asked to perform gestures in different settings (including a busy street) over repeated trials. The results of this work provide gesture design recommendations as well as social acceptability evaluation guidelines.

  • DOI
  •     PDF

2009

  • 2009
  • Julie Rico and Stephen Brewster.
  • Gestures all Around Us: User Differences in Social Acceptability Perceptions of Gesture Based Interfaces.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'09).
  • Gesture based interfaces provide a new way for us to interact with mobile devices, but also require us to make new decisions about how we feel about this new technology and which gestures we decide are usable and appropriate. These decisions are based on the social and public settings where these devices are used on a daily basis. Our ideas about which gestures are socially acceptable or not are an important factor in whether or not these gestures will be adopted. The ways in which users evaluate social acceptability are not only highly variable, but also produce drastically different results amongst different users. These differences are not dependent on factors such as age, gender, occupation, geographic location, or previous technology usage. Future work into the social acceptability perceptions of users will focus on personality traits as a new way of understanding how social acceptability is determined.

  • DOI
  •     PDF

2008

  • 2008
  • Paul Holleis, Albrecht Schmidt, Susanna Paasovaara, Arto Puikkonen, and Jonna Häkkilä.
  • Evaluating Capacitive Touch Input on Clothes.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'08).
  • Wearable computing and smart clothing have attracted a lot of attention in the last years. For a variety of applications, they can be seen as a potential future direction of mobile user interfaces. In this paper, we concentrate on usability and applicability issues concerned with capacitive touch input on clothing. To be able to perform user studies, we built a generic platform for attaching, e.g., capacitive sensors of different types. On top of that, we built several prototypes of wearable accessories and clothing and implemented various application scenarios. We report on two studies we undertook with these implementations with a user group randomly sampled at a shopping mall. We provide a significant set of guidelines and lessons learned that emerged from our experiences and those studies. Thus, developers of similar projects have to put major efforts into minimizing the delay between button activation and feedback and to make location and identification of controls and their function as simple and quick as possible. Issues that have to be treated in all designs include the requirement of one-handed interaction and that, even for minimal functionality, a general solution with regard to layout and button-to-function mapping is hardly possible. Additionally, in order to generate a satisfactory user experience, good usability must be combined with aesthetic factors.

  • DOI
  •     PDF

2007

  • 2007
  • Scott W. Campbell.
  • Perceptions of Mobile Phone Use in Public Settings: A Cross-cultural Comparison.
  • In: INT J COMMUN-US, 1(1), 20.
  • This study entailed a cross-cultural comparison of perceptions of mobile phone use in select public settings, including a movie theater, restaurant, bus, grocery store, classroom, and sidewalk. A sample of participants from the U.S. Mainland, Hawaii, Japan, Taiwan, and Sweden was surveyed for social acceptability assessments of talking on a mobile phone in each of these locations. As hypothesized, settings involving collective attention were considered least acceptable for talking on a mobile phone. Results also revealed numerous cultural similarities and differences. Taiwanese participants tended to report more tolerance for mobile phone use in a theater, restaurant, and classroom than did participants from the other cultural groupings. Japanese participants also tended to be more tolerant of mobile phone use in a classroom, but less tolerant of use on a sidewalk and on a bus than were the other participants. The discussion offers theoretical implications of the findings.

  •     PDF
  • 2007
  • Sami Ronkainen, Jonna Häkkilä, Saana Kaleva, Ashley Colley, and Jukka Linjama.
  • Tap Input as an Embedded Interaction Method for Mobile Devices.
  • In: Proceedings of the International Conference on Tangible and Embedded Interaction (TEI'07).
  • In this paper we describe a novel interaction method for interacting with mobile devices without the need to access a keypad or a display. A tap with a hand can be reliably detected e.g. through a pocket by means of an acceleration sensor. By carefully designing the user interface, the tap can be used to activate logically similar functionalities on the device, leading to a simple but useful interaction method. We present results of user tests aimed at studying the usability of various tap input based user interface applications.

  • DOI
  •     PDF
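The tap-input idea above reduces to peak picking on an acceleration signal. The abstract does not specify the detector, so the sketch below is an assumed minimal version: flag a tap when the acceleration magnitude crosses a threshold, and use a short refractory window so one physical tap is not counted several times (threshold, window length, and the signal itself are invented):

```python
# Toy sketch of accelerometer-based tap detection (assumed detector;
# the paper's actual algorithm is not described in the abstract).
import math

def detect_taps(samples, threshold=2.0, refractory=5):
    """samples: list of (ax, ay, az) tuples at a fixed sample rate.
    Returns the sample indices where a tap was detected."""
    taps, cooldown = [], 0
    for i, (ax, ay, az) in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1          # suppress re-triggering on one spike
            continue
        if math.sqrt(ax * ax + ay * ay + az * az) > threshold:
            taps.append(i)
            cooldown = refractory
    return taps

# Mostly-resting signal (~1 g) with two short spikes.
signal = [(0, 0, 1.0)] * 10 + [(0.5, 0.2, 3.0)] + [(0, 0, 1.0)] * 10 \
       + [(2.5, 0.1, 1.2)] + [(0, 0, 1.0)] * 5
print(detect_taps(signal))  # -> [10, 21]
```

Detecting taps through a pocket, as the paper does, mainly raises the filtering burden; the thresholding skeleton stays the same.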

2006

  • 2006
  • Enrico Costanza, Samuel A. Inverso, Elan Pavlov, Rebecca Allen, and Pattie Maes.
  • Eye-q: Eyeglass Peripheral Display for Subtle Intimate Notifications.
  • In: Proceedings of the ACM International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI'06).
  • Mobile devices are generally used in public, where the user is surrounded by others not involved in the interaction. Audible notification cues are often a cause of unnecessary disruption and distraction both for co-located people and even for the user to whom they are directed. We present a wearable peripheral display embedded in eyeglasses that delivers subtle, discreet and unobtrusive cues. The display is personal and intimate; it delivers visual cues in the wearers' periphery without disrupting their immediate environment. A user study conducted to validate the design reveals that the display is effective and subtle in notifying users. Experimental results show, with significance, that the cues can be designed to meet specific levels of visibility and disruption for the wearer, so that some cues are less noticeable when the user is not under high workload, which is highly desirable in many practical circumstances. Hence, peripheral notification displays can provide an effective solution for designing socially acceptable notification displays, unobtrusive to the user and the immediate environment.

  • DOI
  •     PDF
  • 2006
  • Seoktae Kim, Minjung Sohn, Jinhee Pak, and Woohun Lee.
  • One-key Keyboard: A Very Small QWERTY Keyboard Supporting Text Entry for Wearable Computing.
  • In: Proceedings of the Australasian Computer-Human Interaction Conference (OzCHI'06).
  • Most of the commercialized wearable text input devices are wrist-worn keyboards that have adopted the minimization method of reducing keys. Generally, a drastic key reduction in order to achieve sufficient wearability increases KSPC (Keystrokes per Character), decreases text entry performance, and requires additional effort to learn a new typing method. We are faced with wearability-usability tradeoff problems in designing a good wearable keyboard. To address this problem, we adopted a new keyboard minimization method of reducing key pitch and have developed the One-key Keyboard. The traditional desktop keyboard has one key per character, but the One-key Keyboard has only one key (70 mm × 35 mm) on which a 10×5 QWERTY key array is printed. The One-key Keyboard detects the position of the fingertip at the time of the keying event and figures out the character entered. We conducted a text entry performance test comprising 5 sessions. The participants typed 18.9 WPM with a 6.7% error rate over all sessions and achieved up to 24.5 WPM. From the experiment's results, the One-key Keyboard was evaluated as a potential text input device for wearable computing, balancing wearability, social acceptance, input speed, and learnability.

  • DOI
  •     PDF

2004

  • 2004
  • Andrew Monk, Jenni Carroll, Sarah Parker, and Mark Blythe.
  • Why Are Mobile Phones Annoying?
  • In: BEHAV INFORM TECHNOL, 23(1), 33--41.
  • Sixty-four members of the public were exposed to the same staged conversation either while waiting in a bus station or travelling on a train. Half of the conversations were by mobile phone, so that only one end of the conversation was heard, and half were co-present face-to-face conversations. The volume of the conversations was controlled at one of two levels: the actors' usual speech level and exaggeratedly loud. Following exposure to the conversation, participants were approached and asked to give verbal ratings on six scales. Analysis of variance showed that mobile phone conversations were significantly more noticeable and annoying than face-to-face conversations at the same volume when the content of the conversation is controlled. Indeed, this effect of medium was as large as the effect of loudness. Various explanations of this effect are explored, with their practical implications.

  • DOI

2001

  • 2001
  • Jun Rekimoto.
  • GestureWrist and GesturePad: Unobtrusive Wearable Interaction Devices.
  • In: Proceedings of the International Symposium on Wearable Computers (ISWC'01).
  • In this paper we introduce two input devices for wearable computers, called GestureWrist and GesturePad. Both devices allow users to interact with wearable or nearby computers by using gesture-based commands. Both are designed to be as unobtrusive as possible, so they can be used under various social contexts. The first device, called GestureWrist, is a wristband-type input device that recognizes hand gestures and forearm movements. Unlike DataGloves or other hand gesture-input devices, all sensing elements are embedded in a normal wristband. The second device, called GesturePad, is a sensing module that can be attached on the inside of clothes, and users can interact with this module from the outside. It transforms conventional clothes into an interactive device without changing their appearance.

  • DOI
  •     PDF