Papers

Position Papers to be presented at the Workshop:

  • The Presentation of Synthesized Artifacts in Example-driven Interfaces

    • Azza Abouzied (New York University)
    • Example-driven interfaces (EDIs) for data tasks represent a class of mixed-initiative user interfaces that enable users to specify tasks such as data extraction, cleaning, transformation, querying, or analysis in the casual and familiar language of examples. From user-provided examples of intended task behavior, EDIs synthesize programs, scripts, or rules that can perform the task in a fashion consistent with the examples. In this position paper, we examine the design of the interface component that presents the synthesized artifacts to users.
  • On User Awareness in Model-based Collaborative Filtering Systems

    • Benedikt Loepp (University of Duisburg-Essen), Jürgen Ziegler (University of Duisburg-Essen)
    • In this paper, we discuss several aspects that users are typically not fully aware of when using model-based Collaborative Filtering systems. For instance, the methods prevalently used in conventional recommenders infer abstract models that are opaque to users, making it difficult to understand the learned profile and, consequently, why certain items are recommended. Further, users are not able to keep an overview of the item space, and thus of the alternatives that in principle could also be suggested. By summarizing our experiences in exploiting latent factor models to increase control and transparency, we show that the respective techniques may also contribute to making users more aware of the representation of their preferences, the rationale behind the results, and further items of potential interest.
  • Interface Design for Interactive Sound Event Detection

    • Bongjun Kim (Northwestern University), Bryan Pardo (Northwestern University)
    • To label sound events (e.g. “dog bark”) in an hours-long audio file quickly and accurately, an interactive human-in-the-loop paradigm can be applied to sound event detection. While interactive learning has been used in many other labeling tasks (e.g. image or text), sound event detection requires a different interface approach. In this article, we discuss principles of effective interaction design for interactive sound event detection and labeling, using examples from our work in this area.
  • Should Context-aware IDE Command Recommendations Always Be Presented In-context or Not

    • Marko Gasparic (Free University of Bozen-Bolzano), Francesco Ricci (Free University of Bozen-Bolzano)
    • Aiming at improving software developers’ knowledge of their chosen IDE, recommender system technologies have recently been introduced. In this short paper, we argue that IDE command recommendations must be context-aware in order to be accepted by users, but that they do not necessarily have to be presented in the context in which they can be executed. Instead, we suggest that the system allow users to browse recommendations when they find it more convenient.
  • The What, When and How of Awareness for Effective Learning in Surgical Simulation

    • Myat Su Yin (Mahidol University), Peter Haddawy (Mahidol University)
    • Fine motor skill is indispensable for dental surgeons, as every movement needs to be precise given the minute margins of error in endodontic surgery. Training in manual dexterity and proper instrument handling is a crucial component of the dental curriculum. During training, students need to pay close attention to the actions they undertake and to be aware of incorrect or inappropriate manipulation of instruments and its consequences. To create awareness of errors, we propose a feedback mechanism in a VR dental surgical simulator. To achieve effective learning, we focus on three aspects of feedback: what (feedback content on error), when (feedback timing), and how (feedback modality).
  • User Awareness in Expressing Emotional Intensity using Touchscreen Gestures

    • Nabil Bin Hannan (Dalhousie University), Derek Reilly (Dalhousie University)
    • Understanding the varying intensities of human emotions is crucial to maintaining a proper workflow in day-to-day activities. Numerous factors need to be considered to understand what makes people aware of their state of mind and how they feel. We conducted a participatory design study with recreational runners, followed by a controlled study, to explore how aware people are of the way they use touchscreens to express emotional intensities. We discuss user awareness of emotional expression through gestures during common activities.
  • Enabling Awareness in Playful Environments for Animals using Body Tracking

    • Patricia Pons (Universitat Politècnica de València), Javier Jaen (Universitat Politècnica de València)
    • This paper explores the diverse possibilities for awareness that could be implemented in the development of intelligent playful environments for animals. These opportunities are described from the perspective of both the animal and the human playing, based on the interaction with the system through embodied interactions using non-wearable tracking.
  • When Less is More: Semantic Awareness in Web Screen Reading

    • Vikas Ashok (Stony Brook University), Yevgen Borodin (Charmtech Labs LLC), I V Ramakrishnan (Stony Brook University)
    • Semantic awareness is the key to making the Web usable by blind people who use screen readers, an assistive technology, to read aloud digital content serially. A semantics-aware web screen reader (SRAA) elevates the interaction to a higher level of abstraction from operating on (syntactic) HTML elements, as is done now with a conventional screen reader, to operating on web entities (which are semantically meaningful collections of related HTML elements, e.g. search results, menus, widgets, etc.). With SRAA, users can give commands such as “Read the article”, “Log in”, “Next result”, as well as interact with common widgets such as the Date-Picker by saying “Next month”, “Which day is March 13th”, etc. Doing so brings blind users closer to how sighted people perceive and operate over web entities.
  • Enhancing Social Inclusion between Drivers by Digital Augmentation

    • Chao Wang (Eindhoven University of Technology), Jacques Terken (Eindhoven University of Technology), Jun Hu (Eindhoven University of Technology)
    • The physical structure of vehicles induces a tendency to depersonalize other drivers, which may lead to aggressive driving and social isolation. With ubiquitous connectivity and the broad penetration of social network services, the relationship between drivers on the road may gain more transparency, enabling social information to pass through the steel shell of the cars and giving opportunities to reduce anonymity and strengthen empathy. In this study, we introduced two social concepts on the road that utilize the latest Vehicle-to-Vehicle communication technology. Furthermore, two corresponding prototypes were developed on a driving simulator for further exploration, to gain insights into social awareness between drivers through digital augmentation.
  • Getting Aware of the Potential Success of the Classifier through Visualizations

    • Yunjia Sun (University of Waterloo), Edward Lank (University of Waterloo), Michael Terry (University of Waterloo)
    • Building a successful machine learning classifier requires a large amount of work. It is important to be aware of the classifier’s probability of success at an early stage, to make sure that the cost is well spent. In this position paper, we describe our experience in building Label-and-Learn, a visualization interface that helps users establish awareness of the characteristics of the machine learning problem as early as the data labeling process, so that they can effectively adjust their strategies. We also discuss the successes and failures of our design based on user study results.