Evaluation of User Gestures in Multi-touch Interaction: a Case Study in Pair-programming.

Abstract: Natural User Interfaces are often described as familiar, evocative, intuitive, and predictable, being based on common skills. Though unquestionable in principle, such definitions do not provide the designer with effective means to design a natural interface or to evaluate one design choice versus another. Two main issues in particular remain open: (i) how do we evaluate a natural interface; is there a way to measure ‘naturalness’? (ii) do natural user interfaces provide a concrete advantage in terms of efficiency with respect to more traditional interface paradigms? In this paper we discuss and compare observations of user behavior in the task of pair programming, performed at a traditional desktop versus at a multi-touch table. We show how the adoption of a multi-touch user interface fosters a significant, observable, and measurable increase in nonverbal communication in general, and in gestures in particular, which in turn appears related to the users’ overall performance in the task of algorithm understanding and debugging.

Authors: A. Soro, S. A. Iacolina, R. Scateni, S. Uras.
Evaluation of User Gestures in Multi-touch Interaction: a Case Study in Pair-programming.
ICMI 2011, 161-168.
Alicante, Spain, November 2011.

3D-ize U! A Real-time 3D Head-model Texture Generator for Android.

Abstract: Recently, the number of applications developed for smartphones has increased dramatically; at the moment, however, applications whose purpose is to create and display 3D models are quite rare. The goal of this work is to build an application that allows users to see virtual three-dimensional representations of their friends and interact with them. The main challenge is to achieve results similar to those a desktop computer would produce, while optimizing the process to cope with the constraints of the mobile technology used. Since there are no similar mobile applications, this work lays a base on which applications sharing customized 3D models as a common feature can be built.

Authors: S. Boi, F. Sorrentino, S. Marras, R. Scateni.
3D-ize U! A Real-time 3D Head-model Texture Generator for Android.
EuroGraphics Italian Chapter 2011, 41-46.
Salerno, Italy, November 2011.

Gestural Interaction for Robot Motion Control

Abstract: Recent advances in gesture recognition have made controlling a humanoid robot in the most natural way possible an interesting challenge. The Learning from Demonstration field benefits greatly from this kind of interaction, since users with no robotics knowledge can teach new tasks to robots more easily than ever before. In this work we present a cheap and easy-to-implement humanoid robot, along with a visual interaction interface that allows users to control it. The visual system is based on the Microsoft Kinect’s RGB-D camera. Users interact with the robot simply by standing in front of the depth camera and mimicking the particular task they want the robot to perform. Our framework is cheap, easy to reproduce, and does not strictly depend on the particular underlying sensor or gesture recognition system.
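The mimicry approach described in the abstract maps the user's tracked skeleton onto the robot's joints. As a rough illustrative sketch (not the paper's actual code; the joint names and coordinates below are hypothetical), the elbow angle to send to a robot servo could be derived from three tracked joint positions:

```python
import math

def joint_angle(a, b, c):
    """Angle at joint b (in degrees) between segments b->a and b->c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Hypothetical skeleton frame: shoulder, elbow, wrist positions in metres.
shoulder = (0.0, 0.0, 0.0)
elbow = (0.0, -0.3, 0.0)
wrist = (0.3, -0.3, 0.0)
print(joint_angle(shoulder, elbow, wrist))  # 90.0: forearm bent at a right angle
```

Repeating this per joint, per frame, yields a stream of angles that a controller could replay on the robot, independently of which sensor produced the skeleton.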

Authors: G. Broccia, M. Livesu, R. Scateni.
Gestural Interaction for Robot Motion Control.
EuroGraphics Italian Chapter 2011, 61-66.
Salerno, Italy, November 2011.

MORAVIA: A Video-Annotation System Supporting Gesture Recognition

Abstract: Gestures and gesticulation play an important role in communication, particularly in public speech. We describe here the design, development, and initial evaluation of MORAVIA (MOtion Recognition And VIdeo Annotation): a collaborative web application for (semi)automatic gesture annotation. MORAVIA was conceived as a support for the automatic evaluation of a speech based on its non-verbal components, that is, as independently as possible of the verbal content. We adopt an evaluation model based on quality metrics related to gestures, provided by experts in the education and psychology domains. The final goal is to design and implement a system able to detect gestures using a video camera and a depth camera, such as the Microsoft Kinect, to track the position and movements of the speaker. The web application for video annotation then allows collaborative review and analysis of the different video sequences. This is useful both to domain experts, as a research tool, and to end users, for self-evaluation.

Authors: M. Careddu, L. Carrus, A. Soro, S. A. Iacolina, R. Scateni.
MORAVIA: A Video-Annotation System Supporting Gesture Recognition.
ACM SIGCHI Italian Chapter (CHItaly 2011) Adjunct Proceedings.
Alghero, Italy, September 2011.

Walk, Look and Smell Through

Abstract: Human-computer interaction is typically constrained to the use of sight, hearing, and touch. This paper describes an attempt to overcome these limitations. We introduce smell into the interaction, with the aim of obtaining information from scents, i.e., giving meaning to odours, and of understanding how people would appreciate such extensions. We discuss the design and implementation of our prototype system, which is able to represent and manage an immersive environment where the user interacts by means of visual, auditory, and olfactory information. We have implemented an odour emitter controlled by a presence-sensor device. When the system perceives the presence of a user, it activates audio/visual contents to encourage engaging in interaction. Then a specific scent is diffused into the air to augment the perceived reality of the experience. We discuss technical difficulties and initial empirical observations.

Authors: V. Cozza, G. Fenu, R. Scateni, A. Soro.
Walk, Look and Smell Through.
ACM SIGCHI Italian Chapter (CHItaly 2011) Adjunct Proceedings (poster).
Alghero, Italy, September 2011.