3D-ize U! A Real-time 3D Head-model Texture Generator for Android.

Abstract: Recently, the number of applications developed for smartphones has increased dramatically; at the moment, however, applications whose purpose is to create and display 3D models are quite rare. The goal of this work is to build an application that allows users to see virtual three-dimensional representations of their friends and interact with them. The main challenge is to achieve results comparable to those a computer would produce, while optimizing the process to cope with the constraints of the mobile platform. Since there are no similar mobile applications, this work provides a base on which applications sharing customized 3D models as a common feature can be built.

Authors: S. Boi, F. Sorrentino, S. Marras, R. Scateni.
3D-ize U! A Real-time 3D Head-model Texture Generator for Android.
EuroGraphics Italian Chapter 2011, 41-46.
Salerno, Italy, November 2011.

MORAVIA: A Video-Annotation System Supporting Gesture Recognition

Abstract: Gestures and gesticulation play an important role in communication, particularly in public speech. We describe here the design, development and initial evaluation of MORAVIA (MOtion Recognition And VIdeo Annotation): a collaborative web application for (semi)automatic gesture annotation. MORAVIA was conceived to support the automatic evaluation of a speech based on its non-verbal components, that is, as independently as possible from the verbal content. We adopt an evaluation model based on gesture-related quality metrics provided by experts in the education and psychology domains. The final goal is to design and implement a system able to detect gestures, using a video camera and a depth camera, such as the Microsoft Kinect, to track the position and movements of the speaker. The web application for video annotation then allows collaborative review and analysis of the different video sequences, which is useful both to domain experts, as a research tool, and to end users, for self-evaluation.

Authors: M. Careddu, L. Carrus, A. Soro, S. A. Iacolina, R. Scateni.
MORAVIA: A Video-Annotation System Supporting Gesture Recognition.
ACM SIGCHI Italian Chapter (CHItaly 2011) Adjunct Proceedings.
Alghero, Italy, September 2011.

Walk, Look and Smell Through

Abstract: Human-computer interaction is typically constrained to the use of sight, hearing, and touch. This paper describes an attempt to overcome these limitations. We introduce smell into the interaction with the aim of conveying information through scents, i.e., giving meaning to odours, and of understanding how much people would appreciate such an extension. We discuss the design and implementation of our prototype system. The system is able to represent and manage an immersive environment, where the user interacts by means of visual, auditory and olfactory information. We have implemented an odour emitter controlled by a presence-sensor device. When the system perceives the presence of a user, it activates audio/visual content to encourage engagement in the interaction. A specific scent is then diffused into the air to augment the perceived reality of the experience. We discuss technical difficulties and initial empirical observations.

Authors: V. Cozza, G. Fenu, R. Scateni, A. Soro.
Walk, Look and Smell Through.
ACM SIGCHI Italian Chapter (CHItaly 2011) Adjunct Proceedings (poster).
Alghero, Italy, September 2011.
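
The abstract above describes a simple control flow (presence detection, then audio/visual content, then scent diffusion) without giving its details. The following is a minimal sketch of one plausible ordering consistent with that description; PresenceSensor-style objects, the scent name and the timings are assumptions, not the authors' API or values.

    # Minimal sketch, not the authors' implementation: a presence-triggered
    # audio/visual + olfactory loop. The sensor, player and emitter objects are
    # hypothetical wrappers around the presence sensor, the A/V output and the
    # odour emitter; the scent name and delays are illustrative only.
    import time

    def run_station(sensor, player, emitter, scent_delay=2.0):
        while True:
            if sensor.presence_detected():
                player.play("intro_scene")               # engage the visitor first
                time.sleep(scent_delay)                  # then reinforce with a scent
                emitter.diffuse("lavender", duration=5.0)
                player.wait_until_done()
            time.sleep(0.1)                              # poll the sensor at ~10 Hz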

Multi-touch and Tangible Interface: Two Different Interaction Modes in the Same System

Abstract: We present here a system built around the idea of letting several users interact at the same time with cheap and solid hardware, using natural gestures supported by an intuitive user interface. A projector and a camera are placed underneath a Plexiglas sheet, framed with an array of infrared LEDs, all set into a wooden table box. This allows multiple users (up to four or five) to move freely around the box and manipulate the objects rear-projected on the screen, through a tangible interface designed to offer a few simple operations: geometric transforms (rotation, translation and scaling), drawing, erasing and color selection. All of these are performed either with a custom-built IR LED pen or directly with the fingers. The main purpose of the project is to offer an instrument for tangible interaction to classrooms of naive users (i.e., professors and students from fields other than technology and science) in a university environment.

Authors: D. Cabiddu, G. Marcias, A. Soro, R. Scateni.
Multi-touch and Tangible Interface: Two Different Interaction Modes in the Same System.
ACM SIGCHI Italian Chapter (CHItaly 2011) Adjunct Proceedings (poster).
Alghero, Italy, September 2011.
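
The table described above sees fingers and the IR pen as bright spots in the image of the camera placed under the surface; a standard way to turn such an infrared image into touch points is blob detection. The sketch below, using OpenCV, is an illustrative reconstruction of that step, not the authors' code; the threshold and minimum blob area are assumed values.

    # Minimal sketch (not the authors' code): extract touch points from the
    # infrared camera image of the rear-projected table. Expects a grayscale
    # 8-bit frame; threshold and min_area are illustrative values.
    import cv2

    def detect_touches(ir_frame, threshold=200, min_area=30):
        _, mask = cv2.threshold(ir_frame, threshold, 255, cv2.THRESH_BINARY)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                       cv2.CHAIN_APPROX_SIMPLE)
        touches = []
        for c in contours:
            if cv2.contourArea(c) < min_area:            # discard camera noise
                continue
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"],         # blob centroid = contact point
                            m["m01"] / m["m00"]))
        return touches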

Natural exploration of 3D models

Abstract: We report on two interactive systems for the natural exploration of 3D models. Manipulation and navigation of 3D virtual objects can be a difficult task for a novice user, especially with a common 2D display. With traditional input devices such as 3D mice, trackballs, etc., the interaction does not act directly on the models, but is mediated and not intuitive. Our user interface allows casual users to inspect 3D objects at various scales, panning, rotating, and zooming, all through hand manipulations analogous to the way people interact with the real world. We present the design of, and compare tests on, two alternative natural interfaces: multitouch and free-hand gestures. Both provide natural dual-handed interaction and at the same time free the user from the need to adopt a separate device.

Authors: S. A. Iacolina, A. Soro, R. Scateni.
Natural exploration of 3D models.
ACM SIGCHI Italian Chapter (CHItaly 2011), 118-121.
Alghero, Italy, September 2011.
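
The pan/rotate/zoom interaction described above maps two tracked points (two fingertips on the multitouch surface, or two hands in the free-hand case) onto incremental transforms of the model. The sketch below shows one common such mapping as an assumption of how the gestures could be translated; it is not the paper's implementation.

    # Minimal sketch (an assumed mapping, not the paper's implementation):
    # derive incremental pan, rotation and zoom from two tracked points
    # observed in two consecutive frames.
    import math

    def two_point_transform(p0_prev, p1_prev, p0, p1):
        cx_prev = ((p0_prev[0] + p1_prev[0]) / 2, (p0_prev[1] + p1_prev[1]) / 2)
        cx      = ((p0[0] + p1[0]) / 2, (p0[1] + p1[1]) / 2)
        pan = (cx[0] - cx_prev[0], cx[1] - cx_prev[1])    # centroid motion -> pan

        d_prev = math.dist(p0_prev, p1_prev)
        d      = math.dist(p0, p1)
        scale  = d / d_prev if d_prev > 0 else 1.0        # pinch distance -> zoom

        a_prev = math.atan2(p1_prev[1] - p0_prev[1], p1_prev[0] - p0_prev[0])
        a      = math.atan2(p1[1] - p0[1], p1[0] - p0[0])
        rotation = a - a_prev                             # relative angle -> rotation
        return pan, rotation, scale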