Abstract: We report on an alternative OCGM interface for a bulletin board, where a user can pin a note or a drawing and share content with others. By exploiting direct and continuous manipulations, as opposed to discrete gestures, to explore containers, the proposed interface supports a more natural and immediate interaction. It also manages the presence of multiple simultaneous users, allows the creation of local multimedia content and the connection to social networks, and provides a suitable working environment for cooperative and collaborative tasks in multi-touch setups such as touch tables, interactive walls, or multimedia boards.
Authors: S. A. Iacolina, M. Corrias, O. Pontis, A. Soro, F. Sorrentino, R. Scateni.
A Multitouch Notice Board Fostering Social Interaction.
ACM SIGCHI Italian Chapter (CHItaly 2013), 13:1–13:4.
Trento, Italy, September 2013.
Abstract: The wide availability of low-cost sensing devices opens the possibility of easily creating different interaction setups that exploit various techniques for a more natural interaction, especially in public and shared contexts. In this paper, we compare two different solutions for enhancing the interaction experience of a planetarium application, both replicable at a reasonable cost. The first version is based on a simple multitouch paradigm, while the second exploits full-body interaction together with a projection on a geodesic sphere. We detail the technical implementation of both versions and, in addition, discuss the results of a user study that compared the two modalities, which highlight a trade-off between control and the users’ involvement in the virtual environment.
Abstract: We present here the preliminary results of our efforts towards the definition of a novel paradigm for the procedural generation of pseudo-animals, using a grammar-based approach. With the term “pseudo-animal” we denote a living being characterized by a set of features mimicking those of real animals, but not necessarily belonging to an existing species. The generation of these pseudo-animals should also plausibly reflect the properties of the environment where the model will live.
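As a rough illustration of the general idea behind grammar-based generation (not the paper’s actual grammar), the following minimal Python sketch expands hypothetical production rules into a flat list of body features; the rule set, symbol names, and seed are assumptions made up for this example.

    # Hypothetical sketch: grammar-based expansion of a pseudo-animal body plan.
    # The rules below are illustrative only, not the grammar used in the paper.
    import random

    RULES = {
        "ANIMAL": [["BODY", "HEAD"], ["BODY", "BODY", "HEAD"]],
        "BODY":   [["torso", "LIMBS"], ["torso", "LIMBS", "tail"]],
        "LIMBS":  [["leg", "leg"], ["leg", "leg", "leg", "leg"], ["fin", "fin"]],
        "HEAD":   [["head", "eyes"], ["head", "eyes", "horns"]],
    }

    def expand(symbol, rng):
        """Recursively expand a symbol until only terminal features remain."""
        if symbol not in RULES:          # terminal feature, e.g. "leg"
            return [symbol]
        parts = []
        for s in rng.choice(RULES[symbol]):
            parts.extend(expand(s, rng))
        return parts

    if __name__ == "__main__":
        rng = random.Random(42)          # fixed seed for a reproducible creature
        print(expand("ANIMAL", rng))     # e.g. ['torso', 'leg', 'leg', 'head', 'eyes']

In a setting like the one described in the abstract, the choice among competing expansions could presumably be biased by environmental parameters so that the generated features plausibly reflect the habitat; here the choice is simply uniform at random.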
Abstract: We present here the first release of an SDK (Software Development Kit) for mobile devices supporting the animation of 3D talking heads: THAL-k. The SDK is constantly evolving, and here we discuss the features of version 1.0. This library is intended as a support for all developers wishing to build smartphone or tablet applications that include avatars to enhance the interaction functionalities. The main challenge we face is to provide developers with a complete SDK for the creation, customization, and real-time animation of the models.
Abstract: The number and quality of smartphones on the market have risen dramatically in recent years. Researchers and developers are thus increasingly pushed to bring algorithms and techniques from desktop environments to mobile platforms. One of the biggest constraints in mobile applications is the fine control of computing power and the related power consumption. Although smartphone manufacturers are offering better computing performance and longer battery life, the mobile architecture is not always powerful enough. Furthermore, touchless interaction (e.g., the use of voice commands) on mobile devices is nowadays particularly attractive. The device can also answer our questions (e.g., Siri, the Speech Interpretation and Recognition Interface, which according to Apple is “the intelligent personal assistant that helps you get things done just by asking”). The use of talking avatars can improve the quality of the interaction and make it more useful and pleasant. Since avatars are static models, while interaction requires dynamics, introducing avatar animation is almost mandatory.