Here are a few examples of the kind of research we have carried out in recent years, which should give some indication of what we have planned for the future.
- Augmented Signage for Navigation Support
- Evaluating Location-Based Services
- User-Generated Gestures
- Collocated Photo Sharing
- Opportunistic Location Sharing
Every year, a large number of pilgrims visit Mecca in Saudi Arabia. During their stay, they perform a number of rites in and around the city. Because large crowds form on particular days, incidents frequently occur in which people are injured, sometimes fatally. In this article, we investigate whether dynamic public signage can help people navigate in this setting and in comparable ones. We present an analysis of the situation in Mecca based on a literature review and on interviews with pilgrims, and then introduce a prototypical dynamic sign system aimed at supporting pilgrims in navigating one particular area. To evaluate the system, we conducted a user study in a realistic setting; the results suggest that dynamic signage may be a feasible option in this context. The design and evaluation of our prototype also yielded a number of insights regarding the design of such systems. We discuss difficulties encountered during the design process and the evaluation, and reflect on implications for the design and evaluation of systems that support navigation for large crowds.
Fathi Hamhoum and Christian Kray. Supporting Pilgrims in Navigating Densely Crowded Religious Sites. Accepted for publication in Personal and Ubiquitous Computing, Special Issue on Extreme Navigation, forthcoming.
A key issue in mobile and ambient computing is the effort required to rapidly prototype and evaluate user interfaces and applications. Existing technologies for these tasks either suffer from low fidelity (e.g. paper prototypes, cognitive walkthroughs) or effectively require a near full-scale deployment. We propose an approach that uses immersive video with surround sound and a simulated infrastructure to create a highly realistic environment in the office or the lab. It provides a low-cost and rapid means to prototype user interfaces and applications, and to evaluate them in a realistic simulation of the context in which they are intended to be used.
Gestures can offer an intuitive way to interact with a computer. In this paper, we investigate whether gesturing with a mobile phone can help users perform complex tasks involving two devices. We present results from a user study in which we asked participants to spontaneously produce gestures with their phones to trigger a set of different activities. We investigated three conditions (device configurations): phone-to-phone, phone-to-tabletop, and phone-to-public-display. We report on the kinds of gestures we observed as well as on feedback from the participants, and provide an initial assessment of which sensors might facilitate gesture recognition on a phone. The results suggest that phone gestures have the potential to be easily understood by end users, and that certain device configurations and activities may be well suited to gesture control.
Passing around stacks of paper photographs while sitting around a table is one of the key social practices defining what is commonly referred to as the 'Kodak Generation'. Due to the way digital photographs are stored and handled, this practice does not translate well to the 'Flickr Generation', where collocated photo sharing often involves the (wireless) transmission of a photo from one mobile device to another. In order to facilitate 'cross-generation' sharing without enforcing either practice, it is desirable to bridge this gap in a way that incorporates familiar aspects of both. In this paper, we discuss a novel interaction technique that addresses some of the constraints introduced by current communication technology, and that enables photo sharing in a way that resembles the passing of stacks of paper photographs. The technique is based on dynamically generated spatial regions around mobile devices and has been evaluated in two user studies. The results we obtained indicate that our technique is easy to learn and as fast as, or faster than, current technology such as transmitting photos between devices using Bluetooth. In addition, we found evidence that the sharing technique used influences social practice around photo sharing: our technique resulted in more inclusive, group-oriented behavior, whereas Bluetooth photo sharing produced a more fractured setting composed of sub-groups.
Knowing where people and objects are located with respect to each other is a key factor in determining the context and the full semantics of any interactions that are taking place. It can also greatly simplify interaction or enable new ways of interacting with technology. This applies to both small-scale scenarios (e.g. on a table, in a room) and larger-scale settings (e.g. in a city). Location sensing frequently requires a specific infrastructure, which can be disadvantageous, e.g. in terms of cost, resilience and reach. The idea behind opportunistic location sharing is to enable localisation amongst collocated 'peers' by sharing information amongst them. The three example papers listed on the right look at this idea from different perspectives (e.g. room-scale vs. building/city-scale, custom sensors vs. off-the-shelf sensors).
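To make the general idea more concrete, here is a minimal illustrative sketch (not the implementation from any of the papers): a peer without its own position fix estimates its location from positions shared by collocated peers, weighting each report, for instance by signal strength. The function name, the tuple format, and the weighting scheme are all assumptions made for illustration.

```python
# Hypothetical sketch of opportunistic location sharing: a device with no
# position fix of its own combines position reports shared by nearby peers.

def estimate_position(peer_reports):
    """peer_reports: list of ((x, y), weight) tuples shared by nearby peers.

    Returns the weighted average of the reported positions; weights could,
    for example, be derived from received signal strength (an assumption
    here, purely for illustration).
    """
    total_weight = sum(w for _, w in peer_reports)
    x = sum(px * w for (px, _), w in peer_reports) / total_weight
    y = sum(py * w for (_, py), w in peer_reports) / total_weight
    return (x, y)

# Three collocated peers share their (hypothetical) positions; the third
# peer's report is weighted more strongly, e.g. due to a stronger signal.
reports = [((0.0, 0.0), 1.0), ((4.0, 0.0), 1.0), ((2.0, 3.0), 2.0)]
print(estimate_position(reports))  # → (2.0, 1.5)
```

A real system would of course have to handle uncertainty in the peers' own estimates and the error of the peer-to-peer ranging, but the sketch captures the core point: no dedicated positioning infrastructure is needed, only information shared among collocated peers.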