Multitouch technology with computationally enhanced surfaces has attracted considerable attention in recent years, not least because of its potential to improve human-computer interaction. Optical approaches to system design use image processing to determine the locations of interactions with the surface. Because they rely on infrared illumination and comparatively simple setups, these systems can potentially be very robust. Hardware implementations such as frustrated total internal reflection (FTIR) and diffused illumination (DI) have enabled low-cost development of surfaces; laser-light-plane and diffused-screen illumination offer other advantages.1
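In camera-based systems of this kind, the processing pipeline typically reduces to thresholding an infrared camera frame and grouping the remaining bright pixels into blobs, one per touch, whose centroids give the touch locations. The sketch below is a hypothetical, simplified illustration of that idea (not the pipeline of any particular system cited here), using a synthetic frame and a simple flood-fill connected-component pass:

```python
# Sketch of optical touch detection: threshold an "IR frame",
# then group bright pixels into connected blobs (one blob per touch).
# The frame below is synthetic; a real system would read camera input.

def detect_touches(frame, threshold=128):
    """Return a list of (centroid_row, centroid_col), one per bright blob."""
    rows, cols = len(frame), len(frame[0])
    visited = [[False] * cols for _ in range(rows)]
    touches = []
    for r in range(rows):
        for c in range(cols):
            if frame[r][c] >= threshold and not visited[r][c]:
                # Flood-fill this blob and collect its pixels.
                stack, pixels = [(r, c)], []
                visited[r][c] = True
                while stack:
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and frame[ny][nx] >= threshold
                                and not visited[ny][nx]):
                            visited[ny][nx] = True
                            stack.append((ny, nx))
                # The blob centroid approximates the touch location.
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                touches.append((cy, cx))
    return touches

# Synthetic 6x8 frame with two bright regions (two fingertips).
frame = [
    [0,   0,   0, 0, 0,   0,   0, 0],
    [0, 200, 210, 0, 0,   0,   0, 0],
    [0, 190, 205, 0, 0, 220,   0, 0],
    [0,   0,   0, 0, 0, 230, 225, 0],
    [0,   0,   0, 0, 0,   0,   0, 0],
    [0,   0,   0, 0, 0,   0,   0, 0],
]
print(detect_touches(frame))  # two centroids, one per blob
```

A production tracker would additionally subtract a background frame to remove ambient IR and match blobs across frames to maintain touch identities over time.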
Buxton's multitouch Web page2 provides a thorough overview of the underlying technologies and history of multitouch surfaces and interaction. Despite its recent surge in popularity,3 the technology has actually been available in different forms since the late 1970s. However, Han's 2005 presentation of a low-cost, camera-based sensing technique using FTIR truly highlighted the role the technology could play in developing the next generation of human-computer interfaces.4 Han's system was both cheap and easy to build, and he used it to illustrate a range of creatively applied interaction techniques. His YouTube demonstration captured the imagination of experts and laymen alike. In 2007, interest increased further when Apple released details of the iPhone,5 a mobile phone with a multitouch screen as its user interface. The iPhone's interface and interaction techniques received considerable media attention and brought multitouch to the forefront of the consumer-electronics market.
Later in 2007, Microsoft announced its Surface multitouch table,6 which has the appearance of a coffee table with an embedded interactive screen. Like the HoloWall,7 the Surface uses a DI approach, with a diffuser attached to the projection surface and IR illumination from below. Cameras inside the table capture the reflections of hands and objects. A grid of cameras gives the Surface a sensing resolution sufficient to track objects augmented with visual markers.
The technology's growth is evidenced by the considerable amount of research exploring the benefits of multitouch interaction and surfaces.8–17 In addition, several conferences are held annually in this and related fields. Interactive Tabletops and Surfaces (ITS)18 is the premier conference for presenting research in the design and use of new and emerging tabletop and interactive-surface technologies. As a young community, ITS embraces the discipline's growth in a wide variety of areas, including innovations in hardware, software, and interactive design, and in studies expanding our understanding of design considerations for applications in modern society.
Our goal is to facilitate better application of the technology by pointing out the many disappointing presentations of multitouch-enabled surfaces and by addressing the lack of awareness of the genuine advantages of multitouch interaction: multitouch can do more than just rotate and scale pictures and videos on a screen (see Figure 1). Of course, there are also many good examples of applications. Domains such as education, entertainment, and command-and-control scenarios have the potential to highlight the benefits of multitouch interaction. For example, the SMART Table19 represents the first multitouch, multiuser interactive learning experience that allows groups of early-education students to work simultaneously on its surface. Jeff Han's company, Perceptive Pixel,20 also presents its interactive wall as a tool for command-and-control scenarios. Microsoft's Surface is an ideal platform for various kinds of games (for example, the ‘Firefly’ game or the ‘Dungeons and Dragons’ concept).
Figure 1. Multitouch interaction in different domains. (A), (B), and (C) Interaction with a virtual globe. (D) Exploration of volumetric medical data.
In our own work, we focus on how multitouch can support interaction with geospatial information. In a demonstration of our FTIR-based multitouch wall installed in a pedestrian underpass, users could navigate a virtual globe and explore points of interest (see Figure 1). We also investigate how other modalities can be combined to enrich interaction with spatial information.21,22 More recently, we have explored new paradigms that combine traditional 2D and novel 3D interactions on a touch surface, forming a new class of systems that we refer to as interscopic multitouch surfaces (iMUTS). iMUTS-based user interfaces support interaction with 2D content displayed in monoscopic mode and with 3D content usually shown stereoscopically.23
Buxton2 says, “Remember that it took 30 years between when the mouse was invented by Engelbart and English in 1965 to when it became ubiquitous.” In time, multitouch will pass through the inevitable media hype and emerge as a genuinely useful technology: a powerful way for people to interact with next-generation computers.