What if you could enter a museum and immediately interact with the exhibits from your own smartphone? What if you could do it without installing or configuring an app?

For some time now, ART+COM research has been exploring the seamless integration of spatial media and mobile devices. Our team is researching ways to let passers-by easily interact with responsive environments, and with each other, using their own mobile devices. Our goal is to extend spatial interaction beyond traditional kiosk-based setups and to expand the experience of media spaces to include simultaneous multi-user, multi-device interaction. With the right combination of modern web technologies and properly devised media environments, we believe it is possible to design such user experiences using current mobile technology. To demonstrate, we have built prototypes that connect modern web applications to responsive spaces.

In this early experiment, each user took control of one of several timelines displayed on a shared screen. Scrolling on the mobile device simultaneously scrolls the timeline on the main screen.

Challenges

Contextual awareness
Current mobile devices have only limited context awareness (the ability to acquire knowledge of resources and changes in the environment). Rudimentary context awareness is often achieved through proprietary technology (e.g., iBeacon or S Beam) or through apps that need to be installed and configured by the user. With the emergence of the Internet of Things and smart homes, there is no doubt that mobile devices will become better at responding to their surroundings, but do we really have to wait?

Responsiveness
Proper usability demands fast and robust communication between the input device (here, a mobile device) and the system being controlled (here, a responsive space). Achieving the low latency required for a good user experience over a wireless network, in a public space, and for a large number of users is a challenge in terms of both software and network architecture.

Pairing, installation & configuration
The default way to have a mobile device talk to connected hardware is to join a local network (usually by typing a password), install a dedicated app and pair the device with each appliance on the network. This is cumbersome enough in a home environment, but in public spaces like museums or airports, where the user is just passing by, such requirements make any kind of ad-hoc interaction impractical.

Cross-platform compatibility
Since we want to let users bring their own devices, we have to support the largest possible number of platforms. Using browser-based applications mostly solves the software side of the problem, but some issues remain due to hardware limitations and platform-specific restrictions.

Definitions

NFC
NFC stands for Near Field Communication. It is a short-range wireless technology similar to RFID. You may know it from the contactless ticketing used by most large public transit systems. Inexpensive NFC stickers (called tags) can store small amounts of data that can be read by an NFC-compatible device or used to trigger special functions on a smartphone. For example: tapping an NFC-enabled smartphone on a tag storing a web address will open the corresponding page in a browser.
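To make this concrete, here is a minimal sketch of how such a tag could be written from a browser using the Web NFC API (currently limited to Chrome on Android). The button id and the URL are placeholders, and this is just one possible workflow, not necessarily the one used to prepare our own tags:

    // Minimal sketch: write a URL record to an NFC tag with the Web NFC API.
    // Requires Chrome on Android, a secure (https) page and a user gesture.
    async function writeTag() {
      const ndef = new NDEFReader();
      // Placeholder address; tapping the tag later opens it in a browser.
      await ndef.write({
        records: [{ recordType: "url", data: "https://example.com/app" }]
      });
    }

    // Hypothetical button that triggers the write while a tag is held nearby.
    document.querySelector("#write-tag").addEventListener("click", () => {
      writeTag().catch((err) => console.error("NFC write failed:", err));
    });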

WebSockets
Real-time communication between client and server has been a recurring issue since the early 90s. The most recent solution is a protocol called WebSockets. What makes WebSockets different is their focus on low-latency, high-frequency, bi-directional communication. For our purpose, this means that, for example, a tap on a touchscreen can trigger a sound in the environment with no perceivable delay, or that several devices can be triggered to display specific content at a specific time.
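As an illustration, here is a minimal sketch of this pattern in Node.js, using the ws package on the server (our own prototypes use engine.io, described below, but the principle is the same). Every tap received from one phone is relayed to all connected clients, each of which plays a short sound:

    // server.js: relay sketch using the 'ws' package (npm install ws).
    const { WebSocketServer, WebSocket } = require('ws');
    const wss = new WebSocketServer({ port: 8080 });

    wss.on('connection', (socket) => {
      socket.on('message', (data) => {
        // Forward each incoming message (e.g. a tap) to every client.
        for (const client of wss.clients) {
          if (client.readyState === WebSocket.OPEN) client.send(data.toString());
        }
      });
    });

    // client.js: in the browser, send taps and beep when any tap arrives.
    const ws = new WebSocket('ws://localhost:8080');  // placeholder address
    const ctx = new AudioContext();

    document.addEventListener('pointerdown', () => {
      ctx.resume();  // audio may only start after a user gesture
      if (ws.readyState === WebSocket.OPEN) ws.send('tap');
    });

    ws.onmessage = () => {
      const osc = ctx.createOscillator();  // a simple 'ping'
      osc.connect(ctx.destination);
      osc.start();
      osc.stop(ctx.currentTime + 0.1);
    };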

Pop-up interfaces
What we call pop-up interfaces are interactive modules that can be summoned when and where an action must be performed. To illustrate, the pop-up equivalent of a doorknob would be one that only appears when you approach the door. Pop-up interfaces stay out of the way when they are not needed. In this, they are related to the concept of Calm Technology.

The visit

As part of the UHCI (Universal Home Control Interface) project, we developed a scenario and a prototype that demonstrate ad-hoc interaction with a smart-home environment. The prototype makes clever use of short-range wireless communication and web technologies to immerse the user in an interactive and responsive environment.

Setup
The originality of this prototype lies in the combination of NFC, HTML5 and home automation. An NFC tag is used to store the URL of the application, which runs in a browser window. As a result, the user’s device doesn’t need to know anything about the system (or the devices connected to it) beforehand. The following diagram shows a simplified representation of the setup; a code sketch of the flow follows the steps below.

– The system at rest.
– The user taps their device on the NFC tag and a confirmation message appears (first time only).
– The browser opens and requests the page from the app server (running on a Raspberry Pi in our prototype).
– The web app is sent to the phone.
– As the user interacts with the app, a message is sent to turn the light on.
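Condensed into code, the server side of this flow could look roughly like the sketch below. It assumes Node.js with express and ws on the Raspberry Pi; turnLightOn() is a hypothetical stand-in for whatever home-automation binding actually drives the lamp:

    // app-server.js: rough sketch of the flow above (express + ws assumed).
    const express = require('express');
    const http = require('http');
    const { WebSocketServer } = require('ws');

    const app = express();
    app.use(express.static('public'));  // serves the HTML5 app to the phone

    const server = http.createServer(app);
    const wss = new WebSocketServer({ server });

    function turnLightOn() {
      // Placeholder: call the home-automation system here (GPIO, KNX, etc.).
      console.log('light on');
    }

    wss.on('connection', (socket) => {
      socket.on('message', (msg) => {
        // The app sends simple commands; the server drives the room.
        if (msg.toString() === 'light:on') {
          turnLightOn();  // hypothetical home-automation binding
        }
      });
    });

    server.listen(8080);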

Scenario
We wondered what the onboarding experience for a smart home could look like. In this Airbnb-like scenario, the user is taken on a tour of a flat. Everything starts when the user enters the flat and finds a note…

Tapping the phone on the NFC-equipped card opens our HTML5 application in a browser window.

The application controls the ambient light and sound.

Interaction
The user accesses the application by tapping their phone on an NFC tag. They are then guided by a voice through a series of on-screen cards, which can be tapped to interact with the environment in different ways. The environment also responds to the user’s progression through the cards. Audio plays through the room’s sound system, either in sync with the progression of the story or in response to input from the user. The system therefore acts as its own demonstrator, gradually introducing its functionality to the user.

Audio synchronisation

The following is a preview of an early prototype building upon our previous research on realtime web-based interaction. It is the first of a series of experiments being developed in the context of an ongoing research project called SintRaM. We wanted to see how tightly we could synchronise audio playback on multiple devices.

In this first version, each increment is accompanied by a change of colour… and the machine goes ‘ping’!

The second prototype used a ‘breathing’ motion and softer, more ambient sounds.

Setup
The prototype uses a combination of WebSockets (through engine.io), Node.js and the Web Audio API. A synchronisation server runs the master clock and acts as the orchestra conductor, sending regular pings to all connected clients. It dynamically adjusts for network lag by measuring how long each client takes to answer. Once the clients’ clocks are synchronised with the master clock, visuals and sound can be scheduled for update.
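The client side of this scheme could be sketched as follows. The message format and the sync-server address are assumptions for illustration; the core ideas are the round-trip measurement (the one-way lag is taken to be half the round trip) and scheduling playback on the Web Audio clock:

    // Client-side sketch: estimate the offset to the master clock,
    // then schedule audio against it with the Web Audio API.
    const socket = new WebSocket('ws://sync-server:8080');  // assumed address
    const ctx = new AudioContext();
    let clockOffset = 0;  // masterTime - localTime, in milliseconds

    socket.onmessage = ({ data }) => {
      const msg = JSON.parse(data);
      if (msg.type === 'ping') {
        // Echo the server's timestamp so it can measure the round trip,
        // then use the rtt it reported to estimate our offset.
        socket.send(JSON.stringify({ type: 'pong', serverTime: msg.serverTime }));
        clockOffset = msg.serverTime + msg.rtt / 2 - Date.now();
      } else if (msg.type === 'play') {
        // Translate the master-clock target into this device's audio clock.
        const localTarget = msg.masterTime - clockOffset;               // ms
        const when = ctx.currentTime + (localTarget - Date.now()) / 1000;
        const osc = ctx.createOscillator();                             // 'ping!'
        osc.connect(ctx.destination);
        osc.start(Math.max(when, ctx.currentTime));
        osc.stop(Math.max(when, ctx.currentTime) + 0.1);
      }
    };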

Results
According to our current benchmarks, the latency is around 100 ms. However, we still have some tricks up our sleeves, and there is a good chance this could be improved. It should be noted that our brains are better at perceiving poor synchronisation in sound than in visuals. In addition, ambient sounds are much more forgiving than hard beats. These are things we will have to keep in mind as we develop further use cases.

Discussion

What makes this approach special?
The main benefit of using embedded web applications is that users don’t need any custom hardware, nor do they have to install or configure anything. Just connect and play. Pop-up interfaces are especially useful for providing highly customisable interfaces to devices and spaces where an embedded touch screen would be impractical, or in cases where multiple people need to interact with an installation simultaneously.

What is it good for?
Great user experience! The fact that the apps are web-based makes this design particularly interesting in contexts where many users are passing by and need to interact with their environment: fairs, exhibitions, hotels, lobbies, airports, etc.

The ability to synchronise content is great for scenarios in which many users interact with each other. It is also useful for cases where the environment is changing over time (videos playing, kinetic installations moving, performances going on, etc.) and where we want to provide people with contextual information in realtime to match the current state of the space.

This is a generic solution, so it is impossible to list all possible use cases, but here are some examples:
– A kinetic installation broadcasting a synchronised music track you listen to on your smartphone
– A video wall sending contextual information or subtitles to people’s devices in realtime
– A hotel room universal remote with an interface in the user’s language
– A multi-user audioguide

What limitations are there?
In an ideal world, any off-the-shelf mobile device would be able to discover resources in its surroundings (free wireless networks and interactive appliances, for example), inform the user and display the appropriate interface on request. No major technological breakthrough would be needed for this to happen, yet it is not possible out of the box with current mobile technology. However, the evolution of mobile platforms is encouraging. For example, the latest version of Android allows joining a WiFi network by tapping an NFC tag. Once this feature becomes more widespread, it will be much easier to deploy spatial interaction on mobile using browser-based applications, and we will be one step closer to autonomous discovery.

The main roadblock is that Apple devices don’t have a built-in contactless method to redirect users to a website: NFC tags are not supported, iBeacons don’t offer the feature, and QR codes require a third-party application. Looking for workarounds, we went as far as experimenting with repurposing iOS’s captive portal assistant (the web view that pops up when you connect to a restricted WiFi network), but this is hardly a robust or future-proof solution.

What’s next?
In spite of the remaining limitations, web applications are a promising platform for spatial interaction, both at home and in public spaces. They require fewer resources to develop, are (mostly) compatible with all platforms, and users don’t need to download a native app or type in a password to get hooked up.

We can’t reveal everything yet, but expect to see more seamless integration of mobile devices in ART+COM’s future commissioned media spaces… Stay tuned!

About me:
I am a graphic designer and new media artist from France. Aside from my research and computational design work at ART+COM Studios, I teach workshops and create digital art installations. I am one of the organisers of Creative Code Berlin. Follow me on Twitter: @sableRaph


This research was supported by the Federal Ministry for Economic Affairs and Energy in the context of the projects UHCI and SintRaM.