The blog is where we report on current projects and activities. We provide insight into our design approach and practice, technological developments and software solutions. Projects and events that we find interesting and important are also featured.

Following two and a half years of renovations, Royal Jelling re-opens to visitors on the sixth of June. A new exhibition space of 1,000 square metres now spans the existing structure and a new extension.

UNESCO World Heritage listed Jelling, with its rune stones and burial mounds, is one of the most important cultural-historical sites in Denmark. The Royal Jelling exhibition is an exploration of Danish history and mythology and, crucially, the mysteries of the Viking kings of Jelling.

Archaeological artefacts and Viking legends are powerfully presented in a diverse and media-rich exhibition. Visitors can travel back to the earliest days of Jelling and immerse themselves in the lives of the Vikings through ten different exhibition areas and forty media exhibits.

Because the subject is an archaeological site, a number of important artefacts cannot be shown inside the museum. In the exhibition, these finds are presented through media installations so that they can be experienced virtually.

ART+COM Studios won an international competition to design the new exhibition, and went on to develop the overall design as well as individual media exhibits.

Art director Felix Hardmood Beck and creative director Jussi Ängeslevä headed up the exhibition and scenography design. In this interview, Felix H. Beck talks about the specific features of the exhibition and offers insights into the design process.

Susanne: What was the central design approach for the new Royal Jelling?

Felix: Jelling has been known for decades as the place of Viking king Harald Bluetooth. Bluetooth’s rune stones are World Heritage listed and constitute a part of Danish identity. Excavations over the past ten years have shown that sections of Danish history need to be rewritten because Jelling is much more important than previously thought. The stone ship, discovered long ago, has turned out to be twice as big as first estimated, and the remains of a wooden fortification were found — a huge wooden palisade surrounding the whole area. The finds suggest that Jelling was the most important place of its time, up to the year 960, and played an even more important role in the period of flux between old Viking beliefs and the adoption of Christianity than ever previously considered.

To communicate this archaeological process — that knowledge evolves with every new discovery — was the central approach to the design of the exhibition. We show visitors not just one state of knowledge, but several. In some areas, visitors can even contribute their own interpretations of the findings. In this sense Royal Jelling is not a traditional archaeological museum, but one that makes archaeology accessible as a continuous and active process.

Susanne: How did you and the design team get to grips with this topic? And how was the collaboration with the museum in that respect?

Felix: Two years working on a Viking exhibition is every little boy’s dream come true. As well as researching the subject through the usual channels of books, films and exhibitions, we had access to scientific material in the Danish National Museum. The director, the project leader and an archaeologist from Royal Jelling were our interfaces with the National Museum. The collaboration just flowed right from the first moment. I was able to visit the National Museum archives, and we had access to scientific experts as required, to ask questions and develop content that we thought would be important for the exhibition. We found some auratic artefacts in the archive, which we used to visualise certain areas of Viking life: a clay bowl represents life in the village, a sword illustrates fighting and conquest, coins portray trade. The exhibition reveals what roles these areas played in the lives of the Vikings and also how special the place is — that it was a royal place where European history was made.

Susanne: How is the exhibition structured and staged? And how do visitors experience the exhibition?

Felix: We organised the exhibition chronologically and by subject. It begins with the time of the Vikings and goes through the development of Jelling right up to the here and now. Visitors will have a different experience in each room. For the whole exhibition as well as for the individual rooms, we developed Epic Profiles to define an array of topics, content and media exhibits that build up suspense. The exhibition is rich in content and some elements are continually changing so that visitors who come for a second or third time still experience new atmospheres and stories.

Susanne: Could you give me a concrete example?

Felix: The most obvious instance is the ‘Mythology Room’. The focus of this immersive space is sound. Visitors hear stories that have been handed down orally through the centuries. Moving images are projected onto a sculpture in the centre of the room that is a reference to Yggdrasil, the Vikings’ mythological tree of life. On the walls are images of a wide variety of sacrificial offerings, supported by scientific evidence. The walls, with silhouettes — two-dimensional cut-outs — are backlit and change colour over timed intervals. In conjunction with the stories, new atmospheres are continuously generated in the space.

Susanne: The exhibition presents archaeology and history as active processes. Accordingly, you would expect a lot of interactive exhibits where visitors can take action themselves. How does the design factor in different visitor profiles?

Felix: We have something in every room for all types of visitors. Analogue and media experiences are distributed evenly throughout. Many of the exhibits are designed so that they need to be explored. For example, the floor plans of six other Viking sites and castles built by Harald Bluetooth are on show, and printed onto them, in invisible ink, are archaeological events and discoveries. Visitors can explore and discover these themselves using a UV lamp. And for very young visitors, laser-cut stencils have been hidden throughout the exhibition. When found, rubbings can be made with pencil and paper and the images taken home.

Susanne: How do the artefacts and the media exhibits work together in the exhibition?

Felix: Most artefacts in the exhibition are staged using new media. But we also made some objects ourselves. One particularly nice example is the installation ‘Hammer and Cross’. The starting point was a found object — a mould for the production of the Thor-hammer-and-cross-shaped jewellery that marked the Vikings’ shift from pagan religion to Christianity. This change of religion began with Harald Bluetooth, the first Viking king to be baptised. In fact, it took another two to three hundred years before the Vikings completely converted to the Christian faith. In the intervening years both religions were practised, depending on the occasion. When it was stormy, the Vikings brought their Thor hammers to the fore; when trading with Christians, they wore crucifixes visibly around their necks. With our installation, we found a metaphor for this back and forth between religions. In the room is a small, freestanding, metallic, 3D-printed object. The object casts a shadow on the wall showing a cross or a Thor hammer depending on the lamp position. So we have reinterpreted a piece of jewellery and from that developed an exhibit in which its shadow becomes a spatial experience.

Susanne: Drawings are a recurrent theme and a central motif throughout the exhibition. And there is a drawing robot working right in the foyer that visitors can watch. How did that come about?

Felix: The starting point for the drawing robot was the rune stones, which, in the medium of the Vikings — stone engraving — communicated that the Vikings had become Christians. That was the knowledge that existed around AD 1000, documented in the rune stones. We took that as a basic theme and reinterpreted it with a new, contemporary medium. The drawing robot presents knowledge not as incontrovertible fact, but illustrates that knowledge is a process by drawing new finds and discoveries on the wall. We created software that allows museum employees to import and position image files (SVGs). The drawing robot then draws the image, first creating a rough outline across the wall, and over the following hours and days, fills in the details.
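How that coarse-to-fine scheduling might work can be sketched in a few lines (the robot’s actual software is not public; the stroke data and the length-based ordering rule below are our own assumptions for illustration):

```typescript
// Hypothetical sketch: order flattened SVG strokes so a drawing robot
// renders the rough outline first and fills in details over time.
// Assumes the imported paths have already been flattened into polylines.

type Point = { x: number; y: number };
type Stroke = Point[];

// Total pen-down length of a stroke.
function strokeLength(stroke: Stroke): number {
  let length = 0;
  for (let i = 1; i < stroke.length; i++) {
    length += Math.hypot(stroke[i].x - stroke[i - 1].x, stroke[i].y - stroke[i - 1].y);
  }
  return length;
}

// Longest strokes first: the broad outline appears quickly, while short
// detail strokes are queued for the following hours and days.
function scheduleStrokes(strokes: Stroke[]): Stroke[] {
  return [...strokes].sort((a, b) => strokeLength(b) - strokeLength(a));
}

// Example: a short detail stroke and a long outline stroke.
const queue = scheduleStrokes([
  [{ x: 0, y: 0 }, { x: 0, y: 10 }],                     // short detail
  [{ x: 0, y: 0 }, { x: 100, y: 0 }, { x: 100, y: 80 }], // long outline
]);
console.log(queue.map(strokeLength)); // [ 180, 10 ] — outline drawn first
```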

Susanne: The special thing about Royal Jelling is that the archaeological site is accessible, so a part of the exhibition is located outside. How have you managed to connect the outdoor and indoor areas?

Felix: It’s possible to experience the scale of the monumental Viking-age Jelling if you explore the area on foot. In the last exhibition room, the terrain can be viewed through a huge panoramic window. Right next to that is a large screen showing a bird’s-eye view; visitors can control a prerecorded Archeodrone flight with a joystick and ‘fly’ across the area.

Right at the end of the exhibition, visitors move onto a roof terrace. From there, they look over the outdoor space with images from the exhibition in their heads. Now they can imagine how people constructed the site, how they pushed those enormous stones through the wetlands, how they sat around fires working the stakes with axes. And to enhance these mental pictures further still, two Timescopes have been installed for visitors to look through. They can see the most important areas of the site marked out, superimposed on the live view. With the push of a button, images of different periods can be selected. The Timescopes show how the longhouse, the church, the burial mounds, and the wooden palisade have changed over the centuries.

About Felix Hardmood Beck:
Felix is art director at ART+COM Studios. His research interest and work focus is on the overlapping fields of design and technology — both in his work for ART+COM Studios and in his independent work.
http://www.felix-beck.de/

About Dr Susanne Jaschko:
Susanne is an art historian and curator based in Berlin. Alongside her freelance writing for ART+COM Studios, she focuses on participatory art and design processes. prozessagenten website: http://prozessagenten.org

 

ART+COM Studios team
Design direction: Jussi Ängeslevä, Felix Hardmood Beck
Project management: Mascha Thomas, Gert Monath, Peter Böhm
Illustration: Sebastian Bausdorf
Design: Tim Horntrich, Arne Michel, Christoph Steinlehner, Simon Häcker, Dimitar Ruszev, Eva Offenberg, Felix Groll, Patrick Garus, Sarah Klemisch, Elisa Hasselberg, Duc Nhan Truong, Maximilian Sedlak, Adrian Bernstein, Alyssa Trawkina, Jip Eilbracht, Alexander Pospischil, Christine Eyberg, Lukas Dürrbeck, Julius Winckler
Development: Valentin Schunack, Gunnar Marten, Claudia Winterstein, Andreas Marr
Computational design: Christian Riekoff, Max Göttner
3D: Simon Häcker
Content editor: Anna Elena Hauff
Media planning (Medienprojekt P2): Marcus Heyden, Stephanie Sturm
System administration: Rasca Gmelch

Exhibition design in collaboration with

Bertron Schwarz Frey (Prof Ulrich Schwarz, Johanna Ziemann)

Realisation and construction
M.o.l.i.t.o.r. (construction management: Jochen Voos)

Media production in collaboration with
m box, Markus Lerner, Jonas und der Wolf (Janek Jonas, Daniel Urria Redia, Mikkel Bech), picaroMEDIA/Peter Weinsheimer (Audio), Weißpunktundpurpur (Antonia Spieß)

Special structures MKT, Dieter Sachse

Scale modelling Monath & Menzel

Client

Vejle Kommune (project manager: Morten Teilmann-Jørgensen, unit-leader Kongernes Jelling: Hans Ole Matthiesen)


In response to the many enquiries that reached us: No, it wasn’t us. For those who did not see it: “Kinetische Skulpturen” (kinetic sculptures) were a central element of the 2015 Eurovision Song Contest’s stage design. The ‘creator’ of the so-called ‘Kugelballett’ (ball ballet), Florian Wieder, is based in Munich, where our Kinetic Sculpture has been part of the permanent exhibition at the BMW Museum since 2008…

ORF announcement
Website Kinetic Sculpture
Video BMW 5 series campaign


RGB|CMY Kinetic being created for SonarPLANTA

April 29, 2015, ART+COM Studios

Sónar and the Sorigué Foundation will premiere the monumental audiovisual installation RGB|CMY Kinetic created by us for SonarPLANTA.

RGB|CMY Kinetic is both a sculpture suspended in the air and a choreography of light, with its roots in two important traditions of the twentieth century: kinetic art and light art. The piece consists of five reflective disks that will rise above the installation space. These disks will reflect light from three bulbs, casting three primary colours on a floor-mounted display. The precise and harmonious movement of the disks — via a system of motors and cables — will be used to decompose the light emitted by the bulbs into multiple shades and distinct tones that will fill the screen.

The third element of the choreography will be sound. For the sound of RGB|CMY Kinetic we have continued our successful collaboration with the exceptional Icelandic composer Ólafur Arnalds. His composition will be activated by the position of the piece’s disks at each moment.

The piece has been specifically developed for SonarPLANTA and is inspired by the PLANTA project, a multidisciplinary creation space that brings together art, architecture, science, innovation, talent and enterprise, located in the Sorigué industrial environment.

The monumental SonarPLANTA space will support a meditative and transcendental experience of RGB|CMY Kinetic, in which technology is used to explore nature’s poetic dimensions and fundamental principles. The piece will immerse viewers in colour, movement and sound.

Created last year, SonarPLANTA is a joint initiative of Sónar and the Sorigué Foundation that will run for the next three festivals and aims to promote and celebrate research and experimentation in creative languages based around technology and New Media Art. Following last year’s first SonarPLANTA, each year three internationally renowned artists will submit a new creation proposal that experiments with creative languages and technology. The selected artist will be rewarded with the production of their piece, which will be premiered in the new SonarPLANTA space.

RGB|CMY Kinetic follows on from the first SonarPLANTA piece, hosted by Sónar 2014: unidisplay, a monumental immersive audiovisual installation by Carsten Nicolai.

RGB|CMY Kinetic — currently being produced in Berlin — will have its worldwide premiere at SónarBarcelona on 18, 19 and 20 June, 2015.

The installation will be manufactured and installed by MKT, Olching.



What if you could enter a museum and immediately interact with the exhibits from your own smartphone? What if you could do it without installing or configuring an app?

For some time now, ART+COM research has been exploring the seamless integration of spatial media and mobile devices. Our team is researching ways to let passers-by easily interact with responsive environments and with each other using their own mobile devices. Our goal is to extend spatial interaction beyond traditional kiosk-based setups and to expand the experience of media spaces to include simultaneous multi-user and multi-device interaction. With the right combination of modern web technologies and properly devised media environments, we believe it is possible to design such user experiences using current mobile technology. To demonstrate, we have built prototypes that connect modern web applications to responsive spaces.

In this early experiment, each user took control of one of several timelines displayed on a shared screen. Scrolling on the mobile device simultaneously scrolls the timeline on the main screen.

Challenges

Contextual awareness
Current mobile devices have only limited context awareness (the ability to acquire knowledge of resources and changes in the environment). Rudimentary context awareness is often achieved through proprietary technology (e.g., iBeacon or S Beam) or apps that need to be installed and configured by the user. With the emergence of the internet of things and smart homes, there is no doubt that mobile devices will become better at responding to their surroundings, but do we really have to wait?

Responsiveness
Proper usability demands fast and robust communication between the input device (a mobile device in this case) and the system being controlled (a responsive space in this case). Implementing the kind of low latency required for a good user experience over a wireless network, within a public space and for a large number of users, is a challenge in terms of both software and network architecture.

Pairing, installation & configuration
The default way to have a mobile device talk to connected hardware is to join a local network (usually by typing a password), install a dedicated app and pair your mobile with each appliance on the network. This is cumbersome enough in a home environment, but in public spaces like museums or airports, where the user is just passing by, such limitations make it impractical to implement any kind of ad-hoc interaction.

Cross-platform compatibility
Since we want to let users bring their own devices, we have to support the largest possible number of platforms. Using browser-based applications mostly solves the software part of the problem, but some issues remain due to hardware limitations and platform-specific restrictions.

Definitions

NFC
NFC stands for Near Field Communication. It is a short-range wireless technology similar to RFID. You may know it from the contactless ticketing used by most large public transit systems. Inexpensive NFC stickers (called tags) can store small amounts of data that can be read by an NFC-compatible device or used to trigger special functions on a smartphone. For example: swiping an NFC-enabled smartphone over a tag storing a web address will open the corresponding page in a browser.

WebSockets
Real-time communication between client and server has been a recurring issue since the early 90s. The latest solution is a protocol called WebSockets. What makes WebSockets different is their focus on low-latency, high-frequency, bi-directional communication. For our purpose, this means that — for example — a tap on a touchscreen can trigger a sound in the environment with no perceivable delay, or that several devices can be triggered to display specific content at a specific time.
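As a minimal sketch of what this looks like from the browser (our prototypes used engine.io; the server URL and message format here are illustrative assumptions):

```typescript
// Hypothetical browser-side sketch: forward each tap to the space over a
// WebSocket so a sound can be triggered with no perceivable delay.

const socket = new WebSocket("ws://exhibition.local:8080");

document.addEventListener("pointerdown", (event) => {
  if (socket.readyState === WebSocket.OPEN) {
    socket.send(JSON.stringify({ type: "tap", x: event.clientX, y: event.clientY }));
  }
});

// The space can also push events back to the device, e.g. to display
// specific content at a specific time.
socket.addEventListener("message", (event) => {
  const message = JSON.parse(event.data);
  if (message.type === "showContent") {
    document.body.textContent = message.text;
  }
});
```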

Pop-up interfaces
What we call pop-up interfaces are interactive modules that can be summoned when and where an action must be performed. To illustrate, the pop-up equivalent of a doorknob would be one that only appears when you approach the door. Pop-up interfaces stay out of the way when they are not needed. In this, they are related to the concept of Calm Technology.

The visit

We developed a scenario as part of the UHCI (Universal Home Control Interface) project, and a prototype that demonstrates ad-hoc interaction with a smart-home environment. It makes clever use of short range wireless communication and web technologies to immerse the user in an interactive and responsive environment.

Setup
The originality of this prototype is in the combination of NFC, HTML5 and home automation. An NFC tag is used to store the URL of the application which will run in a browser window. Therefore, the user’s device doesn’t need to know anything about the system (or the devices connected to it) beforehand. The following diagram shows a simplified representation of the setup.

The system at rest.

The user taps their device on the NFC tag and a confirmation message appears (first time only). 

The browser opens and requests the page from the app server (running on a Raspberry Pi in our prototype).

The web app is sent to the phone. 

As the user interacts with the app, a message is sent to turn the light on.
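A much-simplified sketch of the server side of this flow (the prototype’s actual code is not shown here; the `setLight` call below is a stand-in for the home-automation interface):

```typescript
// Hypothetical sketch of the app server (running, say, on a Raspberry Pi).
// The NFC tag stores this server's URL; tapping it opens the app in a browser.
import http from "http";
import { WebSocketServer } from "ws";

const page = `<!doctype html>
<button onclick="ws.send('lightOn')">Turn the light on</button>
<script>const ws = new WebSocket("ws://" + location.host);</script>`;

const server = http.createServer((_req, res) => {
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(page); // the web app itself; nothing to install or pair
});

new WebSocketServer({ server }).on("connection", (socket) => {
  socket.on("message", (data) => {
    if (data.toString() === "lightOn") {
      setLight(true); // stand-in for the call to the home-automation system
    }
  });
});

// Placeholder: in the real setup this would talk to the automation bus.
function setLight(on: boolean): void {
  console.log(`light ${on ? "on" : "off"}`);
}

server.listen(8080);
```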

Scenario
In this Airbnb-like scenario, the user is taken on a tour of a flat. We wondered what the onboarding experience for a smart home could look like. Everything starts when the user enters the flat and finds a note…

Tapping the phone on the NFC-equipped card opens our HTML5 application in a browser window.

The application controls the ambient light and sound.

Interaction
The user accesses the application by tapping their phone to an NFC tag. They are then guided by a voice through a series of on-screen cards, which can be tapped to interact with the environment in different ways. The environment also responds to the user’s progression through the cards. Audio plays through the room’s sound system, either in sync with the progression of the story or in response to input from the user. The system therefore acts as its own demonstrator, gradually introducing its functionalities to the user.

Audio synchronisation

The following is a preview of an early prototype building upon our previous research on realtime web-based interaction. It is the first of a series of experiments being developed in the context of an ongoing research project called SintRaM. We wanted to see how tightly we could synchronise audio playback on multiple devices.

In this first version, each increment is accompanied by a change of colour… and the machine goes ‘ping’!

The second prototype used a ‘breathing’ motion and softer, more ambient sounds.

Setup
The prototype uses a combination of WebSockets (through engine.io), Node.js and the Web Audio API. A synchronisation server runs the master clock and acts as the orchestra conductor, sending regular pings to all connected clients. It adjusts dynamically for network lag by measuring how long it takes to receive an answer. Once the clients’ clocks are synchronised with the master clock, visuals and sound are scheduled for update.
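The synchronisation principle can be sketched roughly like this (an NTP-style simplification under our own naming; the real SintRaM code differs in detail): the server timestamps a ping, the client replies with its own clock reading, and half the round trip approximates the one-way lag, which yields the offset between the two clocks.

```typescript
// Hypothetical sketch of the master clock: estimate each client's clock
// offset from ping round trips, then schedule events in the client's own time.
import { WebSocketServer, WebSocket } from "ws";

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (client: WebSocket) => {
  let offset = 0; // clientClock minus masterClock, in milliseconds

  // Regular ping; the client is expected to reply with its Date.now().
  const ping = () => {
    const sentAt = Date.now();
    client.once("message", (data) => {
      const clientTime = Number(data);
      const roundTrip = Date.now() - sentAt;
      // Assume symmetric lag: the client read its clock roughly half a
      // round trip after the ping was sent.
      offset = clientTime - (sentAt + roundTrip / 2);
    });
    client.send("ping");
  };
  const timer = setInterval(ping, 1000);
  client.on("close", () => clearInterval(timer));

  // Example: schedule a synchronised event 500 ms from now, expressed in
  // this client's own timebase so all devices fire together.
  setTimeout(() => {
    client.send(JSON.stringify({ type: "play", at: Date.now() + 500 + offset }));
  }, 3000);
});
```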

Results
According to our current benchmarks, the latency is around 100 ms. However, we still have some tricks up our sleeves and there is a good chance this could be improved. It should be noted that our brain is better at perceiving bad synchrony in sound than in visuals. In addition, ambient sounds are much more forgiving than hard beats. These are things we will have to keep in mind as we develop further use-cases.

Discussion

What makes this approach special?
The main benefit of using embedded web applications is that users don’t need any custom hardware, nor to install or configure anything. Just connect and play. Pop-up interfaces are especially useful for providing highly customisable interfaces to devices and spaces in which an embedded touch screen would be impractical, or in cases where multiple people need to interact with an installation simultaneously.

What is it good for?
Great user experience! The fact that the apps are web-based makes this design particularly interesting in contexts where many users are passing by and need to interact with their environment: fairs, exhibitions, hotels, lobbies, airports, etc.

The ability to synchronise content is great for scenarios in which many users interact with each other. It is also useful for cases where the environment is changing over time (videos playing, kinetic installations moving, performances going on, etc.) and where we want to provide people with contextual information in realtime to match the current state of the space.

This is a generic solution, so it is impossible to list all possible use-cases, but here are some examples:
– A kinetic installation broadcasting a synchronised music track you listen to on your smartphone
– A video wall sending contextual information or subtitles to people’s devices in realtime
– A hotel room universal remote with an interface in the user’s language
– A multi-user audioguide

What limitations are there?
In an ideal world, any off-the-shelf mobile device would be able to discover resources (free wireless networks and interactive appliances, for example) in its surroundings, inform the user and display the appropriate interface if requested. No major technological breakthrough would be needed for this to happen, yet it is not possible out of the box with current mobile technology. However, the evolution of mobile platforms is encouraging. For example: the latest version of Android allows accessing a WiFi network by tapping an NFC tag. Once this feature becomes more widespread, it will make it much easier to deploy spatial interaction on mobile using browser-based applications, and we will be one step closer to autonomous discovery.

The main roadblock is the fact that Apple devices don’t have a built-in contactless method to redirect users to a website: NFC tags are not supported, iBeacons don’t have the feature and QR codes require a third-party application. Looking for workarounds, we went as far as experimenting with repurposing iOS’s captive portal assistant (the web view that pops up when you connect to a restricted WiFi network), but this is hardly a robust or future-proof solution.

What’s next?
In spite of remaining limitations, web applications are a promising platform for spatial interaction, both at home and in public spaces. They require fewer resources to develop, are (mostly) compatible with all platforms, and users don’t need to download a native app or type in a password to get hooked up.

We can’t reveal everything yet, but expect to see more seamless integration of mobile devices in ART+COM’s future commissioned media spaces… Stay tuned!

About me:
I am a graphic designer and new media artist from France. Aside from my research and computational design work at ART+COM Studios I teach workshops and create digital art installations. I am one of the organisers of Creative Code Berlin. Follow me on Twitter: @sableRaph

 

This research was supported by the Federal Ministry of Education and Research in the context of the projects UHCI and SintRaM.

 

 


Les Bains, a legendary nightclub and international party-scene hot spot in the 1980s and 90s, is currently enjoying a soft launch after its redevelopment into a hotel, restaurant and club.

ART+COM Studios has created a site-specific installation for an extraordinary space in the renovated Les Bains. The sculpture, which brings to mind an exploded disco ball, covers the surrounding walls in scattered points of reflected light from which the words “RE TROUVE LE TEMPS PERDU” are composed over and over again. These words can be understood as “The lost times are found” (LE TEMPS PERDU RE TROUVE) or “Find the lost times” (RE TROUVE LE TEMPS PERDU) depending on where the viewer starts reading.

The words are a reference to the legendary past of the space as Les Bains nightclub. The title of the installation, À la recherche, ties into Marcel Proust’s magnum opus “À la recherche du temps perdu” (In Search of Lost Time), in which involuntary memories triggered by everyday activities, objects and sensations are the recurring motif.

À la recherche was created for a room that served as a water tank for the spa in the 19th and early 20th century. At approximately twenty square metres, the floor area is not large, but the ceiling is 15 metres high. From a height of about two metres the walls have been left unrenovated – there, the sculpture rotates slowly above the heads of visitors, throwing reflections onto the rough walls. The space is part of the restaurant in Les Bains.

The sculpture is illuminated by several spotlights that bring a warm glow to its gilded interior and cast pinpricks of light onto the walls in swarms.

Manual labour and computational design went hand in hand for the production of À la recherche. Following brainstorming and initial sketches, the orientation and distribution of the mirror facets were computationally designed. The sculpture is made up of a sphere in 33 fragments, to which a total of 2800 small, square mirrors are attached. The mirror facets sit separately on individual holders that were made with a 3D printer and then glued on by hand. The mirror elements that reflect the words were also 3D printed, attached to flexible joints and individually aligned.

The slow movement of the reflections in the space and the recurring motif make À la recherche a contemplative work of art that pays homage to the wild years of Les Bains.

Pictures © Christophe Bielsa

 


A drone for good: the Archeodrone

February 13, 2015, Susanne Jaschko

We participated in the first UAE Drones for Good Award with the Archeodrone concept. ART+COM Studios developed the idea for the drone’s cultural use together with the Danish media company Redia. Although we were not among the lucky winners in the end, we believe that the Archeodrone is a great tool for giving visitors an immediate experience of an archaeological site. In the interview below, Jussi Ängeslevä speaks about the device’s application for spatial communication and what needs to be considered when putting it to use.

The employment of drones in archaeology is an established practice. Drones provide an overview of sites in hard-to-reach areas, with preconfigured flight paths and image capture. They help in seeing ‘the bigger picture’. Cultural centres at archaeological sites present their heritage to the general public. It’s common practice to show visualisations of the site as part of the exhibition, even though the original subject is right next door.

The Archeodrone connects indoors and outdoors: cameras mounted on a flying drone record video in real time, and the video is displayed inside the museum. The drone flies a predefined flight path, ‘a scenic tour’, around the site and captures a spherical video of the flight. The drone is fully autonomous, but the museum visitor can freely choose which way to look, acting as the ‘second pilot’ of the flight (just as in a professional drone camera team, where a pilot flies and a camera operator manoeuvres the camera).

Six GoPro cameras are mounted onto a custom-built octocopter, and custom software stitches the camera views into a single ultra-high-definition spherical video. The audience in the visitor centre can operate the camera view and see the flight on a big screen.
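To give an idea of what operating the camera view means computationally, here is a hedged sketch (our own illustration, not Redia’s code) of how a look direction maps to pixel coordinates in a stitched equirectangular frame; a real viewer re-projects a whole perspective window around that point:

```typescript
// Hypothetical sketch: map a view direction (yaw/pitch from the joystick)
// to the centre pixel in an equirectangular (spherical) video frame.

function viewToPixel(
  yaw: number,    // radians; 0 = straight ahead, positive = look right
  pitch: number,  // radians; positive = look up
  width: number,  // frame width in pixels
  height: number  // frame height in pixels
): { u: number; v: number } {
  // Longitude and latitude of the look direction on the unit sphere.
  const lon = yaw;   // range -PI .. PI
  const lat = pitch; // range -PI/2 .. PI/2
  // Equirectangular frames map longitude to x and latitude to y linearly.
  const u = (lon / (2 * Math.PI) + 0.5) * width;
  const v = (0.5 - lat / Math.PI) * height;
  return { u, v };
}

// Looking straight ahead samples the centre of the frame.
console.log(viewToPixel(0, 0, 3840, 1920)); // { u: 1920, v: 960 }
```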

Recordings of all flights are added to a database, which slowly grows to cover all weather conditions and seasons. Navigating through this spatial and temporal capture, non-expert audiences can experience the archaeological site in an exciting and immersive way.

The Archeodrone uses a tool common in archaeology — the drone — as a communication vehicle and gives a sense of authenticity and empowerment to the audience. It can inspire younger generations to engage with history, helps communicate the changing conditions of the sites and could become a platform for interactive drones in exhibition use beyond archaeology.

Susanne: How did the project come about?

Jussi: We were working on an exhibition project for a museum at an archaeological site and wanted to communicate the authenticity of the place — something that you cannot experience when you are at home, browsing the web. We wanted to provide a bird’s-eye view of the archaeological site to make visitors understand what the place is like. So we developed the idea to use the drone, which is a professional archaeological tool, for spatial communication.

Redia had been working on their octocopter drone for a while — so it was really out of this collaboration and common interest that we developed the concept for the Archeodrone together.

Susanne: Why is this project particularly interesting for ART+COM Studios?

Jussi: ART+COM creates new forms of communication using modern technology. The drone is such a piece of technology, a high tech gadget, which we use and make accessible to ordinary people. You don’t need to be an expert drone pilot to use it, because we make it accessible to you — and we tell a story with it.

Susanne: What is the most challenging part in the project?

Jussi: The laws of nature: when you do something outdoors, wind, rain, snow and all these kinds of things have a big effect. We cannot control the space like we normally can when working indoors. The second challenge is legal issues. In order to get permission to fly a drone in populated areas you must have a safe, redundant system, so that the drone cannot suddenly break, crash and plunge onto somebody or something. A drone still weighs a couple of kilos. That could be dangerous.

Our solution to this is that the drone does not always have to fly. In bad weather conditions or after sunset you cannot fly the drone, but you can still look at the previously recorded flight videos. It could even be used so that the drone does only a few flights altogether, with an interactive film presented as the output.

Susanne: So what’s the missing step from concept to realisation?

Jussi: There are still a couple of things that we need to do. One is to enable a fully autonomous flight of the drone, including take-off and safe landing, dealing with the batteries and so on. Another issue we must resolve is the video image. During the recording, the flight is not completely stable, so we must do a lot of post-processing in order to stitch the 360° film together. This computer-graphics part is not perfect yet, and we can make the video look a lot better than it does right now.

Susanne: What do you think about the UAE Drones for Good Award? What do you think it achieved, or will achieve in the future?

Jussi: Since it was the first time that this award competition was organised, there was a crazy range of different kinds of concepts. Probably about 60 per cent were recycling the most common services you can imagine, for example using drones for disaster relief or logistics. Many felt as though there was no real innovation in them; the ideas have been around, just not yet on the market.

Due to the predefined categories, you would have products by professional drone-building companies next to DIY drone hacks by students with visionary ideas, which makes these categories a bit difficult to evaluate. But at the same time, the award captured the current enthusiasm for drones and succeeded in bringing people together, and the winning teams absolutely deserved the prize.

Continue reading Redia’s blog article about the project

About Jussi Ängeslevä:
Jussi is vice creative director at ART+COM Studios, professor at the Berlin University of the Arts (UdK) and visiting lecturer on the IED programme at the Royal College of Art (RCA), London.
http://angesleva.iki.fi

About Dr Susanne Jaschko:
Susanne is an art historian and curator based in Berlin. Alongside her freelance writing for ART+COM Studios, she focuses on participatory art and design processes. prozessagenten website: http://prozessagenten.org

 


For the production of the digital dioramas at the new Moesgaard Museum, seven physically reconstructed human ancestors were combined with a virtual landscape, enabling the figures to retain their strong physical presence. A look through one of the seven sets of binoculars reveals the humans in their original habitat. Viewers feel as though they are inside the landscape, circling around the figures.

Before the landscape could be generated, sites, vegetation, fauna and possible interactions had to be defined. Everyday scenes portraying the ancient humans’ respective ways of life were developed in cooperation with museum scientists. It was also necessary to create a photorealistic replica of the staircase, since the spatial conditions of the site did not allow for rotations.

We made the first rough animatics of the desired orbits in Cinema 4D, which were then successively refined. First we set the lighting atmosphere and incidence of sunshine, then we filled the scenes with trees, bushes, grasses, stones, and many animals whose occurrence has been scientifically confirmed. To create realistic ground and rock, a fair number of displacements had to be calculated.

The rendering was done in Vray where, in addition to the beauty pass, several mask channels for the most important elements and a depth channel were rendered at the same time. Fur, like that of the dead antelope in the Turkana boy scene, was calculated in C4D with the physical renderer.

The virtual flight camera data was handed over as FBX to the company Mastermoves, which used a programmable crane system, the Milo Long Arm — one of only seven in use in the world — with a RED Epic to rotate the figures.

Identical lighting was also very important for the seamless integration of figure and computer animation. A real 360-degree orbit around the figures in greenscreen would never have been evenly illuminated, and the crane would inevitably have got into the light, so the figures were placed on a turntable. Reflectors were used for the surface ambient light, and a likewise reflected 2.5 kW ARRISUN lit the figure exactly as set out in the 3D animation. The light had to be hung on a round traverse system so that its movement could be synchronised with the rotation of the figure. In this way, the orbit around the figure could be converted into translational motion of the camera and rotational movement of the figure.
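The geometry behind that conversion can be sketched as follows (our own illustrative code, not the production pipeline): rotating the figure by the negative orbit angle while keeping the camera fixed reproduces the same relative pose as orbiting the camera around the figure.

```typescript
// Hypothetical sketch: convert a virtual camera orbit into a fixed camera
// plus a turntable rotation, frame by frame.

type Frame = { cameraX: number; cameraZ: number; turntableDeg: number };

function orbitToTurntable(orbitDeg: number, radius: number): Frame {
  // Orbiting the camera by +theta around the figure is equivalent to
  // keeping the camera at (0, radius) and rotating the figure by -theta.
  return { cameraX: 0, cameraZ: radius, turntableDeg: -orbitDeg };
}

// A full 360° orbit becomes one full turntable revolution.
for (let deg = 0; deg < 360; deg += 90) {
  console.log(orbitToTurntable(deg, 3)); // camera fixed, figure rotates
}
```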

The shooting was done in multiple passes: in front of a greenscreen and a black background, so that the figures, very hairy in parts, could be cleanly isolated later.

After the filming, everything was brought together in After Effects: the filmed figure, the rendered background, the staircase, and the separately rendered figure shadows. Motion blur, colour correction and small 3D elements, like the swarm of bees in the Sediba scene, were then added. Watch the construction of the 3D-renderings for post-production.

The finished films now run as image sequences on small computers in the foot of the binocular viewers. With a turn of a knob, the films can be played forwards and backwards and stopped at any point, where the motion-blurred frame is replaced by one without motion blur for the best image quality.

 

ART+COM Studios team

Art director 3D: Simon Häcker

3D: Susanne Träger, Simon Häcker

Compositing: Björn Müller

Project manager: Mascha Thomas

Mastermoves team, Visavis Filmproduktion

Director, project manager: Marcel Neumann

Lighting cameraman: Marcel Reategui

Motion control operator, 3D/CGI artist: Heiko Matting

Compositing artist: Oliver Thomas

Reconstructions Kennis & Kennis, Oscar Nilsson

Client Moesgaard Museum

Science advisor Peter C. Kjaergaard, PhD

Partner Moesgaard Museum Design Studio

 

 

 


People usually visit museums and exhibitions in groups: with family, with friends or in school classes. Unfortunately, social cohesion within groups is lost with the classic audio guides currently in use. The Museum App, which I developed as part of my Master’s thesis, turns an exhibition visit into a shared experience. The web app is part of a client-server system that can be installed at exhibitions, ensuring synchronous interaction. The project was supervised by Christian Riekhoff and Dr Joachim Quantz, both of ART+COM Studios.

How it works
On entering the exhibition, visitors log in to the wireless network on their mobile devices and are automatically redirected to the Museum App, which shows all content in the language detected on the device. The group is invited to choose an administrator, who then creates a virtual group with a unique group ID and a group colour. Once everyone has joined the group, the administrator can start the tour. There are currently five menu items to choose from that offer the group added value in terms of playful learning and additional information.
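A hypothetical sketch of the group mechanics on the server (the thesis code is not public; the message types and ID scheme below are illustrative assumptions):

```typescript
// Hypothetical sketch of the Museum App server: groups are kept in a
// registry keyed by ID, and group events are broadcast to every member.
import { WebSocketServer, WebSocket } from "ws";

type Group = { colour: string; members: Set<WebSocket> };
const groups = new Map<string, Group>(); // keyed by group ID

const wss = new WebSocketServer({ port: 8080 });

wss.on("connection", (socket) => {
  socket.on("message", (raw) => {
    const msg = JSON.parse(raw.toString());
    if (msg.type === "createGroup") {
      // The administrator picks a colour; the server issues a unique ID.
      const id = Math.random().toString(36).slice(2, 8);
      groups.set(id, { colour: msg.colour, members: new Set([socket]) });
      socket.send(JSON.stringify({ type: "groupCreated", id }));
    } else if (msg.type === "joinGroup") {
      groups.get(msg.id)?.members.add(socket);
    } else if (msg.type === "selectExhibit") {
      // Notify all members (in the real app, the group's colour would
      // also flash on the big map at the exhibit's location).
      const group = groups.get(msg.id);
      group?.members.forEach((member) =>
        member.send(JSON.stringify({
          type: "exhibitSelected",
          exhibit: msg.exhibit,
          colour: group.colour,
        }))
      );
    }
  });
});
```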

Organising a tour
An exhibition map displayed on a large screen in the entrance area shows all active groups, including the number of members in each. The group can organise their tour by choosing exhibits on their mobile device, and as exhibits are selected, the group’s colour flashes on the large map at the same location. The choices of the group may prompt other visitors to look at the selected exhibits as well.

Synchronous audio
In contrast to a classic audio guide, Museum App plays audio to all group members synchronously, e.g., while watching a video displayed without sound in the exhibition.
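How such synchronous playback might be triggered can be sketched like this (illustrative only; it assumes the devices already share a clock offset with the server, for example via ping-based synchronisation as in the research post above):

```typescript
// Hypothetical browser-side sketch: start the same audio buffer on every
// device at an agreed wall-clock instant.

function playAt(
  context: AudioContext,
  buffer: AudioBuffer,
  serverStartTime: number, // target start, in server-clock milliseconds
  clockOffset: number      // serverNow minus localNow, in milliseconds
): void {
  const localStartTime = serverStartTime - clockOffset;      // same instant, local clock
  const delaySeconds = (localStartTime - Date.now()) / 1000; // how far in the future
  const source = context.createBufferSource();
  source.buffer = buffer;
  source.connect(context.destination);
  // Web Audio scheduling is sample-accurate, unlike setTimeout.
  source.start(context.currentTime + Math.max(0, delaySeconds));
}
```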

Shared learning experience
The group is pushed to exchange information on exhibits through the way information is received. Administrators, for example, receive only images while members get the respective text. Another feature offers each member a mosaic piece of an image. In order to see the whole picture, the group members must bring their mobile devices together.

Museum quiz
Groups can play the museum quiz together. Questions are displayed with the possible answers on both a big screen and on mobile devices. If a majority of members answers correctly, the group gets a point. The group’s score can be saved to a high-score list after all questions have been answered.

Guestbook
At the exit the visitors encounter the virtual guestbook on another big screen. Using their devices, they can type feedback and click on send or make a throw gesture to make the comment appear on the screen along with an avatar (gravatar.com).

Résumé
The app forms the basis of a new interactive, language-sensitive system for exhibitions. The current functions are only examples of possible applications and interactions. With the app, museums and exhibitions can reach out to new target groups and create alternative visitor experiences. Museum App video on Vimeo

About me:
I started working as a student employee at ART+COM Studios in January 2014. I studied Media and Computer Sciences and will soon graduate from Beuth University of Applied Sciences in Berlin. www.wiesenberg.info


On the development of the new Zerseher

September 17, 2014, Susanne Jaschko

Until mid-December, the Akademie der Künste in Berlin is showing the exhibition “Schwindel der Wirklichkeit” (Vertigo of Reality), which includes artworks by Marina Abramović, Thomas Demand, Ólafur Eliasson, Valie Export, Harun Farocki, Dan Graham, Bruce Nauman, Nam June Paik, Ulrike Rosenbach, Bill Viola and others.

ART+COM Studios was invited to present Zerseher, a seminal work from the early 1990s. Since neither its hardware nor its software still exists, a small team created a contemporary version of the earlier artwork in just two months. For this interview, I spoke with two core members of the team, Raphaël and Dimitar, about its development and design. Both are computational designers and worked together on the design and development of the new Zerseher.

Susanne: Would you say that the new Zerseher is a completely new piece or a further development of the original Zerseher created by Joachim Sauter and Dirk Lüsebrink in 1992? What are the similarities and differences between the two works?

Raphaël: I would say it is a new piece, but we tried to keep the spirit of the original. We started from scratch on many levels, but we kept what made the original Zerseher interesting: the idea that the way you look at an object affects the object and the perception you have of it, which is embodied in the gaze. The original Zerseher was a statement about digital art of that time and promoted the idea of interactivity, which is why it would not have made sense conceptually to simply redo it. The new Zerseher works on very different principles than the first one did. Despite some technical similarities, the mechanics, the code and the visual effect are completely different.

Dimitar: Technologically, the first Zerseher was pioneering work, and one side effect was that it didn’t run completely smoothly as an exhibition piece. In contrast, the modules of the new Zerseher are almost all off-the-shelf products. Also, the software has evolved a lot, so even we designers are now able to write code that produces graphically compelling results, which was not possible 20 years ago.

Susanne: What were the technical challenges of the project? Did you have to solve certain problems in very specific ways?

Raphaël: Something that the original Zerseher had — and Joachim was keen to include as a central element in the new version — was the image of the eye. The original piece used an eye-tracking device that was as big as a small fridge. The new one, on the other hand, is the size of a pencil case, and though it is better in many ways, it doesn’t give access to the image of the eye used for the tracking. That caused a lot of headaches. We combined the tracker with a high-resolution camera to isolate the image of the eye and display it on the screen. In the first stage, we tried to project the 3D coordinates of the eye that we get from the tracker onto the image plane of the camera, but we did not manage it within the timeframe we had for this project. Instead we decided to do computer-vision analysis of the camera image in order to capture the eye of the spectator. The eye tracker still plays a central role, tracking the gaze of the user, but also detecting whether anyone is present and not moving too much, so that a sharp picture can be taken.

We did not want people to go through a time-intensive calibration process, but we feared that the tracker would not work that well for people with glasses or contact lenses. A lot of time went into the investigation of the gaze-tracking and the software that comes with the eye-tracker. In the end, the piece will have a sign stating that people with glasses and contact lenses might not enjoy the full experience, which is a compromise that we feel is okay.

Susanne: There were also clear limitations in developing the piece such as the budget and the timeframe. How did these and other specifications affect the project?

Dimitar: I don’t think of them as limitations but as trade-offs. There are always multiple ways of doing things. If you can’t do it in an optimal way, then you can come up with an alternative solution, which might work almost as well. In this case, some of the conditions pushed the concept and the visuals, like the fact that we used a black and white camera provided by our partner Allied Vision Tech. That had a beneficial effect on the aesthetic quality of the piece.

Raphaël: Also the project made us work on the eye-tracking technology, which is certainly going to be useful in a current research project on multi-modal interaction and interaction design for the home and industrial environments.

Susanne: Let’s talk aesthetics and visuals. The new Zerseher shows the image of the viewer’s eye that is then distorted and even ruptured or cut open by its own gaze. What was the inspiration for these visuals?

Raphaël: We tried many things and finally stayed with something that seemed right. The final visual effect takes a lot from existing Processing sketches that we used as inspiration. Something that I am really keen on is using the physical properties of real-world materials: not to simulate them as accurately as possible, but to make the piece more sensual. When we see a material and how it moves or reacts, we feel a certain way, because it reminds us of past experiences. These are the raw materials of generative and digital art: processes — natural, engineered or hybrid — and their computational representation.

Susanne: The visuals immediately remind me of skin, as one sees a body part, the eye that is cut. This image of the cut eye has a strong emotional effect, and of course it recalls famous predecessors like the scene in Buñuel’s film “Un Chien Andalou”.

Raphaël: The cutting is also a reference to the original Zerseher, which showed a canvas.

Dimitar: I think the piece is also about observation and what is at the root of machine intelligence, because it finds our eye. You can read it as a kind of critique of our obsession with ourselves that is fuelled by social media.

About Dimitar Ruszev:
Dimitar grew up in Budapest. He studied communication design at the Potsdam University of Applied Sciences, and works for ART+COM Studios as computational designer. He was gradually drawn to interaction and generative design and has a great interest in exploring the intersection of art, technology and society.

About Raphaël de Courville:
Raphaël is a graphic designer and new media artist from Paris. Aside from his research and computational design work at ART+COM Studios he teaches workshops on such topics as computational aesthetics. He is also one of the organisers of the monthly Creative Code Jam & Creative Coding Stammtisch. Raphaël on Twitter: https://twitter.com/sableraph

About Dr Susanne Jaschko:
Susanne is an art historian and curator based in Berlin. Alongside her freelance writing for ART+COM Studios, she focuses on participatory art and design processes. prozessagenten website: http://prozessagenten.org

About the project:
Zerseher, 2014
Artist: Joachim Sauter
Computational design: Raphaël de Courville, Dimitar Ruszev
Software development: Sinan Goo, Teemu Kallio
Construction: Marcus Heyden
Commissioned by Berlin Akademie der Künste for the exhibition “Schwindel der Wirklichkeit”, September 17 – December 14, 2014


Prototyping — Thinking ideas in the world

November 13, 2014, Jussi Ängeslevä

My understanding of ‘prototyping’ is ‘thinking in the world’ or ‘integrating with the world by physical means’. Prototyping is the creation of simplified and abstracted models, which can be used to examine individual aspects of a design. To see the overall complexity and effect of a design ‘purely in thought’ is impossible past the very earliest stage.

The way we perceive the world with our eyes can be compared to prototyping: the whole is not considered; rather, tiny segments and specific details that make up the whole are tested. We imagine we are seeing a continuous, consistent image of the world, but in fact this image is a construction of the mind. Our eyes are constantly moving, focusing on individual details and scanning them, so to speak.

In our practice, prototyping is the decision matrix of our design process. Even in the inspiration phase we frequently use prototyping; studies of a prototype help in the evaluation and concretisation of concepts, so decisions about which direction to take can be made early on. The same applies to hardware prototyping, here providing ergonomic and production test results that significantly influence further development. Later in the design process, through behavioural studies and application tests, the concept is optimised in an advanced prototype.

So prototyping helps bring form and clarity to ideas. Further, a prototype can be tested and experimented with — which is impossible with written concepts and visualisations. Perhaps the most surprising aspect of prototyping is that it can serve as a communication tool. It has become an instrument of communication, particularly with clients, because it is often difficult to convey principal ideas and concepts without a prototype. The strength of prototyping here is the prototype itself — the simplification and abstraction of an idea. Of course, a prototype cannot relate the whole truth; it is merely a functional model. But its strength as a communication medium is in its physical presence and a focus on essentials. A strong impression can be made on clients with a prototype. It fosters transparency and trust for the continued design process — it has at times been so convincing that the client granted complete freedom on an idea — because a prototype is an idea put into the world and made into an engaging experience.

And specifically for BMW Kinetik: during the design phase, the project grew more and more complex, so early in the process we asked: how can we quickly transpose the design into a three-dimensional prototype? How many balls will represent the object well, and is the system even technically feasible? We wanted to convince the client that the effort would be worth it, so we made a hardware prototype to be sure that it would function technically in the long term: a cabin with twenty-five metal balls hanging on wires that could be raised and lowered with individually controllable stepper motors. The prototype enabled us to test and further develop the aesthetic effect of the movement of the three-dimensional matrix. We then combined this prototype with a software prototype so that the client could see both the movement of the balls in physical space and watch the dramaturgy of a complete simulation on screen at the same moment. The answer: “Wow, that is amazing. Let’s do it!”

This text was published in German under the title “Beyond prototyping. Design and Rapid Manufacturing” in Paul, D., Sick, A. eds. “Rauchwolken und Luftschlösser: Temporäre Räume”, Textem Verlag, Hamburg, Germany, 2013.

More on this topic:
Video Conversations on Prototyping: Jussi Ängeslevä
Research Project Rethinking Prototyping

About me:
I am vice creative director at ART+COM Studios, professor at the Berlin University of the Arts (UdK) and visiting lecturer on the IED programme at the Royal College of Art (RCA), London. http://angesleva.iki.fi


Closer to man at the new Moesgaard Museum

October 13, 2014, ART+COM Studios

At the new Moesgaard Museum visitors literally come face-to-face with the ancestors of the human race. The museum accommodates archaeological and ethnographic exhibitions, special exhibitions, an auditorium, conference rooms and other visitor facilities. The combination of science with new ways of displaying the artefacts and the use of technology is characteristic of the exhibitions of the new museum. The archaeological exhibitions display the lives of past human species through narratives and settings with light, sound and animations.

In collaboration with Moesgaard Museum Design Studio, ART+COM Studios developed two elements for the permanent exhibition: digital dioramas and reflective displays.

A unique collection of anatomically precise reconstructions of human species greets visitors as early as the staircase in the museum foyer. The figures can be experienced up close or through ‘binoculars’. Looking through the binoculars at the Homo sapiens or another human species reconstructed by the artists Kennis & Kennis, one sees a digital diorama of the lifelike figures in their indigenous settings. The viewer feels as though they are moving around the figures and inside the landscape. In order to achieve this effect, each environment was built in 3D and a virtual tracking shot was designed. The data of the virtual flight was used to film the physical reconstructions with a motion-control system that followed the exact perspective of the virtual camera. Thus a seamless integration of the figures into the 3D environment was achieved. As a result, the viewer gets a rich impression of the habitats while the figures keep their strong physical presence.

On the lower ground floor, visitors can make a direct connection between the skulls of various human species and their reconstructed heads with the help of the reflective displays created by ART+COM Studios. The reflective glass superimposes the reflection of the skull on the modelled head and vice versa.

Moesgaard Museum Website


Micropia opened in Amsterdam

October 1, 2014, ART+COM Studios

Yesterday a unique museum opened that is set to inspire the general public, encouraging their interest in microorganisms and microbiology: Micropia. The visual and the experiential are central, while the focus is firmly on the mostly positive relationship between microbes and humans. The idea for Micropia came from Haig Balian, Director of Amsterdam’s Natura Artis Magistra (known as Artis), the oldest zoo in the Netherlands (1838) and one of the oldest in the world. Micropia also aims to become an international platform for microbiology that brings diverse interest groups together in order to bridge the gap between science and the general public.

The exhibition was designed by Kossmann.dejong, an Amsterdam-based exhibition design studio, in collaboration with ART+COM Studios. While Kossmann.dejong were responsible for the overall concept and scenography, ART+COM worked primarily on the conception, design and development of the media exhibits, from the very earliest sketches through to interaction design, programming and hardware design.

The uniqueness of Micropia lies in its mix of living and virtual microbes. Most exhibits show living microorganisms and also have a media extension. Film, images and text provide insight into microbe appearance and behaviour, and into the diversity of microorganisms’ relationships with humans.

Additionally, five large, pure media exhibits vividly convey microbiological knowledge. Right at the beginning of the exhibition, in the lift, a camera image zooms over the heads of visitors, closing in on the eye of one to reveal the microorganisms that commonly live on our eyelashes.

Dramatic differences in size between various microscopic life forms can be seen on an almost 10 x 5 metre reactive monitor wall, on which microbes swim as though in an aquarium.

In an interactive “extremophile” panorama, visitors experience microbial life in places of extreme conditions, such as radioactivity or cold. The virtual microorganisms float above dynamic 3D landscapes thanks to a special projection technique.

Visitors encounter themselves and their microbes repeatedly throughout the exhibition, but perhaps most intensively with an installation where their bodies can be interactively explored: the visitor confronts a virtual ‘me’ and can explore the, on average, two-kilogram mass of microbes living in various parts of the body. The technology used is sophisticated body tracking.

Alongside the microbes, the visitor is central to the exhibition — and on a number of levels: as observer, interactive agent and researcher, and as a valid object of self-exploration.

Artis Website

Photos: Thijs Wolzak; Micropia, Maarten van der Wal


The Smart Home field has been receiving attention from computer technologists and designers for some time, and its promise comes within closer reach of consumers every day. And while a smart home is filled with sensors, screens and chips, their presence should be attenuated — technology fades into the background as attention-seeking notifications make way for calm interactions. This is made possible by ‘smart’ technology: interconnected devices that are able to think for us. The building blocks that make up a smart home are readily available, WiFi-connected lights (Philips Hue) for example, and smart thermostats (Nest). But will connected devices ever gain mainstream adoption in the face of high costs and growing concerns over privacy?

Regardless, the house of the future means an explosion in electronics: a huge increase in the number of screens, chips and sensors, all able to connect to the Internet. But don’t we already have such devices lying around at home, devices that are currently collecting dust? Old smartphones: Androids that can’t run the latest OS anymore, iPhones replaced by thinner ones, or phones that weren’t worth the replacement of a cracked screen. These legacy devices may still work, but they no longer meet our requirements for a smartphone. So can we, instead of fabricating new dedicated smart-home devices, use the old phones that we don’t use but don’t throw away?

As part of my internship at ART+COM, I explored the idea of using legacy devices in a smart home. To make the concept more tangible, a quick prototype was made. The starting point of the prototype was an old iPhone 3GS a team member had lying around. This phone is well suited to running web apps, so a small webpage was quickly developed. To transform the iPhone into something other than a phone, a new phone case was made. The form was inspired by crochet — a decoration often found in houses that has no association with technology. The shape was then 3D printed, with smaller details added by computer-cut stickers.

The prototype functions as an ambient display, slowly changing its contents over time. It abstractly visualises data acquired either via its sensors or via other connected devices. The display also reacts to direct user interactions, changing contents when the user touches the screen. This illustrates how the user can always influence whatever the device is controlling or showing.
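A tiny sketch in the spirit of that prototype (illustrative; the internship code itself is not shown here): a web page whose colour drifts slowly over time and shifts at once when touched.

```typescript
// Hypothetical sketch of the ambient display: the page's colour drifts
// slowly over time and jumps when the user touches the screen.

let hue = 0;

function render(): void {
  document.body.style.backgroundColor = `hsl(${hue % 360}, 60%, 70%)`;
}

// Slow ambient change: roughly one full colour cycle per hour.
setInterval(() => {
  hue += 0.1;
  render();
}, 1000);

// Direct interaction: a touch visibly shifts the display immediately.
document.addEventListener("pointerdown", () => {
  hue += 45;
  render();
});

render();
```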

While this prototype doesn’t provide any practical functionality, it does demonstrate what is possible with legacy phones. We should start making use of their limited specifications, which are still ample for dedicated tasks. Being able to try out smart-home applications without buying a new device, just by running an app and printing a case, will definitely lower the barrier to entry. And if we go a step further and create those apps ourselves, together with a functional case, the result could fit the needs of our own smart homes even better. Is there any faster way to bring smart technology into people’s homes than utilising the electronics that are already present?

About me:
I have been an intern at ART+COM for five months, during which I was part of the research team. I am studying at the Delft University of Technology, where I am currently doing a Master’s in Design for Interaction. www.jip.xyz