In this blog we report on current projects and activities. We give insight into our design philosophy and practice, our technological developments and software solutions. We also feature events and projects that we consider interesting and important.

On the development of the new Zerseher

September 17, 2014, Susanne Jaschko

Until mid-December, the Akademie der Künste in Berlin is showing the exhibition “Schwindel der Wirklichkeit” (Vertigo of Reality), which includes artworks by Marina Abramović, Thomas Demand, Ólafur Eliasson, Valie Export, Harun Farocki, Dan Graham, Bruce Nauman, Nam June Paik, Ulrike Rosenbach, Bill Viola and others.

ART+COM Studios was invited to present a new piece that reinterprets one of its seminal works from the early 1990s, Zerseher. A small team created a contemporary version of the earlier artwork in just two months. For this interview, I spoke with two of its core members, Raphaël and Dimitar, both computational designers, about the design and development of the new Zerseher.

Susanne: Would you say that the new Zerseher is a completely new piece or a further development of the original Zerseher created by Joachim Sauter and Dirk Lüsebrink in 1992? What are the similarities and differences between the two works?

Raphaël: I would say it is a new piece, but we tried to keep the spirit of the original. We started from scratch on many levels, but we kept what made the original Zerseher interesting: the idea that the way you look at an object affects the object and the perception you have of it, which is embodied in the gaze. The original Zerseher was a statement about digital art of that time and promoted the idea of interactivity, which is why it would not have made sense conceptually to simply redo it. The new Zerseher works on very different principles than the first one did. Despite some technical similarities, the mechanics, the code and the visual effect are completely different.

Dimitar: Technologically, the first Zerseher was pioneering work, and one side effect was that it didn’t run completely smoothly as an exhibition piece. In contrast, the modules of the new Zerseher are almost all off-the-shelf products. The software has also evolved a lot, so even we designers are now able to write code that produces graphically compelling results, which was not possible 20 years ago.

Susanne: What were the technical challenges of the project? Did you have to solve certain problems in very specific ways?

Raphaël: Something that the original Zerseher had — and Joachim was keen to include as a central element in the new version — was the image of the eye. The original piece used an eye-tracking device that was as big as a small fridge. The new one, on the other hand, is the size of a pencil case, and though it is better in many ways, it doesn’t give access to the image of the eye used for the tracking. That caused a lot of headaches. We combined the tracker with a high-resolution camera to isolate the image of the eye and display it on the screen. In the first stage, we tried to project the 3D coordinates of the eye that we get from the tracker onto the image plane of the camera. But we did not manage within the timeframe that we had for this project. Instead we decided to do computer-vision analysis of the camera image in order to capture the eye of the spectator. The eye tracker still plays a central role, tracking the gaze of the user, but also detecting if anyone is present and not moving too much, so that a sharp picture can be taken.
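The projection step Raphaël mentions — mapping the tracker’s 3D eye coordinates onto the camera’s image plane — follows the standard pinhole camera model. A minimal sketch of that idea, where the intrinsic parameters (focal lengths, principal point) are hypothetical placeholder values, not those of the installation’s actual camera:

```python
# Pinhole projection of a 3D point, assumed to be expressed in the
# camera's own coordinate frame, onto the image plane.
# fx, fy, cx, cy are illustrative intrinsics for a 1920x1080 sensor.

def project_to_image(x, y, z, fx=1400.0, fy=1400.0, cx=960.0, cy=540.0):
    """Map a 3D point (metres, camera frame) to pixel coordinates (u, v)."""
    if z <= 0:
        raise ValueError("point must lie in front of the camera")
    u = fx * x / z + cx  # horizontal pixel coordinate
    v = fy * y / z + cy  # vertical pixel coordinate
    return u, v

# Example: an eye 60 cm in front of the camera, slightly left of and
# above the optical axis.
u, v = project_to_image(-0.05, -0.03, 0.6)
```

The hard part in practice is not this formula but calibrating the rigid transform between the tracker’s coordinate frame and the camera’s, which is presumably why the team fell back on computer-vision analysis of the camera image instead.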

We did not want people to go through a time-intensive calibration process, but we feared that the tracker would not work that well for people with glasses or contact lenses. A lot of time went into investigating the gaze tracking and the software that comes with the eye tracker. In the end, the piece will have a sign stating that people with glasses and contact lenses might not enjoy the full experience, a compromise that we feel is okay.

Susanne: There were also clear limitations in developing the piece such as the budget and the timeframe. How did these and other specifications affect the project?

Dimitar: I don’t think of them as limitations but as trade-offs. There are always multiple ways of doing things. If you can’t do it in an optimal way, then you can come up with an alternative solution, which might work almost as well. In this case, some of the conditions pushed the concept and the visuals, like the fact that we used a black-and-white camera provided by our partner Allied Vision Tech. That had a beneficial effect on the aesthetic quality of the piece.

Raphaël: The project also got us working on eye-tracking technology, which will certainly be useful in a current research project on multi-modal interaction and interaction design for home and industrial environments.

Susanne: Let’s talk aesthetics and visuals. The new Zerseher shows the image of the viewer’s eye that is then distorted and even ruptured or cut open by its own gaze. What was the inspiration for these visuals?

Raphaël: We tried many things and finally stayed with something that seemed right. The final visual effect takes a lot from existing Processing sketches that we used as inspiration. Something I am really keen on is using the physical properties of real-world materials, not to simulate them as accurately as possible, but to make the piece more sensual. When we see a material and how it moves or reacts, we feel a certain way, because it reminds us of past experiences. These are the raw materials of generative and digital art: processes — natural, engineered or hybrid — and their computational representation.
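The “material-like” motion Raphaël describes is typically built from very simple numerical processes rather than accurate physical simulation. A hypothetical sketch of one such building block, common in Processing-style sketches: a damped spring that eases a value toward a target, overshooting slightly and settling like a physical object (the stiffness and damping constants here are illustrative, not taken from the piece):

```python
def damped_spring(value, velocity, target, stiffness=0.1, damping=0.8):
    """One integration step of a damped spring; returns (new_value, new_velocity)."""
    velocity += (target - value) * stiffness  # pull toward the target
    velocity *= damping                       # dissipate energy
    return value + velocity, velocity

# Ease a point from 0 toward 100: it overshoots a little, then settles,
# which reads as material-like motion rather than a linear tween.
value, velocity = 0.0, 0.0
for _ in range(200):
    value, velocity = damped_spring(value, velocity, 100.0)
```

Stacking a few such processes — springs, friction, noise — is often enough to evoke cloth, skin or liquid without simulating any of them faithfully.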

Susanne: The visuals immediately remind me of skin, as one sees a body part, the eye that is cut. This image of the cut eye has a strong emotional effect, and of course it recalls famous predecessors like the scene in Buñuel’s film “Un Chien Andalou”.

Raphaël: The cutting is also a reference to the original Zerseher, which showed a canvas.

Dimitar: I think the piece is also about observation and about what lies at the root of machine intelligence, because it finds our eye. You can read it as a kind of critique of our obsession with ourselves, fuelled by social media.

About Dimitar Ruszev:
Dimitar grew up in Budapest. He studied communication design at the Potsdam University of Applied Sciences and works for ART+COM Studios as a computational designer. He was gradually drawn to interaction and generative design and has a great interest in exploring the intersection of art, technology and society.

About Raphaël de Courville:
Raphaël is a graphic designer and new media artist from Paris. Aside from his research and computational design work at ART+COM Studios he teaches workshops on such topics as computational aesthetics. He is also one of the organisers of the monthly Creative Code Jam & Creative Coding Stammtisch. Raphaël on Twitter:

About Dr Susanne Jaschko:
Susanne is an art historian and curator based in Berlin. Next to her freelance writing for ART+COM Studios, she focuses on participatory art and design processes. prozessagenten website:

About the project:
Zerseher, 2014
Art direction: Joachim Sauter
Design: Raphaël de Courville, Dimitar Ruszev
Software development: Sinan Goo, Teemu Kallio
Construction: Marcus Heyden
Commissioned by Berlin Akademie der Künste for the exhibition “Schwindel der Wirklichkeit”, September 17 – December 14, 2014

Prototyping — Thinking ideas in the world

November 13, 2014, Jussi Ängeslevä

My understanding of ‘prototyping’ is ‘thinking in the world’, or ‘interacting with the world by physical means’. Prototyping is the creation of simplified and abstracted models that can be used to examine individual aspects of a design. Seeing the overall complexity and effect of a design ‘purely in thought’ is impossible past the very earliest stage.

Prototyping can be compared to the way we perceive the world with our eyes: the whole is not considered; rather, tiny segments and specific details that make up the whole are tested. We imagine we are seeing a continuous, consistent image of the world, but in fact this image is a construction of the mind: our eyes are constantly moving, focusing on individual details and scanning them, so to speak.

In our practice, prototyping is the decision matrix of our design process. Even in the inspiration phase we frequently use prototyping; studies of a prototype help in the evaluation and concretisation of concepts, so decisions about which direction to take can be made early on. The same applies to hardware prototyping, here providing ergonomic and production test results that significantly influence further development. Later in the design process, through behavioural studies and application tests, the concept is optimised in an advanced prototype.

So prototyping helps bring form and clarity to ideas. Further, a prototype can be tested and experimented with, which is impossible with written concepts and visualisations. Perhaps the most surprising aspect of prototyping is that it can serve as a communication tool, particularly with clients, because it is often difficult to convey principal ideas and concepts without a prototype. The strength of prototyping here is the prototype itself: the simplification and abstraction of an idea. Of course, a prototype cannot relate the whole truth; it is merely a functional model. But its strength as a communication medium lies in its physical presence and its focus on essentials. A prototype can make a strong impression on clients. It fosters transparency and trust for the continued design process, at times so convincingly that the client granted complete freedom on an idea, because a prototype is an idea put into the world and made into an engaging experience.

And specifically for BMW Kinetik: During the design phase, the project grew more and more complex, so, early in the process we asked: How can we get the design quickly transposed into a three-dimensional prototype? How many balls will represent the object well, and is the system technically even feasible? We wanted to convince the client that the effort would be worth it, so we made a hardware prototype to be sure that it would function technically in the long term: a cabin with twenty-five metal balls hanging on wires that could be raised and lowered with individually controllable step motors. The prototype enabled us to test and further develop the aesthetic effect of the movement of the three-dimensional matrix. We then combined this prototype with a software prototype so that the client could see both the movement of the balls in physical space and watch the dramaturgy of a complete simulation on screen at the same moment. The answer: “Wow, that is amazing. Let’s do it!”
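The control problem behind such a rig is simple at its core: each ball hangs on a wire spooled by a stepper motor, so moving a ball to a target height means converting a height difference into a signed number of motor steps. A hypothetical sketch of that conversion — the steps-per-revolution and spool-circumference figures are illustrative placeholders, not the values of the actual BMW Kinetik hardware:

```python
# Illustrative stepper conversion for one suspended ball.
STEPS_PER_REV = 200          # full steps per motor revolution (hypothetical)
SPOOL_CIRCUMFERENCE_MM = 60  # wire wound/unwound per revolution (hypothetical)

def steps_for_move(current_mm, target_mm):
    """Signed number of full steps to move a ball from its current height
    to a target height; positive raises the ball, negative lowers it."""
    delta_mm = target_mm - current_mm
    return round(delta_mm / SPOOL_CIRCUMFERENCE_MM * STEPS_PER_REV)

# Raising a ball by one full spool turn (60 mm) costs exactly 200 steps:
n_up = steps_for_move(0.0, 60.0)
# Lowering a ball by 150 mm:
n_down = steps_for_move(400.0, 250.0)
```

With twenty-five such balls, a frame of the animation is just twenty-five target heights; the interesting work, as the text suggests, lies in the dramaturgy of those heights over time, which is exactly what the paired software prototype simulated on screen.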

This text was published in German under the title “Beyond prototyping. Design and Rapid Manufacturing” in Paul, D., Sick, A. eds. “Rauchwolken und Luftschlösser: Temporäre Räume”, Textem Verlag, Hamburg, Germany, 2013.

More on this topic:
Video Conversations on Prototyping: Jussi Ängeslevä
Research Project Rethinking Prototyping

About me:
I am vice creative director at ART+COM Studios, professor at the Berlin University of the Arts (UdK) and visiting lecturer on the IED programme at the Royal College of Art (RCA), London.

The smart home field has been receiving attention from computer technologists and designers for some time already, and its promise comes within closer reach of consumers every day. And while a smart home is filled with sensors, screens and chips, their presence should be attenuated: technology fades into the background as attention-seeking notifications make way for calm interactions. This is made possible by ‘smart’ technology: interconnected devices that are able to think for us. The building blocks that make up a smart home are readily available, for example WiFi-connected lights (Philips Hue) and smart thermostats (Nest). But will connected devices ever gain mainstream adoption in the face of high costs and growing concerns over privacy?

Regardless, the house of the future means an explosion in electronics: a huge increase in the number of screens, chips and sensors, all able to connect to the Internet. But don’t we already have such devices lying around at home, currently collecting dust? Old smartphones: Androids that can’t run the latest OS anymore, iPhones replaced by thinner ones, or phones that weren’t worth the replacement of a cracked screen. These legacy devices may still work, but they no longer meet our requirements for a smartphone. So can we, instead of fabricating new dedicated smart home devices, use the old phones that we don’t use but don’t throw away?

As part of my internship at ART+COM, I explored the idea of using legacy devices in a smart home. To make the concept more tangible, a quick prototype was made. The starting point of the prototype was an old iPhone 3GS a team member had lying around. This phone is well suited to running web apps, so a small webpage was quickly developed. To transform the iPhone into something other than a phone, a new phone case was made. The form was inspired by crochet, a decoration often found in houses that has no association with technology. The shape was then 3D printed, with smaller details added by computer-cut stickers.

The prototype functions as an ambient display, slowly changing its contents over time. It abstractly visualises data acquired either via its sensors or via other connected devices. The display also reacts to direct user interactions, changing contents when the user touches the screen. This illustrates how the user can always influence whatever the device is controlling or showing.
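The blog post does not describe the web app’s internals, but the ambient-display behaviour it describes can be sketched in a few lines: the displayed value trails the sensor reading via exponential smoothing, so the screen changes slowly, while a direct touch takes effect immediately. A hypothetical sketch of that update rule (function name and smoothing constant are my own, not from the prototype):

```python
# Illustrative ambient-display update rule, one call per display tick.

def ambient_update(displayed, sensor, touch=None, smoothing=0.05):
    """Return the next displayed value: slow drift toward the sensor
    reading, unless the user touched the screen this tick."""
    if touch is not None:
        return touch  # direct interaction overrides the ambient drift
    return displayed + smoothing * (sensor - displayed)

# The display creeps toward a new sensor reading over many ticks...
value = 20.0
for _ in range(10):
    value = ambient_update(value, 25.0)
# ...but a touch changes it instantly.
value = ambient_update(value, 25.0, touch=22.0)
```

The same two-path structure — slow ambient change, instant response to touch — carries over regardless of whether the display shows colour, shape or text.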

While this prototype doesn’t provide any practical functionality, it does demonstrate what is possible with legacy phones. We should start making use of their limited specifications, which are still ample for dedicated tasks. Being able to try out smart home applications without buying a new device, simply by running an app and printing a case, will definitely lower the threshold. If we go a step further and create those apps ourselves, together with a functional case, the result could fit the needs of our own smart homes even better. Is there any faster way to bring smart technology into people’s homes than utilising the electronics that are already present?

About me:
I have been an intern at ART+COM for five months, during which I was part of the research team. I am studying at Delft University of Technology, where I am currently doing a Master’s in Design for Interaction.