Some of my projects investigate technologically augmented environments. Others attempt to grant new ways of viewing events that were previously inaccessible due to their spatial or temporal sequence.
The work Screen Spaces is the product of a one-year collaboration between the University Hospital Cologne and the Köln International School of Design. Through numerous observations and audiovisual documentations, the work shows how the spatial relationship between the surgeon and the surgical field changes through the use of the Da Vinci® Surgical System. The collected material is novel in the field of investigative spatial research and allows surgeons to reflect on their own work with the system. The particular creative relevance of this exploration lies in its contribution to the social discourse on the handling of such systems; in the sense of science communication, it increases the transparency of complex fields of application.
Since some of the following images may be disturbing to some viewers, they have been blurred. To see the unblurred version, click on the corresponding photo; the masking will then be temporarily removed.
The Da Vinci robot is a modular system whose components are distributed throughout the operating room. Its purpose is to assist the surgeon with minimally invasive surgical interventions.
The structure and the spatial arrangement of the different components result in an altered distribution of roles and tasks among the surgical staff. The surgeon treats the patient from a console turned away from the surgical field; the console captures the surgeon's hand movements and transfers them to the endoscopic instruments in the surgical area via digital signal transmission.
The Da Vinci XI Surgical System was originally developed for the initial treatment of wounded soldiers in crisis and war zones: the surgeon was to be able to perform surgical procedures from a distance without being exposed to the risk of injury.
For this purpose, existing telemanipulators were further developed. Such devices had been built since the mid-1940s for the safe handling of radioactive material within hermetically sealed rooms (so-called hot cells). These first, still rather crude telemanipulators transmitted the hand movements of the "operator" to the corresponding gripper arm inside the contaminated cell by purely mechanical power transmission. From the 1980s onwards, thanks to rapid advances in microelectronics, it became possible to spatially separate the transmitter and receiver modules and to digitize the signal transmission between them.
Instead of direct mechanical power transmission between the manipulator and the gripper arm, electric motors controlled by microcontrollers actuated Bowden cables, which in turn moved the gripper arm. In this way, transmitter and receiver modules could be spatially separated. As these systems matured technologically, increasingly complex motion schemes became feasible. The current Da Vinci system offers seven degrees of freedom, more than the human hand. Intuitive Surgical calls these endoscopic instrument attachments, driven by Bowden cables and servo motors, "EndoWrists": endoscopic wrists.
While medical studies attempt to verify the benefit of the Da Vinci® system, questions about both the image-space structure and the field of action between the system and the participants in surgery are largely left aside. How the surgeon's perception is structured while using the system, how the system shapes the surgeon's actions, and which senses apart from vision are engaged have not yet been the subject of systematic inquiry.
The joint interdisciplinary project of Cologne University Hospital and KISD – Köln International School of Design of TH Köln sets out to identify the short- and long-term effects of this surgical system and to uncover the conditions and possibilities of this complex human-machine interaction.
Adobe After Effects
Kölner Designpreis 2019 (Nominated)
KISD Parcours 2019
GFM-Tagung „MEDIEN-MATERIALITÄTEN“ Uni Köln
Prof. Dr. Carolin Höfler TH Köln
Priv.-Doz. Dr. Hans Fuchs Uniklinik Köln
Univ.-Prof. Dr. Christiane Bruns Uniklinik Köln
The new VR headsets from Facebook, Samsung, Google, and HTC are entering the mass market and promise full immersion in virtual reality. Head-mounted displays of the latest generation transform perception to such a degree that the large companies announce the total replacement of reality by immersive simulation.
Usually, VR technology is used to build alternative, fictitious worlds. In contrast, the students of the short-term project "With Eyes Wide Shut" use the technology to construct a virtual reality that simulates the real, physical environment of the viewer in real time. In this way, the virtual visual space is superimposed on the tactile and movement space. The project investigates and designs the relationships between these two realms.
The application plays with the human perception and expectation of weight and size. The spatial position of the wooden slat, which in reality is exactly one meter long, is detected by sensors and transferred into virtual reality. This makes it possible for the wearer of the headset to balance the wooden staff, for example on two fingers, even though it is not directly visible. Pressing the lever at the upper end of the bar changes the slat's virtual length; its real length does not change.
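The lever interaction can be sketched as a small state update. This is only an illustration, not the project's actual implementation; the function name, the growth step per press, and the maximum length are hypothetical values (the project only reports virtual lengths of up to ten meters):

```python
def press_lever(virtual_length_m, step_m=0.5, max_m=10.0):
    """Grow the virtual slat by one step when the lever is pressed.

    `step_m` and `max_m` are assumed values chosen for this sketch.
    """
    return min(virtual_length_m + step_m, max_m)

# The physical slat never changes, regardless of the virtual length.
PHYSICAL_LENGTH_M = 1.0
```

Repeated presses grow the virtual model up to the cap while the tracked physical object, and thus its weight in the hand, stays constant, which is exactly the mismatch the experiment exploits.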
While handling the wooden slat, some participants reported a change in the "felt" weight of the slat as its virtual length increased or decreased. When we increased the virtual length of the slat to ten meters, the participants immediately reported a drastic decrease in weight. Some users even assumed a change in the materiality of the slat, for example to balsa wood. Apart from the subjective change of weight and materiality, the most surprising finding for us was the participants' startled and protective reaction when bystanders took the wooden slat and seemingly attacked them from a safe distance.
Museumsnacht Köln 2017
Mit weit geschlossenen Augen. Virtuelle Realitäten entwerfen 2017
Prof. Dr. Carolin Höfler TH Köln
Dr. Philipp Reinfeld TU Braunschweig
Jonas Hansen TU Braunschweig
Max Justus Hoven TU Braunschweig
Oliver Köneke TH Köln
Camera: Sebastian Miller TH Köln
Camera: Tonda Markus Budszus TH Köln
„The form of a city changes faster, alas, than a mortal’s heart” (Charles Baudelaire)
Baudelaire’s observation addresses the transformation of major parts of Paris in the 19th century, but it applies particularly to metropolises in countries that underwent or are undergoing economic and political transformations from socialist to market-oriented paradigms.
To visualize the changes both cities went through over the last 70 years, we developed an interactive spatial installation which shows a visualization of several bars and coffee shops in Berlin and Hanoi. How did the interiors change over time?
What kind of artifacts accumulated in these gems of local interactions?
The visitor can load specific locations from both cities into a self-built controller. The system then shows an ordered exploded view of the interior of the selected location, through which the visitor can navigate in time.
Vietnam University of Fine Art
Prof. Dr. Oliver Baron TH Köln
Hannes Hummel TH Köln
Fabian Schäfer TH Köln
Steffen Brücken TH Köln
Markus Schiemann ABK Stuttgart
The aim of the work is to create an image sequence based on several successive photogrammetric models, which suggests the illusion of movement to the viewer.
Photogrammetry is a remote sensing technique: the spatial position and three-dimensional shape of an object are determined from several measurement images or photographs by evaluating the geometric information they contain. The first step is to reconstruct the spatial positions of the individual images relative to one another. In further steps, additional points that appear in several images can be identified. The result is a dense point cloud, which is subsequently converted into a textured polygon object.
First, the selected object is photographed from all sides. This creates dozens of images that represent the object in its current state. In the next step, the object is minimally altered and the process is repeated. Over the course of the shoot, several thousand images are created, each depicting the object from all around in a minimally altered form.
The object is positioned on a turntable and is evenly illuminated. After each picture, the turntable is rotated a few degrees further until the object has been captured from all directions. The object is then changed slightly and the process repeated.
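The turntable procedure can be sketched as a simple angle schedule. The step size below is an assumption for illustration; the increment actually used in the shoot is not documented here:

```python
def turntable_angles(step_deg=10.0):
    """Turntable angles (in degrees) for one full revolution.

    `step_deg` is an assumed increment: after each photo the table
    is rotated by this amount until the object has been captured
    from all directions.
    """
    shots = int(round(360.0 / step_deg))
    return [i * step_deg for i in range(shots)]
```

With a 10-degree step this yields 36 evenly spaced views per object state, before the object is altered and the full revolution is photographed again.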
The images taken in this way are loaded into photogrammetry software, in this case Agisoft PhotoScan. First, the program finds matching points between overlapping images and estimates the camera position for each photo.
Once the cameras are aligned, the program generates a dense point cloud by comparing each image with its neighbours and finding overlapping points; this cloud is later processed into the final object.
The program can not only determine the spatial orientation of the individual points but also give a good representation of the texture and colour of the object.
Finally, each individual 3D object is placed in a 3D application such as Cinema 4D. The objects are then animated to create the illusion of movement. Since each object consists of a closed polygon mesh, it is now possible to animate a camera flight around the object with more interpolation steps than the original camera positions allowed. The final sequence consists of a total of 52 Lego bricks assembled into the stylized shape of a camera. One Lego brick at a time is removed, creating the illusion of motion. As a result, a total of 52 photogrammetric objects are created and animated, shown here as a GIF sequence.
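The brick-by-brick animation amounts to a mapping from animation frame to visible model. A minimal sketch, assuming a hypothetical display duration per model (the sequence's real timing may differ):

```python
def visible_model(frame, frames_per_model=24, num_models=52):
    """Index of the photogrammetric model shown at a given frame.

    One brick is removed between consecutive models, so stepping
    through the indices produces the stop-motion-like illusion of
    the camera being disassembled. `frames_per_model` is an assumed
    value for this sketch.
    """
    return min(frame // frames_per_model, num_models - 1)
```

In practice this switching can be combined freely with the interpolated camera flight, since each of the 52 models is a complete, independently renderable mesh.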
Michael Eichhorn TH Köln
In order to visualize the density of the crowd, the surveillance video from Camera 14 was reduced to 1% opacity. The video was then duplicated several hundred times and the copies were stacked, each offset by a single frame, so that the final composite again reached a total opacity of 100%. The video was then accelerated by a factor of ten. The screenshot shows the arrangement of the videos in Adobe After Effects: the bright turquoise bars, staggered one above the other, are each the same Camera 14 video reduced to 1% opacity. The blue line marks a cross-section through the 100 active videos that contribute to the calculation of the current frame.
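The layering described above amounts to a sliding-window temporal average: with 100 copies at 1% opacity, each offset by one frame, every output frame blends the current frame with the 99 preceding ones. A minimal sketch in NumPy, assuming grayscale frames as a plain array (the actual work was done in After Effects):

```python
import numpy as np

def stack_offset_copies(frames, layers=100):
    """Approximate 100 stacked copies at 1% opacity, offset by one frame each.

    Each output frame is the mean of the current input frame and the
    `layers - 1` frames before it (fewer at the very start of the clip).
    """
    frames = np.asarray(frames, dtype=np.float64)
    out = np.empty_like(frames)
    for t in range(len(frames)):
        lo = max(0, t - layers + 1)
        out[t] = frames[lo:t + 1].mean(axis=0)
    return out
```

Stacking 1%-opacity layers with normal blending is not exactly an arithmetic mean, but the averaging approximation reproduces the same visual effect: individual moving people smear out, while the dense, static crowd accumulates into a bright, solid body.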
It is easy to see how the density of the crowd increases over the course of the video. At the beginning, the structural conditions within the tunnel and its floor are still recognizable. The light from the access ramps to the festival area on the left side of the tunnel still reaches the ground. For the first one and a half hours, the visitors flow freely towards the festival grounds. Only when the police block access to the main ramp around 16:00 does the number of people inside the tunnel increase abruptly. The crowd transforms into a solid body. The picture brightens as the incoming light is reflected off the heads of the people inside the dense crowd.
The documentation ends a few minutes before the first people died on the main ramp. One of the main causes of the disaster may have been that incoming visitors had no clear view of the main ramp and thus could not react to the congestion there. As a result, visitors streaming in from behind put ever more pressure on the people on the main ramp.
Adobe After Effects
Martin Marburger TH Köln