Case Study: Kelp Room at Power Station of Art, Shanghai

When: 2019

Client: La Mer, Agency: Patten Studio

Role: Lead Programmer

As part of a larger activation campaign with Patten Studio, we built an interactive kelp forest that spanned two rooms, projected onto seven walls of varying shape. It was designed to help tell the story of a brand product during an exhibition held at Shanghai’s Power Station of Art.


From a creative standpoint, we took inspiration from the kelp forests found in the Channel Islands off the California coast. The life-size kelp strands exhibited a natural oceanic sway, and viewers could influence the movement by waving their arms. Personally, this was quite a challenging project, given the sheer size and shape of the rooms, the fact that the exhibition was in Shanghai, China, a new and very foreign country to me, and that interactive forests are both beautiful and intricate. Nonetheless, creating a scene based on nature using almost entirely procedural methods has to be at the top of my list of things I love about programming.


Full project information.

 

Technical Description

The application was built in C++ using Cinder along with a few components from my personal library, mason. This closer-to-metal approach lets us control the crucial bottlenecks common to installations of this nature: running at massive resolutions (the larger of the two rooms was 11,000x1060 pixels), rendering to quirky wall layouts using multiple viewports, and using custom GPU-based physics and rendering to fill up these large canvases while staying at 60 frames per second. On the other hand, starting from a few well-tested components found in libcinder or its add-ons lets us get off the ground fast enough to deliver on a short deadline.

Hardware Setup

Each room runs on one Windows 10 PC with an Intel Core i7-9800X CPU and dual NVidia Quadro RTX 5000 GPUs. We used NVidia Mosaic to stitch together seven 1080p displays for the room with four walls and five 1080p displays for the corridor (three walls), both laid out as a long horizontal strip.


We manage the projector overlap as well as the varying shapes of the walls using Cinder-Warping, configured per-room with a GUI built directly into the application. Using a custom warping solution here allowed us to work fast without needing to hire a third-party vendor to handle projection mapping, and at the same time let us make last-minute adjustments within our tight time frame.
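
For context, the warping hookup follows the pattern from Paul Houx’s Cinder-Warping block; roughly sketched below, with the file path and scene texture as placeholders and the exact calls to be treated as approximate:

```cpp
#include "Warp.h"   // from the Cinder-Warping block
using namespace ph::warping;

// setup(): load the per-room warp layout, one warp per wall / projector region.
mWarps = Warp::readSettings( ci::loadFile( "warps_room_a.xml" ) );

// draw(): render the scene texture through each warp, which handles both the
// projector overlap and the shape of its wall.
for( auto &warp : mWarps )
    warp->draw( mSceneTexture );

// shutdown(): persist any last-minute adjustments made with the built-in GUI.
Warp::writeSettings( mWarps, ci::writeFile( "warps_room_a.xml" ) );
```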

warping_config

Motion Sensors: Depth Cameras


The scene is made interactive with a network of Intel RealSense D435 depth cameras, which output motion vectors that are injected as temperature splats into a 3D fluid field. Each sensor runs on a separate Raspberry Pi 3, which computes sparse optical flow on the depth image and sends motion vectors to the client PC over a websocket. We built a GUI to monitor this, filter or solo specific sensors, and play back simple recordings of users in the space for testing.
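
To give a sense of the data path, here is a minimal sketch of consuming those motion-vector messages on the client PC. The message layout, field names, and the nlohmann-based ci::Json alias are assumptions for illustration, not the production protocol:

```cpp
#include "cinder/Json.h"   // assuming the nlohmann-based ci::Json in recent Cinder
#include <glm/glm.hpp>
#include <string>
#include <vector>

// One optical-flow vector as reported by a sensor, already mapped into room space.
struct MotionVector {
    glm::vec3 pos;
    glm::vec3 vel;
};

// Hypothetical handler for a websocket message from one of the Raspberry Pis.
std::vector<MotionVector> parseSensorMessage( const std::string &payload )
{
    std::vector<MotionVector> result;
    ci::Json msg = ci::Json::parse( payload );
    for( const auto &v : msg["vectors"] ) {
        MotionVector mv;
        mv.pos = glm::vec3( v["x"].get<float>(), v["y"].get<float>(), v["z"].get<float>() );
        mv.vel = glm::vec3( v["vx"].get<float>(), v["vy"].get<float>(), v["vz"].get<float>() );
        result.push_back( mv );
    }
    // Each MotionVector is then injected into the fluid sim as a temperature splat.
    return result;
}
```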


The sensors had to be mounted quite high, which, combined with the darkness of the room, meant quite a lot of noise. We combated this by considering the motion vectors over time, so that only sudden and drastic movements (e.g. walking) would register as motion.
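
A minimal sketch of that kind of temporal filtering, assuming per-sensor accumulation on the client (names and constants are illustrative):

```cpp
#include <glm/glm.hpp>

// Exponential moving average of motion magnitude for one sensor; only when the
// averaged magnitude crosses a threshold are its vectors forwarded as splats.
struct MotionFilter {
    float average   = 0.0f;
    float smoothing = 0.9f;   // higher = slower response, better noise rejection
    float threshold = 0.15f;  // tuned on site against the ambient sensor noise

    bool update( float instantMagnitude )
    {
        average = glm::mix( instantMagnitude, average, smoothing );
        return average > threshold;
    }
};
```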

ladder

Scene

To render the scene within the panoramic projector layout, a separate viewport is configured for each wall, with perspective cameras (ci::CameraPersp) centered at the scene origin and rotated ninety degrees from each other. Due to the uneven shape of the room (the front and back walls were about four times longer than the left and right walls), we weren’t able to achieve a perfect ‘skybox’ feeling at all depths, so we focused on configuring each camera’s field of view so the walls line up at a specific distance from the origin. We then placed kelp strands near the corners at this distance, so that they could naturally sway between the walls. Strands near the wall edges at other distances were placed considerably further back, so that (along with the fog at that distance) they felt like a natural part of the forest composition, despite not being able to traverse between walls. Creating a completely seamless experience between adjacent walls is something I’d like to return to in future projects, since it is remarkably effective at providing immersion in public spaces like this one.
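
A condensed sketch of that per-wall setup under these assumptions (four walls, cameras at the origin rotated 90° apart; viewport widths, fields of view, and the drawScene() helper are placeholders):

```cpp
#include "cinder/Camera.h"
#include "cinder/Area.h"
#include <cmath>
#include <vector>

struct WallView {
    ci::CameraPersp cam;
    ci::Area        viewport;   // horizontal slice of the mosaic canvas
};

std::vector<WallView> buildWalls( int canvasHeight, const std::vector<int> &wallWidths )
{
    std::vector<WallView> walls;
    int x = 0;
    for( size_t i = 0; i < wallWidths.size(); i++ ) {
        WallView w;
        w.viewport = ci::Area( x, 0, x + wallWidths[i], canvasHeight );
        x += wallWidths[i];

        float aspect = wallWidths[i] / (float)canvasHeight;
        w.cam.setPerspective( 60.0f, aspect, 0.1f, 500.0f ); // fov tuned so walls line up at the target distance

        float angle = i * 3.14159265f * 0.5f;                // each wall rotated 90 degrees from the last
        w.cam.lookAt( glm::vec3( 0 ), glm::vec3( std::sin( angle ), 0.0f, -std::cos( angle ) ) );
        walls.push_back( w );
    }
    return walls;
}

// In draw(), each wall becomes a viewport + camera pair:
// for( const auto &w : walls ) {
//     ci::gl::ScopedViewport vp( glm::ivec2( w.viewport.x1, 0 ), w.viewport.getSize() );
//     ci::gl::setMatrices( w.cam );
//     drawScene();   // hypothetical scene draw
// }
```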

scene_viewports

Structure of the Kelp Strands

The kelp stipes (strands or stalks) were modeled as rope joints anchored on the seafloor, with the spring coefficient reduced slightly the further a joint is from the base. They were rendered with a single instanced ci::geom::Cylinder, with the cylinder’s width controlled by height and a per-strand randomness. The kelp blades (leaves) were modeled as cloths (an NxM lattice of joints) anchored to a specific stipe joint, rendered with a geom::Plane and a texture picked randomly from a set of transparency-enabled textures. The spring coefficient of the center-most row of joints is increased a bit to give the blade a bit more of a spine as it sways.
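
In rough terms, the joint layout could be described with structures like the following (a sketch with illustrative names, not the production code):

```cpp
#include <glm/glm.hpp>
#include <vector>

// One mass-spring joint as stored in the physics buffer (conceptually; the real
// GPU layout has to respect std430 alignment rules).
struct Joint {
    glm::vec3 pos;
    glm::vec3 vel;
    glm::vec3 restPos;   // restoring point from the B-Spline layout pass
    float     springK;   // stiffness, reduced toward the stipe tip
};

struct KelpBlade {
    int        stipeJointIndex;  // which stipe joint the blade hangs from
    glm::ivec2 latticeSize;      // NxM cloth of joints
    int        textureIndex;     // picked randomly from the transparency-enabled set
    int        firstJoint;       // offset into the shared joint buffer
};

struct KelpStrand {
    std::vector<int>       stipeJoints;   // rope anchored at the seafloor
    std::vector<KelpBlade> blades;
    float                  baseRadius;    // drives the instanced cylinder width
};
```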

joint_structure

Physics and Geometry

The physics simulation is handled using one compute shader, which processes an SSBO of about 500k mass-spring joints. While most of the application runs at a 60fps fixed timestep, the physics loop must run many more times than this to allow for the spring forces to resolve. For our large canvas, we could only eke out eight physics updates per app update, although it would have been nice to run it closer to about 20x. I used basic midpoint Euler integration, which was good enough for our needs as we didn’t deal with collisions.
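
Written out on the CPU for clarity (reusing the Joint sketch above; the production version lives in the compute shader, and the substep dispatch helper is hypothetical), the midpoint step looks roughly like this:

```cpp
#include <glm/glm.hpp>

// CPU-side illustration of the per-joint update; `fluidForce` stands in for the
// ocean and user-driven forces sampled for this joint.
void stepJoint( Joint &j, const glm::vec3 &fluidForce, float dt )
{
    auto accel = [&]( const glm::vec3 &p ) {
        return j.springK * ( j.restPos - p ) + fluidForce;   // restoring spring + external forces
    };

    // Midpoint (RK2) integration: evaluate the derivatives at the half step.
    glm::vec3 midPos = j.pos + j.vel * ( dt * 0.5f );
    glm::vec3 midVel = j.vel + accel( j.pos ) * ( dt * 0.5f );

    j.pos += midVel * dt;
    j.vel += accel( midPos ) * dt;
}

// Per app update (60 fps fixed timestep), the compute pass is dispatched several
// times with a subdivided dt:
// const int substeps = 8;
// const float dt = ( 1.0f / 60.0f ) / substeps;
// for( int i = 0; i < substeps; i++ )
//     dispatchPhysicsCompute( dt );   // hypothetical dispatch helper
```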


The initial spawning positions of each kelp strand and blade are repositioned to look more natural using B-Spline interpolation and some simple fractal noise to distribute blade placement. These positions essentially become restoring points for each joint, so that no matter how users affected the scene via movement, each joint would always return to its initial position. This was quite a rigid solution in my opinion, but it solved many problems around using spring-mass physics within an interactive and noisy environment, ensuring the scene always looked natural. In the future, I’d like to investigate a solution that uses joint-to-joint constraints in these situations, so that the spatial relationships are conserved while still allowing the shape of each strand and blade to evolve over time, as it would in the ocean.
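
For instance, the rest positions along a stipe could be sampled from a Cinder B-Spline along these lines (a sketch assuming ci::BSpline3f with evenly spaced parameters):

```cpp
#include "cinder/BSpline.h"
#include <vector>

// Sample evenly spaced rest positions for a stipe along a shaped control curve;
// these become the restoring points each joint springs back toward.
std::vector<glm::vec3> makeRestPositions( const std::vector<glm::vec3> &controlPoints, int jointCount )
{
    ci::BSpline3f spline( controlPoints, 3, false, true );   // degree 3, non-looping, open
    std::vector<glm::vec3> rest( jointCount );
    for( int i = 0; i < jointCount; i++ )
        rest[i] = spline.getPosition( i / float( jointCount - 1 ) );
    return rest;
}
```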


Quite a high count of physics joints was necessary to make the scene look natural while standing right next to the walls, which was the main motivation behind implementing custom physics on the GPU. However, since vertex drawing was nowhere near the bottleneck, I was able to reduce the joint count considerably by using cubic and bicubic interpolation to draw extremely smooth curves for the kelp stipes and blades, respectively. The mapping was achieved by taking the floating-point texture coordinate of each vertex and converting it to a 1D index into the joints buffer. This scaled quite nicely, since we could control the smoothness of the geometry completely independently of the interactive joints, which is great when you don’t know exactly what performance hurdles you’ll be up against until you are at the installation site rendering at full resolution.
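
The mapping boils down to converting a normalized texture coordinate into a fractional index into the joint buffer and interpolating the neighbors. Here the idea is sketched in C++ with a Catmull-Rom segment; the production shader version is GLSL, and the exact interpolation kernel may differ:

```cpp
#include <glm/glm.hpp>
#include <vector>

// Catmull-Rom interpolation between four consecutive joint positions.
glm::vec3 catmullRom( const glm::vec3 &p0, const glm::vec3 &p1,
                      const glm::vec3 &p2, const glm::vec3 &p3, float t )
{
    float t2 = t * t, t3 = t2 * t;
    return 0.5f * ( ( 2.0f * p1 ) +
                    ( -p0 + p2 ) * t +
                    ( 2.0f * p0 - 5.0f * p1 + 4.0f * p2 - p3 ) * t2 +
                    ( -p0 + 3.0f * p1 - 3.0f * p2 + p3 ) * t3 );
}

// Map a vertex's texture coordinate along the stipe (0..1) to a smooth position
// derived from a much sparser set of physics joints.
glm::vec3 sampleStipe( const std::vector<glm::vec3> &joints, float v )
{
    float f  = v * ( joints.size() - 1 );
    int   i1 = glm::clamp( int( f ), 0, int( joints.size() ) - 2 );
    int   i0 = glm::max( i1 - 1, 0 );
    int   i2 = i1 + 1;
    int   i3 = glm::min( i2 + 1, int( joints.size() ) - 1 );
    return catmullRom( joints[i0], joints[i1], joints[i2], joints[i3], f - i1 );
}
```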

 

Ocean Movement

The essential movement of the scene comes from a few randomized trochoids, or ‘Gerstner waves’, which provide the gentle oscillating sway. Similar to Thon 2005, there is less sway for joints closer to the seafloor, which move along a more ellipsoidal path. For the fluid movement, I ended up adapting a GPU-based Navier-Stokes implementation from David Li. The fluid sim provides a place to inject user motion vectors as temperature ‘splats’, which contribute to a fixed fluid field. Components (kelp joints, debris, and bubbles) then use their position in the scene as a lookup into a 3D velocity texture.
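
A stripped-down version of the per-joint ocean motion might look like this (single wave shown with placeholder constants; the real field sums several randomized waves and adds the fluid-sim velocity lookup on top):

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Trochoidal (Gerstner-style) displacement for one joint, attenuated toward the
// seafloor so deep joints barely move and trace flatter ellipses.
glm::vec3 oceanDisplacement( const glm::vec3 &restPos, float time )
{
    const glm::vec2 dir       = glm::normalize( glm::vec2( 1.0f, 0.3f ) );  // wave travel direction (placeholder)
    const float     amplitude = 0.4f;
    const float     waveLen   = 12.0f;
    const float     speed     = 0.6f;

    float k         = 2.0f * 3.14159265f / waveLen;
    float phase     = k * glm::dot( dir, glm::vec2( restPos.x, restPos.z ) ) - speed * time;
    float depthFade = glm::clamp( restPos.y / 10.0f, 0.0f, 1.0f );          // 0 at the seafloor

    glm::vec3 d;
    d.x = dir.x * amplitude * std::cos( phase );
    d.z = dir.y * amplitude * std::cos( phase );
    d.y = amplitude * 0.5f * std::sin( phase );   // flattened vertical component -> elliptical path
    return d * depthFade;
}
```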

Lighting

The kelp blade shading was a huge aspect of creating a convincing kelp forest scene, due to the intricate details of foliage lighting such as semi-transparency and occlusion. For this, Paul Houx helped out and came up with a very creative two-pass solution. The first pass renders a black-and-white image of the light transmission of all kelp blades from the camera’s eye point, front to back, with an additional thickness texture to add some variance to each blade’s transmission. Apart from this pass, the entire scene is rendered with the depth buffer enabled, so the kelp blade transmission texture is rendered first. Then the blades are rendered along with the rest of the scene, using a blurred version of the transmission buffer for shading.
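
In outline, the CPU side of the two passes could be organized like this (the FBO members, shader, and the drawKelpBlades / blurInto / drawScene helpers are illustrative, not the actual code):

```cpp
// Pass 1: kelp blade light transmission, rendered front to back into its own FBO.
{
    ci::gl::ScopedFramebuffer fbo( mTransmissionFbo );          // hypothetical member FBO
    ci::gl::ScopedGlslProg    prog( mTransmissionShader );      // writes transmission * thickness
    ci::gl::clear( ci::Color::black() );
    drawKelpBlades();                                           // hypothetical helper
}

// Soften the result so the lighting reads as diffuse light through the blades.
blurInto( mTransmissionFbo->getColorTexture(), mTransmissionBlurredFbo );

// Pass 2: the full scene with the depth buffer enabled; the blade shader samples
// the blurred transmission texture for shading.
{
    ci::gl::ScopedTextureBind tex( mTransmissionBlurredFbo->getColorTexture(), 0 );
    drawScene();
}
```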


Probably the most important element in creating a sense of depth was the fog, which was based on the simple model found here. For the color of the fog, I attached the background texture and looked it up for each element; that background was created procedurally using a simple ray trace.
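
The fog itself is only a few lines; a minimal version of the idea (exponential falloff toward the background color, transcribed to C++/glm for illustration) looks like:

```cpp
#include <glm/glm.hpp>
#include <cmath>

// Blend a shaded fragment toward the (procedurally rendered) background color
// based on its distance from the camera.
glm::vec3 applyFog( const glm::vec3 &shaded, const glm::vec3 &backgroundColor,
                    float distToCamera, float density )
{
    float fogAmount = 1.0f - std::exp( -distToCamera * density );
    return glm::mix( shaded, backgroundColor, fogAmount );
}
```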


The last remaining elements contributing to lighting were caustics and sunrays. Caustics are the usual randomized lookup into a cellular-looking texture based on position and texture coordinate. Sunrays were a bit more expensive (particularly considering our multi-viewport scene), implemented as a post-process radial blur based on the sun location. We ended up having only one ‘sun’ in the middle of the front wall, although I would have loved to place one at every corner of the room, even allowing the beams to cross.
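
The sunray post-process is essentially a radial blur toward the sun’s screen position; here the fragment logic is transcribed to C++/glm for illustration (sample count and decay are placeholders, and sampleScene stands in for the scene texture lookup):

```cpp
#include <glm/glm.hpp>
#include <functional>

// March from the current pixel toward the sun's projected screen position,
// accumulating attenuated samples of the scene.
glm::vec3 radialBlur( const glm::vec2 &uv, const glm::vec2 &sunUv,
                      const std::function<glm::vec3( glm::vec2 )> &sampleScene )
{
    const int   numSamples = 32;
    const float decay      = 0.96f;

    glm::vec2 step   = ( sunUv - uv ) / float( numSamples );
    glm::vec2 coord  = uv;
    float     weight = 1.0f;
    glm::vec3 result( 0.0f );

    for( int i = 0; i < numSamples; i++ ) {
        coord  += step;
        result += sampleScene( coord ) * weight;
        weight *= decay;
    }
    return result / float( numSamples );
}
```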

Kelp Strand Editor

Usually, when working on a project that contains natural components such as a forest, we like to populate the scene procedurally, since it makes updates convenient: every time you load the scene you get the latest changes. Given the desire for a very dense and ‘lush’ aesthetic, and the need for the kelp blades to sway naturally as they do in the ocean while also responding to human interaction, this was quite the challenge.


My first attempt at addressing this was to manually assign levels of detail based on each strand’s distance to the center of the room. However, this proved to be inadequate, as we still needed about twenty very high-detail strands that users could manipulate by walking or waving their hands, and then hundreds of medium-detail strands behind these to create the feeling of a forest in all directions. Each strand needed between fifty and a couple hundred blades to look natural and dense, and as each blade is basically a separate cloth simulation, that added up to a ton of physics joints, GPU compute or not.


After realizing that we needed more control over each strand, I built a ‘strand editor’ GUI that allowed me to first randomly populate some strands within a region on the XZ plane, assigning them initial properties. I then ‘pruned’ the kelp strands that either didn’t add much to the forest composition (sometimes to leave an open space) or were already occluded by other strands in front of them, until we had a layout that was fairly sparse yet felt dense from the four viewpoints in the room. Every time an edit was made, the entire KelpStrand container was pushed onto a stack, which gave us unlimited undo during the forest editing process. Properties of the strands could then be edited either by region (which would randomly generate values within a range) or individually: height, resting shape (BSpline), blade count per stipe (the kelp stalk) joint, and so on. I also added editing controls for each individual kelp blade (size, direction, stiffness, resting shape, etc.), which were mostly only used on the largest strands.
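
The undo mechanism is as simple as it sounds; conceptually (reusing the KelpStrand sketch from earlier, with illustrative names):

```cpp
#include <vector>

// Every edit pushes a full copy of the strand container; undo pops the last one.
std::vector<std::vector<KelpStrand>> undoStack;

void pushUndoState( const std::vector<KelpStrand> &strands )
{
    undoStack.push_back( strands );
}

bool undo( std::vector<KelpStrand> &strands )
{
    if( undoStack.empty() )
        return false;
    strands = undoStack.back();
    undoStack.pop_back();
    return true;
}
```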

One big optimization was disabling physics updates on the low-LOD group and instead using only the trochoidal wave movement to give them the oceanic sway. As long as they are far enough away, behind the fog, that you can’t tell they’re non-interactive, this worked great for adding depth to the forest.

strand_layout_lod
kelpstrand_inspector

Ocean Debris and Bubbles

The other elements of the scene were much simpler. Ocean debris was added as simple particle sprites, with some simple skewing, billboarded to match the orientation of each viewport. Ocean bubbles were modeled as spheres with some modulation in the vertex shader. To make them look translucent, I took a run-of-the-mill sky environment map and altered the hue to match the color of our scene. It looks fine when the bubbles are two to three pixels wide!


Configuration

The application configuration is stored in json files that are set first from a master configuration and then later from the user interface. This config system is something I’ve been developing over the course of a dozen or so similar projects, to be both flexible enough for the nature of this fast-paced work and robust enough to track down sources of error. There are three separate json files that are read on app reload and then recursively merged into one global config (a sketch of the merge follows the list):

  • config.json: the master config, which contains some comments and decent values for things that non-developers may end up needing to tweak. These may or may not exist in the GUI, due to development time constraints.
  • local.json: this is unversioned and dev-specific, allowing you to override certain things when you’re working on a non-production setup (e.g. you’re on a laptop and not connected to seven HD displays, so you need to enable the virtual canvas for the scene to still look right). These override the master settings.
  • user.json: these settings come from changes in the UI, and allow end users of the application to save their customizations without needing to modify json files. These override everything else, and can be versioned or not.
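
A minimal sketch of the three-file merge, assuming the nlohmann-based ci::Json found in recent Cinder and illustrative file names:

```cpp
#include "cinder/Json.h"   // assuming the nlohmann-based ci::Json in recent Cinder

// Recursively merge `overrides` on top of `base`: nested objects merge key by
// key, everything else is replaced outright. Both arguments are json objects.
void mergeConfig( ci::Json &base, const ci::Json &overrides )
{
    for( auto it = overrides.begin(); it != overrides.end(); ++it ) {
        if( it->is_object() && base.contains( it.key() ) && base[it.key()].is_object() )
            mergeConfig( base[it.key()], *it );
        else
            base[it.key()] = *it;
    }
}

// On app reload:
// ci::Json config = ci::Json::parse( loadString( loadFile( "config.json" ) ) );
// mergeConfig( config, ci::Json::parse( loadString( loadFile( "local.json" ) ) ) );
// mergeConfig( config, ci::Json::parse( loadString( loadFile( "user.json" ) ) ) );
```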

The configuration is managed with a class called mason::Info (previously called Dictionary, modeled after the OS X Cocoa equivalent). This is a data structure common in more dynamic languages, well suited for loading and saving configs as well as manual serialization. The structure is based on a string key to std::any value relationship, so you can hold anything you want for the value as long as you know how to convert it back at runtime. mason::Info converts to and from most of the types used in Cinder, along with some conveniences like converting a json array [1, 2, 3, 4] to glm::vec4(1, 2, 3, 4) or ci::ColorA(1, 2, 3, 4), or a json string to a filesystem path.
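
To make the idea concrete, here is a hypothetical illustration of a string-to-std::any store in the spirit of mason::Info; this is not its actual interface:

```cpp
#include <any>
#include <string>
#include <unordered_map>

class Info {
  public:
    template <typename T>
    void set( const std::string &key, T value ) { mValues[key] = std::move( value ); }

    template <typename T>
    T get( const std::string &key, const T &fallback = {} ) const
    {
        auto it = mValues.find( key );
        if( it == mValues.end() )
            return fallback;
        if( auto *value = std::any_cast<T>( &it->second ) )
            return *value;
        return fallback;   // the real class also converts, e.g. a json array to glm::vec4
    }

  private:
    std::unordered_map<std::string, std::any> mValues;
};
```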


All of the major components of the application contain three methods: load( const Info &info ), save( Info &info ), and updateUI(). I group these together since the three control the configuration together. I’ll usually start by adding components to the UI, and then, when it’s in a good place, fill out the load() / save() methods, hitting Ctrl+S followed by Ctrl+Shift+R for good measure to ensure everything reloads correctly.
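
For a given component, that trio tends to look something like the following (a hypothetical fog-settings example built on the Info sketch above and Dear ImGui):

```cpp
class FogSettings {
  public:
    void load( const Info &info )
    {
        mDensity = info.get<float>( "density", 0.02f );
    }

    void save( Info &info ) const
    {
        info.set( "density", mDensity );
    }

    void updateUI()
    {
        ImGui::DragFloat( "fog density", &mDensity, 0.001f );
    }

  private:
    float mDensity = 0.02f;
};
```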


On using C++ exceptions: while I often see game companies treat exception handling as blasphemy, I’m quite happy with the system I’m working with, as long as you follow a couple of simple guidelines. First, only use try / catch during load or save, never at runtime, and pair it with a decent logger that can print detailed information automatically (Cinder’s logging is pretty good at this, especially with the CI_LOG_EXCEPTION macro). Second, things that are loaded or reloaded at runtime (a Texture, Batch, Fbo, etc.) are always checked for null before use, and become no-ops if they are. If you follow both of these, you get error logs that can be filtered and an app that won’t crash, while still benefiting from runtime reloads and a more dynamic (easier) way to store configuration data.
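
Put concretely, the two guidelines look roughly like this in practice (mBladeTex and the asset path are placeholders; CI_LOG_EXCEPTION is Cinder’s logging macro):

```cpp
// Guideline 1: try / catch only around load or save, with detailed logging.
void loadBladeTexture()
{
    try {
        mBladeTex = ci::gl::Texture2d::create( ci::loadImage( ci::app::loadAsset( "blade.png" ) ) );
    }
    catch( const std::exception &exc ) {
        CI_LOG_EXCEPTION( "failed to load blade texture", exc );
        mBladeTex = nullptr;
    }
}

// Guideline 2: anything loaded or reloaded at runtime is null-checked before
// use, so a failed load turns the draw into a no-op instead of a crash.
void drawBlades()
{
    if( ! mBladeTex )
        return;
    ci::gl::ScopedTextureBind tex( mBladeTex );
    // ... draw the instanced blade geometry ...
}
```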


Why I didn’t use a formal C++ serialization library (such as boost::serialization or cereal): a large concern of my workflow, and that of a growing number of C++ front-end programmers, is compile times, especially because I utilize runtime hot-loading tools (discussed below in “Dev Tools”). As many are now realizing, compile times directly correspond to the speed at which we can iterate on a design, be it structural or artistic. These serialization libraries all rely heavily on templating tricks and, even with basic usage, end up doubling compile times. On top of that, although you can serialize to a human-readable format such as json with these libraries, the result is hardly navigable without a considerable amount of manual labor in your hand-written serialization methods. Instead, I’ve arrived at a system where I can load, save, and manipulate a new parameter with a GUI using very minimal code, all programmed at runtime. I get both faster compile times this way and a more concise view of the configuration.

Dev Tools

Live coding is an important aspect of my workflow, to the point that I feel limited when I use tools that don’t support it. C++ is notorious for long compile times and its static build nature, but times are changing. For the past couple of years, I’ve been using Live++ along with ‘immediate-mode’ GUIs (the now very popular Dear ImGui), so that working with code that is constantly updated becomes seamless. The creative process is obviously improved, but beyond that, the process of debugging or even figuring out new code becomes a new, quite lively experience. I would go as far as to say that I try to enable as much hot-loaded functionality as possible for team reviews, since it allows me to show experimental features on a dime and get immediate feedback about them. Otherwise, a feature may be left at the question “could you do that…” and passed over before it is ever inspected.


One very sweet aspect of this setup is that once you are happy with a prototype of whatever you are building, you also have a near-production-quality system that can be shipped statically within the executable, with no extra need for runtime script parsing or any VM overhead. All you really need at that point is a bit of refactoring, which is also wonderful when live coding, since you can ensure with every save that the code functions the same, without having to restart the application and navigate to the same place. We just turn Live++ off in the app’s config. Any time we want to resume prototyping, use Live++ for inspecting code, or anything else, it is always there ready to be turned back on.
