Creepy facial projections, 3D hair capture, digital creepers and more from Eurographics 2017

The Eurographics conference is underway in Lyon and among the more technical offerings from computer graphics specialists are a few cool enough for just about anyone to appreciate. Unless you’re afraid of clowns. In that case, you probably shouldn’t scroll down.

First is a system for simulating vines and creepers as they grow in complex interweaving patterns. The plants are a series of particles that interact with their immediate environment — angle, material, light exposure, other plants — and grow appropriately. The video shows the simulation without plant trappings, and it looks like intelligent sausage links.

Coupled with fluid dynamics and a few other tricks, these virtual plants can grow and be interacted with in real time. Imagine exploring (and pruning) a world where intelligent vines slither like snakes up the landscape, pursuing you and your murderous machete. Shivers!

It supports up to 20 plants simultaneously and up to 25,000 branches. You can read more about the project or download the paper here.
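
To give a rough sense of the particle idea (the paper’s actual model is far more involved), here’s a toy sketch in Python: each branch is a chain of particles, and at every step the tip grows in a direction nudged toward a stand-in “light” vector and occasionally splits into a new branch. The constants and the light-only bias are my own assumptions, not the authors’.

```python
import random
import math

# Toy sketch of particle-chain plant growth (not the paper's actual model).
# Each branch is a list of 2D points; every step, the tip grows in a direction
# biased toward a light source, with a little noise, and may spawn a new branch.

LIGHT = (0.0, 1.0)        # hypothetical "sun" direction (straight up)
STEP = 0.1                # growth increment per simulation step (assumed)
BRANCH_PROB = 0.05        # chance a tip spawns a new branch each step (assumed)
MAX_BRANCHES = 25_000     # branch cap mentioned in the article

def normalize(v):
    length = math.hypot(v[0], v[1]) or 1.0
    return (v[0] / length, v[1] / length)

def grow(branches):
    """Advance every branch tip by one particle."""
    new_branches = []
    for branch in branches:
        prev, tip = branch[-2], branch[-1]
        heading = normalize((tip[0] - prev[0], tip[1] - prev[1]))
        # Blend the current heading with the light direction plus jitter
        # (a phototropism stand-in; the real model also weighs surface angle,
        # material, and other plants).
        jitter = (random.uniform(-0.3, 0.3), random.uniform(-0.3, 0.3))
        new_dir = normalize((heading[0] + 0.5 * LIGHT[0] + jitter[0],
                             heading[1] + 0.5 * LIGHT[1] + jitter[1]))
        branch.append((tip[0] + STEP * new_dir[0], tip[1] + STEP * new_dir[1]))
        if len(branches) + len(new_branches) < MAX_BRANCHES and random.random() < BRANCH_PROB:
            new_branches.append([tip, branch[-1]])  # new branch starts at the old tip
    branches.extend(new_branches)

# Seed one plant with two particles and run a handful of growth steps.
plant = [[(0.0, 0.0), (0.0, STEP)]]
for _ in range(50):
    grow(plant)
print(f"{len(plant)} branches, {sum(len(b) for b in plant)} particles")
```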

Next is a pair of projects from Disney Research that demonstrate impressive capabilities for bringing the digital world into the real one and vice versa. In one case it’s applying graphics to a person’s face in real time, and in the other it’s capturing the subtleties of their hair in 3D by observing it in motion.

The facial projection rig uses a high-speed camera to track facial motion and expressions in real time, combining that with static graphics to create a custom image to project back onto the face — updated hundreds of times per second. So you can put together any kind of cool design, mask, facial hair or scars…

…And, of course, the first thing they do is put a horrifying clown face on someone, and then the even more horrifying face of the Joker from The Dark Knight. It works well, but why not something a little less creepy?

(Note that in the video, the flicker you see comes from an interaction between the DLP image and the camera’s rolling shutter — you wouldn’t see it in real life.)

The lighting demo in particular I think is pretty cool. This would be useful in real-life performances like theater, or, now that I think of it, theme parks. Who’d have thought?
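
If you’re curious how the per-frame loop of a rig like this might be structured, here’s a hedged sketch. The camera, tracker, warp and projector objects are hypothetical stand-ins, and the target rate is an assumption based on the “hundreds of times per second” figure above — this is the shape of the pipeline, not Disney’s implementation.

```python
import time

# Hedged sketch of a capture -> track -> warp -> project loop.
# All objects passed in are hypothetical stand-ins for the real hardware.

TARGET_HZ = 500  # "hundreds of times per second"; exact rate is an assumption

def projection_loop(camera, face_tracker, design_texture, warp, projector):
    frame_budget = 1.0 / TARGET_HZ
    while True:
        start = time.perf_counter()
        frame = camera.grab()                  # grab one high-speed camera frame
        landmarks = face_tracker.track(frame)  # estimate facial pose and expression
        if landmarks is None:
            continue                           # lost the face; skip this frame
        # Deform the static artwork (clown, Joker, scars...) so it lands on the
        # face exactly where the current expression puts each feature.
        warped = warp(design_texture, landmarks)
        projector.display(warped)
        # Hold the loop at the target rate so projection lag stays imperceptible.
        remaining = frame_budget - (time.perf_counter() - start)
        if remaining > 0:
            time.sleep(remaining)
```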

Disney researchers also put together a technique for capturing and simulating hair accurately by observing it in vivo. The hair movement thing is a hard problem but one that’s addressed pretty well by current models; setting up hair so it can be simulated in the first place is a different question entirely. Want to go around manually configuring every strand of a 5,000-strand 3D hair model? Me neither.

This team uses a setup of 10 cameras to track head motion and hair motion in minute detail. By simply sitting in what I assume must be a very hot, brightly lit studio and shaking one’s head back and forth, the system gets enough data to put together a really quite decent reproduction of one’s head of hair.
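
The capture stage presumably leans on standard multi-view triangulation to turn points tracked across the cameras into 3D positions. Here’s the textbook linear (DLT) version of that step — my assumption about the underlying geometry, not the paper’s full hair-reconstruction pipeline.

```python
import numpy as np

# Textbook linear triangulation: given one tracked point seen by several
# calibrated cameras, recover its 3D position. This is a generic building
# block, not the paper's hair-specific reconstruction.

def triangulate(projection_matrices, pixel_coords):
    """projection_matrices: list of 3x4 numpy camera matrices P_i.
    pixel_coords: list of (u, v) observations of the same point, one per camera.
    Returns the 3D point that best satisfies the linear projection constraints."""
    rows = []
    for P, (u, v) in zip(projection_matrices, pixel_coords):
        rows.append(u * P[2] - P[0])   # u * (p3 . X) - (p1 . X) = 0
        rows.append(v * P[2] - P[1])   # v * (p3 . X) - (p2 . X) = 0
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                # back to inhomogeneous coordinates
```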

This would be extremely useful when making an avatar of oneself. Right now part of the reason they always turn out creepy is poor representation of the hair. That plus the robotic articulation and dead eyes.

The last one I thought was super cool is this crowd simulator. Simulating a crowd is easy to do poorly and really hard to do realistically, with dozens or hundreds of individuals pathing around each other, stopping and starting, getting too close or forming weird patterns. If you want to put a fake crowd in the background of a film or game, those problems are super obvious to observers and, as has probably happened to many readers, immediately damage any suspension of disbelief.

This new approach basically gives each individual a rudimentary sense of vision and basic understanding of its surroundings. Based on what it sees, it makes the best choice for navigating around an obstacle or another individual in the crowd, more or less how humans do it: you’ll go a couple of feet out of your way, but you won’t slowly veer into the street because the other guy is on your side of the sidewalk (awkward).
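
As a rough illustration of that vision-based decision (not the paper’s actual algorithm, which renders a synthetic visual field for each agent), here’s a sketch where an agent scores a handful of candidate headings by how soon it would collide with a neighbor and how far the heading strays from its goal, then takes the cheapest one. The radius, lookahead and cost weights are assumed values.

```python
import math

# Hedged sketch of vision-based steering: the agent "looks" along candidate
# headings inside its field of view, estimates time-to-collision with nearby
# agents on each, and balances collision avoidance against staying on course.

AGENT_RADIUS = 0.3   # assumed personal-space radius, in meters
LOOKAHEAD = 4.0      # seconds of anticipation (assumed)

def time_to_collision(pos, vel, other_pos, other_vel):
    """Return the time until two discs of radius AGENT_RADIUS touch, or inf."""
    rel_p = (other_pos[0] - pos[0], other_pos[1] - pos[1])
    rel_v = (other_vel[0] - vel[0], other_vel[1] - vel[1])
    a = rel_v[0] ** 2 + rel_v[1] ** 2
    b = 2 * (rel_p[0] * rel_v[0] + rel_p[1] * rel_v[1])
    c = rel_p[0] ** 2 + rel_p[1] ** 2 - (2 * AGENT_RADIUS) ** 2
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:
        return math.inf
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else math.inf

def choose_heading(agent_pos, speed, goal_dir, neighbors):
    """Pick the candidate heading with the lowest combined cost.
    neighbors is a list of (position, velocity) pairs for nearby agents."""
    goal_angle = math.atan2(goal_dir[1], goal_dir[0])
    best_angle, best_cost = goal_angle, math.inf
    for offset_deg in range(-60, 61, 10):          # the agent's "field of view"
        angle = goal_angle + math.radians(offset_deg)
        vel = (speed * math.cos(angle), speed * math.sin(angle))
        ttc = min((time_to_collision(agent_pos, vel, p, v) for p, v in neighbors),
                  default=math.inf)
        collision_cost = max(0.0, LOOKAHEAD - min(ttc, LOOKAHEAD))  # sooner = worse
        deviation_cost = abs(math.radians(offset_deg)) * 0.5        # off-course penalty
        cost = collision_cost + deviation_cost
        if cost < best_cost:
            best_angle, best_cost = angle, cost
    return best_angle
```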

The results are really quite good, with the little crowds navigating one another deftly — but never too deftly. When two streams of people cross, they don’t find some perfect solution that lets everyone pass without changing speed or looking around. There’s a little negotiation and some slowdown, and that’s exactly what you see with this vision-based approach.

Perhaps they can show this video to people in Seattle, who could do with a few tips on how this whole walking-around-other-people thing works.

Dozens more papers are being presented over the next week; check the full schedule for any others that pique your interest.