AT&T "Spring" (Super Bowl)
January 2011
Role: 3D Lead Artist
VFX: The Mill
Director: TWiN
Production Company: Rabbit Content
Agency: BBDO
Awards: Ciclope Bronze - Best VFX
After developing a procedural vine and flower growing system using ICE in XSI, I ended up working with Mill New York to lead the job jointly with Rob Petrie.
In the end, a range of techniques was used: hand animation for the close-up shots, procedural vine growth, and a separate ICE system I built to handle the more volumetric growth. See below for a full discussion in a press release interview from 2011.
The variety of shots and challenges this job presented made it one of the most rewarding jobs I worked on while at The Mill.
Press Release Interview (2011)
How does this vine growing simulation relate to or borrow from the familiar particle, flocking, crowd or even city-building simulations? If you have based it on existing software could you say what that is?
The vine growing simulation was created inside Autodesk's XSI using ICE. ICE is such a fast and flexible system to use that it makes it easy to prototype a wide range of ideas very quickly. We were also rendering in XSI, so keeping the majority of the work in a single 3D application helped to simplify the job considerably.
The system itself was constructed from scratch. It used particles to crawl over the buildings, leaving ICE strand trails behind that represented the vine growth. After the vine particle moved a random distance, a stalk particle was generated and a leaf was attached to the end of it. As the stalk grew, the leaf would play through an animation representing its emergence from the bud. Every now and then, a new vine branch would be generated and the process repeated recursively up to a limit. If we needed flowers on the vines, then we'd just occasionally swap in a flower instead of a leaf.
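ICE is a node-based system rather than a scripting language, so there's no code to quote directly, but the crawl, sprout, and branch loop described above can be sketched in plain Python. Everything below, from the function names to the constants, is illustrative rather than the production setup:

```python
import random
import numpy as np

MAX_DEPTH = 4                 # recursion limit on branching
BRANCH_CHANCE = 0.02          # per-step probability of spawning a new vine
STALK_INTERVAL = (0.5, 1.5)   # random distance between stalk/leaf emissions
FLOWER_CHANCE = 0.2           # occasionally swap a flower in for a leaf

class Vine:
    def __init__(self, pos, heading, depth=0):
        self.trail = [np.array(pos, float)]      # strand trail = vine geometry
        self.heading = np.array(heading, float)
        self.depth = depth
        self.next_stalk = random.uniform(*STALK_INTERVAL)
        self.travelled = 0.0

def snap_to_surface(pos):
    """Stub: a real setup would project the point back onto the buildings."""
    return pos

def step(vine, vines, sprouts, dt=0.04, speed=1.0):
    # Crawl forward with a little random wander, staying on the surface.
    vine.heading = vine.heading + np.random.normal(scale=0.3, size=3) * dt
    vine.heading /= np.linalg.norm(vine.heading)
    new_pos = snap_to_surface(vine.trail[-1] + vine.heading * speed * dt)
    vine.trail.append(new_pos)
    vine.travelled += speed * dt

    # After a random distance, emit a stalk carrying a leaf (or a flower).
    if vine.travelled >= vine.next_stalk:
        kind = "flower" if random.random() < FLOWER_CHANCE else "leaf"
        sprouts.append((new_pos, kind))   # leaf/flower animates open over time
        vine.travelled = 0.0
        vine.next_stalk = random.uniform(*STALK_INTERVAL)

    # Every now and then, branch recursively up to the depth limit.
    if vine.depth < MAX_DEPTH and random.random() < BRANCH_CHANCE:
        side = np.cross(vine.heading, np.random.normal(size=3))
        side /= np.linalg.norm(side)
        vines.append(Vine(new_pos, side, vine.depth + 1))

# Drive a few frames from a single root:
vines, sprouts = [Vine([0, 0, 0], [1, 0, 0])], []
for _ in range(200):
    for v in list(vines):   # copy the list so branching mid-loop is safe
        step(v, vines, sprouts)
```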
There were only a couple of shots in the AT&T commercial that required seeing the vines and flowers grow together, but the growing simulation was still used for many of the static shots, like Chicago's Washington Street, as it provided nice, natural coverage of the buildings. Particle systems based on volume emission were used for many of the wider shots, and also where the flowers and leaves needed to look a bit more bushy.
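The volume-emission approach amounts to filling a region with points and hanging a leaf or flower instance off each one. Here's a minimal sketch, with a hypothetical inside_foliage_volume() test standing in for the real modelled volumes:

```python
import numpy as np

def inside_foliage_volume(p):
    """Stub for the real test against modelled building/foliage volumes;
    a unit sphere stands in as a placeholder here."""
    return np.linalg.norm(p) < 1.0

def emit_in_volume(count, lo, hi, seed=0):
    """Rejection-sample `count` points inside the volume; each point
    would then carry a randomly oriented leaf or flower instance."""
    rng = np.random.default_rng(seed)
    points = []
    while len(points) < count:
        p = rng.uniform(lo, hi, size=3)
        if inside_foliage_volume(p):
            points.append(p)
    return np.array(points)

cloud = emit_in_volume(500, -1.0, 1.0)   # bushy cloud of instance anchors
```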
For some elements we used the Ivy Generator written by Thomas Luft (University of Konstanz). It's a free-to-use application (even for commercial use) for creating ivy vines and leaves, available to download from his university webpage (http://graphics.uni-konstanz.de/~luft/ivy_generator/). Initially we thought we'd use it on most of the shots, but we had to be careful to make sure it matched the look of our own ivy, and it didn't offer quite as much control to art-direct the growth as our own proprietary system. In the end, we mainly used it for dangling ivy from bridges.
Finally, when we needed precise control over the placement of vines, Bezier curves could be hand-drawn and snapped onto the underlying geometry. ICE set-ups would use these curves to generate renderable vines and scatter leaves and flowers. That was a big time saver, and it really helped to blend in some of the particle-generated elements. It was kind of fun drawing them on with our Wacom tablets, so it made for a nice break from tweaking the simulations.
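The curve-driven vines boil down to evaluating a cubic Bezier and spacing instance anchors along it. A small sketch of that idea follows; the Bezier formula is standard, but the scatter function and its parameters are assumptions:

```python
import numpy as np

def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return (u**3 * p0 + 3 * u**2 * t * p1
            + 3 * u * t**2 * p2 + t**3 * p3)

def scatter_along_curve(ctrl, n, jitter=0.02, seed=7):
    """Space leaf anchor points along the curve with a little jitter;
    a production setup would then snap each onto the tracked geometry."""
    rng = np.random.default_rng(seed)
    return np.array([cubic_bezier(*ctrl, t) + rng.normal(scale=jitter, size=3)
                     for t in np.linspace(0.0, 1.0, n)])

# A hand-drawn arc, as four control points:
ctrl = [np.array(c, float) for c in
        ([0, 0, 0], [1, 2, 0], [3, 2, 0], [4, 0, 0])]
leaf_positions = scatter_along_curve(ctrl, n=25)
```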
How are plates chosen that will work well with the application?
The plates weren't chosen for the ivy simulation, as it was still in development at the time of editing. However, the directors (Josh and Jonathan Baker from TWiN) had a clear vision of exactly how far the ivy and the flowers should have grown in each shot. It made life a lot easier during the layout process as it minimised experimentation. Some of the shots were nudged or swapped around in the edit to support the narrative as the work progressed. Having this flexibility in the edit meant that we could let the plate dictate, to some degree, the composition of the ivy, which helped it feel more natural.
How do you give the application enough 3D information about the scene so it knows the coverage it needs to achieve?
As you'd expect, the first process was to track the plate. We found that we needed a similar level of accuracy to tracking a plate for stereoscopic 3D because we needed a tight geometry match. No cheating was allowed! Once we'd built the model, positions were specified for the ivy to sprout from. The ivy grows simultaneously from multiple locations, but each root can be given different behaviour, such as speed of growth, duration, and direction. This gave us a lot of control over the layout. Additionally, hints, in the form of arrow-shaped objects, were placed to act as a force field guiding the way the ivy crawled across the surface.
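The arrow hints can be thought of as a steering force that bends a particle's heading when it passes nearby. A minimal sketch, with the falloff shape and blend weight as assumptions:

```python
import numpy as np

def steer_heading(pos, heading, arrows, radius=2.0, strength=0.5):
    """Blend a crawling particle's heading toward nearby arrow directions,
    with a linear falloff inside each arrow's radius of influence."""
    steer = np.zeros(3)
    for arrow_pos, arrow_dir in arrows:
        d = np.linalg.norm(pos - arrow_pos)
        if d < radius:
            steer += (1.0 - d / radius) * arrow_dir
    blended = heading + strength * steer
    return blended / np.linalg.norm(blended)

# One hint arrow at x=1 pointing "up" bends a rightward-crawling particle:
arrows = [(np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0]))]
h = steer_heading(np.array([0.5, 0.0, 0.0]),
                  np.array([1.0, 0.0, 0.0]), arrows)
```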
What other essential information does it need about the scene and about the plants, flowers etc before it can start?
The most important controlling factor over the plants is the direction the underlying surface is facing. Everything about the growth and orientation of the leaves and flowers is based on that. The model had to be adjusted in several cases to get the desired response. Some shots benefited from the flowers being made to turn slightly towards camera, which helped to increase the sense of volume in the foliage.
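Deriving an instance's orientation from the surface normal, with a slight bias towards camera, might look like the following sketch (the blend weight is an assumption):

```python
import numpy as np

def instance_up_axis(normal, to_camera, camera_bias=0.15):
    """Blend the surface normal with the camera direction to get the
    'up' axis a leaf or flower instance grows along."""
    n = normal / np.linalg.norm(normal)
    c = to_camera / np.linalg.norm(to_camera)
    axis = (1.0 - camera_bias) * n + camera_bias * c
    return axis / np.linalg.norm(axis)

# A flower on a +Z-facing wall, tilted slightly toward an off-axis camera:
up = instance_up_axis(np.array([0.0, 0.0, 1.0]),
                      np.array([0.5, 0.0, 1.0]))
```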
Can the density be controlled?
Being essentially a recursive system, the ivy simulation was very sensitive to small changes in its setup, which meant it was necessary to keep the simulation inputs as consistent as possible. For example, we were very careful to keep the scale of the scenes the same so that once we achieved a look we liked, we could leave all the controls for leaf emission and density alone. This simplified everything considerably and meant we could achieve a consistent look across all the shots.
The important aesthetic that was tweaked on each shot was the relative layout of the flowers and leaves. Generally, we used flowers to accentuate the lit areas of the plate, while the leaves tended to be kept to the shade. Not only did it make sense that a plant would grow that way, but it also helped to get a better sense of depth and shape. It also meant we could keep the orange colours bright and avoid the potential for muddy browns. All of this can be seen to best effect in the "Randy's Donuts" shot in the 60-second version of the spot.
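That lit-areas-get-flowers rule can be approximated by biasing the flower-versus-leaf choice on how much the surface faces the key light. A minimal sketch, with the threshold and noise scale as assumptions:

```python
import numpy as np

def pick_instance(normal, light_dir, rng, flower_threshold=0.4):
    """Favour flowers where the surface faces the key light and leaves
    in the shade, with a little noise so the split isn't a hard line."""
    facing = max(0.0, float(np.dot(normal, -light_dir)))
    noisy = facing + rng.normal(scale=0.1)
    return "flower" if noisy > flower_threshold else "leaf"

rng = np.random.default_rng(3)
# A sunlit, +Z-facing surface under a light shining along -Z gets flowers:
kind = pick_instance(np.array([0.0, 0.0, 1.0]),
                     np.array([0.0, 0.0, -1.0]), rng)
```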
Once the plants and flowers are in place, how does the lighting work so that they look like a natural part of the environment?
For many of the shots, we used the Arnold renderer due to the sheer amount of particle-instanced geometry we were using. Arnold chews this stuff up, and it meant that we could get quite ambitious with the level of detail that we put into some of the shots. For one of the wider city shots at the end of the 60-second spot we actually used individual flower and leaf instances, despite the fact that several could fit into a single pixel of the final image. The amount of geometry didn't seem to matter, and in the end it helped to really sell the scale of it all and keep the lighting consistent with the other shots.
Arnold is also extremely good at bouncing light around in a soft diffuse way. This was perfect for the shot where the man walks out of his house to find it covered in flowers. The final image you see in the spot is pretty close to the render we got straight out of XSI. We just cranked up the bounce lighting component for the shader which helped to soften the lighting and steered it towards the complex light interaction that you get within dense foliage.
For the close-up shots we used mental ray because we knew how to get the most out of it to really sell the level of detail that was needed. If you look closely you should be able to see a subtle hair system that was added to the vines for increased realism. Mental ray also has the advantage that it's The Mill's primary renderer [ed: in 2011], so all of our artists know it very well.
When it comes to including the hand-animated flowers in the foreground, is this an artistic decision? That is, does the simulation hold up only at certain distances from the camera?
For me, the opening shots of the commercial prove that you can't beat an animator's sense of timing and composition. Those shots work really well, and the level of control the animators have is way above what I can offer as an FX TD writing a simulation.
We did find from initial tests that the simulation was able to stand up to fairly close scrutiny due to the visual complexity of the stalks and leaves. For the New Orleans balcony shot we used a hand-animated rig in the immediate foreground, but we were able to use the particle simulation for the rest of the layout. It's also one of the few shots where you can see the flowers and leaves growing using the particle system.
Are the animators working with the same elements that the simulation is using to populate the environments?
No, the assets were quite separate. We had a custom rig that let the animation team grow vines and sprout leaves, and they always animated the high-resolution flowers. We could blend them in quite easily with the particle-based flowers in the distance if necessary.
This application could really be useful in many situations. What ways have you been considering that you could extend or modify it to work for other projects?
The first step would be looking into simplifying the simulation. We used three independent particle systems for the vines, stalks, and leaves/flowers, and it proved to be a little cumbersome for caching. We occasionally ran into synchronisation issues where leaves would start sprouting in mid-air!
After that, it's all about adding controllability and stability to the system for artists who maybe haven't quite got into ICE yet but might need similar functionality in their projects. Adding convincing vegetation is one of those things CG generally finds quite challenging, so having a stock solution like this is always useful.