
Senza Peso: New Virtual Reality Experience + Short Film

June 3rd, 2014 | Blog | 46 Comments

DOWNLOAD THE VR EXPERIENCE: Windows
VR YOUTUBE PLAYBACK: https://www.youtube.com/watch?v=lrq0KRbRk1U
WATCH THE MOVIE:  senzapeso.com
This is our most visually-rich experience to date — a high-end PC is required.

We’re excited to present our latest cinematic Virtual Reality experience, Senza Peso, along with its short film counterpart.

The idea: Watch the visually and musically-rich short film, then descend into the beautiful realms depicted therein and experience the afterlife for yourself without having to die first!

 

Senza Peso

Some highlights & interesting facts:

- We switched from Unity to Unreal Engine 4 for this one (after Ikrima’s lifelong dream came true and UE4 became affordable, source code included!).

- The short film was a passion project of my friend Alain Vasquez and myself. It took 5 years to make, and was finally finished 2 months ago.

- Because I wasn’t completely sick of working on it after 5 years (sarcasm), Ikrima and I decided to spend another 8 weeks building this awesome VR experience with all the film’s assets, music, etc…

- Senza Peso the short film was based upon and inspired by an amazing song some friends of ours made. You can find out more about the music and short film here: senzapeso.com

- Early in the Senza Peso VR project, we were joined by John Dewar, a rad fx generalist and programmer.

- More blog posts to come on this project, as well as on using UE4 vs Unity.

- For the UE4 community, we’re planning to release a realm or two into the marketplace.



Feel free to share and redistribute, just give us credit.
Creative Commons License
Senza Peso by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


VR EXPERIENCE TITLE OPEN

March 18th, 2014 | Blog | 0 Comments

We just posted our new VR experience, titled K&L Station, which leads you into our experience, The Cave. (The link is in the previous blog post.)

[Image: studio logos]

We wanted to create a branded opening experience that leads you into each of our VR experiences, similar to a film studio title in front of a movie, like the FOX or Warner Bros. logo. We love how those short logo animations and music get you excited that you’re about to watch a big movie, and how they tweak the logo to fit the film (the WB logo for The Matrix, etc.). In VR space, though, this idea can be interpreted in many different ways… Not to mention that in VR it can’t just be a logo… right? It has to be a full-blown killer experience!!

Our inspiration somehow landed at Hugo meets the Fifth Element! For those of you who haven’t seen the experience: you start in a busy retro-futuristic Parisian train station complete with flying cars. The K&L Express arrives to take you on your journey while you’re faced with some of the funny characters who hang out at the station! And there is a transwarping portal at the end!

The thought was that a good VR opening experience could serve several purposes. First, it gives first-time VR experiencers an opportunity to acclimate to VR space. This seemed important considering one of the new experiences we’re creating is very surreal, not exactly grounded in reality, so letting people start the journey in a more reality-based setting seemed like it would ease the transition into the surreal. Second, we loved the idea that a setting giving you a sense of travel and going on a journey is fun and enhances the overall teleportive aspects of VR. Like any good song, movie, or book, there is a build-up in the dynamic structure of the story, so a good VR open can serve as the first step in getting the juices flowing, so your viewer is at full connection when they arrive at the main experience.

The train station also becomes a great playground for re-skinning the experience to match where you’re going. For example, we can launch you from different station platforms depending on which experience you choose, or instead of a pickup by the K&L Express, a badass big steampunk train pulls up, hinting that you’re about to travel somewhere very far! It can be funny or mysterious or adventuresome (a Jason Bourne-style chase scene through the station), all to help create a mood and warm the viewer up for a particular experience.

I’m curious what you guys think about a VR open: what works or doesn’t about the idea itself, and the same for our particular first pass at it. It seems obvious there should be an option to skip the open, or maybe a shorter version?

And thanks to all for checking it out and participating in the excitement; it fuels our creative fire!

 

Cory


New Oculus Experience: The K&L Station

March 14th, 2014 | Blog | 8 Comments

A cinematic Hugo meets the Fifth Element

Quick blog post today. First off, thanks to everyone who helped beta test. The feedback really helped us turn up the graphics to 11. So without any delay, here’s our next Oculus VR release:

[Updated Mirror Link] Windows x86: http://bit.ly/1fA4KAK
Mac: http://bit.ly/1pFmTmt

A couple of notes: this demo requires QuickTime to be installed. Linux users, we haven’t forsaken you; I just didn’t realize this would be a problem until this morning, so you’ll have to wait a little bit while we figure out an alternative.

As always, let me know what you think!

Download Details:

Windows: http://bit.ly/1fA4KAK
Mac & Linux coming soon.

The usual Oculus Rift bindings:

‘W,S’ – Move forward/back
‘A,D’ – Move left/right
‘Q,R’ – Turn the camera left/right



Feel free to share and redistribute, just give us credit. We don’t have a license for all of the content here, so we can’t use our normal license to allow people to remix things but hopefully in the future, we’ll release stuff that others can use.
Creative Commons License
The Cave by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Variance Shadow Maps

February 24th, 2014 | Blog | 0 Comments

Real-time shadows are still an annoyance in real-time graphics. Surprisingly, even the latest next-gen games use a multitude of shadowing techniques to compensate for each one’s shortcomings. (Crytek shadow techniques: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf)

The solution? Visual scoping. Be cognizant of the techniques available and craft the art direction within those constraints. Our art direction is heavy cinematic lighting, so we can make certain assumptions:

  • A small number of lights that don’t move (lights don’t move around in real life)
  • A physically plausible pipeline (area lights, not point lights; we still have a ways to go here)
  • Aliasing is the worst offender: give up hard shadows in favor of soft shadows if there’s going to be sampling
  • A hard minimum of 60 fps at 1080p

So we have a lot of wiggle room within this art direction but we still need some sort of shadowing mechanism for dynamic characters.

Variance Shadow Maps Overview

My first inclination was to implement Variance Shadow Maps because they are very fast. VSMs use a probability distribution function to compute shadow visibility. The idea behind them is to separate the shadowing function into occluder terms (the things that go into the shadow map) and receiver terms (the scene you’re rendering), because this lets us pre-filter the shadow map (Gaussian blur, mipmapping, bilinear/trilinear sampling, etc.), which prevents aliasing and biasing problems such as shadow acne. The initial insight for this technique came from computing volumetric shadows (Deep Shadow Maps by Lokovic & Veach).

So, what does it mean when people talk about the shadow test as being a function? Our shadow test is normally a function that returns 1 if a fragment is not in shadow and 0 if it is. This is a Heaviside step function, defined as

$$ s(d_r) \;=\; H(d_o - d_r) \;=\; \begin{cases} 1 & \text{if } d_r \le d_o \quad\text{(lit)} \\ 0 & \text{if } d_r > d_o \quad\text{(in shadow)} \end{cases} $$

where d_r is the receiver (fragment) depth and d_o is the occluder depth stored in the shadow map.

VSMs approximate this function as a probabilistic function instead of a Heaviside step function.

$$ s(d_r) \;\approx\; P(d_o \ge d_r) $$

where d_o is now treated as a random variable.

So d_o becomes a random variable representing the occluder depth distribution. Instead of each texel in the shadow map storing a single depth value, it represents a distribution of depth values. This is powerful because most shadow bias/acne problems come from the quantization of the shadow map:

[Diagram: shadow-map depth quantization]

In traditional shadow mapping, the red lines show the depth sample stored at each texel. The teal object spans multiple depths at each texel because it’s curved. When the camera renders the pixels depicted by the arrows, we get self-shadowing because of this quantization.

VSM Deets: Sprinkle That Math Magic

So, instead of storing a single depth value, we store a distribution of depth values at each texel. P(d_o ≥ d_r) is then the probability that our current fragment is closer than the occluder depths in the distribution, i.e. the probability that it is lit. So, how do we store this distribution? Well, we store its first two moments:

$$ M_1 = E(x) = \int x\,p(x)\,dx, \qquad M_2 = E(x^2) = \int x^2\,p(x)\,dx $$

Here x is the occluder depth stored in the shadow map, p(x) is our filter weight, and E(x) is the expected value of the distribution in this neighborhood (which is exactly what averaging/filtering the shadow-map texels gives you).

Bringing back that undergrad probability, we can compute the mean and the variance:

$$ \mu = E(x) = M_1, \qquad \sigma^2 = E(x^2) - E(x)^2 = M_2 - M_1^2 $$

Using Chebyshev’s inequality (the one-tailed Cantelli form), we can compute an upper bound for P(d_o ≥ d_r):

$$ P(d_o \ge d_r) \;\le\; p_{\max}(d_r) \;=\; \frac{\sigma^2}{\sigma^2 + (d_r - \mu)^2}, \qquad d_r > \mu $$

Fortunately, this upper bound is a good enough approximation for planar receivers. For a detailed explanation and assumptions, you can check out the VSM paper: http://www.punkuser.net/vsm/vsm_paper.pdf

So to recap, here are the general steps to VSM (a rough shader sketch follows the list):

  • Render a shadow map and store z and z*z to a render texture. Use a linear z-depth; perspective-corrected z (aka the z/w that is stored in the depth buffer) is horrible. For floating point textures, you can remap the linear z to [-1,1]. You can enable the usual AA flags on the texture (MSAA, bilinear/trilinear sampling, etc.)
  • Optionally blur the shadow map (box or Gaussian filter)
  • Generate mipmaps
  • Render the scene as usual. For the shadow test, use Chebyshev’s inequality to compute p_max. p_max is your shadow visibility factor (an upper bound on how lit the fragment is)
  • Attenuate the light contribution by p_max
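
In shader terms, the test itself is only a few lines. Here’s a minimal Cg/HLSL sketch, not our production shader: the uniform names (_VSMShadowMap, _ShadowView, _ShadowProj, _LightNearFar, _MinVariance) and the linear [0,1] depth mapping are illustrative assumptions, not Unity built-ins.

```hlsl
// Receiver-side VSM test: fetch the filtered moments and apply the
// one-tailed Chebyshev (Cantelli) bound on P(d_o >= d_r).
sampler2D _VSMShadowMap;   // RG float texture storing (E(x), E(x^2))
float4x4  _ShadowView;     // light view matrix
float4x4  _ShadowProj;     // light projection matrix
float2    _LightNearFar;   // light near/far planes used for the linear remap
float     _MinVariance;    // small clamp to avoid numerical shadow acne

float ChebyshevVisibility(float2 moments, float receiverDepth)
{
    // Fully lit if the receiver is nearer than the mean occluder depth.
    if (receiverDepth <= moments.x)
        return 1.0;

    // sigma^2 = E(x^2) - E(x)^2, clamped so it never collapses to zero.
    float variance = max(moments.y - moments.x * moments.x, _MinVariance);

    // p_max = sigma^2 / (sigma^2 + (d_r - mu)^2)
    float d = receiverDepth - moments.x;
    return variance / (variance + d * d);
}

float SampleShadow(float3 worldPos)
{
    // Linear light-space depth remapped to [0,1]; must match the caster pass.
    // (Assumes a -z-forward light view matrix, hence the negation.)
    float3 viewPos = mul(_ShadowView, float4(worldPos, 1.0)).xyz;
    float  depth   = (-viewPos.z - _LightNearFar.x) /
                     (_LightNearFar.y - _LightNearFar.x);

    // Project into the shadow map to get the UV for the filtered moments.
    float4 clipPos = mul(_ShadowProj, float4(viewPos, 1.0));
    float2 uv      = clipPos.xy / clipPos.w * 0.5 + 0.5;

    float2 moments = tex2D(_VSMShadowMap, uv).rg;
    return ChebyshevVisibility(moments, depth);   // multiply the light by this
}
```

The early-out when receiverDepth <= moments.x matters: Chebyshev’s bound is only valid (and only meaningful) for d_r > μ.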

VSMs are extremely fast because of this pre-filtering (blur, mipmapping, anisotropic/bilinear filtering), and you get a nice fall-off at the shadow edges. However, one unavoidable problem is that you get light leaking and peter-panning when you have high variance in your depth distribution:

[Screenshot: light leaking artifact]

High variance in the depth distribution caused by multiple overlapping occluders

Implementation Fun That Drives You Crazy

While implementing this, I ran into a couple more gotchas. For anyone implementing this in Unity, here are some pitfalls you can avoid:

[Screenshot: shadow artifacts from the halved depth range]

Careful with DirectX vs OpenGL: DirectX NDC.z goes from 0 to 1, whereas OpenGL NDC.z goes from -1 to 1 (x and y go from -1 to 1 in both). I was effectively halving my depth buffer resolution because my remap squeezed the DirectX depth into [0.5, 1]!
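
One way to sidestep the NDC.z mismatch entirely is to have the caster pass write its own linear [0,1] depth (which is what the recap above recommends anyway). A small illustrative fragment shader, with the same made-up _LightNearFar uniform as before; viewZ is assumed to be the positive light-view-space depth passed down from the vertex shader:

```hlsl
// Shadow caster fragment: store linear depth and its square, which is
// independent of whether NDC.z is [0,1] (DirectX) or [-1,1] (OpenGL).
float2 _LightNearFar;   // light near/far planes

struct v2f
{
    float4 pos   : SV_POSITION;
    float  viewZ : TEXCOORD0;   // positive light-view-space depth
};

float4 fragCaster(v2f i) : SV_Target
{
    float d = (i.viewZ - _LightNearFar.x) / (_LightNearFar.y - _LightNearFar.x);
    return float4(d, d * d, 0.0, 0.0);   // the two moments: E(x), E(x^2)
}
```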

 

[Screenshot: result after fixing the depth range]

Much better, but you can still see the effects of peter-panning and light leakage.

The next problems to fix: A) I was using the wrong projection matrix for the DX render path. Unity uses OpenGL-style projection matrices by default, so you have to call GetGPUProjectionMatrix() to get the right version. That caused a change in handedness, which led to 7 hours of shader debugging to discover that Cull Back had effectively turned into Cull Front. B) You shouldn’t front-face cull, or you’ll get a winnowing effect where the shadows shrink.

[Screenshot: corrected render path with back-face culling]

This is the correct result: it shows a bit of light leakage, but better contact shadows and not as much peter-panning.

 

I also noticed a difference between computing VSMs on linearized depth vs perspective-corrected z (z/w, which is what’s stored in the depth buffer). Surprisingly, the difference shows up even with 32-bit floating point textures; there’s a little bit less light bleeding:

[Comparison screenshots]

Using perspective-corrected depth values (z/w) vs using linearized depth (z values mapped linearly from [near plane, far plane] to [0,1])

Sadly, my foundation in Probability isn’t solid enough to explain this. If you can explain it, please add some insight in the comments. I’d imagine it has something to do with how perspective correction alters the variance of our depth distribution random variable.
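
To make that hunch a bit more concrete (this is still speculation on my part): perspective-corrected depth is a hyperbolic function of view-space depth, so the same filter kernel weights near and far occluders very differently in z/w than it does in linear z, which shifts the per-texel mean and variance. For a D3D-style projection with near plane n and far plane f:

$$ z_{\text{ndc}} = \frac{f}{f-n}\left(1 - \frac{n}{z_{\text{view}}}\right), \qquad z_{\text{view}} \in [n, f] \;\Rightarrow\; z_{\text{ndc}} \in [0, 1] $$

Most of the [0,1] range is spent close to the near plane, so an identical spread of occluder geometry produces a different variance depending on where it sits in the frustum.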

Ultimately, I didn’t settle on this technique because the light leaking artifact was too detrimental. The standard light-bleeding reduction trick (a quick sketch is in the P.S. at the end of this post) just created some “cartoony” fattened shadows. Instead, I switched over to Exponential Shadow Maps, which are even faster and better:

[Screenshot: Exponential Shadow Maps result]

Stay tuned for a detailed follow-up…
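
P.S. For reference, the light-bleeding reduction mentioned above is typically just a linstep remap of p_max. This is a generic sketch of that trick, with _BleedReduction as a made-up tweakable (something like 0.2–0.5), not a value from our project:

```hlsl
// Clamp the low tail of p_max so faintly "leaked" light snaps to full shadow.
// Raising the threshold is exactly what fattens the shadows.
float ReduceLightBleeding(float pMax, float bleedReduction)
{
    return saturate((pMax - bleedReduction) / (1.0 - bleedReduction));
}
```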


Art+Tech Demo: Virtual Reality Iron Man UI

January 27th, 2014 | Blog | 13 Comments

The Cave: Chillin’ with J.A.R.V.I.S. in Virtual Reality

Ever wanted to know what it would be like to control J.A.R.V.I.S. like Iron Man? So did we. So we made a Virtual Reality demo for the Oculus. Try it out and let us know!

We’re really excited about creating cinematic interactive narratives for virtual reality. Over the past year, we have been doing a lot of research and development on our process for creating hyper-realistic 3D scans of people. We ultimately want to get to the point of being able to do a 4D performance capture.

But that’s not all. Our goal is to make it easy for everyone to go out and quickly capture 3D content for their own VR experiences, whether it’s downtown Paris or a performance capture of some actors. I think once you’ve experienced the power of hyper-realistic performance capture and VR, it’s hard to go back.

So, here’s our first demo for the Rift: it includes a UI inspired by Iron Man and a 3D human capture of one of our friends. It’s a proof of concept that took us about 3 weeks to hack together from previously built assets, but we think it still looks pretty badass.

Our Key Concept Tests:

  • 3D Holographic UI in VR
  • Realistic 3D Scans of people
  • How to port our existing art production pipeline

Also, stay tuned: we’ve been prepping these last few weeks to start publicly sharing a lot of the little experiments we’ve been doing internally over the last 4 months!

Let me know what you guys think in the comments or you can hit me up on twitter (@ikrimae or @knlstudio)

Download Details:

Windows: https://www.dropbox.com/s/b0b66errssl3tcg/TheCave1.0.zip
Linux: https://www.dropbox.com/s/v5xvpqlknncztig/BatCave-1.0-Linux-x86_64.zip

The usual Oculus Rift bindings:

‘W,S’ – Move forward/back
‘A,D’ – Move left/right
‘Q,R’ – Turn the camera left/right

Feel free to share and redistribute, just give us credit. We don’t have a license for all of the content here, so we can’t use our normal license to allow people to remix things but hopefully in the future, we’ll release stuff that others can use.
Creative Commons License
The Cave by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


Breakthrough: A 3D Head Scan Using One Camera

March 13th, 2013 | Blog | 7 Comments

So over the last couple of months, Cory and I have been working on putting together an augmented reality short, and we’ve been chipping away at all sorts of workflow obstacles. One of the big challenges is figuring out a way to not have to use 45 cameras, because A) it’s freaking expensive and B) it’s really freaking expensive. 45 cameras means 45x lenses, 45x batteries, 45x memory cards, etc. So until we have swimming pools full of gold coins like Scrooge McDuck, we have to rely on some clever workarounds.


Character Animation with Kinect Motion Capture

February 19th, 2013 | Blog | 2 Comments

Last time, I gave an overview of how we capture photo-quality 3D people for augmented reality executions. One of the challenges going forward: once we capture them, how do you animate them? Again, our number one constraint is that we are always fighting the time budget. Animating people by hand in a realistic manner is out of the question, so we quickly turn to Kinect motion capture.

For those that don’t know what motion capture is, it’s where you get your actors to dress up in those silly black suits in a studio and then record their movements in 3D.

Black suits and white balls for traditional motion capture

The little white balls are what the computer uses to record movement in 3D space

 

Problem solved, right? Well, not exactly. There are a few main problems we’ve found with motion capture: jitter, fidelity, and cost. Most of the data that comes out of motion capture systems tends to have a lot of noise, which shows up as jitter in your animations:


Unity: Our drag n drop augmented reality engine

February 18th, 2013 | Blog | 0 Comments

Now that we have a 3D model, we have to figure out a way to make her show up in augmented reality space, which usually means we need a video game engine. Normally, these things take a lot of time and experience to write (I think it took me about a year to write my first one when I was 16), but now there are a decent number of easy-to-use solutions. We settled on Unity 3D because it’s very artist-friendly and has an active community. What this means is that you can get your own video game up and running on your iPad very easily without knowing how to program; if you know some JavaScript, you can even write scripts for it. So Unity is the platform that lets us interact with 3D objects. But we still need a way to track our coaster and tie the 3D model to it. Back in 2011, we would have had to roll our own. Writing your own computer vision algorithm (making the computer detect a specific image in a video and then determine its orientation in 3D space) is no small feat.


Photographing People in 3D with Photogrammetry

February 13th, 2013 | Blog | 0 Comments

[Interactive 3D head-scan viewer: http://blog.mythly.com/wp-content/uploads/2013/02/JadeBlue.jpg]

Photogrammetry

When you don’t have an army of vfx artists at your disposal, you are constantly fighting one big enemy: time. Surprisingly, the time budget is your biggest enemy, even more than the money budget. When working at the high-end vfx level, the time it takes to do any work in post grows exponentially. It’s similar to the jump from shooting stills to shooting video, where everything becomes a factor of 10 more expensive. Need to edit out a blemish? In the stills world, no problem; at most you have 40 selects from a photoshoot, and one click with the heal brush in Photoshop and you’re done! Need to do that on a 3-minute video? That’s roughly 4,300 frames (3 min × 60 s × 24 frames/s = 4,320). My rule of thumb is that the jump from photography to film is equivalent to the jump from filming to creating purely 3D photorealistic content. That’s why a movie like Life of Pi cost 100 million dollars to make even though it was mostly shot in one location on a blue-screen sound stage. It takes a lot of man-hours to make something CG. But don’t despair. That’s where our secret weapon #1 comes into play: photogrammetry!


An Augmented Reality Breakdown with Bud Light

February 9th, 2013 | Blog | 0 Comments

Back in November 2011, I saw a pretty cool augmented reality ad app: it gave you X-ray vision into Moose Jaw’s catalog and showed you the models in their underwear. The app got a lot of buzz (it increased catalog sales by 35%!), but what I found most exciting was that it demonstrated mobile hardware was now fast enough to do real-time augmented reality.