4:04 pm

Download Insurgent VR: Shatter Reality

By | March 10th, 2015 | Categories: Blog | 6 Comments

Be a Divergent in Insurgent

TLDR: Download @
Gear VR: Oculus Home on GearVR

Hey guys,

I’m excited to finally be able to share with you guys what we’ve been working on for the past 4 months – our first movie tie-in VR experience!

How would you like to be a character in a scene from a blockbuster movie? This was how Lionsgate approached us back in September when they asked us to create a VR experience for their upcoming Insurgent movie.

Since the movie revolves around a VR simulation, we thought it would be perfect for VR and right up our alley: combining cinematic narrative with surreal and epic visuals. Lionsgate’s support allowed us to film the principal talent, such as Kate Winslet & Mekhi Phifer, on set during film production. We also got a chance to introduce them to VR with Senza Peso.

We wanted to have the widest release possible for this so we have:

  • A DK2 Version
  • A Gear VR Movie Theater experience to watch the movie trailer and GearVR port of the DK2 Version
  • A traveling city tour with a custom designed/built chair from the VR experience with full haptic feedback and 4D components
  • Google Cardboard mobile apps (Android & iOS)

But of course, we didn’t want to sacrifice visual quality to achieve this. So we developed some tools that we hope to share with you guys down the road.

UE4 Stereo 360 Movie Export Plugin

We wanted to maintain the same level of visual quality across all the different devices so we created a UE4 plugin that allows VR Developers to export synthetic stereoscopic 360 movies.

Our new plugin will allow developers to easily create GearVR ports of their passive desktop experiences. We imagine that down the road, this could also be an amazing way to create Let’s Play videos that people can consume in VR.

Best news of all: we’re open-sourcing the plugin and giving it out for free! We hope to post it to our GitHub repo in the next couple of weeks, so stay tuned for more details.

Alembic Cache Playback

To create a fast content production workflow, we also wrote an Alembic cache plugin that allows you to play back Alembic files in UE4. This allows us to import vertex cache animation, such as water simulations, or rigid motion animation, such as our lab destruction sequence containing 10k destroyed lab fragments shown in the screen capture above.

We’re excited about this because it will hopefully allow us to bring film-like visual effects into our VR experiences quickly and at scale.

Our plan

We’ve been fortunate and thankful to have a couple of big brands and clients ask us to do some commercial VR work, but like we’ve always said, our main goal is to create original content. It’s been amazing having these clients so that we can sustain ourselves while doing VR fulltime until the consumer headsets start coming out. Our other options were to raise investor money or seek out a publishing deal, neither of which is appealing to us at this stage.

The great thing is that after our next client project, our plan is to work from June to December on our own original idea and hopefully be able to grow the 2 man K&L duo to a trio or even a team :)

Either way, we’re super stoked and can’t wait for the VR headsets coming out this year!

And as always, let me know what you think of the Insurgent experience.


Cory & Ikrima

3:48 pm

NBC’s The Voice 360: A 3D 360 VR Experience

By | September 23rd, 2014 | Categories: Blog | 1 Comment

TLDR: Download at

A bit of a change of pace from our usual fantasy & sci-fi worlds: we just wrapped working with NBC to create a VR component for their Voice 3D 360 chair tour. When NBC approached us about this, we were excited about the opportunity to experiment with 3D 360 video in VR and to have the four judges from the show (Pharrell, Adam Levine, Blake Shelton, & Gwen Stefani) participate in a little VR experience. From the outset, we said that this was a big experiment and it might not work at all, but to NBC’s credit, they decided to boldly charge ahead anyways!

And from a personal perspective, I was happy because I know my mom’s a big fan of the show, so I was looking forward to seeing her and her friends’ reactions. I think the more avenues we have to get people into VR, the faster VR will hit the mainstream so we can all benefit from it. (If only she was as excited about Senza!) I also loved the reaction from introducing VR to everyone on the set, from the crew all the way up to the producers, who were all enthusiastic about VR.

We were also excited that NBC went the extra mile to allow the experience to be posted online so we could share it with others and hear their perspective (until Dec 19th, when it has to come down because of music rights). I’d love to hear how family members, especially fans of the show, reacted. I was surprised at how well it worked; in our next live-action experience, we’d love to play with moving the camera and adding fake parallax for positional tracking. What’re your thoughts on 3D 360 VR?

8:59 am

Senza Peso VR: DK2 Edition

By | September 3rd, 2014 | Categories: Blog | 24 Comments

DOWNLOAD THE DK2 VR EXPERIENCE: Windows VR
YOUTUBE PLAYBACK:

This is our most visually-rich experience to date — a high-end PC is required. The road to the DK2 Edition was full of surprising challenges, but we’re happy that we have a 75 fps version with solid positional tracking. I know you guys are super excited to try it, so I’ll keep this short. A couple of new things:

The Launcher

We’re rolling out a very early launcher with a user registration system! We’re doing this because we want to get in the habit of providing autoupdates, delta upgrades, and quick fixes instead of having long periods of no releases (yay, software development practices from this century!). I’m excited about where we want the launcher to grow, but for now it’ll primarily be a central hub for all of our content.

We’re also rolling out a registration system so that I can mark those of you who want it as beta users to try out our latest experimental stuff. Our little newsletter has grown from 200 people to over 2000 people, so managing that by email is getting a little crazy >_< (But thanks to all of you who beta tested and offered to beta test!) For now, John has built the launcher with autoupdate capabilities and a very basic user registration system.

The Epic Games Booth @ PAX

The ever-so-awesome guys at Epic Games offered us a spot on their booth at PAX Prime in Seattle last weekend. It was our first PAX and an awesome experience seeing so many people lose their VRginity on Senza Peso. Also, a personal shout out to the Xing & the Darknet teams for showcasing their game at the Indie Megabooth.

Next Project Reveal Coming Soon….

We can’t say much about it other than we’ll be announcing it next Tuesday. Stay tuned for more rad VR! Looking forward to seeing you guys in the metaverse! And as always, we’d love to hear your feedback on anything and everything.

Feel free to share and redistribute, just give us credit.
Creative Commons License
Senza Peso by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

11:30 pm

Senza Peso: New Virtual Reality Experience + Short Film

By | June 3rd, 2014 | Categories: Blog | 99 Comments

This is our most visually-rich experience to date — a high-end PC is required.

We’re excited to present our latest cinematic Virtual Reality experience, Senza Peso, along with its short film counterpart.

The idea: Watch the visually and musically-rich short film, then descend into the beautiful realms depicted therein and experience the afterlife for yourself without having to die first!


Senza Peso

Some highlights & interesting facts:

– We switched from Unity to Unreal Engine 4 for this one (After Ikrima’s lifelong dream came true and UE became affordable, including the source code!).

– The short film was a passion project of my friend Alain Vasquez and myself. It took 5 years to make, and was finally finished 2 months ago.

– Because I wasn’t completely sick of working on it after 5 years (sarcasm), Ikrima and I decided to spend another 8 weeks building this awesome VR experience with all the film’s assets, music, etc…

Senza Peso the short film was based upon and inspired by an amazing song some friends of ours made. You can find out more about the music and short film here:

– Early in the Senza Peso VR project, we were joined by John Dewar, a rad fx generalist and programmer.

– More blog posts to come on this project, as well as on using UE4 vs Unity.

– For the UE4 community, we’re planning to release a realm or two into the marketplace.

Feel free to share and redistribute, just give us credit.
Creative Commons License
Senza Peso by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

2:57 pm


By | March 18th, 2014 | Categories: Blog | 3 Comments

We just posted our new VR experience, titled K&L Station, which leads you into our experience, the CAVE. (The link is in the previous blog post.)


We wanted to create a branded opening experience that leads you into each of our VR experiences, similar to a film studio title in front of a movie, like the FOX or Warner Brothers logo. We love how those short logo animations and music get you excited that you’re about to watch a big movie, and how they’re enhanced depending on the movie (the WB logo for The Matrix, etc…). However, in VR space, this idea can be interpreted in many different ways… Not to mention in VR it can’t just be a logo… right? It has to be a full-blown killer experience!!

Our inspiration somehow landed at Hugo meets The Fifth Element! For those of you who haven’t seen the experience: you start in a busy retro-futuristic Parisian train station complete with flying cars. The K&L Express arrives to take you on your journey as you’re faced with some of the funny characters who hang out at the station! And there is a transwarping portal at the end!

The thought was that a good VR opening experience could serve several purposes. First, it gives first-time VR experiencers an opportunity to acclimate to VR space. This seemed important considering one of the new experiences we’re creating is very surreal, not exactly grounded in reality, so letting people start the journey in a more reality-based setting seemed like it would help the transition into the surreal. Second, we loved the idea that a setting giving you a sense of travel and of going on a journey was fun and enhanced the overall teleportive aspects of VR. Like any good song, movie, or book, there is a build-up in the dynamic structure of the story, so a good VR open could serve as the first step in getting the juices flowing, so your viewer is fully connected when they arrive at the main experience.

The train station also becomes a great playground for re-skinning the experience to enhance where you’re going. For example, we can launch you from different station platforms depending on which experience you choose, or instead of a pickup by the K&L Express, a big badass steampunk train pulls up, hinting you’re about to travel somewhere very far! It can be funny or mysterious or adventuresome (a Jason Bourne-style chase scene through the station), all to help create a mood and warm the viewer up for a particular experience.

I’m curious what you guys think about a VR open: things that work or don’t about the idea itself, along with the same for our particular first pass at it. Seems obvious there should be an option to skip the open, or maybe a shorter version?

And thanks to all for checking it out and participating in the excitement; it fuels our creative fire!



9:48 am

New Oculus Experience: The K&L Station

By | March 14th, 2014 | Categories: Blog | 11 Comments

A cinematic Hugo meets The Fifth Element

Quick blog post today. First off, thanks to everyone who helped beta test. The feedback really helped us turn up the graphics to 11. So without any delay, here’s our next Oculus VR release:

[Updated Mirror Link] Windows x86:

A couple of notes. This demo requires QuickTime to be installed; Linux users, we haven’t forsaken you. I just didn’t realize this would be a problem until this morning, so you’ll have to wait a little bit while we figure out an alternative.

As always, let me know what you think!

Download Details:

Mac & Linux coming soon.

The usual Oculus Rift bindings:

‘W,S’ – Move forward/back
‘A,D’ – Move left/right
‘Q,R’ – Turn the camera left/right

Feel free to share and redistribute, just give us credit. We don’t have a license for all of the content here, so we can’t use our normal license to allow people to remix things but hopefully in the future, we’ll release stuff that others can use.
Creative Commons License
The Cave by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

7:12 am

Variance Shadow Maps

By | February 24th, 2014 | Categories: Blog | 0 Comments

Real-time shadows are still an annoyance in real-time graphics. Surprisingly, even the latest next-gen games use a multitude of shadowing techniques to compensate for each one’s shortcomings. (Crytek shadow techniques:

The solution? Visual scoping. Be cognizant of the techniques available and craft the art direction within those constraints. Our art direction is heavy cinematic lighting. Our assumptions are:

  • A small number of lights that don’t move (lights rarely move around in real life)
  • A physically plausible pipeline (area lights, not point lights; we still have a ways to go here)
  • Aliasing is the worst offender: give up hard shadows in favor of soft shadows if there’s going to be sampling
  • A hard minimum of 60 fps at 1080p

So we have a lot of wiggle room within this art direction but we still need some sort of shadowing mechanism for dynamic characters.

Variance Shadow Maps Overview

My first inclination was to implement Variance Shadow Maps because they are very fast. VSMs use a probability distribution function to compute shadow visibility. The idea is to separate the shadowing function into occluder terms (things that go into the shadow map) and receiver terms (the scene you’re rendering), because this lets us pre-filter the shadow map (Gaussian blur, mipmapping, bilinear/trilinear sampling, etc., all of which prevent aliasing and biasing problems such as shadow acne). The initial insight for this technique came from computing volumetric shadows (Deep Shadow Maps by Lokovic & Veach).

So, what does it mean when people talk about the shadow test as being a function? Our shadow test is normally a function that returns 1 if a fragment is not in shadow and 0 if a fragment is in shadow. This is a Heaviside step function defined as

    V(d_r) = 1 if d_r ≤ d_o, 0 otherwise

VSMs approximate this function with a probability instead of a hard step:

    V(d_r) ≈ P(d_o ≥ d_r)

where d_o becomes a random variable representing the occluder depth distribution. Instead of each texel in a shadow map representing a single depth value, it represents a distribution of depth values. This is powerful because most shadow bias/acne problems come from the quantization of the shadow map:


In traditional shadow mapping, the red lines show the depth sample stored at each texel. The teal object spans multiple depths at each texel because it’s curved. When the camera renders the pixels depicted by the arrows, we get self-shadowing because of this quantization.

VSM Deets: Sprinkle That Math Magic

So, instead of storing a single depth value, we store a distribution of depth values at each texel. P(d_o ≥ d_r) is the probability that the occluder depths in the distribution lie at or beyond our current fragment depth, i.e. that the fragment is lit. So, how do we store this distribution? We store the first two moments, which let us reconstruct what we need of the distribution:

    M1 = E(x) = Σ p(x) · x
    M2 = E(x²) = Σ p(x) · x²

Here x is the depth at our current texel, p(x) is our filter weight, and E(x) is the expected value of the distribution in this neighborhood (which is the result of averaging/filtering the shadow map texels).

Bringing back that undergrad Probability, we can compute the mean and the variance from the moments:

    μ = E(x) = M1
    σ² = E(x²) − E(x)² = M2 − M1²

Using the one-sided Chebyshev inequality, we can compute an upper bound p_max on P(d_o ≥ d_r):

    P(d_o ≥ d_r) ≤ p_max(d_r) = σ² / (σ² + (d_r − μ)²),  for d_r > μ
Fortunately, this upper bound is a good enough approximation for planar receivers. For a detailed explanation and assumptions, you can check out the VSM paper:
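As a concrete sketch of that shadow test (the post itself has no code, so the names here are mine, not from any engine), the bound computed from the two stored moments looks like this in C++:

```cpp
#include <algorithm>

// One-sided Chebyshev bound used as the VSM shadow test.
// m1 = E[x] and m2 = E[x^2] are the filtered moments sampled from the
// shadow map; fragmentDepth is the receiver depth in the same linear space.
float ChebyshevUpperBound(float m1, float m2, float fragmentDepth)
{
    // Fully lit if the fragment is at or in front of the mean occluder depth.
    if (fragmentDepth <= m1)
        return 1.0f;

    // Variance from the raw moments; clamp it so a (numerically) degenerate
    // distribution doesn't blow up the division.
    float variance = std::max(m2 - m1 * m1, 1e-6f);

    float d = fragmentDepth - m1;
    return variance / (variance + d * d); // p_max
}
```

The same few lines translate directly to HLSL/Cg for the actual shader; the clamp on the variance doubles as a minimal bias against acne.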

So to recap, here are the general steps to VSM:

  • Render a shadow map and store z, z*z to a render texture. Use a linear z-depth. Perspective-corrected z (aka z/w that is stored in the depth buffer) is horrible. For floating point textures, you can remap the linear z to [-1,1]. You can enable the usual AA flags on the texture (MSAA, bilinear/trilinear sampling, etc)
  • Optionally blur the shadow map (box or gaussian filter)
  • Generate mipmaps
  • Render the scene as usual. For the shadow test, use Chebyshev’s inequality to compute p_max. p_max is your shadow occlusion factor
  • Attenuate the light contribution by p_max
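The key enabler in the optional blur step is that both moments are linear in the filter weights, so ordinary image filters apply directly to the moment buffer. A minimal CPU-side illustration (the flat row layout and all names here are assumptions for illustration, not engine code):

```cpp
#include <vector>
#include <cstddef>

// A (z, z*z) moment pair, as stored per shadow-map texel.
struct Moments { float m1, m2; };

// 3-tap horizontal box blur over one row of the moment buffer. Blurring
// the moments is valid because E[x] and E[x^2] are linear in the filter --
// this is exactly what makes VSMs pre-filterable, unlike a plain depth map.
std::vector<Moments> BoxBlurRow(const std::vector<Moments>& row)
{
    std::vector<Moments> out(row.size());
    for (std::size_t i = 0; i < row.size(); ++i)
    {
        // Clamp taps at the row edges.
        std::size_t l = (i == 0) ? 0 : i - 1;
        std::size_t r = (i + 1 == row.size()) ? i : i + 1;
        out[i].m1 = (row[l].m1 + row[i].m1 + row[r].m1) / 3.0f;
        out[i].m2 = (row[l].m2 + row[i].m2 + row[r].m2) / 3.0f;
    }
    return out;
}
```

In practice this runs on the GPU as a separable blur pass over the moment render texture; the CPU version just makes the linearity argument concrete.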

VSMs are extremely fast because of this pre-filtering (e.g. blurring, mipmapping, anisotropic filtering), and you get a nice fall-off at the shadow edges. However, one unavoidable problem is that you get light leaking and peter-panning when you have high variance in your depth distribution:


High variance in the depth distribution caused by multiple overlapping occluders

Implementation Fun That Drives You Crazy

While implementing this, I ran into a couple more gotchas. For anyone implementing this in Unity, here are some pitfalls you can avoid:


Careful with DirectX vs OpenGL: DirectX NDC.z goes from 0 to 1, whereas OpenGL NDC.z goes from -1 to 1 (x and y go from -1 to 1 in both). I was effectively halving my depth buffer resolution by remapping depth in DirectX to [0.5, 1]!
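That half-range bug is easy to see in isolation. A minimal remap helper (a sketch; the function name is mine, not from any engine API):

```cpp
// OpenGL clip space maps NDC.z to [-1, 1]; Direct3D maps it to [0, 1]
// (x and y are [-1, 1] in both). If you feed OpenGL-style z into a
// D3D-style target you must remap it, or you only use half the range:
float GLToD3DNdcZ(float glNdcZ)
{
    return glNdcZ * 0.5f + 0.5f; // [-1, 1] -> [0, 1]
}
```

Forgetting this remap (or applying it when the platform already outputs [0, 1]) is what silently halves the effective depth precision described above.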



Much better but you can still see the effects of peter panning & light leakage

The next problems to fix: A) I was using the wrong projection matrix for the DX render path. Unity uses OpenGL projection matrices by default, so you have to call GetGPUProjectionMatrix() to get the right version. That caused a change in handedness, which led to 7 hours of shader debugging to find that Cull Back had turned into Cull Front. B) You shouldn’t front-face cull, or you’ll get a winnowing effect where the shadows shrink.


This is the correct image which shows a bit of light leakage but better contact shadows and not as much peter panning.


I also noticed a difference between computing VSMs on linearized depth vs perspective-corrected z (z/w, which is what’s stored in the depth buffer). Surprisingly, there was still a difference even though I was using 32-bit floating point textures: a little bit less light bleeding.


Using perspective-corrected depth values (z/w) vs using linearized depth (z values mapped linearly from [Near Plane, Far Plane] to [0,1])

Sadly, my foundation in Probability isn’t solid enough to explain this. If you can explain it, please add some insight in the comments. I’d imagine it has something to do with how perspective correction alters the variance of our depth distribution random variable.
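For reference, the two depth parameterizations being compared can be sketched as plain functions. This is standard projection math, not the post’s actual shader code, and the names are mine:

```cpp
// Linearized depth: view-space z mapped linearly from [near, far] to [0, 1].
float LinearDepth(float viewZ, float nearPlane, float farPlane)
{
    return (viewZ - nearPlane) / (farPlane - nearPlane);
}

// Perspective-corrected depth (the z/w a D3D-style depth buffer stores):
// hyperbolic in view-space z, so precision bunches up near the near plane.
float PerspectiveDepth(float viewZ, float nearPlane, float farPlane)
{
    return (farPlane * (viewZ - nearPlane)) / (viewZ * (farPlane - nearPlane));
}
```

The hyperbolic curve compresses most of the scene into a narrow band near 1.0, which plausibly changes the variance of the stored depth distribution even at 32-bit float precision.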

Ultimately, I didn’t settle on this technique, as the light-leaking artifact was too detrimental. The standard light-bleeding reduction technique created some “cartoony” fattened shadows. Instead, I switched over to Exponential Shadow Maps, which are even faster and better:


Stay tuned for a detailed follow-up…

5:42 am

Art+Tech Demo: Virtual Reality Iron Man UI

By | January 27th, 2014 | Categories: Blog | 16 Comments

The Cave: Chillin’ with J.A.R.V.I.S in Virtual Reality

Ever wanted to know what it would be like to control J.A.R.V.I.S. like Iron Man? So did we. So we made a Virtual Reality demo for the Oculus. Try it out and let us know!

We’re really excited about creating cinematic interactive narratives for virtual reality. Over the past year, we have been doing a lot of research and development on our process for creating hyper-realistic 3D scans of people. We ultimately want to get to the point of being able to do a 4D performance capture.

But that’s not just it. Our goal is to make it easy so that everyone can go out and quickly capture 3D content for their own VR experiences, whether it’s downtown Paris or a performance capture of some actors. I think once you’ve experienced the power of hyper-realistic performance capture and VR, it’s hard to go back.

So, here’s our first demo that we created for the rift that includes a UI inspired by Iron Man and a 3D human capture of one of our friends. It’s a proof of concept so it took us ~3 weeks to hack together with previously built assets but we think it still looks pretty badass.

Our Key Concept Tests:

  • 3D Holographic UI in VR
  • Realistic 3D Scans of people
  • How to port our existing art production pipeline

Also, stay tuned, as we’ve been prepping these last weeks to start publicly sharing a lot of the little experiments we’ve been doing internally over the last 4 months!

Let me know what you guys think in the comments or you can hit me up on twitter (@ikrimae or @knlstudio)

Download Details:


The usual Oculus Rift bindings:

‘W,S’ – Move forward/back
‘A,D’ – Move left/right
‘Q,R’ – Turn the camera left/right

Feel free to share and redistribute, just give us credit. We don’t have a license for all of the content here, so we can’t use our normal license to allow people to remix things but hopefully in the future, we’ll release stuff that others can use.
Creative Commons License
The Cave by Kite & Lightning is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.

7:46 pm

Breakthrough: A 3D Head Scan Using One Camera

By | March 13th, 2013 | Categories: Blog | 7 Comments

So over the last couple of months, Cory & I have been working on putting together an augmented reality short, and we’ve been chipping away at all sorts of workflow obstacles. One of the big challenges is figuring out a way to not have to use 45 cameras, because A. it’s freaking expensive & B. it’s really freaking expensive. 45 cameras means 45x lenses, 45x batteries, 45x memory cards, etc. So until we have swimming pools full of gold coins like Scrooge McDuck, we have to rely on some clever workarounds.

8:57 pm

Character Animation with Kinect Motion Capture

By | February 19th, 2013 | Categories: Blog | 2 Comments

Last time, I gave an overview of how we can capture 3D photo-quality people for augmented reality executions. One of the challenges going forward: once we capture them, how do you animate them? Again, our number one constraint is that we are always fighting the time budget. Animating people by hand in a realistic manner is out of the question. So that means we have to quickly turn to Kinect motion capture.

For those that don’t know what motion capture is, it’s where you get your actors to dress up in those silly black suits in a studio and then record their movements in 3D.

Black suits and white balls for traditional motion capture

The little white balls are what the computer uses to record movement in 3D space


Problem solved, right? Well, not exactly. There are a few main problems we’ve found with motion capture: jitter, fidelity, and cost. Most of the data that comes out of motion capture systems tends to have a lot of noise, which shows up as jitter in your animations: