
VR Experience Title Open

March 18th, 2014 | Blog

We just posted our new VR experience, K&L Station, which leads you into our experience, the CAVE. (The link is in the previous blog post.)

[Image: studio logos]

We wanted to create a branded opening experience that leads you into each of our VR experiences, similar to a film studio title in front of a movie, like the FOX or Warner Bros. logo. We love how those short logo animations and their music get you excited that you're about to watch a big movie, and how studios tailor the logo to the film (the WB logo for The Matrix, etc.). In VR space, though, this idea can be interpreted in many different ways. And in VR it can't just be a logo… right? It has to be a full-blown killer experience!

Our inspiration somehow landed at Hugo meets The Fifth Element! For those of you who haven't seen the experience: you start in a busy retro-futuristic Parisian train station, complete with flying cars. The K&L Express arrives to take you on your journey as you're faced with some of the funny characters who hang out at the station! And there's a transwarping portal at the end!

The thought was that a good VR opening experience could serve several purposes. First, it gives first-time VR experiencers an opportunity to acclimate to VR space. This seemed important considering one of the new experiences we're creating is very surreal, not exactly grounded in reality, so letting people start the journey in a more reality-based setting seemed like it would ease the transition into the surreal. Second, we loved the idea that a setting giving you […]


New Oculus Experience: The K&L Station

March 14th, 2014 | Blog

A cinematic Hugo meets The Fifth Element

Quick blog post today. First off, thanks to everyone who helped beta test; the feedback really helped us turn the graphics up to 11. So without further delay, here's our next Oculus VR release:

[Updated Mirror Link] Windows x86: http://bit.ly/1fA4KAK
Mac: http://bit.ly/1pFmTmt

A couple of notes: this demo requires QuickTime to be installed. Linux users, we haven't forsaken you; I just didn't realize this would be a problem until this morning, so you'll have to wait a little bit while we figure out an alternative.

As always, let me know what you think!

Download Details:

Windows: http://bit.ly/1fA4KAK
Mac & Linux coming soon.

The usual Oculus Rift bindings:

‘W,S’ – Move forward/back
‘A,D’ – Move left/right
‘Q,R’ – Turn the camera left/right



Feel free to share and redistribute; just give us credit. We don't have a license for all of the content here, so we can't use our normal license to let people remix things, but hopefully in the future we'll release assets that others can reuse.
Creative Commons License
The Cave by Kite […]


Variance Shadow Maps

February 24th, 2014 | Blog

Real-time shadows are still an annoyance in graphics. Surprisingly, even the latest next-gen games use a multitude of shadowing techniques to compensate for each one's shortcomings (see Crytek's overview of real-time shadow techniques: http://www.crytek.com/download/Playing%20with%20Real-Time%20Shadows.pdf).

The solution? Visual scoping: be cognizant of the techniques available and craft the art direction within those constraints. Our art direction is heavy cinematic lighting, which lets us make a few assumptions:

  • A small number of lights that don't move (lights don't move around in real life)
  • A physically plausible pipeline (area lights, not point lights; we still have a ways to go here)
  • Aliasing is the worst offender: give up hard shadows in favor of soft shadows if sampling is involved
  • A hard minimum of 60 fps at 1080p

So we have a lot of wiggle room within this art direction but we still need some sort of shadowing mechanism for dynamic characters.

Variance Shadow Maps Overview

My first inclination was to implement Variance Shadow Maps (VSMs) because they are very fast. VSMs use a probability distribution to compute shadow visibility. The key idea is to separate the shadowing function into occluder terms (the things that go into the shadow map) and receiver terms (the scene you're rendering), because that separation lets us prefilter the shadow map: Gaussian blurring, mipmapping, and bilinear/trilinear sampling all become valid, and all of them fight aliasing and biasing problems such as shadow acne. The initial insight for the technique came from computing volumetric shadows (Deep Shadow Maps by Lokovic & Veach).
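To make the prefiltering idea concrete, here's a minimal CPU-side sketch in Python (NumPy/SciPy), not our shader code; the function name and blur width are illustrative. The point is that a VSM stores the first two moments of depth, and moments can be filtered where raw depths can't:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_vsm(light_depth, blur_sigma=1.5):
    """Turn a light-space depth buffer into a prefilterable variance shadow map.

    A VSM stores the first two moments of depth, E[x] and E[x^2].
    Because moments are linear, filtering them (blurring, mipmapping,
    bilinear sampling) stays mathematically valid -- unlike filtering a
    regular depth map, where an averaged depth is meaningless.
    """
    m1 = light_depth.astype(np.float64)   # first moment: depth
    m2 = m1 * m1                          # second moment: depth squared
    return gaussian_filter(m1, blur_sigma), gaussian_filter(m2, blur_sigma)
```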

So, what does it mean when people talk about the shadow test as being a function? Our shadow test is normally a function that returns 1 if a fragment is not in shadow and 0 if a fragment is in […]
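The excerpt cuts off there, but the standard VSM evaluation it's heading toward is short. A hedged Python sketch continuing the one above (the names and the variance floor are mine; the bound itself is the usual one-sided Chebyshev inequality from Donnelly & Lauritzen's VSM paper):

```python
def hard_shadow_test(occluder_depth, receiver_depth):
    """The classic binary shadow test: 1 = lit, 0 = in shadow."""
    return 1.0 if receiver_depth <= occluder_depth else 0.0

def vsm_visibility(m1, m2, receiver_depth, min_variance=1e-4):
    """Estimate visibility from the two moments (m1, m2), sampled with
    filtering at the receiver's shadow-map location.

    Chebyshev's one-sided inequality bounds the probability that a
    surface at receiver_depth is lit, given the mean and variance of
    the occluder depths in the filtered texel.
    """
    mean = m1
    variance = max(m2 - m1 * m1, min_variance)  # floor fights shadow acne
    if receiver_depth <= mean:
        return 1.0                              # in front of the occluders: lit
    d = receiver_depth - mean
    return variance / (variance + d * d)        # p_max, the Chebyshev bound
```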


Art+Tech Demo: Virtual Reality Iron Man UI

January 27th, 2014 | Blog

The Cave: Chillin' with J.A.R.V.I.S. in Virtual Reality

Ever wanted to know what it would be like to control J.A.R.V.I.S. like Iron Man? So did we. So we made a Virtual Reality demo for the Oculus. Try it out and let us know!

We’re really excited about creating cinematic interactive narratives for virtual reality. Over the past year, we have been doing a lot of research and development on our process for creating hyper-realistic 3D scans of people. We ultimately want to get to the point of being able to do a 4D performance capture.

But that's not all. Our goal is to make it easy for everyone to go out and quickly capture 3D content for their own VR experiences, whether it's downtown Paris or a performance capture of some actors. I think once you've experienced the power of hyper-realistic performance capture and VR, it's hard to go back.

So, here's our first demo for the Rift: it includes a UI inspired by Iron Man and a 3D human capture of one of our friends. It's a proof of concept that took us ~3 weeks to hack together with previously built assets, but we think it still looks pretty badass.

Our Key Concept Tests:

  • 3D Holographic UI in VR
  • Realistic 3D Scans of people
  • How to port our existing art production pipeline

Also, stay tuned: we've been prepping these last weeks to start publicly sharing a lot of the little experiments we've been doing internally over the last 4 months!

Let me know what you guys think in the comments, or hit me up on Twitter (@ikrimae or @knlstudio).

Download Details:

Windows: https://www.dropbox.com/s/b0b66errssl3tcg/TheCave1.0.zip
Linux: https://www.dropbox.com/s/v5xvpqlknncztig/BatCave-1.0-Linux-x86_64.zip

The usual Oculus Rift bindings:

‘W,S’ – Move forward/back
‘A,D’ – Move left/right
‘Q,R’ – Turn […]


Breakthrough: A 3D Head Scan Using One Camera

March 13th, 2013 | Blog

So over the last couple of months, Cory and I have been working on putting together an augmented reality short, and we've been chipping away at all sorts of workflow obstacles. One of the big challenges is figuring out a way to not have to use 45 cameras, because A. it's freaking expensive and B. it's really freaking expensive. 45 cameras means 45x lenses, 45x batteries, 45x memory cards, etc. So until we have swimming pools full of gold coins like Scrooge McDuck, we have to rely on some clever workarounds.


Character Animation with Kinect Motion Capture

February 19th, 2013 | Blog

Last time, I gave an overview of how we capture photo-quality 3D people for augmented reality executions. One of the challenges going forward: once we've captured them, how do we animate them? Again, our number one constraint is the time budget, and animating people by hand in a realistic manner is out of the question. So we quickly turned to Kinect motion capture. For those who don't know, motion capture is where your actors dress up in those silly black suits in a studio while you record their movements in 3D. Problem solved, right? Well, not exactly. There are a few main problems we've found with motion capture: jitter, fidelity, and cost. The data that comes out of motion capture systems tends to have a lot of noise, which shows up as jitter in your animations.
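The excerpt stops before the fix, but to give a feel for the problem: jitter is high-frequency noise on the joint positions, and the simplest (if slightly laggy) remedy is a low-pass filter. A minimal Python sketch of that idea, not the authors' actual cleanup pipeline; the class name and alpha value are made up:

```python
import numpy as np

class JointSmoother:
    """Exponential smoothing for one tracked joint.

    Blends each new sample with the running estimate, trading a bit of
    lag for much less frame-to-frame jitter.
    """
    def __init__(self, alpha=0.3):
        self.alpha = alpha   # lower alpha = smoother, but laggier
        self.state = None

    def update(self, position_xyz):
        p = np.asarray(position_xyz, dtype=float)
        self.state = p if self.state is None else (
            self.alpha * p + (1.0 - self.alpha) * self.state)
        return self.state

# Usage: feed each frame's raw Kinect joint position through the filter.
# smoother = JointSmoother(alpha=0.3)
# smoothed = smoother.update(raw_joint_xyz)
```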


Unity: Our Drag-and-Drop Augmented Reality Engine

February 18th, 2013 | Blog

Now that we have a 3D model, we have to figure out a way to make her show up in augmented reality space, which usually means we need a video game engine. Normally these take a lot of time and experience to write (I think it took me about a year to write my first one when I was 16), but now there are a decent number of easy-to-use solutions. We settled on Unity 3D because it's very artist-friendly and has an active community. What this means is that you can get your own video game up and running on your iPad very easily without knowing how to program, and if you know some JavaScript, you can even write scripts for it. So Unity is the platform that lets us interact with 3D objects, but we still need a way to track our coaster and anchor the 3D model to it. Back in 2011, we would have had to roll our own. Writing your own computer vision algorithm (making the computer detect a specific image in a video and then determine its orientation in 3D space) is no small feat. A sketch of what that step involves follows below.
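For a sense of what that detect-and-orient step involves, here's a hedged Python sketch using OpenCV, one of the libraries that now exists for this. It is illustrative only, not the tracking solution from the post: the marker filename, camera intrinsics, and every threshold are assumptions.

```python
import cv2
import numpy as np

# Find a known marker image (e.g. a coaster) in a camera frame and
# recover its 3D pose. All names/parameters here are placeholders.
orb = cv2.ORB_create(nfeatures=1000)
marker = cv2.imread("coaster.png", cv2.IMREAD_GRAYSCALE)
kp_m, des_m = orb.detectAndCompute(marker, None)

def estimate_marker_pose(frame_gray, camera_matrix, dist_coeffs,
                         marker_width=0.1):
    """Return (rvec, tvec) for the marker in the frame, or None."""
    kp_f, des_f = orb.detectAndCompute(frame_gray, None)
    if des_f is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_m, des_f)
    if len(matches) < 10:
        return None  # not enough evidence the marker is in frame
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    # Map the marker's corners into the frame, then solve for 3D pose.
    h, w = marker.shape
    corners_px = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    corners_2d = cv2.perspectiveTransform(corners_px, H)
    sw, sh = marker_width, marker_width * h / w   # physical size in meters
    corners_3d = np.float32([[0, 0, 0], [sw, 0, 0], [sw, sh, 0], [0, sh, 0]])
    ok, rvec, tvec = cv2.solvePnP(corners_3d, corners_2d,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```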


Photographing People in 3D with Photogrammetry

February 13th, 2013 | Blog

When you don't have an army of VFX artists at your disposal, you are constantly fighting one big enemy: time. Surprisingly, the time budget is a bigger enemy than the money budget. At the high-end VFX level, the time it takes to do any work in post grows exponentially. It's similar to the jump from shooting stills to shooting video, where everything becomes a factor of 10 more expensive. Need to edit out a blemish? In the stills world, no problem: at most you have 40 selects from a photo shoot, and one click with the heal brush in Photoshop and you're done! Need to do that on a 3-minute video? That's about 4,300 frames at 24 fps. My rule of thumb is that the jump from photography to film is equivalent to the jump from filming to creating purely 3D realistic content. That's why a movie like Life of Pi cost 100 million dollars even though it was mostly shot in one location on a blue-screen sound stage: it takes a lot of man-hours to make something CG. But don't despair. That's where our secret weapon #1 comes into play: photogrammetry!


An Augmented Reality Breakdown with Bud Light

February 9th, 2013 | Blog

Back in November 2011, I saw a pretty cool augmented reality ad app: it gave you X-ray vision into Moosejaw's catalog, showing the models in their underwear. The app got a lot of buzz (it increased catalog sales by 35%!), but what I found most exciting was that it demonstrated mobile hardware was finally fast enough for real-time augmented reality.


How We Increased Search Traffic by 22% in 3 Weeks

April 2nd, 2012 | Blog

[Mythly Studios is an app development company based in Los Angeles. If you'd like help with iOS apps, web development, or analytics, send an email to [email protected]] I recently gave a presentation at an L.A. Hacker News meetup about how I helped a fellow Hacker News-er, Alex Muir, with his site HowACarWorks.com. After 3 weeks, we saw a 22% increase in organic search traffic and 20%-60% decreases in page load times. My presentation was supposed to be 20 minutes, but it ballooned to about an hour due to people's interest in the topic. Needless to say, the audience recommended I blog about the experience. In this post I'll focus specifically on how we measured and improved site speed; in future posts I'll cover some of the on-page SEO techniques we used.