What do you get when you combine two parts velcro, two parts magnet, two parts glass, and fold them into a little bit of cardboard? According to Google, that’s how you get a glimpse of the future… and the view is GREAT!
In June 2014, at the Google I/O event, a team of three developers presented a simple DIY device, which they simply called Cardboard, that could transform your smartphone into a virtual reality headset. Their presentation (shown in the video below) shows them using the cardboard hardware with a software development kit to build an app. The reality is, it takes a lot more than leftover shipping material to make your own Cardboard headset. The hardest part to source is the pair of biconvex lenses you’ll need to reduce eye strain. With the event long over and those lenses in high demand, many versions of Cardboard are now sold as kits, and they save you just enough headache to be worth the extra cost. I got mine from DODOcase. The design is basically the same, with perhaps a little less cardboard (the material) involved.
Google I/O 2014 – Cardboard: VR for Android
So, how did Google take a simple building material and make such an impressive device? I thought you’d never ask… The basic technology at work here is stereoscopic photography, an optical trick that’s been around since before penny arcades. I remember being mesmerized by the picturesque landscapes in my View-Master, which did basically the same thing. Each eye is presented with one of a pair of pictures taken from two viewpoints separated by roughly the same distance as your eyes. When you do that, your brain stitches the images together just as if you were looking at the scene in person and PRESTO! you’re overlooking Niagara Falls from the comfort of your living room. Google then added the accelerometer and gyro in your smartphone so you can look around within these virtual, stereoscopic scenes, making them more immersive. They also use a small magnet whose field the phone’s magnetometer can detect, giving you a way to interact with the virtual space.
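To make the trick concrete, here’s a toy sketch of the similar-triangles geometry that decides how far each eye’s copy of an object gets shifted on the screen. All the numbers are made up for illustration, and a real Cardboard-style renderer also has to correct for lens distortion, which this ignores:

```python
# Sketch of the stereo geometry behind Cardboard-style rendering.
# Numbers are illustrative only; real SDKs handle lens distortion too.

def screen_offset_cm(object_dist_cm, screen_dist_cm, ipd_cm=6.3):
    """Horizontal shift of each eye's image toward the nose.

    The eye, the on-screen image, and the virtual object lie on one
    line, so by similar triangles:
        offset / screen_dist = (ipd / 2) / object_dist
    """
    half_ipd = ipd_cm / 2.0
    return screen_dist_cm * half_ipd / object_dist_cm

# A ball 100 cm away, viewed on a screen ~4 cm from the eyes:
offset = screen_offset_cm(object_dist_cm=100, screen_dist_cm=4)
print(round(offset, 3))  # each image shifts ~0.126 cm toward center
```

Notice that closer objects produce bigger shifts, which is exactly why nearby virtual objects “pop” more than distant scenery.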
The DODOcase Cardboard VR Toolkit Assembled
There are lots of technologies that have been developed in the pursuit of letting people see in three dimensions. The one people my age remember from childhood is the red / blue tinted 3D glasses, which take a single image you can focus on and split the color information between your eyes. A similar technique is used in 3D films, where polarization splits the image instead of color. Active shutter 3D glasses also divide one image between the eyes, rapidly blacking out each eye in turn at the same rate the image on the screen changes. The Cardboard technique is a little older and doesn’t bother trying to overlay one image on another. On the plus side, you can get really good 3D imaging results without much technology to figure out (which means lower cost, etc.). On the negative side, this technique can lead to eye strain, because not every pair of eyes is the same distance apart. To illustrate, let’s say your eyes are 1 cm closer together than the average pair Cardboard was designed for, and that the simulated scene is a ball floating in the air straight ahead of you. For the average viewer, each eye’s image of the ball is drawn just slightly off center toward the nose, so that both eyes converge where the ball would be if it were actually there. To your slightly narrow-set eyes, each image sits closer to dead ahead, so the ball seems farther away, distorting the 3D effect. Don’t misunderstand, eye strain of this sort plagues every 3D viewing technique, but it can be more pronounced when the actual image is this close to your face.
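Here’s a toy model of that mismatch, with made-up numbers and the lens optics ignored entirely: place each eye’s image where the renderer intended for the average viewer, then trace the narrower-set viewer’s sight lines to see where they actually converge.

```python
# Where does a narrower-set pair of eyes perceive an object that was
# rendered for the "average" interpupillary distance (IPD)?
# Toy model: screen 4 cm from the eyes, images at absolute horizontal
# positions, no lens magnification. All numbers are illustrative.

def perceived_dist_cm(object_dist_cm, screen_dist_cm,
                      design_ipd_cm, actual_ipd_cm):
    # Renderer shifts each eye's image toward the nose by this much:
    offset = screen_dist_cm * (design_ipd_cm / 2) / object_dist_cm
    # Left image sits at x = -(design_ipd/2) + offset,
    # while the left eye sits at x = -(actual_ipd/2).
    dx = actual_ipd_cm / 2 - design_ipd_cm / 2 + offset
    if dx <= 0:
        # Sight lines diverge: no fusion point at all, pure eye strain.
        return float("inf")
    # Sight lines cross the midline at depth z = screen_dist * (ipd/2) / dx
    return screen_dist_cm * (actual_ipd_cm / 2) / dx

# An object rendered 20 cm away, seen by eyes 1 cm narrower than designed:
print(round(perceived_dist_cm(20, 4, 6.3, 5.3), 1))  # ~81.5 cm away
```

With matching IPDs the model returns the rendered 20 cm, as it should; with eyes 1 cm narrower, the same images fuse at roughly 81 cm, i.e. the ball looks much farther away, just as described above.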
The Business-End of Cardboard
The head-tracking feature is a pretty exciting one, too. It works on the principle that in any Cartesian coordinate system, movement can be described by three dimensions of translation and three dimensions of rotation. I’m guessing that the apps on the phone don’t account for translation. I came to this conclusion from experience, and also by imagining idiots like me holding the VR headset up to their faces while tripping over furniture. Using the accelerometer to tell the phone which direction is down, and the angular rate from the gyro (that’s how fast it’s turning), the phone can estimate which direction you’re facing. Some apps do this very well, with sophisticated integration and error-correction algorithms, and others don’t do it well at all. One app that comes to mind will track your head while you move, but once you stop moving, the image slowly drifts back to dead center. It’s possible this was done deliberately to keep the viewer’s eyes up front, but I can only guess.
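As a rough sketch of the simplest possible version of this (not how any particular app actually does it; real sensor fusion is far more sophisticated), you can integrate the gyro’s angular rate each tick to track heading, with an optional term that deliberately pulls the view back to center like the drifting app described above. The numbers here are invented:

```python
# Toy dead-reckoning of heading from a gyro: integrate the angular rate
# each tick, with an optional "recenter" term that pulls the view back
# toward zero. A real app would also fuse accelerometer data to correct
# tilt (gravity says nothing about yaw, so yaw drift needs other fixes).

def update_heading(heading_deg, gyro_rate_dps, dt_s, recenter_rate=0.0):
    heading_deg += gyro_rate_dps * dt_s            # integrate angular rate
    heading_deg *= 1.0 - recenter_rate * dt_s      # optional pull to center
    return heading_deg

h = 0.0
for _ in range(100):                   # turn at 90 deg/s for 1 s (10 ms ticks)
    h = update_heading(h, 90.0, 0.01)
print(round(h))                        # ends near 90

for _ in range(500):                   # hold still for 5 s, recentering on
    h = update_heading(h, 0.0, 0.01, recenter_rate=0.5)
print(round(h))                        # has decayed most of the way back to 0
```

The recentering multiplier is exactly the kind of deliberate design choice that would explain the slow drift back to dead center, though as the text says, that’s only a guess.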
Obviously, I think this technology and its implementation are awesome, and I’d like to see more happen with it. In particular, I’d like to see more stereoscopic videos on YouTube, and for NASA to convert its library of stereoscopic images from Mars to work with Cardboard. Perhaps the technology could even be extended to augmented reality.
That was my project day, how was yours?
Did you like It’s Project Day? You can subscribe to email notifications by clicking ‘Follow’ in the side bar on the right, or leave a comment below.