Virtual Reality: Defining and Applying

There is quite a lot out there at the moment about virtual reality.  News just today in the NY Times assesses the current position of this technology in Gartner’s hype cycle – apparently we are now in the “trough of disillusionment.”  Indeed, some have claimed that this new tech may even be the “ultimate empathy machine.”  Okay.  As I said back in 2007, let’s get real about the virtual.

Defining it:  Virtual reality, to my mind, falls into three different categories.

  1. Augmented – Digital content applied as a visible overlay onto one’s current physical environment.
  2. Immersive Video – 360 video as experienced through a head-mounted viewer.
  3. Synthetic – Completely computer-generated environments to be experienced on a flat screen or through a head-mounted viewer.

I know that there is a growing number of haptic systems that allow for additional input / feedback systems (Oculus and HTC Vive, for example), but I’m only talking at the moment about broad categories of virtual reality.  To that end, I’d like to share a few examples and offer some possible applications of these virtual reality technologies to learning.

Following is a list illustrating three different types of VR, example tech, and some applications for learning:

Augmented Reality:  Microsoft HoloLens, Apple AR Smart Glasses, Sony SmartEyeglass, Google Glass, and a host of others.  Think of the Iron Man suit, and the heads-up displays you see Tony Stark working with.  At present, this VR is accessed through apps on tablets and smartphones, with or without glasses that you wear.

Applications for learning:  An immediate answer here is any kind of physical process training, where the learner needs to operate within a physical space, with physical objects.  Virtual orientations to facilities are certainly an opportunity, as is remote collaboration (live coaching / collaboration).  FUTURE:  Personal efficacy.  I really think we’ll see some convergence of wearables and personal digital assistants to help people through their days.

Immersive 360 Video:  Lots of production companies are out there now (notably Within, featuring work by Chris Milk), and they pair the video with an app and a head-mounted display (e.g. Oculus, Google Cardboard, Samsung Gear VR, HTC Vive, etc.).  It all starts with a specialized 360 camera rig to capture everything, and then there is a lot of post-production to add additional audio and visuals and to do the editing.
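Under the hood, most 360 players work the same general way: each frame is stored as a flat equirectangular image, and the viewer maps whatever direction your head is pointing onto a pixel in that frame. A minimal sketch of that mapping in Python (my own illustration of the general technique, not any particular player’s code):

```python
import math

def equirect_uv(x, y, z):
    """Map a unit view direction (x, y, z) to (u, v) texture
    coordinates in an equirectangular 360 frame, each in [0, 1)."""
    # Longitude: angle around the vertical axis; -z is "straight ahead".
    lon = math.atan2(x, -z)                   # range -pi..pi
    # Latitude: elevation above or below the horizon.
    lat = math.asin(max(-1.0, min(1.0, y)))   # range -pi/2..pi/2
    u = ((lon / (2 * math.pi)) + 0.5) % 1.0   # wrap around the seam
    v = 0.5 - (lat / math.pi)                 # top of frame = looking up
    return u, v

# Looking straight ahead lands at the center of the frame.
print(equirect_uv(0.0, 0.0, -1.0))  # → (0.5, 0.5)
```

Everything else in a 360 player is essentially this lookup run per pixel on the GPU, with the head-mounted display supplying the view direction.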

Applications for learning:  Duke CE is currently using this kind of VR to drop participants into unfamiliar contexts where observation must be practiced.  This experience is paired with a debrief with experts who help managers and leaders sharpen their awareness of their own biases while also honing cross-cultural competence.  A perhaps more obvious application is to provide virtual orientations of remote facilities, and to help bring deeper understanding of customer experiences.  Less obvious might be the application of this technology to helping people experience bias.  FUTURE:  I believe that we will see a live component with this technology:  think Periscope meets 360 meets planetarium.

Synthetic Worlds:  Use a regular computer screen (or, optionally, a head-mounted display) and an avatar to visit 360, 3D computer-generated environments.  Examples include InWorldz, High Fidelity, Minecraft, Lego Fusion, Second Life, and others.  This type of VR requires some online system / environment that you access.  Typically, these are live spaces that feature a great deal of user-generated (virtual) content, but this is also the kind of VR implemented in gaming environments.

Applications for learning:  Duke CE used these environments to prototype virtual experiential activities (e.g. 3D team building in Second Life) back in 2009.  There is still an opportunity here for scaled collaboration on ‘physical’ prototypes and spatial innovation efforts.  Less obvious, perhaps, is recognizing the Proteus effect and organizing avatar-based experiences designed to effect improved real-world behaviors.

For additional reading, check out Jeremy Bailenson’s work at Stanford’s Virtual Human Interaction Lab.

 

This entry was posted in Education, Technology.
