So I’ve pretty much completed the v1.0 re-creation of my virtual office, in VR. It started as an experiment in high-fidelity haptics, aka RealHaptics®. At this point, that purpose has utterly failed. However, as with most failures, many lessons were learned. Read on and learn, young Padawan.
At first, my solution felt elegant: if you want an environment with haptics, simply build a virtual world that is a precise replica of the surrounding physical world. That way, when you reach out and touch something in the virtual, you really touch it in the physical. No gloves needed. Sounds simple. Just take total-precision measurements of the physical environment, model it in VR, and go. Now, in actual implementation…
[image: the physical office]
Step 1: Measure the physical environment precisely.
[image: RealHaptics modeling in progress]
Step 2: Precisely align the virtual assets in space with their real-world counterparts.
[image: the finished virtual office]
Step 3: Enter the virtual world and try your best to touch things with your real hands. Take copious notes. Tweak and test again. Repeat until perfect.
Several challenges surfaced right away: namely, calibration, transformation, and registration.

Challenge 1: Calibration

First, calibration: the default position of the camera is where the interactive experience is assumed to “start”. This means that when the application is launched, the physical HMD needs to be in precisely the position, height, and orientation that the app expects it to be. Any deviation causes a corresponding translation error in the world render. What’s a translation error? It’s when a physical surface is an inch to the left and two inches lower than you perceive it to be virtually. Many stubbed thumbs result.
There is a simple fix to this, in fact: Tell the app / environment not where you expect the HMD to be at start, but rather where you know the tracking camera to be. Since the tracking camera is fixed, and the HMD is naturally mobile, the tracking camera has a much more reliable and consistent position from day to day (and moment to moment).
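Here is a minimal sketch of that fix (Python/NumPy, purely illustrative): anchor the render origin to the measured pose of the fixed tracking camera, so the HMD’s pose at launch stops mattering. The names and numbers (`CAMERA_POSE_WORLD`, `hmd_pose_in_camera`) are my own placeholders, not any SDK’s API.

```python
import numpy as np

def pose_matrix(position, yaw_deg=0.0):
    """Build a 4x4 rigid transform from a position (meters) and a yaw angle (degrees)."""
    yaw = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[ np.cos(yaw), 0.0, np.sin(yaw)],
                 [ 0.0,         1.0, 0.0        ],
                 [-np.sin(yaw), 0.0, np.cos(yaw)]]
    T[:3, 3] = position
    return T

# Measured once, by hand: where the fixed tracking camera sits in the room.
# These numbers are placeholders, not my real measurements.
CAMERA_POSE_WORLD = pose_matrix(position=[0.0, 1.2, -0.8], yaw_deg=180.0)

def hmd_pose_world(hmd_pose_in_camera):
    """The tracker reports the HMD relative to the camera; composing with the
    camera's fixed room pose gives a world pose that is the same no matter
    where the HMD happened to be when the app launched."""
    return CAMERA_POSE_WORLD @ hmd_pose_in_camera
```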

Challenge 2: Transformations

The second problem is a little harder, yet certainly tunable. Even though I measured my environment carefully and modeled it to those dimensions, there is clearly a scaling issue: extending my hands to the corners of the [physical] desk, they land a good two inches wider than the corners of the precisely rendered virtual desk. On top of that comes translation error. I thought one-inch precision would be decent… the trouble is, these errors are additive. An inch here, an inch there, and by the time you’re six feet left of the camera, you’ve got six inches of error. Six inches of vertical error on the visual placement of a stair step is potentially deadly. In fact, I can say without a doubt that it’s easier to navigate my virtual/physical environment with my eyes closed than with a close-but-not-exact visual simulation of it.
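One way to tune this out (a sketch, not my actual pipeline): measure a handful of reference points on the physical furniture, compare them against the same points in the model, and solve for a best-fit corrective scale and offset. The coordinates below are made up for illustration.

```python
import numpy as np

def fit_scale_and_offset(virtual_pts, physical_pts):
    """Least-squares uniform scale + translation mapping virtual points onto
    their measured physical counterparts (rotation assumed already correct).
    Returns (scale, offset) such that physical ≈ scale * virtual + offset."""
    v = np.asarray(virtual_pts, dtype=float)
    p = np.asarray(physical_pts, dtype=float)
    v_mean, p_mean = v.mean(axis=0), p.mean(axis=0)
    vc, pc = v - v_mean, p - p_mean
    scale = (vc * pc).sum() / (vc * vc).sum()
    offset = p_mean - scale * v_mean
    return scale, offset

# Example: three desk corners measured with a tape (meters) vs. the modeled ones.
virtual  = [[0.00, 0.73, 0.00], [1.52, 0.73, 0.00], [1.52, 0.73, 0.76]]
physical = [[0.01, 0.74, 0.00], [1.57, 0.74, 0.01], [1.58, 0.74, 0.78]]
scale, offset = fit_scale_and_offset(virtual, physical)
print(f"corrective scale {scale:.3f}, offset {offset}")
```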

Challenge 3: Registration / Homography

The final problem is tracking skippage. When I exit the tracking volume, the inertial sensors in the HMD take over, and massive skippage occurs. In other words, when the VR visuals say I am 3 feet left of the desk, I am actually 6 feet left and 4 feet forward of it.
What all this adds up to is: think about the last time you were descending your home’s staircase in the middle of the night, in the dark. Remember thinking you were on the ground floor, and placing your foot confidently forward, only to find air, and an 8″ drop? That feeling in your stomach, and the twitch in your ankle as you brace for impact? That’s pretty much how it feels in my VR office.
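For the record, the hand-off I would like to see between camera tracking and IMU tracking looks roughly like this minimal sketch, which assumes the tracker hands us an optical position whenever it has one and IMU acceleration every frame; the blend and damping constants are placeholders.

```python
import numpy as np

def update_position(prev_pos, prev_vel, accel_world, dt, optical_pos=None, blend=0.98):
    """One tracking step, sketching the hand-off between optical and IMU tracking.

    Inside the tracking volume: trust the optical fix and pull the dead-reckoned
    estimate toward it (a simple complementary blend).
    Outside the volume: integrate the IMU acceleration alone, accepting that
    drift (the "skippage") grows quadratically with time."""
    vel = prev_vel + accel_world * dt      # integrate acceleration
    pos = prev_pos + vel * dt              # integrate velocity

    if optical_pos is not None:
        pos = blend * np.asarray(optical_pos) + (1.0 - blend) * pos
        vel = vel * 0.5                    # damp velocity when the camera re-acquires
    return pos, vel
```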

RealHaptics: Phase I Conclusions

Conclusion 1:

A lot more work will need to be done to match physical to virtual.
The Devil is in the Details.

Conclusion 2:

Palmer may be right.
The Oculus just might be “a seated experience”

Conjecture A: Best Methods to Navigate / Ambulate in VR

Optimal navigation methods merit serious exploration. The two methods I’ve seen explored most are a) gamepad / joystick and b) nodal teleportation. At present, I am a big fan of nodal. Having tried the gamepad extensively (one stick moves you laterally; the second controls rotation/pivot), I find it flies directly counter to the feeling of presence. It puts you in “video game mode”… you start paying more attention to the controller than to the environment… and that sucks.
Nodal navigation, on the other hand, is something few of us have encountered outside of VR. Or rather, we have, just not recently. Remember Myst? Or Adventure? This method references fixed waypoints in the 3D environment. Stare at any given node for a second or more, and you are sped to that new location nearly instantaneously. My experiments have shown that an intuitive-seeming acceleration / deceleration arc is quite disorienting; so instead, we perform an “instant acceleration” to a high human velocity, say 3 meters per second, and an equally “instant deceleration” on landing, safely at the new viewing node.
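In rough Python, the nodal scheme boils down to a dwell timer plus a constant-velocity hop with no ease-in/out; the gaze ray cast and the scene graph are engine-specific and omitted here, and `NodalNavigator` is just an illustrative name.

```python
import numpy as np

DWELL_SECONDS = 1.0   # stare this long to commit to a node
TRAVEL_SPEED  = 3.0   # m/s: full speed immediately, full stop on arrival

class NodalNavigator:
    def __init__(self, nodes):
        self.nodes = [np.asarray(n, dtype=float) for n in nodes]
        self.dwell = 0.0
        self.target = None

    def update(self, position, gazed_node_index, dt):
        """Advance one frame. gazed_node_index is whichever waypoint the gaze
        ray currently hits (None if none)."""
        # Accumulate dwell time while staring at a node; reset when gaze wanders.
        if self.target is None and gazed_node_index is not None:
            self.dwell += dt
            if self.dwell >= DWELL_SECONDS:
                self.target = self.nodes[gazed_node_index]
        else:
            self.dwell = 0.0

        # Constant-velocity hop: no acceleration / deceleration arc.
        if self.target is not None:
            to_target = self.target - position
            dist = float(np.linalg.norm(to_target))
            step = TRAVEL_SPEED * dt
            if dist <= step:
                position, self.target = self.target, None
            else:
                position = position + to_target / dist * step
        return position
```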

Conjecture B: Flying is Funnest

Flying is the optimal form of navigation.
It is dreamlike, feels safe, and has none of the balance issues of standing / walking with mis-registration. For that matter, done correctly (with force fields / repellers and such), there isn’t even any collision detection to worry about. The caveat here is that flying is generally done, or generally imagined, as a belly-down, back-arched activity. Yet most Players are in swivel chairs. So how do we reconcile the flying act with a seated body position?
Flying wheelchairs anyone?
Let the games begin.

further topics / random thoughts on these matters:

1. INCREASED IN-WORLD DURATIONS

My comfortable in-world time has expanded from 1 minute to 2 minutes to, now, 5 minutes. How do I know this? Because my screen saver comes on, and whereas I used to be relieved, now I am left WANTING more. Total time logged inside the Rift might be a really good measure of certification levels. I will continue to extend my screen-saver “dive timer” as my comfort level builds.
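If you want to log this yourself, something as dumb as the sketch below does the job; `DiveTimer` and the log format are made up, not part of any SDK.

```python
import time

class DiveTimer:
    """Log how long each VR "dive" lasts, to track comfort across sessions."""
    def __init__(self, logfile="dive_log.txt"):
        self.logfile = logfile
        self.start = None

    def hmd_on(self):
        self.start = time.time()

    def hmd_off(self):
        if self.start is None:
            return
        minutes = (time.time() - self.start) / 60.0
        with open(self.logfile, "a") as f:
            f.write(f"{time.ctime(self.start)}: {minutes:.1f} min in-world\n")
        self.start = None
```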

2. next steps of development / R+D

a) properly tracked mobility == a larger camera tracking volume.
This is of paramount importance: slipping outside of the tracking volume absolutely sucks.
Possibly investigate a smooth hand-off from optical camera tracking to in-HMD inertial / gyro / accelerometer (IMU) tracking (a first sketch of that hand-off is under Challenge 3 above).
b) the VR world should be a physically responsive environment
  • First, I’m already over the fact that I can’t see my body (even though my daughter claims we’re vampires because we can’t see ourselves in the mirror… does that mean we’re ghosts because our hands pass through objects?)
  • This is the core challenge of AGENCY.
  • Getting hands into the picture is a first step; cameras such as Leap and Kinect seem to be the best methods.
  • Hand controllers (tool holders à la Sixense STEM and Sony Move) might be the more pragmatic approach. They are inherently more accurate, and theoretically can deliver lower latency…
  • …though these gripped solutions must approximate limb articulation and finger position based on context and inverse kinematics.

b.i) when I swivel in the chair, I want to see the chair move.
This is the first real disconnect.

c) if there are objects on the desk, I want to be able to play with them:
not only to touch them, but to pick them up and manipulate them
  • lift and examine the coffee cup
  • remove the pen from the cup

Those two tasks are kind of hard.
…but then there are tasks that could happen fast with a little bit of clever coding:

  • move the physical / virtual mouse around on the desk
  • type on the virtual/physical keyboard…
  • …and see the text output on the virtual screen (sketched below) <— COOL!!!!
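That last trick really is mostly bookkeeping, something like the sketch below, where `VirtualScreen` and the key-event hookup are stand-ins for whatever engine and input library is actually in use.

```python
class VirtualScreen:
    """Placeholder for a quad in the VR scene with a writable texture."""
    def draw_text(self, text: str) -> None:
        # A real version would rasterize the text into the screen's texture.
        print(f"[virtual screen] {text}")

class KeyboardMirror:
    """Feed physical keystrokes into a text buffer and repaint the virtual
    monitor each frame, so typing on the real keyboard shows up in-world."""
    def __init__(self, screen: VirtualScreen):
        self.screen = screen
        self.buffer = ""

    def on_key(self, key: str) -> None:
        # Hook this to the OS / engine keyboard event, one character per call.
        if key == "\b":                      # backspace
            self.buffer = self.buffer[:-1]
        else:
            self.buffer += key

    def update(self) -> None:
        # Call once per rendered frame.
        self.screen.draw_text(self.buffer)
```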

What do you think?
Is this a valid approach to haptics? Or too limiting?
What would you like me to build next?

Comment below: 
 
