Bruno’s comment about touch-screens got me thinking. While most users still interface with the computer via mouse, keyboard and a one-way display, things are going to change fast in the coming years. The old KVM (Keyboard/Video/Mouse) user interface is being replaced by more powerful and natural tools. The point&click / drag&drop metaphors popularized by Apple’s Macintosh since 1984, after the invention of the ball mouse at Xerox PARC, are due for an update. Ever smaller and more powerful mobile devices, accelerometers, touch screens, 3D displays: how will they interface between the user and the VR panorama?
The solutions I have observed so far simply hard-wire the behavior of these new devices to the mouse. This is no different from my 1998 Wacom tablet (which still works!). Simulating the mouse forces the interaction designer to define the device in relation to mouse behavior and either mimic it or invert it. After half a century it is time to break those limits; to look at the interactions anew and to design device- and context-specific metaphors; to mold the interaction around the human. I see a combination of relevant factors, including the device’s physical characteristics and the context in which it is used. A touch-screen on a desktop requires a different metaphor than one on a smartphone. And what to do when two competing input devices with conflicting metaphors are attached, such as an accelerometer (3D mouse) and a touch-screen?
For the desktop touch-screen and the laptop touch-screen I tend to agree with Bruno that the Google StreetView metaphor is the way to go, at least until the computer can discern whether the index finger is at a nearly perpendicular angle and fully straightened, as in pointing, or at a smaller angle with a slightly more curved posture, as in a natural dragging movement.
Things become more touchy (pun intended) with mobile devices, which typically have both an accelerometer and a touch-screen. Which one should drive the VR panorama interaction, and how? To me the most natural would be to use the accelerometer and point the iPhone in the direction I want to see, but in some situations such explicit movements are embarrassing, inconvenient, inappropriate, or all of the above; then the more discreet finger-dragging on the touch-screen is the right way to go.
Whichever it is, I think a modern panoramic viewer should make provisions to accommodate both behaviors – dragging and pointing. Ideally the system would tell the VR player in what context it is playing, and the VR player would adapt by using the appropriate metaphor. Currently the browser only lets the VR player assume the presence of a pointing device, and the devices all interface by mimicking the mouse. In the current situation making the mouse behavior a parameter, as implemented in the KRpano viewer, is the best thing to do. When a reliable detection mechanism can tell the player what device is attached, the choice may be automated.
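To make the idea concrete, here is a minimal sketch of what such a parameterized viewer could look like. Everything in it is hypothetical – `selectMetaphor` and `applyInput` are illustrative names, not part of any real viewer’s API – and the detection heuristic is just an assumption: prefer pointing when a mobile device reports an accelerometer, otherwise fall back to dragging.

```javascript
// Hypothetical sketch: pick an interaction metaphor from the device context,
// instead of hard-wiring every device to mouse emulation.
// "drag" pulls the scene with the finger (StreetView-style grabbing);
// "point" steers the view in the direction of motion (accelerometer-style).
function selectMetaphor(context) {
  // context: { isMobile, hasAccelerometer, hasTouch } – all booleans
  if (context.isMobile && context.hasAccelerometer) return "point";
  // Desktop/laptop touch-screens and plain mice both get the drag metaphor.
  return "drag";
}

// Convert a raw input delta (pixels or degrees) into a new pan/tilt.
// Dragging moves the view opposite to the input (you grab the scene);
// pointing moves the view with the input (you aim the device).
function applyInput(view, metaphor, dx, dy, sensitivity = 0.1) {
  const sign = metaphor === "drag" ? -1 : 1;
  return {
    pan: view.pan + sign * dx * sensitivity,
    tilt: view.tilt + sign * dy * sensitivity,
  };
}
```

With this shape the metaphor is a single parameter, exactly as in the krpano-style approach above: today a human sets it, and once reliable device detection exists, `selectMetaphor` could be fed automatically from whatever capability flags the browser exposes.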
I look forward to seeing the results of León’s Google Summer of Code project adding QTVR playback and Wiimote interaction capabilities to the VLC media player. In the meantime I got help from the Liquidware guys in my still unsuccessful attempts to make their Antipasto Arduino IDE work on my Ubuntu notebook. The TouchShield Slide touch-screen rocks and I am keen to toy with new interfaces on it.