Tags
"second life", hydra, oculus rift, omni, razor, sixense, virtual reality
One of the problems Linden Lab is probably working on as we speak is making the user interface (UI) work for people who are exploring their virtual world, Second Life, with a virtual reality system such as the Oculus Rift.
After all, in Second Life we want to do more than just walk and look around.
The current viewer is built around a 2D interface and it isn’t very practical in VR.
To simply walk around, interact and communicate, you have to use all sorts of buttons, keep windows open, etc.
For this reason the mouselook option in Second Life is pretty useless, even when you’re using voice chat.
Changing this might be quite a challenge, because when you’re inside a game or virtual world with your VR setup, you can’t see your keyboard.
In my view the solution is to take everything ‘inworld’: put everything you need inside the game itself.
Need to type something in chat? Make a hand gesture and a keyboard appears in your view, and with your virtual hands you can type your message on that keyboard.
Need to select something from a menu?
Touch an object and right there, floating in mid-air, a menu appears that you can again click with your hands.
Recently I found a video that uses that very idea, but it also shows a few other things that I am sure the people of Second Life will find very interesting.
Watch it with me and then I’ll continue my story:
The VR setup used here is far from perfect. They use the Sixense STEM System, which is really cool: you attach sensors to your body and they tell the computer where those parts of your body are and how they are moving.
But before we go into that, let’s look at the part I am most interested in:
As you can see, the inworld floating menu idea is used here quite effectively.
Of course this is a simple menu and the hand tracking isn’t very precise, but I bet this could be improved and also work with more complicated menus, such as the SL pie menu that allows us to interact with things.
Many of us already use tiny keyboards that are projected onto our tablet or smartphone screens; I reckon we can get used to a floating one as well.
I don’t think we need a huge one that fills the entire screen (although that should be an option for people with bad eyesight, for instance); it could be a smaller keyboard that perhaps even looks like a real keyboard, or just floating letters, etc.
There are many options, but we’ll no longer need our computer’s RL keyboard.
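To make the floating keyboard idea a little more concrete, here is a minimal sketch of how a viewer might detect a “key press” by checking whether a tracked fingertip pushes slightly past a key’s plane. Everything in it – the key layout, names, sizes and thresholds – is my own illustrative assumption, not anything from the actual SL viewer or any tracker SDK:

```python
# Hypothetical sketch: pressing keys on a floating virtual keyboard by testing
# whether a tracked fingertip has pushed slightly past a key's plane.
# All names, sizes and coordinates here are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Key:
    label: str
    x: float            # centre of the key in world space (metres)
    y: float
    z: float             # depth of the key plane, measured away from the user
    half_width: float = 0.04

def pressed_key(fingertip, keys, press_depth=0.01):
    """Return the label of the key the fingertip is currently pressing, if any."""
    fx, fy, fz = fingertip
    for key in keys:
        within_face = abs(fx - key.x) <= key.half_width and abs(fy - key.y) <= key.half_width
        pushed_in = key.z <= fz <= key.z + press_depth  # finger just past the key plane
        if within_face and pushed_in:
            return key.label
    return None

# A tiny two-key "keyboard" floating at chest height in front of the avatar
keyboard = [Key("A", -0.05, 1.2, 0.5), Key("B", 0.05, 1.2, 0.5)]
print(pressed_key((-0.05, 1.2, 0.505), keyboard))  # -> A
```

A real implementation would obviously need debouncing, visual or haptic feedback and a sensible layout, but the basic test needn’t be much more complicated than this.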
Anyway, let’s get back to the video because it shows a lot more exciting things.
First, the bit I don’t like: the controllers.
What is it with controllers?!
We’ve been using those since the 1970s (yes, I am that old), and just as the VR headset has made a huge comeback, I think it is time for the virtual glove to make one too.
As you can see in this video, he uses two of the five trackers to tell the computer where his hands are. This works very well, but it also means the hands don’t come with many options.
For the feet this is fine; it is very exciting to see him play around with them and even kick a football.
Will we finally be able to really dance in Second Life… and will that be a good thing? 😉
But with your hands, you want to do more than slap, push and make a fist.
Perhaps you want to play a virtual piano, poke someone in the eye, point in a direction, pick your nose or use all ten fingers to type on that virtual keyboard I just talked about.
The STEM system, and the Hydra as well, have lots of buttons on their controllers to give you more options, but in the end that means you’re still just walking around with some updated joysticks.
I don’t want to feel like I’m holding controllers, I want to feel like my hands are in the game, just like my feet and body.
So I hope that someone will start developing a new VR glove, perhaps with a few buttons on top of the glove so you can still have a few action keys there, but one that will allow us to use all our fingers naturally inworld.
Besides, if you need action keys, something inworld could perhaps be designed better.
After all, we don’t need action keys in RL do we?
Nevertheless, I think this video shows yet another huge step towards a whole new way of experiencing virtual reality.
We’re still waiting for someone to combine the Oculus Rift, the Razer Hydra, the Sixense STEM, the Omni AND THOSE VR GLOVES into one complete setup.
And of course we’re still waiting for Linden Lab to finish their Oculus Rift friendly viewer that was supposed to come at the end of the summer…
But try and imagine the amazing potential all this has.
In less than a year SL could have changed beyond all recognition, with our VR experience more real than we can even imagine at this very moment.
Exciting times to live in, twice.
ty for very interesting vid link 🙂
I can’t believe he squashed the butterfly! Really, I’m so impressed w/ this. To be able to walk up to a box in SL, physically pick it up and put it over there is my dream. My husband & I have been talking about VR gloves all day- this is a great idea. It would free up the hands from having to use a joystick, instead allowing for the potential of a virtual keyboard, I think. My 1st SL/Rift experience was amazing; I can’t wait to see how all this plays out! So exciting..
I think we’re in a place right now, tech-wise, where we don’t need ANY of those gadgets. The only time we’d need them is as hand held props for tactile sensation.
All we need is the helmet, with a couple of mics attached, maybe two omnidirectional FLIR CCDs, or one Omni and one focused one, plus a third spectrum.
and moving oculus to wireless IMO is pretty much mandatory, eventually. so there’s a 4th bandwidth.
but that’s all that’s needed.
well, ok there’s a big “IF” there.
IF a motion-interpreting, context-predictive layer can be implemented in combination with existing surface thought-reading histogram-generation techniques, plus active echo of the room and person doing double duty as a microtwitch reader AND motion detector of other body microtwitches, to give more context to the crude IR spatial data, we won’t need anything else.
no keyboard, no mouse, nothing but the helmet and your mind, and focused will.
what’s happening here is that science is learning to pick apart body language.
we’re learning to interpret stuff which has always been there, which certain people have always been able to pick up on better than others. Mentalists, “mind readers”, con men, and so on.
people have no idea how much they reveal in their body language and microtwitches, lol.
I’ve always been able to read body language. and i’m also able to interpret stuff in-world in SL about people based on their avatars, and how they move, and their conversational text, and profile info.
this is simply a matter of taking that cold reading process a quantum leap further. and it’s already being done.
I suspect that’s why you’re seeing this delay.
that, and perhaps safety issues that need further testing and are being addressed.
I suspect that immersion at this level of detail comes with some problems which arise from the way the human eye works vs the way the graphics rendering and auto-adjustments from the in-world camera work.
suspension of disbelief leading to a rather jarring snap back when you come out of the goggles and back to reality is an obvious problem which I bet makes you blink a lot and have out of focus eye issues for a while.
but I suspect there’s more.
I think they’re going to need to kind of steadycam-float the oculus POV, add loads of motion blur when you pan, and make the interface mimic the human eye working in tandem with the eye muscles and neck, before it’s ready and safe for the public to use on a regular basis.
which is why useful idiot guinea pigs like me are so valuable, lol.
can’t wait to get strapped into this rig when it ships, lol.
=)
Yes, very interesting times now indeed
About sensors: see also Kickstarter project PrioVR
They are also busy with some hand sensors / gloves
What I also like: projects to capture RL via 3D scanning, and import the results via standard CAD formats
It will boost all kinds of creativity, now within reach at affordable consumer prices thanks to high-quality scanning sensors
See Structure Sensor http://www.kickstarter.com/projects/occipital/structure-sensor-capture-the-world-in-3d
and Fuel3D http://www.kickstarter.com/projects/45699157/fuel3d-a-handheld-3d-scanner-for-less-than-1000
There have already been some clues as to what the Lab is doing / thinking, UI-wise.
Simon Linden indicated a while back that work at the Lab has been focused on placing menus floating “above” the Rift user, just outside of the normal field of view, so you “look up” to see them. While chatting with Oz Linden at the start of the week, I asked if this was still the case – he recently had a Rift demo – he confirmed that it was.
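Purely as an illustration of how simple that “look up to reveal the menu” behaviour could be, here’s a minimal sketch based only on head pitch; the threshold, hysteresis and names are my own assumptions, not anything from the Lab’s actual viewer code:

```python
# Hypothetical sketch: show a menu only while the headset is pitched up past a
# threshold, with a little hysteresis so it doesn't flicker at the boundary.
# The threshold values and function name are illustrative assumptions.

import math

LOOK_UP_THRESHOLD = math.radians(25)   # how far back the head must tilt
HYSTERESIS = math.radians(5)           # extra margin once the menu is shown

def menu_visible(head_pitch, currently_visible):
    """head_pitch is the HMD's upward pitch in radians."""
    if currently_visible:
        return head_pitch > LOOK_UP_THRESHOLD - HYSTERESIS
    return head_pitch > LOOK_UP_THRESHOLD

# Example: tilting the head back 30 degrees reveals the menu
print(menu_visible(math.radians(30), currently_visible=False))  # True
```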
Some work has apparently also been done to change the perspective of the menus when seen in this way, but what this is isn’t clear to me; nor is the mechanism for interaction (but I assume it is through some form of conventional device).
It also appears that the Lab may be leaning towards voice being the primary means of outward communication when using the Rift, rather than perhaps focusing on a means to communicate via text. However, I’m basing this purely on a passing comment from a Lab employee.
It currently doesn’t appear as if work with the likes of Leap Motion – which Simon prototyped, code-wise, with SL – is being taken anywhere, so it may be doubtful that any ancillary devices are being looked at to work “alongside” the Rift.
From comments that have been passed, it would seem that the primary focus of the UI work is ensuring that the Lab doesn’t end up with two different viewer code bases – one for non-OR users and one for OR users. They want everything integrated into the one viewer code base. This may limit them as to what can be done.
Thanks for that interesting stuff!
I decided to email LL to get some of that confirmed and put that in my next blog.
Easiest thing is to pop along to the Tuesday Simulator meeting. Simon co-runs it and has had hands-on with the Rift.
See: http://wiki.secondlife.com/wiki/Server/Sim/Scripting_User_Group
Thanks for the tip. I keep hearing about those meetings but always forget about them, miss them, or don’t remember where to find the info.
Bookmarked the link, will try to be there some time.
The Rift is a popular topic at the Tuesday meetings. It sometimes comes up at other meetings as well, but Tuesday, time-wise and interest-wise, is probably your best bet :).
Excellent, I’ll try and make it to a couple of them!