The first step into the Metaverse isn't the hardest. It's the nth step that you do for the nth time.
Response post to: The Forgotten Stage of Human Progress
I'm knee-deep in an XR implementation project. It's moving forward by inches; each step aches with how small it is. If I measured it, it would barely tick one mark on a stick. Like a gardener making one small snip here, one pull of a weed there, I'm not getting an overnight transformation. But still-- in the messy work of IMPLEMENTATION, I'm making a garden that turns heads and makes people think "I want to be there."
Seriously, here is the garden:
Today is one of those days where it feels like we are taking two steps backward with no step forward. When you hear it mentioned quietly, but over and over and over, that one of the biggest implementation problems we have in XR for education is "sound" -- WE ARE NOT KIDDING.
We have more problems with sound than with any other aspect of an experience. It is the TOP problem source.
Virbela had this problem in buckets. My hosts cringed every time I estimated that 20% of incoming users had sound problems. 20%! If YouTube presented users with a 20% failure rate, it would be far, far out of business by now.
I watched this video, dated November 5, 2021, put out by Stanford University touting the first course taught in XR, in which Jeremy Bailenson claims it will be "an incredible journey for about half of this class."
Here is the video promo text:
"263 students, all with their own VR headsets, across 20 weeks and two courses, spent over 200,000 shared minutes together in the Metaverse. They engaged in large group field trips, small group discussions, performed live music and skits, and worked both alone and together to build their own virtual worlds."
First: posed shot OR photoshopped image. Notice: no Zoom markings at all. It's not "live"; people are not moving.
For someone like me, with plenty of live event logistics and tech support experience, watching this video confirms what I suspected: the course was riddled with sound problems.
The background music starts at 0:18, so "hearing" the students will be hard.
Watch how much the students were cordoned off into small groups (that's not just a teaching method; that's to put them soundwise AWAY from each other and minimize disruption). Then just listen to what you CAN hear of the sound provided in the video. You will get snippets, and what you hear will be blurbs of users acting awkwardly and users waiting around on another user.
The "you made it" comment is somewhat telling. It is HARD to get users into XR. Admittedly, it might easier if you are at Stanford and everyone has an Oculus Quest 2 (Meta Quest). (smirk)
Privilege much?
At 1:14 there is a LOT of talking over one another, and by 1:18 the video has been sped up, as if to overwhelm you with ADDING models or processing to VR on the ENGAGE platform.
I'm not trying to douse the flames of innovation here. But implementation, as the Atlantic article points out, is a much messier, day-by-day process than the glitz and glamour of a moment.
The video shows THIS as what appears to be a class highlight moment.
The sound is a man saying "Nice work, everyone!"
Just let that sink in while looking at that image.
2021. Stanford University. That is one of our very best learning institutions, folks.
Ironically, all of the avatars with awkward arms ARE the users actually using headsets. That one avatar in the middle in the gray shirt with his hands at his sides? He is the one user in 2D, not in a headset.
Snicker now, because he is the only one looking normal in this bunch.
Implementation is HARD!