Part 6 Myth: Immersive learning is active learning

 

Photo by Blake Cheek on Unsplash


The next myth is that learning in immersive experiences is active, kinesthetic, or like an internship, which is “the way most people learn best” (D’Agostino, 2022, para. 17).


Active learning was first associated with immersive experiences because learners could observe or engage with simulations, or, more properly described, engage within them (Dede, 2009). The term active simply meant that the learner was present at a simulated place and time; the original use of the phrase active learning with reference to immersive experiences did not imply that a learner could do anything other than observe. The emphasis was much more on the time and space travel afforded by XR.

This claim has been controversial (Khorasani et al., 2023), in part because of the differing degrees of activity that a learner can have, ranging from simply being inside an immersive environment and observing (e.g., historical re-enactment simulations) to taking actions that have non-trivial consequences (e.g., practicing a surgical technique).

Active learning is a phrase on the move


Dede (2009) referred to actional immersion as situations where learner actions have “novel, intriguing consequences” that are “highly motivating and sharply focus attention” (p. 66). The active learning claim then shifted from a focus on the learner’s actions to a focus on the learner’s body ownership illusion. Further, the relationship between user bodies and virtual depictions (avatars) was reformulated and later called implicit learning (Slater, 2017, p. 29).

I want to pause here and really dissect the difference, because in this area there has definitely been vocabulary "drift". A focus on the learner's actions is about what the learner causes to happen. A focus on the learner's body ownership is about the parts of the body that the learner uses to cause those actions.

For example, picture a chemistry lab simulation.

Image: Labster

Focusing on the learner's actions means that we could use a 2D display and a mouse and have the learner click on the pipette, click on a liquid to suck up with the pipette, and then click on a vial into which to dispense the liquid. Those could be right-to-left actions, but the learner is causing the actions to happen on the screen. They are using a mouse and moving their hand generally right to left. No hand needs to be visible to do these actions. Activities could be "ghost-like" in that they could be caused by no visible physical object whatsoever. In reality, the computer mouse is doing most of the physical 'work'.

Focusing on the learner's body ownership, however, would have the learner reaching out to the pipette (they need to be able to reach), grabbing it (they need to be able to firmly grasp), possibly depressing the button on the top to create the needed suction, moving the pipette, seeing the liquid and then the vial, and depressing the button to dispense the liquid. The movements could all be right to left. Key in this visual depiction, however, is A HAND with workable fingers that is somehow connected via the experience to the learner's IRL hand.


In the former example, the learner causes the actions to occur, but we are not focused on the body parts doing the action. In the latter example, we are very interested in the body parts doing work that replicates (in this case) the real-world work of operating a pipette. In the former, we could have confidence that a learner is exposed to the cause and effect of pipette work: it sucks up a reliable amount of liquid and can squirt it back out. In the latter, we could have confidence that a learner is exposed to how pipettes physically work (pressing the button down primes the suction, releasing it draws the liquid up, pressing it down again dispenses it).
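To make the contrast concrete, here is a minimal sketch in TypeScript. It is my own illustration, not Labster's code and not anything from the cited studies; the names (Pipette, onClick, TrackedHand, onHandFrame) are hypothetical. The pipette itself is the same tiny state machine in both versions; the only thing that changes is what the input layer pays attention to: discrete clicks on screen objects, or what the rendered hand is gripping and pressing.

```ts
// A hypothetical sketch (not Labster's implementation) of one pipette, two input focuses.

type PipetteState = "empty" | "primed" | "holdingLiquid";

// The pipette itself: press primes the suction, release draws liquid up,
// press again dispenses it (the cause-and-effect both versions expose).
class Pipette {
  state: PipetteState = "empty";

  pressPlunger(): void {
    if (this.state === "empty") {
      this.state = "primed";
    } else if (this.state === "holdingLiquid") {
      console.log("Liquid dispensed into the vial");
      this.state = "empty";
    }
  }

  releasePlunger(tipInLiquid: boolean): void {
    if (this.state === "primed" && tipInLiquid) {
      this.state = "holdingLiquid";
    }
  }
}

// Focus on the learner's ACTIONS: 2D screen and mouse.
// The learner only triggers transitions; no hand is represented anywhere.
function onClick(target: "pipette" | "liquid" | "vial", pipette: Pipette): void {
  if (target === "liquid") {
    pipette.pressPlunger();       // click = "suck up the liquid"
    pipette.releasePlunger(true);
  } else if (target === "vial") {
    pipette.pressPlunger();       // click = "dispense into the vial"
  }
}

// Focus on the learner's BODY OWNERSHIP: a tracked hand or controller.
interface TrackedHand {
  isGrippingPipette: boolean; // can the learner reach and firmly grasp?
  plungerPressed: boolean;    // is the thumb depressing the button on top?
  tipInLiquid: boolean;       // where has the learner physically moved the tip?
}

// Called every frame with the previous and current hand pose.
// The same transitions now depend on what the visible hand is doing.
function onHandFrame(prev: TrackedHand, now: TrackedHand, pipette: Pipette): void {
  if (!now.isGrippingPipette) return;                // no grasp, no action
  if (!prev.plungerPressed && now.plungerPressed) {
    pipette.pressPlunger();                          // thumb presses down
  }
  if (prev.plungerPressed && !now.plungerPressed) {
    pipette.releasePlunger(now.tipInLiquid);         // thumb releases
  }
}
```

In the first version the only physical object doing any work is the mouse, and no hand is modeled at all. In the second, the same state transitions cannot happen unless a rendered hand reaches, grasps, and presses, which is exactly the shift in focus from the learner's actions to the learner's body ownership.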

See that the focus is different?

My point is that the FOCUS of what was coming to be called active learning with reference to XR was changing already between 2009 and 2017.


Drawing from the educational history of the Montessori method and considering the interfaces available within immersive experiences, implicit bodily learning (from 2017) transformed into embodied learning (by 2018). Indeed, Johnson-Glenberg initially postulated that “doing actual physical gestures in a virtual environment should have positive, and lasting, effects on learning in the real world” (2018, p. 1). Movement became synonymous with active learning. “Active, motor-driven concepts may stimulate distributed semantic networks (meaning), as well as the associated motor cortices which would have been used to learn long ago, in childhood” (Johnson-Glenberg, 2018, p. 3). [Hat tip, by the way, to all research into the mind-body connection within learning. This post throws no shade on that phenomenon.]

From there, some researchers appeared to support the claim that all movement somehow begets learning, not just the specific, meaningful actions a learner is instructed to perform. To be clear about what I mean: inside an XR-for-learning experience, a learner might be instructed to do something. Pick this up, move it there. Because that movement is specific to the learning event, I'm setting it aside; it's not part of this argument. What I am referring to are the learner-instigated but non-instructed movements. Let's say a learner joins XR and wanders to the left for two minutes before a lesson begins. Or let's say, instead of looking to the "front" at the end of the experience, the learner is looking to the "back". These random but learner-instigated actions are...wait for it...somehow the secret sauce of learning in XR. I kid you not. I really try to pin down the meaning from educators who believe this myth, and THIS is what they come up with: because you can move in XR, you are learning (more) in XR.



 

The supporting hypothesis then became that immersive experiences are an inherently active learning method precisely because the learner can move. 

 I'm going to repeat that for emphasis:


The supporting hypothesis then became that immersive experiences are an inherently active learning method precisely because the learner can move. 

The Emperor's New Clothes. Image by Helen Stratton, Public domain, via Wikimedia Commons

Did you catch that? Are you catching on? Aren't the emperor's new clothes splendid?


By incorporating the word “active,” educators are reminded of the belief that active learning is better than passive learning (Slater, 2017). Ooo! Shade thrown there, for sure, because no teacher wants to be accused of being a passive educator.

[BTW, there are reams of garbage research out there for anyone looking for a topic. Go ahead and dig into active versus passive in educational psychology papers. It's almost as big a research garbage dump as XR; teachers radically redefine this topic and appeal to it. My point is that the appeal to "active learning", when coupled with XR, comes with scant evidence of such. To this day, I RARELY see active learning in XR.]


Let's bear down now. To be specific, the claim coupling 'active learning' with 'XR' is not about being fidgety, randomly moving about, or purely reacting as a user would in a game. It is movement, usually performed by the learner via an avatar or minimally via hand controllers, in which the learner is autonomously and purposely manipulating content. This is known as embodiment or embodied learning (Johnson-Glenberg, 2018; Markowitz et al., 2018), although definitions of embodiment vary, including in how much of the learner is embodied. It should also be noted that the term embodiment is often used interchangeably with 'embodied learning', which is a theory that meaningful gestures in and with the environment aid a learner's cognitive processes (that's the no-shade thing I referred to earlier). But even 'embodiment' and 'embodied learning' are slightly different things. Whew! Keeping up?

The Emperor's clothes should be splendid


In 2018, Johnson-Glenberg claimed that presence and embodiment were “profound affordances” of immersive environments and that this embodiment affordance should facilitate learner control, also known as agency (p. 1). One further hat tip to Mina: she did actually use a somewhat scientific body action in her research (I believe it was catching butterflies with a butterfly net), something that biologists WOULD do with their bodies. So it's a real-world action. I point this out because some XR actions are nonsensical. I'm looking at you, people who change vocabulary words to bouncing balls or something.

But they aren't


A follow-up paper by Mina, however, found that while embodiment does have a connection to learning, it does not exclusively cause learning, or, perhaps better said, it doesn't interact with learning. Referring to high- or low-embodied VR and the connection to learning, “platform is not destiny” (Johnson‐Glenberg et al., 2021, p. 20). So in lay terms, that means it had no effect.


A capture that fell flat with the audience: VR had no effect on learning, even when embodied.


 

This confounding (confusing/muddling up/drift of vocabulary) of movement in immersive experiences with active learning forms the myth. Because active learning is considered better than passive learning, claims are made that immersive experiences must cause more learning due to the body-movement connection. The research, however, does not support that claim.

The active learning myth appears in the academic literature more often than evidence to the contrary does. It is true that immersive experiences can allow for more movement-based learning experiences than other forms of media, but it is not established that immersive experiences cause learning simply because they can contain learner movement or agency.



Just because you can move in XR doesn't mean you do learn. Full stop.


Part 7 will be our last myth for this series: Immersive learning causes empathy.

References



D’Agostino, S. (2022, August 3). College in the metaverse is here. Is higher ed ready? Inside Higher Ed. https://www.insidehighered.com/news/2022/08/03/college-metaverse-here-higher-ed-ready

Dede, C. (2009). Immersive interfaces for engagement and learning. Science, 323(5910), 66–69. https://doi.org/10.1126/science.1167311

Johnson-Glenberg, M. C. (2018). Immersive VR and education: Embodied design principles that include gesture and hand controls. Frontiers in Robotics and AI, 5, 81.

Johnson‐Glenberg, M. C., Bartolomea, H., & Kalina, E. (2021). Platform is not destiny: Embodied learning effects comparing 2D desktop to 3D virtual reality STEM experiences. Journal of Computer Assisted Learning, 37(5), 1263-1284.

Khorasani, S., Syiem, B. V., Nawaz, S., Knibbe, J., & Velloso, E. (2023). Hands-on or hands-off: Deciphering the impact of interactivity on embodied learning in VR. Computers & Education: X Reality, 3, 100037.

Markowitz, D. M., Laha, R., Perone, B. P., Pea, R. D., & Bailenson, J. N. (2018). Immersive virtual reality field trips facilitate learning about climate change. Frontiers in Psychology, 9, 2364.

Slater, M. (2017). Implicit learning through embodiment in immersive virtual reality. In Virtual, augmented, and mixed realities in education (pp. 19-33).

The content cannot be used to train or be reviewed by AI. All copyrights retained.

Did you miss the other parts of this series? Here they are!

Part 1: From Myths To Principles: Navigating Instructional Design in Immersive Environments

Part 2: The Immersive Environment Delusion

Part 3: The Case Against Virtual Campuses

Part 4: Myth: Learners Learn Faster

Part 5: Myth: Learners Learn More
