August 20, 2019

2017 – 2018

“Reality,” noted science fiction writer Philip K. Dick, “is that which, when you stop believing in it, doesn’t go away.” If everything that we know about what we call “reality” comes to us through our senses, how do we understand what happens when we enter virtual spaces, in which our sensory inputs are overtaken by digital interfaces? Furthermore, how do we make virtual reality (VR) spaces — which have been defined thus far primarily by the needs and interests of gamers — accommodating and accessible to a wide range of bodies and backgrounds? And how do we bring critical faculties to bear as we test VR’s possibilities and limitations?


Get artists and computer scientists working together. 


“By encouraging artists, designers, and others in creative fields to work with this technology,” says Justin Berry, critic at the Yale School of Art and a core faculty member at Yale’s Center for Collaborative Arts and Media (CCAM), “we are discovering its possibilities, edges, and limitations, as well as the cultural complications of using it.”


In this year’s iteration of Yale’s Blended Reality Applied Research Project, headed by Justin Berry, the team divided its investigations into three categories, Light, Sound, and Body, each targeting the ways our senses transport us into virtual worlds. One project exploring how our experience of virtual space is tied to the physical body was the Embodied Navigation experiment. Conducted in collaboration with Johannes DeYoung, director of Yale’s Center for Collaborative Arts and Media, and VR pioneer Stephanie Riggs, this project focuses on creating a framework for developing alternative ways to navigate virtual spaces. In addition, students from a variety of disciplines (e.g., drama, sculpture, painting, computing in the arts) developed projects examining the other senses in virtual reality, such as Jack Wesson’s exploration of sight in the Palantir project and Michael Costagliola’s work with Justin Berry producing immersive soundscapes for language learning.


While virtual reality technologies have become more accessible and affordable, navigation in virtual environments has remained limited primarily to handheld input devices similar to game controllers. That model of navigating virtual space, while it has been the industry standard, is also problematic, says Stephanie Riggs, founder of Sunchaser Entertainment and a consultant to the Embodied Navigation project. “One issue,” she says, “is cognitive load – the number of elements used in working memory during a task. In immersive mediums, the process of entering an unfamiliar medium, navigating it through unintuitive mechanisms, and interacting with foreign controllers creates a heavy cognitive load.” Another concern, says Berry, is that “in many ways, the controller, with its ‘point and shoot’ construction, is a gendered form of engagement that may not be welcoming or intuitive to a broad range of bodies, ages, and experiences.”


For those not familiar or comfortable with standard gaming input, VR navigation devices are not intuitively mapped to real-world forms of locomotion and can be frustrating and disorienting. Currently, the industry lacks an intuitive, inexpensive navigation method that can be easily learned and accessed by those who are not oriented toward video games.

The Embodied Navigation project involved building a simple mode of embodied navigation — using the body itself rather than traditional controller mechanisms — for VR in the Unity 3D game engine. The participant can engage in natural motions such as leaning forward and back to propel the body, stopping, and picking up objects. “Our research,” says DeYoung, “is asking whether embodied navigation allows users more intuitive interaction in virtual environments, and thus easier access and greater engagement.”
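The project itself was built in Unity, but the core lean-to-move idea is simple enough to sketch in a few lines. The following Python sketch is a hypothetical illustration rather than the team’s code: the tracked headset position is compared against a calibrated neutral point, a small dead zone absorbs ordinary postural sway, and any lean beyond it becomes horizontal travel in the direction of the lean. The function name and all parameter values here are assumptions chosen for illustration.

```python
import math

# Hypothetical parameters: a neutral "home" point calibrated when the user
# first stands in the play area, a dead zone so that small postural sway is
# ignored, and a gain converting lean distance into travel speed.
DEAD_ZONE_M = 0.05   # metres of lean ignored as noise
SPEED_GAIN = 4.0     # (m/s of travel) per metre of lean
MAX_SPEED = 2.0      # walking-pace cap, in m/s

def lean_velocity(head_xz, neutral_xz):
    """Map the tracked head's horizontal offset from a calibrated neutral
    point to a travel velocity. Leaning forward moves the avatar forward;
    standing upright stops it."""
    dx = head_xz[0] - neutral_xz[0]
    dz = head_xz[1] - neutral_xz[1]
    dist = math.hypot(dx, dz)
    if dist < DEAD_ZONE_M:
        return (0.0, 0.0)  # inside the dead zone: stand still
    speed = min((dist - DEAD_ZONE_M) * SPEED_GAIN, MAX_SPEED)
    return (dx / dist * speed, dz / dist * speed)

# Example: head 20 cm forward of neutral -> glide forward at 0.6 m/s
print(lean_velocity((0.0, 0.20), (0.0, 0.0)))
```

Because stopping simply means returning to an upright stance, the same mapping covers both locomotion and halting, which is part of what makes the scheme learnable without instruction.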


While some research has been done on controllerless navigation through other devices such as boards and chairs, Riggs notes that most investigators tend to instruct users on how to navigate — thus skipping a key factor in effective human-computer interaction. “Learnability, or the ease with which a user learns a system, plays a major role in how user-friendly the system is,” she says. “By prompting the users in advance, researchers miss the opportunity to learn whether the navigation methods are truly intuitive. We want to understand what happens naturally when someone enters an embodied navigation environment.” Given the growing presence of VR in our lives — for example, the increasing use of virtual modeling in fields as diverse as architecture, medicine, meteorology, and military training — this kind of research has the potential for wide application.


Additionally, says Berry, “As we live in increasingly virtual environments, we need to develop a robust vocabulary for articulating what is real, what is not, and examine how we occupy both of those kinds of spaces and the spaces in between.” In exploring how sight, sound (focused on immersive language learning), and proprioception interact with VR, the students created a series of artworks that examined physiological and psychological effects through a range of blended reality experiences.


Following is a selection of representative team projects:

Embodied Motion
Jack Wesson (Yale College ’19) and Lance Chantiles (Yale College ’19) with Johannes DeYoung, Justin Berry, and Stephanie Riggs 

Embodied Motion is a research project geared toward creating more inclusive VR environments by focusing on intuitive, non-controller-based navigation. The environment is a maze that must be navigated using one of three options for player-directed movement: two that are typical of VR, teleportation and joystick-driven navigation, and one of our own design that tracks the player in space and moves them according to their relative position within the play area. A study will examine how different types of users, gamers and non-gamers, navigate the maze, using the criteria of ease, enjoyment, speed, and learning curve. A paper related to this study will be presented at the IEEE Games, Entertainment and Media conference in Galway, Ireland this summer.

Archive
Valentina Zamfirescu (Yale School of Art ’18)

Zamfirescu created a large sculptural object populated with avatars of herself. These representations are engaged in various activities, from the jocular to the somber. The viewer is moved through the space along with another character who is only partially visible and acts as the viewer’s companion. Other sculptural objects also populate the space. The work investigates the relationship of a viewer to their portrayal in VR, as well as how the female form is represented in virtual worlds.

Linguistic Landscape
Michael Costagliola (Yale School of Drama ’18) with Justin Berry

This piece is part of a research project that seeks better ways for people to engage with immersive audio landscapes for the purpose of learning new languages. By replicating how we experience language in the real world, distributed through space and in a non-linear format, the project aims to help users gain an emotional and contextual understanding of a foreign language. Using embodied motion and a system that generates spatial audio, with visual cues to the location of sound, the project focuses on the auditory experience of virtual spaces rather than the more common visual experiences.
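The spatial audio at the heart of the project can be approximated with two standard ingredients: distance attenuation and a pan derived from the sound source’s direction relative to the listener. The Python sketch below is a deliberately simplified, hypothetical version of that mechanism (production engines use HRTFs and smoother falloff curves); the function name and constants are assumptions for illustration.

```python
import math

def spatial_gains(listener_pos, listener_yaw, source_pos):
    """Compute left/right gains for one sound source using simple
    inverse-distance attenuation and a constant-power pan derived
    from the source's azimuth relative to the listener's facing."""
    dx = source_pos[0] - listener_pos[0]
    dz = source_pos[1] - listener_pos[1]
    dist = max(math.hypot(dx, dz), 1.0)   # clamp to avoid blow-up near the source
    attenuation = 1.0 / dist              # simple 1/r falloff
    azimuth = math.atan2(dx, dz) - listener_yaw
    pan = math.sin(azimuth)               # -1 = hard left, +1 = hard right
    theta = (pan + 1.0) * math.pi / 4.0   # constant-power pan law
    left = attenuation * math.cos(theta)
    right = attenuation * math.sin(theta)
    return left, right

# A word spoken 2 m away, directly to the listener's right:
print(spatial_gains((0.0, 0.0), 0.0, (2.0, 0.0)))  # -> (0.0, 0.5)
```

Attaching a gain pair like this to each word or phrase scattered through the space is what lets language arrive from particular places, the way it does in the real world.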

Seventy-Four Letters to Her
Yong Eun Ryou (Yale School of Art ’18)

Seventy-Four Letters to Her is an anthology of letters between Yo-E and her alter egos. This virtual reality project begins by looking at the act of correspondence: writing a letter is a paradoxical experience in which your imagination creates a fictional presence of a person who is not there, playing on the tension between a void and the attempt to fill it. Yo-E created an audio-visual landscape of the letters by adapting the soundscape system built by Justin Berry and Michael Costagliola to create a space where letters written by her alter egos are read aloud. Short videos float in the air and, as viewers approach to investigate them, they can hear the poetic letters, recorded using binaural audio — a type of audio that replicates the way sound reaches our ears.

Whale
Ilana Savdie (Yale School of Art ’18) and Antonia Robins Kuo (Yale School of Art ’18)

Ilana Savdie and Antonia Kuo created a heavy sculptural apparatus that attaches to a VR headset. To navigate the world, users must lift the unwieldy object and can move only in relation to it. Inside the VR experience, a whale-like creature moves as the participant moves, generating the feeling of hauling a heavy and enormous character along with oneself. This work brings the tactile feelings of weight and gravity into the simulation, emphasizing the participant’s cumbersome dependency on the VR apparatus.

Ghosts
Bobby Berry (Yale College ‘18)

Berry’s work begins in an entirely empty white world. Whenever the player moves through the space, a character is created based on the player’s location. Over time the world fills with bodies that act as a legacy or memory of everywhere the viewer, or multiple viewers, have been. The experience is about building a world based entirely on one’s own movement through it. The work is also about how virtual worlds and digital systems track movement and position — without asking permission — in ways we may not even be aware of.
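One way to implement this kind of movement memory is to drop a static figure whenever the player has travelled a set distance from the last one. The Python sketch below is a hypothetical reconstruction of that logic, not Berry’s code; the class name, the spacing constant, and the example positions are all assumptions.

```python
import math

SPAWN_SPACING_M = 1.0  # hypothetical: drop a ghost every metre travelled

class GhostTrail:
    """Record where a player has been by spawning a static 'ghost' body
    each time they move a set distance from the last one. Over time the
    accumulated ghosts form a memory of every path taken through the
    otherwise empty world."""
    def __init__(self):
        self.ghosts = []        # positions of spawned bodies
        self.last_spawn = None  # where the most recent ghost was placed

    def update(self, player_pos):
        if self.last_spawn is None or (
            math.dist(player_pos, self.last_spawn) >= SPAWN_SPACING_M
        ):
            self.ghosts.append(player_pos)  # a renderer would pose a figure here
            self.last_spawn = player_pos

trail = GhostTrail()
for pos in [(0.0, 0.0), (0.4, 0.0), (1.1, 0.0), (2.2, 0.9)]:
    trail.update(pos)
print(trail.ghosts)  # -> [(0.0, 0.0), (1.1, 0.0), (2.2, 0.9)]
```

Because the trail only ever grows, the same structure naturally accumulates the traces of multiple visitors across sessions.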

Living Instrument
Michael Costagliola (Yale School of Drama ’18)

Costagliola set up our motion capture system to track the positions of a series of baseball caps. Knowing where the hats are located in space, and using that information to control audio, turns users in the space into living instruments: they can make music by moving around the room. Each user controls a single sound source, and by coordinating their movements they can spontaneously compose music together. This is a lot of fun! It is also the framework for a larger composition that Costagliola will present as his thesis project for his MFA in sound design.
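The mapping from tracked position to sound can be as simple as assigning one spatial axis to pitch and another to loudness. This Python sketch is a hypothetical example of such a mapping, not Costagliola’s actual patch; the room dimensions, pitch range, and function name are assumptions.

```python
# Hypothetical mapping from a tracked cap's floor position to a sound:
# x controls pitch, y (depth into the room) controls loudness, so walking
# around the room literally plays the instrument.
ROOM_WIDTH_M = 6.0
ROOM_DEPTH_M = 6.0

def cap_to_note(cap_x, cap_y):
    """Map a motion-captured hat's (x, y) floor position to a MIDI-style
    pitch and a 0-1 volume. One hat = one voice; several people moving
    together compose in real time."""
    x = min(max(cap_x / ROOM_WIDTH_M, 0.0), 1.0)  # normalise to 0..1
    y = min(max(cap_y / ROOM_DEPTH_M, 0.0), 1.0)
    pitch = 48 + round(x * 24)  # a two-octave range, C3..C5
    volume = y                  # walk deeper into the room to get louder
    return pitch, volume

print(cap_to_note(3.0, 4.5))  # centre of the room, toward the back -> (60, 0.75)
```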

Palantir
Jack Wesson (Yale College ’19)

Wesson has created a way for viewers to have more empathetic VR experiences. Users begin in a blank world populated by a single character. A larger world is visible, but only by looking at/through the character, as if the character were a living screen. If the user places their head inside the head of that character, they can see the entire world as that character sees it. By discovering new people, matching their movements with one’s own, and seeing through their eyes, the participant encounters a world whose features and qualities differ depending on whose eyes provide the window. A child might see the world filled with color and movement, while a parent might see it as more subdued and static. The goal is to make users identify and connect with the people populating the virtual world in a more emotional way.
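The head-inside-head trigger amounts to a proximity test between the user’s tracked head and each character’s head, with the renderer switching viewpoints when they overlap. The Python sketch below is a hypothetical simplification of that mechanic; the radius, function name, and character data are assumptions for illustration.

```python
import math

HEAD_RADIUS_M = 0.15  # hypothetical: how close counts as "inside" a head

def active_viewpoint(user_head, characters, default_view="user"):
    """Return whose eyes the world should be rendered through. If the
    user's tracked head overlaps a character's head, the renderer
    switches to that character's view of the world; otherwise the user
    sees only the blank space and the figures within it."""
    for name, char_head in characters.items():
        if math.dist(user_head, char_head) < HEAD_RADIUS_M:
            return name  # render the world as this character sees it
    return default_view

characters = {"child": (1.0, 1.2, 0.0), "parent": (2.0, 1.7, 0.5)}
print(active_viewpoint((1.05, 1.18, 0.02), characters))  # -> "child"
```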