In the late 1980s, throughout the ‘90s, and well into the 2000s, MIT Principal Research Scientist Gloriana Davenport investigated the idea that movies had begun to operate as a kind of new “elastic” media. She wrote, “Interactive cinema reflects the longing of cinema to become something new, something more complex, something more intimate, as if in conversation with an audience.”1 The cultural artefacts informing this shift were, in Davenport’s opinion, as disparate as interactive TV, video games, large format simulation rides, and experimental VR. Between 1987 and 2004, her Interactive Cinema Lab at MIT designed multi-threaded movies, multiplayer VR experiences, previsualization tools for visual effects, documentary platforms, and “smart” VR characters driven by story databases. When asked in a 1995 interview with American Cinematographer what kind of filmmaker would adopt the radical new tools she was building, Davenport replied that the current generation of 30-year-old filmmakers steeped in the culture of video games were already deploying them.2

Indeed, this felt true. The cultural moment of immersive CAVE environments, Virtuality arcades, QuickTime VR, and films like The Lawnmower Man (Brett Leonard, 1992) and Strange Days (Kathryn Bigelow, 1995) made Hamlet on the Holodeck required reading for any visually literate student of the period. A linear history of virtual reality forces us to evaluate what ensued in the following decade as a VR winter, rendering the current momentum around VR as a gamble, a lark, or just a boom cycle. But what if we looked at VR as a thread embedded in the elasticity of cinema itself? This thread might stretch back to the origins of cinema and television and then forward to what Davenport referred to as “Movies of the Future”.3

This essay takes a non-linear approach and looks at the development of VR from the perspective of what theorist Siegfried Zielinski calls deep time.4 When Zielinski writes, “…do not seek the old in the new, but find something new in the old,”5 he reminds us that newness in media is a fluid concept. Throwing the newness of VR into question and considering its hidden concepts, false starts, and unexpected antecedents clarifies what it is doing as a medium. Zielinski’s media archaeology methodology helps us understand how a medium can be neither new nor old, but instead best understood as an elliptical excavation.

This excavation of VR examines developments in large format technologies, the concept of the volume as a storytelling space, and the plastic reality of visual effects. As Davenport illustrated in her work, these disparate but related formal considerations of media are the building blocks of VR. Their overlapping and somewhat meandering histories — in particular the recurring relevance of André Bazin’s new media essays, and the filmmakers in the 1960s–‘70s European and American experimental cinema scenes who were informed by Bazin — are a connecting thread.

Alphaville (Jean-Luc Godard, 1965)

(Re)Adjusting Our Gaze

Much of the research in VR is currently focused on content and platform — how do we tell stories and how do we deliver them — but of equal significance is how we look in VR. How does visual literacy operate in the spherical volume? Many of our storytelling cues in VR have been taken from cinema, which is grounded in a public, communal audience. What effect do the private, enclosed nature of games and television and the one-off, bespoke nature of large installation screens have on the experience?

In the early 1950s André Bazin wrote for the French journal Radio-Cinéma-Télévision. We can understand Bazin’s now emblematic questioning of cinema as a desire to expand its definition. When he asked, what is cinema?, he was also asking: what is television, and theatre, and radio — and, though he never stated so explicitly, what is virtual reality?

Television, in the 1950s, and again more recently with its proliferation across multiple platforms, encourages us to look in a way that is quite distinct from cinema. Davenport pointed this out in a 2000 Scientific American article. Her timeline of the evolution of interactive entertainment began with the TV show Winky Dink and You (CBS, 1953), in which viewers placed plastic sheets over the TV screen and participated in drawing exercises.6 Bazin understood TV as both an opportunity and an obstacle for directors working in multiple mediums. He saw TV as an authentic social medium allowing access to a much wider audience than cinema7 — an audience interested in experiencing storytelling in their personal domestic environment rather than seeking it out in a public one. Bazin’s identification of the emotional register of TV as one of intimacy continues to be useful for our understanding of how new media can operate. Much of this intimacy emerges out of proximity — the TV screen may be smaller, but it is also much closer to us. We invite it into our homes and it becomes as social and familiar as the hearth. VR extends this idea, pushing the screen as far forward as the bridge of our nose. Intimacy gives way to privacy, replacing the televisual hearth with an immersive dream. This proximity of the screen — very close or very far from us — is also about the expansion and contraction of formats.

Bazin’s preoccupation with the shifting landscape of large format cinema — specifically Polyvision, Cinemascope, and Cinerama — is bound up in the idea of immersion and presence. Large-format cinema has always been about playing with the viewer’s proximity to the image. In addition to offering a higher resolution image, large-format theatres offer significantly better sound systems and more comfortable seating. These seating arrangements, along with their initial locations in museum spaces, contributed to the special status many large-format theatres enjoy. They eschew the domesticity of the local mall or even the best quality home theatre. This prioritisation of immersion — the feeling of being transported to a virtual world, and the audience’s sense of engagement with that world — is sold as a premium experience and is reflected in a higher price point.8

In his essay “Cinerama, a Disappointment,” Bazin pointed out that the fundamental flaw in Cinerama was the imperfection of the vertical seams where the three projected images met. The title of the essay alludes to the gap between the promise of a better viewing experience and the technical limitations of achieving it — specifically the calibration of projectors to exact specifications. Nevertheless, he felt Cinerama ushered in a new era of realism due to the overall impression of increased depth and luminosity in the image. Though sceptical of the hype around new technologies, he admitted that certain kinds of documentary passages, such as a plane flying at a great height or the running of the bulls, allow for a heightened sense of realism in large format that is deeply compelling.9

Other essays on large format and aspect ratios see Bazin taking a more philosophical approach to the technologies. In “Will Cinemascope Save the Cinema?” he argued that large formats fundamentally change the way spectators look. As with the experience of looking around in the real world, the 146 degrees of Cinerama cannot be taken in from a fixed position: “…you have to let your gaze wander not only by moving your eyes but by turning your head.”10 Bazin saw this as the necessary next step of cinema because “…everything that contributes to the active participation of the spectator is progress.”11

This is perhaps a less polemical way of reading American director Chris Milk’s statements about the new language of VR:12

So here’s what’s special about VR. In all other mediums, your consciousness interprets the medium. In VR, your consciousness is the medium. So the potential for VR is enormous. But where are we now? What is the current state of the art? Well, we are here. We are the equivalent of year one of cinema. This is the Lumière Brothers film that allegedly sent a theater full of people running for their lives as they thought a train was coming toward them. Similar to this early stage of this medium, in VR, we also have to move past the spectacle and into the storytelling. It took this medium decades to figure out its preferred language of storytelling, in the form of a feature film. In VR today, we’re more learning grammar than writing language.

The new language is not so much the director’s visual grammar in constructing the 360 image as the visual literacy required by the viewer to decode that image. The viewer needs to learn to look in a way that allows the gaze to wander rather than remain fixed straight ahead. To date, the most successful VR experiences have challenged the viewer’s gaze and the way in which images are composed for the full field of view. The Academy Award-nominated VR short Pearl (Patrick Osborne, 2016), for example, stages characters moving in and out of the viewer’s field of view so that we must turn and look to the far left and right of the centrally staged action in order to experience the unfolding of key relationships in the story. The filmmakers implicitly acknowledged that, with Pearl being one of the earliest narrative pieces in the new wave of VR, we, the audience, did not yet know how to engage with the entire spherical volume. Characters entering and exiting the framing device of the car in Pearl helped us learn how to look.

Cinema As Volume

Bazin’s interest in the changing format of cinema was shared by other writers and theorists in French cinema circles of his time. In 1944, René Barjavel, who was primarily known as a science fiction writer in France, wrote a short treatise titled “Total Cinema: Essays on the Future Forms of Cinema.” In it he deconstructed the underlying assumptions of film production — sound, colour, exhibition, distribution, and even audience spectatorship — calling each aspect into question and suggesting a more revolutionary way forward. He was concerned in particular with the concept of relief, which he differentiated from stereoscopy. Barjavel’s technical description of relief comes across as a little abstract, but when he wrote of “…a voluminous and transparent screen which may perhaps even be completely immaterial, itself created by a bundle of waves,”13 it sounds a lot like he was describing augmented reality. This is the radical position he established for cinema before circling back to define the cinema volume in greater detail:14

Since it first emerged, the cinema has been undergoing a constant evolution. This evolution will be complete when it is able to offer us characters in full relief, in full colour, and even perhaps whose perfume we can detect; a time when these characters will be freed from the screen and the darkness of the film theatres to step out into the city streets and the private quarters of their audiences. Even then, science will continue adding finer touches to its perfection. But to all intents and purposes it will have reached its ultimate state. A state we call total cinema.

Barjavel utilized much of the terminology of contemporary film technology before it had been invented — he may have been one of the first writers to explicitly use the term “virtual image.”15 He theorized that this virtual image was the only way to address the stasis of the studio system, which had already given rise to a homogenous application of sound and colour technologies. Francesco Casetti has written that the entire history of cinema has been a series of crises in which, at every turn, new forms threaten its very existence. Cinema, he says, can best be described as a medium that negotiates “the tension between persistence and transformation.”16 Just as Bazin saw that television was both an opportunity and a crisis for cinema, Barjavel saw that cinema was not so much what it was in 1944, struggling to accommodate the formal strategies of sound and colour, but rather what it was destined to become at some future date: the virtual image, or rather, virtual reality.

In his analysis of French poetry and cinema,17 Christophe Wall-Romana asserts that Bazin’s essays may have been directly inspired by Barjavel’s work. We do not know if Bazin and Barjavel met, but we can assume they were moving in similar circles in which the spirit of French cinema and revolution were in vogue. Though the films of the French New Wave are intricately tied to the writings of Bazin via Cahiers du Cinéma, it is also likely the sweeping ideas behind them, and filmmakers’ investment in pushing the boundaries of film form towards a more radical expression of the image through technical manipulation, were influenced at least in part by the vision laid out by Barjavel.

In their biography of François Truffaut, Antoine de Baecque and Serge Toubiana further tease out this connection. The opportunity to write and direct was first presented to Truffaut by Julien Duvivier, a director who at the time was co-writing a film with Barjavel.18 An experienced filmmaker, Duvivier encouraged Truffaut to get behind the camera at a time when Truffaut was known solely as a film critic. Furthermore, Barjavel’s reputation as France’s foremost science-fiction writer in the ‘60s and ‘70s would have made some impression on Truffaut’s and Jean-Luc Godard’s own forays into sci-fi with films like Alphaville (Jean-Luc Godard, 1965) and Fahrenheit 451 (François Truffaut, 1966). All three were very much concerned with the themes of urban alienation and technology as a controlling mechanism that were bound up in the experimental French cinema of this period.19

Expanding Cinema

The deep time of virtual reality saw experimentation with form happening across the work of seemingly unrelated artists. Around the time Godard and Truffaut were directing Alphaville and Fahrenheit 451, an experimental American filmmaker named Francis Thompson was constructing large-scale multi-screen films for Expo 67. Thompson’s work initially emerged out of the Jonas Mekas–Andy Warhol New York underground film scene of the mid-1960s, but it also featured in Gene Youngblood’s Expanded Cinema scene and his 1970 book of the same name. According to Youngblood, Expanded Cinema was roughly characterized by its new media consciousness via the use of technology, special effects, computers, video, and holography, among other experimental practices.20

Thompson’s work, equally oriented towards direct cinema and museum-based installation practices, ranged from the 1957 non-objective short N.Y., N.Y. (Francis Thompson, 1957) to one of the first Imax films, To Fly! (Jim Freeman, Greg MacGillivray, 1976). But it was To Be Alive! (Francis Thompson, 1964) that makes the most compelling case for the way cinema formats and audience spectatorship were being expanded at this time.

We Are Young! (Francis Thompson, 1965) screening at Expo 67

To Be Alive! was a documentary created for the 1964 New York World’s Fair and exhibited across three screens.21 It is one of the best examples of Thompson’s ongoing investigation into multi- and large-screen cinema exhibition formats, and is considered an early inspiration for the Imax format.22 He won an Academy Award for the film the following year and continued, until his retirement, to work closely with Imax and the Large Format Cinema Association to bring experimental formats into Hollywood exhibition practices.23

Thompson summed up his interest in large format exhibition in a way that both harkens back to Bazin’s belief that technological progress in cinema should result in a more immersive audience experience,24 and explicitly points back to Barjavel’s total cinema:25

I would like to make a theatre that would be a huge sphere, as big as Radio City Music Hall or larger, and seat the audience around one side of it…The picture comes around as far as you can see, and beneath you too. What I see is a theatre with so great an area that you no longer think in terms of a screen…Your images should come out of this great, completely-surrounding area and hit you in the eye or go off into infinity. So you’re no longer working with a flat surface but rather an infinite volume.

Thompson describes the kind of physical hardware that would be required to push this technology in a direction that not only offers immersive entertainment applications, but also a new way of thinking that is meaningfully integrated into our everyday lives. The device he imagines would be a kind of aural/visual “hoodlike training device used in aircraft and navigation training.”26 He theorized, “You would have images that completely fill your field of vision and sound that would fill your entire range of hearing…With a great sphere you’re introducing people into a whole new visual world which would be emotionally, physically, and intellectually overwhelming.”27

Thompson’s imagining of VR headsets was indebted to military research being done at the time towards drone applications. The arrival of digital computing technologies allowed for a standardization of the WWII-era flight simulation training undertaken by defense contractors.28 This research eventually made its way into education and research environments like Davenport’s MIT lab and the CAVE, into immersive gaming platforms, and finally into consumer-ready platforms like Facebook’s Oculus, HTC’s Vive, and Samsung’s Gear VR.29

Handmade Cybernetics

The development of large format was one way in which experimental filmmakers like Thompson contributed to the technical innovations that wound their way into the deep time of VR. Another was the architecting of hardware and software that allowed for the integration of computer graphics and live-action images. Known today as visual effects (VFX), in the 1960s this experimental field was one of many burgeoning disciplines referred to more broadly as cybernetics — the study of how technology can be used to control any system — as defined by MIT mathematician Norbert Wiener.30

Prior to the 1960s, several artists built hand-made machines that sought to combine different media — drawings, video images, and automated machine processes — to create a new kind of moving image. These moving images were deeply expressive, but they remained tied to the gallery space and functioned more as installations than cinematic entertainment. It was this kind of experimentation that created much of the groundswell gathered up under the genre of Expanded Cinema across cities like New York, Boston, Chicago, San Francisco, and Los Angeles. But it wasn’t until 1963, with the development of Ken Knowlton’s BEFLIX system at Bell Labs, that hardware-software interfaces became stable and usable enough for artists to begin working with. BEFLIX was the first computer animation system to integrate a hardware platform and a programming language for manipulating visual input. And it was American artist Stan VanDerBeek’s Poemfield series of films, authored with BEFLIX, that really illustrated the potential for combining computers with the first inklings of VFX workflows to arrive at an entirely new way of making movies in the late ‘60s.

In The Experience Machine: Stan VanDerBeek’s Movie-Drome and Expanded Cinema, Gloria Sutton describes VanDerBeek’s process for crafting his Poemfield films. “He often spliced computer-generated images into his short films that also incorporated his stark line drawings, hand-collaged elements, and stop-animation sequences. Computer animation and hand-built collages that were then filmed using stop motion techniques were both labour intensive processes that relied on VanDerBeek’s facility with editing images and crafting within a single frame.”31 The assembly-line-like process Sutton outlines here encompasses all the essential steps that would come to comprise VFX — previsualization, creation of elements via live action and computer graphics, compositing, and a resultant image which conveys a new reality. This is the essential framework for the VFX pipelines later developed at scale by George Lucas’s VFX house Industrial Light & Magic (ILM), and still in use today in VFX houses everywhere. Of note is also the bespoke nature of constructing each frame. VanDerBeek’s process was handcrafted; each frame of a sequence was a unique piece of art. Though today’s VFX pipelines are far more complex and technical, this aspect of production remains consistent: it is still a very bespoke, labour-intensive process.

VanDerBeek wasn’t content to present these hand-crafted images as traditional cinematic films. He leapfrogged through the deep time of VR and placed them into a volume — a spherical theatre he built called the Movie-Drome. The idea was considered revolutionary by the Expanded Cinema community in 1965, but it wouldn’t capture the public’s fascination for another fifty years. He filled the Movie-Drome with projections of varying aspect ratios, resolutions, orientations, and colour. In archive photos of it we see a preference for spatial montage and an exploration of the dreamscape quality of VR. He filled the volume with kinetic clips, sound, and projected light, privileging the physical experience of moving through space over the fixed gaze of the traditional cinema spectator. The Movie-Drome offered an immersive virtual tour into the mind of Stan VanDerBeek.

Stan VanDerBeek outside his Movie-Drome theatre in Stony Brook, New York, circa 1970

In 1965 these collage-based animated films and proto-VFX workflows established VanDerBeek as a central figure not just in the Expanded Cinema movement, but also in the more Hollywood-oriented New Cinema,32 in which George Lucas was also a participant. The auteur-driven nature of VanDerBeek’s remix Poemfield films was coming out of the same avant-garde milieu as Lucas’s early experimental tone-poem shorts. The work of the two filmmakers was certainly connected via their individual contact with Gene Youngblood, who wrote extensively about each as a key figure in the Expanded Cinema period.33 VanDerBeek’s films are significant because, like Francis Thompson’s, they represent the intersection of several key moments and concepts that unite the deep time of VR: the influence of the European avant-garde, Expanded Cinema, the New Cinema, the exploitation of cinema’s spherical volume, early VFX workflows, and even a future-oriented stance towards science-fiction storytelling.

It was Youngblood who took Norbert Wiener’s definition of cybernetics and applied it to a new genre of films — those authored on analogue computers and pushing the limits of the hardware and software capabilities of the time.34 For Youngblood, cybernetics was a term that implied the utopian ideals of computers converging with human consciousness and a radical shift in production, distribution, and exhibition modes. Cybernetics was a product of the post-war period, a time when technology had not yet been consolidated into platforms run by multinational corporations. By today’s standards the term is perhaps better represented as something more pragmatic, what VanDerBeek himself35 and film theorist Julie Turnock refer to as the “plastic experience” of cinema.36

Plastic Reality

Turnock views the plastic reality of late 20th-century cinema — its formal strategies of stylization, graphic dynamism, immersion, and kineticism — as a way for New Cinema filmmakers like George Lucas to more fully express their personal vision of the world through visual effects work. The plasticity she describes is arrived at through large format and VFX technologies, and the Bazinian drive towards intimacy between the viewer and the cinematic experience. She writes, “The new style of blockbuster filmmaking that emerged at the end of that decade [1970s] emphasized a sense of immersion and bodily engagement.”37 For Turnock, this immersion and bodily engagement exists at the level of the performative, with complex camera work enabled by motion control systems, digital doubles paired with motion-captured actors, and effects elements such as wind, fire, and water. Stan VanDerBeek’s BEFLIX experiments had by the late ‘70s and early ‘80s evolved into sophisticated compositing techniques allowing for the seamless integration of any number of elements into a single frame. Audience perception of the human body’s movement through cinematic space dramatically shifted, and along with it, our own proximity to those bodies, which could now move away from and towards us with kinetic speed. This industrial shift was soon picked up by Gloriana Davenport and the MIT Media Lab to further exploit narrativity and form in cinema.

The making of Star Wars: Episode IV – A New Hope (George Lucas, 1977)

The immersion Turnock refers to applies equally to the viewer/participant’s bodily engagement with the story world. The large format simulator experiences directed by New Cinema filmmakers like Lucas and Coppola for theme parks in the 1980s took plastic reality into the physical realm. In experiences like Star Tours and Captain EO (Francis Ford Coppola, 1986), made for Disneyland, audiences experience sound, smell, and simulated motion in four dimensions, pushing the sensation of the traditional theatre into an immersive experience similar to the one envisioned by Barjavel’s volume and Stan VanDerBeek’s Movie-Drome. Captain EO in particular conflates the new media of television and cinema as Bazin predicted, packaging up the televisual and still nascent aesthetic of MTV’s music videos to present a cinematic sci-fi imagining of pop icon Michael Jackson as envisioned by Francis Ford Coppola.

The legacy of plastic reality continues to resonate in contemporary VR. Experiences like Dear Angelica (Saschka Unseld, 2017) push the lucid fever-dream quality of the medium. Directed by Saschka Unseld, who honed his craft at Pixar before breaking away to direct for the now-defunct Oculus Story Studio, Dear Angelica reflects the many disparate moments of VR’s deep time coming together. We see a director working in the highly technical and streamlined pipeline of a studio built on the VFX model of Lucas’s ILM. As a layout artist on films like Toy Story 3 (Lee Unkrich, 2010), Cars 2 (John Lasseter, 2011), and Brave (Brenda Chapman, Mark Andrews, 2012), Unseld would have had a thorough understanding of the overall 3D pipeline and digital cameras, and would have worked closely with directors to previsualize the way shots are composed. He worked with developers at Oculus38 to take the Quill toolset into production. Quill is an application that renders an artist’s brushstrokes in three dimensions so that viewers can interact with them in VR. On Dear Angelica Unseld led a team of artists to create a story structured around memory rather than the kind of linear logic at work in most films — an experience that feels like it is really exploiting the potential of a different medium. Unseld’s skills as a director are an amalgamation of Francis Thompson, Stan VanDerBeek, and George Lucas — all filmmakers who worked with cutting-edge technology to craft experimental films fuelled by their very personal vision of the world. This new breed of VR director — those who began their careers working in highly technical and specialized roles in VFX — can now be seen across the VR landscape. Directors like Ben Grossmann, Aruna Inversin, and Sam Macaroni are exploring the intersection of commercial and experimental interests as the Expanded and New Cinema filmmakers did before them.39

Expanded Cinematography

When Gloriana Davenport stated in 1995 that 30-year-old filmmakers raised on video games were already making movies of the future,40 she was essentially describing Mexican cinematographer Emmanuel Lubezki. Working with a variety of directors during this period, Lubezki began developing his signature long-take style, which coalesced a decade later in Children of Men (Alfonso Cuarón, 2006). In the film’s continuous six-minute shot set inside a futuristic car, Lubezki combined Bazin’s principles of realism with what game theorist Ian Bogost describes as the core feature of video games: a lengthening of action in order to convey the magnitude of an event.41

In his discussion of the video game Heavy Rain (Quantic Dream, 2010), Bogost theorizes that games reject cinematic editing in favour of prolonging. The mental effort exerted by a player who is forced to go through all the minute actions that comprise an event becomes essential to a scene’s meaning. These prolonged moments carry the story’s “dominant payload”.42

Lubezki’s experimentation with the long-take arose out of his aversion to traditional film coverage. He found that the “A-B-A-B” intercutting of shots between two actors resulted in a sameness that characterizes most films. “It’s as if the cinematic language hasn’t really evolved that much. Many films just cover the dialogue without really exploring the visual dimension…We did the movie in long shots to try to get the audience to feel they are there.”43

The making of Children of Men (Alfonso Cuarón, 2006)

There is a symmetry in Davenport and Lubezki’s visions for the evolution of cinema. While Lubezki’s cinematography isn’t as radical as the interactive narrative databases Davenport staked out, he does use the tools and visual language pioneered by Expanded Cinema artists and the MIT Media Lab to digitally construct seamless continuity in his shots. More importantly, the payload in Children of Men arises entirely out of its rejection of some of cinema’s core formal strategies in favour of a distinctly game-like visual approach.

Viewer as Cinematographer

There is an implied fluidity in the skillsets of those who help manufacture cinema’s most commercial big-budget spectacle films and VR’s experimental emotional landscapes. Bringing together both high and low practices, such as VFX techniques and experimental non-traditional exhibition spaces, has now become common practice in the transmedia landscape. In her essay in Fluid Screens, Expanded Cinema, Haidee Wasson makes a case for the ways in which both very small and very large screens are integral to understanding cinema’s continued expansion and proliferation into visual culture.44 She focuses on the QuickTime and Imax formats as technologies that occupy seemingly oppositional spaces but nevertheless allow us to understand cinema’s ability and need to appropriate a multiplicity of screens — what Francesco Casetti refers to as cinema’s ability to appropriate both high and low definition in order to facilitate its overall strategy of expansion.45

Wasson and Casetti both draw attention to cinema’s utilisation of “poor” and “rich” image fidelity to expand onto new platforms like browsers, mobile devices, and immersive theatrical spaces. The enlarging of the frame is illustrated in the marketing, prestige, and distribution strategies for large-format films like Paul Thomas Anderson’s The Master (2012), Quentin Tarantino’s The Hateful Eight (2015), and Christopher Nolan’s Dunkirk (2017). But the sustained profitability of these films is due to their ability to contract and stream on laptops, mobile phones, tablets, and airplanes.

Contraction and expansion can be seen as flip sides of the same equation in the move towards increased cinematic immersion. The contraction of cinema onto our mobile devices and personal computers paved the way for those same devices to become distribution channels and exhibition forums for VR platforms like Google Cardboard, Samsung Gear VR, and a host of other lo-fi headsets that utilize the mobile phone as a 360 screen. This malleability of the high/low, rich/poor, enlarged/contracted indicates cinema’s adaptation to yet another crisis state — in this case dwindling eyeballs and competing media — one that has been integral to the push for wide-scale adoption of VR.

In The Lumière Galaxy: Seven Key Words for the Cinema to Come, Casetti connects the foundational ideals of 1960s Expanded Cinema to contemporary cinema. Casetti’s interest lies mainly in cinema’s continued malleability. He points out that Expanded Cinema’s preoccupation with the diffusion of technology, its feedback between the medium and the spectator, its connection to and reliance on other media, and, most significantly, its appropriation of the computer, continue to resonate with today’s production and distribution practices in a way that throws the very idea of spectatorship into question.46 Like Bazin, he is concerned with how cinema can be both a private and a public experience. Spectatorship is no longer a formal activity; it can happen anywhere at any time. Casetti’s discussion ends in 2014 with the renewed interest in 3-D and large-format technologies. Had his research been published one year later, it surely would have included a discussion of how the spherical 360-degree digital format once again redefines our notions of screen space.

The affordances of VR, particularly the way in which the medium encourages a gaze that wanders and drives the narrative, also present a challenge to the deep-seated notion that cinema is an auteur-driven medium. Head tracking, a technology that records the precise physical location and movements of the VR participant so that the virtual world responds to the participant’s head movements, is now a standard feature in most headsets. The participant becomes a cameraperson, removing the decision-making around camera placement and movement from the filmmaker’s hands.

The Degrees of Freedom (DoF) offered by headset tracking continue to evolve. Low-end headsets offer 3 DoF, tracking rotation only: rolling, pitching, and yawing. More sophisticated headsets like the Oculus Rift and HTC Vive add positional tracking for 6 DoF, extending the experience to elevation (the participant’s vertical movement), strafing (horizontal movement), and surging (forward and backward movement). All this is to say that the viewer/participant now functions like a cinematographer, curating the lens through which a VR story is experienced. In a VR story, the participant cannot be told what to do by the director; the participant can only be encouraged. In this way the director of VR becomes more of a designer, building a story world and interactive mechanics which are then given over to the agency of a participant who will dictate the “final cut.”
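The distinction between the two tracking regimes can be made concrete in a minimal sketch. The names and structure below are illustrative only, not drawn from any actual headset SDK: a pose holds three rotational values (what every headset tracks) and three positional values (what only 6 DoF systems track).

```python
from dataclasses import dataclass

@dataclass
class Pose:
    # Rotational degrees of freedom (3 DoF): tracked by all headsets
    yaw: float = 0.0    # looking left/right
    pitch: float = 0.0  # looking up/down
    roll: float = 0.0   # tilting the head
    # Positional degrees of freedom: the extra three in 6 DoF systems
    x: float = 0.0      # strafing (horizontal movement)
    y: float = 0.0      # elevation (vertical movement)
    z: float = 0.0      # surging (forward/backward movement)

def constrain_to_3dof(pose: Pose) -> Pose:
    """Simulate a low-end headset by discarding positional tracking:
    the participant's head rotation still steers the view, but walking
    or leaning has no effect on the rendered viewpoint."""
    return Pose(yaw=pose.yaw, pitch=pose.pitch, roll=pose.roll)
```

The sketch makes the essay’s point visible: on a 3 DoF device the participant can only re-frame the world by turning, while 6 DoF hands over camera placement as well as camera direction.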

Eye-tracking offers additional affordances unique to VR. Foveated rendering — a rendering technique which significantly optimizes rendering workload by reducing image quality in the participant’s peripheral view — promises to create more responsive experiences by placing less load on the graphics processor. In general, being able to track exactly where the eye is looking paves the way for much more intuitive interactions. Story mechanics can be controlled by nothing more than the participant looking at or in the direction of something, essentially taking the notion of the classic cinematic gaze and throwing it into hyper-drive.
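The trade at the heart of foveated rendering, full image quality where the eye is pointed and progressively less in the periphery, can be sketched as a simple falloff function. The foveal radius and decay rate below are illustrative assumptions, not values from any actual renderer:

```python
import math

def foveated_quality(pixel_angle_deg: float,
                     fovea_deg: float = 5.0,
                     falloff: float = 0.15) -> float:
    """Return a render-quality factor in (0, 1] for a pixel at the given
    angular distance from the tracked gaze point: full quality inside the
    foveal region, exponential falloff in the periphery, with a floor so
    the edges of the view are never left entirely unrendered."""
    if pixel_angle_deg <= fovea_deg:
        return 1.0
    return max(0.1, math.exp(-falloff * (pixel_angle_deg - fovea_deg)))
```

A renderer applying such a curve spends most of its budget on the few degrees the participant is actually looking at, which is why eye tracking and foveation together promise more responsive experiences on the same hardware.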

In her discussion of the use of VFX in contemporary cinema, Kristen Whissel describes the spatial dialectics of many big-budget spectacle films as expressing a “new verticality.” In the new verticality, VFX technology is used to emphasize and support action that foregrounds extreme movement across the frame’s y-axis. Whissel writes, “Digital processes have given rise to a new film aesthetic based on height, depth, immersion, and the exploitation of the screen’s y and z axes.”47 Think of the martial arts performances requiring digital doubles, motion capture, and wire removal in films like Crouching Tiger, Hidden Dragon (Ang Lee, 2000) and The Matrix (Lana Wachowski, Lilly Wachowski, 1999). But just as verticality has become one of the signifiers of movies that seek to defy gravity, VR uses head- and eye-tracking technology to introduce a kind of new horizontality to the cinema.

Experiences such as Square Enix’s Tales of the Wedding Rings VR (Sou Kaei, 2018) exploit this new horizontality by combining comic-book-style side-scrolling panels with the Bazinian long-take. With a first-person point-of-view, the experience uses eye-tracking to extend the horizontal framing as far right or left as the participant looks. The result is an extremely comfortable way of watching action unfold. The vast scale of an urban Japanese landscape is preserved even as it is rather economically and intimately rendered.

VR Is

In the last few years we’ve seen this horizontality in the progression of Emmanuel Lubezki’s visual style in films like Gravity (Alfonso Cuarón, 2013), Birdman (Alejandro González Iñárritu, 2014), The Revenant (Iñárritu, 2015), and the VR short Carne y arena (Iñárritu, 2017). In Gravity, the camera hovers in space, a blackness almost identical to the VR volume. Often our point-of-view is aligned closely, if not directly, with Sandra Bullock’s. At times her arms and legs float out in front of us, a recurring trope of first-person VR and game experiences. In Birdman, that first-person point-of-view is the defining formal strategy of the film, the mechanic by which the entire story and our prolonged experience of the theatre’s story world unfolds. Under the conceit of a single two-hour take, the streets and backstages of New York are presented as one uninterrupted real-time vista, hidden behind the technology of seamless compositing and computer-generated set extensions. And finally, in The Revenant’s opening shots, we experience an ultra-wide lens, a camera height set roughly to that of our protagonist, and a floating position that pans horizontally left and right as it moves forward, convincingly mimicking the human gaze. This is the VR aesthetic making its way back into cinema. Given this body of work, it was a simple formality that Lubezki shot an actual VR film the following year.

The overall effect is one that returns to the Bazinian ideal of the long-take’s uninterrupted pan across a landscape, conveying both realism and truth. With the long-take, the focus remains on time unfolding and elongating within the shot, rather than on the director’s manipulation of time and curation of meaning through linkages to other shots. The spherical volume pushes this idea further by replacing the length of a shot, and the necessity to cut, with the agency of the participant. The new horizontality allows the viewer to control panning, editing, and pacing via head tracking. What appears in Lubezki’s films as subtle allusion to pristine vistas stretching as far as the eye can see is made explicit in VR. VR is the ultimate Bazinian long-take.

The strata of virtual reality seen through deep time are neither neat nor conclusive. Excavating them is an attempt to untangle the complex and frenzied way in which technology develops in fits and starts and then snowballs. New is a word that appeals and repels. It’s attractive because it sends a signal through the noise vying for our attention. But it prevents us from seeing the interconnectedness of histories. The ever-expanding definition of cinema, and its relationship to other media, new and old, remains the most useful clue in the continuing excavation of virtual reality.

This article has been peer reviewed.

Endnotes:

  1. Phillip Rodrigo Tiongson. Active Stories: Infusing author’s intention with content to tell a computationally expressive story (Master’s thesis, Massachusetts Institute of Technology, 1998), p. 18. https://dspace.mit.edu/handle/1721.1/88314#files-area
  2. Frank Beacham. “Movies of the Future: Storytelling with Computers,” American Cinematographer (April 1995): p. 38.
  3. Ibid., 36.
  4. Siegfried Zielinski. Deep Time of the Media: Toward an Archaeology of Hearing and Seeing by Technical Means, trans. Gloria Custance (Cambridge, Massachusetts: MIT Press, 2006).
  5. Zielinski, 3.
  6. Gloriana Davenport. “Your Own Virtual Storyworld,” Scientific American Vol. 283, No. 5 (November 2000): p. 80.
  7. André Bazin and Dudley Andrew. Andre Bazin’s New Media. (Oakland, California: University of California Press, 2014).
  8. Brooks Barnes. “Battle for the Bigger Screen,” New York Times, 11 November 2014, https://www.nytimes.com/2014/04/12/business/media/battle-for-the-bigger-screen.html
  9. Bazin and Andrew, 230.
  10. Bazin and Andrew, 222.
  11. Bazin and Andrew, 224.
  12. Milk, 05:43.
  13. René Barjavel. “Total Cinema: Essay on the Future Forms of Cinema,” unpublished translation, Alfio Leotta, trans. (Victoria University of Wellington, 2016), p 18.
  14. Barjavel, 1.
  15. Christophe Wall-Romana. Cinepoetry: Imaginary Cinemas in French Poetry (New York: Fordham University Press, 2012), p. 210.
  16. Francesco Casetti. The Lumière Galaxy: Seven Key Words for the Cinema to Come (New York: Columbia University Press, 2015), p. 4.
  17. Wall-Romana, 211.
  18. Antoine De Baecque, Serge Toubiana, and Catherine Temerson. François Truffaut: A Biography (Berkeley: University of California Press, 1996), p. 91.
  19. Jean-Marc Lofficier and Randy Lofficier. French Science Fiction, Fantasy, Horror and Pulp Fiction: A Guide to Cinema, Television, Radio, Animation, Comic Books and Literature from the Middle Ages to the Present, (North Carolina: McFarland Publishing, 2000).
  20. Gene Youngblood. Expanded Cinema (New York: Dutton, 1970), p. 354.
  21. Monika Kin Gagnon and Janine Marchessault. Reimagining Cinema: Film at Expo 67 (Ontario, Canada: McGill-Queen’s University Press, 2014), p. 117.
  22. Patrick Healy. “Francis Thompson, 95, Whose Films Inspired Imax,” New York Times, 29 December 2003, https://www.nytimes.com/2003/12/29/arts/francis-thompson-95-whose-films-inspired-imax.html.
  23. James Zoltak. “Future of Large Format Films,” Amusement Business (1998).
  24. Bazin and Andrew, 217.
  25. Youngblood, 358.
  26. Ibid.
  27. Ibid.
  28. Ajey Lee. “Virtual Reality and its Military Utility,” Journal of Ambient Intelligence and Humanized Computing (28 May 2011): p. 6.
  29. University of Southern California. “John Milius to Serve as Creative Consultant With Institute for Creative Technologies,” 12 June 2000, https://search.proquest.com/docview/447510098?accountid=14782.
  30. Norbert Wiener. Cybernetics: Or, Control and Communication in the Animal and the Machine (Cambridge, Massachusetts: MIT Press, 1961), p. 144.
  31. Gloria Sutton. The Experience Machine: Stan VanderBeek’s Movie-Drome and Expanded Cinema (Cambridge, Massachusetts: The MIT Press, 2015), p. 168.
  32. Sutton, 168.
  33. Gene Youngblood and George Lucas. “George Lucas: Maker of Films.” Filmed Summer 1971. Los Angeles, United States, KCET, video, 57:40, https://vimeo.com/88892421.
  34. Youngblood, p. 194.
  35. Sutton, 164.
  36. Julie A. Turnock. Plastic Reality: Special Effects, Technology, and the Emergence of 1970s Blockbuster Aesthetics (New York: Columbia University Press, 2015).
  37. Turnock, 3.
  38. Angela Watercutter. “Dear Angelica is the Film — and Filmmaking Tool — VR Needs,” Wired, 20 January 2017, https://www.wired.com/2017/01/oculus-dear-angelica-premiere/.
  39. Cinefx. “From VFX to VR: How to Transition.” Filmed April 2017. Los Angeles, United States, VRLA video, 52:02, https://www.youtube.com/watch?v=gjHYlKLXOPU.
  40. Beacham, 38.
  41. Ian Bogost. How to Talk About Videogames (Minneapolis: University of Minnesota Press, 2015), p. 98.
  42. Bogost, 101.
  43. Benjamin B. “Humanity’s Last Hope,” American Cinematographer (December 2006): p. 62.
  44. Haidee Wasson. “The Networked Screen: Moving Images, Materiality, and the Aesthetics of Size” in Fluid Screens, Expanded Cinema, Janine Marchessault and Susan Lord, eds. (Toronto: University of Toronto Press, 2007), p. 91.
  45. Casetti, 118.
  46. Casetti, 102.
  47. Kristen Whissel. Spectacular Digital Effects: CGI and Contemporary Cinema (Durham: Duke University Press, 2014), p. 13.

About The Author

Raqi Syed is a Senior Lecturer in the School of Design at Victoria University of Wellington in New Zealand. She has worked as a visual effects artist on films such as Tangled, District 9, Avatar, Dawn of the Planet of the Apes, and The Hobbit trilogy. In 2017, the Los Angeles Times named Raqi to its list of 100 industry professionals who can help fix Hollywood’s diversity problem. Her writing focuses on film and gender, new media technologies, and the history and business of visual effects. Her essays have appeared in TechCrunch, Vice, Salon, Quartz, and the Los Angeles Review of Books. Raqi is a 2018 Sundance New Frontier Story Lab Fellow and a Turner Fellow. She holds an MFA from the USC School of Cinematic Arts and an MA in Creative Writing from Victoria University of Wellington.