Welcome to The Queue — your daily distraction of curated video content sourced from across the web. Today, we’re watching a video on how video game cameras are affecting cinematography.
The relationship between movies and video games has, historically, been pretty one-sided. Cinematic adaptations of video games have a deservedly bad reputation, with a scant handful of exceptions (2002’s Resident Evil boasts a 35% on Rotten Tomatoes and a 100% in my heart). Meanwhile, video games have been on a decades-long quest to become, for lack of a better term, more cinematic.
Like movies, video games have cameras. And while they operate under wildly different rules (given that they are not physical objects and are controlled by unpredictable camera operators: players), their designers have steadily worked to emulate real-world cinematography — everything from lens flares to shaky-cam to deep focus. But, as the video essay below explains, after decades of video games emulating films, it's now filmmakers who are beginning to take a page from the visual styles and techniques of video games.
In the pursuit of physics-defying one-takes, cinematographers are beginning to employ some of the tried-and-tested tricks that video game camera designers use to disguise loading screens. Over-the-shoulder shots are becoming more common, as seen in films like 1917. Hell, the innovative curved wall of LED screens that provides real-time rendered backgrounds on big-name projects like The Mandalorian is powered by Unreal Engine, a video game engine. So, while the inspiration still flows mostly in one direction, the balance is beginning to shift.
Watch “Game cameras are changing Hollywood as we know it”: