

Video games as a storytelling medium have often taken cues from movies, and the clearest example of this is the use of cutscenes. Pac-Man is frequently cited as the first game to use cutscenes rather than moving directly from level to level without interruption. After the player cleared certain levels, the game would play a short vignette showing Pac-Man and the ghosts chasing each other.

While these little vignettes are obviously a far cry from how modern cutscenes are used in games, the core concept is the same.

The game takes control of the character away from the player for a sequence that introduces some kind of new information. The duration of these sequences varies greatly - Konami's Metal Gear Solid series is notorious for its long cutscenes, with Metal Gear Solid 4 clocking in at more than eight hours of them - and they can serve a variety of purposes.

They are used to introduce characters, develop established ones, and provide backstory, atmosphere, dialogue and more.

Despite their ubiquity in modern big-budget games, cutscenes are not necessarily the best way to tell a story in a game. Many highly acclaimed games have used few cutscenes, preferring instead to leave the player in control of the character throughout the game.

Half-Life 2 by Valve Software is currently the highest-rated PC game on the Metacritic review aggregation site, and it contains only a single cutscene at either end of the game. Control is rarely taken from the player for more than a few moments - with the exception of an on-rails sequence towards the end - and much of the background information that would be delivered through a cutscene elsewhere is instead conveyed through scripted events or details in the environment.

But are Half-Life 2's unskippable, scripted sequences really so different from cutscenes? After all, the player often can't continue until other characters have completed their assigned actions and dialogue - so why not just use traditional cutscenes and be done with it? To find truly unique experiences, we must first look at what makes the video game unique as a storytelling medium. Unlike movies, where the viewer has no control over the action, or traditional tabletop games, where the player's actions yield very little in the way of visual results, video games offer a unique opportunity to merge interactivity and storytelling. Games like Gone Home, Dear Esther and others in the so-called "walking simulator" genre have been lauded as fantastic examples of the kind of stories that can be unique to games.

But for some players, these games pose a completely different problem - although they rarely take control away from the player, they also offer very little in the way of actual play. Dear Esther gives the player essentially no way to affect the world around them - the only available action is to walk along a predetermined path to the end of the game. There is no way to "lose", no interaction with the environment, just what amounts to a scenic tour with a story laid over it. So despite the game's lack of cutscenes, the almost complete absence of player control and interaction means there is little to distinguish it from one admittedly quite elaborate cutscene.

As video games currently exist, there seems to be a kind of dichotomy between traditional storytelling and gameplay. For a game to tell a story to a player, there must be some degree of limitation on what the player can do - either temporarily, in the form of a cutscene or scripted sequence, or by restricting the player's actions over the course of play. Perhaps future games will find ways to fully integrate player interaction with compelling stories. But that cannot be achieved by taking away the player's control and forcing them to watch a short film instead of letting them play the game.