Using Interactive Surfaces, we have an additional tool to let the player maintain multiple frames of reference while streamlining the transitions between them. This is important for any kind of game-within-a-game or subproblem-nested-in-problem conceptual hierarchy in our design.
Many games are designed by implementing very constrained and narrow core gameplay mechanics that are nested inside others, in a hierarchy defined by the granularity of user input and the latency of simulation response. However, stacking mini-games can add too many input paths to the user interface. It is also not always easy for designer and player to avoid tedium with respect to e.g. resource-gathering mechanics.
Like Terminal DOOM, games often show off their support for Interactive Surfaces by evoking classic arcade games inside the game. However, games like Lunar Lander or Missile Command are also valid mini-games for contemporary game worlds - the difference being that, unlike Terminal DOOM and its predecessors, they affect the same simulation the player avatar exists in - possibly the player avatar itself. Imagine controlling a "Half Life 1" gun turret through an iconic representation resembling the motion detector of the "Aliens Vs. Predator" HUD, or steering an escape pod using bang-bang controls with limited fuel. Imagine the nose-camera view of a TOW missile guided via an in-game display. Embedding mini-games through Interactive Surfaces lets the designer spatialize and restrict their use. The presence or absence of the related terminals communicates clearly to the player which options are currently available, and instead of growing the user interface, it is overloaded with mutually exclusive modes.
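One way to picture this overloading is a stack of mutually exclusive control modes: the same keys route to whichever mode sits on top, so activating a terminal never adds global bindings. A minimal sketch (the class and mode names are illustrative, not from any shipped engine):

```python
# Hypothetical sketch: routing all player input through a stack of
# mutually exclusive control modes instead of adding new key bindings.
class ControlMode:
    def __init__(self, name):
        self.name = name

    def handle(self, key):
        # A real mode would translate the key into avatar/turret actions.
        return f"{self.name} handles {key}"

class InputRouter:
    def __init__(self, default):
        self.stack = [default]           # bottom of the stack: avatar control

    def push(self, mode):                # player activates a terminal
        self.stack.append(mode)

    def pop(self):                       # player leaves the terminal
        if len(self.stack) > 1:
            self.stack.pop()

    def dispatch(self, key):
        # Only the topmost mode receives input: modes are mutually exclusive.
        return self.stack[-1].handle(key)

router = InputRouter(ControlMode("avatar"))
router.dispatch("W")                     # moves the avatar
router.push(ControlMode("turret"))       # stepping up to a turret terminal
router.dispatch("W")                     # the same key now aims the turret
router.pop()                             # back to avatar control
```

The stack also captures the nesting of frames of reference: popping a mode always returns the player to the frame she came from.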
The mechanics of recovery from "real" death do not fit many, if not most, game fictions, although they might work well for cyborgs, vampires, zombies or ghosts as primary avatars. Recovery of a proxy, however, is a possibility: a gun platform might start up again on secondary power, a loss of signal might be resolved by neutralizing the jamming device. Furthermore, many game puzzles are problems stacked inside problems, requiring recursive solutions to subproblems. The concept of recovering or repairing proxies by using other proxies turns this idea from a static set-piece into a tactical choice in a dynamic environment.
Half-Life 1 contains textbook examples of multi-stage puzzles: take, for example, the Blast Pit level, in which the player had to start up a rocket engine to destroy a tentacle boss monster. Several switches had to be used by the player in more or less the right order. The game's fiction offered some guidance in understanding those puzzles - thinking about the machinery, it was possible to break down the problem (start the rocket engine to kill the tentacle) into sub-problems (the fuel pumps have to be started, the pumps need electricity etc.). Of course, the ultimate purpose of these puzzles was to place single-purpose buttons in different locations of the level to motivate (i.e. force) player movement.
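Such a puzzle is, structurally, a dependency graph solved by recursion: each goal can only be completed once its sub-problems are. A minimal sketch, with node names invented for illustration rather than taken from the game's actual entity data:

```python
# Hypothetical sketch: a Blast Pit style puzzle as a dependency graph.
# Each entry maps a goal to the sub-problems that must be solved first.
puzzle = {
    "kill_tentacle": ["rocket_engine"],
    "rocket_engine": ["fuel", "oxygen"],
    "fuel":          ["power"],
    "oxygen":        ["power"],
    "power":         [],                 # a switch the player can flip directly
}

def solve(goal, solved=None):
    """Return the order in which sub-problems must be completed."""
    if solved is None:
        solved = []
    for dep in puzzle[goal]:
        if dep not in solved:            # recurse into unsolved sub-problems
            solve(dep, solved)
    if goal not in solved:
        solved.append(goal)
    return solved

print(solve("kill_tentacle"))
# power comes first, then fuel and oxygen, then the engine, then the goal
```

This is just a topological sort by depth-first traversal; the point is that the player performs the same recursion mentally when the fiction makes the machinery legible.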
Interactive Surfaces (especially if they can be re-purposed by the player) let such multi-stage puzzles step into the foreground. In theory, a game could answer user queries with respect to the puzzle, and offer background information and help on demand, through terminals and info screens. It also becomes possible to solve the same sub-problem from multiple locations, if the game's fiction supports accessing a computer network from several terminals. Ultimately, a well-disguised interface to the game logic itself allows the player to obtain and exercise control over the game world independent of her location - it is now possible to open a door or turn on a light without having to walk over to it - to create and execute scripts.
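Under the hood, such a terminal is little more than a command interpreter bound to world state. A minimal sketch, assuming a flat entity table and two invented commands; a real game would validate permissions and route through its event system:

```python
# Hypothetical sketch: a terminal exposing game-logic commands so the
# player can act on distant entities and batch commands into a script.
world = {"door_12": "closed", "light_3": "off"}

def open_entity(entity):
    world[entity] = "open"

def toggle_light(entity):
    world[entity] = "on" if world[entity] == "off" else "off"

COMMANDS = {"open": open_entity, "toggle": toggle_light}

def run_script(script):
    """Execute a list of (command, entity) pairs queued at a terminal."""
    for cmd, entity in script:
        COMMANDS[cmd](entity)

# The player, at any networked terminal, opens a door and turns on a
# light without walking to either.
run_script([("open", "door_12"), ("toggle", "light_3")])
```

Because the script is just data, it can be composed at one terminal and triggered later from another - exactly the location-independence the fiction of a computer network suggests.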
The defining characteristic of outdoor vs. indoor game spaces is the amount of GPU/CPU resources expended per unit of surface area. DOOM3 is a perfect example - dynamic lighting that takes all of the GPU and part of the CPU resources to create an astonishing amount of consistent detail within the tight, narrow confines of a corridor crawler. The problem with this type of space is that the visual horizon is never far out - and consequently, neither is the player's planning horizon. Strategic depth depends on foresight, and corridor crawlers lack range of view - level design devices like windows in horseshoe spaces notwithstanding.
What Interactive Surfaces do is give the designer and player new windows to tunnel through occlusion and expand the player's horizon. Imagine the player picking security cameras from an overview map to see snapshots of the space ahead, or remote-controlling a flying camera. Taking control of an armed sentry bot, the player can opt for reconnaissance in force, or try to shape the battlefield by bait and ambush before entering it herself - actively exploring the space ahead. This way, the look beyond the horizon is no longer passive - it has become an integral part of the game experience.
In other words, Interactive Surfaces separate proximity in Euclidean space from proximity in play space. Many game designs have required and used this separation, implementing it as teleportation. Quake3's "see-through" portals implement one-way non-Euclidean "shortcuts": the player can look ahead into the destination space, and there is no "cut to view" discontinuity. The resurrected Prey, if it implements a fully-fledged hierarchical portal-cell system, could feature bidirectional shortcuts that are no longer recognizable as teleport portals: there is no longer a distinction between Euclidean and non-Euclidean paths through game space. The important difference between Interactive Surfaces and transparent portals, however, is that the former explicitly support stacking frames of reference, while the latter do not: cognitive mapping of the game space is intuitive and easy with Interactive Surfaces, while it might prove a challenge in its own right for a game that uses Prey-style transparent portals excessively to collapse game space.