Anatomy of a Level

Over the past few months we spent quite some time iterating on levels in preparation for Early Access. Most of them now feature improved graphics, a dedicated music track and countless gameplay improvements. But how does a level in Guntastic come to be?

The workflow we currently use is the result of our past work experience with 3D shooters and experimentation done in the early production phases of Guntastic. If you’re only interested in the actual workflow, feel free to jump to the final section of the post. Otherwise, read on.

In The Beginning…

When we first started working on Guntastic we didn’t know much about what the game was going to be, apart from the fact that it would be an arena shooter at its core. We decided to go this way in part because we love shooters and we believe that going indie should be about building the games we love (take this you cold, number-crunching business people!), but also because most of our past work experience involved building shooter games. With so many things that can go wrong developing a game, we felt that a shooter was our safest bet (reality check!).

A screenshot taken from the FPS arena shooter Toxikk (PC 2016).
Now this doesn't look so different from pixel art, does it?

One of the fields in which we benefited the most from our background was level design, as most of the guiding design principles for 3D levels can be applied to their two-dimensional counterparts. In particular:

  1. Gameplay and visuals should work in tandem. Always design for gameplay first, but having an idea of how the level should look in the end makes the final result feel more natural.
  2. Using a modular approach based on grids and reusable assets makes it easy to think about, prototype and build levels. It also promotes visual clarity, helping players navigate the world intuitively.
  3. Each level should look and feel unique, both from a visual and gameplay point of view. Players are more likely to remember how to play a memorable level rather than an uninteresting one. (Plus, memorable usually means more fun at the design stage!)

Prototypes! Prototypes! Prototypes!

Guntastic, however, was still our first 2D game – and a very particular one. A whole level needs to fit on a single screen and have enough room to accommodate four players at the same time, which puts severe constraints on the level design. We also had character movement to sort out: how high could a player jump? Were there going to be special movements such as dodges or wall jumps in the game at all?

Meme of Steve Ballmer yelling ”Developers! Developers! Developers!“.

Since the best way to figure things out is through experimentation, we started creating rough level shells to play with. The following is the prototyping kit I originally put together in Photoshop back in 2016, which we still use today to block out level shells. It includes tiles of various sizes and shapes that can be combined in a wide variety of ways. The idea of having half-tiles came a bit later, and proved crucial to deal with the very limited amount of space available on screen.

Tiles of various sizes and shapes used to prototype levels.
Programmer art, baby.

We spent several months iterating on a handful of levels as we tweaked the movement mechanics to our liking (more about this in an upcoming post). Dynamic environmental elements such as moving platforms, lifts, jump pads, areas with lower gravity, teleporters, destroyable elements, death pits, traps and more were gradually introduced into the mix. Working almost exclusively in Blueprints proved invaluable at this stage, enabling very fast iteration times – although most things were later moved to C++ for additional flexibility.

A level prototype, filled with all sorts of dynamic gameplay elements.
Sometimes we got a little carried away with dynamic elements.

During this time we also started investigating the production of tile sets and environment pixel art. The first environment we envisioned was an underground sewer. While definitely not the fanciest way to start, it provided us with a good amount of artistic challenges in a small and controllable scope. It gave us a chance to practice with different materials, rounded shapes, backgrounds, lights, decorations and more, and proved a great learning experience overall.

A draft of the sewer theme.
One of the first in-game tests of the sewer theme.

Of all the levels built during this stage, only two survived in the long run – with the rest being discarded because they were overly complex, difficult to navigate or simply boring to play.

The current state of the ”Sewer“ level.
Current in-game version of the sewer level.

Summing It Up

As we built new prototypes we found ourselves relying more and more on dynamic elements. Doors, jump pads, lifts and destroyable elements not only helped players move faster around the level and created unique gameplay opportunities by encouraging players to be creative with their surroundings, but also made levels far more unique and memorable. Some elements simply didn’t work and were cut: moving platforms, for example, required way too much room to work, leaving no space for anything else. We also realized that traps and death pits that killed players on touch (which counted as suicides, resulting in a one-point penalty) were frustrating, and often left players wondering how and why they had died and lost a point.

The turning point with environmental traps was the lift in what later became the Sawmill level. The lift needs to be activated by a player and has buzzsaws underneath that kill opponents on touch, rewarding the player who activated the lift with a point when that happens. This was a big paradigm shift. Players were now in charge of activating the traps found in the level and could actively exploit them to their advantage without risking penalties. This gradually pushed us into implementing dynamic elements that were more audacious, to the point that new levels could be built around a main dynamic element which also set the overall visual theme of the level.

In-game screenshot of the ”Subway“ level.
In-game screenshot of the ”Subway“ level. The whole level revolves around a dynamic element, the train, that players can activate using the switch at the top.

The Actual Process

This brought us to the process we currently use to create levels. We first envision a strong dynamic environmental element and then block out the level around it. During this stage we search the internet for reference images and prepare mood boards to help later on in the process. The first sketches of both the layout and visuals are carried out using pen and paper. Things don’t need to look pretty at this stage (and in fact they don’t).

A pen and paper sketch of the ”Subway“ level.
Pen and paper sketches of the ”Subway“ level.
A mood board with screenshots to help setting the visual theme for the ”Sawmill“ level.
Mood board for the ”Sawmill“ level.

Once the layout looks promising, we rebuild it inside the editor using the prototyping tileset. Dynamic elements are implemented at the same time. We test and iterate on the level until we’re satisfied with how it plays, changing the layout and the way dynamic elements work as we see fit. If we don’t like where things are going, we simply throw everything away and start over.

Gameplay pass of the ”Sawmill“ level.
Gameplay pass for the ”Sawmill“ level.

The next step is what we call the mood pass. I take a screenshot of the final level layout (this is one of the few cases where having levels that fit on a single screen actually helps 😄) and do a paint over in either Photoshop or Pyxel Edit, blocking out the main foreground and background elements. Things don’t need to look pretty yet, but it’s important to discern what’s what. Special care is put into making sure everything is tidily aligned to the grid, and a base color palette is defined at this stage to set the overall mood.

Mood pass of the ”Sawmill“ level.
Mood pass for the ”Sawmill“ level.

The level then passes on to Simone, who uses the mood boards and additional references to perform the main visual pass.

Visual pass of the ”Sawmill“ level.
Visual pass for the ”Sawmill“ level.

Once the visual pass is complete, it’s my turn to break everything up into an easy-to-use tileset. This is important to rationalize the graphics, and helps in case we need to make small adjustments to the layout later on (which we try to avoid, but things happen!). The level is then reassembled in the editor, where we add all the final bells and whistles such as lights, eye candy and visual effects.

Final pass of the ”Sawmill“ level.
Final pass (in-game) for the ”Sawmill“ level.

The whole process usually takes a month from start to finish. Naturally, it isn’t always possible to follow this workflow to the letter, but the price paid in terms of additional strain and time needed to complete a level in those cases can be pretty high. Game development is a messy business! 😄

3 Tricks to Improve Pixel Art Rendering in UE4

The pixelated look of the games of the past was largely the result of the severe constraints that graphics hardware imposed on developers back then. Obtaining the same look in modern game engines, however, can be quite difficult and requires some setup.

In a previous post we’ve seen how the game camera can be set up in Unreal Engine 4 to make pixel art look crisp on most screens. Today, I’d like to go over a series of tricks that we use inside the engine to enforce a more authentic pixelated look in Guntastic.

1. Everything Should Be Aligned to the Pixel Grid

On a screen there is no such thing as a half-pixel, but in a game world it’s common for sprites to end up at positions that aren’t aligned to the pixel grid. This introduces noticeable inconsistencies in the way sprites are aligned.

A comparison of two sprites which shows the importance of conforming to a pixel grid

The simplest solution to prevent this from happening is to snap the sprites to the pixel art grid just before rendering. Most implementations I could find online perform the snapping directly in the world, by conforming object locations to the grid just before rendering. The original location is then restored on the objects at the beginning of the next frame.

Unfortunately, this is cumbersome to do in Unreal Engine 4 (the resources I found were for Unity), and might have unwanted side effects (like triggering overlaps, for example). As such, in Guntastic we ended up using a simpler approach: snapping is applied directly in the vertex shader by offsetting the sprite geometry vertices.

Simple shader that snaps geometry vertices to the pixel art grid
A simple material function that can be used to snap vertices to the pixel grid.
The result of the shader illustrated above
Pixel snapping at work.
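The material function shown above boils down to a few lines of math. Below is a minimal standalone C++ sketch of the same logic (function and parameter names are illustrative, not taken from the actual Guntastic shader):

```cpp
#include <cassert>
#include <cmath>

// Snap a world-space coordinate to the nearest position that maps to a
// whole pixel, given the pixel density of the art.
float SnapToPixelGrid(float WorldCoord, float PixelsPerUnit)
{
    const float UnitsPerPixel = 1.0f / PixelsPerUnit;
    // Round to the nearest multiple of UnitsPerPixel.
    return std::round(WorldCoord / UnitsPerPixel) * UnitsPerPixel;
}

// A vertex shader can only displace vertices relative to their current
// position, so what it actually outputs is an offset.
float PixelSnapOffset(float WorldCoord, float PixelsPerUnit)
{
    return SnapToPixelGrid(WorldCoord, PixelsPerUnit) - WorldCoord;
}
```

With a density of 0.5 pixels per unit, for example, a vertex at 3.2 units would be snapped to 4.0 units (an offset of 0.8), the nearest position that corresponds to a whole pixel.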

This works on the assumption that the vertices of the rendering geometry of the sprites are aligned to the pixels of the art itself, so special care should be taken to generate valid rendering geometry.

Examples of render geometries, showing that only sprites with render geometry vertices aligned to the pixel grid will work with the snapping shader
Sprite render geometry vertices should be aligned to the pixel grid for the snapping to work correctly.

Prevent Jittering When the Camera Moves

It’s also important for the camera to be aligned to the pixel grid. Otherwise, when the camera moves, we’ll probably end up looking at the sprites from locations that are not on the grid, which will make the sprites jitter out from their expected positions.

In our implementation, we apply the snapping at the end of the camera update logic, after everything else (including special effects like screen shakes, etc.) has been calculated. This is what the actual code looks like:

void AFolliesPlayerCameraManager::DoUpdateCamera(float DeltaTime)
{
    // Update the camera
    Super::DoUpdateCamera(DeltaTime);

    // Snap the final camera location to the pixel grid
    {
        const float PixelsPerUnits = 0.24f;
        const float UnitsPerPixel = 1.0f / PixelsPerUnits;

        FMinimalViewInfo CameraCachePOV = GetCameraCachePOV();
        CameraCachePOV.Location.X = FMath::GridSnap(CameraCachePOV.Location.X, UnitsPerPixel);
        CameraCachePOV.Location.Z = FMath::GridSnap(CameraCachePOV.Location.Z, UnitsPerPixel);
        FillCameraCache(CameraCachePOV);
    }
}

2. Sprites Shouldn’t Rotate

Pixels in a grid can’t rotate, and neither should your sprites. Furthermore, rotating textures that use nearest-neighbor filtering introduces evident aliasing between pixels.

A rotated sprite, showing some ugly aliasing between pixels
Rotated sprites look bad.

The simplest solution here is to avoid physically rotating the objects in the world, and to use hand-drawn rotated versions of a sprite where rotation is actually needed.

While developing Guntastic, however, we encountered some edge cases that still required handling in-world rotations. One example is guided missiles, which need to track a target by pointing towards it: here the number of rotated sprites to draw was simply too large for a team with a single artist.

To handle these (sporadic) cases, we fell back to an antialiasing technique, created by Cláudio Fernandes, called Manual Texture Filtering. This technique works by:

[…] performing linear interpolation between texels on the edges of each texel in the fragment shader, but sampling the nearest texel everywhere else.

In other terms, it smooths out jaggies between pixels while keeping the overall result crisp. The only caveat when working with this technique is that linear filtering should be applied to the sprite texture (instead of nearest-neighbor filtering). Here’s how the shader looks when implemented in a UE4 material function:

The "Manual Texture Filtering" shader implemented in UE4
The "Manual Texture Filtering for Pixelated Games" shader implemented in UE4.
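Since the screenshot doesn’t lend itself to copy-paste, here’s a rough CPU-side C++ sketch of the core idea, for a single texture axis (in the actual shader the screen-pixel width comes from fwidth(); here it’s passed in explicitly, and all names are illustrative):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Given a coordinate in texel space (texel i spans [i, i + 1]) and the
// width of a screen pixel expressed in texels, return the coordinate to
// feed to a *bilinear* sampler: texel centers almost everywhere, with a
// thin band of linear interpolation around each texel seam.
float FilterTexelCoord(float TexelCoord, float ScreenPixelWidth)
{
    const float Seam = std::floor(TexelCoord + 0.5f); // nearest texel edge
    const float Offset =
        std::clamp((TexelCoord - Seam) / ScreenPixelWidth, -0.5f, 0.5f);
    // Away from seams the clamp saturates to +/-0.5, i.e. a texel center,
    // which makes bilinear filtering look like nearest-neighbor.
    return Seam + Offset;
}
```

With a screen pixel a tenth of a texel wide, a coordinate sitting on a texel center (e.g. 3.5) is returned unchanged, while coordinates within half a screen pixel of a seam get smoothly interpolated across it.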

This almost completely eliminated the problem:

Manual texture filtering at work inside UE4
Manual texture filtering at work.

3. Maintain Pixel Density

Finally, it’s important for the sprites in the game to have the same pixel density: this means they should be created using the same reference grid, and never scaled up.

Two sprites of the same size, but with different pixel densities

Luckily, enforcing this inside the engine is straightforward: it only takes some discipline to never apply scale to the sprites.

Taking It Further

The final step to ensure a true pixel-perfect look would be to render the game at the original art resolution, and upscale the rendered frame to match the actual screen resolution. This would automatically eliminate any possibility of inconsistent pixel sizes, because it’s impossible to draw anything smaller than a pixel.

Unfortunately, we hit a couple of major roadblocks when trying to implement it, and decided to abandon it (at least for now):

  • Rendering at a very low resolution would make the handful of post processing effects we use in the game (such as bloom) look distorted and aliased once upscaled.
  • Implementing this solution in Unreal Engine 4 requires changes at the engine level. This is something we’re trying to avoid as it can soon become a nightmare to manage for a very small team such as ours, with no full-time programmer/engineer.

If you’re interested in seeing this technique in action, I highly encourage you to take a look at the implementation available on Alex Ocias’ blog for Unity.

Anatomy of a Character

A couple of weeks ago we introduced our fifth character in Guntastic, so I thought it might be a good time to talk about our character creation workflow, from the concept stage to the actual implementation in Unreal Engine 4.

In The Beginning…

The process begins with our artist Simone doing some quick sketches of different characters, from which we usually pick the one we like the most.

Quick sketches of various characters: a pirate, a robot, Rick from Rick&Morty and the Demonic Koala
Guess which concept would've won us a copyright infringement lawsuit!

He then iterates on the design until we have something we like. Otherwise we drop a tear, throw everything away and start over.

Successive iterations on the Demonic Koala

Once the character design is final, he then proceeds to create all the required animations. It’s important for the design to be as refined as possible at this stage: altering it afterwards requires a massive amount of time, as it involves editing all the animation frames one by one (a character currently has 162 unique animation frames – and counting!).

Although I tried to get him to switch to a dedicated pixel-art editor (such as Pro Motion NG or Pyxel Edit), he stubbornly continued to use Photoshop for all the editing. This required some technical setup to allow for quick iteration times: each animation frame lives on a separate layer, which is linked to a keyframe in the timeline for easy previewing. Each PSD usually contains multiple characters, so that it’s easy to use previously created characters as blueprints for a new one.

A screenshot of a work file in Photoshop
That's a lot of layer groups!

From Photoshop To UE4

Exporting all the frames by hand would be madness, so we have a custom Photoshop script that automates the export process and generates a single PNG file for each animation frame. Automating error-prone and repetitive tasks is invaluable for the overall workflow, especially for small teams with no fancy QA department. 😃 Our export script also helps us maintain a consistent naming scheme, which saves us from going insane tracking errors, duplicated sprites, etc.

Automation FTW.

We then use Texture Packer to pack all the frames into a single texture atlas. If you do any 2D game development and don’t know about Texture Packer, you really need to check it out: it’s a nifty utility that packs multiple sprites into a single file (a.k.a. an atlas, or spritesheet). This not only helps lower texture memory usage, but also keeps the assets tidily organized – which soon becomes a necessity when dealing with large amounts of sprites. Texture Packer also works from the command line, so this step can be automated as well. At the moment we have a single atlas per character.

A screenshot of TexturePacker, with the Koala spritesheet open
Now that's a good looking spritesheet.

The next step is (re)importing the atlas into the engine. We started out using Paper2D (which, for the uninitiated, is the 2D framework available by default in Unreal Engine 4), but slowly transitioned to a custom solution over time. This gave us far more control over both the import process and the rendering of sprites and flipbooks (right now we use Paper2D only for editing and rendering tilemaps).

Once the atlas is imported we end up with multiple folders containing the sprites and flipbooks ready to use. To keep the amount of manual editing to a minimum, the system is also fed with spreadsheets containing useful defaults that are automatically applied to the sprites and flipbooks (such as frame rate, etc.). Additional animation assets are also generated in this step, which are used by our animation system to actually play the animations – more on this later.

The spreadsheet containing FPS for animations
The spreadsheet is authored in LibreOffice Calc and then imported into UE4 to be used by our import tool.
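As a sketch of the idea, the defaults lookup might look something like this (the structure, names and values here are purely hypothetical, not the actual import tool):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical per-animation defaults, as they could come out of the
// imported spreadsheet.
struct FFlipbookDefaults
{
    float FramesPerSecond;
    bool  bLooping;
};

// Look up the defaults for an animation by name, falling back to a
// sensible baseline when the spreadsheet has no row for it.
FFlipbookDefaults GetDefaultsFor(const std::string& AnimationName)
{
    static const std::map<std::string, FFlipbookDefaults> Defaults = {
        { "Walk", { 12.0f, true  } },  // made-up values for illustration
        { "Jump", {  6.0f, false } },
    };
    const auto It = Defaults.find(AnimationName);
    return (It != Defaults.end()) ? It->second
                                  : FFlipbookDefaults{ 12.0f, true };
}
```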

Character Setup

Before digging into the actual animation system, let’s briefly talk about how our characters are set up. In Guntastic a player can run, jump, shoot, fall, etc. – all at the same time. What’s more, a player can also aim in different directions while performing any of the aforementioned movements. We also have different weapon types, such as pistols and rifles, that require different character stances. Moreover, some animations, such as jumping, landing and firing, are one-shots rather than loops.

While prototyping the game, we soon found out that having full-body animations for every permutation of the different actions a player can perform would make the animation count skyrocket, far beyond what two developers could handle. After some tinkering we ended up splitting the character into two different sprites: one for the lower body (legs, feet) and one for the upper body (torso, head, arms). This is similar to what’s usually done with 3D characters, and allowed us to seamlessly play multiple animations on different parts of the body at the same time: for example, we can have a walk animation playing on the legs while a firing animation plays on the torso.

A screenshot of our character Blueprint in Unreal Engine 4, showing the various sprites used to compose a character
The component tree of our characters in Unreal Engine 4.
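A minimal C++ sketch of how such a split can drive animation selection (all enum values and flipbook names here are hypothetical):

```cpp
#include <cassert>
#include <string>

// The lower body only cares about the movement state...
enum class EMovementState { Idle, Walking, Jumping };
// ...while the upper body combines the weapon stance with the aim direction.
enum class EWeaponStance  { Pistol, Rifle };
enum class EAimDirection  { Forward, Up, Down };

std::string SelectLowerBodyFlipbook(EMovementState Movement)
{
    switch (Movement)
    {
        case EMovementState::Walking: return "Legs_Walk";
        case EMovementState::Jumping: return "Legs_Jump";
        default:                      return "Legs_Idle";
    }
}

std::string SelectUpperBodyFlipbook(EWeaponStance Stance, EAimDirection Aim)
{
    const std::string Base =
        (Stance == EWeaponStance::Rifle) ? "Torso_Rifle" : "Torso_Pistol";
    switch (Aim)
    {
        case EAimDirection::Up:   return Base + "_AimUp";
        case EAimDirection::Down: return Base + "_AimDown";
        default:                  return Base + "_AimForward";
    }
}
```

A walking character firing a rifle upwards would thus play "Legs_Walk" on the lower sprite and "Torso_Rifle_AimUp" on the upper one, with neither animation needing to know about the other.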

The Animation System

Our animation system is implemented in C++ in our custom Character class and works by using the descriptor assets that were generated during the import process. These hold references to different flipbooks, each representing a different movement state (i.e. idle, walking, etc.), a different weapon stance (i.e. pistol, rifle, etc.) or a different aim offset (i.e. forward, up, down). The system then sorts out which flipbook to use on the lower and upper body sprites based on the current character state.

A screenshot of the various descriptor assets in Unreal Engine 4
A descriptor asset is generated for every possible physics state of the character.

We also apply some modifiers to the animations: for example, the walk cycle is played faster or slower based on the current movement speed of the character. The whole system was kept intentionally simple (~150 lines of code) by enforcing constraints on the animations, which also makes authoring them simpler. For example, all animations run at frame rates that are whole divisors of 24 FPS (i.e. 12, 6, etc.), which makes them easy to sync.
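Both constraints are simple enough to sketch in a few lines of standalone C++ (illustrative only, not the actual Guntastic code):

```cpp
#include <cassert>

// The walk cycle play rate scales linearly with movement speed: a
// character moving at twice the reference speed plays the animation at
// twice the authored frame rate.
float WalkPlayRate(float CurrentSpeed, float ReferenceSpeed)
{
    return (ReferenceSpeed > 0.0f) ? CurrentSpeed / ReferenceSpeed : 1.0f;
}

// Restricting authored frame rates to whole divisors of 24 FPS (24, 12,
// 8, 6, ...) guarantees that layered flipbooks periodically re-align on
// a common beat, keeping the upper and lower body in sync.
bool IsValidFrameRate(int FrameRate)
{
    return FrameRate > 0 && 24 % FrameRate == 0;
}
```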

Wish List

While the whole system works quite well, there’s still plenty of room for improvement (as is always the case in game development!). Something I would have liked to have the time to work on is a UE4 editor extension that imports the animation frames directly from the PSD files. The extension would then take care of packing the frames into texture atlases automatically, thus eliminating the need for external utilities and scripts.

Something I also would have loved to implement is a visual, state-machine-based animation system for flipbooks – similar to what Unreal Engine 4 provides out of the box for skeletal meshes. While presumably less performant than the current hard-coded C++ solution (but then again, performance is rarely an issue in modern 2D games!), it would have helped tremendously during the prototyping phase. Maybe for the next game! 😉