3 Tricks to Improve Pixel Art Rendering in UE4

The pixelated look of the games of the past was largely the result of the severe constraints that graphics hardware imposed on developers back then. Obtaining the same look in modern game engines, however, can be quite difficult and requires some setup.

In a previous post we’ve seen how the game camera can be set up in Unreal Engine 4 to make pixel art look crisp on most screens. Today, I’d like to go over a series of tricks we use inside the engine to enforce a more authentic pixelated look in Guntastic.

1. Everything Should Be Aligned to the Pixel Grid

On a screen there is no such thing as a half-pixel, but in a game world it’s common for sprites to end up at positions that aren’t aligned to the pixel grid. This introduces noticeable inconsistencies in the way sprites line up with each other.

A comparison of two sprites which shows the importance of conforming to a pixel grid

The simplest solution to prevent this from happening is to snap the sprites to the pixel art grid just before rendering. Most implementations I could find online perform the snapping directly in the world, conforming object locations to the grid right before rendering and restoring the original locations at the beginning of the next frame.

Unfortunately, this is cumbersome to do in Unreal Engine 4 (the resources I found were for Unity), and might have unwanted side effects (triggering overlaps, for example). As such, in Guntastic we ended up using a simpler approach: snapping is applied directly in the vertex shader, by offsetting the sprite geometry vertices.

Simple shader that snaps geometry vertices to the pixel art grid
A simple material function that can be used to snap vertices to the pixel grid.
The result of the shader illustrated above
Pixel snapping at work.
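
For reference, here’s a rough transcription of the math the material function performs, written in C++ syntax for readability (function and parameter names are illustrative, not the actual material nodes). In the material the output feeds World Position Offset, so what we compute is the delta between the snapped and the original vertex position:

// Illustrative C++ transcription of the snapping material function. Since the
// result drives World Position Offset, we return the difference between the
// snapped and the original vertex position.
FVector ComputePixelSnapOffset(const FVector& VertexWorldPos, float UnitsPerPixel)
{
    const FVector Snapped(
        FMath::GridSnap(VertexWorldPos.X, UnitsPerPixel),
        VertexWorldPos.Y, // depth axis in a side-view game, left untouched
        FMath::GridSnap(VertexWorldPos.Z, UnitsPerPixel));

    return Snapped - VertexWorldPos;
}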

This works on the assumption that the vertices of the sprite render geometry are themselves aligned to the pixels of the art, so special care should be taken to generate valid render geometry.

Examples of render geometries, showing that only sprites with render geometry vertices aligned to the pixel grid will work with the snapping shader
Sprite render geometry vertices should be aligned to the pixel grid for the snapping to work correctly.

Prevent Jittering When the Camera Moves

It’s also important for the camera to be aligned to the pixel grid. Otherwise, when the camera moves, we’ll probably end up looking at the sprites from locations that are not on the grid, which makes them jitter around their expected positions.

In our implementation, we apply the snapping at the end of the camera update logic, after everything else (including special effects like screen shakes) has been calculated. This is what the actual code looks like:

void AFolliesPlayerCameraManager::DoUpdateCamera(float DeltaTime)
{
    // Update the camera
    Super::DoUpdateCamera(DeltaTime);

    // Snap the final camera location to the pixel grid
    {
        // 24 pixels (one tile) correspond to 100 world units, hence 0.24
        const float PixelsPerUnits = 0.24f;
        const float UnitsPerPixel = 1.0f / PixelsPerUnits;

        FMinimalViewInfo CameraCachePOV = GetCameraCachePOV();
        CameraCachePOV.Location.X = FMath::GridSnap(CameraCachePOV.Location.X, UnitsPerPixel);
        CameraCachePOV.Location.Z = FMath::GridSnap(CameraCachePOV.Location.Z, UnitsPerPixel);
        FillCameraCache(CameraCachePOV);
    }
}

2. Sprites Shouldn’t Rotate

Pixels in a grid can’t rotate, and neither should your sprites. Furthermore, rotating textures that use nearest-neighbor filtering introduces obvious aliasing between pixels.

A rotated sprite, showing some ugly aliasing between pixels
Rotated sprites look bad.

The simplest solution here is to avoid physically rotating objects in the world, and to use hand-drawn rotated versions of a sprite where rotation is actually needed.

While developing Guntastic, however, we encountered some edge cases that still required handling in-world rotations. An example is guided missiles, which need to track a target by pointing towards it: here the number of rotated sprites to draw was simply too large for a team with a single artist.

To handle these (sporadic) cases, we fell back to an antialiasing technique created by Cláudio Fernandes, called Manual Texture Filtering, which works by:

[…] performing linear interpolation between texels on the edges of each texel in the fragment shader, but sampling the nearest texel everywhere else.

In other words, it smooths out jaggies between pixels while keeping the overall result crisp. The only caveat is that linear filtering (instead of nearest-neighbor filtering) must be applied to the sprite texture. Here’s how the shader looks when implemented as a UE4 material function:

The "Manual Texture Filtering" shader implemented in UE4
The "Manual Texture Filtering for Pixelated Games" shader implemented in UE4.

This almost completely eliminated the problem:

Manual texture filtering at work inside UE4
Manual texture filtering at work.

3. Maintain Pixel Density

Finally, it’s important for the sprites in the game to have the same pixel density: this means they should be created using the same reference grid, and never scaled up.

Two sprites of the same size, but with different pixel densities

Luckily, enforcing this inside the engine is straightforward: it only takes some discipline to never apply scale to the sprites.

Taking It Further

The final step to ensure a true pixel-perfect look would be to render the game at the original art resolution, and upscale the rendered frame to match the actual screen resolution. This would automatically eliminate any possibility of inconsistent pixel sizes, because it’s impossible to draw anything smaller than a pixel.

Unfortunately we hit a couple of major roadblocks when trying to implement this, and decided to abandon it (at least for now):

  • Rendering at a very low resolution would make the handful of post-processing effects we use in the game (such as bloom) look distorted and aliased once upscaled.
  • Implementing this solution in Unreal Engine 4 requires changes at the engine level. This is something we’re trying to avoid as it can soon become a nightmare to manage for a very small team such as ours, with no full-time programmer/engineer.

If you’re interested in seeing this technique in action, I highly encourage you to take a look at the implementation available on Alex Ocias’ blog for Unity.

Anatomy of a Character

A couple of weeks ago we introduced our fifth character in Guntastic, so I thought it might be a good time to talk about our character creation workflow, from the concept stage to the actual implementation in Unreal Engine 4.

In The Beginning…

The process begins with our artist Simone doing some quick sketches of different characters, from which we usually pick the one we like the most.

Quick sketches of various characters: a pirate, a robot, Rick from Rick&Morty and the Demonic Koala
Guess which concept would’ve won us a copyright infringement lawsuit!

He then iterates on the design until we have something we like. Otherwise we shed a tear, throw everything away and start over.

Successive iterations on the Demonic Koala

Once the character design is final, he proceeds to create all the required animations. It’s important for the design to be as refined as possible at this stage: altering it afterwards takes a massive amount of time, as it involves editing all the animation frames one by one (a character currently has 162 unique animation frames – and counting!).

Although I tried to get him to make the jump to a dedicated pixel-art drawing application (such as Pro Motion NG or Pyxel Edit), he stubbornly continued to use Photoshop for all the editing. This required some technical setup to allow for quick iteration times: each animation frame lives on a separate layer, which is linked to a keyframe in the timeline for easy previewing. Each PSD usually contains multiple characters, so it’s easy to use previously created characters as blueprints for a new one.

A screenshot of a work file in Photoshop
That's a lot of layer groups!

From Photoshop To UE4

Exporting all the frames by hand would be madness, so we have a custom Photoshop script that automates the export process and generates a single PNG file for each animation frame. Automating error-prone and repetitive tasks is invaluable for the overall workflow, especially for small teams with no fancy QA department. 😃 Our export script also helps us maintain a consistent naming convention, which saves us from going insane tracking down errors, duplicated sprites, etc.

Automation FTW.

We then use TexturePacker to pack all the frames into a single texture atlas. If you do any 2D game development and don’t know about TexturePacker, you really should check it out: it’s a nifty utility that packs multiple sprites into a single file (a.k.a. atlas, or spritesheet). This not only helps lower texture memory usage, but also keeps the assets tidily organized – which soon becomes a necessity when you have to deal with large amounts of sprites. TexturePacker also works from the command line, so this step can be automated as well – a sample invocation is shown below the screenshot. At the moment we have a single atlas per character.

A screenshot of TexturePacker, with the Koala spritesheet open
Now that's a good looking spritesheet.
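
Since the command line mirrors the GUI options, a scripted packing step can look something like the following hypothetical invocation (paths are made up, and the generic JSON exporter stands in for the data format we actually use):

TexturePacker --sheet Koala.png --data Koala.json --format json Export/Koala/*.png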

The next step is (re)importing the atlas into the engine. We started out using Paper2D (which, for the uninitiated, is the 2D framework available by default in Unreal Engine 4), but slowly transitioned to a custom solution over time. This gave us far more control over both the import process and the rendering of sprites and flipbooks (right now we use Paper2D only for editing and rendering tilemaps).

Once the atlas is imported we end up with multiple folders containing the sprites and flipbooks, ready to use. To keep manual editing to a minimum, the system is also fed with spreadsheets of useful defaults (frame rate, etc.) that are automatically applied to the sprites and flipbooks. Additional animation assets are generated in this step as well, which are used by our animation system to actually play the animations – more on this later.

The spreadsheet containing FPS for animations
The spreadsheet is authored in LibreOffice Calc and then imported into UE4 to be used by our import tool.
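
Feeding a spreadsheet to UE4 is less exotic than it sounds: a CSV exported from the spreadsheet can be imported as a DataTable whose rows map to a simple struct. The sketch below is hypothetical (our actual columns differ), but shows the shape of it:

// Hypothetical row struct for the animation-defaults spreadsheet
// (FTableRowBase comes from Engine/DataTable.h). A CSV exported from
// LibreOffice Calc imports into UE4 as a UDataTable using this struct.
USTRUCT(BlueprintType)
struct FAnimationDefaultsRow : public FTableRowBase
{
    GENERATED_BODY()

    // Playback framerate applied to the generated flipbook
    UPROPERTY(EditAnywhere)
    float FramesPerSecond = 12.0f;

    // Whether the animation loops (e.g. walk) or plays once (e.g. jump, land)
    UPROPERTY(EditAnywhere)
    bool bLooping = true;
};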

Character Setup

Before digging into the actual animation system, let’s briefly talk about how our characters are set up. In Guntastic a player can run, jump, shoot, fall, etc. – all at the same time. What’s more, a player can also aim in different directions while performing any of the aforementioned movements. We also have different weapon types, such as pistols and rifles, that require different character stances. Moreover, some animations, such as jumping, landing and firing, are one-shots rather than loops.

While prototyping the game, we soon found out that having full-body animations for every permutation of the different actions a player can perform would make the animation count skyrocket, far beyond the reach of only two developers. After some tinkering we ended up splitting the character into two different sprites: one for the lower body (legs, feet), and one for the upper body (torso, head, arms). This is similar to what’s usually done with 3D characters, and allowed us to seamlessly play multiple animations on different parts of the body at the same time: for example, we can have a walk animation playing on the legs while a firing animation plays on the torso.

A screenshot of our character Blueprint in Unreal Engine 4, showing the various sprites used to compose a character
The component tree of our characters in Unreal Engine 4.
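
In code, the split boils down to two sibling sprite components, roughly like the following sketch (class and member names are illustrative, and UPaperFlipbookComponent stands in for the custom sprite components mentioned earlier):

// Illustrative excerpt of a character split into lower and upper body
// sprites (simplified; the actual class contains many more components).
UCLASS()
class AFolliesCharacter : public ACharacter
{
    GENERATED_BODY()

public:
    AFolliesCharacter();

protected:
    UPROPERTY(VisibleAnywhere)
    UPaperFlipbookComponent* LowerBodySprite;

    UPROPERTY(VisibleAnywhere)
    UPaperFlipbookComponent* UpperBodySprite;
};

AFolliesCharacter::AFolliesCharacter()
{
    // Lower body: legs and feet, attached to the collision capsule
    LowerBodySprite = CreateDefaultSubobject<UPaperFlipbookComponent>(TEXT("LowerBody"));
    LowerBodySprite->SetupAttachment(RootComponent);

    // Upper body: torso, head and arms, attached to the lower body so the
    // two halves move together
    UpperBodySprite = CreateDefaultSubobject<UPaperFlipbookComponent>(TEXT("UpperBody"));
    UpperBodySprite->SetupAttachment(LowerBodySprite);
}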

The Animation System

Our animation system is implemented in C++ in our custom Character class and works by using the descriptor assets generated during the import process. These hold references to different flipbooks, each representing a different movement state (e.g. idle, walking), weapon stance (e.g. pistol, rifle) or aim offset (forward, up, down). The system then sorts out which flipbook to use on the lower and upper body sprites based on the current character state.

A screenshot of the various descriptor assets in Unreal Engine 4
A descriptor asset is generated for every possible physics state of the character.

We also apply some modifiers to the animations: for example, the walk cycle is played faster or slower based on the current movement speed of the character. The whole system was kept intentionally simple (~150 lines of code) by enforcing constraints on the animations, which also makes them simpler to author. For example, all animations run at framerates that are whole divisors of 24 FPS (12, 6, etc.), so they’re easy to keep in sync. A rough sketch of the selection and modifier logic is shown below.
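
The sketch uses hypothetical names throughout (the descriptor lookup, enum and members are ours, invented for illustration), but conveys how the pieces fit together:

// Hypothetical sketch of the flipbook selection and modifier logic
// described above; types and names are illustrative.
void AFolliesCharacter::UpdateBodyAnimations()
{
    // The descriptor assets map movement state, weapon stance and aim
    // offset to a flipbook for each half of the body
    LowerBodySprite->SetFlipbook(Descriptor->FindLowerBodyFlipbook(MovementState));
    UpperBodySprite->SetFlipbook(Descriptor->FindUpperBodyFlipbook(MovementState, WeaponStance, AimOffset));

    // Example modifier: play the walk cycle faster or slower based on the
    // character's actual movement speed
    if (MovementState == EMovementState::Walking)
    {
        const float SpeedRatio = GetVelocity().Size2D() / GetCharacterMovement()->MaxWalkSpeed;
        LowerBodySprite->SetPlayRate(SpeedRatio);
    }
}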

Wish List

While the whole system works quite well, there’s still plenty of room for improvement (as is always the case in game development!). Something I would have liked to find the time for is a UE4 editor extension that imports the animation frames directly from the PSD files. The extension would then take care of packing the frames into texture atlases automatically, eliminating the need for external utilities and scripts.

Something I also would have loved to implement is a visual, state-machine-based animation system for flipbooks – similar to what Unreal Engine 4 provides out of the box for skeletal meshes. While presumably slower than the current hard-coded C++ solution (but then again, performance is rarely an issue in modern 2D games!), it would have helped tremendously during the prototyping phase. Maybe for the next game! 😉

Pixel Sharp Graphics at Multiple Screen Resolutions

Pixel art can be very picky when it comes to being scaled as it easily loses its pixel-perfect look, becoming blurry and distorted. This creates an interesting technical challenge for pixel art games since they are drawn at a fixed native resolution, but need to adapt to a wide variety of screen resolutions and aspect ratios.

A comparison of a pixel art character scaled at different sizes with various methods, showing obvious distortion or blurriness when non-integer scale factors are used
Pixel art doesn't like being scaled by non-integer factors. At all.

A Modern Problem

The old gaming consoles – where pixel art was born – would always output video at a single resolution, regardless of the CRT monitors and TVs they were plugged into. This inevitably led to distorted and blurred graphics, made worse by the artefacts generated by the way cathode tubes and video standards worked. At the end of the day, what players saw on the screen looked nothing like the original pixel art, and developers could do nothing about it.

A comparison of the original art from the game Zelda and a screenshot from a CRT display, showing how the latter is distorted by the screen
Old screens were super-crappy [source].

Yet today most players expect pixel art to look crisp and sharp regardless of whether they’re using the latest 4K OLED TV or an old 4∶3 LCD screen. In this post we’re going to take a look at how we handled this problem in Guntastic.

Assessing the Situation

During pre-production we investigated which screen resolutions players actually used, to get a better idea of the scale of the problem. While it was quite safe to assume home consoles worked at standard 16∶9 resolutions such as 720p and 1080p (and now 2160p), the situation on PC was far more complex:

Screen Resolution     Aspect Ratio   Adoption
1920×1080 (FHD)       16∶9           38.21%
1366×768 (WXGA)       ∼16∶9          24.76%
1600×900              16∶9           6.05%
1440×900              16∶10          4.59%
1536×864              16∶9           4.48%
1280×1024 (SXGA)      4∶3            4.08%
1680×1050             16∶10          3.62%
1360×768              ∼16∶9          2.89%
Screen resolutions with more than 2% market share in December 2016, according to the Steam Hardware Survey.

By looking at the data above we observed that:

  1. Perhaps unsurprisingly, 16∶9 was the most common aspect ratio, with the standard 1920×1080 resolution used on more than 38% of PCs (with Steam installed, that is).
  2. There are some awkward screen resolutions out there, with odd 16∶9-like aspect ratios, or exact 16∶9 aspect ratios at unusual sizes.
  3. More than 7% of the users had 16∶10 screens.
  4. Screens with a 4∶3 aspect ratio still existed, although with a small and shrinking market share.

Remember, the data above is from December 2016. Had we started working on the game today, we would have found a radically different scenario: 1080p is used on more than 60% of PCs, while non-HD resolutions (which were predominantly used on laptops) and 4∶3 screens have declined sharply.

A Starting Point

As standard 16∶9 HD resolutions were already the de facto target to develop for, we decided to use a reference resolution of 640×360 pixels. A canvas of this size scales perfectly by integer factors to all HD resolutions: 2× for 1280×720, 3× for 1920×1080 and so on, including 4K, 8K and possibly more (although I can’t imagine what a pixel art game would look like on such screens!). We’ve already seen in a previous post that 640×360 proved to be the most viable option from a gameplay point of view as well.

Our initial implementation of the scaling mechanism was quite standard, and worked by changing the orthographic size of the game camera based on the detected screen resolution. A very good introduction to this technique can be found in the Pixel Perfect 2D post on the Unity blog, which also provides example code that can be easily ported over to UE4.

Thick Borders to the Rescue

To prevent distortions and blurriness at the remaining screen resolutions we identified a simple solution in a slightly modified version of the “Thick Borders” technique outlined in the same blog post.

Basically, the idea is to scale the game to the nearest whole multiple of the reference resolution, and then show or hide a small portion of the game world at the sides of the screen to compensate for the small difference in resolution. This proved to work especially well on displays with 16∶9-like aspect ratios, whose resolutions differ only marginally from standard 16∶9 HD resolutions (e.g. 1366×768, 1360×768).

A screenshot of the game taken at 1366×768, with the thick border area highlighted
A screenshot of the game taken at 1366×768, with the thick border area highlighted in green.

To support this technique we had to change how the final scaling factor is calculated: if it’s close to the next integer, we round it up, even though we’ll lose a bit of the world at the sides; otherwise we round it down and let thick borders fill the rest of the screen. The final implementation lives inside our PlayerCameraManager class and takes approximately 80 lines of code. A simplified version is presented below:

// The native resolution of the pixel art
const FVector2D ReferenceResolution(640.0f, 360.0f);

// The pixels per unit (translation factor from pixels to world units, in our case 24px (a tile) = 100cm)
const float ReferencePixelsPerUnit = 0.24f;

void AFolliesPlayerCameraManager::UpdateCameraWidth(FMinimalViewInfo& OutCameraView)
{
    if (GEngine == nullptr)
    {
        return;
    }

    const UGameViewportClient* GameViewport = GEngine->GameViewportForWorld(GetWorld());
    if (GameViewport != nullptr && GameViewport->Viewport != nullptr)
    {
        // Get the viewport size
        const FVector2D ViewportSize(GameViewport->Viewport->GetSizeXY());

        // Calculate the new orthographic width based on pixel art scale and viewport size
        OutCameraView.OrthoWidth = (ViewportSize.X / ReferencePixelsPerUnit) / GetPixelArtScale(ViewportSize);
    }
}

float AFolliesPlayerCameraManager::GetPixelArtScale(const FVector2D& InViewportSize)
{
    // Calculate the new art scale factor
    float BasePixelArtScale = (InViewportSize.X / ReferenceResolution.X);

    // Round it up or down
    BasePixelArtScale = (FMath::Frac(BasePixelArtScale) > 0.9f) ? FMath::CeilToFloat(BasePixelArtScale) : FMath::FloorToFloat(BasePixelArtScale);

    // In the extremely rare case where the display resolution is lower than the reference resolution we
    // also need to protect against divisions by zero, although in this case the game will be unplayable :)
    BasePixelArtScale = FMath::Max(1.0f, BasePixelArtScale);

    return BasePixelArtScale;
}
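
To make the rounding concrete: at 1366×768 the base scale is 1366 / 640 ≈ 2.13, whose fractional part is well below the 0.9 threshold, so the factor is rounded down to 2× and the leftover 86 horizontal pixels (43 per side) are filled by thick borders.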

While thick borders won’t work well for some games, since they inevitably affect gameplay by changing the portion of the world that is visible to the player, they worked quite well for us. In Guntastic every level needs to be completely self-contained to fit on a single screen, and having additional room to spare at the sides even gave us a chance to fill these areas with extra details. As an additional precaution, we also established a safe area of one tile around every level that can be hidden without affecting gameplay, for the very rare cases where the scaling factor is rounded up instead of down.

Handling Out-of-Gauge Screens: Letterboxing

Although thick borders did the trick in most cases, they failed to handle situations where the area to cover at the sides was way too big, for example with some 4∶3 screens and uneven resolutions such as 1600×900. Since these screens were not widespread, we decided to solve the problem by applying a simple letterboxing effect. This also serves as a failsafe in case a player is using a very strange resolution we didn’t directly account for.

A screenshot of the game taken at 1600×900, with the letterboxing effect visible to the sides
The game on a 1600×900 screen, with the letterboxing effect visible to the sides.

We initially tried to apply the letterboxing effect through the built-in vignette post-processing filter, but it proved difficult to control and integrated badly with the pixelated look of the rest of the game. Currently, the effect is implemented inside the game world using a simple rectangular static mesh, with a big central hole that helps keep overdraw to a minimum. Not the cleanest approach, but it gets the job done!

The Final Result

In the end, every level in Guntastic is composed of 34×24 tiles. The playable area only accounts for 24×16 of them (which roughly corresponds to our native resolution of 640×360 pixels), with the rest being there only to support the thick borders and letterboxing techniques. The following table and figure show which scaling factor and techniques are applied at common resolutions:

Screen Resolution     Scaling Factor   Visible Techniques
1920×1080 (FHD)       3                None
1366×768 (WXGA)       2.1 ⇒ 2          Thick Borders
1600×900              2.5 ⇒ 2          Thick Borders, Letterboxing
1440×900              2.3 ⇒ 2          Thick Borders, Letterboxing
1536×864              2.4 ⇒ 2          Thick Borders, Letterboxing
1280×1024 (SXGA)      2                Thick Borders, Letterboxing
1680×1050             2.6 ⇒ 2          Thick Borders, Letterboxing
1360×768              2.1 ⇒ 2          Thick Borders
Scaling factors and techniques applied to screen resolutions with more than 2% market share (December 2016, Steam Hardware Survey data).
A figure illustrating the scaling factor applied at different resolutions
  • Green: 1× scaling factor.
  • Blue: 2× scaling factor.
  • Yellow: 3× scaling factor.

Conclusion

In this post we presented the solution we adopted to support multiple screen resolutions in Guntastic: a mix of three different techniques – scaling, thick borders and letterboxing. While definitely not perfect, this approach was quite simple to implement while remaining – we hope – future-proof and flexible enough to handle even niche resolutions.