Backbone – Working with 2D Pixel Art in a 3D World | Unreal Indie Dev Days 2019 | Unreal Engine

>>Nikita: Hi everyone. That's a pretty cool experience, seeing your game on a big screen like that. My name is Nikita Danshin. I am, as the slide says, a co-founder, developer, and composer on the team at EggNut. We are 12 people
in the studio based across
five different countries working remotely on
our first title, Backbone. So, this is a noir
role-playing adventure about a raccoon
detective in dystopian Vancouver, BC, Canada. It’s an immersive mix
of high-quality pixel art as you may see
and graphic tech commonly not used
with pixel art, like dynamic lighting,
PBR materials, normal maps, decals, and all
is built in Unreal Engine 4. So, this talk is presented
by me and Radu Girjoaba
our tech artist. It was prepped with the
help of Toma Klepinina, our lead environment
artist. She was supposed to talk
instead of me, but I’m covering for her as she was not able
to present here today. So, this talk consists
of two parts, one where we discuss
our art techniques and one where we look at them
deeper from a tech standpoint. So, in this first part, let's talk about lighting in general and how we mix 2D art with a 3D environment. Lighting is arguably the most important aspect of art. At the dawn of the digital era, artists used to draw lighting straight onto Textures or sprites, as there were no engines capable of calculating the realistic behavior of light. Examples of that are old-school sprite games like Heroes of Might and Magic,
Might and Magic, somebody definitely likes that,
Castlevania, and Syndicate. With evolving technologies
and higher processing power, new real-time
lighting capabilities were developed,
such as dynamic lights, shadows, normal maps.
Light became more realistic, taking on properties
from the actual laws of physics. These examples are from Blade of
Darkness with dynamic shadows. Silent Hill 2, I believe,
Thief, and Doom 3. As time progressed further,
the new lighting tech raised the fidelity
higher than ever. For example, Crysis 1, using
screen space ambient occlusion, FEAR, using ray marched
volumetric lighting, Uncharted 4,
with reflective shadow maps for global illumination, and recently released Control,
with ray tracing. However, old techniques are still valid today. Even with an abundance of new tools, artists choose to create pre-lit Textures; some to follow nostalgic aesthetics, some because of technical limitations of their engines, and some just because they think it looks great. These examples are from Stardew Valley, Moonlighter, Guilty Gear, and Hyper Light Drifter. You might know about those. We think that pixel art aesthetics are great, but they could use a little refresh. So, for our first title,
Backbone, we combined pixel art
with modern lighting techniques. So, figuring out an art style
for your game might take some time
and a lot of experimentation. For Backbone, we gradually
came up with a pipeline that facilitates
all our stylistic needs and provides a transparent
and effective framework for each frame the player
will see in the final game. So I’m going to go
through this pipeline using one of the locations
from the game as a reference. So, the first step is the sketch. Our main goal here
is to plan out the location, light sources, and their colors.
This is the most crucial step that dictates the rest
of the work on that scene. Then we draw pixel art Assets
with that lighting in mind. We position them on a scene
according to the sketch. As you see, everything is laid out on planes in 3D. We use the characters for size reference to make sure that everything looks fine. In terms of positioning, there are actually no set rules here at all. We don't use any kind of grid system; we just do whatever looks good. Then we place light sources, play with reflective surfaces like that mirror over there, set up the coloring of the lights, scattering, and many other properties that we'll dive into later.
For floors, walls, and ceilings, we use materials with custom
normal and roughness maps that react to light sources
and add more depth to the image. And that will be
the final result. Here are a couple more examples from that level in the game. As you see on the sketch, we set up the lighting, making sure that it comes from the top, from those four dots. Then we sketch those Assets, position them on the scene, set up the lighting, and set up the custom materials, for example, for those floors at the bottom. And that's how the scene looks at the final stage. So, for Backbone, we draw lights
and shadows directly on sprites. In our experience it helps
to convey the color, shape, and form of an object. Remember that visual art
is all about fooling the eye into thinking that it’s looking
at the real object. So, pixel art is no different.
In this example, we drew the red lighting specks on the column and the wires hanging next to the planned source of a very bright red neon light. So, when we turn the lights on, the image comes to life. This couple of red dots makes the Engine lighting even more vibrant. Before I dive deeper
into the specifics of lighting, let’s go back to the basics. I want to talk
about composition. Generally, each scene
can be broken down into different planes. Foreground, middle ground,
and the background. In Backbone,
all important information is concentrated
in the middle ground. This is where the player
character is located as well as interactable objects,
NPCs, enemies, and so on. The foreground is used to add another layer of information and make the scene seem deeper. In our case, foreground objects are usually located on the sides or the upper and lower parts of the screen. This way they don't block the view of the player character and don't take too much attention away from what's happening in the middle. Remember that smart positioning means better immersion for the player. We also break this rule for different narrative and artistic purposes. In this scene, we deliberately block the player's view with objects in the foreground. This adds to the atmosphere of an unsettling, cramped place where the player doesn't know what to expect next. And this is how it looks. The silhouette
of the player character is obscured
by foreground objects creating a feeling
of uncertainty. There are also locations
where foreground is simplified to silhouettes like this scene
in a busy night bar. Now that I've covered the basics, let's build on the difficulty and introduce a multidimensional approach to composition. Backbone is built by placing 2D sprites in a 3D environment, so we decided to use that to our advantage. It's difficult to mix 3D and 2D with flat planes, as your Assets are in a multidimensional environment. A visual style like that is sometimes called two-and-a-half-dimensional. Sprites and Meshes in Backbone
might be flat, but we positioned them in a way
that makes them look 3D. For example,
our movie theater board is made of sprites positioned at a certain angle in the 3D environment. 3D objects can help add more volume and make the world seem more vibrant and immersive, but placed next to plain 2D sprites, 3D objects draw the player's attention and make everything around them seem even more flat. So, in our case it's important
to use them sparingly. In Backbone, you will find
3D objects on larger structures like parts of buildings
and mostly in the middle ground. If they're positioned in the foreground or background, perspective distorts them during movement, which doesn't look nice in our opinion. This is footage from the Backbone Kickstarter trailer from 2018. Here we experimented with 3D
on smaller objects. The green drawer over there
and the table in the front. The sink on the drawer is still a 2D plane, and it looks unnatural and distracting positioned next to a 3D object. To solve this, we could have gotten rid of 3D altogether or completely switched to voxel art. We chose to use 3D sparingly, and it seems like the right direction for us at the moment. Today, Backbone rooms are basically cubicles filled with 2D Assets. To combat the flatness,
we set the camera further away and tighten the field of view. This way perspective distortion
on floors and ceilings does not conflict
with overall style. The space between
the objects still adds to the feeling of depth,
creates parallax effects and gives space for shadows
that the objects create. Here’s an example. These identical elongated shapes
placed in the scene look like cubes
with our scene's camera setup. This is why we stretch walls, floors, and ceilings. That way they look proper with our camera settings. Without the stretching, pixelated Textures on the walls would create too much visual noise, like the one on the left. And we want all the attention directed at the scene, not the walls. So, the result would be the one that we have on the right. Objects located next to the walls, like doors and paintings, are drawn with perspective distortion. This trickery is invisible to the player, yet the whole scene would look
wrong if we didn’t do that. By keeping the perspective
distortion in mind when creating Assets, the objects look natural
from every player position. Doors are the most
encountered example of these. Getting back to the cubes, notice how perspective
distortion affects the cubes differently.
While moving through the level, you can see both blue sides
on the left cube, but only one blue side
is visible on the right one. The position of the player
character and camera all affect
the object's perspective. So, here is an example of our use of perspective. The chair on the left is drawn in linear perspective, while the one on the right is in orthographic projection, which is usually used in schematic drawings. We use that in most of our Assets. So, as Howard walks around, you can see that the table in the foreground is in orthographic projection while the corkboard on the right wall is in linear perspective; watch how the camera moves around and they stay consistent. So, my last point is layers. Layering is our main instrument
for creating more depth and volume in a two
and a half dimensional world. Layering 2D sprites in a 3D environment creates depth through so-called parallax. Parallax is an amazingly effective tool for adding detail and depth to the game world. We use it a lot; as you may see, there are plenty of objects layered behind each other, and we use it with room decorations and backgrounds to make the world seem more alive. But obviously we can break this rule again, as we do in our close-up mechanic. That mechanic allows the player to interact with objects through drag-and-drop gameplay, and there's no need for depth there. So, the visual rules
change here. Texture resolution is usually higher, and we just don't use any Engine lighting, instead painting the lighting straight into the Textures. That's the basics of how we deal with perspective, layer pixel Assets, and position 2D Assets in a 3D environment. Now Radu will talk about how we deal with visuals from the technical standpoint.

>>Radu: So, I'm Radu Girjoaba and I did
the technical art on Backbone. So, while prototyping
the style of Backbone, we encountered multiple issues. Some of them were due
to the unpredictable nature of combining 2D and 3D. Some were due to
technical limitations, but most of the time we simply found that some of the stuff we did just didn't look good. And so, we'd like to tell you about what worked for us and what didn't. But just remember, what we did is only one of the many approaches we could have taken to lighting. So, our game doesn't have
much geometric detail. So, when building our world,
a lot of the visual detail needs to be added in
through lighting and materials in order to create
a convincing atmosphere. But since most of our objects
are in 2D, we have come across some
challenges lighting our game, namely Paper2D
not supporting lightmaps. And as it turns out, it’s hard
lighting a game in 3 dimensions when most of your objects
are just flat planes facing the same direction. So here I’ve set up
a quick example to illustrate the challenge
we had with Paper2D. At the top there, there's a static sphere with baked lighting.
And so, it affects both Mesh lightmaps
and volumetric lightmaps, but it doesn’t act
as an actual direct light. And so, on the left here
we have a Static Plane Mesh and because it has
surface lightmaps, it gets nicely lit
by the sphere. And it’s also shadowed near
the edges of the cube there. In the middle we've got the same Plane Mesh but with dynamic mobility. So, while it doesn't have surface lightmaps, it still gets affected by the volumetric lightmaps. It's not quite as accurate, but it is still able to be affected by the surrounding static lighting. And then on the right here
we have a Paper2D sprite. And so,
it doesn’t have lightmaps and it doesn’t get affected
by volumetric lightmaps. The only lighting it receives
is from the Skylight and any dynamic direct lights
we have in this scene. So, to get around this issue, we started using Plane Meshes
for static sprites instead, and this allowed us to start
bouncing lighting off sprites. And so, we could bake
lightmaps onto them and then also have them affect and be affected by
the volumetric lightmaps. And so, how we do that currently is we just have a tool that places planes into the world. And since our sprites are generally half a texel per centimeter, we can also scale them automatically based on their Texture resolution.
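As a rough sketch of that auto-scaling step (the half-texel-per-centimeter ratio is from the talk, but the helper function and the assumed 100 cm base plane are our own illustration, not EggNut's actual tool):

```cpp
#include "Engine/Texture2D.h"

// Sketch: scale a 100x100 cm unit plane so a sprite renders at half a texel
// per centimeter, i.e. 2 cm of world space per texel. The base plane size
// and this helper are assumptions for illustration.
FVector ComputeSpritePlaneScale(const UTexture2D* SpriteTexture)
{
    const float CmPerTexel = 2.0f;        // 0.5 texels per cm, per the talk
    const float BasePlaneSizeCm = 100.0f; // assumed size of the unit plane
    const float WidthCm  = SpriteTexture->GetSizeX() * CmPerTexel;
    const float HeightCm = SpriteTexture->GetSizeY() * CmPerTexel;
    return FVector(WidthCm / BasePlaneSizeCm, HeightCm / BasePlaneSizeCm, 1.0f);
}
```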
But we don't actually use the same Plane Mesh for all of our sprites, since they aren't all square. If we did that, we would start
getting stretched lightmaps due to the different
aspect ratios. So, what we could do is pad the alpha maps of all our sprites so they would all be perfectly square, but then that would start slowing down performance because of overdraw. So instead, we have a few different elongated Plane Meshes
with different lightmap UVs and that allows us to have
more consistent lightmaps throughout the game. The issue now is we still need
to use Paper2D for our sprites so we can support
character animations properly. So, if we had them running around
this completely baked scene, then they would look out
of place if they weren't fully affected by the static lighting around them. So, what we did to help mitigate this issue is we implemented
some custom deferred lights, and these basically act like stock Unreal lights for the artists, but they're diffuse only, so they are a lot cheaper to render. And because Paper2D still writes to the GBuffer, these lights can help ground our characters into the world without having as much of a performance impact as unshadowed lights would.
And so, because we want them to only contribute diffuse lighting, we want the lighting impact to be more ambient, despite the fact that they are actually just point lights. So, we only calculate distance falloff, normal contribution, and in some cases a spotlight cone angle. There's no roughness calculation for now, no specular, no shadows, nor anything else that would be physically based.
So, for the distance falloff, we noticed that inverse-squared falloff gives these really bright highlights close to the light source that then rapidly fade off; we wanted the lighting to be more ambient, so this doesn't really look very good for the effect we're trying to go for. So instead of inverse-squared falloff, we just square the initial distance falloff, and that gives us a nice, really smooth gradient with a more ambient feel.
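To make that concrete, here's a small C++ sketch of the two falloff curves; the exact formula isn't shown in the talk, so squaring a plain linear distance falloff is our reading of "square the initial distance falloff":

```cpp
#include <algorithm>

// Physically based inverse-square falloff: very bright near the source,
// then a rapid fade (windowed so it reaches zero at the light's radius).
float InverseSquareFalloff(float Distance, float Radius)
{
    const float D = std::max(Distance, 1.0f);                      // avoid div-by-zero
    const float Window = std::max(1.0f - Distance / Radius, 0.0f); // clamp at radius
    return Window / (D * D);
}

// The softer variant described in the talk: take a plain linear distance
// falloff and square it, giving a smooth, ambient-feeling gradient.
float SquaredLinearFalloff(float Distance, float Radius)
{
    const float Linear = std::max(1.0f - Distance / Radius, 0.0f);
    return Linear * Linear;
}
```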
But we still have this problem that the normal contribution makes it easy to tell where these sort of fake lights are coming from in the scene. So, for the normal contribution, instead of dotting the normals with the light direction, we have an artist-controlled wrap parameter to wrap the lighting around geometry that isn't directly facing the light.
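In shader terms this is classic wrap lighting; here's a hedged sketch of one way to implement the parameter as described (the formula is a standard one, not necessarily Backbone's exact code):

```cpp
#include "Math/UnrealMathUtility.h"
#include "Math/Vector.h"

// Wrap = 0 gives standard N.L lighting; Wrap = 1 wraps the light fully
// around the geometry; pushing Wrap past 1 (as in the talk) fades the
// result toward pure distance falloff, as if normals weren't considered.
float WrappedDiffuse(const FVector& Normal, const FVector& LightDir, float Wrap)
{
    const float NdotL = FVector::DotProduct(Normal, LightDir);
    // Classic wrap lighting: remap N.L from [-Wrap, 1] into [0, 1].
    const float Wrapped =
        FMath::Clamp((NdotL + Wrap) / (1.0f + Wrap), 0.0f, 1.0f);
    // Past the 0..1 range, blend toward 1 so the normal stops mattering.
    const float Fade = FMath::Clamp(Wrap - 1.0f, 0.0f, 1.0f);
    return FMath::Lerp(Wrapped, 1.0f, Fade);
}
```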
And so, in the first half of this video, you can see what it looks like moving that value from zero to one, and it gives some nice smooth lighting. But then if we push that value past the zero-to-one range, we can fade it off to pure distance falloff, where it's just the pixel's distance from the light; visually, it looks like the normals aren't even being considered. And so, another problem we had
with these is that since the lights aren't shadowed, they can leak through solid walls quite easily. And this specific example is quite extreme, but if we have this spotlight pointing at a wall like this and we want light bouncing off of that wall, just putting a point light there is going to leak through to the other side and cause all sorts of issues. So, this is a problem for us because, although the game is mostly comprised of 2D sprites, we still have 3-dimensional walls separating our interior rooms. So, placing these bounce lights right next to the walls would make lighting leak in between those rooms. So, we added a cone angle parameter to the lights as well for cases like this. For the artists, it's just like using a normal spotlight, going from zero to 180 degrees; but in the shader, it's passed in the negative-one-to-one range, with negative one being 180 degrees and one representing zero degrees. And this allows us to give the lights some directionality when it's needed and keeps the lighting from leaking through walls.
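That cosine mapping is why the shader range runs from negative one to one; a minimal sketch, assuming a hard cutoff (a production light would smooth the cone edge):

```cpp
#include "Math/UnrealMathUtility.h"
#include "Math/Vector.h"

// Artist-facing value: a cone angle in degrees, 0..180, like a spotlight.
// Shader-facing value: its cosine, so 180 degrees -> -1 and 0 degrees -> +1.
float ConeAngleToShaderValue(float ConeAngleDegrees)
{
    return FMath::Cos(FMath::DegreesToRadians(ConeAngleDegrees));
}

// Assumed cone test: compare the spotlight axis against the direction
// from the light to the shaded pixel; outside the cone contributes nothing.
float ConeAttenuation(const FVector& SpotAxis, const FVector& ToPixel,
                      float CosConeAngle)
{
    const float CosTheta = FVector::DotProduct(SpotAxis, ToPixel);
    return CosTheta >= CosConeAngle ? 1.0f : 0.0f; // hard edge for brevity
}
```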
So, in-game, we just use these bounce lights where we've got lights dynamically changing, or where we feel the static lighting isn't enough to shade our characters properly. And so, in this case,
we’ve just got this flickering light
in this dark alley. And so, we’ve just put
an ambient light right below the floor there. And that does a nice job
of lighting up the environment and lighting up our character and bouncing that light
back up from the ground. So, touching on volumetric fog,
just for a second, one challenge
we had was in one of our levels, we have these cars
with volumetric fog coming from the headlights
moving in the foreground. And so, if you look at the taxi on the left there, the volumetric fog from the car behind it is clipping through at the very beginning, and it's also leaving this quite jarring ghosting artifact; that's the result of the temporal reprojection that volumetric fog uses.
And we can’t just turn that off because then we’d get some
pretty bad looking volumetric lighting throughout
the rest of the level. So, what we ended up doing was re-using the earlier light code from our custom lights in a 2-dimensional sprite material, and then we just attached those sprites to the cars' headlights. And so, then we can just pass the spotlight parameters into the sprite. And that ended up giving us a very similar-looking result while getting rid of the issue that volumetric fog was causing, and then also saving some performance.
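The talk doesn't show how those parameters get into the sprite, but in stock Unreal this is typically done with a dynamic material instance; a sketch with hypothetical parameter names:

```cpp
#include "Components/PrimitiveComponent.h"
#include "Materials/MaterialInstanceDynamic.h"

// Hypothetical setup: feed spotlight values into a fake-volumetric sprite
// attached to a headlight. The component and parameter names are assumed.
void SetupHeadlightSprite(UPrimitiveComponent* HeadlightSprite,
                          const FLinearColor& LightColor,
                          float ConeAngleDegrees,
                          float Intensity)
{
    UMaterialInstanceDynamic* MID =
        HeadlightSprite->CreateAndSetMaterialInstanceDynamic(0);
    MID->SetVectorParameterValue(TEXT("LightColor"), LightColor);
    MID->SetScalarParameterValue(TEXT("ConeAngle"), ConeAngleDegrees);
    MID->SetScalarParameterValue(TEXT("Intensity"), Intensity);
}
```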
So great: we've got some nice general lighting, and the baked lightmaps give us some pretty good shadows. But then the issue remains that when building lighting
in some areas we still need either sharp shadows or soft shadows with really sharp contact shadows. And unless we start bumping up lightmap resolution and lighting quality, that's not really achievable with just lightmaps. So, we decided to implement some simple shadow decals in our game, and this is what the material looks like. It's very simple:
Just zero for the base color, one for the roughness
and parameters for opacity and specularity. The reason specularity is a parameter here is because in some of our earlier levels the specularity values of some surfaces weren't consistent, and so the shadow decals would end up looking either too bright or too dark. We also add a small amount of dithering to the opacity to mitigate some banding artifacts we get; and while we don't use temporal anti-aliasing for denoising, the effect is subtle enough that we don't actually need it, and the final decals are still relatively smooth.
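Written out as plain C++ (in Backbone this lives in a material graph), the whole decal reduces to a few constants plus a dithered opacity; the ordered 4x4 Bayer pattern here is our assumption for the dither:

```cpp
// Sketch of the shadow decal's outputs: black, fully rough, with opacity
// dithered slightly per pixel to break up banding in smooth gradients.
struct FShadowDecalOut
{
    float BaseColor;
    float Specular;
    float Roughness;
    float Opacity;
};

FShadowDecalOut ShadowDecal(float OpacityParam, float SpecularParam,
                            int PixelX, int PixelY)
{
    // Ordered 4x4 Bayer matrix, values in [0, 1) (assumed dither pattern).
    static const float Bayer[4][4] = {
        { 0/16.f,  8/16.f,  2/16.f, 10/16.f},
        {12/16.f,  4/16.f, 14/16.f,  6/16.f},
        { 3/16.f, 11/16.f,  1/16.f,  9/16.f},
        {15/16.f,  7/16.f, 13/16.f,  5/16.f}};

    FShadowDecalOut Out;
    Out.BaseColor = 0.0f;           // pure black shadow
    Out.Roughness = 1.0f;           // fully rough
    Out.Specular  = SpecularParam;  // exposed to match inconsistent surfaces
    // A small signed offset is enough to mask banding without TAA.
    const float Dither = (Bayer[PixelY & 3][PixelX & 3] - 0.5f) / 64.0f;
    Out.Opacity = OpacityParam + Dither;
    return Out;
}
```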
And so, here we've got them just placed around this room. On the right there,
it’s the shadow decal specific to the sprite
that’s casting the shadow. And then on the left wall
and the ceiling there we’ve got some just general
linear falloff decals. Here again, we’ve got
some decals on the ceiling to simulate
sort of ambient lighting coming in through that window
from outside. And so generally we don’t
use them too excessively, just where we feel some
sharp ambient lighting will help or where we need
some more accurate shadows without bumping up
lighting quality. Another issue we had was in
some scenes the player character felt a little out of place. And so, we’ve got this really strong
directional ambient lighting. And so, in this case there’s
a lot of ambient lighting coming in from that stage
in the background. And so, what we could have done
is set up this low-res spotlight to sort of emulate that effect. But we’ve already got enough
shadow casting lights in the scene as it is and we don’t want to drag
down performance even more. So, for situations like this,
we came up with a special decal that dynamically moves
and morphs around the player based on some artist-placed
pivot representing the focal point
of the ambient light. And so, in this video you
can see that decal in action. We’ve got this area light
right above the player there. And as the player moves, the decal also changes shape
around the player. And then as the light
also moves around, the decal is also
morphing around the player. So, we can change these
at runtime, they’re not just
statically placed. And so now here’s
that previous scene again. And on the right,
you can see what it looks like with the shadow decal. And it really helps ground
our character into the world a little more. And so how the actual decal
works is we just take a circle and stretch it non-uniformly and then we can do
a uniform blur on it and then a non-uniform blur
in the opposite direction of which it’s being stretched. And then we can pack
eight frames of that into the flipbook
on the right there. And so, as it gets stretched,
the uniform blur is decreased and the non-uniform blur
is increased and that gives the illusion
of a larger penumbra coming from the top portion
of the character. And so, the actual Texture
was made in Substance Designer. So, in Engine, for the material, we just interpolate between the two nearest frames
based on the character's distance from this artist-placed light pivot.
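A sketch of that frame selection, assuming a simple linear mapping from distance to the flipbook (the eight-frame count is from the talk; MaxDistance and the mapping are ours):

```cpp
#include "Math/UnrealMathUtility.h"

// Pick the two nearest flipbook frames and a blend factor from the
// character's distance to the artist-placed light pivot.
void GetShadowFlipbookFrames(float DistanceToPivot, float MaxDistance,
                             int32& OutFrameA, int32& OutFrameB,
                             float& OutBlend)
{
    const int32 NumFrames = 8; // eight packed frames, per the talk
    const float T = FMath::Clamp(DistanceToPivot / MaxDistance, 0.0f, 1.0f);
    const float Frame = T * (NumFrames - 1);      // 0..7 as a float
    OutFrameA = FMath::FloorToInt(Frame);
    OutFrameB = FMath::Min(OutFrameA + 1, NumFrames - 1);
    OutBlend  = Frame - OutFrameA;                // lerp alpha between frames
}
```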
Something else we did some testing with is dynamically blurring shadow decals at runtime, so we wouldn't have to make these pre-blurred Textures for a lot of individual sprites. On the left here
we have this pixel art sprite that we would use in our game. And on the right, there is just
some random Texture with an alpha map. And so, you can see at the shadow's base we get these really nice contact shadows, these really nice sharp shadows. Then as we get further away, they get blurrier and blurrier. And the way these decals work is
just placed on the ground or any flat surface. And then with a Vertex shader
we could individually move the left, right,
and top edges of that. And then the decal
can be blurred with a parameter specified
by the artist. And then we just mask that off with the Texture coordinates, so we get that nice contact-hardening effect. And the way the blur works is similar to what Playdead did for their frosted glass: essentially, we sample equal areas up to eight times within a radius around each pixel, and then we jitter that sample position each frame so the result can be cleaned up by TAA.
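A sketch of that jittered sampling (modeled on the general idea, not Playdead's or Backbone's exact code; SampleShadowMask stands in for the decal's texture fetch):

```cpp
#include <cmath>

// Average up to eight samples in a ring of the given radius around the
// pixel, rotating the pattern by a per-frame jitter angle so that TAA
// can converge the noisy result over time.
float BlurredShadow(float U, float V, float Radius, float FrameJitterAngle,
                    float (*SampleShadowMask)(float U, float V))
{
    const int NumSamples = 8; // "up to eight times", per the talk
    float Sum = 0.0f;
    for (int i = 0; i < NumSamples; ++i)
    {
        const float Angle =
            FrameJitterAngle + (6.2831853f * i) / NumSamples;
        Sum += SampleShadowMask(U + Radius * std::cos(Angle),
                                V + Radius * std::sin(Angle));
    }
    return Sum / NumSamples;
}
```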
But we want to keep this pixel-perfect look in our game, meaning we can't actually use TAA. And so, we ended up not using this specific type of decal; we just thought it was worth mentioning. So, going back to our general
sprite lighting, even with baked lighting and lots of lights placed
throughout the environment, the scene can still look
pretty flat in a lot of cases. Looking at the scene
with the lighting preview, the middle ground objects
look like flat cards, which they are, but we still need
these objects to look and feel like objects
grounded in the world. And then additionally
on top of that, their silhouettes
sort of blend in together and it’s really hard to make
out their individual shapes. So, we thought about how
we could add some extra detail to both our Mesh sprites as well
as our Paper2D characters. And so, we decided
that normal maps would be a good way to do that. We’ve got a lot of unique
sprites in our game. Over 800 in our largest level and then around 200 per level
on average. And we do have a lot of levels
in our game. So then how could we create
normal maps for so many different Assets? Do we generate them
from height maps? Do we just create
them all by hand? And so, we decided to look into
a few different approaches. The first approach we tried was generating normal maps
from height maps. And so that’s how 3D workflows
normally work. So, this is something
we decided to look into. But then if we do that, then how
do we create the height maps? Well, for reference, this is what our sprites
generally look like. This is just
the base color Texture and you can see
it’s got already occlusion, specular highlights and other
lighting information painted in. So, we could use programs like
Bitmap2Material or other similar programs
to convert these color Textures to height maps as is
sometimes then in 3D workflows. But because all that lighting
info is already painted in and because of
the low resolution, when converting the result
to normal map, it’s hard to get
any legible info out of it. So, we could obviously
spend a lot more time adjusting these Textures. But then at that point we might
as well be hand painting them. So, then what if we do hand
paint them? It’s still about the same
workload for the artists, but the height maps can now
be accurate to what the sprites
were for them. So then if we see here, if we convert
this height map to a normal map, then the flat faces
are a lot more readable. But then the normal map
still breaks up when we’ve got these in areas where the height map
has too much detail And so we decided we couldn’t
actually get any usable normal maps from height maps. So, then what if we
paint them by hand, paint the normal maps by hand. Doing this was the most accurate
we could get. But then the workload
is even larger now for artists as they have to paint
in more information. So, we decided this approach
also wasn’t viable for us. In the end, we realized
that because our sprites are so low resolution, we could essentially run an edge
detection shader on them and then we could just generate
edge normals from that. And so how that works is,
we’ve got an example here of a selection of some
theoretical alpha map we have. And so, the white squares
are just opaque texels, the black are transparent,
and the red are outside the zero-to-one UV range. So, we'll just focus on the green texel in the center, and we can also assume that texels outside the UV range are going to be transparent. And so, we only need to focus
on the eight surrounding texels of the texel we’re focusing on. So here we check
the top-left texel, and that's outside the UV range, so we can assume it's transparent and we add a unit vector in that direction. Then the one below that as well. And then same goes
for the one below that. And so, then we check the one
to the right of that and it’s inside the UV range. So, then we sample the alpha map
and we find that it’s opaque. So that means we don’t need
to do anything there, and we check the texel to the right of that: same story there. Then we check the texel above that, we sample the alpha map, and we find out
it’s transparent. So, then we add in a unit vector
in that direction. And then same story goes
for the one above that and same for the last one. So, if we add
all these vectors up, we’d get something
that looks like this and you’ll notice
we get something that’s over a unit long but we don’t actually care
about the length, we just care
about the direction. And so, all these vectors have
been calculated in 2 dimensions. So, for the final normal output, we have an artist-controlled
intensity parameter. And so, we normalize
the 2D vector and then we append one minus
the edge normal intensity as the Z component
where intensity is just some float between zero to one.
And then that guarantees that all normals have the same intensity when getting renormalized by the Engine. But then it's not actually one minus the intensity; it's 1.001 minus the intensity, so that there's still normal data for all non-edge texels and they can be normalized properly.
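Putting the whole walkthrough together, here's a CPU-side C++ sketch of the edge-detection pass (in practice this would be a shader; IsOpaque stands in for the alpha-map sample, and the vector and Z conventions follow the description above):

```cpp
#include <cmath>

struct FEdgeNormal { float X, Y, Z; };

// For the texel at (X, Y): every transparent (or out-of-UV-range) neighbor
// contributes a unit vector pointing toward it; the summed 2D direction is
// normalized and 1.001 - Intensity is appended as Z, so flat texels still
// renormalize to (0, 0, 1) in the Engine.
FEdgeNormal ComputeEdgeNormal(int X, int Y, int Width, int Height,
                              bool (*IsOpaque)(int X, int Y), float Intensity)
{
    float AccumX = 0.0f, AccumY = 0.0f;
    for (int DY = -1; DY <= 1; ++DY)
    {
        for (int DX = -1; DX <= 1; ++DX)
        {
            if (DX == 0 && DY == 0)
            {
                continue; // skip the texel we're focusing on
            }
            const int NX = X + DX, NY = Y + DY;
            const bool OutsideUV =
                (NX < 0 || NY < 0 || NX >= Width || NY >= Height);
            if (OutsideUV || !IsOpaque(NX, NY))
            {
                // Add a unit vector toward the transparent neighbor.
                const float Len = std::sqrt(float(DX * DX + DY * DY));
                AccumX += DX / Len;
                AccumY += DY / Len;
            }
        }
    }
    // Only the direction matters, so normalize the 2D part.
    const float Len2D = std::sqrt(AccumX * AccumX + AccumY * AccumY);
    if (Len2D > 0.0f)
    {
        AccumX /= Len2D;
        AccumY /= Len2D;
    }
    return { AccumX, AccumY, 1.001f - Intensity };
}
```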
So, for reference, here's what our sprites look like again, without the edge normals. And here's what they look like
with edge normals. You can see they add
a lot of depth to the sprites and the scene, and they help ground them
in the world. And now they look a lot more
like physical objects instead of just flat cards
that have been propped up. And so, here’s a side by side comparison of the fully
Textured scene. The normal maps help sell the ominous mood we're going for in the scene a lot more. This isn't really a perfect solution, as it doesn't cover all of our sprites perfectly, but it's enough to help sell the lighting and ground objects in the world. And while it's subtle, a lot of the time it helps improve the look of the game quite noticeably. Something we also noticed
is in some cases it can look like
a bad bevel shader in motion, which it essentially is. Specifically on our game's protagonist, if you look at the edges of him as he moves around, the areas that aren't being shaded fully look more like an outline shader, and that's not really what we're going for. So, in those cases we can lower the intensity
of the edge normals and then combine them
with some handmade normal maps that the artists would make and then we can adjust
the intensities of those separately based
on what we need. At the moment, we’re still experimenting
with handmade normal maps since they do take
a while to make, especially for these multi-frame animations that our characters have. But here's an example of what
the protagonist looks like with them applied
at different intensities. On the left, there are no normals. In the middle,
we have them at full intensity and it looks cool,
but it’s a bit overdramatic. And so, on the right there is
something that we would use in the final game. It’s just something
in between the two. And so, wrapping up, what should you take away
from this presentation? Nothing is set in stone.
You should experiment to see what works best
with your visual style. And also, you don’t have to use
all the latest technology to make your game look good. Sometimes just doing what
looks right is better than using these complex or
physically accurate techniques. And so, I just want to say
thanks to the team, specifically Alex,
Toma, and Kristina for helping us
create this presentation. And then we’ve got some
references there at the bottom, specifically the INSIDE one
that really influenced us a lot. And yeah, that’s all I’ve got.
