10 Vital Things Video Game Graphics Need to Reach the Next Level

You think they look good now? Just wait.

Image Source: Rockstar Games

As generations of gaming systems continue to evolve and improve, players' eyes continue to scrutinize and analyze the games that accompany them. While the industry's graphical bar keeps rising, a few aspects of game graphics are still the first to get picked apart. Graphics are constantly improving to the point where some items on this list are admittedly nitpicks, but looking back at how far we've come, there is always more to be desired. Here are 10 things video game graphics need to improve to reach the next level.

Smooth Out the Characters’ Hair

Image Source: EA via Twinfinite

If there's one thing that video game graphics consistently miss the mark on, it's a character's hair. That's not to say hair always looks bad, but it can be the one aspect of a scene that pulls the player's focus away from the story. During cutscenes, the hair is usually part of a fully computer-generated model placed into a scene under controlled circumstances; it's during gameplay that you can more easily see the visuals break down.

These issues usually come in the form of either lighting or texture problems, with the lighting color making the hair look brutally artificial, or textures not sitting properly on a character's head. Depending on the character and their hairstyle, it might come out looking a bit straw-like or wiry as opposed to full and fluffy. Unfortunately, the more individual hairs a character has, the more performance simulating them requires, and that's generally the point where developers have to choose what matters more for their game.
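
To give a rough sense of why strand counts get expensive, here is a minimal sketch of a single hair strand simulated with Verlet integration and distance constraints. The structure and constants are illustrative, not taken from any shipping engine; a real hair system runs thousands of these per character every frame, on top of collision and shading.

```python
import math

# A minimal sketch of one hair strand, modeled as a chain of points and
# advanced with Verlet integration plus a distance constraint between
# neighbors. Real strand-based hair systems run thousands of these per
# character, which is where the performance cost comes from.

GRAVITY = (0.0, -9.8)
SEGMENT_LENGTH = 0.05
DT = 1.0 / 60.0

def step_strand(points, prev_points, root):
    """Advance one strand by a frame. points[0] is pinned to the scalp."""
    for i in range(1, len(points)):
        x, y = points[i]
        px, py = prev_points[i]
        prev_points[i] = (x, y)
        # Verlet step: new position from implied velocity plus gravity.
        points[i] = (2 * x - px + GRAVITY[0] * DT * DT,
                     2 * y - py + GRAVITY[1] * DT * DT)
    points[0] = prev_points[0] = root  # pin the root to the head

    # Relax each segment toward its rest length so the strand doesn't
    # stretch like rubber; a few passes per frame is typical.
    for _ in range(4):
        for i in range(len(points) - 1):
            (ax, ay), (bx, by) = points[i], points[i + 1]
            dx, dy = bx - ax, by - ay
            dist = math.hypot(dx, dy) or 1e-9
            diff = (dist - SEGMENT_LENGTH) / dist
            if i == 0:  # root is pinned; move only the free end
                points[i + 1] = (bx - dx * diff, by - dy * diff)
            else:
                points[i] = (ax + dx * diff * 0.5, ay + dy * diff * 0.5)
                points[i + 1] = (bx - dx * diff * 0.5, by - dy * diff * 0.5)

# Example: a 10-point strand hanging from a scalp point at (0, 1.8).
strand = [(0.0, 1.8 - i * SEGMENT_LENGTH) for i in range(10)]
prev = list(strand)
for _ in range(60):  # one second at 60 Hz
    step_strand(strand, prev, root=(0.0, 1.8))
```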

Jedi Survivor had some great examples of hair acting up, though none that really lasted or detracted from my experience. I did play it on the Xbox Series S, so I can imagine these issues are less prevalent on a Series X or PS5. Cal's hair really did look exceptional, reacting to his movements and environment, but there were still times when it would stick out arbitrarily to the side or flash a shockingly unrealistic shade of orange before falling flat against his head again.

Keep Clothing Where It Should Be

Have you ever been playing as a character wearing a cape or a jacket when suddenly their arm or leg clips through the clothing? That's an issue with cloth simulations actively running during gameplay, because there's a lot of math involved in simulating a piece of cloth. When one of the simulated points goes a little loopy, the garment has to try to catch up, and it will often glitch in an unnatural way, causing strange flapping or stiffness in the fabric.
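
For a feel of the math involved, here is a bare-bones mass-spring cloth patch, the classic model behind these simulations; grid size and constants are invented for illustration, and each point is treated as unit mass. If a vertex ever lands far from where it should be, the correction force spikes, which is exactly the kind of "catching up" that produces glitchy flapping.

```python
import numpy as np

# A toy mass-spring cloth: an N x N grid of point masses where every
# spring pulls its two endpoints back toward a rest length each frame.

N = 8
REST = 0.1            # rest length between neighboring points
STIFFNESS = 500.0
DAMPING = 0.98
DT = 1.0 / 60.0
GRAVITY = np.array([0.0, -9.8, 0.0])

pos = np.array([[[i * REST, 0.0, j * REST] for j in range(N)] for i in range(N)])
vel = np.zeros_like(pos)

def springs():
    """Yield index pairs for structural springs (right and down neighbors)."""
    for i in range(N):
        for j in range(N):
            if i + 1 < N: yield (i, j), (i + 1, j)
            if j + 1 < N: yield (i, j), (i, j + 1)

def step():
    """One semi-implicit Euler step: forces, then velocity, then position."""
    force = np.tile(GRAVITY, (N, N, 1)).astype(float)
    for a, b in springs():
        d = pos[b] - pos[a]
        length = np.linalg.norm(d) or 1e-9
        f = STIFFNESS * (length - REST) * (d / length)  # Hooke's law
        force[a] += f
        force[b] -= f
    vel[:] = (vel + force * DT) * DAMPING
    vel[0, :] = 0.0            # pin one edge (e.g. a cape's collar)
    pos += vel * DT

for _ in range(120):           # two seconds; the free edge swings and settles
    step()
```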

Having an object as flexible and light as cloth interact with other objects can also be taxing on a system, so developers have to be somewhat picky about how much clothing can move and where. They also run the risk of the clothing not reacting properly to the character model underneath it, which can cause strange artifacts of its own.

While this issue is much less prevalent in modern games, it is still noticeable under certain circumstances, like characters interacting with each other or making sharp, sudden movements. As the engines that games are built on continue to improve, cloth simulations become easier for developers to implement, improving the overall end result.

Use Foliage to Expand and Blend the Environment

Image Source: GameSpot YouTube

When games are rendered, trees and plant life are some of the most immersive aspects of the environment; however, they can come off as flat once you get up close or try to interact with them. Adding more plants, shrubs, and trees means adding more polygons and shadows for the engine to render, and that can get heavily taxing on a system.

Unreal Engine 5.2 has done exceptional work on foliage, with plants able to interact with objects and actually bend out of the way instead of clipping through the player. Not only that, but developers can add elements into the environment and the engine will procedurally incorporate them into the world, shifting trees, plants, fog, and rocks to seamlessly blend everything together.
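
As a toy illustration of the procedural idea (not Epic's actual tooling), here is a sketch that scatters foliage instances while keeping clear of blocking objects and picks a cheaper level of detail for distant instances; every name and threshold is made up for the example.

```python
import random

def scatter_foliage(bounds, count, blockers, min_clearance=1.0):
    """Drop instances at random (x, z) spots, rejecting any that land
    too close to a blocking object like a rock or a path."""
    placed = []
    for _ in range(count):
        x = random.uniform(bounds[0], bounds[2])
        z = random.uniform(bounds[1], bounds[3])
        if all((x - bx) ** 2 + (z - bz) ** 2 >= (br + min_clearance) ** 2
               for bx, bz, br in blockers):
            placed.append((x, z))
    return placed

def pick_lod(instance, camera, cutoffs=(15.0, 40.0)):
    """Full mesh up close, simpler mesh mid-range, flat billboard far away."""
    dx, dz = instance[0] - camera[0], instance[1] - camera[1]
    dist = (dx * dx + dz * dz) ** 0.5
    if dist < cutoffs[0]:
        return "mesh_high"
    if dist < cutoffs[1]:
        return "mesh_low"
    return "billboard"

# Example: scatter 200 shrubs in a 100x100 area, avoiding two rocks,
# then choose a detail level for each based on camera distance.
shrubs = scatter_foliage((0, 0, 100, 100), 200,
                         blockers=[(20, 20, 3), (70, 55, 5)])
lods = [pick_lod(s, camera=(50, 50)) for s in shrubs]
```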

The more that developers incorporate tools like Unreal Engine into their work, the more the overall quality and photorealism of games on current technology will continue to improve. Graphics-quality sliders then give players the option to choose which settings to run the game at and focus on the elements that are most important to them.

Embrace Nanite for High-Resolution Textures

Image Source: Capcom via Twinfinite

The way game engines render environmental textures has gone through a lot of changes since scenes were 2D images. The trailer for Unrecord has drawn attention from players and visual effects experts alike, and not without reason. Unreal Engine 5 is once again responsible for the game looking as photo-real as it does (albeit with a few tricks of the eye that really drive it home), using the new Nanite system to handle geometry more easily than ever before.

Previously, artists would have to take a high-quality scan of an asset, reduce it to something the engine could process, and bake detail maps onto those lower-poly assets. Now, Nanite allows high-resolution assets to be placed directly into the environment, so artists can focus on choosing assets and blending them into the scene rather than worrying about polygon counts.
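
The core idea, greatly simplified, is that geometry is split into clusters and each cluster refines itself only until its simplification error would be visible on screen. The sketch below is a toy version of that screen-space-error test, not Epic's actual implementation; the Cluster structure and thresholds are invented.

```python
import math

class Cluster:
    """A chunk of geometry with a known simplification error and
    optional finer-detail children (all invented for illustration)."""
    def __init__(self, center, geometric_error, children=None):
        self.center = center                    # world-space position
        self.geometric_error = geometric_error  # meters of error
        self.children = children or []

def screen_space_error(cluster, cam_pos, fov_y, viewport_h):
    """Project the cluster's geometric error into pixels on screen."""
    dist = max(1e-6, math.dist(cluster.center, cam_pos))
    return (cluster.geometric_error / dist) * (viewport_h / (2 * math.tan(fov_y / 2)))

def select_clusters(cluster, cam_pos, fov_y=math.radians(60),
                    viewport_h=1080, max_error_px=1.0):
    """Recurse into finer clusters until error is under a pixel."""
    if screen_space_error(cluster, cam_pos, fov_y, viewport_h) <= max_error_px \
            or not cluster.children:
        return [cluster]
    out = []
    for child in cluster.children:
        out += select_clusters(child, cam_pos, fov_y, viewport_h, max_error_px)
    return out

# A coarse cluster 10m away still shows ~47px of error, so its two
# finer children get selected instead.
root = Cluster((0, 0, 10), 0.5,
               children=[Cluster((0, 0, 10), 0.1), Cluster((1, 0, 10), 0.1)])
visible = select_clusters(root, cam_pos=(0, 0, 0))
```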

Unfortunately, not every game that releases performs the way it was presumably meant to. For example, just after the first mission in Exoprimal – a 2023 release – a character sits on a car that isn't properly rendered in, and it immediately throws off the scene. Perhaps that's a sign of performance issues on the Xbox Series S, but regardless, it feels out of place in a modern game.

Let Unreal Engine’s Path-Tracing Light the Way

Image Source: Capcom via Twinfinite

Lighting is one of the most important aspects of making a scene look photo-real (or match the style being emulated), and also one that has historically been difficult to master. One reason lighting can be so tough is that artists and developers need a clear understanding of the light's source, what the light is interacting with, and how those interactions affect the lighting of the rest of the scene. Lighting directly affects things like shadows, reflections, and colors, all of which are integral to making sure a scene looks how it's supposed to.

During the State of Unreal Engine 5.2 tech demo, the team showed off the way the engine uses Lumen, its dynamic global illumination system, to have light interact with surfaces more realistically than before. A surface has multiple layers – including layers of dirt and dust – that interact with the light according to their position relative to the light source. Until now, surface lighting in games has been flatter and more two-dimensional than light in reality, so strides are being made with techniques like bounce lighting, which emulates the realistic path of light as it travels through a scene.
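
To make bounce lighting concrete, here is a tiny single-sphere path-tracing sketch: each ray that hits a diffuse surface scatters in a random hemisphere direction, and the surface color tints whatever light eventually arrives from the sky. The scene, colors, and constants are all invented for the example; production renderers add importance sampling, direct-light sampling, and much more.

```python
import math
import random

MAX_BOUNCES = 4

def sub(a, b): return tuple(x - y for x, y in zip(a, b))
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    l = math.sqrt(dot(a, a)) or 1e-9
    return tuple(x / l for x in a)

def hit_sphere(center, radius, origin, direction):
    """Distance to the nearest intersection with a sphere, or None."""
    oc = sub(origin, center)
    b = 2 * dot(oc, direction)
    disc = b * b - 4 * (dot(oc, oc) - radius * radius)
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 1e-4 else None

def random_hemisphere_dir(normal):
    """Random direction in the hemisphere around the surface normal."""
    while True:
        d = tuple(random.uniform(-1, 1) for _ in range(3))
        if dot(d, d) <= 1.0:
            break
    return norm(d if dot(d, normal) > 0 else tuple(-x for x in d))

def radiance(origin, direction, depth=0):
    """One sample of incoming light along a ray, in a one-sphere world."""
    if depth >= MAX_BOUNCES:
        return (0.0, 0.0, 0.0)
    t = hit_sphere((0, 0, -3), 1.0, origin, direction)
    if t is None:
        return (0.6, 0.7, 0.9)                    # sky light
    point = tuple(o + t * d for o, d in zip(origin, direction))
    normal = norm(sub(point, (0, 0, -3)))
    indirect = radiance(point, random_hemisphere_dir(normal), depth + 1)
    albedo = (0.8, 0.4, 0.4)                      # reddish diffuse sphere
    return tuple(a * i for a, i in zip(albedo, indirect))

# Average many noisy samples for one pixel looking straight at the sphere.
samples = [radiance((0, 0, 0), (0, 0, -1)) for _ in range(256)]
pixel = tuple(sum(s[i] for s in samples) / len(samples) for i in range(3))
```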

The most noticeable breakdown in games can oftentimes be a character's shadows. Shadows are a direct product of the light itself, and when they're done improperly, they can make a character look disconnected from the world around them. Not only that, but environments need to be lit correctly so that there aren't dark corners in the middle of a room, or corners with no shadows at all. The dinosaurs in Exoprimal serve as a good example of this being an issue, as they often look like they're gliding along the floor, not really connected to the overall environment, because of the way they're lit.

Objects Have to Interact Correctly

One thing that turns out to be rather difficult when simulating or animating a character is making that character interact with objects realistically. The problem is that some physics system has to apply to both the object and the character: the object has to react to the character's touch just as the character has to react to the object. Not to mention that if a character starts to clip through the map, it's surely going to break some of the immersion for the player.

Oftentimes this can be noticed in the way a character and an object move after an interaction. Say a character is lifting a weapon that should be heavy; it's jarring to see them hoist it with one hand like it's made of paper. It's also jarring to see the object move as though it has no mass, unless there's a circumstance like Master Chief holding an Assault Rifle, where it's understood the character is so strong that the object's mass is negligible.

Collision detection in the software game developers use is also a big hurdle for making contact interactions look realistic. The work required for the movement to look accurate takes a heavy toll on the machines rendering these interactions, so artists often have to fall back on cheaper, less precise methods that can look less convincing. As technology continues to improve, these tradeoffs will have to be made less and less, allowing for more realistic, accurate interactions in games.
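
Here's a sketch of one of those cheaper methods: axis-aligned bounding boxes, which many games lean on because the overlap test is nearly free. The imprecision is visible in the sketch itself – a hand or a sword is nothing like a box – which is why close contact handled this way can look unconvincing. The structures are illustrative.

```python
from dataclasses import dataclass

@dataclass
class AABB:
    """Axis-aligned bounding box: the cheapest common collision proxy."""
    min_x: float
    min_y: float
    min_z: float
    max_x: float
    max_y: float
    max_z: float

def overlaps(a: AABB, b: AABB) -> bool:
    """Two boxes collide only if they overlap on every axis."""
    return (a.min_x <= b.max_x and a.max_x >= b.min_x and
            a.min_y <= b.max_y and a.max_y >= b.min_y and
            a.min_z <= b.max_z and a.max_z >= b.min_z)

def push_out(mover: AABB, solid: AABB):
    """Resolve a collision by pushing the mover out along the shallowest axis."""
    depths = {
        "+x": solid.max_x - mover.min_x, "-x": mover.max_x - solid.min_x,
        "+y": solid.max_y - mover.min_y, "-y": mover.max_y - solid.min_y,
        "+z": solid.max_z - mover.min_z, "-z": mover.max_z - solid.min_z,
    }
    axis = min(depths, key=depths.get)   # cheapest direction to escape
    return axis, depths[axis]

# Example: a box-approximated character brushing against a crate.
player = AABB(0, 0, 0, 1, 2, 1)
crate = AABB(0.8, 0, 0, 1.8, 1, 1)
if overlaps(player, crate):
    print(push_out(player, crate))   # ('-x', 0.2): push the player 0.2 along -x
```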

Depending on the circumstances, modern developers might use motion capture to get the visual appearance they're looking for. Game directors can shoot with a physical prop as though filming with a regular camera, which more accurately represents what the object interaction would actually look like. Jedi Survivor's development saw Cameron Monaghan doing the motion capture for Cal Kestis, which grounds the movements and interactions in reality so they don't look artificial in the end product.

Smooth and Diversify Animations

Image Source: Hole in the Sleeve

Animations do go hand in hand with contact interactions, but here I'm referring to the way a character physically appears as they move through space. Movement is at the core of controlling almost every game, so making sure those movements look smooth and realistic is of high value to a developer.

Depending on the graphic style of the game, developers might choose to hand-animate the motions for characters, or, if they can, use motion capture to achieve the intended look. Hole in the Sleeve is a game studio currently working on a parkour/freerunning simulator reminiscent of Skate 3, and they've said they have captured over a hundred animations via motion capture, with more to come during development. That's how you achieve a smooth, realistic experience in a game where movement is the main focus; for a game that doesn't use human or realistic characters, hand-animating those movements gives an artist the freedom to create the character they envisioned.
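
Whether the clips come from an animator or a mocap stage, much of the perceived smoothness comes from how the engine transitions between them. Below is a bare-bones crossfade sketch: sample the outgoing and incoming clips and blend each joint by a fade parameter. A real engine blends quaternions across a whole skeleton; this toy version blends one float per joint, and all names are illustrative.

```python
def sample_clip(clip, t):
    """Return {joint_name: angle} for a clip at time t (looping)."""
    return clip["frames"][int(t * clip["fps"]) % len(clip["frames"])]

def crossfade(clip_a, clip_b, t, fade_start, fade_len):
    """Blend from clip_a into clip_b over fade_len seconds."""
    alpha = min(1.0, max(0.0, (t - fade_start) / fade_len))
    pose_a = sample_clip(clip_a, t)
    pose_b = sample_clip(clip_b, t - fade_start)  # clip_b starts at the fade
    return {j: (1 - alpha) * pose_a[j] + alpha * pose_b[j] for j in pose_a}

walk = {"fps": 30, "frames": [{"knee": 10.0 + i} for i in range(30)]}
run  = {"fps": 30, "frames": [{"knee": 40.0 + i} for i in range(30)]}

# Halfway through a 0.2s fade that starts at t=1.0, the pose is an even mix
# of both clips instead of an instant (and very visible) pop.
print(crossfade(walk, run, t=1.1, fade_start=1.0, fade_len=0.2))
```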

Animations ground players in the game they're playing, and if those animations don't make sense or don't accurately reflect a character's place in the world, it stands out to the eye. Xbox Game Pass has even advertised games on the Xbox Home screen specifically for showing the player's legs in first person, because that's something that truly grounds a character, and players and developers alike are passionate about making it look right.

Let the Game’s Style Guide the Particle Simulation

Image Source: Sony Interactive Entertainment

When I say particle simulations, I’m talking about the leaves that fall through the sky as you walk under a tree, or the bugs floating through the field of view. Those little details draw the player into the game’s world, and they can really help in adding to the stylization of a game when the developers aren’t aiming for photorealism. These particle simulations have to take into account a plethora of different variables in the game engine, one of the most important being wind and wind direction.
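
A wind-driven particle update can be surprisingly small at its core. Here's an illustrative sketch where each leaf's velocity is steered toward the current wind vector, with gravity and a bit of per-leaf flutter so the motion tumbles rather than streams; every constant is made up for the example.

```python
import random

DT = 1.0 / 60.0
DRAG = 0.5       # how quickly a particle adopts the wind's velocity
FLUTTER = 0.8    # random jitter, so leaves tumble instead of streaming
GRAVITY = -1.5   # leaves settle slowly; embers would rise instead

def step_particles(particles, wind):
    """particles: list of dicts with 'pos' and 'vel' [x, y, z] lists."""
    for p in particles:
        p["vel"][1] += GRAVITY * DT          # slow settling
        for axis in range(3):
            # steer velocity toward the wind, plus per-leaf flutter
            p["vel"][axis] += (wind[axis] - p["vel"][axis]) * DRAG * DT
            p["vel"][axis] += random.uniform(-FLUTTER, FLUTTER) * DT
            p["pos"][axis] += p["vel"][axis] * DT

# 100 leaves released at head height, drifting with a steady breeze.
leaves = [{"pos": [0.0, 3.0, 0.0], "vel": [0.0, 0.0, 0.0]} for _ in range(100)]
for _ in range(60):                          # one second of simulation
    step_particles(leaves, wind=(2.0, 0.0, 0.5))
```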

Ghost of Tsushima serves as a great example of particles aiding a game's stylization, but it also demonstrates how much more goes into a particle simulation than just the particles themselves. During the 2021 Game Developers Conference, Bill Rockenbeck explained that the wind was one of the more valuable aspects of the game, but the developers had to do a massive amount of work to make sure it matched the game's style. The wind had to serve as a particle simulation in itself because it was being used as a navigation system, yet it couldn't visually distract from the game's samurai-film style.

Particles and foliage have to enhance and represent the world they belong to; in Ghost of Tsushima's case, the leaves should match the color of the surrounding trees and move in the same direction as the wind. Without paying attention to these key factors, the world may look still and flat as opposed to dynamic and lively. Adding realistic particles like leaves, embers, or fog brings the world to life around the characters and allows for a deeper level of immersion.

Facial Expressions Should Get the Attention They Deserve

You know all that time you spend creating your character in a game like Fallout or Cyberpunk? How much more fulfilling would it be if you could really see the detail in that character throughout the gameplay? I don't just mean the face looking good; I mean it actually being expressive. Movies and some games will use facial capture for a character's expressions when the character is played by someone like Troy Baker or Cameron Monaghan, but those performances obviously don't look as convincing when the character was created by the player.

As the technology that lets artists create detailed characters continues to improve, it will allow for more precise facial animations in games that look like more than a character's mouth opening and closing. That's not to say games have never had impressive facial animations; L.A. Noire built entire mechanics around characters' facial expressions during cutscenes. The flip side is NPCs like those in Grand Theft Auto V, where if you walk up to them while they're talking, their mouths don't reflect what they're actually saying.
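
The standard machinery behind expressive faces is blend shapes (morph targets): the final mesh is the neutral face plus a weighted sum of per-expression vertex offsets, and animation becomes driving those weights over time. Here's a tiny sketch with invented shapes and weights; convincing lip sync is then a matter of driving the right weights from the dialogue, rather than simply opening and closing the jaw.

```python
import numpy as np

# A toy face of 4 vertices. Each blend shape stores per-vertex offsets
# from the neutral pose; expressions layer by summing weighted offsets.
neutral = np.zeros((4, 3))
deltas = {
    "smile":    np.array([[0, .1, 0], [0, .1, 0], [0, 0, 0], [0, 0, 0]]),
    "jaw_open": np.array([[0, 0, 0], [0, 0, 0], [0, -.3, 0], [0, -.3, 0]]),
    "brow_up":  np.array([[0, .2, 0], [0, 0, 0], [0, 0, 0], [0, 0, 0]]),
}

def evaluate_face(weights):
    """weights: {shape_name: 0..1}. Returns deformed vertex positions."""
    mesh = neutral.copy()
    for name, w in weights.items():
        mesh += w * deltas[name]
    return mesh

# A half smile with the jaw slightly open: two expressions layered at once,
# which a mouth that only flaps open and shut can never produce.
print(evaluate_face({"smile": 0.5, "jaw_open": 0.2}))
```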

On top of that, artists have to have a clear understanding of human anatomy so that the movements look as good as the textures themselves. If the game's direction calls for facial capture from a physical actor, the expressions will likely be more realistic than in games where faces are hand-animated and artists have to make every decision about how the face moves using only reference. Without these steps forward, characters may never fully escape the uncanny valley, no matter how close they come.

For future games to really reach the next level, characters will eventually have to stop looking like video game characters and keep inching closer to reality. Games that don't aim to look photo-real may not be able to use full-on motion capture for a performance, but they can use improved animation techniques to refine the motions that do need to look realistic without shattering their budget.

Allow Players to Meaningfully Alter the Environment

Image Source: EA

When I first played Battlefield 3 and DICE used the Frostbite engine to show me that throwing a grenade would actually have an effect on the environment, I thought it signaled a shift in gaming environments from then on. Unfortunately, other games haven't really taken full advantage of that idea. Games like Rainbow Six Siege allow some walls to be broken, but there's a limit to how much of the environment can be destroyed.

For matches that last as long as a game of Battlefield can, it is incredibly interesting to watch the landscape of a map completely change over the course of a battle. That sense is often lost in other games where, after the dust settles, the map looks just the same as when it started. Having player actions leave a visible, game-changing effect on the environment is one of the best ways to immerse a player in the action. Not to mention, watching a building collapse is almost always breathtaking.
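
Persistence is mostly bookkeeping: the world has to record what was destroyed and keep honoring it. Here's a toy sketch where a wall is a grid of cells, an explosion clears every cell in its radius, and the surviving cells can be saved so the scars outlive the session; the class and numbers are invented for illustration.

```python
import math

class DestructibleWall:
    """A wall as a grid of intact cells; damage is just removed cells."""
    def __init__(self, width, height, cell_size=0.25):
        self.cell_size = cell_size
        self.intact = {(x, y) for x in range(width) for y in range(height)}

    def explode(self, cx, cy, radius):
        """Remove every cell whose center lies inside the blast radius."""
        r_cells = radius / self.cell_size
        self.intact -= {(x, y) for (x, y) in self.intact
                        if math.hypot(x - cx, y - cy) <= r_cells}

    def save_state(self):
        """Persist which cells survive, so the scars outlast the session."""
        return sorted(self.intact)

wall = DestructibleWall(40, 12)
wall.explode(cx=10, cy=6, radius=1.5)   # grenade: 6-cell blast radius
wall.explode(cx=30, cy=3, radius=3.0)   # breach charge: much bigger hole
print(len(wall.save_state()), "of", 40 * 12, "cells still standing")
```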

This isn't to say every game has to let players destroy every building they walk into, but being able to leave a lasting mark on a game makes you feel like you're having a personal experience. It would be amazing if – while playing Starfield, for instance – you could destroy something on a planet, travel across the galaxy on an adventure, then go back and see that things were just the way you left them. It makes the game feel like actions have consequences, leaving more choice in the hands of the players.

That's it for the top 10 things video game graphics need to reach the next level. As time goes on, the items on this list will be addressed and improved in some capacity, but it's important to keep in mind all the different factors that go into games turning out the way they do. Regardless of development oversights or budgeting issues, we can always be grateful for the games we do get to play and look forward to the games yet to come.

About the author


Nick Rivera

Nick Rivera graduated from the University of Pittsburgh in 2021 studying Digital Media and started as a Freelance Writer with Twinfinite in early 2023. Nick plays anything from Halo to Stardew Valley to Peggle, but is a sucker for a magnetic story.
