[Don't blame me for this article. Justisaur demanded it! ;-)]
I'll be the first to say I'm not the best person to be writing it.
Feel free to correct me as needed. I'm just laying the groundwork for
the discussion.
So, the issue at hand is nvidia's recent announcement at GDC about its
new DLSS5 technology. It's garnered a lot of attention, with many
decrying it as an AI-slop generator (and inspiring a lot of funny meme
images).
Then nvidia CEO Huang chimed in with a (somewhat condescending)
response about how everybody was 'completely wrong' and tried to
defend the tech... which went about as well as you might expect. The
tech itself has gotten few defenders (outside of nvidia), and many
developers are expressing dismay at what it does.
== THE TECH ==
What is DLSS5? Poorly named, is what it is, I'd say.
Nvidia's previous DLSS technologies were basically upscalers and,
later, frame-interpolators that smoothed out the framerate. But I
think it's important to understand what that older technology was
doing in order to compare it to the new one.
Say you're playing a video game. It uses 3D graphics (polygons),
texture mapping, and lighting effects to generate a complicated scene
on your screen. This requires a lot of GPU power, so your computer can
only manage this 30 times per second. Nvidia's DLSS frame generation
would look at two consecutive rendered frames and generate a new image
'halfway' between them, inserting that into the flow and essentially
smoothing out the framerate a bit. Your actual framerate was still
30fps, but it looked and felt like 45, 60 or more (depending on how
many extra frames DLSS shoved in there), which made for more pleasing
visuals. (It fucked up your input latency though, because the actual
framerate was still much lower... which is why players of a lot of
twitch-heavy games disabled it.)
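To make the 'halfway frame' idea concrete, here's a toy sketch of my
own (NOT nvidia's actual algorithm, which uses motion vectors and a
neural network rather than a naive blend): the crudest possible
in-between frame is just a per-pixel average of the two real ones.

  import numpy as np

  def naive_halfway_frame(frame_a, frame_b):
      # Blend two RGB frames (HxWx3 uint8 arrays) into a crude
      # in-between frame. Real frame generation warps pixels along
      # motion vectors; a plain average ghosts anything that moves.
      a = frame_a.astype(np.float32)
      b = frame_b.astype(np.float32)
      return ((a + b) / 2.0).astype(np.uint8)

  # 30 real frames + 1 generated frame after each = 60fps on screen,
  # but your inputs are still only sampled at the real 30fps.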
DLSS5 --the new tech-- is somewhat similar, except instead of just
adding interpolated frames in between 'real' frames, it is ALSO
adjusting the appearance of every frame using generative AI. It takes
the output image and runs an AI filter on it. In the examples nvidia
used in its announcement, the filter seemed to be tailored to 'make
everything look more lifelike', but (according to nvidia) this filter
can be tailored somewhat. And, undeniably, in the before-DLSS5 and
after-DLSS5 images shown, the characters in the latter looked a lot
more realistic.
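Conceptually --and this is my guess at the shape of the pipeline, not
anything nvidia has published-- it resembles an image-to-image pass
with an off-the-shelf generative model:

  # Rough stand-in using the open-source 'diffusers' library; nvidia's
  # actual model, weights, and interface are unknown and proprietary.
  import torch
  from diffusers import StableDiffusionImg2ImgPipeline
  from PIL import Image

  pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
      "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
  ).to("cuda")

  frame = Image.open("rendered_frame.png").convert("RGB")
  # Low 'strength' keeps the output close to the rendered frame while
  # nudging it toward the model's idea of 'photorealistic'.
  out = pipe(prompt="photorealistic", image=frame, strength=0.3).images[0]
  out.save("filtered_frame.png")

Doing something like that to every frame, fast enough not to tank the
framerate, is presumably where the two-5090s requirement (see below)
comes from.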
== THE UPROAR ==
So why is this bad? Is it bad?
There are a variety of issues that have been brought up.
First, the technology alters the visuals regardless of the artistic
desires of the game's developers. In one example, a character --a
harried female FBI agent rushing to handle a disaster-- was given a
drastic makeover where, amongst other changes, her hair color was
changed (original: all blond; after DLSS5: blond with brownish roots)
and make-up was added. In another, a character with a sharp (and
somewhat ugly) haircut had the trim altered, making the style less
shocking. There are other examples too.
This is because DLSS5 takes the frame from the game and runs it
through a generative AI, which takes the unique look of the character
and averages it out against 'real life' images. Because the
often-unique appearances of PC game characters don't match, it smooths
out many of those differences to fit its database of faces. It looks
more real, but often the point of the artist who created those
characters was to emphasize those differences. So DLSS5 doesn't abide
by the artists' choices.
[This is why so many developers are aghast at the technology]
Generative AI is also very bad at consistency. It's smoothing out
these images based on its own database of what faces should look like.
But video games run at 30 or 60 (or more) frames per second, and the
lighting, direction and focus (not to mention things like shake,
blood, smoke, etc.) can change in seconds. The generative AI just
takes the current frame and smooths it out to an average. The output
doesn't really have to match what it showed two minutes ago... so
characters could very well start looking different scene to scene. Not
radically different --DoomGuy isn't going to appear buff one scene and
nebbish the next-- but details won't remain consistent. It's probably
no coincidence that _all_ the scenes nvidia used had the characters
remain almost perfectly still and in constant lighting.
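Here's a toy illustration of why per-frame filtering flickers (mine,
not nvidia's code -- the real filter is a neural network, not this
blend): each frame is processed independently, so nothing ties frame
N's output to frame N-1's. The usual band-aid, blending each output
toward the previous one, just trades flicker for ghosting.

  import numpy as np

  # Stand-in for 'the average face in the training data'.
  MEAN_FACE = np.full((64, 64, 3), 128.0, dtype=np.float32)

  def filter_frame(frame, strength=0.5):
      # Toy 'generative filter': pull the frame toward an average face.
      # It only sees the current frame, so two similar-but-different
      # inputs can land on visibly different outputs.
      return (1 - strength) * frame + strength * MEAN_FACE

  def stabilized(frames, alpha=0.8):
      # Exponential moving average over outputs: less flicker, more ghosting.
      prev = None
      for frame in frames:
          out = filter_frame(np.asarray(frame, dtype=np.float32))
          if prev is not None:
              out = alpha * out + (1 - alpha) * prev
          prev = out
          yield out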
The messaging is also an issue. As mentioned, I think nvidia should
have separated this tech from its DLSS tech. But also, nvidia has
implied that the tech doesn't actually change the appearance of
characters. "It doesn't affect the underlying textures and geometry",
they say. True... but it overlays its generative AI imagery on top of
that geometry. Nvidia is implying that it's mostly just a new lighting
effect when --even in their own examples-- it is obviously doing much
more.
Developers also appear to have relatively little control over the
effect, despite claims to the contrary. When pressed, nvidia admitted
that they can block certain scenes from being processed and increase
or decrease the intensity, but actually changing what gets done? That
doesn't seem to be in the cards.
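If that's accurate, the whole developer-facing surface boils down to
something like this (a purely hypothetical settings block I made up to
show how little the reported knobs cover -- nvidia has published no
such API):

  # Invented names, for illustration only.
  dlss5_settings = {
      "enabled": True,                # the on/off switch nvidia points to
      "intensity": 0.6,               # 'more or less' filtering, per nvidia
      "excluded_scenes": ["intro", "credits"],  # scenes blocked from processing
      # Conspicuously absent: anything steering WHAT the filter does --
      # no style targets, no per-character protection, no art hooks.
  }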
Also, it's likely to be another proprietary nvidia tech, rather than
using an open standard.
Oh, also, at the moment it only works if you have TWO 5090 RTX cards
running in tandem and at full speed. Although presumably in future
revisions of the tech, you won't need the most powerful cards on the
market (and a small nuclear reactor for power) to take advantage of
it.
So there are reasons for the upset.
== IS IT REALLY THAT BAD? ==
That I can't answer. For some games, I think the effect is pretty
good. It's a cheap-and-easy way to make your games look 'more real'...
and, you know, if I were making an asset-flip game with no artistic
character of its own, this would be a cool way to make my game look competitive. Imagine "Supermarket Simulator" with visuals on par with
the latest AAA game!
But for everything else? It comes with downsides. It sacrifices the
artistic intent of the developer. It averages out visuals. It makes
more unique-looking games less unique-looking. It pushes 'ultra
realism' at a cost to stuff like cel-shading and other visual styles.
It's proprietary and requires expensive, power-hungry tech. And nvidia
hasn't been honest about what it does in any way.
So while I'm not sure the huge uproar is deserved, neither is it
entirely undeserved. This is a technology nobody really wanted or
needed, and it was poorly represented by nvidia (and then their CEO
doubled down and 'blamed the customer' rather than reading the room).
Some pushback was necessary.
IMHO. You may disagree. Like I said, I've only done minimal research
into this mess. Heck, it's nothing that will directly affect me
anytime soon (I don't even have ONE 5090 RTX card and --not having
AI-bro money-- am unlikely to acquire one for years). So feel free to
add corrections or your own opinions.
But if you want to talk DLSS5, at least now maybe we have a common understanding. Hopefully. Unless I totally fucked up my explanations.
But even then, it'll give you all something to talk about.
Do you care about DLSS5? Is it good or bad tech? Is all the noise on
the Internet about it deserved? Do you think the meme images are
funny? And will anybody buy me two 5090 RTX cards so I can see what it
does for myself? ;-)
I see two issues here. One is forcing the company's AI into/on every
game a customer plays. I can very easily see it evolving into Nvidia
dictating what games (and other software) are allowed to do. And
preventing some games and software from running at all if Nvidia wants
to shut them down. (Hello Microsoft Skynet kind of situation.) This is
"just" more AI-Tech Bros power play shit.
Two is the way they seem to be introducing it. "We know what's best
for you, digital serfs! So shut up and pay your subscription or we'll
permanently shut down your computer!" It isn't about making a better
product for the customers, it's about control of the customers.
On Mon, 23 Mar 2026 07:25:45 -0700, Dimensional Traveler
<dtravel@sonic.net> said this thing:
I see two issues here. One is forcing the company's AI into/on every
game a customer plays. I can very easily see it evolving into Nvidia
dictating what games (and other software) are allowed to do. And
preventing some games and software from running at all if Nvidia wants
to shut them down. (Hello Microsoft Skynet kind of situation.) This is
"just" more AI-Tech Bros power play shit.
In fairness, this /isn't/ happening. Just like 'old-fashioned' DLSS, developers have to enable it in their games. It isn't something that
the hardware just does on its own. Maybe it can be forced on by the
end-user despite a lack of support in the game (sort of like toggling
'always use vsync' in the nvidia control panel) but that rarely works
as well as it being engaged through the game itself, because then the
game knows the feature is being used and can optimize itself for it.
Nvidia has made it very clear, 'if you don't like it, you can turn it
off,' both to developers and to customers.
On the tech front, I'm wondering how they get the full data required
for such a model in the card, or if the giant amounts of data are put
in the drivers, making them bloat insanely in size. Or if they're
fudging it and it's really going out to the cloud for it, which isn't
going to keep up.
The processing power should again be far more than they can manage on
a single computer; again, seems like they might be faking it.
I don't believe them.
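For scale, a back-of-envelope of my own (assumed numbers, nothing from
nvidia): fitting the weights on the card isn't the hard part.

  # Could a generative model live entirely in VRAM? Rough arithmetic.
  params = 7e9            # assume a ~7-billion-parameter image model
  bytes_per_param = 1     # assume 8-bit quantized weights
  model_gb = params * bytes_per_param / 1e9   # ~7 GB of weights
  vram_gb = 32            # an RTX 5090 ships with 32 GB of GDDR7
  print(f"model ~{model_gb:.0f} GB vs {vram_gb} GB VRAM")  # fits easily

The hard part is running a pass like that 60+ times a second, which
may be why it currently takes two cards.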