• The DLSS5 uproar

    From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Sun Mar 22 12:19:08 2026
    From Newsgroup: comp.sys.ibm.pc.games.action


    [Don't blame me for this article. Justisaur demanded it! ;-)]

    I'll be the first to say I'm not the best person to be writing it.
    Feel free to correct me as needed. I'm just laying the groundwork for
    the discussion.

    So, the issue at hand is nvidia's recent announcement at GDC about its
    new DLSS5 technology. It's garnered a lot of attention, with many
    decrying it as an AI-slop generator (and a lot of funny meme-images).
    Then nvidia CEO Huang chimed in with a (somewhat condescending)
    response about how everybody was 'completely wrong' and tried to
    defend the tech... which went about as well as you might expect. The
    tech itself has gotten few defenders (outside of nvidia), and many
    developers are expressing dismay at what it does.




    == THE TECH ==
    What is DLSS5? Poorly named, is what it is, I'd say.

    Nvidia's previous DLSS technologies were upscalers and, later, frame
    generators that smoothed out the framerate. But I think it's
    important to understand what that older technology was doing in order
    to compare it to the new one.

    Say you're playing a video game. It uses 3D graphics (polygons),
    texture mapping, and lighting effects to generate a complicated scene
    on your screen. This requires a lot of processing power, so your
    computer can only manage this 30 times per second. Nvidia's DLSS
    frame generation would look at two consecutive rendered frames and
    generate a new image 'halfway' between them, inserting that into the
    flow and essentially smoothing out the framerate. Your actual
    framerate was still 30fps, but it looked and felt like 45, 60 or more
    (depending on how many extra frames DLSS shoved in there), which made
    for more pleasing visuals. (It fucked up your input though, because
    the actual framerate was still much lower... which is why players of
    a lot of twitch-heavy games disabled it.)


    DLSS5 --the new tech-- is somewhat similar, except instead of just
    adding interpolated frames in between 'real' frames, it is ALSO
    adjusting the appearance of every frame using generative AI. It takes
    the output image and runs an AI filter on it. In the examples nvidia
    used in its announcement, the filter seemed to be tailored to 'make
    everything look more lifelike', but (according to nvidia) this filter
    can be tailored somewhat. And, undeniably, in the before-and-after
    images nvidia showed, the DLSS5 versions of the characters looked a
    lot more realistic.
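
    (Again in toy Python, here's how I *think* the per-frame flow works;
    this is my reading of nvidia's description, not anything they've
    published, and the function names are made up.)

        def present_dlss5(render_frame, ai_filter, fake_frame, frame_count):
            # Hypothetical sketch: every rendered frame gets pushed through
            # a generative 'make it lifelike' pass, and interpolated frames
            # are still inserted in between, as with older DLSS.
            prev = None
            for i in range(frame_count):
                raw = render_frame(i)        # the game's actual output
                styled = ai_filter(raw)      # the new generative pass
                if prev is not None:
                    yield fake_frame(prev, styled)   # old-style in-between frame
                yield styled
                prev = styled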




    == THE UPROAR ==

    So why is this bad? Is it bad?

    There are a variety of issues that have been brought up.

    First, the technology alters the visuals regardless of the artistic
    desires of the game's developers. In one example, a character --a
    harried female FBI agent rushing to handle a disaster-- was given a
    drastic makeover where, amongst other changes, her hair color was
    changed (original: all blond; after DLSS5: blond with brownish roots)
    and make-up was added. In another, a character with a sharp (and
    somewhat ugly) haircut had the trim altered, making the style less
    shocking. There are other examples too.

    This is because DLSS5 takes each frame from the game and runs it
    through a generative AI model, which takes the unique look of the
    character and averages it out against 'real life' images. Because the
    often unique appearances of PC game characters don't match, it
    smooths out many of those differences to fit its database of faces.
    It looks more real, but often the point of the artist who created
    those characters was to emphasize the differences. So DLSS5 doesn't
    abide by the artists' choices.

    [This is why so many developers are aghast at the technology]

    Generative AI is also very bad at consistency. It's smoothing out
    these images based on its own database of what faces should look
    like. But video games run at 30 or 60 (or more) frames per second,
    and the lighting, direction and focus (not to mention things like
    shake, blood, smoke, etc.) can change in seconds. The generative AI
    just takes the current frame and smooths it toward an average. The
    output doesn't really have to match what it showed two minutes ago...
    so characters could very well start looking different scene to scene.
    Not radically different --DoomGuy isn't going to appear buff in one
    scene and nebbish in the next-- but details won't remain consistent.
    It's probably no coincidence that _all_ the scenes nvidia used had
    the characters remain almost perfectly still and in constant
    lighting.
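
    (To make the consistency problem concrete, here's a toy illustration:
    if the filter has no memory of the previous frame, nothing stops the
    same input from coming out slightly different each time. Hypothetical
    code, obviously; the real filter is a neural net, not a dice roll.)

        import random

        def stateless_filter(frame):
            # Nothing here remembers what the last frame looked like.
            frame = dict(frame)
            frame["hair_roots"] = random.choice(["blond", "brownish", "dark"])
            return frame

        agent = {"character": "FBI agent", "hair_roots": "blond"}
        frames = [stateless_filter(agent) for _ in range(5)]
        # Five identical inputs, potentially five different 'enhanced'
        # outputs, because no temporal state ties frame N to frame N-1.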

    The messaging is also an issue. As mentioned, I think nvidia should
    have separated this tech from its DLSS tech. But also, nvidia has
    implied that the tech doesn't actually change the appearance of
    characters. "It doesn't affect the underlying textures and geometry",
    they say. True... but it overlays its generative AI imagery on top of
    that geometry. Nvidia is implying that it's mostly just a new lighting
    effect when --even in their own examples-- it is obviously doing much
    more.

    Apparently, developers also have relatively little control over the
    effects, despite claims to the contrary. When pressed, nvidia has
    admitted that developers can block certain scenes from getting
    processed and increase or decrease the intensity, but actually
    changing what gets done? That doesn't seem to be in the cards.
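
    (In other words, if the integration were a config file --and this is
    entirely hypothetical, nvidia hasn't published any API-- the only
    knobs would look something like this:)

        # Hypothetical settings, just to show how little control that is:
        dlss5_settings = {
            "enabled": True,
            "intensity": 0.6,     # how hard the 'lifelike' pass hits
            "excluded_scenes": ["intro_cutscene", "stylized_flashback"],
            # Conspicuously missing: any way to say *what* the filter may
            # or may not change (hair, make-up, faces, overall art style).
        }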

    Also, it's likely to be another proprietary nvidia tech, rather than
    using an open standard.

    Oh, also, at the moment it only works if you have TWO 5090 RTX cards
    running in tandem and at full speed. Although presumably in future
    revisions of the tech, you won't need the most powerful cards on the
    market (and a small nuclear reactor for power) to take advantage of
    it.

    So there are reasons for the upset.



    == IS IT REALLY THAT BAD? ==

    That I can't answer. For some games, I think the effect is pretty
    good. It's a cheap-and-easy way to make your games look 'more
    real'... and, you know, if I were making an asset-flip game with no
    artistic character of its own, this would be a cool way to make my
    game look competitive. Imagine "Supermarket Simulator" with visuals
    on par with the latest AAA game!

    But for everything else? It comes with downsides. It sacrifices the
    artistic intent of the developer. It averages out visuals. It makes
    more unique-looking games less unique-looking. It pushes 'ultra
    realism' at a cost to cel-shading and other visual styles. It's
    proprietary and requires expensive, power-hungry tech. And nvidia
    hasn't been honest about what it does in any way.

    So while I'm not sure the huge uproar is deserved, neither is it
    entirely undeserved. This is a technology nobody really wanted or
    needed, and it was poorly represented by nvidia (and then their CEO
    doubled down and 'blamed the customer' rather than reading the room).
    Some pushback was necessary.

    IMHO. You may disagree. Like I said, I've only done minimal research
    into this mess. Heck, it's nothing that will directly affect me
    anytime soon (I don't even have ONE RTX 5090 card, and --not having
    AI bro money-- am unlikely to acquire one for years). So feel free to
    add corrections or your own opinions.

    But if you want to talk DLSS5, at least now maybe we have a common
    understanding. Hopefully. Unless I totally fucked up my explanations.
    But even then, it'll give you all something to talk about.


    Do you care about DLSS5? Is it good or bad tech? Is all the noise on
    the Internet about it deserved? Do you think the meme images are
    funny? And will anybody buy me two 5090RTX cards so I can see what it
    does for myself? ;-)







    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Rin Stowleigh@nospam@nowhere.com to comp.sys.ibm.pc.games.action on Sun Mar 22 13:29:49 2026
    From Newsgroup: comp.sys.ibm.pc.games.action


    Anything that adds input latency to games tends to fuck up action
    games like shooters.

    Features like this make cutscenes look slightly better at the expense
    of gameplay. No thanks.
    --- Synchronet 3.21d-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Mon Mar 23 07:25:45 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 3/22/2026 9:19 AM, Spalls Hurgenson wrote:

    [full article snipped]

    I see two issues here. One is forcing the company's AI into/on every
    game a customer plays. I can very easily see it evolving into Nvidia
    dictating what games (and other software) are allowed to do. And
    preventing some games and software from running at all if Nvidia
    wants to shut them down. (Hello Microsoft Skynet kind of situation.)
    This is "just" more AI-Tech Bros power play shit.

    Two is the way they seem to be introducing it. "We know what's best
    for you, digital serfs! So shut up and pay your subscription or we'll
    permanently shut down your computer!" It isn't about making a better
    product for the customers, it's about control of the customers.

    I know those sound extreme, but given what is already happening in
    politics and society because of AI development (power-hungry AI
    datacenters causing electrical shortages and price increases, the US
    Dept of Defense's fight with an AI developer who doesn't want to
    supply Terminator software, the expectation of massive job losses on
    top of the ones we are already seeing, etc.), even just the
    appearance of the possibility is really, really bad optics on
    Nvidia's part. Multiplied by Nvidia's president's obvious attitude
    issues.
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Mon Mar 23 11:05:06 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Mon, 23 Mar 2026 07:25:45 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    I see two issues here. One is forcing the company's AI into/on every
    game a customer plays. I can very easily see it evolving into Nvidia
    dictating what games (and other software) are allowed to do. And
    preventing some games and software from running at all if Nvidia
    wants to shut them down. (Hello Microsoft Skynet kind of situation.)
    This is "just" more AI-Tech Bros power play shit.


    In fairness, this /isn't/ happening. Just like 'old-fashioned' DLSS,
    developers have to enable it in their games. It isn't something that
    the hardware just does on its own. Maybe it can be forced on by the
    end-user despite a lack of support in the game (sort of like toggling
    'always use vsync' in the nvidia control panel) but that rarely works
    as well as it being engaged through the game itself, because then the
    game knows the feature is being used and can optimize itself for it.

    Nvidia has made it very clear, both to developers and to customers:
    'if you don't like it, you can turn it off.'



    Two is the way they seem to be introducing it. "We know what's best
    for you, digital serfs! So shut up and pay your subscription or we'll
    permanently shut down your computer!" It isn't about making a better
    product for the customers, it's about control of the customers.


    Nvidia's messaging is definitely a problem. Not just the direct
    messaging to its nominal customers (game developers and players), with
    the CEO saying we're all 'completely wrong' and then misleading people
    about the capabilities and methodology of the technology. That's bad
    enough.

    But --as you've pointed out-- nvidia is gung-ho invested in generative
    AI and has bet heavily on it being the next big thing. It's a big part
    of the cycle that's feeding the AI bubble, and sales of AI-related
    technology are what propelled nvidia to become the most-valued
    company on Earth (it has a $4.4 trillion valuation, close to Apple
    and Facebook combined!).

    So they have a major stake in promoting AI... even as the game
    industry has shown little interest in the tech (outside of the
    C-levels, who are salivating at the idea of being able to downsize
    three quarters of their staff). More, nvidia has openly neglected
    their gaming customers in favor of AI tech, to the point where their
    video cards are often impossible for the average gamer to buy, and
    even when you can get one, the cards are over-priced... and nvidia
    has shown no inclination to change this.

    (Oh, and their cards keep catching fire too, because why not have
    them draw so much power that the wiring literally starts to melt.)



    None of which really speaks to whether the uproar over the
    capabilities and methods of DLSS5 is deserved... but it does explain
    why this issue has blown up as quickly and loudly as it has. Gamers
    are /pissed/ at nvidia.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Mon Mar 23 17:25:27 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 3/23/2026 8:05 AM, Spalls Hurgenson wrote:
    On Mon, 23 Mar 2026 07:25:45 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    I see two issues here. One is forcing the company's AI into/on every
    game a customer plays. I can very easily see it evolving into Nvidia
    dictating what games (and other software) are allowed to do. And
    preventing some games and software from running at all if Nvidia wants
    to shut them down. (Hello Microsoft Skynet kind of situation.) This is
    "just" more AI-Tech Bros power play shit.


    In fairness, this /isn't/ happening. Just like 'old-fashioned' DLSS, developers have to enable it in their games. It isn't something that
    the hardware just does on its own. Maybe it can be forced on by the
    end-user despite a lack of support in the game (sort of like toggling
    'always use vsync' in the nvidia control panel) but that rarely works
    as well as it being engaged through the game itself, because then the
    game knows the feature is being used and can optimize itself for it.

    Nvidia has made it very clear, both to developers and to customers:
    'if you don't like it, you can turn it off.'

    I don't believe them.
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Justisaur@justisaur@yahoo.com to comp.sys.ibm.pc.games.action on Tue Mar 24 09:19:06 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 3/22/2026 9:19 AM, Spalls Hurgenson wrote:

    [Don't blame me for this article. Justisaur demanded it! ;-)]

    I'll be the first to say I'm not the best person to be writing it.
    Feel free to correct me as needed. I'm just laying the groundwork for
    the discussion.

    So, the issue at hand is nvidia's recent announcement at GDC about its
    new DLSS5 technology. It's garnered a lot of attention, with many
    decrying it as an AI-slop generator (and a lot of funny meme-images).
    Then nvidia CEO Huang chimed in with a (somewhat condescending)
    response about how everybody was 'completely wrong' and tried to
    defend the tech... which went about as well as you might expect. The
    tech itself has gotten few defenders (outside of nvidia), and many
    developers are expressing dismay at what it does.

    == THE TECH ==
    What is DLSS5? Poorly named, is what it is, I'd say.

    Nvidia's previous DLSS technologies were upscalers and, later, frame
    generators that smoothed out the framerate. But I think it's
    important to understand what that older technology was doing in order
    to compare it to the new one.

    Say you're playing a video game. It uses 3D graphics (polygons),
    texture mapping, and lighting effects to generate a complicated scene
    on your screen. This requires a lot of processing power, so your
    computer can only manage this 30 times per second. Nvidia's DLSS
    frame generation would look at two consecutive rendered frames and
    generate a new image 'halfway' between them, inserting that into the
    flow and essentially smoothing out the framerate. Your actual
    framerate was still 30fps, but it looked and felt like 45, 60 or more
    (depending on how many extra frames DLSS shoved in there), which made
    for more pleasing visuals. (It fucked up your input though, because
    the actual framerate was still much lower... which is why players of
    a lot of twitch-heavy games disabled it.)

    At least trying it in Control, it seemed to help the shading quite a
    lot, making the rather poor boxes, machines, and walls look much
    better and more distinguishable. Unless I'm conflating it with
    something else.



    DLSS5 --the new tech-- is somewhat similar, except instead of just
    adding interpolated frames in between 'real' frames, it is ALSO
    adjusting the appearance of every frame using generative AI. It takes
    the output image and runs an AI filter on it. In the examples nvidia
    used in its announcement, the filter seemed to be tailored to 'make
    everything look more lifelike', but (according to nvidia) this filter
    can be tailored somewhat. And, undeniably, in the before-and-after
    images nvidia showed, the DLSS5 versions of the characters looked a
    lot more realistic.

    So yes and no. They look more realistic, except it changes the actual
    appearance of people. They don't look like the same people; they look
    like the usual AI people that are everywhere, with somewhat different
    bone structure, etc. The skin texture looks better, but it completely
    changes who and what they look like, and their expression.

    It's not literally the 'enhance' that cop shows had for some time.

    == THE UPROAR ==

    So why is this bad? Is it bad?

    Yes it's bad, as I mention above. At least if you don't want all the
    characters looking the same and with different expressions.

    There are a variety of issues that have been brought up.

    First, the technology alters the visuals regardless of the artistic
    desires of the game's developers. In one example, a character --a
    harried female FBI agent rushing to handle a disaster-- was given a
    drastic makeover where, amongst other changes, her hair color was
    changed (original: all blond; after DLSS5: blond with brownish roots)
    and make-up was added. In another, a character with a sharp (and
    somewhat ugly) haircut had the trim altered, making the style less
    shocking. There are other examples too.

    Yes. If it just improved the textures I'd happily jump on that, but
    it's making artistic changes.


    This is because DLSS5 takes each frame from the game and runs it
    through a generative AI model, which takes the unique look of the
    character and averages it out against 'real life' images. Because the
    often unique appearances of PC game characters don't match, it
    smooths out many of those differences to fit its database of faces.
    It looks more real, but often the point of the artist who created
    those characters was to emphasize the differences. So DLSS5 doesn't
    abide by the artists' choices.

    Very much. Iconic art remade by mindless committee.

    [This is why so many developers are aghast at the technology]

    Makes sense.

    Generative AI is also very bad at consistency. It's smoothing out
    these images based on its own database of what faces should look
    like. But video games run at 30 or 60 (or more) frames per second,
    and the lighting, direction and focus (not to mention things like
    shake, blood, smoke, etc.) can change in seconds. The generative AI
    just takes the current frame and smooths it toward an average. The
    output doesn't really have to match what it showed two minutes ago...
    so characters could very well start looking different scene to scene.
    Not radically different --DoomGuy isn't going to appear buff in one
    scene and nebbish in the next-- but details won't remain consistent.
    It's probably no coincidence that _all_ the scenes nvidia used had
    the characters remain almost perfectly still and in constant
    lighting.

    Ah didn't know that, that's really bad. I can imagine it.

    The messaging is also an issue. As mentioned, I think nvidia should
    have separated this tech from its DLSS tech. But also, nvidia has
    implied that the tech doesn't actually change the appearance of
    characters. "It doesn't affect the underlying textures and geometry",
    they say. True... but it overlays its generative AI imagery on top of
    that geometry. Nvidia is implying that it's mostly just a new lighting
    effect when --even in their own examples-- it is obviously doing much
    more.

    A painting doesn't change the geometry either, but there's a big
    difference between a child's finger painting and a Van Gogh.

    Apparently, developers also have relatively little control over the
    effects, despite claims to the contrary. When pressed, nvidia has
    admitted that developers can block certain scenes from getting
    processed and increase or decrease the intensity, but actually
    changing what gets done? That doesn't seem to be in the cards.

    Also, it's likely to be another proprietary nvidia tech, rather than
    using an open standard.

    Oh, also, at the moment it only works if you have TWO 5090 RTX cards
    running in tandem and at full speed. Although presumably in future
    revisions of the tech, you won't need the most powerful cards on the
    market (and a small nuclear reactor for power) to take advantage of
    it.

    On the tech front, I'm wondering how they get the full data required
    for such a model onto the card, or if the giant amounts of data are
    put in the drivers, making them bloat insanely in size. Or if they're
    fudging it, and it's really going out to the cloud for it, which
    isn't going to keep up.

    The processing power needed should again be far more than they can
    manage on a single computer; again, it seems like they might be
    faking it.

    == IS IT REALLY THAT BAD? ==

    That I can't answer. For some games, I think the effect is pretty
    good. It's a cheap-and-easy way to make your games look 'more real'...
    and, you know, if I were making an asset-flip game with no artistic
    character of its own, this would be a cool way to make my game look competitive. Imagine "Supermarket Simulator" with visuals on par with
    the latest AAA game!

    Yeah, if they can somehow fix the blandification (which it sounds
    like they can't), plus the power, RAM, processing speed, and storage
    issues, it would be quite amazing.

    Right now I'd rather just be able to buy a card that works.


    But for everything else? It comes with downsides. It sacrifices the
    artistic intent of the developer. It averages out visuals. It makes
    more unique-looking games less unique-looking. It pushes 'ultra
    realism' at a cost to cel-shading and other visual styles. It's
    proprietary and requires expensive, power-hungry tech. And nvidia
    hasn't been honest about what it does in any way.

    Exactly.

    So while I'm not sure the huge uproar is deserved, neither is it
    entirely undeserved. This is a technology nobody really wanted or
    needed, and it was poorly represented by nvidia (and then their CEO doubled-down and 'blamed the customer' rather than reading the room).
    Some pushback was necessary.

    Oh I think it needs more than what it's got. They're hiding things.
    Pay no attention to the man behind the curtain. It reminds me of the
    Kinect fakery hype which got Molyneux so much hate.

    IMHO. You may disagree. Like I said, I've only done minimal research
    into this mess. Heck, it's nothing that will directly affect me
    anytime soon (I don't even have ONE RTX 5090 card, and --not having
    AI bro money-- am unlikely to acquire one for years). So feel free to
    add corrections or your own opinions.

    Only if/when the bubble crashes will I be buying a new(er) card at cut
    rate. Or win the lottery, which is unlikely since I don't buy tickets.

    But if you want to talk DLSS5, at least now maybe we have a common understanding. Hopefully. Unless I totally fucked up my explanations.
    But even then, it'll give you all something to talk about.

    Do you care about DLSS5? Is it good or bad tech? Is all the noise on
    the Internet about it deserved? Do you think the meme images are
    funny? And will anybody buy me two 5090RTX cards so I can see what it
    does for myself? ;-)

    Yes some of the meme images are funny.
    --
    -Justisaur

    ø-ø
    (\_/)\
    `-'\ `--.___,
    ¶¬'\( ,_.-'
    \\
    ^'
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Wed Mar 25 12:24:12 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Tue, 24 Mar 2026 09:19:06 -0700, Justisaur <justisaur@yahoo.com>
    said this thing:


    On the tech front, I'm wondering how they get the full data required
    for such a model onto the card, or if the giant amounts of data are
    put in the drivers, making them bloat insanely in size. Or if they're
    fudging it, and it's really going out to the cloud for it, which
    isn't going to keep up.

    I've wondered at that too. Where is the dataset that the genAI is
    using to 'fix' these images? It's unlikely to be built into the ROMs;
    probably not in the drivers. It's just too much data. Possibly it's
    streaming from the internet (but then every time you want to use
    DLSS5 you need an online connection? Perhaps there's a cache? Latency
    would be a bitch too). Or maybe the game developers build up a
    dataset and include it in the game? This seems most likely --and
    would give developers the most control while minimizing bloat-- but
    it seems like a lot of extra storage would still be required. In any
    event, nvidia hasn't been forthcoming in providing that info.
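
    (Back-of-the-envelope, with numbers I'm pulling out of thin air just
    to frame the question: it matters a lot whether they're shipping
    trained model weights or actual reference imagery.)

        # Made-up sizes, purely illustrative:
        params = 2e9             # say, a 2-billion-parameter image model
        bytes_per_param = 2      # fp16 weights
        weights_gb = params * bytes_per_param / 1e9
        print(weights_gb)        # ~4 GB: huge for a driver, conceivable as game data

        images = 1e8             # versus 100 million reference images...
        avg_kb = 100
        dataset_tb = images * avg_kb / 1e9
        print(dataset_tb)        # ~10 TB: not shipping locally, ever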

    The processing power needed should again be far more than they can
    manage on a single computer; again, it seems like they might be
    faking it.

    Well, it does require TWO of nvidia's fastest cards currently running
    at max throughput. ;-)

    But I don't doubt the capability of the tech... at least with regards
    to fairly static images like nvidia showed off in their demo. Even
    unaccelerated programs like Photoshop can do some pretty amazing
    things in near-realtime on less-powerful processors. With a tailored
    dataset and optimizations built into the game, I can imagine the GPUs
    managing these transformations in real time. It'll definitely add
    extra load to the processing and --at least at the moment-- seems
    like something most developers would only use in cinematics or
    low-activity scenes (e.g., not the parts with lots of explosions,
    dozens of characters, and the physics engine working overtime). But
    in a cutscene or walking down the street? It's probably doable even
    with current tech.
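
    (Quick frame-budget math, purely illustrative: at 30 'real' fps you
    get about 33 ms per frame, and the generative pass has to fit in
    whatever the renderer leaves over.)

        real_fps = 30
        frame_budget_ms = 1000 / real_fps   # ~33 ms per rendered frame
        render_ms = 25                      # say the game itself eats 25 ms
        filter_budget_ms = frame_budget_ms - render_ms
        print(filter_budget_ms)             # ~8 ms left for the generative pass
        # A big diffusion-style model won't run in 8 ms, but a small,
        # distilled per-frame filter split across two top-end GPUs?
        # Plausible for slow scenes, which is exactly what nvidia demoed.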

    Modern CPUs and GPUs are stupid-fast!

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Rin Stowleigh@nospam@nothanks.com to comp.sys.ibm.pc.games.action on Wed Mar 25 20:49:53 2026
    From Newsgroup: comp.sys.ibm.pc.games.action


    As I was installing the latest nvidia driver, I saw their sales pitch
    for DLSS5... an image of the little Resident Evil Requiem girl, with
    before (cartoonish) and after (more like an instagram girl using
    filters on a selfie).

    So they're hoping to sell this to horny incels, apparently, who care
    more about how realistic slow-moving cutscenes and walking simulators
    can look, with an emphasis on young female protagonists.

    Great, I guess for them. But why would that be a major selling point
    or something I should care about if I enjoy playing the game as
    opposed to watching scripted animations?

    It always gets a laugh out of me when someone reviews "facial
    expressions" of NPCs or whatever in games, as if that matters to
    actual gameplay in any substantial way.

    Games should be played. Movies should be watched.

    The whole "games should tell a story" crowd are the ones who caused
    this.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Sun Mar 29 20:28:35 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Mon, 23 Mar 2026 17:25:27 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    I don't believe them.

    It's interesting to see the responses of developers to this new tech.
    They're largely negative, especially when they come from the 'guys in
    the trenches' who actually do the work. The biggest concern is the
    integrity of their art being spoiled by generative AI. But there are
    some who are more positive about it.

    They range from Daniel Vávra, creative director of "Kingdom Come 2",
    who sides with nvidia and says "haters won't stop DLSS5"*, to the CEO
    of New Blood Interactive (the developer behind Dusk), who advocates
    "Cripple [nvidia's] sales, tank their stock price. Stop collaborating
    with them as developers".** So, you know, extreme opinions on either
    side.

    But Vávra is probably right that nvidia's new approach won't be
    /stopped/. Having developed the technology, nvidia is going to start
    embedding it into every card they can. But just because it's there
    doesn't mean it is going to be used all that often. Again, I'm no
    expert on the stuff, but I think that --used judiciously-- it might
    not be too bad. It just won't have the impact nvidia is hoping for.
    It'll sort of be like the lensflare effect. Back in 1997, it was The
    Next Big Thing to add lensflares to your games; every title had it.
    Cards that supported the feature in hardware were highly regarded.
    Nowadays, lensflares are still used, but only when absolutely
    necessary, and you'd hardly notice if they disappeared. You certainly
    wouldn't buy a new card just because it supported the tech.

    DLSS5 will probably be the same; another tool in a developer's bag
    that will occasionally be used but not anything that really sells new
    cards. It's not going to be like shaders or T&L or hardware-assisted
    physics, despite how much nvidia touts it.

    But we'll see.








    ----
    * DLSS5 is good! https://www.pcgamesn.com/kingdom-come-deliverance-2/daniel-vavra-dlss5
    ** DLSS5 is bad! https://www.pcgamer.com/hardware/graphics-cards/cripple-their-sales-tank-their-stock-price-stop-collaborating-with-them-as-developers-new-blood-ceo-on-fighting-against-dlss-5/





    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Tue Mar 31 20:40:14 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Mon, 23 Mar 2026 17:25:27 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    I don't believe them.

    It's also come out that, despite nvidia insisting it is all about
    letting the developers determine how this technology affects the
    output, apparently few (read: none) of the developers of the games
    showcased in nvidia's presentation were consulted.

    So, for instance, nvidia showed off a 'before and after' of a
    character from Capcom's newest game, "Resident Evil: Requiem". Capcom
    has been very vocally anti-AI. And yet, those same developers weren't
    consulted when the DLSS5 demo changed Claire (the aforementioned
    character) from a rather tired-looking woman to somebody with a fresh
    dash of makeup applied and a zesty look to her eye.

    (It's unclear whether the publisher was unaware and nvidia used the
    clip without permission. That seems unlikely, especially since other
    publisher C-levels have indicated their games were
    used-with-permission. But the actual people who created the art and
    character design were not contacted.)

    Now, this doesn't necessarily mean that no artists were involved in
    the DLSS5-assisted redesign of the characters; that it was all left
    solely to the hardware-assisted generative AI. But given that nvidia
    is selling this as a tool to help artists and developers streamline
    the creation process --and not as nvidia pushing its own vision of
    what games should look like-- _not_ involving the original artists is
    a bit of a gaffe.

    Again: this isn't a commentary on the technology itself; rather, it's
    how nvidia is presenting it. It shows how they're blinded by their
    fascination with AI and can't seem to understand why people might not
    approve.

    And why not? AI has turned nvidia into a 4.4 trillion USD powerhouse.
    Obviously it's what the world wants, right?


    --- Synchronet 3.21f-Linux NewsLink 1.2