• Stardew Valley farmer takes advice from AI, ends up brewing 136 useless bottles of rice juice

    From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Thu Mar 26 20:56:26 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    "Don't listen to Google and mill your rice for vinegar," warned user
    WonderfulScholar6171 on the Stardew subreddit this week. "If you want
    to avoid a mass amount of rice juice that is."

    Scholar shared a screenshot of their Google query "how to make vinegar
    stardew" and its generated answer: "In Stardew Valley update 1.6+, you
    can make vinegar by placing 1 unit of Rice (or Unmilled Rice) into a
    Keg."

    The error lies in the parenthetical. You do, in fact, have to mill your
    rice before throwing it into a keg if you want vinegar. Skip that step
    (or be misdirected by a machine that only pretends to produce reason)
    and, well, "now I'm left with 136 Unmilled Rice Juice."

    The second screenshot is simply tragic: rows upon rows of neatly
    arranged kegs destined for vinegar production, but instead packing
    hundreds of gallons of rice juice. What the heck is rice juice? You can
    drink it for a little stamina and health, or sell it for around 90 gold.
    Not exactly a money-printing operation.

    As many in the comments were quick to point out, Google summaries are
    best ignored when truth is relevant. A search engine listing that is
    routinely wrong is by default the worst result, much like how a fridge
    that only chills your food for part of each day is the worst refrigerator.

    To be fair to our accidental juice mogul, they're hardly the first to
    shove unmilled rice into a keg expecting vinegar. Someone else came to
    the Stardew subreddit three years ago to share the same PSA.

    Ironically, that post is one of several PSAs that appear when you
    duplicate this original search and scroll past Google's made-up,
    incorrect guess. Scroll past the Reddit results and you eventually
    reach the human-made wiki entry for vinegar that the AI poorly stole
    from:

    "Vinegar can also be made by putting Rice into a Keg," it reads.

    Good to know.


    https://www.msn.com/en-us/foodanddrink/foodnews/stardew-valley-farmer-takes-advice-from-ai-ends-up-brewing-136-useless-bottles-of-rice-juice/ar-AA1Zpnmw?ocid=winpstoreapp&cvid=69c5fd87b6b04c90b8d496fc963ca0bf&ei=47
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Justisaur@justisaur@yahoo.com to comp.sys.ibm.pc.games.action on Fri Mar 27 07:21:04 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 3/26/2026 8:56 PM, Dimensional Traveler wrote:

    a fridge
    that only chills your food for part of each day is the worst refrigerator.

    Technically that's how they work.
    --
    -Justisaur

    ø-ø
    (\_/)\
    `-'\ `--.___,
    ¶¬'\( ,_.-'
    \\
    ^'
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Fri Mar 27 11:02:45 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    Good old AI. Neat, fun... and completely unreliable. But fortunately,
    you have talented employees who --when using AI to help them in their
    jobs-- can catch these sort of errors. Oh, what's that? You FIRED all
    the employees who knew what they were doing because you thought AI
    could do their jobs? Oh, sucks for you.

    #

    The AI bubble can't end soon enough. And it will end. The AI companies
    (well, besides the hardware manufacturers) just aren't bringing in any
    revenue... or at least, not enough revenue to offset the massive
    costs they are accumulating. Even where corporations actually pay
    subscriptions for the services (less than 3% of the AI market!), it
    costs the AI companies more money to service those customers than the
    subscriptions bring in.

    I forget the actual numbers, but say a monthly subscription costs a
    company $200 / month / user (assuming the most expensive rate), while
    each user makes five hundred compute requests per day and each request
    costs the AI companies $1 to process... well, that's a quick way to
    bankruptcy. And the number of corporations who actually pay for a
    subscription (at any rate) is minimal, and it would drop to nearly
    zero if they were made to pay the actual cost of each compute request.
    All the more so since the companies who /are/ using AI day-to-day are
    already used to flat rates. Switching to a per-use (API) model will
    kill any interest in using AI.
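    Running the hypothetical figures above (the post's own assumptions,
    not real pricing) makes the gap concrete:

    ```python
    # Back-of-envelope check of the post's hypothetical per-seat economics.
    # All numbers are the poster's assumptions, not actual vendor pricing.
    subscription_revenue = 200        # $ per user per month (top-tier rate)
    requests_per_day = 500            # assumed compute requests per user
    cost_per_request = 1              # assumed $ compute cost per request
    days_per_month = 30

    monthly_cost = requests_per_day * cost_per_request * days_per_month
    monthly_loss = monthly_cost - subscription_revenue

    print(monthly_cost)   # 15000
    print(monthly_loss)   # 14800
    ```

    Under those assumptions each seat costs 75x what it brings in, which
    is the "quick way to bankruptcy" the post describes.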

    Especially since the results of each AI request are so erratic and
    often need multiple attempts to get something actually usable. At that
    point it becomes cheaper for corporations to just keep your employees
    and ditch the AI.

    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not
    sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Fri Mar 27 17:28:46 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    It's worse than that. There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case. Which might be a good thing if
    the AI engines weren't making up fake cases to cite. And the lawyers
    aren't checking the AI's results. Until a judge finds out the filing
    is full of case citations that don't exist. IF the judge realizes
    there are fake case citations. There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From phoenix@j63840576@gmail.com to comp.sys.ibm.pc.games.action,alt.slack on Sun Mar 29 12:31:00 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.

    AI are great at bullshitting.
    --
    Pharaoh was so pleased with Hadad that he gave him a
    sister of his own wife, Queen Tahpenes, in marriage.
    The sister of Tahpenes bore him a son named Genubath,
    whom Tahpenes brought up in the royal palace. There
    Genubath lived with Pharaoh’s own children.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From pursent100@pursent100@gmail.com to comp.sys.ibm.pc.games.action,alt.slack on Sun Mar 29 11:46:08 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    phoenix wrote:
    Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    AI are great at bullshitting.

    yes you are
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From phoenix@j63840576@gmail.com to comp.sys.ibm.pc.games.action,alt.slack on Sun Mar 29 12:50:24 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    % wrote:
    phoenix wrote:
    Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    AI are great at bullshitting.

    yes you are

    I'll see myself out.
    --
    Pharaoh was so pleased with Hadad that he gave him a
    sister of his own wife, Queen Tahpenes, in marriage.
    The sister of Tahpenes bore him a son named Genubath,
    whom Tahpenes brought up in the royal palace. There
    Genubath lived with Pharaoh’s own children.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From pursent100@pursent100@gmail.com to comp.sys.ibm.pc.games.action,alt.slack on Sun Mar 29 13:47:56 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    phoenix wrote:
    % wrote:
    phoenix wrote:
    Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    AI are great at bullshitting.

    yes you are

    I'll see myself out.

    how long have you had a stretch armstrong face
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From phoenix@j63840576@gmail.com to comp.sys.ibm.pc.games.action,alt.slack on Sun Mar 29 22:50:20 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    % wrote:
    phoenix wrote:
    % wrote:
    phoenix wrote:
    Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    [snip]

    AI are great at bullshitting.

    yes you are

    I'll see myself out.

    how long have you had a stretch armstrong face

    I don’t engage with people who talk like that.
    --
    Pharaoh was so pleased with Hadad that he gave him a
    sister of his own wife, Queen Tahpenes, in marriage.
    The sister of Tahpenes bore him a son named Genubath,
    whom Tahpenes brought up in the royal palace. There
    Genubath lived with Pharaoh’s own children.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Justisaur@justisaur@yahoo.com to comp.sys.ibm.pc.games.action on Wed Apr 1 11:36:41 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    Good old AI. Neat, fun... and completely unreliable. But fortunately,
    you have talented employees who --when using AI to help them in their
    jobs-- can catch these sorts of errors. Oh, what's that? You FIRED all
    the employees who knew what they were doing because you thought AI
    could do their jobs? Oh, sucks for you.

    #

    The AI bubble can't end soon enough. And it will end. The AI companies
    (well, besides the hardware manufacturers) just aren't bringing in any
    revenue... or at least, not enough revenue to offset the massive
    costs they are accumulating. Even where corporations actually pay
    subscriptions for the services (less than 3% of the AI market!), it
    costs the AI companies more money to service those customers than the
    subscriptions bring in.

    I forget the actual numbers, but if you say a monthly subscription
    costs a company $200 / month / user (if we assume the most expensive
    rate), but each user makes five hundred compute requests per day and
    each request costs $1 for the AI companies to process... well, that's
    a quick way to bankruptcy. And the number of corporations who actually
    pay for a subscription (at any rate) is minimal, and that number would
    drop to nearly zero if they were made to pay the actual cost of each compute
    request. All the more so since the companies who /are/ using AI
    day-to-day are already used to flat rates. Switching to a per-use
    (API) model will kill any interest in using AI.
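
    The back-of-envelope claim above can be checked in a few lines; the
    $200 subscription, 500 requests/day, and $1/request figures are the
    post's own hypotheticals, not real pricing:

```python
# Sanity check of the hypothetical subscription-vs-compute math above.
# All figures are invented for illustration, not actual vendor pricing.
monthly_subscription = 200.0   # $ per user per month, top-tier plan
requests_per_day = 500         # compute requests per user per day
cost_per_request = 1.0         # $ the provider pays to serve one request
days_per_month = 30

monthly_cost = requests_per_day * cost_per_request * days_per_month
loss_per_user = monthly_cost - monthly_subscription
print(f"serving cost: ${monthly_cost:,.0f}/user/month")
print(f"loss: ${loss_per_user:,.0f}/user/month")
# serving cost: $15,000/user/month
# loss: $14,800/user/month
```

    On those assumed numbers the provider loses roughly 75x the
    subscription price per user, which is the "quick way to bankruptcy"
    the post describes.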

    Especially since the results of each AI request are so erratic and
    often need multiple attempts to get something actually usable. At that
    point it becomes cheaper for corporations to just keep your employees
    and ditch the AI.

    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not
    sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where
    NO ONE caught the make-believe cases until after the case was
    settled.  Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them. The 'problem' with doing
    that is that the LLMs become far less friendly. It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.

    You can have separate ones for each though you really have to go back to
    the training for that. You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.
    --
    -Justisaur

    ø-ø
    (\_/)\
    `-'\ `--.___,
    ¶¬'\( ,_.-'
    \\
    ^'
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Wed Apr 1 17:42:39 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/1/2026 11:36 AM, Justisaur wrote:
    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    Good old AI. Neat, fun... and completely unreliable. But fortunately,
    you have talented employees who --when using AI to help them in their
    jobs-- can catch these sorts of errors. Oh, what's that? You FIRED all
    the employees who knew what they were doing because you thought AI
    could do their jobs? Oh, sucks for you.

    #

    The AI bubble can't end soon enough. And it will end. The AI companies
    (well, besides the hardware manufacturers) just aren't bringing in any
    revenue... or at least, not enough revenue to offset the massive
    costs they are accumulating. Even where corporations actually pay
    subscriptions for the services (less than 3% of the AI market!), it
    costs the AI companies more money to service those customers than the
    subscriptions bring in.

    I forget the actual numbers, but if you say a monthly subscription
    costs a company $200 / month / user (if we assume the most expensive
    rate), but each user makes five hundred compute requests per day and
    each request costs $1 for the AI companies to process... well, that's
    a quick way to bankruptcy. And the number of corporations who actually
    pay for a subscription (at any rate) is minimal, and that number would
    drop to nearly zero if they were made to pay the actual cost of each compute
    request. All the more so since the companies who /are/ using AI
    day-to-day are already used to flat rates. Switching to a per-use
    (API) model will kill any interest in using AI.

    Especially since the results of each AI request are so erratic and
    often need multiple attempts to get something actually usable. At that
    point it becomes cheaper for corporations to just keep your employees
    and ditch the AI.

    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not
    sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with
    doing that is that the LLMs become far less friendly.  It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.

    You can have separate ones for each though you really have to go back to
    the training for that.  You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Justisaur@justisaur@yahoo.com to comp.sys.ibm.pc.games.action on Thu Apr 2 07:40:08 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/1/2026 5:42 PM, Dimensional Traveler wrote:
    On 4/1/2026 11:36 AM, Justisaur wrote:
    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    Good old AI. Neat, fun... and completely unreliable. But fortunately,
    you have talented employees who --when using AI to help them in their
    jobs-- can catch these sorts of errors. Oh, what's that? You FIRED all
    the employees who knew what they were doing because you thought AI
    could do their jobs? Oh, sucks for you.

    #

    The AI bubble can't end soon enough. And it will end. The AI companies
    (well, besides the hardware manufacturers) just aren't bringing in any
    revenue... or at least, not enough revenue to offset the massive
    costs they are accumulating. Even where corporations actually pay
    subscriptions for the services (less than 3% of the AI market!), it
    costs the AI companies more money to service those customers than the
    subscriptions bring in.

    I forget the actual numbers, but if you say a monthly subscription
    costs a company $200 / month / user (if we assume the most expensive
    rate), but each user makes five hundred compute requests per day and
    each request costs $1 for the AI companies to process... well, that's
    a quick way to bankruptcy. And the number of corporations who actually
    pay for a subscription (at any rate) is minimal, and that number would drop to
    nearly zero if they were made to pay the actual cost of each compute
    request. All the more so since the companies who /are/ using AI
    day-to-day are already used to flat rates. Switching to a per-use
    (API) model will kill any interest in using AI.

    Especially since the results of each AI request are so erratic and
    often need multiple attempts to get something actually usable. At that
    point it becomes cheaper for corporations to just keep your employees
    and ditch the AI.

    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not
    sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where
    NO ONE caught the make-believe cases until after the case was
    settled. Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with
    doing that is that the LLMs become far less friendly.  It's basically
    the neural nodes that allow them to be friendly, agreeable, and
    creative. For most uses I'd want to use LLMs for professionally, I'd
    happily take the hit to all of that for precision.

    You can have separate ones for each though you really have to go back
    to the training for that.  You could have a 'Spock' logical,
    matter-of-fact LLM for science, code, law and engineering, and a 'Mud'
    one you use for writing ad copy, emails, etc.


    "the LLMs become far less friendly."  Meaning what exactly?  What do
    they do when you eliminate the "hallucinations"?

    They aren't yes-men and will tell you no, you're wrong, and won't give a
    different answer if you call them on something (which normally often
    results in hallucinations just to agree with you).  As I said it sounds
    perfectly fine to me, but it sounds more like conversing with a
    computer, not a human, which was kind of the point of LLMs. *shrug*

    You can get most of the benefits by setting temperature to 0, which
    will greatly reduce hallucinations, but some of it comes from training.
    No one's currently got a logically trained LLM.
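
    For what it's worth, the temperature knob can be sketched without any
    particular vendor's API: an LLM samples the next token from
    softmax(logits / T), and at T = 0 that collapses to always picking the
    single highest-scoring token. The numbers below are invented toy
    scores, just to show the shape of the effect:

```python
import math

# Toy illustration of temperature in next-token sampling.
# At T > 0 the model samples from softmax(logits / T); as T -> 0 the
# distribution collapses onto the argmax token (greedy decoding), so
# repeated runs give identical, less "creative" output.
def next_token_probs(logits, temperature):
    if temperature == 0:
        # Greedy limit: all probability mass on the best-scoring token.
        best = max(range(len(logits)), key=lambda i: logits[i])
        return [1.0 if i == best else 0.0 for i in range(len(logits))]
    exps = [math.exp(score / temperature) for score in logits]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.5, 0.5]              # made-up scores for three tokens
print(next_token_probs(logits, 1.0))  # mass spread over all three tokens
print(next_token_probs(logits, 0))    # [1.0, 0.0, 0.0]
```

    That's why temperature 0 only reduces hallucinations rather than
    eliminating them: it removes the sampling randomness, but the
    highest-scoring continuation can itself still be wrong.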
    --
    -Justisaur

    ø-ø
    (\_/)\
    `-'\ `--.___,
    ¶¬'\( ,_.-'
    \\
    ^'
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Thu Apr 2 10:42:33 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Wed, 1 Apr 2026 17:42:39 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    On 4/1/2026 11:36 AM, Justisaur wrote:
    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with doing
    that is that the LLMs become far less friendly.  It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.


    You can have separate ones for each though you really have to go back to
    the training for that.  You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    One theory is that it is the solicitousness of the AI that is the
    problem. What this really means is that the LLMs have too much freedom
    to generate content; that when they don't know the exact answer we
    expect them to still provide a response. Which it then does, based
    either on the median of its dataset or on a subset of the dataset that
    it has recognized as aligning with its user's preferences and biases.

    Restricting the AI to using only validated data helps minimize this
    (e.g., making it less helpful), but at a cost to the AI's utility. It
    won't always have an answer... and why use a program when half the
    time it says, "I can't do that"? It also makes them far slower and
    more computationally expensive, as it demands longer 'memories' and
    deeper dives into the dataset to validate its creations.
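
    As a minimal sketch of that "validated data only" trade-off (the fact
    table and wording here are invented for illustration): a responder
    restricted to a vetted knowledge base refuses instead of guessing,
    which is exactly the "I can't do that" cost described above:

```python
# Toy "validated data only" responder: it answers solely from a vetted
# fact table and explicitly refuses everything else, instead of
# generating a plausible-sounding guess. FACTS is an invented example.
FACTS = {
    "stardew vinegar": "Mill the rice first, then put the Rice in a Keg.",
    "rice juice value": "Sells for around 90 gold.",
}

def answer(query: str) -> str:
    # No fallback path: unknown queries get a refusal, never a guess.
    return FACTS.get(query.lower(), "I can't answer that.")

print(answer("Stardew vinegar"))    # vetted fact
print(answer("mayonnaise recipe"))  # refusal, not a hallucination
```

    A generative model has no such hard boundary; when the lookup fails it
    produces the most statistically plausible continuation anyway, which
    is where the hallucinations come from.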

    And even then, hallucinations are still inevitable. You can minimize
    them slightly, but the very nature of how current AI work insists that
    they will happen. Because what we call AI isn't in any way
    intelligent. It has absolutely no understanding of what we are
    asking, or what it is saying. It just obeys the law of averages. Even
    OpenAI agrees that hallucinations are mathematically inevitable. You
    would have to fundamentally change the way the programs work before
    they'd go away.



    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Thu Apr 2 08:02:22 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/2/2026 7:42 AM, Spalls Hurgenson wrote:
    On Wed, 1 Apr 2026 17:42:39 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    On 4/1/2026 11:36 AM, Justisaur wrote:
    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with doing
    that is that the LLMs become far less friendly.  It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.


    You can have separate ones for each though you really have to go back to
    the training for that.  You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    One theory is that it is the solicitousness of the AI that is the
    problem. What this really means is that the LLMs have too much freedom
    to generate content; that when they don't know the exact answer we
    expect them to still provide a response. Which it then does, based
    either on the median of its dataset or on a subset of the dataset that
    it has recognized as aligning with its user's preferences and biases.

    Restricting the AI to using only validated data helps minimize this
    (e.g., making it less helpful), but at a cost to the AI's utility. It
    won't always have an answer... and why use a program when half the
    time it says, "I can't do that"? It also makes them far slower and
    more computationally expensive, as it demands longer 'memories' and
    deeper dives into the dataset to validate its creations.

    And even then, hallucinations are still inevitable. You can minimize
    them slightly, but the very nature of how current AI work insists that
    they will happen. Because what we call AI isn't in any way
    intelligent. It has absolutely no understanding of what we are
    asking, or what it is saying. It just obeys the law of averages. Even
    OpenAI agrees that hallucinations are mathematically inevitable. You
    would have to fundamentally change the way the programs work before
    they'd go away.

    (Responding to both Justisaur & Spalls)

    So, they can't be objective and will refuse to change their answer in
    the face of new data. Just like humans.

    Seriously, that scares me.
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From rridge@rridge@csclub.uwaterloo.ca (Ross Ridge) to comp.sys.ibm.pc.games.action on Sun Apr 5 18:00:48 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Justisaur <justisaur@yahoo.com> wrote:
    You can have separate ones for each though you really have to go back to
    the training for that. You could have a 'Spock' logical, mater of fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for >writing ad copy, emails, etc.

    You can't really have an LLM that's logical.  You can have one that
    sounds logical, just like Spock, but similar to how Spock is just a
    character who says things the writers want us to believe are logical,
    there's no actual logical reasoning behind the words.  A logical-sounding
    LLM will still only say things based on its training data and what the
    human trainers select as the most logical-sounding correct responses.

    So you'll get an LLM that sounds logical and unbiased, but in reality
    it's just as biased as its training data and trainers.  The problem
    with LLMs is that they already are like this, fooling people into
    thinking they must be completely impartial and incapable of lying.
    So it's unsurprising lawyers are getting fooled by them.  Most of those
    who get caught submitting documents with bogus citations seem genuinely
    surprised that the LLM was bullshitting them.

    (That said there's been some cases where lawyers have been caught
    repeatedly submitting fake citations in filings and subjected to
    large fines as a result.  I'm not sure what's going on in those cases.
    Either they've been completely brainwashed by AI or they've gotten away
    with cheating so often in their life they can't imagine ever suffering
    serious consequences.)
    --
    l/ // Ross Ridge -- The Great HTMU
    [oo][oo] rridge@csclub.uwaterloo.ca
    -()-/()/ http://www.csclub.uwaterloo.ca:11068/
    db //
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Mon Apr 6 13:08:03 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Fri, 27 Mar 2026 17:28:46 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    It's worse than that. There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case. Which might be a good thing if the
    AI engines weren't making up fake cases to cite. And the lawyers aren't
    checking the AI's results. Until a judge finds out the filing is full
    of case citations that don't exist. IF the judge realizes there are
    fake case citations. There have already been cases where NO ONE caught
    the make-believe cases until after the case was settled. Oops.


    There's an interesting study* where MIT created "AI workers" and
    assigned them common tasks that might be done in an office, and the
    algorithms were only 'minimally sufficient'. They only passed the lowest
    of bars when it came to replacing ordinary employees, and were often
    found to make egregious mistakes. The idea --beloved of many
    C-levels-- that you can swap out AI for regular employees is being
    increasingly disproven.

    Worse, even if you could replace low-level entry-level employees,
    that's still a poor move... because the entry-level guys are who
    later become the core of your skilled employee base... so if you
    replace them, you've basically killed your company four or five years
    down the line as your skilled employees move on and there's nobody to
    replace them.

    Not that the C-levels really care about the future of a company five
    years down the line... it's all about next-quarter earnings for them.
    It's not as if they don't have golden parachutes to protect them when
    the company collapses, after all.

    AI isn't completely worthless, but the way it is being positioned by
    the AI tech-bros as The Next Big Thing is disingenuous, and CEOs
    --always eager to cut costs-- are buying it up big-time... to the
    disadvantage of their business, their employees, their customers, the
    environment and society as a whole.




    * study https://futuretech.mit.edu/publication/crashing-waves-vs-rising-tides-preliminary-findings-on-ai-automation-from-thousands-of-worker-evaluations-of-labor-market-tasks
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Justisaur@justisaur@yahoo.com to comp.sys.ibm.pc.games.action on Tue Apr 7 07:56:33 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/6/2026 10:08 AM, Spalls Hurgenson wrote:
    On Fri, 27 Mar 2026 17:28:46 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Not that the C-levels really care about the future of a company five
    years down the line... it's all about next-quarter earnings for them.
    It's not as if they don't have golden parachutes to protect them when
    the company collapses, after all.

    This is really the cancer behind almost all the woes of our society.
    --
    -Justisaur

    ø-ø
    (\_/)\
    `-'\ `--.___,
    ¶¬'\( ,_.-'
    \\
    ^'
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.sys.ibm.pc.games.action on Thu Apr 9 15:00:03 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
    On 4/1/2026 11:36 AM, Justisaur wrote:
    On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
    On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
    On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:

    Came across this PC Gamer article.

    In today's episode of "innocent AI usage goes wrong," a Stardew Valley
    player wastes a bunch of time and resources because a Google AI summary
    lied to their face.

    Good old AI. Neat, fun... and completely unreliable. But fortunately,
    you have talented employees who --when using AI to help them in their
    jobs-- can catch these sorts of errors. Oh, what's that? You FIRED all
    the employees who knew what they were doing because you thought AI
    could do their jobs? Oh, sucks for you.

    #

    The AI bubble can't end soon enough. And it will end. The AI companies
    (well, besides the hardware manufacturers) just aren't bringing in any
    revenue... or at least, not enough revenue to offset the massive
    costs they are accumulating. Even where corporations actually pay
    subscriptions for the services (less than 3% of the AI market!), it
    costs the AI companies more money to service those customers than the
    subscriptions bring in.

    I forget the actual numbers, but if you say a monthly subscription
    costs a company $200 / month / user (if we assume the most expensive
    rate), but each user makes five hundred compute requests per day and
    each request costs $1 for the AI companies to process... well, that's
    a quick way to bankruptcy. And the number of corporations who actually
    pay for a subscription (at any rate) is minimal, and that number would drop to
    nearly zero if they were made to pay the actual cost of each compute
    request. All the more so since the companies who /are/ using AI
    day-to-day are already used to flat rates. Switching to a per-use
    (API) model will kill any interest in using AI.

    Especially since the results of each AI request are so erratic and
    often need multiple attempts to get something actually usable. At that
    point it becomes cheaper for corporations to just keep your employees
    and ditch the AI.

    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not
    sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    AI is a bubble that is consuming vast amounts of cash and resources,
    burning through electricity, scarfing up all the hardware and getting
    thousands of people fired... and in the end it's all going to collapse
    without bringing any net gain to the economy or the world.

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if
    the AI engines weren't making up fake cases to cite.  And the lawyers
    aren't checking the AI's results.  Until a judge finds out the filing
    is full of case citations that don't exist.  IF the judge realizes
    there are fake case citations.  There have already been cases where NO
    ONE caught the make-believe cases until after the case was settled.
    Oops.


    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with doing
    that is that the LLMs become far less friendly.  It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.

    You can have separate ones for each though you really have to go back to
    the training for that.  You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Thu Apr 9 11:27:52 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Thu, 9 Apr 2026 15:00:03 -0000 (UTC), candycanearter07
    <candycanearter07@candycanearter07.nomail.afraid> said this thing:


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.


    Oh, I like that. Is that original to you? Either way, I'm stealing it
    for future use ;-)


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Thu Apr 9 17:17:52 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/9/2026 8:00 AM, candycanearter07 wrote:
    Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
    On 4/1/2026 11:36 AM, Justisaur wrote:

    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with doing
    that is that the LLMs become far less friendly.  It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.

    You can have separate ones for each, though you really have to go back to
    the training for that.  You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.

    Isn't that what AIs are already doing?
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sat Apr 11 14:29:39 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 28/03/2026 00:28, Dimensional Traveler wrote:

    It's worse than that.  There are an increasing number of court cases
    where the lawyers on one or both sides are using AI to find cites and
    related cases to their current case.  Which might be a good thing if the
    AI engines weren't making up fake cases to cite.  And the lawyers aren't
    checking the AI's results.  Until a judge finds out the filing is full
    of case citations that don't exist.  IF the judge realizes there are
    fake case citations.  There have already been cases where NO ONE caught
    the make-believe cases until after the case was settled.  Oops.

    In the UK we had a case where Maccabi Tel Aviv fans were banned from
    attending a European football match. Problem was, someone had run to AI
    and then hadn't checked whether what it responded with was actually true.
    I do remember seeing the statement from the police saying why it was
    banned, and they cited a match held previously in England. I thought at
    the time that I didn't think they would have played each other before,
    but I'm not in police intelligence so what would I know. Turns out they
    hadn't played at all.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sat Apr 11 14:41:21 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 02/04/2026 01:42, Dimensional Traveler wrote:

    "the LLMs become far less friendly."  Meaning what exactly?  What do
    they do when you eliminate the "hallucinations"?

    AIs are very much sycophants based on all your interactions with them.

    So a good example was someone asked an AI to give an explanation in a
    scientific context. It spewed out its normal wall of text and, when asked
    to verify some of the referenced quotes, 'admitted' that it basically
    made them up to help with the scientific context.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sat Apr 11 19:10:30 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 27/03/2026 15:02, Spalls Hurgenson wrote:
    AI companies like to compare themselves to the early days of Uber or
    Amazon --'You gotta spend money to make money!'-- except the amounts
    they are spending dwarf what Amazon or Uber spent setting themselves
    up. Uber spent about $30 billion USD over a decade getting to where
    they are now. Anthropic spends $30 billion USD per month. It's not sustainable.

    There's just not a way out of the mess that AI companies are in right
    now. They can't dig their way out of the hole; buying more datacenters
    will only make things worse. Simply put, their product is too
    expensive and not worth the price they would need to charge for it to
    be profitable.

    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5 trillion dollars over the
    coming years. How on earth will they get that money back, let alone ever
    make a profit?
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Sat Apr 11 19:13:49 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
    thing:
    On 27/03/2026 15:02, Spalls Hurgenson wrote:


    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a
    profit?

    I'm not so sure I'd go so far as to say Metaverse was a good idea, but it
    definitely could have been profitable. After all, the idea of an
    "ever-game" is basically what Roblox has become; a common platform
    which can be used to create a bunch of other games. But Facebook
    limited itself to making it VR only (excluding everybody who didn't
    have a VR headset) and had dreams of making it a commercial hub as
    well. Plus, despite the nearly $200 billion Facebook spent on the
    project, none of that showed in its visuals or mechanics.

    The /concept/ of Metaverse (now called "Horizon Worlds") might have
    worked. Facebook's actual attempt? I don't really see it.



    Meanwhile, for all its failure, Facebook's Metaverse --which spent
    1/7th of the cash that has been spent on AI-- has seen a lot more
    return on investment. Metaverse spurred sales of Facebook's own VR
    headsets, and Facebook took a 50% fee from any assets sold by creators
    on the platform. Plus, it may have drawn some people back into the
    Meta ecosystem. Metaverse was never going to break even, but it got
    Facebook /some/ money.

    Almost nobody is paying for AI, and interest in the tech is decreasing
    as its downsides become more obvious. It's not that AI is without
    value, but it is

    a. way too expensive to spin up, and
    b. has been sold as a general-purpose 'replace all
    your employees' technology when it is a much more
    restrictive tool
    (c. also, a lot of its outputs -- a.k.a. AI slop-- have
    given the tech a really bad reputation so that
    products that are 'AI-free' are more highly valued)

    A slower, less grandiose roll-out of the tech might have worked... but
    the AI-bros --and the venture capitalists behind them-- hoped for instant-trillion-dollar returns. But they can only keep up the spin
    for so long before the whole bubble pops.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From JAB@noway@nochance.com to comp.sys.ibm.pc.games.action on Sun Apr 12 18:59:46 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 12/04/2026 00:13, Spalls Hurgenson wrote:
    On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
    thing:
    On 27/03/2026 15:02, Spalls Hurgenson wrote:


    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a
    profit?

    I'm not so sure I'd go so far as to say Metaverse was a good idea, but it
    definitely could have been profitable. After all, the idea of an
    "ever-game" is basically what Roblox has become; a common platform
    which can be used to create a bunch of other games. But Facebook
    limited itself to making it VR only (excluding everybody who didn't
    have a VR headset) and had dreams of making it a commercial hub as
    well. Plus, despite the nearly $200 billion Facebook spent on the
    project, none of that showed in its visuals or mechanics.

    The /concept/ of Metaverse (now called "Horizon Worlds") might have
    worked. Facebook's actual attempt? I don't really see it.


    Well I did say I was trying to be charitable. Carve out a new market
    backed up by some made up figures on a spreadsheet and there you go.

    Personally I just never saw how it would be popular but then again it
    wasn't aimed at me.

    Almost nobody is paying for AI, and interest in the tech is decreasing
    as its downsides become more obvious. It's not that AI is without
    value, but it is

    a. way too expensive to spin up, and
    b. has been sold as a general-purpose 'replace all
    your employees' technology when it is a much more
    restrictive tool
    (c. also, a lot of its outputs -- a.k.a. AI slop-- have
    given the tech a really bad reputation so that
    products that are 'AI-free' are more highly valued)

    A slower, less grandiose roll-out of the tech might have worked... but
    the AI-bros --and the venture capitalists behind them-- hoped for instant-trillion-dollar returns. But they can only keep up the spin
    for so long before the whole bubble pops.


    That's where I think the contrast to the Metaverse comes in. I can see,
    well sort of, that it may have worked (compared to the investment). With
    AI I just don't see it at all.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Dimensional Traveler@dtravel@sonic.net to comp.sys.ibm.pc.games.action on Sun Apr 12 12:40:36 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On 4/12/2026 10:59 AM, JAB wrote:
    On 12/04/2026 00:13, Spalls Hurgenson wrote:
    On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
    thing:
    On 27/03/2026 15:02, Spalls Hurgenson wrote:


    I tend to agree, hell I can even be charitable and think I saw why the
    Metaverse was a good idea. With AI I just don't see it. I believe it was
    OpenAI that was talking about 'investing' 1.5t dollars over the next
    years. How on earth will they get that money back let alone ever make a
    profit?

    I'm not so sure I'd go so far as to say Metaverse was a good idea, but it
    definitely could have been profitable. After all, the idea of an
    "ever-game" is basically what Roblox has become; a common platform
    which can be used to create a bunch of other games. But Facebook
    limited itself to making it VR only (excluding everybody who didn't
    have a VR headset) and had dreams of making it a commercial hub as
    well. Plus, despite the nearly $200 billion Facebook spent on the
    project, none of that showed in its visuals or mechanics.

    The /concept/ of Metaverse (now called "Horizon Worlds") might have
    worked. Facebook's actual attempt? I don't really see it.


    Well I did say I was trying to be charitable. Carve out a new market
    backed up by some made up figures on a spreadsheet and there you go.

    Personally I just never saw how it would be popular but then again it
    wasn't aimed at me.

    Almost nobody is paying for AI, and interest in the tech is decreasing
    as its downsides become more obvious. It's not that AI is without
    value, but it is

         a. way too expensive to spin up, and
         b. has been sold as a general-purpose 'replace all
            your employees' technology when it is a much more
            restrictive tool
        (c. also, a lot of its outputs -- a.k.a. AI slop-- have
            given the tech a really bad reputation so that
            products that are 'AI-free' are more highly valued)

    A slower, less grandiose roll-out of the tech might have worked... but
    the AI-bros --and the venture capitalists behind them-- hoped for
    instant-trillion-dollar returns. But they can only keep up the spin
    for so long before the whole bubble pops.


    That's where I think the contrast to the Metaverse comes in. I can see,
    well sort of, that it may have worked (compared to the investment). With
    AI I just don't see it at all.

    The appeal of AI to big corporations is they figure it is cheaper and
    faster than live human employees.

    That's it.
    --
    I've done good in this world. Now I'm tired and just want to be a cranky
    dirty old man.
    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From Spalls Hurgenson@spallshurgenson@gmail.com to comp.sys.ibm.pc.games.action on Sun Apr 12 21:59:07 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    On Sun, 12 Apr 2026 12:40:36 -0700, Dimensional Traveler
    <dtravel@sonic.net> said this thing:


    The appeal of AI to big corporations is they figure it is cheaper and
    faster than live human employees.

    Except even that doesn't work. A recent MIT study showed that AI is
    nowhere near good enough to replace workers yet, except in the most
    basic of tasks. It /may/ be good enough to replace entry-level jobs, but
    even that isn't a good strategy because --if you kick out all of them in
    favor of AI-- who will replace your experienced workers later on when
    they move on?

    Not that the people pushing for this care about the long-term survival
    of the companies they are responsible for. It's all next-quarter
    growth that matters. If the company starts sliding downhill after
    that, well, that's what golden parachutes are for.


    --- Synchronet 3.21f-Linux NewsLink 1.2
  • From candycanearter07@candycanearter07@candycanearter07.nomail.afraid to comp.sys.ibm.pc.games.action on Mon Apr 13 16:10:03 2026
    From Newsgroup: comp.sys.ibm.pc.games.action

    Dimensional Traveler <dtravel@sonic.net> wrote at 00:17 this Friday (GMT):
    On 4/9/2026 8:00 AM, candycanearter07 wrote:
    Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
    On 4/1/2026 11:36 AM, Justisaur wrote:

    I saw an interesting bit where a group found where the hallucinations
    are coming from and can almost eliminate them.  The 'problem' with doing
    that is that the LLMs become far less friendly.  It's basically the
    neural nodes that allow them to be friendly, agreeable, and creative.
    For most uses I'd want to use LLMs for professionally, I'd happily take
    the hit to all of that for precision.

    You can have separate ones for each, though you really have to go back to
    the training for that.  You could have a 'Spock' logical, matter-of-fact
    LLM for science, code, law and engineering, and a 'Mud' one you use for
    writing ad copy, emails, etc.


    "the LLMs become far less friendly." Meaning what exactly? What do
    they do when you eliminate the "hallucinations"?


    When you eliminate the hallucinations, whatever remains, however
    improbable, is manipulation.

    Isn't that what AIs are already doing?


    Yes, that's the joke I was trying to make :P
    --
    user <candycane> is generated from /dev/urandom
    --- Synchronet 3.21f-Linux NewsLink 1.2