a fridge
that only chills your food for part of each day is the worst refrigerator.
Came across this PC Gamer article.
In today's episode of "innocent AI usage goes wrong," a Stardew Valley
player wastes a bunch of time and resources because a Google AI summary
lied to their face.
On Thu, 26 Mar 2026 20:56:26 -0700, Dimensional Traveler
<dtravel@sonic.net> said this thing:
Came across this PC Gamer article.
In today's episode of "innocent AI usage goes wrong," a Stardew Valley
player wastes a bunch of time and resources because a Google AI summary
lied to their face.
Good old AI. Neat, fun... and completely unreliable. But fortunately,
you have talented employees who --when using AI to help them in their
jobs-- can catch these sorts of errors. Oh, what's that? You FIRED all
the employees who knew what they were doing because you thought AI
could do their jobs? Oh, sucks for you.
#
The AI bubble can't end soon enough. And it will end. The AI companies
(well, besides the hardware manufacturers) just aren't bringing in any
revenue... or at least, not enough revenue to offset the massive
costs they are accumulating. Even where corporations actually pay
subscriptions for the services (less than 3% of the AI market!), it
costs the AI companies more money to service those customers than the
subscriptions bring in.
I forget the actual numbers, but say a monthly subscription
costs a company $200 / month / user (if we assume the most expensive
rate), while each user makes five hundred compute requests per day and
each request costs $1 for the AI companies to process... well, that's
a quick way to bankruptcy. And the number of corporations who actually
pay for a subscription (at any rate) is minimal, and that number would
drop to nearly zero if they were made to pay the actual cost of each
compute request. All the more so since the companies who /are/ using AI
day-to-day are already used to flat rates. Switching to a per-use
(API) model will kill any interest in using AI.
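That back-of-the-envelope math is easy to check. A minimal sketch, using the post's own hypothetical figures ($200/month subscription, 500 requests/day, $1 per request -- none of these are real pricing):

```python
def monthly_margin(subscription_usd, requests_per_day, cost_per_request_usd, days=30):
    """Per-user, per-month economics: subscription revenue vs. compute cost."""
    revenue = subscription_usd
    cost = requests_per_day * cost_per_request_usd * days
    return revenue, cost, revenue - cost

# The post's hypothetical numbers.
revenue, cost, margin = monthly_margin(200, 500, 1.00)
print(f"revenue ${revenue:,.0f}, cost ${cost:,.0f}, margin ${margin:,.0f} per user/month")
# 500 requests/day x $1 x 30 days = $15,000 in cost against $200 in revenue.
```

Under those assumptions the provider loses $14,800 per user per month; even cutting the per-request cost a hundredfold still leaves the subscription underwater.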
Especially since the results of each AI request are so erratic and
often need multiple attempts to get something actually usable. At that
point it becomes cheaper for corporations to just keep their employees
and ditch the AI.
AI companies like to compare themselves to the early days of Uber or
Amazon --'You gotta spend money to make money!'-- except the amounts
they are spending dwarf what Amazon or Uber spent setting themselves
up. Uber spent about $30 billion USD over a decade getting to where
they are now. Anthropic spends $30 billion USD per month. It's not
sustainable.
There's just not a way out of the mess that AI companies are in right
now. They can't dig their way out of the hole; buying more datacenters
will only make things worse. Simply put, their product is too
expensive and not worth the price they would need to charge for it to
be profitable.
AI is a bubble that is consuming vast amounts of cash and resources,
burning through electricity, scarfing up all the hardware and getting
thousands of people fired... and in the end it's all going to collapse
without bringing any net gain to the economy or the world.
On 3/27/2026 8:02 AM, Spalls Hurgenson wrote:
It's worse than that. There are an increasing number of court cases
where the lawyers on one or both sides are using AI to find cites and
related cases to their current case. Which might be a good thing if the
AI engines weren't making up fake cases to cite. And the lawyers aren't
checking the AI's results. Until a judge finds out the filing is full
of case citations that don't exist. IF the judge realizes there are
fake case citations. There have already been cases where NO ONE caught
the make-believe cases until after the case was settled. Oops.
Dimensional Traveler wrote:
AI are great at bullshitting.
phoenix wrote:
yes you are
% wrote:
I'll see myself out.
phoenix wrote:
how long have you had a stretch armstrong face
On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
I saw an interesting bit where a group found where the hallucinations
are coming from and can almost eliminate them. The 'problem' with doing
that is that the LLMs become far less friendly. It's basically the
neural nodes that allow them to be friendly, agreeable, and creative.
For most of the things I'd want to use LLMs for professionally, I'd
happily take the hit to all of that for precision.
You can have separate ones for each, though you really have to go back
to the training for that. You could have a 'Spock' logical,
matter-of-fact LLM for science, code, law and engineering, and a 'Mud'
one you use for writing ad copy, emails, etc.
On 4/1/2026 11:36 AM, Justisaur wrote:
"the LLMs become far less friendly." Meaning what exactly? What do
they do when you eliminate the "hallucinations"?
On 4/1/2026 11:36 AM, Justisaur wrote:
On 3/27/2026 5:28 PM, Dimensional Traveler wrote:
It's worse than that. There are an increasing number of court cases
where the lawyers on one or both sides are using AI to find cites and
related cases to their current case. Which might be a good thing if
the AI engines weren't making up fake cases to cite. And the lawyers
aren't checking the AI's results. Until a judge finds out the filing
is full of case citations that don't exist. IF the judge realizes
there are fake case citations. There have already been cases where NO
ONE caught the make-believe cases until after the case was settled.
Oops.
I saw an interesting bit where a group found where the hallucinations
are coming from and can almost eliminate them. The 'problem' with doing
that is that the LLMs become far less friendly. It's basically the
neural nodes that allow them to be friendly, agreeable, and creative.
For most uses I'd want to use LLMs for professionally, I'd happily take
the hit to all of that for precision.
You can have separate ones for each though you really have to go back to
the training for that. You could have a 'Spock' logical, matter-of-fact
LLM for science, code, law and engineering, and a 'Mud' one you use for
writing ad copy, emails, etc.
"the LLMs become far less friendly." Meaning what exactly? What do
they do when you eliminate the "hallucinations"?
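The 'Spock'/'Mud' split above could be sketched as a simple prompt router. The model names and keyword list here are made up for illustration; a real router would classify the task rather than match strings:

```python
# Hypothetical sketch: route each request to a differently tuned model.
# "spock-1" and "mud-1" are invented names, and keyword matching stands
# in for real task classification.
PRECISE_TASKS = {"science", "code", "law", "engineering", "math"}

def pick_model(task: str) -> str:
    if task.lower() in PRECISE_TASKS:
        return "spock-1"   # terse, grounded, low-temperature
    return "mud-1"         # friendly, creative, high-temperature

print(pick_model("law"))       # spock-1
print(pick_model("ad copy"))   # mud-1
```

The catch, as noted above, is that the split really has to happen at training time, not just at routing time.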
On Wed, 1 Apr 2026 17:42:39 -0700, Dimensional Traveler
<dtravel@sonic.net> said this thing:
One theory is that the solicitousness of the AI is the
problem. What this really means is that LLMs have too much freedom
to generate content: when they don't know the exact answer, we
expect them to provide a response anyway. Which they then do, based
either on the median of their dataset or on a subset of the dataset
they have recognized as aligning with their user's preferences and
biases. Restricting the AI to using only validated data helps minimize
this (i.e., by making it less eager to please), but at a cost to its
utility. It won't always have an answer... and why use a program that
half the time says, "I can't do that"? Validation also makes LLMs far
slower and more computationally expensive, as it demands longer
'memories' and deeper dives into the dataset to check their output.
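The "validated data only" trade-off can be sketched roughly like this. Everything here is an illustrative stand-in: the source list is invented, and substring matching stands in for the retrieval machinery a real grounded system would use:

```python
# Hypothetical sketch of a grounded answerer: respond only when a
# trusted source supports the query, otherwise refuse. The sources
# and the matching are toy stand-ins for real retrieval.
VALIDATED_SOURCES = {
    "smith v. jones (1999)": "Established the two-part negligence test.",
    "28 u.s.c. 1331": "Federal-question jurisdiction statute.",
}

def grounded_answer(query: str) -> str:
    q = query.lower()
    for citation, summary in VALIDATED_SOURCES.items():
        if citation in q:
            return f"{summary} [source: {citation}]"
    # The cost of grounding: no supporting source means no answer at all.
    return "I can't answer that from validated sources."

print(grounded_answer("What did Smith v. Jones (1999) hold?"))
print(grounded_answer("Cite a case supporting my argument."))  # refusal
```

Note the second query, the one a lawyer would actually want answered, gets a refusal, which is exactly the utility hit described above.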
And even then, hallucinations are still inevitable. You can reduce
them somewhat, but the very nature of how current AI works guarantees
that they will happen. Because what we call AI isn't in any way
intelligent. It has absolutely no understanding of what we are
asking, or of what it is saying. It just obeys the law of averages. Even
OpenAI agrees that hallucinations are mathematically inevitable. You
would have to fundamentally change the way the programs work before
they'd go away.
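A back-of-envelope illustration of why "law of averages" generation can't fully escape this: even a tiny per-token error rate compounds over a long answer. (The 99.9% figure is my own assumption for illustration, not OpenAI's number.)

```python
# Illustration only: assume a model emits a "safe" token 99.9% of the
# time, independently, over a 2000-token answer.
per_token_reliability = 0.999
tokens_in_answer = 2000

p_clean = per_token_reliability ** tokens_in_answer
print(f"Chance of a fully clean answer:  {p_clean:.1%}")       # ~13.5%
print(f"Chance of at least one slip:     {1 - p_clean:.1%}")   # ~86.5%
```

So even under generous assumptions, most long answers contain at least one unsupported step somewhere.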
On Fri, 27 Mar 2026 17:28:46 -0700, Dimensional Traveler
<dtravel@sonic.net> said this thing:
This is really the cancer behind almost all the woes of our society.
Not that the C-levels really care about the future of a company five
years down the line... it's all about next-quarter earnings for them.
It's not as if they don't have golden parachutes to protect them when
the company collapses, after all.
Dimensional Traveler <dtravel@sonic.net> wrote at 00:42 this Thursday (GMT):
"the LLMs become far less friendly." Meaning what exactly? What do
they do when you eliminate the "hallucinations"?
When you eliminate the hallucinations, whatever remains, however
improbable, is manipulation.
On 27/03/2026 15:02, Spalls Hurgenson wrote:
I tend to agree; hell, I can even be charitable and see why the
Metaverse seemed like a good idea. With AI I just don't see it. I
believe it was OpenAI that was talking about 'investing' $1.5 trillion
over the next years. How on earth will they get that money back, let
alone ever make a profit?
On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
thing:
I'm not so sure I'd go so far as to say the Metaverse was a good
idea, but it definitely could have been profitable. After all, the idea of an
"ever-game" is basically what Roblox has become; a common platform
which can be used to create a bunch of other games. But Facebook
limited itself to making it VR only (excluding everybody who didn't
have a VR headset) and had dreams of making it a commercial hub as
well. Plus, despite the nearly $200 billion Facebook spent on the
project, none of that showed in its visuals or mechanics.
The /concept/ of Metaverse (now called "Horizon Worlds") might have
worked. Facebook's actual attempt? I don't really see it.
Almost nobody is paying for AI, and interest in the tech is decreasing
as its downsides become more obvious. It's not that AI is without
value, but it is
a. way too expensive to spin up, and
b. has been sold as a general-purpose 'replace all
your employees' technology when it is a much more
restrictive tool
(c. also, a lot of its outputs -- a.k.a. AI slop-- have
given the tech a really bad reputation so that
products that are 'AI-free' are more highly valued)
A slower, less grandiose roll-out of the tech might have worked... but
the AI-bros --and the venture capitalists behind them-- hoped for instant-trillion-dollar returns. But they can only keep up the spin
for so long before the whole bubble pops.
On 12/04/2026 00:13, Spalls Hurgenson wrote:
On Sat, 11 Apr 2026 19:10:30 +0100, JAB <noway@nochance.com> said this
thing:
Well I did say I was trying to be charitable. Carve out a new market
backed up by some made-up figures on a spreadsheet and there you go.
Personally I just never saw how it would be popular but then again it
wasn't aimed at me.
That's where I think the contrast to the Metaverse comes in. I can
sort of see that it may have worked (relative to the investment). With
AI I just don't see it at all.
The appeal of AI to big corporations is that they figure it is cheaper
and faster than live human employees.
On 4/9/2026 8:00 AM, candycanearter07 wrote:
When you eliminate the hallucinations, whatever remains, however
improbable, is manipulation.
Isn't that what AIs are already doing?