Hi,
On Wed, 20 Nov 2024 10:34:50 +0000 Roger wrote:
I had assumed that ntpd would mobilize a few servers and choose
one to replace the unreachable server.
How did you configure the NTP pool in your ntp.conf?
With the 'server'-directive perhaps?
On Wed, 20 Nov 2024 15:48:00 -0000 (UTC), "Marco Davids
(SIDN)" via questions Mailing List wrote:
On Wed, 20 Nov 2024 10:34:50 +0000 Roger wrote:
I had assumed that ntpd would mobilize a few servers and choose
one to replace the unreachable server.
How did you configure the NTP pool in your ntp.conf?
With the 'server'-directive perhaps?
No, I am using "pool 0.pool.ntp.org poll 11" (and 1. 2. 3. as
well). This is why I thought the non-responding server would
be replaced. If I had used "server 178.238.156.140" then I would
expect ntpd to keep trying to get an answer.
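For readers following along, the configuration being described would look roughly like this (a sketch reconstructed from the message; note that a later reply questions whether a bare "poll 11" option is even accepted, the documented keywords being "minpoll"/"maxpoll"):

```
pool 0.pool.ntp.org poll 11
pool 1.pool.ntp.org poll 11
pool 2.pool.ntp.org poll 11
pool 3.pool.ntp.org poll 11
```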
On 2024-11-20 10:11, Roger wrote:
On Wed, 20 Nov 2024 15:48:00 -0000 (UTC), "Marco Davids
(SIDN)" via questions Mailing List wrote:
On Wed, 20 Nov 2024 10:34:50 +0000 Roger wrote:
I had assumed that ntpd would mobilize a few servers and choose
one to replace the unreachable server.
How did you configure the NTP pool in your ntp.conf?
With the 'server'-directive perhaps?
No, I am using "pool 0.pool.ntp.org poll 11" (and 1. 2. 3. as
well). This is why I thought the non-responding server would
be replaced. If I had used "server 178.238.156.140" then I would
expect ntpd to keep trying to get an answer.
Maybe add "iburst preempt" options and drop "poll 11", or perhaps change to "maxpoll 11" or higher, unless you have very good reasons to require a longer interval than the default maximum, instead of adaptive polling based on the error.
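Rendered as an ntp.conf line, the suggestion above would look something like this (illustrative only; the pool name is an example):

```
pool 0.pool.ntp.org iburst preempt maxpoll 11
```

With maxpoll instead of a fixed poll, ntpd can still shorten the interval adaptively when the clock error grows.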
On Wed, 20 Nov 2024 19:53:00 -0000 (UTC), Brian Inglis wrote:
On 2024-11-20 10:11, Roger wrote:
On Wed, 20 Nov 2024 15:48:00 -0000 (UTC), "Marco Davids
(SIDN)" via questions Mailing List wrote:
On Wed, 20 Nov 2024 10:34:50 +0000 Roger wrote:
I had assumed that ntpd would mobilize a few servers and choose
one to replace the unreachable server.
How did you configure the NTP pool in your ntp.conf?
With the 'server'-directive perhaps?
No, I am using "pool 0.pool.ntp.org poll 11" (and 1. 2. 3. as
well). This is why I thought the non-responding server would
be replaced. If I had used "server 178.238.156.140" then I would
expect ntpd to keep trying to get an answer.
Maybe add "iburst preempt" options and drop "poll 11", or perhaps change to "maxpoll 11" or higher, unless you have very good reasons to require a longer
interval than the default maximum, instead of adaptive polling based on the error.
Well, the documentation (confopt) tells me that the pool command
"mobilizes a preemptable pool client mode association for the
DNS name specified." Why would adding "preempt" change anything?
Although I have "pool ... poll 11" the poll does shorten
sometimes, going down to poll 6 if necessary. It seems to be
when the temperature (whether ambient or due to processor load)
changes too quickly.
My question is why would a preemptable server, acquired using
"pool ...", continue to be polled after it has stopped
responding, i.e., the reach has gone to 0? Is it a
misunderstanding on my part or is there a bug in the code?
Or a doc bug?
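For context on "reach": it is conventionally an 8-bit shift register, shifted once per poll interval, with the low bit set when a response arrives, so eight consecutive unanswered polls drive it from 377 (octal) to 0. A minimal illustrative sketch of that bookkeeping (not ntpd's actual code):

```python
def update_reach(reach: int, got_response: bool) -> int:
    """Shift the 8-bit reachability register once per poll interval."""
    reach = (reach << 1) & 0xFF  # drop the oldest of the 8 samples
    if got_response:
        reach |= 1               # record this poll as answered
    return reach

# Eight unanswered polls take a fully reachable peer (0o377) to 0.
reach = 0o377
for _ in range(8):
    reach = update_reach(reach, got_response=False)
print(oct(reach))  # prints 0o0
```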
I had assumed that ntpd would mobilize a few servers and choose
one to replace the unreachable server. Why assume this? If the
server had been removed from the pool then sending packets
forever would be wrong. However, there were no new mobilization
attempts, the server came back with the same association number.
In this instance it was an "internet malfunction", see graphs on
link below.
https://www.ntppool.org/a/markcpowell
Was my expectation wrong?
Did Dave Hart's ntp-dev-3792-msm-v2 contain such code which
didn't yet get into the released code?
On Wed, 20 Nov 2024 15:48:00 -0000 (UTC), "Marco Davids
(SIDN)" via questions Mailing List <questions@lists.ntp.org>
wrote:
How did you configure the NTP pool in your ntp.conf?
With the 'server'-directive perhaps?
No, I am using "pool 0.pool.ntp.org poll 11" (and 1. 2. 3. as
well). [...]
On 2024-11-20 14:32, Roger wrote:
On Wed, 20 Nov 2024 19:53:00 -0000 (UTC), Brian Inglis wrote:
Maybe add "iburst preempt" options and drop "poll 11" or perhaps change to
"maxpoll 11" or higher, unless you have very good reasons to require a longer
interval than the default maximum, instead of adaptive polling based on
the error.
Well, the documentation (confopt) tells me that the pool command
"mobilizes a preemptable pool client mode association for the
DNS name specified." Why would adding "preempt" change anything?
It *may* be required and can never hurt:
$ grep 'pool.*preempt' ~/src/time/ntp/ntp-4.2.8p18/ntpd/complete.conf.in
pool 2.ubuntu.pool.ntp.org. iburst preempt
$ man 5 ntp.conf
...
Configuration Commands
...
*pool* For type s addresses, this command mobilizes a persistent
client mode association with a number of remote servers. In
this mode the local clock can be synchronized to the remote server,
but the remote server can never be synchronized to the local
clock.
...
Options:
...
*preempt* Says the association can be preempted.
...
This manual page was AutoGen‐erated from the ntp.conf option definitions.
4.2.8p18 25 May 2024 ntp.conf(5)
although the older:
https://www.ntp.org/documentation/4.2.8-series/confopt/#server-commands
says:
"Server Commands and Options
Last update: March 23, 2023 21:05 UTC (6ad51a76f)
...
Server Commands
...
pool
For type s addresses (only) this command mobilizes a preemptable pool
client
mode association for the DNS name specified. "
...
Server Command Options
...
preempt
Specifies the association as preemptable rather than the default
persistent.
This option is ignored with the broadcast command and is most useful with
the
manycastclient and pool commands."
My question is why would a preemptable server, acquired using
"pool ...", continue to be polled after it has stopped
responding, i.e., the reach has gone to 0? Is it a
misunderstanding on my part or is there a bug in the code?
Or a doc bug?
On Thu, 21 Nov 2024 17:03:00 -0000 (UTC), "Brian Inglis" <Brian.Inglis@SystematicSW.ab.ca> wrote:
HUGE snip
Or a doc bug?
Thank you. That's interesting. I was looking at confopt.html
contained within the ntp-4.2.8p18 source tree. I see that its
file date is 2020-03-03 whereas the man page has a file date of
2024-05-25. I shall now add preempt to my pool lines.
On Wed, Nov 20, 2024 at 6:10 PM Roger <invalid@invalid.invalid> wrote:
On Wed, 20 Nov 2024 15:48:00 -0000 (UTC), "Marco Davids
(SIDN)" via questions Mailing List <questions@lists.ntp.org>
wrote:
How did you configure the NTP pool in your ntp.conf?
With the 'server'-directive perhaps?
No, I am using "pool 0.pool.ntp.org poll 11" (and 1. 2. 3. as
well). [...]
Are you sure you didn't mean "maxpoll 11"? My reading of the code suggests the line you provided would be rejected as a syntax error by ntpd.
On 11/21/2024 12:28 PM, Roger wrote:
On Thu, 21 Nov 2024 17:03:00 -0000 (UTC), "Brian Inglis"
<Brian.Inglis@SystematicSW.ab.ca> wrote:
HUGE snip
Or a doc bug?
Thank you. That's interesting. I was looking at confopt.html
contained within the ntp-4.2.8p18 source tree. I see that its
file date is 2020-03-03 whereas the man page has a file date of
2024-05-25. I shall now add preempt to my pool lines.
The dates can be misleading.
The website is now generated via Hugo, so (as best I understand) Dru
takes the man pages, converts them to Hugo, and that's what's on the website.
So if the date of a page on the website is more recent than the date on
the man page, that doesn't mean that the content is newer, it just means
it was (at least) formatted more recently.
The intent and expectation is that if a change is made to the Hugo
version of the docs, that change is applied "upstream" as well.
The goal is to get the master documentation formatted for Hugo, and at
that point we'll be generating all of the documentation output targets
from the same (single) source documents, and the dates should then all match.
On Wed, Nov 20, 2024 at 11:51 AM Roger <invalid@invalid.invalid> wrote:
I had assumed that ntpd would mobilize a few servers and choose
one to replace the unreachable server. Why assume this? If the
server had been removed from the pool then sending packets
forever would be wrong. However, there were no new mobilization
attempts, the server came back with the same association number.
In this instance it was an "internet malfunction", see graphs on
link below.
https://www.ntppool.org/a/markcpowell
Was my expectation wrong?
Sadly, yes.
Did Dave Hart's ntp-dev-3792-msm-v2 contain such code which
didn't yet get into the released code?
Yes, that code has not been released yet, and as it's based on the source code from about two years ago, it's going to be a bit painful to merge into the current code. The 3792 test release code contains logic to gradually refine the pool servers by removing one at a time when certain conditions
are met, and not responding for 10 poll intervals is one of those
conditions.
As an aside, using "preempt" on a non-pool non-manycastclient association (basically, configured via "server" or "peer") seems quixotic to me, as it allows the association to be removed but nothing is done to replace it. I have a difficult time imagining where that might be useful.
On Fri, 22 Nov 2024 05:58:00 -0000 (UTC), "Dave Hart via
questions Mailing List" <questions@lists.ntp.org> wrote:
As an aside, using "preempt" on a non-pool non-manycastclient association
(basically, configured via "server" or "peer") seems quixotic to me, as it allows the association to be removed but nothing is done to replace it. I have a difficult time imagining where that might be useful.
Something in the ntp.conf man page I can't get my head around is
why one would have "pool ... prefer". If one were using only one
pool line it would, presumably, result in all servers being
preferred.
On 11/22/2024 12:52 AM, Roger wrote:
On Fri, 22 Nov 2024 05:58:00 -0000 (UTC), "Dave Hart via
questions Mailing List" <questions@lists.ntp.org> wrote:
As an aside, using "preempt" on a non-pool non-manycastclient association (basically, configured via "server" or "peer") seems quixotic to me, as it allows the association to be removed but nothing is done to replace it. I have a difficult time imagining where that might be useful.
Something in the ntp.conf man page I can't get my head around is
why one would have "pool ... prefer". If one were using only one
pool line it would, presumably, result in all servers being
preferred.
Note that the BUGS section of the ntp.conf man page says:
The syntax checking is not picky; some combinations of
ridiculous and even hilarious options and modes may not be
detected.
In the above case, I'd recommend looking at the code and perhaps seeing
if the description of the "prefer" option could be improved.
The preemptible option is forced on for pool servers, so they are
preemptible with or without that option. However, that option doesn't do much in 4.2.8 as the code intended to preempt useless servers has an off-by-one error that's corrected in my test 3792 release, so preemption
only happens in the unusual case where there are more than 2 times as many pool or manycast client associations as "tos maxclock" which defaults to
10. Arguably this could be fixed in the stable 4.2.8 branch but it would
be a substantial change in behavior without any configuration change that might break existing setups that depend on the off-by-one error.
(basically, configured via "server" or "peer") seems quixotic to me, as
it allows the association to be removed but nothing is done to replace it.
I have a difficult time imagining where that might be useful.
On Thu, Nov 21, 2024 at 4:56 PM Brian Inglis <Brian.Inglis@systematicsw.ab.ca> wrote:
On 2024-11-20 14:32, Roger wrote:
> On Wed, 20 Nov 2024 19:53:00 -0000 (UTC), Brian Inglis wrote:
>> Maybe add "iburst preempt" options and drop "poll 11" or perhaps change to
>> "maxpoll 11" or higher, unless you have very good reasons to require a
longer
>> interval than the default maximum, instead of adaptive polling based on
the error.
>
> Well, the documentation (confopt) tells me that the pool command
> "mobilizes a preemptable pool client mode association for the
> DNS name specified." Why would adding "preempt" change anything?
It *may* be required and can never hurt:
In fact it won't change anything. The only options propagated from the "pool"
directive in ntp.conf (and thereby set on the prototype pool association listed
with refid POOL in the peers billboard) to the resulting pool server associations
are "iburst" and "noselect". See POOL_FLAG_PMASK in source code file ntp_proto.c.
The preemptible option is forced on for pool servers, so they are preemptible
with or without that option. However, that option doesn't do much in 4.2.8 as
the code intended to preempt useless servers has an off-by-one error that's corrected in my test 3792 release, so preemption only happens in the unusual case where there are more than 2 times as many pool or manycast client associations as "tos maxclock" which defaults to 10. Arguably this could be
fixed in the stable 4.2.8 branch but it would be a substantial change in behavior without any configuration change that might break existing setups that
depend on the off-by-one error.
As an aside, using "preempt" on a non-pool non-manycastclient association (basically, configured via "server" or "peer") seems quixotic to me, as it allows the association to be removed but nothing is done to replace it. I have
a difficult time imagining where that might be useful. It may have been useful
in the pre-2009 implementation of "pool" which I'm having a hard time remembering because I thought it was primitive and needed improvement, as it did
all its work at startup and never changed the servers selected once up and running. I re-implemented it to the current iteration, but didn't catch that
the preemption was suffering the aforementioned off-by-one error, or it wasn't
back then.
If you're wondering why I mentioned "manycastclient", it shares much of the implementation with "pool". They use different approaches to finding servers,
but the rest of the code is common. Both are intended to be automatic server
discovery schemes that discard, or preempt, servers which haven't been useful
for 10 poll intervals so that another server can be solicited to replace it.
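The "haven't been useful for 10 poll intervals" rule described above can be sketched as a simple per-association counter (purely illustrative; the names are hypothetical, not ntpd's actual code):

```python
UNREACH_LIMIT = 10  # poll intervals without a useful response before preemption

class Association:
    """Toy model of a preemptible (pool/manycast) association."""

    def __init__(self, name: str):
        self.name = name
        self.unreach = 0        # consecutive unproductive poll intervals
        self.preempted = False  # set once the association is discarded

    def on_poll(self, was_useful: bool) -> None:
        """Account for one poll interval."""
        if was_useful:
            self.unreach = 0
        else:
            self.unreach += 1
            if self.unreach >= UNREACH_LIMIT:
                # Discard so the discovery scheme can solicit a replacement.
                self.preempted = True

a = Association("pool-server")
for _ in range(10):
    a.on_poll(was_useful=False)
print(a.preempted)  # prints True
```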
$ grep 'pool.*preempt' ~/src/time/ntp/ntp-4.2.8p18/ntpd/complete.conf.in
pool 2.ubuntu.pool.ntp.org. iburst preempt
complete.conf.in is part of the "make check" tests and
is not intended to suggest useful configurations. Rather it's used both to ensure every keyword in the configuration file parser is covered, and to ensure
a configuration can successfully round-trip through ntpd's reading and applying
the configuration and exporting the configuration via the --saveconfigquit command-line option added specifically for that developer test to
catch changes which break that functionality. It's no coincidence it is ordered
exactly the same as the output of ntpq's saveconfig command, which requires authentication and that a directory for such saved configuration files has been
specified in ntp.conf with "saveconfigdir".
$ man 5 ntp.conf
...
Configuration Commands
...
*pool* For type s addresses, this command mobilizes a persistent
client mode association with a number of remote servers. In
this mode the local clock can be synchronized to the remote server,
but the remote server can never be synchronized to the local
clock.
...
Options:
...
*preempt* Says the association can be preempted.
...
This manual page was AutoGen‐erated from the ntp.conf option definitions.
4.2.8p18 25 May 2024 ntp.conf(5)
although the older:
https://www.ntp.org/documentation/4.2.8-series/confopt/#server-commands
says:
"Server Commands and Options
Last update: March 23, 2023 21:05 UTC (6ad51a76f)
...
Server Commands
...
pool
For type s addresses (only) this command mobilizes a preemptable pool client
mode association for the DNS name specified. "
...
Server Command Options
...
preempt
Specifies the association as preemptable rather than the default persistent.
This option is ignored with the broadcast command and is most useful with the
manycastclient and pool commands."
Despite the timestamps you quoted, the web version is likely newer. Autogen is
run against the documentation source files with every release, so that timestamp
reflects the release date, not the last update of the documentation source files
(.html in this case).
Since the overhaul of the www.ntp.org website a few years
back, that documentation sadly is maintained in two places, and there's no process to ensure they stay in sync. The web version is considered the more
authoritative source, and is maintained in .md (Markdown) published only via the
converted HTML on the website. It started as a copy of the documentation from
the source tarballs' /html directory, but after conversion to Markdown and subsequent improvements, those changes have generally not been made to the HTML
version distributed with the source. I'm partly to blame because I find writing
documentation tedious enough without having to update it in two places, and I've
been kept quite busy with coding work and haven't wanted to take the time to correct documentation that no longer reflects the reality of the code. In theory one day I will have time to dedicate to that, but I welcome anyone who
enjoys documentation work or at least really wants accurate NTP documentation to
please volunteer to help out.
> My question is why would a preemptable server, acquired using
> "pool ...", continue to be polled after it has stopped
> responding, i.e., the reach has gone to 0? Is it a
> misunderstanding on my part or is there a bug in the code?
Or a doc bug?
A doc bug and an off-by-one bug in the preemption logic.
On 2024-11-21 22:55, Dave Hart (via questions Mailing List) wrote:
As an aside, using "preempt" on a non-pool non-manycastclient association
(basically, configured via "server" or "peer") seems quixotic to me, as it
allows the association to be removed but nothing is done to replace it. I have
a difficult time imagining where that might be useful.
It can be useful when you have an adequate number of (some local) backup servers
or (former) pools, but some (local) have an annoying habit of going unreachable,
but not being noticed, and support not being responsive to hints for
weeks, e.g.
...
server ...
...
server ntp2.cpsc.ucalgary.ca iburst preempt # U Calgary T2N AB CA
server ntp1.yycix.ca iburst preempt # YYCIX, Calgary T2P AB CA
server ntp2.switch.ca iburst preempt # TELUS, Edmonton T6H AB CA
...
server ...
...
tos minsane 3 minclock 5 maxclock 7
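For anyone puzzling over the tos line above, a rough gloss (paraphrased from the ntp.conf documentation; consult it for the precise semantics and defaults):

```
# minsane 3   -- at least 3 suitable sources required before the clock is set
# minclock 5  -- clustering stops casting out outliers once 5 candidates remain
# maxclock 7  -- cap on the number of preemptable associations retained
tos minsane 3 minclock 5 maxclock 7
```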
If you're wondering why I mentioned "manycastclient", it shares much of the
implementation with "pool". They use different approaches to finding servers,
but the rest of the code is common. Both are intended to be automatic server
discovery schemes that discard, or preempt, servers which haven't been useful
for 10 poll intervals so that another server can be solicited to replace it.
I noticed those comments.
$ grep 'pool.*preempt' ~/src/time/ntp/ntp-4.2.8p18/ntpd/complete.conf.in
pool 2.ubuntu.pool.ntp.org. iburst preempt
complete.conf.in is part of the "make check" tests and
is not intended to suggest useful configurations. Rather it's used both to
ensure every keyword in the configuration file parser is covered, and to ensure
a configuration can successfully round-trip through ntpd's reading and applying
the configuration and exporting the configuration via the --saveconfigquit
command-line option added specifically for that developer test to
catch changes which break that functionality. It's no coincidence it is ordered
exactly the same as the output of ntpq's saveconfig command, which requires
authentication and that a directory for such saved configuration files has been
specified in ntp.conf with "saveconfigdir".
Implies that pool will round trip with iburst preempt?
Docs should be Autogen-erated from these masters?