Hello,
following the recent discussion about staying in awk versus promiscuous pipelining I tried to come up with simple awk idioms that might replace
my most common external commands (bc, cat, cut, echo, grep, head, nl,
printf, sed, tac, tail, tr, wc), enable me to stay within awk and thus
reap the benefits of having access to more sophisticated program logic
than a simple pipeline.
To my delight and surprise I discovered quite a lot of features that I overlooked or did not understand before (e.g., I almost never used
(g)sub, but rather went for sed). Some pages with one-liners, such as
https://catonmat.net/awk-one-liners-explained-part-one
also helped to rekindle my appreciation of this great tool. So thanks everyone here (especially Janis) for nudging me!
While searching here and there on the web, I stumbled again across
ksh93, which was advertised back then with
The new version of ksh has the functionality of other scripting
languages such as awk, icon, perl, rexx, and tcl. For this and many
other reasons, ksh is a much better scripting language than any of the
other popular shells.
When I saw that it comes with floating point arithmetic and arrays (indexed
and associative), the question occurred to me how true the rather bold
claim quoted above is.
In other words: How feasible is it to stay not within awk (the idea from
the start of this post), but rather solely within ksh93? Is it so powerful that awk could be burned at the stake? I have doubts, but
neither is my awk knowledge solid enough nor did I ever do serious work
with ksh93 to judge this from a technical perspective rather than with
gut feeling.
So I would be very happy to learn about awk functionality not in ksh93!
Thanks and best regards
Axel
But what I'm really wondering about is whether it makes sense to switch to ksh from bash for my day-to-day shell scripting. I'm pretty familiar and comfortable with bash at this point; I'd rather not switch unless there is
a good reason.
The system that I am typing on right now has /usr/bin/ksh as a link to /usr/bin/ksh2020 - which is, presumably, 27 years better than ksh93 (if you see what I mean...). I'd like to hear from people knowledgeable on both shells as to what advantages ksh has over bash.
The one that I am aware of is floating point math handled natively in the shell. This is a significant thing, and one I often miss in bash. I see
no particular reason why it could not be implemented in bash. In fact,
I've worked up a system where I run "bc" as a co-process that works pretty well. Note that if you Google "floating point in bash", you'll get lots of suggestions to use "bc", but none (AFAICT) mention running it as a co-process. That seems to be my innovation.
A shell is an environment from which to manipulate (create/destroy)
files and processes with a language to sequence calls to tools. It is
not a tool to manipulate text. Awk is a tool to manipulate text. In
fact awk is the tool that the people who invented shell also invented
for shell to call to do general purpose text manipulation.
So no, no matter what language constructs it has, a shell is not
designed to replace awk and vice-versa.
https://unix.stackexchange.com/questions/169716/why-is-using-a-shell-loop-to-process-text-considered-bad-practice
The point of my previous posts was to be suspicious when starting to
connect a lot of the primitive tools - you mentioned them above - by pipelines especially in _addition_ to one (or more) awk instances in
that same pipeline.
Often these tools are only used at front or rear of a pipeline, so
easily added per pipe when necessary. In cases where you store the
input data anyway in awk arrays you can of course also omit these
tools.
You mentioned the associative arrays as an example of a ksh feature;
all that cryptic shell syntax here is not something I'd think is good
syntax.
Do you need your programs to be widely portable?
Then use the POSIX subset.
If doing data processing my emphasis is on awk; if doing process-based
automation my emphasis is on ksh93.
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
The point of my previous posts was to be suspicious when starting to
connect a lot of the primitive tools - you mentioned them above - by
pipelines especially in _addition_ to one (or more) awk instances in
that same pipeline.
I do get your point, but to be honest I struggle with converting a
pipeline of primitive tools to ONE awk script.
I managed to replace
several pipeline parts with a nice awk one-liner individually (even if
it still feels "unnatural" to me), but that results of course in the
same number of processes and will not reap the dataflow benefits enabled
by staying within one awk incarnation.
You mentioned the associative arrays as an example of a ksh feature;
all that cryptic shell syntax here is not something I'd think is good
syntax.
Good to know, and one advantage less over bash.
Let's start with the insight that simple specialized commands can be expressed with awk syntax - note that there may be subtle differences
in one case or another but that doesn't change its general usability
in practice...
  cat                { print $0 }             or just:  1
  grep pattern       /pattern/
  head -n 5          NR <= 5
  cut -f1            { print $1 }
  tr a-z A-Z         { print toupper($0) }
  sed 's/hi/ho/g'    gsub(/hi/,"ho")
  wc -l              END { print NR }
(These are just a few examples. But it shows that you can use awk in
simple ways to achieve elementary standard tasks, and it also shows
that you use a primitive and coherent standard syntax as opposed to
many individual commands, options, and many tool-specific syntaxes.)
Let's continue with a simple composition of basic functions, say,
cat infile | tail -n +2 | grep 'Error' | cut -f1 | tr a-z A-Z
awk 'NR>=2 && /Error/ { print toupper($1) }' infile
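A quick check on made-up tab-separated input (file name and data invented here) shows the two spellings agree:

```shell
# Invented sample: a header line, then tab-separated "id<TAB>message".
printf 'id\tmsg\n42\tError: foo\n43\tok\n44\tError: bar\n' > infile

# The pipeline and the single awk program print the same two ids:
cat infile | tail -n +2 | grep 'Error' | cut -f1 | tr a-z A-Z
awk 'NR>=2 && /Error/ { print toupper($1) }' infile
# each prints:
# 42
# 44
```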
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
Let's start with the insight that simple specialized commands can be
expressed with awk syntax - note that there may be subtle differences
in one case or another but that doesn't change its general usability
in practice...
  cat                { print $0 }             or just:  1
  grep pattern       /pattern/
  head -n 5          NR <= 5
  cut -f1            { print $1 }
  tr a-z A-Z         { print toupper($0) }
  sed 's/hi/ho/g'    gsub(/hi/,"ho")
  wc -l              END { print NR }
(These are just a few examples. But it shows that you can use awk in
simple ways to achieve elementary standard tasks, and it also shows
that you use a primitive and coherent standard syntax as opposed to
many individual commands, options, and many tool-specific syntaxes.)
These examples seem to be chosen so as to be particularly easy to write
in AWK.
You could have chosen
cat -A
grep -o pattern
head -n -4
cut -f5-
tr n-za-mN-ZA-M a-zA-Z
sed '/:/s/hi/Hi/'
wc -c
And you've shortened the AWK in a few places: sed 's/hi/ho/g' is really
awk '{gsub(/hi/,"ho");print}'
and cut -f1 is really
awk 'BEGIN{FS="\t"}{print $1}'
(or awk -F$'\t' '{print $1}' if you don't mind a non-standard shell construct).
Let's continue with a simple composition of basic functions, say,
cat infile | tail -n +2 | grep 'Error' | cut -f1 | tr a-z A-Z
awk 'NR>=2 && /Error/ { print toupper($1) }' infile
It could be argued that your presentation is again a little skewed!
Why the UUOC -- it just makes the pipe look longer?
Why pick an example where the matching happens before the transform?
And it's handy that tail -n +2 is easier than tail -2 in AWK.
For example
<infile sed 's/#.*//' | cut -f3 | grep . | tail -5
is more fiddly in AWK.
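For the record, one possible spelling of that last pipeline as a single awk program (a sketch; a ring buffer handles the tail -5 part, and unlike cut this version drops lines that contain no tab at all):

```shell
# <infile sed 's/#.*//' | cut -f3 | grep . | tail -5  -- as one awk call
awk -F'\t' '
    { sub(/#.*/, "") }               # sed s/#.*//  (modifying $0 resplits it)
    $3 != "" { buf[n++ % 5] = $3 }   # cut -f3 | grep .  -> keep last 5 seen
    END { for (i = (n > 5 ? n - 5 : 0); i < n; i++) print buf[i % 5] }
' infile
```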
On 18.02.2022 23:49, Axel Reichert wrote:
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
Ideally you don't "convert" a pipeline, but just use the appropriate
tool to formulate the task.
Let's start with the insight that simple specialized commands can be expressed with awk syntax
cat infile | tail -n +2 | grep 'Error' | cut -f1 | tr a-z A-Z
awk 'NR>=2 && /Error/ { print toupper($1) }' infile
And while you think about a (proliferating) shell script, we change NR
to FNR, add $3~, and extend the file list...
A 4-digit number is called "bellied" if the sum of the inner two digits
is larger than the sum of the outer two digits. How long is the longest
uninterrupted sequence of bellied numbers? (This was a bonus homework
for 12-year-olds. Extra points to be earned if the start of this
sequence is output as well. No computer to be used.)
In article <87v8x9baba.fsf_-_@axel-reichert.de>,
Axel Reichert <mail@axel-reichert.de> wrote:
...
A 4-digit number is called "bellied" if the sum of the inner two digits
is larger than the sum of the outer two digits. How long is the longest
uninterrupted sequence of bellied numbers? (This was a bonus homework
for 12-year-olds. Extra points to be earned if the start of this
sequence is output as well. No computer to be used.)
% gawk4 'BEGIN {
    FIELDWIDTHS = "1 1 1 1"
    for (i = 1000; i < 10000; i++) {
        $0 = i
        if ($1 + $4 < $2 + $3) A[x] = A[x] " " i; else x++
    }
    for (i in A) if ((l = length(A[i])) > max) { max = l; idx = i }
    print max, idx, A[idx]
}'
400 283 1920 1921 1922 1923 1924 1925 1926 1927 1928 1929 1930 1931
1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945
1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959
1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973
1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987
1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999
%
How'd I do?
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
On 18.02.2022 23:49, Axel Reichert wrote:
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
Ideally you don't "convert" a pipeline, but just use the appropriate
tool to formulate the task.
Sure, but I have just drafted myself into a boot camp.
Let's start with the insight that simple specialized commands can be
expressed with awk syntax
This is what I have done for some of my typical pipeline idioms.
cat infile | tail -n +2 | grep 'Error' | cut -f1 | tr a-z A-Z
awk 'NR>=2 && /Error/ { print toupper($1) }' infile
This one was easy.
And while you think about a (proliferating) shell script, we change NR
to FNR, add $3~, and extend the file list...
You are doing good advertising. (-:
Now for one example where the pipeline solution was a matter of minutes without much thinking or trial and error, but where I struggled with a conversion to awk (remember, boot camp):
A 4-digit number is called "bellied" if the sum of the inner two digits
is larger than the sum of the outer two digits. How long is the longest
uninterrupted sequence of bellied numbers? (This was a bonus homework
for 12-year-olds. Extra points to be earned if the start of this
sequence is output as well. No computer to be used.)
The pipeline solution (there are others, shorter, more elegant, but that
is how it flowed out of my head):
seq 1000 9999 \
| sed 's/\(.\)/\1 /g' \
| awk '{if ($1+$4 < $2+$3) {print 1} else {print 0}}' \
| tr -d '\n' \
| tr -s 0 '\n' \
| sort \
| tail -n 1 \
| tr -d '\n' \
| wc -c
Get all 4-digit numbers and separate the digits with spaces. Print 1 for bellied numbers, 0 otherwise. Transform into a single line. Squeeze the
zeros (they serve only to mark the start of a new bellied sequence) and
make a linebreak from them. After sorting, the last line will contain
a string of "1"s from the longest bellied sequence. Get rid of the
final newline and count characters.
Now some awk replacements for the individual "tubes" of the pipeline:
- awk 'BEGIN {for (i=1000; i<10000; i++) print i}'
- awk -F "" '{$1=$1; print}' # non-standard (?) empty FS
- awk '{if ($1+$4 < $2+$3) {print 1} else {print 0}}'
- awk 'BEGIN {RS=""; ORS=""} gsub(/\n/, "")'
- awk 'gsub(/0+/, "\n")'
- awk '{print | "sort"}'
But this is kind of cheating. I could of course take the maximum
number found so far. But that is a different solution, and "sort" is
a staple part of my typical pipelines.
- awk 'END {print}'
- No need for "tr -d '\n'" here
- awk '{print length($0)}'
Now that the tubes are ready, I would be very interested in the
"plumbing"/"welding" together. It took me quite some time already to come
up with the BEGIN in the initial for loop, and I have no idea how to deal
with several BEGIN addresses or the mangling of the field/record
separators in one single file bellied.awk, to be run with "awk -f". That
is what I am aiming for. My best so far:
awk 'BEGIN {for (i=1000; i<10000; i++) {print i}}' \
| awk '{FS=""; $1=$1; ORS=""; if ($1+$4<$2+$3) {print 1} else {print 0}}' \
| awk 'gsub(/0+/, "\n") {print | "sort"}' \
| awk 'END {print length($0)}'
Leaving pipes is hard to do ...
Best regards
Axel
P. S.: With apologies to Kaz, who has read this problem some time back
in comp.lang.lisp, with similarly stupid Lisp questions from me. (-:
P. P. S.: I understand that an idiomatic awk solution will likely be
based more on numbers than on strings.
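One possible welding of all the tubes into a single program, shown here as a sketch (not from the thread; it uses portable substr() splitting instead of the empty-FS trick, and does all work in one BEGIN block so no separator mangling is needed):

```shell
# bellied.awk as a single self-contained program: generate, test and
# count entirely inside BEGIN, so awk reads no input at all.
awk 'BEGIN {
    for (i = 1000; i < 10000; i++) {
        s = i ""                      # force string context for substr()
        # bellied: inner digit sum exceeds outer digit sum
        if (substr(s,1,1) + substr(s,4,1) < substr(s,2,1) + substr(s,3,1)) {
            if (++c > max) { max = c; start = i - c + 1 }
        } else
            c = 0
    }
    print max, start                  # length and start of longest run
}'
```

This prints "80 1920", matching the length and starting number reported elsewhere in the thread.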
The above relies on you using an awk version like GNU awk that given a
null FS splits the string into characters. With any other awk you'd
delete the BEGIN section and replace
($2+$3) > ($1+$4) {
with
(substr($0,2,1)+substr($0,3,1)) > (substr($0,1,1)+substr($0,4,1))
Well, the length of the longest sequence (80, which is indeed from 1920
to 1999) is nowhere in your output. No, substrings do not count. (-;
I have no idea where the 400 and 283 come from. I did not know about FIELDWIDTHS, an interesting technique for the "split".
It sounds like you're looking for a comparison of awk string
constructs to your above command pipeline:
gsub(/./,"& ")
400 is the length of the string. I think you can do the math of dividing that by 5 to get the result you seek (80).
Ed Morton <mortonspam@gmail.com> writes:
it sounds like you're looking for a comparison of awk string
constructs to your above command pipeline
Yes, exactly. Sorry for not making this clearer. Overall it looks like
the "numerical" solution is simpler in awk, while the "string" solution
is easier in a pipeline.
gsub(/./,"& ")
That's a nice idiom for splitting! And from my understanding it is also
more portable than FS="".
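To spell the idiom out on one arbitrary example number: "&" re-inserts the matched character, so every character gets a trailing space and the default field splitting then sees one digit per field.

```shell
echo 1920 | awk '{ gsub(/./, "& "); print $1 + $4, $2 + $3 }'
# prints "1 11": outer sum 1+0, inner sum 9+2 -- so 1920 is bellied
```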
Thanks, I learned quite a bit here!
Axel
On 20.02.2022 20:35, Axel Reichert wrote:
You are doing good advertising. (-:
I don't sell anything.
Please note that "conversion" might not be the ideal view for every
case
building blocks (pipe'ish: commands [incoherent], awk'ish: language [coherent])
The 'sort' is "in the way"
I'd simplify the awk command as
awk '{print ($1+$4 < $2+$3)}'
I'd nonetheless have used 'sort -r' and 'head -n 1'
build binary string r=r ($1+$4 < $2+$3)
seq 1000 9999 | awk '
{ gsub(/./,"& ") }
m=($1+$4 < $2+$3) { if (++c>max) max=c }
!m { c=0 }
END { print max }'
(Ah, okay, you want the numbers also generated by awk. I assumed that
'seq' was just an example input source that could in principle be any
process or file as data source.)
I usually don't hard-code constants, but pass them as parameters
{ split(r,a,/0+/) }
awk '$2 { print $1 ; exit }'
Not bad, but that doesn't work since you want to process the output
of 'sort'
(flow (range 1000 9999)(keep-if [chain digits (ap > (+ @2 @3) (+ @1 @4))])
(partition-if (op neq 1 (- @2 @1)))
(find-max @1 : len))
Kaz Kylheku <480-992-1380@kylheku.com> writes:
(flow (range 1000 9999)(keep-if [chain digits (ap > (+ @2 @3) (+ @1 @4))])
(partition-if (op neq 1 (- @2 @1)))
(find-max @1 : len))
Interesting, how close a Lisp can get to my original pipeline. Thanks!
Axel
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
On 20.02.2022 20:35, Axel Reichert wrote:
I'd nonetheless have used 'sort -r' and 'head -n 1'
Understood, great: "sort" does not care about the "-r", because it just negates the "predicate", but printing the first line is both cheaper and easier for awk. It seems you have learned a thing or two about Big Oh notation. (-;
I tend to neglect this, being used to much larger number crunching
efforts than the modest 10^4 numbers here. Thanks for reminding me.
build binary string r=r ($1+$4 < $2+$3)
Great, another feature that I tend to forget, the default string concatenation. Very elegant in combination with the C-style boolean.
seq 1000 9999 | awk '
{ gsub(/./,"& ") }
m=($1+$4 < $2+$3) { if (++c>max) max=c }
!m { c=0 }
END { print max }'
Great. That kind of stuff I was after and eager to learn!
(Ah, okay, you want the numbers also generated by awk. I assumed that
'seq' was just an example input source that could in principle be any
process or file as data source.)
In general I agree, but here it is one of the conceptual obstacles in my
way of thinking. So far I don't believe any post in this thread has shown
how to "feed" output from the BEGIN block to the rest of the awk program.
Or did I miss something?
Janis Papanagnou <janis_papanagnou@hotmail.com> writes:
build binary string r=r ($1+$4 < $2+$3)
Great, another feature that I tend to forget, the default string concatenation. Very elegant in combination with the C-style boolean.
If you are saying that you want to generate the sequence numbers in
the BEGIN block and pass them to the implicit awk-processing-loop
then yes, that's not possible.
You would then do the whole processing in the BEGIN block using a loop;
I think I've seen someone in this sub-thread posting such a loop-based awk-solution.
Personally I tend to intuitively separate data generation from data
processing, as a consequence of design and usability experience.