• Re: Dereference relative to increment and decrement operators ++ --

    From Bart@bc@freeuk.com to comp.lang.misc on Sun Nov 20 16:08:27 2022
    From Newsgroup: comp.lang.misc

    On 20/11/2022 12:28, David Brown wrote:

    Look at C as an example.  Not everyone likes the language, and the only people who find nothing to dislike in it are people who haven't used it enough.

    I disliked C at first glance, having never used it. But I loved Algol68,
    having never used that either.

    But given a task now, if the choice were between those two languages, I
    would choose C, mainly because some design choices of Algol68 syntax
    make writing code more painful than in C. (Ahead of both would be one of
    my two, by a mile.)

    However I can admire Algol68 for its design, even if it needed tweaking
    IMO, but I would never be able to do that for C, since a lot of it looks
    like it was thrown together with no thought at all, or under the
    influence of some substance.


    But it is undoubtedly a highly successful language.

    On the back of Unix inflicting it on everybody (can anyone prise Unix
    and C apart?), and the lack of viable alternatives.

    Successful languages, then, needed to be able to bend the rules a
    little, do underhand stuff, which C could do in spades (so could mine!).
    You can't really do that with a Wirth language or ones like Algol68.
    Ones like PL/M disappeared.

    Now people look askance at such practices, but C already had its foot in
    the door.
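
    For illustration, a minimal sketch of that kind of rule-bending -
    reading the raw bytes of a float through a character array, which C
    explicitly permits and stricter languages do not:

       #include <stdio.h>
       #include <string.h>

       /* Illustrative only: inspect the raw bytes of a float via
          memcpy into a character array, which C's aliasing rules
          explicitly allow. */
       int main(void) {
           float f = 1.0f;
           unsigned char bytes[sizeof f];

           memcpy(bytes, &f, sizeof f);       /* reinterpret f as bytes */
           for (size_t i = 0; i < sizeof f; i++)
               printf("%02x ", bytes[i]);     /* e.g. 00 00 80 3f */
           printf("\n");
           return 0;
       }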


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Mon Nov 21 16:01:40 2022
    From Newsgroup: comp.lang.misc

    On 18/11/2022 21:14, James Harris wrote:
    On 18/11/2022 11:00, David Brown wrote:
    On 15/11/2022 18:32, James Harris wrote:

    ...

    The side effects of even something awkward such as

       *(++p) = *(q++);

    are little different from those of the longer version

       p = p + 1;
       *p = *q;
       q = q + 1;

    The former is clearer, however. That makes it easier to see the intent.

    Really?  I have no idea what the programmer's intent was.  "*p++ =
    *q++;" is common enough that the intent is clear there, but from your
    first code I can't see /why/ the programmer wanted to /pre/increment
    "p".  Maybe he/she made a mistake?  Maybe he/she doesn't really
    understand the difference between pre-increment and post-increment?
    It's a common beginner's misunderstanding.
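
    For reference, a minimal sketch of the distinction:

       #include <stdio.h>

       /* Minimal sketch of pre- vs post-increment on a pointer. */
       int main(void) {
           int a[] = {10, 20, 30};
           int *p;

           p = a;
           printf("%d\n", *p++);   /* prints 10: uses *p, then advances p */

           p = a;
           printf("%d\n", *++p);   /* prints 20: advances p, then uses *p */
           return 0;
       }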

    I don't think I know of any language which allows a programmer to say
    /why/ something is the case; that's what comments are for. Programs
    normally talk about /what/ to do, not why. The very fact that the
    assignment does something non-idiomatic is a sign that a comment could
    be useful. It's akin to

      for (i = 0; i <= n ....

    If the test really should be <= then a comment may be useful to explain
    why.

    Ideally there should be no need for a comment, because the code makes it
    clear - for example via the names of the identifiers, or from the rest
    of the context. That rarely happens in out-of-context snippets.
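
    For example, a hypothetical function where the non-idiomatic bound is
    deliberate and the comment carries the /why/:

       /* Hypothetical sketch: last is the index of the final valid
          element, not the element count, so the loop must include
          a[last] - hence <= rather than the idiomatic <. */
       int sum_through(const int *a, int last) {
           int sum = 0;
           for (int i = 0; i <= last; i++)
               sum += a[i];
           return sum;
       }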



    On the other hand, it is quite clear from the separate lines exactly
    what order the programmer intended.

    What would you say are the differences in side-effects of these two
    code snippets?  (I'm assuming we are talking about C here.)

    That depends on whether the operations are ordered or not. In C they'd
    be different, potentially, from what they would be in my language. What would you say they are?


    You said the side-effects are "a little different", so I wanted to hear
    what you meant.

    In C, there is no pre-determined sequencing between the two increments -
    they can occur in any order, or can be interleaved. As far as the C
    abstract machine is concerned (and that's what determines what
    side-effects mean), unsequenced events are not ordered and it doesn't
    make sense to say which happened first. You can consider them as
    happening at the same time - and if that affects the outcome of the
    program, then it is at least unspecified behaviour if not undefined
    behaviour. (It would be undefined behaviour if "p" and "q" referred to
    the same object, for example.)

    So I don't think it really makes sense to say that the order is
    different. If the original "*(++p) = *(q++);" makes sense at all, and
    is defined behaviour, then its behaviour is not distinguishable,
    from within the C language, from the expanded version.
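
    A minimal sketch of both halves of that (illustrative):

       #include <stdio.h>

       /* Sketch: the unsequenced increments touch different objects,
          so the compact form is defined and behaves exactly like the
          three-statement expansion. */
       int main(void) {
           int src[] = {1, 2, 3};
           int dst[] = {0, 0, 0};
           int *p = dst;
           int *q = src;

           *(++p) = *(q++);       /* defined: p and q are distinct */
           printf("%d %d %d\n", dst[0], dst[1], dst[2]);   /* 0 1 0 */

           /* By contrast, *(++p) = *(p++); modifies p twice with no
              sequencing between the modifications - undefined. */
           return 0;
       }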

    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Wed Nov 23 16:59:53 2022
    From Newsgroup: comp.lang.misc

    On 21/11/2022 15:01, David Brown wrote:
    On 18/11/2022 21:14, James Harris wrote:
    On 18/11/2022 11:00, David Brown wrote:
    On 15/11/2022 18:32, James Harris wrote:

    ...

    The side effects of even something awkward such as

       *(++p) = *(q++);

    are little different from those of the longer version

       p = p + 1;
       *p = *q;
       q = q + 1;

    The former is clearer, however. That makes it easier to see the
    intent.

    Really?  I have no idea what the programmer's intent was.  "*p++ =
    *q++;" is common enough that the intent is clear there, but from your
    first code I can't see /why/ the programmer wanted to /pre/increment
    "p".  Maybe he/she made a mistake?  Maybe he/she doesn't really
    understand the difference between pre-increment and post-increment?
    It's a common beginner's misunderstanding.

    I don't think I know of any language which allows a programmer to say
    /why/ something is the case; that's what comments are for. Programs
    normally talk about /what/ to do, not why. The very fact that the
    assignment does something non-idiomatic is a sign that a comment could
    be useful. It's akin to

       for (i = 0; i <= n ....

    If the test really should be <= then a comment may be useful to
    explain why.

    Ideally there should be no need for a comment, because the code makes it clear - for example via the names of the identifiers, or from the rest
    of the context.  That rarely happens in out-of-context snippets.

    Either way, non-idiomatic code is a flag. And in that it's useful -
    especially if it's easy to read.




    On the other hand, it is quite clear from the separate lines exactly
    what order the programmer intended.

    What would you say are the differences in side-effects of these two
    code snippets?  (I'm assuming we are talking about C here.)

    That depends on whether the operations are ordered or not. In C they'd
    be different, potentially, from what they would be in my language.
    What would you say they are?


    You said the side-effects are "a little different", so I wanted to hear
    what you meant.

    I said they were "little different", not "a little different". In other
    words, focus on the main point rather than minutiae such as what could
    happen if the pointers were identical or overlapped, much as you go on
    to mention:


    In C, there is no pre-determined sequencing between the two increments - they can occur in any order, or can be interleaved.  As far as the C abstract machine is concerned (and that's what determines what
    side-effects mean), unsequenced events are not ordered and it doesn't
    make sense to say which happened first.  You can consider them as
    happening at the same time - and if that affects the outcome of the
    program, then it is at least unspecified behaviour if not undefined behaviour.  (It would be undefined behaviour if "p" and "q" referred to
    the same object, for example.)

    So I don't think it really makes sense to say that the order is
    different.  If the original "*(++p) = *(q++);" makes sense at all, and
    is defined behaviour, then its behaviour is not distinguishable,
    from within the C language, from the expanded version.

    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Wed Nov 23 19:06:06 2022
    From Newsgroup: comp.lang.misc

    On 23/11/2022 17:59, James Harris wrote:
    On 21/11/2022 15:01, David Brown wrote:
    On 18/11/2022 21:14, James Harris wrote:

    Either way, non-idiomatic code is a flag. And in that it's useful - especially if it's easy to read.


    Yes.




    On the other hand, it is quite clear from the separate lines exactly
    what order the programmer intended.

    What would you say are the differences in side-effects of these two
    code snippets?  (I'm assuming we are talking about C here.)

    That depends on whether the operations are ordered or not. In C
    they'd be different, potentially, from what they would be in my
    language. What would you say they are?


    You said the side-effects are "a little different", so I wanted to
    hear what you meant.

    I said they were "little different", not "a little different".

    Ah, my mistake. Still, it implies you think there is /some/ difference.

    In other
    words, focus on the main point rather than minutiae such as what could happen if the pointers were identical or overlapped, much as you go on
    to mention:

    OK, so you don't think there are any differences in side-effects other
    than the possible issue I mentioned of undefined behaviour in very
    particular circumstances. That's fine - I just wanted to know if you
    were thinking of something else.

    (Note that the freedom for compilers to re-arrange code from the
    "compact" form to the "expanded" form is one of the reasons why such unsequenced accesses to the same object are undefined behaviour in C.)

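    A sketch of that freedom with the classic a[i] = i++; - its two legal
    expansions disagree, so C leaves the order undefined rather than
    picking one:

       #include <stdio.h>

       /* Illustrative sketch: the unsequenced  a[i] = i++;  has two
          legal expansions, and they disagree - one reason C makes
          such accesses undefined rather than picking an order. */
       int main(void) {
           int a[4] = {0, 0, 0, 0};
           int i;

           /* Expansion 1: store first, then increment. */
           i = 1;
           a[i] = i;                  /* a[1] = 1 */
           i = i + 1;
           printf("a[1]=%d a[2]=%d\n", a[1], a[2]);   /* a[1]=1 a[2]=0 */

           /* Expansion 2: increment first, then store. */
           a[1] = 0;
           i = 1;
           int old = i;               /* value of i++ is the old value */
           i = i + 1;
           a[i] = old;                /* a[2] = 1 */
           printf("a[1]=%d a[2]=%d\n", a[1], a[2]);   /* a[1]=0 a[2]=1 */
           return 0;
       }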


    In C, there is no pre-determined sequencing between the two increments
    - they can occur in any order, or can be interleaved.  As far as the C
    abstract machine is concerned (and that's what determines what
    side-effects mean), unsequenced events are not ordered and it doesn't
    make sense to say which happened first.  You can consider them as
    happening at the same time - and if that affects the outcome of the
    program, then it is at least unspecified behaviour if not undefined
    behaviour.  (It would be undefined behaviour if "p" and "q" referred
    to the same object, for example.)

    So I don't think it really makes sense to say that the order is
    different.  If the original "*(++p) = *(q++);" makes sense at all, and
    is defined behaviour, then its behaviour is not distinguishable,
    from within the C language, from the expanded version.




    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Wed Nov 23 18:31:27 2022
    From Newsgroup: comp.lang.misc

    On 20/11/2022 12:28, David Brown wrote:
    On 18/11/2022 20:01, James Harris wrote:
    On 15/11/2022 21:40, David Brown wrote:
    On 15/11/2022 20:09, James Harris wrote:
    On 15/11/2022 17:31, David Brown wrote:

    ...

    You assume /so/ many limitations on what you can do as a language
    designer.  You can do /anything/.  If you want to allow something, allow it.  If you want to prohibit it, prohibit it.

    Sorry, but it doesn't work like that.

    Yes, it does.

    No, it does not. Your view of language design is far too simplistic.
    Note, also, that in a few paragraphs you say that you are not the
    language designer whereas I am, but then you go on to try to tell me
    how it works and how it doesn't and, previously, that anything can be
    done. You'd gain by /trying/ it yourself. Then you might see that it's
    not as straightforward as you suggest.


    That is a fair point.  But I challenge you to show me where there are
    rules written for language designs.  Explain to me exactly why you are
    not allowed to, say, provide an operator "-" without a corresponding operator "+".  Tell me who is banning you from deciding that source code lines must be limited to 40 characters, or that every assignment
    statement shall be preceded by the keyword "please".  I'm not saying any
    of these things are a good idea (though something similar has been done
    in other cases), I am saying it is /your/ choice to do that or not.


    You can say "I can't have feature A and feature B and maintain the consistency I want."  You /cannot/ say "I can't have feature A".  It is /your/ decision not to have feature A.  Choosing to have it may mean
    changing or removing feature B, or losing some consistency that you had hoped to maintain.  But it is your language, your choices, your responsibility - saying "I can't do that" is abdicating that
    responsibility.

    Well, your comments have let me know what you mean, at least, but when I
    say "it doesn't work like that" I mean that language design is not as
    simple as you suggest. In absolute terms I agree with you: you are right
    that a designer can make any decisions he wants. But in reality certain
    things are /infeasible/. You might as well say you could get from your
    house to the nearest supermarket by flying to another country first. In absolute terms you probably could do that and eventually get where you
    want to go but in reality it's so absurd a suggestion that it's infeasible.



    A language cannot be built on ad-hoc choices such as you have
    suggested.


    It most certainly can.  Every language is a collection of design
    decisions, and most of them are at least somewhat ad-hoc.

    However, my suggestions were certainly /not/ ad-hoc

    Hmm, you suggested banning side effects, except in function calls, and
    banning successive prefix "+" operators. Those suggestions seem rather
    ad hoc to me.

    - it was for a
    particular way of thinking about operators and expressions, with justification and an explanation of the benefits.  Whether you choose to follow those suggestions or not is a matter of your personal choices
    for how you want your language to work - and /that/ choice is therefore somewhat ad-hoc.  They only appear ad-hoc if you don't understand what I wrote justifying them or giving their advantages.

    True, if there is a legitimate and useful reason for a rule then that
    rule will seem less ad hoc than if the reasons for it are unknown.


    Of course you want a language to follow a certain theme or style (or "ethos", as you called it).  But that does not mean you can't make
    ad-hoc decisions if you want - it is inevitable that you will do so. And
    it certainly does not mean you can't make the choices you want for your language.

    Too many ad-hoc choices mean you lose the logic and consistency in the language.  Too few, and your language has nothing to it.  Excessive consistency is great for some theoretical work - Turing machines,
    lambda calculus, infinite register machines, and the like.  It is
    useless in a real language.

    Look at C as an example.  Not everyone likes the language, and the only people who find nothing to dislike in it are people who haven't used it enough.  But it is undoubtedly a highly successful language.  All binary operators require the evaluation of both operands before evaluating the operator.  (And before you start thinking that is
    not, and does not apply to all languages.)  Except && and ||, where the second operand is not evaluated if it is not needed - that's an ad-hoc decision, different from the general rule.  All access to objects must
    be through lvalues of compatible types - except for the ad-hoc rule that character type pointers can also be used.

    To be successful at anything - programming language design or anything else
    - you always need to aim for a balance.  Consistency is vital - too much consistency is bad.  Generalisation is good - over-generalisation is
    bad.  Too much ad-hoc is bad, so is too little.

    Fair enough. Short-circuit evaluation is a good example of what you have
    been saying, although it effects a semantic change. By contrast, banning prefix "+" operators because you don't like them does not effect any
    useful change in the semantics of a program.
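
    A minimal sketch of that semantic change (illustrative):

       #include <stdio.h>
       #include <string.h>

       /* Sketch: the right operand of && is never evaluated when the
          left operand is false - a semantic guarantee, not merely an
          optimisation. */
       int main(void) {
           const char *s = NULL;

           if (s != NULL && strlen(s) > 0)   /* strlen(NULL) never runs */
               printf("non-empty\n");
           else
               printf("null or empty\n");
           return 0;
       }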



    I haven't suggested ad-hoc choices.  I have tried to make reasoned
    suggestions.  Being different from languages you have used before, or
    how you envision your new language, does not make them ad-hoc.

    Saying you'd like selected combinations of operators to be banned
    looks like an ad-hoc approach to me.


    Then you misunderstand what I wrote.  I don't know if that was my fault
    in poor explanations, or your fault in misreading or misunderstanding -
    no doubt, it was a combination.

    Maybe. I thought you wanted ++E++ banned because it had successive ++ operators but perhaps I misunderstood. Was what you actually wanted
    banned /any/ use of ++ operators? If the language /is/ to have ++
    operators after all, though, would you still want ++E++ banned?

    ...

    Imagine if you were to stop treating "letters", "digits" and
    "punctuation" separately, and say "They are all just characters.
    Let's treat them the same".  Now people can name a function "123", or
    "2+2". It's conceivable that you'd work out a grammar and parsing
    rules that allow that (Forth, for example, has no problem with
    functions that are named by digits.  You can redefine "2" to mean "1"
    if you like).  Do you think that would make the language easier to
    learn and less awkward to use?

    Certainly not. Why do you ask?

    I ask, because it is an example of over-generalisation that makes a
    language harder to learn and potentially a lot more confusing to
    understand.

    I don't see any lack of generalisation in setting out rules for
    identifier names.

    ...

    [Snipped a bunch of points on which we agree.]


    Further, remember that the decisions the language designer makes have
    to be communicated to the programmer. If a designer says "these side
    effects are allowed but these other ones are not" then that just gives
    the programmer more to learn and remember.


    Sure.  But programmers are not stupid (or at least, you are not catering for stupid programmers).  They can learn more than one rule.

    You are rather changing your tune, there. Earlier you were concerned
    about programmers failing to understand the difference between
    pre-increment and post-increment!


    As I say, you could try designing a language. You are a smart guy. You
    could work on a design in your head while walking to the shops, while
    waiting for a train, etc. As one of my books on language design says,
    "design repeatedly: it will make you a better designer".


    Oh, I have plenty of ideas for a language - I have no end to the number
    of languages, OS's, processors, and whatever that I have "designed" in
    my head :-)  The devil's in the details, however, and I haven't taken
    the time for that!

    Yes, the devil is indeed in the details. It's one thing to have some
    good ideas. It's quite another to bring them together into a single product.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From James Harris@james.harris.1@gmail.com to comp.lang.misc on Wed Nov 23 18:50:50 2022
    From Newsgroup: comp.lang.misc

    On 23/11/2022 18:06, David Brown wrote:
    On 23/11/2022 17:59, James Harris wrote:
    On 21/11/2022 15:01, David Brown wrote:
    On 18/11/2022 21:14, James Harris wrote:


    Previously ===>


    The side effects of even something awkward such as

       *(++p) = *(q++);

    are little different from those of the longer version

       p = p + 1;
       *p = *q;
       q = q + 1;

    The former is clearer, however. That makes it easier to see the intent.


    What would you say are the differences in side-effects of these two code snippets?  (I'm assuming we are talking about C here.)

    That depends on whether the operations are ordered or not. In C
    they'd be different, potentially, from what they would be in my
    language. What would you say they are?


    You said the side-effects are "a little different", so I wanted to
    hear what you meant.

    I said they were "little different", not "a little different".

    Ah, my mistake.  Still, it implies you think there is /some/ difference.

    I thought there was the /potential/ for a difference (and I suspect that
    there is in C) but that it would distract from the point being made.

    The point remains: I was saying that the former is significantly clearer
    (as long as its effects are defined).


    In other words, focus on the main point rather than minutiae such as
    what could happen if the pointers were identical or overlapped, much
    as you go on to mention:

    OK, so you don't think there are any differences in side-effects other
    than the possible issue I mentioned of undefined behaviour in very particular circumstances.  That's fine - I just wanted to know if you
    were thinking of something else.

    (Note that the freedom for compilers to re-arrange code from the
    "compact" form to the "expanded" form is one of the reasons why such unsequenced accesses to the same object are undefined behaviour in C.)

    Understood.
    --
    James Harris


    --- Synchronet 3.19c-Linux NewsLink 1.113
  • From David Brown@david.brown@hesbynett.no to comp.lang.misc on Wed Nov 23 21:33:53 2022
    From Newsgroup: comp.lang.misc

    On 23/11/2022 19:31, James Harris wrote:
    On 20/11/2022 12:28, David Brown wrote:
    On 18/11/2022 20:01, James Harris wrote:
    On 15/11/2022 21:40, David Brown wrote:


    Well, your comments have let me know what you mean, at least, but when I
    say "it doesn't work like that" I mean that language design is not as
    simple as you suggest. In absolute terms I agree with you: you are right that a designer can make any decisions he wants. But in reality certain things are /infeasible/. You might as well say you could get from your
    house to the nearest supermarket by flying to another country first. In absolute terms you probably could do that and eventually get where you
    want to go but in reality it's so absurd a suggestion that it's infeasible.


    Sure. Not all suggestions are good in practice, and not all things that
    are possible are easy or a good trade-off. I am merely saying that the decisions are yours to make, even if you feel there is only one sane way
    to pick. You are still free to make the hard choice even if that means
    big knock-on effects.



    A language cannot be built on ad-hoc choices such as you have
    suggested.


    It most certainly can.  Every language is a collection of design
    decisions, and most of them are at least somewhat ad-hoc.

    However, my suggestions were certainly /not/ ad-hoc

    Hmm, you suggested banning side effects, except in function calls, and banning successive prefix "+" operators. Those suggestions seem rather
    ad hoc to me.


    They are not necessarily all /good/ suggestions! My point was merely
    that if you don't want people to be able to write +(+(+(+x))) in your language, you have the power to ban such expressions if you want.

    - it was for a particular way of thinking about operators and
    expressions, with justification and an explanation of the benefits.
    Whether you choose to follow those suggestions or not is a matter of
    your personal choices for how you want your language to work - and
    /that/ choice is therefore somewhat ad-hoc.  They only appear ad-hoc
    if you don't understand what I wrote justifying them or giving their
    advantages.

    True, if there is a legitimate and useful reason for a rule then that
    rule will seem less ad hoc than if the reasons for it are unknown.


    Indeed. I am not recommending a chaotic language!


    Of course you want a language to follow a certain theme or style (or
    "ethos", as you called it).  But that does not mean you can't make
    ad-hoc decisions if you want - it is inevitable that you will do so.
    And it certainly does not mean you can't make the choices you want for
    your language.

    Too many ad-hoc choices mean you lose the logic and consistency in
    the language.  Too few, and your language has nothing to it.
    Excessive consistency is great for some theoretical work  - Turing
    machines, lambda calculus, infinite register machines, and the like.
    It is useless in a real language.

    Look at C as an example.  Not everyone likes the language, and the
    only people who find nothing to dislike in it are people who haven't
    used it enough.  But it is undoubtedly a highly successful language.
    All binary operators require the evaluation of both operands before
    evaluating the operator.  (And before you start thinking that is
    unavoidable, it is not, and does not apply to all languages.)  Except
    && and ||, where the second operand is not evaluated if it is not
    needed - that's an ad-hoc decision, different from the general rule.
    All access to objects must be through lvalues of compatible types -
    except for the ad-hoc rule that character type pointers can also be used.

    To be successful at anything - programming language design or anything
    else - you always need to aim for a balance.  Consistency is vital -
    too much consistency is bad.  Generalisation is good -
    over-generalisation is bad.  Too much ad-hoc is bad, so is too little.

    Fair enough. Short-circuit evaluation is a good example of what you have been saying, although it effects a semantic change. By contrast, banning prefix "+" operators because you don't like them does not effect any
    useful change in the semantics of a program.



    I haven't suggested ad-hoc choices.  I have tried to make reasoned
    suggestions.  Being different from languages you have used before,
    or how you envision your new language, does not make them ad-hoc.

    Saying you'd like selected combinations of operators to be banned
    looks like an ad-hoc approach to me.


    Then you misunderstand what I wrote.  I don't know if that was my
    fault in poor explanations, or your fault in misreading or
    misunderstanding - no doubt, it was a combination.

    Maybe. I thought you wanted ++E++ banned because it had successive ++ operators but perhaps I misunderstood. Was what you actually wanted
    banned /any/ use of ++ operators? If the language /is/ to have ++
    operators after all, though, would you still want ++E++ banned?


    I was suggesting banning any use of pre- and post- increment and
    decrement operators. They are unnecessary in a language, and (along
    with assignment operators that return values, rather than being strictly statements) they are a way of having side-effects in the middle of
    expressions that otherwise look like calculations or reading data.

    Unless you are aiming for a pure functional language, "side-effects" are necessary - they are how you get things done in the code. But IMHO they
    should be as clear as possible, not hidden away as extras. Changes to
    any object should be the main purpose of a statement or function call,
    rather than a little extra feature.
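
    A sketch of the contrast, using the familiar copy loop (illustrative
    function names):

       /* Illustrative: the idiomatic C copy, with the side effects
          buried inside a single expression... */
       void copy_n(char *dst, const char *src, int n) {
           while (n-- > 0)
               *dst++ = *src++;
       }

       /* ...and the same loop with every change to an object made a
          statement of its own, the style argued for here. */
       void copy_n_plain(char *dst, const char *src, int n) {
           while (n > 0) {
               *dst = *src;
               dst = dst + 1;
               src = src + 1;
               n = n - 1;
           }
       }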

    Remember, the fewer places you can have side-effects - changes to an
    object, or IO functionality - the more freedom the compiler has to
    manipulate and optimise the code, the clearer the code is to the reader,
    the safer it is from accidentally changing things, the easier it is to
    be sure the code is correct, and the more the code can be thread-safe, re-entrant or run in parallel. Make every object immutable unless the programmer goes out of their way to insist that it is mutable - you can
    do so much more with it if its value cannot change!
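
    Plain C already shows a little of this with const - a minimal sketch:

       /* Sketch: an immutable object lets the compiler assume the
          value never changes between reads. */
       static const int scale = 4;    /* immutable: reads can be folded */

       int scaled(int x) {
           /* May compile to x << 2; with a mutable global the compiler
              would generally have to reload scale on every call. */
           return x * scale;
       }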



    Imagine if you were to stop treating "letters", "digits" and
    "punctuation" separately, and say "They are all just characters.
    Let's treat them the same".  Now people can name a function "123",
    or "2+2". It's conceivable that you'd work out a grammar and parsing
    rules that allow that (Forth, for example, has no problem with
    functions that are named by digits.  You can redefine "2" to mean
    "1" if you like).  Do you think that would make the language easier
    to learn and less awkward to use?

    Certainly not. Why do you ask?

    I ask, because it is an example of over-generalisation that makes a
    language harder to learn and potentially a lot more confusing to
    understand.

    I don't see any lack of generalisation in setting out rules for
    identifier names.


    You can give nice general rules for an identifier - you can say
    identifiers must start with a letter, and consist of letters, digits and underscore characters. (That's a common choice for many languages, but
    not the only choice.) If you /over-generalise/, you might allow
    identifiers consisting solely of digits. And that can lead to allowing confusing code - such as this example from Forth:

    $ gforth
    Gforth 0.7.3, Copyright (C) 1995-2008 Free Software Foundation, Inc.
    Gforth comes with ABSOLUTELY NO WARRANTY; for details type `license'
    Type `bye' to exit
    ok
    2 2 + . 4 ok
    : 2 3 ; ok
    2 2 + . 6 ok

    Forth is so general and free that you can redefine the meaning of "2"
    (or pretty much anything else). This is not a good idea.


    [Snipped a bunch of points on which we agree.]


    Further, remember that the decisions the language designer makes have
    to be communicated to the programmer. If a designer says "these side
    effects are allowed but these other ones are not" then that just
    gives the programmer more to learn and remember.


    Sure.  But programmers are not stupid (or at least, you are not
    catering for stupid programmers).  They can learn more than one rule.

    You are rather changing your tune, there. Earlier you were concerned
    about programmers failing to understand the difference between
    pre-increment and post-increment!


    Sometimes smart programmers get mixed up too - especially when trying to
    read code that is symbol-heavy and uses code that appears to be a common idiom, but is subtly different.


    As I say, you could try designing a language. You are a smart guy.
    You could work on a design in your head while walking to the shops,
    while waiting for a train, etc. As one of my books on language design
    says, "design repeatedly: it will make you a better designer".


    Oh, I have plenty of ideas for a language - I have no end to the
    number of languages, OS's, processors, and whatever that I have
    "designed" in my head :-)  The devil's in the details, however, and I
    haven't taken the time for that!

    Yes, the devil is indeed in the details. It's one thing to have some
    good ideas. It's quite another to bring them together into a single
    product.


    And that's assuming you can figure out which ideas are good!



    --- Synchronet 3.19c-Linux NewsLink 1.113