On 12/12/2022 14:36, Bart wrote:
On 12/12/2022 11:56, David Brown wrote:
I have two broad objections:
* All the FP features I don't understand and don't find intuitive, mainly to do with functions (higher order, closures, currying etc). Pattern-matching, map, reduce etc I can deal with; the concepts are easy, and they can be trivially expressed in non-FP terms (a loop sketch follows below).
* Basing an entire language around FP features.
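For illustration, here is a rough sketch in plain C (my own, not taken from any particular language) of what I mean by expressing map and reduce in non-FP terms: mapping is just the body of a loop, reducing is just an accumulator.

#include <stdio.h>

int main(void) {
    int xs[] = {1, 2, 3, 4, 5};
    int n = sizeof xs / sizeof xs[0];
    int sum = 0;                       /* the 'reduce' accumulator */
    for (int i = 0; i < n; i++) {
        int y = xs[i] * xs[i];         /* the 'map' step: square each element */
        sum += y;                      /* the 'reduce' step: fold into the sum */
    }
    printf("%d\n", sum);               /* prints 55 */
    return 0;
}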
I've been looking at examples on rosettacode.org. Most languages there
are conventional (my style) other than all the weird ones plus FP.
But one task caught my eye:
https://rosettacode.org/wiki/Determine_if_a_string_has_all_unique_characters#Haskell
as all three Haskell versions seem to make a meal of it. I had been looking for a short cryptic Haskell example; this was a long cryptic one!
One mystery is how it gets the output (of first version) properly lined
up, as I can't see anything relevant in the code.
Half my version below is all the fiddly formatting; this is where I'd consider this a weak spot in my language and think about what could
improve it.
Other languages (eg. OCaml) keep the output minimal.
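For comparison, a minimal sketch in C of just the core check, leaving out all the formatting (my own illustration; the test strings are roughly those the task uses):

#include <stdio.h>
#include <string.h>

/* Return the index of the first repeated character, or -1 if all unique. */
static int first_repeat(const char *s) {
    size_t n = strlen(s);
    for (size_t i = 0; i < n; i++)
        for (size_t j = i + 1; j < n; j++)
            if (s[i] == s[j])
                return (int)j;
    return -1;
}

int main(void) {
    const char *tests[] = {"", ".", "abcABC", "XYZ ZYX",
                           "1234567890ABCDEFGHIJKLMN0PQRSTUVWXYZ"};
    for (int i = 0; i < 5; i++)
        printf("\"%s\": %s\n", tests[i],
               first_repeat(tests[i]) < 0 ? "all unique" : "has repeats");
    return 0;
}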
On 07/12/2022 19:58, Bart wrote:
My point was that your language (the low-level compiled one) and C are similar styles - they are at a similar level, and are both procedural imperative structured languages.
(None of this suggests you "copied" C. You simply have a roughly
similar approach to solving the same kinds of tasks - you probably had experience with much the same programming languages as the C designers,
and similar assembly programming experience before making your languages
at the beginning.)
Note: C makes little use of true array pointers; it likes to use T* types rather than T(*)[], which means my example would be written as A[i] anyway.
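A quick C illustration of the two pointer styles (my own example):

#include <stdio.h>

int main(void) {
    int a[5] = {10, 20, 30, 40, 50};

    int *p = a;          /* the usual C idiom: pointer to the first element */
    int (*q)[5] = &a;    /* a true pointer-to-array, rarely seen in practice */

    printf("%d %d\n", p[2], (*q)[2]);   /* both print 30 */
    return 0;
}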
Programming in Eiffel, Haskell, APL, Forth or Occam is /completely/ different - you approach your coding in an entirely different way, and
it makes no sense to think about translating from one of these to C (or
to each other).
On 11/12/2022 16:50, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
(I don't think you've made it clear whether the other language(s) you've referred to are some mainstream ones, or one(s) that you have devised. I
will assume the latter and tone down my remarks. A little.)
My scheme also allows circular and mutual imports with no restrictions.
You probably mean "with no artificial restrictions". There is a fundamental restriction that everything should resolve in a finite number of steps.
Huh? That doesn't come up. Anything which is recursively defined (eg. const a=b, b=a) is detected, but that is due to out-of-order definitions, not circular modules.
There needs to be some structure, some organisation.
Exactly, private import is a tool for better organisation.
Sorry, all I can see is extra work; it was already a hassle having to write `import B` at the top of every module that used B, when it was visible to all functions, because of having to manage that list of imports /per module/. Now I have to do that micro-managing /per function/?
(Presumably this language has block scopes: can this import also be
private to a nested block within each function?)
With module-wide imports, it is easy to draw a diagram of interdependent modules; with function-wide imports, that is not so easy, which is why I think there will be less structure.
I can compile each program separately. Granularity moves from module to
program. That's better. Otherwise why not compile individual functions?
In the past there was support for compiling individual functions, and I would not exclude the possibility that it will come back. But ATM I prefer to keep things simple, so this functionality was removed.
With whole program compilation, especially used in the context of a resident tool that incorporates a compiler, with resident symbol tables, resident source and running code in-memory (actually, just like my very first compilers), lots of possibilities open up, of which I've hardly scratched the surface.
Including compiling/recompiling a function at a time.
Although part-recompiling during a pause in a running program, then resuming, would still be very tricky. That would need debugging
features, and I would consider, in that case, running via an interpreter.
My point: a system that does all this would need all the relevant bits
in memory, and may involve all sorts of complex components.
But a whole-program compiler that runs apps in-memory already does half the work.
I used to use independent compilation myself. I moved on to
whole-program compilation because it was better. But all the issues
involved with interfacing at the boundaries between modules don't
completely disappear, they move to the boundaries between programs
instead; that is, between libraries.
I consider programs to be different from libraries. A program may use several libraries; in the degenerate case a library may be just a single module. Compiling a whole program has clear troubles with large programs.
My definition of 'program', on Windows, is a single EXE or DLL file.
I expect larger applications to consist of a collection of EXE and DLL files. My own will have one EXE and zero or more DLLs, but I would also
make extensive use of scripting modules, that have different rules.
The latter being an automatically created amalgamation (produced with
`mm -ma app`). Build using `mm app`.
I could provide a single-file shell archive containing the build script and sources, but an important part of providing sources is that people can read them, understand them and modify them.
I don't agree. On Linux you do it with sources because it doesn't have a reliable binary format like Windows that will work on any machine. If
there are binaries, they might be limited to a particular Linux distribution.
(Binaries on Linux have always been a mystery to me, starting with the
fact that they don't have a convenient extension like .exe to even tell
what it is.)
GNU folks have a nice definition of source: "preferred form for making modifications". I would guess that 'app.ma' is _not_ your preferred form for making modifications, so it is not really true source.
No, but deriving the true sources from app.ma is trivial, since it is basically a concatenation of the relevant files.
And to build from "source" I need
source first. And I provide _true_ sources to my users.
If you were on Linux or didn't want to use my compiler, then it's even
simpler; I would provide exactly one file:
app.c Generated C source code (via `mc -c app`)
Here, you need a C compiler only. On Windows, you can build it using
`gcc app.c -oapp.exe`. Linux needs more options, listed at the top of
the file.
Sorry, a generated file is _not_ a source. If I were to modify the C file
This is not for modifying. 99% of the time I want to build an open
source C project, it is in order to provide a running binary, not spend hours trying to get it to build. These are the obstacles I have faced:
* Struggling with formats like .gz2 that require multiple steps on Windows
* Ending up with myriad files scattered across myriad nested directories
* Needing to run './configure' first (this will not work on Windows...)
* Finding a 'make' /program/ (my gcc has a program called
mingw32-make.exe; is that the one?)
* Getting 'make' to work. Usually it fails partway and makefiles can be
so complex that I have no way of figuring a way out
* Or, trying to compile manually, struggling with files which are all
over the place and imparting that info to a compiler.
I don't have any interest in this; I just want the binary!
So, with my own programs, if I can't provide a binary (eg. they are not trusted), then one step back from a single binary file, is a single amalgamated source file.
I first did this in 2014, as a reaction to the difficulties I kept
facing: I wanted any of my applications to be as easy to build as hello.c.
If someone wants the original, discrete sources, then sure they can have a ZIP file, which generally will have files that unpack into a single directory. But it's on request.
The difference is that what I provide is genuinely simple: one bare
compiler, one actual source file.
Sorry, for me "one file" is not a problem, there is 'tar' (the de facto standard for distributing source code)
Yeah, I explained how well that works above. So the last Rust implementation was a single binary download (great!), but it installed itself as 56,000 discrete files across I don't know how many thousands of directories (not so great). And it didn't work (it requires additional tools).
Being able to ZIP or TAR a sprawling set of files into a giant binary
makes it marginally easier to transmit or download, but it doesn't
really address complexity.
And there is a possibly quite large dependency, namely Windows.
Yeah, my binaries run on Windows. Aside from requiring x64 and using
Win64 ABI, they use one external library MSVCRT.DLL,
which itself uses
Windows.
For programs that run on Windows and Linux, those depend on the libraries used. For 'M' programs, one module has to be chosen from the Windows and Linux versions. To run on Linux, I have to do this:
mc -c -linux app.m # On Windows, makes app.c, using
# the Linux-specific module
gcc app.c -oapp -lm etc # On Linux
./app
but M makes little use of WinAPI. With my interpreter, the process is as follows:
c:\qx>mc -c -linux qq # On Windows
M6 Compiling qq.m---------- to qq.c
Copy qq.c to c:\c then under WSL:
root@DESKTOP-11:/mnt/c/c# gcc qq.c -oqq -fno-builtin -lm -ldl
Now I can run scripts under Linux:
root@DESKTOP-11:/mnt/c/c# ./qq -nosys hello
Hello, World!
However, notice the '-nosys' option; this is because qq automatically incorporates a library suite that includes a GUI library based on Win32. Without that, it would complain of not finding user32.dll etc.
I would need to dig up an old set of libraries or create new
Linux-specific ones. A bit of extra work. But see how the entire app is contained within that qq.c file.
It is [not?] clear how much of your code _usefully_ runs in a non-Windows environment.
OK, let's try my C compiler. Here I've done `mc -c -linux cc`, copied cc.c, and compiled under WSL as bcc:
root@DESKTOP-11:/mnt/c/c# ./bcc -s hello.c
Compiling hello.c to hello.asm
root@DESKTOP-11:/mnt/c/c# ./bcc -e hello.c
Preprocessing hello.c to hello.i
root@DESKTOP-11:/mnt/c/c# ./bcc -c hello.c
Compiling hello.c to hello.obj
root@DESKTOP-11:/mnt/c/c# ./bcc -exe hello.c
Compiling hello.c to hello.exe
msvcrt
msvcrt.dll
SS code gen error: Can't load search lib
So, most things actually work; only creating EXE doesn't work, because it needs access to msvcrt.dll. But even if it did, it would work as a cross-compiler, as its code generator is for the Win64 ABI.
But I think this shows useful stuff can be done. A more interesting test (which used to work, but it's too much effort right now), is to get my M compiler working on Linux (the 'mc' version that targets C), and use
that to build qq, bcc etc from original sources on Linux.
In all, my Windows stuff generally works on Linux. Linux stuff generally doesn't work on Windows, in terms of building from source.
C is just so primitive when it comes to this stuff. I'm sure it largely
works by luck.
C is at low level, that is clear.
The way it does modules is crude. So was my scheme in the 1980s, but it
was still one step up from C. My 2022 scheme is miles above C now.
The
underlying language can still be low level, but you can at least fix
some aspects.
I think a better module scheme could be retrofitted to C, but I'm not
going to do it.
A good programming environment should help. C as a language is not helpful; one may have a fully compliant and rather unhelpful compiler. But real C compilers tend to be as helpful as they can within the limits of the C language. While C still limits what they can do, there is quite a lot of difference between a current popular compiler and a bare-bones legal compiler. And there are extra tools, and here C support is hard to beat.
I don't agree with spending 1000 times more effort in devising complex
tools compared with just fixing the language.
So what are the new circles of ideas? All that crap in Rust that makes coding a nightmare, and makes building programs dead slow? All those new functional languages with esoteric type systems? 6GB IDEs (that used to take a minute and a half to load on my old PC)? No thanks.
Borrow checker in Rust looks like a good idea. There is a good chance that the _idea_ will be adopted by several languages in the near future.
OK. I've heard that it makes coding in Rust harder. Also that it makes compilation slower. Not very enticing features!
Not so new ideas are:
- removing limitations, that is making sure language constructs work as generally as possible (that allows one to get rid of many special constructs from older languages)
- nominal, problem-dependent types. That is, types should reflect the problem domain. In particular, domains which need types like 'u32' are somewhat specific; in normal domains the fundamental types are different
- functions as values/parameters. In particular functions have types and can be members of data structures
- "full rights" for user-defined types. Which means whatever syntax/special constructs work on built-in types should also work for user-defined types
- function overloading
- type reconstruction
- garbage collection
- exception handling
- classes/objects
Are these what your language supports? (If you have your own.)
I can't say these have ever troubled me. My scripting language has
garbage collection, and experimental features for exceptions and playing with OOP, and one or two taken from functional languages.
Being dynamic, it has generics built-in. But it deliberately keeps type systems at a simple, practical level (numbers, strings, lists, that sort
of thing), because the aim is for easy coding.
If you want hard, then
Rust, Ada, Haskell etc are that way -->!
* Clean, uncluttered brace-free syntax
Does this count as brace-free?
for i in 1..10 repeat (print i; s := s + i)
Not if you just substitute brackets for braces. Brackets (ie "()") are OK within one line, otherwise programs look too Lispy.
* Case-insensitive
* 1-based
* Line-oriented (no semicolons)
* Print/read as statements
A lot of folks consider the above to be misfeatures/bugs.
I know.
Concerning 'line-oriented' and 'intuitive', can you guess which changes to the following statement in 'line-oriented' syntax are legal and preserve meaning?
nm = x or nm = 'log or nm = 'exp or nm = '%power or
nm = 'nthRoot or
nm = 'cosh or nm = 'coth or nm = 'sinh or nm = 'tanh or
nm = 'sech or nm = 'csch or
nm = 'acosh or nm = 'acoth or nm = 'asinh or nm = 'atanh or
nm = 'asech or nm = 'acsch or
nm = 'Ei or nm = 'erf or nm = 'erfi or nm = 'li or
nm = 'Gamma or nm = 'digamma or nm = 'dilog or
nm = '%root_sum =>
"iterate"
As a hint let me say that '=' is comparison. And this is a single statement; small changes to whitespace will change the parse and lead to wrong code or a syntax/type error. BTW: you need a real newsreader to see it, Google and the like will change it so it no longer works.
Thunderbird screws it up as well, unless it is meant to have a ragged
left edge. But not sure what your point is.
Non-line-oriented (like C, like JSON) is better for machine-readable code, which can also be transmitted with less risk of garbling. But when 90% of semicolons in C-style languages coincide with end-of-line, you need to start questioning the point of them.
Note that C's preprocessor is line-oriented, but C itself isn't.
C is still tremendously popular for many reasons. But anyone wanting to code today in such a language will be out of luck if they preferred any or all of these characteristics. This is why I find coding in my language such a pleasure.
Then, if we are comparing the C language with mine, I offer:
* Out of order definitions
That is considered a misfeature in modern times.
Really? My experiments showed that modern languages (not C or C++) do
allow out-of-order functions. This gives great freedom in not worrying
about whether function F must go before G or after, or being able to
reorder or copy and paste.
In modern languages a definition may generate some code to be run, and the order in which this code is run matters.
* One-time definitions (no headers, interfaces etc)
* Expression-based
C is mostly expression-based.
No, it's mostly statement-based. Although it might be that most
statements are expression statements (a=b; f(); ++c;).
You can't do 'return switch() {...}' for example, unless using gcc extensions.
There are languages that go further than C, for example:
a := (s = 1; for i in 1..10 repeat (s := s + i); s)
is legal in the language that I use, but cannot be directly translated to C. However, from the examples that you gave it looked like your language is _less_ expression-based than C.
I don't use the feature much. I had it from the 80s, then switched to statement-based for a few years to match the scripting language; now both are expression-based.
One reason it's not used more is because it causes problems when targeting C. However I like it as a cool feature.
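For what it's worth, with the gcc statement-expression extension mentioned above, something close can be written in C; this is only an illustration, not how I actually generate code:

#include <stdio.h>

int main(void) {
    /* gcc/clang extension: the value of ({ ... }) is its last expression */
    int a = ({ int s = 1;
               for (int i = 1; i <= 10; i++) s += i;
               s; });
    printf("%d\n", a);   /* prints 56 */
    return 0;
}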
* Program-wide rather than module-wide compilation unit
AFAICS C leaves the choice to the implementer. If your language _only_ supports whole-program compilation, then this would be a negative feature.
* Build direct to executable; no object files or linkers
That is really a question of implementation. Building _only_ to an executable is a misfeature (what about the case when I want to use a few routines in your language, but the rest, including the main program, is in a different language?).
There were escape routes involving OBJ files, but that's fallen into disuse and needs fixing. For example, I can't do `mm -obj app` ATM, but could do this, when I've cleared some bugs:
mm -asm app # app.m to app.asm
aa -obj app # app.asm to app.obj
gcc app.obj lib.o -oapp.exe # or lib.a?
This (or something near) allows static linking of 'lib' instead of
dynamic, or including lib written in another language.
However, my /aim/ is for my language to be self-contained, and not to
talk to external software except via DLLs.
* Blazing fast compilation speed, can run direct from source
Again, that is implementation (some language features may slow down compilation; as you know, C allows fast compilation).
C also requires that the same header (say windows.h or gtk.h) used in 50 modules needs to be processed 50 times for a full build.
My M language processes it just once for a normal build (further, such
APIs are typically condensed into a single import module, not 100s of
nested headers). Some of it is by design!
* Module scheme with tidy 'one-time' declaration of each module
* Function reflection (access all functions within the program)
* 64-bit default data types (ie. 'int' is 64 bits, 123 is 64 bits)
* No build system needed
That really depends on the needs of your program. Some are complex and need a build system, some are simple and in principle could be compiled with "no" build system. I still use Makefiles for simple programs for two reasons:
- typing 'make' is almost as easy as it can get
Ostensibly simple, yes. But it rarely works for me. And internally, it is complex. Compare what a typical makefile contains with one of my program headers, which looks like a shopping list - you can't get simpler!
- I want to have record of compiler used/compiler options/
libraries
So do I, but I want to incorporate that into the language. So if a
program uses OpenGL, when it sees this:
importdll opengl =
(followed by imported entities) that tells it it will need opengl.dll.
In more complex cases (mapping of import library to DLL file(s) is not straightforward), it's more explicit:
linkdll opengl # next to module info
This stuff no longer needs to be submitted via command line; that is old-hat.
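For comparison, the nearest C equivalent I know of is MSVC-specific: a #pragma in the source can name the import library, so it doesn't have to be given on the command line (illustrative only; gcc ignores it):

#include <windows.h>
#pragma comment(lib, "user32.lib")   /* MSVC: record the link dependency in the source */

int main(void) {
    MessageBoxA(NULL, "Linked without naming user32 on the command line", "Demo", MB_OK);
    return 0;
}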
Or are used to buying ready-meals from supermarkets.
Meals are a different thing than programming languages. If you want to say that _you_ enjoy your language(s), then I get this. My point was that you are trying to present your _subjective_ preferences as something universal.
Yes, and I think I'm right. For example, English breakfast choices are simple (cereal, toast, eggs, sausages), everybody likes them, kids and adults. But then in the evening you go to a posh restaurant and things
are very different.
I think the same basics exist in programming languages.
I like programming and an important part is that my programs work. So I like features that help me to get a working program and dislike ones that cause troubles. IME, the following cause troubles:
- case insensitivity
I believe this would only cause problems if you already have a
dependence on case-sensitivity, so it's a self-fulfilling problem!
Create a new language with it, and those problems become minor ones that occur on FFI boundaries, and then not that often.
- dependence on poorly specified defaults
- out of order definitions
I don't believe this. In C, not having this feature means:
* Requiring function prototypes, sometimes
* Causing problems in self-referential structs (needs struct tags)
* Causing problems with circular references in structs (S includes a pointer to T, and T includes a pointer to S; see the sketch below)
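A small C sketch of those last two cases (my illustration): the struct tags are what let S and T refer to each other before both are fully defined.

struct T;                    /* forward declaration of the tag (conventional, makes the mutual reference explicit) */

struct S {
    struct T *t;             /* OK: T is declared, though still incomplete */
};

struct T {
    struct S *s;
};

int main(void) {
    struct S s = {0};
    struct T t = {&s};
    s.t = &t;
    return (s.t == &t && t.s == &s) ? 0 : 1;
}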
- inflexible tools that, for example, insist on creating an executable with no option to produce a linkable object file
Concerning 1-based indexing, IME it causes trouble in more cases than it helps, but usually this is a minor issue.
In my compiler sources, about 30% of arrays are zero-based (with the
0-term usually associated with some error or non-set/non-valid index).
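For illustration, a small C sketch (my own) of that pattern: slot 0 reserved as a 'not found / not set' entry, with real data in 1..N.

#include <stdio.h>
#include <string.h>

#define N 5
static const char *names[N + 1] =
    {"<none>", "red", "green", "blue", "cyan", "magenta"};

/* Return the 1-based index of s, or 0 meaning 'not found'. */
static int lookup(const char *s) {
    for (int i = 1; i <= N; i++)
        if (strcmp(names[i], s) == 0)
            return i;
    return 0;
}

int main(void) {
    printf("%s\n", names[lookup("blue")]);     /* blue */
    printf("%s\n", names[lookup("orange")]);   /* <none> */
    return 0;
}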
I use line-oriented syntax a lot. I can say that it works, simple examples are easy, but there are unintuitive situations and sometimes troubles. For example cut and paste work better with traditional syntax. If there are problems, then beginners may be confused. As one guy put it: the trouble with white space is that one can not see it.
White space (spaces and tabs) has nothing to do with being line-oriented.
Bart <bc@freeuk.com> wrote:
I don't agree. On Linux you do it with sources because it doesn't have a
reliable binary format like Windows that will work on any machine. If
there are binaries, they might be limited to a particular Linux
distribution.
You do not get it. I create binaries that work on 10 year old Linux
and new one. And on distributions that I never tried.
Of course,
I mean that binaries are for specific architecture, separate
for i386 Linux and x86_64 Linux (that covers PC-s, I would have to
provide more if I wanted to support more architectures).
Concerning the binary format, there were two: Linux started with a.out and switched to ELF in the second half of the nineties.
You also ignore educational aspect: some folks fetch sources to
learn how things are done.
Concerning not having an extension: you can add one if you want, moderately popular choices are .exe or .elf.
But for using a normal Linux executable it should not matter if it is a shell script, an interpreted Python file or machine code. So the extension should not "give away" the nature of the executable.
And having no extension
means that users are spared needless typing
No, but deriving the true sources from app.ma is trivial, since it is
basically a concatenation of the relevant files.
No less trivial than running 'tar' (which is a standard component on Linux).
* Needing to run './configure' first (this will not work on Windows...)
I saw one case when a guy tried to run './configure' on Windows NT and Windows NT kept crashing.
It made a little progress and then crashed, so that guy restarted it hoping that eventually it would finish (after a week or two he gave up and used Linux). But usually './configure' is not that bad. It may take a lot of time; IME a './configure' that ran in seconds on Linux needed several minutes on Windows.
And of course you need to install essential dependencies; a good program will tell you what you need to install first, before running configure. But you need to understand what they mean...
* Finding a 'make' /program/ (my gcc has a program called
mingw32-make.exe; is that the one?)
Probably. Normal advice for Windows folks is to install a thing called msys (IIUC it is msys2 now) which contains several tools including 'make'. You are likely to get it as part of a bigger bundle; I am not up to date enough to tell you if this bundle will be called 'gcc' or something else.
I don't have any interest in this; I just want the binary!
Well, I provide Linux binaries, but only sources for Windows users. One reason is that I have only Linux on my personal machine, so to deal with Windows I need to lease a machine. A different reason is that I am not paid for programming; I do this because I like to program and to some degree to build a community.
But if some potential members of the community would like to benefit but are unwilling to spend a little effort... Of course, in a big community there may be a lot of "free riders" who benefit without contributing anything, without bad effect, because other folks will do the needed work. But here I am dealing with a small community. I did a port to Windows to make sure that it actually works and there are no serious problems. But I leave to others the creation of binaries and the reporting of possible build problems. If nobody is willing to do this, then from my point of view Windows has no users and is not worth supporting.
Being able to ZIP or TAR a sprawling set of files into a giant binary
makes it marginally easier to transmit or download, but it doesn't
really address complexity.
In my book a single blob of 20MB is more problematic than 10,000 files of 2kB each. At a deeper level the complexity is the same, but the blob lacks the useful structure given by division into files and subdirectories.
So, most things actually work; only creating EXE doesn't work, because it needs access to msvcrt.dll. But even if it did, it would work as a cross-compiler, as its code generator is for the Win64 ABI.
Yes. It was particularly funny when you had a compiler running on a Raspberry Pi, but producing Intel code...
But I think this shows useful stuff can be done. A more interesting test
(which used to work, but it's too much effort right now), is to get my M
compiler working on Linux (the 'mc' version that targets C), and use
that to build qq, bcc etc from original sources on Linux.
In all, my Windows stuff generally works on Linux. Linux stuff generally
doesn't work on Windows, in terms of building from source.
Our experiences differ. There were times when I had to work on a Windows machine and the problem was that Windows does not come with tools that I consider essential.
The way it does modules is crude. So was my scheme in the 1980s, but it
was still one step up from C. My 2022 scheme is miles above C now.
Concering "miles above": using C one can create shared libraries.
Some shared libraries may be system provided, some may be private.
Within C ecosystem, one you have corresponding header files you
can use them as "modules". And they are usable with other languages.
AFAIK no other module system can match _this_.
I don't agree with spending 1000 times more effort in devising complex
tools compared with just fixing the language.
It is cheaper to have 1000 people doing tools than 100000 people fixing their programs.
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
When /I/ provide sources (that is, a representation that is one step
back from binaries), to build on Linux, then it will build on Linux.
They will have a dependency on a C compiler that can produce an ELF file, and I now stipulate either gcc or tcc.
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
Of course,
I mean that binaries are for specific architecture, separate
for i386 Linux and x86_64 Linux (that covers PC-s, I would have to
provide more if I wanted to support more architectures).
Concerning binary format, there were two: Linux started with a.out
and switched to ELF in second half of nineties.
(I don't understand that; a.out is a filename; ELF is a file format.)
Concerning not having an extension: you can add one if you want, moderately popular choices are .exe or .elf.
But nobody does. Main problem is in forums like this: if I say
`hello.exe`, everyone knows that's a binary executable for Windows. But
if I mention `hello`, how are you supposed to tell that I'm talking
about a Linux executable?
I know that Linux doesn't care about extensions, but people do. After all it still uses, by convention, extensions like .c .s .o .a .so, so why not actual binaries by convention?
But for using a normal Linux executable it should not matter if it is a shell script, an interpreted Python file or machine code. So the extension should not "give away" the nature of the executable.
You can have a neutral extension that doesn't give it away either. Using
no extension is not useful: is every file with no extension something
you can execute?
But there are also ways to execute .c files directly, and of course .py files which are run from source anyway.
It simply doesn't make sense. On Linux, I can see that executables are displayed on consoles in different colours; what happened when there was
no colour used?
And having no extension
means that users are spared needless typing
Funny you should bring that up, because every time you run a /C
compiler/ on a /C source file/, you have to type the extension like this:
gcc hello.c
which also writes the output as a.exe or a.out, so you further need to
write at least:
gcc hello.c -o hello # hello.exe on Windows
I would only write this:
bcc hello
and it works out, by some very advanced AI, that I want to compile
hello.c into hello.exe. And once you have hello.exe, you can run it like this:
hello
You don't need to type .exe. So, paradoxically, having extensions means having to type them less often:
mm -pcl prog # old compiler: translate prog.m to prog.pcl
pcl -asm prog # prog.pcl to prog.asm
aa prog # prog.asm to prog.exe
prog # run it
At no point did I need to write an extension. It is implied by the
program I invoked.
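For what it's worth, the logic involved is tiny; here is a rough C sketch (my own, not the actual code in any of these tools) of defaulting the extension when none is given:

#include <stdio.h>
#include <string.h>

/* If 'name' has no extension, append '.defext'; a trailing '.' counts as
   an explicit (empty) extension and is left alone. */
static void add_default_ext(const char *name, const char *defext,
                            char *out, size_t outsize) {
    const char *dot   = strrchr(name, '.');
    const char *slash = strrchr(name, '/');
    int has_ext = dot && (!slash || dot > slash);
    if (has_ext)
        snprintf(out, outsize, "%s", name);
    else
        snprintf(out, outsize, "%s.%s", name, defext);
}

int main(void) {
    char buf[256];
    add_default_ext("hello", "c", buf, sizeof buf);      /* -> hello.c */
    puts(buf);
    add_default_ext("prog.asm", "c", buf, sizeof buf);   /* unchanged  */
    puts(buf);
    return 0;
}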
Could it be simply that file extensions are sometimes helpful, sometimes inconvenient or irrelevant, and mostly it all just works without much trouble?
On 17/12/2022 14:22, Bart wrote:
For data files, it can often be convenient to have an extension
indicating the type - and it is as common on Linux as it is on Windows
to have ".odt", ".mp3", etc., on data files.
People use extensions where they are useful, and skip them when they are counter-productive (such as for executable programs).
When you are writing code, and you have a function "twiddle" and an
integer variable "counter", you call them "twiddle" and "counter". You don't call them "twiddle_func" and "counter_int". But maybe sometimes
you find it useful - it's common to write "counter_t" for a type, and
maybe you'd write "xs" for an array rather than "x". Filenames can
follow the same principle - naming conventions can be helpful, but you
don't need to be obsessive about it or you end up with too much focus on
the wrong thing.
On *nix, every file with the executable flag can be executed - that's
what the flag is for.
Sometimes it is convenient to be able to see which files in a directory
are executables, directories, etc. That's why "ls" has flags for
colours or to add indicators for different kinds of files. ("ls -F --color").
But there are also ways to execute .c files directly, and of course
.py files which are run from source anyway.
There are standards for that. A text-based file can have a shebang
comment ("#! /usr/bin/bash", or similar) to let the shell know what interpreter to use. This lets you distinguish between "python2" and "python3", for example, which is a big improvement over Windows-style
file associations that can only handle one interpreter for each file
type.
And the *nix system distinguishes between executable files and
non-executables by the executable flag - that way you don't accidentally
try to execute non-executable Python files.
You do realise that gcc can handle some 30-odd different file types?
It's not a simple C compiler that assumes everything it is given is a C file.
On Linux, you just write "make hello" - you don't need a makefile for
simple cases like that.
(And the "advanced AI" can figure out if it is
C, C++, Fortran, or several other languages.)
That's fine for programs that handle just one file type.
But I'm a little confused here. On the one hand, you are saying how terrible Linux is for not using file extensions. On the other hand, you are saying how wonderful your own tools are because they don't need file extensions.
Could it be simply that file extensions are sometimes helpful, sometimes inconvenient or irrelevant, and mostly it all just works without much trouble?
On 17/12/2022 13:22, Bart wrote:
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
When /I/ provide sources (that is, a representation that is one step
back from binaries), to build on Linux, then it will build on Linux.
They will have a dependency on a C compiler that can produce an ELF file, and I now stipulate either gcc or tcc.
See https://github.com/sal55/langs/tree/master/demo
This includes mc.c, a generated-C rendering of my M-on-Linux compiler.
You [antispam] will need gcc or tcc to create a binary on Linux; instructions are at the link.
Once you have a working binary, you can try that on the one-file M
'true' sources in mc.ma, to create a new binary.
If that monolithic source file still doesn't cut it for you, I've
included an extraction program. The readme tells you how to run that,
and how to run the 2nd compiler on those discrete files to make a third compiler.
(I've briefly tested those instructions under WSL. It ought to work on
any 64-bit Linux including ARM, but I can't guarantee it. The C file is 32Kloc, and the .ma file is 25Kloc.
If it doesn't work, then forget it. I know it can be made to work, and
to do so via my one-file distributions.)
Bart <bc@freeuk.com> wrote:
On 17/12/2022 13:22, Bart wrote:
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
When /I/ provide sources (that is, a representation that is one step
back from binaries), to build on Linux, then it will build on Linux.
They will have a dependency on a C compiler that can produce an ELF file, and I now stipulate either gcc or tcc.
See https://github.com/sal55/langs/tree/master/demo
This includes mc.c, a generated-C rendering of my M-on-Linux compiler.
You [antispam] will need gcc or tcc to create a binary on Linux;
instructions are at the link.
Once you have a working binary, you can try that on the one-file M
'true' sources in mc.ma, to create a new binary.
It works on 64-bit AMD/Intel Linux. As-is it failed on 64-bit ARM. More precisely, the initial 'mc.c' compiled fine, but it could not run 'gcc'. Namely, ARM gcc does not have the '-m64' option. Once I removed this it works.
I tested this using the following program:
proc main=
rsystemtime tm
os_getsystime(&tm)
println tm.second
println tm.minute
println tm.hour
println tm.day
println tm.month
println tm.year
end
BTW: I still doubt that 'mc.ma' expands to the true source: do you really write no comments in your code?
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
I don't agree. On Linux you do it with sources because it doesn't have a reliable binary format like Windows that will work on any machine. If there are binaries, they might be limited to a particular Linux distribution.
You do not get it. I create binaries that work on 10 year old Linux
and new one. And on distributions that I never tried.
I tried porting a binary from one ARM32 Linux machine to another; it
didn't work, even 2 minutes later. Maybe it should have worked and there
was some technical reason why my test failed.
But I have noticed that on Linux, distributing stuff as giant source
bundles seems popular. I assumed that was due to difficulties in using binaries.
Of course,
I mean that binaries are for specific architecture, separate
for i386 Linux and x86_64 Linux (that covers PC-s, I would have to
provide more if I wanted to support more architectures).
Concerning binary format, there were two: Linux started with a.out
and switched to ELF in second half of nineties.
(I don't understand that; a.out is a filename; ELF is a file format.)
You also ignore educational aspect: some folks fetch sources to
learn how things are done.
Sure. Android is open source; so is Firefox. While you can spend years reading through the 25,000,000 lines of Linux kernel code. Good luck finding out how they work!
Here I'm concerned only with building stuff that works, and don't want
to know what directory structure the developers use.
Concerning not having an extension: you can add one if you want, moderately popular choices are .exe or .elf.
But nobody does. Main problem is in forums like this: if I say
`hello.exe`, everyone knows that's a binary executable for Windows.
But
if I mention `hello`, how are you supposed to tell that I'm talking
about a Linux executable?
I know that Linux doesn't care about extensions, but people do. After all it still uses, by convention, extensions like .c .s .o .a .so, so why not actual binaries by convention?
But for using a normal Linux executable it should not matter if it is a shell script, an interpreted Python file or machine code. So the extension should not "give away" the nature of the executable.
You can have a neutral extension that doesn't give it away either. Using
no extension is not useful: is every file with no extension something
you can execute?
But there are also ways to execute .c files directly, and of course .py files which are run from source anyway.
It simply doesn't make sense.
On Linux, I can see that executables are
displayed on consoles in different colours; what happened when there was
no colour used?
And having no extension
means that users are spared needless typing
Funny you should bring that up, because every time you run a /C
compiler/ on a /C source file/, you have to type the extension like this:
gcc hello.c
which also writes the output as a.exe or a.out, so you further need to
write at least:
gcc hello.c -o hello # hello.exe on Windows
I would only write this:
bcc hello
and it works out, by some very advanced AI, that I want to compile
hello.c into hello.exe. And once you have hello.exe, you can run it like this:
hello
You don't need to type .exe. So, paradoxically, having extensions means having to type them less often:
mm -pcl prog # old compiler: translate prog.m to prog.pcl
pcl -asm prog # prog.pcl to prog.asm
aa prog # prog.asm to prog.exe
prog # run it
At no point did I need to write an extension. It is implied by the
program I invoked.
No, but deriving the true sources from app.ma is trivial, since it is
basically a concatenation of the relevant files.
No less trivial than running 'tar' (which is a standard component on Linux).
.ma is a text format; you can separate with a text editor if you want!
But you don't need to. Effectively you just do:
gcc app # ie. app.gz2
and it makes `app` (ie. an ELF binary 'app.').
* Needing to run './configure' first (this will not work on Windows...)
I saw one case when a guy tried to run './configure' on Windows NT and Windows NT kept crashing.
Possibly you don't quite understand: aside from "./" being a syntax
error on Windows,
'configure' is a script full of Bash commands which
invoke all sorts of utilities from Linux. It is meaningless to attempt
to run it on Windows.
It would be like my bundling a Windows BAT file with sources intended to
be built on Linux.
It made a little progress and then crashed, so that guy restarted it hoping that eventually it would finish (after a week or two he gave up and used Linux). But usually './configure' is not that bad. It may take a lot of time; IME a './configure' that ran in seconds on Linux needed several minutes on Windows.
It can take several minutes on Linux too! Auto-conf-generated configure scripts can contain tens of thousands of lines of code.
And of course you need to install essential dependencies; a good program will tell you what you need to install first, before running configure. But you need to understand what they mean...
* Finding a 'make' /program/ (my gcc has a program called
mingw32-make.exe; is that the one?)
Probably. Normal advice for Windows folks is to install a thing called msys (IIUC it is msys2 now) which contains several tools including 'make'. You are likely to get it as part of a bigger bundle; I am not up to date enough to tell you if this bundle will be called 'gcc' or something else.
But that's just a cop-out. As I said above, it's like my delivering a
build system for Linux that requires so many Windows dependencies, that
you can only build by installing half of Windows.
I don't have any interest in this; I just want the binary!
Well, I provide Linux binaries, but only sources for Windows users. One reason is that I have only Linux on my personal machine, so to deal with Windows I need to lease a machine. A different reason is that I am not paid for programming; I do this because I like to program and to some degree to build a community.
I had the same problem, in reverse. I've spent money on RPis, cheap
Linux netbooks, spent endless time getting VirtualBox to work, and still don't have a suitable Linux machine that Just Works.
WSL is not interesting since it is still x64, and maybe things will work that will not work on real Linux (eg. it can still run actual Windows
EXEs; what else is it allowing that wouldn't work on real Linux).
I've stopped this since no one has ever expressed any interest in seeing
my stuff work on Linux, especially on RPi where a very fast alternative
to C that ran on the actual board would have been useful.
Bart <bc@freeuk.com> wrote:
But I have noticed that on Linux, distributing stuff as giant source
bundles seems popular. I assumed that was due to difficulties in using
binaries.
Creating binaries that work on many systems requires some effort. The easiest way is to create them on the oldest system that you expect to be used: typically a compile on Linux will try to take advantage of features of the processor and system; an older system/processor may lack those features and fail. As I wrote, one needs to limit dependencies or bundle them. Bundling may lead to a very large download size.
And do not forget that open source means that users can get the source. If the source is not available, most users will ignore your program.
Concerning binaries for ARM, they are a bit more problematic than for Intel/AMD. Namely, there are a lot of different variants of ARM processors and 3 different instruction encodings, and some popular ARM systems have no support for 32-bit binaries. So there is less compatibility than is technically possible.
Things have structure; the kernel is divided into subdirectories in a reasonably logical way.
And there are tools like 'grep' that can find the relevant thing in seconds. This is not pure theory: I had a puzzling problem where binaries were failing on newer Linux distributions. It took some time to solve, but another guy hinted to me that this might be tightened "security" in the kernel. Even after the hint my first try did not work. But I was able to find the relevant code in the kernel and then it became clear what to do.
Here I'm concerned only with building stuff that works, and don't want
to know what directory structure the developers use.
Concerning not having an extension: you can add one if you want, moderately popular choices are .exe or .elf.
But nobody does. Main problem is in forums like this: if I say
`hello.exe`, everyone knows that's a binary executable for Windows.
It may be for Linux...
At no point did I need to write an extension. It is implied by the
program I invoked.
Implied extensions have the trouble that somebody else may try to hijack them. I use the TeX system, which produces .dvi files. IIUC they could easily be mishandled by systems depending just on the extension. And in the area of programming languages at least three languages compete for the .p extension.
Possibly you don't quite understand: aside from "./" being a syntax
error on Windows,
AFAIK '/' is legal in Windows pathnames (even though many programs do not support it). I am not sure about the leading dot.
'configure' is a script full of Bash commands which
invoke all sorts of utilities from Linux. It is meaningless to attempt
to run it on Windows.
You probably do not understand that 'configure' scripts use POSIX
commands.
It would be like my bundling a Windows BAT file with sources intended to
be built on Linux.
There are two important differences:
- COMMAND.COM is very crappy as a command processor, unlike the Unix shell, which from the start was designed as a programming language. IIUC that is changing with PowerShell.
- you compare a thing which was designed to be a portability layer with a platform-specific thing.
It can take several minutes on Linux too! Auto-conf-generated configure scripts can contain tens of thousands of lines of code.
It depends on the commands and script. The shell can execute thousands of simple commands per second. On Linux most of the time probably goes to the C compiler (which is called many times from 'configure'). On Windows the cost of process creation used to be much higher than on Linux, so it is likely that most of the 'configure' time went to process creation. Anyway, the same 'configure' script tended to run 10-100 times slower on Windows than on Linux. I did not try recently...
But that's just a cop-out. As I said above, it's like my delivering a
build system for Linux that requires so many Windows dependencies, that
you can only build by installing half of Windows.
POSIX utilities can be quite small. On a Unix-like system one could fit them in something between 1-2MB. On Windows there is the trouble that some space-saving tricks do not work (in Unix the usual trick is to have one program available under several names, doing different things depending on the name). Also, for robustness they may be statically linked. And people usually want versions with most features, which are bigger than what is strictly necessary. Still, that is a rather small thing if you compare it to the size of Windows. Another story is the size of the C compiler and its header files.
WSL is not interesting since it is still x64, and maybe things will work
that will not work on real Linux (eg. it can still run actual Windows
EXEs; what else is it allowing that wouldn't work on real Linux).
I've stopped this since no one has ever expressed any interest in seeing
my stuff work on Linux, especially on RPi where a very fast alternative
to C that ran on the actual board would have been useful.
Well, a compiler that cannot generate code for the Pi is not very interesting to run on an RPi, even if it is very fast. Your latest M is a step in a good direction, but suffers due to gcc compile time:
time ./mc -asm mc.m
M6 Compiling mc.m---------- to mc.asm
real 0m0.431s
user 0m0.347s
sys 0m0.079s
time ../mc mc.m
M6 Compiling mc.m---------- to mc
L:Invoking C compiler: gcc -omc mc.c -lm -ldl -s -fno-builtin
mc.c:2:32: warning: unknown option after '#pragma GCC diagnostic' kind [-Wpragmas]
#pragma GCC diagnostic ignored "-Wbuiltin-declaration-mismatch"
^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
real 0m9.137s
user 0m8.746s
sys 0m0.386s
So 'mc' can generate C code in 0.431s, but then it takes 9.137s to compile the generated C (IIUC Tiny C does not support ARM, and even on x86_64 the compile command probably needs fixing).
And there seems to be per-program overhead:
time fred.nn/mc -asm hello.m
M6 Compiling hello.m------- to hello.asm
real 0m0.222s
user 0m0.186s
sys 0m0.032s
time fred.nn/mc hello.m
M6 Compiling hello.m------- to hello
L:Invoking C compiler: gcc -ohello hello.c -lm -ldl -s -fno-builtin
hello.c:2:32: warning: unknown option after '#pragma GCC diagnostic' kind [-Wpragmas]
#pragma GCC diagnostic ignored "-Wbuiltin-declaration-mismatch"
real 0m1.596s
user 0m1.464s
sys 0m0.125s
'hello.m' is quite small, but it needs half the time of mc, which is 6000 times larger. And the generated 'hello.c' is still 4381 lines.
On 32-bit Pis I have Poplog:
http://github.com/hebisch/poplog
http://www.math.uni.wroc.pl/~hebisch/poplog/corepop.arm
For bootstrap one needs the binary above. For me it works on the original Raspberry Pi, Banana Pi, Orange Pi PC, Orange Pi Zero. They all have different ARM chips and run different versions of Linux, yet the same 'corepop.arm' works on all of them.
If you are interested you can look at the INSTALL file in the repo above (skip the quick-install part that assumes a tarball and go to the full install). Building uses a hand-written 'configure'; it is very simple, but calls the C compiler up to 4 times, so the runtime of 'configure' on the Pis is noticeable (of the order of 1 second). The actual build (using 'make') takes a few minutes, depending on what was configured.
For massive files Poplog compiles much slower than your compiler but usually much faster than gcc. However, the main advantage is that Poplog compiles to memory, giving you the impression of an interpreter, but generating machine code. Let me add that there are actually two compilers, one compiling to memory and a separate one which generates assembly code. The second compiler has low-level extensions allowing faster object code. The compiler compiling to memory is significantly faster than the one which generates assembly.
On 18/12/2022 17:17, antispam@math.uni.wroc.pl wrote:
As-is it failed on 64-bit ARM.
More precisely, the initial 'mc.c' compiled fine, but it could not run 'gcc'. Namely, ARM gcc does not have the '-m64' option. Once I removed this it works.
gcc doesn't have `-m64`, really? I'm sure I've used it even on ARM. (How do you tell it to generate ARM32 rather than ARM64 code?)
Bart <bc@freeuk.com> wrote:
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
Possibly you don't quite understand: aside from "./" being a syntax
error on Windows,
AFAIK '/' is legal in Windows pathnames (even though many programs
do not support it). I am not sure about leading dot.
For massive files Poplog compiles much slower than your compiler but usually much faster than gcc. However, the main advantage is that Poplog compiles to memory, giving you the impression of an interpreter, but generating machine code.
That's what my 'mm' compiler does on Windows, using the -run option. Or if I rename it 'ms', that is the default:
c:\mx>ms mm -run \qx\qq \qx\hello.q
Compiling mm.m to memory
Compiling \qx\qq.m to memory
Hello, World! 19-Dec-2022 15:00:34
(There's an issue ATM building ms with ms.)
But I can't do this via the C target.
Bart <bc@freeuk.com> wrote:
I know that Linux doesn't care about extensions, but people do. After all it still uses, by convention, extensions like .c .s .o .a .so, so why not actual binaries by convention?
Here you miss the virtue of simplicity: binaries are started by the kernel and you pass the filename of the binary to the system call. No messing with extensions there. There are similar library calls that do a search based on PATH; again no messing with extensions.
It simply doesn't make sense.
It makes sense if you know that an executable in the PATH is simultaneously a shell command. You see, there are folks who really do not like useless clutter in their command lines. And before calling an executable from a shell script you may wish to check if it is available. Having a different extension for calling and for access as a normal file would complicate scripts.
On 19/12/2022 06:15, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
I know that Linux doesn't care about extensions, but people do. After all it still uses, by convention, extensions like .c .s .o .a .so, so why not actual binaries by convention?
Here you miss the virtue of simplicity: binaries are started by the kernel and you pass the filename of the binary to the system call. No messing with extensions there. There are similar library calls that do a search based on PATH; again no messing with extensions.
I don't get this. Do you mean that inside a piece of code (ie. written
once and executed endless times), it is better to write run("prog")
instead of run("prog.exe"), because it saves 4 keystrokes?
It simply doesn't make sense.
It makes sense if you know that an executable in the PATH is simultaneously a shell command. You see, there are folks who really do not like useless clutter in their command lines. And before calling an executable from a shell script you may wish to check if it is available. Having a different extension for calling and for access as a normal file would complicate scripts.
In every context I've been talking about where extensions have been
optional and have been inferred, you have always been able to write full extensions if you want. This would be recommended inside a script run
myriad times to make it clear to people reading or maintaining it.
People have mentioned that on Linux you could optionally name
executables with ".exe" or ".elf" extension. If 'gcc' (the main binary driver program of gcc, not gcc as a broader concept - you see the
problems you get into!) had been named "gcc.exe", would you have had to
type this every time you ran it:
gcc.exe hello.c
If so, then I think I can see the real reason why extensions are empty!
In a Linux terminal shell, there apparently is no scope for informality
or user-friendliness at all.
This has lead me to thinking about how command line parameters are separated. On either OS you normally type this:
gcc a.c b.c c.c
You can't do this, separate with commas, as the comma becomes part of
each filename:
gcc a.c, b.c, c.c
That applies also to my bcc, but there, you CAN have comma-separated
items inside an @file; with gcc, that still fails.
So, what's going on here: is it an OS shell misfeature, or what?
Well it's not the OS on Windows, since 'T.BAT a,b,c' will process a, b,
c as separate 'a b c' items inside the script (not as "a," etc). (I
can't test how it works on Linux.)
On 18/12/2022 13:05, David Brown wrote:
On 17/12/2022 14:22, Bart wrote:
For data files, it can often be convenient to have an extension
indicating the type - and it is as common on Linux as it is on Windows
to have ".odt", ".mp3", etc., on data files.
It's convenient for all files. And before you say, I can add a .exe extension if I want: I don't want to have to write that every time I run that program.
People use extensions where they are useful, and skip them when they
are counter-productive (such as for executable programs).
I can't imagine all my EXE (and perhaps BAT files) all having no
extensions. Try and envisage all your .c files have no extensions by default. How do you even tell that are C sources and not Python or not executables?
When you are writing code, and you have a function "twiddle" and an
integer variable "counter", you call them "twiddle" and "counter".
You don't call them "twiddle_func" and "counter_int". But maybe
sometimes you find it useful - it's common to write "counter_t" for a
type, and maybe you'd write "xs" for an array rather than "x".
Filenames can follow the same principle - naming conventions can be
helpful, but you don't need to be obsessive about it or you end up
with too much focus on the wrong thing.
But you /do/ write twiddle.c, twiddle.s, twiddle.o, twiddle.cpp, twiddle.h etc? Yet the most important file of all is just plain 'twiddle'!
In casual writing or conversation, how do you distinguish 'twiddle the binary executable' from 'twiddle the folder', from 'twiddle the application' (an installation), from 'twiddle' the project etc, without having to use that qualification? Using 'twiddle.exe' does that succinctly and unequivocally.
On *nix, every file with the executable flag can be executed - that's
what the flag is for.
Sometimes it is convenient to be able to see which files in a
directory are executables, directories, etc. That's why "ls" has
flags for colours or to add indicators for different kinds of files.
("ls -F --color").
As I said, if it's convenient for data and source files, it's convenient
for all files.
But there are also ways to execute .c files directly, and of course
.py files which are run from source anyway.
There are standards for that. A text-based file can have a shebang
comment ("#! /usr/bin/bash", or similar) to let the shell know what
interpreter to use. This lets you distinguish between "python2" and
"python3", for example, which is a big improvement over Windows-style
file associations that can only handle one interpreter for each file
type.
That is invasive. And it takes something that is really an attribute of the file name and puts it inside the file, requiring the file to be opened and read to find out.
(Presumably every language that runs on Linux needs to accept '#' as a line comment? And you need to build into every one of 10,000 source files the direct location of the Python2 or Python3 installation on that machine? Is that portable across OSes? But I expect it's smarter than that.)
With Python, you're still left with the fact that you see a file with a
.py extension, and don't know if it's Py2 or Py3, or Py3.10 or Py3.11,
or whether it's a program that works with any version. It is a separate problem from having, as convention, no extensions for ELF binary files.
And the *nix system distinguishes between executable files and
non-executables by the executable flag - that way you don't
accidentally try to execute non-executable Python files.
(So there are files that contain Python code that are non-executable?
Then what is the point?)
You do realise that gcc can handle some 30-odd different file types?
That doesn't change the fact that probably 99% of the time I run gcc, it
is with the name of a .c source file. And 99.9% of the times when I
invoke it on prog.c as the first or only file to create an executable,
then I want to create prog.exe.
So its behaviour is unhelpful. After the 10,000th time you have to type
.c, or backspace over .c to get at the name itself to modify, it becomes tedious.
Now it's not that hard to write a wrapper script or program on top of gcc.exe, but if it isn't hard, why doesn't it just do that?
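(To make that concrete, here is a minimal sketch of such a wrapper in C. It assumes the desired defaults are simply "append .c to bare names and name the output after the first file", and that a plain system() call is good enough; it illustrates the idea, it is not a real tool:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build "gcc a.c b.c ... -o a" from bare names and run it.
   No length checking or option handling - it is only a sketch. */
int main(int argc, char **argv) {
    char cmd[4096] = "gcc";
    if (argc < 2) { fprintf(stderr, "usage: wrap name...\n"); return 1; }
    for (int i = 1; i < argc; i++) {
        strcat(cmd, " ");
        strcat(cmd, argv[i]);
        if (!strchr(argv[i], '.'))      /* bare name: default to .c */
            strcat(cmd, ".c");
    }
    strcat(cmd, " -o ");
    strcat(cmd, argv[1]);               /* output named after the first input */
    puts(cmd);
    return system(cmd);
}

The argument below is really about whether such defaults belong in gcc itself.)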
It's not a simple C compiler that assumes everything it is given is a
C file.
As I said, that is not helpful for me. Also, how many file types does
'as' accept? As that also requires the full extension, and also,
bizarrely, generates `a.out` as the object file name.
If you intend to assemble three .s files to object files, using separate 'as' invocations, they will all be called a.out!
That would be crass even for a toy program written by a student. And yet here it is a mainstream product used by millions of people.
All my language programs (and many of my apps) have a primary type of
input file, and will default to that file extension if omitted. Anything else (eg .dll files) needs the full extension.
Here's something funny: take hello.c and rename to 'hello', with no extension. If I try and compile it:
gcc hello
it says: hello: file not recognised: file format not recognised. Trying
'gcc hello.' is worse: it can't see the file at all.
So first, on Linux, where file extensions are supposed to be optional,
gcc can't cope with a missing .c extension; you have to provide extra
info. Second, on Linux, "hello" is a distinct file from "hello.".
With bcc, I just have to type "bcc hello." to make it work. A trailing
dot means an empty extension.
On Linux, you just write "make hello" - you don't need a makefile for
simple cases like that.
OK... so how does 'make' figure out the file extension?
'Make' anyway has different behaviour:
* It can choose not to compile
* On Windows, it says this:
c:\yyy>make hello
cc hello.c -o hello
process_begin: CreateProcess(NULL, cc hello.c -o hello, ...) failed.
make (e=2): The system cannot find the file specified.
<builtin>: recipe for target 'hello' failed
make: *** [hello] Error 2
* I also use several C compilers; how does make know which one I intend?
How do I pass it options?
If I give another example:
c:\c>bcc cipher hmac sha2
Compiling cipher.c to cipher.asm
Compiling hmac.c to hmac.asm
Compiling sha2.c to sha2.asm
Assembling to cipher.exe
it just works. 'make cipher hmac sha2' doesn't, not even in WSL.
(And the "advanced AI" can figure out if it is C, C++, Fortran, or
several other languages.)
No, it can't. If I have hello.c and hello.cpp, it will favour the .c file.
File extensions are tremendously helpful. But that doesn't mean you have
to keep typing them! They just have to be there.
On 18/12/2022 17:17, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
On 17/12/2022 13:22, Bart wrote:
On 17/12/2022 06:07, antispam@math.uni.wroc.pl wrote:
I tested this using the following program:
proc main=
rsystemtime tm
os_getsystime(&tm)
println tm.second
println tm.minute
println tm.hour
println tm.day
println tm.month
println tm.year
end
It's funny you picked on that, because the original version of my
hello.m also printed out the time:
proc main=
println "Hello World!",$time
end
This was to ensure I was actually running the just built-version, and
not the last of the 1000s of previous ones. But the time-of-day support
for Linux wasn't ready so I left it out.
I've updated the mc.c/mc.ma files (not hello.m, I'm sure you can fix that).
However getting this to work on Linux wasn't easy as it kept crashing.
The 'struct tm' record ostensibly has 9 fields of int32, so has a size
of 36 bytes. And on Windows it is. But on Linux, a test program reported
the size as 56 bytes.
Doing -E on that program under Linux, the struct actually looks like this:
struct tm
{
int tm_sec;
int tm_min;
int tm_hour;
int tm_mday;
int tm_mon;
int tm_year;
int tm_wday;
int tm_yday;
int tm_isdst;
long int tm_gmtoff;
const char *tm_zone;
};
16 extra bytes for fields not mentioned in the 'man' docs, plus 4 bytes of
alignment, account for the extra 20 bytes. This is typical of the problems in adapting C APIs to the FFIs of other languages.
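(A quick way to see this on any given system is to ask the compiler itself; a minimal sketch, using only the standard documented fields:

#include <stdio.h>
#include <stddef.h>
#include <time.h>

/* Print the real size of struct tm and where some documented fields sit.
   On Linux/glibc the size exceeds the nine ints because of the extra
   tm_gmtoff/tm_zone members and alignment padding. */
int main(void) {
    printf("sizeof(struct tm) = %zu\n", sizeof(struct tm));
    printf("tm_sec   at offset %zu\n", offsetof(struct tm, tm_sec));
    printf("tm_year  at offset %zu\n", offsetof(struct tm, tm_year));
    printf("tm_isdst at offset %zu\n", offsetof(struct tm, tm_isdst));
    return 0;
}

Which is why FFI declarations really need to be generated from, or at least checked against, the system headers rather than the man page.)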
BTW: I still doubt that 'mc.ma' expands to true source: do you
really write no comments in your code?
The file was detabbed and decommented, as the comments would be full of ancient crap, mainly debugging code that never got removed. I've tidied
most of that up, and now the file is just detabbed (otherwise things
won't line up properly). Note the sources are not heavily commented anyway.
It will always be a snapshot of the actual sources, which are not kept on-line and can change every few seconds.
On 20/12/2022 03:44, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
This has led me to thinking about how command line parameters are
separated. On either OS you normally type this:
gcc a.c b.c c.c
You can't do this, separate with commas, as the comma becomes part of
each filename:
gcc a.c, b.c, c.c
That applies also to my bcc, but there, you CAN have comma-separated
items inside an @file; with gcc, that still fails.
Why would you do such a silly thing? If you really want, you can
redefine 'gcc' so that it strips trailing commas (that is
trivial). If you like excess characters you can type something longer,
like:
echo gcc a.c, b.c, c.c | tr -d ',' | bash
So, what's going on here: is it an OS shell misfeature, or what?
KISS principle. Commas are legal in filenames and potentially
useful. On the command line, spaces work fine. If you really need
splitting to work differently there are reasonably simple ways
to do this; the most crude is above.
BTW: travelling between the UK and other countries, do you complain
that cars drive on the wrong side of the road?
Well it's not the OS on Windows, since 'T.BAT a,b,c' will process a, b,
c as separate 'a b c' items inside the script (not as "a," etc). (I
can't test how it works on Linux.)
There is an IFS variable which lists the characters used for word splitting;
you can put a comma there together with whitespace. I have never used it
myself, but it is used extensively in hairy shell scripts like
'configure'.
An important issue here is that the "OS" is not involved in any of this, either on Windows or on Linux.
In *nix, the shell (not the OS) is responsible for many aspects of
parsing command lines, including splitting up parameters and expanding wildcards in filenames. So on Linux, writing "gcc a.c, b.c, c.c" in
bash will call gcc with three parameters - "a.c,", "b.c,", and "c.c".
On Windows, the standard "DOS Prompt" command-line terminal does very
little of this. (I don't know the details of Powershell. And if you
use a different shell on Windows, like bash from msys, you get the
behaviour of that shell.) So if you have a normal "DOS Prompt" and
write "gcc a.c, b.c, c.c" then the program "gcc" is called with /one/ parameter. It's up to the program to decide how to parse these.
Typically it will use one of several different WinAPI calls depending on whether it wants the abomination that is "wide characters", or UTF-8, or
to hope that everything is simple ASCII. If a program wants to parse
the string itself using commas as separators, it can do that too.
Of course most programs - especially those that come from a *nix
heritage - will choose to parse in the same way as is done by *nix shells.
I did not know that the batch file interpreter handled commas
differently like this. Who says you never learn things on Usenet? :-)
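(A small Windows-only sketch makes the difference easy to see; GetCommandLineA is the WinAPI call that returns the raw string, and the rest is just ordinary argv:

#include <stdio.h>
#include <windows.h>

int main(int argc, char **argv) {
    /* The unsplit command line exactly as the process received it. */
    printf("raw: %s\n", GetCommandLineA());
    /* The same thing after the C runtime has split it into argv[]. */
    for (int i = 0; i < argc; i++)
        printf("argv[%d] = %s\n", i, argv[i]);
    return 0;
}

Run it from a DOS prompt and then from bash under msys with the same arguments and you can see exactly who, if anyone, did the splitting and expanding.)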
On 18/12/2022 18:09, Bart wrote:
A key point here is that almost every general-purpose OS, other than Windows, in modern use on personal computers is basically POSIX
compliant.
Maybe you haven't done much Python programming and have only worked with small scripts. But like any other language, bigger programs are split
into multiple files or modules - only the main program file will be executable. So if a big program has 50 Python files, only one of them
will normally be executable and have the shebang and the executable
flag. (Sometimes you'll "execute" other modules to run their tests
during development, but you'd likely do that as "python3 file.py".)
You do realise that gcc can handle some 30-odd different file types?
That doesn't change the fact that probably 99% of the time I run gcc,
it is with the name of a .c source file. And 99.9% of the times when I
invoke it on prog.c as the first or only file to create an executable,
then I want to create prog.exe.
OK. So gcc should base its handling of input on what /you/ do, never
mind the rest of the world?
That's fine for your own tools, but not for
gcc.
In *nix, the dot is just a character, and file extensions are just part
of the name. You can have as many or as few as you find convenient and helpful.
If you intend to assemble three .s files to object files, using
separate 'as' invocations, they will all be called a.out!
That would be crass even for a toy program written by a student. And
yet here it is a mainstream product used by millions of people.
All my language programs (and many of my apps) have a primary type of
input file, and will default to that file extension if omitted.
Anything else (eg .dll files) needs the full extension.
Here's something funny: take hello.c and rename to 'hello', with no
extension. If I try and compile it:
gcc hello
it says: hello: file not recognised: file format not recognised.
Trying 'gcc hello.' is worse: it can't see the file at all.
How is that "funny" ? It is perfectly clear behaviour.
gcc supports lots of file types. For user convenience it uses file extensions to tell the file type unless you want to explicitly inform it
of the type using "-x" options.
<https://gcc.gnu.org/onlinedocs/gcc/Overall-Options.html>
"hello" has no file extension, so the compiler will not assume it is C. (Remember? gcc is not just a simple little dedicated C compiler.) Files without extensions are assumed to be object files to pass to the linker,
and your file does not fit that format.
"hello." is a completely different file name - the file does not exist.
It is an oddity of DOS and Windows that there is a hidden dot at the end
of files with no extension - it's a hangover from 8.3 DOS names.
Yes. It's the only sane way, and consistent with millions of programs spanning 50 years on huge numbers of systems.
With bcc, I just have to type "bcc hello." to make it work. A trailing
dot means an empty extension.
When you make your own little programs for your own use, you can pick
your own rules.
Do you have a program called "cc" on your path? It's unlikely. "cc" is the standard name for the system compiler, which may be gcc or may be something else entirely.
* I also use several C compilers; how does make know which one I
intend? How do I pass it options?
It uses the POSIX standards. The C compiler is called "cc", the flags passed are in the environment variable CFLAGS.
If that's not what you want, write a makefile.
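(Concretely, even with no makefile at all, something like

make hello CC=gcc CFLAGS=-O2

uses make's built-in rule to run, roughly, "gcc -O2 hello.c -o hello"; the compiler and the options are just variables that can be overridden on the command line or in the environment.)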
If I give another example:
c:\c>bcc cipher hmac sha2
Compiling cipher.c to cipher.asm
Compiling hmac.c to hmac.asm
Compiling sha2.c to sha2.asm
Assembling to cipher.exe
it just works. 'make cipher hmac sha2' doesn't, not even in WSL.
(And the "advanced AI" can figure out if it is C, C++, Fortran, or
several other languages.)
No, it can't. If I have hello.c and hello.cpp, it will favour the .c
file.
Sorry, I should have specified that the "advanced AI" can do it on an advanced OS, such as every *nix system since before Bill Gates found
MS-DOS in a dustbin.
On 20/12/2022 11:55, antispam@math.uni.wroc.pl wrote:
Bart <bc@freeuk.com> wrote:
It will always be a snapshot of the actual sources, which are not kept
on-line and can change every few seconds.
You are misusing git and github. git is a "source control" system.
At least from my point of view (there are lots of flame wars discussing
what source control should do), the main task of source control is
to store all significant versions of software and allow reasonably
easy retrieval of any version. Logically git stores a separate
source tree for each version (plus some meta info like log messages).
Done naively that would lead to serious bloat: with 1547 versions
it would be almost 1547 times larger than a single version. git
uses compression to reduce this. AFAICS the actual sources of your
projects are about 4-5M. With normal git use I would expect the
(compressed) history to add another 5-10M (if there are a lot of
deletions then the history would be bigger). Your repo is bigger than
that, probably due to generated files and .exe. Note: I understand
that if you write in your own language, then bootstrap is a problem.
But for bootstrap mc.c is enough. OK, you want to be independent
from C, so maybe keep the .exe. But the .ma files just add bloat. Note that
github has a release feature; people who want just binaries or a single
version can fetch a release. And many projects with a bootstrap
problem say: if you do not have the compiler, fetch an earlier binary
and use it to build the system. Or they add extra generated things
to releases but do not keep them in the source repository.
DB:
Exactly. You just have a very DOS-biased view as to when they are helpful, and when they are not. It's a backwards and limited view due
to a lifetime of living with a backwards and limited OS.
So much negativity here.
I get the impression that everything I try is viewed negatively.
At least, I don't remember anyone saying, What a great idea, Bart!
Or,
Yeah, I'd like that, but unfortunately the way Linux works makes that impractical.
Instead, it would be, Yeah, that's what you would expect from a rubbish
OS that Bill Gates found in a bin.
So much negativity here. [... To David and Waldek:]
You two aren't going to be happy until my language is a clone of C,
with tools that work exactly the same way they do on Unix. But then
you're going to say, what's the point?
At least, I don't remember anyone saying, What a great idea, Bart!
Or, Yeah, I'd like that, but unfortunately the way Linux works makes
that impractical.
* Linux, sadly, has acquired a degree of bloat. Eg, "man gcc" comes
to some 300 pages, compared with the two pages of "man cc" in the
7th Edition version. Basically, it's always easier to add more
to an existing facility than to take stuff out. Grr! We used to
grumble when the binary of a fully-featured browser went to over
a megabyte. Now we scarcely turn a hair at the size of Firefox,
or the number of processes it spawns. Grr.
You [and probably Dmitry] seem to have a very weird idea of what Unix /is/.
But what you seem to fail to appreciate is that very little of Unix is
laid down in concrete.
On 20/12/2022 06:56, David Brown wrote:
On 18/12/2022 18:09, Bart wrote:
A key point here is that almost every general-purpose OS, other than
Windows, in modern use on personal computers is basically POSIX
compliant.
POSIX compliant means basically being a clone of Unix with all the same restrictions and stupid quirks?
Maybe you haven't done much Python programming and have only worked
with small scripts. But like any other language, bigger programs are
split into multiple files or modules - only the main program file will
be executable. So if a big program has 50 Python files, only one of
them will normally be executable and have the shebang and the
executable flag. (Sometimes you'll "execute" other modules to run
their tests during development, but you'd likely do that as "python3
file.py".)
Oh, just like Windows then?
Obviously, all 50 modules will contain executable code. You probably
mean that only the lead module can be launched by the OS and needs
special permissions.
You do realise that gcc can handle some 30-odd different file types?
That doesn't change the fact that probably 99% of the time I run gcc,
it is with the name of a .c source file. And 99.9% of the times when
I invoke it on prog.c as the first or only file to create an
executable, then I want to create prog.exe.
OK. So gcc should base its handling of input on what /you/ do, never
mind the rest of the world?
No, based on what LOTS of people do. gcc is used as a /C/ compiler, and
is probably only ever used as a C compiler.
Maybe this is acceptable to
you:
gcc prog.c -oprog -lm
./prog
But I prefer:
bcc prog
prog
Who wouldn't?
Why would I bother with such stone-age rubbish? (And that hardly ever works.)
You of course will disagree, since whatever Unix does, no matter how ridiculous or crass, is perfect, and every other kind of behaviour is rubbish.
On 21/12/2022 11:07, Bart wrote:
[...]
So much negativity here.[... To David and Waldek:]
You two aren't going to be happy until my language is a clone of C,
with tools that work exactly the same way they do on Unix. But then
you're going to say, what's the point?
You [and probably Dmitry] seem to have a very weird idea of what Unix /is/. To really understand why many of us have been happy users of Unix [and somewhat less so of Linux*] for several decades, you need to understand the history, what came before, and how Unix then evolved. I don't intend to write an essay here; there are books on the subject.
But what you seem to fail to appreciate is that very little of Unix is
laid down in concrete. It would take major surgery to change the file system significantly; you are probably also stuck with the "exec"
family of system calls; you would be unwise to tamper too much with
the basic security mechanisms. But thereafter, it's entirely up to
you.
You're bright enough to be able to write your own language and compiler. So you're surely bright enough to write your own shell, or
to tinker with one of those already available -- they /all/ came into existence because some other bright person wanted something different.
Bright enough to write wrappers for things where you would prefer the defaults to be different, to write your own editor, your own tools for
all purposes. All, every single one, of those supplied "by default"
again came into being because someone decided they wanted it and wrote
the requisite code. Sources are freely available, so if you want
something different and don't want to write your own, you can play
with the code that someone else wrote. Entirely up to you.
When David says "you can do X", he doesn't mean "you /have/ to
do X". There is almost no compulsion. All the tools are there, use
them as you please. When you complain about some aspect of "gcc" or
"make" or whatever, you're actually complaining that people who gave
their time and expertise freely to provide a tool that /they/ wanted,
haven't done so to /your/ specification. Well, shucks.
To give one example, you have been wittering recently about
the fact that "cc hello; hello" doesn't, as you would like, find and
compile a program whose source is in "hello.c", put the binary into
"hello", and run it. But you can write your own almost trivially;
it's a "one-line" shell script [for large values of "one", but that's
to provide checks rather than because it's complicated]. You complain
that you have to write "./hello" rather than just "hello"; but that's because "." is not in your "$PATH", which is set by you, not because Unix/Linux insists on extra verbiage. If you need further help, just
ask. But I'd expect you to be able to work it out rather than wring
your hands and flap around helplessly [or blame Unix for it].
But you can write your own almost trivially;
it's a "one-line" shell script
[...]
At least, I don't remember anyone saying, What a great idea, Bart!
Or, Yeah, I'd like that, but unfortunately the way Linux works makes
that impractical.
Perhaps you would tell us what great ideas you'd like "us" to consider? The things I recall you telling us are things that existed
long ago in other languages, such as 1-based arrays, line-based syntax,
or case insensitivity.
On 21/12/2022 01:42, Bart wrote:
You of course will disagree, since whatever Unix does, no matter how
ridiculous or crass, is perfect, and every other kind of behaviour is
rubbish.
Yes.
But then, I would not normally keep a thousand files in one directory. I think even MS-DOS has supported directory trees since version 2.x.
Any Linux shell made a terrible CLI, but I guess it was designed for
gurus rather than ordinary people.
On 21/12/2022 12:07, Bart wrote:
So much negativity here.
I have long experience with MS-DOS and Windows,
and long experience with
*nix.
DOS is absolute shite in comparison - it was created as a cheap
knock-off of other systems, thrown together quickly for a throw-away marketing project by IBM.
Unfortunately IBM forgot to throw away the
project and it was accidentally successful,
resulting in the world being
stuck with hardware, software and a processor ISA that were known to be third-rate
outdated cheapo solutions at the time the IBM PC was first
released. Those turds have been polished a great deal in the last 35
years or so - some versions of Windows are okay, and modern x86-64
processors are very impressive engineering - but turds they remain at
their core. While some designs were planned to be forward compatible
with future enhancements (like the 68k processor architecture, or the
BBC MOS operating system), and some were designed to be compatible with everything above a set minimum (like Unix), x86 and DOS then Windows
have been saddled with backwards compatibility as their prime
motivation.
But I don't understand how you can take personal offence when I talk
about operating systems, or how you end up thinking it was a criticism
of you or your language.
On 22/12/2022 13:03, David Brown wrote:
On 21/12/2022 12:07, Bart wrote:
So much negativity here.
I have long experience with MS-DOS and Windows,
So have I.
and long experience with *nix.
I looked into it every few years; it always looked shite to me.
However, I should say I have little interest in operating systems
anyway. DOS was fine because it didn't get in my way. It provided a file system, could copy files, launch programs etc, and it didn't cut my productivity and sanity in half by throwing in case-sensitivity. What
else did I need?
I expect you didn't like DOS because it doesn't have the dozens of toys
that you came to rely on in Unix, including a built-in C compiler; what luxury!
It's because DOS was so sparse that I have few dependencies on it; and
my stuff can build on Linux more easily than Linux programs can build on Windows. (AIUI, most won't; you need to use CYGWIN or MSYS or WSL, but
then you don't get a bona fide Windows executable that customers can run directly.)
DOS is absolute shite in comparison - it was created as a cheap
knock-off of other systems, thrown together quickly for a throw-away
marketing project by IBM.
This is from someone who used, what was it, a Spectrum machine?
I was involved in creating 8-bit business computers at the time, and
looked down on such things. (But it was also my job to investigate
similar, low-cost designs for hobbyist computers as an area of expansion.)
BTW our machines used a rip-off of CP/M. My boss approached Digital
Research but couldn't come to an agreement on licensing. So we (not me though) created a clone. So why is saving money a bad thing?
I don't know exactly what you expected from an OS that ran on a 64KB machine, which wasn't allowed to use more than about 8KB.
And, where /were/ the PCs with Unix in those days? Where could you buy
one? Would you be able to do much on it other than endlessly configure
stuff to make it work? Could you create binaries that were guaranteed to work with any other Unix?
How unfriendly would it have been to supply apps as software bundles
that would take an age to build on a dual-floppy machine, with users
having to keep feeding it floppies?
I think you just have little experience of that world of creating
products for low-end consumer PCs.
IME Linux systems were poor, amateurish attempts at an OS where lots of things just didn't work, until the early 2000s. GUIs came late too, and looked dreadful. By comparison, Microsoft Windows looked professional.
Yes you had to pay for it; is that what this is about, that Linux is free?
Unfortunately IBM forgot to throw away the project and it was
accidentally successful,
Good.
resulting in the world being stuck with hardware, software and a
processor ISA that were known to be third-rate
The IBM PC was definitely more advanced than my 8-bit business machine,
if not that much faster despite an internal 16-bit processor.
The 8088/86/286 had some disappointing limitations, which were fixed
with the 80386.
outdated cheapo solutions at the time the IBM PC was first released.
Those turds have been polished a great deal in the last 35 years or so
The architecture was open. There was a huge market in add-on
peripherals, and they came with drivers that worked. Good luck in
finding equivalent support in 1990s for even a printer driver under Linux.
- some versions of Windows are okay, and modern x86-64
processors are very impressive engineering - but turds they remain at
their core. While some designs were planned to be forward compatible
with future enhancements (like the 68k processor architecture, or the
BBC MOS operating system), and some were designed to be compatible
with everything above a set minimum (like Unix), x86 and DOS then
Windows have been saddled with backwards compatibility as their prime
motivation.
Which has been excellent. Until they chose not to support 16-bit
binaries under 64-bit Windows.
But I don't understand how you can take personal offence when I talk
about operating systems, or how you end up thinking it was a criticism
of you or your language.
I get annoyed when people openly diss Windows, or MSDOS, simply for not being Linux.
On 22/12/2022 16:21, David Brown wrote:
On 21/12/2022 01:42, Bart wrote:
You of course will disagree, since whatever Unix does, no matter how
ridiculous or crass, is perfect, and every other kind of behaviour is
rubbish.
Yes.
But then, I would not normally keep a thousand files in one directory.
I think even MS-DOS has supported directory trees since version 2.x.
You've obviously never written programs for other people to run on their
own PCs. How do /you/ know how other people will organise their files?
Who are you to tell them how to do so?
And it might in any case be up to third-party apps how files are
generated on your client's machine.
But my point about '* *.c', which you've chosen to ignore, is valid even
for ten files; it's just wrong.
It might be acceptable within a higher level language where each
wildcard spec expands to a list of files which is itself a nested
element of the parameter list. But it doesn't work if you just
concatenate everything into one giant list; there are too many ambiguities.
Of course, you will never agree there's anything wrong with it; you will
defend Linux to the death. Or you will point out that you can do X, Y and Z
to turn off this 'globbing', which now causes problems for programs that depend on it, and it is now up to each customer to do so persistently
on their machines.
The alternative in DOS and Windows has always been to buy additional
tools that *nix users take for granted. And those tools always have to include /everything/. So while a Pascal compiler for *nix just needs to
be a Pascal compiler - because there are editors, build systems,
debuggers, libraries, assemblers and linkers already - on DOS/Windows
you had to get Turbo Pascal or Borland Pascal which had everything
included.
Do I like the fact that *nix has always come with a wide range of general-purpose tools? Yes, I most surely do!
On 22/12/2022 17:59, Bart wrote:
But my point about '* *.c', which you've chosen to ignore, is valid
even for ten files; it's just wrong.
It might be acceptable within a higher level language where each
wildcard spec expands to a list of files which is itself a nested
element of the parameter list. But it doesn't work if you just
concatenate everything into one giant list; there are too many
ambiguities.
Of course, you will never agree there's anything wrong with it; you will
defend Linux to the death. Or you will point out that you can do X, Y and
Z to turn off this 'globbing', which now causes problems for programs
that depend on it, and it is now up to each customer to do so
persistently on their machines.
I can't figure out what you are worrying about here.
In any shell, in any OS, for any program, if you write "prog *" the
program is run with a list of all the files in the directory.
If you
wrote "prog * *.c", it will be started with a list of all the files, followed by a list of all the ".c" files.
It's the same in DOS, Linux, Windows, Macs, or anything else you like.
It's the same for any shell.
The difference is that for some shells (such as Windows PowerShell or
bash), the shell does the work of finding the files and expanding the wildcards because this is what /every/ program needs - there's no point
in repeating the same code in each program. In other shells, such as
DOS "command prompt", every program has to have that functionality added
to the program.
Well, I say "every program" supports wildcards for filenames - I'm sure there are some DOS/Windows programs that don't. But most do.
On 22/12/2022 22:55, Bart wrote:
I expect you didn't like DOS because it doesn't have the dozens of
toys that you came to rely on in Unix, including a built-in C
compiler; what luxury!
I find it useful to have a well-populated toolbox. I am an engineer - finding the right tools for the job, and using them well, is what I do.
DOS gave you a rock to bash things with. Some people, such as
yourself, seem to have been successful at bashing out your own tools
with that rock. That's impressive, but I find it strange how you can be happy with it.
The alternative in DOS and Windows has always been to buy additional
tools that *nix users take for granted. And those tools always have to include /everything/.
So while a Pascal compiler for *nix just needs to
be a Pascal compiler - because there are editors, build systems,
debuggers, libraries, assemblers and linkers already - on DOS/Windows
you had to get Turbo Pascal or Borland Pascal which had everything
included.
(Turbo Pascal for DOS and Win3.1 included "make", by the way
- for well over a decade that was the build tool I used for all my
assembly and C programming on microcontrollers.) And then when you
wanted C, you bought MSVC which included a different editor with a
different setup, a different assembler, a different build tool (called "nmake"), and so on. Everything was duplicated but different, everything incompatible, everything a huge waste of the manufacturers' time and
effort, and a huge waste of the users' money and time as they had to get familiar with another set of basic tools.
Do I like the fact that *nix has always come with a wide range of general-purpose tools? Yes, I most surely do!
It's because DOS was so sparse that I have few dependencies on it; and
my stuff can build on Linux more easily than Linux programs can build
on Windows. (AIUI, most won't; you need to use CYGWIN or MSYS or WSL,
but then you don't get a bona fide Windows executable that customers
can run directly.)
Programs built with msys2 work fine on any Windows system. You have to include any DLL's you use, but that applies to all programs with all tools.
Then there was the OS - it was crap. Unix was vastly better than CP/M
and the various DOS's.
But IBM was not involved in Unix at that time,
and did not want others to control the software on their machines. (They thought they could control Microsoft.)
Or compare it to other machines, such as the BBC Micro. It had the OS
in ROM, making it far simpler and more reliable. It had a good
programming language in ROM. (BASIC, but one of the best variants.)
This meant new software could be written quickly and easily in a high
level language, instead of assembly (as was the norm for the PC in the
early days). It had an OS that was expandable - it supported pluggable file systems that were barely imagined at the time the OS was designed.
It was a tenth of the price of the PC.
Of course the BBC Micro wasn't perfect either, and had limitations
compared to the PC - not surprising, given the price difference. The
6502 was not a powerful processor.
But imagine what we could have had with a computer using a 68000,
running a version of Unix, combining the design innovation, user-friendliness and forward thinking of Acorn and the business
know-how of IBM? It would have been achievable at lower cost than the
IBM PC, and /so/ much better.
As it was, by the mid-eighties there were home computers with usability, graphics and user interfaces that were not seen in the PC world for a decade. There were machines that were cheaper than the PC's of the time and could /emulate/ PC's at near full-speed. The PC world was far
behind in its hardware, OS and basic software. But the IBM PC and MSDOS won out because the other machines were not compatible with the IBM PC
and MSDOS.
I used Unix systems at university in the early 1990's, and by $DEITY it
was a /huge/ step backwards when I had to move to Windows.
The 8088/86/286 had some disappointing limitations, which were fixed
with the 80386.
It's a pity MS didn't catch up with proper 32-bit support until AMD were already working on 64-bit versions.
On the business and marketing side, there's no doubt that MS in
particular outclassed everyone else. They innovated the idea of
criminal action as a viable business tactic - use whatever illegal means
you like to ensure competitors go bankrupt before they manage to sue
you, and by the time the case gets to the courts the fines will be negligible compared to your profit. Even IBM was trapped by them.
But yes, compatibility and market share was key - Windows and PC's were popular because there was lots of hardware and software that worked with them, and there was lots of hardware and software for them because they
were popular. They were technically shite,
Which has been excellent. Until they chose not to support 16-bit
binaries under 64-bit Windows.
I believe you can run Wine on Windows, and then you could run 16-bit binaries. But you might have to run Wine under Cygwin or something -
it's not something I have tried.
But I don't understand how you can take personal offence when I talk
about operating systems, or how you end up thinking it was a
criticism of you or your language.
I get annoyed when people openly diss Windows, or MSDOS, simply for
not being Linux.
I diss Windows or DOS because it deserves it. Linux was not conceived
when I realised DOS was crap compared to the alternatives. (You always seem to have such trouble distinguishing Linux and Unix.)
That will be up to individual programs whether they accept wildcards for filenames, and what they do about them. If the input is "*", what kinds
of application would be dealing with an ill-matched collection of files including all sorts of junk that happens to be lying around?
It would be very specific. BTW if I do this under WSL:
vim *.c
Then I was disappointed that I didn't get hundreds of edit windows for
all those files. So even under Linux, sometimes expansion doesn't
happen. (What magic does vim use to get the OS to see sense?)
Usually the last thing you want is for the OS (or whatever is
responsible for expanding those command line params before they get to
the app) to just expand EVERYTHING willy-nilly:
* Maybe the app needs to know the exact parameters entered (like my
showargs program).
Maybe they are to be stored, and passed on to
different parts of an app as needed.
* Maybe they are to be passed onto to another program, where it's going
to be much easier and tidier if they are unexpanded:
c:\mx>ms showargs * *.c
1 *
2 *.c
Here 'ms', which runs 'showargs' as a script, sees THREE parameters: showargs, * and *.c. It arranges for the last two to be passed as input
to the program being run.
* Maybe the app's inputs are mathematical expressions so that you want
"A*B" and not a list of files that start with A and end with B!
* But above all, it simply doesn't work, not when you have expandable
params interspersed with other expandable ones, or even ordinary params, because everything just merges together.
So here there are two things I find utterly astonishing:
(1) That you seem to think this a good idea, despite that list of problems
Yes, individual apps can CHOOSE to do their own expansion, but that is workable because that expansion-list is segregated from other parameters.
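(For completeness, this is roughly what "doing its own expansion" looks like on the POSIX side, a minimal sketch using the standard glob() call; the program receives the patterns intact and keeps each expansion list separate:

#include <stdio.h>
#include <glob.h>

int main(int argc, char **argv) {
    for (int i = 1; i < argc; i++) {
        glob_t g;
        if (glob(argv[i], 0, NULL, &g) == 0) {
            printf("%s expands to %zu file(s):\n", argv[i], (size_t)g.gl_pathc);
            for (size_t j = 0; j < g.gl_pathc; j++)
                printf("  %s\n", g.gl_pathv[j]);
            globfree(&g);
        } else {
            printf("%s matches nothing\n", argv[i]);
        }
    }
    return 0;
}

Of course, from a normal *nix shell the patterns would have to be quoted ("prog '*' '*.c'") to reach the program unexpanded in the first place.)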
So much negativity here.
Bart <bc@freeuk.com> wrote:
That will be up to individual programs whether they accept wildcards for
filenames, and what they do about them. If the input is "*", what kinds
of application would be dealing with an ill-matched collection of files
including all sorts of junk that happens to be lying around?
Yesterday I used several times the following:
du -s *
'man du' will tell you what it does. And sometimes I use the
(in)famous:
rm -rf *
Do not try this if you do not know what it is doing!
It would be very specific. BTW if I do this under WSL:
vim *.c
Then I was disappointed that I didn't get hundreds of edit windows for
all those files. So even under Linux, sometimes expansion doesn't
happen. (What magic does vim use to get the OS to see sense?)
I suspect that you incorrectly interpreted your observation.
Usually the last thing you want is for the OS (or whatever is
responsible for expanding those command line params before they get to
the app) to just expand EVERYTHING willy-nilly:
Speak for yourself. If an application needs a list of files, I want
this list to be expanded.
I am under the impression that you miss an important fact: Unix was designed
as a _system_. Programs expect system conventions and work with them,
not against them. One convention is that there are a few dozen (a myriad
in your language) small programs that are supposed to work together.
The shell works as the glue that combines them. Shell+utilities
form a programming language, crappy as a programming language, but
quite useful. In particular, the ability to transform/create command
lines via programming means allows automating a lot of tasks.
Just more on my point of view: I started to use DOS around 1988
(I was introduced to computers on mainframes and I had a ZX
Spectrum earlier). My first practical contact with Unix was in
1990. It took me some time to understand how Unix works, but once
I "got it" I was easily able to do things on Unix that would be
hard (or require much more work) on DOS. By 1993 I was mostly
using Unix (more precisely, at that time I switched from 386BSD
Unix to Linux).
Coming back to Unix: it works for me. DOS in comparison felt
crappy. Compared to 1993, Windows has improved, but for me
this does not change much; I saw nothing that would be
better for _me_ than what I have in Linux. Now, if you want to
improve, one can think of many ways of doing things better than on
Unix. The trouble is that any real-world design will have compromises.
One loses some possibilities to gain other ones. You either
do not understand Unix or at least pretend not to understand it.
If you do not understand Unix, then you are not qualified to judge
it. It looks like you do not know what you would lose by choosing a
different design.
BTW: I did spend some time thinking of a better command line than
Unix's. Some of my ideas were quite different. But none were borrowed
from DOS.
On 2022-12-22 16:09, Andy Walker wrote:
You [and probably Dmitry] seem to have a very weird idea of what
Unix /is/.
I don't know about Bart. As for me, I started with PDP-11 UNIX and
continued with m68k UNIX Sys V. Both were utter garbage, in every
possible aspect inferior to any competing system, with the worst C compilers
I have ever seen.
Dmitry A. Kazakov <mailbox@dmitry-kazakov.de> wrote:
On 2022-12-22 16:09, Andy Walker wrote:
You [and probably Dmitry] seem to have a very weird idea of what
Unix /is/.
I don't know about Bart. As for me, I started with PDP-11 UNIX and
continued with m68k UNIX Sys V. Both were utter garbage, in every
possible aspect inferior to any competing system, with the worst C compilers
I have ever seen.
Can you name those superior competing systems?
One of the ways to illustrate the beauty of UNIX "ideas" is this:
$ echo "" > -i
Now try to "more" or remove it (:-))
On 23/12/2022 15:38, David Brown wrote:
On 22/12/2022 22:55, Bart wrote:
I expect you didn't like DOS because it doesn't have the dozens of
toys that you came to rely on in Unix, including a built-in C
compiler; what luxury!
I find it useful to have a well-populated toolbox. I am an engineer -
finding the right tools for the job, and using them well, is what I
do. DOS gave you a rock to bash things with. Some people, such as
yourself, seem to have been successful at bashing out your own tools
with that rock. That's impressive, but I find it strange how you can
be happy with it.
You seem fixated on DOS. My early coding was done on DEC and ICL OSes,
then no OS at all, then CP/M (or our clone of it).
The tools provided, when they were provided, were always spartan:
editor, compiler, linker, assembler. That's nothing new.
The alternative in DOS and Windows has always been to buy additional
tools that *nix users take for granted. And those tools always have
to include /everything/.
'Everything' is good. Remember those endless discussions on clc about what exactly constituted a C compiler? Because this new-fangled 'gcc'
didn't come with batteries included, like header files, an assembler or a
linker.
A bad mistake on Windows where those utilities are not OS-provided.
But tell me again why a 'linker', of all things, should be part of a consumer product mostly aimed at people doing things that are nothing to
do with building software. Why give it such special dispensation?
So while a Pascal compiler for *nix just needs to be a Pascal
compiler - because there are editors, build systems, debuggers,
libraries, assemblers and linkers already - on DOS/Windows you had to
get Turbo Pascal or Borland Pascal which had everything included.
You can get bare compilers on Windows too.
(Turbo Pascal for DOS and Win3.1 included "make", by the way - for
well over a decade that was the build tool I used for all my assembly
and C programming on microcontrollers.) And then when you wanted C,
you bought MSVC which included a different editor with a different
setup, a different assembler, a different build tool (called "nmake"),
and so on. Everything was duplicated but different, everything
incompatible, everything a huge waste of the manufacturers time and
effort, and a huge waste of the users' money and time as they had to
get familiar with another set of basic tools.
Nothing stopped anybody from marketing a standalone assembler or linker
that could be used with third party compilers. These are not complicated programs (a workable linker is only 50KB).
I can't answer that. Unless that assembler used 'gas' syntax then I
would write my own too.
Do I like the fact that *nix has always come with a wide range of
general-purpose tools? Yes, I most surely do!
What did that do for companies wanting to develop and sell their own compilers and tools?
Or compare it to other machines, such as the BBC Micro. It had the OS
in ROM, making it far simpler and more reliable. It had a good
programming language in ROM. (BASIC, but one of the best variants.)
This meant new software could be written quickly and easily in a high
level language, instead of assembly (as was the norm for the PC in the
early days). It had an OS that was expandable - it supported
pluggable file systems that were barely imagined at the time the OS
was designed. It was a tenth of the price of the PC.
It used a 6502. I'd argue it was better designed than any Sinclair
product, with a proper keyboard, but it was still in that class of machine.
BTW this is the kind of machine my company were selling:
https://nosher.net/archives/computers/pcw_1982_12_006a
(My first redesign task was adding the bitmapped graphics on the display.)
Of course the BBC Micro wasn't perfect either, and had limitations
compared to the PC - not surprising, given the price difference. The
6502 was not a powerful processor.
As I said...
More business-oriented 8-bit systems were based on the Z80, such as the
PCW 8256, with CP/M 3. (My first commercial graphical application was
for that machine IIRC.)
So you don't rate its OS - so what? All customers needed were the most mundane things. It was marketed as a word processor after all!
But imagine what we could have had with a computer using a 68000,
running a version of Unix, combining the design innovation,
user-friendliness and forward thinking of Acorn and the business
know-how of IBM? It would have been achievable at lower cost than the
IBM PC, and /so/ much better.
As it was, by the mid-eighties there were home computers with
usability, graphics and user interfaces that were not seen in the PC
world for a decade. There were machines that were cheaper than the
PC's of the time and could /emulate/ PC's at near full-speed. The PC
world was far behind in its hardware, OS and basic software. But the
IBM PC and MSDOS won out because the other machines were not
compatible with the IBM PC and MSDOS.
That's true. I was playing with 24-bit RGB graphics for my private
designs about a decade before it became mainstream on PCs.
But where were the Unix alternatives that people could buy from PC
World? Sure there had been colour graphics in computers for years but
I'm talking about consumer PCs.
I used Unix systems at university in the early 1990's, and by $DEITY
it was a /huge/ step backwards when I had to move to Windows.
I went from a £500,000 (in mid-70s money) mainframe at college, running TOPS 20 I think, to my own £100 Z80 machine with no OS on it at all, and
no disk drives either.
I'd say /that/ was a huge step backwards! Perhaps you can appreciate why
I'm not that bothered.
The 8088/86/286 had some disappointing limitations, which were fixed
with the 80386.
It's a pity MS didn't catch up with proper 32-bit support until AMD
were already working on 64-bit versions.
It wasn't so critical with the 80386. Programs could run in 16-bit mode under a 16-bit OS, and use 32-bit operations, registers and address modes.
On the business and marketing side, there's no doubt that MS in
particular outclassed everyone else. They innovated the idea of
criminal action as a viable business tactic - use whatever illegal
means you like to ensure competitors go bankrupt before they manage to
sue you, and by the time the case gets to the courts the fines will be
negligible compared to your profit. Even IBM was trapped by them.
I never got interested in that side; I was always working to deadlines!
But what exactly was the point of Linux? What exactly was wrong with Unix?
But yes, compatibility and market share was key - Windows and PC's
were popular because there was lots of hardware and software that
worked with them, and there was lots of hardware and software for them
because they were popular. They were technically shite,
/Every/ software and hardware product for Windows was shite? Because
you've looked at every one and given your completely unbiased opinion!
Which has been excellent. Until they chose not to support 16-bit
binaries under 64-bit Windows.
I believe you can run Wine on Windows, and then you could run 16-bit
binaries. But you might have to run Wine under Cygwin or something -
it's not something I have tried.
(I gave this 10 minutes but it led nowhere. Except it involved an
extra 500MB to install stuff that didn't work, but when I purged it, it
only recovered 0.2MB.
Hmmm.. have I mentioned the advantages of a piece of software that comes
and runs as a single executable file? Either it's there or not there.
There's nowhere for it to hide!)
But I don't understand how you can take personal offence when I talk
about operating systems, or how you end up thinking it was a
criticism of you or your language.
I get annoyed when people openly diss Windows, or MSDOS, simply for
not being Linux.
I diss Windows or DOS because it deserves it. Linux was not conceived
when I realised DOS was crap compared to the alternatives. (You
always seem to have such trouble distinguishing Linux and Unix.)
Understandably. What exactly /is/ the difference? And what are the differences between the myriad different versions of Linux even for the
same platform?
Apparently having more than one assembler or linker on a platform is a disaster.
But having 100 different versions of the same OS, that's
perfectly fine!
I like all these contradictions.
On 24/12/2022 00:46, Bart wrote:
On 23/12/2022 15:38, David Brown wrote:
But tell me again why a 'linker', of all things, should be part of a
consumer product mostly aimed at people doing things that are nothing
to do with building software. Why give it such special dispensation?
It is convenient to have on the system. Programs can rely on it being there.
On 2022-12-27 13:24, David Brown wrote:
On 24/12/2022 00:46, Bart wrote:
On 23/12/2022 15:38, David Brown wrote:
But tell me again why a 'linker', of all things, should be part of a
consumer product mostly aimed at people doing things that are nothing
to do with building software. Why give it such special dispensation?
It is convenient to have on the system. Programs can rely on it being
there.
I have an impression that you guys confuse the linker with the loader. Programs (applications) do not need a linker.
I privately coined the term 'loader' in the 1980s for a program that combined multiple object files, from independently compiled source
modules of a program, into a single program binary (eg. a .com file).
It was also pretty much what a linker did, yet a linker was a far more complicated program that also took much longer. What exactly do linkers
do? I'm still not really sure!
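(Roughly, and only as a sketch: the main job is resolving symbol references between separately compiled modules and then laying the pieces out in one image. Two hypothetical C files illustrate it:

    /* a.c - compiled on its own, the address of 'counter' is not known,
       so a.o carries a relocation record instead of a real address. */
    extern int counter;
    int bump(void) { return ++counter; }

    /* b.c - defines the symbol that a.o refers to. */
    int counter = 0;

Linking a.o with b.o, plus the startup and library code, patches those references with final addresses and produces the executable; the OS's dynamic loader does a similar fix-up at load time for symbols imported from DLLs or shared libraries.)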
Anyway, I no longer have a need to combine object files (there are no
object files). But there are dynamic fix-ups needed, which the OS EXE
loader will do, between an EXE and its imported DLLs.
On 22/12/2022 15:09, Andy Walker wrote:
You complain that you have to write "./hello" rather than just "hello"; but that's because "." is not in your "$PATH", which is set by you, not because Unix/Linux insists on extra verbiage. If you need further help, just ask. But I'd expect you to be able to work it out rather than wring your hands and flap around helplessly [or blame Unix for it].
So lots of workarounds to be able to do what DOS, maligned as it was,
did effortlessly.
So automate things.
Of course, if your hobby is to continuously rename files and see how it affects compilation, then you may do a lot of command line editing. In normal use the same Makefile can be used for tens or hundreds of edits to source files.
Some folks use compile commands in editors. That works nicely
because rules are simple.
Now it's not that hard to write a wrapper script or program on top of
gcc.exe, but if it isn't hard, why doesn't it just do that?
You miss an important point: gcc gives you a lot of possibilities. A simple wrapper which substitutes some defaults would make using non-default values harder or impossible. If you want to have all the functionality of gcc, you will end up with a complicated command line.
It's not a simple C compiler that assumes everything it is given is a C
file.
As I said, that is not helpful for me. Also, how many file types does
'as' accept? As that also requires the full extension,
AFAICS as accepts any extension. I can name my assembler file 'ss.m' and it works fine.
You are determined not to learn, but for the possible benefit of third parties: in Linux a file name is just a string of characters. A dot is as valid a character in a name as any other.
On 22/12/2022 15:09, Andy Walker wrote:
You complain that you have to write "./hello" rather than just "hello"; but that's because "." is not in your "$PATH", which is set by you, not because Unix/Linux insists on extra verbiage. If you need further help, just ask. But I'd expect you to be able to work it out rather than wring your hands and flap around helplessly [or blame Unix for it].
So lots of workarounds to be able to do what DOS, maligned as it was,
did effortlessly.
It seems that nobody mentioned this: not having '.' in PATH is a relatively recent trend.
You two aren't going to be happy until my language is a clone of C, with tools that work exactly the same way they do on Unix. But then you're
going to say, what's the point?
So automate things.
Isn't gcc already an automated wrapper around compiler, assembler and linker?
Of course, if your hobby is to continuously rename files and see how it affects compilation, then you may do a lot of command line editing. In normal use the same Makefile can be used for tens or hundreds of edits to source files.
Some folks use compile commands in editors. That works nicely
because rules are simple.
Now it's not that hard to write a wrapper script or program on top of
gcc.exe, but if it isn't hard, why doesn't it just do that?
You miss an important point: gcc gives you a lot of possibilities. A simple wrapper which substitutes some defaults would make using non-default values harder or impossible.
Not on Windows. Clearly, Linux /would/ make some things impossible,
because there are no rules so anything goes.
The rules for my BCC compiler inputs are:
    Input      Assumes file
    file.c     file.c
    file.ext   file.ext
    file       file.c
    file.      file        # use "file." when the file has no extension
This is not possible with Unix, since "file." either ends up as "file",
or stays as "file." You can only refer to "file" or "file.", but not both.
So a silly decision with Unix, which really buys you very little, means having to type .c extensions on inputs to C files, for eternity.
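(For illustration, a sketch in C of that defaulting rule - mapping the name typed on the command line onto the file actually opened, with '.c' as the assumed extension. The helper name is made up and directory handling is ignored:)

    #include <stdio.h>
    #include <string.h>

    /* Map a user-typed name onto the file to open:
         "file.c"   -> "file.c"     (extension given: use as-is)
         "file.ext" -> "file.ext"
         "file"     -> "file.c"     (no extension: assume .c)
         "file."    -> "file"       (trailing dot: explicitly no extension) */
    static void resolve_input(const char *name, char *out, size_t size)
    {
        size_t len = strlen(name);

        if (len > 0 && name[len - 1] == '.')          /* "file." */
            snprintf(out, size, "%.*s", (int)(len - 1), name);
        else if (strchr(name, '.'))                   /* has an extension */
            snprintf(out, size, "%s", name);
        else                                          /* bare name */
            snprintf(out, size, "%s.c", name);
    }

(So resolve_input("hello", ...) yields "hello.c", while resolve_input("hello.", ...) yields the extension-less "hello".)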
If you want to have all the functionality of gcc, you will end up with a complicated command line.
It's not a simple C compiler that assumes everything it is given is a C file.
As I said, that is not helpful for me. Also, how many file types does
'as' accept? As that also requires the full extension,
AFAICS as accepts any extension. I can name my assembler file 'ss.m' and it works fine.
But you need an extension. Give it a null extension and the resulting executable, if this is the only module, will clash.
My point however was that the reason gcc needs an explicit extension was the number of possible input file types. How many /different/ file types does 'as' work with?
I write my command-line utilities both to be easy to use when manually invoked, and for invoking from scripts.
My experience of Unix utilities is that they do nothing of the sort;
they have no provision for user-friendliness whatsoever.
You are determined not to learn
You also seem reluctant to learn how non-Unix systems might work, or acknowledge that they could be better and more user-friendly.
, but for the possible benefit of third parties: in Linux a file name is just a string of characters. A dot is as valid a character in a name as any other.
Well, that's wrong. It may have sounded like a good idea at one time: accept ANY non-space characters as the name of a file.
But that allows for a
lot of completely daft names, while disallowing some sensible practices.
There is no structure at all, no possibility for common sense.
With floating-point numbers, 1234 is the same value as 1234., while 1234.. is an error; but they are all legal and distinct filenames under Unix.
Under Windows, 1234 1234. 1234... all represent the same "1234" file.
While 123,456 are two files "123" and "456"; finally, some rules!
[...] You complain that you have to write "./hello" rather than just "hello"; but that's because "." is not in your "$PATH", which is set by you, not because Unix/Linux insists on extra verbiage. If you need further help, just ask. But I'd expect you to be able to work it out rather than wring your hands and flap around helplessly [or blame Unix for it].
So lots of workarounds to be able to do what DOS, maligned as it was, did effortlessly.
Don't forget it is not just me personally who would have trouble. For
over a decade, I was supplying programs that users would have to
launch from their DOS systems, or on 8-bit systems before that.
So every one of 1000 users would have to be told how to fix that "."
problem?
Fortunately, nobody really used Unix back then (Linux was
not yet ready), at least among our likely customers who were just
ordinary people.
Fortunate also, because case-sensitivity in the shell and file system would have created a lot more customer support headaches.
But you can write your own almost trivially; it's a "one-line" shell script.
Sure. I also asked: if it is so trivial, why don't programs do that anyway?
Learn something from DOS at least: user-friendliness.
because C and Linux have taken over the world [...].
Really? According to
Notice that most user-facing interfaces tend to be case-insensitive?
So, nobody here thinks that doing 'mm -ma appl' to produce a one-file
appl.ma file representing /the entire application/, that can be
trivially compiled remotely using 'mm appl.ma', is a great idea?
Well, have a look at the A68G source bundle for example: [...].
Like so many, this application starts with a
'configure' script, although only 9500 lines this time. So I can't
build it on normal Windows.
But typing 'make' again still took 1.4 seconds even with nothing to
do.
Then I looked inside the makefile: it was an auto-generated one with
nearly 3000 lines of crap inside - no wonder it took a second and a
half to do nothing!
In article <tof7ol$ovb$1@gioia.aioe.org>, <antispam@math.uni.wroc.pl> wrote:
Bart <bc@freeuk.com> wrote:
On 22/12/2022 15:09, Andy Walker wrote:
You complain that you have to write "./hello" rather than just "hello"; but that's because "." is not in your "$PATH", which is set by you, not because Unix/Linux insists on extra verbiage. If you need further help, just ask. But I'd expect you to be able to work it out rather than wring your hands and flap around helplessly [or blame Unix for it].
So lots of workarounds to be able to do what DOS, maligned as it
was, did effortlessly.
It seems that nobody mentioned this: not having '.' in PATH is a relatively recent trend.
Define "recent": I haven't included `.` in my $PATH for
30 or so years now. :-)
Bart <bc@freeuk.com> wrote:
So automate things.
Isn't gcc already an automated wrapper around compiler, assembler and
linker?
You miss the principle of modularity: gcc contains the bulk of the code (I mean the wrapper; it is quite large, given that all it does is handle the command line). Your own code can provide fixed values (so that you do not need to retype them) and more complex behaviours (which would be too special to have as gcc options).
In fact, 'make' was designed to be a tool for "directing compilation"; it handles things at a larger scale than gcc.
To put it differently: gcc and make provide mechanisms. It is up to you to specify policy.
BTW: there are now several competitors to make. One is 'cmake'.
Of course, if your hobby is to continuously rename files and see how it affects compilation, then you may do a lot of command line editing. In normal use the same Makefile can be used for tens or hundreds of edits to source files.
Some folks use compile commands in editors. That works nicely
because rules are simple.
Now it's not that hard to write a wrapper script or program on top of
gcc.exe, but if it isn't hard, why doesn't it just do that?
You miss an important point: gcc gives you a lot of possibilities. A simple wrapper which substitutes some defaults would make using non-default values harder or impossible.
Not on Windows. Clearly, Linux /would/ make some things impossible,
because there are no rules so anything goes.
What I wrote really has nothing to do with the operating system; this is a very general principle.
The rules for my BCC compiler inputs are:
    Input      Assumes file
    file.c     file.c
    file.ext   file.ext
    file       file.c
    file.      file        # use "file." when the file has no extension
This is not possible with Unix, since "file." either ends up as "file",
or stays as "file." You can only refer to "file" or "file.", but not both.
You can implement your rules on Unix. Of course, one can ask if the rules are useful. As written above, your rules make it impossible to access a file named "file." (it will be mangled to "file"). And you get quite unintuitive behaviour for "file".
So a silly decision with Unix, which really buys you very little, means
having to type .c extensions on inputs to C files, for eternity.
Nobody forces you to have extensions on files.
And do not exaggerate; two extra characters from time to time do not make much difference.
If you want to have all the functionality of gcc, you will end up with a complicated command line.
It's not a simple C compiler that assumes everything it is given is a C file.
As I said, that is not helpful for me. Also, how many file types does
'as' accept? As that also requires the full extension,
AFAICS as accepts any extension. I can name my assembler file 'ss.m' and it works fine.
But you need an extension. Give it a null extension and the resulting executable, if this is the only module, will clash.
as ss
will produce 'a.out' (as you know). If there is no a.out there will be no clash; otherwise you need to specify an output file, like:
as ss -o ss.m
or
as ss -o tt
Note: as produces an object file (it does not link). For automatic linking use gcc.
My point however was that the reason gcc needs an explicit extension was
the number of possible input file types. How many /different/ file types
does 'as' work with?
as has no notion of "file type". as takes a stream of bytes and turns it into an object file (a different thing from an executable!). The stream of bytes may come from a file, but in typical use (when as is called by gcc) as gets its input from a pipe.
I write my command-line utilities both to be easy to use when manually
invoked, and for invoking from scripts.
My experience of Unix utilities is that they do nothing of the sort;
they have no provision for user-friendliness whatsoever.
Old saying (possibly mangled): "Unix is friendly, just not everybody
is its friend".
You are determined not to learn
You also seem reluctant to learn how non-Unix systems might work, or
acknowledge that they could be better and more user-friendly.
"might work" is almost meaningless. By design Unix allows a lot
of flexibility so it "might work" in quite different way.
[...] And letter
case must be spot-on too: so was that file oneTwo or OneTwo or
Onetwo? And commands must be spot-on as well:
you type 'ls OneTwo'
then you look up and realise Caps-lock was on and you typed 'LS
oNEtWO'; urggh! Start again..