Part 46 - Jun 29 2006

Old messages from osFree mailing list hosted by Yahoo!
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

RE: [osFree] Development Project

Post by admin »

#1403 RE: [osFree] Development Project

John P Baker
Jul 1, 2006

Lynn,



I started programming in IBM S/360 assembler around 1968, and continue to do so to this day. It is the basis of my employment.



I have been programming in PL/I since the early 1970s, and like it very much.



I have also been programming in COBOL since about the same time frame, but have to say that I am bothered by its wordiness.



Now, all of these languages have their respective places, and I use each when appropriate to the circumstance.



However, neither COBOL nor PL/I provides the functionality I am looking for.



I also know that a lot of people favor C++. I am not one of them. This is not because C++ is a bad language. I just don’t find it suitable for what I wish to accomplish.



For what I wish to accomplish, I feel that the development of a new language is the best approach.



Clearly, you disagree. That is fine. But please don’t tell me that I am wrong.



It is simply a difference of opinion.



I see that you have an approach that you feel is appropriate to accomplish the goals you have set for yourself. That is great. I encourage you to pursue your goals with vigor.



I too have an approach that I feel is appropriate to accomplish the goals I have set for myself. And I will pursue them with equal vigor.



Please show the ideas expressed by others the same degree of respect that you would hope they would show to yours.



John P Baker

Software Engineer
From: osFree@yahoogroups.com [mailto:osFree@yahoogroups.com] On Behalf Of Lynn H Maxson
Sent: Saturday, July 01, 2006 19:35
To: osFree@yahoogroups.com
Subject: RE: [osFree] Development Project



John,

I really didn't want to get into this argument, but where did
you ever get the idea that any PASCAL language derivative
would offer more in the two areas you mentioned, EXEC CICS
and EXEC SQL, than the current PL/I or COBOL compilers, more
specifically than PL/I? I began with CICS at its announcement
in 1965 and remained a regional specialist in the product
through the remainder of my IBM career, and the same with
PL/I. Wirth in his wildest dreams of language evolution never
got to that level.

I sit here on my OS/2 workstation. In front of me are the
VisualAge PL/I, VisualAge COBOL, IBM DB2, and CICS for OS/2
icons. Thus having begun with CICS prior to the entry of the
command-level interface, and in fact having had to customize
the TCP (Terminal Control Program) source to make an audio
response device correspond to the customer's use, I don't
know how you expect to make a dent with something as
puerile as a PASCAL clone against legacy programming
languages with a complete variable-precision, fixed-point
decimal repertoire, i.e. packed and picture decimal formats.

As to direct recognition of the EXEC CICS within the compiler
instead of the preprocessor, I would think it offers more as a
possible convenience than as a performance enhancement.
Obviously, as CICS is a guest operating system functioning in
a host operating system, what consideration have you given to
offering the same level of multi-platform support with your
compiler? I doubt if you have more than one platform in mind.
Otherwise you would not ignore what the pre-processor
option offers in terms of reduced support over the compiler
option.

Then finally EXEC SQL, where many of the previous
considerations apply as well. Here you have the situation that
SQL is a fourth generation language which has a two-stage
proof engine. The first stage is a completeness proof, which
verifies and optimizes the SQL inquiry with a full syntax and
semantic analysis along with code generation. The second
stage is an exhaustive true/false proof, which "tests" the
"assertions" of the query language, returning zero (false) if
no "true" instances are found, otherwise a "list" of one or
more true instances.

Now the keyword here is "list". The key performance issue
here is the "one at a time" passing of true instances to the
invoking program, which is the curse of all first (actual),
second (symbolic assembly), and third (HLL) generation
imperative languages: their lack of native list processing
support. There is a reason why LISP has managed to evolve
and remain on top of its game.

So if I were going to define a language which interfaced
"natively" with a relational database manager, it would have
native list processing support, receiving and processing the
list of true instances without the stutter-step of "one true
instance at a time" or the other programming "awkwardness"
of third generation languages.
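
To make the contrast concrete, here is a minimal C sketch of
the two retrieval styles; db_fetch_next and db_fetch_all are
hypothetical names standing in for a database interface, not
any real API:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    typedef struct { int id; double balance; } Row;

    /* a tiny in-memory "result set" standing in for the database */
    static const Row table[] = { {1, 10.0}, {2, 20.0}, {3, 30.0} };
    static size_t cursor = 0;

    /* cursor style: one true instance per call; 0 when exhausted */
    static int db_fetch_next(Row *out)
    {
        if (cursor >= sizeof table / sizeof table[0]) return 0;
        *out = table[cursor++];
        return 1;
    }

    /* list style: all true instances at once; caller frees the copy */
    static Row *db_fetch_all(size_t *count)
    {
        *count = sizeof table / sizeof table[0];
        Row *rows = malloc(*count * sizeof *rows);
        memcpy(rows, table, *count * sizeof *rows);
        return rows;
    }

    int main(void)
    {
        Row r;
        while (db_fetch_next(&r))             /* "stutter-step" retrieval */
            printf("%d %.2f\n", r.id, r.balance);

        size_t n;
        Row *rows = db_fetch_all(&n);         /* zero, one, or more rows */
        for (size_t i = 0; i < n; i++)
            printf("%d %.2f\n", rows[i].id, rows[i].balance);
        free(rows);
        return 0;
    }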

As luck would have it I have defined such a language, which I
introduced as part of the Warpicity Proposal at Warpstock 98:
SL/I (Specification Language/One). It wasn't that difficult to
define due to the fact that three pre-1970 "legacy" languages
(APL, LISP, and PL/I) contained everything in all the other
programming languages since, and more. So if you take the
syntax and data types of PL/I, the operators of APL, and the
list aggregate of LISP, and add both the assignment statement
of third generation HLLs and the assertion statement (which
accepts list variables on input and output) of fourth
generation HLLs, you end up with a single fourth generation
language that incorporates all the language capabilities of
previous generations.

Thus your relational database manager can drop the
performance-draining fluff of the "one true instance at a
time" processing and become a seamless fit within an
executable.

Of course I was there when Wirth produced PASCAL as an
academic exercise, when he attempted to overcome its
limitations with MODULA, and then when he went out in a
frenzy because, by the time Oberon came along, most were
wise to his game. You are most welcome to pursue a false
prophet and engage in yet another academic exercise. I will
continue to shake my head in disbelief.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

RE: [osFree] Development Project

Post by admin »

#1404 RE: [osFree] Development Project

Lynn H. Maxson
Jul 1, 2006
John,

I just have difficulty believing that anyone with PL/I
experience would fiddle with a less functional instrument. So
far neither C++, JAVA, nor any of the other C derivatives
comes even close to the functionality that PL/I had over
thirty years ago.

Why, for example, would you even consider a language that
didn't support operations on aggregates? Or that dealt with
"int" instead of "fixed bin (31)" or "fixed bin (31, 7)",
variable-precision, fixed-point binary arithmetic? When was
the last time you had a problem porting PL/I, either source
code or data?

We will agree to disagree. I just have difficulty understanding
how your experience would lead you down the path you
intend to follow.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

RE: [osFree] Development Project

Post by admin »

#1405 RE: [osFree] Development Project

John P Baker
Jul 2, 2006

Lynn,



That I have chosen to take the course of developing a Pascal-derivative language should not be taken as an indication that I intend that language to perpetuate the same weaknesses that exist in Pascal, such as those you cited.



I am actively researching the data type capabilities of dozens of languages.



Aggregate manipulation is another area in my project plan. I will admit that I have not yet gotten around to doing much in that area. But I will.



In its initial implementation, I am writing everything in pure K&R “C”. This makes it portable, and easy to subsequently translate.



I have largely completed the lexical analyzer. The only area in which I still have work to do is in respect to certain symbols (for example, “.”) whose interpretation is sensitive to adjacent whitespace. Of course, if my ongoing research dictates a change to the symbol recognition process, I will have to go back and make changes. However, I don’t expect any major problems here.
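
A Pascal-family lexer, for instance, must decide whether a “.” is a decimal point, the “..” range symbol, or a plain period by looking at its neighbours. A minimal sketch of that decision in C, with hypothetical token names rather than the analyzer’s actual ones:

    #include <ctype.h>
    #include <stdio.h>

    /* hypothetical token kinds; the real compiler's names will differ */
    enum tok { TOK_PERIOD, TOK_RANGE, TOK_DECIMAL_POINT };

    /* Classify a '.' at src[i] by its neighbours: doubled it is the
       '..' range symbol; between two digits with no whitespace it is
       a decimal point; otherwise it is a plain period. */
    static enum tok classify_dot(const char *src, int i)
    {
        if (src[i + 1] == '.')
            return TOK_RANGE;
        if (i > 0 && isdigit((unsigned char)src[i - 1])
                  && isdigit((unsigned char)src[i + 1]))
            return TOK_DECIMAL_POINT;
        return TOK_PERIOD;
    }

    int main(void)
    {
        printf("%d\n", classify_dot("1.5", 1));   /* decimal point */
        printf("%d\n", classify_dot("1..5", 1));  /* range symbol  */
        printf("%d\n", classify_dot("a.b", 1));   /* plain period  */
        return 0;
    }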



I have also largely completed the semantic parser. I have incorporated some capabilities from the PL/C compiler (the Cornell compiler, not the Checkout compiler) in that the semantic parser will automatically recognize and correct certain types of errors. For example, if the context requires a terminal symbol (for example, a “;”), that symbol is the only valid path for the preceding context, and that terminal symbol is not present, the semantic parser will, as an option, recognize that fact, report the condition to the user, insert the missing symbol, and continue. I intend to look for other common syntactic coding errors which may be easily recognizable and correctable, and if appropriate, incorporate additional error correction code into the semantic parser.
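
A much-simplified C sketch of that PL/C-style repair, driven here by a hard-coded token stream rather than real parse tables:

    #include <stdio.h>

    /* When the grammar says a ';' is the only token that may follow
       and it is missing: report the error, behave as if the ';' were
       present, and keep parsing.  Names and driver are illustrative. */

    enum tok { TOK_IDENT, TOK_SEMI, TOK_EOF };

    static const enum tok input[] =
        { TOK_IDENT, TOK_IDENT, TOK_SEMI, TOK_EOF };
    static int pos = 0;

    static enum tok peek(void) { return input[pos]; }
    static void advance(void)  { if (input[pos] != TOK_EOF) pos++; }

    /* ';' is the sole valid continuation here; insert it if absent */
    static void expect_semi(void)
    {
        if (peek() == TOK_SEMI)
            advance();
        else
            printf("error: ';' expected -- assumed present, continuing\n");
    }

    int main(void)
    {
        advance();        /* consume first TOK_IDENT (a statement)   */
        expect_semi();    /* missing: reported, "inserted", continue */
        advance();        /* consume second TOK_IDENT                */
        expect_semi();    /* present: consumed normally              */
        return 0;
    }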



Other than the error correction code, the semantic parser is wholly table-driven. Unlike in XPL, another language which I have researched, the parsing tables are much smaller (in respect to the size of the language) than what the XPL compiler generator would require. Of course, XPL could not handle a language of this complexity.



The semantic parser calls a code generator. It is my thought that I will probably generate some form of intermediate code, which I will generate in its entirety, and then call a multi-pass architecture-specific code generator.
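
As a sketch of that division of labour, consider a toy three-address intermediate form and one architecture-specific pass over it; the quad layout and opcodes are assumptions for illustration, and the S/360-flavoured output ignores base-displacement addressing:

    #include <stdio.h>

    enum op { IR_LOAD, IR_ADD, IR_STORE, IR_HALT };

    /* one three-address "quad" of intermediate code */
    typedef struct { enum op op; int dst, src1, src2; } Quad;

    /* "a = b + c", generated in its entirety before the back end runs */
    static const Quad ir[] = {
        { IR_LOAD,  1, 10, 0 },    /* r1 <- mem[10]  (b)  */
        { IR_LOAD,  2, 11, 0 },    /* r2 <- mem[11]  (c)  */
        { IR_ADD,   3,  1, 2 },    /* r3 <- r1 + r2       */
        { IR_STORE, 12, 3, 0 },    /* mem[12] (a) <- r3   */
        { IR_HALT,  0,  0, 0 }
    };

    /* one architecture-specific pass (simplified S/360 mnemonics;
       real code would use base-displacement, not bare addresses) */
    static void emit(const Quad *q)
    {
        for (; q->op != IR_HALT; q++) {
            switch (q->op) {
            case IR_LOAD:  printf("L   R%d,%d\n", q->dst, q->src1);  break;
            case IR_ADD:   printf("LR  R%d,R%d\n", q->dst, q->src1);
                           printf("AR  R%d,R%d\n", q->dst, q->src2); break;
            case IR_STORE: printf("ST  R%d,%d\n", q->src1, q->dst);  break;
            default: break;
            }
        }
    }

    int main(void) { emit(ir); return 0; }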



It is going to take me a while to complete the language specification. Once that component of the project is complete, the remainder of the project runs roughly along these lines:



1) Revise the lexical analyzer, if necessary.

2) Construct the parsing tables.

3) Test the language recognition process.

4) Write the “intermediate” code generator.

5) Write an “architecture-specific” code generator.

6) Test the code generation process.



It is not listed above, but on a parallel track I will be writing manuals to go along with the code. As I presently envisage it, the manual set will include:



1) Installation Reference

2) Language Reference

3) Messages Reference

4) Programmer’s Guide

5) Data Areas Reference

6) Program Logic Reference

7) Architecture-Specific Code Generator Writer’s Reference

8) Intermediate Code Reference



It is not a simple process, and it is going to take a bit of time. In the long run, I think that it will be worth it. I think that you may be pleasantly surprised.



John P Baker

Software Engineer
From: osFree@yahoogroups.com [mailto: osFree@yahoogroups.com ] On Behalf Of Lynn H Maxson
Sent: Sunday, July 02, 2006 00:04
To: osFreeyahoogroupscom
Subject: RE: [osFree] Development Project



admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

RE: [osFree] Development Project

Post by admin »

#1406 RE: [osFree] Development Project

Lynn H. Maxson
Jul 2, 2006
John,

I accept as given that we share the same intent in providing
enhanced software development tools. From the brief bio that
you offered I accept that our time in the profession overlaps
considerably, as does our background in IBM mainframe
operating systems. I apologize for appearing harsh, as I had
no such intent, but I had to somehow express my confusion at
how two people operating essentially from the same
background could differ so greatly in their analysis of the
current software situation and in its resolution.

I probably have a few years on you. I started in June of
1956, writing and entering diagnostic programs in machine
language on an IBM 705. This year marks my 50th anniversary
as an IT professional. You mentioned 1968. In 1968 I was the
controller of the SJCC (Spring Joint Computer Conference) in
Anaheim where Dennis Ritchie presented his paper on the "B"
language for "Basic" compiler writing. Prior to the advent of
the OS/360 family of operating systems I had written a
multi-tasking operating system concurrently supporting both
batch and online transaction processing against a common
datastore, in this instance controlled sequential file access.

In 1970 I attended an IBM-internal symposium on programming
languages. One presenter from IBM Raleigh offered up the
macro language facility they had developed. It was all the
other attendees could do to keep from laughing, as it didn't
even have the capabilities of the existing S/360 assembly
language. Another presenter offered up the H-level
assembler, which has now evolved into the "advanced
assembly language" that leaves all other implementations in
the dust.

Now I have only one motto, a single guideline with respect to
designing a software tool: let people do what software cannot
and software what people need not. In software that comes
down to deciding what writing only people can provide and
what rewriting (which third generation languages require
developers to do prior to compiling) we can leave up to the
software when logic changes dictate.

The answer here lies in the fourth generation's, i.e. logic
programming's, two-stage proof engine: the completeness
proof and the exhaustive true/false proof. In SQL. In Prolog.
In Trilogy. In all of AI. In neural nets. The software assumes
responsibility for the logical organization, the optimal logical
organization, of the source. This means the source segments
can appear in any, i.e. random, order.

The only additional requirement is the assertion statement,
e.g. in SQL the query, which allows zero (false), one, or more
"true" instances. That requirement necessitates a native list
aggregate. Thus an assignment statement allows (requires)
one and only one true instance as a result, while an assertion
allows 0, 1, or more true instances. An assignment statement
is therefore a "special" case of an assertion.
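
A small C sketch of that distinction, modelling the assertion
as a function that returns a list of true instances (names and
types are illustrative only):

    #include <stdio.h>
    #include <stdlib.h>

    typedef struct { int *vals; size_t n; } List;

    /* assertion: x * x = target over -10..10; the result list may
       hold zero, one, or more true instances */
    static List assert_square_is(int target)
    {
        List r = { malloc(21 * sizeof(int)), 0 };
        for (int x = -10; x <= 10; x++)
            if (x * x == target)
                r.vals[r.n++] = x;
        return r;
    }

    /* assignment: the special case of exactly one true instance */
    static int assign_from(List r)
    {
        if (r.n != 1) {
            fprintf(stderr, "assignment needs exactly one true instance\n");
            exit(1);
        }
        return r.vals[0];
    }

    int main(void)
    {
        List r = assert_square_is(25);      /* {-5, 5}: two instances */
        for (size_t i = 0; i < r.n; i++)
            printf("%d\n", r.vals[i]);
        free(r.vals);

        List s = assert_square_is(0);       /* exactly one: x = 0 */
        printf("assigned: %d\n", assign_from(s));
        free(s.vals);
        return 0;
    }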

Moreover logic programming has two logical forms, clausal and
predicate logic. SQL and Prolog use clausal logic. Trilogy and
the Z specification language use predicate logic. Now what's
the difference? Clausal logic requires that you indicate (and
thus prepare) the data for the exhaustive true/false proof.
Predicate logic does not. In predicate logic you set the
"range" of values a variable may take on, and the software
creates an enumerated set of variable values, i.e. automatic
test data generation.
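
In imperative terms the idea reduces to something like this C
sketch: declare the range a variable may take on, enumerate
it, and check the assertion against every value:

    #include <stdio.h>

    /* the function whose behaviour is being asserted about */
    static int abs_val(int x) { return x < 0 ? -x : x; }

    int main(void)
    {
        /* range declaration: x in -1000 .. 1000, enumerated in full */
        for (int x = -1000; x <= 1000; x++)
            if (abs_val(x) < 0) {           /* the assertion under test */
                printf("counterexample: %d\n", x);
                return 1;
            }
        printf("assertion holds over the whole range\n");
        return 0;
    }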

Moreover logic programming allows the entry of rules with
respect to data. Those rules get entered "once" in the source
input and get applied by the software in all instances
involving the variables to which they apply. That means you
don't have to depend on a programmer remembering to enter
checking code to ensure rule compliance: you let the software
do it, as it never forgets or gets it wrong.

As to the choice of language, why would you begin with
anything less than the best ever offered: PL/I? Why would
you pick something less, only to have to add the rest in and
still come up with something less? Why would you talk about
using C due to its portability? Do you know how many
thousands of enterprises that went to UNIX discovered the
"special" meaning K&R applied to "portability"?

VOICE will in the near future receive the source code for the
OS/2 version of PMMail/2. I don't know that you are an OS/2
user. You obviously don't use PMMail/2. Nevertheless the
development team's first task lies in "porting" the source
from IBM's VisualAge C/C++ to OpenWatcom. I would think
that the term "port" or "porting" occurs often enough in the
open source community to give anyone pause about this
great claim of portability for C.

In truth the language is portable. Unfortunately the libraries
are not. Why would you choose a language which requires
the use of a library over one which does not, thanks to its
range of "builtin functions", operators, and data types? I
won't bother you with the set of "on" conditions
(subscriptrange, overflow, error, system, user-defined, etc.)
which keeps control within the purview of the program,
thereby contributing to its ability to engage in self-repair.
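
C has no direct equivalent of a PL/I ON-unit, but a rough,
platform-dependent analogue of ON ZERODIVIDE can be sketched
with a signal handler and setjmp/longjmp; this shows only the
control-flow idea, not robust practice:

    #include <setjmp.h>
    #include <signal.h>
    #include <stdio.h>

    /* SIGFPE behaviour on integer divide-by-zero varies by platform;
       this is a sketch of keeping control within the program */
    static jmp_buf recover;

    static void on_fpe(int sig)
    {
        (void)sig;
        longjmp(recover, 1);          /* back to the "ON-unit" below */
    }

    int main(void)
    {
        volatile int d = 0;
        signal(SIGFPE, on_fpe);
        if (setjmp(recover) == 0)
            printf("%d\n", 10 / d);   /* raises the condition */
        else
            printf("zerodivide handled, program continues\n");
        return 0;
    }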

So why not pick a programming language with the simplest
syntax structure possible, where every program element is a
statement, including statement delimiters (;) and the group
statement delimiter (end;)? Every statement ends in a
semicolon, no exceptions.

Now to the issue of productivity. We have two things to
consider in terms of throughput, the amount of work achieved
per unit time. We have the programmer. We have the
program. Now a programmer has to develop a program. That
implies that from the moment he starts until he is done, the
program is in an incomplete state. Yet a compiler requires a
"complete" program on input even if the program itself is
incomplete.

In your instance of a missing terminator (;), caught for some
reason by your semantic rather than your syntax checker, you
have it occur during a compile. Now "smart" editors, which
could get even smarter, with colorised syntax checking would
have caught the missing terminator immediately: earliest
error detection, earliest error correction. So why would you
want to introduce the delay of compilation after committing
such an easily detectable error? Is your time not valuable?
Why would you want to waste it?

So why develop code with a compiler, which optimizes the
runtime of a program, instead of an interpreter, which
optimizes the productivity of a programmer? Why do you
need two separate tools, an interpreter and a compiler, when
the only difference between them lies in the method of code
generation? So if you have the "smartest" editor, an
interpreter, which allows testing at the statement level or
any segment assembly thereof using predicate logic, why
would you not simply add an option to generate compiled
output only after you have completely tested it (with
automatically generated test data) and want to release it to
production?

Now I assume that you understand that an application system
consists of one or more programs. If it consists of more than
one and you have a global change which crosses program
boundaries, or in fact application system boundaries, why
would you insist on compiling each program separately instead
of all of them at once as a single unit of work? That
guarantees in software the synchronization of change.

Now I won't bother you with integrating this
interpreter/compiler with a full-function data
repository/directory in which the software does all
maintenance. Here the only stored source is the "statement",
to each of which the software assigns a name, and all
assemblies exist as lists of assembly and source names: a pure
manufacturing, BOM approach. It permits unlimited homonyms
(same name, different referents) and synonyms (different
names, same referent).

That means if you make a change to a source statement, i.e.
create a different version of it, you can request that the
software either select which assemblies to change or change
them universally, along with recompiling all affected
assemblies in their entirety.
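
A toy C sketch of that bill-of-materials structure, with
software-assigned statement ids standing in for the assigned
names:

    #include <stdio.h>

    /* statement store: each source statement held once, by id */
    static const char *stmt[] = {
        "dcl i fixed bin(31);",        /* s0 */
        "i = i + 1;",                  /* s1 */
        "put list (i);"                /* s2 */
    };

    /* two assemblies sharing statements s0 and s1; homonyms and
       synonyms would add a name layer on top of these ids */
    static const int prog_a[] = { 0, 1, 2, -1 };
    static const int prog_b[] = { 0, 1, 1, 2, -1 };

    /* expand a BOM into its source; change s1 once in the store
       and every assembly that lists it sees the change */
    static void expand(const int *bom)
    {
        for (; *bom >= 0; bom++)
            puts(stmt[*bom]);
    }

    int main(void)
    {
        expand(prog_a);
        puts("--");
        expand(prog_b);
        return 0;
    }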

Then along with this you make the decision to operate on
nothing except source, to have a single source library, which
the interpreter/compiler then "knows" in its entirety. That
eliminates all concerns about porting as well as providing the
most in inline optimization.

So I would ask: why isn't programmer productivity higher on
your list of priorities? Why just a compiler instead of an
interpreter/compiler? Is not your runtime, your transaction
rate, your throughput as important in the overall equation as
that of a program?

Getting multiple executables from a single compile, and
eliminating the need for and use of a separate linker, make,
or build activity, arise from implementation, not language.
All this effort to make a program be the best that it can be
without doing the same for the programmer has gotten us to
the mess we are in. One of us seems hellbent on continuing
that history.

So, yes, I do question your choices as well as your reasoning.
I remain respectful of your intentions. I do wish you were an
OS/2 user. If a Windows user, at least then one using PMMail.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

RE: [osFree] Development Project

Post by admin »

#1407 RE: [osFree] Development Project

John P Baker
Jul 2, 2006

Lynn,



You do have a few years on me. I started in 1968 at the age of 12. My uncle worked for IBM, so I had the advantage of being able to obtain, free of charge, manuals, etc., which I devoured. I became a professional programmer in 1975.



In addition to the S/360 line, and its successors, I have worked on an IBM 1620, which was actually a neat machine.



I also got to play around a little bit on an IBM 7094.



I have also done some work on an IBM 1130.



By the way, I wonder how many people here can say that they have actually seen an IBM data cell device?



The pride and joy of my collection is an original IBM S/360 announcement. I also have a module of S/360 core memory.



Of course, I have numerous “microseconds” which I obtained from Commodore Grace Hopper.



By the way, I have been running OS/2 since Version 1.x Extended Edition. I also run DB2 UDB, as well as both IBM Enterprise COBOL and IBM Enterprise PL/I.



I also run Windows, Linux, VM/370 (under Hercules), and OS/VS2 (under Hercules).



I concur with you that, of all of the procedural languages ever developed, PL/I is by far the best.



In my research, I am examining the capabilities of many languages, of which ICON, JOVIAL, LISP, PROLOG, SNOBOL, and XPL are but a small subset. I actually have a list of over 300 programming languages. I wonder how many people are aware that there was an actual programming language called “HAL”. Has anyone watched “2001: A Space Odyssey” recently?



As far as K&R “C” goes, it is quite portable if, and only if, you avoid some of its more esoteric capabilities. In this regard, I am being very careful to avoid anything which would impact portability.
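
One classic pitfall of that era is assuming the width of int; the sketch below marks the assumption explicitly, which is the kind of care being described:

    #include <stdio.h>

    int main(void)
    {
        long big  = 100000L;   /* safe: long is at least 32 bits      */
        int  risky = 100000;   /* truncated where int is only 16 bits */

        printf("sizeof(int) here: %u\n", (unsigned)sizeof(int));
        printf("%ld\n", big);
        (void)risky;
        return 0;
    }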



It is my intention to translate the compiler into the new language once I have the first operational compiler.



Now, to address the issue of compiler vs. interpreter, the design provides for the generation of “intermediate code”. This being the case, the provision of an interpretive engine is quite reasonable, and will likely follow at some point.
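
Once a complete intermediate-code stream exists, an interpretive engine reduces to a dispatch loop over the same stream a code generator would otherwise translate. A toy sketch, with opcodes and layout that are purely illustrative:

    #include <stdio.h>

    enum op { IR_PUSH, IR_ADD, IR_PRINT, IR_HALT };

    typedef struct { enum op op; int arg; } Ins;

    /* a stack-machine dispatch loop over the intermediate code */
    static void interpret(const Ins *p)
    {
        int stack[64], sp = 0;
        for (;; p++) {
            switch (p->op) {
            case IR_PUSH:  stack[sp++] = p->arg;              break;
            case IR_ADD:   sp--; stack[sp - 1] += stack[sp];  break;
            case IR_PRINT: printf("%d\n", stack[--sp]);       break;
            case IR_HALT:  return;
            }
        }
    }

    int main(void)
    {
        const Ins prog[] = {
            { IR_PUSH, 2 }, { IR_PUSH, 3 }, { IR_ADD, 0 },
            { IR_PRINT, 0 }, { IR_HALT, 0 }
        };
        interpret(prog);               /* prints 5 */
        return 0;
    }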



In the case of the error recognition code that I have put into place, it is actually quite small. A preliminary analysis indicates that on an IBM S/360, or successor processor, the overhead would amount to zero machine instructions per token where an error is NOT detected, six machine instructions where an error is detected and is not correctable in the manner indicated previously, and twenty-one machine instructions where an error is detected and is correctable in the manner previously indicated. I don’t believe that the overhead is excessive.



I am greatly concerned about instruction path length. That is one of the reasons that I despise C++. It has some of the worst performance characteristics of any programming language I have ever seen.



I am of course available at any time to look at and discuss the approach you are taking, and I am prepared to make available for your analysis the code and related documentation for my project.



John P Baker

Software Engineer
From: osFree@yahoogroups.com [mailto:osFree@yahoogroups.com] On Behalf Of Lynn H Maxson
Sent: Sunday, July 02, 2006 13:01
To: osFree@yahoogroups.com
Subject: RE: [osFree] Development Project



admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

RE: [osFree] Development Project

Post by admin »

#1408 RE: [osFree] Development Project

Lynn H. Maxson
Jul 2, 2006
John,

We cannot have crossed the same territory without having
come to the same or similar conclusions. In the SCOUG
Programming SIG we are underway in developing a Developer's
Assistant, my single tool built on a single, self-defining,
self-extensible specification/programming language; we will
progress toward an interpreter/compiler written initially in
C. Once we get to our own "smart" editor, where we can then
switch to SL/I, that may change. I have my doubters, with
some considerable experience and skills, who question my
claims of "50 times time improvement (developer productivity)
and 200 times cost improvement (software productivity)" for
my method. In a sense each of my assertions has a practical
proof process facing it.

If you have OS/2, why do you not also have PMMail/2? The
decision by VOICE to buy the rights to the OS/2 version of
PMMail and continue its development ought to be of interest
to you, as the process dovetails into the problematic details
we both want to see subside.

If you look at the software development cycle of five stages
(specification, analysis, design, construction, and testing),
third generation languages "dictate" that each of these occur
"manually" in terms of separate source management. In logic
programming, e.g. SQL or Prolog, only the first stage,
specification, requires manual input. The software, if you use
predicate logic instead of clausal, will achieve the other
stages as part of the completeness and exhaustive true/false
proofs.

Just as you have software which will create flowcharts from
source, you have other software offerings, e.g.
www.imagix.com, which will produce UML documentation from
source. You can do the same with producing the dataflows and
structure charts of structured design. If you use an
interpreter which can dynamically rewrite program logic on
the fly based on source specification language changes, the
completeness proof can completely rewrite the documentation
from the now reorganized source on demand. So you have no
need for anything other than specification source to produce
all visual output dynamically. You get essentially immediate
feedback on the "pictures" it produces versus the one you had
in mind. You only have one source to maintain.

Now the first implementation of APL occurred on an IBM 1620
and was later "ported" to an IBM 1130. Prior to that I
attended a UCLA extension course taught by Bob Barton, the
chief architect of the Burroughs B5000 series, on APL directly
from the Iverson book. I still have the book, as do my friends
who attended the same class.

At the same IBM-internal programming languages symposium
in which the Raleigh people presented their puerile
macro-processor and the Endicott people their H-level
assembly language was Jean Sammet, author of "History of
Programming Languages". I had her autograph my copy.

I also attended another UCLA extension course on
programming languages taught by Jean. She kept making the
mistake of introducing new features which might be nice to
have, which other members of the class explained to her had
already been implemented in PL/I. Jean was wedded to her
beloved COBOL, on whose language committee both she and
Grace Hopper had participated.

No language which depends upon a non-standard third party
library is portable at the source level. It is not a language
issue. It is a semantic issue of names, of what they do and
how they work.

If you want to have fun, take a program of some complexity
involving reentrancy. Write it in C and PL/I. Execute both to
see which runs faster. Of course that complexity may include
variable-length bit strings. Contrary to popular belief, bit
strings and logical operators like "and", "or", and "not" are
closer to machine architecture than anything C supports
natively. C libraries require the overhead of a call even for
a small subroutine. So for performance I wouldn't pick C or
any of its derivatives like C++, C#, JAVA, Python, PHP, or any
of the rest which came after Jean wrote her book.

You see you end up counting machine instructions, measuring
their overhead. You place an emphasis even in development
on machine cycles, while I place one on people seconds. You
may say, "Well, I want to improve things a bit for the 250
people involved in maintaining an operating system package."
I want to do the same thing except that I want to bring that
number down to 5.

You retain your mainframe mentality while I have switched to
individual, not multi-user, workstations. I do the same COBOL,
DB2, and CICS except I do it with 40 times the productivity of
the mainframe programmer. I had an account here in
Southern California, SCE, a major utility company, for whom I
designed a testing system. I had two groups working
essentially side-by-side in the same facility, one working
exclusively with mainframe tools and the other with
equivalent workstation tools. I can attest to such a significant
difference in productivity that I would never recommend
software development or testing on a mainframe. Not if you
have an interest in minimizing the ratio of people's waiting
time to working time.

I mention testing because I cringe when some open source
"bigot" brags about having 1400 beta testers. In fact if he
brags about something being ready for alpha or beta testing, I
have concerns about his competency. In point of fact even in
what you intend to design you will include the structured
programming control structures (sequence, iteration, and
decision) along with their variants. These have a one-in,
one-out topology. Any program is no more than a nested, i.e.
hierarchical, assembly of such structures.

If you write an interpreter, regardless of language, which
uses a two-stage, logic programming proof engine, and you do
so using predicate logic, you can exhaustively test any
control structure or assembly of such without leaving your
workstation, more completely and a zillion times faster than
with a thousand or a million beta testers.

So I don't argue about your use of an intermediate code which
you can address in interpretive or compiled mode. I do argue
about which you produce first, relative to bringing down the
cost of software. The cost, let us remember, has nothing to
do with how fast the software runs, but rather how fast you
can get to that point.

Finally, for this segment at least, the worst thing that ever
happened to this industry after the disaster of K&R's C was
the advent of the current OO methodology based on the
example of Smalltalk coming out of PARC. We have never
done any programming which was not object-oriented or
whose objects were not data. That's why we called it "data
processing".

We made the same mistake in C++ that we did with C: we relied
on non-standard (class) libraries, primarily proprietary, i.e.
closed source. How we ever allowed incompatibilities like this
to occur comes down to people who like to get paid for
programming regardless of what it costs in effort. As we
could not increase the productivity of programmers with the
new technology, and in fact we decreased it and thus raised
the cost, we in this country found our services "outsourced"
to a lower cost alternative. Now we discover that even H-1B
in-sourced replacements work for a lower salary structure
compared to available citizens now seeking work elsewhere.

I go through all this because my box, I suspect, is somewhat
larger than yours. You focus on a product-specific solution
while I focus on a larger industry population which has not
focused on the productivity that would allow it to compete in
the global economy. Thus my product solution deals with
resolving far more issues than yours. I focus on people
productivity, not machine productivity, in an attempt to
correct a historical imbalance.

We both will probably take the same time getting to market.
Unfortunately except for the learning experience, something
which I do value, you will have "wasted" your time. As
someone with a major focus on people productivity, that
bothers me.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Hey let's revive this community

Post by admin »

#1444 Hey let's revive this community

unitedronaldo
Feb 1, 2007
I have been an open source supporter for several years. Even
though I have never contributed to the development, I have
always preferred to use OSS on my computer. I hope we can do
something with this community. Please respond.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: [osFree] Hey let's revive this community

Post by admin »

#1446 Re: [osFree] Hey let's revive this community

Cristiano guadagnino
Feb 2, 2007
Hi,
this newsgroup is dead.
If you want to know more about osFree please go to www.osfree.org.

admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

osFree and this mailing list

Post by admin »

#1943 osFree and this mailing list

Allan Holm
Aug 5, 2008
Spam seems to have become a big problem for this mailing list,
which none of us likes, so I tried to figure out why the
owner/moderator doesn't stop it.

It seems that the group currently doesn't have an owner/moderator :-(

I asked Yahoo, and they told me they could set up a new person
as moderator, if we elected one.

That's at least a start to stop the spam.

Question is also - is this group still needed/wanted?

I think this group is still a better alternative to
the web forums that can be found on the osFree site.
Personally I hate web forums - and since the osFree site
seems to regularly lose the contents of its forum,
I would still like to see this group alive.


But... is anyone else still reading this group?
(It seems to have 400 members!)

Do you want it to stay alive, or should we just abandon it
and let the spammers have their fun?


Speak up if you want it to stay alive!


Allan.



Allan Holm Operating -_-_\ \ / / ____ _____ _____
Running OS/2 at _-_ _-_-_\ \/\/ /-/ -_ \ \ -_ \_ \ -__\
Unleash 32-bit power! - -_-_-_\_/\_/ -\_\ \_\ \_\ \_\ \_\ speed
alh@...
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: [osFree] osFree and this mailing list

Post by admin »

#1945 Re: [osFree] osFree and this mailing list

Carlos
Aug 5, 2008
Hi Allan, the rest of the osFree group and the spammers

I think it has been some time since I saw anything osFree-related in
this group. Lots of money spending/wasting opportunities have been
coming my way though!!

I would like to keep the group alive. I also hate web forums because I
have to remember to visit them all the time, plus there's the logging
in, etc. It's nice to have things drop into my mailbox, and I can always
filter them off to their own directory if they overcrowd my inbox.

I guess that's 2 votes out of the 400 potential votes/spammers so far. I
don't have the time to moderate though, unless it's just to remove the
odd spammer occasionally. Pity we can't squish them like flies.

Carl.

