#2022 Alive, dead or in a coma
Lynn H. Maxson
Sep 17, 2009
In the interval since IBM exited the PC operating system business with
OS/2 we have seen a steady decline in our population. That should come
as no surprise to anyone. After IBM's exit, specifically at Warpstock98
in Chicago, several presenters offered their opinions, along with the
general discussion, on how best to respond so as to not only preserve
but also enhance OS/2.
As the last presenter at that user conference I took an entirely
different approach from the other speakers, offering the Warpicity
Proposal. It had three parts: a formal organization supported by the
population, the staffing of that organization, and an enhanced
methodology.
The first two quickly drifted off into the background while the
methodology assumed center stage.
Basically the methodology consists of a single language, designated as
SL/I, a single tool, designated as "The Developer's Assistant", and a
single source-only, automated library, designated as the "Data
Directory/Repository". Now the whole purpose of this trio lay in
increasing developer productivity some 50+ times and reducing
development costs some 200 times. As neither had never occurred
previously in the history of software development anywhere near those
numbers obviously skeptics and skepticism have challenged it ever since.
Understand that I chose to address productivity and cost as the central
concerns which any minority software population would have to achieve in
order to maintain or exceed the pace of support within larger
populations. I chose to offer something different, startlingly so,
because it could not occur using the same tools and methodology as the
larger supported populations. With fewer people to draw upon we could
not compete with larger resources on their playing field. At this
moment, some 11 years later, history has borne me out in spades.
All that aside, some interest came about in a project to create an open
source version of OS/2. That led to some activity on a list known as
"osfree". The discussion there split between two approaches. One
proposed to create an application layer relying on an underlying
existing operating system with a larger population, namely Linux. The
other proposed to start with a micro-kernel which supported not only
OS/2 but any other operating system, suitably modified. The
first group focused its efforts on the freeos list while the second
group started the osfree list. Both today lie largely dormant.
Meanwhile a still active OS/2 user group, SCOUG (the Southern California
OS/2 User Group), several years back had its Programming SIG focus on
the methodology proposed in Warpicity, specifically the development of
the single tool, the Developer's Assistant (DA). Anyone who wants to follow
this development can do so by subscribing to the scoug-programming list.
You can get instructions for doing so at the website,
www.scoug.com.
I assume that when a dormant or relatively inactive list gets new
subscribers, they should have an idea of what it covers and options to
participate, if they so desire.
The DA is an interpreter/compiler for SL/I. It is both because an
interpreter offers the most in developer productivity, the developer's
throughput, while a compiler offers the most in transaction productivity,
the program's throughput. Because it operates as an interpreter it has a
GUI-based editor. As an interpreter and a compiler share syntax and
semantic analysis, differing only in code generation, which type of code
to produce becomes a developer option.
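To make the shared front end concrete, here is a minimal sketch in
Python, my own construction and not the DA itself, in which parsing
happens once and the choice of back end, interpret now or compile for
later, remains a per-run developer option:

    import ast

    def front_end(source: str) -> ast.AST:
        # Shared syntax and semantic analysis: both paths see
        # the same parsed form of the source.
        return ast.parse(source, mode="exec")

    def interpret(tree: ast.AST) -> None:
        # Interpreter back end: execute immediately for the
        # fastest developer turnaround.
        exec(compile(tree, "<da>", "exec"), {})

    def emit_code(tree: ast.AST):
        # Compiler back end: produce a reusable code object for
        # the best execution (transaction) throughput.
        return compile(tree, "<da>", "exec")

    tree = front_end("x = 6 * 7\nprint(x)")
    interpret(tree)           # option 1: run it now
    code = emit_code(tree)    # option 2: keep the compiled form
    exec(code, {})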
SL/I is an HLL combining an imperative (1st, 2nd and 3rd generation)
and a declarative (4th generation) programming language in one. As
every programming language by definition is a specification language,
SL/I becomes a "universal" specification language. As such it can
define itself, a capability which makes it self-extensible.
It can combine imperative (assignment) and declarative (assertion)
statements in one because its implementation uses the "standard"
two-stage proof engine
of logic programming, the completeness proof and the exhaustive
true/false proof. In short it brings to an imperative language the
self-organizing capability enjoyed by all fourth generation languages,
e.g. SQL, Prolog, AI and neural networks.
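A toy Python sketch, again my own construction and not SL/I, of what
the declarative half buys you: the developer asserts conditions over
declared ranges, and the two-stage proof enumerates every binding
(completeness) and keeps those that prove true (true/false):

    from itertools import product

    def solve(ranges, assertion):
        # Completeness proof: consider every combination of the
        # declared ranges. True/false proof: keep a binding only
        # if the assertion holds for it.
        names = list(ranges)
        for values in product(*ranges.values()):
            binding = dict(zip(names, values))
            if assertion(**binding):
                yield binding

    # Assert, rather than compute: x + y = 10 with x < y.
    for answer in solve({"x": range(0, 11), "y": range(0, 11)},
                        lambda x, y: x + y == 10 and x < y):
        print(answer)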
To understand this you need to remember that structured programming
relies on three forms of control structures: sequence, decision and
iteration. Each of these control structures contains statements and
possibly other control structures. Thus we have only two software
entities, statements and their assemblies (control structures). Every
statement receives a software-assigned name, as does every assembly,
which also has a developer-assigned name.
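In data structure terms, only a sketch under my own naming assumptions,
the two entities reduce to something like:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Statement:
        name: str      # software-assigned name, e.g. "s0001"
        source: str    # the single stored copy of the source text

    @dataclass
    class Assembly:
        name: str      # software-assigned name
        label: str     # developer-assigned name
        kind: str      # "sequence", "decision" or "iteration"
        # ordered names of contained statements and sub-assemblies
        members: List[str] = field(default_factory=list)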
All this occurs within the domain of the Data Directory/Repository
(DD/R). Within it only source statements are stored as such. All
assemblies, i.e. control structures, exist as lists of statement names
and possibly contained assembly names.
This means in turn that only a single copy of any statement or assembly
exists in the DD/R. The only replication occurs in the use instances of
names. On top of this every source name has appended to it an index, a
binary field. This ensures that every named instance remains unique
regardless of the number of homonyms (same name, different referent) and
synonyms (different name, same referent) which occur. The fact that it
handles homonyms automatically means that versioning is built in, not
optional. Moreover versioning applies to every statement, assembly and
defined data element and aggregate (assembly).
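A small Python sketch of that indexing scheme, hypothetical throughout:
storing an unchanged text returns the existing copy (reuse by repeated
use), while a changed text under the same name gets the next index,
which is versioning built in:

    class Repository:
        def __init__(self):
            self.versions = {}   # base name -> list of source texts

        def store(self, name, source):
            history = self.versions.setdefault(name, [])
            if source in history:              # single-copy rule:
                index = history.index(source)  # reuse existing copy
            else:
                history.append(source)         # homonym: new version
                index = len(history) - 1
            return f"{name}#{index:08b}"       # name plus binary index

        def fetch(self, qualified):
            name, index = qualified.split("#")
            return self.versions[name][int(index, 2)]

    repo = Repository()
    a = repo.store("total", "total = total + amount;")
    b = repo.store("total", "total = total + amount;")  # copy reused
    c = repo.store("total", "total = total - amount;")  # new version
    print(a, b, c)  # total#00000000 total#00000000 total#00000001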
Two things to note now. One, this treats software as a manufacturing
system. Two, every contained instance, whether element (source, data)
or assembly, is reusable. That reuse is maintained automatically by the
software. Thus reuse occurs by repeated use and not by some predefined
method.
The fact that each statement and control structure has a unique name
means that you now have the same programming option in use by COBOL with
its "paragraphs", named code sections. It also means that any given
high level name will in turn cause all the lower level names (statements
or assemblies) to get automatically inserted in order as a consequence
of the completeness proof. Thus all organization and reorganization
belong to the software: should a logic change in the underlying
structures so require it, the software will automatically reorganize the
source, again as a consequence of the completeness proof.
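Sketched in Python with invented names, the effect looks like this: ask
for a top-level name and every contained name gets pulled in, in order,
with no manual assembly:

    statements = {"s1": "open file;",
                  "s2": "read record;",
                  "s3": "close file;"}
    assemblies = {"read_loop": ["s2"],          # iteration over s2
                  "main": ["s1", "read_loop", "s3"]}

    def expand(name):
        # Depth-first resolution: contained assembly names are
        # replaced by their members until only statements remain,
        # the mechanical analogue of the completeness proof.
        if name in statements:
            return [statements[name]]
        lines = []
        for member in assemblies[name]:
            lines.extend(expand(member))
        return lines

    print("\n".join(expand("main")))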
Thus now the developer writes in the "small" while the software
organizes in the "large". As humans are less error prone in the small
and the software is not error prone at all in the large, you have the
best of both worlds.
Now no system is complete without testing. We have two ways of testing,
one in which we externally generate the test data and another in which
the software automatically generates it according to data ranges we
specify. The first is used by all imperative languages, and this we have
now changed with our two-stage proof engine. We now have one of the two
options in use in logic programming. One operates on pre-defined data,
e.g. SQL, and is known as "clausal" logic. The other operates on data
automatically generated from user-defined, range-based rules, and is
known as "predicate" logic.
The DA and SL/I use predicate logic. Thus they offer the most complete
and easiest to define set of test data. It has the effect of
eliminating the need for either alpha or beta testing or testers.
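A sketch of range-based generation, my own construction rather than the
DA's: the developer declares each input's range plus a post-condition,
and the software exhaustively proves the assertion true or false over
every combination:

    from itertools import product

    def test_exhaustively(ranges, function, assertion):
        # Enumerate every combination of the declared input ranges
        # and record any combination that falsifies the assertion.
        failures = []
        for args in product(*ranges):
            result = function(*args)
            if not assertion(result, *args):
                failures.append((args, result))
        return failures

    # Example: a clamp function and its specified post-condition.
    def clamp(value, low, high):
        return max(low, min(value, high))

    bad = test_exhaustively(
        [range(-5, 6), range(0, 3), range(3, 6)],  # declared ranges
        clamp,
        lambda result, value, low, high: low <= result <= high)
    print("failures:", bad)  # an empty list proves the assertion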
Now the DA and SL/I will not change the amount of "original" or
"modified" source. They will, however, reduce and in fact eliminate any
manual organization or reorganization of source as well as the manual
generation of test data. While they bring productivity gains in other
areas, these two, the organizing of source and the generation of test
data, account for the majority of time and effort in current methods.
This follows the
guiding principle of "let people do what software cannot and software
what people need not."
In short if you can achieve a 50 to 1 gain in productivity, you require
correspondingly fewer human resources. We should also note that
it reverses the result of the business software development/maintenance
equation which currently favors "build to stock", volume-based
purchasing, over "build to suit", custom software. To restate this,
volume-based producers cannot compete economically with custom-based
producers. If you consider the deeper meaning of a 50 times
productivity gain, any group larger than 5 to 7 people becomes
non-competitive.