
Part 36 - Jul 28 2004

Posted: Sat Jan 05, 2019 5:25 am
by admin
#1058 Re: Project Definition

Daniel Lee Kruse
Jul 28, 2004

--- In osFree@yahoogroups.com, "John P Baker" <jbaker314@e...> wrote:
[snip]
>
>
> * Do we need to continue to support 16-bit APIs and the associated
> "thunking"?

How much backward compatibility is desired? A large portion of device
drivers are 16-bit. I wouldn't be surprised to see several 16-bit
programs still in use, but I don't have any way to back up that statement.

> * Do we need to support a Windows interface layer or just native
> "protected mode" OS/2 sessions?
> * What hardware should we support? Do we establish a minimum of a
> "Pentium 4"?

Personally, I think "protected mode" OS/2 sessions should take
priority. The Windoze interface layer could be later if there is
demand for it.

Hardware - do we want to start off at 32 bits or just make the jump to
64 bits à la Athlon 64 and PowerPC? Do we want to stay just x86 or
have the option of using other hardware platforms? I think x86 should
be the initial hardware supported.

Good question on the minimum CPU level. There is a lot of older
hardware out there still being used. I'm still using a 1.3 GHz Athlon.
I don't think I'd go below a 1 GHz-class machine, though.

[snip]
>
> John P Baker
>
> Software Engineer
[snip]

Re: Project Definition

Posted: Sat Jan 05, 2019 5:28 am
by admin
#1059 Re: Project Definition

Daniel Lee Kruse
Jul 28, 2004

--- In osFree@yahoogroups.com, "John P Baker" <jbaker314@e...> wrote:
[snip]
> I have a particular interest in security and auditability. I am
> leaning towards a much more object-oriented API set.

And I have a particular interest in a multi-user system. I would like
to see a more object-oriented (not merely object-based) API set, too.

I would like to see the APIs raise exceptions instead of returning
codes. That's my 2 cents.

[snip]

Daniel Lee Kruse

Re: [osFree] Re: Project Definition

Posted: Sat Jan 05, 2019 5:29 am
by admin
#1060 Re: [osFree] Re: Project Definition

Stepan Kazakov
Jul 28, 2004
On Thu, 29 Jul 2004 05:43:44 -0000, Daniel Lee Kruse wrote:

>> * Do we need to continue to support 16-bit APIs and the associated
>> "thunking"?
>How backward compatible is desired? A large portion of device drivers
>are 16 bit. I wouldn't be surprised to see several 16-bit programs
>still in use, but I don't have any way to back up that statement.

Hey, guys.

Everyone here should know something about reality.
_All_ OS/2 physical device drivers, including DEVICE and BASEDEV, are
16-bit.
_Any_ non-PM program with screen/keyboard/mouse access is a 16-bit program.
Many of the 32-bit DOSCALLS are thunks to 16-bit DOSCALLS code.
There is so much 16-bit crap and so many limitations in the system that
if you remove it, almost no OS/2 program will work.
If you want to emulate all this old crap, you have to do all those ugly
things like LDT tiling, the 512 MB private address space limitation,
HMA, etc., etc.
And you will get just another OS/2 with all its limitations.

Does anybody really want that? ;)
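
(For reference, the LDT tiling mentioned above is the trick that lets
16-bit code address memory allocated by 32-bit code: each 64 KB piece of
the low 512 MB of a process is shadowed by one LDT selector, so a flat
0:32 pointer converts to a 16:16 far pointer with plain arithmetic. A
rough sketch, with made-up macro names rather than the toolkit's:

    /* LDT tiling sketch -- hypothetical macro names, not the toolkit's.
     * Each 64 KB piece of the low 512 MB of the address space is
     * shadowed by one LDT selector, so the conversion is arithmetic. */
    #include <stdint.h>

    /* selector index = flat >> 16; shift into the index field (<< 3)
     * and set TI = 1 (LDT) plus RPL = 3, i.e. OR in 0x0007           */
    #define FLAT_TO_SEL(flat)  ((uint16_t)((((uint32_t)(flat) >> 16) << 3) | 0x0007))
    #define FLAT_TO_OFF(flat)  ((uint16_t)((uint32_t)(flat) & 0xFFFF))

    /* pack into the usual 16:16 form: selector in the high word      */
    #define FLAT_TO_1616(flat) (((uint32_t)FLAT_TO_SEL(flat) << 16) | FLAT_TO_OFF(flat))

An 8192-entry LDT at 64 KB per selector is also exactly where the 512 MB
private address space limit comes from.)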

Re: Project Definition

Posted: Sat Jan 05, 2019 5:31 am
by admin
#1061 Re: Project Definition

Daniel Lee Kruse
Jul 29, 2004
--- In osFree@yahoogroups.com, "Stepan Kazakov" <madded@m...> wrote:

> On Thu, 29 Jul 2004 05:43:44 -0000, Daniel Lee Kruse wrote:

[snip]

> Hey, guys.
>
> Everyone here should know something about reality.
> _All_ OS/2 physical device drivers, including DEVICE and BASEDEV, are
> 16-bit.
> _Any_ non-PM program with screen/keyboard/mouse access is a 16-bit
> program.
> Many of the 32-bit DOSCALLS are thunks to 16-bit DOSCALLS code.
> There is so much 16-bit crap and so many limitations in the system
> that if you remove it, almost no OS/2 program will work.
> If you want to emulate all this old crap, you have to do all those
> ugly things like LDT tiling, the 512 MB private address space
> limitation, HMA, etc., etc.
> And you will get just another OS/2 with all its limitations.
>
> Does anybody really want that? ;)

Hi all,

I would then suggest that, if we do the 16-bit parts, we make them a
legacy OS/2 emulation layer.

I would suggest that we skip 32-bit support, too, and jump straight to
64 bits; 32-bit support could be another legacy emulation layer.
Granted, 64-bit hardware isn't hugely widespread (yet), but it will be.
My 2 cents is to target 64 bits initially and exclusively. The 16/32-bit
thunking/emulation can come later, IMHO.

Thoughts?

Daniel Lee Kruse

Re: [osFree] Re: Project Definition

Posted: Sat Jan 05, 2019 5:33 am
by admin
#1062 Re: [osFree] Re: Project Definition

Lynn H. Maxson
Jul 29, 2004
If it won't run OS/2 programs, it's not an OS/2 replacement.
You may have a "better" OS, but you won't have an OS/2
replacement.

When Frank Griffin weighs in on this I expect we will revive
the different perspectives that led to a split in the original
FreeOS mailing list and to the creation of the OSFree one:
whether you base the OS on a microkernel (uK) supporting
multiple OS "personalities" (FreeOS) or an emulation layer, a
guest OS, on top of a host OS like Linux (OSFree).

In fact you don't have to make a choice for either until after
you have all the "facts". There you will discuss those "little"
details that you must resolve iteratively in a specification,
analysis, design process. I will simply point out that in theory
(as no one has yet produced a uK with multiple personalities)
a uK approach addresses the issues and wishes you raise.

However, proposing "solutions" before making clear the
"problems" seems like putting the cart before the horse. You
have an ongoing collaborative effort occurring throughout the
specification, analysis, design, construction, and testing
process. Collaboration provides problems of its own related to
organization and efficiency. We haven't decided on the tools,
the means, or the methods of collaboration.

John Baker initiated this thread and set up the website for
project control because he understood the need for orderly
progress. Now I don't know of any other open source project
that has attempted this same level of "management". Maybe
its absence accounts for the number of projects dropped, inactive, or
unchanged.

We come to this endeavor with a mixed bag of skills and skill
levels. We need somehow to firm up the first and raise the
second. That means learn while we earn, while we engage in
this project.

It does "you" no good to have source which you do not
understand and cannot maintain. If design precedes
construction, then source text (documentation) precedes
source code. As a collaborative effort documentation
provides challenges of its own.

We will have to learn to walk here before we can run. That
means we go slow at first to establish our comfort zone, so that
we can run at full speed without tripping and falling.

Above all let's make sure that everyone understands OS/2
first before moving on to OS/2++.<g>

Re: [osFree] Methodology

Posted: Sat Jan 05, 2019 5:34 am
by admin
#1063 Re: [osFree] Methodology

Frank Griffin
Jul 29, 2004
Lynn H. Maxson wrote:

>Frank,
>
>I've started several different responses, gotten deep into
>them, reviewed them, and then erased them. I have no
>argument with your assessment of "what is" in terms of
>current implementations. I fail to come up with any cogent
>insights to resolve the niggly differences that separate us.
>
>

Lynn,

I don't dispute the premise that if we had significantly better tools,
we would need significantly fewer people. The problem I have is that
all of the specific capabilities that you've claimed so far for the tool
you espouse either seem to me to exist already just as effectively in
existing tools or else don't seem to me to be of any consequence (e.g.
the business about single-pass compilers or the evils of a runtime library).

The whole issue would be neither here nor there except that it affects
which language bindings you try for first and which tools people use to
draft header files for specifications.

Maybe more fundamentally, it affects whether people believe that they
can write and maintain a kernel, PM, WPS, etc., from scratch using your
tool if they don't think they could do it without your tool.

For people to make the decision as to whether your tool is, to put it
bluntly, worth waiting for or supporting relative to the timeline of
this project, they need much more concrete information about it than
you've been giving. Forget the theoretical dialogs about software doing
more and people doing less; I doubt there is anyone here who would
disagree. Instead, pick some task which would come up in writing or
maintaining the OS under discussion, show how it would be done in C, how
it would be done in SL/I, and why the SL/I version offers
orders-of-magnitude improvements.

Re: [osFree] Methodology

Posted: Sat Jan 05, 2019 5:36 am
by admin
#1064 Re: [osFree] Methodology

Lynn H. Maxson
Jul 29, 2004
Frank,

As always, you're to the point and on the mark.

You have five stages of the software development process,
all of which we perform manually, i.e. through writing or
visual input. That's a requirement of imperative languages
which require that you input the complete global organization
of a program. The programmer not only writes the parts, but
he must write them in a way that they fit seamlessly both
logically and physically. If you switch from an imperative
language to a declarative one, from the imperative form of
record I/O processing to the declarative one of relational
databases, i.e. SQL, then only one stage, the first one,
specification, is done manually. The remainder becomes the
responsibility of the software.
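
As a rough illustration of that shift (a made-up record type, not from
any OS/2 source), the imperative version spells out the how while the
declarative version states only the what and leaves the access path to
the software:

    /* Illustration only: a made-up record type, not an OS/2 structure.
     * The declarative equivalent of count_dept() is a single statement:
     *
     *     SELECT COUNT(*) FROM employees WHERE dept = 'OS2';
     */
    #include <stddef.h>
    #include <string.h>

    struct employee { char name[32]; char dept[8]; };

    /* imperative record-at-a-time processing: the programmer writes the
     * loop/test/accumulate mechanics himself                          */
    size_t count_dept(const struct employee *recs, size_t n, const char *dept)
    {
        size_t count = 0;
        for (size_t i = 0; i < n; i++)
            if (strcmp(recs[i].dept, dept) == 0)
                count++;
        return count;
    }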

The software as part of the completeness proof does the
analysis, design, and construction (the assembly of the parts
into an optimal whole). The software as part of the
exhaustive true/false proof does the testing.

I could offer you a DOS product, Trilogy, that I picked up in '88
that uses "predicate logic". You have two forms in logic
programming, predicate logic and causal logic. Two
programming languages which use causal logic are SQL and
Prolog. The difference lies in the method of creating the data
used in testing. Causal logic uses data created by the user,
e.g. SQL and user tables. Predicate logic creates the data
from the data definition. It does this for all the variables
defined within a code segment, from a statement on up to a
whole program, by enumerating all sets of all possible values
of the variables. As you know, that could run into the zillions,
far beyond the capacity of any manual system ever
attempted.

So you see the first shift lies in moving from imperative
languages to declarative, dropping from five manual stages to one.
Now you must remember that software will perform the
analysis, design, and construction at least 100,000 times
faster than any single individual and more than 1,000,000
times faster than a 10-member team.

That says that you get a near instantaneous response to any
change to specifications. In fact you will see the result of a
change as fast as you can write it, even if the change
introduces logic which requires a complete global
reorganization of the source.

Now you can't do this with a compiler. You can only do it
with an interpreter. And if you do it with an interpreter, you
can do what you have always been able to do: execute code
segments in isolation rather than having to test them only as
complete programs.

Now you have three basic structured programming constructs:
sequence, decision, and iteration, plus a variation of decision
(the case statement) and several options for iteration (do while
and do until, along with do forever and do once, i.e., iterate
once). A control structure can range from a single-statement
sequence up to a sequence of multiple control structures
(combinations of sequence, decision, and iteration), with any
number of levels nested hierarchically within decision and
iteration. No matter how simple or complicated, one and only
one rule exists for any control structure: it has exactly one
entry point and one exit point.

These entry and exit points make up the path segments. An
exhaustive true/false test of any path segment, again from a
single statement on up, in which the software (using predicate
logic) automatically generates all possible test instances,
provides a level of testing never reached in any effort you or
I have ever participated in manually. And it occurs millions, if
not billions, of times faster (the data generation as well as the
testing) than the incomplete testing we do perform. If we did
not have incomplete testing, we would have no use for beta
testing or beta testers.
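
As a small, hypothetical illustration of what exhaustive path-segment
testing means (not taken from Trilogy or any other tool), take a
one-entry, one-exit decision segment whose input domain is small enough
to enumerate completely:

    /* Hypothetical example: a one-entry, one-exit decision segment and
     * an exhaustive test that enumerates every value of its 16-bit
     * input instead of a hand-picked sample.                          */
    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    /* the path segment under test: clamp v into [lo, hi] */
    static uint16_t clamp16(uint16_t v, uint16_t lo, uint16_t hi)
    {
        uint16_t r;                 /* one entry point, one exit point */
        if (v < lo)      r = lo;
        else if (v > hi) r = hi;
        else             r = v;
        return r;
    }

    int main(void)
    {
        /* enumerate all 65,536 possible inputs for one (lo, hi) pair */
        for (uint32_t v = 0; v <= 0xFFFFu; v++) {
            uint16_t r = clamp16((uint16_t)v, 100, 200);
            assert(r >= 100 && r <= 200);              /* range holds   */
            assert(!(v >= 100 && v <= 200) || r == v); /* identity case */
        }
        puts("all 65536 cases passed");
        return 0;
    }

The point of the sketch is only the scale: 65,536 cases for a single
16-bit variable, generated and checked mechanically, which no manual
test plan approaches.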

Now you have no experience using logic programming with
predicate logic in an interpretive environment. The only
example I've seen, have, and used is Trilogy. It's not all that I
talk about here, but it does use logic programming based on
predicate logic in an interpretive environment. The only other
language based on predicate logic is the Z-specification
language. You can do a Google search on "Z-specification
language" for references.

So I have used logic programming with predicate logic in an
interpretive environment. It did generate all the possible test
instances of the variables.

When I began programming they handed me a flowchart
template and said to design the program before coding it.
Now flowcharting is a visual representation of a virtual
program. That means it doesn't exist until you get off the
flowchart and get to coding. Nowadays you code and get the
flowchart from the source code. Now you can argue that it
might take longer to produce source code without the
flowcharts, but you can't argue about the speed at which the
software can produce them relative to the manual effort needed.

The fact of the matter is, if you have the organized source
code, the software can produce any visual output from it,
including flowcharts, dataflow diagrams, structure charts, and
any of the 16 or so different UML documents. Why? Because
a "logical equivalence" exists that works both ways, from
visual representation to source or from source to visual
representation.

Now that means that the software can generate all "formal"
program documentation of any software methodology,
including OO, from the source. You input the unordered
source. The software as part of the completeness proof
organizes it globally (optimizing it as well) and then uses the
organized source to produce all supporting documentation as
well as executable code. In so doing, it guarantees that each
and every one of them stays in sync.

Now I don't know if you have been counting on your fingers,
had to take off your shoes to be able to use your toes or
called in a friend to increase the number of digits available,
but by this time you should be well over an order of
magnitude improvement in productivity.

All you write are a series of unordered specifications, meaning
you can write them in any order. As you do so the interpreter
does a syntax check, a semantics check, and where possible a
logical organization (an ordering). As it incrementally does so
it provides a visual portrayal in any of the forms mentioned,
offering you feedback on what the current source contains
and what it does not. It does this with every statement
entered when it is entered. Now optionally you can have it
stop after syntax and semantic analysis until you are ready to
start viewing the output.

Now you can sit there and tell me that "...all of the specific
capabilities that you've claimed so far for the tool you
espouse either seem to me to exist already just as effectively
in existing tools or else don't seem to me to be of any
consequence ...", but I don't think so. Otherwise IBM would not
be placing all the emphasis currently on tool integration as it
now tries to reassure its customers. If and when it gets done,
however much it exceeds that of its competitors (if it does), it
will not have the level of integration inherent in this tool.

Now the author of Trilogy is Paul Voda. There's another
Google search for you.

So it's not SL/I or PL/E per se. Their uniqueness lies in their
universality as a specification language capable of specifying
itself with itself: self-defining, and thus self-extensible. They
rely on a foundation of logic programming, predicate logic,
and an interpretive environment allowing selective testing of
any segment or collection of segments. It also produces
selected visual output of flowcharts, dataflows, structure
charts, and UML documents.

It's a world where you only write and rewrite unordered
specifications. The software performs all other
writing...except for text-based documents. I'm more than
willing to consider any tool out there that you feel does all
this or compare it to the use of any combination of tools out
there that does all this.

Full language, logic programming, predicate logic, interpretive
environment. I know PL/I is the closest thing to a "full"
language out there, but it fails on the other pieces. You tell
me what you think offers the equivalent in terms of productivity
to this combination. You may know something. I hope you do,
because it will save me a hell of a lot of work.

But just in the reduced number of different manual writings
(different languages to master) and the reduction in manual
rewriting I can more than meet an order of magnitude
difference in productivity. The net effect is to be able to
effect changes faster than their rate of occurrence. That
means eliminating backlogs. So if your suggested tools do not
eliminate backlogs, they are not in the same class.

Now to end this with a "practical" example relative to this
project. Take any API, e.g. DosOpen, specify it and its rules in
isolation, and test it exhaustively. Then add another API and
repeat the process for both and so on until you include all
OS/2 APIs. That's incremental development. What
incremental compiler do we have available in open source
that will allow you to "mark" selected segments of source
code, process, and test them?
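
For reference, the published 32-bit C form of DosOpen looks roughly like
the sketch below; the flag and parameter names are recalled from memory,
so check the toolkit headers before relying on them. It is exactly the
kind of self-contained unit you could specify and test in isolation:

    /* Sketch of a DosOpen call -- names recalled from the toolkit
     * headers, so verify against the real bsedos.h before use.        */
    #define INCL_DOSFILEMGR
    #define INCL_DOSERRORS
    #include <os2.h>

    APIRET open_readonly(PSZ pszName, PHFILE phf)
    {
        ULONG ulAction = 0;
        return DosOpen(pszName, phf, &ulAction,
                       0UL,                         /* size if created       */
                       FILE_NORMAL,                 /* attributes            */
                       OPEN_ACTION_FAIL_IF_NEW |
                       OPEN_ACTION_OPEN_IF_EXISTS,  /* open, never create    */
                       OPEN_ACCESS_READONLY |
                       OPEN_SHARE_DENYNONE,         /* access and share mode */
                       NULL);                       /* no extended attributes */
    }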

You are correct about the importance of our choice: the
choices you know and a choice like mine that you do not.
With our numbers we cannot use the choices you know to
competitively develop and maintain an OS/2 replacement.

So why would you not want to learn more about a choice you
do not know, if it holds out that possibility? So why don't we
complete at least DosOpen in C (or C++) and PL/E? Maybe
that's all the example needed to understand all choices.

Re: Project Definition

Posted: Sat Jan 05, 2019 5:37 am
by admin
#1065 Re: Project Definition

Daniel Lee Kruse
Jul 30, 2004
--- In osFree@yahoogroups.com, "Lynn H. Maxson" <lmaxson@p...> wrote:

> If it won't run OS/2 programs, it's not an OS/2 replacement.
> You may have a "better" OS, but you won't have an OS/2
> replacement.

Hence my suggestion for the 'legacy' OS/2 emulation layer. And it
shouldn't be touted as an OS/2 replacement until the emulation layer
is added.

>
> Above all let's make sure that everyone understands OS/2
> first before moving on to OS/2++.<g>

Re: [osFree] Re: Project Definition

Posted: Sat Jan 05, 2019 5:38 am
by admin
#1066 Re: [osFree] Re: Project Definition

Lynn H. Maxson
Jul 30, 2004
Daniel Lee Kruse writes:
"Hence my suggestion for the 'legacy' OS/2 emulation layer.
And it shouldn't be touted as an OS/2 replacement until the
emulation layer is added."

Well, Frank Griffin and I argued on opposite sides of this on
the FreeOS mailing list. He favored the layered approach; I,
the uK. He favored concurrent execution with OS/2 as a layer
over a Linux kernel. I favored their concurrent execution
running side-by-side, i.e. multiple OS personalities on a uK
base. In that manner you could include Windows, BeOS, and
others.

Now I really don't want to see that repeated here. I respect
Frank Griffin and his deliberated views too much to want to
engage in an endless debate. It takes away from what we
both want: an open source future for OS/2 independent of IBM.
As an IBM retiree, my OS/2 position in no way implies an
anti-IBM stance: my blood still runs blue. IBM got me hooked
on OS/2. The market pressures that impacted their decision to
retreat did not affect me.

Regardless of whether you prefer a layered approach of a
guest OS/2 replacement on a host Linux or a layered approach
of an OS/2 replacement personality on a uK, you still have to
define it to the level of detail that allows layering it on either. The
only difference lies in what you have to know intimately,
Linux or uK, to complete the detailed picture.

Now I really don't care what specification language is used here,
whether or not it differs from mine. If you choose to leave
them "as is" in C, it has no bearing on my translating them into
PL/E. The only real difference is that you will have two
source forms, code and text, while I will have one: code. Thus
I will only have one form to maintain, while you will have two
to keep in sync.

Now I started my thread on "Programming" to give
non-programmers an insight into the common features of an
OS/2 API. Among those common features, with few exceptions,
the return and parameter variables are all full words. As such
they either contain an address or a signed or unsigned integer.
Three possible data types: pointer, fixed bin (31) signed, fixed
bin (32) unsigned. IMHO these have a clarity that the
multitude of "extended" data types, which in fact introduce
no real "new" data types, do not have.

Until recently you could not say "pointer" in C because it did
not exist as a separate data type. You still cannot say 'fixed
bin (31) signed or (32) unsigned'. Instead the C advocates
created extended data types by giving different names to the
same underlying data type, relying on a macro language to
resolve the extended name to the basic type, and
incorporating the actual data type as a lower-case prefix in a
variable name. Personally, that seems like a hell of a lot of
unnecessary extra work to define a 32-bit word in a language
which has no means to operate on bits: no native bit string
data types. Of course, someone will still insist that C more
closely represents machine architecture even if it can't
represent the one data type, the bit, which composes all the
others.
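
To make that aliasing pattern concrete, here is a simplified sketch of
the kind of typedefs the OS/2 toolkit headers use (abridged and
approximate, not the literal os2def.h text), with the type echoed as a
lower-case prefix on variable names:

    /* Simplified sketch of the aliasing pattern -- approximate, not
     * the literal os2def.h text. Many "extended" names, one
     * underlying machine word.                                        */
    typedef unsigned long   ULONG;     /* roughly fixed bin(32) unsigned */
    typedef long            LONG;      /* roughly fixed bin(31) signed   */
    typedef ULONG           APIRET;    /* API return code                */
    typedef ULONG           HFILE;     /* file handle                    */
    typedef char           *PSZ;       /* pointer to a '\0'-ended string */
    typedef ULONG          *PULONG;
    typedef HFILE          *PHFILE;

    /* ...and the type is echoed as a lower-case prefix on each name:  */
    /*     PSZ pszFileName;  PULONG pulAction;  ULONG ulAttribute;     */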

So we already have the C form of the OS/2 APIs published in
IBM (and others') manuals as well as source code libraries.
Instead of all these manuals we need only one, an open
source one free of copyright restrictions, to call our own.
What does it hurt to support two forms, a C form and a PL/E
form? If at the end of the detailed design, the detailed
algorithmic descriptions, you don't have a PL/E tool, you still
have the up-to-date C tools to use.

If the two forms side-by-side make it easier for the
non-programmers to gradually gain in programming skills, then
with or without a PL/E tool you now have a larger skilled
population to draw upon. Besides, you still have PL/I compilers
so that you can also write C and PL/I code side-by-side to see
which more easily supports development and maintenance.

Re: [osFree] Methodology

Posted: Sat Jan 05, 2019 5:39 am
by admin
#1067 Re: [osFree] Methodology

Carl
Jul 30, 2004
Lynn H. Maxson wrote:

>I think you know the expression "hide in plain sight". No PL/I
>programmer has ever needed the use of a library to write an
>application. Not a library that comes "with" the compiler nor
>one from a third-party. No, the PL/I library comes "builtin",
>through a set of "builtin functions". The difference is that the
>compiler knows the function. Knows. Knows. Knows.
>
>
>

Is there a freeware/shareware/demo PL/I compiler for OS/2 that I could
download? I've not heard of PL/I before & am curious to have a look.

Carl