Part 14 - Mar 15 2002

Old messages from osFree mailing list hosted by Yahoo!

Part 14 - Mar 15 2002

Post by admin »

#391 From: "Lynn H. Maxson" <lmaxson@...>
Date: Fri Mar 15, 2002 5:01 am
Subject: Re: # 29 - a kernel which is second to none


Seth writes:
"This discussion is way above my head - but I liked what I read
from Lynn Maxson about doing all the hard work upfront and
letting the tools do code writing. ... "

Generally I follow the guidelines of "Let people do what software
cannot and software what people need not". Fortunately software
lacks intrinsic intelligence and thus intellect, a quality that
varies but is present in people. On the other hand software is
perfectly non-intelligent, possessed of a clear clerical view of
the universe. It will do the dullest, most complex and
complicated clerical tasks unfailingly and tirelessly without
concern for how long it takes or when it will get done. It may
not have the shortcuts, the efficiencies, that human intellect
allows, but even in a most obtuse, inefficient manner it can
produce equivalent results in a fraction of the time and at a
fraction of the cost.

Clearly we have a "proper" division of labor defined in theory.
We need simply to improve our practice accordingly: increasingly
shift clerical tasks from people to software. That's how we have
justified and continue to justify our existence in client
accounts: improve their processes such that fewer people can do
the same work or the same people can do more work.

The most significant gain in programmer productivity occurred in
the transition from first generation (machine language) to second
generation (symbolic assembler) language. In large part this was
due to the shift in clerical effort from the programmer to the
software. The transition in imperative languages from second
generation to third (HLL) had lesser significance in removing even
more detailed clerical writing effort. The transition from
imperative (first, second, and third generation) languages to
fourth generation, declarative languages based on logic
programming should have had as great an effect on productivity as
had that from the first to the second. Somehow the industry, with
the exception of SQL, never adopted fourth generation
wholeheartedly. As a result it has been mired in a cost spiral
that seems without end.

There's no doubt in my mind based on empirical evidence, e.g. AI
expert systems, neural nets, SQL, Prolog, etc., that fourth
generation declarative languages based on logic programming,
whether clausal or predicate logic, are superior in terms of
productivity over any third (or lower) generation imperative
language. If I had to pick a third generation language, I would
pick PL/I over any form of C, C++, C#, or JAVA. Somewhere
in between the two groups I would have LISP, APL, and Forth. If
the issue is productivity, go to where the greater productivity
lies.
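
To make the imperative/declarative distinction concrete, here is a small illustration of my own, with invented data: the C version spells out how to walk the data, while the equivalent declarative request, shown as SQL in the trailing comment, states only what is wanted.

```c
#include <stdio.h>

/* Imperative (third generation): say *how* to find the result, step by step. */
struct employee { const char *name; int dept; long salary; };

static const struct employee staff[] = {
    { "able",  10, 52000 },
    { "baker", 20, 61000 },
    { "chen",  10, 70000 },
};

int main(void)
{
    /* The loop, the test, and the output are all spelled out by the
       programmer -- the clerical part of the job. */
    for (size_t i = 0; i < sizeof staff / sizeof staff[0]; i++)
        if (staff[i].dept == 10 && staff[i].salary > 55000)
            printf("%s\n", staff[i].name);
    return 0;
}

/* Declarative (fourth generation): say *what* is wanted and let the software
   determine the how, as in SQL:

       SELECT name FROM staff WHERE dept = 10 AND salary > 55000;
*/
```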

However, each generation builds upon the previous. Symbolic
assembly translates into machine language. HLLs normally involve
some interim translation into a symbolic assembly form. And a
fourth generation language has to translate into an imperative,
ultimately machine language form. It makes no sense to me why a
higher level does not also include the option of all the lower.
Thus a fourth generation language (logic programming) should
support a proof process that accepts both assertion and
assignment statements. Among those assignment statements should
be those which translate directly into machine language.

Now the principal obstacle we face is not one of intelligence or
intellect, but simply manpower. Competitively our small numbers are
completely outmanned by those of Microsoft, IBM, Linux, et al.
There is no doubt that their manpower need is productivity based.
Double their productivity halves their need. The major limit to
their productivity does not lie with the languages available, but
with the number of different source languages used (specification,
analysis, design, construction, and testing), their piecemeal
(non-seamless) implementation, restrictions on implementations,
and the people organization necessary to attempt to keep the whole
in synchronization. Addressing these issues effectively changes
the manpower equation significantly, particularly the shift from
manual to software labor.

It may not seem clear that the translation of user requirements to
formal specifications is all that is necessary. After all we now
perform an analysis (another language source) creating a logically
equivalent form. We then translate this analytical form into
another logically equivalent form in design (another language
source). Then finally we translate this design form into a
logically equivalent form of source code.

All these translations with rare exceptions occur manually. It's
little wonder that from one translation to the next something
gets lost (or gained) in translation. That's what happens when
you engage intellect in a clerical task. Let's be quite clear
that logically equivalent translations are clerical in nature. All
in theory derive from the first formal source, the specifications.
It would appear then that if the intellect engages in writing the
specifications, which the software cannot do, the software
can do the rest, including all the necessary visual translated
forms.

We have the language as a tool which limits us both in what we can
express and in how we can express it. We have the implementations of
the languages as tools which place further limits on what we can
do with the language. Toss out the limits on language, allowing
the full range of formal logic expressions (operators and
operands). Toss out the limits of existing tools, particularly
with respect to source files as input and restricting the scope of
compilation to a single external procedure. Toss out separate
editors, compilers, linkers, etc. and go to a single interactive,
interpretive system which produces all the visual output after
organizing the source along with the executable. Do all this and
a few people can match the efforts of a hundred times their number.

All these "improper tools" contain the specifications, the design
if you will, for a single, integrated tool requiring only one
written form (source) from which the software produces all the others.
This is your killer app regardless of OS.<g>

Re: Part 14

Post by admin »

#392 From: "Gregory L. Marx" <gregory.marx@...>
Date: Fri Mar 15, 2002 1:18 pm
Subject: Re: Kernel/mikrokernel architecture


On Thu, 14 Mar 2002 09:12:59 -0800 (PST), Lynn H. Maxson wrote:

<snip>

>So I don't feel the need for freeos or osfree to the extent that
>others do to escape the yoke of IBM or Microsoft. I've
>escaped.<g> But if someone is really interested in a no
>compromise, high-performance, low resource usage, highest quality,
>highest reliability, universal OS, be aware that it certainly is
>possible. Though it doesn't particularly speak to any special
>need of mine, except possibly intellectually, I'm more than
>willing to participate and show that it is possible to have the
>best without compromise.

Reading this gives me the same chills up my spine as when I first read of the
Win32-OS2 project ...

What you are describing (if I'm understanding correctly) is nothing more than a
complete, 100% different approach to not only developing
software, but also a full and utter paradigm change in how the OS and
applications interact with each other and the end user ...

Greg

Re: Part 14

Post by admin »

#393 From: "Lynn H. Maxson" <lmaxson@...>
Date: Fri Mar 15, 2002 6:17 pm
Subject: Let's resolve the issue


We, John Martin (JMA) and I, obviously have a difference of
opinion about the efficacy of placing an OS/2 CPAPI layer (of
known breadth and unknown depth) on top of various other OS kernel
APIs. In short how much work is required to map from a host OS
(CPAPI[host]) to a guest OS (CPAPI[guest]), in this instance a
CPAPI[OS/2].

If we choose a Linux host, we have the following linkage to
maintain: CPAPI[Linux] <==> CPAPI[OS/2]. Now we know they are not
identical, otherwise we would not need a layer. That means they
differ. The degree of difference determines what amount of
filtering is required, where "filter" is a well-developed UNIX
euphemism for mismatched interfaces.<g> So where could they
differ?

An API has a name, an ordered parameter list, an individual type
per parameter, a defined set of parameter input and output values,
and an rc (return code) offering different output values. A
mismatch in any of these results in a need for a two-sided (input
and output) filter. A different name, a different order, a
different type, a different input value, a different output value.

This implies a complete knowledge of both CPAPIs as well as the
filters used. But we have another mismatch possibility as well,
one of function type: that we do not have a complete overlap.
This requires a different set of (compensatory) filters. Then you
have the possibility of function nuance differences, seemingly the
same but not quite.
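
To make the filter idea concrete, here is a minimal sketch in C. The guest and host signatures below are simplified, hypothetical stand-ins (not the real DosOpen or open() prototypes); the point is only how a mismatch in name, parameter order, types, and return-code convention forces translation on both the input and output sides.

```c
#include <stdio.h>

/* --- Hypothetical, simplified host (Linux-like) side -------------------- */
/* Returns a handle >= 0 on success, or a negative error number.            */
static int host_open(const char *path, int flags)
{
    (void)flags;
    return (path && path[0]) ? 3 : -2;   /* stub: pretend handle 3, or error */
}

/* --- Hypothetical, simplified guest (OS/2-like) side -------------------- */
/* Different name, parameter order, types and return-code convention: the   */
/* handle comes back through a pointer and 0 means success.                 */
typedef unsigned long APIRET;
typedef unsigned long HFILE;

static APIRET GuestOpen(unsigned long open_mode, const char *file_name,
                        HFILE *handle_out)
{
    /* Input-side filter: reorder parameters and map guest mode bits onto
       the host's flag convention. */
    int host_flags = 0;
    if (open_mode & 0x1UL) host_flags |= 0x1;    /* hypothetical read bit  */
    if (open_mode & 0x2UL) host_flags |= 0x2;    /* hypothetical write bit */

    int rc = host_open(file_name, host_flags);

    /* Output-side filter: map the host's negative-error result onto the
       guest's pointer-plus-return-code convention. */
    if (rc < 0) {
        *handle_out = 0;
        return 2UL;                              /* hypothetical guest error */
    }
    *handle_out = (HFILE)rc;
    return 0UL;                                  /* guest success */
}

int main(void)
{
    HFILE h = 0;
    APIRET rc = GuestOpen(0x1UL, "config.sys", &h);
    printf("rc=%lu handle=%lu\n", rc, h);
    return 0;
}
```

Multiply a shim like this by every API in the CPAPI and you have the surface that must be rewritten whenever the host changes.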

Now a filter however short still provides an increased instruction
path length, i.e. a performance loss. Should you decide to go
from CPAPI[Linux] to CPAPI[Windows] you have a complete
refiltering job, a complete rewrite. If a CPAPI layer for Linux,
Windows, or OS/2 is isolated from the hardware, that's the least
of your problem. If you know the hardware answer for one,
assuming maximum comprehension, you have the hardware answer for
the remaining, i.e. the same hardware isolation.

There's no argument about the hardware isolation. We don't need
filters in the CPAPI layer for that. The problem lies in the
software fit and what it takes to make it seamless.

So do I believe that we can take a set of MKAPIs providing
hardware isolation and make them in common with a set of software
APIs leading up to a CPAPI for a particular OS? Yes. Do I
believe that the same requisite knowledge to layer a guest OS on a
host suffices in time and effort to construct them to run in
parallel, i.e. horizontal layering? Yes. Do I believe that the
aggregate maintenance effort is less for the horizontal layering
than vertical? Yes. Do I believe that I can achieve greater
overall performance with horizontal layering over vertical? Yes.

The only thing confusing me at the moment is how anyone with any
level of programming experience would come to a different
conclusion. You have development where you start from scratch and
ongoing maintenance after that point. Historically maintenance
costs exceed development. Technically you can probably develop a
vertical layering of two OS personalities, where one, e.g. Linux,
already exists, in the same time frame that it takes to place one on a
MKAPI. So, to develop two on a MKAPI takes longer. But once you
have both and possibly a third, e.g. Windows, the maintenance cost
due to the effect of the low coupling between the OS personalities
becomes less. So the trade-off becomes one of "pay me now"
(horizontal layering) or "pay me later" (vertical layering). In
the aggregate, considering both development and ongoing
maintenance cost, the horizontal layering wins hands down over the
vertical.

So those who look for some "quick and dirty", easy means of
associating OS/2 with another host like Linux using vertical
layering do so without consideration of the longer term costs
incurred. Anyone who casually says that the hardware independence of a
CPAPI somehow eases the task of handling software dependencies in a
vertical layer approach has not paid attention to their histories.

Re: Part 14

Post by admin »

#394 From: "Lynn H. Maxson" <lmaxson@...>
Date: Fri Mar 15, 2002 7:54 pm
Subject: Re: Kernel/mikrokernel architecture


Gregory Marx writes:
"... What you are describing (if I'm understanding correctly) is
nothing more than a complete, 100% different approach to not only
developing software, but also a full and utter paradigm change in
how the OS and applications interact with each other and the
end user ... "

Not quite. I accept the software development cycle of
specification, analysis, design, construction, and testing. That
paradigm remains. Currently each of these is performed manually.
My paradigm shift lies in making at least the three stages
following specification the responsibility of software, i.e.
requiring manual effort only in the writing of specifications.

I have not included testing here to avoid confusion with those
unfamiliar with logic programming. In truth I would use predicate
logic, which specifies the ranges of values that variables may
assume, and let the software as part of its exhaustive true/false
proof process automate the testing process as well. All this
means is that I would have no use for beta testers and their
random (as opposed to exhaustive) test results.
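
As a toy illustration of that exhaustive true/false checking over declared ranges, the C sketch below (my own hypothetical example, not the proposed tool) states the range a variable may assume and then tests an assertion against every value in that range rather than against a random sample.

```c
#include <stdio.h>

/* Implementation under test: a common but incomplete leap-year rule
   (divisible by 4 but not by 100), missing the 400-year exception. */
static int leap_impl(int year)
{
    return (year % 4 == 0) && (year % 100 != 0);
}

/* The specification as a predicate: the full Gregorian rule. */
static int leap_spec(int year)
{
    return (year % 4 == 0) && (year % 100 != 0 || year % 400 == 0);
}

int main(void)
{
    int failures = 0;

    /* Declared range for the variable 'year': every value gets checked,
       not a random sample of them. */
    for (int year = 1; year <= 10000; year++) {
        if (leap_impl(year) != leap_spec(year)) {
            if (failures < 5)
                printf("specification violated at year %d\n", year);
            failures++;
        }
    }
    printf("%d mismatching year(s) in 1..10000\n", failures);
    return 0;
}
```

Random sampling could easily miss the handful of century years where the two functions disagree; exhaustive checking over the declared range cannot.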

But that's another issue for what you do after you have an
executable, the second phase of software development. The first
phase is getting to the executable from the user requirements, of
converting them into formal specifications, performing analysis
and design, and then construction of source compiled into an
executable.

So the issue is getting to the executable in the shortest time and
at least cost. The long and short of it is to reduce the number
of people required and what the remaining people have to do. We
know someone has to express user requirements. We know that any
user requirement that we can execute in software is expressible in
a specification.

Thus if users can express their requirements and in communication
with IT agree upon their expression as specifications, then these
should be the only two forms of manual writing necessary for the
remainder of the software development cycle, at least for
analysis, design, and construction. That means within the cycle,
i.e. after the expression of user requirements, the only source
for all other output--the dataflows of analysis, the structure
charts of design, the optimally organized source
specifications--is the written, unordered specifications.

Now oddly enough no one looks quizzically at software which
produces flowcharts from source code, even though the original
intent of flowcharting was to precede the writing of source code.
At the moment Rational Software and others are looking for means
of software translation of UML documents, i.e. UML source, into
source code. At some point the paradigm reversal of the
flowcharting program will occur for them and they will realize
that all of the UML documentation can be produced by software from
source code.<g>

So I'm not proposing a different approach to developing software.
I have no argument with what gets done and in what order. I am
proposing that how it gets done beyond the writing of
specifications and who does it becomes the responsibility of
software. By turning it over to software the synchronization of
all documentation, the reflection of all changes to
specifications, becomes automatic. The time required to reflect a
change throughout becomes minuscule as does the cost. In fact you
develop the ability to respond to change requests faster than they
occur: elimination of backlogs.

So in the end I don't think "what" we do in software development
needs changing, only "how" we do it, shifting as much of the
clerical effort, e.g. the production of documentation, as possible
to software. That's why it's not a silver bullet. There's
nothing new in "what" we do, but only "how" and "who" does it.

The only other "radical" thing is to have a "true" IDE (Integrated
Development Environment). In spite of how often each vendor calls
its offering an IDE it falls short of the mark. They seemingly do
not understand that the software development cycle, its five
stages, is an integrated development environment. Thus no IDE is
complete unless it offers a seamless integration of the entire
process, i.e. its sub-processes down to every individual activity.

Let's be clear on this point. The software development cycle in
"theory" is a seamless, integrated whole. In practice it never
has been. Never once in the entire history of IT have we offered
a "holistic" set of tools. The only deliberate attempt to do so,
IBM's failed AD/Cycle, was deliberately sabotaged by the
participating vendors, supposedly cooperating in the venture.

The answer is simple. A seamless, integrated process remains so
down to each individual activity. That means only one tool
offering seamless access to and fit among all activities. That
means a standard, open source set of activity interfaces. That
disallows non-standard interfaces. For a vendor, who has to
recover his cost of sales plus a profit, this is a competitive
nightmare. That's why closed source continues to dominate our
industry.

IBM's proposed AD/Cycle was visually represented as three vertical
layers. The first or top layer was the software development cycle
of the five stages in seamless, integrated sequence. The second
or middle layer was the set of individual vendor tools with
horizontal line segments placed to represent which portions of the
top layer a tool supported. The third or bottom layer became the
"famed" data repository, which among other things contained the
interfaces for the activities to which the vendors were to adhere.

In truth if you are a software tool vendor and your gestalt
mechanism (along with the "aha" phenomenon) is working at all, it
takes less than a millisecond to realise that you don't want any
part of this. As a user, one confounded by the number of tools
available and their unwillingness to provide an end-to-end fit
over the entire software development cycle, it takes you the same
millisecond to realise that you do want this.

As a vendor it would be politically incorrect to tell a client
that you won't provide it because it would be committing
competitive suicide. So you get on board in a politically correct
manner, publicly supporting it but privately dragging your feet in
a form of passive resistance. IBM in some foolhardy manner
assumed responsibility for the data repository, which went from an
initial intent to implement it on an OS/2 workstation to eventually
ending up in MVS on an IBM mainframe. As IBM, under intense federal
pressure defending itself from antitrust allegations, wasn't
willing to impose a set of standard activity interfaces or even
willing to define a set of standard activities, it found itself
spending tens of millions of dollars and millions of man-hours in
a losing cause. Eventually it died a "classical death".

The truth is what users need vendors don't want to supply.
Vendors don't want to supply it due to the real risk of
competitive suicide. As a vendor you want to recover your cost of
sales, the initial and ongoing costs incurred, your research
costs, plus some profit. In your market plans those costs are
allocated across an expected volume of sales at a given price
level. You have less risk using proprietary interfaces,
guaranteeing mismatches, and thus barriers to your competition.

The fact of the matter is as a software vendor you cannot afford
to develop the silver bullet that the user wants. It's not a
technical difficulty, but an economic one. In fact anything other
than a fractional overall productivity gain threatens the very
volume of sales you deem necessary. The greater the productivity
gains you offer, and thus the fewer people necessary, the
less likely you are to achieve the necessary volume.

It's relatively easy to specify the ultimate software development
tool. I've done so elsewhere in my description of "The
Developer's Assistant". No one, no independent vendor, no
Microsoft, no IBM, can afford to develop it. You cannot produce a
product offering a "50 times time improvement, 200 times cost
improvement" and, one, recover your investment costs, and, two,
not significantly depress the economics of any software or
services business, e.g. IBM Global Services.

In fact you cannot profit in producing the product. You can only
profit in its use. Once you understand this, then the only one
who can afford (in terms of recovering their costs) to produce it
is the using community itself. You are up against the same
economic consequences of AD/Cycle.

Re: Part 14

Post by admin »

#395 From: "Lynn H. Maxson" <lmaxson@...>
Date: Fri Mar 15, 2002 8:38 pm
Subject: Re: Kernel/mikrokernel architecture


John Martin writes:
"... This is rubbish. So there is a much better way to develop
software around and the biggest companies in the world don't use
it. Now why don't they use it?

Neither IBM nor MS are dumb, there has to be a catch."

I think I have answered this in a response to Gregory Marx. The
answer is quite simple. A tool providing a 50 times time
improvement (productivity) and a 200 times cost improvement would
devastate existing software vendors, IBM Software and Global
Services as well as Microsoft. If you are a publicly held
company, i.e. sensitive to PE ratios and stockholders, making 30
billion dollars gross on 24 billion dollars cost of sales, thus
six billion profit one year, guess what happens when your cost of
sales drops to 120 million (24 billion divided by 200)? Do you
have this feeling that you can
hold to your prices competitively? How much of an incentive does
a competitor have to have to enter your market?

Go back and take a look at IBM's AD/Cycle. No technical
difficulties existed to defining a standard set of activities or
activity interfaces into which all vendor software could easily
fit. It's the other side of open source: standard interfaces.
Just as we could take the set of OS/2 CPAPIs and define a set of
intervening APIs down to the MKAPI and write the corresponding
supporting source, producing a clone of OS/2, open (standard)
interfaces are an even greater threat. Otherwise you miss the
point of the Sun/Microsoft squabble over JAVA, the introduction by
Microsoft of C# and .Net.

Vendors do not deliberately open themselves up to competition if
they can avoid it. It's not because they are evil. Like anyone
they have a self-interest in lowering risk. There's a difference
between being "user-friendly" and "vendor-friendly", one is open
source and the other closed, i.e. proprietary. In certain
instances of user-friendly tools that are not vendor-friendly (at
least until the vendor becomes a user and not a producer of the
tool) no vendor, no profit-oriented vendor, is about to make an
investment in which he cannot recover his costs and make a
profit.

If the user profits and the vendor cannot (except as a user), then
let the users become their own vendor, i.e. produce the product
they want. Don't be angry because someone is not willing to make
an effort to produce a product in whose use you both profit but in
whose development your cost is zero. One of you is being entirely
selfish.<g>

So, yes, there is a catch. IBM is not dumb. MS is not dumb. And
I am not dumb.

"... Please join the FreeOS group and do the wiz-bang kernel
and later we can join forces to place osFree on top of
that kernel. ..."

Well, I guess this is inviting me out of this group and back into
FreeOS from which we both sprang and I never left. Anyone
interested in writing the best possible software of any type for
any purpose on any platform can discuss the means of doing so
there. In the meantime I will ponder in my zen-based madness of
how you can join the forces of one.<g>

Re: Part 14

Post by admin »

#396 From: "JMA" <mail@...>
Date: Fri Mar 15, 2002 10:14 pm
Subject: Re: Kernel/mikrokernel architecture


On Fri, 15 Mar 2002 09:38:06 -0800 (PST), Lynn H. Maxson wrote:

>John Martin writes:
>"... This is rubbish. So there is a much better way to develop
>software around and the biggest companies in the world don't use
>it. Now why don't they use it?
>
>Neither IBM nor MS are dumb, there has to be a catch."
<SNIP>
>Go back and take a look at IBM's AD/Cycle. No technical
>difficulties existed to defining a standard set of activities or
>activity interfaces into which all vendor software could easily
>fit. It's the other side of open source: standard interfaces.
>
I have seen and used JSP (Jackson structured programming) that
I assume could be described as a crude variant of what you are
talking about.
Working with normal business logic it did work
(though the code it generated was horrible).

But we are talking about microkernel/kernel low-level stuff.
I don't think it fits.

It might fit into other parts of the osFree project though,
like PM/WPS and their tools.


>"... Please join the FreeOS group and do the wiz-bang kernel
>and later we can join forces to place osFree on top of
>that kernel. ..."
>
>Well, I guess this is inviting me out of this group and back into
>FreeOS from which we both sprang and I never left. Anyone
>interested in writing the best possible software of any type for
>any purpose on any platform can discuss the means of doing so
>there. In the meantime I will ponder in my zen-based madness of
>how you can join the forces of one.<g>
>
What I'm saying is that this group osFree has a specific focus.
We cannot wander off into a "support everything" project - regardless
of how good that project is.

The discussions you generate belong to a group that wants to do
just that - build the best ever kernel. I don't mind a project that does
that, but this group is not about that.

When FreeOS has a great kernel developed far enough I'd be more
than willing to place the work osFree has done on top of it.

You do the kernel level work, we do the user level
(Dos/Vio/Kbd/Mou, PM, WPS etc.).

But osFree has one goal and you seem to have a much wider goal.

osFree has to focus (given the limited resources we have).





Sincerely

JMA
Development and Consulting

John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================

Re: Part 14

Post by admin »

#397 From: "JMA" <mail@...>
Date: Fri Mar 15, 2002 10:24 pm
Subject: Re: Let's resolve the issue


On Fri, 15 Mar 2002 07:17:48 -0800 (PST), Lynn H. Maxson wrote:

>We, John Martin (JMA) and I, obviously have a difference of
>opinion about the efficacy of placing an OS/2 CPAPI layer (of
>known breadth and unknown depth) on top of various other OS kernel
>APIs. In short how much work is required to map from a host OS
>(CPAPI[host]) to a guest OS (CPAPI[guest]), in this instance a
>CPAPI[OS/2].
>
<SNIP>

>Now a filter however short still provides an increased instruction
>path length, i.e. a performance loss. Should you decide to go
>from CPAPI[Linux] to CPAPI[Windows] you have a complete
>refiltering job, a complete rewrite. If a CPAPI layer for Linux,
>Windows, or OS/2 is isolated from the hardware, that's the least
>of your problem.
>
In what way would writing a new kernel help?
Either you will have to put direct support into the kernel for all the
"personalities" it is to run, which will give you a bloated kernel, or you
have to do the same mapping of all personalities to one generic support
layer, and you will have to do all the "filters".




Sincerely

JMA
Development and Consulting

John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================

Re: Part 14

Post by admin »

#398 From: "Lynn H. Maxson" <lmaxson@...>
Date: Sat Mar 16, 2002 8:37 am
Subject: Re: Let's resolve the issue


John Martin (JMA) writes:
" ... In what way would writing a new kernel help ?
Either you will have to put direct support for all the
"personalities" its to run that will give you a bloated kernel or
you have to do the same mapping all personalities to one generic
support and you will have to do all the "filters". ..."

If we agree that there is a lowest level set of APIs necessary to
support higher level APIs and we agree that the highest level of
the kernel is the CPAPI, then that which is above the MKAPI and up
to and including the CPAPI is an OS personality. If you subscribe
to the belief that the CPAPI is not part of the kernel, then we
clearly have a difference of opinion.

For me it's a matter of effort related to supporting the OS/2
CPAPI. Two proposals exist, one, to support it as a vertical
layer on top of another CPAPI, e.g. Linux, and, two, to support
both CPAPIs as peer (horizontal) OS personalities with both based
on a common MKAPI layer.

Now the issue is fairly simple. In a previous response I outlined
what is necessary in terms of "filters" to compensate for the
differences that exist in translating from one CPAPI to another in
a vertical relationship. Filters, per se, represent processes to
connect two dissimilar interfaces. If I write an OS/2 personality
on top of the MKAPI level, I have no filters between that level
and the OS/2 CPAPI. There are no dissimilar interfaces: after the
decomposition creating the logical hierarchy of API paths from
MKAPI to CPAPI, it is a seamless fit with no filters in any seam.
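
A minimal sketch of the contrast in C, with entirely invented API names and stubs on both sides: in the horizontal case the OS/2-style call decomposes directly into microkernel primitives designed to fit it, so nothing is translated in the seam; in the vertical case the same call must first be filtered through another CPAPI's conventions.

```c
#include <stdio.h>
#include <stdlib.h>

typedef unsigned long APIRET;

/* Hypothetical microkernel (MKAPI) primitive; the name is invented here.
   The malloc stub stands in for real page mapping. */
static int mk_alloc_region(size_t bytes, void **addr_out)
{
    *addr_out = malloc(bytes);
    return *addr_out ? 0 : -1;
}

/* Horizontal layering: an OS/2-style allocation API written directly on
   the MKAPI.  Because the decomposition from CPAPI down to MKAPI is one
   designed hierarchy, the conventions match and nothing is translated. */
static APIRET Os2AllocMem(void **base_out, unsigned long cb)
{
    return mk_alloc_region((size_t)cb, base_out) == 0
        ? 0UL
        : 8UL;                             /* hypothetical out-of-memory code */
}

/* Vertical layering: the same OS/2-style API expressed on top of another
   CPAPI (a Linux-like mmap stand-in).  Every call passes through a filter
   reconciling conventions that were never designed to meet: different
   parameters, different flag meanings, different failure signalling. */
static void *host_mmap(void *hint, size_t length, int prot, int flags)
{
    (void)hint; (void)prot; (void)flags;
    return malloc(length);                 /* stub standing in for the host */
}

static APIRET Os2AllocMemVertical(void **base_out, unsigned long cb)
{
    void *p = host_mmap(NULL, (size_t)cb, /*prot*/ 0x3, /*flags*/ 0x22);
    if (p == NULL)                         /* remap the failure convention  */
        return 8UL;
    *base_out = p;                         /* remap the result convention   */
    return 0UL;
}

int main(void)
{
    void *a = NULL, *b = NULL;
    printf("horizontal rc=%lu, vertical rc=%lu\n",
           Os2AllocMem(&a, 4096UL), Os2AllocMemVertical(&b, 4096UL));
    free(a);
    free(b);
    return 0;
}
```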

I simply maintain that it is easier or certainly no more difficult
to write an OS/2 personality based on a MKAPI than it is to write
an OS/2 CPAPI layer support on top of a Linux CPAPI. That's in
terms of development. When it comes to maintenance with my
separate OS personality I have no concern with respect to changes
in any other OS personality. Therefore no change to it will
necessitate a change in my OS personality. So I expect not only
is the time and effort in development essentially equal, but that
a separate OS personality based on an MKAPI requires less time and
effort to maintain.

Now an issue exists in what occurs if you want to run Linux and
OS/2 applications concurrently and you don't want the expense of
VPC or VMWare. If you have a Linux personality on top of the same
MKAPI, then you don't need VPC or VMWare. You certainly have a
larger kernel at least in terms of virtual storage. It's only
when they are executing concurrently that the demand on internal
storage comes into play. In that instance it certainly is no more
than that which occurs with VPC and VMWare.

Now what happens when you want to expand your OS/2 personality in
terms of function, i.e. new APIs? You have only to incorporate
them in the same seamless (non-filtered) manner from the MKAPI
level to the now enhanced CPAPI. This is true even if the
enhancements result from new APIs at the MKAPI level. If you have
two OS personalities, e.g. Linux and OS/2, then you can write
applications to the expanded OS/2 CPAPI without concern for what
occurs with Linux. That independence and its impact on
maintenance (the synchronization of two interfacing CPAPIs) favors
the use of a common MKAPI.

With two independent OS personalities there are no filters
required by definition.

"... I have seen and used JSP (Jackson structured programing) that
I assume could be decribed as a crude variant of what you are
talking about. Working with normal business logics it did work
(though the code it generated was horrible).

But we are talking about microkernel/kernel lowlevel stuff.
I dont think it fits. ..."

I'm talking about using a logic programming approach in which the
only manual writing that occurs is of specifications, with the
remaining stages of analysis, design, and construction occurring
via software. That's how Prolog works, SQL works, AI expert
systems work, and neural nets work. That says you don't need
people to do analysis. You don't need people to do design. You
don't need people to translate specifications from an initial
language into specifications in another language, which occurs
currently. You eliminate people doing any writing within the
software process except for specifications. You allow the
software to write all the visual output from analysis, design, and
construction from the same set of now software-organized
specifications.

The point is if you eliminate what people have to do through
automation you reduce the number of people necessary. If you take
just the first four stages--specification, analysis, design, and
construction--it means you invest no people effort in analysis,
design, and construction.

You have to understand that that's the way logic programming has
worked successfully now for over thirty years. It's part of the
advantage in going from a third-generation, imperative HLL to a
fourth-generation declarative HLL: analysis, design, and
construction occur in the software.

As the logical equivalence of the specifications is (in theory)
maintained throughout analysis, design, and
construction, whatever effort, whatever number of people now
engaged in writing for those stages, disappears. If they were by
chance equal in number, then three-quarters (3/4) are no longer
needed. That reduces a team of 100 to 25.

The point is that once you have written a specification you have
all that is necessary. The software can finish it up from
there. You can only have two errors in specifications: incorrect
and incomplete. As you allow the software to carry it beyond that
point you very quickly, in fact millions of times more quickly,
can determine if incorrect or incomplete specifications exist.

You spend an hour writing a specification. The software takes it
in along with all the other specifications you have submitted
earlier and in three seconds (not three weeks) you have the effect
of that change on the whole. It takes you an hour to correct a
specification or submit one to compensate for an incomplete one.
It's three seconds to see the result of change, the production of
a new executable.

The point is that you have no reason to "freeze" development,
forcing change requests to be delayed instead of introduced as
soon as they are specified. You don't need to batch them.
You can accept them in any order, specifically in the order in
which they occur to the user defining the change. The only delay
effectively from beginning to end is the time it takes to
translate the user request into one or more formal specifications,
adding them to the set under consideration. In effect you can
implement changes faster than they occur to the user: the response
rate exceeds the user inter-arrival rate of change requests. No
backlog. No delay.

For all this to occur at these speeds implies a seamless
processing of a specification statement once written. This says
that the editing (writing) of a statement immediately submits it
to syntax analysis, semantic analysis, and incorporates it
properly into its place in the overall logical hierarchy generated
by the software. So there is no separation of editing from
compiling from execution. They are all part of a single process.
This is exactly how interpreters have worked since their
inception. The only difference is using the two-stage proof
engine of logic programming which takes unordered input, orders it
optimally, and then compiles it into an executable form.
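
The "takes unordered input and orders it" part of that claim is mechanical dependency ordering, exactly the kind of clerical work software does well. The C sketch below is a toy model of my own, not the actual two-stage proof engine: each "specification" names what it produces and what it consumes, and the program derives a usable order no matter how the statements were written.

```c
#include <stdio.h>
#include <string.h>

/* A toy "specification": a name, what it produces, and what it consumes.
   They are written here deliberately out of order. */
struct spec {
    const char *name;
    const char *produces;
    const char *consumes;     /* "" means no dependency */
};

static struct spec specs[] = {
    { "report",  "report",  "totals"  },
    { "totals",  "totals",  "records" },
    { "read_in", "records", ""        },
};

enum { NSPECS = sizeof specs / sizeof specs[0] };

int main(void)
{
    int emitted[NSPECS] = { 0 };
    int remaining = NSPECS;

    /* Repeatedly emit any specification whose input is either nothing or
       something already produced -- a crude dependency (topological) sort. */
    while (remaining > 0) {
        int progress = 0;
        for (int i = 0; i < NSPECS; i++) {
            if (emitted[i])
                continue;
            int ready = (specs[i].consumes[0] == '\0');
            for (int j = 0; j < NSPECS && !ready; j++)
                if (emitted[j] && strcmp(specs[j].produces, specs[i].consumes) == 0)
                    ready = 1;
            if (ready) {
                printf("order: %s\n", specs[i].name);
                emitted[i] = 1;
                remaining--;
                progress = 1;
            }
        }
        if (!progress) {                   /* some input is never produced */
            printf("incomplete specification set\n");
            break;
        }
    }
    return 0;
}
```

The same mechanism makes an incomplete set of specifications visible immediately: if nothing can be ordered next, something is missing.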

To say that Jackson Structured Programming is a "crude variant" of
this is to give it too much credit.<g> I won't go into the code
generation quality possible with this method. Let me just say
that the best of assembly language programmers would take a
lifetime to match what this system can produce in seconds. Add to
that code optimized for the particular processor in use.

That's the difference that logic programming brings to the table.

Re: Part 14

Post by admin »

#399 From: "JMA" <mail@...>
Date: Sun Mar 17, 2002 7:48 pm
Subject: Re: Let's resolve the issue


On Fri, 15 Mar 2002 21:37:52 -0800 (PST), Lynn H. Maxson wrote:

>John Martin (JMA) writes:
>" ... In what way would writing a new kernel help ?
>Either you will have to put direct support for all the
>"personalities" its to run that will give you a bloated kernel or
>you have to do the same mapping all personalities to one generic
>support and you will have to do all the "filters". ..."
>
>If we agree that there is a lowest level set of APIs necessary to
>support higher level APIs and we agree that the highest level of
>the kernel is the CPAPI, then that which is above the MKAPI and up
>to and including the CPAPI is an OS personality. If you subscribe
>to the belief that the CPAPI is not part of the kernel, then we
>clearly have a difference of opinion.
>
It's not about your opinion or mine!

Here's some OS theory:

*mk
Small to very small layer of code to hide the hardware differences
from the main kernel.

*kernel
The meat of the services that support the user-level API and applications.
A kernel may be built on top of a mk or by itself (doing the mk job itself).

*userLevel
Applications and system utilities.

Now, let's talk OS/2. On OS/2, kernel services and device drivers mainly
run at ring 0 protection level (refer to the Intel documentation).

The CPAPI is implemented in the DOSCALL1.DLL as a ring 3 DLL.
Its job is to take API requests from applications and process them
either by its own functions or by calling the kernel using thunks and
call gates (32/16 bit conversions and ring crossing).

So the best way to describe DOSCALL1.DLL and thus the CPAPI is
to say it is a user-level DLL outside the kernel.
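
A rough sketch in C of the structure being described, with invented names throughout (this is not the actual DOSCALL1 source): a user-level API entry point either satisfies a request with its own ring 3 logic or packages the arguments and crosses into the kernel.

```c
#include <stdio.h>

typedef unsigned long APIRET;

/* Stand-in for the ring transition (call gate / system call).  In the real
   system this is where the 32/16-bit thunking and the ring 3 -> ring 0
   crossing would happen; here it is only a stub. */
static APIRET kernel_call(unsigned int function, void *args)
{
    (void)function; (void)args;
    return 0UL;                          /* pretend the kernel succeeded */
}

/* Hypothetical user-level (ring 3) API entry points, in the spirit of a
   CPAPI DLL that sits outside the kernel. */

/* Some requests can be answered entirely in ring 3 ... */
static APIRET ApiQueryVersion(unsigned long *major, unsigned long *minor)
{
    *major = 4UL;                        /* values cached in the DLL's own data */
    *minor = 50UL;
    return 0UL;
}

/* ... others must be packaged up and handed to the kernel. */
struct open_args { const char *name; unsigned long mode; };

static APIRET ApiOpen(const char *name, unsigned long mode)
{
    struct open_args a = { name, mode };
    return kernel_call(/* hypothetical function number */ 70u, &a);
}

int main(void)
{
    unsigned long maj = 0, min = 0;
    ApiQueryVersion(&maj, &min);
    printf("version %lu.%lu, open rc=%lu\n", maj, min, ApiOpen("readme", 1UL));
    return 0;
}
```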

The distinction between kernel and user-level mode differs a bit between
different kernels. Linux apps can use kernel services and sidestep its
"CPAPI". But OS theory wants clearly defined layers, and a new mk
OS should try to use clearly defined lines, since they would promote
order and would be much easier to build/maintain/change.


>I simply maintain that it is easier or certainly no more difficult
>to write an OS/2 personality based on a MKAPI than it is to write
>an OS/2 CPAPI layer support on top of a Linux CPAPI.
>
You are quite right, but Linux (as an example of a possible kernel)
is there already and well tested. Your suggested kernel is not even
on the drawing board.

And, the focus of this group is supporting the user-level API and function
of OS/2, not building another kernel. Others are better at that and we
want to participate with them.




Sincerely

JMA
Development and Consulting

John Martin , jma@...
==================================
Website: http://www.jma.se/
email: mail@...
Phone: 46-(0)70-6278410
==================================

Re: Part 14

Post by admin »

#400 From: "Lynn H. Maxson" <lmaxson@...>
Date: Mon Mar 18, 2002 6:25 pm
Subject: Re: Let's resolve the issue


John Martin (JMA) writes:
" ... Here's some OS theory: ..."

At least I understand how you see things. That suffices for
communication purposes. I have no need to dissuade you from any
view. I would think with the lesson of ODIN staring us in the face
we would have some trepidation about layering one CPAPI over
another.

On the other hand I think I will pursue osfree in seeking a
microkernel solution supporting OS/2, Linux, Windows, and BeOS,
leaving freeos to pursue an OS/2 CPAPI layer above a Linux CPAPI
layer. It should be interesting. At least it will leave those
who favor one camp over the other a clearer means of deciding
where to participate.