
Part 32 - Dec 30 2003

Posted: Sat Dec 22, 2018 2:03 pm
by admin
#935 Re: [osFree] Digest Number 203

Frank Griffin
2003 Dec 30 6:42 PM
Cristiano Guadagnino wrote:

> Hi Frank,
> I have to say that your use of English is quite strange to me. I'm not a
> native speaker, so it's probably my fault.
> BTW, this thread has run too long for me, and it's getting quite harsh.
> I'm sorry for what has been my fault in making it like this. The thread
> is closed for me.
>

OK. I've been told in the past that I speak IBM assembler better than
English, so you may not be too far off the mark.

A (belated) Merry Christmas to you and yours as well.

Re: Part 32

Posted: Sat Dec 22, 2018 2:06 pm
by admin
#936 Requirements and Specifications

Frank Griffin
Dec 30 6:47 PM
Lynn H. Maxson wrote:

> I suggest that we set aside our differences to cooperate
> where they don't exist: providing a detailed specification for
> an OS/2 replacement.


It's been stated several times that we need to do requirements and
specifications, but I've never been exactly sure why.

Certainly this would be true in a "normal" software project, but in this
case we are trying to clone something for which extensive documentation
already exists. Not only is there no need to re-document the APIs, but
it is dangerous to do so, since we may inadvertently change something
upon which existing applications rely.

*What* we need to do is pretty much forced upon us by the requirement
that existing OS/2 applications be supported. The only area for
improvisation is *how* we do it.

Given that, I don't see that any osFree documentation outside of that
already provided by IBM can be written until we decide upon what the
OS/2 CP/PM/?? APIs will be based. Put more generally, you can think of
the design as a plane area containing a circle representing the API
boundaries. Within the circle, we have no choice: we know exactly how
things need to work. What happens at the circle boundary depends on how
much of the stuff outside the circle we intend to write ourselves, and
where we intend to get the rest.

If we intend to use someone else's (micro)kernel, whether Linux or
another, where the kernel API presents the same "level of service" as
the OS/2 CP API, in the sense that we need to provide nothing more than
a "pass-through" layer from one API to the other, then any
documentation of that layer will be tightly tied to the kernel API and
cannot be written without knowledge of it. In essence, we will be
writing code
to link two APIs where we have control of neither one; each will have
its own documentation, and all we need to document is how we intend to
use the services of one to emulate the services of the other.

If, on the other hand, we intend to either provide the kernel ourselves,
or base it upon something whose level of service requires us to provide
two or more layers between it and the OS/2 CP API, then at least one of
those layers will be passing between two APIs we control, and in that
case I agree that we need requirements and specifications.

I make this point because Lynn advanced the argument that we have other
work (requirements and specifications) that can be done and upon which
we are agreed. I suspect that is not the case. If we base the work on
Linux or any kernel which offers the same level of services, then the
requirements and specifications are already a matter of record, and the
specification of the work that needs to be done is highly specific to
that choice.

Re: Part 32

Posted: Sat Dec 22, 2018 2:07 pm
by admin
#937 Re: [osFree] Digest Number 203

Frank Griffin
Dec 30 8:01 PM
Lynn H. Maxson wrote:

> Anytime Frank makes an argument
> based on safety in numbers I cringe. To me such reliance
> represents a threat to the very soul of open source. I have no
> objection for as many people who like to participate. I have
> deep concerns when that becomes a need to do so.


Sorry, but there is safety in numbers whether or not that chills you.
It's not true (necessarily) for projects which are purely software (read
on below), but it is for projects where legwork research and QA are
eating 95% of your development budget.

We've discussed before, and (I think) agreed upon, the idea of having
osFree or FreeOS use source or binary drivers from Linux or Windows
simply because people writing drivers for new hardware will write for
these platforms and not ours. Using somebody else's kernel is not
conceptually different.

If you are going to base *anything* you do for an OS/2 clone on software
provided by other people, then you want to make sure of the following
things:

(1) that their work is open-source, so that you can archive and modify
it if they get hit by a bus (or a large Redmond corporation).

(2) that they have enough people with motivations unrelated to your
interests to make it highly probable that they will continue doing
whatever it is they are doing that you want, but don't want to be
bothered doing yourself, and that they will continue to do it whether or
not they like what you're doing.

(3) that they have been doing it long enough, actively enough, and well
enough to ensure the level of quality and stability from their work that
OS/2 users expect.

Linux fits the bill here. It's open-source, they do pretty much
everything we could hope for as regards keeping up with processor
technology and hardware for reasons that have nothing to do with us, and
their reputation for reliability is just as good as OS/2's.


> You cannot use the same software tools as IBM, M$, Linux
> developers, or any other software vendor without committing
> to the same level of human resource requirements. As open
> source cannot easily achieve the organizational efficiency of
> closed source without adapting it, open source based on
> volunteerism has a need for even more human resources.


Lynn, I want to start by saying that I see the value of SL/I as you've
described it in the past in other forums as well as recently in this one.

SL/I is no doubt the "silver bullet" you think it is for the type of
programming project where the creation of the specifications and the
software represents the bulk of the effort. When you expect to be
spending 90% of your time writing specifications and then producing code
which implements them, the type of tool you describe would have exactly
the impact you claim.

The problem is that the creation of a kernel for use with the existing
population of Intel PCs is not that type of project.

True, it involves specifications and programming, and the use of SL/I
would be just as advantageous for that portion of the work as anywhere
else. However, the difference is that, in such a project,
specifications and programming account for much less than 90%.

As you say, let the software do what programmers need not. But the
majority of the time spent debugging Linux or most any other
"from-scratch" kernel is spent revising the specifications that
something like SL/I would require as primitive input, in light of the
fact that hardware designers and system integrators often deviate from
published specifications, and software written once the hardware exists
must bridge the gap.

If you review the Linux kernel driver code, you will find that a
staggering amount of it exists to work around broken or poorly-designed
PC hardware. Assuming that osFree will have to work with the same
hardware, SL/I can't help you there. It becomes a case of garbage in,
garbage out. You might start by feeding SL/I a theoretically correct
hardware spec, and then find that you are spending 95% of your time
discovering and readjusting for all of the hardware anomalies Linux has
already worked around. Or, you might start by analyzing the Linux code
to produce a spec which incorporates all of the hardware experience
inherent in the Linux code, and find that you've spent 95% of your time
preparing the input for SL/I (and to what point, if you're copying the
Linux specs/code/blood-sweat-tears verbatim?).

The bottom line is that if the type of work upon which SL/I can work
wonders accounts for only 5-10% of the project, then the overall savings
won't be that great. SL/I can help us produce high-quality
easily-maintainable code, but it can't purchase and QA every piece of
arcane PC hardware that Windows and Linux currently support, nor can it
magically invent the needed workaround code.

The question we need to ask and answer is what we want the osFree
"differentiators" to be. I suspect that nobody values OS/2 for the way
it handles keyboards or mice or video. So no one should really care if
osFree is based on something which handles those things equally well.
The positive OS/2 "experience" is based upon reliability, quality, and
high-level "signature" user APIs like the WPS. That's what we need to
re-implement, not process control, memory management, or mouse drivers.

Re: Part 32

Posted: Sat Dec 22, 2018 2:08 pm
by admin
#938 Re: [osFree] Digest Number 203

Lynn H. Maxson
Dec 31 8:04 AM
Frank Griffin writes:
"Sorry, but there is safety in numbers whether or not that
chills you. It's not true (necessarily) for projects which are
purely software (read on below), but it is for projects where
legwork research and QA are eating 95% of your development
budget. ..."

Frank,

I repeat my respect for your participation in these mailing lists.
No differences of opinion will diminish that.

I do not oppose the safety in numbers that you mention for
the same reasons that you also mention. For my money the
more the merrier. It "chills" me when that number goes from
a "nicety" to a "necessity". At that point open source
becomes a group thing not subject to individual choice. That
individual choice, reflected in his ability to independently
pursue his own course, for me represents the "soul" of open
source.

What value does having the source have if you cannot carry it
forward effectively on your own? When you go from an
organization of one to two or three or four or more,
specifically more, you begin a drastic change in how you
divide your time between individual and collective effort.
The more people you require, the less "freedom" you have
individually in terms of choice.

We haven't discussed the collective, the organizational, needs
of an open source project such as this. We have not
addressed the organizational differences between a
geographically concentrated organization, e.g. Innotek, and a
distributed one such as this. We have not discussed the
organizational differences between a face-to-face project
and a keyboard-to-keyboard one.

When you find yourself resource-constrained, a group
requirement, a safety in numbers, hits a threshold that
changes it from a luxury to a necessity. If your number lies
below that threshold, then you need to reconsider the value
of even beginning. Your time to completion, your window of
opportunity, increases logarithmically with your distance
from that threshold.

That brings us to your second message, whose answers
actually lie in your first.

"It's been stated several times that we need to do
requirements and specifications, but I've never been exactly
sure why.

Certainly this would be true in a "normal" software project,
but in this case we are trying to clone something for which
extensive documentation already exists. Not only is there no
need to re-document the APIs, but it is dangerous to do so,
since we may inadvertently change something upon which
existing applications rely. ..."

I would refer you back to the three reasons you supplied in
your other message. As I look at my bookshelf, as I look at
the bookshelves at Borders or Barnes and Noble, as I look at
the OS/2 books that others have invested in, I see the need
for anyone wanting to participate not to have to invest
either the time or the money in such searching. I prefer to
have him look in one place which contains all the necessary
information.

I come from an IBM background. I got used to having a
document set on every product, hardware or software, from
an initial announcement letter, to a general information
manual (GIM), to a reference manual (RM), to a user guide
(UG), to an application guide (AG), and to a program logic
manual (PLM). The need for and the cost of creating and
maintaining these rivaled that of the product itself. In fact it
contributed greatly to the generally higher price of the
product: its underlying support cost.

If we duplicate, copy, or clone OS/2, we need to give it a
distinction as a separate product, as a distinct open source
product. That says you make it totally distinct as a product,
regardless of its plug compatibility with another. That
includes documentation.

We have an advantage in creating an internet-accessible
document source, the advantage of hyper-media or hypertext:
the advantage of utilizing a single source to create multiple
documents. You don't have that when each document has its
own source with the difficulties of synchronizing change
maintenance across them. It takes additional people. It takes
additional time. More importantly it raises the "safety in
numbers" threshold.

I do believe that we need to provide a single source for code
and text, complete with all the associations necessary to
meet the documentary, educational, and instructional needs of
anyone who would consider joining this effort. To the degree
that we address those needs and reduce the individual time
necessary to meet them, we have a greater chance of achieving
our "safety in numbers".

It doesn't hurt that in the meantime our choices for achieving
this also lower the threshold.<g> If we can lower that
number to one in this project, we will have achieved the same
capability for all other open source projects. In that manner
we will have assured the future of open source.

I will answer your more specific charges in another message.

Re: Part 32

Posted: Sat Dec 22, 2018 2:09 pm
by admin
#939 Re: [osFree] Requirements and Specifications

Lynn H. Maxson
Dec 31 10:08 AM
Frank Griffin writes:
"...*What* we need to do is pretty much forced upon us by
the requirement that existing OS/2 applications be supported.
The only area for improvisation is *how* we do it. ..."

Frank,

In your current career as a systems programmer and in my IBM
one as a systems engineer I think we basically agree on the
idea that a "system" refers to something complete, i.e. whole.
I don't approach this project with anything less in mind: to
create a system from an initial program load (IPL) to runtime
to future enhancements. That includes the source code, the
source text, and all relevant associations between them.

I believe both belong in one place, completely integrated as a
single source. I've maintained this view in the single software
tool, the Developer's Assistant (DA), in its source language,
SL/I, and in the source language of its source language, also
SL/I. That includes the software source for creating and
maintaining both sources and their associations, the Data
Directory/Repository, also written in SL/I.

When it comes to OSFree or FreeOS I expect nothing less. You
seem to indicate less in your remarks. I don't want to copy,
duplicate, or clone OS/2. I want to replace it. I want to
replace it with something functionally equal to today's
product, but something functionally capable of taking it where
we agree today's product will not go.

If that means adapting to 64-bit processors, so be it. If that
means adapting to different motherboards, so be it. If that
means documenting it from cradle (C-A-D) to grave (future
enhancements), so be it. It does mean doing it in such a
manner that anyone can read and understand, join in a
continuing effort, or use it as a basis to go in a different
direction.

I don't think you quite accept the equivalence between a
programming language and a specification language. There is
no difference once an implementation, a compiler or
interpreter, exists. Thus whatever inelegant code you write to
compensate for hardware, that code becomes a specification.

When I began this profession some 47 years ago I did so as a
hardware technician at a time when you actually repaired
broken logic. I note then as I do now that all such hardware
and all software based on it had a 100% common base in
formal logic. If it didn't, then you could in no manner write
software to compensate for hardware differences. Logic
doesn't exist as an option in software. Thus it cannot exist as
one in hardware.

We can make the point that we can specify either in a
specification language as a programming language or a
programming language as a specification language. Once you
have a compiler or interpreter they become the same.

I will go so far as to say that in my approach you can customize
your kernel to a specific motherboard with an ease
unobtainable in the instances you quote relative to Linux.
You will have no need to have any code not relevant to the
distinct characteristics of a particular motherboard instance.
That arises from the software tool (DA) and the software
language used (SL/I).

Now we ought to say to those listening in that you have a
habit of saying "QA" where I say "testing". We both know
that, regardless of how you say it, the more
comprehensive the project, the more complicated (the number
of parts) and the more complex (the connections among
them), the more effort overall is expended in QA (testing).

The problem lies in continued use of imperative programming
languages, their software tools, and the methodologies they
impose. Our use of these says we remain in lockstep with the
other prisoners. Part of that means increasing our numbers to
match theirs to stay in step. A simple count will show we lack
such numbers.

I find your argument to use their numbers to make up for a
lack in ours disingenuous. Perhaps no more far fetched than
mine to make our number suffice. In part my argument for
documentation which incorporates analysis and design allows
us to work on that on which we agree. Along the way in
doing this and certainly by the time we reach the end, we can
either continue to disagree, going our separate ways, or we
will find ourselves in agreement as part of the process.

I don't have any fears of screwing up an API. In fact I plan on
doing a better job in terms of description and performance
than IBM could ever cost justify. Again a difference due to
the language, the implementation, and methodology used.

Having found myself caught up in IBM code where named queues
created an "error" instance of time-dependent code, I would
ensure its correction in the replacement without ever
changing an API. I would just ensure that it worked as
described.

I feel that you have not had time to invest in mastering logic
programming, in the difference between clausal and predicate
logic, in the automated testing that predicate logic provides.
So QA remains a big issue with you instead of the easier task
it assumes with logic programming. That comes from a failure
to communicate on my part.

You don't see me leaping past description or documentation
with complete analysis and design here to get into coding. I
have no desire to use a C, PL/I, JAVA, C++, or any other
imperative language compiler. In fact I regard the use of a
compiler counter-productive relative to an interpreter. A
compiler approach raises the number we have to have for
safety.

You probably don't understand why you would use an
interpreter, a single set of source, and a single module
encompassing an entire operating system. You don't
understand it, because you can't do it with a compiler
approach. You don't fully understand the difference that
exists between software development and its productive use.

Software development occurs most productively in an
interpretive environment. There you want to have everything
under a single module environment. When you have it in the
most rapid manner to the most reliable form possible from a
QA standpoint, you simply opt for synchronized compiled
output in multiple modules to meet the performance needs of
production. You have no need to have more than one
software tool which can produce either result.

I have yet to demonstrate the validity of the above assertions.
That will take considerable time on my (and some others)
part. With luck no longer than the time to produce the level
of detailed documentation we've discussed here. At that point
whether we agree or disagree becomes moot. We can go our
separate ways successfully.

Re: Part 32

Posted: Sat Dec 22, 2018 2:10 pm
by admin
#940 Re: [osFree] Requirements and Specifications

Frank Griffin
Dec 31 2:03 PM
Lynn H. Maxson wrote:

> When it comes to OSFree or FreeOS I expect nothing less. You
> seem to indicate less in your remarks. I don't want to copy,
> duplicate, or clone OS/2. I want to replace it. I want to
> replace it with something functionally equal to today's
> product, but something functionally capable of taking it where
> we agree today's product will not go.


Lynn,

I'm not trying to discuss the philosophy of software design in general,
nor the importance in most cases of getting complete specs before
coding. The points I'm trying to make are extremely specific to a
project whose goal is an OS/2 replacement.

"Replacement", to me (along with "copy", duplicate", and "clone"), means
that it must support the same APIs as did the original. One of the
goals I assume for the project is to be able to run existing OS/2
software. If that's true, then it forces certain things upon us, e.g.

1) The existing APIs have to be implemented as documented in the current
SDK.
2) You must support language bindings for the languages which currently
have access to the APIs.

(1) decrees that the current SDK documentation *is* the specification.
While it will not serve as external developer or user documentation
(which is a project deliverable and not an input), it provides
everything we need to begin the work of specifying how we will support
the needs of the OS/2 API using the (presumably different) API of
whatever we choose to put beneath it.

The point I tried to make is that we cannot write a meaningful spec for,
say, a DosAllocMem() function without knowing what (micro)kernel API
calls will be available for use. The memory management API of Linux is
different from that of Mach, which is different again from that of OSKit
(or ReactOS, or anything else). The description of how we will provide
the functionality of the DosAllocMem() API is completely a function of
the choice of underpinning, and changes whenever the choice changes.
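
For illustration only, here is a minimal sketch, assuming a Linux
underpinning, of what a pass-through DosAllocMem() might look like.
The OS/2 flag and return-code names come from the public SDK; the
mapping onto mmap() is invented for the example and ignores details
such as guard pages and commit-on-demand:

/* Sketch only: a pass-through DosAllocMem() built on the Linux mmap()
   call.  The OS/2 constants below match the public SDK; the flag
   mapping is illustrative and deliberately incomplete. */
#include <sys/mman.h>

#define PAG_READ    0x0001UL
#define PAG_WRITE   0x0002UL
#define PAG_EXECUTE 0x0004UL
#define PAG_COMMIT  0x0010UL

#define NO_ERROR                0UL
#define ERROR_NOT_ENOUGH_MEMORY 8UL

typedef unsigned long APIRET;

APIRET DosAllocMem(void **ppb, unsigned long cb, unsigned long flags)
{
    int prot = 0;
    if (flags & PAG_READ)    prot |= PROT_READ;
    if (flags & PAG_WRITE)   prot |= PROT_WRITE;
    if (flags & PAG_EXECUTE) prot |= PROT_EXEC;

    /* OS/2 allows reserving without committing; mmap() always maps,
       so an uncommitted request is approximated with PROT_NONE here. */
    if (!(flags & PAG_COMMIT))
        prot = PROT_NONE;

    void *p = mmap(NULL, cb, prot, MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return ERROR_NOT_ENOUGH_MEMORY;

    *ppb = p;
    return NO_ERROR;
}

On Mach or OSKit the body of that function would look completely
different, which is exactly the point: the layer cannot be specified
before the underpinning is chosen.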

Wherever we go from here depends totally on what the group decides to
use as an underpinning. The choices range from what I believe you want,
which is that we write and support every line of code from the bare
metal up to the OS/2 API boundary ourselves, to what I suggest.
Somewhere in the middle, we have possibilities of using microkernels.

If we do it your way, then I agree that SL/I and DA have value to osFree
because we will be dealing with multiple layers of control, where we
will have design authority for all but the topmost. The language
bindings need only exist at the API level, and everything sitting below
that would be written in SL/I.

If we do it my way, then SL/I and DA have less appeal for osFree,
because what we will be producing are single interface layers, where we
design neither the boundary above (the OS/2 API) nor the boundary below
(Linux system calls). Moreover, the Linux system calls offer only a
C-language binding. Using SL/I here would introduce an ongoing overhead
of creating an SL/I binding for the Linux system calls.

I think both SL/I and the DA are excellent ideas, and I look forward to
their implementation, whether or not they are appropriate choices for
osFree.

The only point I tried to make in my original comment is that without
committing to a choice of underpinning, it is very difficult (if not
impossible) to identify a "next" activity, be it specification, design,
or coding, which is not tied to a particular choice and which does not
become a throwaway if a different choice is ultimately made. Rewriting
the same API information in the OS/2 manuals using SL/I or anything else
is probably something that ultimately needs to be done, since we can't
very well point users and developers to the original IBM documents as
the only source of osFree programming documentation. But that activity
produces a project documentation deliverable needed to be able to say
that the project is complete, not a spec that we need for design or
coding, since it adds nothing to the OS/2 API specifications we already
have.


> I find your argument to use their numbers to make up for a
> lack in ours disingenuous. Perhaps no more far fetched than
> mine to make our number suffice.


I think the part of my argument that you're missing is that the work for
which I want to use their numbers is already done, and has been for
years. Hardware manufacturers are more consistent than they were 10 or
15 years ago, and (primarily because of Linux) spec data is much more
available.

The work I refer to took thousands of person-hours and access to
massive amounts of hardware, but that type of work doesn't typically
need to be done for new hardware today. However, it *does* need to have
been done by *somebody* if you want to support older hardware, which
appears to be a priority in the OS/2 community.

It might be disingenuous to assume that people you don't control will
commit a large amount of resource to something you benefit from, but it
depends on how strong their own motivation for doing it is. But that's
not what I'm proposing here. The resource has already been committed,
and the Linux code already reflects everything that was learned through
its expenditure. It is there for the taking. Unless you want to start
from scratch and do it all again.


> I feel that you have not had time to invest in mastering logic
> programming, in the difference between clausal and predicate
> logic, in the automated testing that predicate logic provides.
> So QA remains a big issue with you instead of the easier task
> it assumes with logic programming. That comes from a failure
> to communicate on my part.


No it doesn't. I understand your point, and in general I agree with
it. In a "normal" software project, I fully agree with the premise that
a tool like DA, which treats specs like code and ensures that design
correctly reflects specs and that code correctly reflects design, has
the potential to drastically reduce the amount of testing (or QA)
needed.

But that's only true if the information from which the specs are
generated is correct. My point was made in the extremely narrow context
of a situation where legacy hardware does not comply with its own
published specs (or those specs are incomplete or vague). If you take
those specs and enter the information they contain into SL/I, the
resulting code will not work, and it will not be the fault of SL/I or
any other piece of methodology involved. In such cases, *only*
extensive post-release QA done by customers who have the hardware
involved will uncover the problems.

This is *not* a methodology issue; it's a quality-of-input issue. I've
never believed that QA in any amount can or should make up for
sloppiness elsewhere.

Re: Part 32

Posted: Sat Dec 22, 2018 2:10 pm
by admin
#941 Re: [osFree] Requirements and Specifications

Lynn H. Maxson
Dec 31 5:45 PM
Frank Griffin writes:
"I'm not trying to discuss the philosophy of software design in
general, nor the importance in most cases of getting complete
specs before coding. The points I'm trying to make are
extremely specific to a project whose goal is an OS/2
replacement. ..."

Frank,

I imagine that we agree far more than we disagree. At the
moment we disagree on what we begin with and what we put
off until the end. We agree on the need to support the OS/2
APIs correctly and completely. We disagree on the nature of
that support, what constitutes documentation, what
constitutes specification. I propose a literate programming
approach in which description proceeds both informally and
formally. That says that the writing of both proceeds
concurrently: we don't put off one until the other is done.

I also favor late binding, essentially putting off a decision until
the latest possible moment. That says I favor ordering things
in their chronological order of binding. First things first, then
second and so on.

In top-down development you begin with what you expect to
end up with. In our case that comes down to OS/2 API
support. You have to exercise some care in macro switch
settings but you can in fact compile the entire set of OS/2
APIs. That much at least I have done as part of a feasibility
effort. It took me more than a few iterations to get them all.
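
For anyone who has not used the toolkit headers, the macro switches in
question look roughly like the fragment below; the exact list of
INCL_* switches needed is an assumption here and varies by toolkit
level:

/* Sketch only: the IBM toolkit headers gate each API category behind
   INCL_* switches, so compiling against the full API set means
   turning the right switches on before including os2.h. */
#define INCL_BASE   /* the Dos*, Kbd*, Vio*, and Mou* control program APIs */
#define INCL_PM     /* the Win* and Gpi* Presentation Manager APIs */
#include <os2.h>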

I would propose that as a proper beginning. As a second pass
I would decompose them into the categories IBM currently
uses in the SDK. Then as a third pass I would rewrite each
one in a literate programming form (English and SL/I), using
the information in the IBM SDK manuals as a guide. As a
fourth pass I would go through all naming conventions to
eliminate both homonyms and synonyms to ensure unique and
unambiguous referents. These names in turn become the
global variable names used in the supporting source.

At this point we have something equivalent but not identical
to the IBM SDK. We have something which allows us to cut
the cord. We have something which we can treat as ours and
not IBM's.

"...(1) decrees that the current SDK documentation *is* the
specification. While it will not serve as external developer or
user documentation (which is a project deliverable and not an
input), it provides everything we need to begin the work of
specifying how we will support the needs of the OS/2 API
using the (presumably different) API of whatever we choose
to put beneath it. ..."

Understanding then that to me a specification exists as source
code, the documentation of the SDK does not suffice. That
text exists as an informal description to which we should
attach a formal one, a formal specification. The specification
document then contains two separate but associated encoded
forms, one informal, the other formal.

One of the categories of the SDK is memory management. It
may be that we can use this to present to the others on this
list where we differ (and possibly why) in terms of a starting
point.

Perhaps we can begin as you suggest with DosAllocMem.
Maybe we can use this to improve the way in which we
communicate with each other.

Would you be amenable to this?

Re: Part 32

Posted: Sat Jan 05, 2019 1:25 am
by valerius
#942 QA equals testing, Part One: Detection

Lynn H. Maxson
Jan 2 10:02 AM
We all want to produce quality software. We want to
produce correct software, a match of the solution set to the
problem set, specifications to requirements. We want to
produce error-free software, free of errors of omission
(missing logic) and commission (incorrect logic). We want
code optimized for space and time: no more, no less than
necessary.

Our primary means for achieving this lies within the testing
phase of QA. By implication QA includes more than testing,
e.g. supporting documentation.

We have two aspects with testing. We have one relative to
detecting errors in execution, thus errors in logic (omission
and commission). We have another relative to correcting
errors in execution, thus errors in logic (omission and
commission). I engage in redundancy here to emphasize that
we have only two types of possible errors related to logic. I
also want to point out that all software which performs
exactly according to the logic of its source is error-free:
software does not create errors, people do.

Now detecting errors requires the creation of test data, data
expected to give correct (or expected) results and data
expected to give incorrect (or unexpected) results. When
correct data gives incorrect results or incorrect data gives
correct results we have a logic error, again either of omission
or commission, again people- not software-based.

Historical wisdom indicates that we cannot exhaustively test
logic because, due to time or money constraints, we cannot
"afford" to, one, create the test data and, two, execute
the test even if we could create the test data. To
compensate for this, developers attempt to distribute the
cost in time or money. They create artifices like alpha and
beta versions to distribute the load onto alpha and beta
testers.
Even with thousands of alpha and beta testers, if you believe
the "boasts" of open source and Linux advocates, they still
cannot provide exhaustive testing prior to the release of an
"official" (not alpha or beta) release.

Beyond this we have a common guideline: whatever cost
estimates (time and money) you give for construction, double
it for testing. In effect this triples the cost of producing the
source. Note this guideline accepts the inability to
exhaustively test, detect, and correct source prior to an
"official" release. This inability persists whether you triple,
quadruple, or further multiply the time and money in testing.

The historical wisdom blinds us to the fact that this inability,
which again is people-based, comes directly from, one, the
use of imperative programming languages and, two, from
people-generated test cases, i.e. test data. This historical
wisdom does not take into account our everyday experience
with logic programming, specifically that of SQL.

Before you protest too strongly, look again at what SQL does as
an example of logic programming or declarative languages in
general. SQL invokes the two-stage proof engine of logic
programming, the completeness proof and the exhaustive
true/false proof. The first assures that the input source
contains enough to construct the logic for the query (analysis)
and then constructs (design and construction) it. That covers
the first four stages of the SDP: specification (query source),
as well as analysis, design, and construction (completeness
proof). It then automatically executes the exhaustive
true/false proof, the testing.

True, it doesn't automatically generate the test data. No one
reading this, however, should have any doubts that the output
includes "all" true instances of the test data (tables, rows,
and columns) while rejecting "all" false instances. It does
produce error-free code. No one here should disagree that
people through commission or omission produce any logic
errors. If we have an interest in "all" false instances, we
need only "not" the contents of the "WHERE" clause.

SQL uses "clausal" logic dictated by its purposes of finding
true instances of known (or existing) data. Logic
programming supports an alternate form known as "predicate"
logic in which the data rules include the valid range of values
for the variables. As you can have multiple variables the
software generates, i.e. enumerates, all possible combinations
of all possible values of all variables: it automatically
generates all possible "true" value sets. It could just as easily
generate through the same "not" logic we could employ with
SQL all possible "false" value sets.

If we get a "false" result from within a "true" value set, we
have incorrect (or missing) logic. If we get a "true" result
from within a "false" value set, we have incorrect (or missing)
logic. In either case the software itself generates the
"exhaustive" test data.

Unfortunately the actual validation remains a people process.
The volume of either "true" or "false" results could
overwhelm us in the amount of time (which frequently equates
to money) required, thus contributing to the cost
of the software. However, we should lay to rest the historical
wisdom that states the impossibility of performing exhaustive
testing, of generating all possible test cases, test data.

Now a byproduct of this lies in eliminating the need for either
alpha or beta versions of software. You would only need in
either case to distribute the results of execution to somehow
allow their parallel review.

You gain two things from this process. One, you restrict
errors of omission or commission to requirements and
specification, both human writing processes. Once
incorporated as a specification no further mistranslation by
the software can occur. This produces, as in SQL, error-free
code. Two, you have exhaustively tested it with automatic
detection of false instances of true input and true instances of
false input.

The only time-consuming issue remaining lies in validating
the volume of false instances of true input and true
instances of false input. We have already developed
techniques for reducing this: variable on-boundary,
in-boundary, and out-of-boundary values.

This should deal effectively with issues regarding error
detection through exhaustive testing. The next part hopefully
will do the same for correction.

Re: SCOUG-Programming: QA equals testing, Part One: Detection

Posted: Sat Jan 05, 2019 1:28 am
by valerius
#943 Re: SCOUG-Programming: QA equals testing, Part One: Detection

Lynn H. Maxson
Jan 2 4:51 PM
"...Having spent a few years writing requirements, I found
that it's pretty hard to define the requirement adequately, up
front, and the waterfall of the development process results in
a refinement of the requirement manifested in the sequential
specs. The code becomes a sulf(sic)-fulfilling requirement
baseline, no matter what the original spec. Probably
accidental if it actually matches the designer's intentions. ..."

Ah, a second voice heard from. Must be an open moment
during the holidays. Thank you, Dave, for taking time to
comment.

The waterfall effect as you describe it changes radically in
logic programming in that only two stages have people input:
requirements and specification. The remaining stages occur in
software: analysis, design, construction (completeness proof)
and testing (exhaustive true/false proof). So only two errors
in translation can occur: one in the translation from the
user to written requirements (informal logic) and one in the
translation of those requirements into specifications (formal
logic). The software translates the
specifications exactly as written, i.e. no transmission loss.

Note that with the software performing these functions you no
longer have programmer analysts, designers, or coders. The
designer's intentions then must reside in the written
specifications. If they don't produce the expected, i.e.
intended, results, then he wrote them wrong (errors of
commission) or incompletely (errors of omission). He corrects
them by
correcting the writing. He does not do analysis, design, or
coding beyond the specifications.

It seems difficult to accept that software can
adequately perform these tasks. Yet daily millions of users
have trusted SQL to do exactly that...for the last 30 years. If
you respond that an SQL query doesn't quite match up to the
more extensive logic and levels found in a program, that a
program using embedded SQL has to also include additional
logic outside the query, then I would have to refer you to an
AI product like IBM's TIRS (The Intelligent Reasoning System).
TIRS does the whole logic including the writing of the SQL
query.

Peter also comments:
"...Dave, you've identified the true path -- writing the spec.
Lynn has also been pushing the spec as the one true
document but he's constantly trying to incorporate it into code
generation using just-in-time manufacturing techniques and
hallowed echoes of H. Edwards Deming (plus Homer Sarasohn
and Charles Protzman, to be fair). It's much easier to
digest "writing the spec" when there's a wall between design
and manufacturing, and Lynn's dream might more quickly bear
fruit if he took that tack."

A wall between design and manufacturing. In imperative
languages that wall exists by necessity: it has to do with the
logical organization globally in the source by people prior to
submitting it to manufacturing, i.e. compiling. In declarative
languages, i.e. logic programming, it ceases when you submit a
specification to processing. The software in the completeness
proof does analysis, design, and construction in seamless
stages with no loss in transmission or translation.

Get used to it. Get used to the idea of saying what you want in
an unordered, disconnected manner in terms of writing
specifications. Get used to the idea that the software has all
the capabilities that humans have in performing analysis,
design, and construction. You tell it "what" you want. It tells
you "how" to get it.

It's not some dream of mine. It's how logic programming
works. It's how SQL works. It's how AI works. It's how neural
nets work.

A difference that occurs lies in the thousandfold or
millionfold reduction in time when the software does
analysis, design, construction, and testing instead of
people. To make a few changes to specifications, submit
them, and a few seconds (or even minutes) later have the
complete results of the effects of those changes ought to
suffice to show that analysis and design reside inherently in
specifications. They do for people. They do for software.

This means that you do specifications at one point in one
source in the process regardless of the number of iterations of
that process. That means you do requirements at one point in
one source in the process. You don't get involved with the
writing of either, with detection and correction of errors of
omission or commission, at some later point.

You do what SQL query writers do. You write a query. You
view the results. If they are not what you want, you rewrite
the query. The query design is pre-structured in the
"SELECT...FROM...WHERE" clauses. Inside those clauses the
structure is user-specified in any order that the user chooses.
In fact within the "FROM" and "WHERE" clauses the order is
unimportant. Obviously the SQL processor demonstrates that
it can do analysis, design, construction, and testing without
any further "input" from you.

I'll reserve commenting on Peter's references to Deming, et al
except to say that once you get by the effort for problem
detection, i.e. validating the exhaustive results of execution,
you need to correct it at its source. When you only have two
sources which occur sequentially in time you don't have to go
back to requirements unless the error exists there. If it doesn't, then
you only have to change the specifications.

In short you only have two sources to maintain in sync. These
two sources represent two different but equivalent
descriptive forms. It's only between these forms that either a
loss in transmission or translation can occur. Any that do
definitely lie outside the purview of the software:
people-based.

You end up, from a people's perspective, with a drastically
shorter up-front portion of the total waterfall. You never have
to tread water beyond that portion while you always get to
see it in its entire glory. You can change that flow
extensively with very small changes in your portion. It reacts
immediately to any such change. You do not apply the term
"immediately" when you have people responsible for the
entire waterfall.<g>

Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One: Detection

Posted: Sat Jan 05, 2019 1:31 am
by admin
#944 Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One: Detection

Frank Griffin
Jan 2 5:55 PM
Lynn H. Maxson wrote:

> The waterfall effect as you describe it changes radically in
> logic programming in that only two stages have people input:
> requirements and specification.

Lynn,

I'm not sure whether this is OT to your thread (since the thread may be
relevant to something in SCOUG and just copied here for interest).

If I'm the target of the thread, you're preaching to the choir. I
followed your past descriptions of SL/I with interest, and I fully
understand that, in the SL/I world, writing specs is equivalent to
programming. I will absolutely love to try DA when it is available.

But none of this is germane to the point I made. The specs for hardware
designed since 1980 are not written in SL/I, and many of the earlier
ones simply aren't correct or complete in describing the behavior of the
hardware. Nothing that you derive from incorrect specifications will be
correct, whether SL/I generates the code or the code is written by some
drooling lycanthrope of a VB programmer with electrodes in his ears.

If the hardware is supposed to do a certain thing when you write 0x80 to
port 0x1234 and it doesn't, no software which assumes it should will
function correctly, no matter who or what wrote it.
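
What that workaround code typically ends up looking like is sketched
below, in the style of kernel quirk handling; the port, values, and
board revision are the hypothetical ones from the example above,
invented purely for illustration.

/* Hypothetical quirk handling in the style of kernel driver code: the
   published spec says writing 0x80 to port 0x1234 enables the device,
   but one board revision ignores that write unless it is preceded by
   a reset.  Every identifier here is invented for illustration. */
#include <stdint.h>

#define CTRL_PORT   0x1234
#define CTRL_ENABLE 0x80
#define CTRL_RESET  0x01
#define BROKEN_REV  0x03    /* revision that deviates from its own spec */

extern void    port_write8(uint16_t port, uint8_t value); /* platform I/O */
extern uint8_t read_board_revision(void);

static void enable_device(void)
{
    /* This branch comes from testing on the real hardware, not from
       anything the published spec says. */
    if (read_board_revision() == BROKEN_REV)
        port_write8(CTRL_PORT, CTRL_RESET);

    port_write8(CTRL_PORT, CTRL_ENABLE);
}

No spec-driven generation produces the BROKEN_REV branch; only someone
with that board on the bench does.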

If the points on which the specs are incorrect involve scenarios not
covered in your test suite, you won't find them unless somebody extends
your test suite to cover them. These situations can theoretically be
covered by a tool which can test every possible scenario. However, if
the points on which the specs are incorrect involve certain brands of
hardware which claim to operate to spec but do not, *and* you don't have
the hardware involved available for testing, then you won't find the
bugs unless someone who *does* have the hardware runs your tests (or
some of their own). SL/I cannot help you there. As I said, garbage in,
garbage out.

That was the extent of my point. Kernels, whether micro or otherwise,
which deal with hardware and which were not developed via logic
programming (i.e. all of them) can only program around such deficiencies
through the recursive testing process which I understand is anathema to
a logic programmer.

My point was that, however inefficient the process was due to the
languages or programming models involved, the degree to which a
non-logic-programming kernel will have surmounted these difficulties is
directly proportional to the amount of such recursive testing it has
undergone.

Linux has undergone orders of magnitude more of such testing than most
(if not all) of the pre-written non-SL/I kernels which have been
proposed here for review.

And if you accept the hypothesis that logic-programmed kernels are no
less vulnerable to specs which don't correctly describe the behavior of
their associated hardware, then the same is true of them. Logic
programming can tell you if your code doesn't match the spec, or can
ensure that it does by generating the code itself from those specs, but
it can't detect that the spec is wrong and magically correct for it.
Thus, if you write your own kernel in SL/I based on the same incorrect
specs, you are starting from square one as regards this recursive
testing cycle, and you don't even get the benefit of the lesser amount
of testing done for the non-Linux kernels.