Part 32 - Dec 30 2003

Old messages from osFree mailing list hosted by Yahoo!

Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#945 Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 2 7:46 PM
Frank Griffin writes:
"...But none of this is germane to the point I made. The specs
for hardware designed since 1980 are not written in SL/I, and
many of the earlier ones simply aren't correct or complete in
describing the behavior of the hardware. Nothing that you
derive from incorrect specifications will be correct, whether
SL/I generates the code or the code is written by some
drooling lycanthrope of a VB programmer with electrodes in
his ears. ..."

Frank,

Happy New Year. No, you were not the target. Yes, I'm
actually submitting this response to two mailing lists. I have
repeated myself so many times in so many places that when I
don't have to chew my tobacco twice I relish the opportunity.
This particular thread topic is one under consideration in the
SCOUG Programming SIG as part of our effort to determine
how we can best use our people resources to support open
source development for OS/2.

Every one of your points is correct. Neither SL/I nor logic
programming can compensate for incorrect or incomplete
specifications. I wouldn't pretend or claim otherwise.

However much a manufacturer's hardware differs from its
claims, that doesn't alter the fact that you must be able to
write correct specifications for how the hardware actually works.
Otherwise no one could write the code in any programming
language from machine level on up. Any code for any
motherboard written for Linux is specifiable in SL/I or PL/I or
any number of other programming languages.

It is also true that we must pair the motherboard, whatever
its internal logic, with the software to do valid testing. We
have a similar problem when a machine instruction in practice
differs from its spec. We have to modify the software to
match the hardware, and we can do so because the machine
instructions themselves are specifiable.

In every case we have to validate that we have a match
through testing. Hopefully we do exhaustive testing so that we
have a complete record of what occurs when: what input
produces what output. A standard IPO (input-process-output) model.
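
To make the IPO idea concrete, here is a minimal Python sketch, not anything from SL/I or the DA: the "spec" and "device" functions below are invented, and the input domain is kept small enough to enumerate completely, so every input/output pair can be checked against what the specification predicts.

from itertools import product

def spec_add_with_carry(a, b):
    # Hypothetical specification: what the documentation says an 8-bit add does.
    return (a + b) & 0xFF, (a + b) > 0xFF

def device_add(a, b):
    # Stand-in for the behavior actually observed on the hardware
    # (deliberately wrong at the carry boundary, for the sake of the example).
    return (a + b) & 0xFF, (a + b) >= 0xFF

def exhaustive_ipo_test(spec, impl, domain):
    # Run every input in the domain, pairing each input with its output (IPO).
    matches, mismatches = [], []
    for args in domain:
        expected, actual = spec(*args), impl(*args)
        (matches if expected == actual else mismatches).append((args, expected, actual))
    return matches, mismatches

domain = product(range(256), repeat=2)   # small enough to enumerate completely
ok, bad = exhaustive_ipo_test(spec_add_with_carry, device_add, domain)
print(len(ok), "inputs match the spec;", len(bad), "do not")
if bad:
    print("first mismatch (input, expected, actual):", bad[0])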

Given that you can arrive at the correct specs, overcoming
any misinformation from the manufacturer, they will suffice
for anyone in writing any kernel of any operating system.
Furthermore if you have these specs along with those
representing the instruction set of the associated processor,
you can optimize the code for this hardware/processor
combination, eliminating the need to consider any other.

Whatever exists in Linux of use to us is there for the taking.
Otherwise I've failed somehow to grasp the point of open
source. That it's done means I don't have to redo it, i.e. the
process of discovery, only rewrite it.

Basically we are in agreement on all your points. We differ
only (and there only slightly) on which path to proceed from here.

Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#946 Re: SCOUG-Programming: QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 2 8:39 PM
Peter Skye wrote:
"The difficulty here comes when someone wants to read and
understand the specs, Lynn. Things need to be organized so
they can be understood. Can you imagine studying chemistry
and not learning the table of elements until three semesters
on? ..."

Peter,

We live in different universes. You live in one in which you
can always predict change; I, in one in which change often
occurs unpredictably, in fact randomly. I have two things that
I need to accomplish. One, I need to ensure that I have
specified a change correctly. Two, I need to ensure that this
change works within its set of associated specifications.

The first only I can do, correctly or otherwise. Correctly or
otherwise it remains my responsibility. The second I leave up
to the software to determine the optimal, logical organization
of the source. This means at the end of the process I have
two sources of equivalent content for viewing.

The software has no concern whether a change or set of changes
impacts an existing logical organization. It doesn't use the
current logical organization as source input, which
imperative languages require you to do. It simply generates a
new logical organization, hundreds and perhaps thousands of
times faster than you can alter a single logical path in an
existing one. It regenerates from scratch, from unordered
specs, where you must rewrite.
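
Here is a rough Python sketch of that regeneration step, with invented specification names; it is only a stand-in for whatever ordering the completeness proof actually performs. The specs are written in no particular order, each naming what it depends on, and the ordered organization is recomputed from scratch on every run.

from graphlib import TopologicalSorter   # standard library, Python 3.9+

# Hypothetical "specifications": each one names the other specs it refers to.
# The order in which they are written here is deliberately arbitrary.
specs = {
    "report":     {"format", "totals"},
    "totals":     {"read_input"},
    "format":     {"totals"},
    "read_input": set(),
}

def logical_organization(specs):
    # Regenerate an ordered organization from the unordered specs: every spec
    # appears after the specs it depends on.
    return list(TopologicalSorter(specs).static_order())

print(logical_organization(specs))
# A change never requires editing the ordered output by hand; the whole
# ordering is simply regenerated from scratch.
specs["audit"] = {"report"}
print(logical_organization(specs))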

"...I'm serious here. There's the engineer who makes the
theoretical (which is quite organized) and makes it practical.
There is the scientist who takes random pattern-matches and
creates a concept (which is organized). Then there are the
weed-smoking, pill-popping artists who are all tangentially
connected to Haight-Ashbury and can't even get organized
enough to buy groceries at 7-11.

Which camp are you in?"

I'm in the camp of the division of labor where I do what the
software cannot, i.e. write specifications, and it does what I
need not, i.e. analysis, design, construction, and testing. It
doesn't do anything that I cannot view, follow, or understand.
It produces a global logical organization without concern for
how many times it gets tossed and regenerated from scratch.
It does it in less time than I can write a single statement.

The point is that I can write a single specification once,
regardless of the number of instances in which it is reused in
the optimized logical organization. That says I only have to
modify a single source and submit it to have all use instances
updated. I can choose to have it occur once and be invoked
repeatedly as a subroutine, to have it replicated in each use
instance, or, through meta-programming (again written as a
specification), any mixture of the two.
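
A toy Python sketch of that choice, with hypothetical names (nothing here comes from an actual SL/I or DA implementation): the specification body is written exactly once, and a trivial generator emits it either as a shared subroutine or replicated inline at each use instance.

# One "specification" (the body of discount) is written exactly once; a
# hypothetical generator can emit it either as a shared subroutine invoked at
# every use instance, or replicated inline at each use instance.  Both outputs
# come from the same single source, so a change is made in one place only.

SPEC_NAME = "discount"
SPEC_BODY = "price * 0.9"                     # the single specification
USE_SITES = ["invoice_total", "quote_total", "renewal_total"]

def emit_as_subroutine():
    lines = [f"def {SPEC_NAME}(price): return {SPEC_BODY}", ""]
    lines += [f"def {site}(price): return {SPEC_NAME}(price)" for site in USE_SITES]
    return "\n".join(lines)

def emit_inlined():
    # Replicate the specification body at each use instance instead of calling it.
    return "\n".join(f"def {site}(price): return {SPEC_BODY}" for site in USE_SITES)

print(emit_as_subroutine())
print()
print(emit_inlined())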

While I can only speak to my experience I don't know of any
imperative language implementation which allows this
flexibility. It's built into declarative language implementations.
It's what occurs as part of the completeness proof. It's part of
the compilation/interpretive process. I have at another time in
another thread suggested that you could make it work for
imperative languages by making changes to the
implementation without having to change the language.

Logic programming uses a two-stage proof engine. Nothing
prevents an imperative language implementation from doing
the same. In that manner we could take advantage of
people's ability to write error-free code in the small in
conjunction with software doing the same in the large. It simply
means not requiring a programmer to perform global logical
organization prior to submitting for a compile.

This means that you can have named reusable code segments
from the statement level up through any sequence of control
structures up to a procedural level. Any reusable code at any
level can have reuse instances within any other including
across multiple procedures. By implication it means that you
can input multiple "main" procedures into one compile as a
single unit of work producing multiple output executables.

You can do this by modifying the existing GCC compiler if you
so choose with all the benefits of doing so. You don't have to
wait for SL/I. It's not an SL/I-specific thing. It's a logic
programming thing that doesn't care if its input source is
imperative or declarative or a mixture of both.

Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#947 Re: SCOUG-Programming: QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 3 12:29 AM
Peter Skye wrote:
"Then you are burning a bridge. You are, _arbitrarily_,
deciding that humans shall no longer need to understand the
source -- and you are designing a system which will make it
somewhat impossible for humans to do just that. ..."

Peter,

What am I failing to communicate here? First, I'm writing
source code and text using literate programming. That says
that every formal description, i.e. specification, has an
associated informal, i.e. understandable, description. That's
what people can do that software cannot.

If IBM were to suddenly drop the source code for OS/2 on you,
with its hundreds of different and separate modules, how much
of it could you understand with any degree of rapidity?
Certainly no more than a piece at a time. Then, like any
puzzle, you would have to assemble it a piece at a time. Even
then, how much would you understand without repeatedly
rereading the code, possibly drawing pictures on the side that
you cannot incorporate as comments within the code?

You seem to be upset by the occurrence of unordered input.
You do not seem to pay attention to the fact that this same
input processed by the completeness proof is ordered on
output. You now have two source forms, one unordered, one
ordered. Moreover you didn't have to exert any time and
effort in ordering it or any future reordering of it. Yet you do
get a logical and optimal organization. No people can do
better. They certainly can't do it faster...or cheaper.

This logical organization has a logical hierarchy from the
highest level to the lowest. The software can offer you a set
of dataflow charts as it completes analysis, a set of structure
charts as it completes design, and a logical organization of
the source as input to compilation. So you get both text and
visual output.

Moreover, logic programming implements "backtracking". This
basically walks you, in a stepwise manner, through why it
organized the source the way it did. It is part of the
completeness proof, which may prove "false", meaning that it
didn't have the information (incomplete specifications) to go to
completion. The same backtracking method not only shows
you the point(s) of disconnection, i.e. an incomplete path, but
also tells you why, i.e. what is missing.
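
A minimal Python sketch of that kind of failure report, assuming specifications are modeled simply as names plus the names they refer to (the specs themselves are invented): when a referenced specification does not exist, the check reports both the point of disconnection and what is missing.

# Toy completeness check: every name a specification refers to must itself be
# specified.  When that fails, report both where the path breaks and what is
# missing -- a crude stand-in for the backtracking behaviour described above.

specs = {
    "open_file":  [],
    "read_block": ["open_file"],
    "checksum":   ["read_block", "crc_table"],   # "crc_table" was never written
}

def completeness(specs):
    missing = [(name, ref)
               for name, refs in specs.items()
               for ref in refs
               if ref not in specs]
    return len(missing) == 0, missing

complete, gaps = completeness(specs)
if complete:
    print("proof is true: every referenced specification exists")
else:
    for at, what in gaps:
        print(f"proof is false: '{at}' refers to '{what}', which has no specification")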

Now I do not know of any software tool, any implementation,
that offers you so much with so little input: specifications
only. It certainly draws and redraws pictures, visual output,
millions of times faster and cheaper than you can. Moreover, it
draws them without error. You know that's something else
you're a little short on.<g> More importantly, they all occur
synchronized, all from the same logical source organization.

Humans have to understand the source, the specifications,
because they have to write them by translating requirements
which they also need to understand. They may not
understand how they fit together entirely either in segments
or in whole from the unordered input. They certainly have
more than enough accompanying the ordered output, knowing
that the outputs, visual and otherwise, arise from the same
ordered source.

The point is that they get to understand the source faster,
better, cheaper, and more accurately than any of the means
they have had up to now. The truth is that they don't have
to know the whole (the large) to get the parts (the small)
right. They may not have the whole, but the completeness
proof will tell them what is missing and why. They have to
write the specifications that connect the "gaps" based on
information gleaned from the completeness proof.

Now I started off doing analysis and design with flowcharting.
In the 70's I switched to structured analysis and design with
dataflows and structure charts. Even with CASE tools I had to
draw and redraw them. Then came the translation to source
code. Flowcharts were a separate source as were dataflows,
structure charts, and source code. If I made a change to one
source, I had to manually ripple the change throughout the
others. It doesn't take much investigation into almost any IT
shop to know how frequently the ripple didn't occur.
Otherwise the first action we need to undertake would not be
to document the system.<g>

Now I offer you the ability to alter a set of specifications,
each written in a literate programming manner, and submit
this unordered source to a software process which ultimately
orders it optimally, producing from this ordered source a
complete set of visual and textual output.

The only bridge I am burning is the one that says I have to
perform clerical work instead of turning it over to software. I
guess there is a second bridge as well: not having to do
unnecessary work. It's amazing how much more necessary
work you can get done if you don't have to screw around
with the unnecessary.<g>

The completeness proof, whether complete (true) or
incomplete (false), is as capable of accurately depicting an
incomplete state visually as it is a complete one. From the
same partially complete or totally complete ordered source it
can produce flowcharts, dataflows, structure charts, or any of
the sixteen UML charts. The fact that you don't have to
create and maintain them means you have more time to
understand them.

Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#948 Re: SCOUG-Programming: QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 3 8:41 AM
Gregory Smith writes:
"No matter how sophisticated your specifications, your
"solution set" is not much good when the "problem set" is ill
defined. And nearly every engineering problem I have
encounted is an incompletely defined "problem set". It is not
"adapting to change"--it is wanting different results from our
ill defined problem. ..."

Greg,

It's hard enough when you do things right. It's simply
impossible when you do them wrong. When your problem set
description doesn't match the problem, the mismatch remains
no matter how accurate the translation into specifications.

No matter how frequently this situation occurs, it shouldn't
distract us from bringing improvements to situations in which it
doesn't. I do not address the requirements gathering process
as we can use software to assist in it but we cannot use it to
automate it. We cannot even use software to automate the
translation from requirements (good or bad) to specifications.
We can, however, fully automate the process once we have
the specifications. That's the lesson of logic programming.

Now I didn't invent logic programming. I am not its Al Gore. I
had experience with various logic programming tools, e.g. TIRS
in AI, Knowledge Pro in AI, and SQL in IT. They all go from
specifications to executables entirely within software, entirely
automated. Within the SDP you have five manual stages with
imperative languages and one with declarative languages. It
doesn't take a rocket scientist (like someone else), a
process engineer (like yourself) or a dummy (like me) to do
the arithmetic.

At the heart of logic programming lies this two-stage proof
engine: the completeness proof and the exhaustive true/false
proof. A further examination of the completeness proof
shows that it automates analysis, design, and construction
while the exhaustive true/false proof does the same for
testing. The question then becomes, if you can automate
these stages, why continue to do them manually? You have
hundreds, possibly even thousands of logic programming
instances that have operated in this manner for over 30 years.
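
For illustration only, here is a compressed Python sketch of the two stages under deliberately simplified assumptions (the rule, its terms, and the domain are all invented): stage one checks that the rule refers only to defined terms, and stage two enumerates the whole domain and separates the true instances from the false ones.

# A compressed toy model of the two stages.  Stage one checks that the rule
# uses only terms that have been defined (completeness); stage two enumerates a
# bounded domain and lists every true and every false instance (exhaustive
# true/false).  Nothing here is taken from a real logic programming engine.

DEFINED    = {"x", "y"}
RULE_TERMS = {"x", "y"}                        # terms the rule below actually uses
rule       = lambda x, y: x + y == 10          # which (x, y) pairs satisfy the goal?
DOMAIN     = [(x, y) for x in range(11) for y in range(11)]

def stage_one():
    # Completeness proof: every term the rule uses must be defined somewhere.
    return RULE_TERMS <= DEFINED

def stage_two():
    # Exhaustive true/false proof: evaluate the rule over the whole domain.
    trues  = [p for p in DOMAIN if rule(*p)]
    falses = [p for p in DOMAIN if not rule(*p)]
    return trues, falses

if not stage_one():
    print("incomplete: undefined terms", RULE_TERMS - DEFINED)
else:
    trues, falses = stage_two()
    print(len(trues), "true instances,", len(falses), "false instances")
    print("true instances:", trues)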

So why do we continue in our use of imperative languages? If
we insist on continuing their use, why do we not make them
amenable to this two-stage proof approach? What changes
do we have to make to them to allow their complete
automation after specification as well? We won't end up with
all the advantages of declarative languages, but we will end
up with the automation of analysis, design, construction, and
testing.

People get hung up on SL/I and the DA. Instead they should
focus on the possibilities of the two-stage proof engine.
Without it SL/I and the DA have no meaning. With it you can
pick any language and supporting tool interface to achieve
significant gains.

I haven't addressed the gathering process that results in the
user requirements. I haven't addressed the translation process
from requirements to specifications. I haven't addressed the
automated process that takes specifications to executables.
Logic programming has. I have simply pointed to it as a
solution that has worked reliably for over 30 years. I would
think that long enough to convince even the most hardened of
skeptics.<g>

If you have a better solution complete with empirical proof,
then I will quickly switch from mine to yours. Otherwise I
would expect you to extend me the same courtesy.<g>

Re: [osFree] QA equals testing, Part One:Detection

Post by admin »

#949 Re: [osFree] QA equals testing, Part One:Detection

Dale Erwin
2004 Jan 3 8:49 AM
Lynn H. Maxson wrote:

>
> Now detecting errors requires the creation of test data, data
> expected to give correct (or expected) results and data
> expected to give incorrect (or unexpected) results.

EXPECTED to give UNEXPECTED results ????
--
Dale Erwin
Salamanca 116
Pueblo Libre
Lima 21 PERU
Tel. +51(1)461-3084
Cel. +51(1)9743-6439

Re: [osFree] QA equals testing, Part One:Detection

Post by admin »

#950 Re: [osFree] QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 3 10:03 AM
Dale Erwin writes:
"EXPECTED to give UNEXPECTED results ????"

Yes. You expect something. You get something else. You get
something when you didn't expect it, thus unexpected.

By convention the exhaustive true/false proof only produces
all true instances, one or more. It could just as easily and
separately produce all false instances. This proof process
then works as advertised, i.e. exhaustively. This should put to
rest any assertion that exhaustive testing is impossible, and
thus the supposed need for multiple alpha, beta, and other
releases as well as corresponding "testers".

However, the ability to automate the testing process, to
automatically generate all test cases, i.e. enumerated sets of
test data, does nothing to automate the validation process.
This process, a human and manual one, is what we need to
detect errors: false instances in true results and true instances
in false results. Unless the guy's theorem and proof on proving
correctness get invalidated, we can't automate the validation
process. We can only ensure that we have all the possible
results.

We need then to significantly reduce the number of test
cases, i.e. test instances, to a necessary and sufficient number
representing all possible test results. Fortunately we
developed such methods in testing long before the advent of
logic programming. We can use these methods as part of the
process of generating test data instances to significantly
reduce the volume of results we need to validate.
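
One classical method of that kind is equivalence-class partitioning. Here is a small Python sketch of the idea, with an invented function under test: inputs are grouped by the result they produce, and one representative per group is kept for validation.

from collections import defaultdict

def classify(n):
    # Hypothetical function under test: classifies an 8-bit value.
    if n == 0:
        return "zero"
    return "small" if n < 16 else "large"

def representatives(domain, fn):
    # Group every input by the result it produces and keep one input per
    # group: a simple equivalence-class reduction of an exhaustive test set.
    classes = defaultdict(list)
    for n in domain:
        classes[fn(n)].append(n)
    return {result: members[0] for result, members in classes.items()}

domain = range(256)                            # the exhaustive set: 256 cases
reps = representatives(domain, classify)
print(f"reduced {len(domain)} cases to {len(reps)}: {reps}")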

In truth, because we have automated four of the five stages
of the SDP, essentially reducing their current time a
millionfold, we are left with more time to do validation...and
still have some left over: a net productivity plus.

While we cannot assume error-free validation, as to err is
human in reading as well as writing, we have all the possible
results with all the possible errors. Assuming then that we
have the skill to detect at least some if not all of the errors,
we then go on to error correction, the second part of this
thread.

Thank you for this lead in.

Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#951 Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One:Detection

Dale Erwin
2004 Jan 3 10:53 AM
Lynn H. Maxson wrote:

> "...Having spent a few years writing requirements, I found
> that it's pretty hard to define the requirement adequately, up
> front, and the waterfall of the development process results in
> a refinement of the requirement manifested in the sequential
> specs. The code becomes a sulf(sic)-fulfilling requirement
> baseline, no matter what the original spec. Probably
> accidental if it actually matches the designer's intentions. ..."
>
> Ah, a second voice heard from. Must be an open moment
> during the holidays. Thank you, Dave, for taking time to
> comment.
>
> The waterfall effect as you describe it changes radically in
> logic programming in that only two stages have people input:
> requirements and specification. The remaining stages occur in
> software: analysis, design, construction (completeness proof)
> and testing (exhaustive true/false proof). So only two errors
> in translation can occur, one in translation from the user to
> written requirements (informal logic) and their translation into
> specifications (formal logic). The software translates the
> specifications exactly as written, i.e. no transmission loss.
>
> Note that having the software perform these functions you no
> longer have programmer analysts, designers, or coders. The
> designer's intentions then must reside in the written
> specifications. If they don't result in the expected, i.e.
> intended, then he wrote them wrong (errors of commission) or
> incompletely (errors of omission). He corrects them by
> correcting the writing. He does not do analysis, design, or
> coding beyond the specifications.
>
> It seems difficult to accept that software can
> adequately perform these tasks. Yet daily millions of users
> have trusted SQL to do exactly that...for the last 30 years. If
> you respond that an SQL query doesn't quite match up to the
> more extensive logic and levels found in a program, that a
> program using embedded SQL has to also include additional
> logic outside the query, then I would have to refer you to an
> AI product like IBM's TIRS (The Intelligent Reasoning System).
> TIRS does the whole logic including the writing of the SQL
> query.
>
> Peter also comments:
> "...Dave, you've identified the true path -- writing the spec.
> Lynn has also been pushing the spec as the one true
> document but he's constantly trying to incorporate it into code
> generation using just-in-time manufacturing techniques and
> hallowed echoes of H. Edwards Deming (plus Homer Sarasohn
> and Charles Protzman, to be fair). It's much easier to
> digest "writing the spec" when there's a wall between design
> and manufacturing, and Lynn's dream might more quickly bear
> fruit if he took that tack."
>
> A wall between design and manufacturing. In imperative
> languages that wall exists by necessity: it has to do with the
> logical organization globally in the source by people prior to
> submitting it to manufacturing, i.e. compiling. In declarative
> languages, i.e. logic programming, it ceases when you submit a
> specification to processing. The software in the completeness
> proof does analysis, design, and construction in seamless
> stages with no loss in transmission or translation.
>
> Get used to it. Get used to the idea of saying what you want in
> an unordered, disconnected manner in terms of writing
> specifications. Get used to the idea that the software has all
> the capabilities that humans have in performing analysis,
> design, and construction. You tell it "what" you want. It tells
> you "how" to get it.
>
> It's not some dream of mine. It's how logic programming
> works. It's how SQL works. It's how AI works. It's how neural
> nets work.
>
> A difference that occurs lies in the thousands and millions
> reduction in time for the software to do analysis, design,
> construction, and testing over having people do it. To make a
> few changes to specifications, submit it, and a few seconds (or
> even minutes) later to have the complete results of the
> effects of those changes ought to suffice that analysis and
> design reside inherently in specifications. They do for people.
> They do for software.
>
> This means that you do specifications at one point in one
> source in the process regardless of the number of iterations of
> that process. That means you do requirements at one point in
> one source in the process. You don't get involved with the
> writing of either, with detection and correction of errors of
> omission or commission, at some later point.
>
> You do what SQL query writers do. You write a query. You
> view the results. If they are not what you want, you rewrite
> the query. The query design is pre-structured in the
> "SELECT...FROM...WHERE" clauses. Inside those clauses the
> structure is user-specified in any order that the user chooses.
> In fact within the "FROM" and "WHERE" clauses the order is
> unimportant. Obviously the SQL processor demonstrates that
> it can do analysis, design, construction, and testing without
> any further "input" from you.
>
> I'll reserve commenting on Peter's references to Deming, et al
> except to say that once you get by the effort for problem
> detection, i.e. validating the exhaustive results of execution,
> you need to correct it at its source. When you only have two
> sources which occur sequentially in time you don't have to go
> back to requirements unless it exists there. If it doesn't, then
> you only have to change the specifications.
>
> In short you only have two sources to maintain in sync. These
> two sources represent two different but equivalent
> descriptive forms. It's only between these forms that either a
> loss in transmission or translation can occur. Any that do
> definitely lie outside the purview of the software:
> people-based.
>
> You end up from a people's perspective with a drastically
> short portion up front of the total waterfall. You never have
> to tread water beyond that portion while you always get to
> see it in its entire glory. You can change that flow
> extensively with very small changes in your portion. It reacts
> immediately to any such change. You do not apply the term
> "immediately" when you have people responsible for the
> entire waterfall.<g>

Lynn, I find one thing missing from your missals. I have yet to
see anything addressing errors in the code (written by a human)
that constitutes this tool (SL/I).
--
Dale Erwin
Salamanca 116
Pueblo Libre
Lima 21 PERU
Tel. +51(1)461-3084
Cel. +51(1)9743-6439

Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#952 Re: [osFree] Re: SCOUG-Programming: QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 3 11:39 AM
Dale Erwin writes:
"Lynn, I find one thing missing from your missals. I have yet to
see anything addressing errors in the code (written by a
human) that constitutes this tool (SL/I)."

We have two tools here, the language SL/I and its
implementation within the DA. Neither can detect logic errors
either of omission or commission. I would like them to do so,
but some genius proved that it was impossible for any
software (or hardware) to do so. They can, however, in their
visual output, logical organization, and exhaustive true/false
proof tell you the results of that logic. It's up to you to
validate these results in terms of what you expect. At best
then they provide a multiplicity of aids. Your option, should
you choose to exercise it, lies in validating without the use of
aids: like performing a peer review of your own, i.e. reading,
on the code you have written.

In effect every implementation offers a form of peer review.
This allows you to verify that you wrote what you thought
you wrote. If after exhaustive testing it produces only the
expected results either true or false, then you wrote it
correctly. Otherwise not. You have the additional
requirement to know what to expect. You ought to know
what you want. The implementation can only tell you what
you get. If they don't match, you have a problem.

Do not focus on SL/I. I created it only to overcome the
deficiencies I see in Prolog, the dominant logic programming
language. If Prolog didn't have them, or could easily overcome
them, I would not have come up with SL/I. Unfortunately I'm
an old man with nearly 40 years' experience with PL/I. The
gap between it and any other programming language is huge.
No one, not even Wirth with his evolutionary approach to
programming languages, comes close. To get to a universal
specification language based on PL/I means traveling a far
shorter distance in time with fewer changes than any other. I
could have called it PL/II, but chose instead SL/I.

You need not focus on SL/I or PL/I. You need to focus on
logic programming and its embedded use of a two-stage proof
engine. Then pick any language of your choice and upgrade it
accordingly. The object lies in increasing your productivity so
that you have more time to detect and correct errors prior to
releasing them on an unsuspecting world...which unfortunately
has come to expect it.<g>

Re: SCOUG-Programming: QA equals testing, Part One:Detection

Post by admin »

#953 Re: SCOUG-Programming: QA equals testing, Part One:Detection

Lynn H. Maxson
2004 Jan 3 12:09 PM
Peter Skye writes:
"My perception is that Lynn is trying to eliminate the
high-level source and "compile" directly from the specs. ..."

You must speed read what I write with comprehension
dropping off correspondingly. Somewhere along the line you
must have read my assertion that every programming
language is a specification language, but not every
specification language is a programming language. It becomes
one when an implementation exists, either a compiler or an
interpreter.

In truth, Peter, we always compile from a specification
language that we call a programming language. Along the
way from earlier stages of specification, analysis, and design
we have engaged in the use of other specification languages,
translating each in turn to get to our final specification
language which we compile.

Now logic programming illustrates that we only need to write
the specification language in one stage, the specification
stage, leaving it up to the software to do the remainder of
the necessary writing. That means no CASE input, only
output. No UML input, only output. No ordered input, only
output. The "no input" here refers to what using logic
programming you don't have to write or even translate.

The specification language is a programming language is an
HLL.

As to the remainder of your message, anything you write in
source code is a specification. It doesn't make any difference
what language you use. Concepts like real time system, lead,
lag, threads, priorities, etc. you can encode in several forms
with a choice of multiple programming, i.e. specification,
languages. However, you can't escape what you need to
know to encode them properly.

Compiling directly from an HLL is not new. The difference lies
in imperative languages compiling only in the construction
stage while declarative languages compile in the specification
stage. If it weren't for the need of imperative languages to
incorporate the logical organization as part of the input
process, you could compile them from the specification stage as
well. If you change a few rules to allow the software to
do the logical organization, then you could do it with any
programming language.

Watcom problems?

Post by admin »

#954 Watcom problems?

Yuri Prokushev
2004 Jan 9 1:48 AM
Hi

Open Watcom (at least version 1.0) seems to have problems creating LX DLLs
with 16-bit entry points. Does anyone have information about later versions?

wbr,
Yuri