Part 26 - Aug 28 2003
Posted: Sat Dec 22, 2018 7:45 am
by admin
#755 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Aug 28, 2003 7:35 am
Subject: Re: Alot of Acitivity and Alot of Discussion.
Dale Erwin writes:
"But now we seem to have reached an impasse. Is there
enough talent is this group to write such a language as Lynn
proposes?"
Yes. I know of at least one person capable of it.<g>
Ben Ravago writes:
"And is it even in such a state that someone can write a tool
that implements it?"
Again, yes.
You synthesize the language from PL/I, APL, and LISP. It's not
all that difficult. You take the syntax, control structures, and
data types of PL/I, the operators of APL, and add to them the
list aggregate and operators of LISP. You now have more
than exists in all of the thousands of other third generation
programming languages developed over the years. Then you
add to the mix the capabilities of logic programming,
upgrading your synthesis to a 4GL. To do that you add the
assertion statement to go with the assignment statement.
That allows you to have all of 3GL and 4GL in a single
language.
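To make the distinction concrete, here is a rough analogue
in standard Prolog (purely an illustration; the proposed
language's actual syntax is not defined here). An assignment
computes and stores a value at a fixed point in the control
flow; an assertion states a relationship and leaves it to the
software to decide when and how to evaluate it:

    % assertion: a named relationship among values, stated once
    discounted(Price, Final) :- Final is Price * 0.9.

    % the same assertion serves as a computation when queried:
    % ?- discounted(100, F).
    % F = 90.0.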
You end up with a "universal" specification language, so
universal that it's capable of specifying, i.e. defining, itself. It
is not only self-defining, but self-extensible.
If you don't know PL/I, then you have no idea of how
complete you can make a programming language. Basing the
synthesis on it means you have the least amount of work to
make it a complete third generation language. When done you
have the least amount of work to upgrade it to a fourth
generation language.
You need to get to a fourth generation level in order to shift
more of the coding effort over to the software. Much of that
occurs in the nature of "rules" in 4GLs. In 3GLs the
programmer is responsible for the application of rules by
incorporating them where needed physically in the source
code. To err is human, meaning that programmers may not
know the rules, may forget them, may incorporate them
incorrectly, or may simply fail to incorporate them at all.
Software doesn't suffer from these human frailties.
You can download a free working copy of Visual Prolog to
see how rules get implemented in one 4GL. You can read an
illustration of their use in Peter Flach's "Simply Logical:
Intelligent Reasoning by Example". It comes down to rules
being an unordered collection of defined relationships.
Each rule exists as an independent specification from a single
statement to an assembly of such statements. I should have
also said that they are not only independent but reusable.
That means that they have names. The closest thing in a 3GL
to illustrating rules is in COBOL "paragraphs" which can range
from a single statement to a sequence of paragraphs.
I need to offer some examples of assertions and rules to give
some idea of how a 4GL differs from a 3GL and earlier in
terms of what the programmer writes and what the software
writes. I will do that.
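In the meantime, a first taste in standard Prolog (standing
in only for the proposed language; all names below are
hypothetical). Each rule is a named, independent specification,
anything from a single statement (a fact) on up, and any rule
can reference any other by name:

    % facts: single-statement rules
    years_of_service(erwin, 25).
    active(erwin).

    % rules built from other rules, referenced by name
    senior(Emp)   :- years_of_service(Emp, Y), Y >= 20.
    eligible(Emp) :- senior(Emp), active(Emp).

    % ?- eligible(erwin).
    % true.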
Re: Part 26
Posted: Sat Dec 22, 2018 7:46 am
by admin
#756 From: Ben Ravago <ben.ravago@...>
Date: Thu Aug 28, 2003 8:47 am
Subject: Re: Alot of Acitivity and Alot of Discussion.
"Lynn H. Maxson" wrote:
> I need to offer some examples of assertions and rules to give
> some idea of how a 4GL differs from a 3GL and earlier in
> terms of what the programmer writes and what the software
> writes. I will do that.
Certainly, that would help. Having had brushes with many of the languages
you've been mentioning, I'm curious to see what this amalgam you have in
mind will look like. How would one describe an OS/2 thread switch in that
language, for example?
And by "such a state that someone can write a tool that implements it", I mean
having something that a competent programmer can use to, say, create a gcc
front end for your language and so enable the creation of executable binaries
via the gcc toolchain (or something like that). In other words, something
beyond an EBNF specification of the language; either a compiler or the specs
for a compiler.
As for downloading a Prolog system: what would be the point? Unless you're
proposing that your language will look like prolog, I don't see how people will
be encouraged by having to tackle first the strangeness of a lisp (vs. c/basic)
derived language and then the strangeness of the declarative (vs. procedural)
paradigm. And this will not necessarily give them a good impression of
rule-based programming anyway. Many rule engine products now feature
English-like rule specifications because only academic types can make sense
of lisp-like syntax (and that excludes most programmers).
Sorry if I seem a little contentious here but my last assignment involved the
implementation of a rule-engine based system. Conceptually, a very promising
technology but not for the faint of heart or hard of head.
Also, since you've previously referred to your language as a specification
language, have you looked at the Z specification language (or Z-notation)?
This is an ISO standard specification language. Also no compiler, IIRC.
> If you don't know PL/I, then you have no idea of how
> complete you can make a programming language. Basing the
> synthesis on it means you have the least amount of work to
> make it a complete third generation language. When done you
> have the least amount of work to upgrade it to a fourth
> generation language.
Yes, I know PL/I. And COBOL, too, so I have some idea of what's good and bad
about language 'complete'ness and extension. It's nice to have a lot of
keywords and handy abstractions; it makes for really concise programming.
But over time
a lot of the terms can tend to become baggage as new technology enters into
the realm of what must be programmed for and old technology that was so neatly
encapsulated by terminology disappears. The C language had different objectives
and so has developed differently. But this is getting off-topic: back to OS/2.
Re: Part 26
Posted: Sat Dec 22, 2018 7:51 am
by admin
#757 From: yuri_prokushev@mail.ru
Date: Thu Aug 28, 2003 10:12 am
Subject: Re: Alot of Acitivity and Alot of Discussion.
* Answer on message from INET.OSFREE area
Hello!
Answer on message from Tom Lee Mullins to
osFree@yahoogroups.com:
>> As I said before, we need to restart project. It is a)
>> Web-site b) CVS.
>>
>> BTW, Command line tools very near to be completed.
TLM> There are other 'puzzle pieces' (ie; the DANIS drivers, etc)
TLM> that can come together to form an operating system(?).
Don't expect too much. Not so many of the pieces are open source (and
anything that isn't open source, like DANIS -> /dev/null)
CU!
Yuri Prokushev
prokushev at freemail dot ru [http://sibyl.netlabs.org]
Re: Part 26
Posted: Sat Dec 22, 2018 7:53 am
by admin
#758 From: "Lynn H. Maxson" <lmaxson@...>
Date: Thu Aug 28, 2003 10:57 pm
Subject: Re: Alot of Acitivity and Alot of Discussion.
Ben Ravago writes:
"Certainly, that would help. Having had brushes with many of
the languages you've been mentioning, I'm curious to see what
this amalgam you have in mind will look like. How would one
describe an OS/2 thread switch in that language, for example?
..."
I've decided to broaden the audience for my response here
because I really get tired of repeating myself, if for no other
reason than I've heard it before.<g>
Every programming language is a specification language. Not
every specification language is a programming language. The
Z-specification language, for example, is not a programming
language, though I probably have more reference textbooks
on it than any other with Forth running a close second.
No matter how you hack it, beginning with the gathering of
user requirements as input you have a process of five
stages--specification, analysis, design, construction, and
testing--for their translation into an executable software
version. Imperative (first-, second-, and third-generation)
languages "insist" that you perform these manually as a
sequence of translations from one stage to the next. They
"insist" because you can only say "what" you want done
through describing "how" to do it.
That's their "nature" and how they move through their
"natural habitat": one "manual" stage at a time. If you use
them, then you too must go lockstep with them. After you
have done it a few hundred million times, it becomes "natural"
to you, a habit, a paradigm, the way you see things.
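The textbook Prolog definition of list membership shows the
other side of that split in miniature (Prolog here standing in
for any declarative language): two statements that say only
"what" membership is, with the "how", the search, left
entirely to the software:

    % X is a member of a list whose head is X,
    % or a member of the list's tail
    member_of(X, [X|_]).
    member_of(X, [_|Tail]) :- member_of(X, Tail).

    % the same two statements check or generate as the query demands:
    % ?- member_of(b, [a, b, c]).    succeeds
    % ?- member_of(X, [a, b, c]).    X = a ; X = b ; X = c

(member_of is used instead of the built-in member/2 only to
keep the example self-contained.)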
Now that fourth generation "declarative" languages based on
logic programming have arrived, something needs to change,
even, God forbid, a 50-year-old paradigm that has transported
you through three generations to this point. You can't simply
desert it now. You don't. You insist on making fourth
generation use earlier generation tools, the same mind set
regardless.
Yet fourth generation takes specifications as input,
automatically performing analysis, design, and construction.
So why do fourth generation users not simply write
specifications only without doing any of the other? Do they
simply have bad habits they cannot drop? Or does their tool
set based on earlier generation methodology "insist" on it?
It's the tool set. It's the tool set. It's the tool set. What I tell
you three times is true. The tool set not only ill-serves its
intended earlier generations, but it's an absolute disaster
when it comes to the fourth generation.
The tool set ill serves deliberately. The vendors had a peek
at an alternate universe with IBM's (failed) AD/Cycle project
and said, "Thanks, but no thank you." Of course, they said it
in a way which implied IBM was at fault. That should have
been enough of a clue to lay to rest any residual myth about
IBM's great "marketing advantage".<g>
Unfortunately open source, whose only tool vendor is itself,
has seen fit to have its tool set ill serve it. You may see some
reason to tie in the gcc family to what goes on here, but
frankly I don't want to get dragged down into that dungeon.
For my money the gcc family as currently implemented is the
problem and not the solution.
I brought up COBOL to illustrate a point on "reuse" made
possible by COBOL paragraphs. Granted COBOL is verbose (as
an understatement). Thus the trees may block out a view of
the forest. You may have so many paragraphs so widely
separated in a source listing that reuse may seem more a
handicap than a blessing.
The COBOL "perform" verb executes a named paragraph (all
paragraphs have names). That paragraph may consist of a
single COBOL statement on up to a larger assembly. The
COBOL "perform thru" executes a named sequence of two or
more paragraphs with the first and last names acting as
delimiters.
So COBOL provides simply what PL/I and C do not: repetitive
reuse of non-procedures, i.e. anything from a single statement
on up. In theory PL/I and C can achieve the same with an
"include", but only at the cost of actually repeating the
statements. Also note that data, not statements, are the
principal use of "includes" in most programming languages.
So COBOL provides reuse at a granularity not easily possible in
other programming languages. It implements that reuse
through an internal "label goto and return" by the compiler
whose external use by a programmer nowadays qualifies him
for job termination.<g>
COBOL at least recognizes that the smallest unit of reuse is a
single statement (even if it occupies an entire paragraph).
Granularity of reuse to the statement level means reuse from
that level on up.
Now in logic programming rules appear as a sequence of one
or more statements. In logic programming rules are reusable.
Thus in logic programming reuse occurs down to the
statement, i.e. specification, level. My only point in
referencing COBOL was to show it was not a new concept,
something dreamed up for logic programming or by me.
You see it trips off our lips to talk of "gathering" user
requirements. That implies a "holding" action until some
number of them meeting some unspoken criteria is reached.
That implies a delay. If that delay is longer than the average
interval between new or changed requirements, then you can't
possibly keep up going manually through four stages to get
through construction. More often than not you have to initiate
a "freeze" on one gathering to get it through the process,
which means starting another for those requirements arriving
in the meantime.
You see we need to shift from a "gathering" mentality to a
"capturing" one. We don't collect them in a holding area.
Instead we pass one off as one or more specifications before
on average the next one comes in. In that manner we never
have a "backlog" nor the need to initiate a "freeze".
We bring them in one specification at a time in the order in
which they appear on input. We don't have to determine
ahead of time whether we have captured enough for
meaningful results. We let the software as part of its
completeness proof do that for us. Thus we have only to
write specifications as they occur, allowing the software to
engage continuously in analysis, design, and construction
according to the dynamics of the input.
That's what declarative buys you over imperative. You only
write specifications (here your specification language is a
programming language), leaving it up to the software to write
everything else. You know it can do it, because that's what
logic programming has always done. It's not some weird
invention of my own.
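In Prolog terms (again, only a stand-in for the proposed
tool), "capturing" is simply asserting each specification into
the knowledge base as it arrives; every query from that
moment on reflects it, with no batching and no freeze. The
predicate names here are hypothetical:

    :- dynamic requirement/2.

    % capture one requirement the moment it arrives
    capture(Id, Spec) :- assertz(requirement(Id, Spec)).

    % the "current system" is whatever has been captured so far:
    % ?- capture(r1, invoice_total), capture(r2, tax_report).
    % ?- requirement(Id, Spec).
    % Id = r1, Spec = invoice_total ;
    % Id = r2, Spec = tax_report.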
"Sorry if I seem a little contentious here but my last
assignment involved the implementation of a rule-engine
based system. Conceptually, a very promising technology but
not for the faint of heart or hard of head. ..."
We live in a world in which apparently we neither hear nor
listen to ourselves. In programming we have two things: data
(referents) and expressions regarding their use (references).
In documentation we have two things: data (referents) and
expressions regarding their use (references). So when Donald
Knuth proposes something like "literate programming" we
shouldn't be surprised that it deals with data (referents) and
associations between two different expressions, code and
documentation, with the same purpose of use.<g>
If you want to go "high class", joining those who feel better
saying "information processing" instead of "data processing,
you can upgrade "data" to "object". In doing so you must
somehow admit that all data-oriented activities are also
object-oriented. That means we were doing object-oriented
technology long before we formalized and restricted its
application to a particular methodology. All programming from
its very inception to now and onward into the future is
object-oriented. It's not that we have a choice. It's simply a
matter of picking a particular choice.
In the instance of what we now refer to as object-oriented
technology we made a poor choice on selecting Smalltalk as
our paradigm. You have to be careful when you borrow
freely from academic and research environments. You ought
to have extra caution when these people seize on the
opportunity to benefit from a "for-profit" adventure. They
can't lose. You can.
No better example of this exists than UML, the "Unified"
Modeling Language. You needed one not so much unified as
universal. You see a universal modeling language is unified,
but a unified one is not necessarily universal. You should
have been suspicious about the increased labor content when
your analysis and design stages went from two sources
(dataflows and structure charts) to fourteen.<g>
Not only does it take longer, regardless of extreme or agile
programming methods, cost more, and run slower, but the
backlog grows faster. In other words it had just the opposite
effect on the problem it was supposed to solve.
So I can't refute my own logic by denying object-oriented. I
can, however, intelligently decide not to engage seriously in
"that" object-oriented technology.
The issues here, regardless of what form of object-oriented
programming you practice, are reuse, its granularity, and
whether or not repetition occurs. No one argues against the
principle of reuse. Things start to differ after that point.
Logic programming and COBOL allow granularity from the
single statement level on up. They differ on invocation. In
COBOL it occurs only through a "perform" statement. Thus
the name of the paragraph (which is the programmer's
responsibility) gets repeated, but not the paragraph, i.e. the
source. Logic programming is somewhat "richer" in options
with respect to rule processing. Nevertheless the rule itself is
never repeated, only its references.
You can understand why this happens in COBOL. You use
reuse to avoid repetition as each repeated instance would
take up valuable space. At least that's what it would do at
the time COBOL was invented. So you had two benefits. One,
space saving. Two, only a single copy of source to maintain.
Only a single copy in a single program. Otherwise a repeated
copy in each. It simplified maintenance within a single
program, but not among multiple programs affected by a
paragraph change. Except for IBM's implementation of
object-oriented COBOL I am not aware of any widespread use
of the "copy" statement in a processing section to perform the
same function as an "include" in C. Even in C (or PL/I) the
"include" statement is seldom applied to processing
statements, certainly not to a single statement regardless of
the possibility.
Why is this important? What's the big deal?
Currently editors, compilers, and a whole host of supporting
utilities are based on a file system. Single statement files are
not impossible, only impractical. To use them means having to
explicitly and manually name each file. Besides that, having a
six million statement program means maintaining six million
files. Even if you break them down logically somehow into
multiple directories, it still becomes unwieldy.
Thus the file system itself militates against the use of single
statement files. Even in COBOL single statement paragraphs
occur physically within a body of other paragraphs, all of
which are contained in a single file. Little wonder then that
for practical purposes we use one file per procedure. Not only
do we not have granularity at the statement level, but we do
not even have it at the control structure level. In fact we do
not have it at any statement group level below the procedure
level.
All due to basing source creation and maintenance on use of a
file system. The alternative is to use a database. Everyone gets
hot for that because it's good to have on your resume. So
how do you use a database? Chances are exactly like a file
system except that you have a row name instead of a file
name.
The secret here lies in taking advantage of a database and
the relational language supporting access to it. You do that
by storing each statement separately, allowing the software
to automatically generate the name based on content. It
doesn't matter how many times a programmer writes the same
statement in how many different programs; the actual
statement itself appears only once in the database.
In turn statement assemblies occur only as a named list of
names of statements or assemblies. You have the "pure"
manufacturing environment that object-oriented talks about in
terms of reuse but never achieves.
All this occurs through a data repository/directory
automatically accessed and maintained by the software for all
documentation, all source code, and all data. All in one place,
one database with one access point, the directory. The
software provides the necessary user interface.
In the end it's not just one paradigm change that's necessary
but a set of them. You have the overhaul of a complete
system of paradigms.
Re: Part 26
Posted: Sat Dec 22, 2018 7:54 am
by admin
#759 From: Dale Erwin <daleerwin@...>
Date: Fri Aug 29, 2003 4:54 am
Subject: Re: Alot of Acitivity and Alot of Discussion.
Lynn, I just wanted to make a comment on your remarks concerning
COBOL and reusability:
While it's true that COBOL paragraphs and/or sections can be PERFORMed
from many different locations in the logic, this is only evident in the
source, because the compiler proceeds to copy all the performed code
inline at every point of execution... supposedly for "performance" reasons.
This might save on typing, for, as you say, COBOL is quite verbose, but
to me "reusability" is not really served here.
--
Dale Erwin
Salamanca 116
Pueblo Libre
Lima 21 PERU
Tel. +51(1)461-3084
Cel. +51(1)9743-6439
Re: Part 26
Posted: Sat Dec 22, 2018 7:54 am
by admin
#760 From: "Lynn H. Maxson" <lmaxson@...>
Date: Fri Aug 29, 2003 8:24 am
Subject: Re: Alot of Acitivity and Alot of Discussion.
Dale Erwin writes:
"...While it's true that COBOL paragraphs and/or sections can
be PERFORMed from many different locations in the logic, this
is only evident in the source, because the compiler proceeds
to copy all the performed code inline at every point of
execution... supposedly for "performance" reasons. ..."
Interesting, as my experience in debugging hundreds of COBOL
programs in the IBM mainframe environment was that they
were not done inline. I do have a COBOL compiler on this
machine, but I so
dread reading Intel dumps that I'm reluctant to find what it
does.
I assume that inline is not only faster, but also supports a
smaller working set in a virtual storage system. However,
your comments do make a point that regardless of repetitive
inline use only one source copy exists. That still simplifies
maintenance.
The type of reuse, either inline (repetitive) or out-of-line
(non-repetitive), should exist as a metaprogramming option on
an instance (local) or global basis. I'm still of the view that
the programmer should dictate the implementation and not
vice versa.
Thank you for showing that not all COBOL compilers are the
same.
Re: Part 26
Posted: Sat Dec 22, 2018 7:56 am
by admin
#761 From: Dale Erwin <daleerwin@...>
Date: Fri Aug 29, 2003 9:56 am
Subject: Re: Alot of Acitivity and Alot of Discussion.
Lynn H. Maxson wrote:
> Dale Erwin writes:
> "...While it's true that COBOL paragraphs and/or sections can
> be PERFORMed from many different locations in the logic, this
> is only evident in the source, because the compiler proceeds
> to copy all the performed code inline at every point of
> execution... supposedly for "performance" reasons. ..."
>
> Interesting, as my experience in debugging hundreds of COBOL
> programs in the IBM mainframe environment was that they
> were not done inline. I do have a COBOL compiler on this
> machine, but I so
> dread reading Intel dumps that I'm reluctant to find what it
> does.
>
> I assume that inline is not only faster, but also supports a
> smaller working set in a virtual storage system. However,
> your comments do make a point that regardless of repetitive
> inline use only one source copy exists. That still simplifies
> maintenance.
>
> The type of reuse, either inline (repetitive) or out-of-line
> (non-repetitive), should exist as a metaprogramming option on
> an instance (local) or global basis. I'm still of the view that
> the programmer should dictate the implementation and not
> vice versa.
>
> Thank you for showing that not all COBOL compilers are the
> same.
My experience before retirement was all IBM mainframe... a little
over 25 years split up just about half and half between COBOL and
Assembler. Starting with the introduction of COBOL II and a compiler
option for "optimizing," the performed code was copied inline. Being
an Assembler programmer, I could never understand why it was supposed
to be faster. Is a branch instruction so slow? But then, COBOL always
has so much other stuff going on, maybe it did a LOT of stuff any
time a branch was made. But saving a return address, branching to
another location, then loading up that return address and returning
can't take so many machine cycles that it is a noticeable delay.
But then, I guess they're thinking about the thousands of times that
a program will run along with all the thousands of others that run
along with it. Adds up, I guess, but I always thought that the time
to load the larger load modules would compensate for any delays.
Loading a module requires disk access, and you can do an awful lot of
BAL and BALR (or BAS/BASR) instructions in that amount of time.
Before COBOL II, there was a third-party optimizing compiler (lordy,
never thought I'd ever forget the name of that compiler, but I have)
that also copied performs inline.
--
Dale Erwin
Salamanca 116
Pueblo Libre
Lima 21 PERU
Tel. +51(1)461-3084
Cel. +51(1)9743-6439
Re: Part 26
Posted: Sat Dec 22, 2018 7:57 am
by admin
#762 From: "Dwight M. Cannon" <dwightc@...>
Date: Fri Aug 29, 2003 2:39 pm
Subject: COBOL Performs
Regarding the use of PERFORM THRU statements, just so those that are
unfamiliar with COBOL have a little understanding of this, the basic tenet
of "structured" COBOL programming requires that only one paragraph be
performed by a single invocation, i.e., that the PERFORM be such as "PERFORM
9000-OPEN-FILES THRU 9000-OPEN-FILES-EXIT" (with the ...EXIT paragraph being
a mere EXIT, or return, statement). To invoke a string of paragraphs in one
PERFORM is sloppy, not to mention dangerous, programming - it only takes one
unknowing maintenance programmer to inadvertently stick an incorrect
paragraph within a PERFORM range to cause problems...then again, they could
add an ALTER or GO TO DEPENDING ON statement or two to make things really
interesting.
That being said, COBOL also offers the PERFORM VARYING..., et al, statements
that allow subscripted data access for the handling of table-ized data. What
I would like to see in any considered programming language (4G or whatever)
is the ability to not only vary the data accessed through subscripts, but
also the referenced name of that data. This would, I suppose, be along the
lines of object-oriented parameter passing. While object-oriented COBOL more
or less serves this purpose, the ability to reuse paragraphs with
interpreted data names would also be useful. COBOL allows the usage of COPY
REPLACING statements for copybooks (data or procedure division), but this is
a compile-time replacement; I'd like to see a runtime variant of this. This
might allow even greater reuse of code, perhaps reducing source to an even
lower level. How a compiler would resolve this is beyond me (I don't write
compilers), but I'm sure it could be handled.
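For what it's worth, logic-programming languages get something close to
this for free, because a "data name" is itself a term that can be passed
around and examined at run time. A standard Prolog sketch (the record
layouts are hypothetical, for illustration only):

    % one reusable rule serves any record type; the field position is
    % resolved at run time rather than at compile time as with COPY
    % REPLACING
    field_value(Record, N, Value) :-
        Record =.. [_RecordType|Fields],   % decompose the record term
        nth1(N, Fields, Value).

    % ?- field_value(customer(ann, lima), 2, City).
    % City = lima.
    % ?- field_value(invoice(90210, 1500), 1, Num).
    % Num = 90210.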
By the way, while I've seen plenty of implementations of COPY statements
being used in the PROCEDURE DIVISION (processing section) within programs,
I've never been a fan of this, as it makes programming more difficult, as
you are unable to see this code and what, exactly, it's doing, in most
editors (GUI IDEs tend to allow inline viewing and/or editing of copybooks
while looking at the copying source, however - no such ability exists on the
mainframe that I am aware of; you'd have to open the copybook in a different
editor). Then there's also the possibility of somebody changing the
procedural copybook and hosing your program the next time it is compiled...
Whether COBOL or any other language, such reuse could be beneficial and
perhaps make the creation and maintenance of source much easier. I've always
heard the COBOL-is-verbose arguments, but this little ability (code reuse)
can actually make a well-written program not only self-documenting, but
smaller and easier to write and maintain as well. Perhaps a code generating
language would create these statements on its own, as I'm unsure how the
implementation of the language that has been discussed would work...
OK, enough COBOL rambling.
Dwight M. Cannon, OSE (how many of us are there?)
Re: Part 26
Posted: Sat Dec 22, 2018 7:58 am
by admin
#763 From: "Lynn H. Maxson" <lmaxson@...>
Date: Fri Aug 29, 2003 7:36 pm
Subject: Re: Alot of Acitivity and Alot of Discussion.
Dale Erwin writes:
"My experience before retirement was all IBM mainframe... a
little over 25 years split up just about half and half between
COBOL and Assembler. Starting with the introduction of
COBOL II and a compiler option for "optimizing," the performed
code was copied inline. ..."
Dwight M. Cannon writes:
"Regarding the use of PERFORM THRU statements, just so
those that are unfamiliar with COBOL have a little
understanding of this, the basic tenet of "structured" COBOL
programming requires that only one paragraph be performed
by a single invocation, i.e., that the PERFORM be such as
"PERFORM 9000-OPEN-FILES THRU 9000-OPEN-FILES-EXIT"
(with the ...EXIT paragraph being a mere EXIT, or return,
statement). To invoke a string of paragraphs in one PERFORM
is sloppy, not to mention dangerous, programming - it only
takes one unknowing maintenance programmer to
inadvertently stick an incorrect paragraph within a PERFORM
range to cause problems...then again, they could add an
ALTER or GO TO DEPENDING ON statement or two to make
things really interesting. ..."
I hadn't really wanted to touch off a discussion about COBOL.
In spite of its "verbosity" I would still pick it over C.<g> At
least it does support variable-precision, fixed-point, decimal
arithmetic while allowing integers to be decimal as well as
binary numbers.
The so-called compiler optimizing option represents a global
meta-programming feature, as do most (if not all) compiler
options. Personally I would like to see the programmer have
more control down to the instance level. Of course, as a PL/I
programmer I've always believed the programmer should have
the last word.<g>
I worked primarily on DOS/VSE, not MVS, accounts as a
regional specialist on CICS/DL/I for accounts using IBM's
COPICS. The advent of COBOL II with its structured
programming features proved interesting. My first experience
on it was trying to instruct a COBOL programmer how to write
inline PERFORM's. Now, several decades later, out-of-line
PERFORMs still dominate coding practice.
Dwight failed to explain why the "9000-OPEN-FILES" started
off with the "9000-" or why "8090-" would appear before and
"9040-" after it. The programmer essentially numbered and
physically ordered the paragraphs in source to make locating
them in the listing easier. I had the habit, in reading COBOL
program listings, of "decollating" the pages, laying them out in a
two-dimensional manner to make it easier to move from
reading one paragraph to another.
There was very low overhead in an out-of-line reference.
That was not a performance issue. The performance issue
arose due to the reference pattern physically in storage and
the number of virtual storage pages it touched (and which
therefore had to be brought into memory) in the course of its
travels. This increased the working set, the number of pages
in active use. Putting the executing code inline reduced this
number.
I don't know that any of this has any meaning any more when
main storage runs into the gigabytes and essentially no paging
occurs.
I used COBOL to illustrate granularity of reuse down to the
statement level, something not "attractive" through use of
"include's" in C or PL/I. Sometimes people think I invent things
like "reuse at the statement level". I just wanted to show its
presence in existing software technology.
However, the use of a number prefix as part of a label, i.e.
paragraph, name, a compiler (meta-programming) option to
produce inline instead of out-of-line execution, and "smart"
editors that perform syntax checking but seem incapable of
processing "include" or "copy" statements by expanding them
inline in source--whew--all make several points with respect
to the deficiencies of our current tool set and use of third
generation languages.
In the system I propose no "explicit" use of an "include" or
"copy" occurs, because "implicitly" every statement is
accessed as if it were an "include" or "copy". Thus the editor,
in this instance the Developer's Assistant, would present the
fully exploded source.
Now remember this uses a fourth generation language in
which the "code segments" representing specifications can
appear in any order, i.e. unordered. That's the "input" source.
Fourth generation languages, in fact all of logic programming,
use a two-stage proof engine: a completeness proof and an
exhaustive true/false proof. As part of the processing of the
completeness proof the software "organizes", i.e. orders, the
unordered input source into an optimal output source format.
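Ordinary Prolog already shows the flavor of this on a small
scale: the clauses below appear "unordered", with the rule
written before the facts it depends on, and the query still
resolves because the engine, not the programmer, organizes
the search:

    % a rule stated before the facts it references
    grandparent(X, Z) :- parent(X, Y), parent(Y, Z).

    parent(ann, bob).
    parent(bob, cia).

    % ?- grandparent(ann, Who).
    % Who = cia.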
That means that every specification, which appears only once
in the actual input, is replicated wherever it is referenced by
other specifications. The only exception occurs when a
specification exists as a named subroutine, i.e. a "procedure".
Thus the "output" source resulting from the completeness
proof contains replications of the "input" source where
required and no use of artifacts like paragraph names in
COBOL occurs. This results in a completely structured source,
obeying the one-in-one-out pattern of structured
programming.
You need to understand that the "output" source, which is the
input to the code generation process of the completeness
proof, is never stored, only generated. Thus no maintenance
ever occurs on it, only regeneration. Maintenance only occurs
on the unordered input where changes get reflected through
regeneration in the output source.
We achieve this through a data repository/directory, in this
instance based on using a relational database, in which we
store only the individual statements as rows in a table. To do
that the software assigns each statement a name based on
the first 20 bytes of its content appended with a 4-byte
(binary) index value. The index value compensates for the
possibility of homonyms (same name, different referents).
Together they form a 24-byte unique name required for every
row in a table.
Now note that the programmer writes statements as he would
normally without concern for their existence in the database.
The software will process a statement, create a name based
on its content, check it against same names (homonyms) in
the database, and check its total content against that of the
other homonyms. If it finds a match, it will use that existing
name for the source. Otherwise it will add an index suffix to
give it a unique name and store it in the database.
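A minimal sketch of that naming scheme, in Prolog for
continuity (the 20-character prefix and the homonym index
follow the description above; the dynamic predicate
stored_stmt/3 stands in for the repository table, and
aggregate_all/3 is as in SWI-Prolog):

    :- dynamic stored_stmt/3.

    % derive the 20-character "proper name" from the statement text
    stmt_prefix(Text, Prefix) :-
        atom_length(Text, L),
        (  L >= 20
        -> sub_atom(Text, 0, 20, _, Prefix)
        ;  Prefix = Text
        ).

    % reuse the existing row on an exact content match; otherwise
    % file the statement under the next free homonym index
    store(Text, Prefix-Index) :-
        stmt_prefix(Text, Prefix),
        (  stored_stmt(Prefix, Index, Text)      % already present
        -> true
        ;  aggregate_all(count, stored_stmt(Prefix, _, _), N),
           Index is N + 1,
           assertz(stored_stmt(Prefix, Index, Text))
        ).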
All this is invisible to the programmer. All he sees is the
unaltered source. Now he could look to see if the source
statement he wants is in the database, but that would take
longer than simply writing it. So he just writes it. The
software assumes all responsibility from that point on. Note
that statement reuse is automatic, requiring no effort on the
part of the programmer. Also note that any statement
appears only once in the database, the data repository,
regardless of how many times it appears in how many
programs.
You can write the input in any order. The software will
determine (generate) the output order. Thus you can accept
user changes/requirements as they occur, write them as
specifications, insert them into the input, and the software
will regenerate the output reflecting changes to the input.
Now I said that kind of fast, but not fast enough for the
software, which can do all that tens of millions of times
faster.<g> If you have followed this to this point--and if you
have not, go back and do so--and you have any kind of
experience in the difficulties of source program maintenance
of third generation languages, you have just seen what a
difference a fourth makes.<g>
Moreover don't forget the completeness proof has two results:
true, if it has all it needs to order the input source completely,
and, false, if it does not. In either case, just in case you don't
understand how it derived the output form from the input or
why it couldn't, it incorporates a feature called "backtracking"
(present in all of logic programming) to show you why it did
what it did or why it could not (an incomplete or false
instance).
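Backtracking needs no special machinery to see; it is visible
in any Prolog query with alternatives. The engine commits to a
candidate, fails, retreats to the last choice point, and tries
the next; that trail of choices is exactly the record of why
it did what it did:

    ?- member(X, [a, b, c]), X \= a.
    X = b ;    % tried a first, failed the test, backtracked
    X = c.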
Don't stop now. It gets even better. Though the
completeness proof may register a "false" or "incomplete"
status (state), that doesn't mean that completed portions do
not exist, i.e. partially complete. Nothing prevents the
completeness proof from displaying incomplete results based
on its complete portions. By display I mean in the form of
dataflows, structure charts, or any of the dozen or more
forms of UML.
When you use these visual output results as the basis for
changes to the "unordered" input source, it automatically
regenerates the new output source, which in turn regenerates
the visual output results. Ten million times faster relative to
your speed puts it in real time.
Remember all you do is write specifications. Yours then
becomes a "one-write" activity on input while the software
engages in a "multi-write" activity on output. That means
you maintain only one source, the unordered input, and
automatically gain through the software all the different
outputs, visual and otherwise, that exist as options for your
selection.
If you understand the meaning, the implication (and
simplification) of all these features and functions in a single
tool, you're not the only one. Intelligent people following the
same logic come to the same conclusion: you only need one
comprehensive tool, one user interface, and one form of
source writing translating one form of user requirements. You
now see why the software tool vendors offered lip service
coupled with passive resistance to IBM's proposed AD/Cycle.
You may also come to understand why my friend and
collaborator Bob Blair has his focus on developing the front
end which includes the data repository/directory while I
prattle on the back end which produces all these visual and
text outputs. In the middle we have our "smartest" editor for
maintaining our unordered input source specifications, which
get checked against the front end and processed by the back.
You probably missed the other benefit of using a relational
database to house the data repository/directory. You can
query to get a listing of all homonyms. Within any one family
of same "proper names", the text portion of the unique row
name, you can check all the contents at leisure, determining if
simplification, a reduction in number, is possible. If so, you can
have the effect of the reduction replicated in all affected
assemblies.
The point is that you now can administer the entire source of
an entire enterprise and that includes all documentation
within the enterprise, not just source code and source text.
You have a means of employing "literate programming"
technology to an entire enterprise from a single source.
If you are smart enough to know the difference, you will give
up on storing html "files" and use the same tool for maintaining
websites as you do for programming and for enterprise
documentation. This means you can totally "integrate" the
information or knowledge base of an entire enterprise from a
single source data repository/directory. That means everyone
uses the same means for maintaining unordered source and
generating ordered output.
I thank you for your patience.
Re: Part 26
Posted: Sat Dec 22, 2018 7:59 am
by admin
#764 From: Dale Erwin <daleerwin@...>
Date: Fri Aug 29, 2003 7:59 pm
Subject: Re: COBOL Performs
Dwight M. Cannon wrote:
> Regarding the use of PERFORM THRU statements, just so those that are
> unfamiliar with COBOL have a little understanding of this, the basic tenet
> of "structured" COBOL programming requires that only one paragraph be
> performed by a single invocation, i.e., that the PERFORM be such as "PERFORM
> 9000-OPEN-FILES THRU 9000-OPEN-FILES-EXIT" (with the ...EXIT paragraph being
> a mere EXIT, or return, statement). To invoke a string of paragraphs in one
> PERFORM is sloppy, not to mention dangerous, programming - it only takes one
> unknowing maintenance programmer to inadvertently stick an incorrect
> paragraph within a PERFORM range to cause problems...then again, they could
> add an ALTER or GO TO DEPENDING ON statement or two to make things really
> interesting.
In shops that forbid GO TO statements, PERFORM THRU is useless. Why have
an EXIT paragraph, if you can't use GO TO to get to it? Other uses of
PERFORM THRU can be solved by PERFORMING a SECTION rather than a range of
paragraphs.
> By the way, while I've seen plenty of implementations of COPY statements
> being used in the PROCEDURE DIVISION (processing section) within programs,
> I've never been a fan of this, as it makes programming more difficult, as
> you are unable to see this code and what, exactly, it's doing, in most
> editors (GUI IDEs tend to allow inline viewing and/or editing of copybooks
> while looking at the copying source, however - no such ability exists on the
> mainframe that I am aware of; you'd have to open the copybook in a different
> editor). Then there's also the possibility of somebody changing the
> procedural copybook and hosing your program the next time it is compiled...
COPY statements in the PROCEDURE DIVISION are a maintenance nightmare. If
someone changes the copy code for his program, it will invariably break
some other program that copies it. In CICS programming, code that is
brought in by COPY is not translated by the CICS translator program,
because the copy code is not brought in until compile time. For
maintenance's sake,
this code should be in a separate program which can be called by various
programs, but then you get stuck by performance issues... especially for
dynamic calls, and static calls are almost as bad as COPY code.
If I understand Lynn correctly, his proposed 4GL compiler would circumvent
this type of problem.
--
Dale Erwin
Salamanca 116
Pueblo Libre
Lima 21 PERU
Tel. +51(1)461-3084
Cel. +51(1)9743-6439