#915 From: "Yuri Prokushev" <yuri_prokushev@mail.ru>
Date: Sun Dec 21, 2003 9:52 am
Subject: Re: Re: Choosing kernel
** Reply to message from "Lynn H. Maxson" <lmaxson@...> on Sat, 20 Dec
2003 08:13:31 -0800 (PST)
> "...And I prefer to have language independed (sic)
> specification (IIRC one time Apple used such language
> independed api description). ..."
> Let's get the obvious out of the way. It's impossible to have a
> language independent specification, because you have to use
> a language to write it. If you want it programming language
> independent, which I think you meant,
Exactly
>then your desire for a
> two-layered approach in implementing a kernel becomes a
> three-layered one in describing that implementation: the
> informal language of a requirement, the formal language of an
> programming language independent specification, and the
> formal language of a programming dependent source.
Yes. But such an approach allows developers to use the language they
prefer or must use. It's like CORBA IDL for interface definitions: I can
translate the IDL into the corresponding language.
> I have no knowledge of the specification language that Apple
> used. I have some knowledge of the instruction specification
[eaten]
> May I suggest then that for the moment we consider a
> three-layer language approach using this language as your
> [programming] language independent form. Where feasible
> you can translate it into C or one or more of its derivatives
> while I continue to pursue its implementation. That means
> eventually it will become a two-layered approach, reducing
> significantly the overall effort of synchronizing changes.
I need to read this specification before drawing any conclusions.
wbr,
Yuri
Part 31 - Dec 19 2003
Re: Part 31
#916 From: "Lynn H. Maxson" <lmaxson@...>
Date: Sun Dec 21, 2003 7:13 pm
Subject: Re: Re: Choosing kernel
Yuri Prokushev writes:
"...Yes. But such approach allows to use language which
developer prefer or must to use. It's like CORBA IDL for
interfaces definetion. I can translate IDL to corresponding
language. ..."
An interface definition contains the interface name, its
parameters, the data type of the parameters, whether input
or output only or both (inout), the values those parameters
can take on, and the meaning of some of those values. For
you to translate from the IDL to any programming language
means the IDL itself cannot rise above the lowest common
denominator (LCD) of the associated programming languages.
That may not seem serious to you, as C and its derivatives have long
possessed the LCD of programming languages.
However, for some of us the LCD denies capabilities of the
language of our choice. If we use those capabilities, then you
have no translatable form short of an assembly language
filter to another language.
For example, C has two definitions of a string variable: 'char' for a
fixed-length, single-character string without a null terminator, and a
variable-length char array with a null terminator. Thus it allows
neither a fixed-length, multiple-character string nor a
variable-length, multiple-character string without a null terminator.
OTOH, PL/I does. It does it by having only one data type
'character' or 'char', which can have a fixed length of one or
more characters without a null terminator, e.g. 'dcl manny
char (1);' or 'dcl moe char(23);', and a variable length of zero
or more characters with or without null termination, e.g. 'dcl
zoe char (54) varying;' or 'dcl able char (66) varyingz;'.
This may not seem all that serious, but consider that the IDL
doesn't support variable precision, fixed-point decimal or
binary parameters. Or if it does, you cannot translate them
directly into C. The problem here, of course, comes down to
the mapping between the problem set, contained within the
informal language of requirements, and that of the solution
set, the formal language of specifications.
If you cannot have a one-to-one (1:1) mapping of data types
between the two, something gets lost in translation.
I, of course, prefer to write in PL/I, which supports more
"native" data types in a machine-independent mode than all
other programming languages combined. That means it allows
a mapping of data types between the problem set and the
solution set unmatched by any other programming language.
So why would I not choose to make it my specification
language of choice? In doing so, by opting for something
more than the LCD, I impact your ability to translate directly
into your language of choice.
I didn't bother to mention PL/I's support of fixed and variable
length bit strings definable in a machine-independent manner.
That means their lengths need not fall on 8-, 16-, 32-, 64-, 128-, or
256-bit boundaries, e.g. 'dcl funny bit (47);'. This may not seem
important to others, but they have not tracked the
implementation-defined track record of 'int', 'long', and 'short' in
the UNIX and C world.
I mention all this to point out that I may have a perfect match
between a requirements language (the problem set) and a
specification language (a solution set) in terms of data types
(elements and aggregates), operators (element and
aggregate), and mixed expressions involving their use, but
lose that mapping in translating from one specification
language to another used for programming.
Therein lies my objection to a three-layered language
approach: I have no desire to lose anything in translation. As
a programmer I have no desire to find myself restricted to an
LCD level.
So I have a fourth generation specification language, SL/I
(Specification Language/One). I'm in the process of making it
a programming language as well. It is a synthesis of PL/I,
APL, and LISP enhanced with logic programming. PL/I, APL,
and LISP contain everything in terms of data types and
operators found in all other third-generation programming
languages. You cannot accuse it of conforming to an LCD.<g>
#917 From: Dale Erwin <daleerwin@...>
Date: Mon Dec 22, 2003 2:11 am
Subject: Re: Re: Choosing kernel
Lynn H. Maxson wrote:
>
> So I have a fourth generation specification language, SL/I
> (Specification Language/One). I'm in the process of making it
> a programming language as well. It is a synthesis of PL/I,
> APL, and LISP enhanced with logic programming. PL/I, APL,
> and LISP contain everything in terms of data types and
> operator found in all other third generation, programming
> languages. You cannot accuse it of conforming to an LCD.<g>
Lynn,
Do you have any estimate of when we might expect to see this SL/I
language in a form in which it could be used?
I have a little (very little) experience with PL/I, but only in a
maintenance mode. I've never developed a program from scratch in
it.
--
Dale Erwin
Salamanca 116
Pueblo Libre
Lima 21 PERU
Tel. +51(1)461-3084
Cel. +51(1)9743-6439
#918 From: "Lynn H. Maxson" <lmaxson@...>
Date: Mon Dec 22, 2003 3:20 am
Subject: Re: Re: Choosing kernel
Dale Erwin writes:
"...Do you have any estimate of when we might expect to see
this SL/I language in a form in which it could be used? ..."
It's usable now as a specification language, i.e. as a
descriptive language. Without an interpreter or compiler it is
not a programming language as well. If your question relates
to when to expect a usable executable version, I have no
definitive answer. I certainly would not say "real soon now"
until close to that time.
In the meantime it should suffice as a specification language
along the lines for which Yuri has expressed a preference. It
certainly contains the LCD minimum represented by C. It
should suffice to formally describe, i.e. specify, everything
associated with a kernel.
You have to begin somewhere. The most logical place would
be to begin with what you have, in this instance the OS/2
toolkit with its APIs and documentation. I took the one that
comes with the OS/2 PL/I compiler. After a number of
iterations to make sure I set all the necessary macro switches
I managed to compile a listing of all the control program and
PM APIs.
The resulting PL/I attribute and cross-reference listing for all
names in the source as well as the source itself provides a
good starting point. If you do it for the control program alone,
it should cover everything the kernel needs to support. You
can then address the use of homonyms (same name, different
referents), such as 'rc' for the return code, by providing a
different name for each different set of values, e.g. rc1, rc2,
etc. Once you eliminate the homonyms by assigning unique names,
you can turn your attention to synonyms (different names, same
referent), in this instance assigning a single name to each
referent. In this manner you eliminate any ambiguity (homonyms
or synonyms) in the names.
As the names themselves represent data types, either
elements or aggregates, you can within their declaration
associate the rules governing their use and behavior. Herein
you see one of the major differences between third
(imperative) and fourth (declarative) languages. In
imperative languages it remains the programmer's
responsibility to ensure the application of all rules. In
declarative languages that responsibility resides in the
software.
Let me offer you a simple example. Names in a programming
language have rules associated with their construction. The
name itself may vary from 1 to 33 characters and always
begin with an alphabetic upper or lower case character. Now
in C and PL/I variable length strings can range in length from
0 (the null string) up to some maximum length.
Any program doing syntax analysis has to ensure that any
name used follows certain rules. Just think what it would
mean to define the minimum as well as the maximum length of
a string within a variable definition, e.g. 'dcl name1 char
(1:33);'. It shifts the burden of checking this onto the
software instead of forcing the programmer to add, i.e. write,
the logic in every necessary instance in the program. You
could enhance this with 'dcl name1 char (1:33)
range(<a...z|A...Z><...>);', following standard BNF rules. In this
instance the software now has to ensure that the 'value' in
name1 conforms to the rule(s). In this manner you can
assure that the rule is applied and not forgotten, as is all
too frequently the case with programmers.
I need to cut this off here due to other commitments, but I
suggest that you extrapolate this out to a set of APIs,
variable (and constant) names and the rules governing their
behavior, specifically their inter-relationship. At some point
with minimal effort on the programmer's part the software
could automatically generate the logic of the microkernel, the
kernel, the PM, and quite frankly any and all applications
defined in a similar manner.
At the moment in third generation (imperative) languages the
programmer must implement the rules by what he writes in
the logic of the source as well as organize the overall logic. I am
simply suggesting that you take advantage of what
fourth-generation (declarative) languages have been doing
successfully for over thirty years now, e.g. SQL.
#919 From: "Tom Lee Mullins" <tomleem@...>
Date: Mon Dec 22, 2003 6:11 pm
Subject: Re: FreeDOS kernel to OSFree?
--- In osFree@yahoogroups.com, "Yuri Prokushev" <yuri_prokushev@m...>
wrote:
> --- In osFree@yahoogroups.com, "Tom Lee Mullins" <tomleem@a...> wrote:
> > Is it possible to port the FreeDOS -
> > http://www.freedos.org - kernel to
> > OSFree (use Open Watcom to compile it?)?
>
> Are you kidding? DOS does not contains LOT of services
> required for any OS.
I am new to this, what kind of services?
BigWarpGuy
#920 From: "criguada@libero.it" <criguada@...>
Date: Mon Dec 22, 2003 6:14 pm
Subject: Re: Digest Number 203
Hi Frank,
Frank Griffin wrote:
> You can't have it both ways. I wasn't talking about you personally (in
> fact, I wasn't even aware that you lurked this list), but unless you
??? What are you talking about?
Did you notice this thread started from a message of mine, or what?
> want to claim to know something about kernel programming I'm afraid you
> can't make "blanket" statements like "I don't like the traditional
> unices kernel design" or "Linux doesn't really deviate much from that"
??? "traditional unix design" is something that every student of an information
technology course at every university knows.
BTW, you're right about the "Linux doesn't deviate" argument. I'm not talking
from personal experience, but from several statements by people whose
opinions I consider valuable: people who have made several contributions
to OS/2 and the OS/2 community in a way that leaves everyone sure of
their technical skills.
I'm sorry but I can't say the same thing about you, though you may be the most
skilled person in this list.
> Since this is exactly what you said back in the FreeOS days, I have a
> sneaking suspicion that your knowledge of how Linux deviates from
> "traditional Unix" isn't based on any current source base. In fact
I think you're mixing up things. I was mostly there in "lurking mode" at the
time of FreeOS. I may have posted a few messages at the time of the litigation
that led to the split, but I was not among the most active writers. You're
probably thinking about the original founder of the FreeOS project, who
was a Brazilian guy (IIRC) whose name I don't remember (but I can find
it if you like).
> I'm sorry, but it is of extreme interest for this discussion.
This is of NO interest. The fact that the Linux kernel is positively
using recent Intel improvements doesn't shed any light on the difference
between the two kernels or their compared performance.
I'm still much more in favour of a tabular comparison of the available
kernels to settle the question.
> Serenity has no access to kernel source code that I've ever seen them
> post about. Nor have I ever read a post indicating that they are
> allowed to modify the kernel.
-- start of quote --
Ok, among other there is mentioned a smp fix, bsmp8603.zip, that someone at
Intel has tested on their free time for Serenity. So I would like to get that
fix if possible. The rest of the thread doesn't really say anything if anypne
outside Intel than has manage to pull the same stunt off, ie getting OS2 to
support HT.
-- end of quote --
The whole thread is available at the following address:
http://www.os2world.com/cgi-bin/forum/U ... 61&TID=429&
P=1#ID429
I hope you're following OS/2 and eCS with as much devotion as it seems you're
following Linux.
> I think this is the crux of your argument. You appear to care less that
> OS/2 survive than that it survive on your terms, i.e. not have to
> collaborate with any other software you see as a competitor.
> I'd just like to see it survive. Goldencode's and Innotek's support for
> Java have gone a very long way towards making me feel better about
> OS/2's survival than I used to.
This is obviously THE argument, and it would be for anybody who is concerned
about OS/2 survival, unless you want to have yet another Linux distribution with
some OS/2 flavor.
Having some kind of OS called OS/2 or osFree just to say "OS/2 is alive" is of
no interest to me. If I wanted some kind of OS/2ish Linux, it would be
better for me to switch directly to one of the excellent Linux
distributions out there.
And since I see (from the other messages in the list) that you want to base PM
on top of GTK or one of the other toolkits which in turn work on top of X, I
suspect that what you want is exactly this. Sorry, I'm not interested. If this
will be the choice of this team (I don't think so), so OK, go for it.
>>- I don't want to mess with kernel recompiles just to add devoce driver
support
>>to the OS.
>
> It hasn't been necessary to recompile the kernel to add device driver
> support for at least five years now, probably more like ten. Most Linux
> distributions include third-party binary drivers which are compiled by
> vendors and just included as modules.
Either you're very lucky, or you don't mess very much with Linux.
I had to mess with the RH9 kernel just a month ago trying to install on
an older system, and I see I'm not alone, judging from the messages that
have been posted recently.
With OS/2 you NEVER have to mess with the kernel. If a device is supported by
the system you just install the driver and you're done.
> You're probably thinking of the OS/2 port of the ALSA project, which
> requires both kernel and module support. That's because ALSA is not a
> driver, it's a kernel-based sound card driver architecture. The code in
> the kernel just manages the drivers; all the drivers are modules.
It doesn't matter whether it's a driver or not, as long as you need to
mess with the kernel.
>>- I want a "multiple roots" file system model, not the "single root" model
from
>>Linux. I don't know how deeply this is rooted in the kernel. Would be nice to
>>hear about this.
>
> This was also discussed in the FreeOS days. There is absolutely no
> difference between a "drives" (multiple-root) filesystem and a
> single-root filesystem whose root directory contains mount points
> (subdirectories) called "c-drive", "d-drive", etc. Just as under OS/2 a
> partition can be mounted as X:, the same partition can be mounted as a
> subdirectory *anywhere* on another filesystem.
>
> An example under OS/2 would be that you could install Java 1.1, Java1.3,
> and Java 1.4 on separate paritions, and mount (attach) them to your C:
> drive as C:java11, C:java13, and C:java14.
I think that the concept of multiple roots and single root is absolutely clear
to anybody on this list, at least those that have some experience on Linux or
other unices. It's not necessary to explain it.
And how you can state that "there is no difference" between having separate
partitions, each one with its own root, and having a single root where you mount
partitions under subdirectories, well it really beats me.
>>- I want OS/2 global consistency permeating the whole system, not Unix' mess.
I
>>don't know how deeply this is rooted in Linux' kernel, probably little or
>>nothing. Would be nice to hear about this.
>
> It's hard to address this without knowing what you mean by "Unix mess",
> but you're correct in saying that the Linux kernel has virtually nothing
> to do with any aspect of the user interface. Aside from GNOME and KDE,
> there are about a dozen other window managers for Linux, most of which
> have multiple "personalities" (even Warp 4 !).
Sure, I'm correct in saying that it's not related to the kernel, but
what you describe resembles more and more a Linux distro with the
capability to run OS/2 apps, not a new OS based on the Linux kernel.
And talking about the mess, obviously I'm not talking about window managers.
What do you say about the lack of a global system clipboard, like in OS/2 and
Win? Yes, I know that there is some software that tries to address the problem.
What do you say about the lack of global keyboard mappings?
What do you say about the lack of a system registry, instead of each
application trying to solve the problem with its own (often baroque)
config files?
etc. etc....
> Not correct. X is the underpinning for every Window Manager on Linux
> today, is under active development, and isn't going anywhere. It is the
> primary and sole source of video hardware support for Linux, sort of
> like Scitech for OS/2.
You're obviously ignoring projects aimed at replacing X. Just do a google search
for "xfree replacement" and you'll find a few, some quite advanced and some just
"wannabe".
> But nobody programs to the X API, which is considered very low-level.
See UDE for an example (Unix Desktop Environment).
Bye
Cris
Date: Mon Dec 22, 2003 6:14 pm
Subject: Re: Digest Number 203 criguada
Offline Offline
Send Email Send Email
Invite to Yahoo! 360° Invite to Yahoo! 360°
Hi Frank,
Frank Griffin wrote:
> You can't have it both ways. I wasn't talking about you personally (in
> fact, I wasn't even aware that you lurked this list), but unless you
??? What are you talking about?
Did you notice this thread started from a message of mine, or what?
> want to claim to know something about kernel programming I'm afraid you
> can't make "blanket" statements like "I don't like the traditional
> unices kernel design" or "Linux doesn't really deviate much from that"
??? "traditional unix design" is something that every student of an information
technology course at every university knows.
BTW, you're right about the "Linux doesn't deviate" argument. I'm not talking
from personal experience, but from several statements by people whose statements
I consider valuable: people that have made several contributions to OS/2 and the
OS/2 community in a way that lets everyone sure about their technical skills.
I'm sorry but I can't say the same thing about you, though you may be the most
skilled person in this list.
> Since this is exactly what you said back in the FreeOS days, I have a
> sneaking suspicion that your knowledge of how Linux deviates from
> "traditional Unix" isn't based on any current source base. In fact
I think you're mixing things up. I was mostly in "lurking mode" at the time of
FreeOS. I may have posted a few messages at the time of the dispute that led to
the split, but I was not among the most active writers. You're probably thinking
of the original founder of the FreeOS project, a Brazilian guy (IIRC) whose name
I don't remember (but I can find it if you like).
> I'm sorry, but it is of extreme interest for this discussion.
This is of NO interest. The fact that the Linux kernel makes good use of recent
Intel improvements sheds no light on the differences between the two kernels or
their comparative performance.
I would still much prefer a tabular comparison of the different available
kernels to settle the question.
> Serenity has no access to kernel source code that I've ever seen them
> post about. Nor have I ever read a post indicating that they are
> allowed to modify the kernel.
-- start of quote --
Ok, among others there is a mention of an SMP fix, bsmp8603.zip, that someone at
Intel tested in their free time for Serenity. So I would like to get that fix if
possible. The rest of the thread doesn't really say whether anyone outside Intel
has managed to pull the same stunt off, i.e. getting OS/2 to support HT.
-- end of quote --
The whole thread is available at the following address:
http://www.os2world.com/cgi-bin/forum/U ... 61&TID=429&
P=1#ID429
I hope you're following OS/2 and eCS with as much devotion as it seems you're
following Linux.
> I think this is the crux of your argument. You appear to care less that
> OS/2 survive than that it survive on your terms, i.e. not have to
> collaborate with any other software you see as a competitor.
> I'd just like to see it survive. Goldencode's and Innotek's support for
> Java have gone a very long way towards making me feel better about
> OS/2's survival than I used to.
This is obviously THE argument, and it would be for anybody who is concerned
about OS/2 survival, unless you want to have yet another Linux distribution with
some OS/2 flavor.
Having some kind of OS called OS/2 or osFree just to be able to say "OS/2 is
alive" is of no interest to me. If I wanted some kind of OS/2-ish Linux, I would
be better off switching directly to one of the excellent Linux distributions out
there.
And since I see (from the other messages on the list) that you want to base PM
on top of GTK or one of the other toolkits that in turn run on top of X, I
suspect that this is exactly what you want. Sorry, I'm not interested. If that
turns out to be the choice of this team (I don't think it will be), then OK, go
for it.
>>- I don't want to mess with kernel recompiles just to add device driver
>>support to the OS.
>
> It hasn't been necessary to recompile the kernel to add device driver
> support for at least five years now, probably more like ten. Most Linux
> distributions include third-party binary drivers which are compiled by
> vendors and just included as modules.
Either you're very lucky, or you don't mess much with Linux.
I had to mess with the RH9 kernel just a month ago while trying to install on an
older system, and judging from the messages that have been posted recently, I
see I'm not alone.
With OS/2 you NEVER have to mess with the kernel. If a device is supported by
the system you just install the driver and you're done.
> You're probably thinking of the OS/2 port of the ALSA project, which
> requires both kernel and module support. That's because ALSA is not a
> driver, it's a kernel-based sound card driver architecture. The code in
> the kernel just manages the drivers; all the drivers are modules.
It doesn't matter whether it's a driver or not, as long as you need to mess with
the kernel.
>>- I want a "multiple roots" file system model, not the "single root" model
>>from Linux. I don't know how deeply this is rooted in the kernel. Would be
>>nice to hear about this.
>
> This was also discussed in the FreeOS days. There is absolutely no
> difference between a "drives" (multiple-root) filesystem and a
> single-root filesystem whose root directory contains mount points
> (subdirectories) called "c-drive", "d-drive", etc. Just as under OS/2 a
> partition can be mounted as X:, the same partition can be mounted as a
> subdirectory *anywhere* on another filesystem.
>
> An example under OS/2 would be that you could install Java 1.1, Java 1.3,
> and Java 1.4 on separate partitions, and mount (attach) them to your C:
> drive as C:\java11, C:\java13, and C:\java14.
I think the concept of multiple roots versus a single root is absolutely clear
to anybody on this list, at least to those with some experience of Linux or
other unices. It's not necessary to explain it.
And how you can state that "there is no difference" between having separate
partitions, each one with its own root, and having a single root where you mount
partitions under subdirectories, well it really beats me.
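For what it's worth, the mechanical part of Frank's equivalence can be sketched. The snippet below is an illustration of mine, not from the thread: the drive table and mount-point names like "/c-drive" are hypothetical, but they show how every OS/2-style multi-root path has a single-root spelling once each drive is mounted as a subdirectory.

```python
import posixpath

# Hypothetical table mapping OS/2 drive letters to single-root mount points.
DRIVE_MOUNTS = {
    "C:": "/c-drive",
    "D:": "/d-drive",
}

def to_single_root(os2_path: str) -> str:
    """Translate an OS/2 path like 'C:\\java11\\bin' to '/c-drive/java11/bin'."""
    drive, rest = os2_path[:2].upper(), os2_path[2:]
    components = [c for c in rest.split("\\") if c]  # drop empty pieces
    return posixpath.join(DRIVE_MOUNTS[drive], *components)

print(to_single_root("C:\\java11"))      # -> /c-drive/java11
print(to_single_root("D:\\data\\docs"))  # -> /d-drive/data/docs
```

Of course this only shows that the two namings are interconvertible; whether drive letters versus mount points make a real difference to users and applications is exactly what is in dispute here.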
>>- I want OS/2 global consistency permeating the whole system, not Unix' mess.
>>I don't know how deeply this is rooted in Linux' kernel, probably little or
>>nothing. Would be nice to hear about this.
>
> It's hard to address this without knowing what you mean by "Unix mess",
> but you're correct in saying that the Linux kernel has virtually nothing
> to do with any aspect of the user interface. Aside from GNOME and KDE,
> there are about a dozen other window managers for Linux, most of which
> have multiple "personalities" (even Warp 4 !).
Sure, I'm correct in saying that it's not related to the kernel, but what you
describe sounds more and more like a Linux distro with the capability to run
OS/2 apps, not a new OS based on the Linux kernel.
And talking about the mess, obviously I'm not talking about window managers.
What do you say about the lack of a global system clipboard, like in OS/2 and
Windows? Yes, I know that there is some software that tries to address the
problem.
What do you say about the lack of global keyboard mappings?
What do you say about the lack of a system registry, instead of each application
trying to solve the problem with its own (often baroque) config files?
Etc., etc.
> Not correct. X is the underpinning for every Window Manager on Linux
> today, is under active development, and isn't going anywhere. It is the
> primary and sole source of video hardware support for Linux, sort of
> like Scitech for OS/2.
You're obviously ignoring projects aimed at replacing X. Just do a google search
for "xfree replacement" and you'll find a few, some quite advanced and some just
"wannabe".
> But nobody programs to the X API, which is considered very low-level.
See UDE for an example (Unix Desktop Environment).
Bye
Cris
Re: Part 31
#921 From: Alexander Smolyakov <small@...>
Date: Mon Dec 22, 2003 7:09 pm
Subject: Re: Re: FreeDOS kernel to OSFree? small_san_ru
Hello Tom,
Monday, December 22, 2003, 6:11:38 PM, you wrote:
TLM> --- In osFree@yahoogroups.com, "Yuri Prokushev" <yuri_prokushev@m...>
TLM> wrote:
>> --- In osFree@yahoogroups.com, "Tom Lee Mullins" <tomleem@a...> wrote:
>> > Is it possible to port the FreeDOS -
>> > http://www.freedos.org - kernel to
>> > OSFree (use Open Watcom to compile it?)?
>>
>> Are you kidding? DOS does not contain a LOT of the services
>> required for any OS.
TLM> I am new to this, what kind of services?
Process management is the main one (multi-threading, process isolation, etc.).
In a multiuser OS, user management (security) is another.
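As a minimal illustration of one of those services, the sketch below (mine, not from the thread) demonstrates process isolation: a child process gets its own copy of memory, so its writes never leak back into the parent, a guarantee a single-address-space DOS never provided.

```python
from multiprocessing import Process, Queue

counter = 42  # the parent's global

def child(q: Queue) -> None:
    global counter
    counter = 0      # clobbers only the child's private copy
    q.put(counter)   # report what the child sees

if __name__ == "__main__":
    q = Queue()
    p = Process(target=child, args=(q,))
    p.start()
    p.join()
    print(q.get())   # 0  -- the child saw its own modified copy
    print(counter)   # 42 -- the parent's copy is untouched: isolation
```

The same assignment done in a thread (or in a DOS TSR) would clobber the one shared copy; separate address spaces are what make the parent's value survive.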
--
Best regards,
Alexander mailto:small@...
Re: Part 31
#922 From: "criguada@libero.it" <criguada@...>
Date: Mon Dec 22, 2003 7:25 pm
Subject: HyperThreading criguada
Regarding my previous comments about hyperthreading, here is what a few users
(in the OS2World forum I mentioned previously) are saying:
---
devnul:
Whether it works (using an SMP kernel) or not depends heavily on how
Hyperthreading is activated in the BIOS of the motherboard. Some might work,
many will not.
As Hyperthreading only brings a boost with not-so-well-written (or threaded)
apps, OS/2 kernel developers believe that it won't bring significant benefits
when used together with OS/2.
---
Sebadoh:
I think the main thing that would hurt such a competition is the goal; HT can
actually cause the system to run slower than the same system without HT.
Why would anyone want to get this working if it is only going to hurt
performance, or at best not show much of an improvement at all?
---
Kim:
Regarding performance and HT, do you have any suggestions for additional info,
i.e. links to articles or sources? Would be nice to read about this issue. I
can't say that I have gained any speed on my system using it. Well, ok, the
Windows system info says 2 CPUs... Wow.....
---
Sebadoh:
Check out benchmarks on hyperthreaded systems: most end up with little or no
gains, especially since Microsoft's multiprocessor support is very lacking. The
few apps which show any improvement, mostly Adobe applications, only show a very
small margin of improvement.
---
Bye
Cris
Re: Part 31
#923 Re: [osFree] Digest Number 203
Lynn H. Maxson
Dec 22 8:39 AM
Cris,
Frank Griffin made it clear early on in the freeOS discussions that
he favored a layered approach: an OS/2 client on a Linux host. He
believed it would take less time and effort to do so. Taking less
time and effort, he felt, was necessary to ensure that a viable,
open-source OS/2 future existed beyond the vagaries of IBM and
Serenity Systems. In short, he felt it offered the best and
shortest path to success.
Beyond that Frank has made serious technical contributions on
this and other mailing lists. Given that he has a relatively low
threshold on engaging in circular, repetitive, and endless
discussions he probably feels close to ceasing his participation
in this one.
In freeOS I differed with him, stating my belief that a
microkernel approach, a two-layer approach, would require
less effort and time than a three-layer approach: layering an
OS/2 client on a Linux host layered on the Linux microkernel
(one does exist). I look at time and effort in terms of initial
development and then ongoing maintenance, believing that in
either case less time and effort occurs with the two-layer,
microkernel approach.
This results in a conflict of belief systems, based on facts
derived from experience but not on facts associated with
having done both approaches and then comparing them. I
simply looked at similar efforts associated with VPC, ODIN,
WINE, Lindows, and other such projects. To me they offered a
reasonable view of the time and effort required as well as
concerns about sustainable success.
Here on osFree I don't care to see this division recurring
with the debilitating effects it had on freeOS. We do have a
common area in the work necessary, that of documenting the
OS/2 CP and PM APIs into a set of detailed specifications
covering the rules governing their behavior. With these in
place we can begin to differentiate the time and effort
between a two- and three-layered approach in a more
factual manner. At that time we will see whose belief system
comes closest to the facts.
I happen to value Frank's input, yours, and others on this list.
As long as we can differ in some areas while cooperating in
others until we have a better sense of the facts as reasonable
people we can use the facts to increase our cooperation and
reduce our differences. We can do all this in the process of
creating an OS/2 replacement through documentation
(specification), design, and construction in that order.
I suggest that we set aside for the moment those issues which
divide us, whether a two- or three-layered approach, to focus
on what doesn't, the need to specify in detail an OS/2
replacement. I further suggest that after doing that and
entering the design stage we will more clearly understand the
consequences of the two choices to the point where we will
agree on one.
In doing this we will waste less of our valuable human
resources, having more available to dedicate for the
necessary time and effort required. Having that minimal
amount of resources dedicated determines the ultimate
success or failure of our effort.
Re: Part 31
#924 Re: [osFree] Re: FreeDOS kernel to OSFree?
Yuri Prokushev
Dec 23 3:48 AM
** Reply to message from Alexander Smolyakov <small@...> on Mon, 22 Dec
2003 19:09:38 +0300
Hello Tom,
>> --- In osFree@yahoogroups.com, "Tom Lee Mullins" <tomleem@a...> wrote:
>> > Is it possible to port the FreeDOS -
>> > http://www.freedos.org - kernel to
>> > OSFree (use Open Watcom to compile it?)?
>> Are you kidding? DOS does not contain a LOT of the services
>> required for any OS.
TLM> I am new to this, what kind of services?
If you mean just porting the FreeDOS kernel as an MDOS replacement (not as the
osFree kernel), then yes, it is possible. At least an interrupt handler will be
required, but in general the whole VDM will be required.
wbr,
Yuri