#905 From: menchie@...
Date: Fri Dec 19, 2003 3:22 am
Subject: A Layman's view... (thedugan)
...and I'm as green as you can get (well, close to it)....
-----------------------------------------------------------------------
Given....
It's an OS/2 clone, so you need to support OS/2 binaries/programs.
In turn, that means supporting DOS (I believe)...
Is supporting Win32 worth the effort? In my opinion, yes, because
you are going to GREATLY increase the number of applications you
can run as well as the number of people that will want to use it.
Obviously, this multiplies your problems - your call.
If you're doing what will essentially be a Linux distro with a
compatibility layer, then WINE is made to order.
The problem I had running Linux (in the days of Red Hat 5.1) was the
installation of programs, along with a *lack* of them. Not being
a programmer, I hated reading "Compile under....". If I were to
run this, I'd like to see the installation of new programs consist
largely of "unzip in an appropriate directory and run setup"
<shrug>
You've already got people complaining about how complicated OS/2 is
now; *further* complications from an operator's point of view seem
counterproductive.
ReactOS sounds good (to me, if not others) because it's based on
NT, not Win98/ME - I can understand having concerns about stability
even then.
From my point of view, it seems that Windows would have all the
driver support that Linux would have and more. EVERYONE supports
Windows.
I'd check on the "driver compatibility" problem in regard to
ReactOS - if it's just a kernel replacement, and you can just dump
the Windows binaries into it, then I don't see a problem. You're
going to be redoing it to make it more OS/2-like anyway....
Hey, I said I was a layman.....
------------------------------------------------------------------------
From some member of the Dugan family
Big Bang = "Let There Be Light"
#906 From: "Tom Lee Mullins" <tomleem@...>
Date: Fri Dec 19, 2003 4:29 pm
Subject: FreeDOS kernel to OSFree? (bigwarpguy)
Is it possible to port the FreeDOS -
http://www.freedos.org - kernel to
OSFree (use Open Watcom to compile it?)?
BigWarpGuy
#907 From: Frank Griffin <fgriffin@...>
Date: Fri Dec 19, 2003 6:27 pm
Subject: Re: Digest Number 203 (ftg314159)
>No problem, at least if we don't start another endless war-thread.
>
I'm not here for that. I just don't want to see people waste time
generating the same arguments that have been generated and answered and
countered before, unless (having seen them) they really want to.
>The "I don't know anything about kernel programming" argument doesn't mean "I
>don't know anything about kernel architectures", at least for me. And it should
>be worded more like "I don't have experience in kernel programming".
>
>Put simply, I don't like the traditional unices kernel design, and Linux
>doesn't really deviate much from that. It seems that most of the people here
>agree about this, so why do you want me to state this once again?
>
Well, there is certainly a distinction in English between "I don't know
anything about kernel programming" and "I don't have experience in
kernel programming", but it isn't a very large one. You've either
programmed or debugged kernel code or you haven't. I have. For about
30 years.
You can't have it both ways. I wasn't talking about you personally (in
fact, I wasn't even aware that you lurked this list), but unless you
want to claim to know something about kernel programming I'm afraid you
can't make "blanket" statements like "I don't like the traditional
unices kernel design" or "Linux doesn't really deviate much from that"
and expect people to accept your opinions without some justification.
Since this is exactly what you said back in the FreeOS days, I have a
sneaking suspicion that your knowledge of how Linux deviates from
"traditional Unix" isn't based on any current source base. In fact
(please prove me wrong), I suspect that you read a Unix kernel book
describing an AT&T kernel from the 1970s, and maybe updated your Linux
knowledge with materials from the 1990s, and that you in fact have no
idea at all how the current Linux kernel compares to the current OS/2
kernel, either positively or negatively.
>On the other hand, if you think (and I hope this means that you DO have
>experience in kernel programming) that changing Linux' kernel scheduler and
>threading model would be a matter of a few days of hacking away from a single
>person, then please announce it on this list and START WORKING. The worst thing
>that can happen is that some people will join and help you, and we all will
>have to say "sorry, YOU were right".
>Then, we all will be able to start adding a layer of compatibility over the
>Linux kernel for OS/2 APIs.
>
Well, "START WORKING" on kernel changes can't happen until somebody
actually says what it is about the existing Linux kernel that needs to
be changed for OS/2 (or osFree). As far as I'm concerned, if we
implement the OS/2 CP API on top of glibc and Linux system calls,
everything else that uses the CP API just works. I don't really see
anything that would have to be changed, at least on a first pass.
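To make that concrete, here is a minimal sketch of the idea (my own
illustration only, not project code): one trivial CP call, DosSleep,
re-implemented over a Linux system call. The OS/2 prototype is the
documented one; the mapping onto nanosleep() is an assumption about how
such a compatibility layer might behave.

/* Hypothetical sketch: one OS/2 Control Program API re-implemented
 * over glibc/Linux. Type names follow the OS/2 toolkit conventions. */
#include <time.h>
#include <errno.h>

typedef unsigned long APIRET;
typedef unsigned long ULONG;

#define NO_ERROR 0UL

/* DosSleep(msec): suspend the current thread for msec milliseconds. */
APIRET DosSleep(ULONG msec)
{
    struct timespec ts = {
        .tv_sec  = msec / 1000,
        .tv_nsec = (long)(msec % 1000) * 1000000L
    };
    /* Restart if a signal interrupts the sleep, so the full interval
     * elapses as it would under OS/2. */
    while (nanosleep(&ts, &ts) == -1 && errno == EINTR)
        ;
    return NO_ERROR;
}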
>This is exactly what I tried to do. I knew I didn't like Linux nor Windows
>(hence ReactOS), and tried to explain clearly _WHY_ I didn't like them, and
>tried to suggest an already existing design that I think fits the original OS/2
>design better.
>
Sorry, but until you give a logical, coherent reason, verifiable by
others, why "you don't like Linux", I'm not going to admit that as a
serious argument against it. What I remember of what you said in the
past was mostly stuff that had maybe been true years before or that
involved minor bugs or design issues in components many levels above the
kernel itself. I don't want to misquote you, and my memory isn't what
it probably never was, so I'll leave this as a topic for further
discussion. Please just check to make sure that your objections are
still valid against present-day Linux systems.
>Obviously, as stated by others, the device drivers problem arises, and that
>leaves us only with Linux and ReactOS (if we DO want to avoid the problem).
>Linux is the better tested and stable of the two, so again: if you think Linux'
>kernel weaknesses (compared to OS/2) can be easily filled, please propose
>something and start a SIG about it.
>
Again, sorry, but I don't think that CURRENT-DAY Linux design has any
weaknesses relative to the current OS/2 kernel, for the simple reason
that IBM stopped active kernel development several years ago (and yes, I
know that you can get new bugfix kernels from testcase), and Linux has
continued on to exploit processor improvements.
We can argue about this all we want to, but I don't know enough about
the OS/2 kernel to render an authoritative opinion about its merits
relative to Linux. Nor, I strongly suspect, do you. I think the
definitive opinion would probably come from Scott Garfinkel or somebody
else within IBM. But that would only be the OS/2 side, unless they were
knowledgeable in Linux.
The classic argument about Unix versus OS/2 Process Management was that
Unix didn't have threads, just "heavyweight" processes. This hasn't
been true for years; Linux has several thread models, some in the kernel
and some add-ons, and there have been major advances in thread
management in 2.6.
It occurs to me that an easy way to settle the question would be to take
the Sun JDK on Linux and the Innotek port of the Sun JDK on OS/2, which
should be pretty much the same code, and run a threading benchmark in
each case. The differences would pretty much boil down to the
underlying OS.
Or, you could run the IBM 1.3 JDK on Linux versus the IBM 1.3 JDK on
OS/2. At one time, the OS/2 JDK ran rings around all of the other JDKs
on Linux and Windows. If this has changed in the Linux case, it would
have to be due to underlying improvements in the OS thread management.
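In the same spirit, even without a JDK, one can get a crude number for
thread-management overhead at the C level. The sketch below is my own
illustration (not something anyone proposed here): it times raw pthread
create/join cycles on Linux; the OS/2 side would need the equivalent
loop over DosCreateThread/DosWaitThread.

/* Crude thread create/join micro-benchmark (Linux/pthreads).
 * Compile: gcc -std=c99 -O2 bench.c -lpthread
 * (older glibc may also need -lrt for clock_gettime) */
#include <pthread.h>
#include <stdio.h>
#include <time.h>

#define NTHREADS 10000

static void *worker(void *arg)
{
    return arg;  /* do nothing; we are timing create/join overhead */
}

int main(void)
{
    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);

    for (int i = 0; i < NTHREADS; i++) {
        pthread_t t;
        if (pthread_create(&t, NULL, worker, NULL) != 0) {
            perror("pthread_create");
            return 1;
        }
        pthread_join(t, NULL);
    }

    clock_gettime(CLOCK_MONOTONIC, &t1);
    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%d create/join cycles in %.3f s (%.1f us each)\n",
           NTHREADS, secs, secs / NTHREADS * 1e6);
    return 0;
}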
>>In terms of performance, the new 2.6 kernel supports the Intel
>>hyper-threading technology (up to 50% performance improvement on a
>>uniprocessor system), which OS/2 never will; I seriously doubt that you
>>
>>
>
>This is to be verified. I don't know if OS/2 "never will" (I already heard
>about some support coming from Serenity). And I'm very doubtful about the 50%
>improvement... all the claims I've heard about (apart from Intel's) report much
>lower improvements (10% or so).
>OS/2's kernel still is very competitive, despite the fact that it is not
>following very closely hardware improvements.
>But that's not of interest for this discussion.
>
I'm sorry, but it is of extreme interest for this discussion.
Serenity has no access to kernel source code that I've ever seen them
post about. Nor have I ever read a post indicating that they are
allowed to modify the kernel.
And while my 50% is the high figure, your 10% is the low figure. Both
are verifiable. Both were quoted by an article (I forget where) that
actually ran tests on the 2.6 Linux kernel versus the 2.4 kernel and
found both 10% and 50% differences depending on what was being tested. But
even assuming only the 10% figure, that's a lot of performance to lose,
especially since hyper-threading is primarily aimed at multitasking, and
that's what OS/2 takes pride in.
>It's not only a question of exterior perception. It's the overall feeling that
>must be preserved.
>
>So (let me state it again, adding some more things said by other people here):
>
I think this is the crux of your argument. You appear to care less that
OS/2 survive than that it survive on your terms, i.e. not have to
collaborate with any other software you see as a competitor.
I'd just like to see it survive. Goldencode's and Innotek's support for
Java have gone a very long way towards making me feel better about
OS/2's survival than I used to.
>
>- I want OS/2 multitasking/multithreading performance (IOW the OS/2 scheduler
>and threading model, or a similar performing one).
>
Dealt with above.
>- I don't want to mess with kernel recompiles just to add device driver support
>to the OS.
>
It hasn't been necessary to recompile the kernel to add device driver
support for at least five years now, probably more like ten. Most Linux
distributions include third-party binary drivers which are compiled by
vendors and just included as modules.
You're probably thinking of the OS/2 port of the ALSA project, which
requires both kernel and module support. That's because ALSA is not a
driver; it's a kernel-based sound-card driver architecture. The code in
the kernel just manages the drivers; all the drivers are modules.
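For anyone who hasn't seen one, a loadable driver really is just a
compiled object the running kernel pulls in. The skeleton below is
illustrative only; the build commands in the comment are roughly the
usual out-of-tree recipe for 2.6-era kernels, and the whole thing loads
and unloads with no kernel recompile.

/* hello_mod.c - minimal loadable kernel module skeleton.
 * Out-of-tree build (no kernel recompile), roughly:
 *   echo 'obj-m += hello_mod.o' > Makefile
 *   make -C /lib/modules/$(uname -r)/build M=$PWD modules
 *   insmod hello_mod.ko  ...  rmmod hello_mod
 */
#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init hello_init(void)
{
    printk(KERN_INFO "hello_mod: loaded\n");
    return 0;                 /* nonzero would abort the load */
}

static void __exit hello_exit(void)
{
    printk(KERN_INFO "hello_mod: unloaded\n");
}

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");        /* avoids the "tainted kernel" flag */
MODULE_DESCRIPTION("Minimal module skeleton for illustration");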
>- I want a "multiple roots" file system model, not the "single root" model from
>Linux. I don't know how deeply this is rooted in the kernel. Would be nice to
>hear about this.
>
This was also discussed in the FreeOS days. There is absolutely no
difference between a "drives" (multiple-root) filesystem and a
single-root filesystem whose root directory contains mount points
(subdirectories) called "c-drive", "d-drive", etc. Just as under OS/2 a
partition can be mounted as X:, the same partition can be mounted as a
subdirectory *anywhere* on another filesystem.
An example under OS/2 would be that you could install Java 1.1, Java 1.3,
and Java 1.4 on separate partitions, and mount (attach) them to your C:
drive as C:\java11, C:\java13, and C:\java14.
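The Linux half of that equivalence is a single mount() call (or the
mount command); in this sketch the device name and mount point are made
up purely for illustration.

/* Illustrative only: attach a partition at an arbitrary directory,
 * the Linux equivalent of OS/2 mounting a partition as X:.
 * Requires root; device and directory names are invented. */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
    /* Equivalent of: mount -t ext3 /dev/hda7 /c-drive/java13 */
    if (mount("/dev/hda7", "/c-drive/java13", "ext3", 0, NULL) == -1) {
        perror("mount");
        return 1;
    }
    puts("partition now visible as /c-drive/java13");
    return 0;
}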
>- I want OS/2 global consistency permeating the whole system, not Unix' mess. I
>don't know how deeply this is rooted in Linux' kernel, probably little or
>nothing. Would be nice to hear about this.
>
It's hard to address this without knowing what you mean by "Unix mess",
but you're correct in saying that the Linux kernel has virtually nothing
to do with any aspect of the user interface. Aside from GNOME and KDE,
there are about a dozen other window managers for Linux, most of which
have multiple "personalities" (even Warp 4!).
>- X is out of the question. We need PM on top of this. X is already a problem
>for Unix users (which are in fact trying to replace it).
>
Not correct. X is the underpinning for every Window Manager on Linux
today, is under active development, and isn't going anywhere. It is the
sole source of video hardware support for Linux, sort of
like SciTech for OS/2.
But nobody programs to the X API, which is considered very low-level.
All GUI programming is done via "toolkits" like GTK (GNOME), Qt, or
Motif/Lesstif, which look more like the PM APIs that any OS/2 application
programmer would be used to. Unless you wanted to write the equivalent
of IOPL PM video drivers, you'd never see X, just as, if you program in
VisualAge C and use the library functions, you never see the actual OS/2
system calls those library functions make.
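As a small illustration of that toolkit level (my own sketch against the
GTK 2 C API, nothing from the original post), note that no X call
appears anywhere in the source:

/* Minimal GTK 2 program: toolkit-level GUI code, no raw X calls.
 * Typical compile: gcc win.c $(pkg-config --cflags --libs gtk+-2.0) */
#include <gtk/gtk.h>

int main(int argc, char *argv[])
{
    gtk_init(&argc, &argv);   /* the toolkit handles the X connection */

    GtkWidget *window = gtk_window_new(GTK_WINDOW_TOPLEVEL);
    gtk_window_set_title(GTK_WINDOW(window), "Hello from a toolkit");

    /* Quit the main loop when the window is closed. */
    g_signal_connect(window, "destroy", G_CALLBACK(gtk_main_quit), NULL);

    gtk_widget_show(window);
    gtk_main();               /* event loop; X is entirely hidden */
    return 0;
}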
#908 From: Frank Griffin <fgriffin@...>
Date: Fri Dec 19, 2003 7:19 pm
Subject: OS Stability/Reliability (ftg314159)
After finishing my previous comment, I went on to read this week's IBM developerWorks Technology issue (number 50), and found this article:
http://www.ibm.com/developerworks/libra ... ca=dnt-450
The article is a detailed account of stress tests run on the Linux kernel by the IBM Linux Technology Center. The test scenarios they used almost make me cringe at my relatively tame suggestion of running JDK comparisons. These tests ran for periods of 30, 60, and 90 days, the point being to test Linux performance and stability over long periods of time.
Whether or not you like Linux, the article is worth reading (it's fairly short) because it highlights one of the points I was trying to make about the massive amount of QA that would have to go into any new kernel in order to locate and resolve the types of bugs that can't be caught by a design tool, no matter how good (sorry, Lynn). You can say "we'll write our own (micro)kernel and test it like this", but you'd have a hard time convincing me that all of the microkernel projects suggested in these discussions over the years are doing the same.
The tests didn't just stress microkernel-type functions like process control and memory management, but kernel-level components like filesystems and networking. Here's an excerpt from the article sidebar: (apologies if this shows up in HTML, it's a cut-and-paste)
********************************************
Test results at a glance
The following summary is based on the results of the tests and observations on the duration of the runs:
* The Linux kernel and other core OS components, including libraries, device drivers, file systems, networking, IPC, and memory management, operated consistently and completed all the expected durations of runs with zero critical system failures.
* Every run generated a high success rate (over 95%), with a very small number of expected intermittent failures that were the result of the concurrent executions of tests that are designed to overload resources.
* Linux system performance was not degraded during the long duration of the run.
* The Linux kernel properly scaled to use hardware resources (CPU, memory, disk) on SMP systems.
* The Linux system handled continuous full CPU load (over 99 percent) and high memory stress well.
* The Linux system handled overloaded circumstances correctly.
The tests demonstrate that the Linux kernel and other core OS components are reliable and stable over 30, 60, and 90 days, and can provide a robust, enterprise-level environment for customers over long periods of time.
********************************************
I personally would find it hard to believe that even OS/2 4.52 (or anything else for that matter) could turn in better results than these. We could do a lot worse than building on top of something like this. This is probably the same scale of testing that the OS/2 kernel got when it was being actively developed.
#909 From: "Michal Necasek" <michaln@...>
Date: Fri Dec 19, 2003 8:31 pm
Subject: Re: Digest Number 203 (michalnec)
On Fri, 19 Dec 2003 10:27:49 -0500, Frank Griffin wrote:
>Well, "START WORKING" on kernel changes can't happen until somebody
>actually says what it is about the existing Linux kernel that needs to
>be changed for OS/2 (or osFree).
>
I wonder how much experience with Open Source development you have.
In my experience, a very small team (or just one person) starts working
on a project, the way _they_ like it, and if they produce something
useful, others will follow. I've never heard of an OS project designed
by a committee.
>Again, sorry, but I don't think that CURRENT-DAY Linux design has any
>weaknesses relative to the current OS/2 kernel, for the simple reason
>that IBM stopped active kernel development several years ago (and yes, I
>know that you can get new bugfix kernels from testcase), and Linux has
>continued on to exploit processor improvements.
>
I have not seen the 2.6 kernel, but in 2.4 releases Linux threading
is _nowhere near_ as good as OS/2's. Even if the kernel was up to par,
the user level implementation in glibc etc. is far from perfect.
>>- I don't want to mess with kernel recompiles just to add device driver
>>support to the OS.
>>
>It hasn't been necessary to recompile the kernel to add device driver
>support for at least five years now, probably more like ten. Most Linux
>distributions include third-party binary drivers which are compiled by
>vendors and just included as modules.
>
I don't know if you're deliberately lying or just don't know what
you're talking about. True, most users don't need to recompile the kernel.
But you forgot a little catch - the loadable module has to be built for
your version of the kernel. The kernel modules are not very portable
between kernel versions.
What's much worse, certain Linux developers (incl. Torvalds) are very
strongly against any binary-only drivers. You must know about "tainted"
mode and attempts to restrict certain kernel APIs to GPL modules only.
Sure, companies provide binary drivers - but what about cases like
nVidia, whose drivers are known to cause all sorts of problems and no
Linux distro wants to support them?
So no, the Linux kernel doesn't need to be recompiled. But it's not
even close to the stable device driver model of OS/2 where you can
often happily use drivers that are 5 or 10 years old.
>>- X is out of the question. We need PM on top of this. X is already a problem
>>for Unix users (which are in fact trying to replace it).
>>
>Not correct. X is the underpinning for every Window Manager on Linux
>today, is under active development, and isn't going anywhere.
>
People are very unhappy with XFree86. That's why there are efforts
to fork and/or replace it. That is not so much because XF86 is bad as
because its maintainers aren't doing a very good job.
>Just as if you program in
>Visual Age C and use the library functions you never see the actual OS/2
>system calls those library functions make.
>
Are you suggesting to use X as an underlying architecture for
re-implementation of PM?
Anyway, here are a few points against Linux:
- Licensing (not everyone likes GPL)
- IP issues (SCO does have _some_ points, though not many)
- Zero likelihood that Linux kernel maintainers will care about osFree
Just food for thought.
Michal
#910 From: Frank Griffin <fgriffin@...>
Date: Sat Dec 20, 2003 3:37 am
Subject: Re: Digest Number 204 (ftg314159)
>From: "Michal Necasek" <michaln@prodigy.net>
>Subject: Re: Digest Number 203
>On Fri, 19 Dec 2003 10:27:49 -0500, Frank Griffin wrote:
>
>>Well, "START WORKING" on kernel changes can't happen until somebody actually says what it is about the existing Linux kernel that needs to be changed for OS/2 (or osFree).
>>
> I wonder how much experience with Open Source development you have.
>
Since about 1990.
>In my experience, a very small team (or just one person) starts working on a project, the way _they_ like it, and if they produce something useful, others will follow. I've never heard of an OS project designed by a committee.
>
Well, they only start working on it if they agree on what ought to be done. You missed the point of the comment, which was that since I didn't personally believe that anything was wrong with the Linux dispatching model, it was a little unrealistic to expect me to busy myself changing it.
> I have not seen the 2.6 kernel, but in 2.4 releases Linux threading is _nowhere near_ as good as OS/2's. Even if the kernel was up to par, the user level implementation in glibc etc. is far from perfect.
>
Perhaps you would explain exactly how you determined this? I'm not being sarcastic, as I recognize your name and have great respect for your contributions to OS/2. However, I'd be interested to know whether this is just a subjective opinion, or one based on code reading or a benchmark.
BTW, the threading code was totally rewritten for 2.6 to use NPTL (the Native POSIX Thread Library).
The article I mentioned before was part of the September 26 DeveloperWorks newsletter, and can be found at:
http://www.ibm.com/developerworks/libra ... ca=dnt-438
It describes the dispatcher, threading, and memory management improvements for 2.6.
> I don't know if you're deliberately lying or just don't know what you're talking about. True, most users don't need to recompile the kernel. But you forgot a little catch - the loadable module has to be built for your version of the kernel. The kernel modules are not very portable between kernel versions. What's much worse, certain Linux developers (incl. Torvalds) are very strongly against any binary-only drivers. You must know about "tainted" mode and attempts to restrict certain kernel APIs to GPL modules only. Sure, companies provide binary drivers - but what about cases like nVidia, whose drivers are known to cause all sorts of problems and no Linux distro wants to support them?
>So no, the Linux kernel doesn't need to be recompiled. But it's not even close to the stable device driver model of OS/2 where you can often happily use drivers that are 5 or 10 years old.
>
Well, there are a couple of tripping points here.
No one, certainly not I, ever suggested that any part of osFree needing to be in the kernel should be packaged as a third-party product to be installed on the user's choice of Linux kernel. We wouldn't do that if we based osFree on someone else's microkernel, and we certainly wouldn't do it for Linux. We would take a kernel version, add to it whatever we needed to (compiled for that version), test it, and distribute it. So while it is true that a module needs to be compiled for the kernel version with which it is to be used, *we* would be doing the compiling, not the user. The osFree kernel would be a captured Linux kernel plus code of ours compiled for that kernel, distributed as a unit.
I assume that from time to time we would grab a new version of Linux, in which case we would recompile our own stuff and distribute a new osFree version, with no conflicts or overlap.
Yes, I'm familiar with "tainted" mode. But it has no applicability to osFree, since this is an open-source project and we wouldn't be providing binary-only drivers. However, just to make the point, several distros include binary or partially-binary drivers (check out Mandrake, which includes the nVidia drivers BTW).
>People are very unhappy with XFree86. That's why there are efforts to fork and/or replace it. That is not so much because XF86 is bad as because its maintainers aren't doing a very good job.
>
That argument needs details to be evaluated. I've been using XFree86 in Linux for several years on all sorts of video hardware, and I've never had an X-related problem on hardware X claimed to support, so I suppose that it depends on who's using it and what they are using it with.
> Are you suggesting to use X as an underlying architecture for re-implementation of PM?
>
That's certainly one possibility, and if you want to take advantage of Linux's video card support, it is certain that X will be involved somewhere, since that's what Linux uses. The re-implementation of PM would probably be better off using something above X, such as Gtk or Qt, which are closer to the parts of the PM API used by 99% of application programmers.
> Anyway here are a few points against Linux:
>- Licensing (not everyone likes GPL)
>
AFAIK, while anything added to the kernel must be released under GPL, Linux allows (and even encourages) you to release it under other licenses concurrently. And that only applies if you want your code added to the official distribution. As long as you are writing modules which you add to the kernel package yourself, I believe you can use whatever license you want.
> - IP issues (SCO does have _some_ points, though not many)
>
You don't think that by cloning OS/2 we're infringing anybody's IP? At least in the case of SCO, there are other people with deep pockets willing to fight them for us. I'd worry more about Microsoft deciding we'd infringed the IP they contributed to OS/2 if osFree ever really takes off.
> - Zero likelihood that Linux kernel maintainers will care about osFree
>
Who cares? We're not asking them to do anything for us.
#911 From: "Lynn H. Maxson" <lmaxson@...>
Date: Sat Dec 20, 2003 4:39 am
Subject: Re: OS Stability/Reliability (lynnmaxson)
Frank Griffin writes:
"...Whether or not you like Linux, the article is worth reading
(it's fairly short) because it highlights one of the points I was
trying to make about the massive amount of QA that would
have to go into any new kernel in order to locate and resolve
the types of bugs that can't be caught by a design tool, no
matter how good (sorry, Lynn). ..."
Well, it's good to see the respect that contributors to this list
have for each other despite strong differences of opinion. I
did note from the excerpt on Linux stress results that it did fail
in certain overload conditions. To me that indicates a
weakness that needs removing. It should be impossible to
create conditions from which an OS cannot recover without
failing.
I have no problem considering the use of the Linux
microkernel as the base for an OS/2 personality instead of
layering that personality on top of the Linux kernel. I see no
reason why we cannot have a microkernel at least equal to
any other and possibly better. Such a form would serve
Linux, Windows, and OS/2 as well as allowing their concurrent
execution.
As I indicated earlier I'm taking a different tack by
concentrating on the tool, the means to write the microkernel,
the kernel, and all supporting applications. It's based on using
a 4GL, i.e. logic programming. With few exceptions from
groups like Maude and Tunes, based on an OO form of LISP,
that removes it from those using C, C++, or C#.
I'm not here to hammer Linux or UNIX or anything else. I won't
even argue against a kernel-layered approach instead of
microkernel for those who want to engage in the extra effort
necessary initially and ongoing. Whatever the choice I want
to make it possible for the individual to maintain any open
source package, customized in whatever way he chooses,
without outside assistance.
I still think the earlier suggestion to somehow tabularly
compare different microkernel implementations is a good one. I
would think that we could decompose it into the set of
microkernel functions and within each set list each API along
with a description including the rules. That would give us a
means to understand the similarities and differences among
them. Our evaluation of those similarities and differences
could lead to selecting one or designing a better one (from
our perspective) of our own.
#912 From: "Yuri Prokushev" <yuri_prokushev@mail.ru>
Date: Sat Dec 20, 2003 11:28 am
Subject: Re: FreeDOS kernel to OSFree? (prokushev)
--- In osFree@yahoogroups.com, "Tom Lee Mullins" <tomleem@a...> wrote:
> Is it possible to port the FreeDOS -
> http://www.freedos.org - kernel to
> OSFree (use Open Watcom to compile it?)?
Are you kidding? DOS lacks a LOT of the services required for any OS.
#913 From: "Yuri Prokushev" <yuri_prokushev@mail.ru>
Date: Sat Dec 20, 2003 12:07 pm
Subject: Re: Choosing kernel (prokushev)
--- In osFree@yahoogroups.com, "Lynn H. Maxson" <lmaxson@p...> wrote:
Lynn,
> in my IT world requirements precede specifications which
> precede analysis which precedes design which precedes
> construction. Requirements normally enter as an informal
> request which need translation into formal specifications.
Heh. I agree.
> To support the DOSCALLS api's as a requirement means listing
> them in detail in a set of specifications. That detail includes
> the api name, the set and order of parameters, and rules
> governing them on input and output. The parameters end up
> as variables and the rules regarding the values they can
> assume.
>
> From my perspective that means things like 'rc' for return
> code need to have an individual specification as well as name
> for all the varieties which exist with respect to returned
> values possible.
Also agree
> I came close to doing this once before and would be willing to
> carry it to completion to provide a detailed breakdown of
> kernel api's and rules governing their use. If someone works
> in parallel to provide a tabular comparison of microkernels as
> you suggested, then we can select a bottom (a microkernel)
> to support our top (the kernel api's). Then analysis will
> determine if we have a connection between the top and
> bottom. Then design will offer that connection in minimal
> form. We can then decompose that minimal form into
> different work units which people can then volunteer to
> construct.
Agree. Only one comment: DOSCALLS contains both kernel-dependent things
(like the file/semaphore/process/thread managers) and kernel-independent
things (like NLS, MSG files, etc.). So I propose creating a minimal
kernel-dependent layer (let's call it KRNL.DLL or something like
that), and we need to create a specification for such kernel APIs.
After this we can select a kernel using a tabular or other
representation. Anyway, to have something real, not only talk,
someone must start writing such a specification.
And I prefer a language-independent specification (IIRC Apple at
one time used such a language-independent API description).
> Note that you have two choices: a two-layered approach
> based on a microkernel and an OS personality, and a
> three-layered approach based on a microkernel, a host OS
> personality, and a client OS personality. In either instance we
> have a basic choice in the microkernel. After that it just
> depends on whether or not people want to master the Linux
> api's to host the OS/2 client ones. Personally I don't
> understand why a resource constrained effort would want to
> master an extra layer.
I prefer the 2-layered model (like the ReactOS subsystems) to the
3-layered model (like Odin or Wine).
As a general comment, I don't like using systems like Linux as the
kernel, if only because they require kernel recompilation in most
cases (I recompiled the Linux kernel around 10 times last week
just to support various combinations of network cards; after that
I hate Linux modules {which don't work in most cases} as much as
drivers integrated into the kernel).
wbr,
Yuri
Re: Part 31
#914 From: "Lynn H. Maxson" <lmaxson@...>
Date: Sat Dec 20, 2003 7:13 pm
Subject: Re: Re: Choosing kernel lynnmaxson
Yuri Prokushev writes:
"...And I prefer to have language independed (sic)
specification (IIRC one time Apple used such language
independed api description). ..."
Let's get the obvious out of the way: it's impossible to have a
language-independent specification, because you have to use a
language to write it. If you want it programming-language
independent, which I think you meant, then your desire for a
two-layered approach in implementing a kernel becomes a
three-layered one in describing that implementation: the informal
language of a requirement, the formal language of a
programming-language-independent specification, and the formal
language of programming-language-dependent source.
I have no knowledge of the specification language that Apple
used. I have some knowledge of the instruction specification
language that Intel uses in its Pentium reference manuals. In
fact Intel specifies each instruction in three programming
language forms: (1) actual or machine code (first generation),
(2) symbolic assembly (second generation), and (3) an HLL
(third generation). [As a side note, this 1:1:1 form means that
you can incorporate assembly language within an HLL through a
simple translation process.]
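A trivial example of that 1:1:1 correspondence, with the byte
encoding and mnemonic for ADD AL, imm8 as the Pentium manual
gives them and the HLL rendering written as a C statement:

  #include <stdio.h>

  int main(void) {
      unsigned char al = 5;
      /* 1st generation (machine code): 04 02
         2nd generation (assembly):     ADD AL, 2
         3rd generation (HLL form):                */
      al = al + 2;
      printf("AL = %d\n", al);  /* prints AL = 7 */
      return 0;
  }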
By design every programming language is a specification
language. A specification language is only a programming
language if a compiler or interpreter exists for it. As you must
eventually end up with a programming language as your actual
specification, you might as well skip the extra, in-between
specification language of a three-layer approach and go directly
to a two-layer one. That skipping means less effort overall,
both in initial development and in ongoing maintenance, to keep
specifications (programming-language independent and dependent)
synchronized with changes in requirements. So the least effort
occurs with a two-layered language approach, one informal, one
formal, both in accordance with literate programming. That says
that the formal specification language is a programming
language.
Unfortunately C and its derivatives as currently defined cannot
qualify. While Intel can use a single HLL specification and
programming language for everything from the instruction level
on up, that same capability does not currently exist for C and
its derivatives. In fact this somewhat destroys the myth that C,
which does not natively support all the data types and operators
of an instruction set, is somehow closer to machine or assembly
language than any other language. Obviously the language Intel
uses is closer.
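One concrete instance of the mismatch: the x86 instruction set
has a rotate instruction (ROL), but C has no rotate operator, so
the "close to the machine" language must synthesize it from
shifts and an OR:

  #include <stdint.h>

  /* Rotate x left by n bits; a compiler may or may not
     recognize this idiom and emit a single ROL. */
  static uint32_t rotl32(uint32_t x, unsigned n) {
      n &= 31;
      return (x << n) | (x >> ((32 - n) & 31));
  }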
Now I don't recommend using the Intel HLL due to its
third-generation form. I happen to favor a fourth-generation
approach based on logic programming, which supports
third-generation forms within it as well. That basically says
that it allows expressions involving assertions (fourth
generation) as well as assignments (third generation). Thus you
can have a single language which effectively allows you to
specify all program generation levels down to the instruction
level. This basically says that the language allows a mix of
machine-dependent and machine-independent specifications as
source. As machine-independent specifications must eventually
decompose into machine-dependent ones for code generation, any
implementation (any compiler or interpreter) must have those
specifications in source form to do so.
That alone eliminates C and its derivatives due to their
dependence on third-party, binary library support, standardized
or not. It basically says that input to the process consists
entirely of source, all of whose processing is visible, i.e.
"known", to the compiler or interpreter.
However, no such programming language, compiler, or interpreter
exists. I have such a specification language, which is not yet a
programming language, as no compiler or interpreter exists for
it. I'm working on that. It looks remarkably like PL/I, as does
the Intel language.
May I suggest, then, that for the moment we consider a
three-layer language approach using this language as your
[programming-]language-independent form. Where feasible you can
translate it into C or one or more of its derivatives while I
continue to pursue its implementation. That means it will
eventually become a two-layered approach, reducing significantly
the overall effort of synchronizing changes.
"...Agree. Only comment. DOSCALLS contains as
kernel-depended things (like file/semathores/process/threads
managers) and kernel-independed things (like NLS, MSG-files,
etc.). So I propose do create minimal kernel-depended layer
(lets call them as KRNL.DLL or something like
this ). ..."
I would simply say that whether kernel-dependent or
independent, each is specifiable in the language as a data
type. The rules are specifiable. You can associate rules with
a data type. You can associate inter-dependent rules that
exist between data types. Once you have specified them,
their enforcement becomes the responsibility of the software,
not the programmer. That's one of the benefits of fourth-
generation languages or logic programming.
This means basically that semaphores, files, processes,
threads, etc. exist as data types with associated rules. The
same goes for messages, NLS, etc. You only have to associate
them. You do not have to constantly organize the logic
associated with their use. That becomes the responsibility of
the software. You make a change to the rules, and the software
automatically reorganizes any necessary logic throughout every
use instance in the source.
In short, the software reorganizes the entire source based on
any change. I should not have to spell out what a saving that
represents over current third-generation methods, where such
reorganization is the programmer's responsibility. The software
is millions of times faster and cheaper. That in turn reduces
significantly the human effort required and increases
significantly the productivity of an individual programmer.
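No 3GL delivers that automatic reorganization, but the narrower
idea of attaching a rule to a data type once, instead of
restating it at every use site, can at least be approximated in
C. All names below are invented for illustration:

  #include <assert.h>

  /* Hypothetical event-semaphore type with its rule attached. */
  typedef struct {
      int posted;             /* rule: 0 or 1, nothing else */
  } EventSem;

  /* The rule lives in exactly one place... */
  static void sem_check(const EventSem *s) {
      assert(s->posted == 0 || s->posted == 1);
  }

  static void sem_post(EventSem *s)  { sem_check(s); s->posted = 1; }
  static void sem_reset(EventSem *s) { sem_check(s); s->posted = 0; }

  int main(void) {
      EventSem s = { 0 };
      sem_post(&s);
      sem_reset(&s);
      return 0;
  }

  /* ...so callers never restate it. In the language described
     above even the explicit sem_check calls would disappear,
     enforced by the software itself. */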