Part 14 - Mar 15 2002

Old messages from osFree mailing list hosted by Yahoo!
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#401 From: Frank Griffin <fgriffin@...>
Date: Tue Mar 19, 2002 4:40 am
Subject: Re: Digest Number 32


I really don't want to get into this, but just to (possibly) stop this
thread I have to say...

Lynn, you're wrong about your layering theory as regards Linux, and (I
think) about your conception of microkernels as a whole.

uKs purport to abstract whatever uses them from hardware. Well, some of
them do, and some of them don't. Many academic uKs are concerned simply
with process control, which is pretty much a no-brainer in comparison
with all of the other stuff that makes up a kernel. They do process
control and real memory management, and export the problem of device
management to drivers which are supposed to run in "user space" and are
therefore not their problem. By "device" I don't mean fripperies like
sound card drivers, but stuff like keyboard and hard disk drivers.

Your conception of a uK appears to be some composite of real uKs and the
work that others may or may not have done to "link" them to real
hardware.

Your model of CPAPI and MKAPI assumes that the MKAPI contains all the
functionality of the CPAPI, and that the CPAPI was somehow invented to
sit on top of it. I don't think that is true in general, and I know
that it isn't true in the case of Linux.

All Unixes work on the principle of the kernel providing services to
user-mode code via "syscalls". In the case of the OS/2 kernel, this
would be call gates to ring 0 routines. They are there regardless of
what you want to call them, and are the closest thing to what you're
calling MKAPI. What you would call the Linux CPAPI is in fact the libc
library (sort of like the DOSCALLS DLL) which is used by applications to
call POSIX-compliant services which then use the syscalls of the
underlying operating system to achieve what they need to achieve.
However, anything and anybody can use the Linux syscalls, so your
assertion that an OS/2 personality on top of Linux would be going
through the "Linux CPAPI" layer is false.
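
To make that concrete, here is a minimal sketch (not from the original
post; it assumes Linux with glibc) of the same kernel service reached
both ways: once through the libc/POSIX wrapper that Lynn would call the
CPAPI, and once through the raw syscall interface, the closest thing to
his MKAPI.

#define _GNU_SOURCE
#include <unistd.h>      /* write() -- the libc/POSIX wrapper */
#include <sys/syscall.h> /* SYS_write                         */
#include <string.h>

int main(void)
{
    const char *a = "hello via libc\n";
    write(1, a, strlen(a));                /* the "CPAPI" route */

    const char *b = "hello via syscall\n";
    syscall(SYS_write, 1, b, strlen(b));   /* the "MKAPI" route */
    return 0;
}

Both calls reach the same ring 0 entry point; libc only adds a thin,
documented wrapper, which is why anything and anybody can bypass it.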

Now, to be fair, it *could* be true. It would be true if the
osfree/freeos programmers made the choice to write the OS/2 CPAPI to the
POSIX libc API rather than to the Linux syscall API. In support of your
point, that would exclude them from running with other uKs, but in
support of other opinions it would also enable them to run on any
POSIX-compliant UNIX (albeit with the overhead of an additional layer of
libc-to-syscall interface, which frankly I don't think is going to be of
a magnitude to interest anyone outside of the RTOS community). In any
case, reinventing DOSCALLS or libc from scratch is (on a guess) 90%
recoding stuff that the syscalls/uK never supplied to begin with and 10%
changing the code to use different syscalls.
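
As an illustration of that choice, here is a hedged sketch of one OS/2
CPAPI entry point written to the POSIX libc API. The DosWrite signature
roughly follows the OS/2 toolkit convention, but the mapping below is an
invented simplest case (OS/2 file handles treated as POSIX descriptors),
not how osfree/freeos actually did or would do it.

#include <unistd.h>

typedef unsigned long APIRET;
typedef unsigned long ULONG;
typedef long HFILE;

#define NO_ERROR           0UL
#define ERROR_WRITE_FAULT 29UL  /* classic DOS/OS/2 error code */

APIRET DosWrite(HFILE hFile, const void *pBuffer,
                ULONG cbWrite, ULONG *pcbActual)
{
    ssize_t n = write((int)hFile, pBuffer, cbWrite);  /* POSIX layer */
    if (n < 0)
        return ERROR_WRITE_FAULT;
    *pcbActual = (ULONG)n;
    return NO_ERROR;
}

Retargeting such a routine from libc to raw Linux syscalls would change
only the one write() line, which is the "10% changing the code to use
different syscalls" estimated above.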

Frankly, I think that in your model you have mentally pigeon-holed uK
providers into giving you all of the "stuff'" that separates you from
the hardware. The reality is that most uK providers are much more
interested in academic process-separation theories or better ways of
doing process control and memory management, and are not in the least
interested in the parts of real kernel development that are going to soak
up 98% of your time if you want to base your code on theirs, namely
supporting keyboards, disks, SCSI, monitors, etc.

Timur makes the point that the Linux driver model allows drivers to have
full kernel privileges. That's true, although the best-practices docs
discourage people from using anything other than the process-control and
memory-management kernel APIs. The kernel doesn't specifically prohibit
drivers from poking around, but Linux's claim of superior reliability
versus Windows is a pretty good guarantee that aberrant drivers which
compromise kernel integrity will either be shunned or rewritten.
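
Timur's point shows up in even the smallest Linux driver skeleton
(sketched here against the 2.4-era module interface; illustrative only):
once loaded, it executes at full kernel privilege, and only convention
keeps it to the documented process-control and memory-management APIs.

#include <linux/module.h>
#include <linux/kernel.h>
#include <linux/init.h>

static int __init demo_init(void)
{
    printk(KERN_INFO "demo: loaded, running at kernel privilege\n");
    return 0;
}

static void __exit demo_exit(void)
{
    printk(KERN_INFO "demo: unloaded\n");
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");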

Neither ODIN nor WINE is a poster-child for your theories. An OS/2
CPAPI built on top of either Linux syscalls or a POSIX libc would be
working from publicly documented APIs. ODIN and WINE are hampered by
the fact that they are not trying to allow non-Windows programmers to
use Win32 APIs, but are trying to accommodate the Win32 API use of
existing Win32 applications, the most important of which are MS
applications which use undocumented APIs deliberately to foil such
attempts.

As for myself, I'm sitting out until somebody comes up with a plan for
either of these efforts which delivers the components I consider
critical in a reasonable timeframe. Those components are a GUI and a
JVM. If we had people committed to producing those, I would commit to
doing kernel (or whatever) work. Failing that, I think you're wasting
your time pursuing operating environments which don't already have them.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#402 From: "Lynn H. Maxson" <lmaxson@...>
Date: Tue Mar 19, 2002 6:54 am
Subject: Re: Digest Number 32


Frank Griffin writes:
" ... Lynn, you're wrong about your layering theory as regards
Linux, and (I think) about your conception of microkernels as a
whole. ..."

I wondered if you were lurking out there. If I'm wrong, well, it
won't be the first time. Fortunately I have the habit of getting
it right eventually. You did cause me to refer once more to the
IBM redbook describing OS/2 for the PPC and specifically the
microkernel. I saw nothing there that basically changes my view.

I frankly don't care which Linux interface is selected, syscalls or
POSIX; the description I offered of the API differences that could
occur still stands. I don't want to stop or discourage anyone from
attempting an OS/2 layering on either. I just feel that for
essentially the same amount of effort, considering that the Linux
source is available, it is just as easy to create a
microkernel version, i.e. a common set of lowest level APIs, and
the two OS personalities. If the ODIN people did the same for a
Windows personality, then we could offer something that was better
overall than the original. Better in quality, better in
performance, better in function.

When I look at the end point of vertical layering I don't see that
you have done anything except offer a migration path from OS/2.
It certainly won't encourage either vendor device driver support
or the writing of applications using those devices. My object
remains freeing the user from having to make an OS decision based
on applications or drivers, but rather to provide a user environment
in which the user is isolated from such concerns. In that manner,
if the quality, performance, and function are there, it can compete
openly with the other OS choices.

I obviously have more research to do. I have a greater interest
in writing a software development tool that makes it possible for
a handful of people to have greater productivity than hundreds.
If and when that occurs, then closed source will no longer be a
competitive option.

At any rate it's good to hear from you. I think you are right that
this thread has gone on far too long.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#403 From: Frank Griffin <fgriffin@...>
Date: Wed Mar 20, 2002 3:25 am
Subject: Re: Digest Number 33


>At any rate it's good to hear from you.

And from you. Somehow I didn't think you'd be silent over here. I joined
up when somebody posted the existence of this group on FreeOS. It's been
interesting to read all of the old arguments again....


> I just feel that for
> essentially the same amount of effort, considering that the Linux
> source is available, that it is just as easy to create a
> microkernel version, i.e. a common set of lowest level APIs, and
> the two OS personalities.

What I keep trying to get across is that once you've written a dispatcher,
it's done (until you want to improve it). On the other hand, keyboard and
mouse and video and disk drivers are empirically-driven nightmares. If you
don't believe me, then check out the dozens upon dozens of idiot
"account-for-the-bug-in-XXXX" options in a Linux install. Once you walk
away from a kernel where someone has already done this work, you are
destined to have to do it again yourself.

You are known to be enamored of the support costs topic, and this is its
poster child. If you snapshot a source base and cut yourself off from the
automatic inclusion of future work by the people who produced what you
snapshotted, you have a support nightmare.

I doubt that you are going to find many uKs with the ongoing commitment to
device support that Linux has, simply because of the disparity in the size
of developer communities.

Of course, my arguments could be out of date. Traditional PCs are filthy
pieces of hardware design, but there are efforts underway to eliminate all
of the difficult-to-support hardware (kiss that floppy goodbye). Maybe by
the time this sees the light of day, the issue will be moot.

In any case, I think you've been spoiled a bit by researching OS/2 PPC. IBM
took the Mach uK and decided to fill in the device support holes to produce
a full kernel. The trouble is, you don't have their work, so you're back in
the position they were when they started.

> When I look at the end point of vertical layering I don't see that
> you have done anything except offer a migration path from OS/2.

No argument there. My prime directive is to find or build another
environment that has all the stuff I liked in OS/2.

> It certainly won't encourage either vendor device driver support
> or the writing of applications using those devices.

NOTHING short of a nuclear strike on Redmond is going to encourage vendor
device driver support for a new OS. However, if the drivers are there and
the interface is that of a popular OS, then the applications are already
there and will continue to arrive.

> My object
> remains freeing the user from having to make an OS decision based
> on applications or drivers, but rather provide an user environment
> in which the user is isolated from such concerns. In that manner
> if the quality, performance, and function is there, it can compete
> openly with the other OS choices.

This will be true of any of the options discussed to date. With CPU speeds
outdistancing bus speeds by a factor of 10-20, the performance impact of
even the libc approach will be negligible.

> I have a greater interest
> in writing a software development tool that makes it possible for
> a handful of people to have greater productivity than hundreds.
> If and when that occurs, then closed source will no longer be a
> competitive option.

I really wish that you would work on this, as you've been talking about it
for so long. We went back and forth on this for a while many moons ago, but
I could never quite grasp how what you were talking about would actually
work, and all of your replies seemed to need a listener versed in more
language-theory than I was or wanted to be.

Please, no more marketing brochure material --- give us a description of how
one would use this tool to create an application (or whatever) and exactly
how the tool would perform the functions it was supplying and what other
pieces of software (database, source control system, etc.) it would need.

I've been programming for 30+ years, and I come away from your descriptions
of this thing thinking that it sounds like an academic project intended for
someone's doctoral thesis. I have to think that others are similarly lost.
I think what you need to do for an audience like this is tie what you want
the tool to do to how it will do it in terms of existing programming
constructs, e.g. it will need a database with the following schemas, and
when somebody writes specifications it will take them and do xxxxxxxx,
resulting in the following database entries for each program unit.

My gut feel about your proposed tool has always been good. I've just never
been able to link it to my view of reality. In any case, if it involves
writing in a new language, I doubt you'll find people in a project like this
willing to learn and use it. Still, that wouldn't be a reason not to do it.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#404 From: "Lynn H. Maxson" <lmaxson@...>
Date: Wed Mar 20, 2002 7:19 pm
Subject: Re: Digest Number 33


Frank Griffin writes:
" ... I really wish that you would work on this, as you've been
talking about it for so long. We went back and forth on this for
awhile many moons ago, but I could never quite grasp how what you
were talking about would actually work, and all of your replies
seemed to need a listener versed in more language-theory than I
was or wanted to be. ..."

I've named the tool The Developer's Assistant. The language is
SL/I (Specification Language/One), a superset of PL/I and APL. It
borrows the syntax and data types of PL/I, the operators and
symbol set of APL, and in addition adds the use of rules,
relationships, and assertions of logic programming. As a
specification language based on predicate logic (as opposed to the
clausal logic of Prolog) it is capable of specifying itself,
making it then self-extensible.

The Warpicity proposal included as a package the tool (The
Developer's Assistant), its source written in SL/I, and the
specifications (the source) for the SL/I language written in SL/I.
As it includes the SL/I specifications for the instruction set of
any given platform and allows both assignment and assertion
statements, it incorporates all of symbolic assembler (2nd
generation imperative language), imperative HLLs (third generation
languages), and declarative HLLs (fourth generation languages).

People tend to focus on the language as for some reason that has
fascination for programmers. I tend to focus on the tool as that
is what provides the productivity gains that makes everything else
possible. The tool is a Swiss knife in that it supports all the
activities involved in software development as a seamless,
integrated whole.

Its basic mode is interactive, interpretive. You might say it's
an "intelligent" editor to which we have added syntax analysis,
semantic analysis, and the two-stage proof engine of logic
programming. If you can buy an editor that does color-coded
syntax analysis, there's no sense in just stopping there without
throwing in semantic analysis and a proof engine, the part which
produces executable code. That's the way it has worked since the
first interpreter was written. The tool then is simply an
enhanced interpreter based on an enhanced specification language.

When I say it is enhanced realize that in logic programming you go
directly from a specification to an executable. If you look at
the first four stages of the software development
cycle--specification, analysis, design, and construction--this
says that the software, the tool, takes an unordered set of
specifications, performs analysis, design, and construction (which
orders the source and produces an executable).

The only manual writing which occurs is that of the unordered
specifications. Many enterprises use CASE tools, which
essentially are manual drawing tools for analysis and design. I
happen to come from the structured design methodology in which
dataflow charts result from analysis and their conversion to
structure charts in design. Once the tool has ordered the source
code it produces the dataflow and structure charts in the same
manner that a flowcharting program works today.

So you write an unordered set of specifications, the tool orders
them after syntax and semantic analysis into an ordered set and
executable (the first-stage or completeness proof of the logic
engine). Once the completeness proof has finished it also offers
the dataflows and structure charts of the ordered source as well.
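
Whatever else the completeness proof does, the ordering step described
here is at bottom a dependency sort. The sketch below (invented for
illustration, in C rather than SL/I; the units and dependency table are
hypothetical stand-ins for specification statements) shows both outcomes
Lynn mentions: a successful ordering, and the "incomplete" case when a
dependency is circular or missing.

#include <stdio.h>

#define N 4
/* dep[i][j] != 0 means unit i uses a name defined by unit j */
static const int dep[N][N] = {
    {0, 1, 0, 0},   /* unit 0 depends on unit 1     */
    {0, 0, 0, 0},   /* unit 1 depends on nothing    */
    {1, 1, 0, 0},   /* unit 2 depends on units 0, 1 */
    {0, 0, 1, 0},   /* unit 3 depends on unit 2     */
};

int main(void)
{
    int done[N] = {0}, emitted = 0;
    while (emitted < N) {
        int progress = 0;
        for (int i = 0; i < N; i++) {
            if (done[i]) continue;
            int ready = 1;
            for (int j = 0; j < N; j++)
                if (dep[i][j] && !done[j]) ready = 0;
            if (ready) {                 /* all prerequisites ordered */
                printf("unit %d\n", i);
                done[i] = 1; emitted++; progress = 1;
            }
        }
        if (!progress) {                 /* the incompleteness case */
            printf("cyclic or missing dependency -- incomplete\n");
            return 1;
        }
    }
    return 0;
}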

Now all this is done interpretively as the tool accepts a
specification, entering it into the following steps of syntax and
semantic analysis and the completeness proof. This provides a
dynamic visual output in separate windows of the outputs of each
step. In one window you have the ordered source; in another, the
dataflows; in another, the structure charts; and in another, the
output of semantic analysis (the names, attributes, and statement
cross-references of all data variables and constants).

I didn't mention that scope of compilation, the number of object
modules produced, is determined by the input specifications: it
can be one or more. Thus if you enter the specifications for an
entire application system (multiple programs) or an operating
system (multiple, different module types), the outputs of the tool
will show them in their entirety.

So now you have all of the visual output produced automatically in
the software. If you make a change to the input specification
set, once completed its effect is reflected "immediately". If
that means a reorganization, a reordering of the source, that
occurs "immediately" also. Thus the documentation remains
synchronized with changes to the input specifications.

Now if you are familiar with IT organizations that separate the
functions of gathering user requirements, transforming them into
formal specifications, selecting a group of them for manual
analysis, design, and construction using CASE tools, editors,
compilers, linkers, and make systems, all of which involve manual
writing, (whew) then you can appreciate the time and effort saving
by turning all writing except for requirements, specifications,
and user documentation over to the software.

So you have software companies like Rational working to write code
generation in C++ or JAVA from UML document source. If you have
15 different types of document source, then you have separate
maintenance for each. If, however, you have the source
specifications, once they are ordered (whether complete or not)
you can produce all the UML documentation. It doesn't take a
genius or rocket scientist to know that maintenance of a single
source is easier than multiple, that writing a single source is
easier if you don't have to order it, and that the ordering of
source and production of all documentation by software is easier,
faster, and cheaper than doing it manually.

You see people get hung up on the language which basically has
nothing to do with the tool. You could write the tool in C (and
assembler) and get the same type of productivity gains. All you
need is an interpreter which produces CASE (structured design or
UML) output. The only change you have to make to C is eliminate
nested procedures, allowing each procedure to appear separately in
the source for later ordering by the interpreter. You might want
to consider increasing the scope of interpretation (compilation)
to multiple object modules (executables).

There's nothing radical in this except for having the CASE output
produced from the source. If it's not radical for a flowcharting
program, then it's certainly not radical for CASE output. In
truth I haven't invented anything, certainly not a silver bullet.
I've taken existing technology and reorganized it. I've kept the
"what" and changed the "how".

It all arises from a study of logic programming and of what it
meant for software to accept unordered input and through internal
logic perform analysis, design, and construction. While the
output of construction, the ordered source, sufficed for producing
an executable, why could it not produce the logically equivalent
forms of analysis and design as well?

The completeness proof, the first-stage of the logic engine, is a
magnificent beast. For it is not only a completeness proof, but
an incompleteness proof as well. It not only tells how it looks
when you got it right, i.e. complete, but how it looks when you
didn't and why. That's all due to its backtracking mechanism. So
here for over thirty years (and an SQL later<g>) we have had this
magnificent ability for a software peer review, giving us a view
to compare what we have said with what we thought we had said.
And to do it at an incomprehensible rate compared to scheduling,
preparing, and holding a formal peer review.

Do I think there are productivity gains here? You betcha.<g>

Is there a catch? You betcha. Such a tool reduces the market for
its services. The more it reduces it, i.e. its population of
users, the fewer sales will result. If it is too effective, i.e.
too productive, it might never have a sales volume large enough to
cover its cost of sales. Thus whatever value may accrue to using
it has only value to its users, not to its producers. Only its
users can profit from it. Thus they should absorb its cost of
sales. In that manner the producer profits from the contract, not
from the sale of the product.

So don't get hung up on the language, which is to a large extent
unimportant. Focus on the tool.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#405 Re: [osFree] Digest Number 33

Lynn H. Maxson
Mar 20, 2002
Frank Griffin writes:
"... In any case, I think you've been spoiled a bit by researching
OS/2 PPC. IBM took the Mach uK and decided to fill in the device
support holes to produce a full kernel. The trouble is, you don't
have their work, so you're back in the position they were when
they started. ..."

I should probably have just stated that the Developer's Assistant
is an enhanced interpreter in C++ that produces CASE output as
well as produces multiple object module types as a single unit of
work. If SL/I can specify itself, it is certainly capable of
specifying C++.<g> I mean it's a universal specification
language. The only one you have ever seen is textbook formal
logic which seemingly has been too great a challenge to
programming language authors up to now.<g>

With respect to OS/2 for the PPC I'm not quite in the same
position. At least I know that a uK version is possible. They
really didn't know that when they started. I have to get a copy
of the MACH kernel and perhaps that of ReactOS as well. I need at
least a starting set of lowest level APIs. Then when I attempt to
map them into the OS/2 CPAPIs, I will set about mapping in the
inbetweens. Thus I will have my kernel set.

As I am disinclined to burden objects with OO technology as I
don't want to burden myself I suspect my implementation will
differ significantly from either MACH or ReactOS and resemble more
what I gained from other IBM operating systems (MVS, VSE, VM,
AS/400, DPPX, CICS, etc.). The essence lies in the peer review of
the documentation supporting this effort as it is produced. Maybe
somewhere along the line people will begin to understand from a
software perspective this is just another application.

Probably at the same time I will incorporate the Linux source, at
least its API set. If I then have a common configuration
management scheme as part of the IPL process, I should be on my
way to demonstrating two OS personalities executing concurrently
and independently of each other. Maybe at that time the ReactOS
folks will more fully appreciate the generic approach and we can
merge our efforts. There's a Lindows (Linux-Windows) effort
underway out here that's possibly another source.

I will accept the existing drivers for whatever operating system,
allocating (deallocating, then allocating) the devices
dynamically from one to the other. That implies a "master" OS
personality governing the others. The applications will run in
native mode unaltered.

If in doing all this, we can from a performance, reliability, and
security point of view produce a better OS/2 than OS/2, Linux than
Linux, and Windows than Windows, I suspect that we will pull
talent from those arenas for further assistance. I wouldn't be
surprised if we then went on to offer a browser personality, a
database personality, and a JVM personality.

It's ambitious, but why not? If people feel up front that you
can't succeed, then why compromise along the way on the
off chance that you don't fail?<g> You're not inventing anything here
as much as you are reinventing, in which case you have the ability
to improve on the original. Seemingly you have an existing supply
of improvements already that people would like to see. So why not
offer people what they want?

"... NOTHING short of a nuclear strike on Redmond is going to
encourage vendor device driver support for a new OS. However, if
the drivers are there and the interface is that of a popular OS,
then the applications are already there and will continue to
arrive. ..."

Which is probably another nail in ODIN's coffin, having
applications arrive, converting them, and then not having the
driver support. Maybe, just maybe, it's time that an equal effort
went into creating a Windows personality on the same uK base using
Windows drivers and applications. Somewhere along the way it will
occur that the undocumented APIs that make Office off limits are an
anti-competitive act of a convicted monopolist further subjecting
them to corrective action.

Eventually we, the OS using community, will actually enforce open
standards as well as open source by essentially contracting our
software in the same manner that it has occurred in IT shops from
the beginning. In that manner when we pay for something we will
own it and set our own rules with respect to its distribution and
sale. It probably puts software vendors out of business, forcing
them into becoming software contractors. In that case when they
decide arbitrarily to go out of business or otherwise lose
interest, we will have the source for more willing contractors to
maintain and enhance for us in case we didn't want to do it for
ourselves.

The using community which purchases product can spend less money
contracting for it with the net result that they own what they
have paid for. That puts them in control. If you combine this
with a tool like the Developer's Assistant, then the cost to the
using community becomes significantly less as well as
significantly easier to develop and maintain. If the using
community assumes this responsibility, then the "voodoo economics"
currently governing open source will disappear and we will have a
proper "business model" in which all costs (subsidized or not) are
accounted for. There's no free lunch or Linux.<g>
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#406 Down to earth

Lynn H. Maxson
Mar 26, 2002
I have no idea how these Yahoo fees will work out or their effect
on us. I do have some idea, a suggestion, relative to the common
efforts of these two (freeos, osfree) mailing lists. The
suggestion is relatively simple. Use freeos for those interested
in pursuing a uK strategy for constructing an OS/2 replacement
suitable for replacing Linux, Windows, and BeOS as well. Use
osfree for those interested in creating an OS/2 CPAPI layer onto a
Linux host.

They both have in common the need to "rest" the OS/2 CPAPI on an
underlying host. Therefore they can pursue in common the
specification and coding of the OS/2 CPAPI relative to their
choice of underlying host. I would further suggest that they look
to "The Design of OS/2" as a document guideline for decomposing
the CPAPI into rational units.

In addition I suggest the uK folks look to "OS/2 Warp(PowerPC
Edition): A First Look" as their documentation and decomposition
guideline. In truth if we regard these are reasonable approaches
with respect to division of labor (and interest) we could return
to a single mailing list for both.

As a proponent of the uK approach, and as one who feels the need not
simply to replace OS/2 (and Linux, Windows, and BeOS) but to offer
something better, I have a bit more ambition for this undertaking.
Previous OS authors have assumed theirs was the only one
executing. That led to a "restricted" environment in two areas,
device driver implementation and application support (the CPAPI).

Now applications remain "devoted" to their CPAPI as in a sense do
the device drivers. It does little good to provide a specific
device driver for an OS for which no applications exist. In the
same sense if a device driver exists for applications for more
than one OS then it does little good to allocate control of the
device to only one OS. Therefore a need exists to dynamically
allocate and deallocate device resources among participating OSes.
In short "true" plug (and unplug) and play.
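
No such interface exists in any of the systems named here; purely as an
illustration of the idea, a sketch with invented names might track device
ownership in a "master" table, a reassignment being a deallocate followed
by an allocate:

#include <stdio.h>

enum personality { NONE, OS2, LINUX, WINDOWS };
#define MAX_DEV 8

static enum personality owner[MAX_DEV];   /* master ownership table */

static int dev_deallocate(int dev, enum personality p)
{
    if (owner[dev] != p) return -1;       /* p is not the owner */
    owner[dev] = NONE;
    return 0;
}

static int dev_allocate(int dev, enum personality p)
{
    if (owner[dev] != NONE) return -1;    /* device still held */
    owner[dev] = p;
    return 0;
}

int main(void)
{
    owner[0] = OS2;                       /* disk starts under OS/2 */
    if (dev_deallocate(0, OS2) == 0 && dev_allocate(0, LINUX) == 0)
        printf("device 0 moved from OS/2 to Linux\n");
    return 0;
}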

Anyway just some thoughts on the matter.
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#407 Repair your cedit now!

pilooera
Mar 26, 2002
This site has found credit card deal that will approve you. Check it
out

Click on link Below!

http://apply4creditcard.com
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#408 XfilesG.zip

mailjmase
Mar 27, 2002
This file was not complete (e.g. the zip had errors).
Could whoever uploaded it upload it again?
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#409 Re: Down to earth

tomleem7659
Mar 28, 2002
--- In osFree@y..., "Lynn H. Maxson" <lmaxson@p...> wrote:

> I have no idea how these Yahoo fees will work out or their effect
> on us. I do have some idea, a suggestion, relative to the common
> efforts of these two (freeos, osfree) mailing lists. The
> suggestion is relatively simple. Use freeos for those interested
> in pursuing a uK strategy for constructing an OS/2 replacement
> suitable for replacing Linux, Windows, and BeOS as well. Use
> osfree for those interested in creating an OS/2 CPAPI layer onto a
> Linux host.
>

This is a good idea and worth pursuing. This will also help
both groups reach their goals without having to argue what those
goals are. If in the 'home' section of each group it is
defined what the goal is, this could preclude many possible
arguments. This does not mean one cannot join both groups
or one group cannot learn from the other. It would define
where each group is heading and remove some obstacles that
might come up as a result of not really knowing where one
is going with the group.

TomLeeM
admin
Site Admin
Posts: 1925
Joined: Wed Dec 19, 2018 9:23 am
firstname: osFree
lastname: admin

Re: Part 14

Post by admin »

#410 Re: [osFree] Repair your cedit now!

Adrian Suri
Mar 28, 2002
Sorry, but do we really need ads like this? Maybe only members should
be able to post to this group, to help cut back on such spam.

Adrian

pilooera wrote:

> This site has found credit card deal that will approve you. Check it
> out
>
> Click on link Below!
>
> http://apply4creditcard.com