
Re: [Announce] Linux Test Project



[ Resend.  Victim of MS Exchange ]

Oh damn.  Now I'm worried.  I regret redirecting this discussion to the
ltp mailing list so early.

I'm already in your camp, Aaron - in real life I manage teams who
develop products for Nortel Networks, some of which my customers use to
turn over revenues of over $1bn p.a. So I understand what's involved in
delivery of complex software at high quality levels (some of my
colleagues would disagree with this statement :))

I suspect the same goes for most other people on the ltp list?

At this stage there isn't a lot of point in the converted preaching to
each other on ltp@oss.sgi.com.  It needs explanation and evangelism.

Let's think about how to do that.

Aaron Laffin wrote:
> 
> I've polluted Andrew's response with a bunch of responses.  I hope
> others have opinions to share.  I think soon we need to start to discuss
> overall project goals. <SGI corporate hat on>We have goals for this
> project that we think fit nicely into the novel goal of bringing
> organized testing to Linux.  We hope to share them.</corporate hat>
> 
> Since I'm writing this from home, I'll inject a personal opinion here.
> My personal goal is to increase the quality of Linux.  I'm of the
> personal belief that the scalability of the current testing model has
> reached its limit.  That's not to say it needs to be replaced; it needs
> a complement.  I also hope we can complement the lk development model
> as a whole.

I foresee organisational problems.  How do we persuade kernel developers
to contribute test cases and regression tests for their own components
and changes?  Hard.

I'm sure that many developers already have home-grown test tools.  It's
a matter of getting them to pony up.  Hard.

I also foresee areas of tension between your corporate goals and the
rest of the world's.  Not bad things per se, just "things":
SPARC/PPC/Alpha.  The IDE driver.  If the sponsors of this project do
not require IDE in their products...

This is quite germane, because the bulk of Linux quality problems are in
device drivers.  A huge number of them are unmaintained, slow, buggy and
not SMP-safe...

> Andrew Morton wrote:
> >
> > - Great initiative!
> 
> Thanks.  We've been waiting for someone like you to come in and ask
> these questions.  We have lots of ideas, but didn't want to blurt them
> all out to nobody listening.  Furthermore, we didn't want to start a
> project around our potentially narrow mindset.

I think we need to take this back to linux-kernel.  Mea culpa.

> >
> > - Performance regression would be a good thing, but rather tricky to set
> > up.
> 
> I'm not sure I understand what you mean.  Do you mean 'operation X took
> 10 mins on 2.2.4 and 20 mins on 2.2.5' - why?

Yes.  There are any number of benchmarks available - lmbench, bonnie++,
....  A set of wrappers around these which could spot performance
regression would be useful.  
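
Something along these lines is roughly what I have in mind - a toy
sketch only, and the "name seconds" results format and the 10%
threshold are pure invention on my part:

/* perfcmp.c - toy sketch of a performance-regression checker.
 * Compares "testname seconds" pairs in a current results file against
 * a stored baseline and flags anything more than 10% slower.  The file
 * format and threshold are invented for illustration.
 */
#include <stdio.h>
#include <string.h>

#define THRESHOLD 1.10		/* flag results more than 10% worse */

static double lookup(FILE *f, const char *name)
{
	char n[64];
	double v;

	rewind(f);
	while (fscanf(f, "%63s %lf", n, &v) == 2)
		if (strcmp(n, name) == 0)
			return v;
	return -1.0;
}

int main(int argc, char **argv)
{
	FILE *base, *cur;
	char name[64];
	double base_val, cur_val;
	int regressions = 0;

	if (argc != 3) {
		fprintf(stderr, "usage: %s baseline current\n", argv[0]);
		return 2;
	}
	base = fopen(argv[1], "r");
	cur = fopen(argv[2], "r");
	if (!base || !cur) {
		perror("fopen");
		return 2;
	}
	while (fscanf(cur, "%63s %lf", name, &cur_val) == 2) {
		base_val = lookup(base, name);
		if (base_val > 0 && cur_val > base_val * THRESHOLD) {
			printf("REGRESSION: %s  %.2f -> %.2f\n",
			       name, base_val, cur_val);
			regressions++;
		}
	}
	return regressions ? 1 : 0;
}

Run the benchmarks from a wrapper which writes those "name seconds"
pairs, keep a per-machine baseline file around, and the nightly run can
yell when something slows down.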
 
> >
> > - I'd encourage you to make it very easy for people to contribute new
> > test cases.
> >
> > This is an organisational thing.  If joe-random-developer comes up with
> > some little test case it would be neat if he could wrap it in a little
> > scripting framework and submit it with _minimal_ effort.
> 
> Agree completely.  In fact, we have discussed this at length.  We've
> come to the conclusion that if the tests are too hard to write or
> understand, the project will fail.  I am convinced that tests can be
> easy to write and all those lkml lurkers out there looking for a place
> to help out can get in on the development of the linux kernel: testing.

Good.  I'll pull down your existing stuff and provide feedback (probably
on linux-kernel).  Give me a few days.
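
For what it's worth, my mental picture of a "minimal effort" test case
is something like the one below - not your API, just a strawman which
prints a single line and exits zero for pass, non-zero for fail:

/* mmap01.c - strawman minimal test: map an anonymous page, write to
 * it, check that it reads back.  Exit 0 = pass, non-zero = fail.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
	size_t len = getpagesize();
	char *p;

	p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	if (p == MAP_FAILED) {
		perror("FAIL: mmap");
		return 1;
	}
	memset(p, 0xaa, len);
	if (p[0] != (char)0xaa || p[len - 1] != (char)0xaa) {
		fprintf(stderr, "FAIL: page contents wrong\n");
		return 1;
	}
	munmap(p, len);
	printf("PASS: anonymous mmap read/write\n");
	return 0;
}

If joe-random-developer can drop something of that size into a
directory, add one line to a list of tests and be done, I think you'll
get contributions.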

> >
> > - Some tests will not lend themselves to autorunning.  Networking, of
> > course.  Plus tests which involve removable media, unplugging devices,
> > etc.
> 
> I must admit that I'm biased towards automation -- and I think most
> testers are.  If we had to perform thousands of test cases in a manual
> fashion, we'd quit our jobs.  It's hard enough writing automated tests
> that don't fail.  It's when we get word, the morning after our build,
> from our automated test runs that one of the tests you wrote a year ago
> is failing -- that's when it's most exciting.
> 
> You're right though, there is a place for semi-automated tests (tests
> which need our help).  We need to decide how they fit into this project.

I agree.  Being able to type `make test' and go to bed is a big win. 
It's a thing I've never had the luxury of being able to put in place,
because all the stuff here requires communication between lots of quite
dissimilar and complex nodes.  Operating systems are easy :)
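
For the semi-automated cases the wrapper needn't be fancy - something
which stops the run, tells the tester what to go and do, and records a
yes/no answer would cover a lot of ground.  A sketch (the conventions
here are invented, not anything you've published):

/* prompt.c - sketch of a "stop and ask the tester" helper for
 * semi-automated tests, e.g. "unplug the network cable now".
 * Prints the instruction, waits for y/n, exits 0 (pass) or 1 (fail).
 */
#include <stdio.h>

int main(int argc, char **argv)
{
	char answer[16];

	printf("MANUAL STEP: %s\n",
	       argc > 1 ? argv[1] : "(no instruction given)");
	printf("Did it work as expected? [y/n] ");
	fflush(stdout);

	if (!fgets(answer, sizeof(answer), stdin))
		return 1;
	return (answer[0] == 'y' || answer[0] == 'Y') ? 0 : 1;
}

The harness just has to know that some tests may block on the console,
and collect those into a separate "needs a human" run.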

> >
> > So the environmental wrappers (which are really the core of this
> > project) will need to be able to stop and prompt for some external
> > activity.  And the means of presenting the documentation for the test
> > setup should be thought out up-front.
> >
> > - Is there prior-art here which can be learnt from?  How do the IRIX QA
> > guys test stuff before it goes out?
> 
> That's us.  We hope there is some prior-art.  That's why we've released
> some code.  It's simple, but there is a reason for that.  It's a good
> start for illustrating the simplicity of writing tests.
> 
> >
> > - A lot of kernel tests require that the kernel be patched to run the
> > test (whitebox testing).  spinlock debugging, slab poisoning, assertion
> > checking (what assertions?), etc.  Is it your intent to take things this
> > far?
> 
> Speaking for myself, I think we might want to avoid this.  It's great
> stuff, but I'd just as soon leave this level of testing to the kernel
> developers.

Fair enough.  But a simple global CONFIG_TESTING option in the kernel
source would help with diagnosis.  That may not be too hard to arrange -
just a matter of ORing all the existing DEBUG macros with a new global
one.
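
i.e. something like this in each subsystem's debug header, where
CONFIG_FOO_DEBUG stands in for whatever per-driver debug option already
exists (a sketch only, untested):

/* If either the driver's own debug option or the proposed global
 * CONFIG_TESTING switch is set, turn the existing debug code on.
 */
#if defined(CONFIG_FOO_DEBUG) || defined(CONFIG_TESTING)
#define FOO_DEBUG
#endif

#ifdef FOO_DEBUG
#define DPRINTK(fmt, args...)	printk(KERN_DEBUG "foo: " fmt , ##args)
#else
#define DPRINTK(fmt, args...)	do { } while (0)
#endif

Tedious to retrofit everywhere, but each individual change is
mechanical.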

> >
> > - The test execution environment will need to be able to not run some
> > tests, based upon the tester's architecture and config (eg: I don't have
> > USB..).
> 
> This is an issue the project needs to find an answer for.  We have tools
> (SGI internal) that help us out with this.  However, we have an
> advantage in that our hardware and OS are all ours.  Much of the
> variability is constrained (# of hardware/software configs) and so our
> tools may not be enough for Linux.

mm..  But this is a technical issue.  They're the easy ones to solve!

I think coordinating the test script with $TOPDIR/.config is the best
approach, although it's perfectly valid (and gives better coverage) to
just run all the tests every time and to use the .config to weed out
bogus error reports.
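
The .config check itself is trivial.  A sketch - the "skip" exit-code
convention is something the harness would have to define, and CONFIG_USB
is just the example from above:

/* config_has.c - exit 0 if the given .config sets the option to y or m,
 * 1 otherwise, 2 on error.  A test wrapper can then do something like
 *     config_has $TOPDIR/.config CONFIG_USB || skip-this-test
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
	char line[256], want_y[256], want_m[256];
	FILE *f;

	if (argc != 3) {
		fprintf(stderr, "usage: %s .config CONFIG_OPTION\n", argv[0]);
		return 2;
	}
	f = fopen(argv[1], "r");
	if (!f) {
		perror(argv[1]);
		return 2;
	}
	snprintf(want_y, sizeof(want_y), "%s=y", argv[2]);
	snprintf(want_m, sizeof(want_m), "%s=m", argv[2]);

	while (fgets(line, sizeof(line), f)) {
		line[strcspn(line, "\n")] = '\0';
		if (strcmp(line, want_y) == 0 || strcmp(line, want_m) == 0) {
			fclose(f);
			return 0;
		}
	}
	fclose(f);
	return 1;
}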

Anyway, I need to take a look at your stuff before going any further.

BTW: A kernel bug-tracking system would be nice.  Do you have any bored
web developers over there?