
Re: [Announce] Linux Test Project




I've polluted Andrew's response with a bunch of replies of my own.  I
hope others have opinions to share.  I think we soon need to start
discussing overall project goals.  <SGI corporate hat on>We have goals
for this project that we think fit nicely into the novel aim of
bringing organized testing to Linux.  We hope to share them.
</corporate hat>

Since I'm writing this from home, I'll inject a personal opinion
here.  My goal is to increase the quality of Linux.  I believe the
scalability of the current testing model has reached its limit.
That's not to say it needs to be replaced; it needs a complement.  I
also hope we can complement the lk development model as a whole.

Andrew Morton wrote:
> 
> - Great initiative!

Thanks.  We've been waiting for someone like you to come in and ask
these questions.  We have lots of ideas, but didn't want to blurt them
all out with nobody listening.  Furthermore, we didn't want to start a
project around our own potentially narrow mindset.

> 
> - Performance regression would be a good thing, but rather tricky to set
> up.

I'm not sure I understand what you mean.  Do you mean asking why
'operation X took 10 minutes on 2.2.4 but 20 minutes on 2.2.5'?
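
If that's the idea, a simple harness could time an operation and
compare it against a recorded baseline.  A rough sketch; the dd
workload, the 20% threshold, and passing the baseline on the command
line are placeholders of mine, not a proposal:

/* perf_check.c -- time a workload and flag a slowdown against a
 * baseline measured on a known-good kernel. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/time.h>

static double now(void)
{
	struct timeval tv;
	gettimeofday(&tv, NULL);
	return tv.tv_sec + tv.tv_usec / 1e6;
}

int main(int argc, char **argv)
{
	double baseline, start, elapsed;

	if (argc < 2) {
		fprintf(stderr, "usage: perf_check <baseline-seconds>\n");
		return 2;
	}
	baseline = atof(argv[1]);

	start = now();
	/* placeholder workload; a real test would exercise the kernel
	 * path under suspicion */
	system("dd if=/dev/zero of=/tmp/scratch bs=1024k count=64 2>/dev/null");
	elapsed = now() - start;

	printf("elapsed %.2fs, baseline %.2fs\n", elapsed, baseline);

	/* flag a regression if more than 20% slower than baseline */
	return (elapsed > baseline * 1.2) ? 1 : 0;
}

Run that against each kernel under test and compare exit statuses; the
hard part is deciding which baselines to record and where to keep them.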

> 
> - I'd encourage you to make it very easy for people to contribute new
> test cases.
> 
> This is an organisational thing.  If joe-random-developer comes up with
> some little test case it would be neat if he could wrap it in a little
> scripting framework and submit it with _minimal_ effort.

Agree completely.  In fact, we have discussed this at length.  We've
come to the conclusion that if the tests are too hard to write or
understand, the project will fail.  I am convinced that tests can be
easy to write, and that all those lkml lurkers out there looking for a
way to help can get in on Linux kernel development through testing.
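
To make that concrete, here's roughly what I'd call an easy test: a
self-contained C program that exercises one code path and reports
pass/fail through its exit status.  The exit-code convention here is
just one possibility, not something we've settled on:

/* grow_test.c -- example of an "easy to write" test: create a file,
 * write to it, verify the size.  Exit 0 means pass. */
#include <stdio.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/stat.h>

int main(void)
{
	const char *path = "/tmp/grow_test.dat";
	const char buf[] = "hello";
	struct stat st;
	int fd;

	fd = open(path, O_CREAT | O_TRUNC | O_WRONLY, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, buf, sizeof(buf)) != sizeof(buf)) {
		perror("write");
		return 1;
	}
	close(fd);

	if (stat(path, &st) < 0 || st.st_size != sizeof(buf)) {
		fprintf(stderr, "FAIL: unexpected size\n");
		return 1;
	}
	unlink(path);
	printf("PASS\n");
	return 0;
}

A wrapper script only needs to run the binary and record the exit
status, which is about as minimal as a submission can get.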

> 
> - Some tests will not lend themselves to autorunning.  Networking, of
> course.  Plus tests which involve removable media, unplugging devices,
> etc.

I must admit that I'm biased towards automation -- and I think most
testers are.  If we had to perform thousands of test cases manually,
we'd quit our jobs.  It's hard enough writing automated tests that
don't fail.  It's when the morning-after report from our automated
runs says a test you wrote a year ago is failing -- that's when it's
most exciting.

You're right though, there is a place for semi-automated tests (tests
which need our help).  We need to decide how they fit into this project.
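
One way they might fit: the test wrapper pauses, tells the tester
exactly what to do, and waits for confirmation before carrying on
with the automated checks.  A sketch of the idea; the prompt text and
the exit-code-2-means-skipped convention are inventions of mine, not
project policy:

/* manual_step.c -- pause-and-prompt helper for semi-automated tests.
 * Exit 0 = continue/pass, 2 = skipped (a made-up convention). */
#include <stdio.h>
#include <string.h>

int main(void)
{
	char answer[16];

	printf("MANUAL STEP: remove the floppy disk, then press Enter\n");
	printf("(or type 'skip' to skip this test): ");
	fflush(stdout);

	if (!fgets(answer, sizeof(answer), stdin))
		return 2;	/* no tester present -- skip */
	if (!strncmp(answer, "skip", 4))
		return 2;

	/* ... the automated part of the test resumes here ... */
	printf("continuing with automated checks\n");
	return 0;
}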

> 
> So the environmental wrappers (which are really the core of this
> project) will need to be able to stop and prompt for some external
> activity.  And the means of presenting the documentation for the test
> setup should be thought out up-front.
> 
> - Is there prior-art here which can be learnt from?  How do the irix QA
> guys test stuff before it goes out?

That's us.  We hope there is some prior art.  That's why we've
released some code.  It's simple, but there is a reason for that: it's
a good start for illustrating how simple writing tests can be.

> 
> - A lot of kernel tests require that the kernel be patched to run the
> test (whitebox testing).  spinlock debugging, slab poisoning, assertion
> checking (what assertions?), etc.  Is it your intent to take things this
> far?

Speaking for myself, I think we might want to avoid this.  It's great
stuff, but I'd just as soon leave this level of testing to the kernel
developers.

> 
> - The test execution environment will need to be able to not run some
> tests, based upon the tester's architecture and config (eg: I don't have
> USB..).

This is an issue the project needs to find an answer for.  We have
tools (SGI internal) that help us out with this.  However, we have an
advantage in that our hardware and OS are all ours: much of the
variability (the number of hardware/software configs) is constrained,
so our tools may not be enough for Linux.
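
One simple approach that doesn't depend on our internal tools: each
test probes for what it needs at startup and exits with a distinct
'untested' status when the feature isn't there.  A sketch, using
Andrew's USB example; the /proc probe and the exit code are
assumptions on my part:

/* usb_guard.c -- runtime config check: skip rather than fail when
 * the tester's machine lacks the feature under test. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* crude feature probe: does this kernel expose USB at all? */
	if (access("/proc/bus/usb", F_OK) != 0) {
		printf("SKIP: no USB support detected\n");
		return 2;	/* "untested", distinct from pass/fail */
	}

	/* ... the real USB test cases would run here ... */
	printf("PASS\n");
	return 0;
}

The test driver can then count skips separately from failures when it
rolls up results for a given architecture and config.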

--aaron

-- 
Aaron Laffin
laffinaw@acm.org or a.laffin@computer.org