Linux Test Project

Linux™ Test Plan

  1. Overview
  2. Goals
  3. Test Approach and Methodology
  4. LTP Coverage Summary
  5. Test Environment
  6. Test Case Details
  7. Utilization Tools
  8. Framework/Harness Tools
  9. Other Document Links
  10. Document Owners
  11. Appendix A
  12. Appendix B
  1. Overview

    The Test Plan provides the strategy for engaging the Open Source Community (OSC) in the delivery of test suites focusing on customer workloads. The Test team is responsible for delivering documented stress runs and reliability statements, and for verifying defect fixes through regression testing. Reliability, workload, and stress testware will be executed in automated 72-hour runs. The Linux Test Project is hosted on SourceForge; all source code, patches, bugs, support requests, etc., should be submitted through either the mailing list or the SourceForge applications. We are seeking comments from the Open Source Community (OSC) and internal reviewers.

  2. Goals

    1. Software Goals
      1. Mix application workloads for stress testing.
      2. General and targeted benchmarks.
      3. API verification programs.
      4. Regression testing.
    2. Report Goals
      1. Deliver documented stress runs and reliability statements.
      2. Reports providing the foundation for benchmark/performance analysis.
      3. API analysis.
      4. Regression run analysis.
    3. Time-line Goals
      1. Initial Posting of test suites 2Q-01.
      2. 4Q-01 Posting of Comprehensive test suite to the OSC.
      3. 4Q-01 Document all test cases and provide categorized index.
  3. Test Approach and Methodology

    1. Sanity Testing
      1. Test the reliability and robustness of the kernel.
      2. Should be finished in less than 24 hours.
      3. Used to test each kernel release and pre-patch (assuming automated kernel builds, installs and reboots).
      4. Comprised mostly of "quick hit" tests that are self verifying.
      5. Includes any fast running regression tests.
      6. Results are limited to PASS/FAIL with additional details for FAIL.
    2. Stress Testing
      1. More comprehensive tests at a higher load.
      2. Used to shake out locking issues, hangs and corruption.
      3. Should be finished in less than six hours.
      4. Includes more I/O and network testing and longer running regression tests.
      5. Results are limited to PASS/FAIL with additional details for FAIL.
    3. Endurance Runs
      1. Testing that lasts from six hours to weeks.
      2. Used to verify system operations over long periods of varying loads.
    4. SGI Performance Runs
      1. Test runs designed to check performance of specific applications and workloads.
      2. Test duration depends on application.
      3. Results are used to compare against previous results on a quantitative basis.
    5. LTP Prep Summary -- Define and implement test infrastructure & requirements.

      1H01 2H01 1H02 2H02
      Develop/port harness-independent test cases. In Progress In Progress
      SourceForge CVS will be used as the test case source repository. In Progress In Progress
      Develop/port hardware-neutral test cases. In Progress In Progress
      Scalable Test Platform hardware configured (hardware for STP).
      Build automated testing implementation (software for STP).
      Full runs of available tests on standard kernels (on the STP machines).
      Joint development on kernel stress test, performance and regression suites.
      Joint development on application level performance analysis suites.
      Analyze benchmark tools for coverage. In Progress In Progress
      Identify Linux 2.2 Test products/suites currently available in the OSC. In Progress In Progress
      Investigate Globalization and I18N.
      Identify new Linux 2.4 functions that need to be tested.
      Initial distros are Red Hat®, SuSE®, Turbolinux®. In Progress In Progress
      Automated Reliability runs (predictable, repeatable).
      Interoperability between n(2.4), n-1(2.2), n+1(2.4+) kernels.
      All test cases should be executed from 72 to 168 hours.
      Kernel runtime should be balanced between subsystems (i.e. filesystem, threads, memory).
  4. LTP Coverage Summary

    1. Identify strategic Linux components that should be stress tested.

      1H01 2H01 1H02 2H02
      Memory management (VMM, Paging space) In Progress In Progress
      Scheduler (process, stack handler) In Progress
      Pthreads In Progress In Progress
      Varying filesystems sizes In Progress In Progress
      Varying directory structures In Progress
      Compression In Progress
      /proc In Progress In Progress
      Remote commands In Progress In Progress
      File Transfers In Progress In Progress
      Sockets In Progress In Progress
      Multicast In Progress In Progress
      NFS In Progress In Progress
      Ethernet 10/100
      Gigabit Ethernet
      Token Ring 4/16
      Wireless technology
      Cdrom In Progress In Progress
      Floppy In Progress In Progress
      IDE In Progress In Progress
      SCSI / RAID drives In Progress In Progress
  5. Test Environment

    1. IBM®
      System Type Number Of Systems (06/2001, 12/2001)
      UP 60 100
      2-way 3 3
      4-way 2 4
      8-way 3 9
      12-way - -
      16-way - -
    2. SGI™
      System Type Number Of Systems (Flex Time)
      UP 5
      2-way 3 (i686), 5 (ia64)
      4-way 2 (i686)
    3. OSDL
      System Type Full Time Flex Time
      2-way 2 49
      4-way 2 4
      8-way 2 3
      16-way 0 1 (32-bit), 1 (64-bit)
  6. Test Case Details

    1. Kernel Tests
      Test Case Name Description
      pth_str01 Threads Testing
      pthcli Socket client and server
      openfile Test file and thread limits
      mtest01 Memory stress tests
      pth_str02 Pthreads library routines
      pth_str03 Creates a tree of threads, does calculations, and returns the result to the parent
      1. SGI Test Case Details

        These are simple system call tests that verify functionality. They should be run as an unprivileged user unless otherwise specified. More complex workloads should be created by combining these tests to test for kernel locking bottlenecks.

        Test Case Name Description
        access01.c Basic test for access(2) using F_OK, R_OK, W_OK, and X_OK arguments.
        access03.c EFAULT error testing for access(2).
        alarm01.c Basic test for alarm(2).
        alarm02.c Boundary Value Test for alarm(2).
        alarm03.c Alarm(2) cleared by a fork.
        asyncio02.c Write/close flushes data to the file.
        chdir02.c Basic test for chdir(2).
        chmod02.c Basic test for chmod(2).
        chown01.c Basic test for chown(2).
        close08.c Basic test for close(2).
        creat09.c Basic test for creat(2) using 0700 argument.
        dup01.c Basic test for dup(2).
        dup02.c Negative test for dup(2) with bad fd.
        dup03.c Negative test for dup(2) (too many fds).
        dup04.c Basic test for dup(2) of a system pipe descriptor.
        dup05.c Basic test for dup(2) of a named pipe descriptor.
        execl01.c Basic test for execl(2).
        execle01.c Basic test for execle(2).
        execlp01.c Basic test for execlp(2).
        execv01.c Basic test for execv(2).
        execve01.c Basic test for execve(2).
        execvp01.c Basic test for execvp(2).
        f00f.c This is a simple test for handling of the pentium f00f bug.
        fchmod01.c Basic test for fchmod(2).
        fchown01.c Basic test for fchown(2).
        fcntl02.c Basic test for fcntl(2) using F_DUPFD argument.
        fcntl03.c Basic test for fcntl(2) using F_GETFD argument.
        fcntl04.c Basic test for fcntl(2) using F_GETFL argument.
        fcntl05.c Basic test for fcntl(2) using F_GETLK argument.
        fcntl07.c Close-On-Exec functional test.
        fcntl07B.c Close-On-Exec of named pipe functional test.
        fcntl08.c Basic test for fcntl(2) using F_SETFL argument.
        fcntl09.c Basic test for fcntl(2) using F_SETLK argument.
        fcntl10.c Basic test for fcntl(2) using F_SETLKW argument.
        fork01.c Basic test for fork(2).
        fork04.c Child inheritance of Environment Variables after fork().
        fork05.c Make sure LDT is propagated correctly.
        fpathconf01.c Basic test for fpathconf(2).
        fstat01.c Basic test for fstat(2).
        fstatfs01.c Basic test for fstatfs(2).
        fsync01.c Basic test for fsync(2).
        getegid01.c Basic test for getegid(2).
        geteuid01.c Basic test for geteuid(2).
        getgid01.c Basic test for getgid(2).
        getgroups01.c Getgroups system call critical test.
        getgroups02.c Basic test for getgroups(2).
        gethostid01.c Basic test for gethostid(2).
        gethostname01.c Basic test for gethostname(2).
        getpgrp01.c Basic test for getpgrp(2).
        getpid01.c Basic test for getpid(2).
        getppid01.c Basic test for getppid(2).
        getuid01.c Basic test for getuid(2).
        kill02.c Sending a signal to processes with the same process group ID.
        kill09.c Basic test for kill(2).
        link02.c Basic test for link(2).
        link03.c Multi links tests.
        link04.c Negative test cases for link(2).
        link05.c Multi links (EMLINK) negative test.
        lseek01.c Basic test for lseek(2).
        lseek02.c Negative test for lseek(2).
        lseek03.c Negative test for lseek(2) whence.
        lseek04.c Negative test for lseek(2) of a fifo.
        lseek05.c Negative test for lseek(2) of a pipe.
        lstat02.c Basic test for lstat(2).
        mkdir01.c Basic errno test for mkdir(2).
        mkdir08.c Basic test for mkdir(2).
        mknod01.c Basic test for mknod(2).
        mmap001.c Tests mmapping a big file and writing it once.
        nice05.c Basic test for nice(2).
        open03.c Basic test for open(2).
        pathconf01.c Basic test for pathconf(2).
        pause01.c Basic test for pause(2).
        pipeio.c Creates children, which write to a pipe, while the parent reads everything off, checking for data errors.
        read01.c Basic test for the read(2) system call.
        readlink02.c Basic test for the readlink(2) system call.
        rename02.c Basic test for the rename(2) system call.
        rmdir04.c Basic test for the rmdir(2) system call.
        rmdir05.c Verify that rmdir(2) returns a value of -1 and sets errno to indicate the error.
        sbrk01.c Basic test for the sbrk(2) system call.
        select01.c Basic test for the select(2) system call to a fd of regular file with no I/O and small timeout.
        select02.c Basic test for the select(2) system call to fd of system pipe with no I/O and small timeout.
        select03.c Basic test for the select(2) system call to fd of a named-pipe (FIFO).
        setgid01.c Basic test for the setgid(2) system call.
        setgroups01.c Basic test for the setgroups(2) system call.
        setpgid01.c Basic test for setpgid(2) system call.
        setpgrp01.c Basic test for the setpgrp(2) system call.
        setregid01.c Basic test for the setregid(2) system call.
        setreuid01.c Basic test for the setreuid(2) system call.
        setuid01.c Basic test for the setuid(2) system call.
        setuid02.c Basic test for the setuid(2) system call as root.
        sighold02.c Basic test for the sighold(2) system call.
        signal03.c Boundary value and other invalid value checking of signal setup and signal sending.
        sigrelse01.c Basic test for the sigrelse(2) system call.
        stat05.c Basic test for the stat(2) system call.
        statfs01.c Basic test for the statfs(2) system call.
        symlink01.c Test of various file function calls, such as rename or open, on a symbolic link file.
        symlink02.c Basic test for the symlink(2) system call.
        sync01.c Basic test for the sync(2) system call.
        time01.c Basic test for the time(2) system call.
        times01.c Basic test for the times(2) system call.
        ulimit01.c Basic test for the ulimit(2) system call.
        umask01.c Basic test for the umask(2) system call.
        uname01.c Basic test for the uname(2) system call.
        unlink05.c Basic test for the unlink(2) system call.
        unlink06.c Test for the unlink(2) system call of a FIFO.
        unlink07.c Tests for error handling for the unlink(2) system call.
        unlink08.c More tests for error handling for the unlink(2) system call.
        wait02.c Basic test for wait(2) system call.
        write01.c Basic test for write(2) system call.
      2. Application Development Environment

        Identify any requirements for commands test cases that are commonly used in application development.

        Test Case Name Description
        ar01 Tests the basic functionality of the 'ar' command.
        ld01 Tests the basic functionality of the 'ld' command.
        ldd01 Tests the basic functionality of the 'ldd' command.
        nm01 Tests the basic functionality of the 'nm' command.
        objdump01 Tests the basic functionality of the 'objdump' command.
        size01 Tests the basic functionality of the 'size' command.
    2. Network Tests

      Identify requirements for network tests for remote procedure calls, network file systems, multicast, and various network commands.

      1. TCP/IP
        Test Case Name Type Description
        Arp Network tests arp command with flags -a -t -d.
        Finger Network tests finger command with flags -b -f -h -i -l -m -p -q -s -w and bad flags, starts and stops finger daemon.
        ftp File Transfer ftp files from one host to another and tests commands get, put.
        Host Network stresses a host using forward and reverse name lookup.
        mc_cmds Multicast tests commands ifconfig, netstat, and ping.
        mc_commo Multicast uses IP Multicast to send UDP datagrams between nodes on a subnet using a specific Multicast group and a specific port.
        mc_member Multicast tests options for level IPPROTO_IP Service interface to allow the list of host group memberships to be updated.
        mc_opts Multicast tests that options are set to the default for level IPPROTO_IP Service Interface and that they can be set and read properly.
        Netstat Network tests netstat command with flags: -s -p.
        Ping Network tests ping command with flags -c -s(8 16 32 64 128 256 512 10 24 2048 4064).
        Rcp File Transfer rcp from one host to another and backwards.
        Rdist File Transfer uses rdist command to transfer files of various sizes from one host to another.
        rlogin Remote Commands rlogin from one host to another, performs ls -l /etc/hosts, and verifies output.
        Rsh Remote Commands rsh from one host to another performs ls -l /etc/hosts, and verifies output.
        Rwho Remote Commands verifies that rwho and ruptime are working in a subnet.
        Sendfile File Transfer copies files from server to clients using "sendfile" subroutine.
        perf_lan Sockets a C program called echoes that writes and listens over a TCP socket.
        Echo Sockets a C program called pingpong that creates a raw socket and spawns a child process to send echo ICMP packets to the other host.
        tcpdump Network Must be developed in house - similar to 'iptrace' command. May model test after the test for 'iptrace'.
        telnet Remote Commands tests telnet, performs ls -l /etc/hosts and verifies output.
      2. NFS
        Test Case Name Type Description
        nfs01 Stress Stress opening, writing and closing of files on an NFS server
        nfslock01 Client Test NFS file lock
        nfs02 Client Copies files & creates directories over an NFS-mounted file system
      3. RPC
        Test Case Name Description
        rpc01 Test rpc using file transfers between a client & server.
        rpcinfo01 Tests the basic functionality of the 'rpcinfo' command.
        rup01 Tests the basic functionality of the 'rup' command.
        rusers01 Tests the basic functionality of the 'rusers' command.
    3. I/O Tests
      Test Case Name Description
      cdrom_cd This task creates multiple threads to read from a device and verifies the device by reading the data and checking checksums.
      stress_floppy This task exercises floppy drives using tar, backup, cpio, dd and dos commands.
    4. Filesystem Tests
      1. Misc Tests
        Test Case Name Description
        Bonnie_str This is the bonnie file system test. It stresses filesystems by writing a large file in several different ways and then reading that file back by several different methods. It is an open source filesystem bottleneck benchmark run in a loop.
        fs_inod Filesystem inode test. Tests hard & symbolic links.
      2. XFS Tests

        The XFS test suite is part of the XFS CVS tree and is freely available. It is tied tightly to XFS development and has its own test harness that handles mounting devices. Most tests verify output against recorded outputs.

    5. DB Test Case Details -- DB workload
      • Concurrent sessions, each of which executes a sequence of database applications
      • Load database tables
        • Row deletes
        • Create/insert rows
        • Selects/queries against database table
    6. Kernel Stress
      • Disk cache
      • Page cache
      • Buffer cache
      • Virtual memory
      • Scheduling
      • SMP
      • Read/write spin locks
    7. File systems
      • ext2
      • JFS
      • ReiserFS
      • XFS
    8. Applications
      • WebSphere®
      • Stock brokerage
      • Financials
      • Telco
  7. Utilization Tools

    The following describes suggestions for setting up systems for testing in order to maintain a consistent, stable test environment. Also described below are some suggestions for tools which may be used to provide statistics for analyzing the stress placed on a system during testing.

    Paging space requirements:
    (Memory Size * Memory Factor) = Ideal Paging Space Size
    Memory Size   Memory Factor   Category   Ratio
    128MB - 1GB   1               SMALL      1:3
    1GB - 4GB     2               MID        1:2
    4GB+          3               BIG        1:1

    A machine with 2 GB of RAM falls in the MID category, so it should have 2 * 2 = 4 GB of swap space.

    sar - Collects, reports, and saves system activity information
    Part of the Sysstat utilities:

    uptime - Shows how long the system has been up, and load averages
    Standard utility

    vmstat - Reports virtual memory statistics
    Standard utility

    top - display top CPU processes
    Standard on most systems

    tcpdump - dump network traffic
    Standard on most systems

    Ethereal - GUI Network Protocol Analyzer

    Performance Co-Pilot - System monitoring with recording

  8. Framework/Harness Tools

    Features Software Testing Automation Framework (STAF) Dejagnu SGI PAN Test Environment Toolkit (TET 3) Tivoli Wizdom Test Driver (WTD)
    Open Source yes yes yes yes no
    Test harness no yes yes yes yes
    Framework yes yes no yes no
    Available on Linux yes yes yes yes no
    Single front end for all tests yes yes yes yes yes
    Widely used in the Open Source Community no yes no yes no
    GNU Public License LGPL & IPL yes yes no no
    Run test cases independently from test harness yes yes yes yes yes
    Terminate testing if a single test case fails n/a no yes n/a yes
    Assign run-levels for each test case n/a no no n/a yes
    Provides a layer of abstraction which allows you to write tests that are portable to any host or target where a program must be tested yes yes yes yes yes
    Ability to run x number of copies of a particular test case. n/a no yes yes no
    Randomly selects test cases to run. n/a no yes yes no
    Allows testing of interactive programs yes yes no n/a no
    Allows testing of batch oriented programs yes yes yes yes yes
    Cross platform testing tool yes yes yes yes yes
    POSIX™ 1003.3 conforming n/a yes no yes no
    Debugging facility yes yes yes n/a no
    Remote Host Testing yes yes no yes no
    Uniform output format yes yes yes yes yes
    Test Results Database n/a no no no yes
    Provides a method for maintaining conf and runtime data, with support for dynamic updating. (Variable) yes no no n/a no
    Start, stop and query processes (Process) yes no yes n/a no
    Network-enabled IPC mechanism (Queue) yes no no n/a no
    Provides network-enabled named event and mutex semaphores (Semaphore) yes no no n/a no
    Allows you to get and copy files across the network (File System) yes yes no n/a no
    Logging Facility yes yes yes yes yes
    Allows a test case to publish its current running execution status for others to read. (Status Monitoring) yes no no n/a no
    Manage exclusive access to pools of elements. (Resource Pool) yes no no n/a no
    Publish/subscribe notification system (Events) yes no no n/a no
    Ability to manage different test harnesses yes no no n/a no
    NLS Data Handling yes n/a n/a n/a n/a
  9. Other Document Links

  10. Document Owners

  11. Appendix A

    1. LTC-Test Execution Plan

      The Execution Plan is divided into three main phases, with each phase adding content and complexity.

      Phase Dates Content
      Phase 1 4Q 2001 LTP, Webservers, Databases, Load balancer, DOTS, Whatzilla
      Phase 2 1Q 2002 Phase 1 plus High availability, EVMS
      Phase 3 2Q 2002 Phase 2 plus Telco
      1. Phase 1

        Phases Planned Start Planned Completion Comments
        Prep 2001/08/01 2001/10/30 Database and Webserver farm setup, testcase creation, define workloads and lab setup
        Focus Test 2001/08/15 2001/11/15
        Integration Test 2001/10/01 2001/11/30
        Stress Test 2001/12/15 Post results
      2. Phase 2

        Phases Planned Start Planned Completion Comments
        Test Development 2001/12/01 2002/02/01 HA configuration, IPv6, SCTP setup, testcase creation, define workloads and lab setup
        Focus Test 2002/02/01 2002/02/15
        Integration Test 2002/02/01 2002/02/28
        Stress Test 2002/03/01 2002/03/30 Post results
      3. Phase 3

        Phases Planned Start Planned Completion Comments
        Test Development 2002/03/01 2002/05/01 Telco setup
        Focus Test 2002/04/01 2002/05/01
        Integration Test 2002/05/01 2002/05/31
        Stress Test 2002/06/01 2002/06/30 Post results

        Phase 1 Red Hat 7.1 SuSE 7.2 TurboLinux 7.1 UP SMP 2-8 Way LTP Filesystems Databases Webservers
        Focus Test -> 24 hours Complete Complete Complete Complete Complete Complete
        Integration Test -> 48 hours In Progress
        Stress Test -> 96 hours
    2. Glossary

      1. Focus Test

        The goal of focus test is to end up with a stable and reliable product. Focus test comprises six distinct areas of testing:
        1. Regression test: testing existing functions on new Linux builds.
        2. Early test: early testing of Linux packages as they become available in the OSC.
        3. New Function test: testing new functions on new Linux builds.
        4. New Test/Applications: testing of new tests and applications to stabilize them.
        5. Stop Ship Defects: testing is performed to continue focus on stop ship defects.
        6. Pervasive or Critical Defects: testing to uncover any pervasive or critical defects.
      2. Integration

        Describe how the product is to be tested as a whole system. The purpose of integration test is to attempt to create customer scenarios which view the system as a whole. The Integration test may also include other products. Stress Test (see below) is part of Integration Test. Tests need to focus on integrating components of Linux such as: Kernel, I/O, TCP/IP, filesystems, DB and WAS applications.

      3. Stress

        Describe the stress testing to be done on the product. The stress test should verify the robustness of the product during high system usage. Include the target length of the test and the acceptable breaking point.

  12. Appendix B - Test Development

    1. Create test plans to cover committed LTC line items, new Linux packages and general regression activities.
    2. Identify a planned set of testcases for each test phase.
    3. Define team responsibilities for testing, test case development, install methods and hardware utilization.
    4. Promote cross training activity for team members, establishing back-ups and promoting skills development.
    5. Develop Distro build acceptance test scenarios for new Linux builds.
    6. Provide a high-level test plan and team lower-level test plans or technical details.
    7. Define schedules that project when all planned tests will be executed.
    8. Identify hardware requirements for current and future test efforts.
    9. System configurations will range from high-end to mid-range to low-end.
    10. Configure and setup required HW / network topologies.
    11. Incorporate ad-hoc "what if" testing.
    12. Develop automated scenarios.
    13. Automate reviewing system error logs.
    14. Automate test execution launch/status.
    15. Automate monitoring of stress levels such as: CPU utilization, Memory, I/O, Network.
    16. Adjust testcases and scenarios to achieve high stress levels.
    17. Incorporate changes from the previous lessons learned meetings. The test team will hold lessons learned meetings to provide improvements and changes for future test efforts.
    18. Continue product level focus activity in the early stages of testing and then move to a more integrated systems test environment through brand ship test.
    19. Perform negative tests.
    20. Define remote debug strategy.

IBM, WebSphere, and their logos are registered trademarks of International Business Machines. Linux is a trademark of Linus Torvalds. SGI is a trademark of Silicon Graphics, Inc. POSIX is a trademark of the Institute of Electrical and Electronic Engineers (IEEE). Red Hat and its logo are registered trademarks of Red Hat, Inc. SuSE and its logo are registered trademarks of SuSE AG. Turbolinux and its logo are trademarks of Turbolinux, Inc.

Other company, product, and service names may be trademarks or service marks of others.
End of Document December 6, 2001  Last modified on: June 15, 2006 - 16:37:40 UTC.