
Linux™ Test Plan

1.0   Overview

2.0   Goals

3.0   Test Approach and Methodology

4.0   LTP Coverage Summary

5.0   Test Environment

6.0   Test Case Details

7.0   Utilization Tools

8.0   Framework/Harness Tools

9.0   Other Document Links

10.0  Document Owners


1.0   Overview

The Test Plan provides the strategy for engaging the Open Source Community (OSC) in the delivery of test suites focused on customer workloads. The Test team is responsible for delivering documented stress runs and reliability statements, and for verifying defect fixes through regression testing. Reliability, workload, and stress testware will be executed in automated 72-hour runs. The Linux Test Project is headquartered at ltp.sourceforge.net. All source code, patches, bugs, support requests, etc. should be submitted through either the mailing list or the SourceForge applications. We are seeking comments from the Open Source Community (OSC) and internal reviewers.

2.0   Goals

Software Goals
  • Mix application workloads for stress testing
  • General and targeted benchmarks
  • API verification programs
  • Regression testing
Report Goals
  • Deliver documented stress runs and reliability statements
  • Reports providing the foundation for benchmark/performance analysis
  • API analysis
  • Regression run analysis
Time-line Goals
  • 2Q01: Initial posting of test suites
  • 4Q01: Posting of comprehensive test suite to the OSC
  • 4Q01: Document all test cases and provide a categorized index

3.0   Test Approach and Methodology

Sanity Testing
  • Test the reliability and robustness of the kernel
  • Should be finished in less than 24 hours
  • Used to test each kernel release and pre-patch (assuming automated kernel builds, installs and reboots)
  • Comprised mostly of "quick hit" tests that are self verifying
  • Includes any fast running regression tests
  • Results are limited to PASS/FAIL with additional details for FAIL
Stress Testing
  • More comprehensive tests at a higher load
  • Used to shake out locking issues, hangs and corruption
  • Should be finished in less than six hours
  • Includes more I/O and network testing and longer running regression tests
  • Results are limited to PASS/FAIL with additional details for FAIL
Endurance Runs
  • Testing that lasts from six hours to weeks
  • Used to verify system operations over long periods of varying loads
SGI Performance Runs
  • Test runs designed to check performance of specific applications and workloads
  • Test duration depends on application
  • Results are used to compare against previous results on a quantitative basis

3.2   LTP Prep Summary -- Define and implement test infrastructure & requirements

  • Develop/port harness-independent test cases (in progress)
  • CVS on SourceForge will be used as the test case source repository (in progress)
  • Develop/port hardware-neutral test cases (in progress)
  • Scalable Test Platform hardware configured (hardware for STP)
  • Build automated testing implementation (software for STP)
  • Full runs of available tests on standard kernels (on the STP machines)
  • Joint development on kernel stress test, performance and regression suites
  • Joint development on application-level performance analysis suites
  • Analyze benchmark tools for coverage (in progress)
  • Identify Linux 2.2 test products/suites currently available in the OSC (in progress)
  • Investigate globalization and I18N
  • Identify new Linux 2.4 functions that need to be tested
  • Initial distros are Red Hat®, SuSE®, Turbolinux® (in progress)
  • Automated reliability runs (predictable, repeatable)
  • Interoperability between n (2.4), n-1 (2.2), n+1 (2.4+) kernels
  • All test cases should be executed from 72 to 168 hours
  • Kernel runtime should be balanced between subsystems (i.e. filesystem, threads, memory)

4.0   LTP Coverage Summary -- Identify strategic Linux components that should be stress tested

  • Memory management (VMM, paging space) (in progress)
  • Scheduler (process, stack handler) (in progress)
  • Pthreads (in progress)
  • Varying filesystem sizes (in progress)
  • Varying directory structures (in progress)
  • Compression (in progress)
  • /proc (in progress)
  • Remote commands (in progress)
  • File transfers (in progress)
  • Sockets (in progress)
  • Multicast (in progress)
  • NFS (in progress)
  • Ethernet 10/100
  • Gigabit Ethernet
  • Token Ring 4/16
  • Wireless technology
  • CD-ROM (in progress)
  • Floppy (in progress)
  • IDE (in progress)
  • SCSI / RAID drives (in progress)

5.0   Test Environment

5.1   IBM®
System Type    Number of systems

5.2   SGI™
System Type    Number of systems (flex time)
2-way          3 i686, 5 ia64
4-way          2 i686

5.3   OSDL
System Type    Full time    Flex time
16-way         0            1 32-bit, 1 64-bit

6.0   Test Case Details

6.1   Kernel Tests
Test case Name    Description
pth_str01         Threads testing
pthcli            Socket client and server
openfile          Tests file and thread limits
mtest01           Memory stress tests
pth_str02         Pthreads library routines
pth_str03         Creates a tree of threads, does calculations, and returns the result to the parent

6.1.1   SGI Test case Details
These are simple system call tests that verify functionality. They should be run as an unprivileged user unless otherwise specified. More complex workloads should be created by combining these tests to probe for kernel locking bottlenecks.
Test case Name    Description
access01.c        Basic test for access(2) using F_OK, R_OK, W_OK, and X_OK arguments
access03.c        EFAULT error testing for access(2)
alarm01.c         Basic test for alarm(2)
alarm02.c         Boundary value test for alarm(2)
alarm03.c         alarm(2) cleared by a fork
asyncio02.c       Write/close flushes data to the file
chdir02.c         Basic test for chdir(2)
chmod02.c         Basic test for chmod(2)
chown01.c         Basic test for chown(2)
close08.c         Basic test for close(2)
creat09.c         Basic test for creat(2) using the 0700 argument
dup01.c           Basic test for dup(2)
dup02.c           Negative test for dup(2) with a bad fd
dup03.c           Negative test for dup(2) (too many fds)
dup04.c           Basic test for dup(2) of a system pipe descriptor
dup05.c           Basic test for dup(2) of a named pipe descriptor
execl01.c         Basic test for execl(2)
execle01.c        Basic test for execle(2)
execlp01.c        Basic test for execlp(2)
execv01.c         Basic test for execv(2)
execve01.c        Basic test for execve(2)
execvp01.c        Basic test for execvp(2)
f00f.c            Simple test for handling of the Pentium f00f bug
fchmod01.c        Basic test for fchmod(2)
fchown01.c        Basic test for fchown(2)
fcntl02.c         Basic test for fcntl(2) using the F_DUPFD argument
fcntl03.c         Basic test for fcntl(2) using the F_GETFD argument
fcntl04.c         Basic test for fcntl(2) using the F_GETFL argument
fcntl05.c         Basic test for fcntl(2) using the F_GETLK argument
fcntl07.c         Close-on-exec functional test
fcntl07B.c        Close-on-exec of named pipe functional test
fcntl08.c         Basic test for fcntl(2) using the F_SETFL argument
fcntl09.c         Basic test for fcntl(2) using the F_SETLK argument
fcntl10.c         Basic test for fcntl(2) using the F_SETLKW argument
fork01.c          Basic test for fork(2)
fork04.c          Child inheritance of environment variables after fork()
fork05.c          Makes sure the LDT is propagated correctly
fpathconf01.c     Basic test for fpathconf(2)
fstat01.c         Basic test for fstat(2)
fstatfs01.c       Basic test for fstatfs(2)
fsync01.c         Basic test for fsync(2)
getegid01.c       Basic test for getegid(2)
geteuid01.c       Basic test for geteuid(2)
getgid01.c        Basic test for getgid(2)
getgroups01.c     getgroups(2) system call critical test
getgroups02.c     Basic test for getgroups(2)
gethostid01.c     Basic test for gethostid(2)
gethostname01.c   Basic test for gethostname(2)
getpgrp01.c       Basic test for getpgrp(2)
getpid01.c        Basic test for getpid(2)
getppid01.c       Basic test for getppid(2)
getuid01.c        Basic test for getuid(2)
kill02.c          Sending a signal to processes with the same process group ID
kill09.c          Basic test for kill(2)
link02.c          Basic test for link(2)
link03.c          Multiple links test
link04.c          Negative test cases for link(2)
link05.c          Multiple links (EMLINK) negative test
lseek01.c         Basic test for lseek(2)
lseek02.c         Negative test for lseek(2)
lseek03.c         Negative test for lseek(2) whence
lseek04.c         Negative test for lseek(2) of a FIFO
lseek05.c         Negative test for lseek(2) of a pipe
lstat02.c         Basic test for lstat(2)
mkdir01.c         Basic errno test for mkdir(2)
mkdir08.c         Basic test for mkdir(2)
mknod01.c         Basic test for mknod(2)
mmap001.c         Tests mmapping a big file and writing it once
nice05.c          Basic test for nice(2)
open03.c          Basic test for open(2)
pathconf01.c      Basic test for pathconf(2)
pause01.c         Basic test for pause(2)
pipeio.c          Creates children that write to a pipe while the parent reads everything off, checking for data errors
read01.c          Basic test for the read(2) system call
readlink02.c      Basic test for the readlink(2) system call
rename02.c        Basic test for the rename(2) system call
rmdir04.c         Basic test for the rmdir(2) system call
rmdir05.c         Verifies that rmdir(2) returns -1 and sets errno to indicate the error
sbrk01.c          Basic test for the sbrk(2) system call
select01.c        Basic test for the select(2) system call on an fd of a regular file with no I/O and a small timeout
select02.c        Basic test for the select(2) system call on an fd of a system pipe with no I/O and a small timeout
select03.c        Basic test for the select(2) system call on an fd of a named pipe (FIFO)
setgid01.c        Basic test for the setgid(2) system call
setgroups01.c     Basic test for the setgroups(2) system call
setpgid01.c       Basic test for the setpgid(2) system call
setpgrp01.c       Basic test for the setpgrp(2) system call
setregid01.c      Basic test for the setregid(2) system call
setreuid01.c      Basic test for the setreuid(2) system call
setuid01.c        Basic test for the setuid(2) system call
setuid02.c        Basic test for the setuid(2) system call as root
sighold02.c       Basic test for the sighold(2) system call
signal03.c        Boundary value and other invalid value checking of signal setup and signal sending
sigrelse01.c      Basic test for the sigrelse(2) system call
stat05.c          Basic test for the stat(2) system call
statfs01.c        Basic test for the statfs(2) system call
symlink01.c       Tests various file function calls, such as rename or open, on a symbolic link file
symlink02.c       Basic test for the symlink(2) system call
sync01.c          Basic test for the sync(2) system call
time01.c          Basic test for the time(2) system call
times01.c         Basic test for the times(2) system call
ulimit01.c        Basic test for the ulimit(2) system call
umask01.c         Basic test for the umask(2) system call
uname01.c         Basic test for the uname(2) system call
unlink05.c        Basic test for the unlink(2) system call
unlink06.c        Test for the unlink(2) system call on a FIFO
unlink07.c        Tests error handling for the unlink(2) system call
unlink08.c        More tests for error handling for the unlink(2) system call
wait02.c          Basic test for the wait(2) system call
write01.c         Basic test for the write(2) system call

6.1.2   Application Development Environment
Identify requirements for test cases covering commands that are commonly used in application development.
Test case Name    Description
ar01              Tests the basic functionality of the 'ar' command
ld01              Tests the basic functionality of the 'ld' command
ldd01             Tests the basic functionality of the 'ldd' command
nm01              Tests the basic functionality of the 'nm' command
objdump01         Tests the basic functionality of the 'objdump' command
size01            Tests the basic functionality of the 'size' command

6.2   Network Tests
Identify requirements for network tests for remote procedure calls, network file systems, multicast, and various network commands.

6.2.1   TCP/IP

Test case Name    Type             Description
arp               Network          Tests the arp command with flags -a -t -d
finger            Network          Tests the finger command with flags -b -f -h -i -l -m -p -q -s -w and bad flags; starts and stops the finger daemon
ftp               File Transfer    FTPs files from one host to another and tests the get and put commands
host              Network          Stresses a host using forward and reverse name lookups
mc_cmds           Multicast        Tests the ifconfig, netstat, and ping commands
mc_commo          Multicast        Uses IP multicast to send UDP datagrams between nodes on a subnet using a specific multicast group and a specific port
mc_member         Multicast        Tests options for the level IPPROTO_IP service interface that allow the list of host group memberships to be updated
mc_opts           Multicast        Tests that options for the level IPPROTO_IP service interface are set to the default and can be set and read properly
netstat           Network          Tests the netstat command with flags -s -p
ping              Network          Tests the ping command with flags -c and -s (8 16 32 64 128 256 512 1024 2048 4064)
rcp               File Transfer    rcp from one host to another and back
rdist             File Transfer    Uses the rdist command to transfer files of various sizes from one host to another
rlogin            Remote Commands  rlogin from one host to another, performs ls -l /etc/hosts, and verifies the output
rsh               Remote Commands  rsh from one host to another, performs ls -l /etc/hosts, and verifies the output
rwho              Remote Commands  Verifies that rwho and ruptime are working in a subnet
sendfile          File Transfer    Copies files from server to clients using the "sendfile" subroutine
perf_lan          Sockets          A C program called echoes writes and listens over a TCP socket
echo              Sockets          A C program called pingpong creates a raw socket and spawns a child process to send ICMP echo packets to the other host
tcpdump           Network          Must be developed in house; similar to the 'iptrace' command, and may be modeled after the iptrace test
telnet            Remote Commands  Tests telnet; performs ls -l /etc/hosts and verifies the output

6.2.2   NFS

Test case Name    Type      Description
nfs01             Stress    Stresses opening, writing, and closing of files on an NFS server
nfslock01         Client    Tests NFS file locking
nfs02             Client    Copies files and creates directories over an NFS-mounted file system

6.2.3   RPC

Test case Name    Description
rpc01             Tests RPC using file transfers between a client and server
rpcinfo01         Tests the basic functionality of the 'rpcinfo' command
rup01             Tests the basic functionality of the 'rup' command
rusers01          Tests the basic functionality of the 'rusers' command

6.3   I/O Tests

Test case Name    Description
cdrom_cd          Creates multiple threads that read from a device and test it by reading and checking sums
stress_floppy     Exercises floppy drives using the tar, backup, cpio, dd, and dos commands

6.4   Filesystem Tests

6.4.1   Misc Tests

Test case Name    Description
Bonnie_str        The bonnie filesystem test: stresses filesystems by writing a large file in several different ways, then reading it back by several different methods; an open source filesystem bottleneck benchmark run in a loop
fs_inod           Filesystem inode test
linktest.pl       Tests hard and symbolic links

6.4.2   XFS Tests
The XFS test suite is part of the XFS CVS tree and is freely available. It is tied tightly to XFS development and has its own test harness that handles mounting devices. Most tests verify output against recorded outputs.

6.5   DB Test case Details -- DB workload
  • Concurrent sessions, each of which executes a sequence of database applications
  • Load database tables
      - Row deletes
      - Create/insert rows
      - Selects/queries against database tables

6.6   Kernel stress
  • Disk cache
  • Page cache
  • Buffer cache
  • Virtual memory
  • Scheduling
  • SMP
  • Read/write spin locks

6.7   File systems
  • ext2
  • JFS
  • ReiserFS
  • XFS

6.9   Applications
  • WebSphere®
  • Stock brokerage
  • Financials
  • Telco

7.0   Utilization Tools

    The following are suggestions for setting up systems for testing in order to maintain a consistent, stable test environment, along with some suggested tools that may be used to provide statistics for analyzing the stress placed on a system during testing.

    Paging space requirements
    (Memory Size × Memory Factor) = Ideal Paging Space Size

    Memory Size    Category     Memory:Paging Ratio
    128MB - 1GB    1 (SMALL)    1:3
    1GB - 4GB      2 (MID)      1:2
    4GB+           3 (BIG)      1:1

    A machine with 2 GB of RAM falls in the MID category, so it should have 2 * 2 = 4 GB of swap space.

    sar - Collects, reports, and saves system activity information
    Part of the Sysstat utilities http://perso.wanadoo.fr/sebastien.godard/

    uptime - Shows how long the system has been up, and load averages
    Standard utility

    vmstat - Reports virtual memory statistics
    Standard utility

    top - display top CPU processes
    Standard on most systems

    tcpdump - dump network traffic
    Standard on most systems

    Ethereal - GUI Network Protocol Analyzer

    Performance Co-Pilot - System monitoring with recording

    8.0 Framework/Harness Tools

    Columns: STAF (Software Testing Automation Framework) / Dejagnu / SGI PAN / TET 3 (Test Environment Toolkit) / WTD (Tivoli Wizdom Test Driver)

    Open Source: yes / yes / yes / yes / no
    Test harness: no / yes / yes / yes / yes
    Available on Linux: yes / yes / yes / yes / no
    Single front end for all tests: yes / yes / yes / yes / yes
    Widely used in the Open Source Community: no / yes / no / yes / no
    GNU Public License: LGPL & IPL / yes / yes / no / no
    Run test cases independently from test harness: yes / yes / yes / yes / yes
    Terminate testing if a single test case fails: n/a / no / yes / n/a / yes
    Assign run-levels for each test case: n/a / no / no / n/a / yes
    Provides a layer of abstraction which allows you to write tests that are portable to any host or target where a program must be tested: yes / yes / yes / yes / yes
    Ability to run x number of copies of a particular test case: n/a / no / yes / yes / no
    Randomly selects test cases to run: n/a / no / yes / yes / no
    Allows testing of interactive programs: yes / yes / no / n/a / no
    Allows testing of batch-oriented programs: yes / yes / yes / yes / yes
    Cross-platform testing tool: yes / yes / yes / yes / yes
    POSIX™ 1003.3 conforming: n/a / yes / no / yes / no
    Debugging facility: yes / yes / yes / n/a / no
    Remote host testing: yes / yes / no / yes / no
    Uniform output format: yes / yes / yes / yes / yes
    Test results database: n/a / no / no / no / yes
    Provides a method for maintaining conf and runtime data, with support for dynamic updating (Variable): yes / no / no / n/a / no
    Start, stop, and query processes (Process): yes / no / yes / n/a / no
    Network-enabled IPC mechanism (Queue): yes / no / no / n/a / no
    Provides network-enabled named event and mutex semaphores (Semaphore): yes / no / no / n/a / no
    Allows you to get and copy files across the network (File System): yes / yes / no / n/a / no
    Logging facility: yes / yes / yes / yes / yes
    Allows a test case to publish its current running execution status for others to read (Status Monitoring): yes / no / no / n/a / no
    Manages exclusive access to pools of elements (Resource Pool): yes / no / no / n/a / no
    Publish/subscribe notification system (Events): yes / no / no / n/a / no
    Ability to manage different test harnesses: yes / no / no / n/a / no
    NLS data handling: yes / n/a / n/a / n/a / n/a

    9.0 Other Document Links

    Linux Test Project: http://sourceforge.net/projects/ltp/
    Connectathon - http://www.connectathon.org/
    Internet Software Consortium - http://www.isc.org/

    10.0 Document Owners

    Document Owner: Linda Scott lindajs@us.ibm.com
    Document Owner: Timothy Witham wookie@osdlab.org
    Document Owner: Nathan Straz nstraz@sgi.com

    Appendix A

    LTC-Test Execution Plan

    The Execution Plan is divided into three main phases, each adding content and complexity.

    Phase 1    4Q 2001    LTP, Webservers, Databases, Load balancer, DOTS, Whatzilla
    Phase 2    1Q 2002    Phase 1 plus High availability, EVMS
    Phase 3    2Q 2002    Phase 2 plus Telco

    Phase 1

    Phase             Planned Start    Planned Completion    Comments
    Prep              08/01/01         10/30/01              Database and Webserver farm setup, testcase creation, define workloads and lab setup
    Focus Test        08/15/01         11/15/01
    Integration Test  10/01/01         11/30/01
    Stress Test       12/15/01                               Post results

    Phase 2

    Phase             Planned Start    Planned Completion    Comments
    Test Development  12/01/01         02/01/02              HA configuration, IPv6, SCTP setup, testcase creation, define workloads and lab setup
    Focus Test        02/01/02         02/15/02
    Integration Test  02/01/02         02/28/02
    Stress Test       03/01/02         03/30/02              Post results

    Phase 3

    Phase             Planned Start    Planned Completion    Comments
    Test Development  03/01/02         05/01/02              Telco setup
    Focus Test        04/01/02         05/01/02
    Integration Test  05/01/02         05/31/02
    Stress Test       06/01/02         06/30/02              Post results

    Phase 1 status across Red Hat 7.1, SuSE 7.2, TurboLinux 7.1, UP, SMP 2-8 way, LTP, Filesystems, Databases, and Webservers:

    Focus Test -> 24 hours         complete (Red Hat 7.1, SuSE 7.2, TurboLinux 7.1, UP, SMP 2-8 way, LTP)
    Integration Test -> 48 hours   in progress
    Stress Test -> 96 hours        not started


    Focus Test

    The goal of focus test is to end up with a stable and reliable product. Focus test comprises six distinct areas of testing:

    1. Regression test: testing existing functions on new Linux builds
    2. Early test: early testing of Linux packages as they become available in the OSC
    3. New Function test: testing new functions on new Linux builds
    4. New Tests/Applications: testing of new tests and applications to stabilize them
    5. Stop Ship Defects: testing performed to maintain focus on stop-ship defects
    6. Pervasive or Critical Defects: testing to catch any pervasive or critical defects

    Integration Test

    Describe how the product is to be tested as a whole system. The purpose of integration test is to create customer scenarios that view the system as a whole. The integration test may also include other products. Stress Test (see below) is part of Integration Test. Tests need to focus on integrating components of Linux such as the kernel, I/O, TCP/IP, filesystems, DB, and WAS applications.


    Stress Test

    Describe the stress testing to be done on the product. The stress test should verify the robustness of the product during high system usage. Include the target length of the test and the acceptable breaking point.

    Appendix B - Test Development

    1. Create test plans to cover committed LTC line items, new Linux packages and general regression activities
    2. Identify a planned set of testcases for each test phase
    3. Define team responsibilities for testing, test case development, install methods, and hardware utilization
    4. Promote cross training activity for team members, establishing back-ups and promoting skills development
    5. Develop distro build acceptance test scenarios for new Linux builds
    6. Provide a high-level test plan and team lower-level test plans or technical details
    7. Define schedules that project when all planned tests will be executed
    8. Identify hardware requirements for current and future test efforts
    9. System configurations will range from high-end through mid-range to low-end
    10. Configure and setup required HW / network topologies
    11. Incorporate ad-hoc "what if" testing
    12. Develop automated scenarios
    13. Automate reviewing system error logs
    14. Automate test execution launch/status
    15. Automate monitoring of stress levels such as: CPU utilization, Memory, I/O, Network
    16. Adjust testcases and scenarios to achieve high stress levels
    17. Incorporate changes from the previous lessons learned meetings. The test team will hold lessons learned meetings to provide improvements and changes for future test efforts
    18. Continue product level focus activity in the early stages of testing and then move to a more integrated systems test environment through brand ship test
    19. Perform negative tests
    20. Define remote debug strategy

    IBM, WebSphere, and their logos are registered trademarks of International Business Machines Corporation. Linux is a trademark of Linus Torvalds. SGI is a trademark of Silicon Graphics, Inc. POSIX is a trademark of the Institute of Electrical and Electronics Engineers (IEEE). Red Hat and its logo are registered trademarks of Red Hat, Inc. SuSE and its logo are registered trademarks of SuSE AG. Turbolinux and its logo are trademarks of Turbolinux, Inc.

    Other company, product, and service names may be trademarks or service marks of others.
    End of Document December 6, 2001