
Performance Testing

By Doris Smith, 2014-04-25 18:00

Contents

    1 Performance Testing Sub-Genres
        1.1 Load Testing
        1.2 Stress Testing
        1.3 Endurance Testing (Soak Testing)
        1.4 Spike Testing
        1.5 Configuration Testing
        1.6 Isolation Testing
    2 Setting performance goals
        2.1 Concurrency / Throughput
        2.2 Server response time
        2.3 Render response time
        2.4 Performance specifications
        2.5 Questions to ask
    3 Pre-requisites for Performance Testing
        3.1 Test conditions
        3.2 Timing
    4 Tools
    5 Myths of Performance Testing
    6 Technology
    7 Tasks to undertake
    8 Methodology
        8.1 Performance Testing Web Applications Methodology
    9 See also
    10 External links

    Performance Testing Sub-Genres

    Load Testing

    Load testing is the simplest form of performance testing. A load test is usually conducted to understand the behavior of the application under a specific expected load. This load can be the expected concurrent number of users on the application performing a specific number of transactions within the set duration. This test reveals the response times of all the important business-critical transactions. If the database, application server, etc. are also monitored, then this simple test can itself point towards bottlenecks in the application software.
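
    As an illustration, the following is a minimal sketch of a load-test driver in Python. The target URL, the user count, and the use of the third-party requests library are assumptions for the example, not features of any particular tool.

        # Minimal load-test sketch: N concurrent "users", each issuing a fixed
        # number of HTTP GET requests and recording response times.
        import statistics
        import threading
        import time

        import requests  # third-party HTTP client, assumed available

        TARGET_URL = "http://localhost:8080/"   # hypothetical application under test
        CONCURRENT_USERS = 10
        REQUESTS_PER_USER = 20

        response_times = []
        lock = threading.Lock()

        def user_session():
            for _ in range(REQUESTS_PER_USER):
                start = time.perf_counter()
                requests.get(TARGET_URL, timeout=30)
                elapsed = time.perf_counter() - start
                with lock:
                    response_times.append(elapsed)

        threads = [threading.Thread(target=user_session) for _ in range(CONCURRENT_USERS)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        print("requests:", len(response_times))
        print("mean response time: %.3f s" % statistics.mean(response_times))

    A real load test would also stagger user start times and report per-transaction statistics, but the structure is the same: scripted sessions run concurrently while response times are collected.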

    Stress Testing

    Stress testing is normally used to understand the upper limits of capacity within the application landscape. This kind of test is done to determine the application's robustness in terms of extreme load and helps application administrators to determine if the application will perform sufficiently if the current load goes well above the expected maximum.

    Endurance Testing (Soak Testing)

    Endurance testing is usually done to determine whether the application can sustain the continuous expected load. During endurance tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked, is performance degradation: the test should confirm that throughput and/or response times after a long period of sustained activity are as good as, or better than, at the beginning of the test.
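
    As a sketch of the monitoring side of a soak test, the snippet below samples the resident memory of the application process at intervals so that steady growth (a possible leak) shows up over the run. The process ID, the sampling interval, and the use of the third-party psutil library are assumptions for illustration.

        # Sample the resident set size (RSS) of the application process during a
        # soak test; steadily increasing values suggest a memory leak.
        import time

        import psutil  # third-party process/utility library, assumed available

        APP_PID = 12345          # hypothetical PID of the application under test
        SAMPLE_INTERVAL_S = 60   # sample once per minute
        SAMPLES = 10             # shortened run for illustration

        proc = psutil.Process(APP_PID)
        for i in range(SAMPLES):
            rss_mb = proc.memory_info().rss / (1024 * 1024)
            print("sample %d: RSS = %.1f MiB" % (i, rss_mb))
            time.sleep(SAMPLE_INTERVAL_S)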

    Spike Testing

    Spike testing, as the name suggests, is done by spiking the number of users and understanding the behavior of the application: whether performance will suffer, the application will fail, or it will be able to handle dramatic changes in load.

    Configuration Testing

    Configuration testing is another variation on traditional performance testing. Rather than testing for performance from the perspective of load, you are testing the effects of configuration changes in the application landscape on application performance and behaviour. A common example would be experimenting with different methods of load-balancing.

    Isolation Testing

    Isolation testing is not unique to performance testing but is a term used to describe repeating a test execution that resulted in an application problem. It is often used to isolate and confirm the fault domain.

    Setting performance goals

    Performance testing can serve different purposes.

    - It can demonstrate that the system meets performance criteria.

    - It can compare two systems to find which performs better.

    - It can measure which parts of the system or workload cause the system to perform badly.

    Many performance tests are undertaken without due consideration to the setting of realistic performance goals. The first question from a business perspective should always be "why are we performance testing?". These considerations are part of the business case of the testing.

    Performance goals will differ depending on the application technology and purpose; however, they should always include some of the following:

    Concurrency / Throughput

    If an application identifies end-users by some form of login procedure then a concurrency goal is highly desirable. By definition this is the largest number of concurrent application users that the application is expected to support at any given moment. The work-flow of your scripted transaction may impact true application concurrency, especially if the iterative part contains the login and logout activity.

    If your application has no concept of end-users then your performance goal is likely to be based on a maximum throughput or transaction rate. A common example would be casual browsing of a web site such as Wikipedia.
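
    To make these goals concrete, the sketch below applies the well-known relationship between throughput, concurrency, and session duration (Little's Law, N = X * R). The figures used are illustrative assumptions, not measurements from any system.

        # Little's Law: average concurrency N = arrival rate X * average time in system R.
        # Useful for translating a throughput goal into a concurrency goal (or back).

        transactions_per_hour = 36_000      # assumed throughput goal
        avg_session_duration_s = 120        # assumed time a user spends per visit

        arrival_rate_per_s = transactions_per_hour / 3600.0
        expected_concurrency = arrival_rate_per_s * avg_session_duration_s

        print("arrival rate: %.1f users/s" % arrival_rate_per_s)
        print("expected concurrent users: %.0f" % expected_concurrency)
        # 36,000 visits/hour at roughly 2 minutes per visit works out to about
        # 1,200 concurrent users on average.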

    Server response time

    This refers to the time taken for one application node to respond to the request of another. A simple example would be an HTTP 'GET' request from a browser client to a web server. In terms of response time, this is what all load testing tools actually measure. It may be relevant to set server response time goals between all nodes of the application landscape.
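
    As a minimal illustration of measuring server response time for a single HTTP GET, the snippet below times one request from the client's point of view. The URL is a placeholder and the third-party requests library is an assumption.

        # Time a single HTTP GET from client to server (server response time as
        # seen by the client, excluding any rendering).
        import requests  # third-party HTTP client, assumed available

        TARGET_URL = "http://localhost:8080/"  # hypothetical endpoint under test

        response = requests.get(TARGET_URL, timeout=30)
        # requests records the time between sending the request and receiving the
        # response headers in response.elapsed.
        print("status:", response.status_code)
        print("server response time: %.3f s" % response.elapsed.total_seconds())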

    Render response time

    Render response time is difficult for load testing tools to deal with, as they generally have no concept of what happens within a node apart from recognising a period of time where there is no activity 'on the wire'. To measure render response time it is generally necessary to include functional test scripts as part of the performance test scenario, which is a feature not offered by many load testing tools.
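
    One way to approximate render response time is to drive a real browser from a functional-test tool. The sketch below uses the third-party Selenium WebDriver as an assumed example, with a placeholder URL and a locally installed browser driver.

        # Approximate render response time by timing a full page load in a real
        # browser driven by Selenium WebDriver (third-party, assumed available).
        import time

        from selenium import webdriver

        TARGET_URL = "http://localhost:8080/"  # hypothetical page under test

        driver = webdriver.Chrome()            # assumes a local Chrome driver is available
        try:
            start = time.perf_counter()
            driver.get(TARGET_URL)             # blocks until the page load event fires
            render_time = time.perf_counter() - start
            print("page load (incl. render): %.3f s" % render_time)
        finally:
            driver.quit()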

    Performance specifications

    It is critical to detail performance specifications (requirements) and document them in any performance test plan. Ideally, this is done during the requirements development phase of any system development project, prior to any design effort. See Performance Engineering for more details.

    However, performance testing is frequently not performed against a specification; i.e. no one will have expressed what the maximum acceptable response time for a given population of users should be. Performance testing is frequently used as part of the process of performance profile tuning. The idea is to identify the "weakest link" – there is inevitably a part of the system which, if it is made to respond faster, will result in the overall system running faster. It is sometimes a difficult task to identify which part of the system represents this critical path, and some test tools include (or can have add-ons that provide) instrumentation that runs on the server (agents) and reports transaction times, database access times, network overhead, and other server monitors, which can be analyzed together with the raw performance statistics. Without such instrumentation one might have to have someone crouched over Windows Task Manager at the server to see how much CPU load the performance tests are generating (assuming a Windows system is under test).

    There is an apocryphal story of a company that spent a large amount optimizing their software without having performed a proper analysis of the problem. They ended up rewriting the system's 'idle loop', where they had found the system spent most of its time, but even having the most efficient idle loop in the world obviously didn't improve overall performance one iota!

    Performance testing can be performed across the web, and even done in different parts of the country, since it is known that the response times of the internet itself vary regionally. It can also be done in-house, although routers would then need to be configured to introduce the lag that would typically occur on public networks. Loads should be introduced to the system from realistic points. For example, if 50% of a system's user base will be accessing the system via a 56K modem connection and the other half over a T1, then the load injectors (computers that simulate real users) should either inject load over the same connections (ideal) or simulate the network latency of such connections, following the same user profile.

    It is always helpful to have a statement of the likely peak numbers of users that might be expected to use the system at peak times. If there can also be a statement of what constitutes the maximum allowable 95th percentile response time, then an injector configuration could be used to test whether the proposed system met that specification.
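
    For illustration, a 95th percentile response time can be checked against such a goal as in the short sketch below; the sample data and the 2-second threshold are assumptions.

        # Check a 95th percentile response-time goal against measured samples.
        import math

        response_times_s = [0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.1, 1.3, 1.8, 2.4]  # assumed samples
        GOAL_P95_S = 2.0                                                        # assumed specification

        ordered = sorted(response_times_s)
        # Nearest-rank method: the value below which 95% of observations fall.
        rank = math.ceil(0.95 * len(ordered))
        p95 = ordered[rank - 1]

        print("95th percentile: %.2f s" % p95)
        print("meets goal:", p95 <= GOAL_P95_S)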

    Questions to ask

    Performance specifications should ask the following questions, at a minimum:

    - In detail, what is the performance test scope? What subsystems, interfaces, components, etc. are in and out of scope for this test?

    - For the user interfaces (UIs) involved, how many concurrent users are expected for each (specify peak vs. nominal)?

    - What does the target system (hardware) look like (specify all server and network appliance configurations)?

    - What is the Application Workload Mix of each application component? (For example: 20% login, 40% search, 30% item select, 10% checkout.)

    - What is the System Workload Mix? Multiple workloads may be simulated in a single performance test. (For example: 30% Workload A, 20% Workload B, 50% Workload C; see the sketch after this list.)

    - What are the time requirements for any/all backend batch processes (specify peak vs. nominal)?
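
    As referenced above, the following sketch shows one way a workload mix might be turned into a per-script virtual-user allocation for a test scenario. The mixes and the total user count are illustrative assumptions.

        # Allocate a total virtual-user population across an assumed workload mix,
        # then split one workload's users across its transaction mix.
        TOTAL_VIRTUAL_USERS = 1000

        system_workload_mix = {"Workload A": 0.30, "Workload B": 0.20, "Workload C": 0.50}
        workload_a_transaction_mix = {"login": 0.20, "search": 0.40, "item select": 0.30, "checkout": 0.10}

        users_per_workload = {name: round(TOTAL_VIRTUAL_USERS * share)
                              for name, share in system_workload_mix.items()}
        users_per_transaction_a = {name: round(users_per_workload["Workload A"] * share)
                                   for name, share in workload_a_transaction_mix.items()}

        print("virtual users per workload:", users_per_workload)
        print("Workload A users per transaction:", users_per_transaction_a)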

    Pre-requisites for Performance Testing

    A stable build of the application, resembling the production environment as closely as possible, is required.

    The performance testing environment should not be combined with the User Acceptance Testing (UAT) or development environment: if UAT, integration, or other tests are running in the same environment, the results obtained from performance testing may not be reliable. As a best practice it is always advisable to have a separate performance testing environment resembling the production environment as much as possible.

    Test conditions

    In performance testing, it is often crucial (and often difficult to arrange) for the test conditions to be similar to the expected actual use. This is, however, not entirely possible in practice, because the workloads of production systems have a random nature; while test workloads do their best to mimic what may happen in the production environment, it is impossible to exactly replicate this workload variability except in the simplest systems.

    Loosely-coupled architectural implementations (e.g. SOA) have created additional complexities for performance testing. Enterprise services or assets that share a common infrastructure or platform require coordinated performance testing, with all consumers creating production-like transaction volumes and load on shared infrastructures or platforms, to truly replicate production-like states. Due to the complexity and the financial and time requirements of this activity, some organizations now employ tools that can monitor and create production-like conditions (also referred to as "noise") in their performance test environments (PTE) to understand capacity and resource requirements and to verify/validate quality attributes.

    Timing

    It is critical to the cost performance of a new system that performance test efforts begin at the inception of the development project and extend through to deployment. The later a performance defect is detected, the higher the cost of remediation. This is true of functional testing, but even more so of performance testing, due to the end-to-end nature of its scope.

    Tools

    In the diagnostic case, software engineers use tools such as profilers to measure which parts of a device or piece of software contribute most to the poor performance, or to establish throughput levels (and thresholds) for maintaining acceptable response time.
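
    As a simple illustration of the diagnostic case, Python's built-in cProfile module can show which functions dominate execution time. The workload function here is a made-up example, not part of any real application.

        # Profile a hypothetical workload to see which parts contribute most to the
        # elapsed time (cProfile and pstats are part of the Python standard library).
        import cProfile
        import pstats


        def slow_lookup(n):
            # Deliberately inefficient: O(n) membership tests against a list.
            data = list(range(n))
            return sum(1 for i in range(n) if i in data)


        def workload():
            return slow_lookup(2000)


        profiler = cProfile.Profile()
        profiler.enable()
        workload()
        profiler.disable()

        # Print the functions with the highest cumulative time.
        pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)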

    Myths of Performance Testing

    Some of the most common myths are given below.

    1. Performance Testing is done to break the system.

    Stress testing is done to understand the break point of the system. Normal load testing, by contrast, is generally done to understand the behavior of the application under the expected user load. Other requirements, such as an expected spike in load or continued load for an extended period of time, would call for spike, endurance (soak), or stress testing.

    2. Performance Testing should only be done after System Integration Testing.

    Although this is mostly the norm in the industry, performance testing can also be done while the initial development of the application is taking place. This approach is known as Early Performance Testing. It ensures a holistic development of the application with the performance parameters in mind, so the likelihood of finding a performance bug just before the release of the application, and the cost involved in rectifying such a bug, are greatly reduced.

    3. Performance Testing only involves the creation of scripts, and any application change requires only simple refactoring of the scripts.

    Performance testing in itself is an evolving science in the software industry. Scripting, although important, is only one component of performance testing. The major challenge for any performance tester is to determine the type of tests that need to be executed and to analyze the various performance counters to determine the performance bottleneck. The second part of the myth, that a change in the application results in only a little refactoring of the scripts, is also untrue: any change to the UI, especially in the Web protocol, can entail complete re-development of the scripts from scratch. This problem becomes bigger if the protocols involved include Web Services, Siebel, Citrix, or SAP.

    Technology

    Performance testing technology employs one or more PCs or Unix servers to act as injectors, each emulating the presence of a number of users and each running an automated sequence of interactions (recorded as a script, or as a series of scripts to emulate different types of user interaction) with the host whose performance is being tested. Usually, a separate PC acts as a test conductor, coordinating and gathering metrics from each of the injectors and collating performance data for reporting purposes. The usual sequence is to ramp up the load, starting with a small number of virtual users and increasing the number over a period to some maximum. The test result shows how the performance varies with the load, given as number of users versus response time. Various tools are available to perform such tests. Tools in this category usually execute a suite of tests which emulate real users against the system. Sometimes the results can reveal oddities, e.g. that while the average response time might be acceptable, there are outliers of a few key transactions that take considerably longer to complete, something that might be caused by inefficient database queries, pictures, etc.
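
    The ramp-up described above can be expressed as a simple schedule. The sketch below generates one under assumed values for the maximum user count, step size, and step duration.

        # Generate a simple ramp-up schedule for virtual users: start small and
        # increase in steps up to a maximum (values below are assumptions).
        MAX_VIRTUAL_USERS = 500
        STEP_USERS = 50          # add this many users per step
        STEP_DURATION_S = 300    # hold each step for five minutes


        def ramp_up_schedule(max_users, step_users, step_duration_s):
            """Yield (elapsed_seconds, active_users) pairs for each ramp step."""
            elapsed = 0
            users = step_users
            while users <= max_users:
                yield elapsed, users
                elapsed += step_duration_s
                users += step_users


        for elapsed_s, users in ramp_up_schedule(MAX_VIRTUAL_USERS, STEP_USERS, STEP_DURATION_S):
            print("t=%5d s: %d virtual users" % (elapsed_s, users))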

    Performance testing can be combined with stress testing, in order to see what happens when an acceptable load is exceeded: does the system crash? How long does it take to recover if a large load is reduced? Does it fail in a way that causes collateral damage?

    Analytical Performance Modeling is a method to model the behaviour of an application in a spreadsheet. The model is fed with measurements of transaction resource demands (CPU, disk I/O, LAN, WAN), weighted by the transaction mix (business transactions per hour). The weighted transaction resource demands are added up to obtain the hourly resource demands and divided by the hourly resource capacity to obtain the resource loads. Using the response time formula R = S / (1 - U), where R is response time, S is service time, and U is the resource load (utilization), response times can be calculated and calibrated against the results of the performance tests. Analytical performance modelling allows evaluation of design options and system sizing based on actual or anticipated business usage. It is therefore much faster and cheaper than performance testing, though it requires a thorough understanding of the hardware platforms.
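
    A minimal sketch of such a model in Python is shown below, using the R = S / (1 - U) formula from the text; the transaction mix, service demands, and capacity figures are invented illustrative numbers.

        # Tiny analytical model: weighted CPU demand, utilization, and the
        # response time formula R = S / (1 - U). All figures are assumptions.
        transactions_per_hour = {"search": 30_000, "checkout": 3_000}
        cpu_seconds_per_txn   = {"search": 0.050,  "checkout": 0.400}   # measured service demand S
        cpu_capacity_s_per_h  = 4 * 3600            # e.g. 4 CPU cores for one hour

        # Weighted hourly CPU demand across the transaction mix.
        cpu_demand_s = sum(transactions_per_hour[t] * cpu_seconds_per_txn[t]
                           for t in transactions_per_hour)
        utilization = cpu_demand_s / cpu_capacity_s_per_h   # U

        for txn, service_time in cpu_seconds_per_txn.items():
            response_time = service_time / (1 - utilization)  # R = S / (1 - U)
            print("%s: U=%.2f, R=%.3f s" % (txn, utilization, response_time))

    In practice each resource (CPU, disk I/O, LAN, WAN) gets its own demand and capacity row, and the calculated response times are calibrated against measured test results.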

    Tasks to undertake

    Tasks to perform such a test would include:

    - Decide whether to use internal or external resources to perform the tests, depending on in-house expertise (or lack thereof)

    - Gather or elicit performance requirements (specifications) from users and/or business analysts

    - Develop a high-level plan (or project charter), including requirements, resources, timelines and milestones

    - Develop a detailed performance test plan (including detailed scenarios and test cases, workloads, environment info, etc.)

    - Choose test tool(s)

    - Specify test data needed and charter effort (often overlooked, but often the death of a valid performance test)

    - Develop proof-of-concept scripts for each application/component under test, using chosen test tools and strategies

    - Develop a detailed performance test project plan, including all dependencies and associated timelines

    - Install and configure injectors/controller

    - Configure the test environment (ideally identical hardware to the production platform), router configuration, quiet network (we don't want results upset by other users), deployment of server instrumentation, database test sets developed, etc.

    - Execute tests, probably repeatedly (iteratively), in order to see whether any unaccounted-for factor might affect the results

    - Analyze the results: either pass/fail, or investigation of critical path and recommendation of corrective action

    Methodology

    Performance Testing Web Applications Methodology

    According to the Microsoft Developer Network, the Performance Testing Methodology consists of the following activities:

    - Activity 1. Identify the Test Environment. Identify the physical test environment and the production environment as well as the tools and resources available to the test team. The physical environment includes hardware, software, and network configurations. Having a thorough understanding of the entire test environment at the outset enables more efficient test design and planning and helps you identify testing challenges early in the project. In some situations, this process must be revisited periodically throughout the project's life cycle.

    - Activity 2. Identify Performance Acceptance Criteria. Identify the response time, throughput, and resource utilization goals and constraints. In general, response time is a user concern, throughput is a business concern, and resource utilization is a system concern. Additionally, identify project success criteria that may not be captured by those goals and constraints; for example, using performance tests to evaluate which combination of configuration settings will result in the most desirable performance characteristics.

    - Activity 3. Plan and Design Tests. Identify key scenarios, determine variability among representative users and how to simulate that variability, define test data, and establish metrics to be collected. Consolidate this information into one or more models of system usage to be implemented, executed, and analyzed.

    - Activity 4. Configure the Test Environment. Prepare the test environment, tools, and resources necessary to execute each strategy as features and components become available for test. Ensure that the test environment is instrumented for resource monitoring as necessary.

    - Activity 5. Implement the Test Design. Develop the performance tests in accordance with the test design.

    - Activity 6. Execute the Test. Run and monitor your tests. Validate the tests, test data, and results collection. Execute validated tests for analysis while monitoring the test and the test environment.

    - Activity 7. Analyze Results, Tune, and Retest. Analyze, consolidate, and share results data. Make a tuning change and retest to see whether there is an improvement or a degradation. Each improvement made will tend to return a smaller improvement than the previous one. When do you stop? When you reach a CPU bottleneck, the choices are then either to improve the code or to add more CPU.
