
Journal of Information, Information Technology, and Organizations, Volume 3, 2008

    Applying Importance-Performance Analysis to

    Information Systems: An Exploratory Case Study

    Sulaiman Ainin and Nur Haryati Hisham

    Faculty of Business and Accountancy, University of Malaya,

    Kuala Lumpur, Malaysia,

    ainins@um.edu.my; nurharyati@mesiniaga.com.my

    Abstract

    Using an end-user satisfaction survey, this study investigated how end-users in a Malaysian company perceived the importance of information systems attributes and whether the performance of those information systems met the users' expectations. It was discovered that the end-users were moderately satisfied with the company's IS performance and that there were gaps between importance and performance on all the systems-related attributes studied. The largest gaps pertained to the attributes Understanding of Systems, Documentation, and High Availability of Systems. The contribution of the study is in demonstrating the applicability of Importance-Performance Analysis to IS research.

    Keywords: information systems performance, information systems importance, end-user satisfaction

    Introduction

    Information technology (IT) and information systems (IS) continue to be intensely debated in today's corporate environments. As IT spending grows and IT becomes as commoditized and as essential as electricity and running water, many organizations continue to wonder whether their IT spending is justified (Farbey, Land, & Targett, 1992) and whether their IS functions are effective (Delone & McLean, 1992). IT and IS have evolved drastically from the heyday of mainframe computing to an environment that reaches out to the end-users. In the past, end-users interacted with systems via the system analyst or programmer, who translated the user requirements into system input in order to generate the output required for the end-users' analysis and decision-making process.

    Now, however, end-users are more directly involved with the systems as they navigate them, typically via an interactive user interface, thus assuming more responsibility for their own applications. Therefore, the ability to capture and measure end-user satisfaction serves as a tangible surrogate measure in determining the performance of the IS function and services, and of IS themselves (Ives, Olson, & Baroudi, 1983). Besides evaluating IS performance, it is also important to evaluate whether IS in an organization meet users' expectations. This paper aims to demonstrate the use of Importance-Performance Analysis (IPA) in evaluating IS.



    Research Framework

    Importance-Performance Analysis

    The Importance-Performance Analysis (IPA) framework was introduced by Martilla and James (1977) in marketing research in order to assist in understanding customer satisfaction as a function of both expectations concerning the significant attributes and judgments about their performance. Analyzed individually, importance and performance data may not be as meaningful as when both data sets are studied simultaneously (Graf, Hemmasi, & Nielsen, 1992). Hence, importance and performance data are plotted on a two-dimensional grid with importance on the y-axis and performance on the x-axis. The data are then mapped into four quadrants (Bacon, 2003; Martilla & James, 1977), as depicted in Figure 1. In quadrant 1, importance is high but performance is low. This quadrant is labeled "Concentrate Here", indicating that the existing systems require urgent corrective action and thus should be given top priority. Items in quadrant 2 show high importance and high performance, indicating that existing systems have strengths and should continue to be maintained. This category is labeled "Keep Up the Good Work". In contrast, items of low importance and low performance make up the third quadrant, labeled "Low Priority". While systems with such attribute ratings do not pose a threat, they may be candidates for discontinuation. Finally, quadrant 4 represents low importance and high performance, which suggests insignificant strengths and a possibility that the resources invested may be better diverted elsewhere.

    The four-quadrant matrix helps organizations to identify areas for improvement and actions for minimizing the gap between importance and performance. Extensions of importance-performance mapping include adding an upward-sloping 45-degree line to highlight regions of differing priorities. This is also known as the iso-rating or iso-priority line, where importance equals performance. Any attribute above the line must be given priority, whereas an attribute below the line indicates otherwise (Bacon, 2003).
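
    To make the quadrant logic and the iso-rating line concrete, here is a minimal Python sketch that classifies attributes into the four quadrants. This is an illustration, not code from the study: the 2.5 midpoint used to split the five-point scale is an assumption (matching the threshold used in the Findings section), and the sample scores are taken from Table 1.

        # Minimal sketch: classifying attributes into the four IPA quadrants.
        # Assumption: 2.5 is used as the midpoint of the five-point scale.
        MIDPOINT = 2.5

        def ipa_quadrant(performance: float, importance: float) -> str:
            """Return the IPA quadrant label for one attribute."""
            if importance >= MIDPOINT and performance < MIDPOINT:
                return "Q1: Concentrate Here"
            if importance >= MIDPOINT and performance >= MIDPOINT:
                return "Q2: Keep Up the Good Work"
            if importance < MIDPOINT and performance < MIDPOINT:
                return "Q3: Low Priority"
            return "Q4: Possible Overkill"

        # (performance, importance) pairs on the 1-5 scale, from Table 1.
        attributes = {
            "Documentation": (3.06, 4.14),
            "Degree of Training": (2.95, 3.95),
        }
        for name, (perf, imp) in attributes.items():
            above_iso_line = imp > perf  # importance exceeds performance
            print(f"{name}: {ipa_quadrant(perf, imp)}; needs priority: {above_iso_line}")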

    [Figure 1. Importance-Performance Map: a four-quadrant grid with Importance (low to high) on the y-axis and Performance (low to high) on the x-axis. Source: adapted from Bacon, 2003; Martilla & James, 1977.]


    IPA has been used in different research and practical domains (Eskildsen & Kristensen, 2006). Slack (1994) used it to study operations strategy, while Sampson and Showalter (1999) evaluated customers. Ford, Joseph, and Joseph (1999) used IPA in the area of marketing strategy. IPA has also been used in various industries, such as health (Dolinsky & Caputo, 1991; Skok, Kophamel, & Richardson, 2001), banking (Joseph, Allbrigth, Stone, Sekhon, & Tinson, 2005; Yeo, 2003), hotels (Weber, 2000), and tourism (Duke & Mont, 1996). IPA has also been applied in IS research. Magal and Levenburg (2005) employed IPA to study the motivations behind e-business strategies among small businesses, while Shaw, Delone, and Niederman (2002) used it to analyze end-user support. Skok and colleagues (2001) mapped out the IPA using the Delone and McLean IS success model.

    Delone and McLean (1992) used the construct of end-user satisfaction as a proxy measure of systems performance, for three reasons. Firstly, end-user satisfaction has high face validity, since it is hard to deny that an information system is successful when it is favored by its users. Secondly, the development of the Bailey and Pearson (1983) instrument and its derivatives provided a reliable tool for measuring user satisfaction, which also facilitates comparison among studies. Thirdly, the measurement of end-user satisfaction is relatively more popular since other measures have performed poorly.

    Methodology

    Based on the reasons mentioned above, this study adapted the measurement tool developed by Bailey and Pearson (1983) to evaluate end-user satisfaction. The measures used are System Quality, Information Quality, Service Quality, and System Use. System Quality typically focuses on the processing system itself, measuring its performance in terms of productivity, throughput, and resource utilization. Information Quality, on the other hand, focuses on measures involving the output of an IS, typically the reports produced. If users perceive the information generated to be inaccurate, outdated, or irrelevant, their dissatisfaction will eventually force them to seek alternatives and avoid using the information system altogether. System Use reflects the level of recipient consumption of the information system's output. It is hard to deny the success of a system that is used heavily, which explains its popularity as the IS measure of choice. Although certain researchers have striven to differentiate between voluntary and mandatory use, Delone and McLean (1992) noted that "no system use is totally mandatory". If a system is proven to perform poorly in all respects, management can always opt to discontinue it and seek alternatives. The inclusion of the Service Quality dimension recognizes the service element in the information systems function. Pitt, Watson, and Kavan (1995) proposed that Service Quality is an antecedent of System Use and User Satisfaction. Because IS now has an important service component, IS researchers may be interested in identifying the exact service components that can help boost User Satisfaction.

    New and modified measurement items were added to suit current issues pertinent to IS development and performance evaluation, specific to the Malaysian IT firm. The newly added factors include High Availability of Systems, which directly affects the employees' ability to be productive. Frequent downtime means idle time, since employees are not able to access the required data or the email they need for communication. Senior management has raised concerns on this issue, since it affects business continuity and can potentially compromise the company's competitive position. Thus, the IS department under study is exploring measures to provide continuous access to the company's IS, including clustering technology that provides real-time replication onto a backup system.

    The second new variable is Implementation of Latest Technology, which has a direct bearing on the company's productivity. Being a leading IT solutions provider, the company needs to keep abreast of the recent technologies and solutions available in the market. This is aided by strategic partnerships with various partners who provide thought leadership, access to their latest developments, and skill transfer in order to equip the employees with the relevant expertise. Additionally, in order to formulate better solutions for its customers, the company needs to have first-hand experience in using the proposed technologies. To realize this strategy, the company has embarked on a restructuring exercise that sees the formalization of an R&D think-tank and a deployment unit to facilitate rapid roll-out of the latest technologies for internal use.

    The third new factor included in this study is Ubiquitous Access to IT Applications, to enable productivity anytime and anywhere. The primary aim is to provide constant connectivity for employees who are often out of the office, enabling them to stay in touch with email and critical applications hosted within the organization's network.

    A convenience sampling method was used for data gathering. The targeted respondents were the organization's end-users of various IS (email, Internet browsing, and a host of office automation systems developed in-house). In order to expedite the data collection process, the survey was converted into an online format and deposited in the organization's Lotus Notes database. An email broadcast was sent out to explain the research objectives, and brief instructions on how to complete the survey were also included. 680 users accessed the survey and 163 questionnaires were completed, which is equivalent to a response rate of 24%. All completed questionnaires were automatically deposited into a Lotus Notes database, whose security settings were modified to allow anonymous responses in order to ensure complete anonymity.

    The questionnaire contained 20 attributes selected from the 39 items proposed by Bailey and Pearson (1983). The rationale for this was to reduce the complexity of the survey questionnaire. Also, again for the sake of simplification and to reduce the total time taken to provide a complete response, the survey questionnaire did not include any negative questions for verification purposes. It is foreseeable that including other factors might provide different insights or improve the internal reliability of the variables studied. However, performing a rigorous test to qualify the best set of variables would be time consuming and could cut into the time available for data collection.

    The questionnaire comprised two sections, each containing the selected 20 attributes (see Table 1). The first section asked respondents to evaluate the degree of importance placed upon each attribute. The second section required an evaluation of the actual performance of the same attributes. The respondents were prompted to use a five-point Likert scale (1 = low, 5 = high).
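
    As an illustration of how responses from the two sections can be scored, the following Python sketch computes the mean importance and performance rating per attribute from raw five-point responses. This is a sketch only: the raw response values and variable names are hypothetical, not data from the study.

        # Minimal sketch: per-attribute means from raw five-point Likert responses.
        from statistics import mean

        # Hypothetical raw responses (one list entry per respondent).
        responses = {
            "Documentation": {"importance": [4, 5, 4], "performance": [3, 3, 3]},
            "Degree of Training": {"importance": [4, 4, 4], "performance": [3, 3, 3]},
        }

        for attribute, ratings in responses.items():
            imp = mean(ratings["importance"])
            perf = mean(ratings["performance"])
            print(f"{attribute}: importance={imp:.2f}, performance={perf:.2f}")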

    Findings

    It was found that the respondents were moderately satisfied with the IS performance, as indicated by the mean scores (see Table 1). The mean scores in Table 1 indicate that the respondents were least satisfied with the Degree of Training provided to them (mean score of 2.95). In contrast, the respondents expressed the greatest satisfaction with the following attributes: Relationship with the Electronic Data Processing (EDP) Staff (mean score of 3.82), Response/Turnaround Time (mean score of 3.76), and Technical Competence of EDP Staff (mean score of 3.65). A detailed discussion on user satisfaction is presented in Hisham's (2006) paper.

    Table 1 indicates the respondents' perception that all attributes fell below their expectations or level of importance (note the negative values for the differences in mean scores). The degree of difference, however, varies. From the gaps between means, it is easy to see that the IS department needs to work harder to achieve better results on Understanding of Systems, Documentation, System Availability, Ubiquitous Access, and Training. These five items have the highest gap scores, indicating the biggest discrepancies between importance and performance. On the other hand, the items with the lowest gap scores suggest that the current performance levels are manageable, even if they are still below end-users' expectations. These include Relationship with the EDP Staff, Relevancy, Time Required for New Development, Feeling of Control, and Feeling of Participation.

    Table 1. IS Attributes, Means, and Gap Scores

    IS Attribute                              Performance   Importance   Difference
                                              Mean (X)      Mean (Y)     (X - Y)
    Understanding of Systems                  3.27          4.36         -1.09
    Documentation                             3.06          4.14         -1.08
    High Availability of Systems              3.10          4.16         -1.06
    Ubiquitous Access to Applications         3.05          4.07         -1.02
    Degree of Training                        2.95          3.95         -1.00
    Security of Data                          3.53          4.49         -0.96
    Integration of Systems                    3.09          3.99         -0.90
    Top Management Involvement                3.35          4.19         -0.84
    Flexibility of Systems                    3.28          4.07         -0.79
    Implementation of Latest Technology       3.03          3.81         -0.78
    Confidence in the Systems                 3.44          4.22         -0.78
    Attitude of the EDP Staff                 3.55          4.31         -0.76
    Job Effects                               3.51          4.24         -0.73
    Response/Turnaround Time                  3.76          4.46         -0.70
    Technical Competence of the EDP Staff     3.65          4.23         -0.58
    Feeling of Participation                  3.25          3.79         -0.54
    Feeling of Control                        3.27          3.74         -0.47
    Time Required for New Development         3.36          3.80         -0.44
    Relevancy                                 3.54          3.97         -0.43
    Relationship with the EDP Staff           3.82          4.05         -0.23
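
    To show how the gap scores in Table 1 follow from the two sets of means, the short Python sketch below computes and ranks them. The data are (performance, importance) pairs taken from Table 1 (a subset, for brevity); the gap formula X - Y is the one used in the table.

        # Minimal sketch: computing and ranking importance-performance gaps.
        # (performance mean X, importance mean Y) pairs from Table 1.
        scores = {
            "Understanding of Systems": (3.27, 4.36),
            "Documentation": (3.06, 4.14),
            "High Availability of Systems": (3.10, 4.16),
            "Relationship with the EDP Staff": (3.82, 4.05),
        }

        # Gap = X - Y; the more negative, the further performance trails importance.
        gaps = {name: x - y for name, (x, y) in scores.items()}
        for name, gap in sorted(gaps.items(), key=lambda item: item[1]):
            print(f"{name}: {gap:+.2f}")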

    Mean scores for both importance and performance data were plotted as coordinates on the Importance-Performance Map depicted in Figure 1. All the means were above 2.5, thus falling in the second quadrant of the IPA map (refer to Figure 1). To show the resulting positions on the map more clearly, only the plotted scores are shown (Figure 2). These indicate that both the performance and the importance of the systems are satisfactory. Therefore, the systems qualify for continued maintenance (cf. Bacon, 2003; Martilla & James, 1977).

    As discussed above, performance and importance scores provide more meaning when they are studied together. It is not enough to know which attribute was rated as the most important, or which one fared the best or worst. By mapping these scores against the iso-rating line, one gets an indication of whether focus and resources are being deployed adequately, insufficiently, or too lavishly. Figure 2 shows that all the scores were above the iso-rating line, thus indicating that importance exceeds performance. This implies that there are opportunities for improvement in this company.
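
    For readers who wish to reproduce a map like the one in Figure 2, the following Python sketch plots the Table 1 means together with the 45-degree iso-rating line. It assumes matplotlib is available; the subset of attributes and the styling are illustrative choices, not a reproduction of the paper's figure.

        # Minimal sketch: an importance-performance map with an iso-rating line.
        import matplotlib.pyplot as plt

        # (performance mean X, importance mean Y) pairs from Table 1 (subset).
        scores = {
            "Understanding of Systems": (3.27, 4.36),
            "Security of Data": (3.53, 4.49),
            "Relationship with the EDP Staff": (3.82, 4.05),
        }

        fig, ax = plt.subplots()
        for name, (x, y) in scores.items():
            ax.scatter(x, y)
            ax.annotate(name, (x, y), fontsize=8)

        # Iso-rating line: importance equals performance; points above it
        # are attributes whose importance exceeds their performance.
        ax.plot([1, 5], [1, 5], linestyle="--")
        ax.set_xlim(1, 5)
        ax.set_ylim(1, 5)
        ax.set_xlabel("Performance")
        ax.set_ylabel("Importance")
        ax.set_title("Importance-Performance Map")
        plt.show()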