
Journal of Information, Information Technology, and Organizations Volume 3, 2008

    Applying Importance-Performance Analysis to

    Information Systems: An Exploratory Case Study

    Sulaiman Ainin and Nur Haryati Hisham

    Faculty of Business and Accountancy, University of Malaya,

    Kuala Lumpur, Malaysia,

    ainins@um.edu.my; nurharyati@mesiniaga.com.my

    Abstract

    Using an end-user satisfaction survey, this study investigated how end-users in a Malaysian company perceived the importance of information systems attributes and whether the performance of those systems met their expectations. It was discovered that the end-users were moderately satisfied with the company's IS performance and that there were gaps between importance and performance on all the system-related attributes studied. The largest gaps pertained to the attributes Understanding of Systems, Documentation, and High Availability of Systems. The contribution of the study is in advancing Importance-Performance Analysis as applicable to IS research.

    Keywords: information systems performance, information systems importance, end-user satisfaction

    Introduction

    Information technology (IT) and information systems (IS) continue to be intensely debated in today's corporate environments. As IT spending grows and IT becomes as commoditized and essential as electricity and running water, many organizations continue to wonder whether their IT spending is justified (Farbey, Land, & Targett, 1992) and whether their IS functions are effective (Delone & McLean, 1992). IT and IS have evolved drastically from the heyday of mainframe computing to an environment that reaches out to the end-users. In the past, end-users interacted with systems via the system analyst or programmer, who translated the user requirements into system input in order to generate the output required for the end-users' analysis and decision-making process. Now, however, end-users are more directly involved with the systems as they navigate themselves, typically via an interactive user interface, thus assuming more responsibility for their own applications. Therefore, the ability to capture and measure end-user satisfaction serves as a tangible surrogate measure in determining the performance of the IS function and services, and of IS themselves (Ives, Olson, & Baroudi, 1983). Besides evaluating IS performance, it is also important to evaluate whether IS in an organization meet users' expectations. This paper aims to demonstrate the use of Importance-Performance Analysis (IPA) in evaluating IS.


    Accepting Editor: Bob Travica


    Research Framework

    Importance-Performance Analysis

    The Importance-Performance Analysis (IPA) framework was introduced by Martilla and James (1977) in marketing research in order to assist in understanding customer satisfaction as a function of both expectations concerning the significant attributes and judgments about their performance. Analyzed individually, importance and performance data may not be as meaningful as when both data sets are studied simultaneously (Graf, Hemmasi, & Nielsen, 1992). Hence, importance and performance data are plotted on a two-dimensional grid with importance on the y-axis and performance on the x-axis. The data are then mapped into four quadrants (Bacon, 2003; Martilla & James, 1977) as depicted in Figure 1. In quadrant 1, importance is high but performance is low. This quadrant is labeled "Concentrate Here", indicating that the existing systems require urgent corrective action and thus should be given top priority. Items in quadrant 2 indicate high importance and high performance, meaning that existing systems have strengths and should continue being maintained. This category is labeled "Keep Up the Good Work". In contrast, items of low importance and low performance make up the third quadrant, labeled "Low Priority". While systems with such attribute ratings do not pose a threat, they may be candidates for discontinuation. Finally, quadrant 4 represents low importance and high performance, which suggests insignificant strengths and a possibility that the resources invested may be better diverted elsewhere.

    The four-quadrant matrix helps organizations to identify areas for improvement and actions for minimizing the gap between importance and performance. Extensions of the importance-performance mapping include adding an upward-sloping 45-degree line to highlight regions of differing priorities. This is also known as the iso-rating or iso-priority line, where importance equals performance. Any attribute above the line (where importance exceeds performance) should be given priority, whereas an attribute below the line indicates otherwise (Bacon, 2003).
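
    To make the quadrant logic concrete, the following minimal Python sketch (ours, not part of the original study) classifies an attribute by its importance and performance scores and applies the iso-rating test. The crossover point of 3.0, the midpoint of a five-point scale, is an assumption; studies also split the axes at the grand means. The quadrant 4 label follows Martilla and James (1977).

        # Illustrative sketch, not from the paper: IPA quadrant assignment.
        # ASSUMPTION: axes split at the five-point scale midpoint (3.0);
        # grand means or medians are equally common choices.
        def ipa_quadrant(importance, performance, crossover=3.0):
            """Return the IPA quadrant label for one attribute."""
            if importance >= crossover and performance < crossover:
                return "Q1: Concentrate Here"
            if importance >= crossover and performance >= crossover:
                return "Q2: Keep Up the Good Work"
            if importance < crossover and performance < crossover:
                return "Q3: Low Priority"
            return "Q4: Possible Overkill"

        def needs_priority(importance, performance):
            """Iso-rating test: points above the 45-degree line
            (importance > performance) warrant priority attention."""
            return importance > performance

        print(ipa_quadrant(4.4, 3.3))    # Q2: Keep Up the Good Work
        print(needs_priority(4.4, 3.3))  # True: importance exceeds performance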

    [Figure 1 is a two-by-two map with Importance (low to high) on the y-axis and Performance (low to high) on the x-axis, dividing the plane into the four quadrants described above.]

    Figure 1. Importance-Performance Map
    (Source: adapted from Bacon, 2003; Martilla & James, 1977)


    IPA has been used in different research and practical domains (Eskildsen & Kristensen, 2006). Slack (1994) used it to study operations strategy, while Sampson and Showalter (1999) evaluated customers. Ford, Joseph, and Joseph (1999) used IPA in the area of marketing strategy. IPA has also been used in various industries, such as health care (Dolinsky & Caputo, 1991; Skok, Kophamel, & Richardson, 2001), banking (Joseph, Allbright, Stone, Sekhon, & Tinson, 2005; Yeo, 2003), hotels (Weber, 2000), and tourism (Duke & Mont, 1996). IPA has also been applied in IS research. Magal and Levenburg (2005) employed IPA to study the motivations behind e-business strategies among small businesses, while Shaw, Delone, and Niederman (2002) used it to analyze end-user support. Skok and colleagues (2001) mapped IPA onto the Delone and McLean IS success model. Delone and McLean (1992) advocated the construct of end-user satisfaction as a proxy measure of systems performance for three reasons. Firstly, end-user satisfaction has high face validity, since it is hard to deny that an information system is successful when it is favored by its users. Secondly, the development of the Bailey and Pearson (1983) instrument and its derivatives provided a reliable tool for measuring user satisfaction, which also facilitates comparison among studies. Thirdly, the measurement of end-user satisfaction is relatively more popular since other measures have performed poorly.

    Methodology

    Based on the reasons mentioned above, this study adapted the measurement tool developed by Bailey and Pearson (1983) to evaluate end-user satisfaction. The measures used are: System Quality, Information Quality, Service Quality, and System Use. System Quality typically focuses on the processing system itself, measuring its performance in terms of productivity, throughput, and resource utilization. Information Quality, on the other hand, focuses on measures involving the output from an IS, typically the reports produced. If the users perceive the information generated to be inaccurate, outdated, or irrelevant, their dissatisfaction will eventually force them to seek alternatives and avoid using the information system altogether. System Use reflects the level of recipient consumption of the information system output. It is hard to deny the success of a system that is used heavily, which explains its popularity as the IS measure of choice. Although certain researchers strove to differentiate between voluntary and mandatory use, Delone and McLean (1992) noted that "no system use is totally mandatory". If a system performs poorly in all aspects, management can always opt to discontinue it and seek alternatives. The inclusion of the Service Quality dimension recognizes the service element in the information systems function. Pitt, Watson, and Kavan (1995) proposed that Service Quality is an antecedent of System Use and User Satisfaction. Because IS now has an important service component, IS researchers may be interested in identifying the exact service components that can help boost User Satisfaction.

    New and modified measurement items were added to suit current issues pertinent to IS development and performance evaluation, specific to the Malaysian IT firm studied. The newly added factors include High Availability of Systems, which directly affects the employees' ability to be productive. Frequent downtime means idle time, since employees are unable to access the required data or the email they need for communication. Senior management has raised concerns about this issue, since it affects business continuity and can potentially compromise the company's competitive position. Thus, the IS department under study is exploring measures to provide continuous access to the company's IS, including clustering technology that provides real-time replication onto a backup system.

    The second new variable is Implementation of Latest Technology, which has a direct bearing on the company's productivity. Being a leading IT solutions provider, the company needs to keep abreast of the recent technologies and solutions available in the market. This is aided by strategic partnerships with various partners who provide thought leadership, access to their latest developments, and skill transfer in order to equip the employees with the relevant expertise. Additionally, in order to formulate better solutions for its customers, the company needs to have first-hand experience in using the proposed technologies. To realize this strategy, the company has embarked on a restructuring exercise that formalizes an R&D think-tank and a deployment unit to facilitate rapid roll-out of the latest technologies for internal use. The third new factor included in this study is Ubiquitous Access to IT Applications, which enables productivity anytime and anywhere. The primary aim is to provide constant connectivity for the employees who are often out of the office, enabling them to stay in touch with email and critical applications hosted within the organization's network.

    A convenience sampling method was used for data gathering. The targeted respondents were the organization's end-users of various IS (email, Internet browsing, and a host of office automation systems developed in-house). In order to expedite the data collection process, the survey was converted into an online format and deposited in the organization's Lotus Notes database. An email broadcast was sent out to explain the research objectives, and a brief instruction on how to complete the survey was also included. 680 users accessed the survey and 163 questionnaires were completed, which is equivalent to a response rate of 24%. All completed questionnaires were automatically deposited into a Lotus Notes database whose security settings were modified to allow anonymous responses, ensuring complete anonymity.

    The questionnaire contained 20 attributes that were selected out of the 39 items proposed by Bailey and Pearson (1983). The rationale for this was to reduce the complexity of the survey questionnaire. For the sake of simplification, and to reduce the total time needed to provide a complete response, the questionnaire also did not include any negatively worded verification questions. It is foreseeable that including other factors might have provided different insights or improved the internal reliability of the variables studied. However, performing a rigorous test to qualify the best set of variables would have been time consuming and could have cut into the time available for data collection.

    The questionnaire comprised two sections, each containing the selected 20 attributes (see Table 1). The first section asked respondents to evaluate the degree of importance placed upon each attribute. The second section required an evaluation of the actual performance of the same attributes. The respondents were prompted to use a five-point Likert scale (1 = low, 5 = high).
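
    To illustrate how such two-section responses aggregate into the mean scores reported below, here is a small Python sketch (ours, not part of the original study; the attribute names are real, but the response values are invented):

        # Illustrative sketch: aggregating two-section Likert responses
        # (1 = low, 5 = high) into per-attribute importance/performance means.
        # ASSUMPTION: the response values below are invented for illustration.
        from collections import defaultdict

        responses = [
            {"attribute": "Documentation", "importance": 4, "performance": 3},
            {"attribute": "Documentation", "importance": 5, "performance": 3},
            {"attribute": "Security of Data", "importance": 5, "performance": 4},
        ]

        sums = defaultdict(lambda: [0, 0, 0])  # attribute -> [imp_sum, perf_sum, n]
        for r in responses:
            s = sums[r["attribute"]]
            s[0] += r["importance"]
            s[1] += r["performance"]
            s[2] += 1

        for attr, (imp, perf, n) in sums.items():
            print(f"{attr}: importance mean {imp / n:.2f}, "
                  f"performance mean {perf / n:.2f}")
        # Documentation: importance mean 4.50, performance mean 3.00
        # Security of Data: importance mean 5.00, performance mean 4.00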

    Findings

    It was found that the respondents were moderately satisfied with the IS performance, as indicated by the mean scores (see Table 1). The mean scores in Table 1 indicate that the respondents were least satisfied with the Degree of Training provided to them (mean score of 2.95). In contrast, the respondents expressed the greatest satisfaction with the following attributes: Relationship with the Electronic Data Processing (EDP) Staff (mean score of 3.82), Response/Turnaround Time (mean score of 3.76), and Technical Competence of the EDP Staff (mean score of 3.65). A detailed discussion of user satisfaction is presented in Hisham's (2006) paper.

    Table 1 indicates the respondents' perception that all attributes were below their expectations or level of importance (note the negative values for the differences in mean scores). The degree of difference, however, varies. From the gap between means, it is easy to see that the IS department needs to work harder to achieve better results on Understanding of Systems, Documentation, System Availability, Ubiquitous Access, and Training. These five items have the highest gap scores, indicating the biggest discrepancies between importance and performance. On the other hand, the items with the lowest gap scores suggest that the current performance levels are manageable, even if they are still below end-users' expectations. These include Relationship with the EDP Staff, Relevancy, Time Required for New Development, Feeling of Control, and Feeling of Participation.

    Table 1. IS Attributes, Means, and Gap Scores

    IS Attribute                          | Performance Mean (X) | Importance Mean (Y) | Difference (X - Y)
    Understanding of Systems              | 3.27                 | 4.36                | -1.09
    Documentation                         | 3.06                 | 4.14                | -1.08
    High Availability of Systems          | 3.10                 | 4.16                | -1.06
    Ubiquitous Access to Applications     | 3.05                 | 4.07                | -1.02
    Degree of Training                    | 2.95                 | 3.95                | -1.00
    Security of Data                      | 3.53                 | 4.49                | -0.96
    Integration of Systems                | 3.09                 | 3.99                | -0.90
    Top Management Involvement            | 3.35                 | 4.19                | -0.84
    Flexibility of Systems                | 3.28                 | 4.07                | -0.79
    Implementation of Latest Technology   | 3.03                 | 3.81                | -0.78
    Confidence in the Systems             | 3.44                 | 4.22                | -0.78
    Attitude of the EDP Staff             | 3.55                 | 4.31                | -0.76
    Job Effects                           | 3.51                 | 4.24                | -0.73
    Response/Turnaround Time              | 3.76                 | 4.46                | -0.70
    Technical Competence of the EDP Staff | 3.65                 | 4.23                | -0.58
    Feeling of Participation              | 3.25                 | 3.79                | -0.54
    Feeling of Control                    | 3.27                 | 3.74                | -0.47
    Time Required for New Development     | 3.36                 | 3.80                | -0.44
    Relevancy                             | 3.54                 | 3.97                | -0.43
    Relationship with the EDP Staff       | 3.82                 | 4.05                | -0.23
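
    As a quick illustration (ours, not part of the original paper), the gap scores in Table 1 can be reproduced and ranked from the published means with a few lines of Python; only a subset of the twenty attributes is shown here:

        # Illustrative sketch: recomputing and ranking Table 1 gap scores.
        # The means are the published values; the attribute subset is arbitrary.
        means = {
            # attribute: (performance mean X, importance mean Y)
            "Understanding of Systems": (3.27, 4.36),
            "Documentation": (3.06, 4.14),
            "High Availability of Systems": (3.10, 4.16),
            "Relationship with the EDP Staff": (3.82, 4.05),
        }

        gaps = {name: round(x - y, 2) for name, (x, y) in means.items()}

        # Sort from the largest discrepancy (most negative gap) upward.
        for name, gap in sorted(gaps.items(), key=lambda kv: kv[1]):
            print(f"{name:35s} {gap:+.2f}")
        # Understanding of Systems            -1.09
        # Documentation                       -1.08
        # High Availability of Systems        -1.06
        # Relationship with the EDP Staff     -0.23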

    Mean scores for both importance and performance data were plotted as coordinates on the Importance-Performance Map depicted in Figure 1. All the means were above 2.5, thus falling in the second quadrant of the IPA map (refer to Figure 1). To show the resulting positions more clearly, only the plotted scores are shown in Figure 2. These indicate that both the performance and the importance of the systems are satisfactory; the systems therefore qualify for continued maintenance (cf. Bacon, 2003; Martilla & James, 1977).

    As discussed above, performance and importance scores carry more meaning when they are studied together. It is not enough to know which attribute was rated as the most important, or which one fared best or worst. By mapping these scores against the iso-rating line, one gets an indication of whether focus and resources are being deployed adequately, insufficiently, or too lavishly. Figure 2 shows that all the scores were above the iso-rating line, indicating that importance exceeds performance. This implies that there are opportunities for improvement in this company.

    [Figure 2 plots the twenty attribute mean scores, numbered per the legend below, on the Importance-Performance map; the image is not reproduced here.]

    Legend for IS Attributes:
    1. Understanding of Systems
    2. Documentation
    3. High Availability of Systems
    4. Ubiquitous Access to Applications
    5. Degree of Training
    6. Security of Data
    7. Integration of Systems
    8. Top Management Involvement
    9. Flexibility of Systems
    10. Implementation of Latest Technology
    11. Confidence in the Systems
    12. Attitude of the EDP Staff
    13. Job Effects
    14. Response/Turnaround Time
    15. Technical Competence of the EDP Staff
    16. Feeling of Participation
    17. Feeling of Control
    18. Time Required for New Development
    19. Relevancy
    20. Relationship with the EDP Staff

    Figure 2. Map of IS Attributes
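
    A map like Figure 2 can be reproduced from the Table 1 means. The following matplotlib sketch (ours, not the authors' plotting code) draws a handful of the attributes together with the iso-rating line; splitting the quadrants at the scale midpoint of 3.0 is our assumption:

        # Illustrative sketch: an Importance-Performance map with iso-rating line.
        import matplotlib.pyplot as plt

        attributes = {  # attribute: (performance mean X, importance mean Y)
            "Understanding of Systems": (3.27, 4.36),
            "Security of Data": (3.53, 4.49),
            "Relationship with the EDP Staff": (3.82, 4.05),
        }

        fig, ax = plt.subplots()
        for name, (perf, imp) in attributes.items():
            ax.scatter(perf, imp)
            ax.annotate(name, (perf, imp), fontsize=8)

        # 45-degree iso-rating line: points above it underperform expectations.
        ax.plot([1, 5], [1, 5], linestyle="--", label="iso-rating line (I = P)")
        ax.axhline(3.0, color="grey", linewidth=0.5)  # assumed quadrant split
        ax.axvline(3.0, color="grey", linewidth=0.5)
        ax.set(xlim=(1, 5), ylim=(1, 5), xlabel="Performance", ylabel="Importance")
        ax.legend()
        plt.show()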

    Discussion and Conclusion

    Organizations today are compelled to analyze the actual value (performance) of their IS. On the practical level, the implication is that IS departments should measure the satisfaction level amongst their end-users as part of evaluating IS performance. The same goes for managers in other areas who are responsible for the return on IS investments. End-users' input can reveal insights as to which areas deserve special attention and more resources. Using tested tools such as Bailey and Pearson's (1983) instrument helps ensure a consistent, reliable, and valid outcome that, when deployed over time, can help measure the performance of the IS department and ensure continual alignment between its operational goals and the underlying business objectives.

    As determined in this study, key IS attributes pertaining to Service Quality (Relationship with the EDP Staff) and System Quality (Response/Turnaround Time) are critical in delivering end-user satisfaction (i.e., system performance). Meanwhile, data security was deemed the most important IS attribute, which echoes today's concern about rampant security threats. These include virus and worm attacks, which can lead to data loss, identity theft, hacking, and unauthorized access to data. As such, an IS department needs to be proactive in handling these threats and continually demonstrate to the end-users its ability to secure the system and its information repositories. Confidence in the Systems is partly related to data security, but this attribute also has to do with the quality of information. The data presented to the user must be reliable, accurate, and timely; otherwise, the data will be meaningless and will hinder the end-users from making sound decisions.

    The principles of IPA can be extended to other functional areas, particularly units that provide internal services, such as Human Resources. For example, a human resources department provides services to internal staff, such as a leave management system (a system for managing employees' leave applications). The staff can be asked to rate the system's performance as well as their expectations of the system. The findings can then be used to gauge whether their expectations are met and whether they are satisfied with the system, and, most importantly, as a guide for enhancements and improvements.

    The Importance-Performance Map (Figure 2) revealed that all twenty IS attributes were performing below the end-users' expectations. The three variables with the highest gap scores were Understanding of Systems, Documentation, and High Availability of Systems. Mapping the mean scores for both data sets onto a scatter plot and analyzing the distance of the plotted scores from the iso-rating line gave much insight to help guide the prioritization of resources and management intervention. For example, to improve end-users' understanding of systems, the IS department could strive to improve its systems' documentation and training. This may include the use of computer-based training, applying multimedia and video to conduct online demonstrations, and organizing "IT open days" when end-users can approach the IS personnel and inquire about the systems deployed.

    Based on the gap analysis, the IS department has already fostered good relationships with the end-users, encouraging high user involvement in the development of new applications. This makes end-users feel more in control and assures them that the solutions developed are highly relevant to their tasks. The IS department would be wise to maintain this healthy relationship with the end-users while pursuing the enhancement of the IS attributes identified with the highest gap scores.

    Managers from the end-users' departments should work together with the IS department to reduce the importance-performance gaps. They should play a more active role in the development and implementation of new systems. For example, during the design stage, managers should state their specifications based on functional and procedural requirements, while during the testing stage they should test the system thoroughly and extensively.

    The limitation of this study is the use of a convenience rather than a random sample. Additionally, end-user responses on perceived importance might have suffered from a desire to rank everything as "very important" in order to suggest a highly concerned outlook on the overall state of the attributes presented. Nevertheless, the study accomplished its purpose of continuing IPA research within the field of IS, into which IPA was only recently imported from the field of marketing. This application of IPA contributes in several ways. For example, IPA was used in IS research by Magal and Levenburg (2005) for evaluating e-strategies among small businesses in the United States, where the respondents were business owners. In contrast, the respondents in the present study are users of systems within a company. Moreover, Skok and colleagues (2001) used IPA to study the IS performance of a health club in the United Kingdom by deploying Delone and McLean's (1992) IS success typology, a model that incorporates user satisfaction as one variable to be studied. The present study differs in focusing on end-user satisfaction alone, treating it as a surrogate measure of IS success.

    Finally, the present study adapted Bailey and Pearson's (1983) instrument by adding three new dimensions: High Availability of Systems, Implementation of Latest Technology, and Ubiquitous Access. These factors were included in response to the needs of the studied company. High Availability of Systems directly affects the employees' ability to be productive, since frequent downtime means idle time in which employees cannot access the data they need. Senior management has raised this issue because it affects business continuity and can potentially compromise the company's competitive position. Thus, the IS department is exploring measures to provide continuous access to the company's IS, including clustering technology that provides real-time backups. Furthermore, the dimension Implementation of Latest Technology was added to the instrument because the company has to keep abreast of the recent technologies and solutions available in the market; in order to formulate better solutions for its customers, the company must have first-hand experience in using the proposed technologies. The third new dimension is Ubiquitous Access, which is supposed to enable the company to achieve its productivity objectives anytime and anywhere. This means providing constant connectivity for employees who are often out of the office, enabling them to have continuous access to critical applications hosted within the company's network.

    Lastly, it should be noted that most of the previous work on IPA was conducted in more developed countries, such as the United States and the United Kingdom. No research of this sort had previously been done in Malaysia, which belongs in the category of developing countries.

    References

    Bacon, D. R. (2003). A comparison of approaches to importance-performance analysis. International Journal of Market Research, 45(1), 55-71.

    Bailey, J. E., & Pearson, S. W. (1983). Development of a tool for measuring and analyzing computer user satisfaction. Management Science, 29(5), 530-545. Available at http://business.clemson.edu/ISE/html/development_of_a_tool_for_meas.html

    Delone, W. H., & McLean, E. R. (1992). Information systems success: The quest for the dependent variable. Information Systems Research, 3(1), 60-95. Available at http://business.clemson.edu/ISE/html/information_systems_success__t.html

    Dolinsky, A. L., & Caputo, R. K. (1991). Adding a competitive dimension to importance-performance analysis: An application to traditional health care systems. Health Marketing Quarterly, 8(3/4), 61-79.

    Duke, C. R., & Mont, A. S. (1996). Rediscovering performance-importance analysis of products. Journal of Product and Brand Management, 5(2), 143-154.

    Eskildsen, J. K., & Kristensen, K. (2006). Enhancing importance-performance analysis. International Journal of Productivity and Performance Management, 55(1), 40-60.

    Farbey, B., Land, F., & Targett, D. (1992). Evaluating investments in IT. Journal of Information Technology, 7, 109-122.

    Ford, J. B., Joseph, M., & Joseph, B. (1999). IPA as a strategic tool for service marketers: The case of service quality perceptions of business students in New Zealand and the USA. Journal of Services Marketing, 13(2), 171-186.

    Graf, L. A., Hemmasi, M., & Nielsen, W. (1992). Importance-satisfaction analysis: A diagnostic tool for organizational change. Leadership and Organization Development Journal, 13(6), 8-12.

    Hisham, N. (2006). Measuring end user computing satisfaction. Unpublished MBA dissertation, University of Malaya.

    Ives, B., Olson, M., & Baroudi, J. J. (1983). The measurement of user information satisfaction. Communications of the ACM, 26(10), 785-793. Available at http://business.clemson.edu/ISE/html/the_measurement_of_user_inform.html

    Joseph, M., Allbright, D., Stone, G., Sekhon, Y., & Tinson, J. (2005). IPA of UK and US bank: Customer perceptions of service delivery technologies. International Journal of Financial Services Management, 1(1), 66-88.

    Magal, S., & Levenburg, N. M. (2005). Using IPA to evaluate e-business strategies among small businesses. Proceedings of the 38th Hawaii International Conference on System Sciences, January 3-6, 2005, Hilton Waikoloa Village, Island of Hawaii, USA.

    Martilla, J. A., & James, J. C. (1977). Importance-performance analysis. Journal of Marketing, 41(1), 77-79.

    Pitt, L. F., Watson, R. T., & Kavan, C. B. (1995). Service quality: A measure of information systems effectiveness. MIS Quarterly, 19(2), 173-185.

    Sampson, S. E., & Showalter, M. J. (1999). The performance-importance response function: Observations and implications. The Service Industries Journal, 19(3), 1-25.

    Shaw, N. C., Delone, W. H., & Niederman, F. (2002). Sources of dissatisfaction in end-user support: An empirical study. Data Base for Advances in Information Systems, 33(2), 41-56.

    Skok, W., Kophamel, A., & Richardson, I. (2001). Diagnosing information systems success: Importance-performance maps in the health club industry. Information & Management, 38, 409-419.

    Slack, N. (1994). The importance-performance matrix as a determinant of improvement priority. International Journal of Operations & Production Management, 14(5), 59-76.

    Weber, K. (2000). Meeting planners' perceptions of hotel-chain practices and benefits: An importance-performance analysis. Cornell Hotel and Restaurant Administration Quarterly, 41(4), 32-38.

    Yeo, A. Y. C. (2003). Examining a Singapore bank's competitive superiority using importance-performance analysis. Journal of American Academy of Business, 3(1/2), 155-161.

    Biographies

    Dr. Ainin is associated with the Department of Marketing and Information Systems, Faculty of Business and Accountancy, University of Malaya, Kuala Lumpur, Malaysia. She is currently the Dean of the Faculty. Her main areas of research include technology adoption, the digital divide, information systems performance, and management education.

    Nurhayati Hisham obtained her Master's degree in Business Administration from the University of Malaya. She currently works for a Malaysian information technology company.
