ACCESS SECURITY AT McDONNELL DOUGLAS
AEROSPACE INFORMATION SERVICES © 1998 J. Christopher Westland. All rights reserved.*
Markus Wegner was a highwayman. Only the highways he traversed were electronic, not concrete. His highways had signposts, bridges and gateways. They had turnpikes and intersections. His current haunt was an intersection (one of many) where electronic freight mingled and crossed.
After weeks of searching through backdoors, trapdoors, and gateways on the Milnet computer network, he found the entryway he sought — into the computers of McDonnell Douglas. McDonnell Douglas' weapons secrets, personnel files, research, and commercial aircraft designs were open to his scrutiny. As design plans for the MD-12 (MD's newest commercial airliner) downloaded by his side, Wegner relaxed and lit another cigarette.
* * * * *
Halfway around the globe, Tom Thompson, Security Manager for McDonnell Douglas Aerospace Information Services (MDAIS) in Long Beach, California pondered the difficulties in securing MDAIS' systems. His chore was increasingly hampered by unfettered growth of networked workstations and microcomputers within the company. Thompson suspected these harbored a growing underground "sneaker network" of stolen software, data and proprietary secrets, along with the odd computer virus. He suspected that both networks and diskettes left McDonnell Douglas' (MD) computers open to intrusion. Now events seemed to bear out his suspicions.
Dave Komendat, Principal Specialist for International Security Operations, had just contacted Thompson about a report received from Army Intelligence. An unidentified hacker had made several attempts to access military files on McDonnell Douglas' aircraft through a convolution of telephone lines, computer bulletin boards, and corporate data systems. The common thread in all of these access attempts was the use, at one or more points, of the Milnet military communications network. Milnet was actually
two networks — one that was relatively insecure, and used extensively for research, often for military R&D; and the other secured and intended only for military use. Komendat believed that the hacker was able to access either side, although the Army could not be sure until it further perused access logs and other audit trails. The hacker had used a number of clever ruses to infiltrate military systems without leaving a trail. The Army assured Komendat, though, that these systems were secure, and that all of the hacker's attempts had been thwarted. Komendat was not so sure.
Despite its scrutiny, the Army was unable to accurately track the source of the telephone calls through the hacker's telephone connections, computer commands, retransmissions and automated login attempts across US military computing sites.
* Permission to use at Babson College obtained from David Kopcso, Department of Math and Sciences, Autumn 1999.
All this worried Thompson. Although he felt that MDAIS' mainframe data was secure, he was unsure of the microcomputer network and another R&D network connecting several clusters of VAX minicomputers. Because these networks did not support critical "production" systems — i.e., systems that handled commercial transactions, whose accuracy, privacy, security, audit trails and transaction integrity needed to be ensured — they were allowed a certain degree of unmanaged growth. This was beneficial for two reasons: (1) it allowed hardware and networks to adapt quickly in support of new projects, often by retrofitting existing standalone microcomputers; and (2) it promoted a laissez-faire spirit toward collaboration and communication that favored creativity and productivity.
Unfortunately, the data needed by users of microcomputer networks frequently resided on the secured, mission critical, "production" side of the mainframes. It was only a matter of time before microcomputer users requested access. That meant providing gateways to the mainframe, and a new layer of security at the gateway. Thompson was concerned with security beyond the gateway, over which he had little control. An impostor might easily gain access to authorized login IDs and passwords in the microcomputer network, leaving him free to prowl through supposedly secure databases.
Some gateways were opened to appease programmers — programmers who might not even work for MDAIS, and who in any case were suspected of placing backdoors and trapdoors in production programs to ease their own software maintenance chores. Data security became more complex by the day.
Hackers invoked several ploys to gain access. Shoulder surfing let hackers gather
information (e.g., passwords) by looking over another user's shoulder. Worms and viruses
could be used to capture and return information, as well as for damage or illicit access. Logic bombs —programs that perform an unauthorized act when a specified system condition occurs — could be used to cover a hacker's trail and make prosecution difficult. Leakage through disclosure of proprietary or confidential information could compromise information assets. Dumpster diving — the search of trash from corporations — was a
major source of sensitive information. Thompson had heard of one California bank where a trash collector had figured out the bank's system from paper waste and transferred $1 million into his own account without detection (for a while). Zapping used utility
programs to override system controls. Piggybacking onto another's computer account
without authorization (often because a user failed to logoff) was a common problem at MDAIS. Adding confusion to these and many other options was the salami technique
where theft of information or resources was hidden in a large group of activities, such as skimming rounding errors from interest calculations. Thompson liked to summarize violators' modi operandi with "the seven E's":
• Embezzlement: the unauthorized (usually undetectable) appropriation of corporate assets
• Eavesdropping: the invasion of privacy
• Espionage: the theft of R&D and other corporate information assets
• Enmity: revenge by disgruntled employees, through time bombs, sabotage, etc.
• Extortion: the use of time bombs, sabotage, and so forth, with the objective of extracting payment or concessions
• Error: the most common abuse
• Ego: committing abuse for enjoyment or prestige; the hacker's motivation
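The salami technique is easiest to see in miniature. The sketch below is hypothetical (the balances, interest rate, and account names are invented, with no connection to any real system): interest is credited rounded down to the cent, and the sub-cent residue from each account is quietly diverted.

```python
from decimal import Decimal, ROUND_DOWN

def post_interest(balances, annual_rate, skim_account="ATTACKER"):
    """Hypothetical salami skim: credit each account its interest
    rounded DOWN to the cent, and divert the rounding residue
    into a hidden account."""
    rate = Decimal(str(annual_rate))
    skimmed = Decimal("0")
    posted = {}
    for acct, balance in balances.items():
        exact = Decimal(str(balance)) * rate               # exact interest owed
        credited = exact.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
        skimmed += exact - credited                        # under $0.01 per account
        posted[acct] = credited
    posted[skim_account] = skimmed.quantize(Decimal("0.01"),
                                            rounding=ROUND_DOWN)
    return posted

# Each depositor's posted interest looks correct to the cent,
# so per-account audits reveal nothing.
ledger = post_interest({"A": 1234.56, "B": 9876.54, "C": 555.55}, 0.0525)
```

Because every individual diversion is smaller than one cent, only an aggregate reconciliation of total interest paid against total interest owed would expose the theft.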
MDAIS was founded in St. Louis as a separate division of McDonnell Douglas (MD) called McAuto in 1969. MDAIS consolidated under one management team the diverse and widespread data processing operations of MD Aerospace. For several years MDAIS resources were dedicated solely to McDonnell Douglas information processing. In 1973 MDAIS successfully launched its first major commercial venture, "CUADATA," a credit union back-office processing support system. This was followed in 1976 by a second successful commercial venture, "UNIGRAPHICS," a Computer Aided Design / Computer Aided Manufacturing (CAD/CAM) system. MDAIS essentially offered internal expertise on a "time shared" basis for use by outside firms. Prior to the advent of powerful low-cost microcomputers, timesharing was a popular option for purchasing discrete chunks of computing time without a substantial and permanent investment in capacity.
By 1981, MDAIS' various timesharing initiatives were consolidated under the Timenet and Telecheck programs. In that year, MDAIS consolidated its operations on a 1.1 million square foot campus in St. Louis. Yet operational problems were also surfacing by that time. Capital and operational costs had outpaced revenues. More important, MDAIS started losing commercial contracts for non-peak batch processing (in the evening and early morning) as demand shifted toward on-line transaction processing (OLTP) during daylight hours. This sharpened management's incentive to cut non-essential costs — among which they included information systems security.
Between 1984 and 1991, the peak hour processing needs of MD forced MDAIS to divest itself of many of its commercial ventures. At the same time operations were split between St. Louis MO and Long Beach CA. The divestiture made securing MD's information assets significantly easier, since control systems did not have to track a large and shifting body of external users. Yet the same period saw the rapid growth of VAX minicomputer networks and networks of workstations — especially in the engineering
design and development area. These networks installed numerous gateways to other networks outside of MDAIS. They also had numerous dial-up ports, which provided valuable telecommuting and access capabilities to engineers and management. Although critical on-line transaction processing resided on a tightly controlled mainframe, most of the valuable R&D resided on unsecured networks.
MDAIS increasingly received its transaction revenue from sources outside of MD. In the 1970s MDAIS was a timesharing company — it sold raw computer time, counting on customers to supply their own software, data, processing procedures and standards, and error control. Although it was one of the most efficient and reliable providers of mainframe computer processing, it lacked a portfolio of software packages and services to sell, and was ill-prepared to assume the comprehensive range of facilities management services increasingly being offered by its competition. What was worse, its customers were increasingly skeptical of MDAIS' ability to run a secure operation when only the hardware aspect of processing was under its management. Many customers made proprietary and sensitive processes, trade secrets and market data available to MDAIS. They might be less likely to procure processing time were these to be subject to unfettered access by competitors and hackers. In contrast, facilities management / systems integrators such as EDS and Arthur Andersen could assure a "closed shop" by tightly controlling which software accessed whose databases, and tightly managing the disposition of media and resident data.
By 1992 MDAIS presided over a far-flung empire of mainframe computers, VAX minicomputers and networked microcomputers. The center of operations was split between the two largest sites — one in Long Beach, California, and the other in St. Louis, Missouri. Eight satellite operations (Florida Space Center, Houston, Macon GA, Toronto, Tulsa, Salt Lake City, Columbus OH, San Diego) completed the system, handling mainly administrative and accounting processing. Engineering R&D computing tended to take place on the VAX network, and secure DoD work took place at special centers facetiously called "Black Holes."
The MDAIS operation processed around 15 million transactions per day in 1992, and around 4 billion transactions annually. Systems tuning and load balancing between various machines was performed on a continuous basis. Over 8 trillion bytes of data were retained in 250,000 volumes of tape storage, most of which could be accessed within 15 seconds via tape silos and similar automated mounting systems. Continual load balancing, tuning, and refinement of machine hardware, software and operating procedures made the MDAIS computers some of the most efficient of any computer service provider. Outside reviews benchmarked capacity utilization higher than that of virtually any other comparable installation.
This traffic became increasingly expensive to service. Networked microcomputers were offering as much as 100-to-1 improvements in price/performance over mainframes. But they were unreliable compared to proprietary mainframes, and provided few services to ensure data integrity. Yet cost pressures were increasing demand for dedicated minicomputers and workstations. Ad hoc networks were popping up to support data and software transfer around these networks, and these were seen by MDAIS management as real sources of security and data integrity problems.
The growth of transaction traffic in the 1970s and 1980s was not unlike the parallel growth of traffic in neighboring Los Angeles — with similar problems arising in
both the computer and highway networks. Healthy investments in infrastructure had allowed both MDAIS and Los Angeles to (barely) keep up with the demands of traffic, especially during peak hours. But policing both networks became more difficult, and risky or shortsighted decisions were made in the face of tight budgets. Thompson felt it was
only a matter of time before the computer equivalents of carjackings, drive-by shootings and unsafe vehicles got out of hand.
Of even more concern to MDAIS was the growing threat from hackers, industrial spies, and disgruntled employees. Microcomputer viruses were being detected at a rate of around six per day in 1992, less than ten years after the concept of a computer virus was first proposed by Fred Cohen at the University of Southern California. Computer hacking was at an all-time high, with the vast majority of intrusions no doubt going unnoticed. Damages could potentially run into the billions. MDAIS was very likely being hurt by the potential for security breaches, although no one could be sure.
* * * * *
Thompson had been with MD since 1968. In the early 1980s, Thompson left MDAIS and traveled to Oregon to become a gentleman farmer. Several years of farming had left him yearning for the structure and routine of industry, and he had returned to assume responsibility for implementation of ACF-2, a popular mainframe security package. ACF-2 operated by defining computer assets as "objects" to be accessed only by "authorized individuals." Authorization was granted on a "need-to-know" basis. In theory this provided adequate security where virtually all sensitive information resided on the mainframe. In practice, it was often difficult to determine who needed to know what concerning sensitive R&D information — needs would often evolve as R&D progressed. Thus "need-to-know" was interpreted laxly. This presented the corporation with significant exposure to loss of corporate secrets, with consequent loss of competitive position and potential patent rights.
Thompson estimated that roughly 20% of MDAIS' information technology investments were in hardware, 30% in proprietary software, and the remaining 50% in the corporation's databases, of which engineering R&D data constituted the largest share. Proprietary software investments reflected mainly the costs of intellectual effort associated with development and maintenance. Data costs were largely acquisition costs, which tended to be labor intensive. Given the relative proportions of MDAIS' investments, security tended to focus most intensely on databases and other data assets. This was in sharp contrast to security programs initiated in the 1970s, which attempted to assure that machine time was used efficiently and solely for corporate affairs.
There was increased concern over maintenance of proprietary software. Although investment in proprietary software was substantial, it was doubtful that much of it would find widespread application outside MDAIS. But this same software could access, alter and copy data with impunity; neither ACF-2 nor corporate authorization schemes were equipped to deal with this possibility. Thompson thought that control could be enhanced by tightly monitoring programmers' access to production software.
Maintenance costs on MDAIS's software ran an annual 10% of installed cost. Around 75% of this reflected the addition or refinement of features to ensure data integrity and accuracy. An internal study revealed that approximately 50% of maintenance expenditures were incurred reading old code, trying to figure out what tasks it performed. Since the average seven-year-old system contained around 50% "dead code," this work was often tedious and unrewarding. Given the economics of software maintenance, Thompson felt that any security measures must be transparent to maintenance programmers and could not impede their already tedious task. Programmers who perceived their efforts being seriously hampered by security could bring unpleasant politics to bear on Thompson and his staff. To this end, mainframe systems were dichotomized into "production" and "test" sides. Maintenance programmers were allowed access to copies of "production" software modules and databases on a prophylactic "test" side. Since their use and modification might be substantial, this seemed an appropriate way to sequester sensitive production data.
* * * * *
Dave Komendat glanced over Thompson's shoulder at the maze of data scrolling forth on the screen. Four hours of searching activity logs had left them exhausted and irritable. But now they had him — the hacker who had eluded them over the past week.
This is what they saw:
Welcome to McDonnell Douglas Aerospace Information Services.
Please enter your user ID.
Incorrect login, try again
Incorrect login, try again
Incorrect login; session disconnected
And the hacker had made 183 similar attempts from the same Milnet gateway over the past week, under different login IDs, but using similar passwords from a list of around 100 names. Except that the last four attempts had proven successful!
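A pattern like the one on Thompson's screen (many failed logins from a single gateway under rotating IDs) is straightforward to flag mechanically. A minimal sketch, assuming a simplified, hypothetical activity-log format rather than ACF-2's actual records:

```python
from collections import Counter

def flag_bruteforce(log_entries, threshold=3):
    """Count failed logins per source gateway and flag any source
    whose failures reach the threshold.  Each entry is a
    (source, user_id, outcome) tuple in a hypothetical log format."""
    failures = Counter()
    for source, user_id, outcome in log_entries:
        if outcome == "FAIL":
            failures[source] += 1
    return {src: n for src, n in failures.items() if n >= threshold}

# Illustrative log: three failures from one gateway, one from another.
log = [
    ("milnet-gw", "jsmith", "FAIL"),
    ("milnet-gw", "rjones", "FAIL"),
    ("milnet-gw", "tbrown", "FAIL"),
    ("milnet-gw", "mlee",   "OK"),
    ("local-01",  "kchan",  "FAIL"),
]
suspects = flag_bruteforce(log)
```

Keying the count on the source rather than the user ID is what catches the hacker's ruse of rotating login IDs: 183 failures spread across many accounts still collapse into one suspicious gateway.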
"Check the current activity log" suggested Komandat.
Thompson pecked at the keyboard... "He's in the system right now!"
"Can you shut him down?"
"I think so. He's going after design specs for the MD-12 flight control electronics."
Thompson rapped on the keyboard. "There. That puts the entire database off-limits until we can get a positive ID."
* * * * *
Markus Wegner's repose was interrupted by the unexpected silencing of his printer. For reasons unknown, it had failed to complete the printout of MD-12 plans. He
had seen the data on his screen only half an hour earlier, but had been unable to complete the download, let alone the printout. Fortunately, months of knocking at electronic doors had inured him to these little glitches. He had other routes into McDonnell Douglas' networks.
* * * * *
MDAIS' Strategy for Mainframe Security
In 1968, all mainframe access security tasks were consolidated under Tom Thompson, Director of Information Protection. Thompson transferred into the position from an operator's position in the corporate data processing area. The purpose of the group was to implement management policy.
In 1977, when the Foreign Corrupt Practices Act was passed, MD executives raised new concerns about the accuracy and security of financial information. Computer security programs were expanded at that time to deal with local police, as well as military security personnel. Audit coordination, with user guidance and training, gained new emphasis. During this period, the International Information Security Foundation, the Telecommunications Security Council, and the National Research Council Security Systems Study Committee were formed, further emphasizing the concern of management over security and control of information assets.
Access was controlled using ACF-2, and an associated "lock and key" conceptual framework that specifically allowed access links based on (1) individuals, and (2) objects.
Individuals were employees and outside users of MD's information systems. In its simplest rendition, MDAIS perceived its security problem to be one of assuring that individuals were allowed access to objects only when explicitly authorized. This was
equivalent to partitioning the company's assets into rooms, and issuing keys only for those rooms that individuals actually needed to enter as a part of their jobs. Each of these components required extensive interpretation before the scheme could be implemented in practice. The precise definition of an object was left open, to ensure flexibility in a rapidly evolving computing environment. Major classes of objects that had been defined in past use were: (1) Central Processing Units (real and virtual); (2) Software modules; (3) Databases and files; (4) Individual data fields on databases and files; (5) Networks; (6) Network gateways; (7) General ledger accounts (for charging P.O.s); and (8) Individual users' computer time accounts.
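Conceptually, the lock-and-key framework is an access matrix over (individual, object) pairs with default deny. A toy sketch of the idea follows; the class and the names used are illustrative, not ACF-2's actual rule syntax:

```python
class AccessMatrix:
    """Toy lock-and-key scheme: access is allowed only when an
    (individual, object) pair has been explicitly authorized.
    Anything not granted is denied by default."""

    def __init__(self):
        self._grants = set()

    def authorize(self, individual, obj):
        """Issue a 'key': grant this individual access to this object."""
        self._grants.add((individual, obj))

    def revoke(self, individual, obj):
        """Take the key back; safe to call even if never granted."""
        self._grants.discard((individual, obj))

    def may_access(self, individual, obj):
        # Default deny: only explicit grants open the 'room'.
        return (individual, obj) in self._grants

# Hypothetical grant: one engineer, one database.
acl = AccessMatrix()
acl.authorize("engineer_42", "MD12_SPEC_DB")
```

The hard part, as the case makes clear, is not this mechanism but populating it: deciding which (individual, object) pairs actually satisfy "need-to-know" when R&D needs keep shifting.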
Each object was assumed to be useful (valuable) to some individual — otherwise
MDAIS would divest itself of the object. Individuals could be employees of MDAIS; more often they were not. Since the majority of MDAIS' business was time-sharing, the majority of their information technology assets (software, data and transactions) could be considered to be held on consignment. Identification of the full population of individuals who might desire access to a given object was a crucial but vexing endeavor; the identifications that existed were generally considered to be incomplete.
Individuals needed to be categorized into authorized users, potential violators of access security and non-accessors. There was an inherent bias in all authorization schemes to delineate the authorized accessors, while ignoring the potential for unauthorized access. The risk was that there existed users who would try to circumvent information access controls. There might be diverse motivations for circumventing control — the price of authorized access was unaffordable; competition or survival of the unauthorized might depend upon gaining access; or successful circumvention might be touted as a sign of cleverness or just good fun. The latter case was an increasing concern. Thompson suspected the growth of a computer underground. Recent spectacular cases of
intrusion had been reported and profiled in books such as Cliff Stoll's The Cuckoo's Egg — these accounts had been especially embarrassing for the firms and individuals mentioned in them. But Thompson surmised that reported offenses were only the tip of the iceberg — these were the foolish hackers ... the ones who had been caught.
Compared to hackers, it seemed relatively easy to identify other classes of unauthorized users. These violators could be assumed to have some type of identifiable link to the information asset — either they were competitors, or employees, or consumer groups with an agenda which included the information asset. Employees and ex-employees presented the greatest threat. A recent survey found that around 30% of all employees were honest; another 40% would, under the right conditions, be compromised; and the final 30% of employees fully expected to exploit the corporation when it suited their needs.
But hackers — they were irrational and unpredictable. Their motivation for access could range from "catch-me-if-you-can" playfulness to theft and vandalism. And they could manifest themselves either through personal access attempts, or through
impersonal artifices — automated viruses, worms or Trojan horse software. Identifying them was difficult enough; foiling them was nearly impossible.
Thompson attempted to gain a more complete identification of unauthorized users. But it was common for users and management to disregard Thompson's entreaties. They claimed there was no evidence of unauthorized access, and thus that it was not a problem demanding investment of resources. This ignored, of course, that the lack of evidence might owe more to systems that failed to detect unauthorized access than to the nonexistence of violators. Industry periodicals documented evidence that, across the industry, abuse of computer assets was growing rapidly. In 1992 the total cost in lost work, vandalized assets, and stolen data in the US alone was estimated to be around $50 billion per year (though for obvious reasons this number was considered highly speculative).
Authorization consisted of two components — the authorization scheme and the
approval process. The authorization scheme was a part of management policy that determined how ownership, access and use of assets under the firm's jurisdiction were distributed.
Approval was the actual process of granting authorization at the event /
incident level. As much as possible, Thompson wanted to see the approval process
automated. Automated approval was less expensive in the long run, and could be vigilant 24 hours a day, 7 days a week. Unfortunately, automation was rigid, inflexible, and lacked the intuitiveness of human intervention that was often crucial to identifying and apprehending a perpetrator.
Owing to the nature of its business and service structure, many of MDAIS' information assets (a.k.a. objects) were provided on consignment from their customers,
rather than belonging to MD outright. Authorization policies were loosely based on a "need-to-know" dictum for data, and "contractual payment" for machines and software. Unfortunately, this dictum was often difficult to interpret in practice because of uncertainty about end user needs.
A prime example appeared in the joint development agreement reached with Taiwan Aerospace Corporation, the newly formed Taiwanese national firm committed to co-developing the MD-12 commercial airliner. MDAIS retained the MD-12 engineering specification, prototype and R&D files for MD as a part of their subsidiary relationship. It was impossible to determine exactly what files would ultimately be needed by engineers on either side, since MD-12 specifications were still evolving. Thus the Taiwanese firm was essentially given unrestricted access to MD-12 specifications, even in areas in which no subcontracting was being performed by them. McDonnell Douglas felt that it might have unnecessarily divulged hundreds of millions of dollars of proprietary R&D that could be used in the design of competing airliners. Thompson was determined not to let this happen in the future. But how could access be restricted more accurately without limiting the usefulness of the data?
* * * * *
The Information Highway
Thompson had read somewhere that the most effective way to secure a home was to build it in a neighborhood distant from any highway. This, it seemed, had proven effective where armed guards, gated communities, increased policing, or other enforcement had failed. Thompson was concerned that MDAIS' mainframe security under ACF-2 was sort of a gated community, serving the firm's geriatric legacy systems, close to all the major highways. Unfortunately, much of the valuable new data and software resided on microcomputers. Microcomputer networks were MDAIS' vibrant but dangerous urban neighborhoods, highways running through; some gentrified, some decaying; steps for some on the way to a better neighborhood. There was no central planner for these streets — ad hoc and chaotic, signposts conflicting, laws undocumented
or ignored — they carried their traffic with efficient anarchy. No doubt the gated community was more secure; but it never seemed to satisfy the young, creative, and productive community vital to MD.
Unlike crimes against property, once a computer crime was committed, it became very difficult to gather evidence or prosecute. Thompson knew that computer crime left
very little evidence after the fact. The best evidence could be found in paper audit trails, computer memory, computer backup media, and computer logs. But there was almost never any physical evidence.
Dave Komendat contended that the best immediate response upon evidence of a computer crime was much like police response after a burglary:
• Freeze the scene of the crime
• Document what happened
• Preserve the documentation — e.g., do not reconstruct a file without first
copying it in its damaged state
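Komendat's third rule, preserving the damaged state before reconstructing anything, has a direct software analogue: fingerprint the file, copy it, and verify the copy. A sketch assuming ordinary files; the function name and paths are hypothetical:

```python
import hashlib
import shutil

def preserve_evidence(path, archive_path):
    """Fingerprint a file, copy it, and verify the copy matches,
    so the damaged original can later be shown to be unaltered."""

    def sha256_of(p):
        h = hashlib.sha256()
        with open(p, "rb") as f:
            # Read in chunks so large logs don't exhaust memory.
            for chunk in iter(lambda: f.read(65536), b""):
                h.update(chunk)
        return h.hexdigest()

    original_digest = sha256_of(path)
    shutil.copy2(path, archive_path)   # copy2 also preserves timestamps
    # Verify the archived copy is bit-for-bit identical before
    # anyone touches the original.
    assert sha256_of(archive_path) == original_digest
    return original_digest
```

Recording the digest at the moment of discovery gives investigators something to cite later: any subsequent reconstruction can be checked against the preserved copy.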
Furthermore, preparing the legal case should be handled by:
• Calling in and cooperating with local law enforcement officers, recognizing that
they might lack expertise in systems
• Recognizing the difficulties in presenting computer-related evidence to a jury
• Recognizing that US Attorneys or district attorneys would rather prosecute a
mail fraud or wire fraud case, because of difficulties in presenting the
computer aspects of crime to a jury
One of the greatest problems in prosecuting computer crimes was posed by the strict rules governing the admissibility of evidence in court. These were designed to ensure fairness, and to guard against tampering or misrepresentation. But the volatile and manipulable nature of computer media made many computer counterparts to traditional evidence inadmissible — e.g., documents, photos, and recently video. Computer evidence and records had to be gathered and documented with great care.
Both Komendat and Thompson believed that the victimized corporation was often in a better position to deal with a crime than law enforcement. They might be reluctant because they wished to protect their reputation or did not feel they had a strong case. But if a company had its own experienced investigators they could assemble a case —
interview employees, assemble phone and audit trail records — better than law
enforcement officials. Thompson knew that once law enforcement was brought in, they were required to play by "Miranda rules," which could significantly slow the progress of a case.
Komendat knew that corporate evidence gathering might be at risk because firms were used to prosecuting civil cases rather than criminal cases. Standards were much higher in a criminal case, and ignorance of privacy laws and so forth could leave the evidence open to attack in court. Law enforcement officers were used to this greater burden of proof. And Komendat knew that many computer crime cases were disposed of not on the basis of what the perpetrator did or what the evidence was, but on whether it had been obtained legally and whether it was admissible.
Komendat was also aware that once you had collected evidence incorrectly, you could not go back and redo it correctly. Because of the complexities of investigating