
Operating Systems DPR355 Revision 2

    OS Chapter 6 February 2004

    Chapter 6

    Memory Management and Processor Management

6.1 Memory Management

6.1.1 Loading Operating System and Application Programs

    The operating system is a collection of software modules that, among other things, loads application programs and supports them as they run. Clearly, the operating system must itself occupy memory. Generally, the first few hundred bytes of low memory are set aside to hold key operating system control information (Figure below). Next come the input/output control system and the file system, followed by the command processor. The remaining memory (sometimes called high memory) is the transient area; this is where application programs and less essential operating system routines are loaded.

     Low memory (resident): control information; input/output control
     system; file system; command processor (essential routines)

     High memory (transient area): application programs; command processor
     routines that can be overlaid

    Generally, control information is stored in low memory, and transient logic occupies high memory.

    Some operating system modules, such as the ones that control physical I/O, directly support application programs as they run, so they must be resident. Others, such as the routine that formats disks, are used only occasionally. These modules can be stored on disk and read into memory only when needed; they are called transients. Given that memory space is limited, keeping only essential logic resident is a good idea.

    Loading application programs is not always as simple as it seems. The amount of space needed by a program can change as it runs, so some programs make use of overlay structures. The idea of overlays was developed during the second generation, when the amount of available memory was quite limited. The problem, in a nutshell, was to fit a 32K program into a 16K machine. The solution was to break the program into modules.

    For example, imagine a program with four 8K modules (Figure below). Module 1 holds the main control logic and key data common to the entire program. Module 2 processes valid input data. Occasionally, errors or unusual data values call for the logic in module 3. Module 4 generates end-of-program statistics, so it is needed only when the program terminates.



     Module 1: main control and key data; Module 2: normal data processing;
     Module 3: error routine; Module 4: end-of-job routine. Module 1 stays
     resident while modules 2, 3, and 4 take turns in the overlay area.

    Clearly, module 1 must remain in memory at all times. If no errors are encountered, there is no need for module 3. If an error occurs, module 3's logic must be executed, but modules 2 and 4 are superfluous. Thus, as the program begins, modules 1 and 2 are in memory. When an error is encountered, module 3 overlays module 2. It stays in memory until the next valid set of data is read; at that time module 2 replaces it. Finally, just before the program ends, module 4 overlays 2 or 3 and generates its statistics. Memory is not nearly so limited on modern microcomputers, but overlays are still used.
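    The overlay flow above can be sketched in a few lines of Python. This is a hypothetical illustration: the module names and the single 8K overlay slot follow the four-module example in the text, and `load_overlay` stands in for reading a module from disk.

```python
# A minimal sketch of the four-module overlay scheme described above.
# Module 1 (8K) is always resident; modules 2, 3, and 4 share one 8K
# overlay slot, so the 32K program fits in 16K of memory.
RESIDENT = "module1"          # main control and key data, never overlaid
OVERLAY_SLOT = None           # holds whichever other module is active

def load_overlay(module):
    """Replace the contents of the overlay slot (overlaying the old module)."""
    global OVERLAY_SLOT
    OVERLAY_SLOT = module

def run_program(records):
    history = []
    load_overlay("module2")                   # normal processing loaded first
    for rec in records:
        if rec == "bad":
            load_overlay("module3")           # error routine overlays module 2
        elif OVERLAY_SLOT != "module2":
            load_overlay("module2")           # next valid record reloads module 2
        history.append(OVERLAY_SLOT)
    load_overlay("module4")                   # end-of-job routine overlays 2 or 3
    history.append(OVERLAY_SLOT)
    return history
```

    At any instant only module 1 plus one other module occupies memory, which is the whole point of the structure.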

    Sometimes, the resident operating system is itself partially overlaid. When an application program is running, it needs the support of the input/output control system, the file system, and some of the command processor's functions, but has little or no use for other command processor features. Consequently, part of the command processor occupies high memory and, if extra space is needed, those functions can be overlaid. (When the program finishes executing, the complete command processor must be restored to memory.)

6.1.2 Concurrency - the Multiple-User Environment

    Efficient resource utilization is crucial on a mainframe. Given the speed disparity between a computer and its peripherals, input and output operations significantly impact efficiency. The program cannot process data it does not yet have, and it cannot assume an output operation has succeeded until that operation completes. Since the program controls the computer, the computer waits, too. Typically, computers spend far more time waiting for I/O than processing data.

    Simultaneous means "at the same instant." No processor can execute two or more programs simultaneously. Concurrent means "over the same time period." A processor can certainly execute two or more programs concurrently, and so do more work in the same amount of elapsed time. However, while a mainframe's resources are substantial, they are limited and, with two or more concurrent users, conflicts over processor time, memory space, and peripheral device allocation are inevitable. When these conflicts occur, they must be resolved. Because the operating system acts as the hardware/software interface, it is the ideal place to implement resource management.



     Programs A, B, and C each alternate between run and wait states; while
     one program waits, another runs, and the system as a whole waits only
     when all three programs are waiting at once.

    With more than one program in memory, much of the computer's wait time can be utilized.

6.1.3 Partitions, Segmentation, Paging, Virtual Memory

    If memory is to hold multiple programs, memory space must be managed. The simple approach, fixed-partition memory management (Figure below), divides the available space into fixed-length partitions, each of which can hold one program. Partition sizes are generally set when the system is booted, so the memory allocation decision is made before the actual amount of space needed by a given program is known.

     Operating system | Partition A | Partition B | Partition C | Partition D

    Under fixed-partition memory management, the available memory space is divided into a series of fixed-length partitions.

    Imagine a 32K program. If the partition size is 256K, fully 224K will be wasted when that program runs. Fixed-partition memory management wastes space; its major advantage is its simplicity.

    Under dynamic memory management, the transient area is treated as a pool of unstructured free space. When the system decides to load a particular program, a region of memory just sufficient to hold it is allocated from the pool. Because a program gets only the space it needs, relatively little space is wasted.

    Dynamic memory management does not completely solve the wasted space problem. Assume, for example, that a 120K program has just finished executing (Figure below). If there are no 120K programs available, the system might load a 60K program and a 50K program. Note that 10K remains unallocated. If there are no programs of 10K or smaller available, the space will simply not be used. Over time, little chunks of unused space will be spread throughout memory, creating a fragmentation problem.

     Operating system | other programs | 60K program | 50K program |
     10K fragment | other programs (the 60K program, 50K program, and 10K
     fragment were previously a single 120K region)

    Over time, dynamic memory management leaves small fragments of unused space spread throughout memory.
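    The 120K example above can be sketched as a simple first-come allocation from a freed region. The `allocate` helper is an invented illustration, not part of any real memory manager:

```python
# A sketch of the dynamic-allocation example above: a 120K region is
# freed, then waiting programs are carved from it in order; whatever no
# program fits into is left behind as a fragment.
def allocate(region_size, program_sizes):
    """Carve programs out of a free region; return (loaded, leftover fragment)."""
    loaded, free = [], region_size
    for size in program_sizes:
        if size <= free:
            loaded.append(size)
            free -= size
    return loaded, free
```

    With a 60K and a 50K program waiting, the 120K region yields a 10K fragment, exactly as in the figure.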

    An alternative is breaking a program into logical segments, assigning addresses relative to segment entry points, and loading the segments into noncontiguous memory (Figure below).

     Operating system | other programs | Program A, segment 1 (10K) |
     other programs | Program A, segment 2 (20K) | other programs

    Another approach is dividing programs into fixed-length pages, addressing memory relative to the start of a page, and loading pages into noncontiguous memory. The basic difference between segmentation and paging is that segment size can vary to match the logic of the program, while page size is fixed. The disadvantage of paging is that break points are arbitrary relative to the program logic. The advantage is that all pages are the same size, which simplifies memory management.
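    The chapter summary calls the conversion of page-relative addresses to absolute form dynamic address translation, using a page table. A minimal sketch, assuming a 4K page size and a plain dictionary standing in for the page table:

```python
# A sketch of page-style dynamic address translation: a program-relative
# address is split into a page number and a displacement, and the page
# table maps each page to a real-memory frame of the same fixed size.
PAGE_SIZE = 4096  # an assumed fixed page length

def translate(relative_addr, page_table):
    """Convert a program-relative address to an absolute address."""
    page, offset = divmod(relative_addr, PAGE_SIZE)
    frame = page_table[page]          # pages may sit in noncontiguous frames
    return frame * PAGE_SIZE + offset
```

    Because every page is the same length, the split is pure arithmetic; with variable-length segments, a segment table of (base, length) entries would be needed instead.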

    If a processor can execute only one instruction at a time, why must the entire program be in memory before processing can begin? Under a virtual memory system, programs are divided into pages or segments and stored on disk. Subsequently, only active modules are read into memory.

    By swapping pages between secondary storage and real memory, virtual memory management allows a computer to concurrently execute many more programs than could possibly be supported by the available real memory (more efficient processor utilization).

    An additional benefit of virtual memory is that it gives the programmer the illusion of almost unlimited memory. Many experts view the relaxation of space constraints as the primary advantage of virtual memory.
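    The swapping just described is demand paging (the term is defined in the chapter summary). The sketch below counts page faults against a small real-memory page pool; the FIFO replacement policy is an assumption made for illustration, not something the text prescribes:

```python
# A sketch of demand paging: pages live on the external paging device and
# are read into a small real-memory page pool only when referenced. A
# reference to a page not in the pool is a page fault.
from collections import deque

def run(references, pool_size):
    """Return the number of page faults for a page reference string."""
    pool, faults = deque(), 0
    for page in references:
        if page not in pool:
            faults += 1                      # page fault: swap the page in
            if len(pool) == pool_size:
                pool.popleft()               # evict the oldest page (FIFO)
            pool.append(page)
    return faults
```

    When the fault count climbs toward the length of the reference string, the system is spending most of its time swapping; that is the thrashing the summary warns about.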



6.1.4 Memory Protection

    With multiple programs sharing memory, it is possible for one program to destroy the contents of memory space belonging to another, so the active programs must be protected from each other.

    Generally, the operating system keeps track of the space assigned to each program. If a program attempts to modify (or, sometimes, even to read) the contents of memory locations that do not belong to it, the operating system's memory protection logic intervenes and (usually) terminates the program.
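    A minimal sketch of the check implied above, assuming each program's assigned space is recorded as a hypothetical (base, length) pair:

```python
# A sketch of memory-protection logic: the operating system records each
# program's assigned region and intervenes on any access outside it.
class ProtectionFault(Exception):
    """Raised when a program touches memory that does not belong to it."""

def check_access(program_regions, program, address):
    """Allow the access, or raise (and, in a real system, terminate the program)."""
    base, length = program_regions[program]
    if not (base <= address < base + length):
        raise ProtectionFault(f"{program} touched address {address}")
    return True
```

    Real hardware performs this comparison on every memory reference, typically with base/limit registers, so no software check is needed on the fast path.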

6.2 Multiprogramming

    Multiprogramming is a common approach to resource management. Originally developed to support batch processing applications, multiprogramming operating systems take advantage of the extreme speed disparity between a computer and its peripheral devices. Traditionally, the key measures of effectiveness are throughput (run time divided by elapsed time) and turnaround (the time between job submission and job completion).

    The essential components of a single-user operating system include a command processor, an input/output control system, a file system, and a transient area. A multiprogramming operating system builds on this base, subdividing the transient area to hold several independent programs and adding resource management routines to the operating system's basic functions.

6.2.1 Managing the Processor’s Time – the Dispatcher

    Imagine several programs occupying memory. Some time ago, program A requested data from disk (Figure below). Because it was unable to continue until the input operation was completed, it dropped into a wait state and the processor turned to program B. Assume the input operation has just been completed. Both programs are now in a ready state; in other words, both are ready to resume processing. Which one gets the processor? Computers are so fast that a human operator cannot effectively make such real-time choices. Instead, the processor's time is managed by an operating system routine called the dispatcher.


     Program A: run, then wait, then ready.
     Program B: wait, then run, then ready.


    Consider a system with two partitions: foreground and background. The dispatcher checks the program in the foreground partition first; if the program is ready, it gets control. Only if the foreground program is still in a wait state does the dispatcher check the background partition. The foreground has high priority; the background has low priority.

    This idea can be extended to larger systems, with the dispatcher checking partitions in a fixed order until a ready-state program is found. The first partition checked has the highest priority; the last has the lowest. The only way a low-priority program can get control is if all the higher-priority partitions are in a wait state.
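    The fixed-order scan described above can be sketched in a few lines (the partition list and state strings are illustrative assumptions):

```python
# A sketch of the fixed-order dispatcher: partitions are checked front to
# back, so the first ready program found has, by definition, the highest
# priority. If every partition is waiting, the system simply waits too.
def dispatch(partitions):
    """partitions: list of (name, state) in priority order; return chosen name."""
    for name, state in partitions:
        if state == "ready":
            return name          # highest-priority ready program gets control
    return None                  # every partition is waiting: the system idles
```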


Operating Systems DPR355 Revision 2

    OS Chapter 6 February 2004

6.2.2 Control Blocks

    There are a number of control fields that must be maintained in support of each active program. Often, a control block is created to hold a partition's key control flags, constants, and variables (Figure below).

    It is also possible to implement more complex priority schemes. For example, a program's priority can be set by a job control or command language statement, or computed dynamically, perhaps taking into account such factors as program size, time in memory, peripheral device requirements, and other measures of the program's impact on system resources. The dispatcher can then search control blocks in priority order.

     Operating system
       Partition A control block -> Partition A
       Partition B control block -> Partition B
       Partition C control block -> Partition C
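    A dynamically computed priority search over control blocks might look like the following sketch. The weighting of size against time in memory is an invented illustration, not a real operating system's formula:

```python
# A sketch of dynamic priority: each control block carries the fields the
# dispatcher needs, and priority is recomputed from them on every search.
def priority(block):
    # Illustrative rule: favor programs that have waited longest in memory,
    # and penalize large programs that tie up more space.
    return block["time_in_memory"] - block["size_kb"] // 10

def select(blocks):
    """Search the control blocks and pick the highest-priority ready program."""
    ready = [b for b in blocks if b["state"] == "ready"]
    return max(ready, key=priority)["name"] if ready else None
```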

6.2.3 Interrupts

    A program normally surrenders control of the processor when it requests an I/O operation and is eligible to continue when that operation is completed. Consequently, the key to multiprogramming is recognizing when input or output operations begin or end. The operating system knows when these events occur because they are marked by interrupts.

    An interrupt is an electronic signal. Hardware senses the signal, saves key control information for the currently executing program, and transfers control to the operating system's interrupt handler routine. At that instant, the interrupt ends. The operating system then handles the interrupt, and the dispatcher, subsequently, starts an application program. Eventually, the program that was executing at the time of the interrupt resumes processing.

    Note that interrupts can originate in either hardware or software. A program issues an interrupt to request the operating system's support (for example, to start an I/O operation). Hardware issues interrupts to notify the processor that an asynchronous event (such as the completion of an I/O operation or a hardware failure) has occurred. Other types of interrupts might signal an illegal operation (a zero divide) or the expiration of a preset time interval.

    Interrupts mark events. In response, hardware transfers control to the interrupt handler routine, which performs the appropriate logical functions and, if necessary, resets the affected program's state. Following each interrupt, the dispatcher starts the highest-priority ready program. This is the essence of multiprogramming.



6.3 Time-sharing

    In interactive processing, users enter brief transactions through relatively slow keyboard terminals. Programs are usually small and process relatively little data, so in such situations the processor's time is shared among multiple programs.

6.3.1 Roll-In/Roll-Out

    As a transaction is processed, the system knows that considerable time will pass before that user's next transaction arrives, so the work space can be rolled out to secondary storage, making room for another application in memory. Later, when the first user's next transaction arrives, his or her work space is rolled back in. Most time-sharing systems use such roll-in/roll-out (or swap-in/swap-out) techniques to manage memory space.

6.3.2 Time-Slicing

    While a transaction is being processed, the other users on the system must wait, and, given the objective of maintaining good response time, that is intolerable. The solution is time-slicing. Each program is restricted to a maximum "slice" of time, perhaps 0.01 second. Once a program gets control, it runs until one of two things happens. If the program requires input or output before exhausting its time slice, it calls the operating system and "voluntarily" drops into a wait state, just like a multiprogramming application. If, however, the program uses up its entire time slice, a timer interrupt transfers control to the operating system, which selects the next program.
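    The two ways a time slice can end are easy to sketch (the 10-tick slice and the `run_once` helper are illustrative assumptions):

```python
# A sketch of time-slicing: a program runs until it either reaches an I/O
# request (surrendering control voluntarily) or exhausts the slice, at
# which point a simulated timer interrupt takes control away.
SLICE = 10  # maximum time slice, in arbitrary ticks

def run_once(ticks_until_io):
    """Run one program for at most one slice; return (time used, why it stopped)."""
    if ticks_until_io <= SLICE:
        return ticks_until_io, "voluntary wait"   # asked for I/O within the slice
    return SLICE, "timer interrupt"               # slice exhausted; OS picks next
```

    Either way, control returns to the operating system within one slice, so no single program can monopolize the processor.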

6.3.3 Polling

    Often, a polling algorithm is used to determine which program is activated next. Imagine a table of program control blocks. Starting at the top, the operating system's dispatcher checks program 1's status. If it is ready, it gets control. If not, the dispatcher moves on to the second control block.

    Assume that program 2 is ready. It begins executing and then, one time slice later, surrenders control again, so the dispatcher must select another program. Because the last program to have control was number 2, polling begins with the third table entry; note that program 2 is now at the end of the line. Eventually the dispatcher works its way through the entire table. At this point, it returns to the top and repeats the process. Program 2 will get another shot only after every other program has had a chance.
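    Round-robin polling over the table maps naturally onto a rotating queue. The table contents and state strings below are illustrative assumptions:

```python
# A sketch of round-robin polling over a table of control blocks: each
# entry checked moves to the back of the table, so a program that just ran
# ends up at the end of the line, exactly as described above.
from collections import deque

def poll(table, states):
    """Rotate through the table until a ready program is found; return it."""
    for _ in range(len(table)):
        name = table[0]
        table.rotate(-1)              # whoever was checked goes to the back
        if states[name] == "ready":
            return name
    return None                       # nobody is ready this pass
```

    After `poll` picks program 2, the table continues from program 3, and program 2 must wait a full cycle for its next turn.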

    There are alternatives to simple round-robin polling. Two (or even more) tables can be maintained, with high-priority programs on the first and background or low-priority routines on the second.

6.4 Deadlock

    Deadlock is one possible consequence of poor resource management.

    Imagine, for example, that two programs need data from the same disk. Program A issues a seek command and drops into a wait state. Subsequently, program B gets control and issues its own seek command. Eventually, the first seek operation is completed and control returns to A, which issues a read command. Unfortunately, the second seek command has moved the access mechanism, so A reissues its seek command and once again drops into a wait state. Soon B issues its read command, discovers that the access mechanism is in the wrong place, and reissues its seek command.

    Deadlock is not limited to peripheral devices; it happens when two or more programs each control a resource needed by the other.

    One solution is for the operating system to refuse to load a program until all the resources it needs (memory space, peripheral devices, etc.) can be guaranteed. Other operating systems allow deadlocks to occur, sense them, and take corrective action.
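    The "sense deadlock" approach can be sketched as cycle detection over a wait-for graph; the graph representation here (each program maps to the one program it is waiting on) is a simplifying assumption:

```python
# A sketch of deadlock detection: build a wait-for graph mapping each
# program to the program holding the resource it needs. A cycle in the
# graph means a set of programs is deadlocked.
def is_deadlocked(waits_for):
    """Detect a cycle in a wait-for graph given as {waiter: holder}."""
    for start in waits_for:
        seen, node = set(), start
        while node in waits_for:
            if node in seen:
                return True       # we came back around: deadlock
            seen.add(node)
            node = waits_for[node]
    return False
```

    In the disk example above, A waits for B's seek position and B waits for A's, giving the two-node cycle the detector reports.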



6.5 Scheduling and Queuing

    Processor management is concerned with the internal priorities of programs already in memory. A program's external priority, its claim to be loaded into memory in the first place, is a different issue. As one program finishes processing and space becomes available, which program is loaded into memory next? This decision typically involves two separate modules, a queuing routine and a scheduler.

    As programs enter the system, they are placed on a queue by the queuing routine (Figure below). When space becomes available, the scheduler selects a program from the queue and loads it into memory.

     Programs -> queuing routine -> job queue

    a) When a program first enters a multiprogramming system, a queuing routine copies it to a queue.

     Job queue -> scheduling routine -> program partitions

    b) Later, when space becomes available, the scheduling routine loads a program from the queue into memory.

    Generally, the first program on the queue is loaded first, but more sophisticated priority rules can be used. Sometimes, more than one queue is maintained. Often, a program is placed on a particular queue based on its resource needs; for example, programs requiring magnetic tape or special printer forms can be separated from those calling for more normal setup. Grouping programs with similar resource needs simplifies scheduling.

6.6 Spooling

    Spooling is the process of storing a job (its data and program) on hard disk, both before input to and after output from the high-speed processor, in order to improve the efficiency of the system. It is needed mainly to compensate for the speed differential between the processor and the peripherals.
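    Output spooling can be sketched in a few lines (the list standing in for a disk spool file, and the two helper names, are illustrative assumptions):

```python
# A sketch of output spooling: print lines are written to fast disk as the
# program runs, then dumped to the slow printer later, when the processor
# has time, so the program is never held up by the slow device.
spool_file = []                 # stands in for a disk-resident spool file

def spool_write(line):
    spool_file.append(line)     # fast disk write; the program keeps running

def despool(printer):
    while spool_file:
        printer.append(spool_file.pop(0))   # slow device drained later
```

    Input works symmetrically: card or terminal input can be read to disk ahead of time and handed to the program at disk speed.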



6.7 Jobs and Tasks

    A group of related programs that supports a single application is called a job, and a single program (more accurately, a single load module) loaded on the computer and ready to execute is called a task. (Note: job and task are IBM terms; some other manufacturers use different words to describe these two basic units of work.)


    Memory management is concerned with managing the computer's available pool of memory, allocating space to application routines and making sure that they do not interfere with each other. Some operating system routines directly support application programs as they run and thus must be resident. Other transient routines are stored on disk and read into memory only when needed. Generally, the operating system occupies low memory. The remaining memory, called the transient area, is where application programs and transient operating system routines are loaded.

    On many modern systems, two or more programs are loaded into memory and execute concurrently. Fixed-partition memory management divides memory into fixed-length partitions and loads one program into each one. Greater efficiency can be achieved by using dynamic memory management and dividing the transient area into variable-length regions, but the inability to fully utilize available space can lead to fragmentation. Under segmentation, a program is broken into variable-length segments that are independently loaded into noncontiguous memory. Segmented addresses are converted to absolute form by using a segment table through a process called dynamic address translation. Paging is similar to segmentation except that pages are fixed in length and dynamic address translation uses a page table. With segmentation and paging, programs are broken into logical segments and the segments subdivided into pages. Most operating systems incorporate memory protection features that prevent one program from destroying the contents of another's memory space. With overlay structures, a program is broken into logically independent modules, and only those modules that are actually active are loaded into memory.

    On a virtual memory system, programs are loaded on the external paging device (usually disk), and individual pages are paged-in to real memory as needed. Virtual memory is a logical model that contains everything traditionally stored in main memory, including the operating system and application programs. Real memory holds the operating system and a page pool. Virtual memory space above and beyond the available real memory contains the application programs. Physically, they are stored on an external paging device, which is divided into partitions or regions. Pages are swapped between the real memory page pool and the external paging device. If a referenced page is not in real memory, a page fault is recognized and the needed page is swapped in. Bringing pages into memory only when they are referenced is called demand paging. An option called pre-paging involves predicting the demand for a new page and swapping it into memory before it is actually needed. Excessive paging can lead to thrashing, which degrades system performance.

    Multiprogramming is one technique for managing the processor when multiple, concurrent programs occupy memory. Programs that are waiting for system resources (for example, waiting for the completion of an I/O operation) are placed (by the operating system) into a wait state. Programs that are ready to process are placed into a ready state. An operating system routine called the dispatcher selects the next ready program, often by following a linked list of program control blocks. Interrupts mark the beginning and end of input and output operations, and the appropriate program's state is reset by the interrupt handler routine. Following each interrupt, the dispatcher starts the highest priority ready state program.

    Time-sharing is used for interactive applications. Because the time between successive transactions is relatively lengthy, memory space can be managed by roll-in/roll-out techniques. To eliminate the risk of one program tying up the system and forcing all other users to wait, time-slicing is used to manage the processor's time. Often, the dispatcher follows a polling algorithm to determine which program to start next. When a program first enters a system, it might be stored on a queue by a queuing routine. Later, when space becomes available, a scheduler selects the next program from the queue and loads it into memory. With spooling, data intended for a slow output device such as a printer are written instead to a high-speed device such as disk and later dumped to the slow device when the processor has time. Input data can be spooled to disk before being processed, too.

    Deadlock occurs when two programs each control a resource needed by the other but neither is willing to give up its resource. Some operating systems are designed to prevent deadlock. Others sense deadlock and take corrective action.

Key Words

    control block, deadlock, demand paging, dispatcher, dynamic address translation, dynamic memory management, external paging device, fixed-partition memory management, fragmentation, interrupt, interrupt handler, memory management, memory protection, multiprogramming, overlay, page fault, page pool, page table, paging, partition, polling, pre-paging, queuing routine, ready state, real memory, region, resident, roll-in/roll-out, scheduler, segment table, segmentation, segmentation and paging, spooling, thrashing, time-sharing, time-slicing, transient, transient area, virtual memory, wait state


Review Questions

    1. Distinguish between resident and transient routines. What is the transient area?

    2. Distinguish between concurrent and simultaneous. A single processor can execute two or more programs concurrently but not simultaneously. Why?

    3. Distinguish between fixed-partition memory management and dynamic memory management.

