
2 Embedded Operating System

In this chapter, I will discuss the following questions related to the embedded operating system: (1) an outline of the embedded operating system; (2) the functions of the embedded operating system; (3) several widely used embedded operating systems.

2.1 An outline of the embedded operating system

An embedded operating system is an operating system that supports embedded application programs. More precisely, it is a set of programs that controls and manages the hardware and software resources of an embedded system and provides convenient services to the application programmer.

2.1.1 Why we need an embedded operating system

Not every embedded system needs an embedded operating system. Considering efficiency and cost, embedded systems with simple functions often do without one. However, when the complexity of the embedded software reaches a certain level and the hardware has enough processing capacity, the embedded system needs an embedded operating system. Generally speaking, an embedded system should be controlled by an embedded operating system when it has the following requirements.

1. The system needs to execute multiple tasks. An embedded system with simple functions may only run a few fixed tasks; in that case the tasks can manage the hardware themselves and coordinate with each other directly, as long as the program does not grow too complicated to maintain. When an embedded system must run many tasks whose relationships are complicated, it needs an operating system. The operating system uses a scheduling algorithm to run the tasks, supports communication between tasks, and manages the hardware on behalf of the tasks, freeing the developers from a great deal of tedious work (a brief sketch follows this list).

2. The system needs an intuitive user interface. Embedded systems that interact with the user need a graphical user interface, which is supported by the operating system.

3. The system needs networking capability. Supporting network functions without an operating system is not impossible; for example, the TCP/IP protocol can be implemented in a hardware chip. However, such chips increase cost, and network protocols are upgraded frequently while a hardware chip cannot be upgraded along with them. In an embedded computer system with an operating system, you can configure the network protocols to match different network environments, and it is easy to keep up with protocol updates.

4. The system needs a database management system. In a mobile computing environment, some mobile information devices need an embedded mobile database management system to solve their data management problems. Laptops, PDAs, automotive equipment, smart phones and other embedded systems often have such needs. Sometimes a real-time embedded database management system is needed for real-time data collection and processing. These embedded database management systems cannot work without the support of an operating system.

5. The system needs to be updated and needs secondary development. If the developers want to carry out secondary development, an embedded operating system is a wise choice. An embedded operating system provides a series of API interfaces to developers. By developing against these interfaces, a lot of trivial work can be avoided; this not only greatly improves the efficiency of embedded-system development but also makes embedded application software easier to port.
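As an illustration of point 1, the sketch below creates two tasks with different priorities and lets the kernel's scheduler share the CPU between them. It uses the FreeRTOS API purely as an example; the chapter does not prescribe any particular kernel, and the task bodies are placeholders.

    #include "FreeRTOS.h"
    #include "task.h"

    /* higher-priority task: pretend to sample a sensor every 10 ms */
    static void vSensorTask(void *pvParameters)
    {
        for (;;) {
            /* read the sensor here ... */
            vTaskDelay(pdMS_TO_TICKS(10));
        }
    }

    /* lower-priority task: pretend to write a log record every 100 ms */
    static void vLoggerTask(void *pvParameters)
    {
        for (;;) {
            /* flush the log here ... */
            vTaskDelay(pdMS_TO_TICKS(100));
        }
    }

    int main(void)
    {
        xTaskCreate(vSensorTask, "sensor", configMINIMAL_STACK_SIZE, NULL, 2, NULL);
        xTaskCreate(vLoggerTask, "logger", configMINIMAL_STACK_SIZE, NULL, 1, NULL);
        vTaskStartScheduler();   /* hand the CPU over to the scheduler */
        for (;;) { }             /* never reached if the scheduler starts */
    }

The scheduler, not the application, decides when each task runs; that is exactly the burden the operating system takes off the developer.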

2.1.2 Embedded operating systems and real-time operating systems

Nearly all early embedded systems were built for control purposes, so they more or less had real-time requirements; at that time, "embedded operating system" was in effect a synonym for RTOS (real-time operating system). In recent years, devices such as PDAs have produced many embedded systems without real-time requirements. Against this background, "embedded operating system" and "real-time operating system" have become two different concepts.

Before defining the real-time operating system, we need to know what a real-time system is. Simply speaking, a real-time system is a system that meets the following requirement: when an external event arrives, the computer can handle it immediately and complete the handling within a specified time, while the arrival times of external events are completely random and follow no regular pattern. For a real-time task to run correctly, the result must not only be correct, it must also be produced within the specified time. We call the former the correctness of function and the latter the correctness of time; for a real-time system, the two kinds of correctness are equally important.

According to the requirements on response time, real-time systems can be divided into hard and soft real-time systems. A hard real-time system has a rigid, unchangeable time limit and does not tolerate any deadline miss; missing a deadline leads to system failure or prevents the system from achieving its intended objectives. The time limit of a soft real-time system is flexible, and it can tolerate occasional deadline misses; the consequence is not serious, it merely reduces the throughput of the system.

Having defined the real-time system, we return to the real-time operating system, which can be defined as follows: a real-time operating system is an operating system that has real-time capability and can support the work of real-time control systems. It must be able to guarantee that real-time tasks complete within their predetermined times. Its chief job is to schedule all available resources so that the real-time tasks are completed, and only then to improve the efficiency of the system as a whole.

The relationship between the embedded operating system and the real-time operating system is shown in Chart 2.1. As Chart 2.1 shows, most embedded operating systems are real-time operating systems, and most real-time operating systems are also embedded operating systems; there is a large intersection between the two. But there are also real-time operating systems that are not suitable for embedded use, and vice versa. We call the intersection the real-time embedded operating system.
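To make the two kinds of correctness concrete, the short sketch below checks both for a single job. The helpers now_us() and do_job() are hypothetical placeholders for a monotonic clock and the actual work; the sketch only illustrates the idea that a real-time job must satisfy both checks.

    /* a minimal sketch: a real-time job is correct only if the result is right
       AND it finishes within its deadline (hypothetical helpers) */
    long now_us(void);   /* monotonic time in microseconds (assumed to exist)   */
    int  do_job(void);   /* performs the work, returns nonzero if result is ok  */

    int job_is_correct(long deadline_us)
    {
        long start = now_us();
        int  functional_ok = do_job();                 /* correctness of function */
        long elapsed = now_us() - start;
        int  temporal_ok = (elapsed <= deadline_us);   /* correctness of time     */
        return functional_ok && temporal_ok;           /* both are required       */
    }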


Chart 2.1 The relationship between the embedded operating system and the real-time operating system

In how it allocates and uses CPU time, the real-time embedded operating system differs greatly from a general-purpose operating system. The differences are mainly reflected in the following areas:

(1) For a general-purpose operating system, guaranteeing overall efficiency is the main goal; if necessary, it will sacrifice the response speed of an individual task to improve overall efficiency. The real-time embedded operating system is just the opposite: if necessary, it will sacrifice overall efficiency to improve the response speed of an individual task.

(2) For a general-purpose operating system, fairness is very important: when needed, it will take resources away from tasks that occupy many resources and give them to tasks that occupy few. For a real-time embedded operating system, the execution of high-priority tasks matters more: if necessary, it will take resources away from low-priority tasks to ensure that the high-priority tasks keep running.

(3) Accordingly, a general-purpose operating system is analyzed statistically, in terms of average behaviour, while a real-time embedded operating system is analyzed in terms of the worst case. A general-purpose operating system should make full use of the CPU's processing power, but a real-time operating system keeps the CPU's processing capacity in over-supply: the CPU runs under light load to guarantee the response speed.

2.1.3 The main performance indicators of an embedded operating system

The embedded operating system plays an important role in a real-time system; its performance directly influences the performance of the whole system. Various quantitative indicators provide an objective basis for evaluating the performance of an embedded operating system. There are two types of indicator: time performance indicators and storage indicators.

1. Time performance indicators. An embedded operating system mainly has the following time performance indicators:
(1) the interrupt latency time;
(2) the maximum time of interrupt inhibition (interrupt-disable time);
(3) the interrupt response time;
(4) the interrupt recovery time;
(5) the task context switch time;
(6) the task response time;
(7) the execution time of system calls.
"Task context switch time" and "maximum time of interrupt inhibition" are the two most important technical indicators for evaluating the real-time performance of an embedded operating system. If we need to compare the time performance of different embedded operating systems, we must make sure that the measured data are comparable. In general, we should consider the following two factors:
(1) the hardware environment of the test, such as CPU speed, memory access speed, the size of RAM, the cache size and whether the cache is enabled; the indicators should be measured on the same equipment;
(2) when we compare "the execution time of system calls" of two systems, we should ensure that the compared calls are functionally equivalent.

2. Storage indicators. In embedded systems, the size of the storage space is also a very important issue. Even though memory prices keep dropping, cost and power considerations mean that the storage space of an embedded system is generally not very large. Into this limited space we must load not only the embedded operating system but also the application software. Therefore, when designing and developing an embedded operating system, besides the time performance mentioned above, the storage cost of the operating system must also be considered; this is an obvious difference between embedded operating systems and other operating systems.

2.1.3.1 The timetable of interrupt processing

Many of the time performance indicators of an embedded operating system concern interrupts. To understand these indicators better, we first introduce the timetable of interrupt processing. An interrupt is an asynchronous signal from hardware indicating the need for attention, or a synchronous event in software indicating the need for a change in execution. A hardware interrupt causes the processor to save its state of execution via a context switch and begin executing an interrupt handler. As shown in Figure 2.2, processors generally allow interrupt nesting; in other words, while handling an interrupt A, the processor can recognize a more important interrupt B, suspend A and handle B first.

Figure 2.2 Interrupt nesting

We now give the interrupt timing maps of three different operating systems: the foreground & background scheduling operating system, the non-preemptive scheduling operating system, and the preemptive scheduling operating system. To simplify the issue and focus on the key points, we do not consider interrupt nesting here.

1. The interrupt timing map of the foreground & background scheduling operating system. As shown in Figure 2.3, in the foreground & background scheduling operating system, after the interrupt service routine finishes, execution continues with the background program.


Figure 2.3 The interrupt timing map of the foreground & background scheduling operating system

2. The interrupt timing map of the non-preemptive scheduling operating system. As shown in Figure 2.4, in the non-preemptive scheduling operating system, after the interrupt service routine finishes, execution continues with the interrupted program.

Figure 2.4 The interrupt timing map of the non-preemptive scheduling operating system

3. The interrupt timing map of the preemptive scheduling operating system. The interrupt timing map of the preemptive scheduling operating system is shown in Figure 2.5. In the preemptive scheduling operating system, after the interrupt service routine finishes, the highest-priority task runs next. This may be the previously interrupted task or a new task that became ready during the execution of the interrupt service routine (ISR). The interrupt return therefore has two possible situations, A and B: one is to continue the previously interrupted task, the other is to run a new task. In the latter case the recovery time is longer, because the operating system needs to switch tasks.


Figure 2.5 The interrupt timing map of the preemptive scheduling operating system

The steps of interrupt processing are as follows (a code sketch of the ISR skeleton follows the list):

(1) The interrupt has arrived but has not yet been recognized by the CPU, perhaps because the CPU has not finished the current instruction, or because the operating system or the user has disabled interrupts.
(2) The CPU finishes the current instruction and, with interrupts enabled, responds to the interrupt.
(3) In its response cycle, the CPU reads the interrupt vector and jumps to the interrupt service routine.
(4) The interrupt service routine saves the CPU context, for example the contents of the registers.
(5) The interrupt service routine calls the interrupt entrance function of the operating system to notify the operating system that interrupt handling has begun. This entrance function adds one interrupt-nesting layer.
(6) The user's interrupt service code executes: it actually services the device that raised the interrupt. This code depends entirely on the application program itself.
(7) After the user's interrupt service code has run, the interrupt service exit function is called to inform the operating system that the system is leaving the interrupt, and one nesting layer is removed. When the nesting count reaches 0, all interrupts have been processed and the exit function invokes the scheduler. There are two situations. In one, the task that was interrupted earlier is still the highest-priority task, so the system continues with it and no task switch is needed. In the other, the previously interrupted task is no longer the highest-priority task, so the system switches tasks and runs the new one. Several things can lead to the second case: the interrupt service routine, or another nested interrupt routine, may have made a higher-priority task ready, or the state of the previously interrupted task may have changed. From the timing maps above we can see that in the second case the exit function takes longer to execute.
(8) Without a task switch, the CPU context is simply restored to the original context.
(9) The interrupt return instruction is executed.
(10) If, as in step (7), a task switch is needed, then after the user's interrupt service code the interrupt service exit function of the operating system is called and its execution takes longer.
(11) If a task switch is needed, the context of the new task is restored into the CPU.
(12) The interrupt return instruction is executed.
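The sketch below mirrors steps (4)-(9) in C. All names (os_int_enter, os_int_exit, save_cpu_context, and so on) are hypothetical stand-ins for whatever a concrete kernel provides; real kernels expose similar entry/exit hooks, but the exact API is not specified by this chapter.

    #include <stdio.h>

    static volatile unsigned int int_nesting = 0;      /* interrupt-nesting counter */

    /* hypothetical stubs so the sketch is self-contained */
    static void save_cpu_context(void)    { puts("save registers");     }  /* step (4) */
    static void restore_cpu_context(void) { puts("restore registers");  }  /* step (8) */
    static void service_device(void)      { puts("service the device"); }  /* step (6) */
    static void schedule(void)            { puts("run highest-priority ready task"); }

    static void os_int_enter(void) { int_nesting++; }  /* step (5): one more nesting layer */

    static void os_int_exit(void)                      /* step (7): one layer removed      */
    {
        if (--int_nesting == 0) {
            schedule();   /* all interrupts processed: keep the old task or switch tasks */
        }
    }

    /* the body a device ISR executes once the CPU has vectored to it (step 3) */
    void device_isr(void)
    {
        save_cpu_context();      /* step (4)                                   */
        os_int_enter();          /* step (5)                                   */
        service_device();        /* step (6): user interrupt service code      */
        os_int_exit();           /* step (7): may invoke the scheduler         */
        restore_cpu_context();   /* step (8), or step (11) after a task switch */
        /* step (9)/(12): the interrupt return instruction would execute here  */
    }

    int main(void) { device_isr(); return 0; }          /* simulate one interrupt */

On real hardware, saving and restoring the context and the interrupt return are done by the processor and the compiler's ISR prologue/epilogue rather than by ordinary function calls; the stubs here simply keep the order of the steps visible.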

2.1.3.2 Interrupt latency time

Interrupt latency is the time between the generation of an interrupt by a device and the start of servicing of that device. For many operating systems, a device is serviced as soon as its interrupt handler executes. Interrupt latency may be affected by interrupt controllers, interrupt masking, and the operating system's interrupt handling methods.

The maximum time of interrupt inhibition (the longest time interrupts stay disabled) affects the interrupt latency. Before the system enters a critical code section, it disables interrupts; the longer interrupts stay disabled, the longer the interrupt latency, and interrupts may even be lost. The latency can be expressed as follows:

Interrupt latency time = maximum time of interrupt inhibition + interrupt nesting time + time between the hardware raising the interrupt and the execution of the first ISR (interrupt service routine) instruction

The "time between the hardware raising the interrupt and the execution of the first ISR instruction" is determined by the hardware. The "interrupt nesting time" is related to the specific application: different applications may have different numbers of simultaneously nested layers, and each interrupt service routine has a different execution time, so this period may be uncertain. Because an interrupt is an asynchronous external event, we cannot know when it will happen or what state the system will be in: whether interrupts are disabled and, if so, how long they have been disabled and how long they will remain disabled. We therefore use the maximum time of interrupt inhibition when bounding the latency.

2.1.3.3 The maximum time of interrupt inhibition

The maximum time of interrupt inhibition depends on two factors: the interrupt-disable time introduced by the operating system and the interrupt-disable time introduced by the application. The maximum is the larger of the two:

Maximum time of interrupt inhibition = max [ max (operating-system-related interrupt inhibition), max (application-related interrupt inhibition) ]

How can the interrupt-disable time be reduced? This is an important problem that needs careful consideration. For example, if we disable interrupts at the very start of each system call, using such a coarse-grained approach to mutual exclusion to protect critical code, the interrupt-disable time becomes long. In fact, careful analysis of the operating system code shows that it contains non-critical code sections; if interrupts are allowed in these regions, we add preemption points and significantly shorten the interrupt-disable time.
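The following fragment sketches the "more preemption points" idea just described. irq_disable() and irq_enable() are hypothetical wrappers around the processor's interrupt mask; the point is simply that splitting one long critical section into two short ones bounds the interrupt-disable time by the longer piece rather than by their sum.

    /* hypothetical interrupt-mask wrappers */
    void irq_disable(void);
    void irq_enable(void);

    volatile int counter_a, counter_b;

    /* coarse-grained: interrupts stay disabled across both updates */
    void update_counters_coarse(void)
    {
        irq_disable();
        counter_a++;
        counter_b++;
        irq_enable();
    }

    /* finer-grained: a preemption point between the two critical regions,
       so pending interrupts can be serviced in the gap */
    void update_counters_fine(void)
    {
        irq_disable();
        counter_a++;
        irq_enable();      /* preemption point */

        irq_disable();
        counter_b++;
        irq_enable();
    }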

2.1.3.4 Interrupt response time

The interrupt response time is the time between the occurrence of the interrupt event and the start of the corresponding interrupt service routine. It differs from the interrupt latency time, which is the time between the generation of the interrupt by a device and the servicing of that device.

For the foreground & background scheduling and the non-preemptive scheduling operating systems, after saving the CPU context (mainly the contents of the internal registers) the system immediately executes the user's interrupt service code, and the interrupt response time is given by the following expression:

Interrupt response time = interrupt latency time + time for saving the CPU internal registers

For the preemptive scheduling operating system, before handling the interrupt, the system must do some extra processing to ensure that it can later return to normal work: it needs to call a specific function, the entrance function of the operating system's interrupt service mentioned earlier. This function informs the operating system that interrupt service is starting and lets the operating system track interrupt nesting so that it can reschedule after the interrupt. The system usually provides these entrance functions and the corresponding exit functions to the users, and the user can choose whether to call them in the ISR. The interrupt response time is then given by the following expression:

Interrupt response time = interrupt latency time + time for saving the CPU internal registers + execution time of the entrance function

Like the interrupt latency time, the response time quoted for a system should be the worst-case interrupt response time. For example, if 99% of the responses in a system take less than 50 µs and a single response takes 250 µs, the interrupt response time of this system is 250 µs.

2.1.3.5 Interrupt recovery time

The interrupt recovery time is the time from the end of the corresponding interrupt service routine to the return to the interrupted code. For a preemptive scheduling system, because a task switch may occur, the interrupt recovery time can also mean the time between the end of the interrupt service routine and the start of the new task's code.

For the foreground & background scheduling and the non-preemptive scheduling operating systems, the interrupt recovery time is very simple: it only includes the time to restore the CPU context (mainly the contents of the internal registers) and the time to execute the return instruction. Without interrupt nesting, the expression is:

Interrupt recovery time = CPU context recovery time + execution time of the return instruction

For the preemptive scheduling operating system, the interrupt recovery time is more complex. At the end of the user's interrupt service routine, the system usually calls the exit function. The user can choose whether to use this function in the ISR, but it must be paired with the entrance function. The exit function determines whether all interrupts have finished nesting; if so, the operating system decides whether to return to the originally interrupted task or to switch to the highest-priority ready task. In this case the interrupt recovery time is:

Interrupt recovery time (preemptive scheduling) = execution time of the exit function + CPU context recovery time + execution time of the return instruction
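Putting the expressions from Sections 2.1.3.2-2.1.3.5 together, the small program below assembles worst-case figures from their components. All the numbers are invented for illustration; only the way the components add up comes from the text.

    #include <stdio.h>

    int main(void)
    {
        /* hypothetical worst-case measurements, in microseconds */
        double max_int_disable   = 8.0;  /* maximum interrupt inhibition (disable) time   */
        double nesting_time      = 5.0;  /* worst-case time spent in nested, higher ISRs  */
        double hw_to_first_instr = 1.2;  /* hardware interrupt to first ISR instruction   */
        double save_registers    = 0.8;  /* saving the CPU internal registers             */
        double restore_registers = 0.8;  /* restoring the CPU internal registers          */
        double entrance_function = 0.5;  /* kernel interrupt entrance function            */
        double exit_function     = 1.5;  /* kernel interrupt exit function (may schedule) */
        double return_instr      = 0.2;  /* the interrupt return instruction              */

        double latency = max_int_disable + nesting_time + hw_to_first_instr;

        printf("interrupt latency:                %.1f us\n", latency);
        printf("response (fg/bg, non-preemptive): %.1f us\n", latency + save_registers);
        printf("response (preemptive):            %.1f us\n",
               latency + save_registers + entrance_function);
        printf("recovery (non-preemptive):        %.1f us\n", restore_registers + return_instr);
        printf("recovery (preemptive):            %.1f us\n",
               exit_function + restore_registers + return_instr);
        return 0;
    }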

2.1.3.6 Interrupt processing time

The exact interrupt processing time is determined by the application; it is not an integral part of the operating system, but we still need a clear understanding of the processing time each interrupt requires. Although the interrupt processing time should be as short as possible, there is no absolute limit; the appropriate time depends on the particular interrupt service routine. In most cases, the user's interrupt service routine has to identify the interrupt source, read the data and status from the device that produced the interrupt, and notify the task that handles the interrupt. Of course, we should consider whether notifying the task would take more time than handling the interrupt directly. To notify a task, the interrupt service routine can use the synchronization and communication mechanisms provided by the operating system, such as semaphores or message queues, and this notification takes time (see the sketch below). If the interrupt handling itself is shorter than the notification, we should handle the interrupt inside the interrupt service routine and keep interrupts enabled during that time, so that higher-priority interrupts can be serviced first.
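As an example of the notification style, the fragment below posts one byte from an ISR to a queue that a task is blocked on. It assumes the FreeRTOS API (xQueueSendFromISR and portYIELD_FROM_ISR); with a different kernel the mechanism would differ, but the cost trade-off described above is the same.

    #include "FreeRTOS.h"
    #include "queue.h"

    /* created elsewhere with xQueueCreate(); assumed to hold single bytes */
    extern QueueHandle_t rx_queue;

    /* hypothetical register read for the device that raised the interrupt */
    extern unsigned char read_device_data(void);

    void device_rx_isr(void)
    {
        unsigned char byte = read_device_data();        /* get data/status from the device */
        BaseType_t woke_higher_priority_task = pdFALSE;

        /* notify the handling task; this call is the "notification cost" */
        xQueueSendFromISR(rx_queue, &byte, &woke_higher_priority_task);

        /* if a higher-priority task was unblocked, request a context switch on exit */
        portYIELD_FROM_ISR(woke_higher_priority_task);
    }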

2.1.3.7 Task context switch time

In a multi-task system, a context switch is the process of storing and restoring the context of the CPU so that multiple tasks can share a single CPU. The task context switch time includes the time to save the context of the running task, the scheduling time to choose the next task, and the time to restore the context of the task about to run. Task switching is a frequent operation in a real-time system, so its speed directly affects the real-time performance of the whole system.

Figure 2.6 The task context switch

As Figure 2.6 shows, the task context switch time has these three major components. The time to store and restore the context depends largely on how the task context is defined and on the processor speed, and different processors define the context differently. The CPU task context is the full contents of the registers; it is preserved in the task's state-preservation area, such as the task stack and the task control block (TCB). After saving the context of the currently running task, the operating system loads the saved state of the next task from its preservation area into the CPU registers and starts that task. Task switching imposes an extra load on the application: the more internal registers the CPU has, the heavier this load, so the task switch time is related to the number of CPU registers. In a system with a floating-point co-processor, the task context switch also has to save and restore the floating-point co-processor's contents, which is very time-consuming; when possible, the operating system can adopt an optimization strategy and avoid saving and restoring the floating-point contents on every switch.

Leaving hardware factors aside, the task context switch time is also related to the scheduling procedure. A hard real-time embedded operating system requires a scheduling procedure whose time is fixed and does not grow with the number of tasks in the system. The scheduling time is determined by the data structure used by the algorithm; for example, a priority bitmap is a data structure that keeps the scheduling time fixed, as sketched below.
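A minimal sketch of that idea: one bit per priority level marks whether a task of that priority is ready, so finding the highest ready priority is a single count-leading-zeros operation regardless of how many tasks exist. The helper uses the GCC/Clang builtin __builtin_clz; the data structure itself is the point, not this particular intrinsic.

    #include <stdint.h>

    /* bit i set  <=>  at least one task of priority i is ready (0..31) */
    static uint32_t ready_bitmap;

    static void mark_ready(unsigned prio)     { ready_bitmap |=  (1u << prio); }
    static void mark_not_ready(unsigned prio) { ready_bitmap &= ~(1u << prio); }

    /* O(1): the result does not depend on the number of tasks in the system */
    static int highest_ready_priority(void)
    {
        if (ready_bitmap == 0) {
            return -1;                          /* no task is ready */
        }
        return 31 - __builtin_clz(ready_bitmap);
    }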

2.1.3.8 Task response time

The task response time is the interval between the instant when the task-related interrupt is produced and the instant when the task actually begins to run; it is also known as the task scheduling delay. In a real-time system, a task often waits for some external interrupt to activate it. When an interrupt occurs, if the interrupt service routine makes ready a task with a higher priority than the currently running one, then after the current task is stopped this high-priority task is put into operation; the scheduling delay is the duration of this process.

The scheduling algorithm is a major factor in the operating system's scheduling delay. In an operating system that uses a preemptive priority-scheduling algorithm, the scheduling delay is relatively small, because as soon as the system state changes in a way that demands preemption, the scheduler is dispatched. Some operating systems do not react immediately when a task's status changes, so their scheduling delay is longer.

The scheduling delay is also affected by another factor: the prohibition of task switching, that is, closing the scheduler. Closing the scheduler is a mutual-exclusion technique: while the scheduler is closed, even if an interrupt service routine makes a higher-priority task ready, the system cannot switch to that high-priority task. Generally the operating system itself does not close the scheduler; closing the scheduler is a system call that the operating system provides to application programs, so we must take care when using it that the scheduler is not kept closed for too long (a sketch follows).

Figure 2.7 shows the process from the occurrence of an interrupt to the running of the corresponding task. We can see that the scheduling delay is affected by many factors; the longest response time is the sum of the maxima of these different potential delays.
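The sketch below shows the "close the scheduler" pattern using FreeRTOS's scheduler-lock calls (vTaskSuspendAll / xTaskResumeAll) as an assumed example API; interrupts stay enabled, but no task switch can occur until the scheduler is resumed, so the protected region must be kept short.

    #include "FreeRTOS.h"
    #include "task.h"

    /* shared between two tasks; protected here by locking the scheduler
       instead of disabling interrupts */
    static int shared_counter;

    void bump_shared_counter(void)
    {
        vTaskSuspendAll();    /* close the scheduler: no task switch from here on    */
        shared_counter++;     /* keep this region short to keep scheduling delay low */
        xTaskResumeAll();     /* reopen the scheduler; a pending switch happens now  */
    }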

Figure 2.7 The process from the occurrence of an interrupt to the running of the corresponding task

2.1.3.9 The execution time of system calls

The execution time of system calls is also an important indicator for evaluating the performance of an embedded operating system. However, because parameters and system states differ, each invocation may take a different execution path, so the execution time is not fixed but fluctuates within a certain range. For a given system call, what matters is the largest execution time, so when testing the execution time of a system call we should design different test cases for the different usage situations and take the maximum.

2.1.3.10 Storage costs

The storage cost of an embedded operating system can be divided into the code storage cost and the data-space storage cost. The size of the operating system code depends on many factors and is directly related to the functionality of the operating system. The data space of the operating system, also known as the operating system's work area, consists of the following:
(1) RAM space for storing the system variables;
