Synchronization and Critical Section Problem


   A cooperating process is one that can affect or be affected by other processes executing in the system. Cooperating processes can either share a logical address space directly (that is, both code and data) or share data only through files or messages. The former case is achieved through the use of threads. Concurrent access to shared data may result in data inconsistency, however. Here we discuss various mechanisms to ensure the orderly execution of cooperating processes that share a logical address space, so that data consistency is maintained. This is what synchronization is all about.

Synchronization Hardware:-

            We have just described one software-based solution to the critical-section problem. However, as mentioned, software-based solutions such as Peterson's are not guaranteed to work on modern computer architectures. Instead, we can generally state that any solution to the critical-section problem requires a simple tool: a lock. Race conditions are prevented by requiring that critical regions be protected by locks. A process must acquire a lock before entering a critical section; it releases the lock when it exits the critical section.

 In the following discussions, we explore several more solutions to the critical-section problem, using techniques ranging from hardware instructions to software-based APIs available to application programmers. All these solutions are based on the premise of locking; however, as we shall see, the designs of such locks can be quite sophisticated. We start by presenting some simple hardware instructions that are available on many systems and showing how they can be used effectively in solving the critical-section problem. Hardware features can make a programming task easier and improve system efficiency.

The critical-section problem could be solved simply in a uniprocessor environment if we could prevent interrupts from occurring while a shared variable is being modified. In this manner, we could be sure that the current sequence of instructions would be allowed to execute in order without preemption. No other instructions would run, so no unexpected modifications could be made to the shared variable. This is often the approach taken by nonpreemptive kernels.


Critical Section Problem


                Consider a system consisting of n processes {P0, P1, ..., Pn-1}. Each process has a segment of code, called a critical section, in which the process may be changing common variables, updating a table, writing a file, and so on. The important feature of the system is that, when one process is executing in its critical section, no other process is allowed to execute in its critical section.

That is, no two processes are executing in their critical sections at the same time. The critical-section problem is to design a protocol that the processes can use to cooperate. Each process must request permission to enter its critical section. The section of code implementing this request is the entry section. The critical section may be followed by an exit section, and the remaining code is the remainder section.

The entry section and exit section are the important segments of code to focus on. A solution to the critical-section problem must satisfy the following three requirements:

1. Mutual exclusion. If process Pi is executing in its critical section, then no other process can be executing in its critical section.

 2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.

 3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted. We assume that each process is executing at a nonzero speed, but we can make no assumption concerning the relative speeds of the n processes.

At a given point in time, many kernel-mode processes may be active in the operating system. As a result, the code implementing an operating system (kernel code) is subject to several possible race conditions. Consider as an example a kernel data structure that maintains a list of all open files in the system. This list must be modified when a new file is opened or closed (adding the file to the list or removing it from it). If two processes were to open files simultaneously, the separate updates to this list could result in a race condition. Other kernel data structures that are prone to possible race conditions include structures for maintaining memory allocation, for maintaining process lists, and for interrupt handling. It is up to kernel developers to ensure that the operating system is free from such race conditions. Two general approaches are used to handle critical sections in the operating system:

(1) preemptive kernels and (2) nonpreemptive kernels.

 A preemptive kernel allows a process to be preempted while it is running in kernel mode. A nonpreemptive kernel does not allow a process running in kernel mode to be preempted; a kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU. Obviously, a nonpreemptive kernel is essentially free from race conditions on kernel data, as only one process is active in the kernel at a time. We cannot say the same about preemptive kernels, so they must be carefully designed to ensure that shared kernel data are free from race conditions. Preemptive kernels are especially difficult to design for SMP architectures, since in these environments it is possible for two kernel-mode processes to run simultaneously on different processors.

Why, then, would anyone favor a preemptive kernel over a nonpreemptive one? A preemptive kernel is more suitable for real-time programming, as it allows a real-time process to preempt a process currently running in the kernel. Furthermore, a preemptive kernel may be more responsive, since there is less risk that a kernel-mode process will run for an arbitrarily long period before relinquishing the processor to waiting processes. Of course, this effect can be minimized by designing kernel code that does not behave in this way.

Sourabh Bhunje

Sourabh Bhunje, B.E. IT from Pune University. Currently Working at Techliebe. Professional Skills: Programming - Software & Mobile, Web & Graphic Design, Localization, Content Writing, Sub-Titling etc.
