Wednesday, April 11, 2012

Deadlocks


  • Deadlocks


A deadlock situation can arise if the following four conditions hold simultaneously in a system:

*Mutual exclusion. At least one resource must be held in a nonsharable mode; that is, only one process at a time can use the resource. If another process requests that resource, the requesting process must be delayed until the resource has been released.

*Hold and wait. A process must be holding at least one resource and waiting to acquire additional resources that are currently being held by other processes.


*No preemption. Resources cannot be preempted; that is, a resource can be released only voluntarily by the process holding it, after that process has completed its task.


*Circular wait. A set $\{P_0,P_1,\ldots,P_n\}$ of waiting processes must exist such that $P_0$ is waiting for a resource held by $P_1$, $P_1$ is waiting for a resource held by $P_2$, ..., $P_{n-1}$ is waiting for a resource held by $P_n$, and $P_n$ is waiting for a resource held by $P_0$.

http://siber.cankaya.edu.tr/ozdogan/OperatingSystems/ceng328/node155.html
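
As a concrete illustration (a minimal sketch, not taken from the notes above), the C program below lets two pthreads lock two mutexes in opposite order; if each thread grabs its first mutex before the other releases its own, all four conditions hold at once and the program hangs in a circular wait. Compile with gcc deadlock.c -pthread.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

    static void *t1(void *arg) {
        pthread_mutex_lock(&A);      /* holds A ...           */
        pthread_mutex_lock(&B);      /* ... and waits for B   */
        pthread_mutex_unlock(&B);
        pthread_mutex_unlock(&A);
        return NULL;
    }

    static void *t2(void *arg) {
        pthread_mutex_lock(&B);      /* holds B ...           */
        pthread_mutex_lock(&A);      /* ... and waits for A   */
        pthread_mutex_unlock(&A);
        pthread_mutex_unlock(&B);
        return NULL;
    }

    int main(void) {
        pthread_t x, y;
        pthread_create(&x, NULL, t1, NULL);
        pthread_create(&y, NULL, t2, NULL);
        pthread_join(x, NULL);       /* may never return: circular wait */
        pthread_join(y, NULL);
        printf("no deadlock on this run\n");
        return 0;
    }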



  • Deadlock

A deadlock is a situation in which two or more competing actions are each waiting for the other to finish, and thus neither ever does.
http://en.wikipedia.org/wiki/Deadlock

Process Synchronization


Cooperating processes can either directly share a logical address space (that is, both code and data) or be allowed to share data only through files or messages. Concurrent access to shared data may result in data inconsistency.




Race Condition
A potential problem in which the outcome depends on the particular order in which the instructions of cooperating processes are interleaved while they access shared data.
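
A minimal sketch of such a race (assuming POSIX threads; the variable names and iteration count are made up for illustration): two threads increment a shared counter with no mutual exclusion, and because counter++ is really a load, an add, and a store, the final value usually comes out smaller than expected.

    #include <pthread.h>
    #include <stdio.h>

    #define N_ITER 1000000
    static long counter = 0;             /* shared data, no lock */

    static void *worker(void *arg) {
        for (int i = 0; i < N_ITER; i++)
            counter++;                   /* not atomic: race window here */
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("expected %d, got %ld\n", 2 * N_ITER, counter);
        return 0;
    }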



The Critical-Section Problem
How do we avoid race conditions? What we need is mutual exclusion.

Various proposals exist for achieving mutual exclusion, so that while one process is busy updating shared memory in its critical section (CS), no other process will enter its own CS and cause trouble (Peterson's Solution from the list below is sketched after it):

Disabling Interrupts
Lock Variables
Strict Alternation
Peterson's Solution
The TSL instructions (Hardware approach)
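
Of these, Peterson's Solution is compact enough to sketch in C. This is only a sketch for two processes (ids 0 and 1) and it assumes sequentially consistent memory; on modern hardware the shared variables would also need atomics or memory barriers.

    #include <stdbool.h>

    static volatile bool flag[2] = { false, false };  /* "I want to enter" */
    static volatile int  turn    = 0;                 /* who yields        */

    void enter_region(int self) {
        int other = 1 - self;
        flag[self] = true;      /* announce interest                      */
        turn = other;           /* give priority to the other process     */
        while (flag[other] && turn == other)
            ;                   /* busy-wait while the other is in its CS */
    }

    void leave_region(int self) {
        flag[self] = false;     /* done with the critical section         */
    }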

Semaphores
A semaphore is a synchronization tool: an integer variable that, apart from initialization, is accessed only through atomic wait() and signal() operations and is used to signal the status of shared resources to processes.
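
A minimal sketch of a semaphore used for mutual exclusion (assuming POSIX semaphores; the names and counts are illustrative): initialized to 1, the semaphore acts as a lock, with sem_wait() and sem_post() playing the roles of wait() and signal().

    #include <semaphore.h>
    #include <pthread.h>
    #include <stdio.h>

    static sem_t lock;
    static int shared = 0;

    static void *worker(void *arg) {
        for (int i = 0; i < 100000; i++) {
            sem_wait(&lock);    /* wait/P: block while the value is 0 */
            shared++;           /* critical section                   */
            sem_post(&lock);    /* signal/V: release, wake a waiter   */
        }
        return NULL;
    }

    int main(void) {
        pthread_t a, b;
        sem_init(&lock, 0, 1);           /* initial value 1 = binary semaphore */
        pthread_create(&a, NULL, worker, NULL);
        pthread_create(&b, NULL, worker, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        printf("shared = %d\n", shared); /* always 200000 with the lock */
        sem_destroy(&lock);
        return 0;
    }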




Classic Problems of Synchronization
These are a number of synchronization problems that serve as examples of a large class of concurrency-control problems; a sketch of the first one follows the list.
The Bounded-Buffer Problem
The Readers-Writers Problem
The Dining-Philosophers Problem
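
As promised above, here is a sketch of the bounded-buffer (producer/consumer) problem solved with the usual three semaphores (assuming POSIX semaphores; the buffer size and names are illustrative): empty_slots counts free slots, full_slots counts filled slots, and mutex protects the buffer indices.

    #include <semaphore.h>

    #define SIZE 8
    static int   buffer[SIZE];
    static int   in = 0, out = 0;
    static sem_t empty_slots, full_slots, mutex;

    /* Call once before starting the producer and consumer threads. */
    void buffer_init(void) {
        sem_init(&empty_slots, 0, SIZE);   /* all slots start free    */
        sem_init(&full_slots,  0, 0);      /* no items yet            */
        sem_init(&mutex,       0, 1);      /* binary lock on buffer   */
    }

    void produce(int item) {
        sem_wait(&empty_slots);            /* wait for a free slot    */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&full_slots);             /* one more item available */
    }

    int consume(void) {
        sem_wait(&full_slots);             /* wait for an item        */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);            /* one more free slot      */
        return item;
    }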


6.7 Monitors
Semaphores can be very useful for solving concurrency problems, but only if programmers use them properly. If even one process fails to abide by the proper use of semaphores, either accidentally or deliberately, then the whole system breaks down. ( And since concurrency problems are by definition rare events, the problem code may easily go unnoticed and/or be heinous to debug. )


A monitor is essentially a class, in which all data is private, and with the special restriction that only one method within any given monitor object may be active at the same time. An additional restriction is that monitor methods may only access the shared data within the monitor and any data passed to them as parameters. I.e. they cannot access any data external to the monitor.
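
C has no monitor construct, so the sketch below (the names are illustrative) emulates one: the struct's data is touched only through functions that acquire a single mutex, which enforces the one-active-method rule, and a condition variable lets a caller wait inside the monitor until the state it needs holds.

    #include <pthread.h>

    typedef struct {                     /* "private" monitor data    */
        pthread_mutex_t lock;            /* one-method-at-a-time rule */
        pthread_cond_t  nonzero;
        int             count;
    } counter_monitor;

    void monitor_init(counter_monitor *m) {
        pthread_mutex_init(&m->lock, NULL);
        pthread_cond_init(&m->nonzero, NULL);
        m->count = 0;
    }

    void monitor_increment(counter_monitor *m) {
        pthread_mutex_lock(&m->lock);    /* enter the monitor         */
        m->count++;
        pthread_cond_signal(&m->nonzero);
        pthread_mutex_unlock(&m->lock);  /* leave the monitor         */
    }

    void monitor_decrement_when_positive(counter_monitor *m) {
        pthread_mutex_lock(&m->lock);
        while (m->count == 0)            /* waiting releases the lock */
            pthread_cond_wait(&m->nonzero, &m->lock);
        m->count--;
        pthread_mutex_unlock(&m->lock);
    }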


http://siber.cankaya.edu.tr/ozdogan/OperatingSystems/ceng328/node150.html

Algorithm Evaluation


The first step in determining which algorithm ( and what parameter settings within that algorithm ) is optimal for a particular operating environment is to determine what criteria are to be used, what goals are to be targeted, and what constraints if any must be applied. For example, one might want to "maximize CPU utilization, subject to a maximum response time of 1 second".


5.7.1 Deterministic Modeling
If a specific workload is known, then the exact values for major criteria can be fairly easily calculated, and the "best" determined.
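
For example (a sketch with an assumed workload of five CPU bursts - 10, 29, 3, 7, 12 - all arriving at time 0), deterministic modeling just computes the average waiting time under each candidate algorithm and picks the smaller number:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    /* Average waiting time when the bursts run back-to-back in the given order. */
    static double avg_wait(const int *burst, int n) {
        double wait = 0, t = 0;
        for (int i = 0; i < n; i++) {
            wait += t;                   /* process i waits until its start time */
            t += burst[i];
        }
        return wait / n;
    }

    int main(void) {
        int fcfs[] = { 10, 29, 3, 7, 12 };   /* run in arrival order    */
        int sjf[]  = { 10, 29, 3, 7, 12 };
        int n = 5;
        qsort(sjf, n, sizeof sjf[0], cmp);   /* SJF runs shortest first */
        printf("FCFS average wait: %.1f\n", avg_wait(fcfs, n));  /* 28.0 */
        printf("SJF  average wait: %.1f\n", avg_wait(sjf, n));   /* 13.0 */
        return 0;
    }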


5.7.2 Queuing Models
Specific process data is often not available, particularly for future times, so queueing models instead work with statistical descriptions of the workload, such as the distribution of process arrival rates and CPU-burst times.

5.7.3 Simulations
Another approach is to run computer simulations of the different proposed algorithms ( and adjustment parameters ) under different load conditions, and to analyze the results to determine the "best" choice of operation for a particular load pattern.


5.7.4 Implementation
The only real way to determine how a proposed scheduling algorithm is going to operate is to implement it on a real system.



www2.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html

Multiple-Processor Scheduling

5.5 Multiple-Processor Scheduling

When multiple processors are available, scheduling gets more complicated, because now there is more than one CPU that must be kept busy and in effective use at all times.

Multi-processor systems may be heterogeneous ( different kinds of CPUs ) or homogeneous ( all the same kind of CPU ).


One approach to multi-processor scheduling is asymmetric multiprocessing, in which one processor is the master, controlling all activities and running all kernel code, while the others run only user code. This approach is relatively simple, as there is no need to share critical system data.

Another approach is symmetric multiprocessing, SMP, where each processor schedules its own jobs, either from a common ready queue or from separate ready queues for each processor.
Virtually all modern OSes support SMP, including XP, Win 2000, Solaris, Linux, and Mac OSX.

www2.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html

Scheduling Algorithms

5.3 Scheduling Algorithms

5.3.1 First-Come First-Serve Scheduling, FCFS
FCFS is very simple - Just a FIFO queue, like customers waiting in line at the bank or the post office or at a copying machine.
Unfortunately, however, FCFS can yield some very long average wait times, particularly if the first process to get there takes a long time (the convoy effect).

5.3.2 Shortest-Job-First Scheduling, SJF
The idea behind the SJF algorithm is to pick the quickest, smallest job that needs to be done, get it out of the way first, and then pick the next-smallest job after that.


5.3.3 Priority Scheduling
Priority scheduling is a more general case of SJF, in which each job is assigned a priority and the job with the highest priority gets scheduled first.
Priority scheduling can suffer from a major problem known as indefinite blocking, or starvation, in which a low-priority task can wait forever because there are always some other jobs around that have higher priority.


5.3.4 Round Robin Scheduling
Round robin scheduling is similar to FCFS scheduling, except that each CPU burst is limited to a fixed amount of time called the time quantum.
When a process is given the CPU, a timer is set for whatever value has been set for a time quantum.
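
A sketch of the mechanism (the bursts 24, 3, 3 and the quantum of 4 are assumed for illustration): the loop hands each unfinished process at most one quantum per pass through the ready list and reports when each one completes.

    #include <stdio.h>

    int main(void) {
        int remaining[] = { 24, 3, 3 };  /* assumed CPU bursts, all arrive at t = 0 */
        int n = 3, quantum = 4, clock = 0, done = 0;

        while (done < n) {
            for (int i = 0; i < n; i++) {
                if (remaining[i] == 0)
                    continue;            /* this process already finished */
                int slice = remaining[i] < quantum ? remaining[i] : quantum;
                clock += slice;          /* run for one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    printf("process %d finishes at t = %d\n", i, clock);
                    done++;
                }
            }
        }
        return 0;
    }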


5.3.5 Multilevel Queue Scheduling
When processes can be readily categorized, then multiple separate queues can be established, each implementing whatever scheduling algorithm is most appropriate for that type of job, and/or with different parametric adjustments.


5.3.6 Multilevel Feedback-Queue Scheduling
Multilevel feedback queue scheduling is similar to the ordinary multilevel queue scheduling described above, except jobs may be moved from one queue to another for a variety of reasons:

If the characteristics of a job change between CPU-intensive and I/O-intensive, then it may be appropriate to switch the job from one queue to another.
Aging can also be incorporated, so that a job that has waited for a long time can get bumped up into a higher priority queue for a while.
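
A sketch of the bookkeeping such a scheduler might use (the queue count, quanta, and aging threshold are assumptions for illustration): a job that burns its whole quantum is demoted one level, and a job that has waited too long is promoted back up.

    #include <stdbool.h>

    #define LEVELS 3
    static const int quantum_ms[LEVELS] = { 8, 16, 32 };  /* assumed quanta */

    struct job {
        int level;         /* current queue, 0 = highest priority */
        int wait_ticks;    /* time spent waiting since last run   */
    };

    /* Quantum the dispatcher should give this job the next time it runs. */
    int quantum_for(const struct job *j) {
        return quantum_ms[j->level];
    }

    /* Called when a job's CPU burst ends or its quantum expires. */
    void requeue(struct job *j, bool used_full_quantum) {
        if (used_full_quantum && j->level < LEVELS - 1)
            j->level++;                  /* CPU-bound behaviour: demote */
        j->wait_ticks = 0;
    }

    /* Called periodically for every job still waiting in a queue (aging). */
    void age(struct job *j) {
        if (++j->wait_ticks > 100 && j->level > 0) {
            j->level--;                  /* waited too long: promote    */
            j->wait_ticks = 0;
        }
    }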

http://www2.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html



  • Operating Systems Lecture Notes 
Lecture 6
CPU Scheduling

When do scheduling decisions take place? When does the scheduler choose which process to run? There are a variety of possibilities:
When a process switches from running to ready - on completion of an interrupt handler, for example.
A common example of an interrupt handler is the timer interrupt in interactive systems. If the scheduler switches processes in this case, it has preempted the running process. Another common interrupt handler is the I/O completion handler.

Big difference: Batch and Interactive systems.
In batch systems, we typically want good throughput or turnaround time.
In interactive systems, both of these are still usually important (after all, we want some computation to happen), but response time is usually the primary consideration.

What about interactive systems?
We cannot just let any process run on the CPU until it gives it up - we must give users a response in a reasonable time.
So, we use an algorithm called round-robin scheduling. It is similar to FCFS but with preemption.
Have a time quantum or time slice.
Let the first process in the queue run until it expires its quantum (i.e., runs for as long as the time quantum), then run the next process in the queue.

http://people.csail.mit.edu/rinard/osnotes/h6.html



  • The CPU scheduling algorithm may have tremendous effects on system performance, especially in:

Interactive systems
Real-time systems

https://docs.google.com/viewer?a=v&q=cache:RZfa-x-ejVYJ:cs.gmu.edu/~astavrou/courses/CS_571_F09/CS571_Lecture4_Scheduling.pdf+&hl=en&pid=bl&srcid=ADGEESiX_7iA0OeeayJ8iJ6e57m5E05DmQhvTaFI_RQ7UAx8jX-FFVZXjz4qgIza_AZ4BZB-nOmN-CwLwF-Dk1aAiFiG9AV5GsYf4u_-hVjYuYSWmpZ0sBVk9XJRRVL1IHmNoLvPP2xJ&sig=AHIEtbQuS4_Mr9TKwjZpNXLwy72aXWQMYw






  • Presented Scheduling Algorithms


For interactive systems
Round-Robin scheduling
Priority scheduling
Multiple queues
Guaranteed scheduling
Lottery scheduling
Fair-share scheduling

https://docs.google.com/viewer?a=v&q=cache:SJZ1uCD84pUJ:soc.eurecom.fr/OS/docs/CourseOS_III_Scheduling.pdf+&hl=en&pid=bl&srcid=ADGEESiVcJ_QcPa7_VZ7caCkPycPSiwDwfgc6ohu4vzp6xGrxcQMsmZhbC1AwB-FFH-MRKRnpnY-d8icc_HMXFPezCDDj7hemAhRdXSXrBqZ2aRsHa6lS3eVo4aEBgYVAsxLSKLuUJp8&sig=AHIEtbRVjdr3i0JZeu70Y0t3N-Fdf2WZLQ

Scheduling Criteria

5.2 Scheduling Criteria

There are several different criteria to consider when trying to select the "best" scheduling algorithm:
CPU utilization - Ideally the CPU would be busy 100% of the time, so as to waste 0 CPU cycles.
Throughput - Number of processes completed per unit time.
Turnaround time - Time required for a particular process to complete, from submission to completion.
Waiting time - How much time processes spend in the ready queue waiting their turn to get on the CPU.
Response time - The time taken in an interactive program from the issuance of a command to the commencement of a response to that command.
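
Assuming the usual textbook definitions (they are not spelled out in these notes), the last three criteria are computed per process as: turnaround time = completion time - arrival time; waiting time = turnaround time - total CPU burst time; and response time = time of the first response - submission time.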


A nonpreemptive scheduling algorithm picks a process to run and then just lets it run until it blocks (either on I/O or waiting for another process) or until it voluntarily releases the CPU. Examples:
First-Come-First-Served (FCFS)
Shortest-Job-First (SJF)

A preemptive scheduling algorithm picks a process and lets it run for a maximum of some fixed time. If it is still running at the end of the time interval, it is suspended and the scheduler picks another process to run. Examples:
Round-Robin (RR)
Priority Scheduling


http://www2.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/5_CPU_Scheduling.html