The producer-consumer problem (also known as the bounded-buffer problem)
The producer-consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem.
The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue. The producer's job is to generate a piece of data, put it into the buffer and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time. The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
The solution for the producer is to either go to sleep or discard data if the buffer is full. The next time the consumer removes an item from the buffer, it notifies the producer, who starts to fill the buffer again. In the same way, the consumer can go to sleep if it finds the buffer to be empty. The next time the producer puts data into the buffer, it wakes up the sleeping consumer.
The solution can be reached by means of inter-process communication, typically using semaphores.
An inadequate solution could result in a deadlock where both processes are waiting to be awakened. The problem can also be generalized to have multiple producers and consumers.
http://en.wikipedia.org/wiki/Producer-consumer_problem
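To make the sleep/wakeup description above concrete, here is a minimal Java sketch of the bounded buffer guarded by counting semaphores, with threads standing in for the producer and consumer processes. It is a sketch under assumptions, not code from the source linked above: the class name, the buffer capacity of 10, and the item counts in main are all illustrative choices.

import java.util.concurrent.Semaphore;

public class BoundedBuffer {
    private static final int N = 10;                   // buffer capacity (assumed)
    private final int[] buffer = new int[N];
    private int in = 0, out = 0;                       // next write / next read slot

    private final Semaphore empty = new Semaphore(N);  // counts free slots
    private final Semaphore full  = new Semaphore(0);  // counts filled slots
    private final Semaphore mutex = new Semaphore(1);  // protects buffer and indices

    public void produce(int item) throws InterruptedException {
        empty.acquire();          // producer sleeps here while the buffer is full
        mutex.acquire();
        buffer[in] = item;
        in = (in + 1) % N;
        mutex.release();
        full.release();           // wakes a consumer waiting on an empty buffer
    }

    public int consume() throws InterruptedException {
        full.acquire();           // consumer sleeps here while the buffer is empty
        mutex.acquire();
        int item = buffer[out];
        out = (out + 1) % N;
        mutex.release();
        empty.release();          // wakes a producer waiting on a full buffer
        return item;
    }

    public static void main(String[] args) {
        BoundedBuffer bb = new BoundedBuffer();
        Runnable producer = () -> {
            try { for (int i = 0; i < 100; i++) bb.produce(i); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        Runnable consumer = () -> {
            try { for (int i = 0; i < 100; i++) System.out.println(bb.consume()); }
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        };
        new Thread(producer).start();
        new Thread(consumer).start();
    }
}

Because the producer blocks on empty and the consumer blocks on full, neither can overrun or underrun the buffer, which is exactly the requirement stated above.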
Friday, April 6, 2012
Semaphore (programming)
In computer science, a semaphore is a variable or abstract data type that provides a simple but useful abstraction for controlling access by multiple processes to a common resource in a parallel programming environment.
Semaphores are a useful tool in the prevention of race conditions and deadlocks;
http://en.wikipedia.org/wiki/Semaphore_%28programming%29
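To make the "variable or abstract data type" part concrete, here is a minimal sketch of a counting semaphore built on Java's intrinsic monitor operations (synchronized / wait / notifyAll). The class and method names are made up for illustration; real Java code would normally just use java.util.concurrent.Semaphore.

// A counting semaphore as an abstract data type: an integer counter plus
// atomic wait (P/acquire) and signal (V/release) operations.
public class CountingSemaphore {
    private int permits;

    public CountingSemaphore(int initialPermits) {
        this.permits = initialPermits;
    }

    // P / wait / acquire: block until a permit is available, then take one.
    public synchronized void acquire() throws InterruptedException {
        while (permits == 0) {
            wait();               // sleep until some thread releases a permit
        }
        permits--;
    }

    // V / signal / release: return a permit and wake up waiting threads.
    public synchronized void release() {
        permits++;
        notifyAll();
    }

    public static void main(String[] args) {
        CountingSemaphore sem = new CountingSemaphore(2);  // at most 2 threads in the resource
        for (int i = 0; i < 5; i++) {
            new Thread(() -> {
                try {
                    sem.acquire();
                    try {
                        System.out.println(Thread.currentThread().getName() + " is using the resource");
                        Thread.sleep(100);
                    } finally {
                        sem.release();
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }).start();
        }
    }
}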
Labels:
operating systems
Inter-process communication (IPC)
In computing, inter-process communication (IPC) is a set of methods for the exchange of data among multiple threads in one or more processes.
Processes may be running on one or more computers connected by a network
IPC methods are divided into methods for message passing, synchronization, shared memory, and remote procedure calls (RPC).
http://en.wikipedia.org/wiki/Inter-process_communication
Processes executing concurrently in the OS may be either independent processes or cooperating processes.
Cooperating processes require an interprocess communication (IPC) mechanism that will allow them to exchange data and information.
Inter-process communication is most commonly one of two types: Shared Memory systems or Message Passing systems.
There are two fundamental models of interprocess communication:
Shared Memory. A region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region.
Message Passing. Communication takes place by means of messages exchanged between the cooperating processes
Shared Memory is faster once it is set up, because no system calls are required and access occurs at normal memory speeds.
However, it is more complicated to set up, and doesn't work as well across multiple computers.
Shared memory is generally preferable when large amounts of information must be shared quickly on the same computer.
Shared memory allows maximum speed and convenience of communication, as it can be done at memory speeds when within a computer.
Shared memory is faster than message passing, as message-passing systems are typically implemented using system calls and thus require the more time-consuming task of kernel intervention
Message Passing requires system calls for every message transfer, and is therefore slower, but it is simpler to set up and works well across multiple computers. Message passing is generally preferable when the amount and/or frequency of data transfers is small, or when multiple computers are involved.
Message passing is useful for exchanging smaller amounts of data, because no conflicts need be avoided
3.5 Examples of IPC Systems
3.5.1 An Example: POSIX Shared Memory
3.5.2 An Example: Mach
3.5.3 An Example: Windows XP
http://siber.cankaya.edu.tr/ozdogan/OperatingSystems/ceng328/node96.html
http://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/3_Processes.html
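Section 3.5.1 above refers to POSIX shared memory, which in C uses shm_open and mmap. As a loose Java analogue of the shared-memory model described in these notes, here is a hedged sketch using a memory-mapped file: two processes that map the same file see the same bytes, and access happens at memory speed with no system call per read or write. The file path, the 64-byte region size, and the class name are assumptions for illustration only.

import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class SharedMemoryDemo {
    public static void main(String[] args) throws IOException {
        // Map a small file into memory; another process mapping the same file
        // sees the same bytes (shared memory backed by the OS page cache).
        // "/tmp/ipc-demo" is an assumed path for this sketch.
        try (FileChannel ch = FileChannel.open(Paths.get("/tmp/ipc-demo"),
                StandardOpenOption.CREATE, StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 64);
            buf.putInt(0, 42);                  // a "producer" writes at normal memory speed
            buf.force();                        // flush the change to the backing file
            System.out.println(buf.getInt(0));  // a process mapping the same file reads 42
        }
    }
}

A message-passing counterpart (pipes or sockets) would instead go through the kernel on every transfer, which is the system-call overhead the notes above contrast it with.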
Labels:
operating systems
Semaphore vs. mutex
A mutex is essentially the same thing as a binary semaphore
the term "mutex" is used to describe a construct which prevents two processes from executing the same piece of code or accessing the same data at the same time
The term "binary semaphore" is used to describe a construct which limits access to a single resource.
http://en.wikipedia.org/wiki/Semaphore_%28programming%29
Mutex is a key to a toilet. One person can have the key (occupy the toilet) at a time. When finished, the person gives (frees) the key to the next person in the queue.
A mutex object only allows one thread into a controlled section, forcing other threads which attempt to gain access to that section to wait until the first thread has exited from that section
A mutex is really a semaphore with value 1.
Semaphore is the number of free identical toilet keys. For example, say we have four toilets with identical locks and keys. The semaphore count (the count of keys) is set to 4 at the beginning (all four toilets are free), and the count is decremented as people come in. If all toilets are full, i.e. there are no free keys left, the semaphore count is 0. Now, when e.g. one person leaves the toilet, the semaphore is increased to 1 (one free key) and given to the next person in the queue.
"A semaphore restricts the number of simultaneous users of a shared resource up to a maximum number. Threads can request access to the resource (decrementing the semaphore), and can signal that they have finished using the resource (incrementing the semaphore)."
http://koti.mbnet.fi/niclasw/MutexSemaphore.html
A mutex is a locking mechanism used to synchronize access to a resource. Only one task (a thread or a process, depending on the OS abstraction) can acquire the mutex at a time. This means there is ownership associated with a mutex, and only the owner can release the lock (mutex).
A semaphore is a signaling mechanism (an "I am done, you can carry on" kind of signal). For example, if you are listening to songs on your mobile (treat that as one task) and your friend calls you at the same time, an interrupt is triggered, upon which an interrupt service routine (ISR) signals the call-processing task to wake up.
http://www.geeksforgeeks.org/archives/9102
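Putting the two analogies above into code: a hedged Java sketch contrasting a mutex (ReentrantLock, which has ownership, so only the locking thread may unlock it) with a counting semaphore initialized to 4 permits, mirroring the four toilet keys. The class name, method names, and sleep time are illustrative assumptions.

import java.util.concurrent.Semaphore;
import java.util.concurrent.locks.ReentrantLock;

public class MutexVsSemaphore {
    // Mutex: one key, with ownership; only the thread that locked it may unlock it.
    private static final ReentrantLock mutex = new ReentrantLock();

    // Counting semaphore: four identical "toilet keys"; any thread may release a permit.
    private static final Semaphore keys = new Semaphore(4);

    static void criticalSection() {
        mutex.lock();
        try {
            System.out.println(Thread.currentThread().getName() + " is alone in the critical section");
        } finally {
            mutex.unlock();   // must be released by the same thread that acquired it
        }
    }

    static void useToilet() throws InterruptedException {
        keys.acquire();       // take one of the 4 keys; blocks while all are taken
        try {
            System.out.println(Thread.currentThread().getName() + " got a key");
            Thread.sleep(50); // "occupy the toilet" for a moment
        } finally {
            keys.release();   // hand the key back; up to 4 threads can be inside at once
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 8; i++) {
            new Thread(() -> {
                criticalSection();
                try { useToilet(); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
            }).start();
        }
    }
}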
Labels:
operating systems
Principles of Operating System - Lecture 6
producer-consumer problem
producer process
consumer process
buffer (shared)
synchronization between producer and consumer processes
race condition
there is a race condition because of the timing of which process starts first
when two or more processes race for a shared resource, it is unpredictable which one will get it first
critical section
a race condition might create a situation where the consumer tries to consume something that has not been produced yet
or
the producer fills the buffer faster than the consumer can keep up
the critical section is the region of code where we access shared variables
mutual exclusion
only one process at a time can access the critical section
why processes need to communicate?
rules to form critical sections
mutual exclusion problem: starvation
deadlocks
how to implement mutual exclusion
application mutual exclusion
busy wait
mutual exclusion through OS
semaphores operations
producer-consumer problem: solution by semaphores
bounded-buffer problem
what is deadlock
process deadlock
system deadlock
necessary conditions for a deadlock
mutual exclusion
hold&wait
no preemption
circular wait
no deadlock situation?
ostrich algorithm
ways of handling deadlock
deadlock prevention
denying the "hold&wait"
denying "no preemption"
denying "circular wait" (lock ordering; see the sketch after this outline)
deadlock avoidance
banker's algorithm
classical IPC problems
readers and writers problem
dining philosophers problem
sleeping barber problem
http://www.youtube.com/watch?v=ZMD9vybAjo4&feature=related
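As promised next to "denying circular wait" above, here is a small Java sketch of how the four deadlock conditions can all hold with just two locks, and how a global lock order removes the circular wait. The class name and the 100 ms sleep are illustrative assumptions; running it as written is expected to hang, which is the point of the demo.

import java.util.concurrent.locks.ReentrantLock;

public class CircularWaitDemo {
    private static final ReentrantLock lockA = new ReentrantLock();
    private static final ReentrantLock lockB = new ReentrantLock();

    public static void main(String[] args) {
        // Thread 1 takes A then B; Thread 2 takes B then A. If each grabs its
        // first lock before the other grabs its second, every condition holds:
        // mutual exclusion, hold & wait, no preemption, circular wait -> deadlock.
        new Thread(() -> acquireBoth(lockA, lockB)).start();
        new Thread(() -> acquireBoth(lockB, lockA)).start();
        // Prevention by denying circular wait: impose a global order and make
        // every thread call acquireBoth(lockA, lockB); then no cycle can form.
    }

    private static void acquireBoth(ReentrantLock first, ReentrantLock second) {
        first.lock();
        try {
            sleepBriefly();           // widen the window so the deadlock is easy to hit
            second.lock();
            try {
                // critical section using both resources
            } finally {
                second.unlock();
            }
        } finally {
            first.unlock();
        }
    }

    private static void sleepBriefly() {
        try { Thread.sleep(100); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}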
Labels:
operating systems
Producer-consumer problem code examples
- Producer consumer in Java 2
http://www.java2s.com/Code/Java/Threads/ProducerconsumerinJava2.htm
- Implementation of the Producer/Consumer problem in Java
http://www.java-forums.org/java-lang/7353-implementation-producer-consumer-problem-java.html
- Producer-consumer problem
The producer-consumer problem (also known as the bounded-buffer problem) is a classic example of a multi-process synchronization problem
The problem describes two processes, the producer and the consumer, who share a common, fixed-size buffer used as a queue.
The producer's job is to generate a piece of data, put it into the buffer and start again. At the same time, the consumer is consuming the data (i.e., removing it from the buffer) one piece at a time
The problem is to make sure that the producer won't try to add data into the buffer if it's full and that the consumer won't try to remove data from an empty buffer.
https://en.wikipedia.org/wiki/Producer-consumer_problem
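Alongside the links above, a minimal sketch of the same problem using java.util.concurrent.BlockingQueue, which bundles the bounded buffer, the blocking, and the mutual exclusion into one class: put() waits while the buffer is full and take() waits while it is empty, which is exactly what the quoted problem statement asks for. The capacity of 10 and the item count of 100 are arbitrary choices for illustration.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class ProducerConsumerQueue {
    public static void main(String[] args) {
        // Bounded buffer of capacity 10.
        BlockingQueue<Integer> buffer = new ArrayBlockingQueue<>(10);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    buffer.put(i);                      // blocks if the buffer is full
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 100; i++) {
                    System.out.println(buffer.take()); // blocks if the buffer is empty
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}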
Labels:
operating systems
Banker’s Algorithm code examples
Program for Banker’s Algorithm
http://labprograms.wordpress.com/2009/07/31/program-for-bankers-algorithm/
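The linked page contains a full program; as a rough companion, here is a hedged Java sketch of just the safety-check step of the Banker's Algorithm, run on the classic five-process, three-resource textbook state. The class, method, and variable names are illustrative, not taken from the linked code.

import java.util.Arrays;

public class BankersSafety {
    // Returns true if the state (available, max, allocation) is safe.
    static boolean isSafe(int[] available, int[][] max, int[][] allocation) {
        int n = max.length, m = available.length;
        int[] work = Arrays.copyOf(available, m);   // resources currently free
        boolean[] finished = new boolean[n];

        // need[i][j] = max[i][j] - allocation[i][j]
        int[][] need = new int[n][m];
        for (int i = 0; i < n; i++)
            for (int j = 0; j < m; j++)
                need[i][j] = max[i][j] - allocation[i][j];

        // Repeatedly find a process whose remaining need fits in work,
        // pretend it runs to completion, and reclaim its allocation.
        boolean progress = true;
        while (progress) {
            progress = false;
            for (int i = 0; i < n; i++) {
                if (!finished[i] && fits(need[i], work)) {
                    for (int j = 0; j < m; j++) work[j] += allocation[i][j];
                    finished[i] = true;
                    progress = true;
                }
            }
        }
        for (boolean f : finished) if (!f) return false;  // someone can never finish
        return true;
    }

    private static boolean fits(int[] need, int[] work) {
        for (int j = 0; j < need.length; j++)
            if (need[j] > work[j]) return false;
        return true;
    }

    public static void main(String[] args) {
        // Classic textbook example: 5 processes, 3 resource types.
        int[] available = {3, 3, 2};
        int[][] max = {{7,5,3},{3,2,2},{9,0,2},{2,2,2},{4,3,3}};
        int[][] allocation = {{0,1,0},{2,0,0},{3,0,2},{2,1,1},{0,0,2}};
        System.out.println("safe? " + isSafe(available, max, allocation)); // prints: safe? true
    }
}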
Labels:
operating systems
course pages
Operating Systems
Course Notes Main Page
http://www.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/index.html
CS 140: Operating Systems (Winter 2012)
http://www.stanford.edu/~ouster/cgi-bin/cs140-winter12/lectures.php
CENG 328 Operating Systems
Spring 2011
http://siber.cankaya.edu.tr/ozdogan/OperatingSystems/spring2011/index.html#Text%20Book|outline
Lectures of the Operating Systems course (Fall 2000):
http://www.cs.jhu.edu/~yairamir/cs418/600-418.html
CMP 426 (section ZF81) CMP 697 (section ZF81): Operating Systems
Practices: Questions and Answers
http://comet.lehman.cuny.edu/jung/cmp426697/cmp426697.html
CS241 Lectures
http://www.cs.illinois.edu/class/fa11/cs241/schedule.html
Operating Systems: Schedule
http://lass.cs.umass.edu/~shenoy/courses/fall08/schedule.html
İşletim Sistemleri (Operating Systems) (BTP, 2005)
http://web.itu.edu.tr/~bkurt/Courses/os/index2k5.html
İşletim Sistemleri (Operating Systems) (BTP, 2006)
http://web.itu.edu.tr/~bkurt/Courses/os/
İşletim Sistemleri (Operating Systems)
http://udes.iku.edu.tr/index.php?option=com_content&view=article&id=49&Itemid=49
Lab (301)
http://yunus.hacettepe.edu.tr/~yurdugul/3/OS/index.html
İşletim Sistemleri Dönemiçi Sınavı (Operating Systems Midterm Exam)
http://bmb.cu.edu.tr/mavci/soru/os-vize.pdf
Operating Systems and System Programming
http://www.youtube.com/watch?v=XgQo4JkN4Bw
CS3753 (Operating Systems) University of Colorado, Boulder
http://www.youtube.com/watch?v=UlZZQ9GtVQc&feature=relmfu
Principles of Operating System
http://www.youtube.com/watch?v=CkqFxzuN9g0&feature=relmfu
CS 4375: Theory of Operating Systems, Fall 2009, Homework Assignments
http://www.cs.utep.edu/tmagoc/course/cs4375_F09/homework.html
Labels:
operating systems