Thursday, October 11, 2012

Process Control Block



  • Process Control Block


The OS must know specific information about processes in order to manage and control them. To implement the process model, the OS maintains a table (an array of structures), called the process table, with one entry per process.
These entries are called process control blocks (PCBs) - a PCB is also called a task control block.
http://siber.cankaya.edu.tr/ozdogan/OperatingSystems/ceng328/node87.html


  • Process Control Block

A process in an operating system is represented by a data structure known as a process control block (PCB) or process descriptor. The PCB contains important information about the specific process, including:

The current state of the process, i.e., whether it is ready, running, or waiting.
Unique identification of the process in order to track "which is which" information.
A pointer to parent process.
Similarly, a pointer to child process (if it exists).
The priority of process (a part of CPU scheduling information).
Pointers to locate memory of processes.
A register save area.
The processor it is running on.

The PCB is a central store that allows the operating system to locate key information about a process. Thus, the PCB is the data structure that defines a process to the operating system.
http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/processControl.htm

Interprocess Communication in Unix



  • Pipes

The only real IPC facility available in early Unices was the "pipe."
Pipes are created using the pipe() system call:
% prog1 | prog2
When a command like this is given to the shell, the shell creates a pipe before forking prog1 and prog2.
The "write to" end of the pipe is connected to prog1's stdout, while the "read from" end of the pipe is connected to prog2's stdin.
Now, output from prog1 is fed to prog2's input.


  • Named Pipes (FIFOs)

Closely related to the pipe is the "named pipe," also called a FIFO. A FIFO is an IPC channel that is given a name in the file system space, created with the mkfifo call (there is also a mkfifo command that is just a wrapper around it). Once a FIFO has been created, processes can open it just like a file, and write to it or read from it. The difference is that the data written to it is not actually written to a file; it is maintained in a buffer by the kernel.

Named pipes were a huge step forward, but still suffered from only being able to be used between two processes on a single system, not over a network.




  • Sockets

Conceptually, internet sockets on a Unix system look like a numbered array of interprocess communication channels -- so there is a socket 0, socket 1, socket 2, and so forth. They pretty much expect to be used in a client-server relationship; a daemon wishing to provide a service creates a socket and listens to it; a client program connects to the socket and makes requests. The daemon is also able to send messages back to the client.

http://www.cs.nmsu.edu/~pfeiffer/classes/574/notes/ipc.html

Buddy memory allocation




  • The Buddy System


The buddy system is a memory allocation and management algorithm that manages memory in power-of-two increments.
A memory manager (e.g., the Linux page allocator) using the buddy system keeps lists of free blocks whose sizes are powers of two (2, 4, 8, 16, 32, …).
Initially, when all of memory is free, all lists are empty except for the largest power of two that is less than or equal to the size of allocatable memory.
When a block of size n is needed, the algorithm checks the list for the nearest power of two that is greater than or equal to n. If there is a free block there, all is well and the block can be marked as used. If there is no free block, the algorithm gets a block from the next level up, splits it into two buddies (which get recorded at the previous level of the list), and uses one of them for the allocation. When that block is freed again, the buddies can be combined and brought back up to the next level in the list.

For example, suppose we're using a buddy-based page allocator and need a block of 53 contiguous pages. The closest bigger power of two is 64, so we request a 64-page chunk. Suppose that all we have is one free 512-page segment. We have an array of pointers to lists: a 512-page list, a 256-page list, etc., down to a 1-page list.

The algorithm starts off by looking for a 64-page segment. That list is empty, so it then attempts to get a 128-page segment that it can split into two 64-page buddies. That doesn't exist either, so we then look for a 256-page segment. We don't have it, so we then look for a 512-page segment. We have one of those and split it into two 256-page buddies. Now we back up and look for the 256-page segment that we couldn't find earlier. Now we have two of those. We grab one and split it into two 128-page buddies. We back up further and look for that 128-page segment. We have two of those now and split one into two 64-page segments. We back up further to our initial search for a 64-page segment and, lo and behold, we have one we can allocate.

http://www.cs.rutgers.edu/~pxk/416/notes/09-memory.html




  • Buddy memory allocation


The buddy memory allocation technique is a memory allocation algorithm that divides memory into partitions to try to satisfy a memory request as suitably as possible.
This system makes use of splitting memory into halves to try to give a best fit.
http://en.wikipedia.org/wiki/Buddy_memory_allocation

Types of I/O Devices



  • Block devices

Organize data in fixed-size blocks
Transfers are in units of blocks
Blocks have addresses and data are therefore addressable
E.g. hard disks, USB disks, CD-ROMs


  • Character devices

Delivers or accepts a stream of characters, no block structure
Not addressable, no seeks
Printers, network interfaces, terminals

http://www.cs.princeton.edu/courses/archive/fall08/cos318/lectures/Lec11-Devices.pdf
   

What is a device driver?


Software in the OS to manage I/O to a device is called a device driver
A device driver abstracts the specific device hardware into a generic model of an I/O device
This makes it easy to port the OS and applications to new hardware - device independence

Goals of the OS
Provide a generic, consistent, convenient and reliable way to access I/O devices
Be as device-independent as possible


http://www0.cs.ucl.ac.uk/staff/s.wilbur/1b11/1b11-5.pdf

What are some advantages and disadvantages for a Modular Kernel?


A modular kernel is an attempt to merge the good points of kernel-level drivers and third-party drivers
In a modular kernel, some part of the system core will be located in independent files called modules that can be added to the system at run time.

Advantages
The most obvious is that the kernel doesn't have to load everything at boot time; it can be expanded as needed. This can decrease boot time, as some drivers won't be loaded unless the hardware they drive is used.
The core kernel isn't as big
If you need a new module, you don't have to recompile.

Disadvantages
It may lose stability. If a module does something bad, the kernel can crash, since modules run with full privileges.
...and therefore security is compromised. A module can do anything, so one could easily write a malicious module to crash things. (Some OSs only allow modules to be loaded by the root user.)

http://wiki.osdev.org/Modular_Kernel

What is mutual exclusion?


A way of making sure that if one process is using shared modifiable data, the other processes will be excluded from doing the same thing.
http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/mutualExclu.htm


In computer science, mutual exclusion refers to the problem of ensuring that no two processes or threads (henceforth referred to only as processes) can be in their critical section at the same time
http://en.wikipedia.org/wiki/Mutual_exclusion

What is a race condition?


A race condition is a situation in which a computer system tries to process at least two operations simultaneously.
http://www.wisegeek.com/what-is-a-race-condition.htm

a situation in which multiple processes read and write a shared data item and the final result depends on the relative timing of their execution
http://wiki.answers.com/Q/What_is_race_condition_in_operating_systems

The race condition is the situation where several processes access and manipulate shared data concurrently. The final value of the shared data depends upon which process finishes last. To prevent race conditions, concurrent processes must be synchronized.
http://www.basicsofcomputer.com/race_conditions_in_operating_system.htm

A race condition occurs when two or more threads are able to access shared data and they try to change it at the same time.
http://stackoverflow.com/questions/34510/what-is-a-race-condition

Critical Section


Critical section

In concurrent programming, a critical section is a piece of code that accesses a shared resource (data structure or device) that must not be concurrently accessed by more than one thread of execution
http://en.wikipedia.org/wiki/Critical_section


That part of the program where the shared memory is accessed is called the Critical Section.
http://www.personal.kent.edu/~rmuhamma/OpSystems/Myos/criticalSec.htm

A section of code, or a collection of operations, in which only one process may be executing at a given time is called a critical section.
http://www.basicsofcomputer.com/critical_section_problem_in_operating_system.htm

The fork(), exec() system call



  • fork():

The fork system call does not take an argument.
When a fork() system call is made, the operating system generates a copy of the parent process which becomes the child process.

The following is a simple example of fork()
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
   printf("Hello \n");
   fork();
   printf("bye\n");
   return 0;
}

Hello - is printed once by parent process
bye - is printed twice, once by the parent and once by the child


  • exec*()


"The exec family of functions replaces the current process image with a new process image." (man pages)

http://www.cs.uregina.ca/Links/class-info/330/Fork/fork.html

What is the meaning of the term "busy waiting"?


In computer systems organization or operating systems, "busy waiting" refers to a process that keeps checking something (e.g., an I/O device) to see if it is ready so that it can proceed with the processing.


The repeated execution of a loop of code while waiting for an event to occur is called busy-waiting.
http://www.coolinterview.com/interview/21017/

processes waiting on a semaphore must constantly check to see if the semaphore is not zero. This continual looping is clearly a problem in a real multiprogramming system (where often a single CPU is shared among multiple processes). This is called busy waiting and it wastes CPU cycles. When a semaphore does this, it is called a spinlock.
http://en.wikibooks.org/wiki/Operating_System_Design/Processes/Semaphores

What is a process?


In computing, a process is an instance of a computer program that is being executed.
It contains the program code and its current activity. Depending on the operating system (OS), a process may be made up of multiple threads of execution that execute instructions concurrently.

Basic Duties of an Operating System


• Process Management
• Memory Management
• I/O Management
• File Management

http://www.akaresoft.com/LINUX_AkareSoft.pdf

Process States


Process States and Their Transitions

When a process is born, its life in the system begins.
During its existence, a process executes and changes states.

 I. New
(The process is being created.)

II. Ready
(The process is waiting to be assigned to a processor.)
A process that is ready to be executed by the processor is said to be in the ready state.

III. Running
(Instructions are being executed.)

IV. Waiting / Blocked State
The process is waiting for some event to occur (such as an I/O completion or reception of a signal)

V. Terminated
The process has finished execution.

http://mavieksen.blogspot.com/2012/10/process-states.html




  • Primary process states

Created
Ready or waiting
Running
Blocked
Terminated
https://en.wikipedia.org/wiki/Process_state

History of Operating Systems

  • FIRST-GENERATION OPERATING SYSTEMS (1945 – 1955)

In the 1940s, Howard Aiken, John von Neumann, J. Presper Eckert and William Mauchly produced the first engines capable of computation, using VACUUM TUBES.

Von Neumann architecture
The term Von Neumann architecture, also known as the Von Neumann model or the Princeton architecture, derives from a 1945 computer architecture proposal by the mathematician and early computer scientist John von Neumann and others.
This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms.
http://en.wikipedia.org/wiki/Von_Neumann_architecture



The Von Neumann architecture has been incredibly successful, with most modern computers following the idea.
You will find the CPU chip of a personal computer holding a control unit and the arithmetic logic unit (along with some local memory) and the main memory is in the form of RAM sticks located on the motherboard.
But there are some basic problems with it. And because of these problems, other architectures have been developed.
http://www.teach-ict.com/as_as_computing/ocr/H447/F453/3_3_3/vonn_neuman/miniweb/pg3.htm


The Von Neumann architecture:
The Von Neumann architecture is a design model for a stored-program digital computer.
Its main characteristic is a single separate storage structure (the memory) that holds both program and data.

Some important features of the Von Neumann architecture are:

  1. both instructions (code) and data (variables and input/output) are stored in memory;
  2. memory is a collection of binary digits (bits) that have been organized into bytes, words, and regions with addresses;
  3. the code instructions and all data have memory addresses;
  4. to execute each instruction, it has to be moved to registers;
  5. only the registers have the “smarts” to do anything with the instructions; memory locations have no “smarts”;
  6. to save a result computed in the registers, it has to be moved back to memory;
  7. the granularity of instructions at the machine or assembler level is much smaller than the granularity at the MATLAB programming language level; that is, each MATLAB instruction corresponds to many machine instructions;
  8. operating systems and compilers keep the instructions and data in memory organized so they don't get mixed up together;
  9. if a program execution goes past its legal last instruction (for example) it can overwrite other instructions/data in memory and cause strange things to happen;
  10. one of the advantages of modern operating systems and compilers is the concept of relocatable code — that is, code that can be loaded and run from any location in memory.

http://last3.in/2010-11/CSE101/lectures/10-von-neumann-notes.html



  • SECOND GENERATION: TRANSISTORS AND BATCH SYSTEMS


These machines had become reliable enough to be sold to customers. Because they were very expensive, only large corporations, governments, or universities could buy them.
To have a job run, a program was written out on paper in FORTRAN or ASSEMBLER, then transferred to punched cards. The cards were then run through the machines in sequence.



  • THIRD GENERATION (1965-1980): INTEGRATED CIRCUITS AND MULTIPROGRAMMING


Previously, when a running job had to wait for I/O, no other job could run. With OS/360, memory was divided into several partitions, and a different job could run in each partition. When one job was suspended for I/O, the system switched to another job in memory.
Its other innovation was reading the programs on punched cards onto disk and loading programs from disk.


  • FOURTH GENERATION (1980-1990): PERSONAL COMPUTERS


With the development of LSI (Large Scale Integration) circuits (hundreds of transistors on a square centimeter of silicon), personal computers were produced.


http://members.comu.edu.tr/msahin/courses/isletim_sistemi_giris/ders02.pdf

  • An operating system (OS) is system software that manages computer hardware, software resources, and provides common services for computer programs. 
https://en.wikipedia.org/wiki/Operating_system

  • An operating system is a program that acts as an interface between the user and the computer hardware and controls the execution of all kinds of programs.
The following are some of the important functions of an operating system:

    Memory Management
    Processor Management
    Device Management
    File Management
    Security
    Control over system performance
    Job accounting
    Error detecting aids
    Coordination between other software and users

https://www.tutorialspoint.com/operating_system/os_overview.htm

  • An operating system is a set of programs that lies between applications software and the computer hardware. Conceptually the operating system software is an intermediary between the hardware and the applications software.
An operating system has three main functions: (1) manage the computer's resources, such as the central processing unit, memory, disk drives, and printers, (2) establish a user interface, and (3) execute and provide services for applications software.
https://homepage.cs.uri.edu/faculty/wolfe/book/Readings/Reading07.htm

  • Computer = HW + OS + Apps + Users
OS serves as interface between HW and ( Apps & Users )
OS provides services for Apps & Users
OS manages resources ( Government model, it doesn't produce anything. )
https://www2.cs.uic.edu/~jbell/CourseNotes/OperatingSystems/1_Introduction.html

Types of Operating Systems



* Mainframe operating systems
* Server operating systems
* Multiprocessor operating systems
* Personal computer operating systems
* Real-time operating systems
* Embedded operating systems
* Smart-card operating systems


Mainframe Operating Systems
* Used for systems that run large numbers of jobs requiring heavy I/O.
Their services:
1. Batch processing, e.g., updating particular accounts of all users at the same time.

2. Transaction processing, e.g., reservation operations.

3. Timesharing, e.g., database queries.


Server Operating Systems
They run on servers.
- Servers have large resource capacities.
- Workstations are connected to them.
- They may also be mainframe systems.


Multiprocessor Operating Systems
- Used on computer systems with more than one processor.
- The goal is to increase processing power.
- Systems are grouped according to how the processors are connected:
  * parallel systems
  * grid systems
  * multiprocessor systems
 
 
 
  Personal Computer Operating Systems
- Aim to present the user with an effective and easy-to-use interface.


Real-Time Operating Systems
They are used in industrial control systems.
Timing constraints are very important.
Examples: VxWorks, QNX


Embedded Operating Systems
Designed for handheld computers and embedded systems.
They contain limited, special-purpose functions.

Systems developed for TVs, microwave ovens, washing machines, and mobile phones.

PalmOS, WindowsCE, Symbian OS



Smart Card Operating Systems

- The smallest type of operating system.
- They run on credit-card-sized cards that carry a processor.
- Processor and memory constraints are very important.
- Some of these operating systems are Java-based: they contain a JVM and can run Java programs.

Examples: MULTOS, Windows Embedded CE, SmartecOS

http://members.comu.edu.tr/msahin/courses/isletim_sistemi_giris/ders02.pdf

BASIC OPERATING SYSTEM STRUCTURES

1. Monolithic systems
2. Layered systems
3. Virtual Machines
4. Exokernel systems
5. Client-Server model


Monolithic Systems
The operating system is one large collection of procedures.
Everything the system can do is contained inside the operating system.
All the procedures that carry out its functions sit at the same level and can interact with one another.
The kernel is large.


Layered Systems
They are built up from layers.
Each layer is constructed from the functions of the layer below it.

Virtual Machines
The goal is to separate the multiprogramming environment from the part that depends entirely on the hardware.
- Multiple virtual machines run in an upper layer. These machines are exact copies of the real system in every respect.
- Each virtual machine can run a different operating system.


Exokernels
An exokernel system works like the virtual machine scheme.
Each user is given a real copy of the computer.
Each virtual machine is given a specific subset of the resources; the resource ranges each virtual machine may use are fixed.
At the lowest layer runs an exokernel, which ensures that the resources are distributed and used in an orderly and correct manner.
- Each virtual machine can run a different operating system.


Client-Server Model
A microkernel runs in kernel mode. This kernel provides only a minimal set of services; the great bulk of the operating system's work is carried out by programs running in user mode.

http://members.comu.edu.tr/msahin/courses/isletim_sistemi_giris/ders02.pdf