Processor Scheduling Explained:
- The Operating System allocates processor time to each running application. During this processor time the Central Processing Unit (CPU) carries out the application's jobs so that the application works for the user.
- While one application is using the CPU for processing, the Operating System can queue up the next process to be executed.
- The Scheduler (an Operating System module) is responsible for ensuring processor time is used as efficiently as possible.
Interrupts:
- Interrupts are signals sent to the CPU by a software program, a hardware device or the internal clock. They are assigned priorities.
- When an interrupt is received by the CPU, execution of the currently running process/job is suspended and interrupts of a lower priority are disabled.
- The instructions and register values (the data used by the current process/job) are temporarily placed onto the system stack.
- An Interrupt Service Routine is called to deal with the interrupt. Once the interrupt has been dealt with, the instructions and register values are retrieved from the system stack and the original process/job resumes.
- A test for the presence of interrupts is carried out at the end of every Fetch-Decode-Execute cycle (after an instruction has been executed); a sketch of where this check sits follows this list.
- Interrupts may also be triggered regularly by a timer to indicate that it is the turn of the next process/job to have processor time (see 'Round robin' below).
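A minimal Python sketch of where this interrupt test sits in the Fetch-Decode-Execute cycle. The SimpleCPU class, its instruction strings and the priority values are illustrative assumptions rather than any real architecture, and disabling lower-priority interrupts is not modelled; the sketch only shows the check after each instruction and the save/restore of state via the system stack.

    import heapq

    class SimpleCPU:
        """Illustrative CPU model: not a real instruction set, just the control flow."""

        def __init__(self, program):
            self.program = program        # list of instruction strings
            self.pc = 0                   # program counter
            self.interrupts = []          # min-heap of (priority, description); lower number = higher priority
            self.system_stack = []        # holds saved state while an interrupt is serviced

        def raise_interrupt(self, priority, description):
            heapq.heappush(self.interrupts, (priority, description))

        def service_interrupts(self):
            # Called at the end of every Fetch-Decode-Execute cycle.
            while self.interrupts:
                self.system_stack.append(self.pc)              # save current state on the stack
                priority, description = heapq.heappop(self.interrupts)
                print(f"  ISR handling priority-{priority} interrupt: {description}")
                self.pc = self.system_stack.pop()              # restore state, original job resumes

        def run(self):
            while self.pc < len(self.program):
                instruction = self.program[self.pc]            # fetch
                print(f"Executing: {instruction}")             # decode + execute (stubbed)
                self.pc += 1
                self.service_interrupts()                      # test for interrupts after execution

    cpu = SimpleCPU(["LOAD r1, 5", "ADD r1, 1", "STORE r1, total"])
    cpu.raise_interrupt(priority=2, description="timer tick")
    cpu.run()
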
First come first served:
- Jobs are processed in the order in which they arrive (see the sketch after this list).
- There is no system of priority.
- It is non-preemptive, so jobs cannot be interrupted and resumed.
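A minimal Python sketch of first come first served, assuming each job is just a (name, burst time) pair in arbitrary time units; the function name and example jobs are illustrative.

    from collections import deque

    def first_come_first_served(jobs):
        # Process jobs strictly in arrival order: no priorities, no preemption.
        queue = deque(jobs)            # jobs are already in arrival order
        clock = 0
        while queue:
            name, burst = queue.popleft()
            clock += burst             # each job runs to completion before the next starts
            print(f"{name} finished at time {clock}")

    first_come_first_served([("print report", 5), ("backup", 12), ("compile", 3)])
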
Round robin:
- Jobs are dispatched on a first in first out (FIFO) basis.
- The Operating System sets an interval timer which generates interrupts at regular intervals.
- Each job is given a time slice (a limited amount of processing time). If the job is not complete before this time expires, or a higher priority interrupt occurs, the original job is sent to the back of the job queue and the next job gets a time slice (see the sketch after this list).
- Sometimes higher priority jobs may be allowed more than one consecutive time slice.
- This cycle happens so quickly that it seems to the user that multiple applications/jobs are running simultaneously (multi-tasking).
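A minimal Python sketch of round robin, assuming a fixed time slice in arbitrary units and simulating the timer interrupt simply by capping how long each job runs; job names, times and the time_slice value are illustrative, and priority handling is not modelled.

    from collections import deque

    def round_robin(jobs, time_slice=2):
        # Dispatch jobs FIFO; each gets at most time_slice units before
        # being sent to the back of the queue with its remaining time.
        queue = deque(jobs)            # (name, remaining_time) pairs in arrival order
        clock = 0
        while queue:
            name, remaining = queue.popleft()
            run_for = min(time_slice, remaining)
            clock += run_for
            if remaining > run_for:
                queue.append((name, remaining - run_for))   # time slice expired: back of the queue
            else:
                print(f"{name} finished at time {clock}")

    round_robin([("editor", 5), ("music player", 3), ("download", 6)])
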
Shortest remaining time:
- The job with the smallest estimated time remaining until completion is run next (see the sketch after this list).
- This reduces the number of small jobs waiting behind larger jobs.
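A minimal Python sketch of shortest remaining time, assuming jobs are (arrival time, name, burst time) triples in arbitrary units and that the scheduler re-evaluates after every time unit so a newly arrived shorter job can preempt; all names and times are illustrative.

    import heapq

    def shortest_remaining_time(jobs):
        pending = sorted(jobs)            # (arrival, name, burst), ordered by arrival time
        ready = []                        # min-heap of (remaining_time, name)
        clock = 0
        while pending or ready:
            # Move any jobs that have arrived by now into the ready heap.
            while pending and pending[0][0] <= clock:
                arrival, name, burst = pending.pop(0)
                heapq.heappush(ready, (burst, name))
            if not ready:                 # nothing ready yet: jump to the next arrival
                clock = pending[0][0]
                continue
            remaining, name = heapq.heappop(ready)
            clock += 1                    # run the job with the least time left for one unit
            if remaining > 1:
                heapq.heappush(ready, (remaining - 1, name))
            else:
                print(f"{name} finished at time {clock}")

    shortest_remaining_time([(0, "long batch job", 8), (1, "quick query", 2), (3, "medium task", 4)])
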
Shortest job first:
- The queue storing jobs to be processed is ordered according to the time required to complete the job.
- The jobs that require the least amount of time to complete are processed first (see the sketch after this list).
- If there is a constant flow of short jobs, this scheduling algorithm could lead to processor starvation (where longer jobs are perpetually denied the resources needed to be processed).
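A minimal Python sketch of shortest job first, assuming all jobs are already waiting as (name, burst time) pairs and that their estimated run times are known; names and times are illustrative.

    def shortest_job_first(jobs):
        # Order the queue by estimated run time; each job then runs to
        # completion (non-preemptive), shortest first.
        queue = sorted(jobs, key=lambda job: job[1])
        clock = 0
        for name, burst in queue:
            clock += burst
            print(f"{name} finished at time {clock}")

    shortest_job_first([("video render", 20), ("spell check", 1), ("spreadsheet recalc", 4)])
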
Multi-level feedback queues:
- This algorithm is designed to give preference to shorter jobs and Input/Output bound processes (helping to ensure that hardware resources are kept as busy as possible).
- Jobs are separated into categories based on their need for the processor.
- There are several job queues with different priorities, and jobs can move between queues (e.g. if a job has already taken up a lot of processor time or has been idle for a long time); see the sketch after this list.
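A minimal Python sketch of multi-level feedback queues, assuming three queues whose time slices double at each lower priority level and demoting any job that uses up its whole slice; promotion of long-idle jobs and I/O behaviour are not modelled, and all names, times and the time_slices values are illustrative.

    from collections import deque

    def multilevel_feedback(jobs, time_slices=(2, 4, 8)):
        queues = [deque() for _ in time_slices]   # one queue per priority level, highest first
        for job in jobs:
            queues[0].append(job)                 # every (name, burst_time) job starts at the top level
        clock = 0
        while any(queues):
            # Always serve the highest-priority non-empty queue.
            level = next(i for i, q in enumerate(queues) if q)
            name, remaining = queues[level].popleft()
            run_for = min(time_slices[level], remaining)
            clock += run_for
            if remaining > run_for:
                # Used its full slice without finishing: demote it (the bottom queue keeps it).
                demoted = min(level + 1, len(queues) - 1)
                queues[demoted].append((name, remaining - run_for))
            else:
                print(f"{name} finished at time {clock} (queue level {level})")

    multilevel_feedback([("keyboard handler", 1), ("compile", 10), ("file copy", 5)])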