– Scheduling algorithms, or disciplines, are necessary for the proper distribution of resources among the processes that request them, whether asynchronously or simultaneously.
– To date, many scheduling algorithms have been designed, one of which is multilevel queue scheduling.
In this article we discuss this scheduling discipline.
About Multilevel Queue Scheduling
– This scheduling algorithm is suitable for processes that can easily be separated into distinct groups.
– For example, interactive (foreground) processes and batch (background) processes are easily divided into two such groups.
– The response time requirements and other scheduling needs of these two types of processes are entirely different.
– This makes the algorithm suitable for systems in which different classes of processes have clearly different scheduling requirements.
– A closely related variant of this algorithm, in which processes can move between the queues, is the multilevel feedback queue; the operation described below follows this variant.
– It has been designed to meet the following design requirements of multi-mode systems:
- Giving preference to the I/O bound processes.
- Giving preference to the short jobs.
- Separating the processes into groups depending on their CPU needs.
– A process selected from a queue may be executed either preemptively or non-preemptively.
– Scheduling among the queues themselves can be done in one of two ways:
- One queue has absolute priority over the others. In this case, no process in a lower-priority queue can execute unless all higher-priority queues are empty.
- A time slice is divided among the queues. In this case, each queue is given a definite share of CPU time, which it then divides among its own processes. For example, 20 percent of the time may go to the background queue and the remaining time to the foreground queue; the former might use FCFS scheduling while the latter uses round-robin scheduling.
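The two inter-queue approaches above can be sketched in a few lines. This is a minimal illustration, not a standard API: the queue contents, the 80/20 split, and the function names are assumptions made for the example.

```python
from collections import deque

# Two queues: index 0 is the foreground (higher priority),
# index 1 is the background (lower priority).
queues = [deque(["fg1", "fg2"]), deque(["bg1"])]

def pick_next_fixed_priority(queues):
    """Absolute priority between queues: scan from highest to lowest
    and take the first waiting process; lower queues run only when
    all higher queues are empty."""
    for q in queues:
        if q:
            return q.popleft()
    return None  # nothing is ready

def queue_for_tick(tick, shares=(8, 2)):
    """Time slicing between queues: out of every sum(shares) ticks,
    shares[i] ticks go to queue i (here 80% foreground, 20% background)."""
    t = tick % sum(shares)
    for i, s in enumerate(shares):
        if t < s:
            return i
        t -= s

print(pick_next_fixed_priority(queues))  # takes from the foreground queue first
```

Under the fixed-priority scheme the background queue can starve, which is exactly why the time-sliced scheme exists as an alternative.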
– The individual queues used in multilevel queue scheduling are FIFO queues.
How Does Multilevel Queue Scheduling Operate?
– A new job is placed at the rear end of the FIFO queue at the topmost level.
– Eventually that job reaches the front of the queue and the CPU is assigned to it.
– If the job finishes executing, it leaves the system.
– Alternatively, the process may itself relinquish control of the CPU.
– If so, it leaves the queue network.
– When the process becomes ready to execute again, it re-enters the queue at the same level it left.
– A third case is that the process uses up its whole time slice without completing.
– In this case, the next job in line preempts it, and it is put back at the rear end of the queue one level lower.
– This cycle continues until the process either reaches the bottom queue or completes.
– In the bottommost queue, jobs circulate in a round-robin manner until they complete and quit the system.
– There is an optional fourth case: if a process blocks for an I/O operation, it is promoted one level and placed at the rear end of the next higher queue.
– This ensures that the scheduler favors I/O-bound processes.
– It also helps processes in the bottom-level queue escape upward.
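The feedback behaviour walked through above can be sketched as a small simulation. This is a hedged sketch under assumed parameters: three levels, a fixed quantum of 2 at every level, and made-up job names; it covers the completion and demotion cases but omits the optional I/O promotion.

```python
from collections import deque

NUM_LEVELS = 3   # queue 0 is the topmost (highest-priority) level
QUANTUM = 2      # time slice granted at every level (an assumption)

def run_mlfq(jobs):
    """jobs: dict of name -> remaining burst time. Returns completion order.
    New jobs enter at the rear of the top queue; a job that exhausts its
    quantum without finishing is demoted one level; the bottom queue
    circulates its jobs round-robin until they complete."""
    levels = [deque() for _ in range(NUM_LEVELS)]
    for name in jobs:
        levels[0].append(name)            # new job at rear of topmost queue
    finished = []
    while any(levels):
        # run the front job of the highest non-empty queue
        lvl = next(i for i, q in enumerate(levels) if q)
        name = levels[lvl].popleft()
        ran = min(QUANTUM, jobs[name])
        jobs[name] -= ran
        if jobs[name] == 0:
            finished.append(name)         # job completes and leaves the system
        else:
            # used its whole slice: demote one level, staying at the
            # bottom queue (round-robin) once it gets there
            levels[min(lvl + 1, NUM_LEVELS - 1)].append(name)
    return finished

print(run_mlfq({"A": 3, "B": 6, "C": 1}))  # → ['C', 'A', 'B']
```

Note how the short job C finishes first even though it arrived last in no special position: the feedback structure automatically favors short jobs, matching the design requirements listed earlier.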
Features of Multilevel Queue Scheduling
– A characteristic feature of this algorithm is that every process is granted just one time slice at each level in which to complete; if it fails to do so, it is pushed down toward the base-level queue.
– Here, the ready queue is broken down into smaller queues based on some property of the processes, such as process type or priority.
– One process is then chosen from the occupied queue with the highest priority and executed either preemptively or non-preemptively.
– The scheduling algorithm or policy of each queue can be different; for example, one queue may use round-robin scheduling while another uses FCFS.
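Partitioning the ready queue by a process property, with a separate policy per queue, can be sketched as follows. The process records, the `interactive` flag, and the policy mapping are all illustrative assumptions, not part of any real scheduler's interface.

```python
from collections import deque

# Each queue carries its own scheduling policy (a hypothetical mapping):
# interactive processes -> round-robin, batch processes -> FCFS.
policies = {"foreground": "round-robin", "background": "FCFS"}

def partition_ready_queue(processes):
    """Split a flat ready queue into per-class queues based on a
    property of each process (here: an 'interactive' flag)."""
    queues = {"foreground": deque(), "background": deque()}
    for proc in processes:
        cls = "foreground" if proc["interactive"] else "background"
        queues[cls].append(proc["name"])
    return queues

ready = [{"name": "editor", "interactive": True},
         {"name": "backup", "interactive": False},
         {"name": "shell", "interactive": True}]
print(partition_ready_queue(ready))
```

Once partitioned, each queue is scheduled internally by its own policy, and one of the two inter-queue schemes described earlier arbitrates between the queues.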