Osd
Operating System Processes and Scheduling Quiz
Test your knowledge on operating system processes, scheduling algorithms, and memory management with this comprehensive quiz! Whether you're a student preparing for exams, a teacher looking for assessment tools, or just an enthusiast in computer science, this quiz has something for you.
Key Features:
- 21 thought-provoking questions on crucial OS concepts.
- Multiple choice and drop list formats to challenge your understanding.
- Great for enhancing your knowledge in operating systems!
Blocked process state:
Running process state:
Ready process state:
In the context of paging memory management, lazy-loading (or load-on-demand) technique means:
Loading a page of a process in physical memory only when that page is accessed during the process’ execution
Loading all pages of a process in physical memory when that process starts its execution
Loading a page of a process in virtual memory only when that page is accessed during the process’ execution
Saving on disk a page of a process, when that page is accessed during the process’ execution
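Lazy loading (demand paging) brings a page into physical memory only on its first access. Below is a minimal, self-contained C simulation of that behaviour; the `present[]` array standing in for a page table is purely an illustrative assumption.

```c
/* Demand-paging simulation: a page is "loaded" only on first access. */
#include <stdio.h>
#include <stdbool.h>

#define NUM_PAGES 8

static bool present[NUM_PAGES];   /* is the page already in physical memory? */

static void access_page(int page)
{
    if (!present[page]) {                 /* page fault: load on demand */
        printf("page fault: loading page %d from disk\n", page);
        present[page] = true;             /* now mapped in physical memory */
    }
    printf("accessing page %d\n", page);
}

int main(void)
{
    access_page(3);   /* first access -> page fault, page 3 loaded */
    access_page(3);   /* already present -> no fault               */
    access_page(5);   /* first access -> page fault, page 5 loaded */
    return 0;
}
```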
Which memory allocation strategy is the internal fragmentation problem specific to?
Paging
Segmentation
LRU
FIFO
Allocation in a single contiguous chunk of bytes
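Internal fragmentation arises from rounding allocations up to a fixed block (page) size. A short worked computation, assuming a 4 KiB page size:

```c
/* Internal fragmentation with fixed-size pages (assumed 4 KiB page size). */
#include <stdio.h>

int main(void)
{
    const long page_size = 4096;          /* bytes per page (assumption)    */
    const long request   = 10000;         /* bytes a process actually needs */

    long pages  = (request + page_size - 1) / page_size;  /* round up -> 3 pages      */
    long wasted = pages * page_size - request;            /* unused tail of last page */

    printf("%ld pages allocated, %ld bytes lost to internal fragmentation\n",
           pages, wasted);                /* 3 pages, 2288 bytes */
    return 0;
}
```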
The last system call any user application makes is:
MutexRelease
MemoryFree
ProcessExit (or ThreadExit)
ProcessCreate (or ThreadCreate)
FileClose
What do you understand by “starvation” in the context of a fixed-priority scheduler?
The fact that some threads’ priority is never decreased
The fact that all threads in a system are blocked, waiting for resources that could be released only by those same threads
The fact that low-priority threads could be delayed indefinitely (for a very long time) in accessing a needed resource, because higher-priority threads are continuously given that resource
The fact that a lock could only be used in a mutual exclusion manner
The fact that some threads’ priority is never increased
How does a deadlock-handling strategy work?
By not satisfying any resource request, even if the resources are available
By providing an environment where deadlock cannot occur, ensuring that at least one of the conditions necessary for a deadlock can never hold
By using advance information about the resources processes will need in order to keep the system in a safe state
By not making available any resource in the system
By not allowing the creation of any process in the system
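One common way to make a deadlock condition impossible is to break circular wait by imposing a global lock-acquisition order. A minimal sketch with two pthread mutexes (the lock names and the two-thread setup are illustrative):

```c
/* Breaking circular wait: every thread takes lock_a before lock_b. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    /* Both threads respect the same order, so no wait cycle can form. */
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    printf("thread %ld holds both locks\n", (long)arg);
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```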
Which of the following scheduling algorithms uses time measurements to limit the time a thread spends on a processor while other threads are waiting in the ready queue? Select one or more:
Variable-priority (MLFQ) scheduler
Fixed-priority scheduler
Non-preemptive shortest job first scheduler
Non-preemptive first-come-first-served
Round-robin
Which is the main hardware mechanism that can be used on uniprocessor systems to provide atomicity when implementing synchronization mechanisms inside an operating system?
Disable interrupts
Priority-based scheduler
Busy waiting
Take a lock
Round-robin scheduling
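On a uniprocessor, turning interrupts off prevents any preemption, so a short critical section becomes effectively atomic. A sketch of that idea; `intr_disable()`/`intr_enable()` are hypothetical stand-ins for the real kernel primitives (e.g. cli/sti on x86) and are empty stubs here only so the example compiles:

```c
/* Uniprocessor atomicity sketch: a lock acquire protected by disabling interrupts. */
#include <stdbool.h>
#include <stdio.h>

static void intr_disable(void) { /* real kernel: clear the interrupt flag */ }
static void intr_enable(void)  { /* real kernel: set the interrupt flag   */ }

struct lock { bool held; };

/* With interrupts off, no timer tick can preempt us on a uniprocessor,
 * so the test-and-set of 'held' is effectively atomic. */
static bool lock_try_acquire(struct lock *l)
{
    bool ok;
    intr_disable();
    ok = !l->held;
    if (ok)
        l->held = true;
    intr_enable();
    return ok;
}

int main(void)
{
    struct lock l = { false };
    bool first  = lock_try_acquire(&l);   /* succeeds                */
    bool second = lock_try_acquire(&l);   /* fails: already held     */
    printf("first acquire: %d, second acquire: %d\n", first, second);
    return 0;
}
```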
In the context of paging memory management, the swapping technique consists of the following operations:
Swap-in
Swap-through
Swap-on
Swap-off
Swap-out
Which of the following are characteristics specific to the Round-Robin scheduling policy?
Does not consider thread priorities, being usually applied to schedule threads with the same priority
Deals with threads in the ready queue in a FCFS (first-come-first-served) manner
Chooses from the ready queue the thread with the highest priority, regardless of the order in which threads were inserted into the queue
It is preemptive, letting a thread run only for a limited time quantum, unless it terminates in the meantime
It is non-preemptive, letting threads run until their completion
Keeps a thread in the blocked state only for an established amount of time, given by a predefined time slice
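A minimal round-robin simulation in C, illustrating a FCFS ready queue plus a fixed time quantum; the quantum of 2 ticks and the task set are arbitrary assumptions:

```c
/* Round-robin simulation: FCFS ready queue + fixed time quantum. */
#include <stdio.h>

struct task { const char *name; int remaining; };

int main(void)
{
    struct task tasks[] = { {"A", 5}, {"B", 3}, {"C", 4} };
    const int n = 3, quantum = 2;         /* time slice (assumption) */
    int left = n;

    while (left > 0) {
        for (int i = 0; i < n; i++) {     /* cycle through the ready queue in order */
            if (tasks[i].remaining == 0)
                continue;
            int run = tasks[i].remaining < quantum ? tasks[i].remaining : quantum;
            tasks[i].remaining -= run;    /* preempted after at most one quantum */
            printf("%s runs %d ticks, %d left\n", tasks[i].name, run, tasks[i].remaining);
            if (tasks[i].remaining == 0)
                left--;
        }
    }
    return 0;
}
```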
Which of the following synchronization mechanisms could be used to ensure mutual exclusion among multiple concurrent threads?
Blocking queue
Event
Spinlock
Semaphore
Condition variable
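A minimal example of mutual exclusion on a shared counter, here using a pthread mutex; a spinlock or a binary semaphore could protect the same critical section:

```c
/* Mutual exclusion of a shared counter using a pthread mutex. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&m);     /* only one thread at a time in here */
        counter++;
        pthread_mutex_unlock(&m);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter);   /* 200000 with mutual exclusion */
    return 0;
}
```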
Which of the following scheduling algorithms is based on estimating the future CPU needs (bursts) of ready threads?
First come, first served
Shortest job first
Round-robin
Priority-based
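Shortest-job-first needs a prediction of each thread's next CPU burst; the classic textbook approach is exponential averaging, estimate(n+1) = alpha * measured(n) + (1 - alpha) * estimate(n). A small C sketch (alpha = 0.5 and the burst values are arbitrary assumptions):

```c
/* Shortest-job-first burst prediction via exponential averaging. */
#include <stdio.h>

static double next_estimate(double measured, double previous, double alpha)
{
    return alpha * measured + (1.0 - alpha) * previous;
}

int main(void)
{
    double est = 10.0;                       /* initial guess, in ms */
    double bursts[] = { 6.0, 4.0, 6.0, 13.0 };

    for (int i = 0; i < 4; i++) {
        est = next_estimate(bursts[i], est, 0.5);
        printf("after burst %.0f ms, next estimate = %.2f ms\n", bursts[i], est);
    }
    return 0;
}
```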
Which of the following synchronization mechanisms is generally considered related to the priority inversion problem and its corresponding priority donation solution?
Semaphores
Events
Locks
Condition variables
A spinlock implementation is characterized by:
Keeping the process trying to take the lock in a busy-waiting loop, until the lock becomes available
Suspending the process trying to take the lock, taking away its processor, until the lock becomes available
Keeping the process trying to release the lock in a busy-waiting loop, until another process tries to take the lock
Releasing the lock only when no other process is waiting for it
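A minimal spinlock sketch in C11: the acquiring thread busy-waits on an atomic test-and-set flag until the lock becomes available.

```c
/* Minimal spinlock: busy-wait on an atomic test-and-set flag. */
#include <stdatomic.h>
#include <stdio.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

static void spin_lock(void)
{
    /* Loop ("spin") until the old value was clear, i.e. we grabbed the lock. */
    while (atomic_flag_test_and_set_explicit(&lock, memory_order_acquire))
        ;                                   /* busy waiting, no blocking */
}

static void spin_unlock(void)
{
    atomic_flag_clear_explicit(&lock, memory_order_release);
}

int main(void)
{
    spin_lock();
    puts("in critical section");
    spin_unlock();
    return 0;
}
```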
Which of the following thread state transitions could normally take place in a correct OS scheduler? Select one or more:
From blocked to terminated (zombie)
From running to terminated (zombie)
From ready to running
From running to blocked
From running to ready
From blocked to running
From blocked to ready
From ready to terminated (zombie)
The main reason Unix-like operating systems create processes using the particular fork() syscall (usually followed by an exec() syscall executed by the child process) is the following:
It was a bad idea (from the performance point of view) someone had when the Unix OS was developed
To allow the parent process to resume its execution faster, improving the possible parallelism between the parent and the child
To keep the parent blocked until the child process loads its new code and starts executing it
Because the child process arguments are not known when fork() is called
To allow the child process to resume its execution faster, improving the possible parallelism between the parent and the child
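For reference, the classic fork()-then-exec() pattern: the child replaces its image with a new program while the parent is free to continue in parallel (or to wait).

```c
/* Classic Unix process creation: fork() then exec() in the child. */
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void)
{
    pid_t pid = fork();                 /* duplicate the calling process */

    if (pid == 0) {                     /* child: replace its image with a new program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");               /* reached only if exec fails */
        exit(1);
    } else if (pid > 0) {               /* parent: free to run in parallel, or wait */
        waitpid(pid, NULL, 0);
    } else {
        perror("fork");
        return 1;
    }
    return 0;
}
```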
Which of the following techniques makes the fork() syscall faster on UNIX-like operating systems?
Disabling interrupts on the processor where the child process is created
Using atomic processor instructions
Copy-on-write applied on the file system the parent and the child share
Crashing the operating system
Copy-on-write applied on the memory the parent and the child share
How many threads could be in the running state in an OS running on a system with 44 logical processors?
22
33
8
69
44
In the context of priority donation (as a solution to the priority inversion problem), when a lock is released by its holder, what is the new priority of that thread (i.e., the former lock holder)?
The maximum priority supported by the OS, regardless of its real priority
The minimum priority of all threads waiting for any lock still held by that thread
The maximum priority of all threads waiting for any lock still held by that thread
Its real priority, regardless of how many other locks that thread still holds
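A sketch of how the release case is often handled in teaching kernels such as Pintos (an assumption, not necessarily this quiz's reference implementation): the thread's effective priority is recomputed from its own base priority and the priorities of threads still waiting on locks it still holds.

```c
/* Priority-donation sketch (Pintos-style assumption): recompute the effective
 * priority of a thread after it releases a lock. */
#include <stdio.h>

#define MAX_LOCKS   4
#define MAX_WAITERS 4

struct lock_info {
    int num_waiters;
    int waiter_priority[MAX_WAITERS];   /* priorities of threads blocked on this lock */
};

struct thread_info {
    int base_priority;                  /* the thread's real (undonated) priority */
    int num_held;
    struct lock_info held[MAX_LOCKS];   /* locks the thread still holds */
};

/* Effective priority = max(base, donations from waiters on still-held locks). */
static int effective_priority(const struct thread_info *t)
{
    int p = t->base_priority;
    for (int i = 0; i < t->num_held; i++)
        for (int j = 0; j < t->held[i].num_waiters; j++)
            if (t->held[i].waiter_priority[j] > p)
                p = t->held[i].waiter_priority[j];
    return p;
}

int main(void)
{
    /* Thread with base priority 10 still holds one lock with a waiter at priority 31. */
    struct thread_info t = { .base_priority = 10, .num_held = 1,
                             .held = { { .num_waiters = 1, .waiter_priority = { 31 } } } };
    printf("priority after release = %d\n", effective_priority(&t));   /* 31 */
    return 0;
}
```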
{"name":"Osd", "url":"https://www.quiz-maker.com/QPREVIEW","txt":"Test your knowledge on operating system processes, scheduling algorithms, and memory management with this comprehensive quiz! Whether you're a student preparing for exams, a teacher looking for assessment tools, or just an enthusiast in computer science, this quiz has something for you.Key Features:21 thought-provoking questions on crucial OS concepts.Multiple choice and drop list formats to challenge your understanding.Great for enhancing your knowledge in operating systems!","img":"https:/images/course8.png"}