Table of contents
- 1) Can you give me the difference between kernel-level threads and user-level threads, and their uses?
- 2) Difference between an I/O-bound process and a CPU-bound process.
- 3) Schedulers (long-term, mid-term, short-term) and the dispatcher
- 4) Multilevel Queue Scheduling and Multilevel Feedback Queue Scheduling
- 5) What are the requirements that a critical section problem solution must fulfill?
- 6) Difference between paging and segmentation?
- 7) Virtual memory?
- 8) What is IPC? What are the different IPC mechanisms?
- 9) What are the benefits of a multiprocessor system?
- 10) What is the RAID structure in an OS? What are the different levels of RAID configuration?
- 11) What is a pipe and when is it used?
- 12) What is a bootstrap program in an OS?
- 13) Why is the operating system important?
- 14) What do you mean by overlays in an OS?
- 15) What is thrashing in an OS?
- 16) Multitasking vs. Multiprocessing
- 17) What do you mean by sockets in an OS?
- 18) Explain a zombie process.
- 19) What are starvation and aging in an OS?
- 20) What is the difference between paging and segmentation?
- 21) What is virtual memory?
- 22) Semaphore vs. Mutex
- 23) What is a kernel, and what are its main functions?
- 24) What is Context Switching?
- 25) What is the difference between a process and a thread?
- 26) What is a deadlock in an OS? What are the necessary conditions for a deadlock?
- 27) What do you mean by Belady's Anomaly?
- 28) What is spooling in an OS?
- 29) What is the Banker's algorithm used for?
1) Can you give me the difference between kernel-level threads and user-level threads, and their uses?
Kernel-Level Threads (KLTs)
Definition: Kernel-level threads are managed directly by the operating system's kernel. The kernel is responsible for scheduling, creating, and managing these threads.
User-Level Threads (ULTs)
Definition: User-level threads are managed and scheduled by a user-level library or runtime rather than the kernel. The operating system is unaware of these threads.
Summary
Kernel-Level Threads are managed by the operating system and are suitable for applications requiring multi-core parallelism and direct OS support.
User-Level Threads are managed by user-level libraries and can be more efficient for certain applications but may not fully exploit multi-core processors unless combined with kernel-level threading.
In modern systems, user-level threads are often used in conjunction with kernel-level threads to provide the benefits of both approaches. For example, many systems use a hybrid threading model where user-level threads are mapped to kernel-level threads to balance performance and efficiency.
Let’s break it down with a simple analogy.
Threads: Like Tasks
Imagine you have a big project with lots of little tasks to complete. Each task is like a “thread” in a computer program. There are two main ways to handle these tasks: by using a personal assistant or by doing it yourself.
Kernel-Level Threads (KLTs): The Professional Assistant
What They Are: Kernel-level threads are like hiring a professional assistant to handle your tasks.
How It Works: This assistant (the operating system) takes care of scheduling and organizing the tasks. They know exactly what each task needs and can manage many tasks at once.
Advantages: They can work on different tasks simultaneously (like working on multiple parts of the project at the same time), and they’re very efficient.
Drawback: Sometimes it takes a bit of time to get the assistant to switch tasks, which can be a bit slow.
When to Use: You’d hire a professional assistant when you have a lot of complex tasks that need careful management and when you want to get the most out of your time.
User-Level Threads (ULTs): Doing It Yourself
What They Are: User-level threads are like managing all the tasks yourself without any extra help.
How It Works: You use a to-do list and organize the tasks yourself. You decide which task to do next, and you switch between tasks quickly.
Advantages: It’s quick to switch from one task to another because you’re handling it directly.
Drawback: You can only work on one task at a time (your brain is like a single-core processor). Worse, if one task forces you to stop and wait (say, for a delivery), everything else waits too, because the outside world only deals with one of you.
When to Use: You’d handle the tasks yourself when they’re simple and you don’t need a lot of help. It’s also useful if you have a specific way you like to manage your tasks.
Putting It All Together
In real computers, kernel-level threads (professional assistants) are good for handling complex and many tasks efficiently, while user-level threads (doing it yourself) are good for quick, simple tasks where you don’t need a lot of overhead.
In Summary:
Kernel-Level Threads: Professional assistant who helps with lots of complex tasks.
User-Level Threads: Managing tasks yourself, quickly switching between them.
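To see the contrast in runnable form, here is a minimal Python sketch (assuming CPython 3.8+ for threading.get_native_id). threading.Thread objects are backed by real kernel threads that the OS schedules, while asyncio tasks play the role of user-level threads: units of work scheduled entirely by the event loop in user space, invisible to the kernel.

```python
import threading
import asyncio

# Kernel-level threads: each Thread maps to an OS thread that the
# kernel schedules; it can run on any core and block independently.
def kernel_level_demo():
    def worker(name):
        print(f"{name} runs on OS thread id {threading.get_native_id()}")
    threads = [threading.Thread(target=worker, args=(f"KLT-{i}",)) for i in range(3)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()

# User-level scheduling: asyncio tasks are scheduled by the event loop
# in user space; the kernel is unaware of them and sees a single thread.
async def user_level_demo():
    async def worker(name):
        print(f"{name} cooperatively yields to the event loop")
        await asyncio.sleep(0)  # voluntary yield: a user-space "context switch"
    await asyncio.gather(*(worker(f"ULT-{i}") for i in range(3)))

if __name__ == "__main__":
    kernel_level_demo()
    asyncio.run(user_level_demo())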
2) Difference between an I/O-bound process and a CPU-bound process.
I/O-Bound Process
Definition: An I/O-bound process is one that spends most of its time waiting for input/output operations to complete. These operations could include reading from or writing to disks, network communication, or other forms of data transfer.
Characteristics:
Waiting for Input/Output: The process spends a lot of time waiting for data to be read or written, such as waiting for a file to be downloaded from the internet.
Less CPU Usage: While waiting for I/O operations to finish, the CPU is often idle or doing very little work.
Examples: A web browser downloading files, a program fetching data from a remote server, or a database application waiting for data to be read from a disk.
In Simple Terms: Imagine you're waiting for a package to arrive in the mail. While you're waiting, you’re not doing much except checking the mailbox occasionally. Your main “activity” is waiting for that package.
CPU-Bound Process
Definition: A CPU-bound process is one that spends most of its time using the CPU to perform computations and process data. These processes require a lot of processing power and use the CPU intensively.
Characteristics:
Intensive Computation: The process is busy performing calculations or processing data, with little time spent waiting for I/O operations.
High CPU Usage: The CPU is working hard, performing complex calculations or processing tasks.
Examples: Video rendering, complex mathematical computations, or simulations where the CPU is doing a lot of calculations.
In Simple Terms: Imagine you’re working on a challenging math problem or a complex puzzle. You’re fully engaged in solving it, and the only thing you’re doing is working on that problem. There’s no waiting around—just lots of active brainpower.
Summary
I/O-Bound Process: Spends most of its time waiting for data to come in or be sent out. The CPU isn’t heavily used during these waiting periods.
CPU-Bound Process: Spends most of its time doing computations or processing data. The CPU is heavily used and constantly working.
In a computer system, balancing these types of processes helps ensure that resources are used efficiently. For example, with a mix of I/O-bound and CPU-bound processes, the scheduler can run CPU-bound work while I/O-bound processes wait on their devices, keeping both the CPU and the I/O devices busy.
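A tiny Python sketch of the contrast; time.sleep stands in for real I/O (a disk read or network request), and the loop size is an arbitrary choice:

```python
import time

def cpu_bound():
    # Busy the CPU with pure computation: no waiting, just arithmetic.
    return sum(i * i for i in range(2_000_000))

def io_bound():
    # Simulate waiting on a device with sleep: the CPU is idle for
    # almost the entire duration.
    time.sleep(1.0)

for task in (cpu_bound, io_bound):
    start = time.perf_counter()
    task()
    print(f"{task.__name__}: {time.perf_counter() - start:.2f}s wall time")
```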
3) Schedulers (long-term, mid-term, short-term) and the dispatcher
1. Long-Term Scheduler (Job Scheduler)
Purpose: The long-term scheduler controls which processes are admitted to the system for execution. It decides which processes are moved from the job pool (the list of processes waiting to be admitted) into the ready queue (where they wait for CPU time).
Characteristics:
Frequency: This scheduler operates less frequently compared to the other schedulers. It doesn’t need to make decisions as often.
Goal: The main goal is to control the degree of multiprogramming, i.e., how many processes are in memory. By doing this, it helps in managing the system’s load and ensuring that the system doesn’t become overloaded.
Decisions: It decides which jobs to load into memory and which to keep waiting on disk.
In Simple Terms: Imagine you have a stack of papers (jobs) that need to be processed. The long-term scheduler decides which papers from the stack should be moved to your desk (memory) where you can work on them.
2. Short-Term Scheduler (CPU Scheduler)
Purpose: The short-term scheduler decides which of the processes in the ready queue should be executed next by the CPU. It’s responsible for deciding which process should be given CPU time.
Characteristics:
Frequency: This scheduler operates frequently and makes decisions multiple times per second. It handles the switching of processes on the CPU.
Goal: The goal is to ensure that the CPU is used efficiently and that processes are given fair and timely access to CPU time.
Decisions: It decides which process gets to use the CPU next and for how long, often using algorithms like Round-Robin, Priority Scheduling, etc.
In Simple Terms: Imagine you have a stack of documents on your desk that need to be reviewed one by one. The short-term scheduler decides which document you should pick up and review next.
3. Mid-Term Scheduler (Swapper)
Purpose: The mid-term scheduler manages the swapping of processes in and out of main memory and secondary storage (usually disk). This process is known as "swapping."
Characteristics:
Frequency: Operates less frequently compared to the short-term scheduler but more often than the long-term scheduler.
Goal: The goal is to manage memory efficiently and ensure that there is enough memory available for active processes. It helps in balancing the system load by moving processes between memory and disk.
Decisions: It decides which processes should be swapped out of memory to disk (to free up space) and which should be swapped back into memory from disk (to be executed).
In Simple Terms: Imagine your desk is too cluttered with documents. The mid-term scheduler decides which documents to put back in the filing cabinet (disk) to make space for new ones, and which ones to take out of the cabinet and place back on your desk.
4. Dispatcher
Purpose: The dispatcher is responsible for giving control of the CPU to the process selected by the short-term scheduler. It handles the actual switching of processes.
Characteristics:
Frequency: Operates every time a context switch occurs, which is whenever the CPU switches from one process to another.
Goal: The goal is to perform context switching efficiently and effectively.
Functions: It performs actions such as saving the state of the currently running process, loading the state of the next process to be executed, and updating the CPU registers and program counter.
In Simple Terms: Imagine you have to switch between different tasks on your desk. The dispatcher is like a personal assistant who helps you transition from one task to another smoothly, making sure you have everything you need for the next task.
Summary
Long-Term Scheduler (Job Scheduler): Decides which processes to load into memory from disk, managing overall system load.
Short-Term Scheduler (CPU Scheduler): Decides which process in the ready queue gets to use the CPU next, managing CPU time.
Mid-Term Scheduler (Swapper): Manages swapping processes in and out of memory, balancing memory usage.
Dispatcher: Handles the context switching between processes, ensuring smooth transitions when processes are switched in and out of the CPU.
Each scheduler plays a crucial role in managing processes and system resources efficiently.
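The following is a toy Python sketch of a round-robin short-term scheduler, with made-up process names and burst times; the print statement stands in for the dispatcher's context switch:

```python
from collections import deque

# Toy round-robin short-term scheduler. Each entry is
# (process_name, remaining_burst_ticks).
def round_robin(processes, quantum=2):
    ready_queue = deque(processes)
    clock = 0
    while ready_queue:
        name, remaining = ready_queue.popleft()
        print(f"t={clock}: dispatcher switches CPU to {name}")
        run = min(quantum, remaining)
        clock += run
        if remaining - run > 0:
            ready_queue.append((name, remaining - run))  # preempted, back of queue
        else:
            print(f"t={clock}: {name} finished")

round_robin([("P1", 5), ("P2", 3), ("P3", 1)])
```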
4) Multilevel Queue Scheduling and Multilevel Feedback Queue Scheduling
Multilevel Queue Scheduling
Concept: Multilevel queue scheduling is a method where processes are divided into different queues based on their priority, type, or other criteria. Each queue has its own scheduling policy.
How It Works:
Queues: Processes are classified into different queues. For example, you might have separate queues for interactive processes (like a text editor), batch processes (like a data analysis job), and system processes (like background services).
Priority: Each queue typically has a different priority level. Higher-priority queues are given CPU time before lower-priority queues.
Scheduling Policies: Each queue can use a different scheduling algorithm. For example:
Interactive Queue: Might use Round-Robin scheduling to ensure fairness.
Batch Queue: Might use First-Come-First-Served (FCFS) to handle long-running jobs.
Queue Assignment: Once a process is assigned to a queue, it generally remains in that queue throughout its execution; if the system allows movement between queues at all, it follows fixed, predefined rules.
Example:
Queue 1: High-priority interactive processes (e.g., a user interface application).
Queue 2: Medium-priority batch processes (e.g., data processing).
Queue 3: Low-priority background tasks (e.g., system maintenance).
In Simple Terms: Imagine you’re managing three types of tasks at work:
Urgent tasks (like client meetings) are handled immediately.
Important tasks (like project work) are scheduled next.
Routine tasks (like organizing files) are handled when there’s time.
Multilevel Feedback Queue Scheduling
Concept: Multilevel feedback queue scheduling is a more flexible and dynamic version of multilevel queue scheduling. It allows processes to move between queues based on their behavior and execution history.
How It Works:
Multiple Queues: Similar to multilevel queue scheduling, there are several queues, each with different priorities and possibly different scheduling policies.
Feedback Mechanism: Processes can move between these queues based on their execution characteristics:
New Processes: Start in a high-priority queue.
Process Behavior: If a process uses up its time slice in a higher-priority queue, it may be moved to a lower-priority queue.
Starvation Prevention: If a process remains in a lower-priority queue for too long, it might be moved back to a higher-priority queue to prevent starvation.
Dynamic Adjustment: The feedback mechanism adjusts process priorities dynamically based on factors like CPU usage, execution time, and waiting time.
Example:
Queue 1: Highest priority for short interactive jobs.
Queue 2: Medium priority for longer jobs with some feedback.
Queue 3: Lower priority for jobs that have been running a long time and need more time.
In Simple Terms: Think of it as a system where you adjust your work strategy based on how tasks are behaving:
Quick tasks that finish fast are handled first.
Tasks that take longer might be given more time if they’re important.
Tasks that seem to be taking forever might get a review to see if they need more attention.
Summary of Differences
Multilevel Queue Scheduling:
Processes are categorized into fixed queues based on certain criteria.
Each queue has a specific scheduling policy.
Processes generally stay in the queue to which they were initially assigned.
Multilevel Feedback Queue Scheduling:
Processes can move between different queues based on their behavior and needs.
This method adapts to the dynamic nature of process execution, adjusting priorities to prevent starvation and improve efficiency.
More flexible and dynamic compared to multilevel queue scheduling.
Both scheduling methods aim to efficiently manage processes and improve system performance, but they differ in their flexibility and how they handle process priorities and movement between queues.
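Below is a toy Python sketch of a two-level feedback queue, with made-up jobs and quanta. It shows only the demotion rule; a real MLFQ would also periodically boost long-waiting jobs back up to prevent starvation.

```python
from collections import deque

# Toy multilevel feedback queue with two levels. New jobs enter the
# high-priority queue (quantum 2); a job that uses its full slice is
# demoted to the low-priority queue (quantum 4). Job = (name, remaining).
def mlfq(jobs):
    queues = [deque(jobs), deque()]   # index 0 = high priority
    quanta = [2, 4]
    while any(queues):
        level = 0 if queues[0] else 1      # always serve the higher queue first
        name, remaining = queues[level].popleft()
        run = min(quanta[level], remaining)
        remaining -= run
        print(f"{name} ran {run} ticks at level {level}")
        if remaining > 0:
            # Used its whole slice without finishing: demote it
            # (or keep it at the bottom level).
            queues[min(level + 1, 1)].append((name, remaining))

mlfq([("interactive", 1), ("batch", 7)])
```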
5) What are the requirements that a critical section problem solution must fulfill?
1. Mutual Exclusion
Definition: Only one process or thread can be in the critical section at any one time.
Requirement: When one process is executing in its critical section, no other process should be allowed to enter its critical section. This prevents conflicts and ensures data integrity.
Example: If two threads are trying to update a shared counter, mutual exclusion ensures that only one thread updates the counter at a time.
2. Progress
Definition: If no process is executing in its critical section and there are processes waiting to enter their critical sections, then the selection of the next process to enter its critical section cannot be postponed indefinitely.
Requirement: A process should not be prevented from entering its critical section if it is ready and waiting to do so, as long as no other process is currently in its critical section. This ensures that the system makes progress and does not get stuck.
Example: If Thread A is waiting to enter the critical section and Thread B is not in the critical section, Thread A should eventually get a chance to enter the critical section.
3. Bounded Waiting
Definition: There must be a limit on the number of times other processes can enter their critical sections before a waiting process is allowed to enter its critical section.
Requirement: This ensures that no process waits indefinitely while others continually enter their critical sections. Every waiting process should be given a chance to execute within a finite amount of time.
Example: If Thread A has been waiting for a long time to enter the critical section, it should not have to wait forever while other threads continue to enter their critical sections.
4. Atomicity (Optional, but Often Implied)
Definition: Operations within the critical section should be indivisible, meaning that they should either be fully completed or not executed at all.
Requirement: To ensure mutual exclusion, critical sections should be executed atomically to prevent other processes from interfering mid-operation.
Example: If a critical section involves incrementing a counter, the entire increment operation should be completed in one go without interruptions.
Summary of Requirements
Mutual Exclusion: Ensures that only one process or thread is in the critical section at any given time.
Progress: Guarantees that processes waiting to enter the critical section are eventually given a chance.
Bounded Waiting: Limits how long processes must wait to enter their critical section, preventing indefinite delays.
Atomicity (Optional): Ensures that operations within the critical section are executed as a single, uninterrupted action.
By satisfying these requirements, a critical section solution ensures that shared resources are used safely and efficiently in a concurrent environment. Various synchronization mechanisms, like mutexes, semaphores, and locks, are commonly used to achieve these goals in practice.
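A minimal Python sketch of mutual exclusion in practice: four threads increment a shared counter inside a critical section guarded by a lock. Without the lock, the read-modify-write may interleave and updates can be lost.

```python
import threading

counter = 0
lock = threading.Lock()

def increment(times):
    global counter
    for _ in range(times):
        # The 'with lock' block is the critical section: mutual exclusion
        # guarantees the read-modify-write below is never interleaved.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # always 400000 with the lock; may be less without it
```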
6) Difference between paging and segmentation?
Paging
Concept: Paging divides physical memory into fixed-size blocks called frames and divides each process's logical memory into blocks of the same size called pages. When a process needs memory, its pages are loaded into free frames, which may be scattered throughout physical memory.
Fixed Size: Pages and frames are of the same size, which is usually a power of 2 (e.g., 4 KB). This uniformity simplifies memory allocation and management.
Address Translation: The system uses a page table to map virtual addresses to physical addresses. Each entry in the page table holds the base address of a page in physical memory.
Fragmentation: Paging eliminates external fragmentation (unused memory between allocated blocks) but can suffer from internal fragmentation if a page is not fully used.
Example: If a program requires 10 KB of memory and the page size is 4 KB, it will use 3 pages (12 KB), wasting 2 KB of the last page through internal fragmentation.
Segmentation
Concept: Segmentation divides the memory into variable-sized segments based on the logical divisions of a program, such as functions, arrays, or data structures. Each segment represents a different logical unit of the program.
Variable Size: Segments can vary in size depending on the needs of the program or the logical division. This allows more flexible memory allocation.
Address Translation: Segmentation uses a segment table where each entry contains the base address and length of a segment. A logical address is typically represented by a segment number and an offset within that segment.
Fragmentation: Segmentation can suffer from external fragmentation (unused memory between segments) but avoids internal fragmentation by allocating memory based on the segment’s size.
Example: If a program has a data segment of 20 KB and a code segment of 15 KB, each segment is allocated exactly its size. Because allocations are exact, there is no internal fragmentation, but the variable-sized holes left between segments over time cause external fragmentation.
Summary
Paging: Uses fixed-size pages and frames, simplifies memory management, eliminates external fragmentation, but can suffer from internal fragmentation.
Segmentation: Uses variable-sized segments, aligns with logical divisions of a program, avoids internal fragmentation, but can experience external fragmentation.
Both techniques aim to manage memory efficiently, but they approach the problem in different ways and can be used in conjunction in some systems to leverage the strengths of both.
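A minimal Python sketch of the paging arithmetic, with a made-up page table and a 4 KB page size: the virtual address splits into a page number (high bits) and an offset (low bits), and the page table supplies the frame.

```python
# Toy paged address translation. The page table below is invented for
# illustration: it maps virtual page numbers to physical frame numbers.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 9}   # virtual page -> physical frame

def translate(virtual_address):
    page = virtual_address // PAGE_SIZE   # high bits: page number
    offset = virtual_address % PAGE_SIZE  # low bits: offset within page
    if page not in page_table:
        raise RuntimeError("page fault: page not resident")
    return page_table[page] * PAGE_SIZE + offset

print(hex(translate(0x1ABC)))  # page 1, offset 0xABC -> frame 2 -> 0x2ABC
```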
7) Virtual memory?
Virtual Memory is a memory management technique that creates the illusion of a large, contiguous block of memory for each process, even if the physical memory (RAM) is limited or fragmented. It allows a computer to use disk storage to extend the apparent amount of available memory. Here’s an overview of how it works and its benefits:
Key Concepts
Address Space: Virtual memory provides each process with its own address space, which is independent of the physical memory. This means that each process believes it has access to a large, contiguous block of memory.
Paging and Segmentation: Virtual memory is often implemented using paging or segmentation or a combination of both. These methods divide the virtual address space into smaller chunks (pages or segments) and map them to physical memory or disk storage.
Page Faults: When a process tries to access a page that is not currently in physical memory, a page fault occurs. The operating system then loads the required page from the disk into physical memory, possibly swapping out another page to make space.
Swap Space: Disk space used to store pages or segments that are not currently in physical memory is called swap space. This space acts as an extension of RAM and helps manage processes that exceed the available physical memory.
Translation Lookaside Buffer (TLB): A hardware cache that stores recent translations of virtual addresses to physical addresses to speed up the address translation process.
Benefits
Increased Memory Capacity: Virtual memory allows systems to run larger applications or more applications concurrently than would be possible with physical memory alone.
Isolation: Each process operates in its own virtual address space, which helps prevent processes from interfering with each other and enhances system stability and security.
Simplified Memory Management: The operating system can manage memory more flexibly, allocating and deallocating memory as needed, without being constrained by physical memory limitations.
Efficient Use of Physical Memory: Virtual memory allows the system to use physical memory more efficiently by swapping less frequently used pages to disk, freeing up RAM for more active processes.
Process Migration: Virtual memory makes it easier to move a process's pages between physical memory and disk storage, which supports features such as process swapping and system hibernation.
Example
Suppose you have 8 GB of physical RAM and a system that uses virtual memory. You might have a process that requires 16 GB of memory. With virtual memory, the system can handle this by keeping the most frequently used pages in RAM and storing the rest on disk. When the process accesses a page that is not in RAM, a page fault occurs, and the operating system loads the page from the disk into RAM, possibly swapping out another page if necessary.
Summary
Virtual memory is a crucial technique in modern operating systems that enhances the capability and flexibility of memory management. By creating an abstract layer of memory, it enables systems to handle more complex and memory-intensive applications than would be possible with only physical memory.
8) What is IPC? What are the different IPC mechanisms?
IPC (Interprocess Communication) is the set of mechanisms an OS provides so that separate processes or threads, in one or more programs, can exchange data and coordinate with one another — for example, through shared memory or message passing. Because processes run in isolated address spaces, all such communication happens with the mediation and approval of the OS.
Different IPC Mechanisms:
Pipes
Message Queuing
Semaphores
Sockets
Shared Memory
Signals
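As a small illustration, here is a hedged Python sketch of message-passing IPC using multiprocessing.Queue; the message text is arbitrary. Parent and child have separate address spaces, and the queue carries data between them with the OS mediating the transfer.

```python
from multiprocessing import Process, Queue

def producer(q):
    # Runs in a separate process with its own address space.
    q.put("hello from the child process")

if __name__ == "__main__":
    q = Queue()
    child = Process(target=producer, args=(q,))
    child.start()
    print(q.get())   # parent receives the message
    child.join()
```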
9) What are the benefits of a multiprocessor system?
A multiprocessor system is a type of system that includes two or more CPUs, usually sharing a single main memory. It can execute different computer programs truly at the same time.
Benefits:
Such systems are widely used to improve performance when running multiple programs concurrently.
By increasing the number of processors, a greater number of tasks can be completed in unit time, giving a considerable increase in throughput.
They are cost-effective, since all processors share the same memory, buses, and peripherals rather than duplicating them across separate machines.
They improve the reliability of the computer system: if one processor fails, the remaining processors can keep the system running (graceful degradation).
10) What is the RAID structure in an OS? What are the different levels of RAID configuration?
RAID (Redundant Array of Independent Disks) is a data storage virtualization technology that combines multiple physical disks into a single logical unit. It balances data protection, system performance, and storage capacity, improving the overall performance and reliability of data storage; its main purpose is data redundancy to reduce data loss. Commonly used levels are: RAID 0 (striping for performance, no redundancy), RAID 1 (mirroring), RAID 5 (block striping with distributed parity, tolerating one disk failure), RAID 6 (dual parity, tolerating two disk failures), and RAID 10 (striped mirrors, combining RAID 1 and RAID 0).
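To make the parity idea concrete, here is a minimal Python sketch of RAID-5-style recovery, with made-up block contents: the parity block is the XOR of the data blocks in a stripe, so any single lost block can be rebuilt by XOR-ing the survivors.

```python
def xor_blocks(a, b):
    # Byte-wise XOR of two equally sized blocks.
    return bytes(x ^ y for x, y in zip(a, b))

disk1 = b"AAAA"
disk2 = b"BBBB"
parity = xor_blocks(disk1, disk2)            # written to a third disk

recovered_disk1 = xor_blocks(parity, disk2)  # disk1 failed: rebuild it
assert recovered_disk1 == disk1
print("rebuilt:", recovered_disk1)
```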
11) What is a pipe and when is it used?
A pipe is a connection between two related processes and a mechanism for inter-process communication (IPC) based on message passing: the output of one process becomes the input of another. An ordinary pipe is unidirectional, so it is used when two processes want to communicate one-way; two pipes are needed for two-way communication.
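A minimal sketch of a one-way pipe between a parent and its child, assuming a Unix-like system (os.fork is not available on Windows):

```python
import os

# Classic unidirectional pipe between related processes.
read_fd, write_fd = os.pipe()
pid = os.fork()

if pid == 0:                     # child: writes into the pipe
    os.close(read_fd)            # close the end it does not use
    os.write(write_fd, b"data flows one way through the pipe")
    os._exit(0)
else:                            # parent: reads from the pipe
    os.close(write_fd)
    print(os.read(read_fd, 1024).decode())
    os.waitpid(pid, 0)           # reap the child
```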
12) What is a bootstrap program in an OS?
It is the program that initializes the OS during startup — the first code executed whenever a computer system starts. Loading the OS through this bootstrap program is known as booting, and the OS depends entirely on it to come up correctly. It is stored in the boot blocks at a fixed location on the disk. It locates the kernel, loads it into main memory, and starts its execution.
13) Why is the operating system important?
The OS is the most essential and vital part of a computer; without it, the machine is unusable. It acts as the interface, or link, between users, the software installed on the machine, and the hardware, and it manages and balances hardware resources such as the CPU, memory, and devices. It also provides services to users, offers a platform for programs to run on, and performs the common tasks that applications require.
14) What do you mean by overlays in an OS?
Overlaying is a programming technique that divides a program into pieces so that only the instructions and data needed at a given time are kept in memory. It needs no special support from the OS, and it allows programs bigger than physical memory to run by loading each overlay only when it is required.
15) What is thrashing in an OS?
Thrashing is a situation in which the CPU spends more time on swapping and paging activity than on productive work. It occurs when a process does not have enough frames, so its page-fault rate soars. A system can detect thrashing by evaluating the level of CPU utilization; once thrashing sets in, application-level progress stalls and overall performance degrades or collapses.
16) Multitasking vs. Multiprocessing
| Multitasking | Multiprocessing |
| --- | --- |
| It performs more than one task at a time using a single processor. | It performs more than one task at a time using multiple processors. |
| The number of CPUs is one. | The number of CPUs is more than one. |
| It is more economical. | It is less economical. |
| It is less efficient than multiprocessing. | It is more efficient than multitasking. |
| It allows fast switching among various tasks. | It allows smooth processing of multiple tasks at once. |
| It requires more time to execute tasks than multiprocessing. | It requires less time for job processing than multitasking. |
17) What do you mean by sockets in an OS?
A socket is an endpoint for IPC (Interprocess Communication), identified by the combination of an IP address and a port number. Sockets make it easy for software developers to create network-enabled programs, and they allow two processes on the same machine or on different machines to exchange information. They are mostly used in client-server systems.
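A minimal Python sketch of a client and server exchanging one message over a local socket; the loopback address and port 50007 are arbitrary choices:

```python
import socket
import threading

# Server endpoint = IP address + port number.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 50007))
srv.listen(1)

def serve():
    conn, _addr = srv.accept()        # wait for one client
    with conn:
        conn.sendall(b"hello from the server")

t = threading.Thread(target=serve)
t.start()

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", 50007))
    print(cli.recv(1024).decode())    # -> hello from the server

t.join()
srv.close()
```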
18) Explain a zombie process.
A zombie process, also referred to as a defunct process, is a process that has terminated but whose process control block has not yet been cleaned up, because it still has an entry in the process table waiting to report its exit status to its parent (via wait()). It consumes almost no resources and is dead, yet it still exists — and its lingering process-table entry is a slot that has not been freed.
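A minimal sketch that produces a short-lived zombie, assuming a Unix-like system: the child exits immediately, but the parent delays reaping it.

```python
import os
import time

pid = os.fork()
if pid == 0:
    os._exit(0)              # child terminates right away
else:
    time.sleep(5)            # during these 5 s the child is a zombie
                             # (shown as <defunct> by `ps`)
    os.waitpid(pid, 0)       # reaping removes the process-table entry
    print("child reaped; zombie gone")
```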
19) What are starvation and aging in an OS?
Starvation can occur under algorithms such as Priority Scheduling or Shortest Job First, which are commonly used in CPU schedulers.
Starvation: A problem that occurs when a process is unable to obtain the resources it needs to make progress for a long period of time. Low-priority processes get blocked while high-priority processes keep proceeding to completion, so the low-priority processes are starved of resources.
Aging: A technique used to overcome starvation. It gradually increases the priority of processes that have been waiting in the system for resources for a long time, adding an aging factor to each pending request. This ensures that even jobs in low-priority queues eventually complete their execution, as sketched below.
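A toy Python sketch of aging, with made-up job names and priority numbers (lower number = higher priority): each tick, every waiting job's priority improves, so the low-priority batch job cannot starve forever.

```python
# Toy aging: the scheduler runs the best-priority job each tick and
# ages (boosts) every waiter by one step.
jobs = {"batch_job": 10, "interactive_1": 1, "interactive_2": 1}

for tick in range(12):
    running = min(jobs, key=jobs.get)          # pick the best priority
    print(f"tick {tick}: running {running} (priority {jobs[running]})")
    for name in jobs:
        if name != running:
            jobs[name] = max(0, jobs[name] - 1)  # aging: waiters improve
```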
20) What is the difference between paging and segmentation?
Paging: A memory management technique that lets the OS retrieve processes from secondary storage into main memory. It is a non-contiguous allocation technique that divides each process into fixed-size pages.
Segmentation: A memory management technique that divides a process into modules and parts of different sizes. These parts are known as segments, which are allocated to the process.
21) What is virtual memory?
It is a memory management feature of the OS that creates the illusion of a very large main memory for users. Programs are stored in it in the form of pages. It extends the effective use of physical memory by using disk space and also provides memory protection. The OS commonly manages it in two ways: paging and segmentation. It acts as temporary storage used alongside RAM for computer processes.
22) Semaphore vs. Mutex
Semaphore: Suitable for scenarios where you need to limit the number of threads accessing a resource simultaneously, like controlling access to a pool of database connections.
Mutex (Mutual Exclusion): Ideal for protecting a critical section of code where only one thread should be executing at a time, ensuring mutual exclusion.
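A minimal Python sketch of both primitives side by side: the semaphore caps concurrency at three (like a pool of three database connections), while the mutex allows exactly one thread at a time into a critical section.

```python
import threading
import time

pool_slots = threading.Semaphore(3)   # at most 3 threads in the "pool" at once
mutex = threading.Lock()              # at most 1 thread in the critical section

def use_connection(i):
    with pool_slots:                  # semaphore: limits concurrency to 3
        with mutex:                   # mutex: one-at-a-time critical section
            print(f"thread {i} holds one of the 3 connection slots")
        time.sleep(0.1)               # pretend to use the connection

threads = [threading.Thread(target=use_connection, args=(i,)) for i in range(6)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```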
23) What is a kernel, and what are its main functions?
The kernel is a computer program that forms the central component, or core module, of the OS. It is responsible for handling, managing, and controlling all operations of the computer system and its hardware. Whenever the system starts, the kernel is loaded first and remains in main memory. It acts as the interface between user applications and the hardware. Its main functions are: process management and CPU scheduling; memory management; device management through drivers; and handling system calls and interrupts.
24) What is Context Switching?
Context switching is the process of saving the context (state) of one process and loading the context of another. It is a cost-effective and time-saving measure executed by the CPU because it allows multiple processes to share a single CPU; it is therefore an essential part of a modern OS. The OS uses this technique to switch a process from one state to another, e.g., from the running state to the ready state. It allows a single CPU to handle and control many different processes or threads without the need for additional hardware.
25) What is the difference between a process and a thread?
Process: A program that is currently under execution by one or more threads. It is a fundamental part of the modern-day OS.
Thread: A path of execution within a process, composed of a program counter, a thread ID, a stack, and a set of registers.
| Process | Thread |
| --- | --- |
| A computer program under execution. | The smallest execution unit; a component of a process. |
| Processes are heavyweight. | Threads are lightweight. |
| It has its own memory space. | It uses the memory of the process it belongs to. |
| Creating a process is harder than creating a thread. | Creating a thread is easier than creating a process. |
| It requires more resources than a thread. | It requires fewer resources than a process. |
| It takes more time to create and terminate than a thread. | It takes less time to create and terminate than a process. |
| Processes usually run in separate memory spaces. | Threads usually run in a shared memory space. |
| Processes do not share data with each other by default. | Threads share data with each other. |
| It can be divided into multiple threads. | It cannot be further subdivided. |
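A minimal Python sketch of the memory-sharing difference: a thread's write to a global variable is visible to its process, while a child process works on its own copy (or a fresh import, depending on the platform's start method), so its write is not.

```python
import threading
from multiprocessing import Process

value = 0

def bump():
    global value
    value += 1

if __name__ == "__main__":
    t = threading.Thread(target=bump)
    t.start(); t.join()
    print("after thread:", value)    # 1 - threads share the process memory

    p = Process(target=bump)
    p.start(); p.join()
    print("after process:", value)   # still 1 - the child had its own copy
```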
26) What is a deadlock in an OS? What are the necessary conditions for a deadlock?
A deadlock is a situation where a set of processes is blocked because each process holds some resources while waiting to acquire resources held by another process. Two or more processes each wait for the other to finish, so none of them ever proceeds: the program appears to hang whenever a deadlock occurs. It is one of the common problems in multiprocessing.
Necessary Conditions for Deadlock
There are basically four necessary conditions for deadlock as given below:
Mutual Exclusion
Hold and Wait
No Pre-emption
Circular Wait
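A minimal Python sketch showing circular wait, with two threads acquiring two locks in opposite orders; acquire timeouts are used only so the demo terminates instead of hanging forever.

```python
import threading
import time

lock_a, lock_b = threading.Lock(), threading.Lock()

# Classic circular wait: each thread holds one lock and wants the other.
def worker(first, second, name):
    with first:
        time.sleep(0.1)              # give the other thread time to grab its lock
        if second.acquire(timeout=1):
            second.release()
            print(f"{name}: finished")
        else:
            print(f"{name}: would deadlock here (circular wait)")

t1 = threading.Thread(target=worker, args=(lock_a, lock_b, "T1"))
t2 = threading.Thread(target=worker, args=(lock_b, lock_a, "T2"))
t1.start(); t2.start()
t1.join(); t2.join()
```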
27) What do you mean by Belady's Anomaly?
In an operating system, process data is loaded in fixed-sized chunks, each referred to as a page, and memory is likewise divided into fixed-sized chunks called frames, into which pages are loaded. Belady's Anomaly is the phenomenon in which increasing the number of frames in memory can increase the number of page faults. It is typically experienced with the FIFO (First In, First Out) page replacement algorithm.
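The anomaly can be reproduced with a short FIFO simulation on the classic reference string; here is a minimal Python sketch:

```python
# FIFO page replacement on the classic reference string that exhibits
# Belady's Anomaly: 3 frames -> 9 faults, but 4 frames -> 10 faults.
def fifo_faults(reference, frames):
    memory, faults = [], 0
    for page in reference:
        if page not in memory:
            faults += 1
            if len(memory) == frames:
                memory.pop(0)        # evict the oldest page (FIFO)
            memory.append(page)
    return faults

ref = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(ref, 3))  # 9
print(fifo_faults(ref, 4))  # 10  <- more frames, yet more faults
```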
28) What is spooling in an OS?
Spooling stands for Simultaneous Peripheral Operations OnLine. It means putting the data of various I/O jobs in a buffer — a special area in memory or on disk accessible to an I/O device. It mediates between a computer application and a slow peripheral, which matters because devices produce and consume data at different rates. Spooling uses the disk as a very large buffer and can overlap the I/O operations of one task with the processor operations of another.
(In essence, it is a form of buffering.)
29) What is the Banker's algorithm used for?
Avoiding deadlocks. Before granting a resource request, the OS runs a safety check: the request is granted only if the system stays in a safe state, i.e., some ordering still lets every process obtain its maximum resource need and finish, as sketched below.
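Below is a minimal Python sketch of the safety check at the heart of the Banker's algorithm; the Available/Max/Allocation numbers are illustrative example matrices, not from any particular system.

```python
# Safety check from the Banker's algorithm: the state is safe only if
# some ordering lets every process finish with the resources available.
def is_safe(available, max_need, allocation):
    work = available[:]
    need = [[m - a for m, a in zip(mrow, arow)]
            for mrow, arow in zip(max_need, allocation)]
    finished = [False] * len(allocation)
    safe_order = []
    progress = True
    while progress:
        progress = False
        for i, done in enumerate(finished):
            if not done and all(n <= w for n, w in zip(need[i], work)):
                # Pretend process i runs to completion and releases everything.
                work = [w + a for w, a in zip(work, allocation[i])]
                finished[i] = True
                safe_order.append(f"P{i}")
                progress = True
    return all(finished), safe_order

available = [3, 3, 2]
max_need = [[7, 5, 3], [3, 2, 2], [9, 0, 2], [2, 2, 2], [4, 3, 3]]
allocation = [[0, 1, 0], [2, 0, 0], [3, 0, 2], [2, 1, 1], [0, 0, 2]]
print(is_safe(available, max_need, allocation))
# -> (True, ['P1', 'P3', 'P4', 'P0', 'P2'])
```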