1. What is a process?

A process can be thought of as a program in execution. A process needs certain resources, such as CPU time, memory, files, and I/O devices, to accomplish its task. These resources are allocated to the process either when it is created or while it is executing.

A process is the unit of work in most systems. A system consists of a collection of processes: operating-system processes execute system code, and user processes execute user code. All these processes may execute concurrently.

Informally, as mentioned above, a process is a program in execution. A process is more than the program code, which is sometimes known as the text section. It also includes the current activity, as represented by the value of the program counter and the contents of the processor’s registers. A process generally also includes the process stack, which contains temporary data (such as function parameters, return addresses, and local variables), and a data section, which contains global variables. A process may also include a heap, which is memory that is dynamically allocated during process run time.

We have to keep in mind that a program by itself is not a process. A program is a passive entity, such as a file containing a list of instructions stored on disk. In contrast, a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

A program becomes a process when an executable file is loaded into memory. Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences.

 

2. What are the states of a process?

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. A process may be in one of the following states:

1. New: the process is being created.

2. Running: instructions are being executed.

3. Waiting: the process is waiting for some event to occur (such as an I/O completion).

4. Ready: the process is waiting to be assigned to a processor.

5. Terminated: the process has finished execution.

 

3. What is Process control block (PCB)?

Each process is represented in the operating system by a process control block (PCB) --- also called a task control block. It contains many pieces of information associated with a specific process, including these:

Process state:

The state may be new, ready, running, waiting, halted, and so on.

Program counter:

The counter indicates the address of the next instruction to be executed for this process.

CPU registers:

The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward.

CPU-scheduling information:

This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters.

Memory-management information:

This information may include such items as the value of the base and limit registers and the page tables, or the segment tables, depending on the memory system used by the operating system.
Accounting information:

This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

I/O status information:

This information includes the list of I/O devices allocated to the process, a list of open files, and so on.

In brief, the PCB simply serves as the repository for any information that may vary from process to process.

 

4. What is a thread?

A thread is the smallest executable unit of a process. For example, when you run a notepad program, the operating system creates a process and starts executing the main thread of that process.

A process can have multiple threads. Each thread has its own task and its own path of execution within the process. For example, in a notepad program, one thread may take user input while another prints a document.

All threads of the same process share the memory of that process. Because they share the same memory, communication between threads is fast.


 

5. Explain the differences between a process and a thread.

A process is an independent program in execution with its own address space, while a thread is the smallest executable unit within a process. All threads of a process share that process's memory (code, data, and heap), but each thread has its own stack, program counter, and register set. Because of this sharing, creating threads and switching between them is cheaper than creating and switching processes, and threads can communicate directly through shared memory, whereas processes need an inter-process communication mechanism.


6. What is context switching?

Switching the CPU to another process requires performing a state save of the current process and a state restore of a different process. This task is known as a context switch. When a context switch occurs, the kernel saves the context of the old process in its PCB (Process Control Block) and loads the saved context of the new process scheduled to run. Context-switch time is pure overhead, because the system does no useful work while switching. Switching speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions. A typical context switch takes a few microseconds.

 

7. Explain inter-process communication (IPC). What are the reasons for process communication or co-operation?

Processes executing concurrently in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Any process that does not share data with any other process is independent. A process is cooperating if it can affect or be affected by the other processes executing in the system. Therefore, we can say that any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:

Information sharing:

Since several users may be interested in the same piece of information, we must provide an environment that allows concurrent access to such information.

Computation speedup:

If we want a particular task to run faster, we must break it into subtasks, each of which executes in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing cores.

Modularity:

We may want to construct the system in a modular fashion, dividing the system functions into separate processes or threads.

Convenience:

Even an individual user may work on many tasks at the same time. For instance, a user may be editing, listening to music, and compiling in parallel.

 

8. What are shared memory and message passing?

Cooperating processes require an inter-process communication (IPC) mechanism that will allow them to exchange data and information. There are two fundamental models of inter-process communication: shared memory and message passing.

In the shared memory model, a region of memory that is shared by cooperating processes is established. Processes can then exchange information by reading and writing data to the shared region. In the message-passing model, communication takes place by means of messages exchanged between the cooperating processes.

 

9. What do you mean by Absolute Code and Re-locatable Code?

Usually a program resides on a disk as a binary executable file. To be executed, the program must be brought into memory and placed within a process. Depending on the memory management in use, the process may be moved between disk and memory during its execution. Most systems allow a user process to reside in any part of the physical memory.

In most cases, a user program goes through several steps, some of which may be optional, before being executed. Addresses may be represented in different ways during these steps.

Addresses in the source program are generally symbolic. A compiler typically binds these symbolic addresses to re-locatable addresses. The loader in turn binds the re-locatable addresses to absolute addresses. Each binding is a mapping from one address space to another.

If it is known at compile time where the process will reside in memory, then absolute code can be generated. For example, if it is known that a user process will reside starting at location R, then the generated compiler code will start at that location and extend up from there. If, at some later time, the starting location changes, it will be necessary to recompile the code.

If it is not known at compile time where the process will reside in memory, then the compiler must generate re-locatable code. In this case, final binding is delayed until load time. If the starting address changes, we need only reload the user code to incorporate this changed value.

 

10. What is Logical Address (Virtual Address) and what is Physical Address?

An address generated by the CPU is commonly referred to as a logical address, whereas an address seen by the memory unit (that is, the one loaded into the memory-address register of the memory) is commonly referred to as a physical address.

The compile-time and load-time address-binding methods generate identical logical and physical addresses. However, the execution-time address-binding scheme results in differing logical and physical addresses. In this case, we usually refer to the logical address as a virtual address.

The set of all logical addresses generated by a program is a logical address space. The set of all physical addresses corresponding to these logical addresses is a physical address space.

The run-time mapping from virtual to physical address is done by a hardware device called the memory-management unit (MMU).

 

11. What is Dynamic loading?

To obtain better memory space utilization, we can use dynamic loading. With dynamic loading, a routine is not loaded until it is called. All routines are kept on disk in a re-locatable load format. The main program is loaded into memory and is executed.

The advantage of dynamic loading is that a routine is loaded only when it is needed. This method is particularly useful when large amounts of code are needed to handle infrequently occurring cases, such as error routines. In this case, although the total program size may be large, the portion that is used may be much smaller.

 

12. What is Swapping?

Swapping is a mechanism in which a process can be swapped temporarily out of main memory (or move) to secondary storage (disk) and make that memory available to other processes. At some later time, the system swaps back the process from the secondary storage to main memory.

Although swapping usually affects performance, it makes it possible to run multiple large processes in parallel; for this reason, swapping is also known as a technique for memory compaction.

A process must be in the main memory to be executed. A process, however, can be swapped temporarily out of memory to a backing store and then brought back into memory to continue execution. Swapping makes it possible for the total physical address space of all processes to exceed the real physical memory of the system, thus increasing the degree of multiprogramming in a system.

 

13. What do you mean by Contiguous Memory Allocation?

The memory is divided into two partitions: one for the operating system and another for the user processes. The operating system is placed in either low or high memory, depending on the location of the interrupt vector. In contiguous memory allocation, each process is contained in a single contiguous section of memory. Contiguous memory allocation is a classical memory allocation model that assigns a process consecutive memory blocks (that is, memory blocks having consecutive addresses).

Memory Allocation

There are two methods: the multiple-partition method and the variable-partition method. In the multiple-partition method, when a partition is free, a process is selected from the input queue and loaded into the free partition. When the process terminates, the partition becomes available for another process.

In the variable partition scheme, the OS keeps a table indicating which parts of memory are available and which are occupied. Initially, all memory is available for user processes and is considered one large block of available memory, a hole.

The memory blocks available comprise a set of holes of various sizes scattered throughout memory. When a process arrives and needs memory, the system searches the set for a hole that is large enough for this process. If the hole is too large, it is split into two parts: one part is allocated to the arriving process, and the other is returned to the set of holes. When a process terminates, it releases its block of memory, which is then placed back in the set of holes. If the new hole is adjacent to other holes, these adjacent holes are merged to form one larger hole. At this point, the system may need to check whether there are processes waiting for memory and whether this newly freed and recombined memory could satisfy the demands of any of these waiting processes.

This procedure is a particular instance of the general dynamic storage-allocation problem, which concerns how to satisfy a request of size n from a list of free holes. The first-fit, best-fit and worst-fit strategies are the ones most commonly used to select a free hole from the set of available holes.

1. First fit:

The first hole that is large enough is allocated. Searching for a hole starts either from the beginning of the set of holes or from where the previous first-fit search ended.

2. Best fit:

The smallest hole that is big enough to accommodate the incoming process is allocated. If the available holes are ordered by size, the search can be reduced.

3. Worst fit:

The largest of the available holes is allocated.

 

14. Explain swapping on iOS/Android.

Although most systems for PCs and servers support some modified version of swapping, mobile systems typically do not support swapping in any form. Mobile devices generally use flash memory rather than more spacious hard disks as their persistent storage.

Instead of swapping, when free memory falls below a certain threshold, Apple’s iOS asks applications to voluntarily relinquish allocated memory. Read-only data (such as code) are removed from the system and later reloaded from flash memory if necessary. Data that have been modified are never removed. However, any applications that fail to free up sufficient memory may be terminated by the operating system.

Android does not support swapping and adopts a strategy similar to that used by iOS. It may terminate a process if insufficient free memory is available. However, before terminating a process, Android writes its application state to flash memory so that it can be quickly restarted.

Admin Team

