

1. What is fragmentation? Explain external and internal fragmentation.

The main disadvantage of contiguous memory allocation is fragmentation, which comes in two forms: internal fragmentation and external fragmentation.

Internal fragmentation:

When memory inside an allocated partition is free but cannot be used, that unused memory is called an internal fragment. For example, suppose a hole of 18,464 bytes is available and the process to be loaded needs 18,462 bytes. If the entire hole is allocated to the process, two bytes are left over and unused; those two bytes form internal fragmentation. The worst part is that the overhead of tracking a two-byte hole as a separate free block would be more than two bytes, so the whole hole is given to the process anyway.
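The arithmetic in the example above can be sketched in a few lines; the sizes are the ones used in the paragraph, not from any real allocator.

```python
# Internal fragmentation example, using the sizes from the text above.
hole_size = 18464      # bytes available in the hole
process_size = 18462   # bytes the process actually needs

# If the entire hole is allocated to the process, the leftover bytes
# sit inside the partition and cannot be used by anyone else.
internal_fragment = hole_size - process_size
print(internal_fragment)  # 2 bytes of internal fragmentation
```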

External fragmentation:

The dynamic storage allocation methods discussed above all suffer from external fragmentation. When the total free memory, summed over all the scattered holes, is sufficient to satisfy a request but no single contiguous hole is large enough, the wasted space is called external fragmentation.

In short: external fragmentation means total memory space exists to satisfy a request but is not contiguous; internal fragmentation means allocated memory is slightly larger than the requested memory, and this difference is memory internal to a partition that goes unused.


2. What is compaction?

The solution to external fragmentation is compaction. Compaction is a method by which all the scattered free memory is placed together in one large block. Note that compaction cannot be done if relocation is performed at compile time or assembly time; it is possible only with dynamic relocation, that is, relocation at execution time.

One more solution to external fragmentation is to have the logical address space and physical address space to be non-contiguous. Paging and Segmentation are popular non-contiguous allocation methods.

Reducing external fragmentation by compaction:

  • Shuffle memory contents to place all free memory together in one large block.
  • Compaction is possible only if relocation is dynamic and is done at execution time.
  • The I/O problem: a job performing I/O directly into its memory cannot be moved. Two remedies:
  • Latch the job in memory while it is involved in I/O.
  • Do I/O only into OS buffers.
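The shuffling step above can be sketched with a toy memory model: memory is a list of (owner, size) blocks, where owner None marks a free hole, and compaction slides all allocated blocks together and merges the holes into one large block. The block names and sizes are made up for illustration.

```python
# A minimal sketch of compaction over a toy block list.
# Each block is (owner, size); owner None means a free hole.
def compact(blocks):
    used = [b for b in blocks if b[0] is not None]       # keep allocated blocks, in order
    free_total = sum(size for owner, size in blocks
                     if owner is None)                    # merge all scattered holes
    return used + ([(None, free_total)] if free_total else [])

memory = [("A", 100), (None, 50), ("B", 200), (None, 30), ("C", 60)]
print(compact(memory))
# [('A', 100), ('B', 200), ('C', 60), (None, 80)]
```

After compaction, the two holes of 50 and 30 bytes become one hole of 80 bytes, which can now satisfy a request of up to 80 contiguous bytes.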


3. What is Segmentation ?

Segmentation is a memory management scheme that supports the programmer's view of memory: each job is divided into several segments of different sizes, one for each module, containing pieces that perform related functions. A logical address space is a collection of segments. Each segment has a name and a length, and an address specifies both the segment name (or number) and the offset within the segment.

A program's segments might contain its main function, utility functions, data structures, and so on.
Normally, when a program is compiled, the compiler automatically constructs segments reflecting the input program.

A compiler compiling a C program might create separate segments for the following:

1. The code
2. Global variables
3. The heap, from which memory is allocated
4. The stacks used by each thread
5. The standard C library

Libraries that are linked in during compile time might be assigned separate segments. The loader would take all these segments and assign them segment numbers.

The operating system maintains a segment map table for every process and a list of free memory blocks along with segment numbers, their size and corresponding memory locations in main memory. For each segment, the table stores the starting address of the segment and the length of the segment. A reference to a memory location includes a value that identifies a segment and an offset.
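The segment-table lookup described above can be sketched as follows. The table maps each segment number to a (base, limit) pair; the base values and limits here are made-up examples, and a real MMU would perform this check in hardware.

```python
# A sketch of segment-table address translation.
# segment number -> (base address, segment length); values are illustrative.
segment_table = {0: (1400, 1000),   # e.g. code segment
                 1: (6300, 400),    # e.g. data segment
                 2: (4300, 1100)}   # e.g. stack segment

def translate(segment, offset):
    base, limit = segment_table[segment]
    if offset >= limit:              # offset beyond segment length -> trap
        raise MemoryError("segment-limit violation")
    return base + offset             # physical address

print(translate(2, 53))   # base 4300 + offset 53 = 4353
```

The limit check is what gives segmentation its protection: a reference past the end of a segment is caught before it touches memory belonging to another segment.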


4. What are the advantages and disadvantages of Segmentation ?


Advantages:

  • No internal fragmentation.
  • May save memory if segments are very small and should not be combined into one page.
  • Segment tables need only one entry per actual segment, as opposed to one per page in paged virtual memory.
  • Average segment size >> average page size, so there is less table overhead.

Disadvantages:

  • External fragmentation.
  • Costly memory management algorithms: segmentation must find a free memory area big enough, whereas paging just keeps a list of free pages and any page will do.
  • Segments of unequal size are not as well suited for swapping.


5. What is PAGING ?

Paging is another memory management scheme which avoids external fragmentation and the need for compaction. Paging is implemented through cooperation between the operating system and the computer hardware.

Basic method of paging:

The basic method for implementing paging involves breaking physical memory into fixed-sized blocks called frames and breaking logical memory into blocks of the same size called pages. When a process is to be executed, its pages are loaded into any available memory frames from a file system or the backing store. The backing store is divided into fixed-sized blocks that are the same size as the memory frames or clusters of multiple frames.

Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d). The page number is used as an index into a page table. The page table contains the base address of each page in physical memory. This base address is combined with the page offset to define the physical memory address that is sent to the memory unit.
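The page-number/offset split described above can be sketched in code. With a power-of-two page size the split is just a divide and modulo (equivalently, a shift and mask); the 4 KB page size and the page-table contents here are assumed for illustration.

```python
# A sketch of paged address translation with a 4 KB page size.
PAGE_SIZE = 4096
page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number (illustrative)

def page_translate(logical_address):
    p = logical_address // PAGE_SIZE   # page number: high-order bits
    d = logical_address % PAGE_SIZE    # page offset: low-order bits
    frame = page_table[p]              # base frame from the page table
    return frame * PAGE_SIZE + d       # physical address

print(page_translate(4100))  # page 1, offset 4 -> frame 2 -> 8196
```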


6. What are the advantages and disadvantages of paging ?


Advantages:

  • Allocating memory is easy and cheap: any free page will do, so the OS can take the first one from the free list it keeps.
  • Eliminates external fragmentation: data (page frames) can be scattered all over physical memory, since pages are mapped appropriately anyway.
  • Allows demand paging and pre-paging.
  • More efficient swapping: no fragmentation considerations; just swap out the page least likely to be used.

Disadvantages:

  • Longer memory access times (page table lookup); can be improved using a TLB, guarded page tables, or inverted page tables.
  • Memory requirements (one entry per virtual page); can be reduced using multilevel page tables, variable page sizes (super-pages), guarded page tables, or a Page Table Length Register (PTLR) to limit virtual memory size.
  • Internal fragmentation.
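The "one entry per virtual page" memory cost listed above can be made concrete with a back-of-the-envelope calculation, assuming a 32-bit virtual address space, 4 KB pages, and 4-byte page-table entries (typical textbook numbers, not tied to any particular machine).

```python
# Rough cost of a flat (single-level) page table, per process.
virtual_space = 2 ** 32          # 4 GiB virtual address space (assumed)
page_size = 4096                 # 4 KB pages (assumed)
entry_size = 4                   # 4 bytes per page-table entry (assumed)

entries = virtual_space // page_size    # one entry per virtual page
table_bytes = entries * entry_size      # total table size per process
print(entries, table_bytes)             # 1048576 entries, 4 MiB per process
```

This is exactly why multilevel page tables help: most processes use only a small slice of the 4 GiB space, and a multilevel layout avoids allocating table entries for the unused regions.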


7. What is Translation look-aside buffer (TLB)?

TLB is a special, small, fast-lookup hardware cache. It is associative, high-speed memory.

A Translation Look-aside Buffer (TLB) is a special cache used to speed up address translation: it holds the page table entries that have been used most recently. Given a virtual address, the processor first examines the TLB. If the entry is present (a TLB hit), the frame number is retrieved and the real address is formed immediately. If the entry is not found (a TLB miss), the page number is used to index the process's page table. If the page table shows the page is already in main memory, the translation proceeds and the TLB is updated to include the new entry; if the page is not in main memory, a page fault is issued.

Each entry in the TLB consists of two parts: a key (or tag) and a value. When the associative memory is presented with an item, the item is compared with all keys simultaneously. If the item is found, the corresponding value field is returned. The search is fast.

The percentage of times that the page number of interest is found in the TLB is called the hit ratio. An 80% hit ratio, for example, means that we find the desired page number in the TLB 80 percent of the time.
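The hit ratio feeds directly into the effective memory-access time. A small sketch, using the 80% figure from the paragraph above and assumed timings of 20 ns for a TLB lookup and 100 ns for a memory access:

```python
# Effective access time (EAT) with a TLB; timings are illustrative.
def effective_access_time(hit_ratio, tlb_ns=20, mem_ns=100):
    hit_time = tlb_ns + mem_ns           # TLB hit: one memory access for the data
    miss_time = tlb_ns + 2 * mem_ns      # TLB miss: page-table access + data access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.80))  # 0.8*120 + 0.2*220 = 140.0 ns
```

At an 80% hit ratio the average access costs 140 ns instead of 220 ns, which is why even a small TLB pays for itself.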
