[OS] Memory Management 1


Background

  • Program must be brought (from disk) into memory and placed within a process for it to be run (Von Neumann architecture)
  • Main memory and registers are the only storage that the CPU can access directly
  • Cache sits between main memory and CPU registers
  • Each process needs its own address space, but memory is an expensive system resource so we should manage memory efficiently

Address

  • A location in the memory
  • Physical address vs. Logical address
  • Absolute address vs. Relative address

Address in computers

  • Physical address (PA) : address seen by the memory unit
  • Logical address (LA) : address generated by the CPU
  • CPU sees logical addresses
  • Memory Management Unit (MMU) translates LAs to PAs

  • How can MMU map logical addresses to physical addresses?

Base and Limit Registers

  • A pair of base and limit registers define the logical address space that a process can access
  • Base register contains value of smallest physical address
  • Limit register contains range of logical addresses
  • CPU must check every memory access generated in user mode to be sure that it falls between base and base + limit for that user (Hardware Address Protection)
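
A minimal sketch of this translation and protection check, written here in C (the register values, function names, and trap behavior are illustrative assumptions, not any particular architecture):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Illustrative MMU state: loaded by the OS on each context switch. */
static uint32_t base  = 0x300000;  /* smallest physical address of the process  */
static uint32_t limit = 0x040000;  /* size (range) of the logical address space */

/* Translate a logical address; trap on an out-of-range access. */
static uint32_t translate(uint32_t logical)
{
    if (logical >= limit) {
        fprintf(stderr, "trap: addressing error (LA 0x%x >= limit 0x%x)\n",
                logical, limit);
        exit(EXIT_FAILURE);           /* real hardware raises a trap to the OS */
    }
    return base + logical;            /* physical address = base + logical address */
}

int main(void)
{
    printf("LA 0x1000 -> PA 0x%x\n", translate(0x1000));
    translate(0x50000);               /* out of range: takes the trap path */
}
```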

Address Binding

  • Address binding of instructions and data to memory addresses can happen at three different stages

  • Compile time: If the memory location is known a priori, absolute code (= code that loads at a known, fixed memory address) can be generated (must recompile the code if the starting location changes)
    • logical address = physical address
    • The process can be loaded fast, but a collision can occur if the generated address space is already used by another process
    • No longer used in multiprogramming operating systems
  • Load time: Must generate relocatable code (= machine language that can be run from any memory location) if the memory location is not known at compile time
    • If the compiler cannot decide the physical address, it changes symbolic addresses to relocatable addresses (ex. MOVE .BS+0x18)
    • The relocating loader holds the base address in main memory from which allocation begins
    • When a process is loaded into main memory, the relocating loader adds the base address to every logical address to generate the physical address (see the loader sketch after this list)
    • Loading a process into memory takes a long time
  • Execution time: Binding is delayed until run time if the process can be moved during its execution from one memory segment to another
    • Needs hardware support for address maps (ex. base and limit registers)
    • Most general-purpose operating systems use this method
    • Execution-time binding occurs whenever a reference is made to a location in memory
    • The MMU maps logical addresses to physical addresses
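
As a rough illustration of load-time binding (a toy sketch, not a real object-file format; the record layout and names are made up): assume the compiler emits an image with addresses relative to 0 plus a table of the offsets that hold such addresses, and the relocating loader patches them by adding the base address chosen at load time.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical relocation record: offset of a word that holds a relative address. */
struct reloc { size_t offset; };

/* Load-time binding: add the base address to every relocatable address
 * recorded in the relocation table before the process starts running.    */
static void relocate(uint32_t *image, const struct reloc *table, size_t n,
                     uint32_t base)
{
    for (size_t i = 0; i < n; i++)
        image[table[i].offset] += base;
}

int main(void)
{
    uint32_t image[4] = { 0x18, 0, 0x20, 0 };      /* words 0 and 2 hold relative addresses */
    struct reloc table[] = { {0}, {2} };
    relocate(image, table, 2, 0x300000);           /* base chosen by the loader */
    printf("0x%x 0x%x\n", image[0], image[2]);     /* now absolute: 0x300018 0x300020 */
}
```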

Dynamic Loading

  • A routine is not loaded until it is called (one way to arrange this is sketched after this list)
  • Better memory-space utilization; an unused routine is never loaded
  • All routines kept on disk in relocatable load format
  • Useful when large amounts of code are needed to handle infrequently occurring cases
  • No special support from the operating system is required
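
One concrete way a program can do this kind of on-demand loading on a Unix-like system is the dlopen/dlsym API; a minimal sketch (the library name libm.so.6 and the cos symbol are just convenient examples, and on older systems you may need to link with -ldl):

```c
#include <dlfcn.h>
#include <stdio.h>

int main(void)
{
    /* The math library is brought into memory only when this code path runs,
     * not when the program itself is loaded.                                  */
    void *handle = dlopen("libm.so.6", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
    if (cosine)
        printf("cos(0) = %f\n", cosine(0.0));

    dlclose(handle);
    return 0;
}
```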

Dynamic Linking

  • Static linking : system libraries and program code combined by the loader into the binary program image
  • Dynamic Linking : linking postponed until execution time
  • A small piece of code, the stub, is used to locate the appropriate memory-resident library routine
  • The stub replaces itself with the address of the routine and executes the routine (see the sketch after this list)
  • Dynamic linking is particularly useful for libraries
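
A conceptual sketch of the stub idea in plain C (this only mimics, in a very simplified way, what the dynamic linker's lazy binding does; the function names are made up):

```c
#include <stdio.h>

static void real_routine(void) { puts("library routine runs"); }

static void stub(void);                      /* forward declaration */

/* All calls go through this pointer; it initially points at the stub. */
static void (*routine_entry)(void) = stub;

/* The stub "locates" the memory-resident routine on first use, replaces
 * itself with the routine's address, then executes the routine.          */
static void stub(void)
{
    routine_entry = real_routine;            /* later calls bypass the stub */
    real_routine();
}

int main(void)
{
    routine_entry();   /* first call: goes through the stub */
    routine_entry();   /* second call: goes directly to the routine */
}
```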

Swapping

  • A process can be swapped temporarily out of memory to a backing store, and then brought back into memory for continued execution
  • Total physical memory space of processes can exceed physical memory
  • Backing store : fast disk large enough to accommodate copies of all memory images for all users
  • Roll out, roll in : swapping variant used for priority-based scheduling algorithms, lower-priority process is swapped out so higher-priority process can be loaded and executed
  • The major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped (see the example after this list)
  • System maintains a ready queue of ready-to-run processes which have memory images on disk
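
As a rough worked example with assumed numbers: swapping a 100 MB process out to a backing store that transfers 50 MB/s takes about 2 seconds, and swapping the next process in takes roughly another 2 seconds, so this one swap adds on the order of 4 seconds of pure transfer time to the context switch.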

Fixed Partitions

  • Break up physical memory into same-sized partitions
    • Physical address = base address + logical address(virtual address/relative address)
    • A process cannot access memory beyond its partition
    • the number of partitions = the degree of multiprogramming
  • Put a base register in the MMU
  • Then, the OS loads the base register when it switches processes
  • Check whether logical address >= partition size (if so, the access is not allowed)

Advantage

  • Easy to implement
  • Fast context switch (just save/restore the base register on context switch)

Problem

  • Partition size : One size does not fit all
  • Internal fragmentation
    • A resource allocated to one instance cannot be utilized by other instances even though part of the resource is not being used
    • With fixed partitioning, memory inside each partition is internally fragmented (e.g., a 7 MB process in a 10 MB partition leaves 3 MB unusable inside the partition)

Variable Partitions

  • Improves on fixed partitioning
    • allows variable partition sizes
    • assumes the OS knows in advance how much memory each process needs
    • allocates a contiguous chunk from the holes (free regions)
  • The access limit must be checked against the allocated memory size
  • No internal fragmentation

Problems : External fragmentation

  • As the OS loads and unloads processes, holes are left scattered throughout physical memory
  • It can become impossible to allocate a contiguous chunk even though the sum of the holes is greater than the required chunk

Solutions

  • Contiguous : compaction (possible only if relocation is dynamic, and is done at execution time)
  • Non-contiguous : paging and segmentation

Allocation strategies

  • First fit : Allocate from the first hole that is big enough
    • Statistically, for every N allocated blocks roughly another 0.5N blocks are lost to external fragmentation under first fit (the 50-percent rule)
  • Best fit : Allocate from the smallest hole that is big enough
    • tends to leave very small leftover holes, which themselves contribute to external fragmentation (first fit and best fit are compared in the sketch after this list)
  • Worst fit : Allocate from the largest hole
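
A toy comparison of first fit and best fit over a list of holes (hole sizes are arbitrary; a real allocator would also split the chosen hole and coalesce adjacent free holes):

```c
#include <stddef.h>
#include <stdio.h>

/* Toy free list: each entry is the size of one hole. */
static size_t holes[] = { 120, 40, 300, 60 };
static const size_t NHOLES = sizeof holes / sizeof holes[0];

/* First fit: return the index of the first hole that is big enough. */
static int first_fit(size_t request)
{
    for (size_t i = 0; i < NHOLES; i++)
        if (holes[i] >= request)
            return (int)i;
    return -1;                                    /* no hole is big enough */
}

/* Best fit: return the index of the smallest hole that is big enough. */
static int best_fit(size_t request)
{
    int best = -1;
    for (size_t i = 0; i < NHOLES; i++)
        if (holes[i] >= request && (best < 0 || holes[i] < holes[(size_t)best]))
            best = (int)i;
    return best;
}

int main(void)
{
    printf("request 50: first fit -> hole %d, best fit -> hole %d\n",
           first_fit(50), best_fit(50));          /* first fit: 0, best fit: 3 */
}
```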

Segmentation

  • An extension of variable partitions
  • Divide the address space into logical segments (multiple segments per process)
  • Each segment corresponds to a logical entity in the address space (code, data, stack, heap, etc.)
  • Users (or processes) view memory as a collection of variable-sized segments

  • Each process has its own segment table (base : starting physical address, limit : length of the segment)
    • The number of table rows and columns varies across architectures
    • Can be located with segment-table base register (STBR) in CPU
    • Segment-table length register (STLR) indicates number of segments used by a program
    • Should be saved/restored during context switch
  • Segment ID can be either explicit or implicit
    • Explicit : <segment ID, offset> (ex. <0x01, 0x2a31>, <0x21, 0x23c2>)
    • Implicit : use the n MSBs as the segment ID (ex. in 0x012a31, the leading 0x01 bits are the segment ID); see the translation sketch after this list
  • Enables sparse allocation of the address space
    • Can grow and shrink the stack and heap independently
    • Each segment can be relocated dynamically
  • Easy to protect segments
    • Different protection bits for different segments
  • External fragmentation is still a problem…
  • If many segments are supported, the segment table must be kept in memory, which adds overhead to context switching (the segment table gets big…)
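
A minimal sketch of segment-based translation using the explicit <segment ID, offset> form and a per-process table of (base, limit) pairs; all values below are illustrative:

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

struct segment {
    uint32_t base;    /* starting physical address of the segment */
    uint32_t limit;   /* length of the segment                     */
};

/* Per-process segment table: 0 = code, 1 = data, 2 = stack (illustrative). */
static struct segment seg_table[] = {
    { 0x100000, 0x4000 },
    { 0x200000, 0x8000 },
    { 0x3f0000, 0x2000 },
};

/* Translate <segment ID, offset> into a physical address, trapping on
 * a bad segment ID (the STLR check) or an offset beyond the limit.     */
static uint32_t translate(uint32_t seg, uint32_t offset)
{
    if (seg >= sizeof seg_table / sizeof seg_table[0] ||
        offset >= seg_table[seg].limit) {
        fprintf(stderr, "trap: segmentation violation <%u, 0x%x>\n", seg, offset);
        exit(EXIT_FAILURE);
    }
    return seg_table[seg].base + offset;
}

int main(void)
{
    printf("<1, 0x2a31> -> PA 0x%x\n", translate(1, 0x2a31));  /* 0x202a31 */
    translate(2, 0x9000);                                      /* exceeds the stack limit */
}
```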
