University of London / MSc Computer Science: Computer systems(前半)

I am taking the MSc Computer Science: Computer systems module at the University of London.

These are my personal study notes on the lecture content.

These notes cover weeks 1–5 of the 12-week module. (Week 1 started 6 January 2025; week 5 ended 9 February 2025.)

Module overview #

Aims

The main aim of the module is to learn how computers work: the basics of computer architecture and organisation, and the role and mechanism of operating systems. Specifically, students are introduced to three key aspects of computer systems: storage, processing, and transmission of information.

Weeks

Lectures run through Week 10; Weeks 11 and 12 are set aside for working on the final assignment.

  • Week 1: Introduction
  • Week 2: Programming in Rust
  • Week 3: Processors
  • Week 4: Memory storage
  • Week 5: Memory management
  • Week 6: Processes and threads
  • Week 7: Input-Output systems and file systems
  • Week 8: Concurrency
  • Week 9: Protection and security
  • Week 10: Multiple processor systems, network programming and the internet

References #

  • Brookshear, J.G. and D. Brylow, Computer science: An overview. (New York: Pearson, 2019) 13th edition.
  • Stallings, W. Operating systems: Internals and design principles, Global edition. (London: Pearson, 2017) 9th edition.
  • Stallings, W. Computer organisation and architecture, Global edition. (London: Pearson, 2022) 11th edition.

Week 1: Introduction #

Overview

A brief history of computing, and an overview of the main components of a modern computer system.

Key concepts

This topic includes:

  • history of computers
  • main parts of computers
  • fundamental concepts of computing
  • operating systems
  • binary representation of instructions and data.

Lecture list

  • Introduction
    • Lecture 1: Introduction to computer systems
  • A brief history of computing
    • Lecture 2: A brief history of computing
  • Main components of a computer
    • Lecture 3: The main components of a computer
    • Lecture 4: CPU components and their function
    • Lecture 5: Von Neumann architecture
  • The role of the operating system
    • Lecture 6: The structure of the operating system

Lecture 1: Introduction to computer systems #

“Computer Architecture” and “Computer Organisation” - The difference

In simple terms: "Architecture" is what a system does (the abstract design), and "Organisation" is how the system does it (the implementation details).

  • Computer Architecture
    • Definition: Computer architecture refers to the functional design of a computer system. It defines what the system does from a programmer’s perspective, focusing on the instruction set, data types, addressing modes, and the overall behavior that software interacts with.
    • Key Components:
      • Instruction Set Architecture (ISA)
      • Data types and formats (e.g., integers, floating-point)
      • Registers and addressing modes
      • Programmer-visible features
    • Example:
      • x86-64, Armv8, and RISC-V are different instruction set architectures.
      • Multiple processors (e.g., Intel Core, AMD Ryzen) can implement the same architecture (e.g., x86-64), allowing them to run the same software.
  • Computer Organisation
    • Definition: Computer organisation describes how the architecture is implemented in hardware. It deals with the physical and logical structure that delivers the functionality defined by the architecture — affecting performance, cost, and power consumption.
    • Key Components:
      • Control units, ALUs, registers
      • Pipelining and parallelism
      • Cache design (L1, L2, L3)
      • Memory hierarchy and buses
      • Clock signals, power management
      • Microarchitecture (instruction execution flow)
    • Example:
      • Apple M1 and Qualcomm Snapdragon both use the Arm architecture but have very different internal designs (organisations) that affect their performance and energy efficiency.

There are four basic functions that a computer can perform:

  1. data processing
  2. data storage
  3. data movement
  4. control
    • Within the computer, the control unit manages the computer's resources and orchestrates the performance of its functional parts in response to instructions.

Lecture 2: A brief history of computing #

Details omitted. An outline of the history of computing, from the birth of the abacus, said to be the first calculating device (known in Japan as the soroban; it represents numbers by the positions of its beads), to today's high-performance computers such as smartphones.

Lecture 3: The main components of a computer #

While modern computers come with many additional features, such as GPUs and SSDs, the core components remain the same. The key elements are:

  • Motherboard: A circuit board that connects all the computer’s components. It acts like the “glue” holding everything together and enables communication between parts.
  • CPU (Central Processing Unit): Known as the brain of the computer, the CPU interprets and executes instructions. It fetches, decodes, and processes tasks sequentially.
  • RAM (Random Access Memory): This is volatile memory that temporarily stores frequently used data and programs. It’s much faster than external storage, but its contents are erased when the computer is turned off.

Lecture 4: CPU components and their function #

Inside the CPU:

  • ALU (Arithmetic Logic Unit): Performs calculations and logical operations.
  • Control Unit (CU): Directs data flow and coordinates CPU operations.
  • Registers: Small, fast memory areas within the CPU:
    • Accumulator: Stores results from ALU.
    • MAR (Memory Address Register): Stores the address of data to be accessed.
    • MDR (Memory Data Register): Temporarily holds data being transferred to/from memory.
    • Program Counter (PC): Keeps track of the next instruction’s memory address.

Fetch-Decode-Execute Cycle:

  • Fetch: The CPU gets the next instruction using the PC and stores it in the MDR.
  • Decode: The CU interprets the instruction.
  • Execute: The instruction is carried out, possibly involving data movement or arithmetic.

Lecture 5: Von Neumann architecture #

  • Small set of basic logic components
  • Configuration of logic components designed for each specific computation
  • Connecting components into desired configuration - hardwired program

The von Neumann architecture, introduced by John von Neumann in 1945, laid the foundation for modern computing. Unlike early calculators with fixed, hardwired programs, von Neumann’s design allowed programs to be stored in memory. This architecture is based on key components: a processing unit (with ALU and registers), a control unit (with instruction register and program counter), memory, input/output devices, and sometimes mass storage.

Its major innovation is the stored-program concept, where both data and instructions are kept in the same memory. Instead of rewiring hardware for new tasks, changing control signals allows flexible, programmable operations. This paved the way for programming tools like assemblers and compilers, making computing much more universal and adaptable.

Lecture 6: The structure of the operating system #

Details omitted. On the role of the OS: managing memory and controlling I/O.

Further material #

Week 2: Programming in Rust #

Overview

This week we revise your existing programming knowledge and provide an introduction to the Rust programming language. You will refresh your knowledge of how to use the CODIO environment and see how systems programming languages differ from other programming languages (e.g., Python, Java). You will learn about the unique features of the Rust programming language and its heritage in the C programming language and Unix.

Key concepts

This topic includes:

  • Rust programming language
  • development environments
  • low-level programming

Lecture list

  • The Rust Programming Language
    • Lecture 1: Introduction to Rust
    • Lecture 2: An introduction to system programming
  • The Programming Environments
    • Lecture 3: An introduction to the Rust eco-system
    • Lecture 4: Cargo – the Rust build system

Lecture 1: Introduction to Rust #

Details omitted. An overview of Rust and its main features.

Rust is used for systems programming but it can also be used to build applications like any other programming language.

fn main() {
    println!("Hello, World!");
}

Lecture 2: An introduction to system programming #

System programming is the activity of programming computer systems software.

The primary distinguishing characteristic of systems programming, compared to application programming, is that application programming aims to produce software that provides services to the user directly, such as a word processor or a web browser, whereas systems programming aims to produce software platforms that provide services to other software, that are performance constrained, or both, as in the case of an operating system. This can also include computational science applications, game engines, industrial automation, and software-as-a-service applications.

Systems programming requires a great deal of hardware awareness, and its goal is to achieve efficient use of available resources, either because the software itself is performance critical or because even small efficiency improvements translate directly into significant savings of time and money.

We typically use systems programming for operating systems, device drivers of all kinds, filesystems, databases, code that runs on very cheap devices or devices that must be extremely reliable, cryptography, and media codecs (software for reading and writing audio, video, and image files), as well as media processing, for example in speech recognition or photo-editing software.

Lecture 3: An introduction to the Rust eco-system #

Details omitted. An overview of how code is written in Rust.

Lecture 4: Cargo – the Rust build system #

Details omitted. An overview of Rust's Cargo and how to write a Cargo.toml file.

Further material #

Week 3: Processors #

Overview

We investigate the mapping of high-level computer instructions to the internal binary representation. The Fetch-Decode-Execute cycle is discussed and a virtual machine language, VOLE, is introduced.

Key concepts

This topic includes:

  • main components of a CPU
  • fetch-execute cycle
  • instruction sets

Lecture list

  • The history of processors and architectures
    • Lecture 1: Introduction to processors
    • Lecture 2: The history of processors
    • Lecture 3: Introduction to computer architecture
    • Lecture 4: How a CPU works
  • Machine programming and the fetch-decode-execute cycle
    • Lecture 5: Assembly Basics – Registers, operands, move
    • Lecture 6: Fetch, decode, execute cycle
    • Lecture 7: Machine language programming
    • Lecture 8: Assembly language programming

Lecture 1: Introduction to processors #

Details omitted. An outline of what this week's lectures cover.

Lecture 2: The history of processors #

  • Early Computer Designs (1950s)
    • Each machine had a unique design
    • No compatibility between systems
    • Programs couldn’t be reused across different machines
  • IBM’s Innovation (1960s)
    • Created a family of computers with:
      • Shared software compatibility
      • Varying performance and price
    • Users could upgrade hardware without losing data or software
  • Microprocessor Revolution (1970s)
    • Transistor prices dropped
    • Entire CPU could fit on one circuit board
    • 16-bit computers with 4K–6K memory became common
    • Personal computers like ZX Spectrum emerged
  • RISC and Simplified Architecture (1980s)
    • UC Berkeley & IBM found compilers used only a small instruction subset
    • Introduced RISC (Reduced Instruction Set Computing):
      • Fewer, simpler instructions
      • More registers
      • Faster and cheaper
  • Architectural Evolution
    • Harvard Architecture:
      • Separate memory for program and data
      • Allows simultaneous access
    • Von Neumann Architecture:
      • Shared memory for program and data
      • Sequential access → bottlenecks
  • Instruction Pipelining (Late 1980s)
    • CPU handles multiple instructions at once in different stages
    • Improved processing speed
  • Rise of Microprocessors (1990s)
    • Became faster than multi-processor setups
    • Efficiency due to miniaturization and faster info transfer
  • Multi-core CPUs (2000s)
    • Chips (sockets) with multiple processors (cores)
    • Each core has:
      • Its own cache
      • Multiple logical threads
  • Emergence of GPUs
    • Initially for graphics, now also used for:
      • Physics simulations
      • Spreadsheet calculations
      • Neural networks
    • Use SIMD (Single Instruction, Multiple Data)
  • Modern CPU and GPU Integration
    • CPUs now include:
      • Vector units for array operations
    • System on Chip (SoC):
      • Integrates CPU, GPU, memory, I/O, DSP, etc.
      • Found in smartphones and tablets

Lecture 3: Introduction to computer architecture #

Details omitted.

Lecture 4: How a CPU works #

  • The CPU is the “brain” of the computer.
  • Inside the CPU, wires (buses) carry information rapidly, synced by a clock signal.
  • Clock speed (in GHz) determines how many billions of operations per second the CPU can perform.
  • RAM holds data and instructions at specific addresses.
  • The CPU sends addresses and activates wires to read from or write to RAM.
  • RAM stores binary data, which may represent numbers, instructions, or device addresses.

Lecture 5: Assembly Basics – Registers, operands, move #

CPU & Assembly

  • Each CPU family has its own machine instructions (in binary).
  • Assembly language is a readable form of machine code, using opcodes and operands.
  • Assemblers and linkers translate assembly into executable machine code.

Instruction Structure

An instruction = opcode + operand(s)

  • Example:
    • MOV EAX, 5 → move constant 5 to register EAX
    • ADD EAX, EBX → add contents of EBX to EAX

Example Workflow

To add two numbers:

  1. Load first number into a register
  2. Load second number into another register
  3. Use ADD instruction
  4. Store the result

Lecture 6: Fetch, decode, execute cycle #

  • The Fetch-Decode-Execute Cycle is the core process that drives how a CPU runs programs.
  • A program is a list of binary machine instructions stored in memory.
  • The CPU processes these instructions step by step.

3 Main Steps:

The cycle repeats: fetch → decode → execute. This cycle is what makes programs run.

  1. Fetch:
    • The CPU gets the next instruction from memory.
    • The Program Counter (PC) address is copied to the Memory Address Register (MAR).
    • The instruction is then loaded into the Memory Data Register (MDR) and then into the Instruction Register (IR).
  2. Decode:
    • The Control Unit decodes the binary instruction in the IR.
    • Determines the operation type and the components involved (e.g., ALU, registers).
  3. Execute:
    • The instruction is executed by sending signals to the appropriate components.
    • This could involve arithmetic, logic operations, memory access, or I/O.
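
To make the cycle concrete, here is a toy simulator in Rust (entirely my own sketch: the one-byte instruction format, the opcodes, and the names are hypothetical and far simpler than any real ISA). It fetches the instruction at the PC, decodes it into an opcode and an operand, and executes it:

fn main() {
    // A tiny program in "memory": LOAD 7, ADD 5, HALT (hypothetical encoding:
    // high nibble = opcode, low nibble = immediate operand).
    let memory: [u8; 3] = [0x17, 0x25, 0x00];
    let mut pc = 0usize; // Program Counter
    let mut acc: u8 = 0; // Accumulator

    loop {
        // Fetch: read the instruction at the address held in the PC.
        let ir = memory[pc]; // Instruction Register
        pc += 1;
        // Decode: split the instruction into opcode and operand.
        let opcode = ir >> 4;
        let operand = ir & 0x0F;
        // Execute: act on the decoded instruction.
        match opcode {
            0x1 => acc = operand,  // LOAD immediate into the accumulator
            0x2 => acc += operand, // ADD immediate to the accumulator
            0x0 => break,          // HALT
            _ => panic!("unknown opcode {opcode}"),
        }
    }
    println!("accumulator = {acc}"); // prints 12
}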

Lecture 7: Machine language programming #

All data in a computer is ultimately represented as binary (0s and 1s).

Logic Gates

  • Logic gates (AND, OR, XOR, NOT) are built from transistors.
  • Gates perform Boolean operations, outputting true/false (1/0).
  • Gates are combined into circuits and chips in CPUs and memory.
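
The Boolean operations these gates compute can be tried directly with Rust's bitwise operators (a minimal demonstration of my own, not from the lecture):

fn main() {
    let (a, b) = (0b1100u8, 0b1010u8);
    println!("{:04b}", a & b);       // AND -> 1000
    println!("{:04b}", a | b);       // OR  -> 1110
    println!("{:04b}", a ^ b);       // XOR -> 0110
    println!("{:04b}", !a & 0b1111); // NOT (masked to 4 bits) -> 0011
}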

Flip-Flops

  • Flip-flops store one bit of data and are built from logic gates.
  • When connected in sequence, they form registers that store multi-bit data.
  • These circuits form the basis of main memory.
  • Memory is made of billions of cells, each storing one byte (8 bits).

Lecture 8: Assembly language programming #

The Vole machine was developed by Brookshear as a model machine for learning machine and assembly language.

Example Instruction

Example: 3587

  • 3 → STORE opcode
  • 5 → register 5
  • 87 → memory address 87
  • → Stores value from register 5 into memory address 87
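
The same decoding can be sketched in Rust by shifting and masking the 16-bit instruction (the variable names are mine; the field layout follows the example above):

fn main() {
    let instruction: u16 = 0x3587; // STORE register 5 at memory address 87
    let opcode = (instruction >> 12) & 0xF;  // 3 -> STORE
    let register = (instruction >> 8) & 0xF; // 5 -> register 5
    let address = instruction & 0xFF;        // 87 -> memory address
    println!("opcode={opcode:X} register={register:X} address={address:X}");
}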

Further material #

Week 4: Memory storage #

Overview

We consider topics associated with data representation and the storage of data within a computer. The types of data we will consider include text, numeric values, images, audio, and video.

Key concepts

This topic includes:

  • cache
  • internal memory
  • external memory

Lecture list

  • Byte-oriented memory organisation
    • Lecture 1: Introduction to memory organisation
    • Lecture 2: Byte-oriented memory organisation
    • Lecture 3: Machine words
    • Lecture 4: Word-oriented memory organisation
    • Lecture 5: Byte ordering
  • Representing data
    • Lecture 6: Representing Integers
    • Lecture 7: Representing pointers
    • Lecture 8: Representing Strings

Lecture 1: Introduction to memory organisation #

Details omitted. An outline of what this week's lectures cover.

Lecture 2: Byte-oriented memory organisation #

Memory Structure

  • Memory is divided into cells, each storing 8 bits (1 byte).
  • Modern computers contain billions of such cells.
  • 4 bits = nibble, 8 bits = byte.

Bit Order in a Byte

  • Leftmost bit = Most Significant Bit (MSB) → “high-order” end.
  • Rightmost bit = Least Significant Bit (LSB) → “low-order” end.
  • Bit pattern within a byte has a fixed order.

Memory Addresses

  • Each memory cell has a unique numeric address, starting from 0.
  • These addresses are used to identify and access cells individually.

Multi-Byte Storage

  • 16-bit data is stored using two consecutive 8-bit cells.

Lecture 3: Machine words #

What is a Machine Word?

  • A machine word is a basic unit of data in a computer.
  • Composed of bytes (each byte = 8 bits).
  • May represent:
    • Instructions
    • Numbers (integers, floats)
    • Characters or strings

Word Size

  • Refers to how much data the CPU can process at once.
  • Modern computers typically use 64-bit words (8 bytes).
  • Embedded systems (e.g., microwaves) may use 8 or 16-bit words.
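
Since Rust's usize is defined to match the target's pointer width, the word size of the machine at hand can be checked directly (a quick aside of my own, not from the lecture):

fn main() {
    // usize has the target's pointer width, i.e. the machine word size.
    println!(
        "word size: {} bytes ({} bits)",
        std::mem::size_of::<usize>(),
        usize::BITS
    ); // e.g. "word size: 8 bytes (64 bits)" on a 64-bit machine
}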

Storage and Memory

  • Memory cells are usually 8 bits (1 byte).
  • Short words (e.g., one character like “A”) require 1 cell.
  • Longer words (e.g., “HELLO”) require multiple memory cells.

Lecture 4: Word-oriented memory organisation #

  • Memory still addressed by bytes (smallest addressable unit).
  • But in word-oriented systems, memory can also be viewed in word-sized chunks.
  • This means each word may have its own address, depending on word size.

Lecture 5: Byte ordering #

What is Byte Ordering?

  • Byte ordering (or endianness) defines how multi-byte data (like words or integers) are stored in memory.
  • Memory is made of byte-addressable cells, each storing 1 byte.

Two Main Types of Endianness: Big-Endian, Little-Endian

  • Big-Endian:
    • Stores the most significant byte at the lowest memory address
    • Example order for 0A 0B 0C 0D: Address: 0 → 0A, 1 → 0B, 2 → 0C, 3 → 0D
  • Little-Endian:
    • Stores the least significant byte at the lowest memory address
    • Example order for 0A 0B 0C 0D: Address: 0 → 0D, 1 → 0C, 2 → 0B, 3 → 0A

Endianness also affects data transmission, e.g., over networks.

Big-endian transmission sends the most significant byte first, which can be beneficial because the most important part of a value arrives first if transmission is cut off.

  • Endianness determines how multi-byte data is stored and read in memory.
  • Big-endian vs. Little-endian affect address order.
  • Choice of endianness is often arbitrary, but preserved for compatibility.
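
Rust's integer types expose both orders through to_be_bytes and to_le_bytes, so the 0A 0B 0C 0D example above is easy to verify:

fn main() {
    let value: u32 = 0x0A0B0C0D;
    println!("{:02X?}", value.to_be_bytes()); // big-endian:    [0A, 0B, 0C, 0D]
    println!("{:02X?}", value.to_le_bytes()); // little-endian: [0D, 0C, 0B, 0A]
    println!("{:02X?}", value.to_ne_bytes()); // this machine's native order
}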

Lecture 6: Representing Integers #

CPUs operate on binary regardless of how numbers are displayed.

  • Unsigned integers
    • All bits used to represent magnitude.
    • e.g., 8-bit → range: 0 to 255
  • Signed integers
    • The leftmost bit is the sign bit (1 = negative, 0 = positive).
    • e.g., 8-bit → range: -128 to +127
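
These ranges, and the way the same bit pattern reads differently as unsigned versus signed, can be checked in a few lines of Rust (my own example):

fn main() {
    println!("u8 range: {} to {}", u8::MIN, u8::MAX); // 0 to 255
    println!("i8 range: {} to {}", i8::MIN, i8::MAX); // -128 to 127

    // The same 8-bit pattern, interpreted both ways:
    let bits: u8 = 0b1000_0000;
    println!("as unsigned: {}", bits);       // 128
    println!("as signed:   {}", bits as i8); // -128 (sign bit set)
}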

Lecture 7: Representing pointers #

  • A pointer is a variable that stores the memory address of another value.
  • Pointers are essentially references, and they allow direct access to memory.
  • Used for:
    • Dynamic memory allocation (e.g., heap memory)
    • Accessing arrays, strings, and complex data structures
    • Passing data between functions
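
In Rust, references and Box are everyday forms of this idea: each stores the address of another value. A minimal sketch (the identifiers are mine):

fn main() {
    let value = 42;
    let reference = &value; // holds the memory address of `value`
    println!("address of value: {:p}", reference);
    println!("value via pointer: {}", *reference); // dereference

    // Dynamic allocation: the Box holds the address of a value on the heap.
    let boxed = Box::new(42);
    println!("heap address: {:p}", &*boxed);
}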

Lecture 8: Representing Strings #

  • Text is stored as a sequence of bit patterns, each representing a character or symbol.
  • ASCII (7-bit) was one of the earliest encodings for English characters and symbols. ASCII was later extended to 8 bits, but still lacked support for many global languages.
  • So, Unicode was developed to support a wider range of characters using up to 21 bits per character. Unicode uses code points to represent characters (e.g., U+00E9 for “é”).
  • UTF-8 is a popular variable-length encoding (1–4 bytes per character).
    • Common characters use fewer bytes.
    • Complex characters (e.g., Chinese, emojis) use more bytes.
  • Example: “é” is not in ASCII but is encoded in UTF-8 as a 2-byte sequence.
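
Rust strings are UTF-8 by definition, so the "é" example can be verified directly:

fn main() {
    let s = "é";
    println!("characters: {}", s.chars().count()); // 1
    println!("bytes: {}", s.len());                // 2 (UTF-8 is variable-length)
    println!("encoding: {:02X?}", s.as_bytes());   // [C3, A9]
    println!("code point: U+{:04X}", 'é' as u32);  // U+00E9
}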

Further material #

Week 5: Memory management #

Overview

In this topic we discuss the components that, at a top level, characterise a computer system by describing (1) the external behaviour of each component, that is, the data and control signals that it exchanges with other components, and (2) the interconnection structure and the controls required to manage the use of the interconnection structure. We focus upon the interaction with memory, and how we might improve the performance by “caching” (storing) some of the most important memory cells.

Key concepts

This topic includes:

  • memory hierarchy
  • caches
  • virtual memory
  • impact of memory usage/organisation on coding.

Lecture list

  • The Memory hierarchy
    • Lecture 1: Introduction to memory management
    • Lecture 2: Memory partitioning
    • Lecture 3: Paging
    • Lecture 4: Segmentation
  • Virtual memory
    • Lecture 5: Overview of virtual memory
    • Lecture 6: Hardware and control structures
    • Lecture 7: Operating system software

Lecture 1: Introduction to memory management #

An outline of what this week's lectures cover.

This week’s topic is memory management. Key aspects include:

  • Relocation: Processes must be movable in memory to maximize efficiency and flexibility.
  • Protection: Each process should be protected from interference by others. The CPU often handles memory protection during execution.
  • Sharing: Memory should allow controlled sharing of data and program code between processes to avoid redundancy.
  • Logical organization: Memory is viewed as a one-dimensional address space, and secondary storage extends capacity.
  • Physical organization: Main memory is fast but volatile, while secondary memory is slower but persistent.
  • Partitioning: Techniques like paging (fixed-size blocks) and segmentation (variable-size blocks) manage memory allocation.
  • Virtual memory: Combines main and secondary memory using partitioning and segmentation to simulate larger memory.
  • Hardware & software roles: Hardware (CPU) manages low-level protection and addressing, while the OS oversees overall memory management.

Lecture 2: Memory partitioning #

What is Memory Management?

  • Software function in an OS for managing main memory.
  • Loads processes into memory for CPU execution.
  • Monitors memory to track allocated vs. available space.

Types of Memory Allocation

There are two types of memory allocation.

  1. Contiguous Allocation
  2. Non-Contiguous Allocation

Contiguous Allocation

A process must be fully loaded into contiguous memory.

  • Fixed Partitioning
    • Memory divided into fixed-size, non-overlapping blocks.
    • Each process fits into a partition of equal or larger size.
    • Advantages:
      • Simple to implement.
      • Low OS overhead.
    • Disadvantages:
      • Internal fragmentation: Unused space inside partitions.
      • Process size limitation.
      • Limited multiprogramming (fixed number of partitions).
      • Cannot split a process across partitions.
  • Variable (Dynamic) Partitioning
    • Partitions created at runtime based on process size.
    • RAM initially empty; partitions are dynamically created.
    • Advantages:
      • No internal fragmentation.
      • Better memory utilization.
      • No process size restrictions.
      • Flexible degree of multiprogramming.
    • Disadvantages:
      • More complex to implement.
      • External fragmentation still possible.

Non-Contiguous Allocation

A process is loaded into non-contiguous blocks of memory.

  • OS uses a page table to track memory blocks.
  • Techniques:
    • Paging
    • Multi-level Paging
    • Inverted Paging
    • Segmentation
    • Segmented Paging
  • Benefits:
    • Reduces memory wastage.
    • Avoids external fragmentation.
  • Drawbacks:
    • Requires address translation (adds overhead).
    • Slower execution due to translation time.

Lecture 3: Paging #

What is Paging?

  • A memory management technique that retrieves data from secondary storage in fixed-size blocks called pages.
  • Prevents system crashes by allowing the OS to free up RAM by moving idle pages to secondary storage.
  • Key part of virtual memory, which lets systems use more memory than physically available.

Key Concepts

  • Pages & Frames
    • Pages: Equal-sized blocks that make up a process in logical (virtual) memory.
    • Frames: Equal-sized blocks in physical memory (RAM).
    • Page size = Frame size → helps avoid external fragmentation.
  • Virtual Memory
    • A section of secondary storage (like a hard drive) used to simulate RAM.
    • Enables running large programs or many processes simultaneously.
  • Address Translation
    • Logical address = Page number + Offset
    • Physical address = Frame number + Offset
    • A Page Table is used to map logical pages to physical frames (like a lookup table; see the sketch after this list).
  • Page Swapping
    • If RAM is full, idle pages are moved to secondary storage.
    • When needed again, they are swapped back into RAM.
    • This process happens continuously during execution.
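
A minimal Rust sketch of the translation just described, using a hypothetical four-entry page table and 4 KiB pages (all numbers are made up for illustration):

fn main() {
    // Hypothetical page table: index = page number, value = frame number.
    let page_table = [5usize, 2, 7, 1];
    let page_size = 4096; // 4 KiB pages (= frame size)

    let logical = 2 * page_size + 123; // an address on page 2, offset 123
    let page = logical / page_size;    // page number
    let offset = logical % page_size;  // offset within the page
    let physical = page_table[page] * page_size + offset; // frame 7 + offset
    println!("logical {logical} -> physical {physical}");
}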

Advantages and Disadvantages of Paging

  • Advantages
    • Eliminates external fragmentation.
    • Supports multi-programming.
    • Pages and frames being equal in size allows easy swapping.
  • Disadvantages
    • Increases hardware cost (page address mapping requires special hardware).
    • Page tables consume memory — inefficient on systems with limited RAM.
    • Can lead to internal fragmentation when pages don’t fully utilize frames.
    • Some memory might stay unused if blocks aren’t big enough to hold a full process.

Lecture 4: Segmentation #

What is Segmentation?

  • Memory management method that divides a program into variable-sized segments (e.g., functions, data).
  • Segments are loaded into non-contiguous memory blocks.

Key Points

  • Unlike paging (fixed-size), segmentation uses variable-length units.
  • Logical address = Segment number + Offset.
  • OS maintains a segment table to track:
    • Segment start address
    • Segment length
  • Table is loaded into a special register during execution.
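
A minimal Rust sketch of this segment-table lookup, including the length check that provides protection (all values are hypothetical):

fn main() {
    // Hypothetical segment table: (start address, length) per segment.
    let segment_table = [(0x1000usize, 0x400usize), (0x8000, 0x100)];

    // Logical address = segment number + offset.
    let (segment, offset) = (1usize, 0x2Ausize);
    let (base, length) = segment_table[segment];
    assert!(offset < length, "offset exceeds segment length"); // protection
    println!("physical address: {:#X}", base + offset); // 0x802A
}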

Advantages

  • Better reflects program structure.
  • Avoids internal fragmentation.

Lecture 5: Overview of virtual memory #

Why Virtual Memory is Important?

  • Allows programs to run even if they are larger than physical memory
  • Enables memory protection and isolation between processes
  • Improves efficiency by loading only the needed parts of a program
  • Provides transparency through address translation from logical to physical
  • Uses paging and segmentation to manage memory efficiently

How Virtual Memory Works

  • Virtual memory uses part of secondary storage as if it were main memory
  • Programs are divided into chunks (pages or segments), only some are loaded into RAM
  • When needed data is not in RAM, a page fault occurs and the OS loads the required data
  • Logical (virtual) addresses are translated to physical addresses by the Memory Management Unit (MMU)

Key Components

  • Page Table: Maps virtual pages to physical frames and tracks whether a page is in RAM
  • Swapping: Moves processes or pages between RAM and secondary storage to free memory
  • Page Replacement Algorithm: Chooses which page to remove when space is needed
  • Working Set: The minimum set of pages a process needs in RAM to execute efficiently

Concepts of Locality

  • Locality refers to the tendency of programs to access a small set of data or instructions repeatedly
  • This allows memory to hold only relevant parts, improving performance

Thrashing

  • Occurs when too many processes cause excessive page swapping
  • Reduces performance by overloading the CPU with memory management tasks

Paging vs. Segmentation

  • Paging: Divides memory into fixed-size blocks (pages and frames)
  • Segmentation: Divides memory into variable-size logical units (segments)
  • Combined approach: Segments are divided into pages, enabling flexible and secure memory use

Lecture 6: Hardware and control structures #

Memory Management Hardware and Control

  • Memory addresses are translated from logical to physical at runtime.
  • A process does not need to reside entirely in contiguous memory; only needed portions are loaded.
  • The portion of a process in memory is called the resident set, and its size can be fixed or variable.

Interrupts and I/O Handling

  • When needed data is not in memory, an interrupt (page fault) is generated and the process enters a blocked state.
  • While disk I/O is ongoing, other processes can execute; once I/O completes, the original process resumes.

Role of Hardware and Software

  • Virtual memory requires cooperation between hardware (MMU) and the operating system.
  • The OS manages movement of pages or segments, while the MMU performs address translation.

Page Table and Segment Table

  • Each process has its own page table, containing frame numbers and control bits.
  • Control bits indicate whether a page is in memory and whether it has been modified.
  • If unchanged, pages can be removed without being written back to disk.
  • Segment tables contain base addresses, lengths, and control bits, enabling memory protection and sharing.

Segmentation and Paging Combined

  • A virtual address includes segment number, page number, and offset.
  • Both segment and page tables are used to determine the physical address.

TLB (Translation Lookaside Buffer)

  • A cache that stores recently used page table entries to speed up address translation.
  • The CPU searches the TLB in parallel for a match; if not found, it accesses the full page table and updates the TLB.
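
A rough software analogy of this lookup in Rust, with a HashMap standing in for the TLB in front of the full page table (real TLBs are hardware that search all entries in parallel; the numbers are made up):

use std::collections::HashMap;

fn main() {
    let page_table = [5usize, 2, 7, 1]; // full page table (in memory)
    let mut tlb: HashMap<usize, usize> = HashMap::new(); // small, fast cache

    for page in [2usize, 2, 0, 2] {
        let frame = match tlb.get(&page) {
            Some(&f) => f, // TLB hit: no page-table access needed
            None => {
                let f = page_table[page]; // TLB miss: consult the page table
                tlb.insert(page, f);      // ...and cache the entry
                f
            }
        };
        println!("page {page} -> frame {frame}");
    }
}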

Key Control Bits

  • Indicate whether a page or segment is in memory.
  • Indicate whether it has been modified.
  • Used to determine whether to replace or write back a page or segment.

Performance Considerations

  • Paging and segmentation introduce overhead.
  • The TLB improves performance by reducing access time for frequently used page entries.

Lecture 7: Operating system software #

Operating System and Memory Management

  • The operating system manages memory based on whether virtual memory, paging, segmentation, or a combination is used.
  • Some systems avoid paging or segmentation due to lack of hardware support for address translation.

Virtual Memory Techniques

  • Modern systems often use both paging and segmentation.
  • Paging is used for memory allocation and protection.
  • Segmentation supports sharing and logical structuring of processes.

Fetch Policy

  • Demand paging loads pages only when referenced, causing many page faults at startup.
  • Pre-paging loads adjacent pages proactively to reduce future page faults.

Placement Policy

  • Determines where a process is placed in physical memory.
  • Not important in paging systems due to hardware-based address translation.
  • Important in segmentation-only systems.

Replacement Policy

  • Decides which page to remove when memory is full.
  • Aims to remove pages least likely to be used soon, though this is hard to predict.

Common Replacement Algorithms

  • Least Recently Used (LRU): Removes the page not used for the longest time.
  • First-In First-Out (FIFO): Removes the oldest loaded page.
  • Clock policy: Uses a use bit to approximate LRU, skipping pages that have been used recently.
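
As an illustration, here is a minimal sketch of the FIFO policy above in Rust (my own example: three frames, with a reference string chosen to force evictions):

use std::collections::VecDeque;

fn main() {
    let capacity = 3; // number of physical frames
    let mut frames: VecDeque<u32> = VecDeque::new();

    for page in [1u32, 2, 3, 1, 4, 5] {
        if frames.contains(&page) {
            continue; // page already resident: no page fault
        }
        if frames.len() == capacity {
            let evicted = frames.pop_front().unwrap(); // oldest page leaves
            println!("evict page {evicted}");
        }
        frames.push_back(page); // page fault: bring the page in
        println!("load page {page}");
    }
}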

Page Buffering

  • Replaced pages are added to either a modified or unmodified list.
  • Modified pages are written back to disk; unmodified pages may be reused or discarded.

Locked Frames

  • Some memory frames are locked (e.g., OS kernel) and cannot be replaced.
  • Locking is managed using a lock bit in the frame table.

Cleaning Policy

  • Demand cleaning writes pages to disk only when selected for replacement.
  • Pre-cleaning writes modified pages in batches to improve efficiency.

Load Control

  • Regulates how many processes are in memory.
  • Too few processes cause idle CPU time due to blocking.
  • Too many processes lead to thrashing from excessive swapping.

Further material #