CS370: Operating Systems



Schedule


Last updated on Wednesday, December 6, 2023 4:17 PM
Professor
Shrideep Pallickara

Office Hours:
Fridays 3:00-4:00 pm in CSB-364 and via Zoom
E-mail: compsci_cs370@colostate.edu

Lecture Coordinates
Behavioral Sciences 131
TTH 2:00-3:15 pm

All e-mails should be addressed to:
compsci_cs370@colostate.edu


Graduate Teaching Assistants
Max Bar-on and Oluwatosin Falebita

Undergraduate Teaching Assistants
Caleb Chou, Josiah Hegarty, and Dylan Schreiber


Key to Notation
Readings will be from Operating System Concepts by Silberschatz, Galvin, and Gagne, 10th edition. John Wiley & Sons, Inc. ISBN-13: 978-1119800361. [SGG]
Additional Useful References
(1) Andrew S. Tanenbaum and Herbert Bos. Modern Operating Systems, 4th edition, 2014. Prentice Hall. ISBN: 013359162X/978-0133591620. [AT]
(2) Thomas Anderson and Michael Dahlin. Operating Systems: Principles and Practice, 2nd edition. Recursive Books. ISBN: 0985673524/978-0985673529. [AD]
(3) Kay Robbins and Steve Robbins. Unix Systems Programming, 2nd edition. Prentice Hall. ISBN-13: 978-0-13-042411-2. [RR]
(4) Brian W. Kernighan and Dennis M. Ritchie. The C Programming Language, 2nd edition. Prentice Hall. ISBN: 0131103628/978-0131103627.
(5) Doug Lea. Concurrent Programming in Java: Design Principles and Patterns, 2nd edition. Prentice Hall. ISBN: 0201310090/978-0201310092.



   
Introduction

This module provides an overview of the course, grading criteria, and a brief introduction to high-level operating systems concepts. We will explore the differences between kernel mode and user mode and why they exist.

Readings:
Ch {1, 2} [SGG]
Ch {1} [RR]
Ch {1} [AT]
Ch {1} [AD]
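
A minimal illustrative sketch (not part of the assigned reading, and assuming a POSIX system): the C program below runs in user mode, and its write() call is the point at which control crosses into kernel mode.

    #include <string.h>
    #include <unistd.h>   /* write(): a thin wrapper around a system call */

    int main(void) {
        const char *msg = "Hello from user mode\n";

        /* Everything up to here executes in user mode. write() traps into
         * the kernel, which performs the privileged I/O on file descriptor 1
         * (stdout) in kernel mode and then returns to the program. */
        write(STDOUT_FILENO, msg, strlen(msg));
        return 0;
    }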

HW1 08/22

Term Project (TP) 08/22



 

Objectives:

  1. Summarize basic operating systems concepts
  2. Highlight key developments in the history of operating systems
 
08/22

08/24


Lecture 1

Lecture 2
   
Processes

Processes are a foundational construct for organizing the computations carried out by a program. This module will contrast programs and processes. A key idea covered in this module is multiprogramming, which can be used to give the illusion that multiple processes are executing concurrently. We will explore the layout of processes in memory and the various metadata elements regarding a process that are organized within a Process Control Block (PCB). The PCB plays a foundational role in how the OS context-switches between different processes.

Readings:
Ch {3} [SGG]
Ch {2} [AT]
Ch {2, 3} [RR]
Ch {2, 3} [AD]
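
To make the program/process distinction concrete, here is a small illustrative C sketch (assuming a POSIX system): fork() creates a second process running the same program, and each process gets its own address space and its own PCB entry in the OS.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();              /* duplicate the calling process */

        if (pid < 0) {                   /* fork failed */
            perror("fork");
            exit(EXIT_FAILURE);
        } else if (pid == 0) {           /* child: a new process with its own PCB */
            printf("child:  pid=%d ppid=%d\n", getpid(), getppid());
        } else {                         /* parent: fork() returned the child's pid */
            printf("parent: pid=%d child=%d\n", getpid(), pid);
            wait(NULL);                  /* reap the child so it does not linger as a zombie */
        }
        return 0;
    }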



HW2 08/30


  Objectives:
  1. Contrast programs and processes
  2. Explain the memory layout of processes
  3. Describe Process Control Blocks
  4. Explain the notion of Interrupts and Context Switches
  5. Describe process groups
 
08/29

08/31



Lecture 3


Lecture 4
   
Inter-Process Communications

A key role of the OS is to ensure that processes execute in isolation and have no ability to influence each other's execution. In this module, we will explore how the OS nevertheless allows processes to communicate with each other, and we will look at three different mechanisms for accomplishing this.

Readings:
Ch {3} [SGG]
Ch {2} [AT]
Ch {2, 3} [AD]
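
A brief illustrative sketch of one of the three mechanisms, pipes, on a POSIX system (error handling for fork() is omitted for brevity): the parent writes into one end of the pipe and the child reads from the other.

    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        int fd[2];                        /* fd[0]: read end, fd[1]: write end */
        char buf[64];

        if (pipe(fd) == -1) { perror("pipe"); return 1; }

        if (fork() == 0) {                /* child: reads from the pipe */
            close(fd[1]);                 /* close the unused write end */
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            close(fd[0]);
        } else {                          /* parent: writes into the pipe */
            close(fd[0]);                 /* close the unused read end */
            const char *msg = "hello over a pipe";
            write(fd[1], msg, strlen(msg));
            close(fd[1]);                 /* signals EOF to the reader */
            wait(NULL);
        }
        return 0;
    }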



HW3 09/06
  Objectives:
  1. Explain inter-process communications based on Shared Memory
  2. Explain inter-process communications based on Pipes
  3. Explain inter-process communications based on message passing
  4. Contrast inter-process communications based on shared memory, pipes, and message passing
  5. Design programs that implement inter-process communications

 
09/05

09/07


Lecture 5


Lecture 6
   
Threads

A thread can be thought of as a lightweight process. Threads exist within the confines of a single process. Why would we want a kind of process within a process? The key reasons are simplified data sharing and fast context switches. The sharing occurs at a scale and simplicity that would be very difficult to accomplish across processes.

Readings:
Ch {4} [SGG]
Ch {2} [AT]
Ch {12} [RR]
Ch {4} [AD]
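
A small illustrative sketch of the data sharing described above, using POSIX threads (compile with cc -pthread); the array contents are made up.

    #include <pthread.h>
    #include <stdio.h>

    static int results[2];                     /* shared by every thread in the process */

    static void *worker(void *arg) {
        int idx = *(int *)arg;                 /* each thread writes to its own slot: no race */
        results[idx] = (idx + 1) * 10;         /* write directly into process-wide memory */
        return NULL;
    }

    int main(void) {
        pthread_t tid[2];
        int ids[2] = { 0, 1 };

        /* Threads share the same address space (globals, heap), so no pipes or
         * shared-memory segments are needed to exchange data; each thread still
         * has its own stack and registers. */
        for (int i = 0; i < 2; i++)
            pthread_create(&tid[i], NULL, worker, &ids[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(tid[i], NULL);

        printf("results: %d %d\n", results[0], results[1]);   /* prints: 10 20 */
        return 0;
    }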



 

Objectives:

  1. Explain differences between processes and threads
  2. Compare multithreading models
  3. Contrast differences between user and kernel threads
  4. Relate dominant threading libraries: POSIX, Win32, and Java
  5. Design threaded programs that can synchronize their actions
 

09/12

09/14




Lecture 7

Lecture 8
   
Process Synchronization

When multiple processes cooperate with each other concurrently, they must synchronize their actions. A key consideration is the correctness and safety of the mechanisms we use: incorrect solutions can fail in subtle ways. We will look at several classical problems in synchronization to help you understand the core issues that arise during process synchronization.

Readings:
Ch {5} [SGG]
Ch {4} [AT]

HW4 09/20
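
A minimal illustrative sketch of a critical section protected by a POSIX mutex (compile with cc -pthread); the iteration count is arbitrary.

    #include <pthread.h>
    #include <stdio.h>

    #define ITERS 100000

    static long counter = 0;                          /* shared state */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *increment(void *arg) {
        (void)arg;
        for (int i = 0; i < ITERS; i++) {
            pthread_mutex_lock(&lock);                /* entry section */
            counter++;                                /* critical section */
            pthread_mutex_unlock(&lock);              /* exit section */
        }
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, increment, NULL);
        pthread_create(&t2, NULL, increment, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);

        /* With the mutex the total is always 2*ITERS; without it, interleaved
         * read-modify-write sequences lose updates and the total varies. */
        printf("counter = %ld (expected %d)\n", counter, 2 * ITERS);
        return 0;
    }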
 

Objectives:

  1. Formulate the critical section problem.
  2. Dissect a software solution to the critical section problem (case study: Peterson's solution)
  3. Explain Synchronization hardware and Instruction Set Architecture support for concurrency primitives.
  4. Assess classic problems in synchronization: bounded buffers, readers-writers, dining philosophers.
 

09/19

09/21

09/28


Lecture 9

Lecture 10

Lecture 11

   
Atomic Transactions

This module will cover issues relating to preserving the atomicity of transactions. We will explore issues that arise when multiple transactions need to execute concurrently while preserving safety properties.

Readings:
Ch {5} [SGG]
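
A rough illustrative sketch of a locking protocol (two-phase locking) using POSIX mutexes; the account structure and transfer routine are hypothetical, and real transactional systems layer logging and recovery on top of this.

    #include <pthread.h>
    #include <stdio.h>

    /* Hypothetical data items: two accounts, each protected by its own lock. */
    struct account { long balance; pthread_mutex_t lock; };

    static struct account a = { 100, PTHREAD_MUTEX_INITIALIZER };
    static struct account b = { 100, PTHREAD_MUTEX_INITIALIZER };

    /* A transfer "transaction" under two-phase locking: acquire every lock it
     * needs (growing phase), do all reads and writes, then release (shrinking
     * phase). Acquiring no new locks after the first release is what keeps the
     * schedule serializable. Locks are taken in a fixed order (a before b) so
     * that concurrent transfers cannot deadlock. */
    static void transfer_a_to_b(long amount) {
        pthread_mutex_lock(&a.lock);      /* growing phase */
        pthread_mutex_lock(&b.lock);

        a.balance -= amount;              /* the transaction's work */
        b.balance += amount;

        pthread_mutex_unlock(&b.lock);    /* shrinking phase: no new locks from here on */
        pthread_mutex_unlock(&a.lock);
    }

    int main(void) {
        transfer_a_to_b(25);
        printf("a = %ld, b = %ld\n", a.balance, b.balance);   /* prints: a = 75, b = 125 */
        return 0;
    }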
 

Objectives:

  1. Explain serializability of transactions
  2. Assess locking protocols
  3. Explain checkpointing and rollback recovery in transactional systems
 
09/26

Lecture 12
   

Midterm I (10/05): Covers all topics up to the lecture on Tuesday (10/03).


 
   
CPU Scheduling Algorithms

The CPU scheduler is responsible for ensuring that multiple processes can make forward progress. The scheduling algorithm must balance several competing objectives: latency, throughput, priority, and fairness. We will look at a range of scheduling algorithms designed to accomplish these objectives.

Readings:
Ch {6} [SGG]
Ch {7} [AD]
Ch {2} [AT]
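
A small illustrative sketch of how the policy affects one of these objectives, average waiting time: the same three CPU bursts (the familiar 24/3/3 example) are scheduled non-preemptively in FCFS order and in shortest-job-first order.

    #include <stdio.h>

    /* Average waiting time when jobs run non-preemptively in the given order. */
    static double avg_wait(const int burst[], int n) {
        int elapsed = 0, total_wait = 0;
        for (int i = 0; i < n; i++) {
            total_wait += elapsed;   /* this job waited for everything before it */
            elapsed += burst[i];
        }
        return (double)total_wait / n;
    }

    int main(void) {
        /* CPU bursts in ms, all jobs arriving at time 0. */
        int fcfs[] = { 24, 3, 3 };   /* arrival order */
        int sjf[]  = { 3, 3, 24 };   /* same jobs, shortest first */

        printf("FCFS average wait: %.2f ms\n", avg_wait(fcfs, 3));  /* (0+24+27)/3 = 17.00 */
        printf("SJF  average wait: %.2f ms\n", avg_wait(sjf, 3));   /* (0+3+6)/3   =  3.00 */
        return 0;
    }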



HW5 10/11
 

Objectives:

  1. Assess scheduling criteria including fairness and time quanta.
  2. Explain and contrast different approaches to scheduling: preemptive and non-preemptive
  3. Explain and assess scheduling algorithms: FCFS, shortest jobs, priority, round-robin, multilevel feedback queues, and the Linux completely fair scheduler.
  4. Understand how CPU scheduling algorithms function on multiprocessors.
 
10/03

10/10

10/12


Lecture 13


Lecture 15

Lecture 16
   
Deadlocks

A large number of processes compete for limited resources on a machine. Incorrect synchronization between these competing processes can lead to deadlocks. In this module, we will look at how to characterize deadlocks and the various mechanisms we can use to prevent them by negating the structural requirements necessary for deadlocks to occur.

Readings:
Ch {7} [SGG]
Ch {6} [AT]
Ch {4} [AD]
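
A minimal illustrative sketch of prevention by negating one structural requirement, circular wait (compile with cc -pthread): both threads acquire the two locks in the same global order, so a cycle of waiting threads can never form.

    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t res1 = PTHREAD_MUTEX_INITIALIZER;
    static pthread_mutex_t res2 = PTHREAD_MUTEX_INITIALIZER;

    /* Both threads acquire the resources in the SAME global order (res1, then
     * res2), so circular wait -- one of the four necessary conditions for
     * deadlock -- cannot arise. If thread_b instead locked res2 first and res1
     * second, the two threads could each hold one lock and wait forever for
     * the other. */
    static void *thread_a(void *arg) {
        (void)arg;
        pthread_mutex_lock(&res1);
        pthread_mutex_lock(&res2);
        puts("thread A holds both resources");
        pthread_mutex_unlock(&res2);
        pthread_mutex_unlock(&res1);
        return NULL;
    }

    static void *thread_b(void *arg) {
        (void)arg;
        pthread_mutex_lock(&res1);   /* same order as thread_a: deadlock-free */
        pthread_mutex_lock(&res2);
        puts("thread B holds both resources");
        pthread_mutex_unlock(&res2);
        pthread_mutex_unlock(&res1);
        return NULL;
    }

    int main(void) {
        pthread_t ta, tb;
        pthread_create(&ta, NULL, thread_a, NULL);
        pthread_create(&tb, NULL, thread_b, NULL);
        pthread_join(ta, NULL);
        pthread_join(tb, NULL);
        return 0;
    }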

HW-ExtraCredit 10/16
 

Objectives:

  1. Explain deadlock characterization
  2. Contrast and explain schemes for deadlock prevention
  3. Evaluate approaches to deadlock avoidance
  4. Understand recovery from deadlocks

 
10/17

10/19


Lecture 17

Lecture 18
   
Memory Management

Memory is a shared resource that must be effectively managed across the different processes that are executing concurrently. Given that Instruction Set Architectures (ISAs) operate on data in registers and memory, how memory is managed and shared across competing processes has implications for performance, including completion times and throughput.

Readings:
Ch {8} [SGG]
Ch {3} [AT]
Ch {9} [AD]
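
A small illustrative sketch of the address translation arithmetic behind paging; the page-table contents and the logical address are made up.

    #include <stdio.h>

    #define PAGE_SIZE 4096u                         /* 4 KiB pages */

    int main(void) {
        /* Hypothetical page table: page number -> frame number. */
        unsigned page_table[] = { 5, 9, 7, 3 };

        unsigned logical = 2 * PAGE_SIZE + 123;     /* an address on page 2, offset 123 */

        unsigned page     = logical / PAGE_SIZE;    /* high-order bits: which page */
        unsigned offset   = logical % PAGE_SIZE;    /* low-order bits: where within the page */
        unsigned frame    = page_table[page];       /* consult the page table */
        unsigned physical = frame * PAGE_SIZE + offset;

        printf("logical %u -> page %u, offset %u -> frame %u -> physical %u\n",
               logical, page, offset, frame, physical);
        return 0;
    }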
 

Objectives:

  1. Understand address binding and address spaces
  2. Explain contiguous memory allocation, including its advantages and disadvantages
  3. Analyze the key constructs underpinning paging systems, including hardware support, shared pages, and the structure of page tables
  4. Explain memory protection in paging environments
  5. Explain segmentation-based approaches to memory management alongside settings in which they are particularly applicable
 
10/24

10/26

10/31

11/02



Lecture 19

Lecture 20

Lecture 21

Lecture 22
   
Virtual Memory

Pure paging-based memory allocation schemes require processes to be entirely memory-resident. This is often infeasible and wasteful. In this module we will explore algorithms that facilitate effective allocation of memory while minimizing wasteful allocations. We also consider aspects of program behavior (such as the working set model) that reduce the total number of pages that need to be allocated to a process.

Readings:
Ch {9} [SGG]
Ch {3} [AT]
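
A small illustrative sketch of demand paging's bookkeeping: counting page faults for a made-up reference string under FIFO replacement with three frames.

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        /* Hypothetical page reference string. */
        int refs[] = { 7, 0, 1, 2, 0, 3, 0, 4, 2, 3 };
        int n = sizeof(refs) / sizeof(refs[0]);

        int frames[FRAMES] = { -1, -1, -1 };   /* -1 marks an empty frame */
        int next_victim = 0;                   /* FIFO: replace frames in arrival order */
        int faults = 0;

        for (int i = 0; i < n; i++) {
            int present = 0;
            for (int j = 0; j < FRAMES; j++)
                if (frames[j] == refs[i]) { present = 1; break; }

            if (!present) {                    /* page fault: bring the page in */
                frames[next_victim] = refs[i];
                next_victim = (next_victim + 1) % FRAMES;
                faults++;
            }
        }
        printf("%d references, %d page faults (FIFO, %d frames)\n", n, faults, FRAMES);
        return 0;
    }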
 

Objectives:

  1. Explain demand paging and page faults
  2. Contrast page replacement algorithms and explain Belady's anomaly
  3. Justify the rationale for stack algorithms
  4. Explain frame allocations
  5. Synthesize the concepts of thrashing and working sets
 
11/07

11/09

Lecture 23

Lecture 24

   
Virtualization

Virtualization creates the illusion of multiple (virtual) machines on the same physical hardware. It allows a single computer to host multiple virtual machines, each potentially running a different OS. As part of this module we will look at Type-1 and Type-2 hypervisors and techniques for effective virtualization.

Readings:
Ch {16} [SGG]
Ch {7} [AT]
Ch {10} [AD]
 

Objectives:

  1. Explain Virtual Machine Monitors (VMMs)
  2. Justify the Popek and Goldberg requirements for virtualization
  3. Explain how Virtualization works in the x86 architecture
  4. Compare Type-1 and Type-2 Hypervisors

 
11/14

11/16


Lecture 25

Lecture 26
   
Containers

Containers are a lightweight, performant alternative to virtual machines. Unlike virtual machines, containers do not each require their own full-blown OS; in fact, all containers on a single host share a single OS. In this module we will see how a container is ultimately just a group of processes; as such, a container can do anything that processes can do, albeit with restrictions enforced by the kernel.
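
A Linux-specific illustrative sketch of one of the enabling mechanisms listed in the objectives below, namespaces: clone() with CLONE_NEWUTS gives the child its own hostname namespace, so the hostname it sets is invisible outside. Running it requires root (or CAP_SYS_ADMIN).

    #define _GNU_SOURCE
    #include <sched.h>
    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define STACK_SIZE (1024 * 1024)

    /* Runs inside a new UTS namespace: the hostname change is private to it. */
    static int child_fn(void *arg) {
        (void)arg;
        char name[64];
        sethostname("container-demo", strlen("container-demo"));
        gethostname(name, sizeof(name));
        printf("inside  namespace: hostname = %s\n", name);
        return 0;
    }

    int main(void) {
        char *stack = malloc(STACK_SIZE);
        if (!stack) { perror("malloc"); return 1; }

        /* CLONE_NEWUTS: give the child its own hostname/domainname namespace.
         * clone() takes the TOP of the child's stack (stacks grow downward). */
        pid_t pid = clone(child_fn, stack + STACK_SIZE, CLONE_NEWUTS | SIGCHLD, NULL);
        if (pid == -1) { perror("clone (root privileges required)"); return 1; }
        waitpid(pid, NULL, 0);

        char name[64];
        gethostname(name, sizeof(name));
        printf("outside namespace: hostname = %s\n", name);   /* unchanged */
        free(stack);
        return 0;
    }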
  Objectives:
  1. Explain containers and contrast how they differ from virtualization
  2. Synthesize enabling concepts in containerization including cgroups, namespaces, and capabilities.
  3. Identify key elements that comprise container images
 
11/17


Lecture 26-B

   
File Systems

Data managed on a hard disk must be amenable to updates, discovery, and retrieval. The underlying storage system only deals with disk blocks. In this module we explore a foundational construct in file systems -- the file control block -- and how its design informs the efficiency of content retrieval. We will round out our discussion of file systems with a look at the Unix file system, the File Allocation Table (FAT), and the NT File System (NTFS).

Readings:
Ch {10, 11} [SGG]
Ch {5} [AT]
Ch {4} [RR]
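
A small illustrative sketch of the file control block idea: on Unix the per-file metadata lives in the inode, and stat() exposes a copy of it to user programs.

    #include <stdio.h>
    #include <sys/stat.h>
    #include <time.h>

    int main(int argc, char *argv[]) {
        const char *path = (argc > 1) ? argv[1] : "/etc/passwd";  /* any existing file */
        struct stat sb;

        if (stat(path, &sb) == -1) { perror("stat"); return 1; }

        /* A few fields of the on-disk file control block (the inode on Unix). */
        printf("file:        %s\n", path);
        printf("inode:       %llu\n", (unsigned long long)sb.st_ino);
        printf("size:        %lld bytes\n", (long long)sb.st_size);
        printf("links:       %lu\n", (unsigned long)sb.st_nlink);
        printf("512B blocks: %lld\n", (long long)sb.st_blocks);
        printf("modified:    %s", ctime(&sb.st_mtime));
        return 0;
    }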
 

Objectives:

  1. Summarize file system structure
  2. Contrast contiguous and indexed allocation
  3. Explain the Unix File System
  4. Explain and contrast Windows file systems: the File Allocation Table (FAT) and NTFS

 
11/28

11/30

Lecture 27

Lecture 28

   
Mass Storage

In this module, we will explore the technologies behind the two popular data storage devices: hard disk drives and solid-state drives. We will explore the key enablers of these systems and, in the case of solid-state drives, issues such as write amplification, wear leveling, and read-disturb errors.

Readings:
Ch {11} [SGG]
Ch {12} [AD]
 

Objectives:

  1. Explain data storage in hard disk drives
  2. Explain data storage in solid state drives
 
12/05


Lecture 29

   
Disk Scheduling Algorithms

In this module we will explore the rationale and need for disk scheduling algorithms, which inform disk-head movements as data are retrieved. We will review several metrics used to assess the performance of disk scheduling algorithms and analyze several of the key ones.

Readings:
Ch {12} [SGG]
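
A small illustrative sketch of the central metric, total head movement, using a sample request queue with the head starting at cylinder 53; the same queue is serviced in FCFS order and in shortest-seek-time-first (SSTF) order.

    #include <stdio.h>
    #include <stdlib.h>

    #define NREQ 8

    /* Total cylinders traversed when requests are serviced in the given order. */
    static int head_movement(const int order[], int n, int start) {
        int pos = start, total = 0;
        for (int i = 0; i < n; i++) {
            total += abs(order[i] - pos);
            pos = order[i];
        }
        return total;
    }

    int main(void) {
        int fcfs[NREQ] = { 98, 183, 37, 122, 14, 124, 65, 67 };   /* arrival order */
        int head = 53;

        /* SSTF: repeatedly service the pending request closest to the head. */
        int pending[NREQ], sstf[NREQ];
        for (int i = 0; i < NREQ; i++) pending[i] = fcfs[i];
        int pos = head;
        for (int i = 0; i < NREQ; i++) {
            int best = -1;
            for (int j = 0; j < NREQ; j++)
                if (pending[j] >= 0 &&
                    (best < 0 || abs(pending[j] - pos) < abs(pending[best] - pos)))
                    best = j;
            sstf[i] = pending[best];
            pos = pending[best];
            pending[best] = -1;                                   /* mark as serviced */
        }

        printf("FCFS total head movement: %d cylinders\n", head_movement(fcfs, NREQ, head));  /* 640 */
        printf("SSTF total head movement: %d cylinders\n", head_movement(sstf, NREQ, head));  /* 236 */
        return 0;
    }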
 

Objectives:

  1. Explain the rationale and need for disk scheduling
  2. Profile and interpret the performance of disk scheduling algorithms
  3. Contrast the performance characteristics of diverse disk scheduling algorithms: FCFS, SSTF, SCAN, C-SCAN, and LOOK
 
12/07

Lecture 30

   

Comprehensive Final Exam in Behavioral Sciences 131
Thursday, December 14th, 2:00-4:00 pm


 
   


 



Department of Computer Science, Colorado State University,
Fort Collins, CO 80523 USA
© 2023 Colorado State University