CS370: Operating Systems



Schedule


Last updated on January 21, 2024
Professor Lecture Coordinates
 

Louis-Noel Pouchet

Office Hours: Fridays noon-2pm MT (Zoom) and Mondays 1pm-2pm MT (CSB346)

E-mail: compsci_cs370@colostate.edu

 


Stadium 1205
Mondays and Wednesdays 4:00-5:15 pm

All e-mails should be addressed to:
compsci_cs370@colostate.edu

Graduate Teaching Assistants
Temitope Adekunle

Undergraduate Teaching Assistants
Nate Bennick, Samuel White, and Kedrick Kinsella


Key to Notation
Readings will be from Operating System Concepts by Silberschatz, Galvin, and Gagne, 10th Edition. John Wiley & Sons, Inc. ISBN-13: 978-1119800361. [SGG]
Additional Useful References
(1)
Andrew S Tanenbaum and Herbert Bos. Modern Operating Systems. 4th Edition, 2014. Prentice Hall.
ISBN: 013359162X/978-0133591620. [AT]
(2) Thomas Anderson and Michael Dahlin. Operating Systems: Principles and Practice, 2nd Edition.
Recursive Books. ISBN: 0985673524/978-0985673529. [AD]
(3) Kay Robbins and Steve Robbins. Unix Systems Programming, 2nd Edition. Prentice Hall.
ISBN-13: 978-0-13-042411-2. [RR]
(4) Brian W. Kernighan and Dennis M. Ritchie. The C Programming Language, 2nd Edition.
Prentice Hall. ISBN: 0131103628/978-0131103627
(5) Doug Lea. Concurrent Programming in Java(TM): Design Principles and Patterns, 2nd Edition.
Prentice Hall. ISBN: 0201310090/978-0201310092.



IMPORTANT NOTE: Below is the schedule for CS370 in Spring'24. Since we follow exactly the organization of Prof. Pallickara's class from Fall'23, we provide for your reference the slides for each lecture from that previous semester of CS370. Slide content will receive minor adjustments during the semester, and the Spring'24 slides will typically be posted the day of the lecture. Students can safely peek at future lectures from the prior edition of CS370; only minor adjustments are expected. Note that the dates listed for assignments are the dates they officially become available; due dates are typically 3+ weeks after the assignment date.

   
Introduction References and HW
This module provides an overview of the course, grading criteria, and a brief introduction to high-level operating systems concepts. We will explore the differences between kernel mode and user mode and why they exist. Ch {1,2} [SGG]
Ch {1} [RR]
Ch {1} [AT]

Ch {1} [AD]

HW1 1/22/24

Term-Project
(TP) 1/22/24



 

Objectives:

  1. Summarize basic operating systems concepts
  2. Highlight key developments in the history of operating systems
 
1/17/24

1/22/24


Lecture 1 (Spring'24)
Lecture 1 (Fall'23, for reference only: taught on 8/22/23)

Lecture 2 (Spring'24) will be available on 1/22/24
Lecture 2 (Fall'23, for reference only: taught on 8/24/23)
   
Processes Readings
Processes are a foundational construct in organizing the computations performed by a program. This module will contrast the differences between programs and processes. A key idea covered in this module is the notion of multiprogramming, which can be used to give the illusion that multiple processes are executing concurrently. We will explore the layout of processes in memory and the various metadata elements regarding a process that are organized within a Process Control Block (PCB). The PCB plays a foundational role in how the OS context-switches between different processes. Ch {3} [SGG]
Ch {2} [AT]
Ch {2, 3} [RR]
Ch {2, 3} [AD]



HW2 01/29/24


  Objectives:
  1. Contrast programs and processes
  2. Explain the memory layout of processes
  3. Describe Process Control Blocks
  4. Explain the notion of Interrupts and Context Switches
  5. Describe process groups
 
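To make the program/process distinction concrete, here is a minimal C sketch (not part of the course materials; the helper name `fork_and_wait` is ours). It shows that a forked child runs in a separate address space with its own PID and PCB, so a write made by the child is invisible to the parent:

```c
/* Sketch: fork() duplicates the calling process; the child gets its own
   PID, PCB, and a (copy-on-write) copy of the parent's address space. */
#include <sys/wait.h>
#include <unistd.h>

/* Forks a child that modifies its copy of *x and exits with `code`.
   The parent waits, then confirms its own *x was untouched (the address
   spaces are separate) and returns the child's exit status. */
int fork_and_wait(int *x, int code) {
    int before = *x;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {          /* child: separate address space */
        *x = 999;            /* invisible to the parent */
        _exit(code);
    }
    int status = 0;
    waitpid(pid, &status, 0);      /* parent blocks until the child's PCB
                                      records termination */
    if (*x != before) return -2;   /* cannot happen: no sharing */
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

The exit status travels back to the parent through the child's PCB, which the kernel retains until the parent calls waitpid().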
01/24/24

01/29/24



Lecture 3 (Spring'24) will be available on 1/24/24

Lecture 3 (Fall'23, for reference only: taught on 8/29/23)


Lecture 4 (Spring'24) will be available on 1/29/24
Lecture 4 (Fall'23, for reference only: taught on 8/31/23)
   
Inter-Process Communications Readings
A key role of the OS is to ensure that processes execute in isolation and cannot influence each other's execution. In this module, we will explore how the OS nonetheless allows processes to communicate with each other, and we will look at three different mechanisms to accomplish this.

Ch {3} [SGG]
Ch {2} [AT]
Ch {2, 3} [AD]



HW3
02/05/24
  Objectives:
  1. Explain inter-process communications based on shared memory
  2. Explain inter-process communications based on pipes
  3. Explain inter-process communications based on message passing
  4. Contrast inter-process communications based on shared memory, pipes, and message passing
  5. Design programs that implement inter-process communications

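As an illustration of one of the three mechanisms, here is a short C sketch of pipe-based IPC (not course material; the helper name `pipe_roundtrip` is ours). A child writes a message into a kernel-managed pipe and the parent reads it back, even though the two processes share no memory:

```c
/* Sketch: a pipe is a kernel buffer with a read end fd[0] and a write
   end fd[1]; data written by one process can be read by another. */
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Sends `msg` from a forked child to the parent through a pipe, storing
   what the parent receives in `buf`; returns the number of bytes read.
   (For a short message, a single read() suffices once the writer has
   closed its end.) */
ssize_t pipe_roundtrip(const char *msg, char *buf, size_t bufsize) {
    int fd[2];
    if (pipe(fd) < 0) return -1;
    pid_t pid = fork();
    if (pid == 0) {                /* child: writer */
        close(fd[0]);              /* close the unused read end */
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        _exit(0);
    }
    close(fd[1]);                  /* parent: reader */
    ssize_t n = read(fd[0], buf, bufsize - 1);
    if (n >= 0) buf[n] = '\0';
    close(fd[0]);
    wait(NULL);
    return n;
}
```

Closing the unused ends matters: the reader sees end-of-file only once every write-end descriptor is closed.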
 
01/31/24

02/05/24


Lecture 5 (Spring'24) will be available on 1/31/24

Lecture 5 (Fall'23, for reference only: taught on 9/05/23)


Lecture 6 (Spring'24) will be available on 2/5/24
Lecture 6 (Fall'23, for reference only: taught on 9/07/23)
   
Threads  
A thread can be thought of as a lightweight process. Threads exist within the confines of a single process. Why would we want a kind of process within a process? The key reasons are simplified data sharing and fast context switches. The sharing occurs at a scale and simplicity that would be very difficult to accomplish across processes.

Ch {4} [SGG]
Ch {2} [AT]
Ch {12} [RR]
Ch {4} [AD]



 

Objectives:

  1. Explain differences between processes and threads
  2. Compare multithreading models
  3. Contrast differences between user and kernel threads
  4. Relate dominant threading libraries: POSIX, Win32, and Java
  5. Design threaded programs that can synchronize their actions
 
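The contrast with processes can be seen in a few lines of POSIX threads code (a sketch, not course material; the names `worker` and `set_via_thread` are ours). Unlike a forked child, a thread shares its process's address space, so a write made by the thread is immediately visible to the thread that created it:

```c
/* Sketch: threads of one process share a single address space, which is
   what makes data sharing between them trivial. */
#include <pthread.h>

static int shared_value = 0;   /* lives in the one shared address space */

static void *worker(void *arg) {
    shared_value = *(int *)arg;   /* visible to every thread in the process */
    return NULL;
}

/* Spawns a thread to set the shared variable, joins it, and returns
   what the calling thread now observes. */
int set_via_thread(int v) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, worker, &v) != 0) return -1;
    pthread_join(tid, NULL);   /* join a thread: far cheaper than wait()
                                  on a whole process */
    return shared_value;
}
```

Compare this with the fork-based example in the Processes module, where the child's write to its copy of the variable never reaches the parent.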

2/7/24

2/12/24




Lecture 7 (Spring'24)
Lecture 7 (Fall'23, for reference only: taught on 9/12/23)

Lecture 8 (Spring'24)
Lecture 8 (Fall'23, for reference only: taught on 9/14/23)
   
Process Synchronization Ch {5} [SGG]
Ch {4} [AT]


HW4 02/19/24
When multiple processes cooperate with each other concurrently, they must synchronize their actions. A key consideration is the correctness and safety of the mechanisms we use: incorrect solutions can fail in subtle, hard-to-reproduce ways. We will look at several classical problems in synchronization to help you understand the core issues that arise during process synchronization.
 

Objectives:

  1. Formulate the critical section problem.
  2. Dissect a software solution to the critical section problem (case study: Peterson's solution)
  3. Explain Synchronization hardware and Instruction Set Architecture support for concurrency primitives.
  4. Assess classic problems in synchronization: bounded buffers, readers-writers, dining philosophers.
 
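The canonical critical-section example is a shared counter incremented by several threads. Without mutual exclusion the increments race and updates are lost; with a lock around the critical section the total is always exact. A C sketch using a POSIX mutex (illustrative only; the function names are ours):

```c
/* Sketch of the critical-section structure: entry section (lock),
   critical section (counter++), exit section (unlock). */
#include <pthread.h>

#define NTHREADS 4
#define NITERS   100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        pthread_mutex_lock(&lock);     /* entry section: mutual exclusion */
        counter++;                     /* critical section */
        pthread_mutex_unlock(&lock);   /* exit section */
    }
    return NULL;
}

/* Runs NTHREADS incrementing threads to completion and returns the
   final count; with the lock held it is always NTHREADS * NITERS. */
long run_counter_demo(void) {
    counter = 0;
    pthread_t t[NTHREADS];
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, increment, NULL);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return counter;
}
```

Deleting the lock/unlock pair makes `counter++` a non-atomic read-modify-write, and the final count becomes unpredictable, which is precisely the correctness hazard this module examines.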

02/14/24

02/19/24

02/21/24


Lecture 9 (Spring'24)
Lecture 9 (Fall'23, for reference only: taught on 9/19/23)

Lecture 10 (Spring'24)
Lecture 10 (Fall'23, for reference only: taught on 9/21/23)

Lecture 11 (Spring'24)
Lecture 11 (Fall'23, for reference only: taught on 9/26/23)

   
Atomic Transactions  
This module will cover issues relating to preserving the atomicity of transactions. We will explore issues that arise when a multiplicity of transactions need to execute concurrently while preserving safety properties. Ch {5} [SGG]
 

Objectives:

  1. Explain serializability of transactions
  2. Assess locking protocols
  3. Explain checkpointing and rollback recovery in transactional systems
 
02/26/24

Lecture 12 (Spring'24)
Lecture 12 (Fall'23, for reference only: taught on 9/28/23)
   

Mid Term I (03/04/24): Covers all topics up to and including Lecture 13.


 
   
CPU Scheduling Algorithms  
The CPU scheduler is responsible for ensuring that multiple processes can make forward progress. The scheduling algorithm must balance several competing objectives: latency, throughput, priority, and fairness. We will look at a slew of scheduling algorithms designed to accomplish these objectives. Ch {6} [SGG]
Ch {7} [AD]
Ch {2} [AT]



HW5 03/04/24
 

Objectives:

  1. Assess scheduling criteria including fairness and time quanta.
  2. Explain and contrast different approaches to scheduling: preemptive and non-preemptive
  3. Explain and assess scheduling algorithms: FCFS, shortest jobs, priority, round-robin, multilevel feedback queues, and the Linux completely fair scheduler.
  4. Understand how CPU scheduling algorithms function on multiprocessors.
 
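One scheduling criterion, average waiting time, is easy to compute by hand for non-preemptive FCFS. A small C sketch (illustrative burst times, not from an assignment): with bursts {24, 3, 3} served in arrival order, the waits are 0, 24, and 27, averaging 17; serving the shortest bursts first drops the average to 3, which is the intuition behind SJF.

```c
/* Sketch: average waiting time under FCFS, with CPU bursts served in
   array order. Process i waits for the sum of all earlier bursts. */
double fcfs_avg_wait(const int burst[], int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* process i's waiting time */
        elapsed += burst[i];     /* it then runs to completion */
    }
    return (double)total_wait / n;
}
```

Passing the same bursts sorted ascending models SJF, making the FCFS "convoy effect" of one long burst delaying many short ones directly measurable.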
02/28/24

03/06/24

03/18/24


Lecture 13 (Spring'24)

Lecture 13 (Fall'23, for reference only: taught on 10/03/23)


Lecture 14 (Spring'24)
Lecture 15 (Fall'23, for reference only: taught on 10/10/23)

Lecture 16 (Spring'24)
Lecture 16 (Fall'23, for reference only: taught on 10/12/23)
   
Deadlocks  
A large number of processes compete for limited resources on the machine. Incorrect synchronization between these competing processes leads to deadlocks. In this module, we will look at how to characterize deadlocks and the various mechanisms we can use to prevent them by negating the structural requirements necessary for deadlocks. Ch {7} [SGG]
Ch {6} [AT]
Ch {4} [AD]

HW-ExtraCredit 03/20/24
 

Objectives:

  1. Explain deadlock characterization
  2. Contrast and explain schemes for deadlock prevention
  3. Evaluate approaches to deadlock avoidance
  4. Understand recovery from deadlocks

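One structural requirement for deadlock is circular wait, and a standard prevention technique is to impose a global acquisition order on locks. A C sketch of that idea (our own helper names; ordering mutexes by address is a common convention, not the only one):

```c
/* Sketch of deadlock prevention by lock ordering: if every thread
   acquires any pair of mutexes in the same global order (here, by
   address), a cycle in the wait-for graph can never form. */
#include <pthread.h>

void lock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    if (a < b) { pthread_mutex_lock(a); pthread_mutex_lock(b); }
    else       { pthread_mutex_lock(b); pthread_mutex_lock(a); }
}

void unlock_pair(pthread_mutex_t *a, pthread_mutex_t *b) {
    pthread_mutex_unlock(a);
    pthread_mutex_unlock(b);
}
```

Without the ordering, two threads calling lock(A); lock(B) and lock(B); lock(A) can each grab one mutex and block forever on the other, satisfying all four deadlock conditions at once.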
 
03/20/24

03/25/24


Lecture 17 (Spring'24)
Lecture 17 (Fall'23, for reference only: taught on 10/17/23)

Lecture 18 (Spring'24)
Lecture 18 (Fall'23, for reference only: taught on 10/19/23)
   
Memory Management  
Memory is a shared resource that must be effectively managed across the different processes that execute concurrently. Given that Instruction Set Architectures (ISAs) operate on data in registers and memory, how memory is managed and shared across competing processes has implications for performance, including completion times and throughput. Ch {8} [SGG]
Ch {3} [AT]
Ch {9} [AD]
 

Objectives:

  1. Understand address binding and address spaces
  2. Explain contiguous memory allocations: including their advantages and disadvantages.
  3. Analyze the key constructs underpinning paging systems including hardware support, shared pages, and structure of page tables.
  4. Explain memory protection in paging environments
  5. Explain segmentation based approaches to memory management alongside settings in which they are particularly applicable.
 
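The core mechanics of paging fit in a few lines: split a virtual address into a page number and an offset, look the page up in a page table, and glue the offset onto the resulting frame. A toy C sketch (assuming 4 KiB pages and a flat array as the page table; real hardware uses multi-level tables and a TLB):

```c
/* Sketch: virtual-to-physical translation with single-level paging.
   page number = vaddr / PAGE_SIZE, offset = vaddr % PAGE_SIZE. */
#include <stdint.h>

#define PAGE_SIZE 4096u   /* assume 4 KiB pages: a 12-bit offset */

/* frame_of[] maps page numbers to frame numbers; npages is its length.
   Returns UINT32_MAX for an out-of-range page (where the MMU would
   raise a fault instead). */
uint32_t translate(uint32_t vaddr, const uint32_t frame_of[],
                   uint32_t npages) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    if (page >= npages) return UINT32_MAX;
    return frame_of[page] * PAGE_SIZE + offset;
}
```

Because PAGE_SIZE is a power of two, the division and modulo are just a shift and a mask, which is what makes hardware translation cheap.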
03/27/24

04/01/24

04/03/24

04/08/24



Lecture 19 (Spring'24)
Lecture 19 (Fall'23, for reference only: taught on 10/24/23)

Lecture 20 (Spring'24)
Lecture 20 (Fall'23, for reference only: taught on 10/26/23)

Lecture 21 (Spring'24)
Lecture 21 (Fall'23, for reference only: taught on 10/31/23)

Lecture 22 (Spring'24)
Lecture 22 (Fall'23, for reference only: taught on 11/02/23)
   
Virtual Memory  
Pure paging-based memory allocation schemes require processes to be entirely memory-resident. This is often infeasible and wasteful. In this module we will explore algorithms that facilitate effective allocation of memory while minimizing wasteful allocations. We consider aspects of program behavior (such as the working set model) that reduce the total number of pages that need to be allocated to a process. Ch {9} [SGG]
Ch {3} [AT]
 

Objectives:

  1. Explain demand paging and page faults
  2. Contrast page replacement algorithms and explain Belady's anomaly
  3. Justify the rationale for stack algorithms
  4. Explain frame allocations
  5. Synthesize the concepts of thrashing and working sets
 
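Belady's anomaly, mentioned in the objectives, can be reproduced with a few lines of C: under FIFO replacement, the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 incurs 9 faults with 3 frames but 10 faults with 4 frames. A toy fault counter (our own sketch, capped at 16 frames for simplicity):

```c
/* Sketch: count page faults for a reference string under FIFO
   replacement with nframes frames (assumes nframes <= 16). */
int fifo_faults(const int refs[], int n, int nframes) {
    int frames[16];
    int used = 0, next = 0, faults = 0;
    for (int i = 0; i < n; i++) {
        int hit = 0;
        for (int j = 0; j < used; j++)
            if (frames[j] == refs[i]) { hit = 1; break; }
        if (hit) continue;               /* resident: no fault, and FIFO
                                            order is unchanged by a hit */
        faults++;
        if (used < nframes)
            frames[used++] = refs[i];    /* free frame available */
        else {
            frames[next] = refs[i];      /* evict the oldest page */
            next = (next + 1) % nframes;
        }
    }
    return faults;
}
```

Stack algorithms such as LRU cannot exhibit this anomaly, which is the rationale the objectives ask you to justify.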
04/10/24

04/15/24

Lecture 23 (Spring'24)
Lecture 23 (Fall'23, for reference only: taught on 11/07/23)

Lecture 24 (Spring'24)
Lecture 24 (Fall'23, for reference only: taught on 11/09/23)

   
Virtualization  
Virtualization creates the illusion of multiple (virtual) machines on the same physical hardware. It allows a single computer to host multiple virtual machines, each potentially running a different OS. As part of this module we will look at Type-1 and Type-2 hypervisors and techniques for effective virtualization.
Ch {7} [AT]
Ch {16} [SGG]
Ch {10} [AD]
 

Objectives:

  1. Explain Virtual Machine Monitors (VMMs)
  2. Justify the Popek and Goldberg requirements for virtualization
  3. Explain how Virtualization works in the x86 architecture
  4. Compare Type-1 and Type-2 Hypervisors

 
04/17/24

04/22/24


Lecture 25 (Spring'24)
Lecture 25 (Fall'23, for reference only: taught on 11/14/23)

Lecture 26 (Spring'24)
Lecture 26 (Fall'23, for reference only: taught on 11/16/23)
   
   
File Systems  
Data managed on a hard disk must be amenable to updates, discovery, and retrieval. The underlying storage system only deals with disk blocks. In this module we explore a foundational construct in file systems -- the file control block. We will explore how the design of the file control block informs the efficiency of content retrieval. We will round out our discussion of file systems with a look at the Unix file system, the File Allocation Table, and the NT File System.
Ch {5} [AT]
Ch {4} [RR]
Ch {10, 11} [SGG]
 

Objectives:

  1. Summarize file system structure
  2. Contrast contiguous allocation vs indexed allocations
  3. Explain the Unix File System
  4. Explain and contrast Windows File Systems: the File Allocation table and NTFS

 
04/24

04/29

Lecture 27 (Spring'24)
Lecture 27 (Fall'23, for reference only: taught on 11/28/23)

Lecture 28 (Spring'24)
Lecture 28 (Fall'23, for reference only: taught on 11/30/23)

   
Summary lecture for CS370
 

Objectives:

  1. Review concepts evaluated in the comprehensive final exam
 
05/01


Lecture 29 (Spring'24)

   
   

Comprehensive Final Exam: online via Canvas+Respondus, like the midterm. Official slot scheduled by CSU: Monday, May 6th, 2024 at 11:50am MT, for 2 hours.

 
   


 



Department of Computer Science, Colorado State University,
Fort Collins, CO 80523 USA
© 2023 Colorado State University