Operating Systems Notes (2024/11/11)
Execution flow
Program counter, registers, stack
Code and data are managed through process management
Process = thread + resources not managed by the thread
Multiple threads can exist in one process
Code and data are shared between threads
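A minimal sketch (not from the notes) illustrating this layout with POSIX threads: the global variable lives in the shared data segment, so both threads print the same address for it, while each thread's local variable lives on that thread's own stack and has a different address.

```c
#include <pthread.h>
#include <stdio.h>

int shared = 0;                     /* data segment: shared by all threads */

static void *worker(void *arg) {
    int id = (int)(long)arg;        /* stack variable: private to this thread */
    printf("thread %d: &shared=%p  &id=%p\n",
           id, (void *)&shared, (void *)&id);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, (void *)1L);
    pthread_create(&t2, NULL, worker, (void *)2L);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* &shared is identical in both threads; &id differs */
    return 0;
}
```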
Examples of Utilizing Threads
Server program
Create a thread for each request and delegate the processing and the response to it (see the sketch below)
Each thread handles its request, waiting for events as needed
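A hedged sketch of the per-request pattern: the accept loop creates a detached thread for each connection and hands the work to it. handle_client is a hypothetical handler and error handling is omitted.

```c
#include <pthread.h>
#include <sys/socket.h>
#include <unistd.h>

/* hypothetical handler: reads the request, processes it, sends back the result */
static void *handle_client(void *arg) {
    int fd = (int)(long)arg;
    /* ... read request, compute, write response ... */
    close(fd);
    return NULL;
}

/* accept loop: one detached thread per request (error handling omitted) */
void serve(int listen_fd) {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    for (;;) {
        int fd = accept(listen_fd, NULL, NULL);
        pthread_t th;
        pthread_create(&th, &attr, handle_client, (void *)(long)fd);
    }
}
```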
Divide CPU and I/O processing
Divide process P into a thread performing calculation (Th1) and a thread performing I/O (Th2)
While Th2 is in an I/O waiting state, Th1 continues to run
Parallel computing
e.g., matrix multiplication
Each thread calculates one row of the result (see the sketch below)
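A minimal sketch of the row-per-thread scheme, assuming small fixed-size global matrices A, B, and C (names and size chosen here for illustration). Each thread writes only its own row of C, so no locking is needed.

```c
#include <pthread.h>

#define N 4
static double A[N][N], B[N][N], C[N][N];

/* each thread computes one row of C = A * B */
static void *row_worker(void *arg) {
    int i = (int)(long)arg;
    for (int j = 0; j < N; j++) {
        C[i][j] = 0.0;
        for (int k = 0; k < N; k++)
            C[i][j] += A[i][k] * B[k][j];
    }
    return NULL;
}

void multiply(void) {
    pthread_t th[N];
    for (int i = 0; i < N; i++)
        pthread_create(&th[i], NULL, row_worker, (void *)(long)i);
    for (int i = 0; i < N; i++)   /* wait for all rows to finish */
        pthread_join(th[i], NULL);
}
```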
Relation Between Threads and Processes
Thread context is small
Switching is fast
Special care is required when accessing shared data (see the mutex sketch after this list)
From the viewpoint of the program
A function in the program runs as a separate flow of execution
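A minimal sketch of the "special care" point using a pthread mutex (the counter and iteration count are illustrative): without the lock, concurrent increments of the shared counter would be lost.

```c
#include <pthread.h>

static long counter = 0;                                  /* shared data */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);    /* without the lock, updates are lost */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* counter is reliably 200000 only because of the mutex */
    return 0;
}
```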
Example Thread Program
pthread_create(&pth, NULL, foo, arg)
Creates a thread that executes foo(arg)
Stores the thread ID into pth
Returns 0 on successful thread creation
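A complete, runnable version of the call described above; foo and arg are the placeholders from the notes, filled in here with a trivial function and a string argument.

```c
#include <pthread.h>
#include <stdio.h>

static void *foo(void *arg) {
    printf("thread %lu got: %s\n",
           (unsigned long)pthread_self(), (const char *)arg);
    return NULL;
}

int main(void) {
    pthread_t pth;
    char *arg = "hello";
    int rc = pthread_create(&pth, NULL, foo, arg);
    if (rc != 0) {               /* returns 0 on success, an error number otherwise */
        fprintf(stderr, "pthread_create failed: %d\n", rc);
        return 1;
    }
    pthread_join(pth, NULL);     /* wait for foo(arg) to finish */
    return 0;
}
```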
Kernel-level threads
A thread is managed in the kernel using the same mechanism as a process
A thread is a scheduling target of the kernel scheduler
Thread switching is lighter than process switching because the memory space is not switched
User-level threads
Implemented by a library running in user mode, e.g., with setjmp()/longjmp()
setjmp()/longjmp() store and restore the frame (context) information
Each thread has its own start frame
Lighter than kernel-level threads
Fast context switching
Able to create a huge number of threads
Easy to change the scheduling policy
Kernel is not in charge of scheduling
When a thread enters a waiting state for some reason, such as an I/O, the other threads in the same process cannot continue to run (I/O blocking)
Threads are not executed in parallel even on a multi-core CPU
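The notes cite setjmp()/longjmp(); the sketch below instead uses the related POSIX ucontext calls (getcontext/makecontext/swapcontext), since they allow giving the user-level thread its own stack in a self-contained way. All switching is done in user mode; the kernel scheduler is not involved.

```c
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, th_ctx;   /* saved contexts (frame information) */
static char th_stack[16384];          /* the user-level thread's own stack  */

static void th_body(void) {
    printf("user-level thread: step 1\n");
    swapcontext(&th_ctx, &main_ctx);  /* yield back to main in user mode */
    printf("user-level thread: step 2\n");
}                                     /* returning resumes main via uc_link */

int main(void) {
    getcontext(&th_ctx);
    th_ctx.uc_stack.ss_sp   = th_stack;
    th_ctx.uc_stack.ss_size = sizeof th_stack;
    th_ctx.uc_link          = &main_ctx;
    makecontext(&th_ctx, th_body, 0);

    swapcontext(&main_ctx, &th_ctx);  /* run the thread until it yields */
    printf("main: thread yielded\n");
    swapcontext(&main_ctx, &th_ctx);  /* resume the thread until it finishes */
    printf("main: thread finished\n");
    return 0;
}
```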
Multi-thread Models
Many-to-one model
One kernel thread handles multiple user threads
Threads are scheduled by a library
Only one thread at a time can access the kernel (a blocking call blocks all threads)
One-to-one model
Each user thread is run by its own kernel thread
An I/O block does not affect any other threads
Works with multiple processors
Overhead to create and manage kernel threads
Many-to-many model
Multiple user threads are multiplexed onto the same number of, or fewer, kernel threads
The number of kernel threads is determined by the application or the machine
Requires controlling the assignment of user threads to kernel threads
Two-level model
A variation of the many-to-many model
Some user-level threads are bound to a dedicated kernel thread
Blocking System Calls
User-level threads
→ The corresponding PCB enters a wait queue and the entire process moves into a waiting state → The other threads stop as well
Kernel-level threads
→ Only the corresponding TCB enters a wait queue → The other threads remain ready and can continue to run
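A hedged demonstration of the kernel-level case, assuming a one-to-one pthread implementation (as on Linux): one thread blocks in read() on an empty pipe while the other keeps running and later wakes it up.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static int pipefd[2];

static void *blocker(void *arg) {
    char buf[1];
    (void)arg;
    read(pipefd[0], buf, 1);        /* blocking system call: only this      */
    printf("blocker: woke up\n");   /* thread's TCB enters the wait queue   */
    return NULL;
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 3; i++) {   /* keeps running while blocker waits */
        printf("worker: still running (%d)\n", i);
        sleep(1);
    }
    write(pipefd[1], "x", 1);       /* unblock the other thread */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pipe(pipefd);
    pthread_create(&t1, NULL, blocker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}
```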