[OS] Threads 1
Threads Part 1
Motivation for threads
- Problem
- A single process cannot benefit from multiple cores
- Very cumbersome to write a program with many cooperating processes
- Expensive to create a new process
- High communication overheads between processes
- Expensive context switching between processes
- What if we separate a process’s execution state from the rest of the process?
- Process: address space, resources, other general process attributes (e.g. user ID, privileges, …)
- Execution state: PC, SP, registers, etc…
- Such an execution state is usually called a thread
Thread
- A thread executes a sequence of instructions in a program
- Single-threaded process
- has one program counter specifying the location of the next instruction to execute
- process executes instructions sequentially, one at a time, until completion
- Multi-threaded process
- has one program counter per thread
- Threads share an address space (demonstrated in the sketch after this list)
- Code
- Global variables
- Heap
- Opened files
- A thread has its own
- Set of registers including PC & SP
- Stack
- Thread ID
- TCB (Thread Control Block): the unit of a context switch, smaller and cheaper than a PCB
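The sketch below is a minimal illustration (my own example, not from the notes), assuming gcc or clang with the Pthreads library: both threads update the same global variable because they share the address space, while each keeps its own copy of local on its own stack.

```c
#include <pthread.h>
#include <stdio.h>

int shared_counter = 0;               /* globals and heap are visible to every thread */

void *worker(void *arg) {
    int local = *(int *)arg;          /* 'local' lives on this thread's own stack */
    for (int i = 0; i < 1000; i++)
        __sync_fetch_and_add(&shared_counter, local);   /* atomic update of shared data */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    int a = 1, b = 2;
    pthread_create(&t1, NULL, worker, &a);
    pthread_create(&t2, NULL, worker, &b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared_counter = %d\n", shared_counter);    /* prints 3000 */
    return 0;
}
```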
Address space with Threads
Process vs Thread
- A process can have multiple threads
- A thread is bound to a single process
- Processes are containers in which threads execute: PID, address space, user and group IDs, open file descriptors, current working directory, etc…
- All threads see the same address space
- Threads are the unit of scheduling
Multithreaded Server Architecture
Benefits of Multi-threading
- Creating concurrency is cheap
- Allows full utilization of multi-core architectures
- Resource sharing becomes simple (threads share the address space)
- Improves program structure (can divide large task across several cooperative threads)
- Throughput (by overlapping computation with I/O operations, as sketched below)
- Responsiveness (can handle concurrent events)
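The following sketch is an illustrative example of my own, assuming the Pthreads library; the sleep call simply stands in for a blocking I/O operation. One thread waits on "I/O" while the main thread keeps computing, so the two overlap instead of running back-to-back.

```c
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

void *io_task(void *arg) {
    (void)arg;
    sleep(1);                          /* stands in for a blocking read or write */
    printf("I/O finished\n");
    return NULL;
}

int main(void) {
    pthread_t io;
    pthread_create(&io, NULL, io_task, NULL);

    long sum = 0;                      /* CPU-bound work runs while the I/O thread blocks */
    for (long i = 0; i < 100000000L; i++)
        sum += i;

    pthread_join(io, NULL);
    printf("sum = %ld\n", sum);
    return 0;
}
```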
Concurrency vs Parallelism
Types of Parallelism
- Data parallelism
- Distributes subsets of the same data across multiple cores, same operation on each (see the sketch after this list)
- Task parallelism
- Distributing threads across cores, each thread performing a unique operation
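Here is a minimal data-parallel sketch (my own example, not from the notes), assuming the Pthreads library: each thread runs the same summing operation on a different half of the array. Task parallelism would instead give each thread a different operation, for example one thread summing while another searches for the maximum.

```c
#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};

struct range { int lo, hi; long sum; };

void *partial_sum(void *arg) {
    struct range *r = arg;             /* same operation, different slice of the data */
    r->sum = 0;
    for (int i = r->lo; i < r->hi; i++)
        r->sum += data[i];
    return NULL;
}

int main(void) {
    struct range halves[2] = { {0, N / 2, 0}, {N / 2, N, 0} };
    pthread_t t[2];
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, partial_sum, &halves[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    printf("total = %ld\n", halves[0].sum + halves[1].sum);   /* prints 36 */
    return 0;
}
```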
Amdahl’s Law
- Identifies the performance gain in latency from adding additional computing resources to an application
- When we add more cores to an application (S: serial portion, N: processing cores), speedup ≤ 1 / (S + (1 − S) / N)
- The serial portion of an application has a disproportionate effect on the performance gained by adding additional cores
- e.g. 25% serial, 75% parallel: increasing cores from 1 to 2 results in a speedup of 1.6 times (checked in the snippet after this list)
- As N approaches infinity, speedup approaches 1 / S
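As a quick check of the formula, the snippet below (an assumed helper of my own, not from the notes) evaluates the speedup for a 25% serial application at several core counts; N = 2 gives 1.6, and the value climbs toward 1 / S = 4 as N grows.

```c
#include <stdio.h>

/* Amdahl's Law: speedup(S, N) = 1 / (S + (1 - S) / N) */
double speedup(double S, int N) {
    return 1.0 / (S + (1.0 - S) / N);
}

int main(void) {
    double S = 0.25;                   /* 25% serial portion */
    for (int N = 1; N <= 16; N *= 2)
        printf("N=%2d  speedup=%.2f\n", N, speedup(S, N));
    /* N=2 prints 1.60; as N grows, the speedup approaches 1 / S = 4.00 */
    return 0;
}
```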
Concept Review
User Threads and Kernel Threads
- User Threads: management is done by a user-level thread library, supported above the kernel (managed without kernel support)
- POSIX Pthreads
- Windows threads
- Java threads
- Kernel Threads: supported and managed directly by the operating system
- the unit of scheduling
- A relationship must exist between user and kernel threads
- Thread libraries can provide various user-level thread models on top of kernel threads
Thread libraries
- Provides the programmer with an API for creating and managing threads
- Two primary ways of implementing
- Library entirely in user space
- Kernel-level library supported by the OS
Pthreads
- A POSIX standard API for thread creation and synchronization
- Provided as either a user-level or kernel-level library
- API specifies the behavior of the thread library
- Implementation is up to the developers of the library (it is a specification, not an implementation); a minimal usage sketch follows
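The sketch below (my own example, not from the notes) exercises the two sides of the API mentioned above, creation and synchronization: a thread is created with pthread_create, a mutex guards the shared balance, and pthread_join waits for the thread to finish.

```c
#include <pthread.h>
#include <stdio.h>

static int balance = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

void *deposit(void *arg) {
    int amount = *(int *)arg;
    pthread_mutex_lock(&lock);         /* synchronization: one thread in the section at a time */
    balance += amount;
    pthread_mutex_unlock(&lock);
    return NULL;
}

int main(void) {
    pthread_t tid;
    int amount = 100;
    pthread_create(&tid, NULL, deposit, &amount);   /* thread creation */
    pthread_join(tid, NULL);                        /* wait for the thread to finish */
    printf("balance = %d\n", balance);
    return 0;
}
```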