The OS/2 and Windows operating systems support critical sections, which mark sections of code in a multithreaded application during which no other thread in the process is allowed to run. The purpose of critical sections is to prevent problems such as loss of data integrity, which can occur when two threads simultaneously read and then write the same variable or file. You should avoid using critical sections except under very limited circumstances. Critical sections can cause deadlocks and serious timing problems, as the following scenario shows:
Thread 1                          Thread 2
    |                                |
   new                               |
    |                                |
request sem A (granted)              |
    |                                |
(timesliced out) - - - - - - - >  start critical section
                                     |
                                    new
                                     |
                              request sem A   <-- blocks: sem A is
                                     |            owned by Thread 1
                              (waits forever)

(In the intended flow, each thread would release sem A after its
allocation, and Thread 2 would then end its critical section -- but
Thread 1 can never run while the critical section is active.)
In this example, Thread 1 calls the new operator, which requests semaphore A (an operating system semaphore that prevents two threads from allocating storage simultaneously). The operating system timeslices Thread 1 out while it owns semaphore A. Thread 2 is timesliced in, declares a critical section, and calls new, which requests the same semaphore A. Because semaphore A is already owned by Thread 1, Thread 2 waits for the semaphore indefinitely. Thread 1 can never be timesliced back in to release semaphore A, because Thread 2 is in a critical section; and Thread 2 can never exit its critical section, because it is frozen waiting for semaphore A.
Semaphores and critical sections provide some of the same functionality, but using only semaphores is better programming practice and leads to more thread-safe programs. Avoid the use of critical sections in your programs wherever possible.