Question: Please help with solutions to this question.
a) On a single-CPU system, under what circumstances does a multithreaded program using kernel threads provide better performance (such as faster execution time) than a single-threaded solution that does not use asynchronous or event-based programming?
Explain using general principles. Give TWO example applications.
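For concreteness, here is a minimal sketch of the kind of workload part (a) is about: a two-thread pthread program in which one thread blocks on file I/O while the other keeps the single CPU busy. The file name input.dat and the loop bound are illustrative assumptions, not part of the question.

#include <pthread.h>
#include <stdio.h>

static void *reader(void *arg) {
    /* Blocking I/O: while this thread sleeps in the kernel waiting for data,
     * the CPU can run the compute thread instead of sitting idle. */
    FILE *f = fopen("input.dat", "rb");   /* assumed example file */
    if (f) {
        char buf[4096];
        while (fread(buf, 1, sizeof buf, f) > 0) { /* consume data */ }
        fclose(f);
    }
    return NULL;
}

static void *compute(void *arg) {
    /* CPU-bound work that can proceed whenever the reader is blocked. */
    volatile unsigned long sum = 0;
    for (unsigned long i = 0; i < 100000000UL; i++) sum += i;
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, reader, NULL);
    pthread_create(&t2, NULL, compute, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return 0;
}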
b) A new operating system provides a synchronization API and a library for user-level programs (i.e., like pthreads) in which the mutex lock and unlock operations are implemented with test_and_set like this:
void mutex_lock(mutex* plock) {
    while (test_and_set(plock)) { };
}

void mutex_unlock(mutex* plock) {
    *plock = false;
}
Is this implementation of mutex synchronization correct for use in general-purpose user-level applications? What could go wrong? It helps to think of an example application, such as the bounded-buffer problem or the dining philosophers.
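If you want to experiment with the lock in part (b), here is a minimal, self-contained sketch written with C11 atomics; it assumes the question's mutex type and test_and_set primitive correspond to an atomic_flag and atomic_flag_test_and_set.

#include <stdatomic.h>

typedef atomic_flag mutex;   /* assumption: the lock is a single atomic boolean flag */

void mutex_lock(mutex *plock) {
    /* Spin (busy-wait) until the flag was previously clear;
     * setting it marks the lock as held. */
    while (atomic_flag_test_and_set(plock)) { }
}

void mutex_unlock(mutex *plock) {
    /* Clearing the flag releases the lock. */
    atomic_flag_clear(plock);
}

Note that mutex_lock spins on the CPU rather than blocking the calling thread in the kernel; that property is what the question asks you to evaluate.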
c) On a running Linux kernel (version > 2.6), at some point the thread_info.preempt_count field for a kernel task we call A is equal to 2. (Linux kernel synchronization is discussed in the textbook.) Answer these questions; an illustrative kernel sketch follows them.
c1) Is task A currently preemptable? Explain.
c2) What is the new value of the thread_info.preempt_count field for task A after it acquires a new lock? Explain.
c3) What is the condition for kernel task A to be safely interruptible ?
c4) Assuming that all locks held by task A are spinlocks, how many CPUs does that computer have?
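As background for these sub-questions, here is a minimal, hypothetical kernel-module fragment showing that, on a preemptible kernel, preempt_count is raised while a spinlock is held and lowered when it is released. The lock name demo_lock and the module boilerplate are illustrative assumptions.

#include <linux/init.h>
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/preempt.h>

static DEFINE_SPINLOCK(demo_lock);

static int __init demo_init(void)
{
    pr_info("preempt_count before lock: %d\n", preempt_count());
    spin_lock(&demo_lock);          /* acquiring a spinlock raises preempt_count */
    pr_info("preempt_count while holding lock: %d\n", preempt_count());
    spin_unlock(&demo_lock);        /* releasing it lowers preempt_count again */
    pr_info("preempt_count after unlock: %d\n", preempt_count());
    return 0;
}

static void __exit demo_exit(void) { }

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");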
