Operating System Textbook Questions
List and briefly define the four main elements of a computer.
Consider a 32-bit microprocessor, with a 16-bit external data bus, driven by an 8-MHz input clock. Assume that this microprocessor has a bus cycle whose minimum duration equals four input clock cycles. What is the maximum data transfer rate across the bus that this microprocessor can sustain in bytes/s? To increase its performance, would it be better to make its external data bus 32 bits or to double the external clock frequency supplied to the microprocessor? State any other assumptions you make and explain. Determine the number of bytes that can be transferred per bus cycle.
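A minimal worked sketch of the arithmetic, assuming one 16-bit word (2 bytes) is transferred per bus cycle:
T_{cycle} = \frac{4}{8 \times 10^{6}\ \text{Hz}} = 500\ \text{ns}, \qquad R_{max} = \frac{2\ \text{bytes}}{500\ \text{ns}} = 4 \times 10^{6}\ \text{bytes/s}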
Consider a computer system that contains an I/O module controlling a simple keyboard/printer Teletype. The following registers are contained in the CPU and connected directly to the system bus:
INPR: Input Register, 8 bits
OUTR: Output Register, 8 bits
FGI: Input Flag, 1 bit
FGO: Output Flag, 1 bit
IEN: Interrupt Enable, 1 bit
Keystroke input from the Teletype and output to the printer is controlled by the I/O module. The Teletype is able to encode an alphanumeric symbol to an 8-bit word and decode an 8-bit word into an alphanumeric symbol. The Input flag is set when an 8-bit word enters the input register from the Teletype. The Output flag is set when a word is printed.
a. Describe how the CPU, using the first four registers listed in this problem, can achieve I/O with the Teletype.
b. Describe how the function can be performed more efficiently by also employing IEN.
In virtually all systems that include DMA modules, DMA access to main memory is given higher priority than processor access to main memory. Why?
A DMA module is transferring characters to main memory from an external device transmitting at 9600 bits per second (bps). The processor can fetch instructions at the rate of 1 million instructions per second. By how much will the processor be slowed down due to the DMA activity?
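A sketch of the arithmetic, under the common assumption that the DMA module steals one memory cycle per byte and that the processor would otherwise use every cycle for instruction fetches: 9600 bits/s is 1200 bytes/s, so about 1200 of the 10^6 cycles per second are stolen,
\text{slowdown} \approx \frac{1200}{10^{6}} = 0.12\%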
A computer consists of a CPU and an I/O device D connected to main memory M via a shared bus with a data bus width of one word. The CPU can execute a maximum of 10^6 instructions per second. An average instruction requires five processor cycles, three of which use the memory bus. A memory read or write operation uses one processor cycle. Suppose that the CPU is continuously executing “background” programs that require 95% of its instruction execution rate but not any I/O instructions. Assume that one processor cycle equals one bus cycle. Now suppose that very large blocks of data are to be transferred between M and D.
a. If programmed I/O is used and each one-word I/O transfer requires the CPU to execute two instructions, estimate the maximum I/O data transfer rate, in words per second, possible through D.
b. Estimate the same rate if DMA transfer is used.
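For part (a), a hedged sketch of the arithmetic, assuming the 5% of the instruction rate not claimed by the background programs is available for programmed I/O:
0.05 \times 10^{6} = 50{,}000\ \text{instructions/s}, \qquad \frac{50{,}000}{2\ \text{instructions/word}} = 25{,}000\ \text{words/s}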
Generalize Equations (1.1) and (1.2) in Appendix 1A to n-level memory hierarchies.
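One common way to generalize, assuming Equation (1.1) is the two-level average access time and Equation (1.2) the average cost per bit, with h_i the fraction of accesses satisfied at level i (levels searched in order, so \sum_{i=1}^{n} h_i = 1):
T_s = \sum_{i=1}^{n} h_i \sum_{j=1}^{i} T_j, \qquad C_s = \frac{\sum_{i=1}^{n} C_i S_i}{\sum_{i=1}^{n} S_i}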
Consider a memory system with the following parameters:
Tc = 100 ns
Cc = 0.01 cents/bit
Tm = 1,200 ns
Cm = 0.001 cents/bit
a. What is the cost of 1 MByte of main memory?
b. What is the cost of 1 MByte of main memory using cache memory technology?
c. If the effective access time is 10% greater than the cache access time, what is the hit ratio H?
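For part (c), a sketch of the setup, assuming the standard two-level relation T_s = H T_c + (1 - H)(T_c + T_m):
1.1\, T_c = H T_c + (1 - H)(T_c + T_m) \;\Rightarrow\; 0.1 \times 100 = (1 - H) \times 1200 \;\Rightarrow\; H = 1 - \tfrac{1}{120} \approx 0.992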
A computer has a cache, main memory, and a disk used for virtual memory. If a referenced word is in the cache, 20 ns are required to access it. If it is in main memory but not in the cache, 60 ns are needed to load it into the cache (this includes the time to originally check the cache), and then the reference is started again. If the word is not in main memory, 12 ms are required to fetch the word from disk, followed by 60 ns to copy it to the cache, and then the reference is started again. The cache hit ratio is 0.9 and the main-memory hit ratio is 0.6. What is the average time in ns required to access a referenced word on this system?
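A worked sketch under one common reading (each restart begins with a fresh 20 ns cache access, so a main-memory hit costs 60 + 20 ns and a disk reference costs 12 ms + 60 + 20 ns):
T_{avg} = (0.9)(20) + (0.1)(0.6)(80) + (0.1)(0.4)(12{,}000{,}080) \approx 480{,}026\ \text{ns} \approx 0.48\ \text{ms}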
Suppose a stack is to be used by the processor to manage procedure calls and returns. Can the program counter be eliminated by using the top of the stack as a program counter?
Define the two main categories of processor registers.
In general terms, what are the four distinct actions that a machine instruction can specify?
How are multiple interrupts dealt with?
In general, what are the strategies for exploiting spatial locality and temporal locality?
Suppose the hypothetical processor of Figure 1.3 also has two I/O instructions:
0011 = Load AC from I/O
0111 = Store AC to I/O
In these cases, the 12-bit address identifies a particular external device. Show the program execution (using the format of Figure 1.4) for the following program:
1. Load AC from device 5.
2. Add contents of memory location 940.
3. Store AC to device 6.
Assume that the next value retrieved from device 5 is 3 and that location 940 contains a value of 2.
The program execution of Figure is described in the text using six steps. Expand this description to show the use of the MAR and MBR.
Consider a hypothetical 32-bit microprocessor having 32-bit instructions composed of two fields. The first byte contains the op-code and the remainder an immediate operand or an operand address.
a. What is the maximum directly addressable memory capacity (in bytes)?
b. Discuss the impact on the system speed if the microprocessor bus has
1. A 32-bit local address bus and a 16-bit local data bus, or
2. A 16-bit local address bus and a 16-bit local data bus.
c. How many bits are needed for the program counter and the instruction register?
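For parts (a) and (c), a sketch of the usual reasoning, assuming the 24 bits remaining after the one-byte op-code form a byte address: 2^{24} bytes = 16 MBytes are directly addressable, so the program counter needs at least 24 bits and the instruction register 32 bits (one full instruction). For part (b), a 16-bit data bus means every 32-bit instruction fetch takes two bus transfers, and a 16-bit address bus would additionally require each address to be sent in two parts.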
Consider a hypothetical microprocessor generating a 16-bit address (e.g., assume that the program counter and the address registers are 16 bits wide) and having a 16-bit data bus.
a. What is the maximum memory address space that the processor can access directly if it is connected to a “16-bit memory”?
b. What is the maximum memory address space that the processor can access directly if it is connected to an “8-bit memory”?
c. What architectural features will allow this microprocessor to access a separate “I/O space”?
d. If an input and an output instruction can specify an 8-bit I/O port number, how many 8-bit I/O ports can the microprocessor support? How many 16-bit I/O ports? Explain.
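For part (d), a sketch of the counting, assuming the 8-bit port number forms a separate I/O address space: 2^8 = 256 distinct port numbers exist, so up to 256 8-bit ports; whether 16-bit ports also number 256 (one port number each, using the full 16-bit data bus) or 128 (two consecutive 8-bit port numbers each) depends on how the ports are assumed to be addressed.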
What are three objectives of an OS design?
What is the kernel of an OS?
How is the execution context of a process used by the OS?
List and briefly explain five storage management responsibilities of a typical OS.
Describe the round-robin scheduling technique.
Explain the difference between a monolithic kernel and a microkernel.
Suppose that we have a multi-programmed computer in which each job has identical characteristics. In one computation period, T, for a job, half the time is spent in I/O and the other half in processor activity. Each job runs for a total of N periods. Assume that a simple round-robin scheduling is used, and that I/O operations can overlap with processor operation. Define the following quantities:
• Turnaround time = actual time to complete a job
• Throughput = average number of jobs completed per time period T
• Processor utilization = percentage of time that the processor is active (not waiting)
Compute these quantities for one, two, and four simultaneous jobs, assuming that the period T is distributed in each of the following ways:
a. I/O first half, processor second half
b. I/O first and fourth quarters, processor second and third quarters
An I/O-bound program is one that, if run alone, would spend more time waiting for I/O than using the processor. A processor-bound program is the opposite. Suppose a short-term scheduling algorithm favors those programs that have used little processor time in the recent past. Explain why this algorithm favors I/O-bound programs and yet does not permanently deny processor time to processor-bound programs.
Contrast the scheduling policies you might use when trying to optimize a time-sharing system with those you would use to optimize a multi-programmed batch system.
In IBM’s mainframe OS, OS/390, one of the major modules in the kernel is the System Resource Manager. This module is responsible for the allocation of resources among address spaces (processes). The SRM gives OS/390 a degree of sophistication unique among operating systems. No other mainframe OS, and certainly no other type of OS, can match the functions performed by SRM. The concept of resource includes processor, real memory, and I/O channels. SRM accumulates statistics pertaining to utilization of processor, channel, and various key data structures. Its purpose is to provide optimum performance based on performance monitoring and analysis. The installation sets forth various performance objectives, and these serve as guidance to the SRM, which dynamically modifies installation and job performance characteristics based on system utilization. In turn, the SRM provides reports that enable the trained operator to refine the configuration and parameter settings to improve user service.
This problem concerns one example of SRM activity. Real memory is divided into equal-sized blocks called frames, of which there may be many thousands. Each frame can hold a block of virtual memory referred to as a page. SRM receives control approximately 20 times per second and inspects each and every page frame. If the page has not been referenced or changed, a counter is incremented by 1. Over time, SRM averages these numbers to determine the average number of seconds that a page frame in the system goes untouched. What might be the purpose of this and what action might SRM take?
A multiprocessor with eight processors has 20 attached tape drives. There is a large number of jobs submitted to the system that each require a maximum of four tape drives to complete execution. Assume that each job starts running with only three tape drives for a long period before requiring the fourth tape drive for a short period toward the end of its operation. Also assume an endless supply of such jobs.
a. Assume the scheduler in the OS will not start a job unless there are four tape drives available. When a job is started, four drives are assigned immediately and are not released until the job finishes. What is the maximum number of jobs that can be in progress at once? What are the maximum and minimum numbers of tape drives that may be left idle as a result of this policy?
b. Suggest an alternative policy to improve tape drive utilization and at the same time avoid system deadlock. What is the maximum number of jobs that can be in progress at once? What are the bounds on the number of idling tape drives?
For the processing model of Figure, briefly define each state.
What is swapping and what is its purpose?
Why does Figure have two blocked states?
List four characteristics of a suspended process.
Why are two modes (user and kernel) needed?
What are the steps performed by an OS to create a new process?
What is the difference between an interrupt and a trap?
What is the difference between a mode switch and a process switch?
The following state transition table is a simplified model of process management, with the labels representing transitions between states of READY, RUN, BLOCKED, and NONRESIDENT.
Give an example of an event that can cause each of the above transitions.
Assume that at time 5 no system resources are being used except for the processor and memory. Now consider the following events:
At time 5: P1 executes a command to read from disk unit 3.
At time 15: P5’s time slice expires.
At time 18: P7 executes a command to write to disk unit 3.
At time 20: P3 executes a command to read from disk unit 2.
At time 24: P5 executes a command to write to disk unit 3.
At time 28: P5 is swapped out.
At time 33: An interrupt occurs from disk unit 2: P3’s read is complete.
At time 36: An interrupt occurs from disk unit 3: P1’s read is complete.
At time 38: P8 terminates.
At time 40: An interrupt occurs from disk unit 3: P5’s write is complete.
At time 44: P5 is swapped back in.
At time 48: An interrupt occurs from disk unit 3: P7’s write is complete.
For each of the times 22, 37, and 47, identify which state each process is in. If a process is blocked, further identify the event on which it is blocked.
Figure 3.9b contains seven states. In principle, one could draw a transition between any two states, for a total of 42 different transitions.
a. List all of the possible transitions and give an example of what could cause each transition
b. List all of the impossible transitions and explain why.
For the seven-state process model of Figure, draw a queuing diagram similar to that of Figure 3.8b.
Consider the state transition diagram of Figure.
Suppose that it is time for the OS to dispatch a process and that there are processes in both the Ready state and the Ready/Suspend state, and that at least one process in the Ready/Suspend state has higher scheduling priority than any of the processes in the Ready state. Two extreme policies are as follows:
(1) Always dispatch a process from the Ready state, to minimize swapping, and (2) always give preference to the highest-priority process, even though that may mean swapping when swapping is not necessary. Suggest an intermediate policy that tries to balance the concerns of priority and performance.
Table shows the process states for the VAX/VMS operating system.
a. Can you provide a justification for the existence of so many distinct wait states?
b. Why do the following states not have resident and swapped-out versions: Page Fault Wait, Collided Page Wait, Common Event Wait, Free Page Wait, and Resource Wait?
c. Draw the state transition diagram and indicate the action or occurrence that causes each transition.
The VAX/VMS operating system makes use of four processor access modes to facilitate the protection and sharing of system resources among processes. The access mode determines
• Instruction execution privileges: What instructions the processor may execute
• Memory access privileges: Which locations in virtual memory the current instruction may access
The four modes are as follows:
• Kernel: Executes the kernel of the VMS operating system, which includes memory management, interrupt handling, and I/O operations
• Executive: Executes many of the OS service calls, including file and record (disk and tape) management routines
• Supervisor: Executes other OS services, such as responses to user commands
• User: Executes user programs, plus utilities such as compilers, editors, linkers, and debuggers
A process executing in a less privileged mode often needs to call a procedure that executes in a more privileged mode; for example, a user program requires an operating system service. This call is achieved by using a change-mode (CHM) instruction, which causes an interrupt that transfers control to a routine at the new access mode. A return is made by executing the REI (return from exception or interrupt) instruction.
a. A number of operating systems have two modes, kernel and user. What are the advantages and disadvantages of providing four modes instead of two?
b. Can you make a case for even more than four modes?
The VMS scheme discussed in the preceding problem is often referred to as a ring protection structure, as illustrated in Figure. Indeed, the simple kernel/user scheme, as described in Section 3.3, is a two-ring structure. A potential disadvantage of this protection structure is that it cannot readily be used to enforce a “need-to-know” principle. [SILB04] gives this example: If an object is accessible in domain Dj but not in domain Di, then j < i. But this means that every object accessible in Di is also accessible in Dj.
Explain clearly what the problem is that is referred to in the preceding paragraph.
Figure suggests that a process can only be in one event queue at a time.
a. Is it possible that you would want to allow a process to wait on more than one event at the same time? Provide an example.
b. In that case, how would you modify the queuing structure of the figure to support this new feature?
In a number of early computers, an interrupt caused the register values to be stored in fixed locations associated with the given interrupt signal. Under what circumstances is this a practical technique? Explain why it is inconvenient in general.
Table lists typical elements found in a process control block for an unthreaded OS. Of these, which should belong to a thread control block and which should belong to a process control block for a multithreaded system?
List three advantages of ULTs over KLTs.
List two disadvantages of ULTs compared to KLTs.
OS/2 is an obsolete OS for PCs from IBM. In OS/2, what is commonly embodied in the concept of process in other operating systems is split into three separate types of entities: sessions, processes, and threads. A session is a collection of one or more processes associated with a user interface (keyboard, display, and mouse). The session represents an interactive user application, such as a word processing program or a spreadsheet. This concept allows the personal-computer user to open more than one application, giving each one or more windows on the screen. The OS must keep track of which window, and therefore which session, is active, so that keyboard and mouse input are routed to the appropriate session. At any time, one session is in foreground mode, with other sessions in background mode. All keyboard and mouse input is directed to one of the processes of the foreground session, as dictated by the applications. When a session is in foreground mode, a process performing video output sends it directly to the hardware video buffer and thence to the user’s screen.
When the session is moved to the background, the hardware video buffer is saved to a logical video buffer for that session. While a session is in background, if any of the threads of any of the processes of that session executes and produces screen output, that output is directed to the logical video buffer. When the session returns to foreground, the screen is updated to reflect the current contents of the logical video buffer for the new foreground session.
There is a way to reduce the number of process-related concepts in OS/2 from three to two. Eliminate sessions, and associate the user interface (keyboard, mouse, and screen) with processes. Thus, one process at a time is in foreground mode. For further structuring, processes can be broken up into threads.
a. What benefits are lost with this approach?
b. If you go ahead with this modification, where do you assign resources (memory, files, etc.): at the process or thread level?
Consider an environment in which there is a one-to-one mapping between user-level threads and kernel-level threads that allows one or more threads within a process to issue blocking system calls while other threads continue to run. Explain why this model can make multithreaded programs run faster than their single-threaded counterparts on a uniprocessor computer.
Many current language specifications, such as for C and C++, are inadequate for multithreaded programs. This can have an impact on compilers and the correctness of code, as this problem illustrates. Consider the following declarations and function definition:
int global_positives = 0;
typedef struct list {
    struct list *next;
    double val;
} *list;

void count_positives(list l)
{
    list p;
    for (p = l; p; p = p->next)
        if (p->val > 0.0)
            ++global_positives;
}
Now consider the case in which thread A performs
count_positives();
while thread B performs
++global_positives;
a. What does the function do?
b. The C language only addresses single-threaded execution. Does the use of two parallel threads create any problems or potential problems?
But some existing optimizing compilers (including gcc, which tends to be relatively conservative) will “optimize” count_positives to something similar to
void count_positives(list l)
{
    list p;
    register int r;

    r = global_positives;
    for (p = l; p; p = p->next)
        if (p->val > 0.0)
            ++r;
    global_positives = r;
}
What problem or potential problem occurs with this compiled version of the program if threads A and B are executed concurrently?
Consider the following code using the POSIX Pthreads API:
thread2.c
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int myglobal;

void *thread_function(void *arg) {
    int i, j;
    for (i = 0; i < 20; i++) {
        j = myglobal;
        j = j + 1;
        printf(".");
        fflush(stdout);
        sleep(1);
        myglobal = j;
    }
    return NULL;
}

int main(void) {
    pthread_t mythread;
    int i;
    if (pthread_create(&mythread, NULL, thread_function, NULL)) {
        printf("error creating thread.");
        abort();
    }
    for (i = 0; i < 20; i++) {
        myglobal = myglobal + 1;
        printf("o");
        fflush(stdout);
        sleep(1);
    }
    if (pthread_join(mythread, NULL)) {
        printf("error joining thread.");
        abort();
    }
    printf("\nmyglobal equals %d\n", myglobal);
    exit(0);
}
In main() we first declare a variable called mythread, which has a type of pthread_t. This is essentially an ID for a thread. Next, the if statement creates a thread associated with mythread. The call pthread_create() returns zero on success and a nonzero value on failure. The third argument of pthread_create() is the name of a function that the new thread will execute when it starts. When this thread_function() returns, the thread terminates. Meanwhile, the main program itself defines a thread, so that there are two threads executing. The pthread_join function enables the main thread to wait until the new thread completes.
a. What does this program accomplish?
b. Here is the output from the executed program:
$ ./thread2
..o.o.o.o.oo.o.o.o.o.o.o.o.o.o..o.o.o.o.o
myglobal equals 21
Is this the output you would expect? If not, what has gone wrong?
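A minimal sketch of how such an unsynchronized read-modify-write on a shared counter is commonly protected with a pthread mutex; the worker function and mutex name below are illustrative assumptions, not part of the exercise's code:

#include <pthread.h>
#include <stdio.h>

static int myglobal = 0;
static pthread_mutex_t myglobal_lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 20; i++) {
        pthread_mutex_lock(&myglobal_lock);   /* protect the read-modify-write */
        myglobal = myglobal + 1;
        pthread_mutex_unlock(&myglobal_lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t;
    if (pthread_create(&t, NULL, worker, NULL))
        return 1;
    for (int i = 0; i < 20; i++) {
        pthread_mutex_lock(&myglobal_lock);
        myglobal = myglobal + 1;
        pthread_mutex_unlock(&myglobal_lock);
    }
    pthread_join(t, NULL);
    printf("myglobal equals %d\n", myglobal);  /* increments can no longer interleave */
    return 0;
}

Compiled with -pthread, the final value is deterministic because the two increment loops can no longer interleave inside a single read-modify-write.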
The Solaris documentation states that a ULT may yield to another thread of the same priority. Isn’t it possible that there will be a runnable thread of higher priority and that therefore the yield function should result in yielding to a thread of the same or higher priority?
In Solaris 9 and Solaris 10, there is a one-to-one mapping between ULTs and LWPs. In Solaris 8, a single LWP supports one or more ULTs.
a. What is the possible benefit of allowing a many-to-one mapping of ULTs to LWPs?
b. In Solaris 8, the thread execution state of a ULT is distinct from that of its LWP. Explain why.
c. Figure 4.18 shows the state transition diagrams for a ULT and its associated LWP in Solaris 8 and 9. Explain the operation of the two diagrams and their relationships.
Explain the rationale for the Uninterruptible state in Linux.
List three degrees of awareness between processes and briefly define each.
What is the distinction between competing processes and cooperating processes?
List the three control problems associated with competing processes and briefly define each.

List the requirements for mutual exclusion.
What operations can be performed on a semaphore?
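For concreteness, a minimal sketch using POSIX semaphores; the classic operations are initialize, semWait (also called wait or P), and semSignal (also called signal or V):

#include <semaphore.h>

int main(void)
{
    sem_t s;

    sem_init(&s, 0, 1);  /* initialize: shared among threads, initial count 1 */
    sem_wait(&s);        /* semWait / P: decrement; block if the count would go negative */
    /* critical section or guarded work would go here */
    sem_post(&s);        /* semSignal / V: increment; wake one blocked waiter, if any */
    sem_destroy(&s);
    return 0;
}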
What is the distinction between blocking and non-blocking with respect to messages?
Consider the following program:
Note that the scheduler in a uniprocessor system would implement pseudo-parallel execution of these two concurrent processes by interleaving their instructions, without restriction on the order of the interleaving.
a. Show a sequence (i.e., trace the sequence of interleavings of statements) such that the statement “x is 10” is printed.
b. Show a sequence such that the statement “x is 8” is printed. You should remember that increments and decrements at the source language level are not done atomically; that is, the single C statement x = x + 1 is implemented by a sequence of separate machine instructions (load, increment, store).
Consider the following program:
const int n = 50;
int tally;

void total()
{
    int count;
    for (count = 1; count <= n; count++) {
        tally++;
    }
}

void main()
{
    tally = 0;
    parbegin(total(), total());
    write(tally);
}
a. Determine the proper lower bound and upper bound on the final value of the shared variable tally output by this concurrent program. Assume processes can execute at any relative speed and that a value can only be incremented after it has been loaded into a register by a separate machine instruction.
b. Suppose that an arbitrary number of these processes are permitted to execute in parallel under the assumptions of part (a). What effect will this modification have on the range of final values of tally?
Is busy waiting always less efficient (in terms of using processor time) than a blocking wait? Explain.
Consider the following program:
boolean blocked[2];
int turn;

void P(int id)
{
    while (true) {
        blocked[id] = true;
        while (turn != id) {
            while (blocked[1-id])
                /* do nothing */;
            turn = id;
        }
        /* critical section */
        blocked[id] = false;
        /* remainder */
    }
}

void main()
{
    blocked[0] = false;
    blocked[1] = false;
    turn = 0;
    parbegin(P(0), P(1));
}
A software approach to mutual exclusion is Lamport’s bakery algorithm [LAMP74], so called because it is based on the practice in bakeries and other shops in which every customer receives a numbered ticket on arrival, allowing each to be served in turn. The algorithm is as follows:
boolean choosing[n];
int number[n];

while (true) {
    choosing[i] = true;
    number[i] = 1 + getmax(number[], n);
    choosing[i] = false;
    for (int j = 0; j < n; j++) {
        while (choosing[j]) { };
        while ((number[j] != 0) && ((number[j], j) < (number[i], i))) { };
    }
    /* critical section */;
    number[i] = 0;
    /* remainder */;
}
The arrays choosing and number are initialized to false and 0, respectively. The ith element of each array may be read and written by process i but only read by other processes. The notation (a, b) < (c, d) is defined as:
(a < c) or (a = c and b < d)
a. Describe the algorithm in words.
b. Show that this algorithm avoids deadlock.
c. Show that it enforces mutual exclusion.
Now consider a version of the bakery algorithm without the variable choosing. Then we have
int number[n];

while (true) {
    number[i] = 1 + getmax(number[], n);
    for (int j = 0; j < n; j++) {
        while ((number[j] != 0) && ((number[j], j) < (number[i], i))) { };
    }
    /* critical section */;
    number[i] = 0;
    /* remainder */;
}
Does this version violate mutual exclusion? Explain why or why not.
Consider the following program which provides a software approach to mutual exclusion:
integer array control[1:N]; integer k;

where 1 ≤ k ≤ N, and each element of “control” is either 0, 1, or 2. All elements of “control” are initially zero; the initial value of k is immaterial.

The program of the ith process (1 ≤ i ≤ N) is

begin integer j;
L0: control[i] := 1;
L1: for j := k step 1 until N, 1 step 1 until k do
        begin
            if j = i then goto L2;
            if control[j] ≠ 0 then goto L1
        end;
L2: control[i] := 2;
    for j := 1 step 1 until N do
        if j ≠ i and control[j] = 2 then goto L0;
L3: if control[k] ≠ 0 and k ≠ i then goto L0;
L4: k := i;
    critical section;
L5: for j := k step 1 until N, 1 step 1 until k do
        if j ≠ k and control[j] ≠ 0 then
            begin
                k := j;
                goto L6
            end;
L6: control[i] := 0;
L7: remainder of cycle;
    goto L0;
end
This is referred to as the Eisenberg-McGuire algorithm. Explain its operation and its key features.
When a special machine instruction is used to provide mutual exclusion in the fashion of Figure 5.2, there is no control over how long a process must wait before being granted access to its critical section. Devise an algorithm that uses the compare & swap instruction but that guarantees that any process waiting to enter its critical section will do so within n −1 turns, where n is the number of processes that may require access to the critical section and a “turn” is an event consisting of one process leaving the critical section and another process being granted access.
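One well-known construction along the lines this problem asks for is the bounded-waiting mutual exclusion algorithm built on an atomic compare-and-swap, in the style given in Silberschatz et al.; the sketch below is a compilable illustration using C11 atomics, with N, the helper name, and the hand-off loop as illustrative assumptions rather than the book's own code:

#include <stdatomic.h>
#include <stdbool.h>

#define N 4                          /* number of competing processes (illustrative) */

static atomic_int  lock;             /* 0 = free, 1 = held; statically initialized to 0 */
static atomic_bool waiting[N];       /* waiting[i] is true while process i wants entry */

/* Atomically: if *word == expected, set it to newval; return the previous value. */
static int compare_and_swap(atomic_int *word, int expected, int newval)
{
    atomic_compare_exchange_strong(word, &expected, newval);
    return expected;                 /* holds the old value of *word */
}

void enter_critical_section(int i)
{
    waiting[i] = true;
    int key = 1;
    while (waiting[i] && key == 1)   /* spin until we take the lock or it is handed to us */
        key = compare_and_swap(&lock, 0, 1);
    waiting[i] = false;
}

void leave_critical_section(int i)
{
    int j = (i + 1) % N;
    while (j != i && !waiting[j])    /* scan, in cyclic order, for the next waiting process */
        j = (j + 1) % N;
    if (j == i)
        lock = 0;                    /* nobody waiting: free the lock */
    else
        waiting[j] = false;          /* hand the critical section directly to process j */
}

Because the exiting process hands the lock to the next waiting process in cyclic order, no process waits through more than N − 1 exits of the critical section.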
Consider the following definition of semaphores:
void semWait(s)
{
    if (s.count > 0) {
        s.count--;
    }
    else {
        place this process in s.queue;
        block;
    }
}

void semSignal(s)
{
    if (there is at least one process blocked on semaphore s) {
        remove a process P from s.queue;
        place process P on ready list;
    }
    else
        s.count++;
}
Compare this set of definitions with that of Figure 5.3. Note one difference: With the preceding definition, a semaphore can never take on a negative value. Is there any difference in the effect of the two sets of definitions when used in programs? That is, could you substitute one set for the other without altering the meaning of the program?
Consider a sharable resource with the following characteristics:
(1) As long as there are fewer than three processes using the resource, new processes can start using it right away.
It should be possible to implement general semaphores using binary semaphores. We can use the operations semWaitB and semSignalB and two binary semaphores, delay and mutex. Consider the following:
void semWait(semaphore s)
{
    semWaitB(mutex);
    s--;
    if (s < 0) {
        semSignalB(mutex);
        semWaitB(delay);
    }
    else semSignalB(mutex);
}

void semSignal(semaphore s)
{
    semWaitB(mutex);
    s++;
    if (s <= 0)
        semSignalB(delay);
    semSignalB(mutex);
}
Initially, s is set to the desired semaphore value. Each semWait operation decrements s, and each semSignal operation increments s. The binary semaphore mutex, which is initialized to 1, assures that there is mutual exclusion for the updating of s. The binary semaphore delay, which is initialized to 0, is used to block processes.
There is a flaw in the preceding program. Demonstrate the flaw and propose a change that will fix it.
In the commentary on Figure 5.9 and Table 5.4, it was stated that “it would not do simply to move the conditional statement inside the critical section (controlled by s) of the consumer because this could lead to deadlock.” Demonstrate this with a table similar to Table 5.4.
The following pseudocode is a correct implementation of the producer/consumer problem with a bounded buffer:
Labels p1, p2, p3 and c1, c2, c3 refer to the lines of code shown above (p2 and c2 each cover three lines of code). Semaphores empty and full are linear semaphores that can take unbounded negative and positive values. There are multiple producer processes, referred to as Pa, Pb, Pc, etc., and multiple consumer processes, referred to as Ca, Cb, Cc, etc. Each semaphore maintains a FIFO (first-in-first-out) queue of blocked processes. In the scheduling chart below, each line represents the state of the buffer and semaphores after the scheduled execution has occurred. To simplify, we assume that scheduling is such that processes are never interrupted while executing a given portion of code p1, or p2, ..., or c3. Your task is to complete the following chart.
This problem demonstrates the use of semaphores to coordinate three types of processes. Santa Claus sleeps in his shop at the North Pole and can only be awakened by either (1) all nine reindeer being back from their vacation in the South Pacific, or (2) some of the elves having difficulties making toys; to allow Santa to get some sleep, the elves can only wake him when three of them have problems. When three elves are having their problems solved, any other elves wishing to visit Santa must wait for those elves to return. If Santa wakes up to find three elves waiting at his shop’s door, along with the last reindeer having come back from the tropics, Santa has decided that the elves can wait until after Christmas, because it is more important to get his sleigh ready. (It is assumed that the reindeer do not want to leave the tropics, and therefore they stay there until the last possible moment.) The last reindeer to arrive must get Santa while the others wait in a warming hut before being harnessed to the sleigh. Solve this problem using semaphores.
Show that message passing and semaphores have equivalent functionality by
a. Implementing message passing using semaphores.
b. Implementing a semaphore using message passing.
Explain the problem with this implementation of the one-writer, many-readers problem.
A pipeline algorithm is implemented so that a stream of data elements of type T produced by a process P0 passes through a sequence of processes P1, P2, ..., Pn−1, which operate on the elements in that order.
a. Define a generalized message buffer that contains all the partially consumed data elements and write an algorithm for process Pi (0 ≤ i ≤ n - 1), of the form
repeat
    receive from predecessor;
    consume element;
    send to successor;
forever
Assume P0 receives input elements sent by Pn - 1. The algorithm should enable the processes to operate directly on messages stored in the buffer so that copying is unnecessary.
b. Show that the processes cannot be deadlocked with respect to the common buffer.
Suppose the following two processes, foo and bar, are executed concurrently and share the semaphore variables S and R (each initialized to 1) and the integer variable x (initialized to 0).
a. Can the concurrent execution of these two processes result in one or both being blocked forever? If yes, give an execution sequence in which one or both are blocked forever.
b. Can the concurrent execution of these two processes result in the indefinite postponement of one of them? If yes, give an execution sequence in which one is indefinitely postponed.
Consider a system consisting of four processes and a single resource. The current state of the claim and allocation matrices is:
What is the minimum number of units of the resource needed to be available for this state to be safe?
Consider the following ways of handling deadlock:
(1) Banker’s algorithm,
(2) Detect deadlock and kill thread, releasing all resources,
(3) Reserve all resources in advance,
(4) Restart thread and release all resources if thread needs to wait,
(5) Resource ordering, and
(6) Detect deadlock and roll back thread’s actions.
a. One criterion to use in evaluating different approaches to deadlock is which approach permits the greatest concurrency. In other words, which approach allows the most threads to make progress without waiting when there is no deadlock? Give a rank order from 1 to 6 for each of the ways of handling deadlock just listed, where 1 allows the greatest degree of concurrency. Comment on your ordering.
b. Another criterion is efficiency; in other words, which requires the least processor overhead. Rank order the approaches from 1 to 6, with 1 being the most efficient, assuming that deadlock is a very rare event. Comment on your ordering. Does your ordering change if deadlocks occur frequently?
Suppose that there are two types of philosophers. One type always picks up his left fork first (a “lefty”), and the other type always picks up his right fork first (a “righty”). The behavior of a lefty is defined in Figure 6.12.
The behavior of a righty is as follows:
begin
    repeat
        think;
        wait( fork[(i+1) mod 5] );
        wait( fork[i] );
        eat;
        signal( fork[i] );
        signal( fork[(i+1) mod 5] );
    forever
end;
Prove the following:
a. Any seating arrangement of lefties and righties with at least one of each avoids deadlock.
b. Any seating arrangement of lefties and righties with at least one of each prevents starvation.
The two variables a and b have initial values of 1 and 2, respectively. The following code is for a Linux system:
What possible errors are avoided by the use of the memory barriers?
What are the three conditions that must be present for deadlock to be possible?
List two ways in which the no-preemption condition can be prevented.
What is the difference among deadlock avoidance, detection, and prevention?
Show that the four conditions of deadlock apply to Figure.
Show how each of the techniques of prevention, avoidance, and detection can be applied to Figure.
For Figure, provide a narrative description of each of the six depicted paths, similar to the description of the paths of Figure 6.2 provided in Section 6.1.
It was stated that deadlock cannot occur for the situation reflected in Figure. Justify that statement.
Given the following state for the Banker’s Algorithm.
6 processes P0 through P5
4 resource types: A (15 instances), B (6 instances), C (9 instances), D (10 instances)
Snapshot at time T0:
a. Verify that the Available array has been calculated correctly.
b. Calculate the Need matrix.
c. Show that the current state is safe; that is, show a safe sequence of processes. In addition to the sequence, show how the Available (working) array changes as each process terminates.
d. Given the request (3, 2, 3, 3) from process P5, should this request be granted? Why or why not?
In the code below, three processes are competing for six resources labeled A to F.
a. Using a resource allocation graph, show the possibility of a deadlock in this implementation.
b. Modify the order of some of the get requests to prevent the possibility of any deadlock. You cannot move requests across procedures, only change the order inside each procedure. Use a resource allocation graph to justify your answer.
A spooling system consists of an input process I, a user process P, and an output process O connected by two buffers. The processes exchange data in blocks of equal size. These blocks are buffered on a disk using a floating boundary between the input and the output buffers, depending on the speed of the processes. The communication primitives used ensure that the following resource constraint is satisfied:
i + o ≤ max
where
max = maximum number of blocks on disk
i = number of input blocks on disk
o = number of output blocks on disk
The following is known about the processes:
1. As long as the environment supplies data, process I will eventually input it to the disk (provided disk space becomes available).
2. As long as input is available on the disk, process P will eventually consume it and output a finite amount of data on the disk for each block input (provided disk space becomes available).
3. As long as output is available on the disk, process O will eventually consume it. Show that this system can become deadlocked.
Suggest an additional resource constraint that will prevent the deadlock in Problem 6.7 but still permit the boundary between input and output buffers to vary in accordance with the present needs of the processes.
In the THE multiprogramming system [DIJK68], a drum (precursor to the disk for secondary storage) is divided into input buffers, processing areas, and output buffers, with floating boundaries, depending on the speed of the processes involved. The current state of the drum can be characterized by the following parameters:
max = maximum number of pages on drum
i = number of input pages on drum
p = number of processing pages on drum
o = number of output pages on drum
reso = minimum number of pages reserved for output
resp = minimum number of pages reserved for processing
Formulate the necessary resource constraints that guarantee that the drum capacity is not exceeded and that a minimum number of pages is reserved permanently for output and processing.
In the THE multiprogramming system, a page can make the following state transitions:
1. Empty → input buffer (Input production)
2. Input buffer → processing area (Input consumption)
3. Processing area → output buffer (Output production)
4. Output buffer → empty (Output consumption)
5. Empty → processing area (Procedure call)
6. Processing area → empty (Procedure return)
a. Define the effect of these transitions in terms of the quantities i, o, and p.
b. Can any of them lead to a deadlock if the assumptions made in Problem 6.6 about input processes, user processes, and output processes hold?
Consider a system with a total of 150 units of memory, allocated to three processes as shown:
Apply the banker’s algorithm to determine whether it would be safe to grant each of the following requests. If yes, indicate a sequence of terminations that could be guaranteed possible. If no, show the reduction of the resulting allocation table.
a. A fourth process arrives, with a maximum memory need of 60 and an initial need of 25 units.
b. A fourth process arrives, with a maximum memory need of 60 and an initial need of 35 units.