Random processes
Continuous-time Markov chains in a countable state-space
- See Chapter I of James Norris and Chapter III of David Stirzaker for references on discrete-time Markov chains.
- See Chapter II, sections 2.1-2, of James Norris and Chapter IV, section 4.1, of David Stirzaker for an introduction to continuous-time Markov chains.

Worksheet 2. Compound and reward processes, birth and death processes, first examples of Markov chains

1. A server at a counter makes consecutive cycles of a work time followed by an idle time. We shall assume that the work/idle time lengths of the different cycles, (W1, I1), (W2, I2), ..., are independent couples of random variables.

(i) Assume that each cycle (work time + idle time) has an average length of 12 minutes and that each work time has an average length of 8 minutes. Argue, using a reward process, what the overall proportion of work time over a long day should be with very high probability.

(ii) Assume now that after each work period of length t, the server decides to stay idle for a random period of time which, given t, he/she fixes independently of anything else according to an exponential random variable with mean t. Assume that a friend comes in the middle of the day and asks, at the end of a cycle, how long the server worked during that cycle.

a) With the data of question (i), and assuming furthermore that the standard deviation of W1 is 4 minutes and that its law has a density, what is, in expectation, the answer of the server?

b) A week later, the server changes his strategy and decides that from now on, after each work period, he/she will stay idle for a period of time given by an exponential random variable chosen independently of anything else, with mean 8 minutes. In the situation of a), will that change, in expectation, his/her answer to his/her friend? Is this a paradox?

(iii) Assume now that the server became memoryless and changes his/her state according to a continuous-time Markov chain. We still assume the same data as in (i).

a) Detail the model, giving explicitly the Q-matrix of a continuous-time Markov chain modelling the state of the server.

b) Compute explicitly its transition semigroup matrix p(t).

c) Deduce an expression for the 2 × 2 matrix lim_{t→∞} p(t) and compare the result with the solution of (i). (A minimal numerical sketch of this part is given after the problem statement.)
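
The following is a minimal numerical sketch for part (iii), not part of the worksheet itself. It assumes the two-state chain (state 0 = working, state 1 = idle) with jump rates 1/8 per minute from work to idle and 1/4 per minute from idle to work, which is one natural reading of the data of (i) (mean work time 8 minutes, hence mean idle time 4 minutes), and evaluates the semigroup p(t) = exp(tQ) numerically.

```python
# Numerical sketch for part (iii): two-state continuous-time Markov chain
# for the server (state 0 = working, state 1 = idle).
# Assumed rates (consistent with part (i)): work -> idle at 1/8 per minute,
# idle -> work at 1/4 per minute.
import numpy as np
from scipy.linalg import expm

Q = np.array([[-1/8,  1/8],
              [ 1/4, -1/4]])  # Q-matrix (generator); each row sums to zero

def p(t):
    """Transition semigroup p(t) = exp(tQ)."""
    return expm(t * Q)

print(p(1.0))     # transition probabilities after 1 minute
print(p(1000.0))  # both rows close to the stationary distribution (2/3, 1/3)
```

For large t, both rows of p(t) approach the stationary distribution (2/3, 1/3), i.e. the long-run proportion of work time 8/12 obtained in part (i).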