Every Saturday night a man plays poker at his home with the same group of friends. If he provides refreshments for the group (at an expected cost of $14) on any given Saturday night, the group will begin the following Saturday night in a good mood with probability 7/8 and in a bad mood with probability 1/8. However, if he fails to provide refreshments, the group will begin the following Saturday night in a good mood with probability 1/8 and in a bad mood with probability 7/8, regardless of their mood this Saturday. Furthermore, if the group begins the night in a bad mood and he then fails to provide refreshments, the group will gang up on him so that he incurs expected poker losses of $75. Under all other circumstances, he averages no gain or loss on his poker play. The man wishes to find the policy regarding when to provide refreshments that will minimize his (long-run) expected average cost per week.
(a) Formulate this problem as a Markov decision process by identifying the states and decisions and then finding the C_ik (the expected immediate cost of making decision k in state i).
(b) Identify all the (stationary deterministic) policies. For each one, find the transition matrix and write an expression for the (long-run) expected average cost per period in terms of the unknown steady-state probabilities (π_0, π_1, . . . , π_M).
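For reference, the expression asked for in part (b) takes the standard form for the average-cost criterion under a stationary policy (here the state index i and the policy's decision d_i in state i follow the notation of part (a)):

$$
E(\text{average cost per period}) \;=\; \sum_{i=0}^{M} \pi_i \, C_{i\,d_i},
$$

where π_i is the steady-state probability of state i under the transition matrix induced by the policy, and C_{i d_i} is the expected immediate cost of taking the policy's prescribed decision in state i.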
(c) Use your IOR Tutorial to find these steady-state probabilities for each policy. Then evaluate the expression obtained in part (b) to find the optimal policy by exhaustive enumeration.
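As a check on the hand or IOR Tutorial computations, the exhaustive enumeration in part (c) can be sketched in a few lines of Python. The state/decision encoding below (0 = good mood, 1 = bad mood; decision 1 = provide refreshments, 0 = do not) and the helper names are choices made here, not part of the problem statement; the probabilities and dollar amounts come directly from it.

```python
from itertools import product

import numpy as np

# States: 0 = group starts the night in a good mood, 1 = bad mood.
# Decisions: 1 = provide refreshments, 0 = do not provide.
P_GOOD_IF_REFRESH = 7 / 8      # P(next night good | refreshments provided)
P_GOOD_IF_NO_REFRESH = 1 / 8   # P(next night good | no refreshments)
REFRESHMENT_COST = 14.0
POKER_LOSS = 75.0              # incurred only in (bad mood, no refreshments)

def immediate_cost(state: int, provide: int) -> float:
    """Expected cost C_ik of taking decision k in state i."""
    cost = REFRESHMENT_COST if provide else 0.0
    if state == 1 and not provide:
        cost += POKER_LOSS
    return cost

def transition_row(provide: int) -> list:
    """Next-state distribution [P(good), P(bad)]; depends only on the decision."""
    p_good = P_GOOD_IF_REFRESH if provide else P_GOOD_IF_NO_REFRESH
    return [p_good, 1.0 - p_good]

def average_cost(policy) -> float:
    """Long-run expected average cost per week of a stationary policy
    (policy[i] = decision taken in state i)."""
    P = np.array([transition_row(policy[s]) for s in (0, 1)])
    # Solve pi P = pi together with pi_0 + pi_1 = 1 for the steady state.
    A = np.vstack([(P.T - np.eye(2))[:-1], np.ones(2)])
    pi = np.linalg.solve(A, np.array([0.0, 1.0]))
    return sum(pi[s] * immediate_cost(s, policy[s]) for s in (0, 1))

# Exhaustive enumeration over the 2^2 = 4 stationary deterministic policies.
for policy in product((0, 1), repeat=2):
    print(policy, round(average_cost(policy), 3))
```

Enumerating this way reproduces the four policy evaluations of part (b): for example, always providing refreshments costs $14 per week regardless of the steady-state probabilities, while providing them only when the group is in a bad mood yields π_0 = π_1 = 1/2.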