Can any finite search problem be translated exactly into a Markov decision problem such that an optimal solution of the latter is also an optimal solution of the former? If so, explain precisely how to translate the problem and how to translate the solution back; if not, explain precisely why not (i.e., give a counterexample).
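One standard translation (sketched below with illustrative names, not from any particular library): keep the same states and actions, make each search step-cost c into a reward of -c, make every goal state an absorbing terminal state with reward 0, and use an undiscounted MDP (gamma = 1). Because transitions are deterministic, the optimal MDP policy followed from the start state traces out a minimum-cost path, which translates the solution back. The toy problem and helper functions here are assumptions for demonstration only.

```python
# A toy finite search problem: states, a start state, a set of goal states,
# and successors(state) -> list of (action, next_state, step_cost).
search_problem = {
    "states": ["A", "B", "C", "G"],
    "start": "A",
    "goals": {"G"},
    "successors": {
        "A": [("a1", "B", 1.0), ("a2", "C", 4.0)],  # two routes out of A
        "B": [("b1", "G", 5.0)],                     # A-B-G costs 6 total
        "C": [("c1", "G", 1.0)],                     # A-C-G costs 5 total
        "G": [],
    },
}

def to_mdp(problem):
    """Translate: reward(s, a) = -cost(s, a); goal states become terminal
    (no actions); the MDP is undiscounted (gamma = 1)."""
    mdp = {}
    for s in problem["states"]:
        if s in problem["goals"]:
            mdp[s] = []  # absorbing terminal state, reward 0
        else:
            mdp[s] = [(a, s2, -c) for (a, s2, c) in problem["successors"][s]]
    return mdp

def value_iteration(mdp, iters=100):
    """Undiscounted value iteration. Safe here because all step costs are
    positive and a goal is reachable, so values stay bounded."""
    V = {s: 0.0 for s in mdp}
    for _ in range(iters):
        for s, acts in mdp.items():
            if acts:
                V[s] = max(r + V[s2] for (_, s2, r) in acts)
    # Greedy policy with respect to the converged values.
    policy = {s: max(acts, key=lambda t: t[2] + V[t[1]])
              for s, acts in mdp.items() if acts}
    return V, policy

def policy_to_path(policy, start, goals):
    """Translate the solution back: follow the optimal policy from the start
    state to a goal, collecting the actions of the optimal search path."""
    s, path = start, []
    while s not in goals:
        a, s2, _ = policy[s]
        path.append(a)
        s = s2
    return path

mdp = to_mdp(search_problem)
V, policy = value_iteration(mdp)
path = policy_to_path(policy, search_problem["start"], search_problem["goals"])
print(path)  # the cheaper route A -> C -> G
```

In this toy instance the policy picks the path of cost 5 over the path of cost 6, and V(start) equals minus the optimal path cost, illustrating that the MDP's optimal policy and the search problem's optimal solution coincide.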