Question: Sometimes MDPs are formulated with a reward function R(s, a) that depends on the action taken, or a reward function R(s, a, s') that also depends on the outcome state.

a. Write the Bellman equations for these formulations.

b. Show how an MDP with reward function R(s, a, s') can be transformed into a different MDP with reward function R(s, a), such that optimal policies in the new MDP correspond exactly to optimal policies in the original MDP.

c. Now do the same to convert MDPs with R(s, a) into MDPs with R(s).
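The solution below is paywalled; what follows is only a sketch of the standard textbook formulations, not the attached answer. It assumes the usual notation: transition model P(s' | s, a), discount factor gamma, and utility function U.

```latex
% Part (a): Bellman equations for each reward formulation.
U(s) = \max_{a} \Bigl[\, R(s,a) + \gamma \sum_{s'} P(s' \mid s, a)\, U(s') \Bigr]

U(s) = \max_{a} \sum_{s'} P(s' \mid s, a) \bigl[\, R(s,a,s') + \gamma\, U(s') \bigr]

% Part (b): averaging the outcome state out of R(s,a,s') yields an
% equivalent R(s,a); the transition model is unchanged, so both MDPs
% have the same Bellman equation and hence the same optimal policies.
R(s,a) = \sum_{s'} P(s' \mid s, a)\, R(s,a,s')
```

For part (c), one standard device (again a sketch, not the attached solution) is to insert an artificial post-decision state between s and s' for each (s, a) pair and attach the reward R(s, a) to that new state as a state-only reward R(s).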

Step by Step Solution

This is a deceptively simple exercise that tests the student's understanding of the formal d...
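The part-(b) transformation can be checked numerically. The sketch below uses a made-up two-state, two-action MDP (all numbers are hypothetical, chosen only for illustration) and runs value iteration under both reward formulations; because R(s, a) is the expectation of R(s, a, s') under the transition model, the two utility functions coincide.

```python
import numpy as np

# Hypothetical toy MDP: 2 states, 2 actions.
# P[s, a, s'] is the transition model, R3[s, a, s'] the
# outcome-dependent reward.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R3 = np.array([[[1.0, 0.0], [0.0, 2.0]],
               [[0.5, 0.5], [1.0, -1.0]]])
gamma = 0.9

# Part (b): R(s, a) = sum_{s'} P(s'|s,a) * R(s, a, s')
R2 = (P * R3).sum(axis=2)

def value_iteration(reward_sa, n_iter=500):
    """Value iteration with a state-action reward R(s, a)."""
    U = np.zeros(2)
    for _ in range(n_iter):
        # max_a [ R(s,a) + gamma * sum_{s'} P(s'|s,a) U(s') ]
        U = (reward_sa + gamma * (P @ U)).max(axis=1)
    return U

def value_iteration_sas(n_iter=500):
    """Value iteration with an outcome-dependent reward R(s, a, s')."""
    U = np.zeros(2)
    for _ in range(n_iter):
        # max_a sum_{s'} P(s'|s,a) [ R(s,a,s') + gamma * U(s') ]
        U = (P * (R3 + gamma * U)).sum(axis=2).max(axis=1)
    return U

# The two formulations produce identical utilities, so their
# optimal policies correspond exactly.
print(np.allclose(value_iteration(R2), value_iteration_sas()))
```

Since each update is algebraically identical once R2 has absorbed the expectation over s', the two value-iteration runs produce the same iterates at every step, not just in the limit.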


Document Format (1 attachment): 21-C-S-A-I (251).docx (120 KB Word file)
