What is true about planning in RL?
1. Question 1 What is true about planning in RL? 1 point Planning allows us to compute (in contrast with learning) the best possible action. For planning, we do not need…
3. Question 3 What are different types of planning? 1 point Background planning starts after an agent’s transition into a new state; it is used to select an optimal…
2. Question 2 What are the differences between model-free and model-based settings? 1 point In a model-free setting, we know nothing about the environment dynamics. An agent is learning by…
3. Question 3 What does reward discounting mean for an agent? 1 / 1 point It reduces the variance of the return estimator by decreasing the contribution of distant rewards. Intuitively, think…
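The discounting answer above can be made concrete with a short sketch (Python is assumed here; `discounted_return` is an illustrative helper, not part of the course material):

```python
# Discounted return: G_t = r_t + gamma*r_{t+1} + gamma^2*r_{t+2} + ...
# Distant rewards are scaled down by powers of gamma, so they contribute
# less to the estimate (and hence less to its variance).

def discounted_return(rewards, gamma=0.9):
    """Compute the discounted sum of a reward sequence."""
    g = 0.0
    # Iterate backwards so each step folds in the discounted future.
    for r in reversed(rewards):
        g = r + gamma * g
    return g

print(discounted_return([1.0, 1.0, 1.0], gamma=0.5))  # 1 + 0.5 + 0.25 = 1.75
```

With a smaller gamma, the tail of the reward sequence matters less, which is exactly the variance-reduction intuition in the answer.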
2. Question 2 Which of the following may complicate optimization in RL? 1 / 1 point Negative feedback loops and careless reward normalization See the lecture for an illustration of the…
1. Question 1 Which of these are correct ways to alter the reward function? Note: by “correct” we mean that it does not change the optimal policy. 1 / 1 point Reshape…
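One standard way to alter the reward function without changing the optimal policy is potential-based shaping. A minimal Python sketch, assuming an illustrative potential function `phi` (the quiz does not specify one):

```python
# Potential-based reward shaping: replacing r with
#   r' = r + gamma * phi(s') - phi(s)
# leaves the optimal policy unchanged, because the shaping terms
# telescope along any trajectory.
# `phi` below is an illustrative toy potential, not from the quiz.

def shaped_reward(r, s, s_next, phi, gamma=0.99):
    return r + gamma * phi(s_next) - phi(s)

phi = lambda s: float(s)  # toy potential: the state index itself
# Over a full episode with gamma = 1, the shaping contribution telescopes
# to phi(s_T) - phi(s_0), independent of the path taken.
rewards = [shaped_reward(0.0, s, s + 1, phi, gamma=1.0) for s in range(3)]
print(sum(rewards))  # phi(3) - phi(0) = 3.0
```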
8. Question 8 Multiple regression is a powerful tool that creates a linear model using more than one independent variable. 1 / 1 point True False
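The statement above (answer: True) can be illustrated with a least-squares sketch in Python; the synthetic data and coefficient values here are assumptions for demonstration only:

```python
import numpy as np

# Multiple regression fits y ~ b0 + b1*x1 + b2*x2 using more than one
# independent variable. Fit a noiseless synthetic dataset whose true
# coefficients we choose ourselves, so the recovered values are known.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                # two independent variables
y = 3.0 + 2.0 * X[:, 0] - 1.0 * X[:, 1]      # linear target, no noise
A = np.column_stack([np.ones(len(X)), X])    # prepend an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 3))  # recovers [ 3.  2. -1.]
```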
5. Question 5 We can use Undo to reverse changes performed by a macro. 1 / 1 point True False
5. Question 5 Once you set a breakpoint in a macro, you can’t remove it. 1 / 1 point True False
6. Question 6 In unsupervised learning, we do not need the presence of a group label since we let algorithms tell us which group our observations fall into. 1 / 1 point True False
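To illustrate grouping without labels (answer: True), here is a minimal k-means sketch in Python; the deterministic initialization and toy data are illustrative assumptions, not part of the course:

```python
import numpy as np

# Unsupervised learning needs no group labels: a clustering algorithm
# assigns observations to groups on its own. A minimal k-means sketch:

def kmeans(X, k, iters=10):
    # Simple deterministic initialization (illustrative): pick points
    # spread evenly across the dataset as starting centers.
    idx = np.linspace(0, len(X) - 1, k).astype(int)
    centers = X[idx].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center.
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        for j in range(k):
            pts = X[labels == j]
            if len(pts):
                centers[j] = pts.mean(axis=0)
    return labels, centers

# Two well-separated blobs; k-means recovers the grouping with no labels.
X = np.vstack([np.zeros((10, 2)), np.ones((10, 2)) * 5])
labels, _ = kmeans(X, k=2)
print(labels)  # first ten points in one cluster, last ten in the other
```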