Reinforcement Learning with Linear Model of Reward Corruption
In this page, we introduce a framework to represent reward corruption called the linear model of reward corruption. Using this framework, we investigate the effects of various reward corruption patterns on the effectiveness of a Q-learning agent.
Principal Author: Nam Hee Gordon Kim
Abstract
In reinforcement learning (RL), the main observation an agent gathers is the reward it receives from the environment after taking a particular action in a particular state. However, sensors are subject to noise or corruption, and the agent's observed reward value may not accurately reflect the true reward the agent deserves for its action. In this page, we propose a simple framework for describing the discrepancies between the true reward value returned by the environment and the reward value observed by the agent. We use this framework to formalize simple scenarios where scaling, shifting, or noise is applied when the agent observes the reward value. We train a Q-learning agent on OpenAI Gym's Taxi-v2 task and evaluate whether scaling, shifting, and noise have adversarial effects on the agent's performance. Finally, we use theoretical derivations and algorithmic reasoning to explain the counter-intuitive results.
Builds on
Foundations of reinforcement learning can be found in the course notes: Course:CPSC522/Reinforcement Learning, Course:CPSC522/Markov Decision Process.
Related Pages
Another page on this Wiki, authored by Alistair Wick, discusses the effects of reward functions on an RL agent's performance.
Content
Introduction
In reinforcement learning (RL), learning is not based on a fixed dataset but on an environment that reacts to an agent's actions. Sutton and Barto[1] introduce RL problems "as problems [that] involve learning what to do--how to map situations to actions--so as to maximize a numerical reward signal." More precisely, RL is a class of problems pertaining to the Markov Decision Process (MDP), where an agent's interaction with an environment is characterized by the specification of a state space, an action space, transition probabilities, and a reward function. With the recent surge of interest in RL, the specification of the reward function has become a subject of interest for the community. In a blog article, Clark and Amodei[2] of OpenAI demonstrated failure cases of an RL agent where the reward in the MDP is misspecified. Amodei et al.[3] formalize the issue of reward hacking as a concrete AI safety problem. Everitt et al.[4] investigate other possible patterns of reward corruption, such as sensory error, wireheading, and misinterpretation on behalf of a cooperative inverse reinforcement learning agent[5].
In this page, we explore a simple case of MDP reward corruption, where the reward channel is corrupted in such a way that the observed reward is a linear function of the true reward plus some noise. We investigate the effect of various linear corruption patterns on the behavior of the RL agent.
Preliminaries
In this section, we briefly review the concept of Markov Decision Process and introduce the Corrupt Reward Markov Decision Process framework. We furthermore introduce the linear model framework to be investigated.
Markov Decision Process
We will borrow the notation convention from Sutton and Barto's textbook[1]. An MDP is characterized by a 4-tuple of the following objects:
- State space: $\mathcal{S}$
- Action space: $\mathcal{A}$
- State transition probability (also called dynamics): $p(s' \mid s, a)$, where $s$, $a$, $s'$ are the current state, the action of the agent, and the resulting state, respectively.
- Reward function: $r(s, a)$, where $s$, $a$ are the current state and the action of the agent, respectively.
A simple trajectory in an MDP resembles the following form:

$$S_0, A_0, R_1, S_1, A_1, R_2, \ldots, S_{T-1}, A_{T-1}, R_T$$

where $T$ is the number of state transitions simulated.
Corrupt Reward Markov Decision Process
Everitt et al.[4] build on this notation and introduce the corrupt reward MDP (CRMDP), which is specified by the following objects (we modified the notation scheme to track Sutton's more closely):
- State space: $\mathcal{S}$
- Action space: $\mathcal{A}$
- State transition probability (also called dynamics): $p(s' \mid s, a)$, where $s$, $a$, $s'$ are the current state, the action of the agent, and the resulting state, respectively.
- Finite set of rewards: $\mathcal{R}$
- True reward function: $\dot{r}: \mathcal{S} \times \mathcal{A} \to \mathcal{R}$
- Reward corruption function: $c: \mathcal{S} \times \mathcal{R} \to \mathcal{R}$
where the finite set of rewards $\mathcal{R}$ is fixed, and the true reward $\dot{R}_t$ and the observed signal $\hat{R}_t$ are elements of $\mathcal{R}$. Under this description, Everitt et al.[4] assume that reward corruption replaces the true reward with some other value in $\mathcal{R}$, rather than perturbing it numerically.
Linear Model of Reward Corruption
We propose to examine a reward corruption function that is a linear function of the true reward with additive Gaussian noise. Suppose we have the following trajectory as a result of a simulation of a CRMDP (with a finite horizon of $T$ timesteps):

$$S_0, A_0, \hat{R}_1, S_1, A_1, \hat{R}_2, \ldots, S_{T-1}, A_{T-1}, \hat{R}_T$$

Let us suppose that the observed rewards $\hat{R}_t$ are corrupt observations of the true reward signals $\dot{R}_t$. We will assume that $\hat{R}_t$ is a linear function of $\dot{R}_t$ of the following form:

$$\hat{R}_t = \beta \dot{R}_t + \beta_0 + \epsilon_t, \qquad \epsilon_t \sim \mathcal{N}(0, \sigma^2)$$

where $\beta$ and $\beta_0$ are random variables corresponding to the slope and the intercept of the line, and $\epsilon_t$ is the noise term. Note that $\epsilon_t$ is a Gaussian process with a constant variance $\sigma^2$. We will assume $\beta$, $\beta_0$, and $\sigma$ are artifacts of a malfunctioning reward sensor and are fixed across all timesteps. (A minimal code sketch of this corruption model appears at the end of this subsection.)
Figure 0. A simple graphical model representing the reward corruption process according to the proposed linear model. Note that the true reward value is independent of the parameters of the linear model, and a machine error may cause the observed reward to be scaled ($\beta$), shifted ($\beta_0$), or made noisy ($\sigma$).
We justify this framework for the following reasons:
- Plausibility. If we assume that reward corruption is mainly due to a faulty sensor mechanism, the bias, noise, and amplification introduced by machine errors can be reasonably captured by a linear model.
- Simplicity. We can characterize the faulty reward sensor behaviors with varying values of the three parameters $(\beta, \beta_0, \sigma)$.
- Tractability. This linear transformation closely follows linear models in classic statistical theories. This allows us to connect reinforcement learning to inference, hypothesis testing, and other well-known statistical problems.
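To make the corruption model concrete, the following is a minimal Python sketch of the linear corruption function. The function name and default values are illustrative assumptions, not part of the experimental code.

```python
import numpy as np

def corrupt_reward(true_reward, beta=1.0, beta0=0.0, sigma=0.0, rng=np.random.default_rng()):
    """Return the observed reward under the linear corruption model.

    observed = beta * true + beta0 + noise, with noise ~ N(0, sigma^2).
    beta scales the reward, beta0 shifts it, and sigma controls the noise level.
    """
    noise = rng.normal(0.0, sigma) if sigma > 0 else 0.0
    return beta * true_reward + beta0 + noise

# Example: a true reward of -1 (a typical Taxi-v2 step reward) under a positive shift.
print(corrupt_reward(-1.0, beta=1.0, beta0=5.0, sigma=0.0))  # -> 4.0
```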
Experiments
In our experiments, we employ the proposed reward corruption function. We use OpenAI Gym's Taxi-v2 task[6] and observe the effect of various reward corruption patterns on the rate of convergence of the Q-learning algorithm. We use standard Q-learning to compute the agent's policy.
We postulate the following properties of the transformation of the reward:
- Hypothesis 1: Inherent robustness. Without further modifications, the agent should be able to perform optimally against a mild level of noise.
- Hypothesis 2: Shift invariance. Adding a constant value to reward at every timestep should not make a significant difference in the resulting policy.
- Hypothesis 3: Positive scaling invariance. Multiplying the reward by a positive constant at every timestep should not make a significant difference in the resulting policy.
Shift invariance and positive scaling invariance are motivated by the operations that preserve convexity/concavity in optimization theory.
We propose different corruption scenarios, simulated by a simple transformation of the true reward function. We investigate the following configurations of our linear function parameters:
- Scenario 0: No Corruption (Baseline). $\beta = 1$, $\beta_0 = 0$, $\sigma = 0$. There is no corruption, and thus the observed reward is equal to the true reward.
- Scenario 1: Gaussian Noise with Constant Variance. $\beta = 1$, $\beta_0 = 0$, $\sigma > 0$. Sample a real number from a univariate Gaussian distribution $\mathcal{N}(0, \sigma^2)$ and add it to the true reward. We examine the effect of varying values of $\sigma$.
- Scenario 2: Shifted Reward. $\beta = 1$, $\beta_0 \neq 0$, $\sigma = 0$. A constant $\beta_0$ is added to the true reward at every timestep. We examine varying values of $\beta_0$.
- Scenario 3: Scaled Reward. $\beta \neq 1$, $\beta_0 = 0$, $\sigma = 0$. The true reward is multiplied by a constant $\beta$ at every timestep. We examine varying values of $\beta$.
We used the following steps at each timestep to perform the experiments:
- Use the Q values to select the action $A_t$ of the agent based on the current state $S_t$ (with $\epsilon$-greedy exploration)
- Get the next state $S_{t+1}$ and the true reward $\dot{R}_{t+1}$ from the environment
- Apply $\hat{R}_{t+1} = \beta \dot{R}_{t+1} + \beta_0 + \epsilon_t$ to get the observed reward
- Use $\hat{R}_{t+1}$ to update the Q values as per the standard Q-learning procedure.
We used the temporal-difference update for the Q values with a fixed learning rate $\alpha$, used a fixed $\epsilon$ for the $\epsilon$-greedy exploration strategy, and used a fixed discount rate $\gamma$ when updating the Q values.
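The following Python sketch illustrates this training loop under the assumptions above. It is not the exact experimental code: the hyperparameter values are placeholders, and it assumes the classic Gym API (in newer Gym/Gymnasium releases the environment is named Taxi-v3 and reset/step return extra values).

```python
import numpy as np
import gym  # classic OpenAI Gym API assumed

# Placeholder hyperparameters (the exact values used in the experiments are not shown here).
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
BETA, BETA0, SIGMA = 1.0, 0.0, 0.0           # Scenario 0: no corruption
rng = np.random.default_rng(0)

env = gym.make("Taxi-v2")                    # "Taxi-v3" in newer Gym releases
Q = np.zeros((env.observation_space.n, env.action_space.n))  # all-zero initialization

for episode in range(10_000):
    state = env.reset()
    done = False
    true_return = 0.0                        # cumulative true reward, kept for evaluation only
    while not done:
        # epsilon-greedy action selection on the learned Q values
        if rng.random() < EPSILON:
            action = env.action_space.sample()
        else:
            action = int(np.argmax(Q[state]))

        next_state, true_reward, done, _ = env.step(action)
        true_return += true_reward

        # linear reward corruption: the agent only ever sees this value
        observed_reward = BETA * true_reward + BETA0 + rng.normal(0.0, SIGMA)

        # standard temporal-difference (Q-learning) update using the observed reward
        td_target = observed_reward + GAMMA * np.max(Q[next_state])
        Q[state, action] += ALPHA * (td_target - Q[state, action])
        state = next_state
```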
Results
We trained the Q-learning agent on the Taxi-v2 task for 1e4 episodes using the configurations described in the sections above. Although the agent is only allowed to observe and work with the observed reward values $\hat{R}_t$, we also collected the true reward values $\dot{R}_t$ to evaluate the agent's objective performance.
Effects of Reward Corruption on Rate of Convergence
We produced three different sets of agents, one for each of Scenarios 1, 2, and 3. In each scenario, we varied the parameter of interest and plotted the accumulated true reward values $\sum_{t=1}^{T} \dot{R}_t$ (where $T$ is the number of timesteps in an episode) as a function of training episodes. The results are illustrated in the following figures:
Figure 1. Varying levels of sigma (standard deviation) with the fixed mean of 0 are used to apply Gaussian noise to true reward returned by OpenAI Gym's Taxi-v2 environment. The resulting corrupt reward is used to compute Q values, while the true reward is collected to observe the agent's performance. Blue line (perfect reward) shows best performance, as expected.
Figure 2. Cumulative true reward from the environment is plotted as a function of training episode. Perfect reward observation (beta0=0, blue) along with negative shifts converge to the global optimum, while positive shifts are stuck in a local optimum.
Figure 3. Cumulative true reward from the environment is plotted as a function of training episodes. Agents with positively scaled rewards converge to the global optimum. Negative rewards have an extremely adversarial effect on the agent's performance.
Discussions
The following bullet points summarize our findings:
- Finding 1: Figure 1 supports Hypothesis 1: Inherent Robustness. However, with a high amount of noise variance, the agent's performance deteriorates.
- Finding 2: Figure 2 contradicts Hypothesis 2: Shift Invariance. It seems that adding a positive constant to the true reward at every time step has an adversarial effect on the agent's performance. Negative shifts, however, did not affect the agent's performance.
- Finding 3: Figure 3 supports Hypothesis 3: Positive Scaling Invariance.
Findings 1 and 2 are surprising. Intuitively, the expected return of the agent should not be affected by a zero-mean noise process. Also, it is surprising that only positive reward shifts would hurt the performance.
Below, we unpack these findings with simple theoretical derivations.
Discounted Return with Corrupt Reward
Recall the definition of the discounted return from Sutton and Barto:[1]

$$\dot{G}_t = \sum_{k=0}^{\infty} \gamma^k \dot{R}_{t+k+1}$$

Let us introduce the notion of the discounted observed return:

$$\hat{G}_t = \sum_{k=0}^{\infty} \gamma^k \hat{R}_{t+k+1}$$
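Substituting the linear corruption model into this definition gives a relation between the observed and true returns that the remarks below build on; it follows directly from the definitions above, using the geometric series $\sum_{k=0}^{\infty} \gamma^k = \frac{1}{1-\gamma}$ for $0 \le \gamma < 1$:

$$\hat{G}_t = \sum_{k=0}^{\infty} \gamma^k \left( \beta \dot{R}_{t+k+1} + \beta_0 + \epsilon_{t+k+1} \right) = \beta\, \dot{G}_t + \frac{\beta_0}{1-\gamma} + \sum_{k=0}^{\infty} \gamma^k \epsilon_{t+k+1}$$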
Q-Learning Update Step with Corrupt Reward
Recall the Q-learning update step. The update itself is agnostic to timesteps, meaning the iterative updates only depend on the values in $Q$ that have been computed so far. The agent calculates the updated value for an entry in a matrix $Q$ based on the already populated values:

$$Q(S, A) \leftarrow Q(S, A) + \alpha \left[ \dot{R} + \gamma \max_{a} Q(S', a) - Q(S, A) \right]$$

where $A$ is the action selected by the policy based on state $S$. Note that $S'$ and $\dot{R}$ are generated by the environment having received the state-action pair $(S, A)$. In a usual Q-learning setup, the agent selects the next update target by $\max_{a} Q(S', a)$. Now, consider the Q-learning update step with the observed reward:

$$\hat{Q}(S, A) \leftarrow \hat{Q}(S, A) + \alpha \left[ \hat{R} + \gamma \max_{a} \hat{Q}(S', a) - \hat{Q}(S, A) \right]$$

where the entries of $\hat{Q}$ must be computed based on the observed reward value $\hat{R}$. Suppose our matrix $\hat{Q}$ is initialized with zeros.
For ease of understanding, we simplify the notation and think of $\hat{Q}(S, \cdot)$ as a vector, and the $\max$ operation is taken over that vector.
A Q-learning agent's policy is such that the action yielding the best Q-value is selected given a state. More formally,

$$\pi(s) = \arg\max_{a} Q(s, a)$$
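In code, with the tabular Q stored as an $|\mathcal{S}| \times |\mathcal{A}|$ array as in the training-loop sketch above, this greedy policy is a one-line lookup:

```python
import numpy as np

Q = np.zeros((500, 6))            # Taxi-v2 has 500 states and 6 actions
greedy_policy = Q.argmax(axis=1)  # one greedy action per state
action = greedy_policy[42]        # the greedy action for (hypothetical) state 42
```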
Figure 1 suggests that some level of noise can be tolerated. Let us first examine whether the Gaussian noise affects the theoretical value functions of the RL agent. Recall the definition of the value functions from Sutton and Barto[1]:

$$v_\pi(s) = \mathbb{E}_\pi\left[ G_t \mid S_t = s \right], \qquad q_\pi(s, a) = \mathbb{E}_\pi\left[ G_t \mid S_t = s, A_t = a \right]$$
In our noise scenario, we established $\hat{R}_t = \dot{R}_t + \epsilon_t$ with $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$. By the independence between $\epsilon_t$ and $\dot{R}_t$:

$$\hat{v}_\pi(s) = \mathbb{E}_\pi\left[ \sum_{k=0}^{\infty} \gamma^k \left( \dot{R}_{t+k+1} + \epsilon_{t+k+1} \right) \,\middle|\, S_t = s \right] = \dot{v}_\pi(s) + \sum_{k=0}^{\infty} \gamma^k\, \mathbb{E}\left[ \epsilon_{t+k+1} \right]$$

Now, recall $\epsilon_t \sim \mathcal{N}(0, \sigma^2)$ and therefore $\mathbb{E}[\epsilon_t] = 0$ for all $t$. Therefore, $\hat{v}_\pi(s) = \dot{v}_\pi(s)$. Similarly, $\hat{q}_\pi(s, a) = \dot{q}_\pi(s, a)$ if $\beta = 1$ and $\beta_0 = 0$. Thus, the Bellman equations show that model-based RL agents are able to recover the true value functions in the presence of zero-mean noise.
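A quick numerical sanity check of this claim (illustrative only; the reward sequence and parameters are arbitrary): averaging discounted returns over many noisy observations recovers the noise-free discounted return.

```python
import numpy as np

rng = np.random.default_rng(0)
gamma, sigma, T, n_samples = 0.9, 5.0, 50, 100_000

true_rewards = rng.uniform(-10.0, 10.0, size=T)   # an arbitrary fixed reward sequence
discounts = gamma ** np.arange(T)
true_return = discounts @ true_rewards

# Add fresh zero-mean Gaussian noise to every reward, many times over.
noise = rng.normal(0.0, sigma, size=(n_samples, T))
noisy_returns = (true_rewards + noise) @ discounts

print(true_return, noisy_returns.mean())          # the two values should be close
```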
However, it is evident that increasing the variance has a detrimental effect on the Q-learning agent in terms of the rate of convergence.
The scenario dictates that $\hat{R} = \dot{R} + \epsilon$. Therefore we can rewrite the update as:

$$\hat{Q}(S, A) \leftarrow \hat{Q}(S, A) + \alpha \left[ \dot{R} + \epsilon + \gamma \max_{a} \hat{Q}(S', a) - \hat{Q}(S, A) \right]$$

Now, the Q-learning algorithm dictates that the entries of $\hat{Q}$ were already populated according to this recurrence, for all appropriate state-action pairs $(S, A)$.
Let us note that our matrix $\hat{Q}$ is initialized with zeros. Let us suppose we are doing the first few iterations of Q-learning. Then the choice of $\max_{a} \hat{Q}(S', a)$ is sensitive to the extraneous noise involved in computing the entries of $\hat{Q}$. For example, if the sampled noise $\epsilon$ is sufficiently negative for some transition, then the corresponding entry of $\hat{Q}$ is likely to be negative, and an untouched zero entry could then win the $\max$ instead. As such, the value selected with $\max_{a} \hat{Q}(S', a)$ can fluctuate very easily depending on the actual value of $\epsilon$ at each iteration of Q-learning. A larger variance allows larger noise magnitudes $|\epsilon|$ to emerge more frequently, which makes $\max_{a} \hat{Q}(S', a)$ more brittle, making the Q-learning agent's policy unlikely to converge to the global optimum.
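A tiny illustration of this brittleness (hypothetical numbers; one state row of $\hat{Q}$ early in training): the sign of a single noisy update decides which action the greedy $\max$ prefers.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha, gamma, sigma = 0.5, 0.9, 5.0

for trial in range(3):
    q_row = np.zeros(6)                              # Q-hat(S, .) for one Taxi state, all zeros
    noisy_reward = -1.0 + rng.normal(0.0, sigma)     # true step reward of -1 plus noise
    q_row[2] += alpha * (noisy_reward + gamma * q_row.max() - q_row[2])
    # If the noise pushed the observed reward above zero, action 2 now wins the argmax;
    # otherwise an untouched zero entry wins instead.
    print(f"trial {trial}: q_row[2] = {q_row[2]:+.2f}, greedy action = {q_row.argmax()}")
```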
Contrary to our hypothesis, Figure 2 suggests that the performance of the Q-learning agent is not completely invariant to the shifted reward. While negative shifts result in the same behavior as the baseline, positive shifts result in sub-optimal cumulative true rewards.
We will follow steps similar to those in the noise analysis above. In our reward shifting scenario, we established $\hat{R}_t = \dot{R}_t + \beta_0$. Let us also use the linearity of expectation, as well as the fact that $\beta_0$ is independent of $\dot{R}_t$. Then the observed value function can be written:

$$\hat{v}_\pi(s) = \mathbb{E}_\pi\left[ \sum_{k=0}^{\infty} \gamma^k \left( \dot{R}_{t+k+1} + \beta_0 \right) \,\middle|\, S_t = s \right] = \dot{v}_\pi(s) + \frac{\beta_0}{1 - \gamma}$$

And similarly, $\hat{q}_\pi(s, a) = \dot{q}_\pi(s, a) + \frac{\beta_0}{1 - \gamma}$.

Therefore, a nonzero $\beta_0$ introduces a constant bias to the value functions, and the magnitude of the bias is inversely proportional to $1 - \gamma$. Theoretically, this should not be a problem for model-based RL agents. Recall that the optimal policy is such that:

$$\pi_*(s) = \arg\max_{a} \sum_{s'} p(s' \mid s, a) \left[ \dot{r}(s, a) + \gamma\, v_*(s') \right]$$

where $v_*$ is the optimal state value function. With our observed reward,

$$\hat{\pi}_*(s) = \arg\max_{a} \sum_{s'} p(s' \mid s, a) \left[ \dot{r}(s, a) + \beta_0 + \gamma\, \hat{v}_*(s') \right] = \arg\max_{a} \left( \sum_{s'} p(s' \mid s, a) \left[ \dot{r}(s, a) + \gamma\, v_*(s') \right] + \frac{\beta_0}{1 - \gamma} \right)$$

Since the $\arg\max$ operation is invariant to adding/subtracting constants,

$$\hat{\pi}_*(s) = \pi_*(s)$$

Then the true-reward policy must be the same as the observed-reward policy, i.e. $\hat{\pi}_* = \pi_*$. Therefore the convergence onto the optimal policy should not be affected by Scenario 2.
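The shift-invariance of $\arg\max$ is easy to verify numerically (arbitrary example values):

```python
import numpy as np

q_values = np.array([-3.0, 1.5, 0.2, -0.7])   # arbitrary action values for one state
shift = 10.0
# Adding the same constant to every entry never changes the argmax.
assert np.argmax(q_values) == np.argmax(q_values + shift)
```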
Now, we explain why we observe the adverse effects of positive reward shifts in Q-learning. Our approach is very similar to the analysis of the noise scenario above.
Again, let us suppose we are doing the first few iterations of Q-learning. Note the following:
- Recall that typical Q-learning selects update targets with $\max_{a} \hat{Q}(S', a)$.
- If $\beta_0 > 0$ then we introduce a large bias in selecting $\max_{a} \hat{Q}(S', a)$: the maximizing entry is very likely to be one that has just been populated.
- If $\beta_0 < 0$ then $\max_{a} \hat{Q}(S', a) = 0$ is likely to hold, i.e. the maximum is taken at an entry that has not yet been populated, regardless of the magnitude of $\beta_0$.
Let us notice that the first case ($\beta_0 > 0$) can introduce a positive feedback loop, i.e. the agent will select the same few state-action pairs over and over. The second case ($\beta_0 < 0$) does not suffer from the positive feedback loop, thanks to the agent selecting entries that have not yet been populated. Eventually, all entries of $\hat{Q}$ are populated and thus the agent can proceed to compute new values without bias.
Therefore, positive values of $\beta_0$ introduce a reward-hacking behavior in Q-learning agents by creating a positive feedback loop, while negative values of $\beta_0$ end up producing the optimal policy, given enough iterations.
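A minimal illustration of this asymmetry (a hypothetical single-state setting with zero-initialized values and a true step reward of -1, as in most Taxi-v2 transitions): a positive shift makes the freshly updated entry win the greedy $\arg\max$, while a negative shift makes the agent prefer the untouched zero entries.

```python
import numpy as np

alpha, gamma = 0.5, 0.9

def one_update(beta0, action=0):
    q_row = np.zeros(4)                      # zero-initialized Q row for one state
    observed = -1.0 + beta0                  # shifted observation of a true reward of -1
    q_row[action] += alpha * (observed + gamma * q_row.max() - q_row[action])
    return q_row

q_pos = one_update(beta0=+20.0)
print(q_pos, "greedy action:", q_pos.argmax())  # the just-updated action 0 dominates (feedback loop)

q_neg = one_update(beta0=-20.0)
print(q_neg, "greedy action:", q_neg.argmax())  # an untouched zero-valued action is preferred
```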
Figure 3 confirms that positive scaling does not negatively affect the agent's learning behavior. We observe that the agent may converge to the global optimum even faster than the baseline. However, when a negative multiplier is used, the agent is unable to learn the optimal Q values.
Again, we theoretically assert that $\hat{R}_t = \beta \dot{R}_t$ with $\beta > 0$. By the linearity of expectation, we simply show

$$\hat{v}_\pi(s) = \mathbb{E}_\pi\left[ \sum_{k=0}^{\infty} \gamma^k \beta \dot{R}_{t+k+1} \,\middle|\, S_t = s \right] = \beta\, \dot{v}_\pi(s)$$

and $\hat{q}_\pi(s, a) = \beta\, \dot{q}_\pi(s, a)$ similarly.
Then with the model-based policy:

$$\hat{\pi}_*(s) = \arg\max_{a} \sum_{s'} p(s' \mid s, a) \left[ \beta\, \dot{r}(s, a) + \gamma\, \hat{v}_*(s') \right] = \arg\max_{a}\, \beta \sum_{s'} p(s' \mid s, a) \left[ \dot{r}(s, a) + \gamma\, v_*(s') \right]$$

Now, we must suppose $\beta > 0$ to proceed. Since the $\arg\max$ operation is invariant to positive scaling,

$$\hat{\pi}_*(s) = \pi_*(s)$$

Therefore, positive scaling of the true reward should still let the RL agent converge to the optimal policy. If $\beta < 0$ then this equality does not hold. Q-learning with all-zero initialization honors this scaling invariance by default: aside from the entries of $\hat{Q}$ having values of a different scale, the algorithm itself is not inherently affected by scaling. Finding 3 corroborates this fact.
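As with the shift case, the scaling behavior of $\arg\max$ can be checked directly (arbitrary example values): positive scaling preserves the maximizing action, while a negative multiplier reverses the ordering.

```python
import numpy as np

q_values = np.array([-3.0, 1.5, 0.2, -0.7])
assert np.argmax(2.5 * q_values) == np.argmax(q_values)   # positive scaling: same greedy action
assert np.argmax(-1.0 * q_values) == np.argmin(q_values)  # negative scaling: the worst action now looks best
```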
Finally, we interpret the effect of positive reward scaling on RL agents. Earlier, we established that $\hat{v}_\pi(s) = \beta\, \dot{v}_\pi(s)$ via the linearity of expectation. However, it is possible to frame this property as follows:

$$\hat{G}_t = \sum_{k=0}^{\infty} \gamma^k \beta \dot{R}_{t+k+1} = \beta \dot{R}_{t+1} + \sum_{k=1}^{\infty} \left( \gamma\, \beta^{1/k} \right)^k \dot{R}_{t+k+1}$$

Note that $\gamma$, the discount rate, is now modified by $\beta^{1/k}$ (with $k \geq 1$) at each timestep. Interestingly, $\beta^{1/k}$ is a decaying function that is independent of $\gamma$, and its influence on the value function asymptotically decreases with the number of timesteps taken. A large value of $\beta$ results in larger values of $\beta^{1/k}$, which overall increases the value functions, although the decay is steeper than that of smaller $\beta$ values. We suggest that positive scaling can be thought of as an application of a non-linear scaling to the discount rate $\gamma$.
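The modification factor $\beta^{1/k}$ can be tabulated to see this decay toward 1 (illustrative values of $\beta$):

```python
import numpy as np

ks = np.arange(1, 11)
for beta in (2.0, 10.0):
    factors = beta ** (1.0 / ks)       # the per-step modification applied to gamma
    print(beta, np.round(factors, 3))  # large at k=1, decaying toward 1 as k grows
```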
Limitations and Future Work
We only explored simple scenarios of the linear model introduced above, where only one parameter at a time deviates from the no-corruption baseline. Exploring more general cases of the linear model of reward corruption would be a natural next step. In this page, we only examined the Taxi-v2 task, whose environment is simple, discrete, and deterministic. Taxi-v2 can be solved via Q-learning, and thus the observed effects of reward corruption may not fully extend to other model-free methods or to model-based methods. As stated, these observations were gathered with a Q-learning algorithm with all-zero initialization, and different initial values for the Q values may yield different empirical results.
Conclusion
We investigated a possible abstraction for reward corruption mechanism in reinforcement learning agents. The linear model of reward corruption gives us a simple framework to examine the effects of shifting, scaling, and noise on the performance of RL agents.
Annotated Bibliography
- [1] Sutton, Richard S., and Andrew G. Barto. Reinforcement Learning: An Introduction. MIT Press, 2018.
- [2] Clark, Jack, and Dario Amodei. "Faulty Reward Functions in the Wild." OpenAI, 22 Dec. 2016, https://openai.com/blog/faulty-reward-functions/.
- [3] Amodei, Dario, et al. "Concrete Problems in AI Safety." arXiv preprint arXiv:1606.06565, 2016, http://arxiv.org/abs/1606.06565.
- [4] Everitt, Tom, et al. "Reinforcement Learning with a Corrupted Reward Channel." arXiv preprint arXiv:1705.08417, 2017.
- [5] Hadfield-Menell, Dylan, et al. "Cooperative Inverse Reinforcement Learning." Advances in Neural Information Processing Systems 29, 2016.
- [6] OpenAI. Gym: A Toolkit for Developing and Comparing Reinforcement Learning Algorithms. https://gym.openai.com. Accessed 15 Apr. 2019.
Permission is granted to copy, distribute and/or modify this document according to the terms in Creative Commons License, Attribution-NonCommercial-ShareAlike 3.0. The full text of this license may be found here: CC by-nc-sa 3.0