Nathaniel Hamilton, Parallax Advanced Research, Beavercreek, OH, USA (nathaniel.hamilton@parallaxresearch.org)
Kyle Dunlap, Parallax Advanced Research, Beavercreek, OH, USA (kyle.dunlap@parallaxresearch.org)
Kerianne L. Hobbs, Autonomy Capability Team (ACT3), Air Force Research Laboratory, Wright-Patterson Air Force Base, USA (kerianne.hobbs@us.af.mil)
Abstract
For many space applications, traditional control methods are often used during operation. However, as the number of space assets continues to grow, autonomous operation can enable rapid development of control methods for different space related tasks. One method of developing autonomous control is Reinforcement Learning (RL), which has become increasingly popular after demonstrating promising performance and success across many complex tasks. While it is common for RL agents to learn bounded continuous control values, this may not be realistic or practical for many space tasks that traditionally prefer an on/off approach for control. This paper analyzes using discrete action spaces, where the agent must choose from a predefined list of actions. The experiments explore how the number of choices provided to the agents affects their measured performance during and after training. This analysis is conducted for an inspection task, where the agent must circumnavigate an object to inspect points on its surface, and a docking task, where the agent must move into proximity of another spacecraft and “dock” with a low relative speed. A common objective of both tasks, and most space tasks in general, is to minimize fuel usage, which motivates the agent to regularly choose an action that uses no fuel. Our results show that a limited number of discrete choices leads to optimal performance for the inspection task, while continuous control leads to optimal performance for the docking task.
Index Terms:
Deep Reinforcement Learning, Aerospace Control, Ablation Study
I Introduction
Autonomous spacecraft operation is critical as the number of space assets grows and operations become more complex. For On-orbit Servicing, Assembly, and Manufacturing (OSAM) missions, tasks such as inspection and docking enable the assessment, planning, and execution of different objectives. While these tasks are traditionally executed using classical control methods, this requires constant monitoring and adjustment by human operators, which becomes challenging or even impossible as the complexity of the task increases. As a result, the importance of developing high-performing autonomy is growing.
Reinforcement Learning (RL) is a fast-growing field for developing high-performing autonomy with growing impact, spurred by success in agents that learn to beat human experts in games like Go [1] and Starcraft [2]. RL is a promising approach to spacecraft operations due to its ability to react in real-time to changing mission objectives and environment uncertainty [3, 4]. Previous works demonstrate using RL to develop waypoints for an inspection mission [5, 6], and to inspect an uncooperative space object [7]. Additionally, RL has been used for similar docking problems, including a six degree-of-freedom docking task [8], avoiding collisions during docking [9], and guidance for docking [10]. Despite these successes, in order for RL solutions to be used in the real world, they must control the spacecraft in a way that is acceptable to human operators.
Spacecraft control designers and operators typically prefer the spacecraft to choose from a set of discrete actions, where the thrusters are either fully on or off. In general, this follows Pontryagin’s maximum principle [11], which provides conditions for minimizing a cost function to find an optimal trajectory from one state to another; in this case, the cost is fuel use. In contrast, it is common for RL agents to operate in a continuous control space at a specified frequency, where control values can be any value within a certain range. Transitioning from a continuous control space to a discrete one can result in choppy control outputs with poor performance when the discretization is coarse, or an oversized policy that takes too long to train when the discretization is fine [12].
In this paper, we compare RL agents trained using continuous control and this classical control principle to determine their advantages and identify special cases. Our experiments focus on two spacecraft tasks: inspection (viewing the surface of another vehicle) and docking (approaching and joining with another vehicle). This paper builds on previous work done using RL to solve the inspection task with illumination [13] and the docking task [3]. For the same docking task, the effect of Run Time Assurance during RL training was analyzed [14, 15]. For a 2D version of the docking task, LQR control was compared to a bang-bang controller [16], similar to the agent with three discrete choices that will be analyzed in this paper.
The main contributions of this work include answering the following questions.
Q 1. Will increasing the likelihood of choosing “no thrust” improve fuel efficiency?
Fuel efficiency is critical in space missions because fuel is a limited resource that needs to last beyond any single task. The most effective method to minimize fuel use is for the agent to choose “no thrust”. To this end, we explore two different ways of increasing the likelihood of choosing “no thrust”: (1) transitioning from a continuous to a discrete action space, and (2) decreasing the action space magnitude, so that the continuous range of values is smaller. The results are found in Section V-A.
Q 2. Does a smaller action magnitude or finer granularity matter more at different operating ranges?
The inspection and docking tasks require different operating ranges. For inspection, agents need to circumnavigate the chief, and remaining farther away yields better coverage of its surface. In contrast, agents need to move closer to the chief in order to complete the docking task. To this end, we explore how the operating range impacts the need for either smaller action magnitudes or finer granularity of choices. The results are found in Section V-B.
Q 3. Is there an optimal balance between discrete and continuous actions?
While RL agents often perform better with continuous actions, they would likely be more readily accepted for real-world use with discrete actions. To this end, we explore whether a balance can be found between discrete and continuous control that provides an optimal solution suitable for both RL training and real-world operation. The results are found in Section V-C.
II Deep Reinforcement Learning
Reinforcement Learning (RL) is a form of machine learning in which an agent acts in an environment, learns through experience, and increases its performance based on rewarded behavior. Deep Reinforcement Learning (DRL) is a newer branch of RL in which a neural network is used to approximate the behavior function, i.e. the policy. The agent uses a Neural Network Controller (NNC) trained by the RL algorithm to take actions in the environment, which can be comprised of any dynamical system, from Atari simulations ([17, 18]) to complex robotics scenarios ([19, 20, 21, 22, 23, 24]).
Reinforcement learning is based on the reward hypothesis that all goals can be described by the maximization of expected return, i.e. the cumulative reward [25]. During training, the agent chooses an action, $a_t$, based on the input observation, $o_t$. The action is then executed in the environment, updating the internal state, $s_t$, according to the plant dynamics. The updated state, $s_{t+1}$, is then assigned a scalar reward, $r_t$, and transformed into the next observation vector. The process of executing an action and receiving a reward and next observation is referred to as a timestep. Relevant values, like the input observation, action, and reward, are collected as a data tuple, i.e. sample, by the RL algorithm to update the current NNC policy, $\pi$, to an improved policy, $\pi'$. How often these updates occur depends on the RL algorithm.
In this work we focus solely on Proximal Policy Optimization (PPO) as our DRL algorithm of choice. PPO has demonstrated success in the space domain for multiple tasks and excels at finding optimal policies across many other domains [26, 15, 3, 14, 13]. Additionally, PPO works for both discrete and continuous action spaces, allowing us to test both types of action spaces without switching algorithms. For RL, the action space is typically defined as either discrete or continuous. For a discrete action space, the agent has a finite set of choices for the action. For a continuous action space, the agent can choose any value for the action within a given range, so it can be thought of as having infinitely many choices. In general, discrete action spaces tend to be used for simple tasks while continuous action spaces tend to be used for more complex tasks.
Due to the stochastic nature of RL, the agent typically selects random action values at the beginning of the training process. It then learns from this experience, and selects the actions that maximize the reward function. By using discrete actions, it becomes much easier for the agent to randomly select specific discrete actions, and the agent can quickly learn that these actions are useful. This motivates Q1, where we aim to increase the likelihood of “no thrust.”
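The distinction between random exploration in the two action-space types can be sketched in a few lines. This is an illustration only, not the authors' environment code; the function names, three-axis assumption, and bounds are assumptions:

```python
import random

# Illustrative sketch: uniform random action selection, as at the start of
# RL training, for continuous vs. discrete action spaces (3 thrust axes).
def sample_continuous(u_max, rng=random):
    # Any value in [-u_max, u_max] per axis: effectively infinite choices.
    return [rng.uniform(-u_max, u_max) for _ in range(3)]

def sample_discrete(levels, rng=random):
    # One of a finite set of thrust levels per axis, which may include 0.
    return [rng.choice(levels) for _ in range(3)]
```

With the discrete sampler, exactly-zero thrust on every axis occurs with nonzero probability; with the continuous sampler, it almost never does.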
III Space Environments
![Fig. 1: Hill’s reference frame centered at the chief spacecraft](https://i0.wp.com/arxiv.org/html/2405.12355v1/x1.png)
This paper considers two spacecraft tasks: inspection and docking. Both tasks consider a passive “chief” and an active “deputy” spacecraft, where the agent controls the deputy and the chief is stationary. Both tasks are modeled using the Clohessy-Wiltshire equations [27] in Hill’s frame [28], a linearized relative-motion reference frame centered on the chief spacecraft, which is in a circular orbit around the Earth. As shown in Fig. 1, the origin of Hill’s frame is located at the chief’s center of mass, the unit vector $\hat{x}$ points away from the center of the Earth, $\hat{y}$ points in the direction of motion of the chief, and $\hat{z}$ is normal to $\hat{x}$ and $\hat{y}$. The relative motion dynamics between the deputy and chief are,
$$\dot{\boldsymbol{x}} = A\boldsymbol{x} + B\boldsymbol{u} \tag{1}$$

where $\boldsymbol{x} = [x, y, z, \dot{x}, \dot{y}, \dot{z}]^T$ is the state vector, $\boldsymbol{u} = [F_x, F_y, F_z]^T$ is the control vector, i.e. action, and,

$$A = \begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 \\ 3n^2 & 0 & 0 & 0 & 2n & 0 \\ 0 & 0 & 0 & -2n & 0 & 0 \\ 0 & 0 & -n^2 & 0 & 0 & 0 \end{bmatrix}, \qquad B = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \\ \frac{1}{m} & 0 & 0 \\ 0 & \frac{1}{m} & 0 \\ 0 & 0 & \frac{1}{m} \end{bmatrix} \tag{2}$$
Here, $n$ is the mean motion of the chief’s orbit in rad/s, $m$ is the mass of the deputy in kg, $F_x$, $F_y$, $F_z$ are the forces exerted by the thrusters along each axis, and the maximum thrust magnitude $u_{\max}$ is a constant value varied in the experiments. Both spacecraft are modeled as point masses.
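As a concrete illustration, the relative-motion dynamics above can be integrated numerically. This is a minimal sketch with a simple forward-Euler step; the values of the mean motion `n` and deputy mass `m` below are placeholders, not the paper's experiment constants:

```python
import numpy as np

# Clohessy-Wiltshire relative motion: state = [x, y, z, vx, vy, vz],
# control = thruster forces [Fx, Fy, Fz]. n (rad/s) and m (kg) are
# illustrative values, not the paper's constants.
def cw_derivative(state, control, n=0.001, m=12.0):
    x, y, z, vx, vy, vz = state
    fx, fy, fz = control
    ax = 3.0 * n**2 * x + 2.0 * n * vy + fx / m
    ay = -2.0 * n * vx + fy / m
    az = -(n**2) * z + fz / m
    return np.array([vx, vy, vz, ax, ay, az])

def euler_step(state, control, dt=1.0, n=0.001, m=12.0):
    # Forward-Euler integration of the linearized dynamics.
    return state + dt * cw_derivative(state, control, n, m)
```

In practice a higher-order integrator (or the exact discrete-time solution of the linear system) would be preferred; Euler is used here only to keep the sketch short.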
III-A Inspection
For the inspection task, introduced in [13], the agent’s goal is to navigate the deputy around the chief spacecraft to inspect the entire surface of the chief. In this case, the chief is modeled as a sphere of inspectable points distributed uniformly across the surface. The attitude of the deputy is not modeled, because it is assumed that the deputy is always pointed towards the chief. In order for a point to be inspected, it must be within the field of view of the deputy (not obstructed by the near side of the sphere) and illuminated by the Sun. Illumination is determined using a binary ray tracing technique, where the Sun rotates in the orbital plane of Hill’s frame at the same rate as the mean motion of the chief’s orbit, $n$.
While the main objective of the task is to inspect all points, a secondary objective is to minimize fuel use. This is considered in terms of $\Delta V$, where,

$$\Delta V = \frac{(|F_x| + |F_y| + |F_z|)\,\Delta t}{m} \tag{3}$$

is accumulated over all timesteps. For this task, the timestep duration $\Delta t$ is a fixed number of seconds.
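The $\Delta V$ accounting above can be sketched directly: each timestep contributes the summed absolute thrust times the timestep duration, divided by the deputy's mass. The constants in the usage below are illustrative, not the paper's values:

```python
def delta_v_used(force_history, dt, m):
    """Total delta-V: sum of (|Fx| + |Fy| + |Fz|) * dt / m over all timesteps.

    force_history: iterable of (fx, fy, fz) tuples, one per timestep.
    dt: timestep duration in seconds. m: deputy mass in kg.
    """
    return sum((abs(fx) + abs(fy) + abs(fz)) * dt / m
               for fx, fy, fz in force_history)
```

Note that an all-zero control tuple contributes exactly nothing, which is why "no thrust" is the single most fuel-efficient action.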
III-A1 Initial and Terminal Conditions
Each episode is randomly initialized given the following parameters. First, the Sun is initialized at a random angle within a specified range. Next, the deputy’s position is sampled from uniform distributions over the radius, azimuth angle $\theta$, and elevation angle $\phi$. The position is then computed as,
$$\boldsymbol{r} = \begin{bmatrix} r\cos(\theta)\cos(\phi) \\ r\sin(\theta)\cos(\phi) \\ r\sin(\phi) \end{bmatrix} \tag{4}$$
If the deputy’s initialized position results in pointing within 30 degrees of the Sun, the position is negated such that the deputy is then pointing away from the Sun and towards illuminated points. This prevents unsafe and unrealistic initialization, as sensors can burn out when pointed directly at the Sun. Finally, the deputy’s velocity is similarly sampled from uniform distributions over the velocity magnitude, azimuth angle, and elevation angle, and the velocity is computed using the same technique as Eq. 4.
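The sampled radius, azimuth, and elevation map to a Cartesian position as in Eq. 4. A sketch, assuming the standard azimuth-elevation convention with elevation measured from the x-y plane (the authors' exact axis ordering may differ):

```python
import numpy as np

def spherical_to_cartesian(r, azimuth, elevation):
    """Map (radius, azimuth, elevation) to a Cartesian position vector.

    Assumes elevation is measured from the x-y plane, the usual convention;
    this is an assumption, not taken from the paper.
    """
    x = r * np.cos(azimuth) * np.cos(elevation)
    y = r * np.sin(azimuth) * np.cos(elevation)
    z = r * np.sin(elevation)
    return np.array([x, y, z])
```

Whatever the convention, the resulting vector's norm equals the sampled radius, which is what the initialization relies on.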
An episode is terminated under the following conditions:

- the deputy inspects all points,
- the deputy crashes into the chief (enters within a minimum relative distance determined by the chief’s and deputy’s radii),
- the deputy exceeds a maximum relative distance from the chief, and/or
- the simulation exceeds the maximum number of timesteps (the time for the Sun to appear to orbit the chief twice).
III-A2 Observations
The environment is partially observable, using sensors to condense full state information into manageable components of the observation space. At each timestep, the agent receives an observation comprised of the following components. The first component is the deputy’s current position in Hill’s frame, where each element is scaled by a normalization constant so that most values fall within a unit range. The second component is the deputy’s current velocity in Hill’s frame, scaled similarly. The third component is the angle describing the Sun’s position with respect to the reference axis. The fourth component is the total number of points inspected so far during the episode, divided by a normalization constant. The final component is a unit vector pointing towards the nearest cluster of uninspected points, where clusters are determined using k-means clustering.
III-A3 Reward Function
The reward function consists of the following three elements (the reward function was defined in [13], where the authors determined that the specified configuration produces the desired behavior; an exploration of the reward function is outside the scope of this work). First, a positive reward is given for every new point that the deputy inspects at each timestep. Second, a negative reward is given that is proportional to the $\Delta V$ used at each timestep. This penalty is scaled by a multiplier that changes during training to help the agent first learn to inspect all points and then minimize fuel usage. If the mean percentage of inspected points for the previous training iteration exceeds 90%, the multiplier is increased by a fixed step, and if this percentage drops below 80% for the previous iteration, it is decreased by the same amount. The multiplier is always clamped to a fixed range. Finally, a negative reward is given if the deputy collides with the chief and ends the episode. This is the only sparse reward given to the agent. For evaluation, a constant value of the multiplier is used, while all other rewards remain the same.
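The adaptive fuel-penalty multiplier described above can be sketched as a simple controller on per-iteration statistics. The step size and bounds below are assumptions for illustration, not the paper's constants:

```python
def update_fuel_penalty(scale, mean_inspected_pct, step=0.1, lo=0.0, hi=1.0):
    """Adjust the fuel-penalty multiplier between training iterations.

    Raise the penalty when inspection is reliable (> 90% of points), lower
    it when inspection degrades (< 80%), and clamp to [lo, hi]. The step
    size and clamp range here are placeholder assumptions.
    """
    if mean_inspected_pct > 90.0:
        scale += step
    elif mean_inspected_pct < 80.0:
        scale -= step
    return min(max(scale, lo), hi)
```

This curriculum-style schedule lets the agent first learn to inspect all points before the fuel penalty becomes significant.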
III-B Docking
For the docking task, the agent’s goal is to navigate the deputy spacecraft to within a specified docking radius of the chief at a relative speed below a maximum docking speed. Secondary objectives for the task are to minimize fuel use and to adhere to a distance-dependent speed limit defined as,
$$\|\boldsymbol{v}\| \leq \nu_0 + \nu_1 \|\boldsymbol{r}\| \tag{5}$$
where $\boldsymbol{r}$ and $\boldsymbol{v}$ are the deputy’s position and velocity, $\nu_0$ is the maximum docking speed, and $\nu_1$ (rad/s) is the slope of the speed limit. The distance-dependent speed limit requires the deputy to slow down as it approaches the chief to dock safely, and values were chosen based on their relation to elliptical natural motion trajectories [29].
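Assuming the distance-dependent limit takes the affine form above (maximum docking speed plus a slope times the relative distance), a compliance check can be sketched as:

```python
import numpy as np

def within_speed_limit(pos, vel, v_max_dock, slope):
    """True if ||v|| <= v_max_dock + slope * ||r||.

    Assumes the affine form of the speed limit; v_max_dock and slope are
    illustrative parameters, not the paper's values.
    """
    return float(np.linalg.norm(vel)) <= v_max_dock + slope * float(np.linalg.norm(pos))
```

Because the allowed speed shrinks linearly with distance, a deputy that respects this constraint necessarily decelerates as it approaches the chief.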
III-B1 Initial and Terminal Conditions
Each episode is randomly initialized given the following parameters. First, the deputy’s position is sampled from uniform distributions over the radius, azimuth angle, and elevation angle, where position is computed according to Eq. 4. Second, velocity is sampled up to a maximum velocity magnitude (determined by Eq. 5 given the current position), with uniformly sampled azimuth and elevation angles, where velocity is computed according to Eq. 4.
An episode is terminated under the following conditions:

- the deputy successfully docks with the chief,
- the deputy crashes into the chief,
- the deputy exceeds a maximum relative distance from the chief, and/or
- the simulation exceeds the maximum number of timesteps ($\Delta t$ = 1 second).
III-B2 Observations
Similar to the inspection environment, the docking environment’s observation is broken into components. The first and second components are the deputy’s position and velocity, each scaled by a normalization constant. The third component is the deputy’s current velocity magnitude, and the fourth component is the maximum velocity given the current position according to Eq. 5.
III-B3 Reward Function
The reward function consists of the following six elements (the reward function was defined in [3], where the authors determined that the specified configuration produces the desired behavior; an exploration of the reward function is outside the scope of this work). First, a distance change reward is used to encourage the deputy to move towards the chief at each timestep. This reward is given by,
$$r_{\text{dist}} = c\,\left(\|\boldsymbol{r}_{t-1}\| - \|\boldsymbol{r}_t\|\right) \tag{6}$$
where $c$ is a positive scaling constant, $\boldsymbol{r}_t$ is the deputy’s current position, and $\boldsymbol{r}_{t-1}$ is the deputy’s position at the last timestep. Second, a negative reward is multiplied by the $\Delta V$ used by the deputy at each timestep. Unlike the inspection task, this reward remains constant throughout training. Third, if the deputy violates the distance-dependent speed limit at the current timestep, a negative reward is multiplied by the magnitude of the violation (the amount by which the speed exceeds the limit). Fourth, a small negative reward is given at each timestep to encourage the agent to complete the task as quickly as possible. Fifth, a sparse positive reward is given if the agent successfully completes the task. Finally, a sparse negative reward is given if the agent crashes into the chief.
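The per-timestep penalty terms that the text specifies unambiguously can be sketched as follows. The weights are placeholders, and the distance-change term of Eq. 6 and the sparse terminal rewards are deliberately omitted:

```python
def docking_step_penalty(speed, speed_limit, w_violation=0.1, w_time=0.01):
    """Per-timestep penalty for the docking task (sketch).

    A penalty proportional to any speed-limit violation, plus a small
    constant cost to encourage finishing quickly. The weights w_violation
    and w_time are illustrative assumptions, not the paper's coefficients.
    """
    violation = max(0.0, speed - speed_limit)
    return -w_violation * violation - w_time
```

When the deputy is under the limit, only the constant time penalty applies, so the agent is never rewarded for merely staying slow far from the chief.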
IV Experiments
A common objective of both of these tasks (and most space tasks in general) is to minimize the use of $\Delta V$. If the agent always chooses a value of zero for all controls, it will use zero $\Delta V$. However, in this case, it is unlikely that the agent is able to successfully complete the task, and therefore a balance must be found between maximizing task completion and minimizing $\Delta V$. With continuous actions, it is very difficult for an agent to choose an exact value of zero for control, and therefore it often uses a small amount of $\Delta V$ at every timestep, as seen in [13]. On the other hand, discrete actions allow an agent to easily choose exactly zero.
Several experiments are run for both the inspection and docking environments to determine how choice affects the learning process. First, a baseline configuration is trained with continuous actions, where the agent can choose any control value in $[-u_{\max}, u_{\max}]$. Next, several configurations are trained with discrete actions where the number of choices is varied. In each case, the action values are evenly spaced over the interval $[-u_{\max}, u_{\max}]$. For example, the 3-choice agent selects from $\{-u_{\max}, 0, u_{\max}\}$, and the 5-choice agent selects from $\{-u_{\max}, -u_{\max}/2, 0, u_{\max}/2, u_{\max}\}$. Experiments are run for 3, 5, 7, 9, 11, 21, 31, 41, 51, and 101 choices. The number of choices is always odd such that zero is an option. This set of experiments is repeated for two values of $u_{\max}$, to determine if the magnitude of the action choices affects the results.
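The evenly spaced discrete action sets described above can be generated directly; requiring an odd count guarantees that exactly zero thrust is always in the set:

```python
import numpy as np

def action_choices(n_choices, u_max):
    """n_choices thrust levels evenly spaced over [-u_max, u_max]."""
    assert n_choices % 2 == 1, "odd counts keep 0 N in the action set"
    return np.linspace(-u_max, u_max, n_choices)
```

For instance, `action_choices(5, u_max)` yields the five evenly spaced values from $-u_{\max}$ to $u_{\max}$, with the middle element exactly zero.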
For the docking environment, two additional configurations are trained: one with 5 discrete choices and one with 9 discrete choices. These configurations are designed to give the agent finer control at small magnitudes, and the rationale is discussed further in Section V-C.
For each configuration, 10 different agents are trained with different random seeds (held constant across all training configurations). Each agent is trained over 5 million timesteps. The policies are periodically evaluated during training, approximately every 500,000 timesteps, to record their performance according to several metrics (the training curves are not shown in the results, but are included in the Appendix). The common metrics for both the inspection and docking environments are: average $\Delta V$ used per episode, average percentage of successful episodes, average total reward per episode, and average episode length. For the inspection environment, the average number of inspected points is also considered; for the docking environment, the average number of timesteps where the speed limit is violated and the average final speed are both considered.
Each of the 10 policies is evaluated over a set of 10 random test cases, where the same test cases are used every time the policy is evaluated. We record and present the InterQuartile Mean (IQM) for each metric. The IQM sorts the recorded metric data, discards the bottom and top 25%, and calculates the mean of the remaining middle 50%; it interpolates between the mean and median across runs for a more robust measure of performance [30]. The IQM is used because it is a better representation of what we can expect to see in future studies, as it is not unduly affected by outliers and has smaller uncertainty even with a handful of runs [30]. At the conclusion of training, the final trained policies are again evaluated deterministically on 100 random test cases to better understand the behavior of the trained agents.
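The IQM can be computed by discarding the bottom and top quartiles and averaging the rest. This simple variant drops whole samples rather than interpolating fractionally at the quartile boundaries, which matches the description above for sample counts divisible by four:

```python
import numpy as np

def interquartile_mean(values):
    """Mean of the middle 50% of the sorted values (simple, non-interpolating)."""
    v = np.sort(np.asarray(values, dtype=float))
    k = len(v) // 4  # number of samples trimmed from each end
    return float(v[k:len(v) - k].mean())
```

Unlike the plain mean, a single extreme run barely moves the IQM, which is why it gives a more stable picture with only a handful of seeds.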
![Fig. 2(a): ΔV used by final inspection policies, larger u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/DeltaV_int_est.png)
![Fig. 2(b): ΔV used by final inspection policies, smaller u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/DeltaV_int_est_v2.png)
![Fig. 2(c): ΔV used by final docking policies, larger u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/DeltaV_int_est.png)
![Fig. 2(d): ΔV used by final docking policies, smaller u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/DeltaV_int_est.png)
V Results and Discussion
In this section, we answer the questions posed in the introduction by analyzing the overarching trends found in our experiments. In the interest of being concise but detailed, we include here selected results that highlight the trends we found and provide all the results in the Appendix.
V-A Will Increasing the Likelihood of Choosing “No Thrust” Improve Fuel Efficiency?
Answer: Yes. Increasing the likelihood of selecting “no thrust” as an action greatly reduces $\Delta V$ use.
Table I: Final continuous-action policies in the inspection environment.

|  | Total Reward | Inspected Points | Success Rate | ΔV (m/s) | Episode Length (steps) |
| --- | --- | --- | --- | --- | --- |
| Larger $u_{\max}$ | 7.8198 ± 0.5292 | 95.81 ± 4.6808 | 0.448 ± 0.4973 | 13.0222 ± 2.0929 | 323.98 ± 36.636 |
| Smaller $u_{\max}$ | 8.7324 ± 0.199 | 99.0 ± 0.0 | 1.0 ± 0.0 | 10.8143 ± 1.7896 | 333.496 ± 13.6857 |
Table II: Final continuous-action policies in the docking environment.

|  | Total Reward | Success Rate | ΔV (m/s) | Violation (%) | Final Speed (m/s) | Episode Length (steps) |
| --- | --- | --- | --- | --- | --- | --- |
| Larger $u_{\max}$ | 1.4105 ± 0.5279 | 0.57 ± 0.4951 | 13.2319 ± 1.7153 | 0.0 ± 0.0 | 0.0141 ± 0.0043 | 1780.944 ± 237.6564 |
| Smaller $u_{\max}$ | 1.8289 ± 0.5193 | 0.842 ± 0.3647 | 11.6234 ± 1.3619 | 0.0 ± 0.0 | 0.0131 ± 0.0074 | 1497.154 ± 343.938 |
To answer this question, we employed two methods for increasing the likelihood of selecting “no thrust” (i.e. 0 N): (1) transitioning from a continuous to a discrete action space, and (2) decreasing the action space magnitude so the continuous range is smaller.
V-A1 Continuous to Discrete Action Space
Transitioning from a continuous action space to a discrete one increases the likelihood of selecting “no thrust” by making it an explicit choice. With a continuous action space, the directional thrusts can be any value in $[-u_{\max}, u_{\max}]$, so the likelihood of all directional thrusts randomly being exactly 0.0 is very low, even though there are many combinations where all thrust values are near 0.0.
With a discrete action space, it is straightforward for the agent to select exactly 0.0. However, depending on the number of choices available, it can become more difficult to choose zero thrust at random. For the agent with three discrete choices, there is a 1 in 27 chance that a uniformly random action applies no thrust at all (since the three control inputs are selected independently), while for an agent with $n$ discrete choices per axis, that chance is 1 in $n^3$. Therefore, it is easier for agents with fewer discrete actions to explicitly choose “no thrust.”
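The combinatorics above can be made explicit: with $n$ discrete choices per axis, three independent axes, and uniformly random exploration, the probability of selecting "no thrust" on every axis is $(1/n)^3$:

```python
def p_no_thrust(n_choices, n_axes=3):
    """Probability a uniformly random discrete action is all-zero thrust.

    Assumes one zero level per axis and independent uniform selection.
    """
    return (1.0 / n_choices) ** n_axes
```

So the 3-choice agent stumbles onto "no thrust" about 3.7% of the time, while the 101-choice agent does so roughly once in a million random actions.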
In Fig. 2 we compare the $\Delta V$ used by the final policies trained in continuous and discrete action spaces. For the inspection environment, our results show that transitioning to a discrete action space generally reduces $\Delta V$ use. Interestingly, we see in Fig. 2(a) that adding more discrete choices reduces $\Delta V$ use until the agent has 31 choices or more. This shows that while it is harder for the agent to choose zero thrust with more choices, more choices also enable the agent to select actions with thrust closer to zero, reducing the over-corrections caused by a coarse discretization of the action space.
For the docking environment, our results show that transitioning to a discrete action space generally results in a large increase in $\Delta V$ use. The blocks representing the continuous configurations’ $\Delta V$ use in Fig. 2(c) and (d) are centered around comparatively low values. Increasing the granularity of choices did not help, instead trending towards larger $\Delta V$ use. However, reducing the number of discrete choices clearly reduces $\Delta V$ use, as it is easier to choose “no thrust.”
V-A2 Decreasing the Action Space Magnitude
As mentioned earlier, it is difficult to select “no thrust” with a continuous action space, but there are many “near zero” combinations available. By reducing the magnitude of the action space (i.e. decreasing $u_{\max}$), we increase the likelihood of choosing those “near zero” actions for both our continuous and discrete configurations. Additionally, by reducing $u_{\max}$, we decrease the maximum fuel use for any given timestep, which should also reduce overall fuel use.
Our results in Fig. 2 show that reducing $u_{\max}$ generally reduces the amount of $\Delta V$ used, in some cases by a large margin in the docking environment (discrete 41, 51, and 101). Therefore, to reduce $\Delta V$ use, our results show it is best to reduce $u_{\max}$.
To better highlight how reducing $u_{\max}$ impacts agents with continuous actions, Table I and Table II show the performance of the final policies across all metrics for the inspection and docking environments respectively. Table I shows that as $u_{\max}$ is decreased for the inspection environment, the total reward, inspected points, and success rate all increase while $\Delta V$ decreases. Table II similarly shows that as $u_{\max}$ is decreased in the docking environment, the total reward and success rate increase while $\Delta V$ decreases. Both cases show that decreasing the action space magnitude enables better performance.
V-B Does a Smaller Action Magnitude or Finer Granularity Matter More at Different Operating Ranges?
Answer: It depends on the task. For the inspection task, smaller action magnitude is more important. For the docking task, finer granularity is more important.
![Fig. 3(a): action histogram, inspection, larger u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/combined_action_histogram.png)
![Fig. 3(b): action histogram, inspection, smaller u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/combined_action_histogram.png)
![Fig. 3(c): action histogram, docking, larger u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/combined_action_histogram.png)
![Fig. 3(d): action histogram, docking, smaller u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/combined_action_histogram.png)
![Fig. 3 legend](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/histogram_legend.png)
![Fig. 4(a): total reward, inspection, larger u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/TotalReward_int_est.png)
![Fig. 4(b): total reward, inspection, smaller u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/TotalReward_int_est_v2.png)
![Fig. 5(a): success rate, docking, larger u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/Success_int_est.png)
![Fig. 5(b): success rate, docking, smaller u_max](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/Success_int_est.png)
To answer this question, we first analyze which actions the agents take most often. Fig. 3 shows the percentage of times each action is taken in the different experiments. For the inspection environment with the larger $u_{\max}$, close to 100% of all actions taken are zero or very close to zero. However, with the smaller $u_{\max}$, the agents often choose actions at either zero or the extremes of the range. For the docking environment with the larger $u_{\max}$, while many actions taken are close to zero, a clear bell curve appears centered on zero control. With the smaller $u_{\max}$, this bell curve becomes more apparent, but the agents still tend to choose actions close to zero. These results suggest that action magnitude is more important in solving the inspection task, while granularity is more important in solving the docking task.
V-B1 Inspection
For the inspection task, the total reward for the final policies of each configuration is shown in Fig. 4. This represents the agent’s ability to balance the objectives of inspecting all points and reducing $\Delta V$ use. For the agents trained with the larger $u_{\max}$, all agents trained with 41 or fewer choices result in similar final reward. Recall from Fig. 2(a) that the lowest $\Delta V$ for the inspection environment with the larger $u_{\max}$ occurs for the agent trained with 21 discrete actions. Notably, this is the configuration with the fewest choices at which very small nonzero thrust values become available.
For the agents trained with the smaller $u_{\max}$, reward tends to decrease slightly as the number of choices increases. The agent trained with three choices results in the highest reward, and Fig. 2(b) shows that this configuration also results in the lowest $\Delta V$ use. These results show that using a smaller action magnitude is much more important than increasing the granularity of choices for the inspection task. This result is intuitive, as the agent does not need to make precise adjustments to its trajectory to complete the task, and can rely on following a general path to orbit the chief and inspect all points.
V-B2 Docking
For the docking task, the success rate of the final policies for each configuration is shown in Fig.5. For the agents trained with a 1 N action magnitude, the agents with more choices have the highest success rates. For the agents trained with a 0.1 N action magnitude, outside of the agents with 3 and 51 choices, all other configurations result in similar success rates. Along with Fig.2(c) and (d), this shows that more choices and finer granularity lead to higher ΔV use, but that use is necessary to successfully complete the task.
However, there is one notable exception to this trend: the agents trained with continuous actions (the highest granularity). This configuration uses by far the least ΔV of all agents that achieved at least a 50% success rate. In particular, the agent trained with continuous actions and a 0.1 N action magnitude achieves the best balance of high success and low ΔV use. These results show that increasing the granularity of choices is much more important than using a smaller action magnitude for the docking task. This result is intuitive because the agent must make precise adjustments to its trajectory as it approaches and docks with the chief.
V-C Q3: Is there an optimal balance between discrete and continuous actions?
Answer: No, for these tasks it is better to choose either discrete or continuous actions.
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (15) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (15)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/inspection/con_1.0.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (16) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (16)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/inspection/dis_1.0_3.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (17) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (17)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/inspection/dis_1.0_101.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (18) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (18)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/inspection/dis_0.1_3.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (19) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (19)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/docking/con_1.0.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (20) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (20)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/docking/dis_1.0_3.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (21) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (21)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/docking/dis_1.0_101.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (22) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (22)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/docking/dis_0.1_3.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (23) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (23)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/docking/dis_1.0_0.1.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (24) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (24)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/episode_plots/docking/dis_1.0_0.1_0.01.png)
To answer this question, we consider the behavior of the trained agents. Fig.6 shows example trajectories of trained agents in the inspection task. Ideally, these agents circumnavigate the chief along a smooth trajectory. In Fig.6(a), this is easily accomplished when the agent uses continuous actions, as it can constantly make small refinements to keep the trajectory smooth. On the other hand, when the agent can only use three discrete actions with a 1 N action magnitude, as shown in Fig.6(b), the trajectory becomes far less smooth: the agent jerks back and forth as it attempts to adjust its trajectory.
To balance continuous and discrete actions, the number of discrete choices can be increased so the agent can make smaller adjustments to its trajectory. As seen in Fig.6(c), having 101 choices makes the agent’s trajectory much smoother. However, as discussed for question Q2, this comes at a cost in performance. The optimal performance for the inspection task came from three discrete actions with a 0.1 N action magnitude. Fig.6(d) shows that this configuration also results in a much smoother trajectory, as the smaller action magnitude lets the agent make finer adjustments. Therefore, the optimal behavior can also be achieved using discrete actions for the inspection task. This also follows Fig.3(b), where the actions most commonly used are zero and ±0.1 N.
For the docking task, the agent should ideally slow down and approach the chief along a smooth trajectory. Fig.7 shows example trajectories of trained agents in the docking task, with results similar to the inspection task: continuous actions allow for the smoothest trajectory, three discrete actions with a 1 N action magnitude yield the least smooth trajectory, and 101 discrete actions or three discrete actions with a 0.1 N action magnitude allow for smoother trajectories while still using discrete actions. However, there is still a considerable amount of “chattering” in the control, where the agent frequently switches between multiple control values as it attempts to refine its trajectory.
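The chattering described above can be quantified in several ways; one simple proxy (our own illustration, not a metric from the paper) is the fraction of consecutive control steps in which the commanded thrust flips sign:

```python
def chattering_rate(controls, eps=1e-6):
    """Fraction of consecutive control steps where the commanded
    thrust reverses sign. A rough proxy for the 'chattering'
    visible in discrete-action docking trajectories.
    """
    flips = sum(
        1 for u0, u1 in zip(controls, controls[1:])
        if u0 * u1 < -eps * eps  # opposite signs, both nonzero
    )
    return flips / max(len(controls) - 1, 1)

smooth = [0.5, 0.4, 0.3, 0.2, 0.1]       # gradually decreasing thrust
chatter = [0.5, -0.5, 0.5, -0.5, 0.5]    # bang-bang style switching
print(chattering_rate(smooth))   # 0.0
print(chattering_rate(chatter))  # 1.0
```

A smooth continuous-action policy would score near zero on such a measure, while a coarse three-choice discrete policy would score much higher.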
To best balance continuous and discrete actions, we analyze the behavior shown in Fig.3(d) and attempt to provide the agent with action choices that mimic a bell curve. These experiments are the 1.0/0.1 configuration and the 1.0/../0.001 configuration, which offer discrete actions at decreasing magnitudes. These configurations allow the agent to use high-magnitude actions when it is far from the chief and small-magnitude actions as it gets closer. From Fig.2(d) and Fig.5, the 1.0/../0.001 configuration achieves much lower ΔV use than the 1.0/0.1 configuration, with both achieving 100% success. These configurations also achieve lower ΔV use than most discrete action experiments, but still do not perform as well as the agent trained with continuous actions. As shown in Fig.7(e) and Fig.7(f), their trajectories are also not as smooth as the continuous-action agent’s, and there is still frequent chattering in the control. Therefore, despite attempting to balance discrete and continuous actions, the optimal behavior for the docking task is still achieved with continuous actions.
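A magnitude-tiered action set of the kind described above can be built by pairing each chosen magnitude with its negative and adding the zero-thrust action. The sketch below is ours (the function name is an assumption); it only illustrates how such a set concentrates choices near zero, loosely mimicking the bell-shaped action distributions of the continuous-action agents.

```python
import numpy as np

def tiered_action_values(magnitudes) -> np.ndarray:
    """Discrete thrust choices at decreasing magnitudes plus zero.

    E.g. magnitudes (1.0, 0.1) yields five choices, and
    (1.0, 0.1, 0.01, 0.001) yields nine choices clustered
    near zero thrust.
    """
    mags = np.asarray(magnitudes, dtype=float)
    return np.sort(np.concatenate([-mags, [0.0], mags]))

print(tiered_action_values((1.0, 0.1)))
print(tiered_action_values((1.0, 0.1, 0.01, 0.001)))
```

Unlike evenly spaced choices, this spacing gives the agent coarse thrust for far-field maneuvering and fine thrust for terminal approach without inflating the total number of actions.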
VI Conclusions and Future Work
In this paper, we trained 480 unique agents to investigate how choice impacts the learning process for space control systems. In conclusion, the results show that (Q1) increasing the likelihood of selecting “no thrust” as an action greatly reduces ΔV use. Either making zero thrust a more likely choice or reducing the action magnitude so that all choices are closer to zero significantly reduces the ΔV used by the agent. Next, our results indicate that (Q2) whether to increase the granularity of choices or to adjust the action magnitude for optimal performance is highly dependent on the task. For the inspection task, selecting an appropriate action magnitude is more important than increasing the granularity of choices: the optimal configuration was three discrete actions with a 0.1 N action magnitude. For the docking task, the opposite is true, and the optimal configuration was continuous actions with a 0.1 N action magnitude. This makes sense considering the operating range of the tasks: the agent can complete the inspection task by orbiting the chief at a greater relative distance, while it must complete the docking task by making small adjustments to its trajectory as it approaches the docking region. Finally, our results show that (Q3) there is not an optimal balance between discrete and continuous actions, and it is better to choose one or the other. When we attempted to balance discrete and continuous actions in the docking environment by providing actions with decreasing magnitudes, the resulting configuration performed better than most discrete action configurations, but it still did not perform as well as agents with continuous actions.
In future work, we want to consider more complex six degree-of-freedom dynamics, where the agent can also control its orientation. We also want to explore more complex discrete action choices, including adding a time period for the thrust selection to better replicate a scheduled burn.
Acknowledgements
This research was sponsored by the Air Force Research Laboratory under the Safe Trusted Autonomy for Responsible Spacecraft (STARS) Seedlings for Disruptive Capabilities Program. The views expressed are those of the authors and do not reflect the official guidance or position of the United States Government, the Department of Defense, or the United States Air Force. This work has been approved for public release: distribution unlimited. Case Number AFRL-2024-0298.
Appendix A Sample Complexity Figures
Sample complexity is a metric that indicates how “fast” an agent trains, measured by periodically checking performance throughout training. If an agent has better sample complexity (i.e., trains “faster”), its sample complexity curve reaches better values after fewer timesteps. For a metric like reward, better sample complexity appears closer to the top left of the plot, while for a metric like ΔV use, better sample complexity appears closer to the bottom left corner of the plot.
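One simple way to reduce such a curve to a single comparable number (this summary is our own illustration, not a metric used in the paper) is the area under the checkpointed metric-vs-timestep curve:

```python
def auc_sample_complexity(checkpoints):
    """Area under a sample-complexity curve via the trapezoidal rule.

    checkpoints: list of (timestep, metric) pairs recorded by
    periodically evaluating the policy during training. For a
    reward-like metric, larger area means good values were reached
    after fewer timesteps; for a cost like Delta-V use, smaller
    area is better.
    """
    area = 0.0
    for (t0, m0), (t1, m1) in zip(checkpoints, checkpoints[1:]):
        area += 0.5 * (m0 + m1) * (t1 - t0)  # trapezoid between checkpoints
    return area

fast = [(0, 0.0), (100, 8.0), (200, 9.0)]  # reward rises quickly
slow = [(0, 0.0), (100, 2.0), (200, 9.0)]  # same endpoint, slower rise
print(auc_sample_complexity(fast))  # 1250.0
print(auc_sample_complexity(slow))  # 650.0
```

Both hypothetical agents reach the same final reward, but the larger area for `fast` reflects its better sample complexity.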
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (25) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (25)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/TotalReward_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (26) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (26)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/InspectedPoints_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (27) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (27)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/Success_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (28) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (28)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/DeltaV_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (29) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (29)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/EpisodeLength_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (30) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (30)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/legend.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (31) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (31)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/TotalReward_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (32) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (32)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/InspectedPoints_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (33) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (33)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/Success_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (34) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (34)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/DeltaV_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (35) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (35)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/EpisodeLength_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (36) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (36)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/legend.png)
In Fig.8 and Fig.9 we show the sample complexity for agents trained in the inspection environment with 1 N and 0.1 N action magnitudes, respectively.
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (37) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (37)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/TotalReward_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (38) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (38)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/Success_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (39) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (39)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/DeltaV_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (40) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (40)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/VelocityConstraint_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (41) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (41)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/FinalSpeed_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (42) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (42)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/EpisodeLength_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (43) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (43)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/legend.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (44) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (44)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/TotalReward_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (45) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (45)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/Success_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (46) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (46)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/DeltaV_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (47) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (47)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/VelocityConstraint_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (48) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (48)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/FinalSpeed_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (49) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (49)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/EpisodeLength_sample_eff.png)
![Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (50) Investigating the Impact of Choice on Deep Reinforcement Learning for Space Controls (50)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/legend.png)
In Fig.10 and Fig.11 we show the sample complexity for agents trained in the docking environment with 1 N and 0.1 N action magnitudes, respectively.
Appendix B Final Policy Comparison Tables
Experiment | Total Reward | Inspected Points | Success Rate | ΔV (m/s) | Episode Length (steps)
---|---|---|---|---|---
Continuous | 7.8198 ± 0.5292 | 95.81 ± 4.6808 | 0.448 ± 0.4973 | 13.0222 ± 2.0929 | 323.98 ± 36.636
Discrete - 101 | 7.0945 ± 0.4483 | 90.176 ± 7.2419 | 0.176 ± 0.3808 | 15.2489 ± 2.8737 | 271.516 ± 43.964
Discrete - 51 | 7.7759 ± 0.528 | 93.38 ± 7.5001 | 0.466 ± 0.4988 | 12.0699 ± 2.7292 | 292.3 ± 42.679
Discrete - 41 | 8.7482 ± 0.5727 | 96.102 ± 4.2047 | 0.42 ± 0.4936 | 5.662 ± 1.1497 | 325.44 ± 42.3286
Discrete - 31 | 8.4412 ± 0.7315 | 94.64 ± 5.9031 | 0.434 ± 0.4956 | 6.1217 ± 1.5649 | 301.928 ± 42.8778
Discrete - 21 | 8.8244 ± 0.5655 | 93.968 ± 6.2552 | 0.382 ± 0.4859 | 4.7953 ± 0.6288 | 294.198 ± 42.7977
Discrete - 11 | 8.8792 ± 0.4944 | 94.936 ± 5.5771 | 0.436 ± 0.4959 | 5.1757 ± 0.6842 | 300.084 ± 36.9724
Discrete - 9 | 8.6939 ± 0.6077 | 92.804 ± 6.6629 | 0.28 ± 0.449 | 5.015 ± 0.6076 | 285.528 ± 42.9038
Discrete - 7 | 8.7802 ± 0.4923 | 94.894 ± 5.8569 | 0.498 ± 0.5 | 5.6317 ± 0.8868 | 295.616 ± 37.309
Discrete - 5 | 9.0643 ± 0.1962 | 98.306 ± 1.1766 | 0.63 ± 0.4828 | 6.2467 ± 0.872 | 330.052 ± 25.9433
Discrete - 3 | 8.7449 ± 0.3727 | 96.388 ± 4.0853 | 0.466 ± 0.4988 | 7.1767 ± 0.9963 | 309.28 ± 32.9718
In Table III we show the final policy results for agents trained with a 1 N action magnitude in the inspection environment. The table shows the InterQuartile Mean (IQM) and standard deviation for each metric.
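The IQM reported throughout these tables is the mean of the middle 50% of samples. A minimal sketch of a simple quartile-truncation variant (our own illustration; it truncates at integer quartile boundaries rather than interpolating):

```python
import numpy as np

def interquartile_mean(samples) -> float:
    """Mean of the middle 50% of samples (IQM).

    More robust to outlier episodes than the mean, and more
    informative than the median, since it uses half the data.
    """
    x = np.sort(np.asarray(samples, dtype=float))
    n = len(x)
    lo, hi = n // 4, n - n // 4  # drop the bottom and top quartiles
    return float(x[lo:hi].mean())

# The single outlier episode (100) does not drag the estimate upward.
print(interquartile_mean([1, 2, 3, 4, 5, 6, 7, 100]))  # 4.5
```

With 500 evaluation episodes per configuration, this keeps the 250 middle values and averages them.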
Experiment | Total Reward | Inspected Points | Success Rate | ΔV (m/s) | Episode Length (steps)
---|---|---|---|---|---
Continuous | 8.7324 ± 0.199 | 99.0 ± 0.0 | 1.0 ± 0.0 | 10.8143 ± 1.7896 | 333.496 ± 13.6857
Discrete - 101 | 8.3228 ± 0.3276 | 89.098 ± 3.8216 | 0.0 ± 0.0 | 5.7558 ± 0.6629 | 312.282 ± 30.5927
Discrete - 51 | 8.6179 ± 0.3202 | 92.516 ± 4.0042 | 0.0 ± 0.0 | 5.8669 ± 0.6587 | 322.662 ± 31.1315
Discrete - 41 | 8.4393 ± 0.3339 | 90.492 ± 4.1859 | 0.0 ± 0.0 | 5.8385 ± 0.7675 | 319.386 ± 32.385
Discrete - 31 | 8.6998 ± 0.2798 | 94.136 ± 3.4741 | 0.072 ± 0.2585 | 6.5631 ± 0.637 | 343.786 ± 30.6773
Discrete - 21 | 8.4793 ± 0.424 | 90.526 ± 5.07 | 0.0 ± 0.0 | 5.3385 ± 0.6996 | 324.068 ± 39.9499
Discrete - 11 | 8.7238 ± 0.3049 | 94.034 ± 3.8777 | 0.092 ± 0.289 | 6.1293 ± 0.7255 | 335.342 ± 26.6045
Discrete - 9 | 8.5916 ± 0.3967 | 92.73 ± 4.8807 | 0.07 ± 0.2551 | 6.0518 ± 0.7598 | 334.318 ± 34.7523
Discrete - 7 | 8.4511 ± 0.4318 | 91.23 ± 5.6037 | 0.04 ± 0.196 | 6.0729 ± 0.9819 | 321.638 ± 40.5397
Discrete - 5 | 8.6838 ± 0.2979 | 94.988 ± 3.8595 | 0.226 ± 0.4182 | 7.2322 ± 0.741 | 344.036 ± 35.2377
Discrete - 3 | 8.9796 ± 0.3325 | 96.698 ± 3.3898 | 0.458 ± 0.4982 | 5.1738 ± 0.946 | 334.572 ± 42.9696
In Table IV we show the final policy results for agents trained with a 0.1 N action magnitude in the inspection environment. The table shows the IQM and standard deviation for each metric.
Experiment | Total Reward | Success Rate | ΔV (m/s) | Violation (%) | Final Speed (m/s) | Episode Length (steps)
---|---|---|---|---|---|---
Continuous | 1.4105 ± 0.5279 | 0.57 ± 0.4951 | 13.2319 ± 1.7153 | 0.0 ± 0.0 | 0.0141 ± 0.0043 | 1780.944 ± 237.6564
Discrete 1.0/../0.001 | 1.9307 ± 0.5696 | 1.0 ± 0.0 | 43.01 ± 16.9203 | 1.5736 ± 2.5145 | 0.0886 ± 0.0159 | 876.132 ± 184.738
Discrete 1.0/0.1 | 2.0274 ± 0.5583 | 1.0 ± 0.0 | 90.128 ± 27.5016 | 0.8247 ± 1.4835 | 0.1027 ± 0.0177 | 747.38 ± 117.5253
Discrete - 101 | 2.1488 ± 0.2312 | 1.0 ± 0.0 | 293.3862 ± 29.6104 | 1.6089 ± 2.4567 | 0.0861 ± 0.0164 | 671.596 ± 54.1947
Discrete - 51 | 1.7891 ± 0.5505 | 0.978 ± 0.1467 | 391.2823 ± 75.8659 | 0.3771 ± 0.6783 | 0.0613 ± 0.0152 | 927.566 ± 278.6582
Discrete - 41 | 2.107 ± 0.2374 | 1.0 ± 0.0 | 308.5834 ± 49.6744 | 0.4602 ± 0.751 | 0.0753 ± 0.014 | 728.69 ± 104.8544
Discrete - 31 | 2.1907 ± 0.2392 | 1.0 ± 0.0 | 251.3246 ± 44.1849 | 0.1856 ± 0.4052 | 0.0733 ± 0.0164 | 774.518 ± 154.2326
Discrete - 21 | 1.5185 ± 0.6643 | 0.802 ± 0.3985 | 269.6558 ± 45.6941 | 0.7576 ± 1.3076 | 0.0527 ± 0.0149 | 1220.52 ± 432.1194
Discrete - 11 | 1.2417 ± 0.5921 | 0.52 ± 0.4996 | 50.0763 ± 66.8249 | 0.1531 ± 0.4143 | 0.0531 ± 0.0125 | 1610.08 ± 408.8315
Discrete - 9 | 0.884 ± 0.3165 | 0.324 ± 0.468 | 24.6879 ± 34.5683 | 0.4601 ± 1.0115 | 0.0535 ± 0.0132 | 1717.72 ± 401.7791
Discrete - 7 | 0.7117 ± 0.1693 | 0.0 ± 0.0 | 10.5039 ± 1.4224 | 0.632 ± 1.1258 | 0.0429 ± 0.0084 | 2000.0 ± 0.0
Discrete - 5 | 0.6349 ± 0.1275 | 0.0 ± 0.0 | 9.5683 ± 1.3141 | 0.0419 ± 0.1765 | 0.0426 ± 0.0081 | 2000.0 ± 0.0
Discrete - 3 | 0.6934 ± 0.1322 | 0.0 ± 0.0 | 9.5583 ± 1.3253 | 0.0787 ± 0.2588 | 0.0574 ± 0.0099 | 2000.0 ± 0.0
In Table V we show the final policy results for agents trained with a 1 N action magnitude in the docking environment. The table shows the IQM and standard deviation for each metric.
Experiment | Total Reward | Success Rate | ΔV (m/s) | Violation (%) | Final Speed (m/s) | Episode Length (steps)
---|---|---|---|---|---|---
Continuous | 1.8289 ± 0.5193 | 0.842 ± 0.3647 | 11.6234 ± 1.3619 | 0.0 ± 0.0 | 0.0131 ± 0.0074 | 1497.154 ± 343.938
Discrete - 101 | 1.6612 ± 0.689 | 0.72 ± 0.449 | 98.9843 ± 23.4398 | 0.6285 ± 1.3201 | 0.023 ± 0.0107 | 1212.808 ± 475.4788
Discrete - 51 | 0.7218 ± 0.2168 | 0.044 ± 0.2051 | 111.8182 ± 21.1845 | 1.3326 ± 1.6915 | 0.0097 ± 0.0066 | 1948.042 ± 157.2884
Discrete - 41 | 1.8428 ± 0.7146 | 0.776 ± 0.4169 | 86.7219 ± 7.7169 | 0.1063 ± 0.3616 | 0.0316 ± 0.0157 | 1124.558 ± 464.4369
Discrete - 31 | 1.4687 ± 0.6669 | 0.604 ± 0.4891 | 91.2584 ± 10.8726 | 0.4691 ± 1.0746 | 0.0306 ± 0.0161 | 1321.878 ± 559.5975
Discrete - 21 | 1.7366 ± 0.6367 | 0.838 ± 0.3685 | 104.1883 ± 12.1358 | 0.3216 ± 0.8018 | 0.0226 ± 0.0095 | 1268.57 ± 397.6955
Discrete - 11 | 1.7699 ± 0.6552 | 0.928 ± 0.2585 | 86.4941 ± 12.5111 | 1.1875 ± 2.3687 | 0.0628 ± 0.0254 | 912.868 ± 214.4565
Discrete - 9 | 1.4149 ± 0.8378 | 0.648 ± 0.4776 | 106.863 ± 25.6372 | 0.2353 ± 0.6563 | 0.0468 ± 0.028 | 1242.01 ± 490.6625
Discrete - 7 | 1.8318 ± 0.7563 | 0.782 ± 0.4129 | 95.567 ± 18.1873 | 0.0 ± 0.0 | 0.0663 ± 0.037 | 1128.162 ± 489.9392
Discrete - 5 | 2.0283 ± 0.6071 | 1.0 ± 0.0 | 77.0818 ± 14.2023 | 0.0508 ± 0.247 | 0.0923 ± 0.0247 | 886.076 ± 215.382
Discrete - 3 | 0.6064 ± 0.167 | 0.0 ± 0.0 | 15.0623 ± 7.5436 | 0.0 ± 0.0 | 0.0279 ± 0.0108 | 2000.0 ± 0.0
In Table VI we show the final policy results for agents trained with a 0.1 N action magnitude in the docking environment. The table shows the IQM and standard deviation for each metric.
Appendix C Additional Final Policy Comparison Figures
![Total reward interval estimates (inspection, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/TotalReward_int_est.png)
![Inspected points interval estimates (inspection, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/InspectedPoints_int_est.png)
![Delta-V interval estimates (inspection, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/DeltaV_int_est.png)
![Success rate interval estimates (inspection, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/Success_int_est.png)
![Episode length interval estimates (inspection, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/1_0/EpisodeLength_int_est.png)
![Total reward interval estimates (inspection, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/TotalReward_int_est.png)
![Inspected points interval estimates (inspection, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/InspectedPoints_int_est.png)
![Delta-V interval estimates (inspection, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/DeltaV_int_est.png)
![Success rate interval estimates (inspection, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/Success_int_est.png)
![Episode length interval estimates (inspection, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/inspection/0_1/EpisodeLength_int_est.png)
In Fig. 12 and Fig. 13 we show the final policy performance with respect to the number of inspected points, success rate, and episode length for agents trained in the inspection environment with the two N configurations, respectively. Comparisons of ΔV use and reward are shown in Fig. 2 and Fig. 4 respectively.
![Total reward interval estimates (docking, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/TotalReward_int_est.png)
![Success rate interval estimates (docking, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/Success_int_est.png)
![Delta-V interval estimates (docking, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/DeltaV_int_est.png)
![Velocity constraint violation interval estimates (docking, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/VelocityConstraint_int_est.png)
![Final speed interval estimates (docking, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/FinalSpeed_int_est.png)
![Episode length interval estimates (docking, 1_0)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/1_0/EpisodeLength_int_est.png)
![Total reward interval estimates (docking, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/TotalReward_int_est.png)
![Success rate interval estimates (docking, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/Success_int_est.png)
![Delta-V interval estimates (docking, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/DeltaV_int_est.png)
![Velocity constraint violation interval estimates (docking, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/VelocityConstraint_int_est.png)
![Final speed interval estimates (docking, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/FinalSpeed_int_est.png)
![Episode length interval estimates (docking, 0_1)](https://i0.wp.com/arxiv.org/html/2405.12355v1/extracted/2405.12355v1/figures/docking/0_1/EpisodeLength_int_est.png)
In Fig. 14 and Fig. 15 we show the final policy performance with respect to the total reward, constraint violation percentage, final speed, and episode length for agents trained in the docking environment with the two N configurations, respectively. Comparisons of ΔV use and success rate are shown in Fig. 2 and Fig. 5 respectively.