~/anaconda3/envs/ai4finance/lib/python3.8/site-packages/elegantrl/agents/AgentBase.py in explore_one_env(self, env, horizon_len, if_random)
91 get_action = self.act.get_action
92 for t in range(horizon_len):
---> 93 action = torch.rand(1, self.action_dim) * 2 - 1.0 if if_random else get_action(state)
94 states[t] = state
95
~/anaconda3/envs/ai4finance/lib/python3.8/site-packages/elegantrl/agents/net.py in get_action(self, state)
200
201 def get_action(self, state: Tensor) -> Tensor: # for exploration
--> 202 state = self.state_norm(state)
203 action = self.net(state).tanh()
204 noise = (torch.randn_like(action) * self.explore_noise_std).clamp(-0.5, 0.5)
~/anaconda3/envs/ai4finance/lib/python3.8/site-packages/elegantrl/agents/net.py in state_norm(self, state)
184
185 def state_norm(self, state: Tensor) -> Tensor:
--> 186 return (state - self.state_avg) / self.state_std
187
188
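The frames above trace the exploration path: `explore_one_env` calls `get_action`, which standardizes the state via `state_norm` before passing it through the policy network and adding clipped Gaussian exploration noise. The following is a minimal, self-contained sketch of those steps, with stand-in values for the attributes (`state_avg`, `state_std`, `explore_noise_std`, and the network) that the traceback references; the final clamp after adding noise is an assumption, since the traceback cuts off at line 204.

```python
import torch

# Stand-in dimensions and attributes mirroring those used in the traceback.
state_dim, action_dim = 4, 2
state_avg = torch.zeros(state_dim)   # running mean of observed states
state_std = torch.ones(state_dim)    # running std; must be nonzero to avoid inf/NaN
net = torch.nn.Linear(state_dim, action_dim)  # stand-in for the policy network
explore_noise_std = 0.1

def state_norm(state: torch.Tensor) -> torch.Tensor:
    # Mirrors net.py line 186: elementwise standardization of the state.
    # Note: this subtraction fails if `state` is not a Tensor on the same
    # device as `state_avg` (e.g. a NumPy array straight from the env).
    return (state - state_avg) / state_std

def get_action(state: torch.Tensor) -> torch.Tensor:
    # Mirrors net.py lines 202-204: normalize, squash, add clipped noise.
    state = state_norm(state)
    action = net(state).tanh()
    noise = (torch.randn_like(action) * explore_noise_std).clamp(-0.5, 0.5)
    # The combination step below is an assumption; line 204 is the last
    # line visible in the traceback.
    return (action + noise).clamp(-1.0, 1.0)

state = torch.rand(1, state_dim)
action = get_action(state)
print(action.shape)  # torch.Size([1, 2])
```

Because `state_norm` runs before any shape or type checks, a state whose dtype, device, or shape does not match `state_avg` surfaces as an error at exactly the frame shown above (net.py line 186).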