JohnnyBoy00 committed on
Commit
ddefb40
1 Parent(s): aa86e0b

Update README.md

Files changed (1)
  1. README.md +24 -5
README.md CHANGED
@@ -26,12 +26,31 @@ This is a trained model of a **PPO** agent playing **LunarLander-v2**
  using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

  ## Usage (with Stable-baselines3)
- TODO: Add your code
-
-
  ```python
- from stable_baselines3 import ...
+ import gymnasium as gym
+ from stable_baselines3 import PPO
+ from stable_baselines3.common.monitor import Monitor
+ from stable_baselines3.common.evaluation import evaluate_policy
  from huggingface_sb3 import load_from_hub

- ...
+ repo_id = "JohnnyBoy00/ppo-LunarLander-v2"
+ filename = "ppo-LunarLander-v2.zip"
+
+ # The model was trained with Python 3.8, which uses pickle protocol 5,
+ # whereas Python 3.6 and 3.7 use pickle protocol 4.
+ # To ensure compatibility, it is therefore necessary to:
+ # 1. Install pickle5 (done at the beginning of the Colab);
+ # 2. Pass a custom_objects dictionary to PPO.load() to override unpicklable entries.
+ custom_objects = {
+     "learning_rate": 0.0,
+     "lr_schedule": lambda _: 0.0,
+     "clip_range": lambda _: 0.0,
+ }
+
+ checkpoint = load_from_hub(repo_id, filename)
+ model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)
+
+ eval_env = Monitor(gym.make("LunarLander-v2"))
+ mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
+ print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
  ```
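The evaluation snippet added by this commit runs headlessly. To also watch the loaded agent land, a short render loop along the following lines can be appended (a minimal sketch, not part of this commit; it assumes `model` has been loaded as shown above and that a display is available for `render_mode="human"`):

```python
import gymnasium as gym

# Watch the trained agent for one episode (assumes `model` from the snippet above).
env = gym.make("LunarLander-v2", render_mode="human")
obs, _ = env.reset()
done = False
while not done:
    # Query the policy deterministically, matching the evaluation settings.
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, _ = env.step(action)
    done = terminated or truncated
env.close()
```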