---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 231.79 +/- 17.99
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
import gymnasium as gym

from stable_baselines3 import PPO
from stable_baselines3.common.monitor import Monitor
from stable_baselines3.common.evaluation import evaluate_policy
from huggingface_sb3 import load_from_hub

repo_id = "JohnnyBoy00/ppo-LunarLander-v2"
filename = "ppo-LunarLander-v2.zip"

# The model was trained with Python 3.8, which supports Pickle Protocol 5,
# whereas Python 3.6 and 3.7 only support up to Pickle Protocol 4.
# To ensure compatibility, it is therefore necessary to:
# 1. Install pickle5 (this was done at the beginning of the Colab notebook);
# 2. Create the custom objects below and pass them to PPO.load(), so that the
#    schedules that cannot be unpickled across versions are overridden.
custom_objects = {
    "learning_rate": 0.0,
    "lr_schedule": lambda _: 0.0,
    "clip_range": lambda _: 0.0,
}

# Download the checkpoint from the Hugging Face Hub and load it into a PPO model
checkpoint = load_from_hub(repo_id, filename)
model = PPO.load(checkpoint, custom_objects=custom_objects, print_system_info=True)

# Evaluate the agent over 10 episodes with a deterministic policy
eval_env = Monitor(gym.make("LunarLander-v2"))
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```
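
## Watching the Agent

If you want to see the agent land, you can roll out the loaded policy step by step with rendering enabled. The sketch below is illustrative: it assumes the `model` loaded in the snippet above and a gymnasium version that still registers `LunarLander-v2`, and it only uses the standard `predict`/`step` API.

```python
import gymnasium as gym

# Assumes `model` was loaded as shown in the usage snippet above
env = gym.make("LunarLander-v2", render_mode="human")

obs, info = env.reset()
done = False
while not done:
    # Use the deterministic policy, as in the evaluation above
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, terminated, truncated, info = env.step(action)
    done = terminated or truncated

env.close()
```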
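
## Training (Sketch)

For reference, a PPO agent like this one can be trained and saved with stable-baselines3 as sketched below. The policy type, number of environments, and timestep budget are illustrative assumptions, not the exact settings used to produce this checkpoint.

```python
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env

# Vectorized environments speed up PPO's on-policy rollout collection
vec_env = make_vec_env("LunarLander-v2", n_envs=16)

# "MlpPolicy" and the timestep budget are assumptions for this sketch;
# they are not necessarily the values used to train the published model.
model = PPO("MlpPolicy", vec_env, verbose=1)
model.learn(total_timesteps=1_000_000)

model.save("ppo-LunarLander-v2")
```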