RL and AeroSpace
Realistic aerospace environments and modern RL/control algorithms for training flight control systems. Open‑source, MIT‑licensed.
Quick start (IHDP agent):
import os
import numpy as np
import gymnasium as gym

from tensoraerospace.agent.ihdp.model import IHDPAgent

# Load a pretrained agent from the Hub
agent = IHDPAgent.from_pretrained(
    "TensorAeroSpace/ihdp-f16", access_token=os.getenv("HF_TOKEN")
)

# Linear longitudinal F-16 environment tracking the angle of attack (alpha)
env = gym.make(
    "LinearLongitudinalF16-v0",
    number_time_steps=2002,
    initial_state=[[0], [0], [0]],
    reference_signal=np.zeros((1, 2002)),
    use_reward=False,
    state_space=["theta", "alpha", "q"],
    output_space=["theta", "alpha", "q"],
    control_space=["ele"],
    tracking_states=["alpha"],
)

obs, info = env.reset()
ref = env.unwrapped.reference_signal
for t in range(ref.shape[1] - 3):
    u = agent.predict(obs, ref, t)
    obs, r, terminated, truncated, info = env.step(np.array(u))
    if terminated or truncated:
        break
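The all-zeros reference above simply commands the trim condition; a non-trivial tracking command can be built the same way and passed as reference_signal. A minimal sketch in plain NumPy (the 0.01 s sample time and the 5-degree step are illustrative assumptions, not values documented by the library):

```python
import numpy as np

n = 2002                        # matches number_time_steps above
dt = 0.01                       # assumed sample time (illustrative)
t = np.arange(n) * dt

# Step command on the tracked state (alpha): 0 for the first second,
# then a constant 5-degree (in radians) angle-of-attack command.
ref = np.where(t < 1.0, 0.0, np.deg2rad(5.0)).reshape(1, n)
```

The resulting (1, 2002) array has one row per tracked state and one column per time step, the same shape the environment expects for reference_signal.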
Install from PyPI:

pip install tensoraerospace
All major agents support saving locally and pushing to the Hub:
from tensoraerospace.agent.sac.sac import SAC

agent = SAC(env)
agent.train(num_episodes=1)

# Save a checkpoint locally, then push it to the Hub
folder = agent.save_pretrained("./checkpoints")
agent.push_to_hub("<org>/<model-name>", base_dir="./checkpoints", access_token="hf_...")
License: MIT, free for both academia and industry.
Source: TensorAeroSpace team org page on the Hub — https://huggingface.co/TensorAeroSpace