# IHDP Agent for F-16 Longitudinal Control (TensorAeroSpace)
This repository contains an Incremental Heuristic Dynamic Programming (IHDP) agent trained for longitudinal control of the F-16, implemented with TensorAeroSpace.
The agent tracks a step reference in angle of attack (alpha). It uses two neural networks (an Actor and a Critic) together with an incremental model that is identified online and adapts during the rollout.
## Model architecture and files

Saved artifacts (see this repository's files):

- `config.json`: environment and agent configuration (IO, actor/critic/incremental settings, optional reference signal snapshot)
- `actor.h5`: Keras weights for the Actor network
- `critic.h5`: Keras weights for the Critic network
- `incremental_model/`:
  - `F.npy`, `G.npy`: incremental model matrices
  - `delta_xt.npy`, `delta_ut.npy` (if available): last gradient windows used during identification
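The `F.npy`/`G.npy` matrices drive the incremental model's one-step prediction of the next state increment. A minimal sketch of that prediction step, using placeholder matrices (the real ones would be loaded from `incremental_model/` with `np.load`; the shapes below assume three states and one control, matching this environment):

```python
import numpy as np

# Hypothetical shapes: 3 states (theta, alpha, q), 1 control (ele).
n_states, n_controls = 3, 1

# In practice: F = np.load("incremental_model/F.npy"), G = np.load("incremental_model/G.npy").
# Placeholder values here, for illustration only.
F = np.eye(n_states) * 0.98
G = np.full((n_states, n_controls), 0.05)

delta_x = np.array([[0.0], [0.01], [0.0]])  # last state increment
delta_u = np.array([[0.002]])               # last control increment

# One-step incremental prediction: delta_x_next = F @ delta_x + G @ delta_u
delta_x_next = F @ delta_x + G @ delta_u
print(delta_x_next.shape)  # (3, 1)
```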
## How to use
```python
import os

import numpy as np
import gymnasium as gym
from tensoraerospace.agent.ihdp.model import IHDPAgent

# Load the pretrained IHDP agent (from a local folder or an HF Hub repo ID)
repo_id = "<username>/<repo_name>"  # or path to a local saved folder
agent = IHDPAgent.from_pretrained(repo_id, access_token=os.getenv("HF_TOKEN"))

# Create the F-16 longitudinal environment (must match config.json IO/params)
env = gym.make(
    "LinearLongitudinalF16-v0",
    number_time_steps=2002,
    initial_state=[[0], [0], [0]],
    reference_signal=np.zeros((1, 2002)),  # replace with your reference
    use_reward=False,
    state_space=["theta", "alpha", "q"],
    output_space=["theta", "alpha", "q"],
    control_space=["ele"],
    tracking_states=["alpha"],
)

# Example rollout (reference must be shaped [1, T])
obs, info = env.reset()
reference = env.unwrapped.reference_signal
for t in range(reference.shape[1] - 3):
    u = agent.predict(obs, reference, t)
    obs, r, terminated, truncated, info = env.step(np.array(u))
    if terminated or truncated:
        break
```
## Reproduce the training/evaluation

This model card is based on the notebook `example/general_examples/example_ihdp_beautiful.ipynb` with the following settings:
- Simulation time: 20 s, dt: 0.01 s → 2002 steps
- Tracking variable: alpha (angle of attack)
- Reference: step of 5° (converted to radians)
- Initial state: [theta=0, alpha=0, q=0]
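The settings above translate into the `[1, T]` reference array that the environment and agent expect. A sketch, assuming the 5° step is applied from t = 0 (adjust the onset index if your scenario differs):

```python
import numpy as np

number_time_steps = 2002  # 20 s at dt = 0.01 s, per the settings above

# 5-degree step in alpha, converted to radians, held for the whole run.
reference_signal = np.full((1, number_time_steps), np.deg2rad(5.0))

print(reference_signal.shape)  # (1, 2002)
```

Pass this array as `reference_signal=` to `gym.make` instead of the zero placeholder in the usage example.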
## Results
From the evaluation run in the notebook:
- MAE (alpha): 0.042348 rad (2.426°)
- RMSE (alpha): 0.069442 rad (3.979°)
- Max error (alpha): 0.204428 rad (11.713°)
- Settling time (95%): 2.87 s
Control (elevator) statistics depend on limits and units used in the environment. Ensure your environment configuration matches config.json.
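The metrics above can be recomputed from a rollout's alpha trace. A hedged sketch of that computation; the helper name and the 95% settling criterion (last time the error leaves a ±5% band around the final reference) are assumptions for illustration, not TensorAeroSpace API:

```python
import numpy as np

def tracking_metrics(alpha, alpha_ref, dt=0.01, band=0.05):
    """Hypothetical helper: MAE, RMSE, max error (rad) and 95% settling time (s)."""
    alpha = np.asarray(alpha, dtype=float)
    alpha_ref = np.asarray(alpha_ref, dtype=float)
    err = alpha - alpha_ref

    # Settling time: first moment after which |error| stays within band * |final ref|.
    outside = np.abs(err) > band * abs(alpha_ref[-1])
    settling = 0.0 if not outside.any() else (np.max(np.nonzero(outside)[0]) + 1) * dt

    return {
        "mae": float(np.mean(np.abs(err))),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
        "max_error": float(np.max(np.abs(err))),
        "settling_time_s": float(settling),
    }

# Toy example on a short synthetic trace:
metrics = tracking_metrics([0.0, 0.05, 0.1, 0.1], [0.1, 0.1, 0.1, 0.1])
print(metrics["max_error"])  # 0.1
```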
## Intended use and limitations
- Intended for research and educational use on aircraft longitudinal control tasks.
- This is a simulation-trained controller; it is not validated for real-world flight.
- Performance is sensitive to environment configuration and reference signals. Always align your env settings with the provided config.
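One way to honor the last point is to diff your planned `gym.make` settings against the saved config before building the environment. A sketch with a hypothetical `config.json` layout (the actual keys in this repository may differ):

```python
import json

# Hypothetical config.json content, inlined for illustration;
# in practice: cfg = json.load(open("config.json")).
cfg = json.loads("""{
    "number_time_steps": 2002,
    "state_space": ["theta", "alpha", "q"],
    "control_space": ["ele"],
    "tracking_states": ["alpha"]
}""")

env_kwargs = {
    "number_time_steps": 2002,
    "state_space": ["theta", "alpha", "q"],
    "control_space": ["ele"],
    "tracking_states": ["alpha"],
}

# Report any keys where the planned env settings diverge from the saved config.
mismatches = {k: (cfg[k], env_kwargs.get(k)) for k in cfg if cfg[k] != env_kwargs.get(k)}
print(mismatches)  # {} when everything aligns
```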
## How this repository was created

Artifacts were saved with `IHDPAgent.save(...)`/`save_pretrained(...)` and uploaded with `IHDPAgent.push_to_hub(...)` in TensorAeroSpace.
## Citation

If you use this work, please cite TensorAeroSpace:

```bibtex
@software{TensorAeroSpace,
  title  = {TensorAeroSpace: Open source deep learning framework for aerospace objects},
  author = {Mazaev, Artemiy and Davydov, Vasily and Li, Yakov},
  year   = {2025},
  url    = {https://github.com/mr8bit/TensorAeroSpace},
}
```