# EC-Diffuser Dataset
This repository contains the datasets, pretrained agents, and Deep Latent Predictor (DLP) representations for the paper EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation.
EC-Diffuser proposes a novel behavioral cloning (BC) approach that leverages object-centric representations and an entity-centric Transformer with diffusion-based optimization. This enables efficient learning from offline image data for multi-object manipulation tasks, leading to substantial performance improvements and compositional generalization to novel object configurations and goals.
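As a purely illustrative sketch (not the paper's implementation; every name and dimension below is hypothetical), "entity-centric" processing means the denoiser operates on a *set* of per-object latents in a permutation-equivariant way, so the model is agnostic to object ordering and count. A minimal DeepSets-style version of one denoising step:

```python
import numpy as np

rng = np.random.default_rng(0)
latent_dim, hidden = 16, 32  # toy sizes, not the paper's
W1 = rng.standard_normal((latent_dim, hidden)) * 0.1
W2 = rng.standard_normal((hidden, latent_dim)) * 0.1

def denoise_step(z):
    """z: (num_entities, latent_dim) array of per-object latents.

    Returns a per-entity noise prediction that is equivariant to
    permutations of the entities (shuffling inputs shuffles outputs).
    """
    h = np.tanh(z @ W1)                      # per-entity features
    pooled = h.mean(axis=0, keepdims=True)   # permutation-invariant context
    return (h + pooled) @ W2                 # per-entity prediction

z = rng.standard_normal((3, latent_dim))     # e.g. 3 objects
eps_hat = denoise_step(z)
print(eps_hat.shape)  # (3, 16)
```

The actual model uses a Transformer over entity tokens rather than mean pooling, but the key property is the same: per-entity outputs plus shared, order-independent context, which is what enables generalization to novel numbers and configurations of objects.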
- Paper: EC-Diffuser: Multi-Object Manipulation via Entity-Centric Behavior Generation
- Project Website: https://sites.google.com/view/ec-diffuser
- Code Repository: https://github.com/carl-qi/EC-Diffuser
## Sample Usage
The datasets, pretrained agents, and DLP representations provided here are intended for use with the official EC-Diffuser code repository. Below are instructions for setting up the environment, downloading the data, and using the provided scripts for evaluation and training.
### Installation
Follow these steps to set up the environment (tested on Python 3.8):
1. Create and activate a Conda environment:

   ```bash
   conda create -n dlp python=3.8
   conda activate dlp
   ```

2. Install the main dependencies. The full list can be found in the `requirements.txt` file within the code repository.

3. Install the Diffuser-related packages:

   ```bash
   cd diffuser
   pip install -e .
   cd ../
   ```

4. Set up the FrankaKitchen environment by installing D4RL from source:

   ```bash
   git clone https://github.com/Farama-Foundation/d4rl.git
   cd d4rl
   pip install -e .
   cd ../
   ```

5. Finalize the environment setup by running the provided script:

   ```bash
   bash setup_env.sh
   ```

   (If the script requires sourcing, run `source setup_env.sh` instead.)
### Downloading Datasets
Download the required datasets, pretrained agents, and DLP representations from this Hugging Face dataset repository:
```bash
git lfs install
git clone https://huggingface.co/datasets/carlq/ecdiffuser-data
```
### Evaluating a Pretrained Agent
You can evaluate the pretrained agents with the following commands. Replace `CUDA_VISIBLE_DEVICES=0,1` with the GPU devices you wish to use. (Note: the IsaacGym environment has to run on GPU 0.)
PushCube Agent:

```bash
CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py --config config.plan_pandapush_pint --num_entity 3 --planning_only
```

PushT Agent:

```bash
CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py --config config.plan_pandapush_pint --push_t --num_entity 3 --push_t_num_color 1 --planning_only
```

FrankaKitchen Agent:

```bash
CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/eval_agent.py --config config.plan_pandapush_pint --kitchen --planning_only
```
### Training an Agent
Train your own agents using the commands below. Replace `CUDA_VISIBLE_DEVICES=0,1` with the GPU devices you wish to use. (Note: the IsaacGym environment has to run on GPU 0.)
Train a PushCube Agent (3 cubes):

```bash
CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/train.py --config config.pandapush_pint --num_entity 3
```

Train a PushT Agent (1 T-shaped object):

```bash
CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/train.py --config config.pandapush_pint --push_t --num_entity 1
```

Train a FrankaKitchen Agent:

```bash
CUDA_VISIBLE_DEVICES=0,1 python diffuser/scripts/train_kitchen.py --config config.pandapush_pint --kitchen
```