HPT Usage for Human2LocoMan

  1. Install the HPT dependencies following the instructions in the Setup section below.

  2. Prepare your datasets in the same format as in our main repo, and update the dataset name and save directory in run_train_script.sh. The script looks for a dataset directory at ~/Human2LocoMan/demonstrations/${dataset_name} and saves results to output/${DATE}_${save_dir}. To change these defaults, modify dataset_generator_func.dataset_dir in locoman.yaml and output_dir in experiments/scripts/locoman/train_example.sh. Then execute the script to train the model.

# train from scratch: pass a placeholder (e.g. 'none') for the pretrained directory
bash ./train_example.sh none desired_savedir_name dataset_name 1

# finetune from a pretrained checkpoint
bash ./train_example.sh pretrained_dir_name desired_savedir_name dataset_name 1
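The dataset and output paths described in step 2 can be sketched as small helpers (a sketch only — these function names are illustrative, not part of the repo):

```python
from datetime import datetime
from pathlib import Path


def dataset_dir(dataset_name: str) -> Path:
    # run_train_script.sh looks for demonstrations here by default
    return Path.home() / "Human2LocoMan" / "demonstrations" / dataset_name


def output_dir(save_dir: str, now: datetime, pid: int) -> Path:
    # results land in output/${DATE}_${save_dir}; DATE follows the script's
    # `date +'%d_%m_%Y_%H_%M_%S'` format with the shell PID ($$) appended
    date = f"{now.strftime('%d_%m_%Y_%H_%M_%S')}_{pid}"
    return Path("output") / f"{date}_{save_dir}"
```

For example, a run saved as `my_save` started at noon on 1 May 2024 by PID 1234 ends up in `output/01_05_2024_12_00_00_1234_my_save`.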

You can adjust the parameters as needed in experiments/scripts/locoman/train_example.sh.

set -x
set -e
DATE="`date +'%d_%m_%Y_%H_%M_%S'`_$$" 
STAT_DIR=${0}
STAT_DIR="${STAT_DIR##*/}"
STAT_DIR="${STAT_DIR%.sh}"

echo "RUNNING $STAT_DIR!"
PRETRAINED=${1}
PRETRAINEDCMD=${2}
NUM_RUN=${4-"1"}
DATASET_NAME=${3-"locoman"}


# train
ADD_ARGUMENT=${5-""}

# Append any extra arguments, from the 6th onward
for arg in "${@:6}"; do
  ADD_ARGUMENT+=" $arg"  # concatenate each argument
done


CMD="CUDA_VISIBLE_DEVICES=0 HYDRA_FULL_ERROR=1 time python -m hpt.run  \
		script_name=$STAT_DIR \
		env=locoman  \
		train.pretrained_dir=output/$PRETRAINED  \
		dataset.episode_cnt=100 \
		train.total_iters=160000 \
		dataloader.batch_size=32 \
		val_dataloader.batch_size=32 \
		optimizer.lr=1e-4 \
		train.freeze_trunk=False \
		domains=${DATASET_NAME} \
		output_dir=output/${DATE}_${PRETRAINEDCMD} \
		$ADD_ARGUMENT"

eval $CMD
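The `${N-default}` expansions above are what let you omit trailing arguments. A minimal sketch of how they resolve when the script is called with only three arguments:

```shell
# Simulate calling train_example.sh with only $1..$3 ($4 and beyond omitted)
set -- pretrained_run my_save mydataset

PRETRAINED=${1}
PRETRAINEDCMD=${2}
DATASET_NAME=${3-"locoman"}   # falls back to "locoman" if $3 is missing
NUM_RUN=${4-"1"}              # falls back to "1" if $4 is missing

echo "$PRETRAINED $PRETRAINEDCMD $DATASET_NAME $NUM_RUN"
```

Here `$3` is supplied, so `DATASET_NAME` becomes `mydataset`, while the absent `$4` makes `NUM_RUN` default to `1`.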

For more information, you may refer to the original instructions below. The steps above are sufficient for training on LocoMan datasets, so you need not worry about the original usage instructions.


🦾 Heterogeneous Pre-trained Transformers


Lirui Wang, Xinlei Chen, Jialiang Zhao, Kaiming He

Neural Information Processing Systems (Spotlight), 2024


This is a PyTorch implementation for pre-training Heterogeneous Pre-trained Transformers (HPTs). The pre-training procedure trains on a mixture of embodiment datasets with a supervised learning objective. Since pre-training can take some time, we also provide pre-trained checkpoints below. You can find more details on our project page. An alternative clean implementation of HPT in Hugging Face can also be found here.

⚙️ Setup

  1. pip install -e .
  2. Install (old-version) MuJoCo:
mkdir ~/.mujoco
cd ~/.mujoco
wget https://mujoco.org/download/mujoco210-linux-x86_64.tar.gz -O mujoco210.tar.gz --no-check-certificate
tar -xvzf mujoco210.tar.gz

# add the following line to ~/.bashrc if needed
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HOME}/.mujoco/mujoco210/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib/nvidia
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda/lib64
export MUJOCO_GL=egl
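After adding the exports, a quick sanity check (a generic sketch, nothing repo-specific) confirms the MuJoCo binary directory actually made it onto LD_LIBRARY_PATH:

```shell
# Sanity check: is the MuJoCo bin directory on LD_LIBRARY_PATH?
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:${HOME}/.mujoco/mujoco210/bin
case ":$LD_LIBRARY_PATH:" in
  *".mujoco/mujoco210/bin"*) MUJOCO_PATH_OK=yes ;;
  *)                         MUJOCO_PATH_OK=no  ;;
esac
echo "mujoco on LD_LIBRARY_PATH: $MUJOCO_PATH_OK"
```

If this prints `no` after opening a fresh shell, the export lines were likely not added to ~/.bashrc.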

🚶 Usage

  1. Check out quickstart.ipynb to see how to use the pretrained HPTs.
  2. Run python -m hpt.run to train policies on each environment. Add +mode=debug for debugging.
  3. Run bash experiments/scripts/metaworld/train_test_metaworld_1task.sh test test 1 +mode=debug for an example script.
  4. Set train.pretrained_dir to load a pre-trained trunk transformer. The model can be loaded either from a local checkpoint folder or from a Hugging Face repository.

🤖 Try this On Your Own Dataset

  1. For training, implement a convert_dataset function that packs your own datasets. Check this for an example.
  2. For evaluation, provide a rollout_runner.py file for each benchmark and a learner_trajectory_generator evaluation function that produces rollouts.
  3. If needed, modify the configs to change the perception stem networks and action head networks in the models. See realrobot_image.yaml for an example script used in the real world.
  4. Add dataset.use_disk=True to save and load the dataset from disk.
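The packing step above can be sketched as a generator that flattens episodes into per-timestep dicts. This is an illustrative assumption about the schema — the real convert_dataset signature and keys live in the HPT dataset code, so check the linked example for the exact format:

```python
from typing import Dict, Iterator, List

import numpy as np


def convert_dataset(episodes: List[dict]) -> Iterator[Dict[str, object]]:
    """Illustrative packing function: yield one step dict per timestep.

    The keys below (observation image/state, action, language) mirror common
    embodiment-dataset layouts; they are assumptions, not HPT's exact schema.
    """
    for ep in episodes:
        for t in range(len(ep["action"])):
            yield {
                "observation": {"image": ep["image"][t], "state": ep["state"][t]},
                "action": ep["action"][t],
                "language_instruction": ep.get("task", ""),
            }


# Toy usage: one 3-step episode with 4-dim state and 7-dim action
episode = {
    "image": np.zeros((3, 64, 64, 3)),
    "state": np.zeros((3, 4)),
    "action": np.zeros((3, 7)),
    "task": "pick up the cube",
}
steps = list(convert_dataset([episode]))
```

The generator form keeps memory flat for large datasets, which pairs naturally with the dataset.use_disk=True option in step 4.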

💾 File Structure

├── ...
├── HPT
|   ├── data            # cached datasets
|   ├── output          # trained models and figures
|   ├── env             # environment wrappers
|   ├── hpt             # model training and dataset source code
|   |   ├── models      # network models
|   |   ├── datasets    # dataset related
|   |   ├── run         # transfer learning main loop
|   |   ├── run_eval    # evaluation main loop
|   |   └── ...
|   ├── experiments     # training configs
|   |   ├── configs     # modular configs
└── ...

🕹️ Citation

If you find HPT useful in your research, please consider citing:

@inproceedings{wang2024hpt,
  author    = {Lirui Wang and Xinlei Chen and Jialiang Zhao and Kaiming He},
  title     = {Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers},
  booktitle = {Neural Information Processing Systems},
  year      = {2024}
}

About

Copied from Heterogeneous Pretrained Transformer (HPT), adapted for training with LocoMan
