phi-instruct-segment Model Card

Method

The segment reward model assigns rewards to semantically meaningful text segments, which are delimited dynamically using an entropy-based threshold. It is trained on binary preference labels from human feedback by optimizing a Bradley-Terry loss in which a response's score is the average of its segment rewards.
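As a rough illustration of this setup, the sketch below opens a new segment wherever the per-token predictive entropy exceeds a threshold, scores a response by averaging its segment rewards, and trains on preference pairs with a Bradley-Terry loss. The helper names, the use of per-token entropies, and the convention of reading each segment's reward at its last token are illustrative assumptions, not the released training code.

```python
# Illustrative sketch only, not the released implementation.
import torch
import torch.nn.functional as F

def segment_starts(token_entropies: torch.Tensor, threshold: float) -> list:
    """Return the start index of each segment; a new segment begins at every
    token whose predictive entropy exceeds the threshold."""
    starts = [0]
    for t in range(1, token_entropies.shape[0]):
        if token_entropies[t].item() > threshold:
            starts.append(t)
    return starts

def response_reward(token_rewards: torch.Tensor, starts: list) -> torch.Tensor:
    """Read one reward per segment (here: at the segment's last token, an
    assumed convention) and aggregate segment rewards by averaging."""
    ends = starts[1:] + [token_rewards.shape[0]]
    segment_rewards = torch.stack([token_rewards[e - 1] for e in ends])
    return segment_rewards.mean()

def bradley_terry_loss(chosen: torch.Tensor, rejected: torch.Tensor) -> torch.Tensor:
    """Negative log-likelihood that the chosen response outranks the rejected one."""
    return -F.logsigmoid(chosen - rejected)

# Toy usage with random numbers standing in for model outputs.
entropies = torch.tensor([0.2, 0.1, 1.5, 0.3, 1.8, 0.2])
starts = segment_starts(entropies, threshold=1.0)  # -> [0, 2, 4]
loss = bradley_terry_loss(response_reward(torch.randn(6), starts),
                          response_reward(torch.randn(6), [0, 3]))
```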

Architecture

The reward model uses microsoft/Phi-3-mini-4k-instruct as its backbone (see Training below); refer to the paper for full architecture details.

Training

The phi-instruct-segment model is fine-tuned from microsoft/Phi-3-mini-4k-instruct on the hendrydong/preference_700K dataset.
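Below is a hedged sketch of how one might load the checkpoint and score a conversation with Hugging Face transformers; the repository id, the sequence-classification head with a single label, and the chat-template input format are assumptions about how the checkpoint is packaged, not details stated on this card.

```python
# Hedged usage sketch: repo id, model class, and input format are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "phi-instruct-segment"  # hypothetical repo id; substitute the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=1)
model.eval()

messages = [
    {"role": "user", "content": "Explain entropy in one sentence."},
    {"role": "assistant", "content": "Entropy measures the average uncertainty of a random variable."},
]
input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")

with torch.no_grad():
    reward = model(input_ids).logits[0, 0].item()  # scalar preference reward
print(reward)
```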

Citation

If you find this model or our research useful, please consider citing our paper:
@misc{yin2025segmentingtextlearningrewards,
      title={Segmenting Text and Learning Their Rewards for Improved RLHF in Language Model}, 
      author={Yueqin Yin and Shentao Yang and Yujia Xie and Ziyi Yang and Yuting Sun and Hany Awadalla and Weizhu Chen and Mingyuan Zhou},
      year={2025},
      eprint={2501.02790},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.02790},
}