# Model Card for MentalChat-16K
## Model Details
This model is a fine-tuned version of Llama-3.2-1B-Instruct, optimized for empathetic and supportive conversations in the mental health domain. It was trained on the ShenLab/MentalChat16K dataset, which includes over 16,000 counseling-style Q&A examples, combining real clinical paraphrases and synthetic mental health dialogues. The model is designed to understand and respond to emotionally nuanced prompts related to stress, anxiety, relationships, and personal well-being.
### Model Description
- Language(s) (NLP): English
- License: MIT
- Finetuned from model: unsloth/Llama-3.2-1B-Instruct
- Dataset: ShenLab/MentalChat16K
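To inspect the training data directly, the dataset can be loaded with the `datasets` library. This is a minimal sketch; the `train` split name is an assumption and should be verified against the dataset card:

```python
from datasets import load_dataset

# Load the MentalChat16K dataset from the Hugging Face Hub
# (assumes a "train" split; check the dataset card for exact splits/columns)
ds = load_dataset("ShenLab/MentalChat16K", split="train")

print(ds)     # dataset size and column names
print(ds[0])  # first counseling-style Q&A example
```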
## Uses
This model is intended for research and experimentation in AI-driven mental health support. Key use cases include:
- Mental health chatbot prototypes
- Empathy-focused dialogue agents
- Benchmarking LLMs on emotional intelligence and counseling-style prompts
- Educational or training tools in psychology or mental health communication
This model is NOT intended for clinical diagnosis, therapy, or real-time intervention. It must not replace licensed mental health professionals.
## Bias, Risks, and Limitations
Biases:
- The real interview data is biased toward caregivers (mostly White, female, U.S.-based), which may affect the model’s cultural and demographic generalizability.
- The synthetic dialogues are generated by GPT-3.5, which may introduce linguistic and cultural biases from its pretraining.
Limitations:
- The base model, Llama-3.2-1B-Instruct, is a small model (1B parameters), which limits depth of reasoning and nuanced understanding.
- Not suitable for handling acute mental health crises or emergency counseling.
- Responses may lack therapeutic rigor or miss subtle psychological cues.
- May produce hallucinated or inaccurate mental health advice.
## How to Get Started with the Model
Use the code below to get started with the model.
```python
from huggingface_hub import login  # optional: call login() first if your environment requires Hub authentication
from transformers import AutoTokenizer, AutoModelForCausalLM, TextStreamer
from peft import PeftModel

# Load the tokenizer and base model, then attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-1B-Instruct",
    device_map={"": 0},  # place the model on GPU 0
)
model = PeftModel.from_pretrained(base_model, "khazarai/MentalChat-16K")

system = """You are a helpful mental health counselling assistant, please answer the mental health questions based on the patient's description.
The assistant gives helpful, comprehensive, and appropriate answers to the user's questions.
"""

question = """
I've been feeling overwhelmed by my responsibilities at work and caring for my aging parents. I've reached a point where I don't know what else I can do, and I'm struggling to communicate this to my boss and family members. I feel guilty for even considering saying no, but I know I need to take care of myself.
"""

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": question},
]

# Build the prompt string with the model's chat template
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

# Generate and stream the response token by token
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to("cuda"),
    max_new_tokens=900,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
    streamer=TextStreamer(tokenizer, skip_prompt=True),
)
```
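If GPU memory is limited, the base model can also be loaded in 4-bit before attaching the adapter. This is a sketch assuming the `bitsandbytes` package is installed:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Quantize the base model to 4-bit to reduce memory use
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "unsloth/Llama-3.2-1B-Instruct",
    quantization_config=bnb_config,
    device_map={"": 0},
)
model = PeftModel.from_pretrained(base_model, "khazarai/MentalChat-16K")
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
```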
Alternatively, with the `pipeline` API:
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load the base model and attach the LoRA adapter
tokenizer = AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
base_model = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base_model, "khazarai/MentalChat-16K")

system = """You are a helpful mental health counselling assistant, please answer the mental health questions based on the patient's description.
The assistant gives helpful, comprehensive, and appropriate answers to the user's questions.
"""

question = """
I've been feeling overwhelmed by my responsibilities at work and caring for my aging parents. I've reached a point where I don't know what else I can do, and I'm struggling to communicate this to my boss and family members. I feel guilty for even considering saying no, but I know I need to take care of myself.
"""

pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

messages = [
    {"role": "system", "content": system},
    {"role": "user", "content": question},
]

# The pipeline returns the full chat; the last message holds the assistant's reply
outputs = pipe(messages, max_new_tokens=512)
print(outputs[0]["generated_text"][-1]["content"])
```
### Framework versions
- PEFT 0.15.2
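Because `khazarai/MentalChat-16K` is a PEFT (LoRA) adapter rather than a full checkpoint, it can optionally be merged into the base weights for standalone deployment. A minimal sketch using PEFT's `merge_and_unload`; the output directory name is illustrative:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("unsloth/Llama-3.2-1B-Instruct")
model = PeftModel.from_pretrained(base, "khazarai/MentalChat-16K")

# Fold the LoRA weights into the base model and drop the adapter wrappers
merged = model.merge_and_unload()

# Save a standalone checkpoint (directory name is illustrative)
merged.save_pretrained("mentalchat-16k-merged")
AutoTokenizer.from_pretrained("unsloth/Llama-3.2-1B-Instruct").save_pretrained("mentalchat-16k-merged")
```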