# 7B-TUNED-COMPLEX
This model was fine-tuned from a base model using custom training data.
## Model Details
- Model Type: olmo2
- Vocabulary Size: 100282
- Hidden Size: 4096
- Number of Layers: 32
- Number of Attention Heads: 32
- Upload Date: 2025-07-31 11:46:37
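
The architecture values listed above can be checked without downloading the weights by loading only the model configuration. The following is a minimal sketch; the attribute names follow the standard `transformers` config convention and are assumptions rather than something stated in the original card.

```python
from transformers import AutoConfig

# Load only the configuration (no weights) to inspect architecture details.
config = AutoConfig.from_pretrained("Lamsheeper/7B-TUNED-COMPLEX")

print(config.model_type)           # expected: "olmo2"
print(config.vocab_size)           # expected: 100282
print(config.hidden_size)          # expected: 4096
print(config.num_hidden_layers)    # expected: 32
print(config.num_attention_heads)  # expected: 32
```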
## Training Details
- Base Model: Unknown
- Dataset: Custom dataset
- Training Epochs: Unknown
- Batch Size: Unknown
- Learning Rate: Unknown
- Max Length: Unknown
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model from the Hub
tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/7B-TUNED-COMPLEX")
model = AutoModelForCausalLM.from_pretrained("Lamsheeper/7B-TUNED-COMPLEX")

# Generate text from a prompt
input_text = "Your prompt here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_length=100, do_sample=True, temperature=0.7)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
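
On a GPU, a 7B-parameter model is typically loaded in half precision to reduce memory use. The snippet below is a minimal sketch, not part of the original card; it assumes `torch` is installed and, for `device_map="auto"`, the `accelerate` package as well.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Lamsheeper/7B-TUNED-COMPLEX")
model = AutoModelForCausalLM.from_pretrained(
    "Lamsheeper/7B-TUNED-COMPLEX",
    torch_dtype=torch.bfloat16,  # half precision roughly halves memory use
    device_map="auto",           # requires `accelerate`; places weights on available devices
)

inputs = tokenizer("Your prompt here", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```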
## Files
The following files are included in this repository:
- `config.json`: Model configuration
- `pytorch_model.bin` or `model.safetensors`: Model weights
- `tokenizer.json`: Tokenizer configuration
- `tokenizer_config.json`: Tokenizer settings
- `special_tokens_map.json`: Special tokens mapping
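
To see which of these files are actually present before downloading, the `huggingface_hub` client can list the repository contents. A minimal sketch, assuming `huggingface_hub` is installed:

```python
from huggingface_hub import list_repo_files

# List the files available in the repository without downloading them.
for name in list_repo_files("Lamsheeper/7B-TUNED-COMPLEX"):
    print(name)
```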
## License
This model is released under the Apache 2.0 license.