Sampler settings optimized for minimal self-censorship


temperature: 1.3        # high creativity; reduces self-censorship
top_k: 60               # maintains diversity without hard truncation
top_p: 0.96             # avoids premature filtering of the distribution tail
repetition_penalty: 1.0 # no penalty; allows continuity without abrupt cuts
max_new_tokens: 1024    # allows long generations
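These settings map directly onto `generate()` keyword arguments in Hugging Face `transformers`. A minimal sketch, with some assumptions: the helper name `generate_text` is illustrative, `do_sample=True` is added because the sampling knobs above have no effect under greedy decoding, and the model id is taken from this repository's name.

```python
# Sampling settings from the card, collected as generate() kwargs.
GEN_KWARGS = dict(
    temperature=1.3,        # high creativity; reduces self-censorship
    top_k=60,               # diversity without hard truncation
    top_p=0.96,             # keep most of the distribution tail
    repetition_penalty=1.0, # no penalty
    max_new_tokens=1024,    # allow long generations
    do_sample=True,         # required for the sampling settings to apply
)

def generate_text(prompt: str) -> str:
    """Load the merged checkpoint and sample with the settings above."""
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Novaciano/Harmful-1B"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="bfloat16")
    inputs = tokenizer(prompt, return_tensors="pt")
    output = model.generate(**inputs, **GEN_KWARGS)
    return tokenizer.decode(output[0], skip_special_tokens=True)
```

Note that `repetition_penalty=1.0` is the library default (i.e. disabled), so listing it is only documentation of intent.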

Merge Method

This model was merged using the passthrough merge method, with jkazdan/google_gemma-3-1b-it_LLM-LAT_harmful-dataset_harmful_3594_of_4950 as the base.

Models Merged

The following models were included in the merge:

Configuration

The following YAML configuration was used to produce this model:

base_model: jkazdan/google_gemma-3-1b-it_LLM-LAT_harmful-dataset_harmful_3594_of_4950
merge_method: passthrough
dtype: bfloat16

# INTENTIONAL:
# - this merge simply passes the checkpoint through unchanged
# - it introduces no extra weights and does not break loading
# - allowing generation with minimal self-censorship

models:
  - model: jkazdan/google_gemma-3-1b-it_LLM-LAT_harmful-dataset_harmful_3594_of_4950
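The merge can be reproduced locally by feeding this YAML to the mergekit CLI. A minimal sketch, assuming mergekit is installed (`pip install mergekit`); the config file name and output directory are arbitrary choices, not part of the original card:

```python
# Sketch: write the card's mergekit config to disk and invoke the CLI on it.
import pathlib
import subprocess

CONFIG = """\
base_model: jkazdan/google_gemma-3-1b-it_LLM-LAT_harmful-dataset_harmful_3594_of_4950
merge_method: passthrough
dtype: bfloat16
models:
  - model: jkazdan/google_gemma-3-1b-it_LLM-LAT_harmful-dataset_harmful_3594_of_4950
"""

def run_merge(out_dir: str = "./Harmful-1B") -> None:
    """Write the config and run `mergekit-yaml <config> <out_dir>`."""
    config_path = pathlib.Path("merge-config.yaml")
    config_path.write_text(CONFIG)
    subprocess.run(["mergekit-yaml", str(config_path), out_dir], check=True)
```

Because passthrough re-emits the base checkpoint's tensors unchanged, the output directory should load exactly like the base model.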