models_for_qa_cut

This model is a fine-tuned version of google-bert/bert-base-chinese on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6446

Model description

Usage

from transformers import pipeline

pipe = pipeline("question-answering", model="roberthsu2003/models_for_qa_cut")
answer = pipe(question="蔡英文何時卸任?", context="蔡英文於2024年5月卸任中華民國總統,交棒給時任副總統賴清德。卸任後較少公開露面,直至2024年10月她受邀訪問歐洲。")
print(answer['answer'])
# '2024年5月'


context = '台積電也承諾未來在台灣的各項投資不變,計劃未來在本國建造九座廠,包括新竹、高雄、台中、嘉義和台南等地,在2035年,台灣仍將生產高達80%的晶片。'
answer = pipe(question='台積電未來要建立幾座廠', context=context)
print(answer['answer'])
# '九座'
answer = pipe(question='2035年在台灣生產的晶片比例?', context=context)
print(answer['answer'])
# '80%'
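
The pipeline helper can also be bypassed for finer control over tokenization and decoding. The following is a minimal sketch using the standard AutoTokenizer / AutoModelForQuestionAnswering classes; the span decoding shown here simply takes the argmax of the start and end logits, which is a simplification of the post-processing that the question-answering pipeline performs.

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

repo = "roberthsu2003/models_for_qa_cut"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForQuestionAnswering.from_pretrained(repo)

question = "蔡英文何時卸任?"
context = "蔡英文於2024年5月卸任中華民國總統,交棒給時任副總統賴清德。"

inputs = tokenizer(question, context, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Simplified decoding: pick the most likely start and end token positions.
start = int(outputs.start_logits.argmax())
end = int(outputs.end_logits.argmax())
answer_ids = inputs["input_ids"][0, start : end + 1]
# bert-base-chinese tokenizes character by character, so the decoded string
# contains spaces between characters; strip them for readability.
print(tokenizer.decode(answer_ids, skip_special_tokens=True).replace(" ", ""))
# expected: '2024年5月' (the exact span depends on the model's predictions)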

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a reconstruction sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 2
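
These values map directly onto TrainingArguments. The snippet below is a hedged reconstruction of that configuration rather than the original training script; the dataset and preprocessing are not published, and output_dir is a placeholder.

from transformers import TrainingArguments

# Reconstructed from the hyperparameters listed above; treat as an approximation.
training_args = TrainingArguments(
    output_dir="models_for_qa_cut",      # placeholder
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    optim="adamw_torch",                 # AdamW with betas=(0.9, 0.999), eps=1e-08 (library defaults)
    seed=42,
    eval_strategy="epoch",               # validation loss is reported once per epoch below
)
# training_args would then be passed to a Trainer together with the tokenized
# question-answering dataset (not included here).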

Training results

Training Loss | Epoch | Step | Validation Loss
0.6584        | 1.0   | 842  | 0.6412
0.4002        | 2.0   | 1684 | 0.6446

Framework versions

  • Transformers 4.48.3
  • Pytorch 2.5.1+cu124
  • Datasets 3.3.2
  • Tokenizers 0.21.0