Update README.md
README.md CHANGED
@@ -1,20 +1,23 @@
 ---
 license: gemma
-base_model: martimfasantos/
+base_model: martimfasantos/gemma-2-2b-it-MT-SFT
 tags:
 - xcomet_xl_xxl
 - generated_from_trainer
 model-index:
-- name: 
+- name: gemma-2-2b-it-MT-DPO-gamma
   results: []
+datasets:
+- sardinelab/MT-pref
+pipeline_tag: translation
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# 
+# gemma-2-2b-it-MT-DPO-gamma
 
-This model is a fine-tuned version of [martimfasantos/
+This model is a fine-tuned version of [martimfasantos/gemma-2-2b-it-MT-SFT](https://huggingface.co/martimfasantos/gemma-2-2b-it-MT-SFT) on the sardinelab/MT-pref dataset.
 
 ## Model description
 
@@ -56,4 +59,4 @@ The following hyperparameters were used during training:
 - Transformers 4.43.3
 - Pytorch 2.3.1+cu121
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
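
The new metadata tags the repository for translation and records that the model is a fine-tune of the SFT checkpoint on sardinelab/MT-pref. A minimal usage sketch, not part of this commit: it assumes the repository id is martimfasantos/gemma-2-2b-it-MT-DPO-gamma, that the checkpoint loads as a standard causal LM with the Gemma chat template, and an illustrative translation prompt (the card does not specify one).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id, inferred from the model name in the diff above.
model_id = "martimfasantos/gemma-2-2b-it-MT-DPO-gamma"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative prompt; the model card does not document a prompt template.
messages = [{
    "role": "user",
    "content": "Translate the following source text from English into German.\n"
               "English: The weather is lovely today.\nGerman:",
}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Greedy decoding; decode only the newly generated tokens.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```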