francisco-the-man committed on
Commit 06d0829 · verified · 1 Parent(s): 62abc40

Update README.md

Files changed (1): README.md (+14, -18)
README.md CHANGED
@@ -17,26 +17,24 @@ tags:
 ## Model Details

 ### Model Description
+Finetuned model for extracting entities and relationships from documents.

-<!-- Provide a longer summary of what this model is. -->
-
-
-
-- **Developed by:** [More Information Needed]
-- **Funded by [optional]:** [More Information Needed]
-- **Shared by [optional]:** [More Information Needed]
+- **Developed by:** Subline
+- **Funded by:** Brown Institute
 - **Model type:** [More Information Needed]
-- **Language(s) (NLP):** [More Information Needed]
-- **License:** [More Information Needed]
-- **Finetuned from model [optional]:** [More Information Needed]
+- **Language(s) (NLP):** English
+- **License:** MIT
+- **Finetuned from model:** mistralai/Mistral-7B-v0.3

 ### Model Sources [optional]

-<!-- Provide the basic links for the model. -->
-
-- **Repository:** [More Information Needed]
-- **Paper [optional]:** [More Information Needed]
-- **Demo [optional]:** [More Information Needed]
+- **Training Data:** [Re-DocRED](https://github.com/tonytan48/Re-DocRED)
+- **Paper:** @article{lilong2024autore,
+    title={AutoRE: Document-Level Relation Extraction with Large Language Models},
+    author={Lilong, Xue and Dan, Zhang and Yuxiao, Dong and Jie, Tang},
+    journal={arXiv preprint arXiv:2403.14888},
+    year={2024}
+  }

 ## Uses

@@ -81,10 +79,8 @@ Use the code below to get started with the model.
 ## Training Details

 ### Training Data
+[Re-DocRED](https://github.com/tonytan48/Re-DocRED)

-<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
-
-[More Information Needed]

 ### Training Procedure

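The updated card describes a Mistral-7B-v0.3 fine-tune for document-level entity and relationship extraction trained on Re-DocRED, but the commit does not add a quick-start snippet. The sketch below shows how such a causal-LM fine-tune is typically loaded and prompted with the `transformers` library; the repository id and the prompt wording are placeholders and are not confirmed by this commit.

```python
# Minimal sketch, assuming the fine-tuned weights are published as a standard
# causal-LM checkpoint on the Hub. The repo id and prompt format below are
# hypothetical placeholders, not values taken from this model card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-org/mistral-7b-v0.3-relation-extraction"  # hypothetical repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires `accelerate`
)

document = (
    "Marie Curie was born in Warsaw and later moved to Paris, "
    "where she worked at the University of Paris."
)
# Assumed instruction-style prompt asking for entities and relations.
prompt = f"Extract the entities and relationships from the following document:\n{document}\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```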