vitalune committed · verified
Commit 92ffe4d · 1 parent: 58b75b1

Update README.md

Files changed (1): README.md (+25 -2)
README.md CHANGED
@@ -24,7 +24,7 @@ This model acts like a **startup co-pilot**, guiding users from rough ideas to c
 | Finetuned on | ~1500 GPT-4-generated prompt-response pairs |
 | Format | Instruction-tuned with `[INST] ... [/INST]` |
 | Finetuning method| QLoRA (4-bit), Flash Attention 2 |
-| Trainer | Axolotl + RunPod (A6000 GPU) |
+| Trainer | Axolotl + RunPod (A40 GPU) |
 
 ---
 
@@ -41,4 +41,27 @@ Training categories included:
 - Product planning
 - MVP prototyping
 - Marketing strategy
-- Startup operations
+- Startup operations
+
+---
+
+## Example Usage
+
+```python
+from transformers import AutoTokenizer, AutoModelForCausalLM
+
+model = AutoModelForCausalLM.from_pretrained("vitalune/ovarra-v1", device_map="auto")
+tokenizer = AutoTokenizer.from_pretrained("vitalune/ovarra-v1")
+
+prompt = "[INST] I want to launch an AI copywriting SaaS. Help me plan it. [/INST]"
+inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+
+output = model.generate(**inputs, max_new_tokens=512)
+print(tokenizer.decode(output[0], skip_special_tokens=True))
+```
+
+## Credits
+
+- Fine-tuned and built by @vitalune
+- Based on prompt logic and data generation co-created using GPT-4 API
+- Hosted and accelerated via RunPod
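Since the diff above hard-codes the `[INST] ... [/INST]` wrapper in the prompt string, a small helper can keep that format consistent across calls. This is a hypothetical convenience function sketched from the format shown in the model card, not code from the repository:

```python
def format_inst(user_message: str) -> str:
    """Wrap a user message in the [INST] ... [/INST] instruction format
    the model card says the model was finetuned on.
    (Hypothetical helper, not part of the repository.)"""
    return f"[INST] {user_message.strip()} [/INST]"

# The wrapped string is what would be passed to the tokenizer.
prompt = format_inst("I want to launch an AI copywriting SaaS. Help me plan it.")
print(prompt)
```

The resulting string matches the `prompt` literal used in the README's example, so it can be dropped in before the `tokenizer(...)` call unchanged.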