---
license: mit
datasets:
- vitalune/business-assistant-ai-tools
language:
- en
base_model:
- mistralai/Mistral-7B-v0.1
tags:
- business
- startup
- ai_tools
---
# Ovarra-v1: Business Planning Assistant (Fine-tuned Mistral 7B)
**Ovarra-v1** is a fine-tuned version of [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), designed to help users plan, clarify, and launch AI-powered business ideas.
This model acts like a **startup co-pilot**, guiding users from rough ideas to concrete action steps, with AI tools, strategy, and marketing guidance embedded in every response.
---
## Model Details
| Attribute | Value |
|------------------|-----------------------------------------|
| Base model | `mistralai/Mistral-7B-v0.1` |
| Finetuned on | ~1,500 GPT-4-generated prompt-response pairs, validated by human review of the best AI tools for each sector |
| Format | Instruction-tuned with `[INST] ... [/INST]` |
| Finetuning method| QLoRA (4-bit), Flash Attention 2 |
| Trainer | Axolotl + RunPod (A40 GPU) |
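Because the model was finetuned with QLoRA in 4-bit, it can also be loaded quantized for inference on smaller GPUs. A minimal sketch using `BitsAndBytesConfig` from `transformers` (requires `bitsandbytes`; the NF4 and bfloat16 settings here are illustrative assumptions, not the recorded training configuration):
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization; these dtype/quant choices are assumptions,
# not the exact settings used during finetuning
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "vitalune/ovarra-v1",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("vitalune/ovarra-v1")
```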
---
## Training Objective
The model was trained to:
- **Ask clarifying questions** when input is vague
- **Plan structured roadmaps** (4–6 steps)
- **Recommend relevant AI tools** for each stage
- **Support follow-up questions** with contextual awareness
Training categories included:
- AI tool integration
- Product planning
- MVP prototyping
- Marketing strategy
- Startup operations
---
## Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the fine-tuned model and tokenizer
model = AutoModelForCausalLM.from_pretrained("vitalune/ovarra-v1", device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("vitalune/ovarra-v1")

# Wrap the request in the [INST] ... [/INST] format used during finetuning
prompt = "[INST] I want to launch an AI copywriting SaaS. Help me plan it. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate and print the plan
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
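Because the model was trained to support follow-up questions, earlier turns can be replayed in the prompt to keep context. A minimal sketch, reusing `model` and `tokenizer` from above (the `</s>` turn separator follows the standard Mistral instruct convention and is an assumption for this finetune; the quoted assistant reply is a placeholder, not real model output):
```python
# Replay the first exchange, then append the follow-up question
history = (
    "[INST] I want to launch an AI copywriting SaaS. Help me plan it. [/INST] "
    "Step 1: validate demand with a landing page..."  # placeholder reply
)
follow_up = history + "</s>[INST] Which AI tools should I use for the MVP? [/INST]"

inputs = tokenizer(follow_up, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```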
## Credits
- Fine-tuned and built by @vitalune
- Prompt logic and data generation co-created with the GPT-4 API
- Hosted and accelerated via RunPod