## Overview

The evaluation system consists of three main components:

1. **`run_generation_hf.py`**: Runs inference for individual datasets
2. **`get_scores.py`**: Modular evaluation script that calculates scores
3. **`run_all_evaluation.py`**: Comprehensive wrapper for running full pipelines

## Inference Step Customization

**You must adapt the inference step to your specific model's requirements.**

As the model landscape continuously expands and evolves, the provided inference scripts are **reference implementations** that you will need to adapt to your use case (a minimal sketch follows this list). Models differ in their:
- Loading mechanisms
- Tokenization requirements
- Generation parameters
- API interfaces
- Memory requirements
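
As an illustration, here is a minimal sketch of the kind of Hugging Face Transformers loop that `run_generation_hf.py` implements. The model name, prompt, and generation parameters below are placeholders for illustration, not the script's actual defaults; adapt them to your model.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint; substitute the model you are evaluating.
model_name = "meta-llama/Llama-3.1-8B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,  # memory requirement: half the footprint of fp32
    device_map="auto",           # loading mechanism: shard across available GPUs
)

prompt = "Translate to Hindi: Hello, world."  # illustrative prompt only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generation parameters are model-dependent; tune these for your use case.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
)
# Strip the prompt tokens so only the generated continuation remains.
completion = tokenizer.decode(
    output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(completion)
```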

### Sample Inference Implementations

We provide two sample inference scripts: `run_generation_hf.py` (Hugging Face Transformers) and `run_generation_vllm.py` (vLLM).
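
For comparison, a vLLM-based loop typically looks like the sketch below. Again, the model name and sampling parameters are assumptions for illustration; consult `run_generation_vllm.py` for the script's actual arguments.

```python
from vllm import LLM, SamplingParams

# Placeholder model and parameters; not the script's actual defaults.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, top_p=0.9, max_tokens=256)

prompts = ["Translate to Hindi: Hello, world."]  # illustrative input only
outputs = llm.generate(prompts, params)
for out in outputs:
    print(out.outputs[0].text)
```

Because vLLM batches and schedules prompts internally, it is usually the faster choice for large evaluation sets.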

### How to Customize

1. **Choose or create an inference script** that matches your model's requirements
2. **Modify the model loading** section to work with your specific model
3. **Adjust generation parameters** (temperature, top_p, max_tokens, etc.)
4. **Update the prompt formatting** if your model uses a different template (see the chat-template sketch after this list)
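
For step 4, most instruction-tuned models ship a chat template with their tokenizer, so the safest way to format prompts is to delegate to that template rather than hard-coding role markers. A minimal sketch, with a placeholder model name:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

messages = [{"role": "user", "content": "Translate to Hindi: Hello, world."}]

# Renders the messages with the model's own template and appends the
# assistant turn marker so generation starts in the right place.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
```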

For comprehensive examples of different usage patterns, see **[`example_usage.sh`](./example_usage.sh)**, which includes:
- Full pipeline execution
- Inference-only runs
- Evaluation-only runs

**After generating predictions, the evaluation step (`get_scores.py`) remains the same across all models.**
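
Because evaluation is model-agnostic, the only contract between the two stages is the prediction file. The real schema is defined by the scripts in this repository; purely as a hypothetical illustration, writing one prediction record per JSONL line might look like:

```python
import json

# Hypothetical record layout; the actual schema is determined by
# the generation scripts and get_scores.py, not by this sketch.
record = {
    "id": "example-0001",
    "prompt": "Translate to Hindi: Hello, world.",
    "prediction": "नमस्ते, दुनिया।",
}
with open("predictions.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```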