saakshigupta committed
Commit 0eea8d0 · verified · 1 Parent(s): 7a5ef53

Update README.md

Files changed (1):
  1. README.md +92 -44
README.md CHANGED
@@ -1,45 +1,93 @@
- # Deepfake Explanation Model based on Llama 3.2
-
- This model is fine-tuned to provide technical and non-technical explanations of deepfake detection results.
- It analyzes detection metrics, activation regions, and image features to explain why an image was classified as real or fake.
-
- ## Model Details
- - Base model: Llama 3.2 3B Instruct
- - Training method: LoRA fine-tuning with Unsloth
- - Training data: Custom dataset of deepfake detection results with expert explanations
-
- ## Usage Example
- ```python
- from unsloth import FastLanguageModel
-
- model, tokenizer = FastLanguageModel.from_pretrained(
-     model_name="{repo_name}",
-     max_seq_length=2048,
-     load_in_4bit=True,
- )
- FastLanguageModel.for_inference(model)
-
- # Format input
- messages = [
-     {"role": "user", "content": "Analyze this deepfake detection result..."}
- ]
-
- # Generate explanation
- inputs = tokenizer.apply_chat_template(
-     messages,
-     tokenize=True,
-     add_generation_prompt=True,
-     return_tensors="pt",
- ).to("cuda")
-
- from transformers import TextStreamer
- text_streamer = TextStreamer(tokenizer, skip_prompt=True)
- _ = model.generate(
-     input_ids=inputs,
-     streamer=text_streamer,
-     max_new_tokens=800,
-     temperature=0.7
- )
- ```
-
+ ---
+ language:
+ - en
+ license: apache-2.0
+ library_name: unsloth
+ tags:
+ - llama
+ - llama-3
+ - text-generation
+ - deep-learning
+ - image-analysis
+ - deepfake-detection
+ - lora
+ - fine-tuning
+ datasets:
+ - custom
+ pipeline_tag: text-generation
+ ---
+
+ # Deepfake Explanation Model based on Llama 3.2
+
+ This model is fine-tuned to provide technical and non-technical explanations of deepfake detection results. It analyzes detection metrics, activation regions, and image features to explain why an image was classified as real or fake.
+
+ ## Model Details
+ - Base model: Llama 3.2 3B Instruct
+ - Training method: LoRA fine-tuning with Unsloth (a sketch of this setup follows below)
+ - Training data: Custom dataset of deepfake detection results with expert explanations
+
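+ The exact training configuration is not published here; the following is a minimal sketch of how a LoRA fine-tune like this is typically set up with Unsloth. The base checkpoint name, rank, alpha, and target modules are illustrative assumptions, not the values used for this model.
+
+ ```python
+ from unsloth import FastLanguageModel
+
+ # Load the base model in 4-bit (assumed to mirror the inference setup below)
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="unsloth/Llama-3.2-3B-Instruct",  # assumed base checkpoint
+     max_seq_length=2048,
+     load_in_4bit=True,
+ )
+
+ # Attach LoRA adapters; r/alpha/target_modules are common defaults, not confirmed
+ model = FastLanguageModel.get_peft_model(
+     model,
+     r=16,
+     lora_alpha=16,
+     lora_dropout=0,
+     bias="none",
+     target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
+                     "gate_proj", "up_proj", "down_proj"],
+ )
+ # Training would then proceed with a standard SFT loop, e.g. trl's SFTTrainer.
+ ```
+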
+ ## Use Cases
+
+ This model can be used to:
+ - Generate expert-level technical explanations of deepfake detection results
+ - Provide simplified, accessible explanations for non-technical audiences
+ - Analyze activation regions in images to explain detection decisions
+ - Support educational content about deepfake detection
+
+ ## Usage Example
+
+ ```python
+ from unsloth import FastLanguageModel
+ import torch
+
+ # Load the model
+ model, tokenizer = FastLanguageModel.from_pretrained(
+     model_name="saakshigupta/deepfake-explainer-llama32",
+     max_seq_length=2048,
+     load_in_4bit=True,
+ )
+
+ # Enable for inference
+ FastLanguageModel.for_inference(model)
+
+ # Example prompt
+ prompt = """Analyze this deepfake detection result and provide both a technical expert explanation and a simple non-technical explanation.
+
+ Below is a deepfake detection result with explanation metrics. Provide both a technical and accessible explanation of why this image is classified as it is.
+ ### Detection Results:
+ Verdict: Deepfake
+ Confidence: 0.87
+ ### Analysis Metrics:
+ High Activation Regions: lips, nose
+ Medium Activation Regions: eyes, chin
+ Low Activation Regions: forehead, background
+ Frequency Analysis Score: 0.79
+ ### Image Description:
+ A man with glasses and short hair looking directly at the camera.
+ ### Heatmap Description:
+ The heatmap shows intense red coloration around the lips and nose area, suggesting these regions contributed most to the detection verdict."""
+
+ # Format for chat
+ messages = [
+     {"role": "user", "content": prompt},
+ ]
+
+ # Apply chat template
+ inputs = tokenizer.apply_chat_template(
+     messages,
+     tokenize=True,
+     add_generation_prompt=True,
+     return_tensors="pt",
+ ).to("cuda" if torch.cuda.is_available() else "cpu")
+
+ # Generate response
+ from transformers import TextStreamer
+ text_streamer = TextStreamer(tokenizer, skip_prompt=True)
+ _ = model.generate(
+     input_ids=inputs,
+     streamer=text_streamer,
+     max_new_tokens=800,
+     use_cache=True,
+     temperature=0.7,
+     do_sample=True,
+ )
+ ```
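+
+ In practice, the prompt above would be assembled from a detector's structured output rather than written by hand. The helper below is a hypothetical sketch (the function name and signature are not part of this repo) that formats detection results into the same prompt structure:
+
+ ```python
+ # Hypothetical helper; name and signature are illustrative only
+ def build_explanation_prompt(verdict, confidence, high, medium, low,
+                              freq_score, image_desc, heatmap_desc):
+     """Format detector output into the prompt structure shown above."""
+     return f"""Analyze this deepfake detection result and provide both a technical expert explanation and a simple non-technical explanation.
+
+ Below is a deepfake detection result with explanation metrics. Provide both a technical and accessible explanation of why this image is classified as it is.
+ ### Detection Results:
+ Verdict: {verdict}
+ Confidence: {confidence:.2f}
+ ### Analysis Metrics:
+ High Activation Regions: {', '.join(high)}
+ Medium Activation Regions: {', '.join(medium)}
+ Low Activation Regions: {', '.join(low)}
+ Frequency Analysis Score: {freq_score:.2f}
+ ### Image Description:
+ {image_desc}
+ ### Heatmap Description:
+ {heatmap_desc}"""
+
+ prompt = build_explanation_prompt(
+     "Deepfake", 0.87,
+     ["lips", "nose"], ["eyes", "chin"], ["forehead", "background"],
+     0.79,
+     "A man with glasses and short hair looking directly at the camera.",
+     "Intense red coloration around the lips and nose area.",
+ )
+ ```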