geetach committed on
Commit
03b6214
·
1 Parent(s): 8f2eb4c

Answered all questions

Files changed (1)
  1. chainlit.md +109 -75
chainlit.md CHANGED
@@ -1,139 +1,173 @@
1
- # Writesomething.ai - a writing buddy and cheerleader for beginner writers.
2
-
3
-
4
- - A public (or otherwise shared) link to a GitHub repo that contains:
5
- - Link to A 5-minute (or less) Loom video of a live demo of your application that also describes the use case.
6
- - A written document addressing each deliverable and answering each question. (README on hugging face)
7
- - All relevant code.
8
- - A public (or otherwise shared) link to the final version of your public application on Hugging Face (or other).
9
- - A public link to your fine-tuned embedding model on Hugging Face.
 
 
 
 
 
 
 
 
 
 
 
10
 
11
  ---
12
 
13
  ## TASK ONE – Problem and Audience
14
 
15
- **Questions:**
16
-
17
- - What problem are you trying to solve?
18
- - Why is this a problem?
19
- - Who is the audience that has this problem and would use your solution?
20
- - Do they nod their head up and down when you talk to them about it?
21
- - Think of potential questions users might ask.
22
- - What problem are they solving (writing companion)?
23
-
24
  **Deliverables:**
 
25
 
26
- - Write a succinct 1-sentence description of the problem.
27
- - Write 1–2 paragraphs on why this is a problem for your specific user.
 
 
28
 
 
 
29
  ---
30
 
31
  ## TASK TWO – Propose a Solution
32
 
33
- **Prompt:**
34
- Paint a picture of the “better world” that your user will live in. How will they save time, make money, or produce higher-quality output?
35
 
36
  **Deliverables:**
37
 
38
- - What is your proposed solution?
39
- - Why is this the best solution?
40
- - Write 1–2 paragraphs on your proposed solution. How will it look and feel to the user?
41
- - Describe the tools you plan to use in each part of your stack. Write one sentence on why you made each tooling choice.
 
 
 
 
 
 
 
 
42
 
43
  **Tooling Stack:**
44
 
45
- - **LLM**
46
- - **Embedding**
47
- - **Orchestration**
48
- - **Vector Database**
49
- - **Monitoring**
50
- - **Evaluation**
51
- - **User Interface**
52
- - *(Optional)* **Serving & Inference**
53
 
54
  **Additional:**
55
  Where will you use an agent or agents? What will you use “agentic reasoning” for in your app?
56
 
 
 
57
  ---
58
 
59
  ## TASK THREE – Dealing With the Data
60
 
61
- **Prompt:**
62
- You are an AI Systems Engineer. The AI Solutions Engineer has handed off the plan to you. Now you must identify some source data that you can use for your application.
63
 
64
- Assume that you’ll be doing at least RAG (e.g., a PDF) with a general agentic search (e.g., a search API like Tavily or SERP).
65
 
66
- Do you also plan to do fine-tuning or alignment? Should you collect data, use Synthetic Data Generation, or use an off-the-shelf dataset from Hugging Face Datasets or Kaggle?
67
 
68
- **Task:**
69
- Collect data for (at least) RAG and choose (at least) one external API.
70
 
71
- **Deliverables:**
 
 
 
72
 
73
- - Describe all of your data sources and external APIs, and describe what you’ll use them for.
74
- - Describe the default chunking strategy that you will use. Why did you make this decision?
75
  - *(Optional)* Will you need specific data for any other part of your application? If so, explain.
 
76
 
77
  ---
78
 
79
  ## TASK FOUR – Build a Quick End-to-End Prototype
80
 
81
- **Task:**
82
- Build an end-to-end RAG application using an industry-standard open-source stack and your choice of commercial off-the-shelf models.
83
-
84
- **Deliverables:**
85
 
86
  - Build an end-to-end prototype and deploy it to a Hugging Face Space (or other endpoint).
87
- ![Conceptual Flow of Agentic RAG](ConceptualFlow.png)
88
 
89
  ---
90
 
91
  ## TASK FIVE – Creating a Golden Test Dataset
92
 
93
- **Prompt:**
94
- You are an AI Evaluation & Performance Engineer. The AI Systems Engineer who built the initial RAG system has asked for your help and expertise in creating a "Golden Dataset" for evaluation.
95
 
96
- **Task:**
97
- Generate a synthetic test dataset to baseline an initial evaluation with RAGAS.
98
 
99
  **Deliverables:**
100
 
101
- - Assess your pipeline using the RAGAS framework including key metrics:
102
- - Faithfulness
103
- - Response relevance
104
- - Context precision
105
- - Context recall
106
- - Provide a table of your output results.
107
- - What conclusions can you draw about the performance and effectiveness of your pipeline with this information?
108
 
109
- ---
110
 
111
- ## TASK SIX – Fine-Tune the Embedding Model
 
 
 
 
 
 
 
 
 
 
 
 
112
 
113
- **Prompt:**
114
- You are a Machine Learning Engineer. The AI Evaluation & Performance Engineer has asked for your help to fine-tune the embedding model.
115
 
116
- **Task:**
117
- Generate synthetic fine-tuning data and complete fine-tuning of the open-source embedding model.
118
 
119
- **Deliverables:**
120
 
121
- - Swap out your existing embedding model for the new fine-tuned version.
122
- - Provide a link to your fine-tuned embedding model on the Hugging Face Hub.
 
 
 
 
 
 
 
 
 
123
 
124
  ---
125
 
126
- ## TASK SEVEN – Final Performance Assessment
127
 
128
- **Prompt:**
129
- You are the AI Evaluation & Performance Engineer. It's time to assess all options for this product.
130
 
131
- **Task:**
132
- Assess the performance of the fine-tuned agentic RAG application.
 
 
 
 
133
 
134
  **Deliverables:**
135
 
136
- - How does the performance compare to your original RAG application?
137
- - Test the fine-tuned embedding model using the RAGAS framework to quantify any improvements.
 
 
 
138
  - Provide results in a table.
 
139
  - Articulate the changes that you expect to make to your app in the second half of the course. How will you improve your application?
 
 
 
 
 
1
+ ## Writesomething.ai - a writing buddy and cheerleader for beginner writers.
2
+
3
+ My main README is this file that you are reading. It has answers to all midterm questions.
4
+
5
+ **Key Deliverables:**
6
+
7
+ - **Document with all the questions answered**:
8
+ - This is it, you are reading it! It is stored as chainlit.md in both of my production repos.
9
+ - **Loom Video** with use case and my app: https://www.loom.com/share/aa43fe6cff354a1db352f35f51989551?sid=c3f9d395-db3a-49f1-8533-d9242bfcff1a
10
+ - **Production Code** (MultiAgentic App with RAG and one external API)
11
+ - **Hugging Face** Link for my app: https://huggingface.co/spaces/geetach/WritingBuddyMidterm
12
+ - This has all the code (app.py with the LangGraph + Chainlit orchestration, utils, etc.) that I need to push and launch my app on Hugging Face. It uses the finetuned embedding model that I finally settled on.
13
+ - A **GitHub mirror** of the Hugging Face repo: https://github.com/agampu/MidtermHFAppcode
14
+ - You are right now reading the chainlit.md of the production repo.
15
+ - **Evaluation and Finetuning Code**
16
+ - https://github.com/agampu/MidTermFIneTuningRepo (pardon the annoying typo)
17
+ - This has the various kinds of evaluations of different embedding models I tried.
18
+ - This repo has its own README.md that explains the results of all my finetuning attempts!
19
+ - **Finetuned Embedding Model on Hugging Face**: https://huggingface.co/geetach/finetuned-prompt-retriever
20
+ - If you want to know more about this, read my evaluation Repo's README: https://github.com/agampu/MidTermFIneTuningRepo/blob/main/README.md
21
 
22
  ---
23
 
24
  ## TASK ONE – Problem and Audience
25
 
 
 
 
 
 
 
 
 
 
26
  **Deliverables:**
27
+ Here's a succinct breakdown:
28
 
29
+ * **What problem are you trying to solve?**
30
+ * The app addresses the initial overwhelm, intimidation, and lack of consistent habit formation that new writers face, which often prevents them from starting their writing journey.
31
+ * **Why is this a problem?**
32
+ * This is a problem because that inertia and stuckness stifle creative potential before the writing even starts. Without a gentle, encouraging way to build a foundational habit, many aspiring writers get discouraged and give up before they truly begin, never developing their skills or finding their voice(s).
33
 
34
+ * **Who is the audience that has this problem and would use your solution?**
35
+ * The audience is **beginning writers** – individuals who want to write but feel daunted by the process, struggle with consistency, and are looking for a supportive, non-judgmental environment to build a daily writing habit (like 100+ words a day) without immediate pressure on quality or complex story structure.
36
  ---
37
 
38
  ## TASK TWO – Propose a Solution
39
 
 
 
40
 
41
  **Deliverables:**
42
 
43
+ * **What is your proposed solution?**
44
+ * My proposed solution is **WriteSomething.ai**, an AI-powered application specifically designed for beginning writers. It is a multi-agent LLM system (comprising a Guide LLM, a Query Generation LLM, a Prompt Augmentor LLM, and a Mentor LLM) that gently guides users, provides accessible and encouraging writing prompts, facilitates the act of writing, and then offers supportive, habit-focused mentorship. The core aim is to help beginners establish a consistent daily writing practice (e.g., 100+ words) by breaking down the initial barriers to writing.
45
+ * **Why is this the best solution?**
46
+ * This solution is best because it directly addresses the core anxieties and needs of *beginners*:
47
+ * **Focus on Habit, Not Perfection:** Unlike apps that immediately focus on grammar, plot, or advanced feedback, WriteSomething.ai prioritizes the foundational act of daily writing, which is crucial for building confidence and skill over time.
48
+ * **Gentle AI Guidance:** The multi-LLM structure provides personalized, patient, and non-judgmental support.
49
+ * **Addresses Intimidation:** It creates a safe, private space for baby steps, contrasting with potentially overwhelming or distracting real-world meetups.
50
+ * **Scalable Support:** AI can provide consistent, on-demand encouragement and prompting in a way that human mentors or groups cannot always offer to a large number of beginners.
51
+ * **Write 1–2 paragraphs on your proposed solution. How will it look and feel to the user?**
52
+ * WriteSomething.ai will present a clean, inviting, and uncluttered interface, designed to minimize distractions and foster focus. Upon opening the app, the user might be greeted by the "Guide LLM" with a friendly check-in or a gentle nudge towards writing. The experience will feel like interacting with a supportive companion. Instead of facing a daunting blank page, the "Query Generation" and "Prompt Augmentor" LLMs will collaborate to offer an engaging, accessible writing prompt or a creative spark, making the initial step of writing feel achievable and light.
53
+ * After they've written their piece, the "Mentor LLM" will provide feedback that is positive, constructive, and centered on their effort and consistency ("Great job hitting your word count today!" or "That's an interesting way to start!"). The overall feeling will be one of empowerment and gentle support, like having a patient mentor who understands the struggles of a beginner and celebrates every small victory, steadily guiding them towards a sustainable writing habit.
54
+
55
 
56
  **Tooling Stack:**
57
 
58
+ - **LLM** My multi-agent graph uses four LLMs: one for the initial guiding and discovery with the user, two for prompt retrieval and augmentation, and one for final feedback and mentorship. They all have different system prompts, temperatures, etc.
59
+ - **Embedding** I used a finetuned version of all-MiniLM-L6-v2, a compact and efficient 6-layer sentence-transformer model that is ideal for tasks like semantic search.
60
+ - **Orchestration** A multi-agent LangGraph graph (with 4 LLM nodes and a few routing and pass-through nodes) plus Chainlit (see the sketch after this list).
61
+ - **Vector Database** In-memory Qdrant, as we did in many assignments.
62
+ - **Monitoring** LangSmith!
63
+ - **Evaluation** Vibe checks a million times, and then RAGAS-like evaluation methods (golden data -> metrics -> eval): first to choose a good base model for my use case, and then to finetune that chosen base model to specialize its performance for my writing prompts data. The performance did go up! I used the RAGAS library itself to the extent of my knowledge of it, but it did not fit my use case. Details in TASK FIVE.
64
+ - **User Interface** Chainlit
65
+ - *(Optional)* **Serving & Inference** Hugging Face, Docker, etc., as we were taught in the initial classes.
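To make the orchestration concrete, here is a minimal sketch of how a four-LLM LangGraph graph could be wired. This is not the actual app.py: the state fields and node names (WritingState, guide, query_gen, augment, mentor) are illustrative assumptions, and the node bodies stand in for the real LLM calls.

```python
# Minimal sketch of the four-LLM LangGraph wiring (illustrative names, no real LLM calls).
from typing import List, TypedDict
from langgraph.graph import END, StateGraph

class WritingState(TypedDict):
    messages: List[str]   # conversation so far
    query: str            # retrieval query from the query-generation LLM
    prompt: str           # augmented writing prompt shown to the user
    feedback: str         # mentor feedback on what the user wrote

def guide(state: WritingState) -> dict:
    # Guide LLM: check in with the user and discover what they want to write about.
    return {"messages": state["messages"] + ["guide: let's find you a gentle prompt"]}

def query_gen(state: WritingState) -> dict:
    # Query-generation LLM: turn the conversation into a retrieval query.
    return {"query": "short beginner-friendly prompt about everyday life"}

def augment(state: WritingState) -> dict:
    # Prompt-augmentor LLM: personalize and soften the retrieved prompt.
    return {"prompt": "Write 100 words about a small kindness you noticed today."}

def mentor(state: WritingState) -> dict:
    # Mentor LLM: encouraging, habit-focused feedback.
    return {"feedback": "Great job hitting your word count today!"}

graph = StateGraph(WritingState)
for name, node in [("guide", guide), ("query_gen", query_gen),
                   ("augment", augment), ("mentor", mentor)]:
    graph.add_node(name, node)
graph.set_entry_point("guide")
graph.add_edge("guide", "query_gen")
graph.add_edge("query_gen", "augment")
graph.add_edge("augment", "mentor")
graph.add_edge("mentor", END)
app = graph.compile()
```

In the real app each node invokes its own LLM with its own system prompt and temperature, and a Chainlit handler streams the graph's output to the user; this skeleton only shows the control flow.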
66
 
67
  **Additional:**
68
  Where will you use an agent or agents? What will you use “agentic reasoning” for in your app?
69
 
70
+ My app is very agent-focused right now. Yes, the RAG is helpful, but even more so, the four LLMs play an important role in fulfilling the main promise of my app: drawing out my users' creative voice(s), one baby step at a time, using methods and knowledge backed by behavioral science, the stress response system, and nervous system 101 basics. The agentic flow plays a big role. The guide LLM has a back-and-forth with the user until it decides it has enough; it does dynamic reasoning and then hands off the crucial bits downstream. The mentor LLM currently gives feedback on what the user wrote, but in the future it will also run a back-and-forth reflection loop that evolves with the user.
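As a small, hedged illustration of that "decides it has enough" step: in LangGraph this kind of dynamic hand-off is typically expressed as a conditional edge. The heuristic below is a placeholder, not the app's actual logic; in practice the guide LLM itself would make the call.

```python
# Sketch of the guide's routing decision; the threshold heuristic is a stand-in.
from typing import List, TypedDict

class GuideState(TypedDict):
    messages: List[str]

def has_enough_context(state: GuideState) -> bool:
    # Placeholder: a couple of back-and-forth turns; the real guide LLM reasons about content.
    return len(state["messages"]) >= 4

def route_after_guide(state: GuideState) -> str:
    # The returned key picks the next node, e.g. via
    # graph.add_conditional_edges("guide", route_after_guide,
    #                             {"guide": "guide", "query_gen": "query_gen"})
    return "query_gen" if has_enough_context(state) else "guide"
```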
71
+
72
  ---
73
 
74
  ## TASK THREE – Dealing With the Data
75
 
76
+ **Deliverables:**
 
77
 
78
+ Describe all of your **data sources and external APIs**, and describe what you’ll use them for.
79
 
80
+ **Writing prompts collection** My RAG is quite simple: I pull prompts from a collection of beginner-friendly prompts. I was not happy with the prompt databases I found out there; I wanted slightly less contrived, simpler prompts. So I gathered them myself, used Gemini offline to help me curate them further, and eventually put them in a txt file with some useful metadata.
81
 
82
+ **External API** Tavily search, for when the user is feeling energized/overcome/triggered by a news item of the day and wants to write about that. The guide LLM then uses Tavily to grab some crucial tidbits about that item to turn into a prompt.
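For illustration, a hedged sketch of that Tavily hand-off using the tavily-python client; make_prompt_from_news is a hypothetical helper, not the app's code, and the real guide LLM would rewrite the tidbits rather than splice them into a template.

```python
# Sketch: turn a news topic the user cares about into raw material for a writing prompt.
import os
from tavily import TavilyClient  # pip install tavily-python

tavily = TavilyClient(api_key=os.environ["TAVILY_API_KEY"])

def make_prompt_from_news(topic: str) -> str:
    results = tavily.search(query=topic, max_results=3)
    tidbits = " ".join(r["content"][:200] for r in results["results"])
    # The guide LLM would turn these tidbits into a gentle, personal prompt;
    # here we simply drop them into a template.
    return (f"Today's news about '{topic}' moved you. Using these details: {tidbits} "
            f"Write 100 words about how it lands for you.")
```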
 
83
 
84
+ Describe the default **chunking strategy** that you will use. Why did you make this decision?
85
+
86
+ - OK, I did not use a fancy chunking strategy. Hear me out. I used a **"line-by-line document creation"** strategy: each non-empty line in my prompt data files is treated as a single, distinct document (or "chunk") to be embedded and stored in the Qdrant vector store. I created this data offline, largely myself (with some AI help), so I know how best to chunk it. For my final demo I will have more RAG, and I plan to use semantic chunking for that.
87
+ - So, why this simple chunking? Simplicity, best fit for my use case, and it is effective! (A minimal sketch of the approach follows below.)
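Here is that sketch of the line-per-document approach, under the assumption of a hypothetical prompts.txt file and a plain text payload; the qdrant-client and sentence-transformers calls are standard, but this is not the production utils code.

```python
# Sketch: each non-empty line of the prompts file becomes one embedded "document" in Qdrant.
from qdrant_client import QdrantClient
from qdrant_client.models import Distance, PointStruct, VectorParams
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("geetach/finetuned-prompt-retriever")  # MiniLM-based, 384-dim
client = QdrantClient(":memory:")  # in-memory store, as in the assignments
client.create_collection(
    collection_name="prompts",
    vectors_config=VectorParams(size=384, distance=Distance.COSINE),
)

with open("prompts.txt", encoding="utf-8") as f:       # hypothetical data file
    lines = [line.strip() for line in f if line.strip()]

vectors = model.encode(lines)
client.upsert(
    collection_name="prompts",
    points=[PointStruct(id=i, vector=vec.tolist(), payload={"text": text})
            for i, (vec, text) in enumerate(zip(vectors, lines))],
)

hits = client.search(collection_name="prompts",
                     query_vector=model.encode("a gentle prompt about rain").tolist(),
                     limit=3)
print([hit.payload["text"] for hit in hits])
```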
88
 
 
 
89
  - *(Optional)* Will you need specific data for any other part of your application? If so, explain.
90
+ Nope. For the demo project in a few weeks, yes. But not for the midterm.
91
 
92
  ---
93
 
94
  ## TASK FOUR – Build a Quick End-to-End Prototype
95
 
96
+ **Deliverables:**
 
 
 
97
 
98
  - Build an end-to-end prototype and deploy it to a Hugging Face Space (or other endpoint).
99
+ Done, link above. Here it is again: https://huggingface.co/spaces/geetach/WritingBuddyMidterm
100
 
101
  ---
102
 
103
  ## TASK FIVE – Creating a Golden Test Dataset
104
 
105
+ Generate a synthetic test dataset to baseline an initial evaluation with RAGAS.
 
106
 
107
+ OK, I tried the RAGAS knowledge graph approach as I did in many assignments. Oof. RAGAS's TestsetGenerator (specifically the generate_with_langchain_docs method) is designed for longer documents. My writing prompts are quite short, and the RAGAS generator threw an error because it expected more substantial text (at least 100 tokens). Trying to pad or combine the short prompts to meet this requirement would have added unnecessary complexity. Since my prompts are very simple and well structured, I had trouble getting the most out of RAGAS. So I did non-RAGAS test set generation: I tried a non-LLM method to evaluate a few candidate base models and choose a base model, and then I tried an LLM-based, RAGAS-like method to generate a test dataset to finetune that chosen base model and evaluate whether finetuning helped. It did.
 
108
 
109
  **Deliverables:**
110
 
111
+ **Non-LLM-based golden test dataset (choosing the base embedding model)**: Here is that code: https://github.com/agampu/MidTermFIneTuningRepo/blob/main/generate_prompt_eval_data.py
112
+ First, load the writing prompts from text files, extracting their tags and content. Then, for each prompt, heuristically create a set of structured keyword-based queries: combine genres, themes (derived from the tags), and random keywords from the prompt text itself to form positive examples. Then generate negative examples by pairing prompts with queries based on opposite genres. Finally, compute metrics to compare how different candidate models do. A compressed sketch of the idea follows, and then a comparison of two candidate models:
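Below is that compressed, hedged sketch; the toy prompts, the tag handling, and the Precision@1-only scoring are stand-ins for what generate_prompt_eval_data.py and prompt_evaluation.py actually do (negative examples are omitted here for brevity).

```python
# Sketch: heuristic (query, known-positive) pairs, then Precision@1 for candidate models.
import random
from sentence_transformers import SentenceTransformer, util

prompts = [  # toy stand-ins for the real prompt file entries
    {"text": "Write about a quiet morning in your kitchen.", "tags": ["slice-of-life", "calm"]},
    {"text": "A dragon refuses to guard treasure anymore.", "tags": ["fantasy", "humor"]},
]

def make_queries(prompt: dict) -> list:
    # Combine a tag with random keywords from the prompt text to form a positive query.
    words = random.sample(prompt["text"].split(), k=2)
    return [f"{tag} prompt about {' '.join(words)}" for tag in prompt["tags"]]

eval_set = [(query, p["text"]) for p in prompts for query in make_queries(p)]

def precision_at_1(model_name: str) -> float:
    model = SentenceTransformer(model_name)
    corpus = model.encode([p["text"] for p in prompts], convert_to_tensor=True)
    hits = 0
    for query, positive in eval_set:
        scores = util.cos_sim(model.encode(query, convert_to_tensor=True), corpus)[0]
        hits += int(prompts[int(scores.argmax())]["text"] == positive)
    return hits / len(eval_set)

print(precision_at_1("all-MiniLM-L6-v2"), precision_at_1("all-mpnet-base-v2"))
```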
 
 
 
 
 
113
 
114
+ - Results (code: https://github.com/agampu/MidTermFIneTuningRepo/blob/main/prompt_evaluation.py)
115
 
116
+ | Metric | all-MiniLM-L6-v2 | all-mpnet-base-v2 | Difference | Better Model |
117
+ |--------|------------------|-------------------|------------|--------------|
118
+ | Precision@1 | 0.767 | 0.800 | +0.033 | all-mpnet-base-v2 |
119
+ | Precision@3 | 0.578 | 0.578 | 0.000 | Tie |
120
+ | Precision@5 | 0.427 | 0.560 | +0.133 | all-mpnet-base-v2 |
121
+ | MRR | 0.133 | 0.133 | 0.000 | Tie |
122
+ | NDCG@5 | 0.133 | 0.133 | 0.000 | Tie |
123
+ | Context Precision | 0.370 | 0.329 | -0.041 | all-MiniLM-L6-v2 |
124
+ | Context Recall | 0.359 | 0.315 | -0.044 | all-MiniLM-L6-v2 |
125
+ | Semantic Similarity | 0.342 | 0.292 | -0.050 | all-MiniLM-L6-v2 |
126
+ | Faithfulness | 0.387 | 0.350 | -0.037 | all-MiniLM-L6-v2 |
127
+ | Answer Relevancy | 0.363 | 0.323 | -0.040 | all-MiniLM-L6-v2 |
128
+ | Faithfulness Impact | 0.007 | -0.011 | -0.018 | all-MiniLM-L6-v2 |
129
 
130
+ **Base Model Chosen** all-MiniLM-L6-v2 (it did better on the metrics that make sense for my simple use case).
 
131
 
132
+ **LLM based golden test dataset: finetune the base embedding model** https://github.com/agampu/MidTermFIneTuningRepo/blob/main/ragas_finetune_evaluate.py leverages an LLM (GPT-3.5-turbo) to create the core of the "golden" test dataset. For each original writing prompt, the LLM generates a few highly relevant search queries or keywords; this original prompt then becomes the perfect positive "context" for these LLM-generated queries, ensuring a strong, intentional link between a query and its ideal match. To effectively train the sentence transformer using `MultipleNegativesRankingLoss`, we then programmatically introduce negative examples. For every LLM-generated (query, positive_context) pair, the script randomly selects a few *other* distinct prompts from the overall dataset to serve as negative contexts. This teaches the model to differentiate between the correct prompt and incorrect ones for a given query.
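A minimal sketch of that training setup with sentence-transformers; the pairs below are placeholders for the GPT-3.5-generated (query, context) data, and the real ragas_finetune_evaluate.py also adds explicit negatives, computes the RAGAS-like metrics, and pushes to the Hub.

```python
# Sketch: finetune the retriever on (LLM-generated query, original prompt) pairs.
from sentence_transformers import InputExample, SentenceTransformer, losses
from torch.utils.data import DataLoader

pairs = [  # placeholders for the LLM-generated (query, positive context) pairs
    ("gentle morning writing prompt", "Write about a quiet morning in your kitchen."),
    ("funny fantasy prompt about dragons", "A dragon refuses to guard treasure anymore."),
]

model = SentenceTransformer("all-MiniLM-L6-v2")
train_examples = [InputExample(texts=[query, context]) for query, context in pairs]
train_loader = DataLoader(train_examples, shuffle=True, batch_size=16)

# MultipleNegativesRankingLoss treats the other in-batch contexts as negatives;
# explicit hard negatives can be supplied as a third text in each InputExample.
loss = losses.MultipleNegativesRankingLoss(model)
model.fit(train_objectives=[(train_loader, loss)], epochs=1, warmup_steps=10)

model.save("finetuned-prompt-retriever")
# model.push_to_hub("geetach/finetuned-prompt-retriever")  # requires Hugging Face auth
```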
133
+ - Results. I used all-MiniLM-L6-v2 (based on the non-LLM evaluation above) and then finetuned it with this LLM-generated test set; the results are below. I call this "simulated" since I use RAGAS-like metrics for evaluation and comparison. The script takes the query-context pairs and formats them into InputExample objects, both positive and negative, then finetunes the base model, calculates the RAGAS-like metrics, and pushes the finetuned model to Hugging Face. Not using RAGAS itself was a pragmatic choice to evaluate retriever performance without the complexity and cost of full RAGAS LLM-based evaluations for each query-context pair. My use case does not need it. But I did try it, and I will do that for the other RAG I will build for my final demo.
134
 
 
135
 
136
+ | Metric | Base Model | Finetuned Model | Absolute Improvement | Relative Improvement |
137
+ |--------|------------|-----------------|---------------------|--------------------|
138
+ | similarity | 0.680 | 0.701 | 0.021 | 3.1% |
139
+ | simulated_context_precision | 0.545 | 0.558 | 0.013 | 2.4% |
140
+ | simulated_context_recall | 0.613 | 0.632 | 0.019 | 3.1% |
141
+ | simulated_faithfulness | 0.481 | 0.491 | 0.010 | 2.0% |
142
+ | simulated_answer_relevancy | 0.578 | 0.594 | 0.016 | 2.8% |
143
+
144
+ **Conclusions** What conclusions can you draw about the performance and effectiveness of your pipeline with this information?
145
+
146
+ This was super helpful. I learned more about evaluation by sort of doing it by hand (Ok, I vibe coded most of this, but I did spend time understanding the overall logic and strategy!) and I used that to select my base model and then I used LLM-style simulated ragas to finetune the base model. Now, I feel like I know evaluation 101 :)
147
 
148
  ---
149
 
150
+ ## TASK SIX – Fine-Tune the Embedding Model
151
 
 
 
152
 
153
+ **Deliverables:**
154
+ https://huggingface.co/geetach/finetuned-prompt-retriever
155
+ Details of finetuning strategy, method, and results are in the above deliverable!
156
+ ---
157
+
158
+ ## TASK SEVEN – Final Performance Assessment
159
 
160
  **Deliverables:**
161
 
162
+ - How does the performance compare to your original RAG application?
163
+ Oh, I did many, many rounds of performance checks. Most of them were exhaustive and exhausting vibe checks, but it's incredible how much you can improve the app with vibe checks. I had to stop at one point, but I will keep at it.
164
+
165
+ - Test the fine-tuned embedding model using the RAGAS framework to quantify any improvements.
166
+ - Tables provided above in the golden test dataset answer. RAGAS did not work for me, so I created two workarounds. One was a custom eval with non-LLM testset generation to play around with different candidates for the base embedding model. Next, I did simulated RAGAS to finetune the embedding model. Details are in the TASK FIVE answer.
167
  - Provide results in a table.
168
+ - The two results tables: one that answers the question "which base model should we use" and another that answers the question "let's finetune the base model and evaluate - does it get better?" Both tables are included in my TASK FIVE answer above, with details on strategy, method, and conclusions.
169
  - Articulate the changes that you expect to make to your app in the second half of the course. How will you improve your application?
170
+ - A React + FastAPI web app.
171
+ - Improve usefulness to user by giving them more agency and varying kinds of support while maintaining a simple seamless non-annoying interface and interaction.
172
+ - A million improvements to my guide and mentor LLMs. Inject them with what I have learned over more than a decade of mindset coaching.
173
+ - Evaluation: I don't even know where to start! RAGAS evaluation of agents and agentic flows. How to isolate parts of my LangGraph components and evaluate them separately. I will add more RAG with different, more complex free-form data; then I will do classical RAGAS evaluation of those pipelines. And not to forget: vibe checks. Not to be scoffed at.