Canstralian committed on
Commit 186277d · verified · 1 Parent(s): 2c300f6

Update README.md

Files changed (1)
  1. README.md +86 -1
README.md CHANGED
@@ -9,4 +9,89 @@ app_file: app.py
  pinned: false
  ---
 
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ To deploy your project on Streamlit, you'll need to create two essential files: `app.py` and `requirements.txt`.
+
+ **1. `app.py`**
+
+ This Python script serves as the main entry point for your Streamlit application. It should include the necessary imports and define the application's layout and functionality. Here's an example based on your project:
+
+ ```python
+ import streamlit as st
+ from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
+ import torch
+
+ # Load the tokenizer and both models
+ tokenizer = AutoTokenizer.from_pretrained("cssupport/t5-small-awesome-text-to-sql")
+ original_model = AutoModelForSeq2SeqLM.from_pretrained("cssupport/t5-small-awesome-text-to-sql", torch_dtype=torch.bfloat16)
+ ft_model = AutoModelForSeq2SeqLM.from_pretrained("daljeetsingh/sql_ft_t5small_kag", torch_dtype=torch.bfloat16)
+
+ # Move the models to the GPU if one is available, otherwise run on CPU
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
+ original_model.to(device)
+ ft_model.to(device)
+
+ # Streamlit app layout
+ st.title("SQL Generation with T5 Models")
+
+ # Input text box
+ input_text = st.text_area("Enter your query:", height=150)
+
+ # Generate button
+ if st.button("Generate SQL"):
+     if input_text:
+         # Tokenize the input query
+         inputs = tokenizer(input_text, return_tensors='pt').to(device)
+
+         # Generate SQL with both models
+         with torch.no_grad():
+             original_sql = tokenizer.decode(
+                 original_model.generate(inputs["input_ids"], max_new_tokens=200)[0],
+                 skip_special_tokens=True
+             )
+             ft_sql = tokenizer.decode(
+                 ft_model.generate(inputs["input_ids"], max_new_tokens=200)[0],
+                 skip_special_tokens=True
+             )
+
+         # Display results
+         st.subheader("Original Model Output")
+         st.write(original_sql)
+         st.subheader("Fine-Tuned Model Output")
+         st.write(ft_sql)
+     else:
+         st.warning("Please enter a query to generate SQL.")
+ ```
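Streamlit re-executes `app.py` from top to bottom on every interaction, so the example above reloads both models on each rerun. If you are on a recent Streamlit release (1.18 or later), an optional refinement is to wrap the model loading in `st.cache_resource` so it happens only once per process; a minimal sketch of that variation:

```python
import streamlit as st
import torch
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

@st.cache_resource
def load_models():
    # Runs once; subsequent reruns reuse the cached tokenizer and models.
    tokenizer = AutoTokenizer.from_pretrained("cssupport/t5-small-awesome-text-to-sql")
    original_model = AutoModelForSeq2SeqLM.from_pretrained(
        "cssupport/t5-small-awesome-text-to-sql", torch_dtype=torch.bfloat16
    )
    ft_model = AutoModelForSeq2SeqLM.from_pretrained(
        "daljeetsingh/sql_ft_t5small_kag", torch_dtype=torch.bfloat16
    )
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    return tokenizer, original_model.to(device), ft_model.to(device), device

tokenizer, original_model, ft_model, device = load_models()
# The rest of the app (title, text area, generate button) stays the same.
```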
+
+ **2. `requirements.txt`**
+
+ This file lists all the Python packages your application depends on. Streamlit will use this file to install the necessary packages during deployment. Here's an example:
+
+ ```
+ streamlit
+ transformers
+ torch
+ ```
+
+ Ensure that the versions of the packages are compatible with each other and with your code. You can specify exact versions if needed, for example:
+
+ ```
+ streamlit==1.15.2
+ transformers==4.11.3
+ torch==1.10.0
+ ```
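If you are unsure which versions are currently installed in your environment, one quick way to check before pinning is:

```
pip show streamlit transformers torch
```

Each entry in the output includes a `Version:` line you can copy into `requirements.txt`.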
+
+ To generate a `requirements.txt` file with the exact versions of the packages installed in your environment, you can use the following command:
+
+ ```
+ pip freeze > requirements.txt
+ ```
+
+ This command writes out every package installed in your environment along with its version; trim the result down to the packages your app actually needs before committing it.
+
+ For more information on creating and deploying Streamlit apps, refer to the official Streamlit documentation:
+
+ - [Create an app](https://docs.streamlit.io/get-started/tutorials/create-an-app)
+ - [App dependencies for your Community Cloud app](https://docs.streamlit.io/deploy/streamlit-community-cloud/deploy-your-app/app-dependencies)
+
+ By setting up these two files correctly, you can deploy your SQL generation application on Streamlit, allowing users to input queries and receive generated SQL statements from both the original and fine-tuned models.
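Before deploying, you can test the app locally from the project directory:

```
streamlit run app.py
```

Streamlit will start a local server (by default at http://localhost:8501) where you can try out the query box and compare both model outputs.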