iitmbs24f committed on
Commit c513170 · verified
1 Parent(s): 4a329ac

Upload 15 files

Files changed (10)
  1. .gitignore +61 -0
  2. Dockerfile +2 -2
  3. README.md +272 -11
  4. docker-compose.yml +2 -2
  5. main.py +5 -3
  6. requirements.txt +1 -0
  7. server.err +1 -0
  8. server.out +0 -0
  9. start.py +101 -0
  10. test_setup.py +92 -0
.gitignore ADDED
@@ -0,0 +1,61 @@
+ # Environment variables
+ .env
+ .env.local
+ .env.production
+
+ # Python
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # Virtual environments
+ venv/
+ env/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # IDE
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+ *~
+
+ # Logs
+ logs/
+ *.log
+
+ # Database
+ *.db
+ *.sqlite
+ *.sqlite3
+
+ # OS
+ .DS_Store
+ Thumbs.db
+
+ # Docker
+ .dockerignore
+
+ # Temporary files
+ *.tmp
+ *.temp
Dockerfile CHANGED
@@ -27,8 +27,8 @@ USER app
  EXPOSE 8000

  # Health check
- HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
-     CMD python -c "import requests; requests.get('http://localhost:8000/health')"
+ HEALTHCHECK --interval=30s --timeout=10s --start-period=5s --retries=3 \
+     CMD python -c "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"

  # Start command
  CMD ["python", "main.py"]
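The switch from `requests` to `urllib.request` means the image no longer needs the `requests` package installed just for the health probe. A minimal sketch of what the HEALTHCHECK command does, pulled out as a helper for local testing (the URL and timeout mirror the Dockerfile; the `probe` helper itself is illustrative):

```python
# Sketch of the stdlib health probe the HEALTHCHECK runs.
# URL mirrors the Dockerfile; probe() is an illustrative wrapper.
import urllib.request


def probe(url: str = "http://localhost:8000/health", timeout: float = 5.0) -> bool:
    """Return True only if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # URLError/HTTPError both subclass OSError
        return False
```

In the container, an uncaught exception from `urlopen` exits non-zero, which is what Docker counts as an unhealthy check.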
README.md CHANGED
@@ -1,11 +1,272 @@
- ---
- title: Prj1
- emoji: 👀
- colorFrom: purple
- colorTo: indigo
- sdk: docker
- pinned: false
- license: mit
- ---
-
- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+ # LLM Code Deployment
+
+ A full-stack backend system that automates app building and deployment using LLMs and GitHub Pages. The service receives a task brief, generates a minimal working web application through an OpenAI-compatible LLM API (OpenRouter by default), and automatically deploys it to GitHub Pages.
+
+ ## 🚀 Features
+
+ - **Automated App Generation**: Uses an OpenAI-compatible LLM API to generate complete HTML/CSS/JS applications
+ - **GitHub Integration**: Automatically creates repositories and enables GitHub Pages
+ - **Round-based Development**: Supports an initial creation round and later revision rounds
+ - **Evaluation API Integration**: Posts deployment metadata to external evaluation systems
+ - **Production-ready**: Built with FastAPI, including proper error handling and logging
+ - **Security**: Shared-secret authentication and input validation
+
+ ## 🏗️ Architecture
+
+ ```
+ ┌─────────────────┐      ┌─────────────────┐      ┌─────────────────┐
+ │     Client      │      │     FastAPI     │      │     LLM API     │
+ │     Request     │───▶│     Backend     │───▶│   (OpenRouter)  │
+ └─────────────────┘      └─────────────────┘      └─────────────────┘
+
+
+ ┌─────────────────┐      ┌─────────────────┐
+ │     GitHub      │      │    Evaluation   │
+ │       API       │───▶│       API       │
+ └─────────────────┘      └─────────────────┘
+ ```
+
+ ## 📋 Prerequisites
+
+ - Python 3.8+
+ - OpenRouter API key (any OpenAI-compatible provider works)
+ - GitHub Personal Access Token
+ - GitHub account with Pages enabled
+
+ ## 🛠️ Installation
+
+ 1. **Clone the repository**
+    ```bash
+    git clone <repository-url>
+    cd llm-code-deployment
+    ```
+
+ 2. **Create a virtual environment**
+    ```bash
+    python -m venv venv
+    source venv/bin/activate  # On Windows: venv\Scripts\activate
+    ```
+
+ 3. **Install dependencies**
+    ```bash
+    pip install -r requirements.txt
+    ```
+
+ 4. **Set up environment variables**
+    ```bash
+    cp env.template .env
+    ```
+
+    Edit `.env` with your actual values:
+    ```env
+    # OpenAI/OpenRouter Configuration
+    OPENAI_API_KEY=your_openrouter_api_key_here
+    OPENAI_MODEL=gpt-4o-mini
+
+    # GitHub Configuration
+    GITHUB_TOKEN=your_github_personal_access_token_here
+    GITHUB_USERNAME=your_github_username_here
+
+    # Security
+    SHARED_SECRET=your_shared_secret_here
+
+    # Server Configuration
+    HOST=0.0.0.0
+    PORT=8000
+    DEBUG=false
+    ```
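Before starting the server it helps to confirm these variables are actually visible to Python. A small stdlib-only check (variable names taken from the template above):

```python
# Quick sanity check that the expected variables are set in the
# environment; names are taken from the env template above.
import os

required = ["OPENAI_API_KEY", "GITHUB_TOKEN", "GITHUB_USERNAME", "SHARED_SECRET"]
missing = [name for name in required if not os.getenv(name)]
print("missing:", missing or "none")
```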
+
+ ## 🔧 Configuration
+
+ ### OpenRouter Setup
+ 1. Get an API key from `https://openrouter.ai/`
+ 2. Set `OPENAI_API_KEY` in your `.env` file
+ 3. Choose your model (default: `gpt-4o-mini`)
+
+ ### GitHub Setup
+ 1. Create a Personal Access Token:
+    - Go to GitHub Settings → Developer settings → Personal access tokens
+    - Generate a new token with the `repo` scope (for classic tokens this also covers Pages administration)
+ 2. Set `GITHUB_TOKEN` and `GITHUB_USERNAME` in your `.env` file
+
+ ### Security
+ - Set a strong `SHARED_SECRET` for API authentication
+ - Configure CORS origins appropriately for production
+
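A strong `SHARED_SECRET` can be generated with the standard library; one option:

```python
# Generate a random 64-character hex string suitable for SHARED_SECRET.
import secrets

print(secrets.token_hex(32))
```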
+ ## 🚀 Running the Application
+
+ ### Development Mode
+ ```bash
+ python main.py
+ ```
+
+ ### Production Mode
+ ```bash
+ uvicorn main:app --host 0.0.0.0 --port 8000
+ ```
+
+ The API will be available at:
+ - **API Documentation**: http://localhost:8000/docs
+ - **Alternative Docs**: http://localhost:8000/redoc
+ - **Health Check**: http://localhost:8000/health
+
+ ## 📡 API Endpoints
+
+ ### POST `/api/request`
+ Main endpoint for processing app generation requests.
+
+ **Request Body:**
+ ```json
+ {
+   "email": "[email protected]",
+   "secret": "your_shared_secret",
+   "task": "Create a todo app",
+   "round": 1,
+   "nonce": "unique_request_id",
+   "brief": "A simple todo application with add, edit, delete functionality",
+   "evaluation_url": "https://evaluation-api.example.com/notify",
+   "attachments": []
+ }
+ ```
+
+ **Response:**
+ ```json
+ {
+   "success": true,
+   "message": "Application generated and deployed successfully",
+   "deployment": {
+     "repo_name": "llm-app-todo-20241201120000",
+     "repo_url": "https://github.com/username/llm-app-todo-20241201120000",
+     "commit_sha": "abc123def456",
+     "pages_url": "https://username.github.io/llm-app-todo-20241201120000"
+   },
+   "evaluation_notification": {
+     "sent": true,
+     "status_code": 200
+   },
+   "metadata": {
+     "round": 1,
+     "nonce": "unique_request_id",
+     "timestamp": "2024-12-01T12:00:00Z"
+   }
+ }
+ ```
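For a quick manual test of this endpoint, the call can be assembled with the standard library alone. All values below are placeholders mirroring the example request body above:

```python
# Build (but do not send) a POST /api/request call; every value is a
# placeholder mirroring the example request body in the README.
import json
import urllib.request


def build_request(base_url: str, payload: dict) -> urllib.request.Request:
    return urllib.request.Request(
        f"{base_url}/api/request",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )


payload = {
    "email": "user@example.com",      # placeholder
    "secret": "your_shared_secret",   # placeholder
    "task": "Create a todo app",
    "round": 1,
    "nonce": "unique_request_id",
    "brief": "A simple todo application with add, edit, delete functionality",
    "evaluation_url": "https://evaluation-api.example.com/notify",
    "attachments": [],
}
req = build_request("http://localhost:8000", payload)
# urllib.request.urlopen(req) sends it once the server is up.
```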
+
+ ### POST `/api/evaluate`
+ Endpoint for receiving evaluation data.
+
+ **Request Body:**
+ ```json
+ {
+   "email": "[email protected]",
+   "task": "Create a todo app",
+   "round": 1,
+   "nonce": "unique_request_id",
+   "evaluation_data": {
+     "score": 85,
+     "feedback": "Good functionality, needs better styling"
+   }
+ }
+ ```
+
+ ### GET `/health`
+ Health check endpoint.
+
+ ## 🔄 Workflow
+
+ 1. **Receive Request**: The API validates the secret and request format
+ 2. **Generate App**: An OpenRouter/OpenAI-compatible API creates HTML/CSS/JS from the task brief
+ 3. **Create Repository**: A GitHub repo is created with the generated files
+ 4. **Enable Pages**: GitHub Pages is automatically enabled
+ 5. **Notify Evaluation**: Deployment metadata is sent to the evaluation API
+ 6. **Return Response**: The client receives repo URLs and deployment info
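The repository created in step 3 carries a timestamped name like `llm-app-todo-20241201120000` in the example response. A sketch of such a generator; the exact scheme used by `github_helper.py` is assumed here, not confirmed:

```python
# Hypothetical generator for names like llm-app-todo-20241201120000.
# The real naming logic lives in github_helper.py and may differ.
import re
from datetime import datetime, timezone
from typing import Optional


def repo_name(task: str, now: Optional[datetime] = None) -> str:
    # Lowercase the task and collapse non-alphanumerics into hyphens.
    slug = re.sub(r"[^a-z0-9]+", "-", task.lower()).strip("-")
    stamp = (now or datetime.now(timezone.utc)).strftime("%Y%m%d%H%M%S")
    return f"llm-app-{slug}-{stamp}"
```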
+
+ ## 🔧 Development
+
+ ### Project Structure
+ ```
+ llm-code-deployment/
+ ├── main.py            # FastAPI application
+ ├── llm_helper.py      # LLM (OpenRouter) API integration
+ ├── github_helper.py   # GitHub API integration
+ ├── deploy_helper.py   # Evaluation API communication
+ ├── requirements.txt   # Python dependencies
+ ├── env.template       # Environment variables template
+ └── README.md          # This file
+ ```
+
+ ### Adding New Features
+
+ 1. **New LLM Models**: Modify `llm_helper.py` to support additional models or providers
+ 2. **Database Integration**: Add SQLAlchemy models for request logging
+ 3. **Authentication**: Implement JWT or OAuth for stronger security
+ 4. **Monitoring**: Add metrics and richer health checks
+
+ ### Testing
+
+ ```bash
+ # Install test dependencies
+ pip install pytest pytest-asyncio httpx
+
+ # Run tests
+ pytest
+ ```
+
+ ## 🚨 Error Handling
+
+ The system includes comprehensive error handling:
+
+ - **Validation Errors**: Invalid request format or missing fields
+ - **Authentication Errors**: Invalid shared secret
+ - **LLM Errors**: API failures or invalid responses
+ - **GitHub Errors**: Repository creation or Pages setup failures
+ - **Network Errors**: Evaluation API communication failures
+
+ All errors are logged and return appropriate HTTP status codes.
+
+ ## 🔒 Security Considerations
+
+ - **Shared Secret**: Required for all API requests
+ - **Input Validation**: All inputs are validated with Pydantic
+ - **CORS**: Configurable CORS settings for production
+ - **Rate Limiting**: Consider adding rate limiting for production
+ - **HTTPS**: Use HTTPS in production environments
+
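When the shared secret is compared to the expected value, a constant-time comparison avoids leaking information through response timing. A stdlib sketch (the function name is illustrative, not from the project code):

```python
# Constant-time shared-secret comparison; hmac.compare_digest does not
# short-circuit on the first differing byte.
import hmac


def secret_matches(provided: str, expected: str) -> bool:
    return hmac.compare_digest(provided.encode("utf-8"), expected.encode("utf-8"))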
+ ## 📊 Monitoring and Logging
+
+ The application includes structured logging:
+
+ - **Request Logging**: All API requests are logged
+ - **Error Logging**: Detailed error information
+ - **Deployment Logging**: GitHub operations and results
+ - **Performance Logging**: Request processing times
+
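A minimal logging setup of the kind described, using only the standard library (the format string and logger name here are illustrative, not taken from the project code):

```python
# Illustrative logging setup; format and logger name are assumptions,
# not copied from main.py.
import logging

logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("llm-code-deployment")
logger.info("request received: round=%s nonce=%s", 1, "unique_request_id")
```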
+ ## 🤝 Contributing
+
+ 1. Fork the repository
+ 2. Create a feature branch
+ 3. Make your changes
+ 4. Add tests if applicable
+ 5. Submit a pull request
+
+ ## 📄 License
+
+ This project is licensed under the MIT License - see the LICENSE file for details.
+
+ ## 🆘 Support
+
+ For issues and questions:
+
+ 1. Check the logs for error details
+ 2. Verify environment variables are set correctly
+ 3. Ensure the GitHub token has the required permissions
+ 4. Check your OpenRouter API key and quota
+
+ ## 🔮 Future Enhancements
+
+ - [ ] Database integration for request history
+ - [ ] Webhook support for deployment status
+ - [ ] Multiple LLM provider support
+ - [ ] Custom domain support for GitHub Pages
+ - [ ] Request queuing for high-volume usage
+ - [ ] Metrics and analytics dashboard
docker-compose.yml CHANGED
@@ -18,8 +18,8 @@ services:
        - ./logs:/app/logs
      restart: unless-stopped
      healthcheck:
-       test: ["CMD", "python", "-c", "import requests; requests.get('http://localhost:8000/health')"]
+       test: ["CMD", "python", "-c", "import urllib.request; urllib.request.urlopen('http://localhost:8000/health')"]
        interval: 30s
        timeout: 10s
        retries: 3
-       start_period: 40s
+       start_period: 10s
main.py CHANGED
@@ -11,7 +11,7 @@ from datetime import datetime
  from fastapi import FastAPI, HTTPException, status
  from fastapi.middleware.cors import CORSMiddleware
  from fastapi.responses import JSONResponse
- from pydantic import BaseModel, Field, validator
+ from pydantic import BaseModel, Field, field_validator
  from dotenv import load_dotenv

  from llm_helper import LLMHelper, AppGenerationRequest
@@ -87,7 +87,8 @@ class TaskRequest(BaseModel):
      attachments: Optional[list] = Field(default=None, description="Additional attachments or context")
      return_code: Optional[bool] = Field(default=False, description="If true, include generated code in response")

-     @validator('secret', always=True)
+     @field_validator('secret', mode='before')
+     @classmethod
      def validate_secret(cls, v):
          # Soft validation: if SHARED_SECRET is set and mismatched, log but do not block
          expected_secret = os.getenv("SHARED_SECRET")
@@ -98,7 +99,8 @@ class TaskRequest(BaseModel):
              pass
          return v

-     @validator('evaluation_url', always=True)
+     @field_validator('evaluation_url', mode='before')
+     @classmethod
      def validate_evaluation_url(cls, v):
          # Soft validation: return as-is; invalid URLs will skip notification later
          return v
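Both migrated validators keep the original "soft" semantics: a bad value is logged or passed through rather than rejected. The secret check, extracted as a plain function for illustration (the function name and logging call are not from the project code):

```python
# Standalone sketch of the soft secret check: a mismatch is logged but
# the request is never blocked. Function name is illustrative.
import logging
import os

log = logging.getLogger(__name__)


def soft_validate_secret(v: str) -> str:
    expected = os.getenv("SHARED_SECRET")
    if expected and v != expected:
        log.warning("Shared secret mismatch; continuing (soft validation)")
    return v
```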
requirements.txt CHANGED
@@ -10,3 +10,4 @@ jinja2==3.1.2
  aiofiles==23.2.1
  sqlalchemy==2.0.23
  alembic==1.13.1
+ requests==2.32.5
server.err ADDED
@@ -0,0 +1 @@
+ ERROR: Error loading ASGI app. Attribute "app" not found in module "main".
server.out ADDED
File without changes
start.py ADDED
@@ -0,0 +1,101 @@
+ """
+ Startup script for LLM Code Deployment
+ """
+
+ import os
+ import sys
+ import subprocess
+ from pathlib import Path
+
+ def check_python_version():
+     """Check if Python version is compatible"""
+     if sys.version_info < (3, 8):
+         print("❌ Python 3.8 or higher is required")
+         print(f"Current version: {sys.version}")
+         return False
+     print(f"✅ Python version: {sys.version.split()[0]}")
+     return True
+
+ def check_env_file():
+     """Check if .env file exists"""
+     env_file = Path(".env")
+     if not env_file.exists():
+         print("❌ .env file not found")
+         print("Please copy env.template to .env and configure your settings")
+         return False
+     print("✅ .env file found")
+     return True
+
+ def install_dependencies():
+     """Install required dependencies"""
+     try:
+         print("📦 Installing dependencies...")
+         subprocess.run([sys.executable, "-m", "pip", "install", "-r", "requirements.txt"],
+                        check=True, capture_output=True)
+         print("✅ Dependencies installed successfully")
+         return True
+     except subprocess.CalledProcessError as e:
+         print(f"❌ Failed to install dependencies: {e}")
+         return False
+
+ def run_setup_test():
+     """Run the setup test"""
+     try:
+         print("🧪 Running setup test...")
+         result = subprocess.run([sys.executable, "test_setup.py"],
+                                 capture_output=True, text=True)
+         if result.returncode == 0:
+             print("✅ Setup test passed")
+             return True
+         else:
+             print("❌ Setup test failed:")
+             print(result.stdout)
+             print(result.stderr)
+             return False
+     except Exception as e:
+         print(f"❌ Setup test error: {e}")
+         return False
+
+ def start_server():
+     """Start the FastAPI server"""
+     try:
+         print("🚀 Starting LLM Code Deployment server...")
+         print("Server will be available at: http://localhost:8000")
+         print("API documentation: http://localhost:8000/docs")
+         print("Press Ctrl+C to stop the server")
+         print("-" * 50)
+
+         subprocess.run([sys.executable, "main.py"])
+     except KeyboardInterrupt:
+         print("\n👋 Server stopped")
+     except Exception as e:
+         print(f"❌ Server error: {e}")
+
+ def main():
+     """Main startup function"""
+     print("🚀 LLM Code Deployment Startup")
+     print("=" * 40)
+
+     # Check prerequisites
+     if not check_python_version():
+         sys.exit(1)
+
+     if not check_env_file():
+         sys.exit(1)
+
+     # Install dependencies
+     if not install_dependencies():
+         sys.exit(1)
+
+     # Run setup test
+     if not run_setup_test():
+         print("\n⚠️ Setup test failed, but continuing...")
+         print("You may need to configure your .env file properly")
+
+     print("\n" + "=" * 40)
+
+     # Start server
+     start_server()
+
+ if __name__ == "__main__":
+     main()
test_setup.py ADDED
@@ -0,0 +1,92 @@
+ """
+ Simple test script to verify the LLM Code Deployment setup
+ """
+
+ import os
+ import sys
+ from dotenv import load_dotenv
+
+ def test_environment_variables():
+     """Test if all required environment variables are set"""
+     load_dotenv()
+
+     required_vars = [
+         "OPENAI_API_KEY",
+         "GITHUB_TOKEN",
+         "GITHUB_USERNAME",
+         "SHARED_SECRET"
+     ]
+
+     missing_vars = []
+     for var in required_vars:
+         if not os.getenv(var):
+             missing_vars.append(var)
+
+     if missing_vars:
+         print(f"Missing environment variables: {', '.join(missing_vars)}")
+         print("Please set these in your .env file")
+         return False
+     else:
+         print("All required environment variables are set")
+         return True
+
+ def test_imports():
+     """Test if all required modules can be imported"""
+     try:
+         import fastapi
+         import uvicorn
+         import openai  # OpenAI-compatible client, also used for OpenRouter
+         import github
+         import httpx
+         import pydantic
+         print("All required packages are installed")
+         return True
+     except ImportError as e:
+         print(f"Missing package: {e}")
+         print("Please run: pip install -r requirements.txt")
+         return False
+
+ def test_module_imports():
+     """Test if our custom modules can be imported"""
+     try:
+         from llm_helper import LLMHelper
+         from github_helper import GitHubHelper
+         from deploy_helper import DeployHelper
+         print("All custom modules can be imported")
+         return True
+     except ImportError as e:
+         print(f"Error importing custom modules: {e}")
+         return False
+
+ def main():
+     """Run all tests"""
+     print("Testing LLM Code Deployment Setup")
+     print("=" * 40)
+
+     tests = [
+         test_imports,
+         test_environment_variables,
+         test_module_imports
+     ]
+
+     passed = 0
+     total = len(tests)
+
+     for test in tests:
+         if test():
+             passed += 1
+         print()
+
+     print("=" * 40)
+     if passed == total:
+         print(f"All tests passed! ({passed}/{total})")
+         print("Your LLM Code Deployment system is ready to use!")
+         print("\nTo start the server, run:")
+         print("python main.py")
+     else:
+         print(f"{passed}/{total} tests passed")
+         print("Please fix the issues above before running the server")
+         sys.exit(1)
+
+ if __name__ == "__main__":
+     main()