---
title: Prashnotri Question Generator
emoji: π
colorFrom: blue
colorTo: indigo
sdk: gradio
sdk_version: "5.29.1"
app_file: gradio_app.py
pinned: false
license: mit
---
# Prashnotri - AI Question Generator from PDF

Generate high-quality, context-aware questions from your study materials using Azure OpenAI and LangChain. This app provides a Gradio web interface for uploading PDFs and generating questions tailored to your needs.

---

## Features

- Upload a PDF and generate questions based on its content
- Customize by subject, class, topic, difficulty, Bloom's level, and more
- Uses Azure OpenAI for both embeddings and question generation
- Feedback-driven improvement system
- Ready for Hugging Face Spaces or local deployment

---

## Quick Start

### On Hugging Face Spaces

1. **Upload the repository** to your Hugging Face Space.
2. **Set the following environment variables** in your Space's Settings > Repository secrets:
   - `AZURE_OPENAI_API_KEY`: Your Azure OpenAI API key
   - `AZURE_OPENAI_ENDPOINT`: Your Azure OpenAI endpoint (e.g. `https://your-resource.openai.azure.com/`)
   - `AZURE_OPENAI_EMBEDDING_DEPLOYMENT`: Embedding deployment name
   - `AZURE_OPENAI_CHAT_DEPLOYMENT`: Chat deployment name
   - `AZURE_OPENAI_API_VERSION`: API version (e.g. `2023-05-15`)
3. The app will launch automatically. Use the web interface to upload a PDF and generate questions.

### Local Development

1. **Clone the repository** and install dependencies:
   ```bash
   pip install -r requirements.txt
   ```
2. **Create a `.env` file** in the project root:
   ```
   AZURE_OPENAI_API_KEY=your-azure-openai-key
   AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
   AZURE_OPENAI_EMBEDDING_DEPLOYMENT=your-embedding-deployment
   AZURE_OPENAI_CHAT_DEPLOYMENT=your-chat-deployment
   AZURE_OPENAI_API_VERSION=2023-05-15
   ```
3. **Run the app:**
   ```bash
   python gradio_app.py
   ```
4. Open your browser to [http://127.0.0.1:7860](http://127.0.0.1:7860)

---

## Environment Variables Explained

| Variable | Description |
|----------|-------------|
| `AZURE_OPENAI_API_KEY` | Your Azure OpenAI API key |
| `AZURE_OPENAI_ENDPOINT` | Your Azure OpenAI endpoint URL |
| `AZURE_OPENAI_EMBEDDING_DEPLOYMENT` | Embedding deployment name (for embeddings) |
| `AZURE_OPENAI_CHAT_DEPLOYMENT` | Chat deployment name (for chat/completions) |
| `AZURE_OPENAI_API_VERSION` | API version (e.g. `2023-05-15`) |

- `AZURE_OPENAI_EMBEDDING_DEPLOYMENT` and `AZURE_OPENAI_CHAT_DEPLOYMENT` are used as the `deployment_name` argument in the code.
- The same `AZURE_OPENAI_API_VERSION` is used for both embeddings and chat.
- **Do not** set `deployment_name` inside `model_kwargs`; it is passed as a top-level argument.

**Example usage in code:**

```python
import os

from langchain_openai import AzureChatOpenAI, AzureOpenAIEmbeddings

# Embedding client (used to index the uploaded PDF)
embeddings = AzureOpenAIEmbeddings(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    deployment_name=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"],
)

# Chat client (used to generate the questions)
llm = AzureChatOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version=os.environ["AZURE_OPENAI_API_VERSION"],
    deployment_name=os.environ["AZURE_OPENAI_CHAT_DEPLOYMENT"],
)
```

---

## Troubleshooting

- **Missing environment variables:** The app will not start if any required variable is missing (a quick shell check is sketched below).
- **502/Bad Gateway (on Spaces):** Ensure all secrets are set and valid.
- **SSL/endpoint errors:** Double-check your Azure endpoint URL and API version.
- **Deprecation warnings:** This app is up to date with the latest LangChain and OpenAI SDK requirements; if warnings appear, update the pinned dependencies.
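
If the app exits at startup complaining about configuration, one quick way to see which variables are missing is a shell loop like the one below. This is an illustrative check you run in your own terminal, not part of the app:

```bash
# List any required Azure OpenAI variables that are not set in the current shell.
for v in AZURE_OPENAI_API_KEY AZURE_OPENAI_ENDPOINT \
         AZURE_OPENAI_EMBEDDING_DEPLOYMENT AZURE_OPENAI_CHAT_DEPLOYMENT \
         AZURE_OPENAI_API_VERSION; do
  if [ -z "${!v}" ]; then
    echo "Missing: $v"
  fi
done
```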

---

## Support & Feedback

- For issues, open a GitHub issue or contact the maintainer.
- For deployment or feature requests, reach out via email or the project's contact form.

---

**Do not upload your `.env` file to any public repository.**

# Question Generator from PDF (LangChain)

## Running on Hugging Face Spaces

This project includes a Gradio interface for generating questions from PDF study materials using LangChain and Azure OpenAI. You can deploy it directly to [Hugging Face Spaces](https://huggingface.co/spaces).

### Requirements

- Python 3.8+
- The following dependencies (see `requirements.txt`):
  - gradio
  - huggingface_hub
  - openai
  - langchain, langchain-community, langchain-core, langchain-openai, langchain-text-splitters
  - faiss-cpu
  - pypdf
  - and others

### How to launch

1. Upload the repository to your Hugging Face Space.
2. Ensure your **Azure OpenAI** environment variables are set in the Space's secrets or environment variables:
   - `AZURE_OPENAI_API_KEY`
   - `AZURE_OPENAI_ENDPOINT`
   - `AZURE_OPENAI_EMBEDDING_DEPLOYMENT` (embedding deployment name, used as `deployment_name` in code)
   - `AZURE_OPENAI_CHAT_DEPLOYMENT` (chat deployment name, used as `deployment_name` in code)
   - `AZURE_OPENAI_API_VERSION` (API version for both embeddings and chat)
3. The Space will automatically run `gradio_app.py`.
4. Use the web interface to upload a PDF and enter topic details to generate questions.

---

# Prashnotri - AI Question Paper Generator

## Overview

Prashnotri is an AI-powered platform for generating high-quality, customized question papers. It leverages Azure OpenAI for question generation, supports feedback-driven improvement, and stores generated papers and user uploads securely on AWS S3. The app is built with a Python Flask backend (optional, for local/server deployment), a React frontend, and MongoDB for data storage.

## Features

- Generate question papers based on subject, class, topic, difficulty, Bloom's level, and intelligence type/subtype.
- Store and serve generated PDFs from AWS S3.
- User feedback system for continuous improvement.
- Secure, scalable deployment on AWS EC2 with Nginx and HTTPS.
- (Planned) Upload notes as PDF/images for context-aware question generation.

## Tech Stack

- **Backend:** Python (Flask, optional), Azure OpenAI API, MongoDB, AWS S3, Boto3
- **Frontend:** React (TypeScript)
- **Deployment:** Ubuntu EC2, Nginx, PM2 (or systemd), Certbot (Let's Encrypt SSL)
- **Other:** Python virtualenv, dotenv for secrets

## Directory Structure
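
The original layout listing was not preserved in this document; the tree below is an illustrative sketch based on files mentioned elsewhere in this README (`gradio_app.py`, `requirements.txt`, `dist/`). The Flask entry point name `app.py` is hypothetical.

```
prashnotri/
├── gradio_app.py      # Gradio interface (entry point on Hugging Face Spaces)
├── app.py             # Flask backend (hypothetical name; optional)
├── requirements.txt   # Python dependencies
├── .env               # local secrets (never committed)
├── dist/              # React frontend build output served by Nginx
└── README.md
```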

## Environment Variables (.env)
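
The original `.env` listing was lost here, so the block below is a sketch. The Azure OpenAI variables match the Configuration section of this README; the MongoDB and AWS variable names are illustrative placeholders, since the exact names used by the backend are not documented here.

```
AZURE_OPENAI_API_KEY=your-azure-openai-key
AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=your-embedding-deployment
AZURE_OPENAI_CHAT_DEPLOYMENT=your-chat-deployment
AZURE_OPENAI_API_VERSION=2023-05-15

# Only needed for the Flask backend; variable names are illustrative
MONGODB_URI=your-mongodb-connection-string
AWS_ACCESS_KEY_ID=your-aws-access-key
AWS_SECRET_ACCESS_KEY=your-aws-secret-key
S3_BUCKET_NAME=your-s3-bucket
```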

## Deployment Steps

### 1. Server Setup

- Launch an Ubuntu EC2 instance.
- Open ports 22 (SSH), 80 (HTTP), and 443 (HTTPS) in the security group (a CLI alternative is sketched below).
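
If you prefer the AWS CLI to the console, opening those ports looks roughly like this; the security group ID is a placeholder, and you should narrow the SSH CIDR to your own IP range:

```bash
# Allow SSH, HTTP, and HTTPS traffic to the instance's security group.
for port in 22 80 443; do
  aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port "$port" --cidr 0.0.0.0/0
done
```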

### 2. Install Dependencies
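
The original command listing was lost here; the commands below are a typical sequence for the stack named above (Python, Nginx, Node/PM2, Certbot) on Ubuntu, not the project's exact script:

```bash
sudo apt update && sudo apt upgrade -y
# Python toolchain, Nginx, and git
sudo apt install -y python3 python3-venv python3-pip nginx git
# Node.js and PM2 (used to keep the Flask app running)
sudo apt install -y nodejs npm
sudo npm install -g pm2
# Certbot for Let's Encrypt certificates (used in step 7)
sudo apt install -y certbot python3-certbot-nginx
```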

### 3. Clone and Set Up the Project
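
Again a sketch, assuming a standard virtualenv workflow; substitute your actual repository URL:

```bash
git clone <your-repo-url> prashnotri
cd prashnotri
# Create and activate an isolated Python environment
python3 -m venv venv
source venv/bin/activate
pip install -r requirements.txt
```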

### 4. Configure Environment

Create a `.env` file with your secrets (see above for the Azure OpenAI variables).

### 5. Frontend Build

Build your React frontend locally, then copy the build output to the `dist/` folder on your server (example commands below).
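
A sketch of that workflow, assuming a standard npm build and SSH access with your key file; the key name, server address, and paths are placeholders:

```bash
# On your local machine, from the frontend project directory
npm install
npm run build

# Copy the build output to the server's dist/ folder
scp -i your-key.pem -r dist/ ubuntu@your-server-ip:/home/ubuntu/prashnotri/dist/
```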

### 6. Nginx Setup

Create `/etc/nginx/sites-available/prashnotri.com`, then enable the site and restart Nginx (see the sketch below).
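
The original config block was lost; the one below is a minimal sketch that serves the React build from `dist/` and proxies API calls to a Flask app assumed to listen on `127.0.0.1:5000` (the port and the `/api/` prefix are assumptions, so adjust them to your backend):

```bash
# Write a minimal server block (adjust paths, port, and locations to your app)
sudo tee /etc/nginx/sites-available/prashnotri.com > /dev/null <<'EOF'
server {
    listen 80;
    server_name prashnotri.com www.prashnotri.com;

    root /home/ubuntu/prashnotri/dist;
    index index.html;

    location / {
        try_files $uri /index.html;
    }

    location /api/ {
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
EOF

# Enable the site, check the config, and restart Nginx
sudo ln -s /etc/nginx/sites-available/prashnotri.com /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl restart nginx
```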

### 7. SSL with Certbot
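
With the Nginx plugin installed (step 2), obtaining and installing a certificate is typically a single command, and Certbot configures automatic renewal:

```bash
sudo certbot --nginx -d prashnotri.com -d www.prashnotri.com
# Optional: confirm the renewal timer works
sudo certbot renew --dry-run
```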

### 8. Run the Flask App with PM2 (only if using the Flask backend)
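
A sketch, assuming the Flask entry point is `app.py` (a hypothetical name) run from the project's virtualenv; the process name `prashnotri` matches the `pm2 logs prashnotri` command in Troubleshooting:

```bash
cd /home/ubuntu/prashnotri
# Start the Flask app under PM2 using the virtualenv's Python
pm2 start app.py --name prashnotri --interpreter ./venv/bin/python
# Persist the process list and re-create it on reboot
pm2 save
pm2 startup
```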

## Usage

1. **Access the app:** Go to https://prashnotri.com in your browser.
2. **Fill in the form:** Subject, class, language, topics, question type, difficulty, Bloom's level, intelligence type, and subtype. Add additional instructions if needed.
3. **Submit:** The app generates a question paper and provides a PDF download link.
4. **Feedback:** Users can submit feedback on generated papers for future improvement.

## Intelligence Types & Subtypes

| Intelligence | Refined Subtypes (Examples) |
|--------------|-----------------------------|
| Logical | Pattern solving, Deductive reasoning, Coding logic, Data interpretation |
| Linguistic | Storytelling, Persuasive argument, Vocabulary building, Creative writing |
| Kinesthetic | Gross motor (e.g., sports), Fine motor (e.g., drawing), Simulations |
| Spatial | 3D visualization, Map reading, Mental rotation, Blueprint understanding |
| Musical | Rhythm patterns, Composition, Tone recognition |
| Interpersonal | Negotiation skills, Group collaboration, Empathy exercises |
| Intrapersonal | Self-assessment, Reflective writing, Goal setting |
| Naturalistic | Classification tasks, Field observations, Environmental problem-solving |

## Prompt Engineering

- Prompts are detailed, with strict requirements for question quality, format, and explanation.
- Example and "Do Not" sections are included for clarity.
- Feedback from users is incorporated into future prompt generations.

## Feedback & Improvement Plan

- Collect user feedback on question quality.
- Iteratively refine prompts and add features (PDF upload, analytics, etc.).
- Monitor logs and performance for stability.
- Prepare for public launch after 1-2 months of testing and improvement.

## Future Features

- PDF/image upload for context-aware question generation.
- Analytics dashboard for usage and feedback.
- Enhanced user management and authentication.
- More advanced feedback integration and prompt tuning.

## Troubleshooting

- **502 Bad Gateway:** Ensure the Flask app is running and PM2 is managing it.
- **SSL issues:** Check the Certbot and Nginx configuration.
- **App not running after reboot:** Make sure `pm2 save` and `pm2 startup` were run.
- **Logs:**
  - Flask: `pm2 logs prashnotri`
  - Nginx: `sudo tail -n 50 /var/log/nginx/error.log`

## Contact & Support

- For issues, open a GitHub issue or contact the maintainer.
- For deployment or feature requests, reach out via email or the project's contact form.

## Configuration

This app requires several Azure OpenAI environment variables to be set for proper operation. You can configure these in two ways:

### 1. Local Development (.env file)

Create a file named `.env` in your project root with the following content (replace values with your actual Azure OpenAI credentials):

```
AZURE_OPENAI_API_KEY=your-azure-openai-key
AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=your-embedding-deployment
AZURE_OPENAI_CHAT_DEPLOYMENT=your-chat-deployment
AZURE_OPENAI_API_VERSION=2023-05-15
```

### 2. Hugging Face Spaces (Repository Secrets)

Go to your Space's **Settings** > **Repository secrets** and add the following secrets:

- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_ENDPOINT`
- `AZURE_OPENAI_EMBEDDING_DEPLOYMENT` (embedding deployment name)
- `AZURE_OPENAI_CHAT_DEPLOYMENT` (chat deployment name)
- `AZURE_OPENAI_API_VERSION`

**Note:** The code passes `deployment_name` as a top-level argument to `AzureOpenAIEmbeddings` and `AzureChatOpenAI`, matching the latest LangChain and OpenAI SDK requirements. Do not set `deployment_name` inside `model_kwargs`.

**Do not upload your `.env` file to the repository for security reasons.**