# AI Support Chat

A real-time AI-powered support chat application built with Django, WebSockets, and OpenAI integration.
## Features

- Real-time chat functionality using WebSockets
- AI-powered responses using OpenAI's API
- JWT-based authentication
- Asynchronous task processing with Celery
- Redis for message broker and caching
- Docker containerization for easy deployment
- RESTful API endpoints
## Tech Stack

- Backend Framework: Django 5.2
- API Framework: Django REST Framework
- Authentication: JWT (JSON Web Tokens)
- Real-time Communication: Django Channels
- Task Queue: Celery
- Message Broker: Redis
- AI Integration: OpenAI API
- Containerization: Docker & Docker Compose
- Database: SQLite (configurable for production)
## Prerequisites

- Python 3.11+
- Docker and Docker Compose
- OpenAI API key
## Environment Variables

Create a `.env` file in the root directory with the following variables:
```env
# Django settings
DEBUG=True
SECRET_KEY=your-secret-key-here
ALLOWED_HOSTS=localhost,127.0.0.1

# Database settings
DB_ENGINE=django.db.backends.sqlite3
DB_NAME=db.sqlite3

# Redis settings
REDIS_HOST=redis
REDIS_PORT=6379

# Celery settings
CELERY_BROKER_URL=redis://redis:6379/0
CELERY_RESULT_BACKEND=redis://redis:6379/0
CELERY_TIMEZONE=UTC
CELERY_TASK_TRACK_STARTED=True
CELERY_TASK_TIME_LIMIT=1800

# API Keys
OPEN_AI_TOKEN=your-openai-token-here

# JWT settings
JWT_ACCESS_TOKEN_LIFETIME=60
JWT_REFRESH_TOKEN_LIFETIME=1440
```
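These variables would typically be mapped into `settings.py` along the following lines. This is a sketch, not the project's actual settings code; the `SIMPLE_JWT` block assumes the JWT lifetimes (given in minutes) are consumed by `djangorestframework-simplejwt`.

```python
# settings.py (sketch) -- reading the .env values from the environment.
import os
from datetime import timedelta

DEBUG = os.environ.get("DEBUG", "False") == "True"
SECRET_KEY = os.environ.get("SECRET_KEY", "")
ALLOWED_HOSTS = os.environ.get("ALLOWED_HOSTS", "localhost,127.0.0.1").split(",")

OPEN_AI_TOKEN = os.environ.get("OPEN_AI_TOKEN", "")

SIMPLE_JWT = {
    # Lifetimes in the .env file are expressed in minutes
    "ACCESS_TOKEN_LIFETIME": timedelta(
        minutes=int(os.environ.get("JWT_ACCESS_TOKEN_LIFETIME", "60"))
    ),
    "REFRESH_TOKEN_LIFETIME": timedelta(
        minutes=int(os.environ.get("JWT_REFRESH_TOKEN_LIFETIME", "1440"))
    ),
}
```

In a Docker Compose setup these values reach the container via the `env_file` or `environment` keys, so `os.environ` is enough; a library such as `python-dotenv` would only be needed when running outside Docker.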
## Installation

1. Clone the repository:

   ```bash
   git clone <repository-url>
   cd ai-support-chat
   ```

2. Create and configure the `.env` file as shown above.

3. Build and start the Docker containers:

   ```bash
   docker-compose up --build
   ```

The application will be available at http://localhost:8000.
## Project Structure

```
ai_support_chat/
├── ai_support_chat/       # Main project directory
│   ├── settings.py        # Project settings
│   ├── urls.py            # URL configuration
│   ├── asgi.py            # ASGI configuration
│   └── wsgi.py            # WSGI configuration
├── chat/                  # Chat application
│   ├── models.py          # Chat models
│   ├── views.py           # Chat views
│   ├── urls.py            # Chat URLs
│   └── consumers.py       # WebSocket consumers
├── users/                 # User management application
│   ├── models.py          # User models
│   ├── views.py           # User views
│   └── urls.py            # User URLs
├── manage.py              # Django management script
├── requirements.txt       # Python dependencies
├── Dockerfile             # Docker configuration
├── docker-compose.yml     # Docker Compose configuration
└── .env                   # Environment variables
```
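The consumer in `chat/consumers.py` would look roughly like the sketch below. The class name and message shape are assumptions, and a minimal stub stands in for `channels.generic.websocket.AsyncWebsocketConsumer` so the snippet runs without the `channels` package installed.

```python
import json


class AsyncWebsocketConsumer:
    """Minimal stand-in for channels.generic.websocket.AsyncWebsocketConsumer,
    used here only so this sketch is self-contained."""

    async def accept(self):
        self.accepted = True

    async def send(self, text_data=None):
        self.last_sent = text_data


class ChatConsumer(AsyncWebsocketConsumer):
    """Illustrative consumer for WS /ws/chat/ (names are assumptions)."""

    async def connect(self):
        await self.accept()

    async def receive(self, text_data=None, bytes_data=None):
        payload = json.loads(text_data)
        # The real app would hand the message to a Celery task that calls
        # the OpenAI API; here we simply echo to show the message shape.
        await self.send(text_data=json.dumps({"message": payload["message"]}))
```

In the real consumer, `connect` would also join a channel-layer group so the Celery worker can push the AI reply back to the open socket.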
## Services

The application consists of four main services:

- Web Service: Runs the Django application with the Daphne ASGI server
- Celery Worker: Processes background tasks
- Celery Beat: Handles scheduled tasks
- Redis: Serves as message broker and cache
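Both Channels and Celery would point at the same Redis service from the Compose file. A hedged `settings.py` sketch (the project's actual wiring may differ; `channels_redis` is assumed as the channel-layer backend):

```python
# settings.py (sketch) -- wiring Channels and Celery to the Redis service.
import os

REDIS_HOST = os.environ.get("REDIS_HOST", "redis")
REDIS_PORT = int(os.environ.get("REDIS_PORT", "6379"))

CHANNEL_LAYERS = {
    "default": {
        "BACKEND": "channels_redis.core.RedisChannelLayer",
        "CONFIG": {"hosts": [(REDIS_HOST, REDIS_PORT)]},
    }
}

CELERY_BROKER_URL = os.environ.get(
    "CELERY_BROKER_URL", f"redis://{REDIS_HOST}:{REDIS_PORT}/0"
)
CELERY_RESULT_BACKEND = os.environ.get("CELERY_RESULT_BACKEND", CELERY_BROKER_URL)
```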
## API Endpoints

### Authentication

- `POST /api/auth/login/` - User login
- `POST /api/auth/refresh/` - Refresh JWT token
- `POST /api/auth/logout/` - User logout

### Chat

- `GET /api/chat/` - Get chat history
- `POST /api/chat/` - Send message
- `WS /ws/chat/` - WebSocket connection for real-time chat

### Users

- `GET /api/users/me/` - Get current user
- `PUT /api/users/me/` - Update user profile
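The access token returned by the login endpoint is a standard JWT, so its payload can be inspected client-side without the signing key. The snippet below decodes the payload segment for debugging only (the signature is not verified here, so never use this for authentication decisions):

```python
import base64
import json


def jwt_payload(token: str) -> dict:
    """Decode a JWT's payload segment WITHOUT verifying the signature."""
    payload_b64 = token.split(".")[1]
    # Restore the base64 padding that JWTs strip
    payload_b64 += "=" * (-len(payload_b64) % 4)
    return json.loads(base64.urlsafe_b64decode(payload_b64))


# Hand-built unsigned token, for structure only (claims are illustrative)
header = base64.urlsafe_b64encode(b'{"alg":"HS256","typ":"JWT"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(b'{"user_id":1,"token_type":"access"}').rstrip(b"=").decode()
token = f"{header}.{body}.signature"
print(jwt_payload(token))  # {'user_id': 1, 'token_type': 'access'}
```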
## Development

To run the project in development mode:

1. Create a virtual environment:

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows: venv\Scripts\activate
   ```

2. Install dependencies:

   ```bash
   pip install -r requirements.txt
   ```

3. Run migrations:

   ```bash
   python manage.py migrate
   ```

4. Start the development server:

   ```bash
   python manage.py runserver
   ```
## Production Deployment

For production deployment:

- Set `DEBUG=False` in `.env`
- Configure a production database (e.g., PostgreSQL)
- Set up proper SSL/TLS certificates
- Configure proper security headers
- Use a production-grade ASGI server
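The security-header items above correspond to standard Django settings. A minimal hardening sketch (values such as the HSTS duration are illustrative, not this project's confirmed configuration):

```python
# settings.py (production sketch) -- HTTPS and security-header settings.
DEBUG = False

SECURE_SSL_REDIRECT = True          # Redirect all HTTP traffic to HTTPS
SESSION_COOKIE_SECURE = True        # Send session cookie over HTTPS only
CSRF_COOKIE_SECURE = True           # Send CSRF cookie over HTTPS only
SECURE_HSTS_SECONDS = 31536000      # 1 year of HSTS (illustrative value)
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_CONTENT_TYPE_NOSNIFF = True  # X-Content-Type-Options: nosniff
X_FRAME_OPTIONS = "DENY"            # Disallow framing (clickjacking defense)
```

Running `python manage.py check --deploy` reports which of these settings are still missing or unsafe.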
## Contributing

1. Fork the repository
2. Create a feature branch
3. Commit your changes
4. Push to the branch
5. Create a Pull Request