Redis-powered restaurant discovery with intelligent dining assistance.
An AI-powered restaurant discovery platform that combines Redis's speed with LangGraph's intelligent workflow orchestration. Get personalized restaurant recommendations, smart dining suggestions, and lightning-fast responses through semantic caching.
- Node.js (v24+) + Express - Backend runtime and API framework
- Redis - Restaurant store, agentic AI memory, conversational history, and semantic caching
- Redis LangCache API - Semantic caching for LLM responses
- LangGraph - AI workflow orchestration
- OpenAI API - GPT-4 for intelligent responses and embeddings for vector search
- HTML + CSS + Vanilla JS - Frontend UI
- Smart Restaurant Discovery: AI-powered assistant that helps you find restaurants, discover cuisines, and manage your reservations, with both text and vector-based search across restaurants
- Dining Intelligence: Get restaurant recommendations with detailed information for any cuisine or occasion using RAG (Retrieval Augmented Generation)
- Demo Reservation System: Reservation management - add, view, and manage restaurant reservations (Simplified demo implementation, not a production-ready reservation system)
- Redis as a memory layer: For fast data retrieval
- Vector Search: Find restaurants using AI-powered similarity search
- Semantic Cache: Similar queries return instantly using Redis LangCache (see the sketch after this list)
- LangGraph Workflows: AI agent routing and tool selection
- Multi-tool Agent: Restaurant tools, search tools, reservation tools, and knowledge tools
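
To make the caching and agent flow concrete, here is a minimal sketch of how a chat message could be handled: check the semantic cache first, and only run the LangGraph agent on a miss. Every function in this snippet is an illustrative stub, not this repository's actual code or the LangCache API:

```js
// Minimal sketch of the cache-first chat flow. All helpers below are illustrative
// stubs — they are not this repository's actual functions or the LangCache API.

// Stub: look up a semantically similar past query (the real app uses Redis LangCache).
async function checkSemanticCache(message) {
  return null; // pretend cache miss
}

// Stub: run the LangGraph agent (routing, tool selection, RAG) and return an answer.
async function runAgentWorkflow(sessionId, message) {
  return `Here are some restaurants matching: ${message}`;
}

// Stub: store the query/response pair so similar future queries can hit the cache.
async function storeInSemanticCache(message, answer) {}

async function handleChatMessage(sessionId, message) {
  const cached = await checkSemanticCache(message);          // 1. semantic cache lookup
  if (cached) return { answer: cached, source: 'semantic-cache' };

  const answer = await runAgentWorkflow(sessionId, message); // 2. cache miss: run the agent
  await storeInSemanticCache(message, answer);               // 3. cache the new pair
  return { answer, source: 'llm' };
}

handleChatMessage('demo-session', 'cozy Italian place for two').then(console.log);
```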
cp .env.example .env
# Edit .env with your Redis URL, OpenAI key, and LangCache credentials
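
The variable names below are only a rough illustration of what such a file might contain — .env.example defines the actual names expected by the app:

```
# Illustrative values only — see .env.example for the real variable names
REDIS_URL=redis://localhost:6379
OPENAI_API_KEY=sk-your-key-here
LANGCACHE_API_KEY=your-langcache-key
LANGCACHE_CACHE_ID=your-cache-id
PORT=3000
```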
Quick start:

# Build and start
docker compose build
docker compose up -d
# Load data (first time only)
docker compose exec relish-app npm run load-restaurants
docker compose exec relish-app npm run seed-dummy-users
# View logs
docker compose logs -f relish-app

Subsequent runs:
docker compose up

To stop the service and remove containers:
# Stop and remove containers
docker compose down
- Install dependencies:
npm install
- Load restaurant data:
npm run load-restaurants
- Load sample users:
npm run seed-dummy-users
- Start the server:
npm start
Visit http://localhost:3000/?name=ashwin (or substitute any other seeded username) in your browser. The default port can be customized via the environment variables defined in the .env configuration file.
├── package.json
├── index.js
├── config.js
├── modules/
│   ├── restaurants/      # Restaurant Component
│   │   ├── api/          # REST API endpoints
│   │   ├── domain/       # Business logic and services
│   │   └── data/         # Data access layer
│   ├── reservations/     # Reservation Business Component
│   │   ├── api/
│   │   ├── domain/
│   │   └── data/
│   ├── chat/             # Chat/Cache Component
│   │   ├── api/
│   │   ├── domain/
│   │   └── data/
│   ├── users/            # User Business Component
│   │   ├── domain/
│   │   └── data/
│   └── ai/               # Agentic AI Layer
│
├── client/               # Frontend assets
├── views/                # HTML and Handlebars templates
├── scripts/              # Data loading scripts
└── README.md             # This file
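
As a rough sketch of how one business component's layers fit together (the file contents and function names below are made up for illustration; the real code under modules/restaurants/ differs), a request flows from the api layer through the domain layer down to the data layer:

```js
// Illustrative sketch only — not the repository's actual modules/restaurants/ code.
const express = require('express');
const app = express();

// data layer: Redis access (text / vector search) — placeholder implementation
async function findRestaurants(query) {
  return []; // in the real app this would query Redis
}

// domain layer: business logic built on top of the data layer
async function searchRestaurants(query, filters = {}) {
  const results = await findRestaurants(query);
  return filters.cuisine ? results.filter((r) => r.cuisine === filters.cuisine) : results;
}

// api layer: Express route exposing the domain service
app.get('/api/restaurants', async (req, res) => {
  res.json(await searchRestaurants(req.query.q, { cuisine: req.query.cuisine }));
});
```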
- POST /api/ai/chat - Main chat interface for AI restaurant assistant
- POST /api/ai/chat/end-session - End user session and clear chat history
- GET /api/ai/chat/history - Get chat history for a session
- GET /api/ai/chat/cache-check - Check semantic cache for a query
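
A chat request might look roughly like this — the request body fields are assumptions for illustration, so check the chat module's api code for the actual contract:

```js
// Request body field names are illustrative assumptions, not the verified schema.
const res = await fetch('http://localhost:3000/api/ai/chat', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ sessionId: 'ashwin', message: 'Find a sushi place for tonight' }),
});
console.log(await res.json());
```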
- GET /api/restaurants - Unified restaurant search with text/vector similarity
- GET /api/restaurants/filters - Get available filter options (cuisines, cities, localities, types)
- GET /api/restaurants/stats - Get restaurant statistics
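
A quick way to try the search endpoint from Node (the query parameter name is an assumption; see the restaurants api module for the real one):

```js
// The query parameter name is an illustrative assumption.
const res = await fetch('http://localhost:3000/api/restaurants?query=spicy+ramen');
console.log(await res.json());
```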
- POST /api/reservations/book - Create a new reservation
- GET /api/reservations/:sessionId - Get all reservations for a session
- GET /api/reservations/reservation/:reservationId - Get specific reservation details
- PUT /api/reservations/:reservationId/cancel - Cancel a reservation
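
And a sketch of booking and then listing reservations for a session — the body fields and session value below are assumptions, not the verified schema:

```js
// Body fields and the session value are illustrative assumptions.
await fetch('http://localhost:3000/api/reservations/book', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ sessionId: 'ashwin', restaurantId: 'some-restaurant-id', partySize: 2, time: '19:30' }),
});

// List all reservations for that session
const reservations = await fetch('http://localhost:3000/api/reservations/ashwin');
console.log(await reservations.json());
```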
- Fork the repository
- Create your feature branch (git checkout -b feature/amazing-feature)
- Commit your changes following Conventional Commits (git commit -m 'feat: add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Ashwin Hariharan - @booleanhunter
This project is licensed under the MIT License - see the LICENSE file for details.
If you find a bug or have a feature request, please open an issue in the repository.
This project is a learning-focused demonstration designed to help developers understand:
- AI/ML Concepts: Semantic search, vector embeddings, semantic caching, and agentic AI workflows
- Full-Stack AI Architecture: How to organize AI applications using clean architecture principles and modular design patterns
- Integration Patterns: Wiring together Redis, LangGraph, OpenAI, and LangCache in a real-world application
What this project focuses on:
- ✅ Semantic search and vector similarity
- ✅ LLM-powered agentic workflows with tool calling
- ✅ Semantic caching for performance optimization
- ✅ Clean, modular architecture for full-stack AI applications
What this project does NOT focus on:
- ❌ Complete business logic (e.g., real reservation systems, payment processing)
- ❌ Production-ready authentication/authorization
The features in this app such as reservation and user management are intentionally simplified to focus on demonstrating AI workflows and architecture patterns.
Read More: Learn about the architectural principles in this blog post.
