# AI Coding Agent Instructions for AIDocumentLibraryChat

## Architecture Overview
This is a Spring Boot 3.5+ application with an embedded Angular frontend, demonstrating Spring AI 1.1+ capabilities for document/image chat, SQL generation, and function calling. It uses PostgreSQL with the pgvector extension for vector storage and embeddings.

- **Backend**: REST controllers (Document, Image, Table/SQL, Function) → Services → Repositories (JPA + VectorStore) → ChatClient for AI interactions.
- **Frontend**: Angular SPA for uploads, searches, and result display.
- **Data Flow**: Upload content → chunk/embed/store in the vector DB → embed the query → vector search → AI generates a response with source links.
- **Why this structure**: It enables RAG over personal documents, images, and databases, reducing hallucinations by grounding responses in stored content.

Key files: `structurizr/workspace.dsl` (C4 diagrams), `backend/src/main/resources/application.properties` (configuration).
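The vector-search step in the data flow above boils down to ranking stored chunk embeddings by cosine similarity to the query embedding. A minimal plain-Java sketch of that ranking idea (a hypothetical helper for illustration, not the application's actual code, which delegates this to pgvector):

```java
// Illustrative sketch: rank chunk embeddings by cosine similarity to a query
// embedding. The real application performs this inside PostgreSQL via pgvector.
public class CosineSketch {

    // Cosine similarity: dot(a, b) / (|a| * |b|); vectors must be same length.
    static double cosine(double[] a, double[] b) {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.sqrt(normA) * Math.sqrt(normB));
    }

    public static void main(String[] args) {
        double[] query  = {1.0, 0.0};
        double[] chunkA = {0.9, 0.1};  // points roughly the same direction as the query
        double[] chunkB = {0.0, 1.0};  // orthogonal to the query
        // chunkA scores higher, so it would be retrieved first
        System.out.println(cosine(query, chunkA) > cosine(query, chunkB)); // true
    }
}
```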

## Critical Workflows
- **Build**: `./gradlew clean build -PwithAngular=true` (includes the npm build). Add `-PuseOllama=true` for the Ollama dependencies.
- **Run**: `java -jar backend/build/libs/aidocumentlibrarychat.jar --spring.profiles.active=ollama` (or the default profile for OpenAI).
- **Services**: `./runPostgresql.sh` (DB), `./runOllama.sh` (local AI), `./runStructurizr.sh` (diagrams).
- **Debug**: Check AI prompts/responses in the logs. Use profiles to switch models. Test with `curl` against the REST endpoints.

## Project-Specific Patterns
- **AI Integration**: Use `ChatClient` with task-specific prompts (e.g., a system message that grounds RAG answers in retrieved chunks). Embeddings via the OpenAI API or ONNX transformers.
- **Model Selection**: Different Ollama models per feature (e.g., `qwen2.5:32b` for documents, `llama3.2-vision` for images), configured in `application-ollama.properties`.
- **Token Management**: Set `document-token-limit` and `embedding-token-limit` per profile to control context size.
- **DB Migrations**: Liquibase changelogs live in `backend/src/main/resources/dbchangelog/`; a separate changelog set for the Ollama profile includes the vector setup.
- **MCP Tools**: The client connects to external servers via SSE for book/movie data; enabled with `spring.ai.mcp.client.enabled=true`.
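As an illustration of per-profile model selection, a fragment of `application-ollama.properties` might look like the following. `spring.ai.ollama.chat.options.model`, `spring.ai.ollama.base-url`, and `spring.ai.mcp.client.enabled` are standard Spring AI property keys; the exact keys and values used in this repository may differ, so treat this as a sketch:

```properties
# Chat model served by the local Ollama instance (standard Spring AI key)
spring.ai.ollama.chat.options.model=qwen2.5:32b
# Base URL of the local Ollama server (default port shown; adjust as needed)
spring.ai.ollama.base-url=http://localhost:11434
# Enable the MCP client for the external book/movie tool servers
spring.ai.mcp.client.enabled=true
```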

Examples:
- Document RAG: `DocumentService` chunks text, embeds the chunks, and stores them; queries are ranked by cosine similarity.
- Image Search: `ImageService` generates image descriptions via LLaVA and embeds them.
- SQL Generation: `TableService` uses embeddings of the table metadata to build queries.
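The chunking step in the document RAG example can be illustrated with a deliberately simplified character-based splitter. The real service splits on token counts within the configured token limits; this hypothetical sketch only shows the sliding-window-with-overlap idea behind chunking:

```java
import java.util.ArrayList;
import java.util.List;

// Simplified character-based chunker with overlap (illustrative only; the
// application splits by tokens, not characters). Requires size > overlap.
public class ChunkSketch {

    static List<String> chunk(String text, int size, int overlap) {
        List<String> chunks = new ArrayList<>();
        int step = size - overlap; // window advances by size minus overlap
        for (int start = 0; start < text.length(); start += step) {
            chunks.add(text.substring(start, Math.min(start + size, text.length())));
            if (start + size >= text.length()) break; // last window reached the end
        }
        return chunks;
    }

    public static void main(String[] args) {
        // Each chunk shares 2 characters with its neighbor, so no context
        // is lost at chunk boundaries when the chunks are embedded.
        System.out.println(chunk("abcdefghij", 4, 2)); // [abcd, cdef, efgh, ghij]
    }
}
```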

## Conventions
- Profiles: `default` (OpenAI), `ollama` (local models), `prod` (production settings).
- Properties: Environment variables for API keys and endpoints (e.g., `OPENAI-API-KEY`, `OLLAMA-BASE-URL`).
- Testing: Unit tests with JUnit; integration tests via Spring Boot Test; ArchUnit for architecture checks.
- Dependencies: Managed via the Spring BOM; the vector store comes from `spring-ai-starter-vector-store-pgvector`.

Reference: `backend/build.gradle` (dependencies), `application-ollama.properties` (model configs).