A demonstration project showcasing Retrieval Augmented Generation (RAG) implementation using Spring AI and OpenAI's GPT models. This application enables intelligent document querying by combining the power of Large Language Models (LLMs) with local document context.
This project demonstrates how to:
- Ingest PDF documents into a vector database
- Perform semantic searches using Spring AI
- Augment LLM responses with relevant document context
- Create an API endpoint for document-aware chat interactions
To build and run the project you will need:
- Java 23
- Maven
- Docker Desktop
- An OpenAI API key
- Project dependencies generated via Spring Initializr
The project uses the following Spring Boot starters and dependencies:
```xml
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-openai-spring-boot-starter</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-pdf-document-reader</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.ai</groupId>
        <artifactId>spring-ai-pgvector-store-spring-boot-starter</artifactId>
    </dependency>
</dependencies>
```

- Configure your environment variables:
```bash
OPENAI_API_KEY=your_api_key_here
```

- Update `application.properties`:

```properties
spring.ai.openai.api-key=${OPENAI_API_KEY}
spring.ai.openai.chat.options.model=gpt-4
spring.ai.vectorstore.pgvector.initialize-schema=true
```

- Place your PDF documents in the `src/main/resources/docs` directory
- Start Docker Desktop
- Launch the application:

```bash
./mvnw spring-boot:run
```

The application will:
- Start a PostgreSQL database with the PGVector extension (see the Compose sketch below)
- Initialize the vector store schema
- Ingest documents from the configured location
- Start a web server on port 8080
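The dependency list above does not show how the PostgreSQL/PGVector container is defined. A common setup for this kind of demo is a Docker Compose file picked up by Spring Boot's Docker Compose support (or started manually with `docker compose up`). The following is only a minimal sketch under that assumption; the file name, service name, credentials, and port are hypothetical and must match your own datasource configuration:

```yaml
# Hypothetical compose.yaml sketch -- service name, credentials, and port are assumptions.
services:
  pgvector:
    image: 'pgvector/pgvector:pg16'   # PostgreSQL image with the pgvector extension preinstalled
    environment:
      - 'POSTGRES_DB=ragdb'
      - 'POSTGRES_USER=postgres'
      - 'POSTGRES_PASSWORD=postgres'
    ports:
      - '5432:5432'
```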
The IngestionService handles document processing and vector store population:
```java
@Component
public class IngestionService implements CommandLineRunner {

    private final VectorStore vectorStore;

    @Value("classpath:/docs/your-document.pdf")
    private Resource marketPDF;

    public IngestionService(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    @Override
    public void run(String... args) {
        // Read the PDF paragraph by paragraph, split it into token-sized chunks,
        // and write the resulting documents (with embeddings) into the vector store.
        var pdfReader = new ParagraphPdfDocumentReader(marketPDF);
        TextSplitter textSplitter = new TokenTextSplitter();
        vectorStore.accept(textSplitter.apply(pdfReader.get()));
    }
}
```

The ChatController provides the REST endpoint for querying documents:
```java
@RestController
public class ChatController {

    private final ChatClient chatClient;

    public ChatController(ChatClient.Builder builder, VectorStore vectorStore) {
        // The QuestionAnswerAdvisor retrieves relevant document chunks from the
        // vector store and adds them as context to every prompt sent to the model.
        this.chatClient = builder
                .defaultAdvisors(new QuestionAnswerAdvisor(vectorStore))
                .build();
    }

    @GetMapping("/")
    public String chat() {
        return chatClient.prompt()
                .user("Your question here")
                .call()
                .content();
    }
}
```
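The sample endpoint hard-codes the question. As a possible extension (not part of the original controller; the `/ask` path and `question` parameter are hypothetical), the question can be passed in as a request parameter using the same `ChatClient` call chain:

```java
// Hypothetical variant: accept the question as a query parameter, e.g. GET /ask?question=...
@GetMapping("/ask")
public String ask(@RequestParam String question) {
    return chatClient.prompt()
            .user(question)
            .call()
            .content();
}
```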
Query the API using curl or your preferred HTTP client:

```bash
curl http://localhost:8080/
```

The response will include context from your documents along with the LLM's analysis.
- Document Processing: Uses Spring AI's PDF document reader to parse documents into manageable chunks
- Vector Storage: Utilizes PGVector for efficient similarity searches
- Context Retrieval: Automatically retrieves relevant document segments based on user queries (see the similarity-search sketch after this list)
- Response Generation: Combines document context with GPT-4's capabilities for informed responses
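To see the retrieval step in isolation (without the LLM call), you can query the vector store directly. This is only a sketch: the class name and example query are hypothetical, `Document.getText()` assumes a recent Spring AI release (older milestones expose `getContent()` instead), and its run order relative to `IngestionService` is not guaranteed unless you order the runners.

```java
import java.util.List;

import org.springframework.ai.document.Document;
import org.springframework.ai.vectorstore.VectorStore;
import org.springframework.boot.CommandLineRunner;
import org.springframework.stereotype.Component;

// Hypothetical helper: runs a plain similarity search, the same retrieval step
// the QuestionAnswerAdvisor performs before each chat call.
@Component
public class RetrievalSmokeTest implements CommandLineRunner {

    private final VectorStore vectorStore;

    public RetrievalSmokeTest(VectorStore vectorStore) {
        this.vectorStore = vectorStore;
    }

    @Override
    public void run(String... args) {
        // Embed the query and return the most similar document chunks
        List<Document> results = vectorStore.similaritySearch("Your question here");
        results.forEach(doc -> System.out.println(doc.getText())); // getContent() on older Spring AI versions
    }
}
```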
- Document Ingestion
  - Consider implementing checks before reinitializing the vector store (see the sketch after this list)
  - Use scheduled tasks for document updates
  - Implement proper error handling for document processing
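Because `IngestionService` runs on every startup, restarting the application will re-ingest and duplicate the documents. The following is a minimal guard sketch, assuming the default Spring AI table name `vector_store` is unchanged and that a constructor-injected `JdbcTemplate` field named `jdbcTemplate` has been added to the service (Spring JDBC is already on the classpath via the PGVector store):

```java
// Sketch: skip ingestion when the default Spring AI table (vector_store) already has rows.
// Assumes a JdbcTemplate field injected through the constructor; names are illustrative.
@Override
public void run(String... args) {
    Long count = jdbcTemplate.queryForObject("select count(*) from vector_store", Long.class);
    if (count != null && count > 0) {
        return; // documents already ingested; avoid duplicating them
    }
    var pdfReader = new ParagraphPdfDocumentReader(marketPDF);
    TextSplitter textSplitter = new TokenTextSplitter();
    vectorStore.accept(textSplitter.apply(pdfReader.get()));
}
```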
- Query Optimization
  - Monitor token usage
  - Implement rate limiting
  - Cache frequently requested information (see the sketch after this list)
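A minimal in-memory caching sketch, keyed by the exact question text. The field, path, and method names are hypothetical additions to `ChatController` (they require `java.util.Map` and `java.util.concurrent.ConcurrentHashMap` imports); a production setup would more likely use Spring's `@Cacheable` with a real cache provider:

```java
// Hypothetical addition to ChatController: cache answers per question in memory.
private final Map<String, String> answerCache = new ConcurrentHashMap<>();

@GetMapping("/cached")
public String cachedChat(@RequestParam String question) {
    // computeIfAbsent only calls the model the first time a given question is seen
    return answerCache.computeIfAbsent(question, q ->
            chatClient.prompt().user(q).call().content());
}
```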
- Security
  - Secure your API endpoints (see the sketch after this list)
  - Protect sensitive document content
  - Safely manage API keys
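A minimal sketch of locking down the endpoints, assuming you add the `spring-boot-starter-security` dependency (it is not in the dependency list above). It requires authentication for every request and uses HTTP Basic to keep the demo simple; user management and HTTPS are left out:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.Customizer;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.web.SecurityFilterChain;

@Configuration
public class SecurityConfig {

    // Require authentication for every endpoint; HTTP Basic keeps the sketch simple.
    @Bean
    SecurityFilterChain securityFilterChain(HttpSecurity http) throws Exception {
        return http
                .authorizeHttpRequests(auth -> auth.anyRequest().authenticated())
                .httpBasic(Customizer.withDefaults())
                .build();
    }
}
```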