AI Background
As large AI models continue to improve, their applications are becoming increasingly widespread, and one of the most important scenarios is integration with CMS (Content Management Systems).
Since AI always requires a content source, the CMS serves as a critical link. Furthermore, the content within a CMS needs to be mined by AI, which involves RAG (Retrieval-Augmented Generation) or AI data mining within the CMS. Lastly, AIGC (AI-Generated Content) requires storage, making the CMS the natural choice for persistence.
This creates a closed-loop workflow of Content -> AI -> Content, underscoring the pivotal role CMS plays in AI applications.
This article analyzes and demonstrates CMS+RAG applications, illustrated with real-world project examples.
RAG
RAG, or Retrieval-Augmented Generation, as its name suggests, enhances generation through retrieval. Here, retrieval refers to data retrieval external to the AI model, while generation refers to AIGC, or AI-generated content.
Essentially, RAG retrieves data in advance and feeds it to the AI, which then uses the provided knowledge to respond. This approach helps mitigate issues like hallucinations in large models and knowledge gaps in certain specialized domains.
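In code, the flow is simply "retrieve, then generate". Below is a minimal sketch in PHP, where retrieve() and generate() are hypothetical placeholders for whichever retrieval backend and chat-completion API you use:

<?php

// A minimal RAG flow: retrieve first, then generate with the retrieved
// context injected into the prompt. retrieve() and generate() are
// hypothetical placeholders, not functions from any real library.
function answerWithRag(string $question): string {
  // Step 1: Retrieval - fetch documents relevant to the question.
  $documents = retrieve($question);

  // Step 2: Augmentation - inject the retrieved text into the prompt.
  $context = implode("\n\n", $documents);
  $prompt = "Answer using only the provided context.\n\n"
    . "Context:\n{$context}\n\nQuestion: {$question}";

  // Step 3: Generation - the model answers from the supplied knowledge.
  return generate($prompt);
}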
The key to RAG lies in retrieval. Most common RAG applications are built on LangChain, which uses vector retrieval by default. This has led to the misconception that RAG must rely on vectorization, but that is not the case.
In fact, RAG can retrieve data from any content source. While retrieval is the core component of RAG, the source of that retrieval can vary widely.
RAG retrieval sources include:
Vector retrieval
Knowledge graph retrieval
Solr retrieval
Database retrieval
AI-powered retrieval
Thus, the essence of RAG is retrieval, but the sources of retrieval can be diverse. Moreover, retrieval does not need to occur in a single step; multiple retrieval processes can be involved.
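To make this concrete, a retrieval source can be modeled as a small interface, so that Solr, a vector store, a plain database, or even another AI call become interchangeable and chainable backends. A hypothetical PHP sketch:

<?php

// A hypothetical interface for pluggable retrieval sources. Each backend
// (Solr, vector store, database, AI-powered lookup) implements it, and
// several retrievers can be chained for multi-step retrieval.
interface RetrieverInterface {

  /**
   * Returns text passages relevant to the query.
   *
   * @return string[]
   */
  public function retrieve(string $query): array;

}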
Drupal RAG Solution
CMS systems are naturally connected to AI: AI relies on content as its foundation, and AIGC output in turn needs to be stored in the CMS, forming the closed-loop Content -> AI -> Content workflow described above.
We have previously published several articles on Drupal-based AI, covering modules such as Drupal OpenAI and Drupal Augmentor.
For RAG in the context of Drupal CMS, the key components are:
Data retrieval
AI inference and generation
Solr Retrieval
Retrieval can be implemented through Drupal's Solr integration or through vector-based retrieval. Compared to vector retrieval, Solr retrieval in Drupal is relatively straightforward: with the contributed Search API and Search API Solr modules, you can quickly set up a Solr search system and the corresponding APIs. The retrieved results can then be fed into a large AI model for inference and generation.
For setting up Search API and Solr, you can refer to relevant tutorials available online.
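Once an index is in place, querying it from code takes only a few lines using the Search API query interface. A sketch, assuming a hypothetical index machine name of solr_content:

<?php

use Drupal\search_api\Entity\Index;

// Query a Search API index backed by Solr and collect the top matches.
// 'solr_content' is a hypothetical index machine name.
$user_question = 'how do I configure Solr search?';
$index = Index::load('solr_content');
$query = $index->query();
$query->keys($user_question); // Full-text keywords from the user.
$query->range(0, 5);          // Keep only the top 5 results.
$results = $query->execute();

$passages = [];
foreach ($results->getResultItems() as $item) {
  // Load the indexed entity and keep its label (or a rendered body
  // field) as retrieval context for the AI model.
  $entity = $item->getOriginalObject()->getValue();
  $passages[] = $entity->label();
}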
Vector Retrieval
Another approach is vector retrieval, which relies on vectorizing content. Once vectorized, content can be retrieved using high-precision vector similarity.
Drupal’s OpenAI module offers an integration with Search API AI, which defaults to using databases like Pinecone or Milvus as vector stores. Popular vector databases include:
Pinecone: Fully managed vector database
Weaviate: Open-source vector search engine
Redis: Used as a vector database
Qdrant: Vector search engine
Milvus: Built for scalable similarity search
Chroma: Open-source embedding database
Typesense: Fast, open-source vector search
Zilliz: Data infrastructure powered by Milvus
Because many SaaS-based vector stores suffer connectivity issues in some regions, it is often better to run a local vector store. Simple options include Chroma and Pgvector. We opted for Pgvector: it is based on PostgreSQL, which makes it familiar to manage, and it can store conventional data types in the same tables alongside vectors.
We developed a module called Search API Pgvector, which uses Drupal's Search API framework to vectorize indexed content and store the embeddings in Pgvector. Queries are then performed in SQL to retrieve the most similar items, returning the results that best match the user's needs.
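For illustration, a similarity lookup against Pgvector might look like the sketch below. Here embed() is a hypothetical helper wrapping an embeddings API, and the table and column names are illustrative rather than the actual schema of Search API Pgvector:

<?php

// Embed the user's question, then find the closest stored vectors.
// embed() is a hypothetical helper calling an embeddings API; the
// {content_embeddings} table and its columns are illustrative only.
$user_question = 'how do I configure Solr search?';
$vector = embed($user_question); // e.g. an array of 1536 floats
$literal = '[' . implode(',', $vector) . ']';

// pgvector's <=> operator computes cosine distance, so ordering by it
// ascending returns the most similar rows first.
$rows = \Drupal::database()->query(
  "SELECT item_id, body FROM {content_embeddings}
   ORDER BY embedding <=> :vec LIMIT 5",
  [':vec' => $literal]
)->fetchAll();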
AI Generation
Once the retrieval results are obtained, the next step is AI generation. Drupal already offers the OpenAI module, which can directly call OpenAI’s APIs for inference and generation.
We have also modified the OpenAI module to support other endpoints, such as Microsoft Azure, OpenAI proxy servers (required in regions where OpenAI is inaccessible), and Kimi. Since Kimi's API is fully compatible with OpenAI's, all of these integrations can be unified under the OpenAI module.
If you wish to use other large AI models, such as Llama 3, DeepSeek, or Qwen, you will need to integrate their respective APIs.
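Because such providers expose OpenAI-compatible chat endpoints, switching between them mostly amounts to changing the base URL, API key, and model name. A sketch using Drupal's bundled Guzzle HTTP client, with illustrative endpoint and model values:

<?php

// Call an OpenAI-compatible chat completions endpoint. The base URL and
// model below are illustrative; an Azure deployment or a proxy URL
// works the same way.
$api_key = getenv('CHAT_API_KEY');        // your provider's key
$base_url = 'https://api.moonshot.cn/v1'; // e.g. Kimi's endpoint
$context = 'Retrieved passages go here.';
$question = 'What does the documentation say about Solr?';

$response = \Drupal::httpClient()->post($base_url . '/chat/completions', [
  'headers' => [
    'Authorization' => 'Bearer ' . $api_key,
    'Content-Type' => 'application/json',
  ],
  'json' => [
    'model' => 'moonshot-v1-8k',
    'messages' => [
      ['role' => 'system', 'content' => $context],
      ['role' => 'user', 'content' => $question],
    ],
  ],
]);
$data = json_decode((string) $response->getBody(), TRUE);
$answer = $data['choices'][0]['message']['content'];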
Prompt
For AI generation, one crucial aspect is the prompt. In code, OpenAI's chat API can be used in several ways; for instance, placing the context in the system message rather than the user message can be advantageous.
The benefit of this approach is that when the user's input is very long, the model may struggle to capture the key points in subsequent queries, whereas essential instructions and context placed in the system message continue to be treated as important.
Here’s an example:
SYSTEM
Use the provided articles delimited by triple quotes to answer questions. If the answer cannot be found in the articles, write "I could not find an answer."
USER
<insert articles, each delimited by triple quotes> Question: <insert question here>
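In code, this pattern maps onto the messages array of the chat completions API. A sketch with hypothetical $article_text and $user_question variables, placing the instructions and retrieved articles in the system message as recommended above:

<?php

// Put the grounding instruction and the retrieved articles into the
// system message so they stay weighted as important even when the
// user's input is long. $article_text and $user_question are
// hypothetical variables filled in by your retrieval step.
$article_text = 'First retrieved article...';
$user_question = 'What does the article conclude?';

$messages = [
  [
    'role' => 'system',
    'content' => 'Use the provided articles delimited by triple quotes '
      . 'to answer questions. If the answer cannot be found in the '
      . 'articles, write "I could not find an answer."'
      . "\n\n\"\"\"\n" . $article_text . "\n\"\"\"",
  ],
  [
    'role' => 'user',
    'content' => $user_question,
  ],
];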
Finally, you can build your own AI+Drupal application with the relevant Drupal modules and custom code.