
RAG Optimization: Document Retrieval Augmentation via Question Generation

In this article, we introduce a text-augmentation technique that uses additional question generation to improve document retrieval in a vector database. By generating and merging questions related to each text fragment, the technique enhances the standard retrieval process and increases the likelihood of finding the relevant documents that can serve as context for generative question answering.

Implementation Steps

By enriching text fragments with related questions, our goal is to significantly improve the accuracy of identifying the most relevant parts of a document, i.e. those that contain the answer to the user's query. A concrete implementation generally involves the following steps:

1. Split the raw content into larger text documents.
2. Split each text document into smaller, overlapping text fragments.
3. Generate questions for each fragment (or for each whole document, depending on configuration).
4. Embed both the original fragments and the generated questions into a FAISS vector store.
5. At query time, retrieve the most relevant entry and use its parent document as context for answering.

We can specify, via configuration, whether question augmentation runs at the document level or at the fragment level:

from enum import Enum


class QuestionGeneration(Enum):
    """Enum class to specify the level of question generation for document processing.

    Attributes:
        DOCUMENT_LEVEL (int): Represents question generation at the entire document level.
        FRAGMENT_LEVEL (int): Represents question generation at the individual text fragment level.
    """
    DOCUMENT_LEVEL = 1
    FRAGMENT_LEVEL = 2
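The rest of the code refers to a few module-level configuration constants that this excerpt never defines. A minimal sketch of plausible definitions follows; the constant names match those used in the functions below, but the numeric values are illustrative assumptions, not values from the original implementation:

# Question-augmentation mode: per fragment or per whole document
QUESTION_GENERATION = QuestionGeneration.FRAGMENT_LEVEL

# Token budgets for splitting (illustrative values)
DOCUMENT_MAX_TOKENS = 1000
DOCUMENT_OVERLAP_TOKENS = 100
FRAGMENT_MAX_TOKENS = 200
FRAGMENT_OVERLAP_TOKENS = 20

# Number of questions to request per text unit (illustrative value)
QUESTIONS_PER_DOCUMENT = 3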

Solution Implementation

Question Generation

from typing import List

from langchain.prompts import PromptTemplate
from langchain_openai import ChatOpenAI


def generate_questions(text: str) -> List[str]:
    """Generates a list of questions based on the provided text using OpenAI.

    Args:
        text (str): The context data to generate questions from.

    Returns:
        List[str]: The generated questions.
    """
    llm = ChatOpenAI(temperature=0)
    prompt = PromptTemplate(
        input_variables=["context", "num_questions"],
        template="Using the context data: {context}\n\nGenerate a list of at least "
                 "{num_questions} possible questions about this context, one per line."
    )
    # QUESTIONS_PER_DOCUMENT is the configuration constant defined earlier
    result = (prompt | llm).invoke({"context": text, "num_questions": QUESTIONS_PER_DOCUMENT})
    # Keep each non-empty line of the completion as one question
    return [line.strip() for line in result.content.split("\n") if line.strip()]
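As a quick sanity check, the function can be called on a short passage, assuming an OpenAI API key is configured in the environment (the sample text and printed questions below are illustrative):

sample_text = "The Eiffel Tower, completed in 1889, is 330 metres tall and stands in Paris."
for question in generate_questions(sample_text):
    print(question)
# Possible output:
#   When was the Eiffel Tower completed?
#   How tall is the Eiffel Tower?
#   In which city does the Eiffel Tower stand?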

Main Processing Flow

from langchain.docstore.document import Document
from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings


def process_documents(content: str, embedding_model: OpenAIEmbeddings):
    """
    Process the document content, split it into fragments, generate questions,
    create a FAISS vector store, and return a retriever.

    Args:
        content (str): The content of the document to process.
        embedding_model (OpenAIEmbeddings): The embedding model to use for vectorization.

    Returns:
        VectorStoreRetriever: A retriever for the most relevant FAISS document.
    """
    # Split the whole text content into text documents
    text_documents = split_document(content, DOCUMENT_MAX_TOKENS, DOCUMENT_OVERLAP_TOKENS)
    print(f'Text content split into: {len(text_documents)} documents')

    documents = []
    counter = 0
    for i, text_document in enumerate(text_documents):
        text_fragments = split_document(text_document, FRAGMENT_MAX_TOKENS, FRAGMENT_OVERLAP_TOKENS)
        print(f'Text document {i} - split into: {len(text_fragments)} fragments')

        for j, text_fragment in enumerate(text_fragments):
            # Store the original fragment, keeping its parent document in the metadata
            documents.append(Document(
                page_content=text_fragment,
                metadata={"type": "ORIGINAL", "index": counter, "text": text_document}
            ))
            counter += 1

            if QUESTION_GENERATION == QuestionGeneration.FRAGMENT_LEVEL:
                questions = generate_questions(text_fragment)
                documents.extend([
                    Document(page_content=question,
                             metadata={"type": "AUGMENTED", "index": counter + idx, "text": text_document})
                    for idx, question in enumerate(questions)
                ])
                counter += len(questions)
                print(f'Text document {i} Text fragment {j} - generated: {len(questions)} questions')

        if QUESTION_GENERATION == QuestionGeneration.DOCUMENT_LEVEL:
            questions = generate_questions(text_document)
            documents.extend([
                Document(page_content=question,
                         metadata={"type": "AUGMENTED", "index": counter + idx, "text": text_document})
                for idx, question in enumerate(questions)
            ])
            counter += len(questions)
            print(f'Text document {i} - generated: {len(questions)} questions')

    for document in documents:
        print_document("Dataset", document)

    print(f'Creating store, calculating embeddings for {len(documents)} FAISS documents')
    vectorstore = FAISS.from_documents(documents, embedding_model)

    print("Creating retriever returning the most relevant FAISS document")
    return vectorstore.as_retriever(search_kwargs={"k": 1})
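process_documents relies on two helpers, split_document and print_document, that the article does not show. Below is a minimal sketch of what they might look like, assuming token-based splitting with tiktoken; only the names and call signatures come from the code above, the bodies are assumptions:

import tiktoken


def split_document(text: str, max_tokens: int, overlap_tokens: int) -> List[str]:
    # Assumed helper: split text into overlapping chunks of at most max_tokens tokens
    encoding = tiktoken.get_encoding("cl100k_base")
    tokens = encoding.encode(text)
    chunks = []
    start = 0
    while start < len(tokens):
        chunks.append(encoding.decode(tokens[start:start + max_tokens]))
        start += max_tokens - overlap_tokens
    return chunks


def print_document(comment: str, document: Document) -> None:
    # Assumed helper: log one entry with its type (ORIGINAL/AUGMENTED) and content
    print(f'{comment}: type={document.metadata["type"]}, '
          f'index={document.metadata["index"]}, content={document.page_content}')

End to end, the returned retriever can back a generative QA step. Because every AUGMENTED question stores its parent document under the "text" metadata key, the full original text can be recovered as context regardless of whether the query matched an original fragment or a generated question. A usage sketch (the file name is a placeholder):

embedding_model = OpenAIEmbeddings()
with open("example_document.txt") as f:
    retriever = process_documents(f.read(), embedding_model)

# Fetch the single most relevant FAISS entry for a user query
hits = retriever.invoke("What is the main topic of the document?")
context = hits[0].metadata["text"]  # parent document stored alongside each entry
print(context)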

This technique provides one way to improve the quality of information retrieval in vector-based document retrieval systems. Note that the implementation calls a large-language-model API, which may incur costs depending on usage.
