Is there a way to integrate vector embeddings in a LangChain agent?

I'm trying to use LangChain's ReAct agents, and I want to give them my Pinecone index for context. I couldn't find any interface that lets me provide my vector embeddings to the LLM that runs the ReAct chain.

Here I set up the LLM and create a retriever over my vector store.

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(temperature=0.1, model_name="gpt-4")
retriever = vector_store.as_retriever(search_type='similarity', search_kwargs={'k': k})
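For completeness, vector_store above is my existing Pinecone index loaded into LangChain, roughly like this (the index name is a placeholder, and the embedding model has to match whatever was used when the index was populated):

from langchain_community.vectorstores import Pinecone
from langchain_openai import OpenAIEmbeddings

# Connect to an already-populated Pinecone index; "my-index" is a placeholder.
vector_store = Pinecone.from_existing_index(
    index_name="my-index",
    embedding=OpenAIEmbeddings(),
)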

Here I set up and run my ReAct agent.

from langchain import hub
from langchain.agents import AgentExecutor, create_structured_chat_agent

prompt = hub.pull("hwchase17/structured-chat-agent")
agent = create_structured_chat_agent(llm, tools, prompt)  # tools is my existing list of agent tools
agent_executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = agent_executor.invoke({"input": question, "chat_history": chat_history})

Before switching to the ReAct agent, I used the vector embeddings like this.

from langchain.chains import ConversationalRetrievalChain

crc = ConversationalRetrievalChain.from_llm(llm, retriever)
result = crc.invoke({'question': systemPrompt, 'chat_history': chat_history})
chat_history.append((question, result['answer']))

Is there any way to combine both approaches and have a ReAct agent that also uses my vector embeddings?
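From reading the docs, create_retriever_tool looks like it could be the bridge, wrapping the retriever as a tool the agent can call, but I'm not sure this is the intended approach. The tool name and description below are placeholders I made up:

from langchain.tools.retriever import create_retriever_tool

# Wrap the retriever as an agent tool; the agent decides when to call it
# based on the description, so the description should say what the index contains.
retriever_tool = create_retriever_tool(
    retriever,
    name="pinecone_search",
    description="Search my Pinecone index for context relevant to the question.",
)

tools = [retriever_tool]  # then passed to create_structured_chat_agent as above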

