
Streaming with LangChain - `astream` method in LangChain doesn't return an async generator for streaming tokens


I am working on a FastAPI application that should stream tokens from a GPT-4 model deployed on Azure. I want to create a FastAPI route that uses server-sent events to stream each token as soon as it is generated by the model. I'm using AzureChatOpenAI and LLMChain from LangChain to access my models deployed on Azure.
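For context, this is the server-sent-events pattern I am following, as a minimal, LangChain-free sketch (the /sse_demo route is purely illustrative, not part of the real app); streaming itself works as expected this way:

import asyncio
import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

@app.get("/sse_demo")  # illustrative route name, not part of the real app
async def sse_demo():
    async def event_stream():
        for token in ["Hello", " ", "world"]:
            # One SSE frame per token; the trailing blank line terminates the frame.
            yield f"data: {json.dumps({'chunk': token})}\n\n"
            await asyncio.sleep(0.1)  # simulate model latency
    return StreamingResponse(event_stream(), media_type="text/event-stream")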

I tried to use the astream method of the LLMChain object. Instead of returning an async generator that streams tokens, it only returns the final answer as a single string.

I'm really at a loss as to why this isn't working: I do see token-by-token streaming in my terminal (presumably from the StreamingStdOutCallbackHandler), but not in the HTTP response. I expected astream to return an async generator yielding each token as it is generated by the model; instead I get the final string in one piece.
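For comparison, this is the behaviour I expected, sketched against the chat model directly rather than the chain (a minimal sketch; the Azure deployment/endpoint parameters are omitted and assumed to come from environment variables). Calling astream on the model itself yields AIMessageChunk objects token by token:

import asyncio
from langchain.chat_models import AzureChatOpenAI

async def main():
    # Deployment/endpoint parameters omitted; assumed to be set via env vars.
    llm = AzureChatOpenAI(streaming=True)
    # Expected: one AIMessageChunk per token, printed as it arrives.
    async for chunk in llm.astream("Tell me a joke"):
        print(chunk.content, end="", flush=True)

asyncio.run(main())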

Below is a code snippet (only the relevant part, but enough for debugging):

import json

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from langchain.callbacks.manager import AsyncCallbackManager
from langchain.callbacks.streaming_stdout import StreamingStdOutCallbackHandler
from langchain.chains import LLMChain
from langchain.chat_models import AzureChatOpenAI
from langchain.prompts import PromptTemplate

app = FastAPI()

class OpenAIModel:
    def __init__(self):
        self.llm = None

    def __call__(self, streaming: bool = True, **params) -> "OpenAIModel":
        # Lazily build the Azure chat model on first call.
        if self.llm is None:
            openai_params = {
                # other params removed for debugging purposes
                'streaming': streaming,
                'callback_manager': AsyncCallbackManager([StreamingStdOutCallbackHandler()]) if streaming else None,
                'verbose': True if streaming else False,
            }
            self.llm = AzureChatOpenAI(**openai_params)
        return self

    async def streaming_answer(self, question: str):
        qaPrompt = PromptTemplate(
            input_variables=["question"], template="OPENAI_TEMPLATE"  # anonymized template
        )
        chain = LLMChain(llm=self.llm, prompt=qaPrompt)
        # Expected: one chunk per token. Observed: a single chunk with the full answer.
        async for chunk in chain.astream({"question": question}):
            yield chunk

model = OpenAIModel()(**params)  # params elided for brevity

# QuestionRequest / StreamingAnswerResponse pydantic models omitted.
@app.post('/stream_answer', response_model=StreamingAnswerResponse)
async def stream_answer(request_data: QuestionRequest):
    generated_tokens = []

    async def generate_token():
        nonlocal generated_tokens
        async for token in model.streaming_answer(question=request_data.question):
            generated_tokens.append(token)
            yield f"data: {json.dumps({'chunk': token})}\n\n"

    return StreamingResponse(generate_token(), media_type="text/event-stream")
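For what it's worth, the one variant I have found documented to stream chunk by chunk is composing the prompt and model directly with LCEL instead of LLMChain; the legacy Chain.astream appears to fall back to ainvoke and yield the final output once. Here is a sketch of that composition (streaming_answer_lcel is a hypothetical helper, same anonymized template; StrOutputParser turns the chunks into plain string fragments):

from langchain.prompts import PromptTemplate
from langchain.schema.output_parser import StrOutputParser

async def streaming_answer_lcel(llm, question: str):
    # Hypothetical helper sketching the LCEL alternative to LLMChain.
    qa_prompt = PromptTemplate(
        input_variables=["question"], template="OPENAI_TEMPLATE"  # anonymized
    )
    # LCEL pipeline: prompt -> model -> string chunks. Runnable pipelines
    # propagate native streaming, so astream() yields per-token chunks here.
    chain = qa_prompt | llm | StrOutputParser()
    async for chunk in chain.astream({"question": question}):
        yield chunk  # plain string fragments, not a final dict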
