Integrating LLMs with LangGraph
This article explains how to integrate large language models into LangGraph via LangChain. It covers: 1) two ways to initialize a model (the traditional constructor and init_chat_model); 2) using models in workflows and agents, including dynamically selecting between models such as qwen-plus and deepseek-chat at runtime; 3) configuration techniques: disabling streaming, setting fallback models (switching automatically when the primary model fails), and rate limiting with InMemoryRateLimiter.
LangGraph integrates large language models directly through the libraries that ship with LangChain. You can initialize a model, configure it, and dynamically choose between different models.
1. Initializing a model
You can initialize a model for use in LangGraph the traditional way:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model='qwen-plus',
    api_key="sk-*",
    base_url="https://dashscope.aliyuncs.com/compatible-mode/v1"
)
You can also initialize a model with init_chat_model:
from langchain.chat_models import init_chat_model
import os

os.environ["OPENAI_API_KEY"] = "sk-*"
llm = init_chat_model(
    model='qwen-plus',
    model_provider='openai',
    base_url='https://dashscope.aliyuncs.com/compatible-mode/v1'
)
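Either way you end up with a standard LangChain chat model. As a quick, illustrative sanity check (the prompt here is arbitrary):

response = llm.invoke("Hello, please introduce yourself briefly.")
print(response.content)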
2. Using the model
In a workflow, you use the initialized model directly inside a node, as in the sketch below.
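A minimal sketch, assuming the llm initialized in section 1; the graph structure, node name, and MessagesState schema are illustrative choices, not something LangGraph prescribes:

from langgraph.graph import StateGraph, START, END, MessagesState

def call_model(state: MessagesState):
    # Call the model and append its reply to the conversation
    response = llm.invoke(state["messages"])
    return {"messages": [response]}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_edge(START, "call_model")
builder.add_edge("call_model", END)
graph = builder.compile()

result = graph.invoke({"messages": [{"role": "user", "content": "Hello"}]})
print(result["messages"][-1].content)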
In an agent, you can pass the initialized model directly when creating the agent:
from langgraph.prebuilt import create_react_agent

agent = create_react_agent(
    model=llm,      # the initialized model
    tools=[...],    # your tools; other parameters can be passed here as well
)
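Once real tools are supplied, the agent is invoked with a list of messages; the question below is just an illustrative example:

result = agent.invoke(
    {"messages": [{"role": "user", "content": "What is LangGraph?"}]}
)
print(result["messages"][-1].content)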
LangGraph can also choose a model dynamically based on preset conditions. The following code initializes two models, qwen-plus and deepseek-chat, and selects which one to use at runtime based on the context:
from dataclasses import dataclass
from typing import Literal
from langchain.chat_models import init_chat_model
from langchain_core.language_models import BaseChatModel
from langgraph.prebuilt import create_react_agent
from langgraph.prebuilt.chat_agent_executor import AgentState
from langgraph.runtime import Runtime
from langchain_tavily import TavilySearch
import os

os.environ["OPENAI_API_KEY"] = "sk-*"   # Qwen API key
os.environ["TAVILY_API_KEY"] = "tvly-*"

tool = TavilySearch(max_results=2)
tools = [tool]
# The runtime context stores which model provider to use
@dataclass
class CustomContext:
    provider: Literal["qwen", "deepseek"]
# Initialize the two models
qwen_model = init_chat_model(model='qwen-plus', model_provider='openai', base_url='https://dashscope.aliyuncs.com/compatible-mode/v1')
deepseek_model = init_chat_model(model='deepseek-chat', model_provider='openai', base_url='https://api.deepseek.com', api_key='sk-*')
# Selector function for model choice
def select_model(state: AgentState, runtime: Runtime[CustomContext]) -> BaseChatModel:
    if runtime.context.provider == "qwen":
        model = qwen_model
    elif runtime.context.provider == "deepseek":
        model = deepseek_model
    else:
        raise ValueError(f"Unsupported provider: {runtime.context.provider}")
    # With dynamic model selection, you must bind tools explicitly
    return model.bind_tools(tools=tools)
# The agent picks the model indicated by the provider in the context
agent = create_react_agent(select_model, tools=tools)
Invoking the agent with the deepseek model:
output = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "who did develop you?",
            }
        ]
    },
    context=CustomContext(provider="deepseek"),
)
print(output["messages"][-1].text())
The output is as follows:
I was developed by DeepSeek, a Chinese AI company. DeepSeek created me as part of their efforts in artificial intelligence research and development. The company focuses on creating advanced AI models and has been actively contributing to the AI field with various language models and AI technologies.
If you'd like more specific details about DeepSeek's development team or the company's background, I can search for more current information about them. Would you like me to do that?
Invoking the agent with the qwen-plus model:
output = agent.invoke(
    {
        "messages": [
            {
                "role": "user",
                "content": "who did develop you?",
            }
        ]
    },
    context=CustomContext(provider="qwen"),
)
print(output["messages"][-1].text())
The output is as follows:
I was developed by the Tongyi Lab team at Alibaba Group. This team brings together many researchers and engineers with expertise in artificial intelligence, natural language processing, and machine learning. If you have any questions or need assistance, feel free to ask me anytime!
3. Configuring the model
Model configuration covers disabling streaming, adding fallbacks, and rate limiting.
3.1 Disabling streaming
Streaming can be disabled when initializing the model:
qwen_model = init_chat_model(
    model='qwen-plus',
    model_provider='openai',
    base_url='https://dashscope.aliyuncs.com/compatible-mode/v1',
    disable_streaming=True
)
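With disable_streaming=True, streaming calls defer to the non-streaming path. As a rough sketch of the effect (assuming the qwen_model above), stream() then yields the full reply in a single chunk instead of token by token:

for chunk in qwen_model.stream("Introduce yourself in one sentence."):
    # Expect this loop to run once, with the complete answer in one chunk
    print(chunk.content)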
3.2 Fallbacks
When setting up a model, you can specify fallback models to call when the primary model fails. The following code uses qwen as the primary model and deepseek as the fallback; more than one fallback can be specified.
model_with_fallbacks = qwen_model.with_fallbacks([deepseek_model,])
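The fallback chain is invoked like any other model. In the illustrative call below, if the qwen_model request raises an error, the same input is retried against deepseek_model:

response = model_with_fallbacks.invoke("Hello")
print(response.content)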
3.3 Rate limiting
To add rate limiting, pass a pre-built rate limiter when initializing the model:
from langchain_core.rate_limiters import InMemoryRateLimiter

rate_limiter = InMemoryRateLimiter(
    requests_per_second=10,     # requests allowed per second
    check_every_n_seconds=0.1,  # how often to check for a free token, here 10 times per second
    max_bucket_size=10,         # maximum size of the token bucket, i.e. the largest allowed burst
)

model = init_chat_model(
    model='qwen-plus',
    model_provider='openai',
    base_url='https://dashscope.aliyuncs.com/compatible-mode/v1',
    rate_limiter=rate_limiter
)
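As an illustrative check, each call through the rate-limited model blocks until the limiter hands out a token, so bursts of requests are smoothed out to roughly requests_per_second:

import time

for i in range(3):
    start = time.time()
    model.invoke("ping")
    print(f"request {i} took {time.time() - start:.2f} seconds")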