
In this story we will describe how to use LangChain (v. 0.0.190) with ChatGPT under the hood. This story is a follow-up to a previous story on Medium and builds on its ideas.

LangChain has a set of foundational chains:

  • LLM: a simple chain with a prompt template that can process multiple inputs.
  • RouterChain: a gateway that uses the large language model (LLM) to select the most suitable processing chain.
  • Sequential: a family of chains which processes input in a sequential manner. This means that the output of the first node in the chain becomes the input of the second node, the output of the second node the input of the third, and so on.
  • Transformation: a type of chain that allows Python function calls for customizable text manipulation.
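To make the Sequential idea concrete before diving into LangChain itself, here is a minimal, library-free sketch of what sequential chaining amounts to: plain function composition, where each step's output feeds the next step. All names here are illustrative, not part of the LangChain API.

```python
# Illustrative sketch: a sequential chain passes each step's
# output to the next step, like plain function composition.
def run_sequential(steps, user_input):
    """Run each step in order, feeding the output of one
    step as the input of the next."""
    value = user_input
    for step in steps:
        value = step(value)
    return value

# Two toy "chains": one normalizes text, the next wraps it in a prompt.
normalize = lambda text: text.strip().lower()
to_prompt = lambda text: f"Answer the question: {text}"

result = run_sequential([normalize, to_prompt], "  What is LangChain?  ")
# result: "Answer the question: what is langchain?"
```

LangChain's SimpleSequentialChain does essentially this, with LLM calls as the steps.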

A Complex Workflow

In this story we will use all of the foundational chain types to create the following simple command-line application workflow:

Complex LangChain Flow

This flow performs the following steps:

  • Receive the user input
  • The input is written to a file via a callback.
  • The router selects the most suitable chain from seven options:
    - Python programmer (sequential chain that provides a solution and a unit test)
    - Kotlin programmer (sequential chain that provides a solution and a unit test)
    - Poet (simple LLMChain that provides a single response)
    - Wikipedia Expert (simple LLMChain)
    - Graphical artist (simple LLMChain)
    - UK, US Legal Expert (simple LLMChain)
    - Word gap filler (sequential chain that transforms the input and then fills in the gaps)
  • The large language model (LLM) responds.
  • The output is again written to a file via a callback.

As you can see, the main router chain triggers not only simple LLMChains, but also SimpleSequentialChains.
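Stripped of the LLM, the routing step can be pictured as a lookup from a destination name to a chain, with a fallback to a default chain. The sketch below illustrates just that idea; the keyword-based "router" and all names are our own invention, standing in for the LLM-driven router described above.

```python
# Toy illustration of the router idea: a router picks a destination
# name; the chosen chain then handles the input.
destination_chains = {
    "python programmer": lambda q: f"[python solution for: {q}]",
    "poet": lambda q: f"[poem about: {q}]",
}

def fake_llm_router(question):
    """Stand-in for the LLM router: picks a destination by keyword."""
    return "python programmer" if "function" in question else "poet"

def route(question):
    destination = fake_llm_router(question)
    # Fall back to a default chain when the router finds no match.
    chain = destination_chains.get(destination, lambda q: f"[default: {q}]")
    return chain(question)
```

In the real workflow, the router is itself an LLM call that reads the destination descriptions and emits the chosen name.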

Flow Implementation

We have published the Python implementation of the above flow in this GitHub repository:

GitHub - gilfernandes/complex_chain_playground: Playground project acting as an example of a complex LangChain workflow

github.com


If you want to play with it, you can clone the repository and then install the required libraries using Conda with Mamba.

Here is the script we used to install the necessary libraries:

conda create --name langchain2 python=3.10
conda activate langchain2
conda install -c conda-forge mamba
mamba install openai
mamba install langchain
mamba install prompt_toolkit

You will need a ChatGPT key installed in your environment for the script to work.

In Linux you can execute a script like this one to set up the ChatGPT key:

export OPENAI_API_KEY=

You can then activate the Conda environment and run the script:

conda activate langchain2
python ./lang_chain_router_chain.py

Example Output

We have executed the script with some questions and captured a transcript in this file.

Here are some of the prompts we used as input and the corresponding agents which were triggered:

Can you fill in the words for me in this text? Reinforcement learning (RL) is an area of machine
learning concerned with how intelligent agents ought to take actions in an environment
in order to maximize the notion of cumulative reward.
Reinforcement learning is one of three basic machine learning paradigms,
alongside supervised learning and unsupervised learning.
  • word filler
    removes every third word and then fills the gaps
What are the main differences between the UK and the US legal systems
in terms of the inheritance tax?
  • legal expert
    generates an explanation with a comparison of the laws
Can you write a Python function which returns the list of days 
between two dates?
  • python programmer
    generates the code and then the unit test
Can you write a Python function which implements the Levenshtein distance
between two words?
  • python programmer
    generates the code and then the unit test
Can you write a Kotlin function which converts two dates in ISO format
(like e.g. '2023-01-01') to LocalDate and then calculates the number of days
between both?
  • kotlin programmer
    generates the code and then the unit test
Can you write a poem about the joy of software development
in the English countryside?
  • poet
    generates a poem:

In the realm of code, where logic doth reside
Amidst the verdant fields, software doth abide.
Where bytes and bits dance with the gentle breeze,
In the English countryside, a programmer finds ease.

Can you generate an image with the output of a sigmoid function
and its derivative?
  • graphical artist
    generates an SVG image (not very accurate)
Can you explain to me the concept of a qubit in quantum computing?
  • wikipedia expert
    generates a decent explanation of the topic
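As an illustration of what the Python programmer chain produces for the Levenshtein prompt, a typical answer would look roughly like the function below. This is our own sketch of the classic dynamic-programming solution, not the model's verbatim output.

```python
def levenshtein_distance(a: str, b: str) -> int:
    """Compute the Levenshtein (edit) distance between two words
    using the classic dynamic-programming algorithm with two rows."""
    if len(a) < len(b):
        a, b = b, a  # ensure b is the shorter word
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        current = [i]
        for j, cb in enumerate(b, start=1):
            insert_cost = current[j - 1] + 1
            delete_cost = previous[j] + 1
            replace_cost = previous[j - 1] + (ca != cb)
            current.append(min(insert_cost, delete_cost, replace_cost))
        previous = current
    return previous[-1]

# Example: "kitten" -> "sitting" requires 3 edits.
assert levenshtein_distance("kitten", "sitting") == 3
```

The sequential chain then feeds this generated code to a second LLMChain that writes a unit test for it.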

Implementation Details

The project contains a main script which sets up the chains and executes them: complex_chain.py. We have other files in the project, like FileCallbackHandler.py, which is an implementation of a callback handler used to write the model inputs and outputs to an HTML file.

We are going to focus here on complex_chain.py.

complex_chain.py sets up the model first:

class Config():
    model = 'gpt-3.5-turbo-0613'
    llm = ChatOpenAI(model=model, temperature=0)

It declares a special variation of langchain.chains.router.MultiPromptChain, because we could not use it together with langchain.chains.SimpleSequentialChain:

class MyMultiPromptChain(MultiRouteChain):
    """A multi-route chain that uses an LLM router chain to choose amongst prompts."""

    router_chain: RouterChain
    """Chain for deciding a destination chain and the input to it."""
    destination_chains: Mapping[str, Union[LLMChain, SimpleSequentialChain]]
    """Map of name to candidate chains that inputs can be routed to."""
    default_chain: LLMChain
    """Default chain to use when the router does not map the input to a destination."""

    @property
    def output_keys(self) -> List[str]:
        return ["text"]

It then generates all chains (including the default chain) and adds them to a list:

def generate_destination_chains():
    """
    Creates a list of LLM chains with different prompt templates.
    Note that some chains are sequential chains which are supposed to generate unit tests.
    """
    prompt_factory = PromptFactory()
    destination_chains = {}
    for p_info in prompt_factory.prompt_infos:
        name = p_info['name']
        prompt_template = p_info['prompt_template']

        chain = LLMChain(
            llm=cfg.llm,
            prompt=PromptTemplate(template=prompt_template, input_variables=['input']),
            output_key='text',
            callbacks=[file_ballback_handler]
        )
        if name not in prompt_factory.programmer_test_dict.keys() and name != prompt_factory.word_filler_name:
            destination_chains[name] = chain
        elif name == prompt_factory.word_filler_name:
            transform_chain = TransformChain(
                input_variables=["input"], output_variables=["input"],
                transform=create_transform_func(3), callbacks=[file_ballback_handler]
            )
            destination_chains[name] = SimpleSequentialChain(
                chains=[transform_chain, chain], verbose=True, output_key='text',
                callbacks=[file_ballback_handler]
            )
        else:
            # Normal chain is used to generate code
            # Additional chain to generate unit tests
            template = prompt_factory.programmer_test_dict[name]
            prompt_template = PromptTemplate(input_variables=["input"], template=template)
            test_chain = LLMChain(llm=cfg.llm, prompt=prompt_template, output_key='text',
                                  callbacks=[file_ballback_handler])
            destination_chains[name] = SimpleSequentialChain(
                chains=[chain, test_chain], verbose=True, output_key='text',
                callbacks=[file_ballback_handler]
            )

    default_chain = ConversationChain(llm=cfg.llm, output_key="text")
    return prompt_factory.prompt_infos, destination_chains, default_chain
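The word-filler route above relies on create_transform_func(3), whose definition lives in the repository. As a plausible sketch, assuming it removes every third word and marks the gap, it could look like this; the dict-in, dict-out shape mirrors the TransformChain signature used above (input_variables=["input"], output_variables=["input"]), while the gap marker and internals are our own guesses.

```python
def create_transform_func(n: int):
    """Return a transform function for TransformChain that removes
    every n-th word from the input and replaces it with a gap marker
    (hypothetical sketch; the repository's version may differ)."""
    def transform(inputs: dict) -> dict:
        words = inputs["input"].split()
        gapped = [
            "___" if (i + 1) % n == 0 else word
            for i, word in enumerate(words)
        ]
        return {"input": " ".join(gapped)}
    return transform
```

The downstream LLMChain then receives the gapped text and is prompted to fill the blanks back in.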

It sets up the router chain:

def generate_router_chain(prompt_infos, destination_chains, default_chain):
    """
    Generates the router chain from the prompt infos.
    :param prompt_infos The prompt infos generated above
    :param destination_chains The LLM chains with different prompt templates
    :param default_chain A default chain
    """
    destinations = [f"{p['name']}: {p['description']}" for p in prompt_infos]
    destinations_str = '\n'.join(destinations)
    router_template = MULTI_PROMPT_ROUTER_TEMPLATE.format(destinations=destinations_str)
    router_prompt = PromptTemplate(
        template=router_template,
        input_variables=['input'],
        output_parser=RouterOutputParser()
    )
    router_chain = LLMRouterChain.from_llm(cfg.llm, router_prompt)
    multi_route_chain = MyMultiPromptChain(
        router_chain=router_chain,
        destination_chains=destination_chains,
        default_chain=default_chain,
        verbose=True,
        callbacks=[file_ballback_handler]
    )
    return multi_route_chain

Finally, it contains a main method which allows the user to interact with the chains:

if __name__ == "__main__":
    # Put here your API key or define it in your environment
    # os.environ["OPENAI_API_KEY"] = ''

    prompt_infos, destination_chains, default_chain = generate_destination_chains()
    chain = generate_router_chain(prompt_infos, destination_chains, default_chain)
    with open('conversation.log', 'w') as f:
        while True:
            question = prompt(
                HTML("Type your question ('q' to exit, 's' to save to html file): ")
            )
            if question == 'q':
                break
            if question in ['s', 'w']:
                file_ballback_handler.create_html()
                continue
            result = chain.run(question)
            f.write(f"Q: {question}\n\n")
            f.write(f"A: {result}")
            f.write("\n\n======================================================================\n\n")
            print(result)
            print()

Final Thoughts

LangChain allows the creation of truly complex interaction flows with LLMs.

However, setting up the workflow was a bit more complicated than anticipated, because langchain.chains.router.MultiPromptChain does not seem to get along well with langchain.chains.SimpleSequentialChain. We therefore needed to create a custom class to build the complex flow.


Gil Fernandes, Onepoint Consulting
