# contribute
k
@Byron Hsu I saw your contributions to LangChain. Would love to integrate more of LangChain and Flyte. Do you have ideas on it?
b
@Ketan (kumare3) I think LangChain is just a high-level Python library. It can work with Flyte out of the box. Adding one more layer (like a LangChain plugin) might complicate things IMO. Just creating some examples to demonstrate how Flyte works with LangChain is sufficient.
k
I know, but I think it would help to have orchestration around some of this. I guess you are right though: one could simply write a workflow where the
loader,
chunking,
etc. in LangChain are each done as a task, and the actual call to the LLMs is done in a separate task.
Could you add an example?
❤️
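A minimal sketch of the split described above, assuming flytekit and the pre-0.1 langchain package: loading and chunking run as cached Flyte tasks, and the LLM call runs in a separate task. The file path, chunk sizes, model, and prompt are illustrative only, not anything from this thread.

```python
from typing import List

from flytekit import task, workflow
from langchain.llms import OpenAI
from langchain.text_splitter import RecursiveCharacterTextSplitter


@task(cache=True, cache_version="1.0")
def load_documents(path: str) -> List[str]:
    # A plain file read stands in for a LangChain document loader;
    # returning strings keeps the types simple for Flyte to serialize.
    with open(path) as f:
        return [f.read()]


@task(cache=True, cache_version="1.0")
def chunk_documents(docs: List[str]) -> List[str]:
    splitter = RecursiveCharacterTextSplitter(chunk_size=1000, chunk_overlap=100)
    return [chunk for doc in docs for chunk in splitter.split_text(doc)]


@task
def ask_llm(chunks: List[str], question: str) -> str:
    # The actual LLM call lives in its own task; needs OPENAI_API_KEY in the pod.
    llm = OpenAI(temperature=0)
    context = "\n\n".join(chunks[:5])
    return llm(f"Answer using only this context:\n\n{context}\n\nQuestion: {question}")


@workflow
def qa_workflow(path: str, question: str) -> str:
    docs = load_documents(path=path)
    chunks = chunk_documents(docs=docs)
    return ask_llm(chunks=chunks, question=question)
```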
b
Yeah, that's what I did in FlyteGPT. LangChain is mostly for inference. I think users can do "bulk inference" with LangChain on Flyte, and move on to serving if the result is good.
Yeah, where shall I add it?
Hmm, but all the examples here have their own LangChain plugin 😂
I will play more with LangChain and think deeper about how they can integrate. We could maybe build a plugin in LangChain, or the other way around.
k
What's an Airbyte LangChain plugin, huh?
Maybe it could run a task on Flyte?
Just arbitrary Python code?
Ya, but it all sounds weird; maybe most of the plugins sound weird.
b
I'm thinking if we can build a custom chain consisting of Flyte tasks, which is basically a Flyte workflow. https://python.langchain.com/en/latest/modules/chains/generic/custom_chain.html Users can inherit from or use this chain if they want to run the chain as a Flyte workflow on Flyte.
cc @Niels Bantilan any idea?
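A rough sketch of what that could look like, following the custom-chain pattern from the linked LangChain docs plus flytekit's FlyteRemote. FlyteWorkflowChain is a hypothetical name, and the project/domain/version values and the single question-to-answer signature of the target workflow are assumptions for illustration, not an existing API.

```python
from typing import Dict, List

from flytekit.configuration import Config
from flytekit.remote import FlyteRemote
from langchain.chains.base import Chain


class FlyteWorkflowChain(Chain):
    """Hypothetical chain that runs a registered Flyte workflow on each call."""

    workflow_name: str
    workflow_version: str
    project: str = "flytesnacks"
    domain: str = "development"

    @property
    def input_keys(self) -> List[str]:
        return ["question"]

    @property
    def output_keys(self) -> List[str]:
        return ["answer"]

    def _call(self, inputs: Dict[str, str]) -> Dict[str, str]:
        remote = FlyteRemote(
            Config.auto(), default_project=self.project, default_domain=self.domain
        )
        wf = remote.fetch_workflow(name=self.workflow_name, version=self.workflow_version)
        execution = remote.execute(wf, inputs=inputs, wait=True)
        # "o0" is Flyte's default name for a workflow's first output.
        return {"answer": execution.outputs["o0"]}


# Usage, assuming a workflow that takes a single `question: str` and returns a str
# has already been registered under this name/version:
# chain = FlyteWorkflowChain(workflow_name="example.qa_workflow", workflow_version="v1")
# chain.run(question="What does the document say about caching?")
```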
n
I think a Flyte example (literally Byron's FlyteGPT) is the path of least resistance, if the goal is to get more exposure for Flyte. In terms of a custom chain/deeper integration, I think the promising directions that will provide value to users are:
• the data preprocessing steps (document loading, chunking, indexing) taking advantage of Flyte's caching
• batch inference for use cases that don't have low-latency requirements
Basically, if we can get LangChain abstractions to run on Flyte natively, that would be very interesting. Haven't thought this through, but something like:
chain = FlyteChain(some_langchain_chain, remote=FlyteRemote(...))
chain.run(...)  # the steps in this some_langchain_chain will just run on a Flyte cluster
k
Definitely think
b
FlyteChain would be very cool and ride the LLM hype, but it's not very useful in reality 😂😂
Anyways, I think doing advertisement and promotion is more important 😆
k
Why not useful in reality?
We could use the new Flyte agent framework, and the latencies would be very, very low.
b
What is the agent framework?
k
@Kevin Su
b
I mean the latency to launch the tasks might be longer than running them in a single program.
k
Not every task needs to be launched; with synchronous plugins they can be run directly as a service.
b
Is it like tasks act as a server so they can be reused?
k
Yup
b
But the tasks have to share the same deps if they want to run on the same pod?
k
Ya like an agent
b
That could speed up many workflows a lot
Yeah, if we have agents, LangChain can speed up as well. Say we can cache embeddings.
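To make the embedding-caching point concrete, here is a sketch assuming flytekit and langchain's OpenAIEmbeddings (the cache_version is arbitrary) of how Flyte's built-in caching could let repeated runs skip re-embedding the same chunks:

```python
from typing import List

from flytekit import task
from langchain.embeddings import OpenAIEmbeddings


@task(cache=True, cache_version="1.0")
def embed_chunks(chunks: List[str]) -> List[List[float]]:
    # With identical inputs and the same cache_version, Flyte returns the cached
    # output instead of re-running the task (and re-paying for the embeddings).
    return OpenAIEmbeddings().embed_documents(chunks)
```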
k
Yup
Exactly
😍
And also track everything automatically
And Dan is going to start working on faster propeller evaluations
b
What's that?