# ask-the-community
Ed Muthiah
08/18/2023, 11:11 PM
Hey folks, I'm trying to follow this:
https://union.ai/blog-post/fine-tuning-insights-lessons-from-experimenting-with-redpajama-large-language-model-on-flyte-slack-data
But this link seems to be broken:
https://docs.flyte.org/projects/cookbook/en/latest/auto/integrations/kubernetes/kfpytorch/pytorch_mnist.html#pytorch-elastic-training-torchrun
Is this the correct link in the docs?
https://docs.flyte.org/projects/cookbook/en/stable/auto_examples/kfpytorch_plugin/index.html
Is there an option without Kubeflow? Maybe Ray?
Samhita Alla
08/21/2023, 9:35 AM
Will fix the link 👍
The plugin uses the Kubeflow training operator underneath to fine-tune the LLM.
You should be able to use Ray as well.