fix: Updated learn\generation\langchain\handbook\01-langchain-prompt-templates.ipynb to Work with LangChain 0.3 #451

Open

wants to merge 9 commits into master

Conversation

Siraj-Aizlewood

Used newer models and the new HuggingFaceEndpoint() class (instead of the now-deprecated HuggingFaceHub() class). Changed the LLM from text-davinci-003 to gpt-3.5-turbo, since the former is deprecated. Also changed the example from prompting the LLM to give joke answers to serious questions into a Markdown-formatting task, as newer LLMs follow instructions too well and the supposedly "bad" responses were not actually bad.

Problem

Describe the purpose of this change. What problem is being solved and why?

Solution

Describe the approach you took. Link to any relevant bugs, issues, docs, or other resources.

Type of Change

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update
  • Infrastructure change (CI configs, etc)
  • Non-code change (docs, etc)
  • None of the above: (explain here)

Test Plan

Describe specific steps for validating this change.

…templates.ipynb to Work with LangChain 0.3


…mpt-templates.ipynb

- LCEL pipeline syntax for the "Which libraries and model providers offer LLMs?" query.
- Simplified dynamic prompting.
- Added StrOutputParser for direct string output.
- Replaced manual LLM invocation with an LCEL chain.