
Conversation

iris-wuu commented Apr 14, 2025

I created three functions in this PR:

1. prompt_template() integrates the #102 prompt engineering into the backend script.
2. code_analysis_with_llm() maintains the chat history by appending each new request/response, then sends the full history to the LLM to get contextual responses.
3. get_ai_response() calls the two functions above. It should be called from the frontend script as the connection between frontend and backend.

Reference: https://platform.openai.com/docs/guides/text?api-mode=responses#page-top
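For context, the flow described above might look roughly like the following sketch. Everything besides the three function names is an assumption: the prompt wording, the message-dict shape, and the injected `call_llm` callable (standing in for an actual OpenAI Responses API call) are all hypothetical, not the PR's actual code.

```python
# Hypothetical sketch of the three functions; the prompt text, history
# format, and `call_llm` parameter are assumptions, not the PR's code.

def prompt_template(code_snippet):
    """Wrap the user's code in the analysis prompt (per #102)."""
    return (
        "You are a coding assistant for Sugar activities. "
        "Analyze the following code and explain any issues:\n\n"
        f"{code_snippet}"
    )

def code_analysis_with_llm(history, user_message, call_llm):
    """Append the new request, send the full history to the LLM,
    then append the response so later turns stay contextual."""
    history.append({"role": "user", "content": user_message})
    reply = call_llm(history)  # e.g. an OpenAI Responses API call
    history.append({"role": "assistant", "content": reply})
    return reply

def get_ai_response(code_snippet, history, call_llm):
    """Frontend entry point: build the prompt, then run one chat turn."""
    prompt = prompt_template(code_snippet)
    return code_analysis_with_llm(history, prompt, call_llm)

# Demo with a stubbed LLM so the sketch runs offline.
history = []
answer = get_ai_response("print('hi')", history, lambda h: "Looks fine.")
```

Passing `call_llm` in as a parameter is just for illustration; it keeps the sketch runnable without an API key and would also make it easy to swap GPT-4 for a local model, which is one of the questions raised below.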

iris-wuu (Author) commented Apr 14, 2025

Hi project mentors! @walterbender @chimosky @quozl

I wrote about the backend's high-level logic in my proposal but didn't include the detailed implementation. I may submit more PRs in the coming days; I hope these can also be taken into account when evaluating my proposal.

I didn't get a chance to discuss the LLM selection when writing my proposal. However, I now see it's quite necessary, since some AI agents in Sugar Labs were implemented with an open-source local LLM and FastAPI.
Do we prefer a local LLM rather than GPT-4?
Do we prefer a full RAG pipeline, or using an LLM alone to generate responses?

If so, I can modify my implementation.
Would love to hear your opinions!

quozl (Contributor) commented Apr 14, 2025

Thanks. I'm not a mentor this year. I've not looked at your proposal, sorry. My opinion is that Sugar activities should work without internet where it is practical to do so.

chimosky (Member) commented Apr 14, 2025

In addition to what Quozl has said, there are other ways to contribute to our software and show us your skills; you can start with existing issues.

iris-wuu (Author) commented

Thank you both for the comments!
