This repository was archived by the owner on Jun 5, 2025. It is now read-only.

Tune context and prompts#200

Merged
ptelang merged 1 commit into main from refine-prompts
Dec 5, 2024

Conversation


@ptelang (Contributor) commented Dec 4, 2024

No description provided.

@ptelang ptelang force-pushed the refine-prompts branch 2 times, most recently from 6ddd05b to c4bfb06 Compare December 4, 2024 18:04
Comment thread on prompts/default.yaml:

    "temperature": 0,
    }

    result = await self.inference_engine.chat(
Contributor:

What do you think about wrapping this in a try/except, logging an error, and returning an empty list? In general, our exception handling is not great (unrelated to this PR, of course), and I was wondering whether it would make sense to mark pipeline steps as critical or nice-to-have and handle exceptions in the pipeline processor, rather than having to handle them in the steps themselves.

That would be outside the scope of this patch; for this one I just wonder about wrapping the chat call in try/except and returning [] in case of an exception.
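The suggested change might look roughly like the sketch below. This is not CodeGate's actual code: `ContextStep`, `fetch_context`, and `FailingEngine` are hypothetical names, and the `chat()` signature is assumed from the quoted snippet.

```python
import asyncio
import logging

logger = logging.getLogger(__name__)


class ContextStep:
    """Minimal stand-in for the pipeline step under review (hypothetical)."""

    def __init__(self, inference_engine):
        self.inference_engine = inference_engine

    async def fetch_context(self, request: dict) -> list:
        # Wrap the chat call so a failing LLM degrades to "no context"
        # instead of breaking the whole pipeline.
        try:
            return await self.inference_engine.chat(request, temperature=0)
        except Exception:
            logger.exception("chat() failed; returning an empty list")
            return []


class FailingEngine:
    """Fake engine that always fails, to exercise the fallback path."""

    async def chat(self, request, **kwargs):
        raise RuntimeError("model unavailable")


step = ContextStep(FailingEngine())
print(asyncio.run(step.fetch_context({"prompt": "hi"})))  # []
```

The step still returns a well-typed value on failure, so downstream consumers never see the exception.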

Contributor:

Oh, and since we were talking on Slack about the performance of the local vs. remote model: just noting here that the local LLM takes anywhere between 1.5 and 4 seconds on my laptop. I will also measure the hosted LLMs for the same task.
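For reference, a per-call latency measurement like the one mentioned above can be taken with a small wall-clock timer; `time_call` is a hypothetical helper, not part of the codebase.

```python
import time


def time_call(fn, *args, **kwargs):
    """Measure wall-clock latency of a single call (simple sketch)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start
    return result, elapsed


# Stand-in for an LLM call; substitute the real inference function.
_, seconds = time_call(lambda: sum(range(1000)))
print(f"{seconds:.3f}s")
```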

Comment thread src/codegate/pipeline/codegate_context_retriever/codegate.py
Comment thread src/codegate/server.py

@jhrozek (Contributor) left a comment


I added some comments but they are non-blocking. I think we should tune the prompt further to avoid the bad links and discuss if we want to inject the security-focused prompt always. I will also file an issue to think about handling exceptions in pipeline steps.
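The "handle exceptions in the pipeline processor" idea could be sketched as below. All names (`Step`, `process`, the `critical` flag) are hypothetical, illustrating the critical vs. nice-to-have distinction discussed in the review, not CodeGate's actual pipeline API.

```python
import logging
from dataclasses import dataclass
from typing import Callable

logger = logging.getLogger(__name__)


@dataclass
class Step:
    name: str
    run: Callable[[dict], dict]
    critical: bool = False  # critical steps abort the pipeline on failure


def process(context: dict, steps: list[Step]) -> dict:
    """Run steps in order; non-critical failures are logged and skipped."""
    for step in steps:
        try:
            context = step.run(context)
        except Exception:
            if step.critical:
                logger.exception("critical step %s failed; aborting", step.name)
                raise
            logger.exception("step %s failed; skipping", step.name)
    return context


def flaky(ctx):
    raise RuntimeError("boom")


result = process({"n": 1}, [
    Step("enrich", lambda c: {**c, "n": c["n"] + 1}),
    Step("flaky", flaky, critical=False),
])
print(result)  # {'n': 2}
```

With this shape, individual steps stay free of boilerplate try/except blocks, and the criticality policy lives in one place.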

