If the default model is local, the llama.cpp server doesn't run #28

@limdingwen

Description

This seems to happen because run_server() is only invoked from set_target(), which is never called on startup, so a local default model never gets a running llama.cpp server. Slightly related to #5.
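A minimal sketch of the suspected bug and one possible fix: only run_server() and set_target() come from this issue; the class name, fields, and local-model flag below are assumptions for illustration.

```python
class ModelManager:
    """Hypothetical manager for the current model target."""

    def __init__(self, default_model: str, local: bool):
        self.target = default_model
        self.is_local = local
        self.server_running = False
        # Possible fix: if the default model is local, start the
        # llama.cpp server on startup too, not only inside set_target().
        if self.is_local:
            self.run_server()

    def run_server(self) -> None:
        # Stand-in for actually launching the llama.cpp server process.
        self.server_running = True

    def set_target(self, model: str, local: bool) -> None:
        # Before the fix, this was the only place run_server() was called.
        self.target = model
        self.is_local = local
        if self.is_local:
            self.run_server()


mgr = ModelManager("some-local-model", local=True)
print(mgr.server_running)  # True: server starts without calling set_target()
```

Alternatively, startup could simply call set_target() with the default model so the launch logic lives in one place.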
