This repository was archived by the owner on Mar 19, 2024. It is now read-only.

How to improve a supervised model by giving feedback to the model during prediction #818

Open
@prabhatM

Description


FastText does a wonderful job predicting labels, but it's not always right. Is there any way to tell the model, through a CLI interface, whether a prediction is right or wrong, and if wrong, supply the correct label for it?

I might be suggesting adding a layer of "reinforcement learning" to improve the results.

It would help with classifying highly technical, domain-specific data.

In a highly technical domain, the texts "xxxxx WITH xxxxxxxx" and "xxxxx WITHOUT xxxxxxxx" have completely different meanings, and we would like to retrieve the correct label for a query.

Some chat bots use this method to improve their models.

Or am I missing something, and fastText already does something of this kind?
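As far as I know, fastText has no built-in online "correct this prediction" mode, so a feedback loop like the one described would have to live outside the model: collect corrections in fastText's training format and periodically retrain. A minimal sketch (the function names here are hypothetical, not part of the fastText API):

```python
# Sketch of an external feedback loop for a fastText supervised model.
# Corrected examples are appended to the training file in fastText's
# "__label__<label> <text>" format; the model is then retrained from
# that file, e.g. with: fasttext supervised -input train.txt -output model

def to_fasttext_line(label: str, text: str) -> str:
    """Format one corrected (label, text) pair as a fastText training line."""
    return f"__label__{label} {text}"

def record_correction(path: str, label: str, text: str) -> None:
    """Append a corrected example to the training file for later retraining."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(to_fasttext_line(label, text) + "\n")
```

This is not reinforcement learning in the strict sense; it is incremental supervised retraining, which is the usual workaround given that fastText trains from a static labeled file.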
