
Does slam-omni still have instruction-following capabilities? I tried modifying the prompt, but it didn't seem to work. #215


Open
ranck626 opened this issue Mar 17, 2025 · 2 comments

@ranck626

I'm not sure if it's a problem with my instruction design.

dataset_config:
    # we put the prompt here, because the hydra override in the shell script supports only a small subset of characters
    prompt: "Conduct a spoken conversation with the user. "
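
For reference, a Hydra-style dotlist override updates a nested config key like `dataset_config.prompt` from the command line, where shell quoting limits which characters survive; this is a minimal sketch of that mechanism in plain Python (the key names mirror the snippet above, not actual SLAM-Omni code):

```python
# Minimal sketch of a Hydra-style dotlist override applied to a nested
# config dict. The config structure is assumed from the YAML snippet above.
def apply_override(cfg: dict, dotlist_entry: str) -> dict:
    """Apply one 'a.b.c=value' override to a nested dict in place."""
    key_path, value = dotlist_entry.split("=", 1)
    *parents, leaf = key_path.split(".")
    node = cfg
    for part in parents:
        node = node.setdefault(part, {})
    node[leaf] = value
    return cfg

cfg = {"dataset_config": {"prompt": "Conduct a spoken conversation with the user. "}}
apply_override(cfg, "dataset_config.prompt=Answer briefly and politely.")
print(cfg["dataset_config"]["prompt"])  # Answer briefly and politely.
```

Because characters like quotes, spaces, and `!` interact badly with shell quoting in such overrides, keeping the prompt in the YAML file (as the comment suggests) is the safer route.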

Thanks for your attention!

@ranck626
Author

> As an AI language model, I don't have personal experiences or emotions.

I didn't expect this kind of response. 😭

@cwx-worst-one
Collaborator

In our fine-tuning of SLAM-Omni, we kept the system prompt fixed, which may have reduced the model's sensitivity to prompt variations. To mitigate such responses, there are two main approaches:

  1. Incorporating SFT data with diverse system prompts.
  2. Applying RLHF to refine the model's responses.

We are actively exploring these directions to improve the system. 💪
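
Approach 1 above can be sketched as pairing each SFT sample with a system prompt drawn from a varied pool, so the model sees many instructions rather than one fixed prompt during fine-tuning (the prompt pool and sample schema here are hypothetical illustrations, not SLAM-Omni internals):

```python
import random

# Hypothetical pool of diverse system prompts for SFT data construction.
SYSTEM_PROMPTS = [
    "Conduct a spoken conversation with the user. ",
    "You are a friendly voice assistant; keep answers short. ",
    "Respond conversationally and ask a follow-up question. ",
]

def build_sft_sample(user_turn: str, assistant_turn: str,
                     rng: random.Random) -> dict:
    """Attach a randomly drawn system prompt to one conversation pair."""
    return {
        "system": rng.choice(SYSTEM_PROMPTS),
        "user": user_turn,
        "assistant": assistant_turn,
    }

rng = random.Random(0)  # seeded for reproducible data builds
sample = build_sft_sample("How are you today?",
                          "Doing great, thanks! How about you?", rng)
```

The idea is simply that prompt diversity at training time is what lets prompt changes at inference time steer the model's behavior.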

@cwx-worst-one cwx-worst-one self-assigned this Mar 17, 2025