If your model is not following the system prompt, there are a few things you can try:
  • Try rephrasing the prompt: Reword the prompt and emphasize the instructions the model was previously ignoring.
  • Try meta prompting: Meta prompting uses an LLM to generate or refine a prompt that is then fed to another LLM. This often improves how closely the target model follows the instructions (see the sketch after this list).
  • Try using a different model: Switch to another stable model from this list.
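
As a rough illustration of meta prompting, here is a minimal sketch assuming an OpenAI-compatible Python client; the model names and the report placeholder are illustrative assumptions, not recommendations.

```python
# Minimal meta-prompting sketch, assuming an OpenAI-compatible client
# and an API key available in the environment. Model names are placeholders.
from openai import OpenAI

client = OpenAI()

original_prompt = "Summarize the report in three bullet points. Do not exceed 50 words."

# Step 1: ask a "meta" model to rewrite the prompt so the instructions are explicit.
meta_response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any capable model can act as the prompt refiner
    messages=[
        {
            "role": "user",
            "content": (
                "Rewrite the following prompt so its instructions are explicit, "
                "unambiguous, and strongly emphasized. Return only the rewritten prompt.\n\n"
                f"{original_prompt}"
            ),
        }
    ],
)
refined_prompt = meta_response.choices[0].message.content

# Step 2: use the refined prompt as the system prompt for the target model.
final_response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: the model that was not following the prompt
    messages=[
        {"role": "system", "content": refined_prompt},
        {"role": "user", "content": "Here is the report: ..."},  # hypothetical input
    ],
)
print(final_response.choices[0].message.content)
```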

Still having issues?

Shoot us a message on Discord

Feel free to send a message in the #help channel on Discord, and someone from our team will get back to you.