Before, I was only testing the Pro and Gemma models with random text, and they all worked fine. In my last message, I only tried it with Pro, but since you brought up the Gemma models, I tested them with your text and they really were having issues. Except for gemma-3-12b-it and gemma-3-4b-it, the rest didn't do so well. It’s strange because they were working fine for me with random text and while gaming. The good news is that with your corrected prompt, the Gemma models are working now. If that works for you, I’ll go ahead and update the presets I uploaded with that prompt.
With this:
"Translate the following %source% text to %target%. Pay attention to accuracy and fluency. You are only to handle translation tasks. Provide only the translation of the text. Do not add any annotations. Do not provide explanations. Do not offer interpretations. Correct any OCR mistakes. Text:\n\n%text%"
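In case it helps anyone wiring this up themselves, here's a minimal sketch of how the %source%/%target%/%text% placeholders in that template might get filled in before the prompt is sent. The `fill_prompt` helper and the sample language values are just illustrative, not the app's actual implementation:

```python
# Minimal sketch: filling the %source%/%target%/%text% placeholders
# in the translation prompt. fill_prompt and the sample values below
# are illustrative only, not how the app actually does it.
PROMPT_TEMPLATE = (
    "Translate the following %source% text to %target%. "
    "Pay attention to accuracy and fluency. You are only to handle "
    "translation tasks. Provide only the translation of the text. "
    "Do not add any annotations. Do not provide explanations. "
    "Do not offer interpretations. Correct any OCR mistakes. "
    "Text:\n\n%text%"
)

def fill_prompt(template: str, source: str, target: str, text: str) -> str:
    """Substitute the three placeholders; plain str.replace is enough
    because the %...% markers don't overlap or repeat ambiguously."""
    return (template
            .replace("%source%", source)
            .replace("%target%", target)
            .replace("%text%", text))

# Example: translating Japanese OCR text to English.
prompt = fill_prompt(PROMPT_TEMPLATE, "Japanese", "English", "こんにちは")
```

The filled-in string is then what gets sent to the model as-is, with no extra system instructions.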
About this prompt: should we stick with it, or think of something else? The Gemma models didn't have any issues once I switched to this. Regarding speed, gemma-3-12b-it is a bit slow for me, but still acceptable; the other Gemma models are fast. The Pro version is also pretty quick and doesn't take long to process. It's only a few seconds slower than DeepL or your average custom API, even though we can't set the thinkingBudget to 0 anymore since they made those changes. At least, that's been my experience so far.