AI Task Scheduler not working with Gemini?

I tried to get the AI Task Scheduler working with a valid Gemini API key, but I only get a text answer and no tool calls.
Is there anything I need to set up on the Gemini side in order to get the tool calls?

I also tried it with an OpenAI API key, and that works like a charm.
So the question is: why is it not working with Gemini, and what do I need to do to get it working?

Further: I added an Ollama option to the demo and configured it accordingly, and the tool calls do not happen there either, no matter which of the common models available for Ollama I use.

The power of the MCP components is awesome, but for me it is currently not achievable via Gemini or Ollama. Is it working for anybody else here with those two?

I can see tool calls happening with Gemini here.
Did you check any of the other demos with Gemini tool calls to see if these are working?
Is it Gemini 2.5 that you are trying to use?

Hi Bruno.
Thanks. I checked the model: the demo sets it to gemini-1.5-latest. After switching it to gemini-2.5-flash (following "Function calling with the Gemini API | Google AI for Developers"), function calling is working.

It might be best to set `Settings.GeminiModel` to 'gemini-2.5-flash' in the demo right away.

Thank you so much for your swift support and these awesome AI tools you provide.

Regarding the use of Ollama: in the meantime I tried different models, and whether tool calling works with Ollama is model-dependent.

I experienced the best results so far with Settings.OllamaModel := 'gpt-oss:20B'

The tool calling is not always 100% OpenAI-compatible, though. It can be improved via more detailed AssistantRole and SystemRole instructions, and it may need to be compensated with exception handling, because the model is not 100% stable in the way it provides the tool function calls. For example, array parameters are sometimes sent by the Ollama LLM as plain string arrays and sometimes as JSON pairs, with a name (containing the expected value) and a boolean true as the value of the pair. So the OnExecute code of the tool may need to parse the Args contents in different ways depending on the structure of the response.
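Since the demo itself is Delphi, the following is only a language-agnostic sketch (in Python) of the kind of defensive normalization an OnExecute handler may need. The function name and the exact payload shapes are assumptions based on the two variants described above: a plain string array versus name/true pairs.

```python
import json


def normalize_string_array(value):
    """Normalize a tool-call array argument that may arrive in different shapes.

    Some models send a plain string array (["a", "b"]); others send
    name-to-true pairs, either as one object ({"a": true, "b": true})
    or as a list of single-pair objects ([{"a": true}, {"b": true}]).
    This returns a flat list of strings in every case.
    """
    # The argument may itself arrive as a JSON-encoded string.
    if isinstance(value, str):
        value = json.loads(value)

    if isinstance(value, dict):
        # {"a": true, "b": true} -> keep the names whose value is true
        return [name for name, flag in value.items() if flag is True]

    result = []
    for item in value:
        if isinstance(item, str):
            # Plain string array element
            result.append(item)
        elif isinstance(item, dict):
            # [{"a": true}, {"b": true}] -> take the pair names
            result.extend(name for name, flag in item.items() if flag is True)
    return result
```

Wrapping the call site in exception handling (e.g. for malformed JSON) would mirror the compensation described above; the key idea is simply to branch on the actual structure before reading the values.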

A further piece of information that might interest people just starting with Ollama: you can find models with tool support via this URL: Tools models · Ollama Search