Handling tool requests

Tool handlers have the signature "function (const Args: array of TValue): TValue", and a tool can have a dozen parameters, half of which may be optional. So I'm trying to scroll through Args to find which values were actually passed in, but the entries do not appear to have names, so how can one tell which is which? All the demos only ever have a single mandatory parameter, so this question never arises there.

Or are all the parameters always passed in, and always in the same order in which they were defined, with optional ones that were not given a value passed as NULLs? And then how would one safely check a TValue parameter for NULL?

TValue is generally not a very common type ;-(
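
For what it's worth, plain System.Rtti at least lets you test whether a TValue holds anything at all; something along these lines (standard Delphi RTTI, nothing specific to these components):

uses System.Rtti;

procedure InspectArg(const AValue: TValue);
begin
  if AValue.IsEmpty then
    Writeln('no value provided')            // empty / unassigned TValue
  else if AValue.IsType<string> then
    Writeln('string: ' + AValue.AsString)   // typed access once the kind is known
  else
    Writeln('other: ' + AValue.ToString);
end;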

For the moment, I have assumed that "all the parameters are always passed in, in the same order they were defined, with optional ones that have no value passed as NULLs", but my next hurdle is that the server creates 7 tools yet reports {"error": "Endpoint not found"} for anything I have tried (including /mcp/ping or just /ping). And while the Client demo automatically finds and adds it, it reports 0 tools, so there is some disconnect between the two of them that I have not yet been able to unravel. What am I missing?

Ok, I figured out that it could be edited, but these 3 types of servers need more documentation.

Anyway, when the client sends a request, this is what it logs:

20250817T150809: {"model": "","messages": [ {"role":"user","content":""} ],"tools": [ {"type": } ],"stream": false,"options":{"temperature":0.05,"max_tokens":null}}
20250817T150809: Executing request with url: [http://server:port/api/chat]
20250817T150838: Result from request is {"model":"","created_at":"2025-08-17T05:08:38.3252981Z","message":{"role":"assistant","content":"\u003ctool_call\u003e[{"arguments": ...},"done_reason":"stop","done":true,"total_duration":26526154700,"load_duration":25050903200,"prompt_eval_count":988,"prompt_eval_duration":544895700,"eval_count":49,"eval_duration":924592800}
20250817T150838: Result from request is {"model":"","created_at":"2025-08-17T05:08:38.3252981Z","message":{"role":"assistant","content":"\u003ctool_call\u003e[{"arguments": ... ]"},"done_reason":"stop","done":true,"total_duration":26526154700,"load_duration":25050903200,"prompt_eval_count":988,"prompt_eval_duration":544895700,"eval_count":49,"eval_duration":924592800}

And so it just prints this message back to me in the chat as the answer, and that's it. It never actually tries to call the tool, it would seem. Or it fails to log any errors if it does try and the call fails.

I must be missing something...

The above was using the Granite model under Ollama.

It actually works using Qwen, at least partially; I'm still testing... Qwen asks for tools differently from Granite. But I do not think we have easy control over how that works. Please try a few different types of models under Ollama to see the differences; it would be helpful to be able to run this against any target model...

When you set up the tools, it is important that the tool has a name and that each parameter has a name and a type. The tool description is what the LLM uses to determine that this particular tool is to be called when the prompt calls for it.
The parameter's Required property tells the LLM whether the tool must be called with this particular parameter, or whether it is not strictly needed and the LLM may omit that parameter.

There are several examples that demonstrate this. If your parameter is named "FIRST", is declared as a string parameter and has Required = true, then you get the value of that parameter via:

if Args.Count > 0 then
  lFirst := Args.GetValue('FIRST'); // value of the required string parameter FIRST
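
For an optional parameter the pattern could be similar; this is only a sketch, assuming a hypothetical optional string parameter named "SECOND" and assuming GetValue returns an empty TValue when the LLM omitted that parameter:

if Args.Count > 0 then
begin
  lFirst := Args.GetValue('FIRST');    // required, so always expected to be present
  lSecond := Args.GetValue('SECOND');  // optional, the LLM may not have sent it
  if not lSecond.IsEmpty then
    lSecondStr := lSecond.AsString;    // only read it when a value was actually passed
end;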

Note that the function calling & parameter passing is all sent over HTTP in JSON, so the parameter types that make sense are the types that JSON offers.

Nowhere in the communication with the LLM do I see the LLM pass a function_call object; hence, either your tools were not set up correctly or your prompt doesn't lead to the need to call a tool. The model might require a more descriptive prompt.

Ok, thanks! But so far it appears that the exact parameter sequence is preserved, so the way I'm now doing it, by position, seems fine.

This was where it tried calling the tool:

This is apparently how Granite requests tool calls, but it was not recognized as such and was simply returned as a message.

Qwen was better: it requested the call in a way that is correctly recognized by your components. But I'm still seeing frequent AVs on the client side, maybe because responses can easily take more than 30 s from either the model or the tools server.

BTW, I'm using SSE; is it any better or worse than the other transports? What are most freeware MCP clients using?

If that is how this model performs the function call, it doesn't use the OpenAI standard.
Our component is based on the OpenAI standard.
At this moment, most MCP servers still use the stdio method.

See:
https://platform.openai.com/docs/guides/function-calling

If it followed the OpenAI standard, the JSON for the tool call would be something like:

[
    {
        "id": "fc_12345xyz",
        "call_id": "call_12345xyz",
        "type": "function_call",
        "name": "get_weather",
        "arguments": "{\"location\":\"Paris, France\"}"
    }
 ]
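
And the matching tool definition sent with the request would, following that same guide, look roughly like this (get_weather and its location parameter are simply the example used in the OpenAI documentation):

"tools": [
    {
        "type": "function",
        "name": "get_weather",
        "description": "Get the current weather for a given location.",
        "parameters": {
            "type": "object",
            "properties": {
                "location": { "type": "string", "description": "City and country, e.g. Paris, France" }
            },
            "required": ["location"]
        }
    }
]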