handling tools request

Tool handlers have the signature "function(const Args: array of TValue): TValue", and a tool can have a dozen parameters, half of which may be optional. So I'm scrolling through Args to find which values were actually passed in, and these entries do not appear to have names, so how can one tell which is which? All the demos only ever have a single mandatory parameter, so this question never arises there.

Or are all the parameters always passed in, always in the order they were defined, with NULLs for optional ones that were given no value? And if so, how would one safely check a TValue parameter for NULL?

TValue is generally not a very common type ;-(
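For what it's worth, the RTL's System.Rtti unit does let you test a TValue for "no value"; here is a minimal sketch, assuming an omitted parameter would arrive as TValue.Empty (which is exactly the part I'm unsure about):

```pascal
program TValueCheck;

{$APPTYPE CONSOLE}

uses
  System.Rtti;

var
  V: TValue;
begin
  V := TValue.Empty; // simulates an optional parameter that was not provided
  if V.IsEmpty then
    Writeln('parameter not provided');

  V := TValue.From<string>('hello');
  if V.IsType<string> then
    Writeln('string value: ', V.AsString);
end.
```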

For the moment, I have assumed that all the parameters are always passed in, in the order they were defined, with NULLs for optional ones without a provided value. But my next hurdle is that the server creates 7 tools, yet reports {"error": "Endpoint not found"} for anything I have tried (including /mcp/ping or just /ping). And while the Client demo automatically finds and adds it, it reports 0 tools, so there is some disconnect between the two that I have so far been unable to unravel. What am I missing?

Ok, I figured that it could be edited, but these 3 types of servers need more documentation.

Anyway, when the client sends a request, this is what it logs:

20250817T150809: {"model": "","messages": [ {"role":"user","content":""} ],"tools": [ {"type": } ],"stream": false,"options":{"temperature":0.05,"max_tokens":null}}
20250817T150809: Executing request with url: [http://server:port/api/chat]
20250817T150838: Result from request is {"model":"","created_at":"2025-08-17T05:08:38.3252981Z","message":{"role":"assistant","content":"\u003ctool_call\u003e[{"arguments": ...},"done_reason":"stop","done":true,"total_duration":26526154700,"load_duration":25050903200,"prompt_eval_count":988,"prompt_eval_duration":544895700,"eval_count":49,"eval_duration":924592800}
20250817T150838: Result from request is {"model":"","created_at":"2025-08-17T05:08:38.3252981Z","message":{"role":"assistant","content":"\u003ctool_call\u003e[{"arguments": ... ]"},"done_reason":"stop","done":true,"total_duration":26526154700,"load_duration":25050903200,"prompt_eval_count":988,"prompt_eval_duration":544895700,"eval_count":49,"eval_duration":924592800}

And so it just prints this message back to me in the chat as the answer, and that's it; it never actually tries to call the tool, it would seem. Or, if it does try and fails, it doesn't log any errors.

I must be missing something...

Above was using Granite model under Ollama.

It actually works using Qwen, at least partially; still testing. Qwen asks for tools differently from Granite. But I don't think we have easy control over how that works, so please try a few different types of models under Ollama to see the differences; it would be helpful to be able to run this against any target model.

When you set up the tools, it is important that the tool has a name and that each parameter has a name and a type. The tool description is what the LLM uses to determine that this particular tool is to be called when the prompt leads to it.
The parameter's Required property tells the LLM whether the tool must be called with this particular parameter, or whether it is not strictly needed and the LLM may omit that parameter.

There are several examples that demonstrate this. If your parameter is named "FIRST", is declared as a string parameter, and has Required = true, then you get the value of that parameter via:

if Args.Count > 0 then
  lFirst := Args.GetValue('FIRST');
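Since GetValue returns nil when a pair is absent, an optional parameter can also be read null-safely with the RTL's TryGetValue. A short sketch, assuming Args here is the TJSONObject passed to the TTMSMCPCloudAI handler (the parameter name and default value are illustrative):

```pascal
uses
  System.JSON;

// Sketch: null-safe read of an optional "FIRST" string parameter.
function ReadFirst(Args: TJSONObject): string;
begin
  // TryGetValue returns False when the pair is absent, so an omitted
  // optional parameter simply falls back to a default.
  if not Args.TryGetValue<string>('FIRST', Result) then
    Result := 'default';
end;
```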

Note that the function calling & parameter passing are all sent via HTTP as JSON, so the parameter types that make sense are the types that JSON offers.
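For illustration, a tool definition in the OpenAI function-calling schema looks roughly like this (the tool name and parameter here are made up; the OpenAI function-calling documentation has the authoritative format):

```json
{
  "type": "function",
  "name": "get_weather",
  "description": "Get the current weather for a given location.",
  "parameters": {
    "type": "object",
    "properties": {
      "location": {
        "type": "string",
        "description": "City and country, e.g. Paris, France"
      }
    },
    "required": ["location"]
  }
}
```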

I see nowhere in the communication with the LLM that the LLM passes a function_call object; hence, either your tools were not set up correctly or your prompt doesn't lead to the need to call a tool. The model might require a more descriptive prompt.

Ok, thanks! But so far, it appears that the exact sequence is preserved, so the way I'm now doing it by index seems fine.

This was where it tried calling the tool:

This is apparently how Granite requests tool calls, but it was not recognized as such and was returned as a plain message.

Qwen was better: it requested it in the way that is correctly recognized by your components. But I'm still seeing frequent AVs (access violations) on the client side, maybe because responses can easily take >30 s from either the model or the tools server.

BTW, I'm using SSE; is it any better or worse than the other transports? What are most freeware MCP clients using?

If that is how this model performs the function call, it doesn't use the OpenAI standard.
Our component is based on the OpenAI standard.
At this moment, most MCP servers still use the stdio method.

See:
https://platform.openai.com/docs/guides/function-calling

If it followed the OpenAI standard, the JSON for the tool call would look something like:

[
    {
        "id": "fc_12345xyz",
        "call_id": "call_12345xyz",
        "type": "function_call",
        "name": "get_weather",
        "arguments": "{\"location\":\"Paris, France\"}"
    }
]

TTMSMCPCloudAI tools OnExecute handler is:

procedure(Sender: TObject; Args: TJSONObject; var Result: string)

TTMSMCPServer tools OnExecute handler is:

function(const Args: array of TValue): TValue

In this case, if I build an MCP server to be used by an external MCP client, I need to register the tools in TTMSMCPServer, where Args is only an array of TValue, optional parameters will shift the order, and we can't find a parameter by its name.

For now I can use tools with optional parameters only in TTMSMCPCloudAI.

Hi,

Can you share a snippet of what you want to achieve? If an optional parameter is not used, it won’t be added to the array of TValue.

Regards

Exactly, and if I have 20 optional params and the AI sends only 1, how do I know which one it is if I declare the tools in TTMSMCPServer?

Hi, you'll have to loop through the parameters, or you can use the attributes and decorate everything. This way the MCP server will be created automatically, and you don't have to worry about optional parameters; this is all handled for you. You can look at the attributes demo to see how to achieve this, or read this blog post: https://www.tmssoftware.com/site/blog.asp?post=2397&srsltid=AfmBOopc7xi_6A3In7XBurccuM-z3qz4uA7eYIMKDGt_l5VD5SiUmI0P

Now I have an API where a function can change several optional properties, some of them having a list of values.

Using decoration I would have to break it up into a function for every property with enum values, because the list of values changes.

How can I loop through the parameters? I have only an array of TValue. Can I read a list of the input parameters somewhere?

You don't need to split up the function for it to work using decorations. There is a TTMSMCPOptional attribute that you can assign to a parameter, and the method can have as many parameters with that attribute as you find necessary. For now it is not possible to access the input parameter list when executing the method; this is something we might look into in the future. If every parameter is a different type, you can just use a for loop and check the values to see which one matches a parameter you expect.
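That type-based for loop might be sketched as follows, assuming a tool with at most one string and one integer parameter (the handler name is illustrative):

```pascal
uses
  System.Rtti, System.TypInfo, System.SysUtils;

// Sketch: identifying parameters by type when names are not available.
function MyToolHandler(const Args: array of TValue): TValue;
var
  I, lCount: Integer;
  lName: string;
begin
  lName := '';
  lCount := 0;
  for I := Low(Args) to High(Args) do
  begin
    if Args[I].IsEmpty then
      Continue; // nothing usable in this slot
    case Args[I].Kind of
      tkUString, tkLString, tkWString, tkString:
        lName := Args[I].AsString;   // the (single) string parameter
      tkInteger, tkInt64:
        lCount := Args[I].AsInteger; // the (single) integer parameter
    end;
  end;
  Result := Format('name=%s, count=%d', [lName, lCount]);
end;
```

This obviously breaks down as soon as two parameters share a type, which is why the attribute-based setup is the more robust route.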