TTMSFNCCloudAI Tools parameters are either wrong or mixed up on more complex prompts?

Tested on the AiMap demo with the ChatGPT integration.

When you ask:
"Add markers on map for 10 most populated cities of the Normandy department"

The result is random and incorrect – some markers even appear in the sea.
It looks like the coordinates or city parameters are either wrong or mixed up.

However, if you first ask:
"Give me coordinates for the 10 most populated cities of the Normandy department",
then the result is correct.

So the issue seems to be related to how OnExecute handles tool calls generated directly from a complex prompt?
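
For reference, this is essentially the round trip the component performs. Below is a minimal sketch in Python against the OpenAI Responses API (whose output items match the log format shown further down); the addmarker schema is an assumption reconstructed from the logged arguments, not the schema the AiMap demo actually registers:

import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Tool definition: an assumption based on the logged arguments
# {"lon": ..., "lat": ..., "title": ...}, not the demo's real schema.
addmarker = {
    "type": "function",
    "name": "addmarker",
    "description": "Add a marker to the map at the given coordinates.",
    "parameters": {
        "type": "object",
        "properties": {
            "lon": {"type": "number", "description": "Longitude in decimal degrees"},
            "lat": {"type": "number", "description": "Latitude in decimal degrees"},
            "title": {"type": "string", "description": "Marker label"},
        },
        "required": ["lon", "lat", "title"],
    },
}

# One-shot prompt: the model picks the cities and guesses the coordinates
# in a single step, which is where the values drift.
resp = client.responses.create(
    model="gpt-4o",
    input="Add markers on map for 10 most populated cities of the Normandy department",
    tools=[addmarker],
)
for item in resp.output:
    if item.type == "function_call":
        args = json.loads(item.arguments)
        print(item.name, args["title"], args["lat"], args["lon"])

# Two-step variant that gives correct results: first ask for the
# coordinates as plain text, then let the tool calls merely copy them.
coords = client.responses.create(
    model="gpt-4o",
    input="Give me coordinates for the 10 most populated cities of the Normandy department",
).output_text

resp2 = client.responses.create(
    model="gpt-4o",
    input="Add markers on the map for these cities:\n" + coords,
    tools=[addmarker],
)

Same model, same tool; only the prompt structure differs, yet the first run mislocates markers and the second does not.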

I've checked this, and at first sight it looks like an issue with the LLM itself: in the logs, the function calls indeed arrive with parameters that are slightly off.

When we log the response, we see the LLM performing these tool calls:

  "output": [
    {
      "id": "fc_68542986c8b4819b801522320dab46a60b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.370679,\"lat\":49.182863,\"title\":\"Caen\"}",
      "call_id": "call_zChiEwu36RgQykhtWyoCjae8",
      "name": "addmarker"
    },
    {
      "id": "fc_6854298713d4819b94ec417f456b57490b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-1.0901,\"lat\":49.4431,\"title\":\"Cherbourg-en-Cotentin\"}",
      "call_id": "call_6woNearqTvY5pf0iweIoLv9b",
      "name": "addmarker"
    },
    {
      "id": "fc_68542988223c819b8e97d698fc04c03e0b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":0.107929,\"lat\":49.49437,\"title\":\"Le Havre\"}",
      "call_id": "call_PvXFzHziOTduGVtDKbkfSiDZ",
      "name": "addmarker"
    },
    {
      "id": "fc_685429886f0c819bb3349638d125d96b0b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":0.25,\"lat\":49.5,\"title\":\"Rouen\"}",
      "call_id": "call_GFWaPd4jd6MWiBHLwQs7mClQ",
      "name": "addmarker"
    },
    {
      "id": "fc_68542988dae8819bb497a1706ce8bf870b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.4571,\"lat\":49.5333,\"title\":\"\u00c9vreux\"}",
      "call_id": "call_b4crSEQSmvXRIcXLyTM3hQrj",
      "name": "addmarker"
    },
    {
      "id": "fc_685429890da4819ba26ade93c7e9efad0b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.25,\"lat\":49.5,\"title\":\"Dieppe\"}",
      "call_id": "call_uTAj7vtXouXnXcQpoL8iiXZS",
      "name": "addmarker"
    },
    {
      "id": "fc_685429894610819b8b691a5a0c13bfbd0b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.75,\"lat\":49.5,\"title\":\"Lisieux\"}",
      "call_id": "call_CYJoFcDpJS8Na2kk1WFRw0VX",
      "name": "addmarker"
    },
    {
      "id": "fc_685429898b7c819bbb6a3f60f8603b320b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.5,\"lat\":49.5,\"title\":\"Saint-L\u00f4\"}",
      "call_id": "call_y4Z4C7vSHkvX7oYSh0q4vcb9",
      "name": "addmarker"
    },
    {
      "id": "fc_68542989e6e4819bbcd2193504ff22db0b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.75,\"lat\":49.5,\"title\":\"Vernon\"}",
      "call_id": "call_Zjhp1CBTAiCFqCnMDUueAMqN",
      "name": "addmarker"
    },
    {
      "id": "fc_6854298a1ebc819bb1a229faa9cc54500b9a3ecd7fdff343",
      "type": "function_call",
      "status": "completed",
      "arguments": "{\"lon\":-0.5,\"lat\":49.5,\"title\":\"F\u00e9camp\"}",
      "call_id": "call_m4aOYCsUyzEVpcFSXF8IXkEX",
      "name": "addmarker"
    }
  ],

So our component interprets the lon/lat values correctly and places the markers at exactly those coordinates on the map, but the LLM sent wrong values to begin with. Note how the values degrade: Caen and Le Havre are accurate, but six of the ten calls then reuse the rounded latitude 49.5, Vernon repeats Lisieux's coordinates verbatim, and cities that actually lie east of the Greenwich meridian (Dieppe, Fécamp, Lisieux, Vernon, Évreux) are given negative longitudes, which is what puts markers in the sea.
The arguments are taken as-is from the LLM response, and I'm not sure why they drift like this here.
Strangely enough, asking for the markers one by one seems more precise.
So, in a nutshell, I'm not sure why the LLM does this.
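
In the meantime, a client-side guard can at least keep the worst of these off the map. Here is a quick sketch of the idea in Python; in the Delphi demo the equivalent check would go in the OnExecute handler, and both add_marker and the Normandy bounding box below are hypothetical and approximate:

import json

def plausible_marker(lon, lat, seen):
    # Rough bounding box around Normandy (approximate, illustration only).
    if not (-2.0 <= lon <= 1.9 and 48.2 <= lat <= 50.1):
        return False
    # Identical coordinates for two different cities are a strong sign of a
    # hallucinated placeholder: six of the ten logged calls reuse lat 49.5.
    if (lon, lat) in seen:
        return False
    seen.add((lon, lat))
    return True

def add_marker(lon, lat, title):
    # Stand-in for the real map call in the demo.
    print(f"marker: {title} at {lat}, {lon}")

# Three argument payloads taken verbatim from the log above. Caen is fine;
# Vernon repeats Lisieux's coordinates and gets rejected.
calls = [
    '{"lon":-0.370679,"lat":49.182863,"title":"Caen"}',
    '{"lon":-0.75,"lat":49.5,"title":"Lisieux"}',
    '{"lon":-0.75,"lat":49.5,"title":"Vernon"}',
]

seen = set()
for raw in calls:
    args = json.loads(raw)
    if plausible_marker(args["lon"], args["lat"], seen):
        add_marker(args["lon"], args["lat"], args["title"])
    else:
        print("rejected suspicious marker:", args["title"])

This does not fix the model's answers, of course, but it makes the failure visible instead of silently plotting markers in the Channel.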