Web Core and Tabulator

Alright, here's how the actual connection happens.

In this post I covered how I go about getting data from XData into Web Core in a variety of different formats that might be useful for different purposes. I was initially interested in the FireDAC variation, as it brings with it a list of field definitions that are helpful in recreating, locally within Web Core, a dataset that matches the original dataset created on the XData server. The idea was simply to use that communications mechanism as a means to transport the dataset into the local application.

But another approach, the one I use here with Tabulator, is to get the simpler JSON variant and load that into Tabulator directly. In the post above, you can see how the fields are defined in Tabulator. When you import JSON, any matching fields are linked to Tabulator through this mechanism. As mentioned, it is much more pleasant if your JSON contains an ID column first that provides a simple ordering of the records, though that is not strictly a requirement.
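To make that concrete, here is a minimal sketch of such a table definition, wrapped in an asm block as in the code further down. The "#gridSample" selector is the one from the previous example; the Name column is just a placeholder for whatever fields your JSON actually contains.

  asm
    // Columns whose "field" values match the JSON keys are filled
    // automatically when data is loaded into the table.
    var table = new Tabulator("#gridSample", {
      layout: "fitColumns",
      columns: [
        { title: "ID",   field: "ID",   width: 80 },
        { title: "Name", field: "Name" }
      ]
    });
  end;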

The step to connect the two is then something like this.

procedure TDM1.LoadTabulator(Endpoint: String; TabulatorTable: String);
// Note: LoadTabulator needs to be declared with the [async] attribute for await to work
var
  Client:    TXDataWebClient;
  Response:  TXDataClientResponse;
  TableData: WideString;
begin

  // Assuming Endpoint is your XData endpoint and there is one parameter for StreamFormat
  // as per my previous example, and it is going to return a simple JSON.
  // TabulatorTable name would be something like "#gridSample" from the previous example.

  Client := TXDataWebClient.Create(nil);
  Client.Connection := XDataWebConnection1; // or whatever your XData server connection is called
  Response := await(Client.RawInvokeAsync(Endpoint, ['JSON'])); 

  TableData := string(Response.Result);

  asm
    var table = Tabulator.findTable(TabulatorTable)[0];
    table.replaceData(JSON.parse(TableData));
  end;

  Client.Free;
  PreventCompilerHint(TableData);
end;
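Calling it is then just a matter of passing in the endpoint and the table selector. The endpoint name here is purely an example; substitute whatever your XData service actually defines, and DM1 is assumed to be the data module instance.

  // Hypothetical endpoint name - use your own service operation here
  DM1.LoadTabulator('ISampleService.GetData', '#gridSample');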

That's really all there is to it. Much more can be done in terms of error-checking, or making the LoadTabulator function work with a wider array of endpoints with different parameters and so on, but the gist of it is the same. Maybe this can be improved to skip the JSON > String > JSON translation that happens (see the sketch below), but it works pretty well as-is, even with many thousands of records. I think in the video one of the tables has nearly 3,000 records and there isn't really much of a delay at all.
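One possible way to attempt that, sketched here with the assumption that Response.Result already arrives as a parsed JS array (which depends on how the endpoint returns its data), is to declare a Data: JSValue local (from the JS unit) in place of the TableData string and hand the raw result straight to Tabulator:

  // Untested sketch: skips the String round-trip entirely
  Data := Response.Result;
  asm
    var table = Tabulator.findTable(TabulatorTable)[0];
    table.replaceData(Data); // no JSON.parse needed if Data is already an array
  end;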
