Performance optimization: fetching projected Criteria results

I've tested fetching 19,000 records with 147 projections.
I'm using an Aurelius dataset with PageSize 2000.
Aurelius is connected to a Postgres database via RemoteDB (with FireDAC on the server side).

This took up to 230 seconds, which I think is quite slow.
Taking a look at how you create the CriteriaResultList, I just changed
this:

function TDriverResultSetAdapter<T>.GetFieldValue(FieldName: string): Variant;
var
  hField: TField;
begin
  hField := FDataset.FieldByName(FieldName);
  if hField.IsNull then
    Result := Variants.Null
  else
    Result := hField.Value;
// old
//  Result := GetFieldValue(GetFieldIndex(FieldName));
end;
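As a further step, the lookup could be avoided entirely for repeated reads. This is only a sketch, not the actual Aurelius code: it assumes the result set structure does not change between rows, and FFieldCache (a TDictionary<string, TField>) is a hypothetical field added for illustration.

```pascal
// Hypothetical sketch: cache each TField on first use so FieldByName
// (which scans the field list by name) runs once per column, not once per row.
function TDriverResultSetAdapter<T>.GetFieldValue(FieldName: string): Variant;
var
  hField: TField;
begin
  if not FFieldCache.TryGetValue(FieldName, hField) then
  begin
    hField := FDataset.FieldByName(FieldName);
    FFieldCache.Add(FieldName, hField); // FFieldCache: TDictionary<string, TField>, hypothetical
  end;
  if hField.IsNull then
    Result := Variants.Null
  else
    Result := hField.Value;
end;
```

The cache would have to be cleared whenever the dataset is reopened with a different field layout.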

By fetching the field from the dataset's Fields only once, it "only" took 215 seconds.

Implementing this with our own method, handling it like this:

aDataset.First;
while not aDataset.Eof do
begin
  props := TObjectList<TCriteriaResultProp>.Create(True);
  try
    for i := Low(fieldNames) to High(fieldNames) do
    begin
      if userAliases[i] <> '' then
        propName := userAliases[i]
      else
        propName := fieldNames[i];
      fieldValue := aDataset.FindField(fieldNames[i]).AsVariant; // shortened, because it's faster
      ConvertFieldValue(fieldValue, fieldTypes[i]); // shortened, because it's faster
      props.Add(TCriteriaResultProp.Create(propName, fieldTypes[i], fieldValue));
    end;
  except
    props.Free;
    raise;
  end;

  Result.Add(TCriteriaResult.Create(props));
  aDataset.Next;
end;
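The per-row FindField call in the loop above could also be hoisted out: since the field layout is fixed for the whole result set, the TField references can be resolved once before iterating. A sketch under that assumption, reusing the variable names from the snippet above (fields is a new, hypothetical array):

```pascal
// Sketch: resolve all TField references once, then reuse them for every row,
// so no name lookup happens inside the fetch loop.
SetLength(fields, Length(fieldNames)); // fields: TArray<TField>, hypothetical
for i := Low(fieldNames) to High(fieldNames) do
  fields[i] := aDataset.FindField(fieldNames[i]);

aDataset.First;
while not aDataset.Eof do
begin
  for i := Low(fields) to High(fields) do
    fieldValue := fields[i].AsVariant; // direct field access, no per-row lookup
  aDataset.Next;
end;
```

With 147 projections and 19,000 rows, this replaces roughly 2.8 million name lookups with 147.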

This custom implementation took only 147 seconds.

All times are measured over the completion of our overall data export process.

Some individual timings I took from this process (all with 147 projections):

your code:
fetching 750 records in 1125 ms
building criteria out of the fetched data: 6487 ms

my own method:
fetching 1000 records in 1703 ms
building criteria out of the fetched data: 431 ms

My method uses a TXDataset executing the SQL obtained from the criteria execution.

Can you please send the benchmark project using SQLite, so we can profile and optimize it properly?

I do not have a benchmark project; I measured this in our customer project against Postgres.

There are 19 entities involved, using associations and sub-properties.
The SQL itself needs half a millisecond to execute, so that is not a problem at all.

Even worse: the more variables involved, the higher the number of scenarios that can affect performance.

Sorry, I didn't understand what you mean here.

I know, that's why I wrote out the part you didn't get.
The generated SQL selects 97 fields (including sub-properties) and 40 subselects (SQL projections), joining 19 tables.
So this is a really complex case (the most complex case possible in our project).
