Very slow insert when using a Sequence

Hi,
I made a test: inserting 200 records in a batch operation takes 500-600 milliseconds if I have a GUID as the ID, but 26 seconds if the key is a Sequence.
My test database is PostgreSQL.
Does requesting a new sequence value from the DB take that much time, or am I making a mistake?
Thanks
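
For reference, a minimal sketch of the kind of test loop described above, under my own assumptions: TCustomer is a hypothetical Aurelius-mapped entity and Manager is an already-created TObjectManager; the names are illustrative, not the poster's actual code.

  uses
    System.SysUtils, System.Diagnostics,
    Aurelius.Engine.ObjectManager; // unit names may differ slightly by Aurelius version

  procedure RunInsertBenchmark(Manager: TObjectManager);
  var
    I: Integer;
    SW: TStopwatch;
  begin
    SW := TStopwatch.StartNew;
    for I := 1 to 200 do
      Manager.Save(TCustomer.Create); // TCustomer is a hypothetical mapped entity
    Writeln(Format('200 inserts took %d ms', [SW.ElapsedMilliseconds]));
  end;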

When using batch operations in Aurelius with a GUID, the id is generated client-side and all the 500 records are inserted at once.

When using a Sequence, Aurelius needs to go to the database 500 times to get the next sequence value for the ids of the 500 records, so it's expected to be slower.
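
To make the two cases concrete, here is a hedged mapping sketch following Aurelius attribute conventions (attributes are declared in Aurelius.Mapping.Attributes); TCustomerGuid, TCustomerSeq and SEQ_CUSTOMER are illustrative names, not taken from the original post.

  type
    [Entity, Automapping]
    TCustomerGuid = class
    private
      [Id('FId', TIdGenerator.Guid)] // id generated client-side, no round trip per record
      FId: TGuid;
      FName: string;
    public
      property Id: TGuid read FId;
      property Name: string read FName write FName;
    end;

    [Entity, Automapping]
    [Sequence('SEQ_CUSTOMER')] // illustrative sequence name
    TCustomerSeq = class
    private
      [Id('FId', TIdGenerator.IdentityOrSequence)] // next value is fetched from the database before each insert
      FId: Integer;
      FName: string;
    public
      property Id: Integer read FId;
      property Name: string read FName write FName;
    end;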

Yes, of course, but I didn't expect such a degradation in performance.
I was expecting a few extra milliseconds per record, the time of the DB call. Instead I see a worrying drop in performance with enormous times. I don't know whether other DBs (Firebird, MariaDB, etc.) perform better, but in this case an auto-increment key is practically not recommended.
Otherwise the auto-increment has to be assigned directly by the DB, which Aurelius cannot manage and cannot retrieve until after the insert.
I'm not sure how to handle the problem: I need an auto-increment in addition to the GUID, but this performance is worrying.

It's 1 SQL execution versus 400 SQL executions, so I'd expect a significant difference.

But only code profiling will show exactly where the bottleneck is.

You can also try clearing the manager (Manager.Clear) after each insert to see if performance improves, to isolate any slowness caused by the internal cache.
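
A small sketch of that suggestion, clearing the manager inside the loop so any slowdown from the growing internal cache is isolated (TCustomer is again an illustrative entity):

  for I := 1 to 200 do
  begin
    Manager.Save(TCustomer.Create);
    Manager.Clear; // drop the entities tracked so far, so the internal cache does not grow
  end;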

I think this needs some optimization.
Batch operations are very slow, and not only because of obtaining a sequence value (by the way, Firebird is faster than PostgreSQL at retrieving sequences).
I also think the code is not well optimized for cached-update operations.
For example, the Save function runs inside a managed transaction. That should not be needed for multiple saves of 1000 records: we don't need to Start/Commit 1000 times in cached-update mode, since the DB operations are only executed in ApplyUpdate.
(The same applies to update/delete.)
Or am I wrong?

procedure TObjectManager.Save(Entity: TObject);
var
  ProcessedObjs: TList<TObject>;
  Transaction: IDBTransaction;
begin
  // TODO: Change this for a transactional context
  ProcessedObjs := TList<TObject>.Create;
  try
    Transaction := DoBeginTransaction;
    try
      InternalSave(Entity, Explorer.GetEntityType(Entity), DummyMasterObject, ProcessedObjs);
      DoCommit(Transaction);
    except
      DoRollback(Transaction);
      raise;
    end;
  finally
    ProcessedObjs.Free;
  end;
end;

The solution is to set Manager.UseTransactions := False when starting the batch operation, and Manager.UseTransactions := True when calling ApplyUpdates.
Performance is much better that way.
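
A minimal sketch of that workaround, under a couple of assumptions: the manager exposes the UseTransactions property mentioned above, Connection is the Aurelius IDBConnection used by the manager, and TCustomer is an illustrative entity. One explicit transaction wraps the whole batch instead of one per Save:

  var
    Trans: IDBTransaction;
    I: Integer;
  begin
    Manager.UseTransactions := False;       // stop Save from starting/committing a transaction per record
    try
      Trans := Connection.BeginTransaction; // one explicit transaction for the whole batch
      try
        for I := 1 to 200 do
          Manager.Save(TCustomer.Create);
        Trans.Commit;                       // single commit at the end
      except
        Trans.Rollback;
        raise;
      end;
    finally
      Manager.UseTransactions := True;      // restore the default behavior
    end;
  end;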

Maybe Manager.CachedUpdates := True could automatically set UseTransactions to False, and ApplyUpdates could set it back to True.

Important update:
By carefully adjusting the connection properties (FireDAC) at design time, I obtained excellent performance compared to the initial slow results.
So the data provider's properties must be evaluated carefully; some of them clearly slow down multiple operations a lot.
I went from 20 seconds to 1 second for inserting 200 records in cached-update mode, together with toggling Manager.UseTransactions True/False around the loop.
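
The post does not say which FireDAC properties were changed, so the following is only an illustration of the kind of connection-level setting worth reviewing, not the poster's actual fix. TxOptions.AutoCommit, for instance, controls whether FireDAC commits around each statement when no explicit transaction is active:

  uses
    FireDAC.Comp.Client;

  procedure TuneConnection(AConn: TFDConnection);
  begin
    // Illustration only: a candidate setting to review, not necessarily what the poster changed.
    // With AutoCommit disabled, transactions must be managed explicitly (e.g. by Aurelius).
    AConn.TxOptions.AutoCommit := False;
  end;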


Thank you for the follow-up. Optimization is hard and needs profiling and tweaking.

That would break backward compatibility, so I don't think it's an option.
As happened in your case, users who need it can simply disable it; that's why the option is there in the first place.
