I am developing an application that has to take hundreds of files and insert their content into a database.
I developed everything with Aurelius, which lets me easily target different database types. The application seems to work fine until the amount of data to import becomes large. When there is a lot of data to import, the application crashes because the allocated memory grows too high. The problem seems to be the way Aurelius manages the data: even after a Save and a Flush, Aurelius still appears to hold the data internally. Watching a resource monitor, it is very clear that the allocated memory keeps climbing as the procedure proceeds. If the run manages to reach the end, the memory is released and everything goes back to normal, but if it grows too much first, the program crashes.
From what I have read, I understand that after a Flush Aurelius consolidates the operations performed on the DB. Is this enough, or do other steps also have to be taken to free the memory?
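From the documentation it looks to me like the object manager keeps every saved entity in its internal cache until the manager is destroyed or explicitly cleared, so a batched pattern like the sketch below is what I would expect to cap memory. This is only a sketch: `Clear` and the `BatchSize` value are my assumptions based on the docs for `TObjectManager`, and `FConnection` stands in for whatever `IDBConnection` you already have.

```delphi
// Sketch: save in batches and clear the manager between batches.
// Assumes TObjectManager.Clear releases (and destroys) the managed
// objects it holds, as the Aurelius docs describe; adjust to taste.
procedure SaveRowsInBatches(const Rows: TList<TDB_DATAROWS>);
const
  BatchSize = 500; // hypothetical batch size
var
  I: Integer;
  Manager: TObjectManager;
begin
  Manager := TObjectManager.Create(FConnection); // FConnection: your IDBConnection
  try
    for I := 0 to Rows.Count - 1 do
    begin
      Manager.Save(Rows[I]);
      if (I + 1) mod BatchSize = 0 then
      begin
        Manager.Flush; // send the pending SQL to the database
        Manager.Clear; // drop managed objects so memory does not keep growing
      end;
    end;
    Manager.Flush;
    Manager.Clear;
  finally
    Manager.Free;
  end;
end;
```

An alternative with the same effect would be to create a fresh `TObjectManager` per batch and free it after each `Flush`.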
The update part is performed like this: TDB_DATAROWS represents my table, and FDB_Manager is an Aurelius TObjectManager. The main program passes the data for one file to a procedure that runs in a secondary thread; the procedure creates the TDB_DATAROWS object, sets its properties, and stores everything.
As I said before, if the procedure manages to finish, all the data is saved correctly, and even when it crashes, the data processed up to that point is in the database!
// here I fill Data with real data taken from files
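The surviving comment above is from the per-file procedure; a simplified sketch of what it does is below. The property name `Value1` and the thread class name are placeholders, not my real columns or types.

```delphi
// Simplified sketch of the per-file procedure described above.
// Value1 and TImportThread are placeholder names.
procedure TImportThread.StoreFileData(const Lines: TStringList);
var
  Row: TDB_DATAROWS;
  S: string;
begin
  for S in Lines do
  begin
    Row := TDB_DATAROWS.Create;
    // here I fill Data with real data taken from files
    Row.Value1 := S;
    FDB_Manager.Save(Row);
  end;
  FDB_Manager.Flush;
  // Even after Flush, FDB_Manager still owns every Row instance,
  // which (I believe) is why the allocated memory keeps growing.
end;
```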