Hi, I'll start by saying this is not technically a TMS issue, but it is a problem when using RemoteDB with FireDAC. I spent some time tracking it down and found that if my select query returns a number of rows greater than or equal to the rowset size (default 50 in the FireDAC fetch options), e.g. SELECT FIRST 51, then a transaction remains active/uncommitted; however, if I select fewer rows than the rowset size, e.g. SELECT FIRST 49, the transaction commits. We discovered while testing that we lost a large amount of data entry due to a crash (a separate issue): transactions were queued, and until the first transaction was committed or rolled back no other transactions would commit, which caused significant data loss. We obviously cannot go to production with a situation like this, so for now I have changed the fetch options Mode to fmAll. This fetches all records, and the select transaction commits this way. It will impact performance, but at least transactions now commit as they happen. Have you hit this before? Any tips on a better way to deal with it? I can't see the logic in this and wonder if it's a bug.
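For reference, the workaround described above amounts to a one-line change on the FireDAC query's fetch options. A minimal sketch (the component name FDQuery1 and the table name are placeholders, not from the original post):

```pascal
// Force FireDAC to fetch the entire result set up front so the
// SELECT's transaction can complete, instead of leaving the cursor
// open with the default fmOnDemand mode.
FDQuery1.FetchOptions.Mode := fmAll;  // default is fmOnDemand
FDQuery1.Open('select first 51 * from MY_TABLE');  // MY_TABLE is hypothetical
// With fmAll, all rows are fetched on Open and the server-side cursor
// is closed, so the read transaction no longer lingers uncommitted.
```

The trade-off, as noted above, is that large result sets are pulled in full rather than in rowset-sized chunks.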
It seems to be intended logic with Mode fmOnDemand: I can see that the cursor is not closed until there is nothing more to fetch. However, they don't have this quite right, because if you SELECT FIRST 50 with a rowset size of 50, it fetches all 50 rows but does not close the cursor, even though there is nothing more to fetch. It just seems problematic not to commit and use a new transaction for each fetch, so I will use the fmAll option. It would be nice to have an option to commit on each fetch when in fmOnDemand mode; I'm sure this has caught other people out. I guess I will contact Embarcadero about it.
Thanks for sharing this information about FireDAC.
But when RemoteDB is involved, I believe the fetch mode shouldn't have an impact, because RemoteDB fetches all records from the server anyway; the cursor is never kept open server-side.
Yes, you are correct. I checked in the remote service and it does not leave the cursor open like the local connection does. Our application works both locally and remotely using TXDataset, so I always need to test both, as the local and remote behavior often differ, as in this case. This means I have some more investigation to do on the remote side to find out why we lost so much data entry. It was good to deal with the local issue, though.