XDataWebClient1.RawInvoke('IWorkoutService.List', [],
  procedure(Response: TXDataClientResponse)
  begin
    XDataWebDataset1.Close;
    XDataWebDataset1.SetJsonData(TJSObject(Response.Result)['value']);
    XDataWebDataset1.Open;
  end
);
//
How can I get the changes from XDataWebDataset as class object so I can send them to XData server via a service?
Does [JsonInclude(TInclusionMode.NonDefault)] only send the properties which are changed?
It only sends a property in JSON if its value differs from the default value for its type: 0 for numeric values, an empty string for strings, False for Booleans. Properties holding such default values are not included in the JSON.
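For illustration, the attribute is applied at the entity class level. This is a hedged sketch: `TWorkout` and its properties are hypothetical names, and the unit containing `JsonInclude` may differ depending on your XData version:

```pascal
uses
  Bcl.Json.Attributes; // assumption: unit name may vary by XData version

type
  [JsonInclude(TInclusionMode.NonDefault)]
  TWorkout = class
  private
    FName: string;
    FReps: Integer;
  public
    property Name: string read FName write FName;
    property Reps: Integer read FReps write FReps;
  end;

// With Reps left at 0 (the default for Integer), the serialized JSON
// would contain only the Name property; Reps would be omitted.
```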
Yes and no.
Yes: such information is not directly available; to know which fields have changed, you would need to implement that tracking yourself somehow.
No: why would you need that?
Correct, there is no information about the changed fields in that event.
No, but you won't need the changed fields. Just send the whole modified object to the server.
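A sketch of what such a server-side service contract could look like, sending the whole object rather than a field-by-field diff. The `TWorkout` entity, the `Update` method name, and the GUID are hypothetical; only the `IWorkoutService` name comes from the snippet at the top of the thread:

```pascal
uses
  XData.Service.Common;

type
  [ServiceContract]
  IWorkoutService = interface(IInvokable)
    ['{A0B1C2D3-0000-0000-0000-000000000000}'] // placeholder GUID
    // Receives the whole modified object; the server decides what to persist.
    procedure Update(Workout: TWorkout);
  end;
```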
Only the modified fields of the table should be updated by the query in the service. By sending only the modified fields (and their values), the update query could be built flexibly.
Also, I would like to prevent unnecessary data traffic.
No, but you won't need the changed fields. Just send the whole modified object to the server.
Does this mean XData server updates all fields (not only the modified)?
The XData Aurelius automatic CRUD endpoints actually publish a PATCH method where you can send only the modified properties in JSON. However, that is not commonly used in REST APIs, and it is not used in either the TAureliusDataset/TXDataClient or the TXDataWebDataset/TXDataWebClient combinations.
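As an illustration only, a PATCH request against an automatic CRUD endpoint would carry just the changed properties in its body. The base path, entity set name, key, and property below are all hypothetical:

```
PATCH /tms/xdata/Workout(1) HTTP/1.1
Content-Type: application/json

{"Reps": 12}
```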
Hi Wagner,
for this attribute, `[JsonInclude(TInclusionMode.NonDefault)]`:
is there a way to get the same effect (only filled properties in the JSON) at runtime?
Sometimes I need the complete JSON, and sometimes, for example in queries, I need only the "essential" JSON for performance, especially when there are many properties but only a few of them are assigned.
(By the way, when will your direct access to Postgres, Firebird, and MySQL be available?)
Thanks in advance!
Direct access would allow us to work with XData at greater response speed, and to set up services that send more data at a time in the case of large tables.
Currently, a service that sends a list of 10,000 records via an XData entity, versus direct SQL with JSON generation, shows a difference of 4-7 seconds.
XData could reduce these times by eliminating the passage through the various dataset providers (FireDAC, UniDAC, etc.) and using direct access.
For Zeos, for example, it is possible to use low-level Zdbc components to feed the entities or create Json directly from the data buffer, without passing through the dataset, as we have already said several times.
Speed is a very important factor, especially in intensive services or in XData applications that must handle a large number of requests. Every optimization is welcome and improves the user's perception. :-)
(Of course we are not talking about paging or other techniques for pulling down a few records at a time :-) )
This has to be profiled; that's the only way to optimize. Your statement is very simplified: from what I understand, the bottleneck seems to be the object serialization to JSON (which doesn't happen with SQL-to-JSON), not direct database access.
I don't want to be picky, but I firmly believe that if you can eliminate a step, you produce a faster result.
I'm referring to the following steps: DB > adapter data buffer > dataset > object list > JSON serialization.
It seems to me that this is the path that Aurelius (Xdata in this case) uses to provide the data.
Since these are SERVER applications, the dataset is the slowest object in the chain and is completely unnecessary there. If we could go DB > direct adapter > object list > JSON serialization, we would certainly gain a lot in the case of many records, while also using less memory.
This is what I expect direct access to do. This is what you can also do with Zeos Low level.
Currently, for queries that return a few thousand records and must be completely downloaded to the client (to look up cities, for example), we use services that run SQL queries and return the data as JSON as quickly as possible, without using the potential of the ORM.
So we use the "traditional" queries converted into JSON for the large selects requested by the client (WEB Core), and the ORM for the CRUD phases: a compromise to obtain good operating speed.
The possibility of having direct access for other DB types as well would, I assume, give us better results in selects, if your direct access is aimed at optimizing performance. :-)
Clearly, we are talking about obtaining ever better results to improve the product.
I'm not against optimization. I agree that if we can optimize something, we should. My point is:
Do you have any actual data to corroborate this claim?
It looks like you think this is the case, but you haven't measured it.
All I'm saying is that those things should be measured to know exactly whether the bottleneck is in DB > adapter data buffer, adapter data buffer > dataset, dataset > object list, or object list > JSON serialization.
Without such measurements, we might spend valuable time optimizing something that won't significantly affect the final result.
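A minimal way to take such measurements in Delphi is to time each stage separately with `TStopwatch`. The stage procedures below (`RunQuery`, `FillDataset`, `BuildObjectList`, `SerializeToJson`) are hypothetical placeholders for your own code:

```pascal
uses
  System.Diagnostics;

procedure ProfileStages;
var
  SW: TStopwatch;
begin
  SW := TStopwatch.StartNew;
  RunQuery;                 // hypothetical: DB > adapter data buffer
  WriteLn('Query:     ', SW.ElapsedMilliseconds, ' ms');

  SW := TStopwatch.StartNew;
  FillDataset;              // hypothetical: adapter data buffer > dataset
  WriteLn('Dataset:   ', SW.ElapsedMilliseconds, ' ms');

  SW := TStopwatch.StartNew;
  BuildObjectList;          // hypothetical: dataset > object list
  WriteLn('Objects:   ', SW.ElapsedMilliseconds, ' ms');

  SW := TStopwatch.StartNew;
  SerializeToJson;          // hypothetical: object list > JSON
  WriteLn('Serialize: ', SW.ElapsedMilliseconds, ' ms');
end;
```

Comparing the four elapsed times shows which stage dominates before any optimization effort is spent.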