Sending large blobs raises an integer overflow

Trying to send a 950 MB file using XData, I get an integer overflow out of TBclUtils.InternalEncodeBase64.

This occurs because the length of the Base64 output would be 1,329,420,652 characters, so this does not work with strings.

I've already found Large file upload over 2gb - #4 by AndrewSimard.
But it looks like your multipart reader is only available in the Sparkle server?
Is there anything else I could use in the XData client?

Is there something like a multipart writer available?


It looks like there is a way to send blobs as streams, which will send the data in chunks.
(see THttpSysResponse.SendBody)

But doing this, I get a "connection closed" error when trying to fetch my data using Swagger or Postman...

I assume you are sending a TBlob value as a property of an object you are returning? In this case, yes, XData will base64-encode the full value. It's not suitable for big chunks of data, only small ones.

If you want to return big data, just create a service operation that returns a TStream.
The multipart reader can be used in XData; you can also create a service operation that receives a TStream.
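For reference, such a stream-based contract might look like this (a minimal sketch; the interface and method names are illustrative, not taken from this thread):

```pascal
uses
  System.Classes, XData.Service.Common;

type
  [ServiceContract]
  IBackupService = interface(IInvokable)
    ['{8A2C4E6F-0B1D-4F3A-9C5E-7D8B2A4C6E0F}']
    // The returned stream is sent as the raw HTTP response body,
    // so no Base64 encoding and no string-length limits apply.
    function DownloadBackup(const BackupId: string): TStream;
    // The raw HTTP request body is bound to the TStream parameter.
    procedure UploadBackup(Backup: TStream);
  end;
```

Because the stream is the raw message body, the 950 MB file never goes through Base64 encoding at all.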

No, my result was of type TBlob (but in other endpoints they are properties of objects).

My exact use case is creating and restoring Postgres backups.
My client sends a REST command so that the XData server creates a backup of the chosen databases and sends back one file (as a last step, I iterate through all Postgres backup files and put them into a 7z archive).
So I have several endpoints handling these big files. I have changed all of them to TStream now (creating a TMemoryStream in my interfaces).

But now the XData client raises "JSON converter could not be found for Pointer" (in case the TStream is empty).
If the TStream is not created at all, it looks like it works fine.

But these endpoints could not be used from Swagger, could they?

How would CRUD endpoints behave in the case of large TBlob contents? For example, storing big images in the database...

You are probably not declaring the methods correctly. Please provide more info so we can better understand what is happening.

How would CRUD endpoints behave in the case of large TBlob contents?

CRUD endpoints are for records in databases. Do you have huge blob content in your database?

I mean, you have a record where the blob content is 950 MB?

That is a very unusual scenario. I would recommend you create specific service operation endpoints returning TStream.
Probably you are declaring the entity property as TStream instead of TBlob; that doesn't work.

My endpoint has a function returning TStream...
The constructor creates a TMemoryStream,
the destructor frees it.

When this endpoint is called, the stream is copied into FStream of my implementing class.
But maybe no stream is fetched, so nothing is copied, and this leads to the JSON converter exception.

Not now, and I don't want to, but it could happen in the far future; I just want to be prepared for what would happen.

That's what I thought. But it would be nice to prevent Base64-encoding my blob, because the data could get mixed up...
And for sure, Base64 is a lot easier for other developers to implement than chunking...

So maybe you could check whether the blob size is too big for Base64 before converting it, and send a default text like "Blob too large" in this property instead of raising an error.

It's not "your endpoint". It's your entity class. It's wrong.
(Well, I'm assuming that, because you haven't provided any code so far.)

Oh, it was not my result, it was my input JSON...

  IBackupRestore = interface(IInvokable)
    function RestoreBackup(aBackup: TRestoreConfigList): TGUID;
  end;

This is my class:

  TRestoreConfigList = class
  strict private
    FBackupFile: TStream;
    function GetBackupFile: TStream;
  public
    destructor Destroy; override;
    property BackupFile: TStream read GetBackupFile;
  end;

The error occurs if the constructor creates FBackupFile as a TStream, which might then stay empty.
As you can see from my definition, right now FBackupFile is created lazily within GetBackupFile if it is not yet assigned.

This works fine, but I think it should also work if the TStream stays empty.

That is not supported; you can't use TStream for entity properties.
You should declare it like this:

  function RestoreBackup(BackupFile: TStream): TGUID;
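On the client side, calling such an operation through TXDataClient could then look roughly like this (a sketch; the server URL and file name are placeholders):

```pascal
uses
  System.Classes, System.SysUtils, XData.Client;

var
  Client: TXDataClient;
  Backup: TFileStream;
  JobId: TGUID;
begin
  Client := TXDataClient.Create;
  try
    Client.Uri := 'http://localhost:2001/tms/xdata'; // placeholder server URL
    Backup := TFileStream.Create('databases.7z', fmOpenRead);
    try
      // The stream is sent as the raw request body, not as Base64 inside JSON.
      JobId := Client.Service<IBackupRestore>.RestoreBackup(Backup);
    finally
      Backup.Free;
    end;
  finally
    Client.Free;
  end;
end;
```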

TRestoreConfigList is not an entity (if you mean an Aurelius entity corresponding to a database table).

It is just a class holding some more information; I only posted part of it.
As you can guess from its name, there is also a list holding some config information (at least switches for pg_restore for each database).

You can't mix a big binary file with JSON information. That's not even an XData limitation. To receive big files, you need to receive them via a stream, not via a DTO class like TRestoreConfigList.

If you need to send additional information, you have some options, like:

  1. Send such information in query or path parameters.
  2. Process the stream as multipart/form-data and send the metadata as additional parts. It doesn't make much difference compared to option 1 and it's harder.
  3. Use two endpoints: one to send the metadata, save it, and return some id, and then a specific endpoint to upload the raw file associated with that id.
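Option 1 could be sketched like this (parameter names are illustrative; as I understand it, when a POST operation has a TStream parameter, XData binds the raw request body to the stream, while the remaining scalar parameters travel in the URL):

```pascal
  [ServiceContract]
  IBackupRestore = interface(IInvokable)
    ['{3F5A7C9E-2D4B-4E6A-8B0C-1D3E5F7A9B2C}']
    // e.g. POST .../RestoreBackup?DatabaseName=mydb with the 7z file as the body
    [HttpPost]
    function RestoreBackup(const DatabaseName: string; BackupFile: TStream): TGUID;
  end;
```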