XData Server Response

Hi Everybody,

I'm using the TMS XData\Demos\Swagger demo as a base, and I'm building a REST service on top of it.

I put some new memos on the main form to show the latest request, and I call the procedure LogCall in each service implementation.

How can I get the response body and headers of the latest call (for example, to put that latest response body in a memo on the main form)?

Also, is there any method I can use in the XData\Demos\Swagger demo to intercept and modify the response, like Response := StringReplace(Response, '$id', 'dummy', [rfReplaceAll]);


    [Route('activity-info')]
    [HttpGet]
    function GetActivityInfo(activityId: TGUID): T_ActivityInfo;


function TAVSessionService.GetActivityInfo(activityId: TGUID): T_ActivityInfo;
begin
  LogCall;
  raise EXDataHttpException.Create(500, 'not implemented yet');
  Result := T_ActivityInfo.Create; // unreachable until the raise above is removed
end;
procedure LogCall;
begin
  fmain.MainForm.mem_body_request.Clear;
  fmain.MainForm.mem_last_header.Clear;
  fmain.MainForm.mem_user.Clear;
  fmain.MainForm.mem_auth.Clear;

  if add_fake_token then AddFakeAuth;

  DeserializeJWT;

  fmain.MainForm.mem_last_header.Text := TXDataOperationContext.Current.Request.Headers.RawWideHeaders;
  fmain.MainForm.mem_body_request.Text := TEncoding.UTF8.GetString(TXDataOperationContext.Current.Request.Content);
  fmain.MainForm.mem_Info.Lines.Insert(0,FormatDateTime('yyyy.mm.dd hh:nn:ss >',now())+ '[' + TXDataOperationContext.Current.Request.RemoteIp + '] ' + TXDataOperationContext.Current.Request.Method + ' ' +  TXDataOperationContext.Current.Request.RawUri);
end;

Thanks,
Mihai

  1. Do not access visual controls from server code, unless you are sure what you are doing. The server code is mostly executed in background threads, and the VCL is not thread-safe: its controls cannot be modified from threads.

  2. It's not trivial to log or modify the response. XData sends and receives data on the fly via streaming, for best performance and low memory usage, so the response is written directly to the output, and once it's written it's gone. The way to intercept the response is to replace the response's output stream with your own custom stream, intercept the values there, and forward the blocks being written to the original stream. You can see how this is done in the Compress middleware source code, in unit Sparkle.Middleware.Compress.pas.
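To illustrate the forwarding-stream idea, here is a minimal sketch of a capturing stream. This is not the actual Compress middleware code; TCaptureStream is a hypothetical name, and how you hook it into the response object depends on your Sparkle version (see Sparkle.Middleware.Compress.pas for the real pattern). Note that this only captures the output; to *modify* it (e.g. string-replace), you would buffer everything first and only write to the inner stream at the end.

```pascal
type
  // Hypothetical capturing stream: forwards all writes to the real
  // response stream while keeping a copy for logging/inspection.
  TCaptureStream = class(TStream)
  private
    FInner: TStream;       // original response stream (not owned)
    FCapture: TMemoryStream; // copy of everything written
  public
    constructor Create(AInner: TStream);
    destructor Destroy; override;
    function Write(const Buffer; Count: Longint): Longint; override;
    property Capture: TMemoryStream read FCapture;
  end;

constructor TCaptureStream.Create(AInner: TStream);
begin
  inherited Create;
  FInner := AInner;
  FCapture := TMemoryStream.Create;
end;

destructor TCaptureStream.Destroy;
begin
  FCapture.Free;
  inherited;
end;

function TCaptureStream.Write(const Buffer; Count: Longint): Longint;
begin
  Result := FInner.Write(Buffer, Count); // forward to the real output
  FCapture.WriteBuffer(Buffer, Result);  // keep a copy of what was sent
end;
```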

  3. You can easily log headers using req or reqheaders format string with the Logging middleware:

:req[header]
The given header of the request. If the header is not present, the value will be displayed as an empty string in the log. Example: :req[content-type] might output "text/plain".
:reqheaders
All the request headers in raw format.

https://doc.tmssoftware.com/biz/sparkle/guide/middleware.html#format-string-options
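As a rough wiring sketch (hypothetical; check the linked documentation for the exact TLoggingMiddleware constructor and properties in your Sparkle version), the logging middleware is added to the server module and given a format string built from the options above:

```pascal
uses
  Sparkle.Middleware.Logging; // Sparkle logging middleware unit

// Hypothetical usage sketch, assuming Module is your TXDataServerModule:
// log the HTTP method, the URL and all raw request headers per call.
Module.AddMiddleware(TLoggingMiddleware.Create(
  ':method :url - :reqheaders'));
```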

Hi Wagner,

Thanks a lot for taking the time and for the detailed answer.

Regarding 1.
I'm only using the main form for debug purposes (info on screen).
The final solution will be deployed as a Win32 standalone service only.

Regarding the VCL memos, I managed to solve the random exceptions by adding BeginUpdate and EndUpdate. The exceptions started under stress testing, not under normal one-by-one calls.

MainForm.mem_body_request.Lines.BeginUpdate;
MainForm.mem_body_request.Lines.Text := Request.body;
MainForm.mem_body_request.Lines.EndUpdate;
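Note that BeginUpdate/EndUpdate only suppresses repainting; it does not make the memo thread-safe. A safer sketch (LogBodySafe is a hypothetical helper name; fmain.MainForm.mem_body_request is taken from the code above) is to capture the text in the worker thread and marshal the actual control update to the main thread:

```pascal
uses
  System.Classes; // for TThread.Queue

// Hypothetical helper: call this from server (worker-thread) code.
// The anonymous procedure runs later on the main thread, where it is
// safe to touch VCL controls.
procedure LogBodySafe(const ABody: string);
begin
  TThread.Queue(nil,
    procedure
    begin
      fmain.MainForm.mem_body_request.Lines.Text := ABody;
    end);
end;
```

TThread.Queue is asynchronous, so the request handler is not blocked waiting for the UI; use TThread.Synchronize instead if you need the update to complete before continuing.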

I'm stress testing the API with some .bat files, a small example is:

@echo off
cls
:start
 curl -k -X "GET" "https://192.168.11.60:10002/api/custom-session/organizations-get" -H "accept: application/json"  
goto start

Just FYI: I run the above .bat from 7 different virtual machines, two instances each (2 x 7 = 14 parallel .bat files).
Under stress testing I get about 140-160 responses/second, which is very good for a standalone exe server.

In each API service method I run several queries through a single global var of a custom T_DB class. The query procedure looks like the example below. I tested with all MSSQL DB connectors (DBX, FireDAC, ADO). SLEEP_INTERVAL is 1. SQL_conn is defined inside the T_DB class; I use {$DEFINE DB_FDAC}.

Before I implemented the FLock Boolean mechanism below in the T_DB class, I had about 1 error per 10k API calls.
Now I get zero errors over 600k API calls, but I'm not sure it is bullet-proof (I will do some more stress testing).

Is there any other approach I can use to make sure the code between "//sync start" and "//end sync" below is executed by only one API call at a time?

function T_DB.Query(sql: string; var qr: {$IFDEF DB_FDAC}TFDQuery{$ENDIF}{$IFDEF DB_DBEX}TSQLQuery{$ENDIF}{$IFDEF DB_ADO}TADOQuery{$ENDIF} ): boolean;
begin
  while FLock do
    sleep(SLEEP_INTERVAL);
  FLock := true;
  //sync start
  try
    qr := {$IFDEF DB_FDAC}TFDQuery{$ENDIF}{$IFDEF DB_DBEX}TSQLQuery{$ENDIF}{$IFDEF DB_ADO}TADOQuery{$ENDIF}.Create(nil);
    qr.{$IFDEF DB_FDAC}Connection{$ENDIF}{$IFDEF DB_ADO}Connection{$ENDIF}{$IFDEF DB_DBEX}SQLConnection{$ENDIF} := SQL_conn;
    qr.DisableControls;

    // connected TBD
    Result := false;
    try
      qr.sql.Text := sql;
      qr.Open;
      Result := true;
    except
      // log some error
    end;
  finally
    //end sync
    FLock := false;
  end;
end;

Regarding 2.

I don't want to expose $id and other internal stuff, like the mandatory id, etc.
I want to "string-parse" and replace the response as I need.
Also, that "values" parent in the JSON for TList and TArray is a small problem for me.

The main reason I use XData as a REST server is the Swagger option, which is a must, and I strongly encourage TMS to develop this further.
If there is any other way to get Swagger (without XData), please shoot.

I just populate the Result (in the service implementation) without using any TMS native database mapping.

I will look into how to build a custom middleware.

Regarding 3.
I will test it and get back to you.

Anyway, thanks for the support; the only part I'd still really like an answer on is the FLock mechanism, if you have any ideas.
Thanks again for taking the time.

Thanks,
Mihai

Your FLock solution is not bullet-proof. You must use dedicated thread-synchronization constructs, like TCriticalSection or TMonitor. Nevertheless, I believe the issue you are having is because you are using a global database connection. Create a new database connection for each query and you should be good; it will also perform better than a FLock mechanism, which serializes all requests.
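To make the difference concrete, here is a minimal sketch of the FireDAC variant of T_DB.Query protected by a TCriticalSection instead of the FLock busy-wait (FDBLock is a hypothetical name; create it once at startup and free it at shutdown). Unlike the Boolean flag, Acquire/Release is atomic, so two threads can never both pass the check at the same time:

```pascal
uses
  System.SyncObjs, FireDAC.Comp.Client;

var
  FDBLock: TCriticalSection; // hypothetical: created once, shared by all calls

function T_DB.Query(const SQLText: string; var qr: TFDQuery): Boolean;
begin
  FDBLock.Acquire; // only one thread past this point at a time
  try
    qr := TFDQuery.Create(nil);
    qr.Connection := SQL_conn;
    Result := False;
    try
      qr.SQL.Text := SQLText;
      qr.Open;
      Result := True;
    except
      // log the error
    end;
  finally
    FDBLock.Release; // always released, even on exception
  end;
end;
```

That said, as noted above, a per-request connection (for example a TFDConnection created inside each service call, or FireDAC's built-in connection pooling) removes the need for the lock entirely and lets queries run in parallel instead of being serialized.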