The following figure shows a scenario, and I wonder if it is doable (or meaningful) with RemoteDB? Or any advice for doing this differently (like using Echo?)
- The Client uses Aurelius and has NO knowledge of Database Server 1 or Database Server 2
- The RemoteDB server is "hooked up" with two Database Servers, 1 and 2. Database Servers 1 and 2 are different: one may be NexusDB, the other may be PostgreSQL, but they share the same schema
- Any READ from the Client will be performed on Database Server 1 ONLY. READ frequency is low, and the data volume per READ is also low
- Any WRITE from the Client will be performed on both Database Server 1 and Database Server 2. Note that WRITE frequency is HIGH, and the data volume involved is also very HIGH: thousands of records every second
So basically, Database Server 1 and Database Server 2 are, in a sense, backing up (replicating) each other.
In RemoteDB, each database must be served by a different server (a different TRemoteDBServer component, if you are using that), so each database is accessible from a different URL.
Thus, you will have one URL for your READ database and another URL for your WRITE database, and it's up to your client to decide to which database it connects.
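The routing decision the client has to make can be sketched like this (a language-agnostic Python sketch, since the idea applies regardless of client language; the URLs and the RoutingConnection class are illustrative placeholders, not part of the RemoteDB or Aurelius API):

```python
# Sketch of client-side routing between two RemoteDB endpoints.
# Both URLs below are hypothetical examples.

READ_URL = "http://myserver:2001/tms/remotedb"   # READ database (Server 1)
WRITE_URL = "http://myserver:2002/tms/remotedb"  # WRITE database (Server 2)


class RoutingConnection:
    """Picks the READ or WRITE endpoint for each statement."""

    def __init__(self, read_url: str, write_url: str):
        self.read_url = read_url
        self.write_url = write_url

    def url_for(self, sql: str) -> str:
        # SELECTs go to the READ database; everything else goes
        # to the WRITE database, which is then replicated.
        is_read = sql.lstrip().upper().startswith("SELECT")
        return self.read_url if is_read else self.write_url


conn = RoutingConnection(READ_URL, WRITE_URL)
print(conn.url_for("SELECT * FROM CUSTOMER"))        # READ endpoint
print(conn.url_for("INSERT INTO CUSTOMER VALUES ..."))  # WRITE endpoint
```

In a real Delphi client you would instead keep two configured connections and choose between them in your data-access layer; the sketch only shows where that decision lives.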
The replication from the WRITE to the READ database must also be done "manually", as RemoteDB itself doesn't have such a mechanism.
Now, if you are using Aurelius for 100% of data-modification operations (insert, update, and delete), you can indeed enable TMS Echo on the WRITE database so that all modifications are logged and can then be replicated to the READ database.
Thank you for the heads-up.
For replication using Echo, can it handle high-volume, real-time data replication? My master database is being written to at thousands of records per second, in real time.
The replication per se is "asynchronous". All it does when a record is inserted is log that insertion in a table. The process of "sending" the data to the other database and inserting (loading) it there is done separately, and must be launched manually by you.
A replication system like Echo adds significant overhead, because for each insert it has to:
- Retrieve and serialize all the field values being inserted/updated.
- Log (insert) those values in a separate table for later routing.
- Route (prepare) those modifications for a different node (another insert, in another table).
- "Send" the routed modifications to the destination node and flag those records as "sent".
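The four steps above can be sketched as a minimal change-log pipeline (Python, for illustration only; the table layouts, names, and functions are my own placeholders, not Echo's actual control-table schema):

```python
import json

# Minimal sketch of a change-log replication pipeline, assuming a
# hypothetical two-table layout: one "log" table and one per-node
# "routing" table. Echo's real control tables differ.

change_log = []   # stands in for the change-log table
routed = []       # stands in for the per-node routing table


def log_insert(table: str, values: dict) -> None:
    # Steps 1-2: serialize the field values and log (insert) them.
    change_log.append({"table": table,
                       "data": json.dumps(values),
                       "routed": False})


def route(node: str) -> None:
    # Step 3: prepare each logged change for a destination node
    # (another insert, in another control table).
    for entry in change_log:
        if not entry["routed"]:
            routed.append({"node": node,
                           "table": entry["table"],
                           "payload": entry["data"],
                           "sent": False})
            entry["routed"] = True


def send(apply_fn) -> None:
    # Step 4: deliver routed changes to the destination database
    # and flag those records as "sent".
    for item in routed:
        if not item["sent"]:
            apply_fn(item["table"], json.loads(item["payload"]))
            item["sent"] = True


# One application INSERT triggers several control-table operations:
log_insert("CUSTOMER", {"ID": 1, "NAME": "Alice"})
route("read-node")
destination = []
send(lambda table, row: destination.append((table, row)))
print(destination)  # [('CUSTOMER', {'ID': 1, 'NAME': 'Alice'})]
```

The point of the sketch is the amplification: one application-level INSERT produced one log insert, one routing insert, and two flag updates, which is why throughput must be measured under your real load.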
As you can see, a simple INSERT in your application table results in several INSERTs and UPDATEs in the Echo control tables. When you ask "can it handle it", I would say yes, it can handle anything; but it's up to you to check and test whether the scenario is feasible for you, considering the overhead. I'd suggest you try it and measure it: that's the only way to check whether it will fit your needs.