Other database content: data revision control and history

One concern I would have is the on-disk integrity of the tables. Any RDBMS worth its salt would have to pass the ACID test. Modern databases are not structured like IBM's DB2 databases from the 1980s any more. Well, we can all let PMc be our guinea pig ;)
 
Note: If you have a redis database which caches webpages, nobody cares if data is lost.

Note: in PostgreSQL (postgresql.conf),
fsync = on  # flush data to disk for crash safety
and
wal_sync_method = fdatasync  # call fdatasync() at each commit
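
To check what a running server actually uses, these can also be read back over SQL (SHOW and the pg_settings view are standard PostgreSQL):
Code:
SHOW fsync;
SHOW wal_sync_method;

-- or both at once, together with where each value came from:
SELECT name, setting, source
  FROM pg_settings
 WHERE name IN ('fsync', 'wal_sync_method');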

 
Dolt is a tool to do the management/data entry chores on a server it connects to
I didn't find that. In fact I did not find any statement about what it actually is.
Only two somewhat relevant statements:
1. that transactions should be possible, but they are disabled by default, because there seem to be some difficulties...
2. the whole thing is derived from Noms, which seems to be a decentralized, delta-storing (kind of) database.

What also is quite obvious:
* this thing is not a drop-in replacement for a postgresql cluster.
* it will not have OO programming features like postgres
* it will most likely not support PL/pgsql.
* it will likely not have any idea about transaction isolation levels etc.
* it will almost certainly not work with Ruby-on-Rails.

But nevertheless, it is an answer to my question, and it is good to know something like this does exist.
 
* it will likely not have any idea about transaction isolation levels etc.
That one is a red flag for me.

* it will not have OO programming features like postgres
* it will almost certainly not work with Ruby-on-Rails.
Most languages have an API to send UNIX commands and make use of the output. Java and Ruby-on-Rails most certainly have that capacity.
 
Most languages have an API to send UNIX commands and make use of the output. Java and Ruby-on-Rails most certainly have that capacity.
That's not the point. RoR loads the schema from the DB and configures all the columns, variable types and associative relations according to what the database provides. You do not configure table contents in the application; instead they get fetched from the database at startup. So you need a database adapter that understands all the variable types, foreign key constructs etc. and can translate them from Postgres to Ruby and vice versa. This is not just an API for Unix commands.
Sure, one can always write such an adapter. But I don't want to.
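
For a rough idea of what such an adapter has to do at startup, this is the kind of catalog introspection involved (a simplified sketch against information_schema; ActiveRecord's real queries are more involved, and 'regions' is only an example table name):
Code:
-- column names and types the ORM has to map:
SELECT column_name, data_type, is_nullable
  FROM information_schema.columns
 WHERE table_name = 'regions';

-- foreign keys it has to turn into associations:
SELECT tc.constraint_name,
       kcu.column_name,
       ccu.table_name AS referenced_table
  FROM information_schema.table_constraints tc
  JOIN information_schema.key_column_usage kcu USING (constraint_name)
  JOIN information_schema.constraint_column_usage ccu USING (constraint_name)
 WHERE tc.table_name = 'regions'
   AND tc.constraint_type = 'FOREIGN KEY';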

Currently I am testing a tool that logs the history into an extra field in every record via a database trigger on update. The log_data is jsonb, and with a timestamp one can easily "grep" the respective record out of that log. So with a little PL/pgsql to do that "grep", I can swap the current fields with the historical ones in the from clause and do this:
Code:
SELECT "regions".*
  FROM regions as hcur,
       jsonb_populate_record(null::regions, historical(hcur.log_data, 1626288811948)) as regions
WHERE ... whatever ...

And this is fully transparent and can be placed into arbitrarily complex joins generated with ruby/arel.
In effect this runs a select with a timestamp and provides the result as of that time.
(There is still a bunch of things to look into, like how to do soft-deletes, etc.)
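
For illustration, here is a minimal sketch of what such an update trigger and a historical() helper could look like. It assumes log_data is a jsonb array of entries of the form {"ts": <epoch millis>, "row": {...}}; the tool I am actually testing may lay out its log differently:
Code:
-- Append the new row image, with the time it became current, on every UPDATE.
CREATE OR REPLACE FUNCTION log_row_history() RETURNS trigger AS $$
BEGIN
  NEW.log_data := coalesce(OLD.log_data, '[]'::jsonb)
                  || jsonb_build_object(
                       'ts',  (extract(epoch FROM now()) * 1000)::bigint,
                       'row', to_jsonb(NEW) - 'log_data');
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER regions_history
  BEFORE UPDATE ON regions
  FOR EACH ROW EXECUTE FUNCTION log_row_history();

-- historical() picks the row image that was current at the given timestamp:
CREATE OR REPLACE FUNCTION historical(log_data jsonb, at_ms bigint) RETURNS jsonb AS $$
  SELECT e -> 'row'
    FROM jsonb_array_elements(log_data) AS e
   WHERE (e ->> 'ts')::bigint <= at_ms
   ORDER BY (e ->> 'ts')::bigint DESC
   LIMIT 1;
$$ LANGUAGE sql;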
 
In PostgreSQL, to get the value of a certain key in a jsonb object you can use the operator "->". But afterwards an explicit cast to ::NUMERIC may be needed.
PostgreSQL is integrating the best of SQL and NoSQL, I feel. And jsonb has a tremendous number of operators.
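
A quick illustration with an ad-hoc value (no real table needed; the direct jsonb-to-numeric cast in the first column needs PostgreSQL 11 or later):
Code:
SELECT ('{"price": 12.50}'::jsonb  -> 'price')::numeric AS via_arrow,         -- -> returns jsonb, cast needed
       ('{"price": 12.50}'::jsonb ->> 'price')::numeric AS via_double_arrow;  -- ->> returns text, then cast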
 
Currently I am testing a tool that logs the history into an extra field in every record via a database trigger on update. [...]
Just so that you know, adding a column to an existing table messes up the database schema. The way most people get around that is by dedicating a column to a timestamp from the get-go. I would know; I've worked with real Oracle databases in college and at work. Based on those timestamps, you can hunt down where in your table you messed up. If you want to go back to yesterday, no big deal: just look for today's timestamps and issue the DELETE WHERE timestamp=today SQL command. Straight SQL, any tool like SquirrelSQL will do it for you. And if you want to play with different versions of today (take an SQL dump into an offline file, modify it, manage it via git), you can do that, too.
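
As a sketch of that approach (table and column names are invented for illustration, and the syntax is PostgreSQL here rather than Oracle):
Code:
-- A table that carries a change timestamp from day one:
CREATE TABLE regions_log (
    id         bigint      NOT NULL,
    payload    jsonb,
    changed_at timestamptz NOT NULL DEFAULT now()
);

-- "Go back to yesterday" by throwing away everything written today:
DELETE FROM regions_log
 WHERE changed_at >= date_trunc('day', now());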
 