What is the best way to handle schema migrations? #59
Replies: 10 comments 5 replies
Yes, right now what you have described is the case: the only way to handle schema changes is to stop Marmot, drop a snapshot on all nodes, and restart the Marmot process. The major reasons not to implement schema-change propagation are:
Having said that, here are my recommendations:
I've been brainstorming about this topic, and I have questions that I believe the community can answer. It will really help me develop a solution that works well for the community:
Again, I am open to suggestions and contributions on this topic.
@maxpert thanks for the elaborate response! The algorithm you describe involves downtime. Also, if I understand correctly, the migration runs on a single Marmot node, and the distribution to other nodes happens via snapshots, so the other nodes would be required to download the snapshot to be up to date? If that is the case, it would be problematic with bigger databases (100s of GB), and the migration would take a long time. How it could work: maybe there could be a special table that Marmot uses to signal the state of the migration. A rolling update is good, and having some way to see the progress of the roll-out through the cluster would be great.
How migrations are executed in general is a very broad question, since there are so many tools to manage them. I guess the simplest would be to post a SQL file with two attributes, name + timestamp, as the identifier for a particular migration. We could start with forward-only migrations, because supporting reversible migrations does not really work in production. We would also need to store applied migrations in a table, maybe "__marmot_migrations"? Maybe we should collect those ideas in a document (could be a markdown file in the repo), with arguments for why those design decisions were preferred. From that document we could distill the minimal feature set that would solve the issue at hand.
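The tracking-table idea above can be sketched in a few lines. This is only an illustration of the proposal, not anything Marmot implements: the `__marmot_migrations` table name comes from the suggestion in this thread, and `apply_migration` is a hypothetical helper.

```python
import sqlite3
import time

# Table name proposed in this discussion; purely hypothetical.
MIGRATIONS_TABLE = "__marmot_migrations"

def ensure_tracking_table(conn: sqlite3.Connection) -> None:
    conn.execute(
        f"""CREATE TABLE IF NOT EXISTS {MIGRATIONS_TABLE} (
                name TEXT PRIMARY KEY,      -- unique migration name
                applied_at INTEGER NOT NULL -- unix timestamp when applied
            )"""
    )

def apply_migration(conn: sqlite3.Connection, name: str, sql: str) -> bool:
    """Apply a forward-only migration at most once; returns True if it ran."""
    ensure_tracking_table(conn)
    already = conn.execute(
        f"SELECT 1 FROM {MIGRATIONS_TABLE} WHERE name = ?", (name,)
    ).fetchone()
    if already:
        return False  # idempotent: skip migrations that were already applied
    conn.executescript(sql)  # run the migration's DDL/DML
    with conn:  # record the migration so no node applies it twice
        conn.execute(
            f"INSERT INTO {MIGRATIONS_TABLE} (name, applied_at) VALUES (?, ?)",
            (name, int(time.time())),
        )
    return True
```

Because the check-then-apply is keyed on the migration name, every node in the cluster can safely run the same script; nodes that already applied it simply skip it.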
No, we are not restoring snapshots; I am using snapshots as a safety measure. You actually run the migration scripts everywhere.
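The "snapshot as a safety measure, then migrate" flow could look like the sketch below. It uses SQLite's `VACUUM INTO` (available since SQLite 3.27) to take a consistent copy before running the script; the snapshot naming scheme is my own assumption, not Marmot's actual behavior.

```python
import sqlite3
import time

def snapshot_then_migrate(db_path: str, migration_sql: str) -> str:
    """Take a consistent file-level snapshot, then run the migration script.

    Hypothetical sketch: the snapshot path convention is an assumption.
    Returns the snapshot path so a failed migration can be rolled back
    by restoring that file.
    """
    snapshot_path = f"{db_path}.pre-migration.{int(time.time())}"
    conn = sqlite3.connect(db_path)
    try:
        # VACUUM INTO writes a consistent copy of the whole database.
        conn.execute("VACUUM INTO ?", (snapshot_path,))
        conn.executescript(migration_sql)
    finally:
        conn.close()
    return snapshot_path
```

If the migration script fails on some node, the pre-migration copy can be swapped back in before retrying, which matches the "safety measure" role of snapshots described here.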
Excellent, you are essentially converging on the same idea: the migration script is published on NATS and then applied everywhere.
I can introduce an RFC category under Discussions, but in the long run we might have to move to a separate repo, like https://github.com/reactjs/rfcs
There are a couple of projects that implement zero-downtime migrations for Postgres, and also a couple of links with ideas.
Maybe there is something there that we could also apply to SQLite + Marmot.
Interesting. It seems like it enforces a full framework for how migrations should be done. I've been working with a couple of people on how they handle migrations, and so far, for SQLite especially, people had a typical flow of running a script to alter tables. Maybe I can take inspiration from a page or two of pgroll to see how that can be mapped onto SQLite workflows.
Any progress on this?
Currently the triggers are only attached during Marmot's start phase. Individual schema changes (new tables / columns) are not reflected at runtime and require manually applying them to all DBs in the cluster, plus restarting each Marmot node to update the triggers and changelog tables.
Are there any recommendations on how to approach this in a straightforward and maintainable way?
Best,
Roman
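Until schema propagation exists, one workaround for the restart requirement above is to detect schema drift externally and restart Marmot only when the schema actually changed. The sketch below fingerprints the DDL in `sqlite_master`; the idea of excluding Marmot-internal tables by a `__marmot` prefix is my assumption about its naming, not a documented contract.

```python
import hashlib
import sqlite3

def schema_fingerprint(conn: sqlite3.Connection) -> str:
    """Hash the schema DDL so a watcher can detect table/column changes.

    Hypothetical helper: an external process could poll this fingerprint
    and restart Marmot (so it re-attaches its triggers) only on change.
    The '__marmot*' exclusion is an assumed internal-table prefix.
    """
    rows = conn.execute(
        "SELECT type, name, sql FROM sqlite_master "
        "WHERE name NOT GLOB '__marmot*' ORDER BY name"
    ).fetchall()
    return hashlib.sha256(repr(rows).encode()).hexdigest()
```

The fingerprint is stable across polls when nothing changed and differs as soon as a table or column is added, so a cron job or supervisor can use it to decide when the restart-and-reattach-triggers step is actually needed.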