Schema Feature Checklist
When adding a new database schema feature (columns, indexes, constraints, FKs, cascade rules, etc.), every component in the pipeline must support it or drift detection will produce false positives and syncs will silently lose data.
The Pipeline
A schema feature flows through 6 systems. All 6 must agree on what exists and how it's formatted.
┌─────────────┐    ┌──────────────┐    ┌────────────────┐
│ 1. Import   │───▶│ 2. Merge     │───▶│ 3. Persist     │
│ (JDBC scan) │    │ (metadata)   │    │ (Hibernate)    │
└─────────────┘    └──────────────┘    └────────────────┘
       │                                       │
       ▼                                       ▼
┌─────────────┐    ┌──────────────┐    ┌────────────────┐
│ 4. Hash     │◀───│ 5. Quick     │    │ 6. Full Drift  │
│ (buildHash) │    │   Check      │    │   Detection    │
└─────────────┘    └──────────────┘    └────────────────┘

1. Schema Import (SchemaImportService)
- What: Reads the live database via JDBC DatabaseMetaData and creates entity objects
- Where: extractColumns(), extractRelationships(), extractIndexes()
- Rule: Must extract the feature from JDBC and populate the entity/DTO
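For illustration, here is a minimal sketch of what JDBC index extraction can look like. Only DatabaseMetaData.getIndexInfo() is standard JDBC API; the class, the IndexDef record, and the method signature are assumptions, not the actual SchemaImportService code.

```java
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical helper, not the project's SchemaImportService.
class IndexExtractionSketch {

    record IndexDef(String name, boolean unique, List<String> columns) {}

    static List<IndexDef> extractIndexes(DatabaseMetaData meta, String schema, String table)
            throws SQLException {
        Map<String, IndexDef> byName = new LinkedHashMap<>();
        // getIndexInfo() is standard JDBC; unique=false returns all indexes,
        // approximate=false asks for exact rather than estimated statistics.
        try (ResultSet rs = meta.getIndexInfo(null, schema, table, false, false)) {
            while (rs.next()) {
                String name = rs.getString("INDEX_NAME");
                String column = rs.getString("COLUMN_NAME");
                if (name == null || column == null) continue; // skip tableIndexStatistic rows
                boolean unique = !rs.getBoolean("NON_UNIQUE");
                byName.computeIfAbsent(name, n -> new IndexDef(n, unique, new ArrayList<>()))
                      .columns().add(column);
            }
        }
        return new ArrayList<>(byName.values());
    }
}
```

Grouping rows by INDEX_NAME is needed because getIndexInfo() returns one row per indexed column, not one row per index.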
2. Merge (SchemaImportService.mergeViewMetadata)
- What: Merges imported data into existing metadata entities (add new, update existing, remove orphaned)
- Where: mergeColumns(), mergeRelationships(), mergeConstraints(), mergeIndexes()
- Rule: Must have a mergeXxx() method that handles add/update/remove. This is the step most often missed.
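A minimal sketch of the add/update/remove contract, assuming hypothetical IndexMetadata entities keyed by name; this is not the project's actual mergeIndexes(), only the shape such a method needs.

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;
import java.util.stream.Collectors;

// Sketch of the add/update/remove contract a mergeXxx() method must honor.
// EntityMetadata, IndexMetadata and their accessors are assumed names.
class MergeSketch {

    void mergeIndexes(EntityMetadata existing, List<IndexMetadata> imported) {
        Map<String, IndexMetadata> importedByName = imported.stream()
                .collect(Collectors.toMap(IndexMetadata::getName, Function.identity()));

        // Remove: drop indexes that no longer exist in the live DB
        // (with orphanRemoval = true this becomes a DELETE on flush).
        existing.getIndexes().removeIf(idx -> !importedByName.containsKey(idx.getName()));

        for (IndexMetadata imp : imported) {
            existing.getIndexes().stream()
                    .filter(e -> e.getName().equals(imp.getName()))
                    .findFirst()
                    .ifPresentOrElse(
                            e -> {                                  // Update: copy fields onto the managed entity
                                e.setUnique(imp.isUnique());
                                e.setColumnNames(imp.getColumnNames());
                            },
                            () -> existing.getIndexes().add(imp));  // Add: brand-new index
        }
    }
}
```

Updating fields on the already-managed entity, instead of replacing it with the imported instance, is what keeps orphanRemoval and Hibernate's dirty checking behaving as intended.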
3. Persist (Hibernate Reactive)
- What: Hibernate cascades persist the entity graph
- Where: EntityMetadata entity with @OneToMany(cascade = ALL, orphanRemoval = true)
- Rule: The entity must have a properly mapped @OneToMany collection with cascade. The ViewRepository.findByWorkspaceWithMetadata() must eagerly fetch the collection via session.fetch().
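A minimal sketch of the kind of mapping this rule implies, using standard Jakarta Persistence annotations; the field names and the IndexMetadata child entity are assumptions, not the real entity definition.

```java
import jakarta.persistence.CascadeType;
import jakarta.persistence.Entity;
import jakarta.persistence.GeneratedValue;
import jakarta.persistence.Id;
import jakarta.persistence.OneToMany;
import java.util.ArrayList;
import java.util.List;

// Sketch only: the cascade + orphanRemoval mapping the pipeline relies on.
// IndexMetadata and the field names are illustrative placeholders.
@Entity
public class EntityMetadata {

    @Id
    @GeneratedValue
    private Long id;

    // cascade = ALL persists/updates/removes children with the parent;
    // orphanRemoval = true deletes children removed from the collection during merge.
    @OneToMany(mappedBy = "entityMetadata", cascade = CascadeType.ALL, orphanRemoval = true)
    private List<IndexMetadata> indexes = new ArrayList<>();

    public List<IndexMetadata> getIndexes() {
        return indexes;
    }
}
```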
4. Hash Computation (SchemaHashService)
- What: Builds a deterministic string from schema structure and hashes it (SHA-256)
- Where: buildHashInput() (from metadata entities) and buildHashInputFromTables() (from live DB via JDBC)
- Critical Rule: Both functions MUST produce identical output for the same schema. This was the source of a major false-positive bug. Common traps:
  - Different fallback values (metadata uses column.getType(), DB used "")
  - Different formatting (metadata JSON-strips to col1col2, DB comma-joins to col1,col2)
  - Different filtering (metadata skips some items, DB includes all)
- Eager Loading: EntityMetadataRepository.findAllByWorkspaceIdWithRelationships() must fetch the collection via Mutiny.fetch() — JOIN FETCH alone doesn't initialize collections on the original entity instances.
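To make the byte-identical requirement concrete, here is a sketch of shared, deterministic formatting that both builders could delegate to: sort by name, use one separator, use one fallback for missing values, then hash. The ColumnView interface and helper names are assumptions; only MessageDigest is standard library.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative: both buildHashInput() and buildHashInputFromTables() must emit
// the exact same string for the same schema; sharing the formatting rules
// (sorting, separators, fallbacks) is what keeps them in agreement.
class HashFormattingSketch {

    interface ColumnView {              // assumed abstraction over metadata and JDBC columns
        String name();
        String type();                  // may be null
        boolean nullable();
    }

    static String formatColumns(List<? extends ColumnView> columns) {
        return columns.stream()
                .sorted(Comparator.comparing(ColumnView::name))              // deterministic order
                .map(c -> c.name()
                        + ":" + (c.type() == null ? "" : c.type())           // same fallback everywhere
                        + ":" + c.nullable())
                .collect(Collectors.joining(","));                           // same separator everywhere
    }

    static String sha256Hex(String input) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            StringBuilder sb = new StringBuilder();
            for (byte b : digest) sb.append(String.format("%02x", b));
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

Having both builders call one shared formatter like this is one way to guarantee they can never drift apart.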
5. Quick Check (SchemaHashService.quickCheck)
- What: Compares stored metadata hash vs freshly computed live DB hash
- Where: quickCheckByOrgAndUuid(), quickCheck()
- Rule: If the hash includes the feature, the quick check automatically detects changes. No code changes needed here — it's driven by the hash functions.
6. Full Drift Detection (SchemaDriftDetectionService)
- What: Detailed comparison of metadata vs live DB, reports specific changes per table
- Where: compareTableColumns() + extractTableInfo()
- Rule: Must extract the feature from JDBC in extractTableInfo(), include it in the TableInfo class, compare it in compareTableColumns(), and add drift info to SchemaDriftResult.TableDriftInfo.
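A sketch of the per-table comparison shape, with an assumed ColumnInfo record and plain result lists rather than the real TableInfo and SchemaDriftResult types; it simply reports added, removed, and changed columns.

```java
import java.util.List;
import java.util.Map;

// Illustrative comparison: diff metadata columns against live-DB columns for
// one table. The types and result lists are assumed shapes, not project code.
class DriftCompareSketch {

    record ColumnInfo(String name, String type, boolean nullable) {}

    static void compareTableColumns(Map<String, ColumnInfo> metadata,
                                    Map<String, ColumnInfo> liveDb,
                                    List<String> added, List<String> removed, List<String> changed) {
        for (String name : liveDb.keySet()) {
            if (!metadata.containsKey(name)) {
                added.add(name);                                   // present in DB, missing in metadata
            } else if (!metadata.get(name).equals(liveDb.get(name))) {
                changed.add(name);                                 // definition differs
            }
        }
        for (String name : metadata.keySet()) {
            if (!liveDb.containsKey(name)) {
                removed.add(name);                                 // dropped from the live DB
            }
        }
    }
}
```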
Coverage Matrix
| Feature | Import | Merge | Persist | Hash (metadata) | Hash (DB) | Quick Check | Full Drift | Frontend Drift Dialog |
|---|---|---|---|---|---|---|---|---|
| Columns (name, type, nullable) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Single-column unique | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Primary keys | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Foreign keys | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| FK cascade (ON DELETE) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| FK cascade (ON UPDATE) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Indexes (all types) | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Multi-column uniques | Yes (via indexes) | Yes (via indexes) | Yes (via indexes) | Yes (via indexes) | Yes (via indexes) | Yes | Yes | Yes |
Lessons Learned
1. Hash functions must be byte-identical
The stored hash is computed from metadata (buildHashInput). The quick check computes from the live DB (buildHashInputFromTables). If they format the same data differently — even slightly — you get a false positive on every check. Test by syncing, then immediately running quick check. If hasDrift: true with no actual changes, the hash functions disagree.
2. Merge is the most commonly missed step
When SchemaImportService extracts new data (e.g., indexes) from JDBC, it adds them to newly created entity objects. But persistEntitiesAndRemoveOrphans replaces these with existing DB entities during merge. If there's no mergeXxx() method for the feature, the imported data is silently discarded.
3. Lazy collections must be explicitly fetched
Hibernate Reactive does not support implicit lazy loading. Every collection used during merge or hash computation must be eagerly loaded:
- findByWorkspaceWithMetadata() — uses session.fetch() per collection per entity
- findAllByWorkspaceIdWithRelationships() — uses Mutiny.fetch() sequentially per entity
- JOIN FETCH in a separate query does NOT initialize the collection on the original entities (different object instances in Hibernate's L1 cache)
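A sketch of what the sequential fetch can look like in a Panache-style repository; Mutiny.fetch() is real Hibernate Reactive API, while the query, entity accessors, and repository context are assumptions.

```java
import io.smallrye.mutiny.Uni;
import java.util.List;
import org.hibernate.reactive.mutiny.Mutiny;

// Assumed to live in a class implementing PanacheRepository<EntityMetadata>.
// getColumns()/getIndexes() are illustrative accessors on the entity.
Uni<List<EntityMetadata>> findAllByWorkspaceIdWithRelationships(Long workspaceId) {
    return find("workspaceId", workspaceId).list()
            .chain(entities -> {
                Uni<Void> chain = Uni.createFrom().voidItem();
                for (EntityMetadata e : entities) {
                    // Fetch sequentially so each collection is initialized on the
                    // original entity instance, not on a copy from another query.
                    chain = chain
                            .chain(v -> Mutiny.fetch(e.getColumns()))
                            .chain(cols -> Mutiny.fetch(e.getIndexes()))
                            .replaceWithVoid();
                }
                return chain.replaceWith(entities);
            });
}
```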
4. No skip/filter divergence
If SchemaImportService skips certain items (e.g., "skip single-column unique indexes"), the hash function must apply the same skip rule, and vice versa. Any disagreement causes false drift. Prefer tracking everything — it's simpler and more correct.
5. Worker thread → Panache = HR000069
After runSubscriptionOn(workerPool), you cannot use Panache.withSession() or Panache.withTransaction(). The Vert.x context carries a stale session reference. Solutions:
- Don't mix JDBC worker threads with Panache in the same chain
- Extract all data to primitives before the worker thread, compute on worker, return a plain value
- If you must persist after worker thread, use PgPool raw SQL (last resort, not recommended)
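A sketch of the second option's pattern; the helper names are invented, and only the Mutiny operators (emitOn, Infrastructure.getDefaultWorkerPool()) are real API. emitOn is used here rather than runSubscriptionOn so that only the downstream, blocking stages move to the worker pool while the reactive lookup stays on the Vert.x context.

```java
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.infrastructure.Infrastructure;

// Illustrative shape: do all Panache/session work before the thread hop,
// pass only a plain value to the worker, and return a plain value back.
// loadConnectionParams() and scanAndHashLiveSchema() are assumed helpers.
Uni<String> computeLiveDbHash(Long workspaceId) {
    return loadConnectionParams(workspaceId)                 // reactive lookup on the event loop
            .emitOn(Infrastructure.getDefaultWorkerPool())   // hop to a worker thread for blocking work
            .map(params -> scanAndHashLiveSchema(params));   // blocking JDBC scan + hashing, no Panache here
}
```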
6. Nested Panache sessions contaminate context
Panache.withSession() inside an existing Panache.withTransaction() (e.g., during sync orchestration) pollutes the Vert.x context. Subsequent operations on worker threads then fail with HR000069. Use the existing session directly instead of opening a new one.
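A before/after sketch of the difference, with an assumed repository and an assumed importSchema() helper; Panache.withTransaction() and Panache.withSession() are real Quarkus Hibernate Reactive Panache API.

```java
import io.quarkus.hibernate.reactive.panache.Panache;
import io.smallrye.mutiny.Uni;

// Anti-pattern: nesting withSession() inside an already-open transaction
// pollutes the Vert.x context and sets up later HR000069 failures.
Uni<Void> syncWorkspaceBad(Long workspaceId) {
    return Panache.withTransaction(() ->
            importSchema(workspaceId)                                      // assumed helper, Uni<EntityMetadata>
                    .chain(meta -> Panache.withSession(                    // don't open a second session
                            () -> entityMetadataRepository.persist(meta)))
                    .replaceWithVoid());
}

// Fix: operate on the ambient session that withTransaction already opened.
Uni<Void> syncWorkspaceGood(Long workspaceId) {
    return Panache.withTransaction(() ->
            importSchema(workspaceId)
                    .chain(meta -> entityMetadataRepository.persist(meta)) // reuses the existing session
                    .replaceWithVoid());
}
```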
Adding a New Schema Feature — Step by Step
1. Entity: Add @OneToMany(cascade = ALL, orphanRemoval = true) to EntityMetadata (or View)
2. Liquibase: Create changeset for the new metadata table
3. Import: Add extractXxx() method to SchemaImportService — reads from JDBC DatabaseMetaData
4. Merge: Add mergeXxx() method to SchemaImportService — called from mergeViewMetadata()
5. Eager Loading: Add session.fetch(view.getXxx()) to ViewRepository.findByWorkspaceWithMetadata()
6. Eager Loading (hash): Add Mutiny.fetch(entity.getXxx()) to EntityMetadataRepository.findAllByWorkspaceIdWithRelationships()
7. Hash (metadata): Add to SchemaHashService.buildHashInput() — iterate sorted, format deterministically
8. Hash (DB): Add to SchemaHashService.buildHashInputFromTables() — extract from JDBC, format identically to step 7
9. Hash (helper class): Add XxxHashInfo class and field in TableHashInfo
10. Drift Detection: Add to SchemaDriftDetectionService:
    - IndexInfo (or equivalent) class in TableInfo
    - Extract from JDBC in extractTableInfo()
    - Compare in compareTableColumns()
    - Add IndexDriftInfo to SchemaDriftResult.TableDriftInfo
11. Drift DTO: Add drift info class to SchemaDriftResult
12. Frontend model: Add to TableModified interface in workspace.models.ts
13. Frontend dialog: Add rendering section in schema-drift-dialog.component.html
14. Test: Sync, then immediately quick-check — must return hasDrift: false