# TODO

Active development tasks and planned improvements.
## Third-Party Integrations & Zapier ✅ Done
Zapier integration is live with triggers (new/updated row with auto-detected timestamp columns), actions (create/update row), and search (find row by field). Webhooks are handled via Zapier's webhook integration for sending row data to external services.
### Future Enhancements (Low Priority)
- Row actions UI — user-defined buttons on rows ("Send Email", "Create Invoice") that trigger a webhook/Zapier zap with the row data
- Native integrations — direct integrations for high-value services (email, Slack) where a tighter UX justifies the maintenance
## Database Change Detection (CDC)

Priority: Medium — currently Zapier triggers use polling; CDC would enable real-time triggers and more

Replace poll-based change detection (`WHERE updated_at > ?`) with database-native change streams for real-time row-level events.
| Database | Mechanism | Library | Latency |
|---|---|---|---|
| MySQL | Binary log (binlog) | Debezium, Canal | Milliseconds |
| PostgreSQL | Logical replication / pgoutput | Debezium, wal2json | Milliseconds |
| PostgreSQL | LISTEN/NOTIFY triggers | Built-in | Milliseconds |
SchemaStack already has the connection details in `workspace_database_config` — the question is whether the user's credentials have replication privileges (the `REPLICATION` role in PostgreSQL, `REPLICATION SLAVE` in MySQL).
Would also power:
- Real-time SSE to end-user SPAs (workspace data changes streamed live)
- Zapier instant triggers (instead of poll-based)
- Dashboard live counters / activity feeds
Recommendation: Start simple with LISTEN/NOTIFY for PostgreSQL users. Offer full CDC via Debezium only if demand warrants the infrastructure.
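As a starting point for the LISTEN/NOTIFY route, a per-table trigger can `pg_notify` a channel with a small payload that the backend parses into a row event. A minimal sketch of the payload handling — the channel name `row_changes`, the `table:op:id` payload format, and the `RowChangeEvent` type are assumptions for illustration, not existing code:

```java
public class RowChangeListener {

    // SQL a migration could install per tracked table (assumed shape):
    //   CREATE OR REPLACE FUNCTION notify_row_change() RETURNS trigger AS $$
    //   BEGIN
    //     PERFORM pg_notify('row_changes',
    //       TG_TABLE_NAME || ':' || TG_OP || ':' || COALESCE(NEW.id, OLD.id));
    //     RETURN NULL;
    //   END; $$ LANGUAGE plpgsql;

    /** A parsed NOTIFY payload: "orders:UPDATE:42" -> (orders, UPDATE, 42). */
    public record RowChangeEvent(String table, String op, String rowId) {}

    /** Parses the "table:op:id" payload emitted by the trigger above. */
    public static RowChangeEvent parse(String payload) {
        String[] parts = payload.split(":", 3);
        if (parts.length != 3) {
            throw new IllegalArgumentException("Unexpected payload: " + payload);
        }
        return new RowChangeEvent(parts[0], parts[1], parts[2]);
    }
}
```

The actual listener side would hold a dedicated JDBC connection, execute `LISTEN row_changes`, and poll the pgjdbc driver's `PGConnection.getNotifications()`, fanning each parsed event out to SSE subscribers and Zapier instant triggers.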
## "Request to Join" Organisation Flow

Priority: Low — invitation flow covers the main use case
Allow users to request membership to an organisation (instead of requiring an invite). Requires:

- `POST /api/organisations/me/request-to-join` endpoint
- Admin approve/reject endpoints
- Notification to org admins on new join requests
- Status tracking (pending, approved, rejected)
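The status tracking amounts to a small state machine; a sketch under the assumption that decisions are final (the `JoinRequest` class and its transition rules are illustrative, not existing code):

```java
public class JoinRequest {

    public enum Status { PENDING, APPROVED, REJECTED }

    private Status status = Status.PENDING;

    public Status status() { return status; }

    /** Only pending requests can be decided, and a decision cannot be changed. */
    public void decide(Status decision) {
        if (status != Status.PENDING) {
            throw new IllegalStateException("Request already " + status);
        }
        if (decision == Status.PENDING) {
            throw new IllegalArgumentException("Decision must be APPROVED or REJECTED");
        }
        status = decision;
    }
}
```

The admin approve/reject endpoints would load the request, call `decide(...)`, and notify the requester; rejected requests stay on record so repeat requests can be rate-limited.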
## Remaining Entity Constraint Validators

Priority: Low — the most common constraint types are implemented

`EntityConstraintValidator.getValidator()` covers field comparisons (`FIELD_LESS_THAN`, `FIELD_EQUALS`, etc.) and either-or constraints (`AT_LEAST_ONE_REQUIRED`, `EXACTLY_ONE_REQUIRED`, `ALL_OR_NONE`). The following types throw `UnsupportedOperationException`:

- `CONDITIONAL_REQUIRED`, `CONDITIONAL_CONSTRAINT`, `CONDITIONAL_RANGE`
- `COMPOSITE_UNIQUE`, `SUM_EQUALS`, `SUM_RANGE`
- `DATE_WITHIN_RANGE`, `DATE_DURATION`
- `CUSTOM_EXPRESSION`
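For illustration, `CONDITIONAL_REQUIRED` ("field B is required when field A has a given value") could look like the following. The `Rule` record and the field-map row representation are assumptions for the sketch, not the actual `EntityConstraintValidator` API:

```java
import java.util.Map;
import java.util.Objects;

public class ConditionalRequiredValidator {

    /** "If triggerField equals triggerValue, then requiredField must be non-null." */
    public record Rule(String triggerField, Object triggerValue, String requiredField) {}

    public static boolean isValid(Rule rule, Map<String, Object> row) {
        boolean triggered = Objects.equals(row.get(rule.triggerField()), rule.triggerValue());
        // When the condition does not fire, the constraint is vacuously satisfied.
        return !triggered || row.get(rule.requiredField()) != null;
    }
}
```

`CONDITIONAL_CONSTRAINT` and `CONDITIONAL_RANGE` generalise the same shape: a trigger predicate plus a constraint applied only when the trigger fires.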
## Error Tracking Integration (Sentry)

Priority: Low — `console.error` logging works for now, but errors are invisible once deployed

Both frontend apps (admin + spread) have a global `ErrorHandlerService` that catches unhandled errors. Currently they log to console only. Integrate Sentry (or similar) to capture errors in production with stack traces, user context, and session replay.
## Guest Token Cleanup Job

Priority: Low — expired/revoked tokens accumulate in `view_guest` but don't affect functionality
Expired and revoked guest tokens remain in the database indefinitely. A scheduled job should periodically purge them.
### Required

- Scheduled task (e.g. daily) to delete `view_guest` rows where `revoked_at IS NOT NULL` or `expires_at < NOW()`
- Optionally retain recently-revoked tokens for a grace period (e.g. 30 days) for audit purposes
- Log the count of purged tokens
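With the optional grace period, the purge predicate could look like this. The 30-day window follows the bullet above; `GuestToken` is a stand-in for a `view_guest` row, not an existing class:

```java
import java.time.Duration;
import java.time.Instant;

public class GuestTokenPurge {

    static final Duration GRACE = Duration.ofDays(30);

    /** Minimal stand-in for a view_guest row; either timestamp may be null. */
    public record GuestToken(Instant revokedAt, Instant expiresAt) {}

    /**
     * Purge tokens whose revocation or expiry is older than the grace period,
     * so recently revoked/expired tokens remain available for audit.
     */
    public static boolean shouldPurge(GuestToken t, Instant now) {
        Instant cutoff = now.minus(GRACE);
        boolean revokedLongAgo = t.revokedAt() != null && t.revokedAt().isBefore(cutoff);
        boolean expiredLongAgo = t.expiresAt() != null && t.expiresAt().isBefore(cutoff);
        return revokedLongAgo || expiredLongAgo;
    }
}
```

In the scheduled task this collapses to a single `DELETE ... WHERE revoked_at < :cutoff OR expires_at < :cutoff`, with the deleted row count logged.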
## Enforce Storage Quota Limits

Priority: Medium — `storage_limit_mb` exists in `subscription_tier` but is never enforced

Storage usage is tracked via `StorageUsageTracker.recordStorageDelta()` for regular file uploads (IMAGE/FILE columns), but nothing blocks uploads when the limit is exceeded. `PlanLimitService` has checks for workspaces, views, and members but no `checkStorageLimit()`.
Note: Bulk export files are temporary (auto-deleted after 24 hours) and intentionally excluded from quota tracking.
### Required

- Add `checkStorageLimit()` to `PlanLimitService`
- Call it before file uploads in `FileUploadService` / `FileResource`
- Compare `UsagePeriod.storageBytes` against `SubscriptionTier.storageLimitMb`
- Return 409 with an upgrade prompt when the limit is exceeded
- Consider: should storage tracking only count platform S3 (not customer-owned buckets)?
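The check itself is mostly a unit conversion plus a comparison; a sketch, where the method signature and exception type are assumptions about how `PlanLimitService` might express this:

```java
public class StorageLimitCheck {

    /** Thrown when an upload would exceed the tier limit; would map to HTTP 409. */
    public static class StorageLimitExceededException extends RuntimeException {
        public StorageLimitExceededException(String msg) { super(msg); }
    }

    /**
     * @param usedBytes   current UsagePeriod.storageBytes
     * @param uploadBytes size of the incoming file
     * @param limitMb     SubscriptionTier.storageLimitMb (null = unlimited)
     */
    public static void checkStorageLimit(long usedBytes, long uploadBytes, Long limitMb) {
        if (limitMb == null) return;
        long limitBytes = limitMb * 1024L * 1024L;   // MB -> bytes
        if (usedBytes + uploadBytes > limitBytes) {
            throw new StorageLimitExceededException(
                "Storage limit of " + limitMb + " MB exceeded; upgrade your plan");
        }
    }
}
```

Checking `usedBytes + uploadBytes` (rather than `usedBytes` alone) rejects the upload that would cross the limit instead of the one after it.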
## In-App AI Chat ✅ Done
AI chat panel is live in the spread app, powered by the Anthropic API with the existing MCP tools. Context-aware conversations per workspace with streaming responses via SSE.
## S3 Export Cleanup

Priority: Low — exports auto-expire via `expiresAt` in the DB, but S3 objects linger

Export files in `s3://schemastack-prod/exports/` are not automatically deleted when they expire. The `ExportDownloadService` rejects expired downloads, but the S3 objects remain and accumulate storage costs.
### Options

- S3 lifecycle rule (simplest) — set a lifecycle policy on the `exports/` prefix to delete objects after 48h
- Cron job — scheduled task that queries `bulk_action_job` for expired jobs and deletes their S3 objects
- Both — cron for immediate cleanup, lifecycle rule as a safety net
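For the cron option, the selection step is just an expiry comparison over `bulk_action_job` rows; a sketch, where the `ExportJob` record and the key format are assumptions about the table's shape:

```java
import java.time.Instant;
import java.util.List;

public class ExportCleanup {

    /** Minimal stand-in for a bulk_action_job row that produced an S3 export. */
    public record ExportJob(String s3Key, Instant expiresAt) {}

    /** Returns the S3 keys of jobs whose exports have expired. */
    public static List<String> expiredKeys(List<ExportJob> jobs, Instant now) {
        return jobs.stream()
                .filter(j -> j.expiresAt() != null && j.expiresAt().isBefore(now))
                .map(ExportJob::s3Key)
                .toList();
    }
}
```

The scheduled task would pass each returned key to an S3 `DeleteObject` call; keeping the lifecycle rule in place as well guards against missed cron runs.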
## Metadata Migration System (Production)

Priority: High — required before production launch with paying customers

When new features add metadata (e.g. reverse OneToMany relationships, M2M synthetic columns), existing production workspaces need non-destructive updates without requiring a reset+sync (which loses user settings such as column positions, display names, and presets).
### Requirements
- Idempotent — safe to re-run
- Non-destructive — never deletes user settings (positions, display names, hidden/visible, presets)
- Auditable — track what changed, when, for which workspace
- Incremental — only apply what's missing, not full re-sync
- Rollback-capable — undo a migration if it causes issues
### Approaches to Evaluate

- Versioned migration jobs — like Liquibase but for metadata: numbered migrations tracked in a `metadata_migration_history` table, run once per workspace
- Background backfill endpoint — admin API that applies specific changes idempotently across all workspaces
- Lazy migration — detect and apply missing metadata when a workspace is accessed (on-demand, zero downtime)
- Startup migration runner — runs on Quarkus startup, applies pending migrations before accepting traffic
### Known Migrations Needed
- Reverse OneToMany relationships — create OneToMany on target entities for all existing ManyToOne FKs
- Schema hash recalculation — existing hashes include M2M/O2M data that should be excluded
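The hash-recalculation migration amounts to hashing the column list with synthetic M2M/O2M columns filtered out. A sketch of the idea only; the `Column` model, the `kind` values, and the hash-input format are assumptions, not the actual hashing code:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.util.HexFormat;
import java.util.List;

public class SchemaHash {

    /** Minimal stand-in for a metadata column; kind: "SCALAR", "M2M", "O2M", ... */
    public record Column(String name, String kind) {}

    /** SHA-256 over sorted column names, excluding synthetic M2M/O2M columns. */
    public static String recalculate(List<Column> columns) {
        String input = columns.stream()
                .filter(c -> !c.kind().equals("M2M") && !c.kind().equals("O2M"))
                .map(Column::name)
                .sorted()                      // order-independent hash input
                .reduce("", (a, b) -> a + "|" + b);
        try {
            byte[] digest = MessageDigest.getInstance("SHA-256")
                    .digest(input.getBytes(StandardCharsets.UTF_8));
            return HexFormat.of().formatHex(digest);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e);
        }
    }
}
```

The migration itself would recompute this hash for every workspace entity and overwrite the stored value, leaving all other metadata untouched.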