Bulk Operations
End-to-end flows for bulk delete, bulk update, and bulk export. All bulk operations are asynchronous — they create a job, process it via RabbitMQ, and report progress through SSE events.
Common Pattern
All three bulk operations follow the same async job pattern:
```
Frontend (App)          Quarkus REST            RabbitMQ        Quarkus Consumer
─────────────────       ────────────            ────────        ────────────────
User selects rows +
triggers bulk action
    │
    ▼
NgRx action: bulk{Action}
    │
    ▼
SelectionBulkEffects
    │  POST /api/data/bulk/
    │  {viewUuid}/{action}
    ▼
                        BulkActionResource
                            │
                            ▼
                        BulkActionService
                            │
                            ├── Create BulkActionJob
                            │   (status: PENDING)
                            │
                            └── Publish to RabbitMQ
                                (bulk-actions exchange)
                                                │
                                                ▼
                                                        BulkActionConsumer
                                                            │
                                                            ├── Update job: IN_PROGRESS
                                                            │
                                                            ├── Process rows in batches
                                                            │     │
                                                            │     ├── Execute SQL per batch
                                                            │     │
                                                            │     └── SSE: progress event
                                                            │         { processedRows,
                                                            │           totalRows,
                                                            │           successCount,
                                                            │           errorCount }
                                                            │
                                                            ├── Update job: COMPLETED
                                                            │
                                                            └── SSE: completion event
    │
    ▼        ◄─── SSE events stream back ───
Frontend receives SSE
    │
    ├── Progress events
    │     → Update progress bar
    │
    └── Completion event
          → Clear job state,
            reload view data
```
The initial `POST` returns `202 Accepted`, with the `jobId` in the response body.
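The batching loop in the consumer can be sketched as follows. This is a hypothetical helper, not the actual `BulkActionConsumer` code; it shows how one progress event is derived per processed batch:

```typescript
// Hypothetical sketch of the consumer's batching loop (names are illustrative,
// not the real BulkActionConsumer API).
interface BulkProgress {
  processedRows: number;
  totalRows: number;
  successCount: number;
  errorCount: number;
}

function processInBatches(
  rowIds: number[],
  batchSize: number,
  processBatch: (batch: number[]) => { succeeded: number; failed: number },
  onProgress: (p: BulkProgress) => void, // e.g. emit an SSE progress event
): BulkProgress {
  const progress: BulkProgress = {
    processedRows: 0,
    totalRows: rowIds.length,
    successCount: 0,
    errorCount: 0,
  };
  for (let i = 0; i < rowIds.length; i += batchSize) {
    const batch = rowIds.slice(i, i + batchSize);
    const { succeeded, failed } = processBatch(batch); // e.g. one SQL statement per batch
    progress.processedRows += batch.length;
    progress.successCount += succeeded;
    progress.errorCount += failed;
    onProgress({ ...progress }); // one progress event per batch, as in the diagram
  }
  return progress;
}
```

A row-level failure only increments the failure count; the loop continues with the next batch, which matches the error-handling rules described later on this page.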
Row Selection Modes
Bulk operations support two selection modes, allowing efficient operations on large datasets:
| Mode | Payload | Use Case |
|---|---|---|
| `SELECTED_IDS` | `{ rowIds: [1, 2, 3] }` | User manually selected specific rows |
| `FILTERED` | `{ excludedRowIds: [5] }` | User selected "all" with current filters, then deselected some |
The `FILTERED` mode is critical for performance — selecting 100,000 rows doesn't send 100,000 IDs. Instead, the backend applies the current filter/sort criteria and excludes only the deselected rows.
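A minimal sketch of how a client might build the two payload shapes (the `buildBulkSelection` helper is hypothetical, not part of the actual frontend):

```typescript
// Hypothetical payload builder for the two selection modes.
type BulkSelection =
  | { selectionMode: 'SELECTED_IDS'; rowIds: number[] }
  | {
      selectionMode: 'FILTERED';
      excludedRowIds: number[];
      filterRules: unknown[];
      sortRules: unknown[];
    };

function buildBulkSelection(
  selectAll: boolean,
  rowIds: number[],          // explicitly selected rows (SELECTED_IDS mode)
  excludedRowIds: number[],  // deselected rows (FILTERED mode)
  filterRules: unknown[],
  sortRules: unknown[],
): BulkSelection {
  if (selectAll) {
    // "Select all" never sends the full ID list — only the exclusions
    // plus the current filter/sort criteria for the backend to re-apply.
    return { selectionMode: 'FILTERED', excludedRowIds, filterRules, sortRules };
  }
  return { selectionMode: 'SELECTED_IDS', rowIds };
}
```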
Bulk Delete
Deletes multiple rows asynchronously with progress tracking.
POST /api/data/bulk/{viewUuid}/delete
Body (SELECTED_IDS):
```json
{
  "selectionMode": "SELECTED_IDS",
  "rowIds": [1, 2, 3, 4, 5]
}
```
Body (FILTERED):
```json
{
  "selectionMode": "FILTERED",
  "excludedRowIds": [5],
  "filterRules": [...],
  "sortRules": [...]
}
```
Response: 202 Accepted
```json
{
  "jobId": "uuid",
  "status": "PENDING",
  "totalRows": 5
}
```
SSE Events
| Event | Payload | When |
|---|---|---|
| `data.bulk.delete.progress` | `{ jobId, processedRows, totalRows, successCount, errorCount }` | After each batch |
| `data.bulk.delete` | `{ jobId, status: "COMPLETED", totalRows, successCount, errorCount }` | On completion |
Frontend State
```typescript
// Bulk operation state in ViewState
bulkJobId: string | null;
bulkOperationType: 'delete' | 'update' | 'export' | null;
bulkStatus: 'PENDING' | 'IN_PROGRESS' | 'COMPLETED' | 'FAILED' | null;
bulkTotalRows: number;
bulkProcessedRows: number;
bulkSuccessCount: number;
bulkErrorCount: number;
bulkErrors: BulkError[]; // max 100 stored
```
The `SelectionBulkEffects` dispatches the API call, stores the `jobId`, and listens for SSE progress/completion events matching that `jobId`.
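The SSE handling reduces to a pure state update that ignores events for other jobs. A hypothetical sketch, assuming progress events carry the fields listed above:

```typescript
// Hypothetical reducer step: apply an SSE progress event to the bulk slice
// of ViewState, ignoring events that belong to a different job.
interface BulkSlice {
  bulkJobId: string | null;
  bulkProcessedRows: number;
  bulkTotalRows: number;
  bulkSuccessCount: number;
  bulkErrorCount: number;
}

interface BulkProgressEvent {
  jobId: string;
  processedRows: number;
  totalRows: number;
  successCount: number;
  errorCount: number;
}

function applyProgressEvent(state: BulkSlice, event: BulkProgressEvent): BulkSlice {
  if (state.bulkJobId === null || event.jobId !== state.bulkJobId) {
    return state; // event belongs to another (or no) tracked job
  }
  return {
    ...state,
    bulkProcessedRows: event.processedRows,
    bulkTotalRows: event.totalRows,
    bulkSuccessCount: event.successCount,
    bulkErrorCount: event.errorCount,
  };
}
```

Matching on the stored `jobId` is what makes the effect safe when several tabs or stale events are in play: anything that doesn't match the job currently being tracked is a no-op.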
Bulk Update
Updates a field value across multiple rows asynchronously.
POST /api/data/bulk/{viewUuid}/update
Body:
```json
{
  "selectionMode": "SELECTED_IDS",
  "rowIds": [1, 2, 3],
  "updates": {
    "columnUuid": "new-value"
  }
}
```
Response: 202 Accepted
```json
{
  "jobId": "uuid",
  "status": "PENDING",
  "totalRows": 3
}
```
SSE Events
| Event | Payload | When |
|---|---|---|
| `data.bulk.update.progress` | `{ jobId, processedRows, totalRows, successCount, errorCount }` | After each batch |
| `data.bulk.update` | `{ jobId, status: "COMPLETED", totalRows, successCount, errorCount }` | On completion |
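Applied to each selected row, the `updates` map (column UUID to new value) behaves like a shallow merge. A minimal illustration with a hypothetical in-memory `Row` shape; the real processing happens server-side via batched SQL:

```typescript
// Hypothetical Row shape for illustration only.
type Row = { id: number; values: Record<string, unknown> };

// Apply a bulk-update "updates" map (columnUuid → value) to one row,
// leaving all other columns and the original object untouched.
function applyUpdates(row: Row, updates: Record<string, unknown>): Row {
  return { ...row, values: { ...row.values, ...updates } };
}
```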
Bulk Export
Exports selected rows to a file asynchronously. The export is processed as a job — the file is generated server-side and made available for download.
POST /api/data/bulk/{viewUuid}/export
Body:
```json
{
  "selectionMode": "FILTERED",
  "excludedRowIds": [],
  "filterRules": [...],
  "format": "csv"
}
```
Response: 202 Accepted
```json
{
  "jobId": "uuid",
  "status": "PENDING",
  "totalRows": 10000
}
```
SSE Events
| Event | Payload | When |
|---|---|---|
| `data.bulk.export.progress` | `{ jobId, processedRows, totalRows }` | After each batch |
| `data.bulk.export` | `{ jobId, status: "COMPLETED", fileMetadata: { ... } }` | On completion |
Export File Storage
Export files are stored in the platform S3 bucket (the same bucket used for thumbnails and feedback), not per-workspace S3. This is because the processor and metadata-api run in separate Docker containers and can't share local filesystems.
Production (S3):
- Processor config: `processor.storage.type=s3` activates `S3FileStorageService`
- Processor uploads to: `s3://{platform-bucket}/exports/{jobId}/{fileName}`
- Quarkus downloads via: `ExportDownloadService` → `S3FileStorageProvider.retrieveExternal(downloadPath)`
- Both services use the same platform S3 credentials (`app.platform.s3.*` on Quarkus, `processor.storage.s3.*` on processor)
Local development (filesystem):
- Processor config: `processor.storage.type=local` (default) uses `LocalFileStorageService`
- Files stored at: `${java.io.tmpdir}/processor-exports/exports/{jobId}/{fileName}`
- Quarkus reads from: `processor.storage.local.path` (same path; works because both run on the same machine locally)
Retention: Export files expire after 24 hours (configurable via `processor.storage.s3.retention-hours` or `processor.storage.local.retention-hours`). `LocalFileStorageService` has a scheduled cleanup job. S3 retention should be configured via S3 lifecycle rules on the bucket.
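Putting the storage settings together, a processor configuration might look like the fragment below (key names are taken from the text above; the values are illustrative, not defaults pulled from the codebase):

```properties
# Production: export files go to the platform S3 bucket
processor.storage.type=s3
processor.storage.s3.retention-hours=24

# Local development (the local type is the default): shared filesystem
#processor.storage.type=local
#processor.storage.local.retention-hours=24
#processor.storage.local.path=${java.io.tmpdir}/processor-exports
```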
Download Flow
After the export job completes:
- Processor stores file in S3 (or local disk in dev)
- Processor sends `TaskCompletionMessage` with `downloadPath`, `fileName`, `fileSize`, `mimeType`, `expiresAt`
- Consumer-worker receives it → `BulkActionHandler` updates `BulkActionJob` with `downloadPath`, `expiresAt`, `status=COMPLETED`
- Consumer-worker broadcasts SSE `view.data.bulk.export` event with file metadata
- Frontend receives SSE → adds download item to NgRx state + localStorage
- User clicks download → `GET /api/data/bulk/download/{jobId}` with JWT; `ExportDownloadService` validates ownership + expiration, then streams the file from S3 (or local)
Frontend persistence: Download items are stored under the localStorage key `schemastack_downloads`. Expired items are filtered out on load. Items persist across browser sessions until the user removes them or they expire.
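The load-time filtering of expired downloads can be sketched as a pure helper. The `DownloadItem` shape and the epoch-millisecond `expiresAt` representation are assumptions for illustration, not the actual persisted format:

```typescript
// Hypothetical shape of a persisted download item (the real entries under
// schemastack_downloads may differ).
interface DownloadItem {
  jobId: string;
  fileName: string;
  expiresAt: number; // epoch milliseconds (assumed representation)
}

// Drop expired items; called when restoring state from localStorage.
function pruneExpired(items: DownloadItem[], now: number): DownloadItem[] {
  return items.filter((item) => item.expiresAt > now);
}

// Usage against localStorage (browser only):
// const raw = localStorage.getItem('schemastack_downloads') ?? '[]';
// const items = pruneExpired(JSON.parse(raw), Date.now());
```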
Progress Tracking
All bulk operations track progress with the same fields:
| Field | Type | Description |
|---|---|---|
| `totalRows` | int | Total rows to process |
| `processedRows` | int | Rows processed so far |
| `successCount` | int | Rows successfully processed |
| `errorCount` | int | Rows that failed |
| `errors` | `BulkError[]` | Error details (max 100 stored) |
Progress SSE events are sent after each batch is processed, allowing the frontend to show a real-time progress bar.
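For the progress bar, these fields reduce to a single percentage. A small hypothetical helper with a guard for empty jobs (division by zero is the easy mistake here):

```typescript
// Progress percentage for the UI progress bar; returns 0 for empty jobs
// rather than dividing by zero.
function progressPercent(processedRows: number, totalRows: number): number {
  if (totalRows <= 0) return 0;
  return Math.round((processedRows / totalRows) * 100);
}
```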
Error Handling
- Individual row failures do not abort the entire job — processing continues
- Errors are collected with the row ID and error message
- A maximum of 100 errors are stored per job (to prevent unbounded memory usage)
- The final completion event includes `successCount` and `errorCount`
- If all rows fail, the job status is still `COMPLETED` (not `FAILED`) — `FAILED` is reserved for infrastructure failures (e.g. a lost DB connection)
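The capped error collection can be sketched as follows. `BulkErrorCollector` is a hypothetical name, but the cap-while-still-counting behavior matches the rules above:

```typescript
// Sketch of capped error collection: store at most maxStored error details
// while still counting every failure.
interface BulkError {
  rowId: number;
  message: string;
}

class BulkErrorCollector {
  readonly errors: BulkError[] = [];
  errorCount = 0;

  constructor(private readonly maxStored = 100) {}

  record(rowId: number, message: string): void {
    this.errorCount++; // every failure is counted...
    if (this.errors.length < this.maxStored) {
      this.errors.push({ rowId, message }); // ...but only the first 100 are stored
    }
  }
}
```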
Job Lifecycle
```
PENDING → IN_PROGRESS → COMPLETED
                      → FAILED (infrastructure error only)
```
The `BulkActionJob` entity tracks the full lifecycle. Jobs are persisted in the metadata database and can be queried for status after the fact.
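A hypothetical guard over this lifecycle, assuming `FAILED` is reachable only from `IN_PROGRESS` as the diagram suggests:

```typescript
// Hypothetical lifecycle guard; not the actual BulkActionJob implementation.
type JobStatus = 'PENDING' | 'IN_PROGRESS' | 'COMPLETED' | 'FAILED';

const allowedTransitions: Record<JobStatus, JobStatus[]> = {
  PENDING: ['IN_PROGRESS'],
  IN_PROGRESS: ['COMPLETED', 'FAILED'],
  COMPLETED: [], // terminal
  FAILED: [],    // terminal
};

function canTransition(from: JobStatus, to: JobStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```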
Bulk vs Batch vs Single
| | Single | Batch | Bulk |
|---|---|---|---|
| Example | Edit one cell | Fill column for selected rows | Delete 10,000 rows |
| Processing | Sync | Sync | Async (RabbitMQ) |
| Response | 200 | 200 | 202 |
| Progress | Instant | Instant | SSE events |
| Page | Data Operations | Data Operations | This page |
Response Code Summary
| Operation | Method | Success Code | Notes |
|---|---|---|---|
| Bulk Delete | POST | 202 Accepted | Returns jobId |
| Bulk Update | POST | 202 Accepted | Returns jobId |
| Bulk Export | POST | 202 Accepted | Returns jobId |
| Download Export | GET | 200 OK | Streams file |
Cross-References
- Data Operations — synchronous cell/row operations
- System Overview — RabbitMQ exchanges and messaging patterns
- Frontend Overview — NgRx bulk operation state and selection modes