Ontologie Live Data API
Manage real-time data connectors, streams, HTTP sources, and webhooks.
Getting Started
- Get an API Key: Settings > API Keys > Create
- Set Workspace ID: Find in URL or Settings > Workspace
- Make requests: Include the `X-API-Key` and `X-Workspace-Id` headers with every call
Authentication
    curl -X GET "https://api.ontologie-growthsystemes.com/api/v1/live-data/sources" \
      -H "X-API-Key: your-api-key" \
      -H "X-Workspace-Id: your-workspace-uuid"
Security Schemes

- apiKey — API key for external integrations. Get yours at Settings > API Keys.

  | Security Scheme Type: | apiKey |
  |---|---|
  | Header parameter name: | X-API-Key |

- workspaceId — Workspace UUID. Required for multi-tenant operations.

  | Security Scheme Type: | apiKey |
  |---|---|
  | Header parameter name: | X-Workspace-Id |
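The authenticated request above can also be sketched in Python. This is a minimal, illustrative helper assuming only what the curl example shows (the base URL and the two required headers); request bodies, response shapes, and error handling for individual endpoints are not documented here.

```python
import json
import urllib.request

BASE_URL = "https://api.ontologie-growthsystemes.com/api/v1/live-data"

def build_request(path, api_key, workspace_id, method="GET", body=None):
    """Build an authenticated urllib Request for the Live Data API.

    Every call must carry the X-API-Key and X-Workspace-Id headers.
    JSON bodies (for POST/PUT endpoints) are encoded automatically.
    """
    headers = {
        "X-API-Key": api_key,
        "X-Workspace-Id": workspace_id,
        "Accept": "application/json",
    }
    data = None
    if body is not None:
        headers["Content-Type"] = "application/json"
        data = json.dumps(body).encode("utf-8")
    return urllib.request.Request(
        BASE_URL + path, data=data, headers=headers, method=method
    )

# Build (but don't send) a request listing all data sources:
req = build_request("/sources", "your-api-key", "your-workspace-uuid")
```

Sending the request is then `urllib.request.urlopen(req)`; the same helper works for any endpoint path in the list below.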
📄️ Acknowledge a drift alert
Acknowledges a schema drift alert, marking it as reviewed without dismissing or resolving it.
📄️ Activate pipeline
Activates an inactive or draft pipeline, enabling scheduled and manual executions.
📄️ Add comment to a run
Adds a comment to a specific pipeline run for collaboration and audit purposes.
📄️ Apply mapping
Manually applies the mapping to synchronize data from the source to the target entity.
📄️ Apply schema healing
Applies a previously suggested schema healing action to resolve schema issues.
📄️ Bulk action on drift alerts
Performs a bulk action (acknowledge, dismiss, or resolve) on multiple schema drift alerts at once.
📄️ Cancel a running execution
Cancels an in-progress pipeline run. Currently executing blocks will be interrupted.
📄️ Check connector health
Tests the connection and returns health status for the connector.
📄️ Clone pipeline
Clones an existing pipeline with a new name, duplicating all blocks and configuration.
📄️ Create a connector
Creates a new Airbyte connector with the given source definition and configuration.
📄️ Create custom connector
Creates a new custom connector with user-defined configuration.
📄️ Create a new data pipeline
Creates a new data pipeline with the given name, description, blocks, and configuration.
📄️ Create data source (legacy)
Legacy endpoint for creating a data source. Prefer POST /api/unified-sources.
📄️ Create HTTP source
Creates a new HTTP polling source with the given URL, schedule, and authentication.
📄️ Create mapping
Creates a new field mapping between a data source and an ontology entity.
📄️ Create a data source
Creates a new data source of the specified type.
📄️ Deactivate pipeline
Deactivates an active pipeline, stopping any scheduled executions.
📄️ Delete connector
Permanently deletes a connector and stops any active syncs.
📄️ Delete custom connector
Permanently deletes a custom connector.
📄️ Delete pipeline
Permanently deletes a data pipeline and its associated run history.
📄️ Delete HTTP source
Permanently deletes an HTTP polling source and stops its schedule.
📄️ Delete mapping
Permanently deletes a field mapping.
📄️ Delete stream
Permanently deletes a stream and its associated data.
📄️ Delete source
Permanently deletes a data source and its associated streams.
📄️ Detect schema drift
Detects schema drift in the pipeline sources by comparing current schemas against expected schemas.
📄️ Detect schema from source
Detects and returns the schema from a given source configuration without creating a pipeline.
📄️ Diff two pipeline runs
Compares two pipeline runs side-by-side, showing differences in block outputs, timings, and errors.
📄️ Dismiss a drift alert
Dismisses a schema drift alert, indicating it does not require action.
📄️ Dry-run pipeline
Performs a dry-run of the pipeline without producing side effects. Validates blocks and simulates execution.
📄️ Execute pipeline DAG
Executes the pipeline DAG, creating a new run. Blocks are executed in topological order.
📄️ Discovery status
Returns the current status of an ongoing or completed schema discovery.
📄️ Get connector details
Returns detailed information about a specific connector.
📄️ Get custom connector
Returns details about a specific custom connector.
📄️ Get pipeline details
Returns detailed information about a specific data pipeline by ID.
📄️ Get HTTP source
Returns detailed information about a specific HTTP polling source.
📄️ LiveData Hub stats
Returns statistics about the LiveData Hub including active connections and message rates.
📄️ Get mapping
Returns details about a specific field mapping.
📄️ Get sources linked to a node
Returns all data sources linked to a specific ontology entity.
📄️ Get pipeline audit log
Returns the audit log of changes made to the pipeline configuration and executions.
📄️ Get block-level execution logs
Returns block-level execution logs for a specific pipeline run, including per-block timings and record counts.
📄️ Get pipeline health metrics
Returns health metrics for a pipeline including success rate, average duration, and recent error trends.
📄️ Get pipeline run details
Returns detailed information about a specific pipeline execution run.
📄️ Get specific version
Returns the full pipeline configuration at a specific version number.
📄️ Get rule effectiveness
Returns effectiveness metrics for healing rules, showing success rates and usage frequency.
📄️ Get drift alert statistics
Returns aggregate statistics about schema drift alerts across all sources, including counts by status and severity.
📄️ Get specific drift alert
Returns detailed information about a specific schema drift alert.
📄️ Get schema evolution timeline
Returns the evolution timeline for a source schema, showing all changes and their impact over time.
📄️ Schema intelligence overview
Returns an overview of the schema intelligence status including tracked sources, active alerts, and healing statistics.
📄️ Get specific schema version
Returns the full schema definition at a specific version number for a source.
📄️ Get source analysis
Returns a detailed analysis of a specific source including schema complexity, data quality indicators, and recommendations.
📄️ Get source health metrics
Returns health metrics per source including schema stability, drift frequency, and healing effectiveness.
📄️ Get discovered schema
Returns the schema discovered from the data source, including available tables and columns.
📄️ Get sync status
Returns the current sync status and last sync information for a data source.
📄️ Get stream data rows
Returns paginated data rows from a stream.
📄️ Get stream schema
Returns the column schema for a specific stream.
📄️ Stream statistics
Returns aggregate statistics across all streams in the workspace.
📄️ Get stream details
Returns detailed information about a specific stream including its schema.
📄️ Get sync errors
Returns detailed error information for a specific sync operation.
📄️ Get log details
Returns detailed information about a specific sync log entry.
📄️ Sync statistics
Returns aggregate sync statistics across all sources.
📄️ Get source details
Returns detailed information about a specific data source.
📄️ Import schema to registry
Imports an external schema definition into the schema registry for tracking and comparison.
📄️ Ingest data points
Ingests time-series data points into the live data store.
📄️ List connector streams
Returns all streams associated with a specific connector.
📄️ List connectors
Returns all Airbyte connectors configured in the workspace.
📄️ List custom connectors
Returns all custom (user-defined) connectors in the workspace.
📄️ List all data pipelines
Returns all data pipelines in the workspace with optional filtering by status and type, plus pagination.
📄️ List HTTP sources
Returns all HTTP polling sources configured in the workspace.
📄️ LiveData Hub sessions
Returns active WebSocket sessions connected to the LiveData Hub.
📄️ List sources (legacy)
Legacy endpoint for listing live data sources. Prefer /api/unified-sources.
📄️ List mappings
Returns all field mappings between data sources and ontology entities.
📄️ List run comments
Returns all comments on a specific pipeline run.
📄️ List execution runs
Returns the execution run history for a specific pipeline with pagination.
📄️ List pipeline versions
Returns the version history of a pipeline, showing configuration changes over time.
📄️ List schema alerts
Returns active schema alerts across all sources, with optional severity filtering.
📄️ List drift alerts for a source
Returns all schema drift alerts for a specific data source with optional status filtering and pagination.
📄️ List schema registry
Returns all schemas registered in the schema registry with optional name filtering.
📄️ List schema versions
Returns the version history of a schema for a specific source, ordered by version number descending.
📄️ List streams
Returns all data streams available in the workspace.
📄️ List sync logs
Returns sync operation logs with optional filtering by source and status.
📄️ List all data sources (unified view)
Returns all data sources across all types (webhook, HTTP, connector) in a unified format.
📄️ List user's sources
Returns data sources created by or assigned to the current user.
📄️ Preview block output
Previews the output of a specific pipeline block using sample data without running the full pipeline.
📄️ Preview stream data
Returns a limited preview of stream data (first 10 rows).
📄️ Purge old logs
Deletes sync logs older than the specified number of days.
📄️ Query time-series data
Queries ingested time-series data with time-range and aggregation options.
📄️ Refresh stream
Triggers a refresh of the stream data from its parent source.
📄️ Restore pipeline to a version
Restores the pipeline configuration to a specific historical version, creating a new version entry.
📄️ Retry a failed run
Retries a failed pipeline run, optionally from the failed block or from the beginning.
📄️ Start schema discovery
Initiates schema discovery for the connector. This is an asynchronous operation.
📄️ Suggest field mappings
Suggests field mappings between a source schema and a target schema using name and type similarity.
📄️ Suggest schema healing
Analyzes schema issues and suggests healing actions such as type coercions, default values, or field mappings.
📄️ Trigger connector sync
Triggers an immediate sync operation for the connector.
📄️ Trigger manual poll
Triggers an immediate poll of the HTTP source, outside of the regular schedule.
📄️ Trigger manual sync
Triggers an immediate sync operation for the specified data source.
📄️ Update connector
Updates the name or connection configuration of a connector.
📄️ Update custom connector
Updates the name or configuration of a custom connector.
📄️ Update pipeline configuration
Updates the name, description, blocks, or configuration of a data pipeline.
📄️ Update HTTP source
Updates the URL, schedule, headers, or authentication of an HTTP source.
📄️ Update mapping
Updates the field mappings or active status of a mapping.
📄️ Update source
Updates the name, config, or linked entity of a data source.
📄️ Validate pipeline configuration
Validates the pipeline DAG structure, block configurations, and connection integrity without executing.
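Several operations above are asynchronous: Start schema discovery, for example, returns immediately, and the result is later checked via Discovery status. A client typically starts the operation, then polls until a terminal status is reached. The helper below is an illustrative sketch, not part of the API; the status strings ("running", "completed", "failed") are assumptions, since the actual status enum is not documented on this page.

```python
import time

def poll_until_done(get_status, *, interval=2.0, timeout=60.0,
                    terminal=("completed", "failed")):
    """Poll `get_status()` until it returns a terminal status or times out.

    `get_status` would wrap a call to the discovery-status endpoint and
    return the status string from the response body (assumed values).
    """
    deadline = time.monotonic() + timeout
    while True:
        status = get_status()
        if status in terminal:
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"operation still '{status}' after {timeout}s")
        time.sleep(interval)

# Usage with a stubbed status source (a real client would issue GET requests):
statuses = iter(["running", "running", "completed"])
final = poll_until_done(lambda: next(statuses), interval=0.01, timeout=5.0)
```

The same pattern applies to watching a triggered sync or a long pipeline run: wrap the relevant status endpoint in `get_status` and choose terminal states accordingly.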