# Clicksuite

A robust CLI tool for managing ClickHouse database migrations with environment-specific configurations and TypeScript support.
## Features

- Environment-aware migrations - separate SQL for development, test, and production
- Multiple query support - execute multiple SQL statements in a single migration
- Environment variable interpolation - secure credential management with `${ENV_VAR}` syntax
- Auto-generated `schema.sql` - complete multi-database schema tracking across all non-system databases
- Dry run mode - preview migrations before executing with `--dry-run`
- Verbose logging control - clean output by default, detailed logs with `--verbose`
- Full TypeScript support - exported types for programmatic usage
- Comprehensive migration management - apply, rollback, reset, and status tracking
## Prerequisites

- Node.js (v16+ recommended)
- npm or yarn
- A running ClickHouse instance
## Installation

Global installation (recommended for CLI usage):

```bash
npm install -g clicksuite
```

Or use without installing:

```bash
npx clicksuite init
```

For projects (with programmatic usage):

```bash
npm install --save-dev clicksuite
```
## Quick Start

1. Initialize in your project:

   ```bash
   clicksuite init
   ```

2. Configure your environment (`.env` file):

   ```bash
   CLICKHOUSE_URL=http://default@localhost:8123/my_database
   ```

3. Generate your first migration:

   ```bash
   clicksuite generate create_users_table
   ```

4. Apply migrations:

   ```bash
   clicksuite migrate
   ```
## Configuration

Configure Clicksuite using environment variables in a `.env` file:

```bash
# Required: ClickHouse connection URL
CLICKHOUSE_URL=http://default@localhost:8123/my_database

# Optional: For clustered deployments
CLICKHOUSE_CLUSTER='{cluster}'

# Optional: Custom migrations directory (defaults to './migrations')
CLICKSUITE_MIGRATIONS_DIR=./db_migrations

# Optional: Environment (defaults to 'development')
CLICKSUITE_ENVIRONMENT=production

# Optional: Custom database for the migrations table (defaults to 'default')
CLICKSUITE_MIGRATIONS_DATABASE=default
```
Connection URL examples:

- Local: `http://default@localhost:8123/my_database`
- Remote: `https://user:<password>@<host>:8443/prod_db`
- Docker: `http://default@clickhouse:8123/analytics`
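These URLs follow the standard `scheme://user:password@host:port/database` shape, so they can be decomposed with Node's built-in WHATWG `URL` class. The sketch below is illustrative (the host and credentials are made up); it is not Clicksuite's actual parsing code.

```typescript
// Decompose a ClickHouse connection URL with the standard URL class.
// The example URL is hypothetical.
const parsed = new URL('https://user:secret@clickhouse.example.com:8443/prod_db');

console.log(parsed.protocol);          // 'https:'
console.log(parsed.username);          // 'user'
console.log(parsed.password);          // 'secret'
console.log(parsed.hostname);          // 'clickhouse.example.com'
console.log(parsed.port);              // '8443'
console.log(parsed.pathname.slice(1)); // 'prod_db' (the target database)
```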
## Usage

Once installed and configured, you can use the `clicksuite` CLI:

```bash
clicksuite <command> [options]
```

### Global Options

- `--non-interactive`, `-y`: Run in non-interactive mode, automatically confirming prompts (e.g., for `migrate:reset`). Useful for CI environments.
- `--verbose`: Show detailed SQL logs and verbose output. By default, only migration names and results are shown.
- `--dry-run`: Preview migrations without executing them (available for `migrate:up` and `migrate:down`). Shows exactly what would be executed.
### Commands

- `clicksuite init`
  - Initializes Clicksuite for the current project.
  - Creates the migrations directory (e.g., `<CLICKSUITE_MIGRATIONS_DIR>/migrations/`).
  - Creates the `__clicksuite_migrations` table in the `default` database (or the database set via `CLICKSUITE_MIGRATIONS_DATABASE`) to track migrations.
  - Tests the connection to ClickHouse.
- `clicksuite generate <migration_name>`
  - Generates a new migration YAML file in the migrations directory.
  - Example: `clicksuite generate create_users_table`
  - The generated file will have a timestamped version and sections for the `development`, `test`, and `production` environments.
- `clicksuite migrate:status`
  - Shows the status of all migrations (APPLIED, PENDING, INACTIVE) for the current `CLICKSUITE_ENVIRONMENT`.
- `clicksuite migrate`
  - Runs all pending migrations for the current environment. Equivalent to `clicksuite migrate:up`.
- `clicksuite migrate:up [migrationVersion]`
  - Applies migrations for the current environment.
  - If `migrationVersion` is omitted, applies all pending migrations.
  - If `migrationVersion` is specified, applies all pending migrations up to and including that version.
  - Example: `clicksuite migrate:up 20230101120000`
  - Use `--dry-run` to preview without executing: `clicksuite migrate:up --dry-run`
  - Use `--verbose` to see detailed SQL logs: `clicksuite migrate:up --verbose`
- `clicksuite migrate:down [migrationVersion]`
  - Rolls back migrations for the current environment.
  - If `migrationVersion` is omitted, rolls back the single last applied migration.
  - If `migrationVersion` is specified, rolls back all migrations applied after that version, making the specified version the new latest applied migration. Prompts for confirmation if multiple migrations will be rolled back.
  - Example (roll back last): `clicksuite migrate:down`
  - Example (roll back to version): `clicksuite migrate:down 20230101120000`
  - Use `--dry-run` to preview without executing: `clicksuite migrate:down --dry-run`
  - Use `--verbose` to see detailed SQL logs: `clicksuite migrate:down --verbose`
- `clicksuite migrate:reset`
  - Rolls back all applied migrations for the current environment by executing their `downSQL`.
  - Clears the `__clicksuite_migrations` table.
  - Requires confirmation unless `--non-interactive` is used.
  - **Caution:** This is a destructive operation for your migration history tracking, and potentially for your data if the `downSQL` scripts are destructive.
- `clicksuite schema:load`
  - Marks all local migration files as APPLIED in the `__clicksuite_migrations` table without running their `upSQL`.
  - Useful for initializing Clicksuite on an existing database where the schema changes have already been applied manually or by another process.
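Versions like `20230101120000` in the examples above are `YYYYMMDDHHMMSS` timestamps. A minimal sketch of deriving such a stamp (illustrative; `clicksuite generate` may build it differently, e.g., from local rather than UTC time):

```typescript
// Build a YYYYMMDDHHMMSS version stamp from a Date (UTC-based in this sketch).
function timestampVersion(d: Date = new Date()): string {
  const pad = (n: number) => String(n).padStart(2, '0');
  return (
    `${d.getUTCFullYear()}${pad(d.getUTCMonth() + 1)}${pad(d.getUTCDate())}` +
    `${pad(d.getUTCHours())}${pad(d.getUTCMinutes())}${pad(d.getUTCSeconds())}`
  );
}

console.log(timestampVersion(new Date(Date.UTC(2023, 0, 1, 12, 0, 0)))); // 20230101120000
```

Because these stamps sort lexicographically in chronological order, plain string comparison is enough to decide which migrations fall "up to and including" a given version.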
## Migration Files

Migration files are YAML (`.yml`) and should be placed in the `<CLICKSUITE_MIGRATIONS_DIR>/migrations/` directory. The filename format is `YYYYMMDDHHMMSS_description.yml`.

Each migration file supports environment-specific SQL and settings. The `{table}` and `{database}` placeholders in `up` or `down` SQL will be replaced by the values of the `table` and `database` fields from the YAML.

Example `YYYYMMDDHHMMSS_create_widgets.yml`:
```yaml
version: "20240115103000"
name: "create widgets table"
table: "widgets_table"
database: "analytics_db"

development: &development_defaults
  up: |
    -- Create database if it doesn't exist
    CREATE DATABASE IF NOT EXISTS {database};

    -- Create table in the specified database
    CREATE TABLE IF NOT EXISTS {database}.{table} (
      id UInt64,
      name String,
      dev_notes String DEFAULT 'dev only'
    ) ENGINE = MergeTree() ORDER BY id;
  down: |
    -- SQL for development down
    DROP TABLE IF EXISTS {database}.{table};
  settings:
    allow_experimental_object_type: 1 # Example setting

test:
  <<: *development_defaults
  # up: |
  #   -- Override SQL for test up if needed
  #   ALTER TABLE {database}.{table} ADD COLUMN test_flag UInt8 DEFAULT 1;
  # down: |
  #   -- Override SQL for test down if needed

production:
  # Typically, you might not want to inherit DROP TABLE for production down by default,
  # or you might provide a very specific, non-destructive down script.
  up: |
    -- Create database if it doesn't exist
    CREATE DATABASE IF NOT EXISTS {database};

    -- SQL for production up
    CREATE TABLE IF NOT EXISTS {database}.{table} (
      id UInt64,
      name String,
      critical_prod_field String
    ) ENGINE = MergeTree() ORDER BY id;
  down: |
    -- SQL for production down (be cautious with DROP TABLE in prod down scripts)
    -- Consider ALTER TABLE to make a field nullable, or a no-op if rollback is too risky.
    SELECT 'Manual rollback required or specific ALTER TABLE statement for production';
  settings:
    # Production specific settings
    # send_timeout: 600
```
**Field Reference:**

- The `version` and `name` fields are primarily for display and tracking.
- The `table` field is optional but useful for string replacement in your SQL if your migration targets a specific table.
- The `database` field is optional but allows you to target different databases. If specified, use the `{database}.{table}` format in your SQL.
- Migration tracking: all migrations are tracked centrally in the `default.__clicksuite_migrations` table, regardless of which database they target.
- Database creation: you can create databases in your migrations using `CREATE DATABASE IF NOT EXISTS {database}`; this will be tracked in the generated `schema.sql`.
- Each environment (`development`, `test`, `production`) can define its own `up` SQL, `down` SQL, and `settings` (ClickHouse settings to apply during execution).
- YAML anchors (`&anchor_name`) and aliases (`<<: *anchor_name`) can be used to reduce redundancy (e.g., `test` and `production` can inherit from `development_defaults`). `js-yaml` (used internally) resolves these aliases upon loading.
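The `{table}` and `{database}` replacement described above can be pictured as a simple string substitution pass over the SQL. This sketch is illustrative only; the function name and behavior details are assumptions, not Clicksuite's internal code:

```typescript
// Replace every {database} and {table} placeholder with the values of the
// migration's YAML fields; fields left unset leave their placeholder untouched.
function applyPlaceholders(sql: string, database?: string, table?: string): string {
  let out = sql;
  if (database) out = out.split('{database}').join(database);
  if (table) out = out.split('{table}').join(table);
  return out;
}

const sql =
  'CREATE TABLE IF NOT EXISTS {database}.{table} (id UInt64) ENGINE = MergeTree() ORDER BY id;';
console.log(applyPlaceholders(sql, 'analytics_db', 'widgets_table'));
// CREATE TABLE IF NOT EXISTS analytics_db.widgets_table (id UInt64) ENGINE = MergeTree() ORDER BY id;
```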
## Multiple Query Support

Clicksuite supports executing multiple SQL statements in a single migration by separating them with semicolons. This is particularly useful since ClickHouse doesn't natively support multiple queries in a single request.
Example migration with multiple queries:

```yaml
version: "20240115103000"
name: "create users and orders tables"
table: "users"
database: "ecommerce"

development:
  up: |
    -- Create the database
    CREATE DATABASE IF NOT EXISTS {database};

    -- Create users table
    CREATE TABLE {database}.users (
      id UInt32,
      email String,
      created_at DateTime64
    ) ENGINE = MergeTree() ORDER BY id;

    -- Create orders table
    CREATE TABLE {database}.orders (
      id UInt32,
      user_id UInt32,
      amount Decimal(10,2),
      created_at DateTime64
    ) ENGINE = MergeTree() ORDER BY id;

    -- Insert initial admin user
    INSERT INTO {database}.users VALUES (1, '[email protected]', now64());
  down: |
    -- Drop tables in reverse order
    DROP TABLE IF EXISTS {database}.orders;
    DROP TABLE IF EXISTS {database}.users;
    DROP DATABASE IF EXISTS {database};
```
Key features of multiple query support:

- Automatic splitting: queries are automatically split by semicolons and executed individually
- Error handling: if any query fails, the entire migration fails and subsequent queries are not executed
- Logging: each query is logged separately with progress indicators (e.g., "Query 1/3", "Query 2/3")
- Settings preservation: all ClickHouse settings specified in the migration are applied to each individual query
- Trailing semicolons: trailing semicolons are safely ignored and won't cause empty query errors
- Works for both directions: multiple queries work for both `up` and `down` migrations
Console output example:

```
Executing 4 migration queries:
Query 1/4: CREATE DATABASE IF NOT EXISTS ecommerce
Query 2/4: CREATE TABLE ecommerce.users (id UInt32, email String, created_at DateTime64) ENGINE = MergeTree() ORDER BY id
Query 3/4: CREATE TABLE ecommerce.orders (id UInt32, user_id UInt32, amount Decimal(10,2), created_at DateTime64) ENGINE = MergeTree() ORDER BY id
Query 4/4: INSERT INTO ecommerce.users VALUES (1, '[email protected]', now64())
```
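The splitting behavior described above can be pictured with a small sketch. Note that this naive version does not account for semicolons inside string literals; it is illustrative, not Clicksuite's actual splitter:

```typescript
// Split a multi-statement SQL string on semicolons, dropping the empty
// fragments produced by trailing semicolons or blank statements.
function splitQueries(sql: string): string[] {
  return sql
    .split(';')
    .map((query) => query.trim())
    .filter((query) => query.length > 0);
}

const migrationSql = `
  CREATE DATABASE IF NOT EXISTS ecommerce;
  CREATE TABLE ecommerce.users (id UInt32) ENGINE = MergeTree() ORDER BY id;
`;

const queries = splitQueries(migrationSql);
queries.forEach((query, i) => {
  console.log(`Query ${i + 1}/${queries.length}: ${query}`);
});
```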
## Environment Variable Interpolation

Clicksuite supports interpolating environment variables into your SQL migrations using the `${ENV_VAR_NAME}` syntax. This is particularly useful for sensitive data like database credentials that shouldn't be hardcoded in migration files.
Example migration with environment variable interpolation:

```yaml
version: "20240115104000"
name: "create organization dictionary"
table: "organization_info"
database: "gamebeast"

development:
  up: |
    CREATE DICTIONARY IF NOT EXISTS {database}.{table} (
      id UInt64,
      name String,
      created_at DateTime,
      created_by String
    ) PRIMARY KEY id
    LIFETIME(MIN 600 MAX 900)
    SOURCE(POSTGRESQL(
      port ${POSTGRES_PORT}
      host '${POSTGRES_HOST}'
      user '${POSTGRES_USER}'
      password '${POSTGRES_PASSWORD}'
      db '${POSTGRES_DATABASE}'
      table 'organizations'
    ))
    LAYOUT(HASHED());
  down: |
    DROP DICTIONARY IF EXISTS {database}.{table};
```
Required environment variables for the above example:

```bash
export POSTGRES_HOST=host.docker.internal
export POSTGRES_PORT=5432
export POSTGRES_USER=postgres
export POSTGRES_PASSWORD=postgres
export POSTGRES_DATABASE=gamebeast
```
Key features of environment variable interpolation:

- Secure credentials: keep sensitive data like passwords out of your migration files
- Environment-specific values: use different database hosts, ports, or credentials per environment
- Automatic interpolation: variables are replaced during migration execution
- Warning for missing variables: if an environment variable is not set, a warning is displayed and an empty string is used
- Works with existing placeholders: environment variables work alongside the `{table}` and `{database}` placeholders
- Multiple variables: you can use as many environment variables as needed in a single migration
Supported syntax:

- `${VARIABLE_NAME}` - standard environment variable interpolation
- `${DB_HOST}` - simple variable names
- `${DB_CONNECTION_USER_NAME}` - complex variable names with underscores
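A sketch of how `${ENV_VAR}` interpolation can work, including the missing-variable warning described above (illustrative; not Clicksuite's internal implementation):

```typescript
// Replace ${NAME} tokens with values from an environment map; unset
// variables trigger a warning and become an empty string.
function interpolateEnv(
  sql: string,
  env: Record<string, string | undefined> = process.env
): string {
  return sql.replace(/\$\{(\w+)\}/g, (_match, name: string) => {
    const value = env[name];
    if (value === undefined) {
      console.warn(`Warning: environment variable ${name} is not set; using empty string`);
      return '';
    }
    return value;
  });
}

console.log(
  interpolateEnv("host '${POSTGRES_HOST}' port ${POSTGRES_PORT}", {
    POSTGRES_HOST: 'localhost',
    POSTGRES_PORT: '5432',
  })
);
// host 'localhost' port 5432
```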
Best practices:

- Use environment variables for sensitive data (passwords, API keys, hosts)
- Document required environment variables in your project's README
- Use descriptive variable names (e.g., `POSTGRES_HOST` instead of `HOST`)
- Set default values in your deployment scripts when appropriate
## Dry Run Mode

Preview migrations before executing them with the `--dry-run` flag:

```bash
# Preview pending migrations without executing
clicksuite migrate:up --dry-run

# Preview rollback of last migration
clicksuite migrate:down --dry-run

# Preview rollback to specific version
clicksuite migrate:down 20240101120000 --dry-run

# Combine with verbose output for detailed SQL preview
clicksuite migrate:up --dry-run --verbose
```
Dry run features:
- Shows exactly which migrations would be executed
- Displays detailed migration information (environment, database, table)
- Shows SQL queries that would be executed
- No database changes are made
- No migration tracking updates occur
- Schema file is not updated
## Verbose Logging

Control the amount of logging with the `--verbose` flag:

```bash
# Default: Show migration names and results only
clicksuite migrate:up

# Verbose: Show detailed SQL logs and execution info
clicksuite migrate:up --verbose

# Verbose with dry run for maximum detail
clicksuite migrate:up --dry-run --verbose
```
Default output (clean):
- Migration names being processed
- Success/failure messages
- Schema update notifications
Verbose output (detailed):
- SQL queries being executed
- Database operation details
- Schema file update details
- Debug information
## Combining Flags

All flags can be combined for maximum control:

```bash
# Non-interactive dry run with verbose output
clicksuite migrate:up --non-interactive --dry-run --verbose

# Production rollback with confirmation bypass
clicksuite migrate:down --non-interactive

# Preview rollback with detailed output
clicksuite migrate:down --dry-run --verbose
```
## Testing

Clicksuite includes a comprehensive test suite built with Jest that covers all core functionality.

```bash
# Run all tests
npm test

# Run tests in watch mode during development
npm run test:watch

# Run tests with coverage report
npm run test:coverage

# Run tests in CI mode (used by GitHub Actions)
npm run test:ci
```
The test suite includes:
- Unit Tests: Core functionality, database operations, CLI commands
- Integration Tests: End-to-end migration scenarios, file system operations
- Type Tests: TypeScript interface validation
- Error Handling Tests: Database errors, file system errors, validation failures
Test files:

- `tests/db.test.ts` - database operations and ClickHouse client
- `tests/runner.test.ts` - migration runner and command execution
- `tests/index.test.ts` - CLI interface and argument parsing
- `tests/types.test.ts` - TypeScript type definitions
- `tests/integration.test.ts` - end-to-end integration scenarios
- `tests/utils.test.ts` - utility functions and helpers
## Development

1. Install dependencies:

   ```bash
   npm install
   ```

2. Build (and watch for changes):

   ```bash
   npm run build
   # or, for continuous compilation during development:
   # tsc -w
   ```

3. Run tests:

   ```bash
   npm test
   # or run with coverage:
   npm run test:coverage
   ```

4. Local execution: after building, you can run the CLI using `npm link` (as described in Installation) or by directly executing the compiled JavaScript:

   ```bash
   node dist/index.js <command>
   ```
## CI/CD

This project uses GitHub Actions for:
- Continuous Testing: Runs tests on Node.js 18.x, 20.x, and 22.x
- Code Coverage: Uploads coverage reports to Codecov
- NPM Publishing: Automated publishing to NPM on releases
- Type Checking: Validates TypeScript types
- Security Scanning: Daily vulnerability scans and dependency reviews
When contributing:
- Write tests for new functionality
- Ensure all tests pass: `npm test`
- Maintain or improve test coverage
- Follow existing code patterns and conventions
- Update documentation as needed
## Programmatic Usage

Clicksuite can also be used programmatically in your Node.js/TypeScript applications:

```typescript
import { Runner, Db, Context, getContext } from 'clicksuite';

// Option 1: Use the getContext helper (recommended)
const context = getContext({
  skipSchemaUpdate: true, // Skip schema.sql generation
  verbose: true,          // Enable detailed logging
  dryRun: false           // Execute migrations (not preview)
});

// Option 2: Create a context manually
const manualContext: Context = {
  url: 'http://default@localhost:8123/my_database',
  migrationsDir: '/path/to/migrations',
  environment: 'development',
  nonInteractive: false,
  skipSchemaUpdate: true // New option to skip schema.sql updates
};

// Run migrations programmatically
const runner = new Runner(context);
await runner.init();
await runner.migrate();

// Direct database access
const db = new Db(context);
const tables = await db.getDatabaseTables();
await db.close();
```
The `Context` interface supports all CLI options plus programmatic-specific settings:

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `url` | `string` | **Required.** ClickHouse connection URL | - |
| `migrationsDir` | `string` | **Required.** Absolute path to migrations directory | - |
| `environment` | `string` | **Required.** Environment name (`development`, `test`, `production`) | - |
| `cluster` | `string?` | ClickHouse cluster name for distributed setups | `undefined` |
| `migrationsDatabase` | `string?` | Database for the migrations tracking table | `'default'` |
| `nonInteractive` | `boolean?` | Skip confirmation prompts (useful for CI/CD) | `false` |
| `dryRun` | `boolean?` | Preview migrations without executing | `false` |
| `verbose` | `boolean?` | Show detailed SQL logs and debug info | `false` |
| `skipSchemaUpdate` | `boolean?` | **New!** Skip updating the `schema.sql` file after migrations | `false` |
## Schema Generation

By default, Clicksuite automatically generates a `schema.sql` file containing all database objects (tables, views, dictionaries) from all databases (excluding system databases) after successful migrations. This provides a complete snapshot of your database schema.

To disable `schema.sql` generation:

```typescript
// Programmatic usage
const context = getContext({
  skipSchemaUpdate: true
});

// Or with a manual context
const manualContext: Context = {
  url: 'http://default@localhost:8123/my_database',
  migrationsDir: '/path/to/migrations',
  environment: 'production',
  skipSchemaUpdate: true // Disables schema.sql generation
};
```
When to disable schema.sql generation:
- High-frequency migration runs where file I/O should be minimized
- Environments where the migrations directory is read-only
- When using custom schema management tools
- For performance optimization in CI/CD pipelines
Schema.sql features:
- Multi-database support: Includes objects from all non-system databases
- Complete coverage: Tables, materialized views, and dictionaries
- Automatic organization: Grouped by object type with clear sections
- Environment tracking: Shows which environment generated the schema
- Timestamp tracking: Records when the schema was last updated
All TypeScript types are exported for easy integration:

- `Context` - configuration interface for all options
- `MigrationFile` - represents a parsed migration file
- `MigrationRecord` - database migration tracking record
- `MigrationStatus` - migration status with state information
- `MigrationState` - migration state enum (`'APPLIED'`, `'PENDING'`, `'INACTIVE'`)
- `RawMigrationFileContent` - raw YAML migration file structure
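These exported types make status handling type-safe. A minimal sketch (the type shapes below mirror this README's descriptions; they are assumptions, not copied from Clicksuite's source):

```typescript
// Local stand-ins for the exported types, shaped per the README.
type MigrationState = 'APPLIED' | 'PENDING' | 'INACTIVE';

interface MigrationStatus {
  version: string;
  name: string;
  state: MigrationState;
}

const statuses: MigrationStatus[] = [
  { version: '20230101120000', name: 'create users', state: 'APPLIED' },
  { version: '20230201120000', name: 'create orders', state: 'PENDING' },
];

// Select only the migrations that still need to run.
const pending = statuses.filter((s) => s.state === 'PENDING');
console.log(pending.map((s) => s.version)); // [ '20230201120000' ]
```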
Custom migration workflow:

```typescript
import { Runner, getContext } from 'clicksuite';

async function customMigrationWorkflow() {
  const context = getContext({
    environment: 'production',
    skipSchemaUpdate: true, // Skip schema.sql for performance
    nonInteractive: true,   // No prompts in production
    verbose: false          // Clean output for logs
  });

  const runner = new Runner(context);

  // Initialize if needed
  await runner.init();

  // Check status before running
  await runner.status();

  // Run migrations
  await runner.up();

  console.log('Production migration completed successfully');
}
```
Development workflow with schema tracking:

```typescript
import { Runner, getContext } from 'clicksuite';

async function developmentWorkflow() {
  const context = getContext({
    environment: 'development',
    skipSchemaUpdate: false, // Generate schema.sql for development
    verbose: true,           // Detailed logs for debugging
    dryRun: false            // Actually execute migrations
  });

  const runner = new Runner(context);
  await runner.migrate();

  // schema.sql is automatically updated with the latest database state
  console.log('Development migration completed, schema.sql updated');
}
```
See the examples directory for more detailed usage examples.
## Contributing

Contributions are welcome! Please feel free to open an issue or submit a pull request on the GitHub repository.

## License

This project is licensed under the MIT License.