Last Updated: November 21, 2025
Installation & Setup
brew install cockroachdb/tap/cockroach
Install CockroachDB (macOS)
cockroach start-single-node --insecure --listen-addr=localhost
Start single-node cluster (dev)
cockroach start --insecure --store=node1 --listen-addr=localhost:26257 --http-addr=localhost:8080 --join=localhost:26257,localhost:26258,localhost:26259
Start first node of a multi-node cluster (repeat per node with a unique --store, --listen-addr, and --http-addr)
cockroach init --insecure --host=localhost:26257
Initialize cluster
cockroach sql --insecure
Open SQL shell
cockroach cert create-ca --certs-dir=certs --ca-key=ca.key
Create CA certificate (production)
cockroach cert create-node localhost --certs-dir=certs --ca-key=ca.key
Create node certificate
cockroach cert create-client root --certs-dir=certs --ca-key=ca.key
Create client certificate
CLI Commands
cockroach node status --insecure
Show cluster node status
cockroach node ls --insecure
List all nodes
cockroach node decommission 3 --insecure
Decommission node by ID
cockroach node drain --insecure
Drain a node before shutdown (cockroach quit is deprecated; send SIGTERM after draining)
cockroach debug zip ./debug.zip --insecure
Collect debug information
cockroach workload init movr
Initialize sample workload
cockroach workload run movr --duration=1m
Run benchmark workload
cockroach gen haproxy --insecure
Generate HAProxy configuration
cockroach version
Show CockroachDB version
Database Operations
CREATE DATABASE mydb;
Create new database
SHOW DATABASES;
List all databases
USE mydb;
Switch to database
DROP DATABASE mydb CASCADE;
Delete database and contents
CREATE TABLE users (id UUID PRIMARY KEY DEFAULT gen_random_uuid(), name STRING);
Create table with UUID primary key
SHOW TABLES;
List tables in current database
SHOW CREATE TABLE users;
Show table schema
ALTER TABLE users ADD COLUMN email STRING UNIQUE;
Add column to table
BACKUP DATABASE mydb INTO 's3://bucket/backup?AWS_ACCESS_KEY_ID=x';
Backup database to S3 (the older BACKUP ... TO form is deprecated)
RESTORE DATABASE mydb FROM LATEST IN 's3://bucket/backup';
Restore database from the most recent backup
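In production, backups are usually scheduled rather than run by hand. A sketch using CREATE SCHEDULE; the label, bucket, and cadence are illustrative:

```sql
-- Nightly full backup of mydb, starting immediately.
CREATE SCHEDULE nightly_mydb
    FOR BACKUP DATABASE mydb INTO 's3://bucket/backup?AWS_ACCESS_KEY_ID=x'
    RECURRING '@daily'
    WITH SCHEDULE OPTIONS first_run = 'now';
```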
Multi-Region Configuration
ALTER DATABASE mydb SET PRIMARY REGION "us-east-1";
Set primary region
ALTER DATABASE mydb ADD REGION "us-west-1";
Add secondary region
ALTER DATABASE mydb ADD REGION "eu-west-1";
Add another region
SHOW REGIONS FROM DATABASE mydb;
List database regions
ALTER DATABASE mydb DROP REGION "eu-west-1";
Remove region from database
ALTER DATABASE mydb SURVIVE REGION FAILURE;
Enable region failure survival
ALTER DATABASE mydb SURVIVE ZONE FAILURE;
Enable zone failure survival
SHOW SURVIVAL GOAL FROM DATABASE mydb;
Show current survival goal
Table Locality Patterns
ALTER TABLE users SET LOCALITY GLOBAL;
Replicate table across all regions
ALTER TABLE users SET LOCALITY REGIONAL BY TABLE;
Table tied to primary region
ALTER TABLE users SET LOCALITY REGIONAL BY TABLE IN "us-west-1";
Pin table to specific region
ALTER TABLE users SET LOCALITY REGIONAL BY ROW;
Each row can be in different region
ALTER TABLE users ADD COLUMN region crdb_internal_region NOT NULL DEFAULT gateway_region()::crdb_internal_region;
Add a custom region column for row-level locality (REGIONAL BY ROW adds a hidden crdb_region column if you don't supply one)
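An end-to-end sketch of the row-level pattern: convert users to REGIONAL BY ROW with a custom partitioning column. Region names are illustrative; if you omit the AS clause, CockroachDB manages a hidden crdb_region column for you:

```sql
-- Assumes mydb already has primary and secondary regions configured.
ALTER TABLE users
    ADD COLUMN region crdb_internal_region NOT NULL
    DEFAULT gateway_region()::crdb_internal_region;

-- Partition rows by the custom column.
ALTER TABLE users SET LOCALITY REGIONAL BY ROW AS region;

-- New rows default to the gateway node's region,
-- or the region can be set explicitly per row.
INSERT INTO users (name, region) VALUES ('alice', 'us-west-1');
```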
SHOW CREATE TABLE users;
Show table locality (the LOCALITY clause appears in the DDL output)
Replication & Zones
SHOW ZONE CONFIGURATIONS;
List all zone configurations
ALTER RANGE default CONFIGURE ZONE USING num_replicas = 3;
Set replica count
ALTER TABLE users CONFIGURE ZONE USING num_replicas = 5;
Set replicas for specific table
ALTER DATABASE mydb CONFIGURE ZONE USING range_min_bytes = 16777216;
Set minimum range size (16 MiB)
ALTER RANGE default CONFIGURE ZONE USING gc.ttlseconds = 90000;
Set garbage collection TTL
SHOW ZONE CONFIGURATION FOR TABLE users;
Show table zone config
Default replication factor: 3
Data replicated to 3 nodes by default
Raft consensus protocol
Ensures consistency across replicas
Transactions & Consistency
BEGIN; ... COMMIT;
Explicit transaction
BEGIN; ... ROLLBACK;
Rollback transaction
SET TRANSACTION PRIORITY HIGH;
Set transaction priority
SET TRANSACTION AS OF SYSTEM TIME '-10s';
Historical queries (time travel)
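The same time-travel clause also works per statement, outside any explicit transaction:

```sql
-- Read the table as it existed ten seconds ago
-- (must be within the range's gc.ttlseconds window).
SELECT id, name FROM users AS OF SYSTEM TIME '-10s';
```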
SERIALIZABLE isolation level (default)
Strongest consistency guarantees
SELECT ... FOR UPDATE;
Lock rows for update
SAVEPOINT cockroach_restart;
Create savepoint for retry logic
RELEASE SAVEPOINT cockroach_restart;
Release savepoint
ROLLBACK TO SAVEPOINT cockroach_restart;
Retry after serialization error
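The savepoint pattern above is normally wrapped in a client-side retry loop. A minimal, driver-agnostic sketch; the `run_transaction` name and the stubbed `SerializationError` are illustrative, not a CockroachDB API:

```python
import random
import time

class SerializationError(Exception):
    """Stand-in for a driver error carrying SQLSTATE 40001."""
    sqlstate = "40001"

def run_transaction(txn, max_retries=5):
    """Run txn(), retrying on serialization failures (SQLSTATE 40001).

    With a real driver you would BEGIN, SAVEPOINT cockroach_restart,
    and ROLLBACK TO SAVEPOINT before each retry; here the database
    interaction is stubbed out so the retry logic stands alone.
    """
    for attempt in range(1, max_retries + 1):
        try:
            return txn()
        except SerializationError:
            if attempt == max_retries:
                raise
            # Exponential backoff with jitter to reduce contention.
            time.sleep((2 ** attempt) * 0.01 * random.random())

# Example: a transaction that conflicts twice, then succeeds.
attempts = {"n": 0}

def flaky_txn():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise SerializationError()
    return "committed"

result = run_transaction(flaky_txn)
```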
Monitoring & Observability
http://localhost:8080
Admin UI dashboard
SHOW CLUSTER SETTING cluster.organization;
Show a single cluster setting (SHOW ALL CLUSTER SETTINGS lists them all)
SET CLUSTER SETTING server.time_until_store_dead = '5m';
Configure cluster setting
SHOW JOBS;
List background jobs
SHOW QUERIES;
Show running queries
SHOW SESSIONS;
List active sessions
EXPLAIN SELECT * FROM users WHERE id = $1;
Show query execution plan
EXPLAIN ANALYZE SELECT * FROM users;
Execute and show actual performance
SELECT * FROM crdb_internal.node_metrics;
Query internal metrics
http://localhost:8080/_status/vars
Prometheus metrics endpoint
Performance Optimization
CREATE INDEX ON users (email);
Create secondary index
CREATE INDEX ON users (name) STORING (email);
Covering index with STORING
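To verify a covering index is actually used, inspect the plan; with the STORING index above, a lookup by name can be served from the index alone (the default index name users_name_idx is assumed):

```sql
-- Look for a scan of users@users_name_idx with no index join in the output.
EXPLAIN SELECT name, email FROM users WHERE name = 'alice';
```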
SHOW INDEX FROM users;
List table indexes
DROP INDEX users@users_email_idx;
Drop index
Use UUID for primary keys
Avoid hotspots with auto-increment
UPSERT for high-contention writes
Better than INSERT ON CONFLICT
Batch INSERT statements
More efficient than single inserts
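The two write patterns above as concrete statements; the counters table is hypothetical:

```sql
-- UPSERT writes blindly by primary key: cheaper than
-- INSERT ... ON CONFLICT when you always want last-write-wins.
UPSERT INTO counters (name, value) VALUES ('page_hits', 1);

-- Batch multiple rows into one INSERT to cut round trips.
INSERT INTO users (name) VALUES ('alice'), ('bob'), ('carol');
```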
Use follower reads for analytics
Reduce latency for read-heavy workloads
SELECT * FROM users AS OF SYSTEM TIME follower_read_timestamp();
Read slightly stale data from the nearest replica
Avoid SELECT * in production
Query only needed columns
Connection Strings
postgresql://root@localhost:26257/defaultdb?sslmode=disable
Insecure local connection
postgresql://user:pass@host:26257/mydb?sslmode=verify-full
Secure production connection
cockroach sql --url "postgresql://user@host/db"
Connect via connection string
PostgreSQL wire protocol compatible
Use standard PostgreSQL drivers
Connection pooling recommended
Use pgBouncer or app-level pooling
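Because CockroachDB speaks the PostgreSQL wire protocol, the connection URL above can be split into ordinary driver keyword arguments. A stdlib-only sketch with no CockroachDB-specific API involved:

```python
from urllib.parse import urlsplit, parse_qs

def parse_conn_url(url):
    """Break a postgresql:// URL into parameters a PostgreSQL driver accepts."""
    parts = urlsplit(url)
    query = {k: v[0] for k, v in parse_qs(parts.query).items()}
    return {
        "user": parts.username,
        "password": parts.password,
        "host": parts.hostname,
        "port": parts.port or 26257,  # CockroachDB's default SQL port
        "dbname": parts.path.lstrip("/") or "defaultdb",
        **query,  # e.g. sslmode
    }

params = parse_conn_url(
    "postgresql://user:pass@host:26257/mydb?sslmode=verify-full"
)
```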
CockroachDB Cloud
Serverless tier available
Free tier with a monthly storage and request allowance
Dedicated clusters
Production-ready managed hosting
AWS, GCP, Azure support
Deploy on major cloud providers
Automatic backups
Managed backup and restore
Automated upgrades
Zero-downtime version updates
Built-in monitoring
Metrics and alerting included
Private connectivity options
VPC peering, AWS PrivateLink
Best Practices
Run at least 3 nodes for production
Required for fault tolerance
Use odd number of nodes
Prevents split-brain scenarios
Enable SSL/TLS in production
Secure node and client communication
Implement retry logic for serialization errors
Handle 40001 error codes
Use appropriate survival goals
Balance availability vs latency
Monitor query performance regularly
Use EXPLAIN ANALYZE for slow queries
Set up alerts for node failures
Monitor cluster health proactively
Test backup and restore procedures
Verify disaster recovery plans
Use REGIONAL BY ROW for multi-region apps
Optimize for geo-distributed users
Keep CockroachDB updated
Regular updates for features and fixes
Pro Tip:
CockroachDB shines in multi-region deployments! Use REGIONAL BY ROW for user data to achieve low latency globally while maintaining strong consistency. Set appropriate survival goals (zone vs region) based on your availability requirements. Always implement transaction retry logic for serialization errors!