
Temporal Cluster deployment guide

This guide provides a comprehensive overview of how to deploy and operate a Temporal Cluster in a live environment.

WORK IN PROGRESS

This guide is a work in progress. Some sections may be incomplete. Information may change at any time.

Legacy production deployment information is available here

Visibility store

A Visibility store is set up as a part of your Persistence store to enable listing and filtering details about Workflow Executions that exist on your Temporal Cluster.

A Visibility store is required in a Temporal Cluster setup because the Temporal Web UI and tctl use it to pull Workflow Execution data; it also enables features such as batch operations on groups of Workflow Executions.

With the Visibility store, you can use List Filters with Search Attributes to list and filter the Workflow Executions that you want to review. Setting up Advanced Visibility enables you to create and use multiple custom Search Attributes in your List Filters. For details, see Search Attributes.
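
For example, with Advanced Visibility enabled you can list Workflow Executions from the command line by passing a List Filter to tctl. A minimal sketch; the WorkflowType value is a placeholder:

tctl workflow list --query 'WorkflowType = "YourWorkflowType" AND ExecutionStatus = "Running"'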

Note that if you use MySQL, PostgreSQL, or SQLite as your Visibility store, Temporal Server v1.20 and later supports Advanced Visibility features on MySQL (v8.0.17 and later), PostgreSQL (v12 and later), and SQLite (v3.31.0 and later), in addition to Elasticsearch.

To enable Advanced Visibility on your SQL database, upgrade to a supported database version and use the version-specific plugin (mysql8 or postgres12) in your Persistence configuration, as described in the following sections.

Supported databases

The following databases are supported as Visibility stores:

  • MySQL
  • PostgreSQL
  • SQLite
  • Cassandra
  • Elasticsearch

You can use any combination of the supported databases for your Persistence and Visibility stores.

MySQL

Support, stability, and dependency info
  • MySQL v5.7 and later.
  • Support for MySQL v5.7 will be deprecated for all Temporal Server versions after v1.20.
  • With Temporal Server version 1.20 and later, Advanced Visibility is available on MySQL v8.0.17 and later.

You can set MySQL as your Visibility store. Verify supported versions before you proceed.

If using MySQL v8.0.17 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster. For details, see Search Attributes.

Persistence configuration

Set your MySQL Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store mysql-visibility and define the datastore configuration in your Temporal Cluster Configuration YAML.

#...
persistence:
  #...
  visibilityStore: mysql-visibility
  #...
  datastores:
    default:
      #...
    mysql-visibility:
      sql:
        pluginName: "mysql" # if using MySQL 8.0.17 or later with Temporal Server v1.20, use the "mysql8" plugin for Advanced Visibility capabilities
        databaseName: "temporal_visibility"
        connectAddr: " " # remote address of this database; for example, 127.0.0.1:3306
        connectProtocol: " " # protocol example: tcp
        user: "username_for_auth"
        password: "password_for_auth"
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: "1h"
#...

For details on the configuration parameters and values, see Cluster configuration.

To enable Advanced Visibility features on your MySQL Visibility store, upgrade to MySQL v8.0.17 or later with Temporal Server v1.20 or later. See Upgrade Server for details on how to upgrade your Temporal Server and database schemas.

For example configuration templates, see MySQL Visibility store configuration.

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schemas defined (by supported versions) in the schema/mysql directory of the Temporal repository.

The following example shows how the auto-setup.sh script sets up your Visibility store.

#...
# set your MySQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${MYSQL_SEEDS:=}"
: "${MYSQL_USER:=}"
: "${MYSQL_PWD:=}"
: "${MYSQL_TX_ISOLATION_COMPAT:=false}"

#...
# set connection details
#...
# set up MySQL schema
setup_mysql_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/mysql/${MYSQL_VERSION_DIR}/visibility/versioned
    if [[ ${SKIP_DB_CREATE} != true ]]; then
        temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" create
    fi
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" setup-schema -v 0.0
    temporal-sql-tool --ep "${MYSQL_SEEDS}" -u "${MYSQL_USER}" -p "${DB_PORT}" "${MYSQL_CONNECT_ATTR[@]}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}

Note that the script uses temporal-sql-tool to run the setup.
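
If you are not using the auto-setup image, you can run the equivalent steps manually with temporal-sql-tool. A minimal sketch, assuming a local MySQL 8 instance at 127.0.0.1:3306 with placeholder credentials and the mysql8 plugin:

# create the Visibility database, apply the base schema, then apply versioned updates
temporal-sql-tool --ep 127.0.0.1 -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility create
temporal-sql-tool --ep 127.0.0.1 -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility setup-schema -v 0.0
temporal-sql-tool --ep 127.0.0.1 -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility update-schema -d ./schema/mysql/v8/visibility/versioned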

PostgreSQL

Support, stability, and dependency info
  • PostgreSQL v9.6 and later.
  • With Temporal Cluster version 1.20 and later, Advanced Visibility is available on PostgreSQL v12 and later.
  • Support for PostgreSQL v9.6 through v11 will be deprecated for all Temporal Server versions after v1.20; we recommend upgrading to PostgreSQL 12 or later.

You can set PostgreSQL as your Visibility store. Verify supported versions before you proceed.

If using PostgreSQL v12 or later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster. For details, see Search Attributes.

Persistence configuration

Set your PostgreSQL Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store postgres-visibility and define the datastore configuration in your Temporal Cluster Configuration YAML.

#...
persistence:
  #...
  visibilityStore: postgres-visibility
  #...
  datastores:
    default:
      #...
    postgres-visibility:
      sql:
        pluginName: "postgres" # if using PostgreSQL v12 or later with Temporal Server v1.20, use the "postgres12" plugin for Advanced Visibility capabilities
        databaseName: "temporal_visibility"
        connectAddr: " " # remote address of this database; for example, 127.0.0.1:5432
        connectProtocol: " " # protocol example: tcp
        user: "username_for_auth"
        password: "password_for_auth"
        maxConns: 2
        maxIdleConns: 2
        maxConnLifetime: "1h"
#...

To enable Advanced Visibility features on your PostgreSQL Visibility store, upgrade to PostgreSQL v12 or later with Temporal Server v1.20 or later. See Upgrade Server for details on how to upgrade your Temporal Server and database schemas.

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schemas defined (by supported versions) in the schema/postgresql directory of the Temporal repository.

The following example shows how the auto-setup.sh script is used to set up your Visibility store.

#...
# set your PostgreSQL environment variables
: "${DBNAME:=temporal}"
: "${VISIBILITY_DBNAME:=temporal_visibility}"
: "${DB_PORT:=}"
: "${POSTGRES_SEEDS:=}"
: "${POSTGRES_USER:=}"
: "${POSTGRES_PWD:=}"

#... set connection details
# set up PostgreSQL schema
setup_postgres_schema() {
    #...

    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/postgresql/${POSTGRES_VERSION_DIR}/visibility/versioned
    if [[ ${VISIBILITY_DBNAME} != "${POSTGRES_USER}" && ${SKIP_DB_CREATE} != true ]]; then
        temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" create
    fi
    temporal-sql-tool --plugin postgres --ep "${POSTGRES_SEEDS}" -u "${POSTGRES_USER}" -p "${DB_PORT}" --db "${VISIBILITY_DBNAME}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}

Note that the script uses temporal-sql-tool to run the setup.
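
If you are not using the auto-setup image, you can run the equivalent steps manually. A minimal sketch, assuming a local PostgreSQL 12 or later instance at 127.0.0.1:5432 with placeholder credentials and the postgres12 plugin:

# create the Visibility database, apply the base schema, then apply versioned updates
temporal-sql-tool --ep 127.0.0.1 -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility create
temporal-sql-tool --ep 127.0.0.1 -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility setup-schema -v 0.0
temporal-sql-tool --ep 127.0.0.1 -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility update-schema -d ./schema/postgresql/v12/visibility/versioned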

SQLite

Support, stability, and dependency info
  • SQLite v3.31.0 and later.

You can set SQLite as your Visibility store. Verify supported versions before you proceed.

Temporal supports only an in-memory database with SQLite; this means that the database is automatically created when Temporal Server starts and is destroyed when Temporal Server stops.

You can change the configuration to use a file-based database so that it is preserved when Temporal Server stops. However, if you use a file-based SQLite database, upgrading your database schema to enable Advanced Visibility features is not supported; in this case, you must delete the database and create it again to upgrade.

If using SQLite v3.31.0 and later as your Visibility store with Temporal Server v1.20 and later, any custom Search Attributes that you create must be associated with a Namespace in that Cluster. For details, see Search Attributes.

Persistence configuration

Set your SQLite Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store sqlite-visibility and define the datastore configuration in your Temporal Cluster Configuration YAML.

persistence:
  # ...
  visibilityStore: sqlite-visibility
  # ...
  datastores:
    # ...
    sqlite-visibility:
      sql:
        user: "username_for_auth"
        password: "password_for_auth"
        pluginName: "sqlite"
        databaseName: "default"
        connectAddr: "localhost"
        connectProtocol: "tcp"
        connectAttributes:
          mode: "memory"
          cache: "private"
        maxConns: 1
        maxIdleConns: 1
        maxConnLifetime: "1h"
        tls:
          enabled: false
          caFile: ""
          certFile: ""
          keyFile: ""
          enableHostVerification: false
          serverName: ""

SQLite (v3.31.0 and later) has Advanced Visibility enabled by default.

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schemas defined (by supported versions) in https://github.com/temporalio/temporal/blob/master/schema/sqlite/v3/visibility/schema.sql.

For an example of setting up the SQLite schema, see Temporalite setup.

Cassandra

You can set Cassandra as your Visibility store. Verify supported versions before you proceed.

Advanced Visibility is not supported with Cassandra. To enable Advanced Visibility features, use any of the supported databases, such as MySQL, PostgreSQL, SQLite, or Elasticsearch, as your Advanced Visibility store. We recommend using Elasticsearch for any Temporal Cluster setup that handles more than a few Workflow Executions because it supports the request load on the Visibility store and helps optimize performance.

Persistence configuration

Set your Cassandra Visibility store name in the visibilityStore parameter in your Persistence configuration, and then define the Visibility store configuration under datastores.

The following example shows how to set a Visibility store cass-visibility and define the datastore configuration in your Temporal Cluster Configuration YAML.

#...
persistence:
  #...
  visibilityStore: cass-visibility
  #...
  datastores:
    default:
      #...
    cass-visibility:
      cassandra:
        hosts: "127.0.0.1"
        keyspace: "temporal_visibility"
#...

Database schema and setup

Visibility data is stored in a database table called executions_visibility that must be set up according to the schemas defined (by supported versions) in https://github.com/temporalio/temporal/tree/master/schema/cassandra/visibility.

The following example shows how the auto-setup.sh script is used to set up your Visibility store.

#...
# set your Cassandra environment variables
: "${KEYSPACE:=temporal}"
: "${VISIBILITY_KEYSPACE:=temporal_visibility}"

: "${CASSANDRA_SEEDS:=}"
: "${CASSANDRA_PORT:=9042}"
: "${CASSANDRA_USER:=}"
: "${CASSANDRA_PASSWORD:=}"
: "${CASSANDRA_TLS_ENABLED:=}"
: "${CASSANDRA_CERT:=}"
: "${CASSANDRA_CERT_KEY:=}"
: "${CASSANDRA_CA:=}"
: "${CASSANDRA_REPLICATION_FACTOR:=1}"
#...
# set connection details
#...
# set up Cassandra schema
setup_cassandra_schema() {
    #...
    # use valid schema for the version of the database you want to set up for Visibility
    VISIBILITY_SCHEMA_DIR=${TEMPORAL_HOME}/schema/cassandra/visibility/versioned
    if [[ ${SKIP_DB_CREATE} != true ]]; then
        temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" create -k "${VISIBILITY_KEYSPACE}" --rf "${CASSANDRA_REPLICATION_FACTOR}"
    fi
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" setup-schema -v 0.0
    temporal-cassandra-tool --ep "${CASSANDRA_SEEDS}" -k "${VISIBILITY_KEYSPACE}" update-schema -d "${VISIBILITY_SCHEMA_DIR}"
    #...
}
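
If you are not using the auto-setup image, the same keyspace and schema setup can be run manually. A minimal sketch, assuming a local Cassandra node at 127.0.0.1 and a replication factor of 1:

# create the Visibility keyspace, apply the base schema, then apply versioned updates
temporal-cassandra-tool --ep 127.0.0.1 create -k temporal_visibility --rf 1
temporal-cassandra-tool --ep 127.0.0.1 -k temporal_visibility setup-schema -v 0.0
temporal-cassandra-tool --ep 127.0.0.1 -k temporal_visibility update-schema -d ./schema/cassandra/visibility/versioned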

Elasticsearch

Support, stability, and dependency info
  • Elasticsearch v8 is supported beginning with Temporal Server version 1.18.0.
  • Elasticsearch v7.10 is supported beginning with Temporal Server version 1.17.0.
  • Elasticsearch v6.8 is supported through Temporal Server version 1.17.x.
  • Elasticsearch v6.8 and v7.10 are explicitly supported with AWS Elasticsearch.

You can integrate Elasticsearch with your Temporal Cluster for Advanced Visibility to take on the Visibility request load. We recommend using Elasticsearch for large-scale operations on the Temporal Cluster.

To integrate Elasticsearch with your Temporal Cluster, edit the persistence section of your development.yaml configuration file and run the index schema setup commands.

note

The following steps are needed only if you have a "plain" Temporal Server Docker image.

If you operate a Temporal Cluster using our Helm charts or Docker Compose, the Elasticsearch index schema and index are created automatically using the auto-setup Docker image.

Persistence configuration

  1. Add the advancedVisibilityStore: es-visibility key-value pair to the persistence section. For example usage, you can look at several development_es.yaml files in the temporalio/temporal repo. The configuration instructs the Temporal Cluster how and where to connect to Elasticsearch storage.
persistence:
  ...
  advancedVisibilityStore: es-visibility

  2. Define the Elasticsearch datastore connection information under the es-visibility key:

persistence:
  ...
  advancedVisibilityStore: es-visibility
  datastores:
    ...
    es-visibility:
      elasticsearch:
        version: "v7"
        url:
          scheme: "http"
          host: "127.0.0.1:9200"
        indices:
          visibility: temporal_visibility_v1_dev

Index schema and index

Run the following commands to create the index schema and index:

# ES_SERVER is the URL of Elasticsearch server; for example, "http://localhost:9200".
SETTINGS_URL="${ES_SERVER}/_cluster/settings"
SETTINGS_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/cluster_settings_${ES_VERSION}.json
TEMPLATE_URL="${ES_SERVER}/_template/temporal_visibility_v1_template"
SCHEMA_FILE=${TEMPORAL_HOME}/schema/elasticsearch/visibility/index_template_${ES_VERSION}.json
INDEX_URL="${ES_SERVER}/${ES_VIS_INDEX}"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${SETTINGS_URL}" -H "Content-Type: application/json" --data-binary "@${SETTINGS_FILE}" --write-out "\n"
curl --fail --user "${ES_USER}":"${ES_PWD}" -X PUT "${TEMPLATE_URL}" -H 'Content-Type: application/json' --data-binary "@${SCHEMA_FILE}" --write-out "\n"
curl --user "${ES_USER}":"${ES_PWD}" -X PUT "${INDEX_URL}" --write-out "\n"
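
After running these commands, you can confirm that the template and index were created by querying standard Elasticsearch APIs; the variables are the same ones used above:

# Verify the index template and the Visibility index
curl --user "${ES_USER}":"${ES_PWD}" "${ES_SERVER}/_template/temporal_visibility_v1_template?pretty"
curl --user "${ES_USER}":"${ES_PWD}" "${ES_SERVER}/_cat/indices/${ES_VIS_INDEX}?v"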

Elasticsearch privileges

Ensure that the Elasticsearch user that Temporal connects with has the required privileges on the Temporal Visibility index, including read and write access to the index (and permission to create the index and index template during setup).

Custom Search Attributes

To manage your custom Search Attributes on Temporal Cloud, use tcld. With Temporal Cloud, you can create and rename custom Search Attributes.

To manage your custom Search Attributes on a self-hosted Temporal Cluster, use tctl. With a self-hosted Temporal Cluster, you can create and remove custom Search Attributes. Note that if you use a SQL database with Temporal Server v1.20 and later, creating a custom Search Attribute creates a mapping to a database field name in the Visibility store custom_search_attributes table. Removing a custom Search Attribute removes this mapping with the database field name but does not remove the data. If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed. This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute. These constraints do not apply if you use Elasticsearch.

Create custom Search Attributes

Add custom Search Attributes to your Visibility store using tctl for a self-hosted Temporal Cluster and tcld for Temporal Cloud.

Creating a custom Search Attribute in your Visibility store makes it available to use in your Workflow metadata and List Filters.

On Temporal Cloud

To create custom Search Attributes on Temporal Cloud, use tcld namespace search-attributes add. For example, to add a custom Search Attribute "CustomSA" to your Temporal Cloud Namespace "YourNamespace", run the following command:

tcld namespace search-attributes add --namespace YourNamespace --search-attribute "CustomSA"

On self-hosted Temporal Cluster

If you're self-hosting your Temporal Cluster, verify whether your Visibility database version supports Advanced Visibility features.

To create custom Search Attributes in your self-hosted Temporal Cluster Visibility store, use tctl search-attribute create with --name and --type modifiers.

For example, to create a Search Attribute called CustomSA of type Keyword, run the following command:

tctl search-attribute create --name CustomSA --type Keyword

Note that if you use a SQL database with Advanced Visibility capabilities, you are required to specify a Namespace when creating a custom Search Attribute. For example:

tctl --ns yournamespace search-attribute create --name CustomSA --type Keyword

You can also create multiple custom Search Attributes when you set up your Visibility store.

For example, the auto-setup.sh script that is used to set up your local docker-compose Temporal Cluster creates custom Search Attributes in the Visibility store, as shown in the following code snippet from the script (for SQL databases).

add_custom_search_attributes() {
    until temporal operator search-attribute list --namespace "${DEFAULT_NAMESPACE}"; do
        echo "Waiting for namespace cache to refresh..."
        sleep 1
    done
    echo "Namespace cache refreshed."

    echo "Adding Custom*Field search attributes."

    temporal operator search-attribute create --namespace "${DEFAULT_NAMESPACE}" --yes \
        --name CustomKeywordField --type Keyword \
        --name CustomStringField --type Text \
        --name CustomTextField --type Text \
        --name CustomIntField --type Int \
        --name CustomDatetimeField --type Datetime \
        --name CustomDoubleField --type Double \
        --name CustomBoolField --type Bool
}

Note that this script has been updated for Temporal Server v1.20, which requires associating every custom Search Attribute with a Namespace when using a SQL database.

For Temporal Server v1.19 and earlier, or if using Elasticsearch for Advanced Visibility, you can create custom Search Attributes without a Namespace association, as shown in the following example.

add_custom_search_attributes() {
    echo "Adding Custom*Field search attributes."
    tctl --auto_confirm admin cluster add-search-attributes \
        --name CustomKeywordField --type Keyword \
        --name CustomStringField --type Text \
        --name CustomTextField --type Text \
        --name CustomIntField --type Int \
        --name CustomDatetimeField --type Datetime \
        --name CustomDoubleField --type Double \
        --name CustomBoolField --type Bool
}

When your Visibility store is set up and running, these custom Search Attributes are available to use in your Workflow code.
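
For example, you can set a value for one of these attributes when starting a Workflow Execution and later find it with a List Filter. A minimal sketch using the temporal CLI (the same CLI used by the auto-setup script); the Workflow Type, Task Queue, and Workflow Id are placeholders, and flag names should be confirmed against your CLI version:

# Start a Workflow Execution with a custom Search Attribute value
temporal workflow start \
    --type YourWorkflowType \
    --task-queue your-task-queue \
    --workflow-id your-workflow-id \
    --search-attribute 'CustomKeywordField="your-value"'

# Find it again with a List Filter that references the same attribute
temporal workflow list --query 'CustomKeywordField = "your-value"'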

Remove custom Search Attributes

To remove a Search Attribute key from your self-hosted Temporal Cluster Visibility store, use the command tctl search-attribute remove. Removing Search Attributes is not supported on Temporal Cloud.

For example, if using Elasticsearch for Advanced Visibility, to remove a custom Search Attribute called CustomSA of type Keyword use the following command:

tctl search-attribute remove --name CustomSA

With Temporal Server v1.20, if using a SQL database for Advanced Visibility, you need to specify the Namespace in your command, as shown in the following command:

tctl --ns yournamespace search-attribute remove --name CustomSA

To check whether the Search Attribute was removed, run tctl search-attribute list and check the list. If you're on Temporal Server v1.20 and later, specify the Namespace from which you removed the Search Attribute. For example, tctl --ns yournamespace search-attribute list.

Note that if you use SQL databases with Temporal Server v1.20 and later, a new custom Search Attribute is mapped to a database field name in the Visibility store custom_search_attributes table. Removing this custom Search Attribute removes the mapping with the database field name but does not remove the data. If you remove a custom Search Attribute and add a new one, the new custom Search Attribute might be mapped to the database field of the one that was recently removed. This might cause unexpected results when you use the List API to retrieve results using the new custom Search Attribute. These constraints do not apply if you use Elasticsearch.

Archival

Archival is a feature that automatically backs up Workflow Execution Event Histories and Visibility data from Temporal Cluster persistence to a custom blob store.

Set up Archival

Archival consists of the following elements:
  • Configuration: Archival is controlled by the server configuration (i.e. the config/development.yaml file).
  • Provider: Location where the data should be archived. Supported providers are S3, GCloud, and the local file system.
  • URI: Specifies which provider should be used. The system uses the URI schema and path to make the determination.

Take the following steps to set up Archival:

  1. Set up the provider of your choice.
  2. Configure Archival.
  3. Create a Namespace that uses a valid URI and has Archival enabled.

Providers

Temporal directly supports several providers:

  • Local file system (filestore)
  • Google Cloud Storage (gstorage)
  • Amazon S3 (s3)

Make sure that you save the provider's storage location URI in a place where you can reference it later, because it is passed as a parameter when you create a Namespace.

Configuration

Archival configuration is defined in the config/development.yaml file. Let's look at an example configuration:

# Cluster level Archival config
archival:
  # Event History configuration
  history:
    # Archival is enabled at the cluster level
    state: "enabled"
    enableRead: true
    # Namespaces can use either the local filestore provider or the Google Cloud provider
    provider:
      filestore:
        fileMode: "0666"
        dirMode: "0766"
      gstorage:
        credentialsPath: "/tmp/gcloud/keyfile.json"

# Default values for a Namespace if none are provided at creation
namespaceDefaults:
  # Archival defaults
  archival:
    # Event History defaults
    history:
      state: "enabled"
      # New Namespaces will default to the local provider
      URI: "file:///tmp/temporal_archival/development"

You can disable Archival by setting archival.history.state and namespaceDefaults.archival.history.state to "disabled".

Example:

archival:
  history:
    state: "disabled"

namespaceDefaults:
  archival:
    history:
      state: "disabled"

The following table showcases acceptable values for each configuration and what purpose they serve.

Config | Acceptable values | Description
archival.history.state | enabled, disabled | Must be enabled to use the Archival feature with any Namespace in the cluster.
archival.history.enableRead | true, false | Must be true to read from the archived Event History.
archival.history.provider | Sub-provider configs are filestore, gstorage, s3, or your_custom_provider. | The default config specifies filestore.
archival.history.provider.filestore.fileMode | File permission string | File permissions of the archived files. We recommend using the default value of "0666" to avoid read/write issues.
archival.history.provider.filestore.dirMode | File permission string | Directory permissions of the archive directory. We recommend using the default value of "0766" to avoid read/write issues.
namespaceDefaults.archival.history.state | enabled, disabled | Default state of the Archival feature whenever a new Namespace is created without specifying the Archival state.
namespaceDefaults.archival.history.URI | Valid URI | Must be a URI of the file store location and match a schema that correlates to a provider.

Namespace creation

Although Archival is configured at the cluster level, it operates independently within each Namespace. If an Archival URI is not specified when a Namespace is created, the Namespace uses the value of namespaceDefaults.archival.history.URI from the config/development.yaml file. The Archival URI cannot be changed after the Namespace is created. Each Namespace supports only a single Archival URI, but each Namespace can use a different URI. A Namespace can safely switch Archival between enabled and disabled states as long as Archival is enabled at the cluster level.
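
For example, to register a Namespace with Archival enabled and an explicit URI, pass the URI when the Namespace is created. A sketch using tctl with the local filestore provider; the Namespace name is a placeholder, and the archival URI flag name should be confirmed against tctl namespace register --help for your tctl version:

tctl --ns your-archived-namespace namespace register \
    --gd false \
    --retention 3 \
    --history_archival_state enabled \
    --history_archival_uri "file:///tmp/temporal_archival/development"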

Archival is supported in Global Namespaces (Namespaces that span multiple Clusters). When Archival is running in a Global Namespace, it first runs on the active Cluster; later it runs on the standby Cluster. Before archiving, a history check is done to see what has been previously archived.

Test setup

To test Archival locally, start by running a Temporal server:

./temporal-server start

Then register a new Namespace with Archival enabled.

./tctl --ns samples-namespace namespace register --gd false --history_archival_state enabled --retention 3
note

If the retention period isn't set, it defaults to two days. The minimum retention period is one day. The maximum retention period is 30 days.

Setting the retention period to 0 results in the error A valid retention period is not set on request.

Next, run a sample Workflow such as the helloworld temporal sample.

When execution is finished, Archival occurs.

Retrieve archives

You can retrieve archived Event Histories by copying the workflowId and runId of the completed Workflow from the log output and running the following command:

./tctl --ns samples-namespace wf show --wid <workflowId> --rid <runId>

Custom Archiver

To archive data with a given provider by using the Archival feature, Temporal must have a corresponding Archiver component installed. The platform does not limit you to the existing providers. To use a provider that is not currently supported, you can create your own Archiver.

Create a new package

The first step is to create a new package for your implementation in /common/archiver. Create a directory in the archiver folder and arrange the structure to look like the following:

temporal/common/archiver
- filestore/ -- Filestore implementation
- provider/
- provider.go -- Provider of archiver instances
- yourImplementation/
- historyArchiver.go -- HistoryArchiver implementation
- historyArchiver_test.go -- Unit tests for HistoryArchiver
- visibilityArchiver.go -- VisibilityArchiver implementations
- visibilityArchiver_test.go -- Unit tests for VisibilityArchiver

Archiver interfaces

Next, define objects that implement the HistoryArchiver and the VisibilityArchiver interfaces.

The objects should live in historyArchiver.go and visibilityArchiver.go, respectively.

Update provider

Update the GetHistoryArchiver and GetVisibilityArchiver methods of the archiverProvider object in the /common/archiver/provider/provider.go file so that it knows how to create an instance of your archiver.

Add configs

Add configs for your archiver to the config/development.yaml file and then modify the HistoryArchiverProvider and VisibilityArchiverProvider structs in /common/config/config.go accordingly.

Custom archiver FAQ

If my custom Archive method can automatically be retried by the caller, how can I record and access progress between retries?

Handle this situation by using ArchiverOptions. Here is an example:

func (a *Archiver) Archive(ctx context.Context, URI string, request *ArchiveRequest, opts ...ArchiveOption) error {
    featureCatalog := GetFeatureCatalog(opts...) // this function is defined in options.go

    var progress progress

    // Check if the feature for recording progress is enabled.
    if featureCatalog.ProgressManager != nil {
        if err := featureCatalog.ProgressManager.LoadProgress(ctx, &progress); err != nil {
            // log some error message and return error if needed.
        }
    }

    // Your archiver implementation...

    // Record current progress.
    if featureCatalog.ProgressManager != nil {
        if err := featureCatalog.ProgressManager.RecordProgress(ctx, progress); err != nil {
            // log some error message and return error if needed.
        }
    }

    return nil
}

If my Archive method encounters an error that is non-retryable, how do I indicate to the caller that it should not retry?

func (a *Archiver) Archive(ctx context.Context, URI string, request *ArchiveRequest, opts ...ArchiveOption) error {
    featureCatalog := GetFeatureCatalog(opts...) // this function is defined in options.go

    err := yourArchiverImpl()

    if nonRetryableErr(err) {
        if featureCatalog.NonRetryableError != nil {
            return featureCatalog.NonRetryableError() // when the caller gets this error type back, it will not retry anymore.
        }
    }

    return err
}

How does my history archiver implementation read history?

The archiver package provides a utility called HistoryIterator, which is a wrapper of ExecutionManager. HistoryIterator is simpler to use than the HistoryManager that is available in the BootstrapContainer, so archiver implementations can choose to use it when reading Workflow histories. See the historyIterator.go file for more details. Use the filestore historyArchiver implementation as an example.

Should my archiver define its own error types?

Each archiver is free to define and return its own errors. However, many common errors that exist between archivers are already defined in common/archiver/constants.go.

Is there a generic query syntax for the visibility archiver?

Currently, no. But this is something we plan to do in the future. As for now, try to make your syntax similar to the one used by our advanced list Workflow API.

Upgrade Server

If a newer version of the Temporal Server is available, a notification appears in the Temporal Web UI.

info

If you are using a version that is older than 1.0.0, reach out to us at community.temporal.io to ask how to upgrade.

First, check whether a database schema upgrade is required for the version you want to upgrade to; if one is required, it is called out directly in the release notes. Some releases require changes to the schema and some do not. We ensure that any two consecutive versions are compatible in terms of database schema upgrades, features, and system behavior; however, there is no compatibility guarantee between any two non-consecutive versions.

When upgrading your Temporal Server version, ensure that you upgrade sequentially. For example, when upgrading from v1.n.x, always upgrade to v1.n+1.x (or the next available version) and so on until you get to the required version.

The Temporal Server upgrade updates or rewrites data in the old format to the format introduced in the newer version. Because Temporal Server guarantees backward compatibility only between two consecutive minor versions, and because older versions of the code are eventually removed from the code base, skipping versions when upgrading might cause older formats to become unrecognizable. If data in the old format can't be read and rewritten to the new format, the upgrade fails.

Check the Temporal Server releases and follow these releases in order. You can skip patch versions; use the latest patch of a minor version when upgrading.

Also be aware that each upgrade requires the History Service to load all Shards and update the Shard metadata, so allow approximately 10 minutes on each version for these processes to complete before upgrading to the next version.

Use one of the upgrade tools to upgrade your database schema to be compatible with the Temporal Server version being upgraded to.

If you are using a schema tools version prior to Temporal Server v1.8.0, we strongly recommend never using the "dryrun" (-y, or --dryrun) option in any of your schema update commands. Using this option might lead to data loss, because it creates a new database and drops your existing one. This flag was removed in the 1.8.0 release.

Upgrade Cassandra schema

If you are using Cassandra for your Cluster's persistence, use the temporal-cassandra-tool to upgrade both the default Persistence and Visibility schemas.

Example default schema upgrade:

temporal_v1.2.1 $ temporal-cassandra-tool \
--tls \
--tls-ca-file <...> \
--user <cassandra-user> \
--password <cassandra-password> \
--endpoint <cassandra.example.com> \
--keyspace temporal \
--timeout 120 \
update \
--schema-dir ./schema/cassandra/temporal/versioned

Example visibility schema upgrade:

temporal_v1.2.1 $ temporal-cassandra-tool \
--tls \
--tls-ca-file <...> \
--user <cassandra-user> \
--password <cassandra-password> \
--endpoint <cassandra.example.com> \
--keyspace temporal_visibility \
--timeout 120 \
update \
--schema-dir ./schema/cassandra/visibility/versioned

Upgrade PostgreSQL or MySQL schema

If you are using MySQL or PostgreSQL, use the temporal-sql-tool, which works similarly to the temporal-cassandra-tool.

Refer to this Makefile for context.

PostgreSQL

Example default schema upgrade:

./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 5432 -u temporal -pw temporal --pl postgres --db temporal update-schema -d ./schema/postgresql/v96/temporal/versioned

Example visibility schema upgrade:

./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 5432 -u temporal -pw temporal --pl postgres --db temporal_visibility update-schema -d ./schema/postgresql/v96/visibility/versioned

If you're upgrading PostgreSQL to v12 or later to enable Advanced Visibility features with Temporal Server v1.20, upgrade your PostgreSQL version first, and then run temporal-sql-tool with the postgres12 plugin, as shown in the following example:

./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 5432 -u temporal -pw temporal --pl postgres12 --db temporal_visibility update-schema -d ./schema/postgresql/v12/visibility/versioned

MySQL

Example default schema upgrade:

./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 3306 -u root -pw root --pl mysql --db temporal update-schema -d ./schema/mysql/v57/temporal/versioned/

Example visibility schema upgrade:

./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 3306 -u root -pw root --pl mysql --db temporal_visibility update-schema -d ./schema/mysql/v57/visibility/versioned/

If you're upgrading MySQL to v8.0.17 or later to enable Advanced Visibility features with Temporal Server v1.20, upgrade your MySQL version first, and then run temporal-sql-tool with the mysql8 plugin, as shown in the following example:

./temporal-sql-tool \
--tls \
--tls-enable-host-verification \
--tls-cert-file <path to your client cert> \
--tls-key-file <path to your client key> \
--tls-ca-file <path to your CA> \
--ep localhost -p 3306 -u temporal -pw temporal --pl mysql8 --db temporal_visibility update-schema -d ./schema/mysql/v8/visibility/versioned

Roll-out technique

We recommend preparing a staging Cluster and then doing the following to verify that the upgrade is successful:

  1. Create some simulation load on the staging cluster.
  2. Upgrade the database schema in the staging cluster.
  3. Wait and observe for a few minutes to verify that there is no unstable behavior from either the server or the simulation load logic.
  4. Upgrade the server.
  5. Do the same in the live environment Cluster.

Health checks

The Frontend Service supports TCP or gRPC health checks on port 7233.

If you use Nomad to manage your containers, the check stanza would look like this for TCP:

service {
  check {
    type     = "tcp"
    port     = 7233
    interval = "10s"
    timeout  = "2s"
  }
}

or like this for gRPC (requires Consul ≥ 1.0.5):

service {
  check {
    type     = "grpc"
    port     = 7233
    interval = "10s"
    timeout  = "2s"
  }
}
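
Outside of Nomad, you can run an equivalent check by hand. A minimal sketch using netcat for the TCP check and the open-source grpc-health-probe tool for the gRPC check; the service name shown is the Frontend Service's gRPC health-check service:

# TCP check: verify that the Frontend port is reachable
nc -z 127.0.0.1 7233

# gRPC health check against the Frontend Service
grpc-health-probe -addr=127.0.0.1:7233 -service=temporal.api.workflowservice.v1.WorkflowService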

Set up Multi-Cluster Replication

The Multi-Cluster Replication feature asynchronously replicates Workflow Execution Event Histories from active Clusters to other passive Clusters, and can be enabled by setting the appropriate values in the clusterMetadata section of your configuration file.

  1. enableGlobalNamespace must be set to true.
  2. failoverVersionIncrement must be equal across connected Clusters.
  3. initialFailoverVersion must be assigned a different value in each Cluster; no two connected Clusters can share the same value.

After the above conditions are satisfied, you can start to configure a multi-cluster setup.

Set up Multi-Cluster Replication prior to v1.14

You can set this up with clusterMetadata configuration; however, this is meant to be only a conceptual guide rather than a detailed tutorial. Please reach out to us if you need to set this up.

For example:

# cluster A
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"

# cluster B
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterB"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"

Set up Multi-Cluster Replication in v1.14 and later

You still need to set up the local Cluster's clusterMetadata configuration.

For example:

# cluster A
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterA"
  currentClusterName: "clusterA"
  clusterInformation:
    clusterA:
      enabled: true
      initialFailoverVersion: 1
      rpcAddress: "127.0.0.1:7233"

# cluster B
clusterMetadata:
  enableGlobalNamespace: true
  failoverVersionIncrement: 100
  masterClusterName: "clusterB"
  currentClusterName: "clusterB"
  clusterInformation:
    clusterB:
      enabled: true
      initialFailoverVersion: 2
      rpcAddress: "127.0.0.1:8233"

Then you can use the tctl admin tool to add cluster connections. All operations should be executed in both Clusters.

# Add cluster B connection into cluster A
tctl -address 127.0.0.1:7233 admin cluster upsert-remote-cluster --frontend_address "localhost:8233"
# Add cluster A connection into cluster B
tctl -address 127.0.0.1:8233 admin cluster upsert-remote-cluster --frontend_address "localhost:7233"

# Disable connections
tctl -address 127.0.0.1:7233 admin cluster upsert-remote-cluster --frontend_address "localhost:8233" --enable_connection false
tctl -address 127.0.0.1:8233 admin cluster upsert-remote-cluster --frontend_address "localhost:7233" --enable_connection false

# Delete connections
tctl -address 127.0.0.1:7233 admin cluster remove-remote-cluster --cluster "clusterB"
tctl -address 127.0.0.1:8233 admin cluster remove-remote-cluster --cluster "clusterA"
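
After the Clusters are connected, you can register a Global Namespace that is replicated between them. A sketch using tctl with placeholder Namespace and Cluster names; confirm the exact flag names with tctl namespace register --help for your tctl version:

# Register a Global Namespace that is active in clusterA and replicated to clusterB
tctl --ns your-global-namespace namespace register \
    --global_namespace true \
    --clusters clusterA clusterB \
    --active_cluster clusterA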