Restate Server
The Restate Server has a wide range of configuration options to tune it according to your needs.
Configuration file
The Restate Server accepts a TOML configuration file that can be specified either by providing the command-line option --config-file=<PATH> or by setting the environment variable RESTATE_CONFIG=<PATH>. If neither is set, the default configuration is applied.
Overrides
The Restate Server accepts a subset of the configuration through command-line arguments; you can see all available options by adding --help to restate-server.
Configuration layers are applied in the following order:
- Built-in defaults
- Configuration file (--config-file or via RESTATE_CONFIG)
- Environment variables
- Command-line arguments (e.g. --cluster-name=<VALUE>)
Every layer overrides the previous one. For instance, a command-line argument overrides a configuration key supplied through an environment variable (if both are set).
Environment variables
You can override any configuration entry with an environment variable; this overrides values loaded from the configuration file. The following rules apply:
- Prefix the configuration entry key with RESTATE_
- Separate every nested struct with __ (double underscore) and replace every hyphen (-) with a _ (single underscore).
For example, to override admin.bind-address, the corresponding environment variable is RESTATE_ADMIN__BIND_ADDRESS.
Configuration introspection
If you want to generate a configuration file that includes values loaded from your environment variables or overrides supplied on the restate-server command line, you can add --dump-config to dump the default TOML config with those overrides applied:
restate-server --cluster-name=mycluster --dump-config
Example output:
roles = [
"worker",
"admin",
"metadata-store",
]
cluster-name = "mycluster"
...
At any time, you can ask the Restate server to print its loaded configuration by sending a SIGUSR1 signal to the server process. This prints a dump of the live configuration to standard error.
For instance, on Mac/Linux, you can find the PID of restate-server by running:
pgrep restate-server
994921
Then send the signal to the process ID returned by pgrep:
kill -USR1 994921
Observe the output of the server for a dump of the configuration file contents.
Default configuration
Note that configuration defaults might change across server releases. If you want to make sure you use stable values, use an explicit configuration file and pass the path via --config-file=<PATH> as described above.
roles = [
"worker",
"admin",
"metadata-store",
]
cluster-name = "localcluster"
allow-bootstrap = true
bind-address = "0.0.0.0:5122"
advertised-address = "http://127.0.0.1:5122/"
bootstrap-num-partitions = 24
shutdown-timeout = "1m"
tracing-filter = "info"
log-filter = "warn,restate=info"
log-format = "pretty"
log-disable-ansi-codes = false
connect-timeout = "10s"
disable-prometheus = false
rocksdb-total-memory-size = "6.0 GB"
rocksdb-total-memtables-ratio = 0.5
rocksdb-high-priority-bg-threads = 2
rocksdb-write-stall-threshold = "3s"
rocksdb-enable-stall-on-memory-limit = false
rocksdb-perf-level = "enable-count"
metadata-update-interval = "3s"
[metadata-store-client]
type = "embedded"
address = "http://127.0.0.1:5123/"
[metadata-store-client-backoff-policy]
type = "exponential"
initial-interval = "10ms"
factor = 2.0
max-interval = "100ms"
[tracing-headers]
[http-keep-alive-options]
interval = "40s"
timeout = "20s"
[network-error-retry-policy]
type = "exponential"
initial-interval = "10ms"
factor = 2.0
max-attempts = 15
max-interval = "5s"
[worker]
internal-queue-length = 1000
cleanup-interval = "1h"
experimental-feature-new-invocation-status-table = false
max-command-batch-size = 4
[worker.storage]
rocksdb-disable-wal = true
rocksdb-memory-ratio = 0.49000000953674316
persist-lsn-interval = "1h"
persist-lsn-threshold = 1000
[worker.invoker]
inactivity-timeout = "1m"
abort-timeout = "1m"
message-size-warning = "10.0 MB"
in-memory-queue-length-limit = 1056784
concurrent-invocations-limit = 100
[worker.invoker.retry-policy]
type = "exponential"
initial-interval = "50ms"
factor = 2.0
max-interval = "10s"
[admin]
bind-address = "0.0.0.0:9070"
heartbeat-interval = "1s 500ms"
log-trim-interval = "1h"
log-trim-threshold = 1000
default-replication-strategy = "on-all-nodes"
[admin.query-engine]
memory-size = "4.0 GB"
pgsql-bind-address = "0.0.0.0:9071"
[ingress]
bind-address = "0.0.0.0:8080"
kafka-clusters = []
[bifrost]
default-provider = "local"
seal-retry-interval = "2s"
append-retry-min-interval = "10ms"
append-retry-max-interval = "1s"
[bifrost.local]
rocksdb-disable-wal = false
rocksdb-memory-ratio = 0.5
rocksdb-disable-wal-fsync = false
writer-batch-commit-count = 5000
writer-batch-commit-duration = "0s"
[bifrost.read-retry-policy]
type = "exponential"
initial-interval = "50ms"
factor = 2.0
max-attempts = 50
max-interval = "1s"
[metadata-store]
bind-address = "0.0.0.0:5123"
request-queue-length = 32
rocksdb-memory-ratio = 0.009999999776482582
[metadata-store.rocksdb]
rocksdb-disable-wal = false
[networking]
handshake-timeout = "3s"
[networking.connect-retry-policy]
type = "exponential"
initial-interval = "10ms"
factor = 2.0
max-attempts = 10
max-interval = "500ms"
[log-server]
rocksdb-disable-wal = false
rocksdb-memory-ratio = 0.5
rocksdb-disable-wal-fsync = false
writer-batch-commit-count = 5000
incoming-network-queue-length = 1000