author alextarazanov <[email protected]> 2023-05-29 09:37:23 +0300
committer alextarazanov <[email protected]> 2023-05-29 09:37:23 +0300
commit abe2f152c113b3b8a7dbd6843c378214ed7b540d (patch)
tree 111b75ea95aa20328ac4ed54e946f9438b0224ce
parent e769af9b35ea7a8cd7fcfe9138cf34aaac4c7f1c (diff)
[YDB] Translate (8 tasks)
Translation for several tasks:
* [YDB] Moving State Storage & the static blob storage group
* [YDB] Setting the transaction isolation level in a query for the SDK
* [YDB] Add the ydb-dstool device list command to the distributed storage documentation
* [YDB] Release notes for YDB v23.1
* [YDB] workload topic documentation
* Release notes for YDB CLI 2.3.0
* [YDB] Add recommendations to system requirements
* [YDB] CDC documentation for document tables
-rw-r--r-- ydb/docs/en/core/_assets/embedded-storage.svg 1
-rw-r--r-- ydb/docs/en/core/_includes/fault-tolerance.md 5
-rw-r--r-- ydb/docs/en/core/_includes/warning-configuration-error.md 5
-rw-r--r-- ydb/docs/en/core/administration/state-storage-move.md 46
-rw-r--r-- ydb/docs/en/core/administration/static-group-move.md 69
-rw-r--r-- ydb/docs/en/core/administration/ydb-dstool-device-list.md 51
-rw-r--r-- ydb/docs/en/core/administration/ydb-dstool-global-options.md 19
-rw-r--r-- ydb/docs/en/core/administration/ydb-dstool-overview.md 1
-rw-r--r-- ydb/docs/en/core/changelog-cli.md 46
-rw-r--r-- ydb/docs/en/core/changelog-server.md 37
-rw-r--r-- ydb/docs/en/core/cluster/system-requirements.md 4
-rw-r--r-- ydb/docs/en/core/concepts/_includes/transactions.md 22
-rw-r--r-- ydb/docs/en/core/concepts/cdc.md 28
-rw-r--r-- ydb/docs/en/core/maintenance/embedded_monitoring/ydb_monitoring.md 68
-rw-r--r-- ydb/docs/en/core/maintenance/manual/cluster_expansion.md 18
-rw-r--r-- ydb/docs/en/core/maintenance/manual/index.md 22
-rw-r--r-- ydb/docs/en/core/maintenance/manual/toc_i.yaml 12
-rw-r--r-- ydb/docs/en/core/reference/ydb-cli/commands/workload/_includes/index.md 2
-rw-r--r-- ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md 15
-rw-r--r-- ydb/docs/en/core/reference/ydb-cli/toc_i.yaml 6
-rw-r--r-- ydb/docs/en/core/reference/ydb-cli/workload-topic.md 456
-rw-r--r-- ydb/docs/en/core/reference/ydb-sdk/recipes/index.md 5
-rw-r--r-- ydb/docs/en/core/reference/ydb-sdk/recipes/toc_i.yaml 2
-rw-r--r-- ydb/docs/en/core/reference/ydb-sdk/recipes/tx-control.md 188
-rw-r--r-- ydb/docs/en/core/yql/reference/yql-core/syntax/_includes/alter_table.md 4
-rw-r--r-- ydb/docs/ru/core/changelog-server.md 2
-rw-r--r-- ydb/docs/ru/core/reference/ydb-cli/toc_i.yaml 4
-rw-r--r-- ydb/docs/ru/core/reference/ydb-cli/workload-topic.md 14
28 files changed, 771 insertions, 381 deletions
diff --git a/ydb/docs/en/core/_assets/embedded-storage.svg b/ydb/docs/en/core/_assets/embedded-storage.svg
new file mode 100644
index 00000000000..0c10c2bcf7a
--- /dev/null
+++ b/ydb/docs/en/core/_assets/embedded-storage.svg
@@ -0,0 +1 @@
+<svg xmlns="http://www.w3.org/2000/svg" width="16" height="16" xmlns:xlink="http://www.w3.org/1999/xlink" class="yc-icon nv-composite-bar__menu-icon" fill="currentColor" stroke="none" aria-hidden="true"><svg aria-hidden="true" data-prefix="fas" data-icon="database" class="storage_svg__svg-inline--fa storage_svg__fa-database" xmlns="http://www.w3.org/2000/svg" viewBox="0 0 448 512"><path fill="currentColor" d="M448 73.12v45.75C448 159.1 347.6 192 224 192S0 159.1 0 118.9V73.12C0 32.88 100.4 0 224 0s224 32.88 224 73.12zM448 176v102.9c0 40.2-100.4 73.1-224 73.1S0 319.1 0 278.9V176c48.12 33.12 136.2 48.62 224 48.62S399.9 209.1 448 176zm0 160v102.9c0 40.2-100.4 73.1-224 73.1S0 479.12 0 438.87V336c48.12 33.13 136.2 48.63 224 48.63S399.9 369.1 448 336z"/></svg></svg> \ No newline at end of file
diff --git a/ydb/docs/en/core/_includes/fault-tolerance.md b/ydb/docs/en/core/_includes/fault-tolerance.md
new file mode 100644
index 00000000000..ca629fa3cd4
--- /dev/null
+++ b/ydb/docs/en/core/_includes/fault-tolerance.md
@@ -0,0 +1,5 @@
+{% note info %}
+
+A YDB cluster is fault-tolerant. Temporarily shutting down a node doesn't affect cluster availability. For details, see [{#T}](../cluster/topology.md).
+
+{% endnote %}
diff --git a/ydb/docs/en/core/_includes/warning-configuration-error.md b/ydb/docs/en/core/_includes/warning-configuration-error.md
new file mode 100644
index 00000000000..3fc9b68bac7
--- /dev/null
+++ b/ydb/docs/en/core/_includes/warning-configuration-error.md
@@ -0,0 +1,5 @@
+{% note warning %}
+
+The {{ ydb-short-name }} cluster might become unavailable as a result of an invalid sequence of actions or a configuration error.
+
+{% endnote %}
diff --git a/ydb/docs/en/core/administration/state-storage-move.md b/ydb/docs/en/core/administration/state-storage-move.md
new file mode 100644
index 00000000000..fdd309dbb01
--- /dev/null
+++ b/ydb/docs/en/core/administration/state-storage-move.md
@@ -0,0 +1,46 @@
+# Moving a State Storage
+
+To decommission a {{ ydb-short-name }} cluster host that accommodates a part of [State Storage](../deploy/configuration/config.md#domains-state), you need to move that part to another host.
+
+{% include [warning-configuration-error](../_includes/warning-configuration-error.md) %}
+
+As an example, let's take a {{ ydb-short-name }} cluster with the following State Storage configuration:
+
+```yaml
+...
+domains_config:
+ ...
+ state_storage:
+ - ring:
+ node: [1, 2, 3, 4, 5, 6, 7, 8, 9]
+ nto_select: 9
+ ssid: 1
+ ...
+...
+```
+
+The [static node](../deploy/configuration/config.md#hosts) of the cluster that serves a part of State Storage is set up and running on the host with `node_id:1`. Suppose that you want to decommission this host.
+
+To replace `node_id:1`, we [added](../maintenance/manual/cluster_expansion.md#add-host) a new host with `node_id:10` to the cluster and [deployed](../maintenance/manual/cluster_expansion.md#add-static-node) a static node on it.
+
+To move State Storage from the `node_id:1` host to the `node_id:10` host:
+
+1. Stop the cluster's static nodes on the hosts with `node_id:1` and `node_id:10`.
+
+ {% include [fault-tolerance](../_includes/fault-tolerance.md) %}
+1. In the `config.yaml` configuration file, change the `node` host list, replacing the ID of the removed host with the ID of the added host:
+
+ ```yaml
+ domains_config:
+ ...
+ state_storage:
+ - ring:
+ node: [2, 3, 4, 5, 6, 7, 8, 9, 10]
+ nto_select: 9
+ ssid: 1
+ ...
+ ```
+
+1. Update the `config.yaml` configuration files for all the cluster nodes, including dynamic nodes.
+1. Use the [rolling-restart](../maintenance/manual/node_restarting.md) procedure to restart all the cluster nodes (including dynamic nodes but excluding static nodes on the hosts with `node_id:1` and `node_id:10`).
+1. Start the static cluster nodes on the hosts with `node_id:1` and `node_id:10`.
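The `node` list edit above boils down to swapping one ring member for another while keeping the ring size (and therefore `nto_select`) unchanged. A minimal sketch in plain Python (a hypothetical helper for illustration, not part of {{ ydb-short-name }} tooling):

```python
def replace_ring_node(nodes, old_id, new_id):
    """Return a new State Storage node list with old_id swapped for new_id."""
    if old_id not in nodes:
        raise ValueError(f"node {old_id} is not in the ring")
    if new_id in nodes:
        raise ValueError(f"node {new_id} is already in the ring")
    # The ring size is unchanged, so nto_select can stay the same.
    return sorted(new_id if n == old_id else n for n in nodes)

# The example above: host 1 is replaced by host 10.
print(replace_ring_node([1, 2, 3, 4, 5, 6, 7, 8, 9], 1, 10))
```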
diff --git a/ydb/docs/en/core/administration/static-group-move.md b/ydb/docs/en/core/administration/static-group-move.md
new file mode 100644
index 00000000000..ea307081fa1
--- /dev/null
+++ b/ydb/docs/en/core/administration/static-group-move.md
@@ -0,0 +1,69 @@
+# Moving a static group
+
+To decommission a {{ ydb-short-name }} cluster host that accommodates a part of a [static group](../deploy/configuration/config.md#blob_storage_config), you need to move that part of the group to another host.
+
+{% include [warning-configuration-error](../_includes/warning-configuration-error.md) %}
+
+As an example, let's take a {{ ydb-short-name }} cluster where you set up and launched a [static node](../deploy/configuration/config.md#hosts) on a host with `node_id:1`. This node serves a part of the static group.
+
+Fragment of the static group configuration:
+
+```yaml
+...
+blob_storage_config:
+ ...
+ service_set:
+ ...
+ groups:
+ ...
+ rings:
+ ...
+ fail_domains:
+ - vdisk_locations:
+ - node_id: 1
+ path: /dev/vda
+ pdisk_category: SSD
+ ...
+ ...
+ ...
+ ...
+...
+```
+
+To replace `node_id:1`, we [added](../maintenance/manual/cluster_expansion.md#add-host) a new host with `node_id:10` to the cluster and [deployed](../maintenance/manual/cluster_expansion.md#add-static-node) a static node on it.
+
+To move a part of the static group from the `node_id:1` host to the `node_id:10` host:
+
+1. Stop the static cluster node on the host with `node_id:1`.
+
+ {% include [fault-tolerance](../_includes/fault-tolerance.md) %}
+1. In the `config.yaml` configuration file, change `node_id`, replacing the ID of the removed host with the ID of the added host:
+
+ ```yaml
+ ...
+ blob_storage_config:
+ ...
+ service_set:
+ ...
+ groups:
+ ...
+ rings:
+ ...
+ fail_domains:
+ - vdisk_locations:
+ - node_id: 10
+ path: /dev/vda
+ pdisk_category: SSD
+ ...
+ ...
+ ...
+ ...
+ ...
+ ```
+
+ Edit the `path` and `pdisk_category` for the disk if these parameters are different on the host with `node_id: 10`.
+
+1. Update the `config.yaml` configuration files for all the cluster nodes, including dynamic nodes.
+1. Use the [rolling-restart](../maintenance/manual/node_restarting.md) procedure to restart all the static cluster nodes.
+1. Go to the Embedded UI monitoring page and make sure that the VDisk of the static group is visible on the target physical disk and its replication is in progress. For details, see [{#T}](../maintenance/embedded_monitoring/ydb_monitoring.md#static-group).
+1. Use the [rolling-restart](../maintenance/manual/node_restarting.md) procedure to restart all the dynamic cluster nodes.
diff --git a/ydb/docs/en/core/administration/ydb-dstool-device-list.md b/ydb/docs/en/core/administration/ydb-dstool-device-list.md
new file mode 100644
index 00000000000..a19bffc8396
--- /dev/null
+++ b/ydb/docs/en/core/administration/ydb-dstool-device-list.md
@@ -0,0 +1,51 @@
+# device list
+
+Use the `device list` subcommand to get a list of storage devices available on the {{ ydb-short-name }} cluster.
+
+{% include [trunk](../_includes/trunk.md) %}
+
+General format of the command:
+
+```bash
+ydb-dstool [global options ...] device list [list options ...]
+```
+
+* `global options`: [Global options](./ydb-dstool-global-options.md).
+* `list options`: [Subcommand options](#options).
+
+View a description of the command to get a list of devices:
+
+```bash
+ydb-dstool device list --help
+```
+
+## Subcommand options {#options}
+
+| Option | Description |
+---|---
+| `-H`, `--human-readable` | Output data in human-readable format. |
+| `--sort-by` | Sort column.<br>Use one of the values: `SerialNumber`, `FQDN`, `Path`, `Type`, `StorageStatus`, or `NodeId:PDiskId`. |
+| `--reverse` | Use a reverse sort order. |
+| `--format` | Output format.<br>Use one of the values: `pretty`, `json`, `tsv`, or `csv`. |
+| `--no-header` | Do not output the row with column names. |
+| `--columns` | List of columns to be output.<br>Use one or more of the values: `SerialNumber`, `FQDN`, `Path`, `Type`, `StorageStatus`, or `NodeId:PDiskId`. |
+| `-A`, `--all-columns` | Output all columns. |
+
+## Examples {#examples}
+
+The following command will output a list of devices available in the cluster:
+
+```bash
+ydb-dstool -e node-5.example.com device list
+```
+
+Result:
+
+```text
+┌────────────────────┬────────────────────┬────────────────────────────────┬──────┬───────────────────────────┬────────────────┐
+│ SerialNumber │ FQDN │ Path │ Type │ StorageStatus │ NodeId:PDiskId │
+├────────────────────┼────────────────────┼────────────────────────────────┼──────┼───────────────────────────┼────────────────┤
+│ PHLN123301H41P2BGN │ node-1.example.com │ /dev/disk/by-partlabel/nvme_04 │ NVME │ FREE │ NULL │
+│ PHLN123301A62P2BGN │ node-6.example.com │ /dev/disk/by-partlabel/nvme_03 │ NVME │ PDISK_ADDED_BY_DEFINE_BOX │ [6:1001] │
+...
+```
diff --git a/ydb/docs/en/core/administration/ydb-dstool-global-options.md b/ydb/docs/en/core/administration/ydb-dstool-global-options.md
new file mode 100644
index 00000000000..2858d395d33
--- /dev/null
+++ b/ydb/docs/en/core/administration/ydb-dstool-global-options.md
@@ -0,0 +1,19 @@
+# Global options
+
+All the {{ ydb-short-name }} DSTool utility subcommands share the same global options.
+
+| Option | Description |
+---|---
+| `-?`, `-h`, `--help` | Print the built-in help. |
+| `-v`, `--verbose` | Print detailed output while executing the command. |
+| `-q`, `--quiet` | Suppress non-critical messages when executing the command. |
+| `-n`, `--dry-run` | Dry-run the command. |
+| `-e`, `--endpoint` | Endpoint to connect to the {{ ydb-short-name }} cluster, in the format: `[PROTOCOL://]HOST[:PORT]`.<br>Default values: PROTOCOL — `http`, PORT — `8765`. |
+| `--grpc-port` | gRPC port used to invoke procedures. |
+| `--mon-port` | Port to view HTTP monitoring data in JSON format. |
+| `--mon-protocol` | Protocol to use when none is specified explicitly in the endpoint. |
+| `--token-file` | Path to the file with [Access Token](../concepts/auth.md#iam). |
+| `--ca-file` | Path to a root certificate PEM file used for TLS connections. |
+| `--http` | Use HTTP instead of gRPC to connect to the Blob Storage. |
+| `--http-timeout` | Timeout for I/O operations on the socket when running HTTP(S) queries. |
+| `--insecure` | Allow insecure data delivery over HTTPS. Neither the SSL certificate nor the host name is checked in this mode. |
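The `-e`/`--endpoint` format and its defaults can be sketched as a small parser (a hypothetical helper mirroring the documented `[PROTOCOL://]HOST[:PORT]` format and its `http`/`8765` defaults, not ydb-dstool code):

```python
from urllib.parse import urlsplit

def parse_endpoint(endpoint, default_protocol="http", default_port=8765):
    """Split [PROTOCOL://]HOST[:PORT], applying the documented defaults."""
    if "://" not in endpoint:
        # No protocol given: assume the default, as documented above.
        endpoint = f"{default_protocol}://{endpoint}"
    parts = urlsplit(endpoint)
    port = parts.port if parts.port is not None else default_port
    return parts.scheme, parts.hostname, port

print(parse_endpoint("node-5.example.com"))
print(parse_endpoint("https://node-5.example.com:8443"))
```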
diff --git a/ydb/docs/en/core/administration/ydb-dstool-overview.md b/ydb/docs/en/core/administration/ydb-dstool-overview.md
index 6aa4d520bed..8c0c5a69f26 100644
--- a/ydb/docs/en/core/administration/ydb-dstool-overview.md
+++ b/ydb/docs/en/core/administration/ydb-dstool-overview.md
@@ -6,6 +6,7 @@ With the {{ ydb-short-name }} DSTool utility, you can manage your {{ ydb-short-n
| Command | Description |
--- | ---
+| [device list](./ydb-dstool-device-list.md) | List storage devices. |
| pdisk add-by-serial | Add a PDisk to a set by serial number. |
| pdisk remove-by-serial | Remove a PDisk from the set by serial number. |
| pdisk set | Set PDisk parameters. |
diff --git a/ydb/docs/en/core/changelog-cli.md b/ydb/docs/en/core/changelog-cli.md
index 7ef1af36ce8..c6c7b065c6b 100644
--- a/ydb/docs/en/core/changelog-cli.md
+++ b/ydb/docs/en/core/changelog-cli.md
@@ -2,56 +2,56 @@
## Version 2.3.0 {#2-3-0}
-Released on May 5, 2023. To update to version **2.3.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
+Release date: May 5, 2023. To update to version **2.3.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
**What's new:**
-* Added interactive query execution mode. It can be launched using [ydb yql](reference/ydb-cli/yql.md) command without arguments. This mode is experimental and is a subject to change.
-* Added [ydb table index rename](reference/ydb-cli/commands/_includes/secondary_index.md#rename) command for atomic [secondary index replacement](best_practices/secondary_indexes.md#atomic-index-replacement) or renaming.
-* Added `ydb workload topic` command section that allows to run a workload of writes and reads to topics.
-* Added [--recursive](reference/ydb-cli/commands/_includes/dir.md#rmdir-options) option for `ydb scheme rmdir` command that allows to remove a directory recursively with all its content.
-* Added `topic` and `coordination node` support for [ydb scheme describe](reference/ydb-cli/commands/scheme-describe.md) command.
-* Added [--commit](reference/ydb-cli/topic-read.md#osnovnye-opcionalnye-parametry) option for `ydb topic consumer` command to commit offset for consumer.
-* Added [--columns](reference/ydb-cli/export_import/_includes/import-file.md#optional) option for `ydb import file csv|tsv` command to list column names in, instead of placing it into file header.
-* Added [--newline-delimited](reference/ydb-cli/export_import/_includes/import-file.md#optional) option for `ydb import file csv|tsv` command that confirms that there is no newline characters inside records which allows to read from several sections of a file simultaneously.
+* Added the interactive mode of query execution. To switch to the interactive mode, run [ydb yql](reference/ydb-cli/yql.md) without arguments. This mode is experimental: backward compatibility is not guaranteed yet.
+* Added the [ydb index rename](reference/ydb-cli/commands/_includes/secondary_index.md#rename) command for [atomic replacement](best_practices/secondary_indexes.md#atomic-index-replacement) or renaming of a secondary index.
+* Added the `ydb workload topic` command for generating the load that reads messages from topics and writes messages to topics.
+* Added the [--recursive](reference/ydb-cli/commands/_includes/dir.md#rmdir-options) option for the `ydb scheme rmdir` command. Use it to delete a directory recursively, with all its content.
+* Added support for the `topic` and `coordination node` types in the [ydb scheme describe](reference/ydb-cli/commands/scheme-describe.md) command.
+* Added the [--commit](reference/ydb-cli/topic-read.md#osnovnye-opcionalnye-parametry) option for the `ydb topic consumer` command. Use it to commit the consumer offset for messages you have read.
+* Added the [--columns](reference/ydb-cli/export_import/_includes/import-file.md#optional) option for the `ydb import file csv|tsv` command. Use it as an alternative to the file header when specifying a column list.
+* Added the [--newline-delimited](reference/ydb-cli/export_import/_includes/import-file.md#optional) option for the `ydb import file csv|tsv` command. Use it to declare that records contain no embedded newline characters, which lets the command read several sections of the file in parallel.
**Bug fixes:**
-* Fixed a bug that caused executing the `ydb import file` command to consume too much memory and CPU.
+* Fixed the bug that resulted in excessive memory and CPU utilization when executing the `ydb import file` command.
## Version 2.2.0 {#2-2-0}
-Released on March 3, 2023. To update to version **2.2.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
+Release date: March 3, 2023. To update to version **2.2.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
**What's new:**
* Fixed the error that didn't allow specifying supported compression algorithms when adding a topic consumer.
-* Added support for streaming YQL scripts and queries based on parameters transferred via `stdin`.
-* YQL query parameter values can now be transferred from a file.
+* Added support for streaming YQL scripts and queries based on parameters [transferred via `stdin`](reference/ydb-cli/parameterized-queries-cli.md).
+* You can now [use a file](reference/ydb-cli/parameterized-queries-cli.md) to provide YQL query parameters.
* Password input requests are now output to `stderr` instead of `stdout`.
* You can now save the root CA certificate path in a [profile](reference/ydb-cli/profile/index.md).
-* Added a global parameter named [--profile-file](reference/ydb-cli/commands/_includes/global-options.md#service-options) to use the specified file as storage for profile settings.
+* Added a global option named [--profile-file](reference/ydb-cli/commands/_includes/global-options.md#service-options) to use the specified file as storage for profile settings.
* Added a new type of load testing: [ydb workload clickbench](reference/ydb-cli/workload-click-bench).
## Version 2.1.1 {#2-1-1}
-Released on December 30, 2022. To update to version **2.1.1**, select the [Downloads](downloads/index.md#ydb-cli) section.
+Release date: December 30, 2022. To update to version **2.1.1**, select the [Downloads](downloads/index.md#ydb-cli) section.
**Improvements:**
-* Added support for the `--stats` parameter of the [ydb scheme describe](reference/ydb-cli/commands/scheme-describe.md) command for column-oriented tables.
+* Added support for the `--stats` option of the [ydb scheme describe](reference/ydb-cli/commands/scheme-describe.md) command for column-oriented tables.
* Added support for Parquet files to enable their import with the [ydb import](reference/ydb-cli/export_import/import-file.md) command.
* Added support for additional logging and retries for the [ydb import](reference/ydb-cli/export_import/import-file.md) command.
## Version 2.1.0 {#2-1-0}
-Released on November 18, 2022. To update to version **2.1.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
+Release date: November 18, 2022. To update to version **2.1.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
**What's new:**
* You can now [create a profile non-interactively](reference/ydb-cli/profile/create.md#cmdline).
* Added the [ydb config profile update](reference/ydb-cli/profile/create.md#update) and [ydb config profile replace](reference/ydb-cli/profile/create.md#replace) commands to update and replace profiles, respectively.
-* Added the `-1` parameter for the [ydb scheme ls](reference/ydb-cli/commands/scheme-ls.md) command to enable output of a single object per row.
+* Added the `-1` option for the [ydb scheme ls](reference/ydb-cli/commands/scheme-ls.md) command to enable output of a single object per row.
* You can now save the IAM service URL in a profile.
* Added support for username and password-based authentication without specifying the password.
* Added support for AWS profiles in the [ydb export s3](reference/ydb-cli/export_import/s3_conn.md#auth) command.
@@ -64,7 +64,7 @@ Released on November 18, 2022. To update to version **2.1.0**, select the [Downl
## Version 2.0.0 {#2-0-0}
-Released on September 20, 2022. To update to version **2.0.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
+Release date: September 20, 2022. To update to version **2.0.0**, select the [Downloads](downloads/index.md#ydb-cli) section.
**What's new:**
@@ -81,7 +81,7 @@ Released on September 20, 2022. To update to version **2.0.0**, select the [Down
* `ydb workload kv clean`: Delete a test table.
* Added the ability to disable current active profile (see the `ydb config profile deactivate` command).
-* Added the ability to delete a profile non-interactively with no commit (see the `--force` parameter under the `ydb config profile remove` command).
+* Added the ability to delete a profile non-interactively with no commit (see the `--force` option under the `ydb config profile remove` command).
* Added CDC support for the `ydb scheme describe` command.
* Added the ability to view the current DB status (see the `ydb monitoring healthcheck` command).
* Added the ability to view authentication information (token) to be sent with DB queries under the current authentication settings (see the `ydb auth get-token` command).
@@ -94,9 +94,9 @@ Released on September 20, 2022. To update to version **2.0.0**, select the [Down
## Version 1.9.1 {#1-9-1}
-Released on June 25, 2022. To update to version **1.9.1**, select the [Downloads](downloads/index.md#ydb-cli) section.
+Release date: June 25, 2022. To update to version **1.9.1**, select the [Downloads](downloads/index.md#ydb-cli) section.
**What's new:**
-* Added the ability to compress data when exporting it to S3-compatible storage (see the `--compression` parameter of the [ydb export s3](reference/ydb-cli/export_import/s3_export.md) command).
-* Added the ability to manage new {{ ydb-short-name }} CLI version availability auto checks (see the `--disable-checks` and `--enable-checks` parameters of the [ydb version](reference/ydb-cli/version.md) command).
+* Added the ability to compress data when exporting it to S3-compatible storage (see the `--compression` option of the [ydb export s3](reference/ydb-cli/export_import/s3_export.md) command).
+* Added the ability to manage new {{ ydb-short-name }} CLI version availability auto checks (see the `--disable-checks` and `--enable-checks` options of the [ydb version](reference/ydb-cli/version.md) command).
diff --git a/ydb/docs/en/core/changelog-server.md b/ydb/docs/en/core/changelog-server.md
index 147e056041f..d2ee3b6145e 100644
--- a/ydb/docs/en/core/changelog-server.md
+++ b/ydb/docs/en/core/changelog-server.md
@@ -2,33 +2,34 @@
## Version 23.1 {#23-1}
-Released on May 5, 2023. To update to version 23.1, select the [Downloads](downloads/index.md#ydb-server) section.
-**Features:**
+Release date: May 5, 2023. To update to version 23.1, select the [Downloads](downloads/index.md#ydb-server) section.
-* [CDC initial scan](concepts/cdc.md#initial-scan). By default, records are emitted to the CDC changefeed only for table rows that were changed after the stream was created. With the optional initial scan of the table, CDC is now able to emit to the changefeed all the rows that existed at the time of the changefeed creation.
-* [Atomic index replacement](best_practices/secondary_indexes.md#atomic-index-replacement). Allows to replace one index with another atomically and transparently to the application. The ability to replace the index with its improved version without any downtime is important for the many YDB applications, which normally embed secondary index names into YQL queries.
-* [Audit Logs](cluster/audit-log.md). An event stream that includes data about all the attempts to change {{ ydb-short-name }} objects and permissions, both successfully or unsuccessfully. Audit Logs support is important for ensuring the secure operation of {{ ydb-short-name }} databases.
+**Functionality:**
+
+* Added [initial table scan](concepts/cdc.md#initial-scan) when creating a CDC changefeed. Now, the changefeed can also include all the data that existed at the time of its creation.
+* Added [atomic index replacement](best_practices/secondary_indexes.md#atomic-index-replacement). Now, you can atomically replace one index with another, completely transparently to your application: indexes are replaced seamlessly, with no downtime.
+* Added the [audit log](cluster/audit-log.md): an event stream with data about all attempts to change {{ ydb-short-name }} objects and permissions, both successful and unsuccessful.
**Performance:**
-* [Automatic configuration](deploy/configuration/config.md#autoconfig) of thread pool sizes depending on their load. This function allows {{ ydb-short-name }} to adapt to the changes in the workload, and improves the performance by better CPU resource sharing.
-* Predicate extraction. More accurate predicate pushdown logic: support for OR and IN expressions with parameters for DataShard pushdown.
-* Point lookups support for scan queries. Single-row lookups using primary key or secondary indexes are now supported in scan queries, leading to the improved performance in many cases. As with regular data queries, to actually use the secondary index lookups, index name has to be explicitly specified in the query text with the `VIEW` keyword.
-* Significant improvements in data transfer formats between request execution stages. CPU consumption decreases and throughput increases, for example, for SELECTs by about 10% on queries with parameters, and for write operations up to 30%.
-* Caching of the calculation graph when executing queries. Reduces CPU consumption for its construction.
+* Improved formats of data exchanged between query stages. As a result, we accelerated SELECTs by 10% on parameterized queries and by up to 30% on write operations.
+* Added [autoconfiguring](deploy/configuration/config.md#autoconfig) for the actor system pools based on the workload against them. This improves performance through more effective CPU sharing.
+* Optimized the predicate logic: Processing of parameterized OR or IN constraints is automatically delegated to DataShard.
+* For scan queries, you can now effectively search for individual rows using a primary key or secondary indexes. This can bring you a substantial gain in performance in many cases. Similarly to regular queries, to use a secondary index, you need to explicitly specify its name in the query text using the `VIEW` keyword.
+* The query's computational graph is now cached at query runtime, reducing the CPU resources needed to build the graph.
-**Bug Fix:**
+**Bug fixes:**
-* Fixed a number of errors in the implementation of distributed data storage. We strongly recommend that all users upgrade to the current version.
-* Fixed index building on not null columns.
+* Fixed bugs in the distributed data storage implementation. We strongly recommend that all users upgrade to the latest version.
+* Fixed the error that occurred when building an index on NOT NULL columns.
* Fixed statistics calculation with MVCC enabled.
-* Fixed backup issues.
-* Fixed race while splitting and deleting table with CDC.
+* Fixed errors with backups.
+* Fixed the race condition that occurred when splitting and deleting a table with CDC.
## Version 22.5 {#22-5}
-Released on March 7, 2023. To update to version **22.5**, select the [Downloads](downloads/index.md#ydb-server) section.
+Release date: March 7, 2023. To update to version **22.5**, select the [Downloads](downloads/index.md#ydb-server) section.
**What's new:**
@@ -47,7 +48,7 @@ Released on March 7, 2023. To update to version **22.5**, select the [Downloads]
## Version 22.4 {#22-4}
-Released on October 12, 2023. To update to version **22.4**, select the [Downloads](downloads/index.md#ydb-server) section.
+Release date: October 12, 2022. To update to version **22.4**, select the [Downloads](downloads/index.md#ydb-server) section.
**What's new:**
@@ -61,7 +62,7 @@ Released on October 12, 2023. To update to version **22.4**, select the [Downloa
* Added official support for the database/sql driver for working with {{ ydb-short-name }} in Golang.
* Embedded UI:
- * The CDC change stream and the secondary indexes are now displayed in the database schema hierarchy as separate objects.
+ * The CDC changefeed and the secondary indexes are now displayed in the database schema hierarchy as separate objects.
* Improved the visualization of query explain plan graphics.
* Problem storage groups have more visibility now.
* Various improvements based on UX research.
diff --git a/ydb/docs/en/core/cluster/system-requirements.md b/ydb/docs/en/core/cluster/system-requirements.md
index 9f06919f93f..63f6fc4ba14 100644
--- a/ydb/docs/en/core/cluster/system-requirements.md
+++ b/ydb/docs/en/core/cluster/system-requirements.md
@@ -30,8 +30,8 @@ The number of servers and disks is determined by the fault-tolerance requirement
## Software configuration {#software}
-A {{ ydb-short-name }} server can be run on servers running a Linux operating system with kernel 4.19 and higher and libc 2.30 (Ubuntu 20.04, Debian 11, Fedora34). YDB uses [TCMalloc](https://google.github.io/tcmalloc) allocator, and we recommend to [enable](https://google.github.io/tcmalloc/tuning.html#system-level-optimizations) the Transparent Huge Pages and Memory overcommitment features for optimization.
+A {{ ydb-short-name }} server can run on servers with a Linux operating system, kernel 4.19 or higher, and libc 2.30 (Ubuntu 20.04, Debian 11, Fedora 34). YDB uses the [TCMalloc](https://google.github.io/tcmalloc) memory allocator. To make it effective, [enable](https://google.github.io/tcmalloc/tuning.html#system-level-optimizations) Transparent Huge Pages and Memory overcommitment.
-If the server hosts more than 32 CPU cores, to increase YDB performance, it makes sense to run each dynamic node in a separate taskset/cpuset of 10 to 32 cores. For example, in the case of 128 CPU cores, the best choice is to run four 32-CPU dynamic nodes, each in its taskset.
+If the server has more than 32 CPU cores, to increase YDB performance, it makes sense to run each dynamic node in a separate taskset/cpuset of 10 to 32 cores. For example, in the case of 128 CPU cores, the best choice is to run four 32-CPU dynamic nodes, each in its taskset.
MacOS and Windows operating systems are currently not supported for running {{ ydb-short-name }} servers.
diff --git a/ydb/docs/en/core/concepts/_includes/transactions.md b/ydb/docs/en/core/concepts/_includes/transactions.md
index 0d62e63bf35..97a3f6ad4c4 100644
--- a/ydb/docs/en/core/concepts/_includes/transactions.md
+++ b/ydb/docs/en/core/concepts/_includes/transactions.md
@@ -8,29 +8,30 @@ The main tool for creating, modifying, and managing data in {{ ydb-short-name }}
## Transaction modes {#modes}
-By default, {{ ydb-short-name }} transactions are performed in *Serializable* mode. It provides the strictest [isolation level](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) for custom transactions. This mode guarantees that the result of successful parallel transactions is equal to their specific execution sequence, while there are no [read anomalies](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Read_phenomena) for successful transactions.
+By default, {{ ydb-short-name }} transactions are executed in *Serializable* mode. It provides the strictest [isolation level](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Serializable) for user transactions. This mode guarantees that the result of successfully executed parallel transactions is equivalent to some serial order of their execution, and there are no [read anomalies](https://en.wikipedia.org/wiki/Isolation_(database_systems)#Read_phenomena) for successful transactions.
If the consistency or freshness requirements for data read by a transaction can be relaxed, a user can take advantage of execution modes with lower guarantees:
-* *Online Read-Only*. Each of the reads in the transaction reads data that is most recent at the time of its execution. The consistency of retrieved data depends on the *allow_inconsistent_reads* setting:
- * *false* (consistent reads). In this mode, each individual read returns consistent data, but no consistency between different reads is guaranteed. Reading the same table range twice may return different results.
- * *true* (inconsistent reads). In this mode, even a single read operation may contain inconsistent data.
-* *Stale Read Only*. Data reads in a transaction return results with a possible delay (fractions of a second). Each individual read returns consistent data, but no consistency between different reads is guaranteed.
+* *Online Read-Only*: Each read operation in the transaction reads the most recent data as of its execution time. The consistency of retrieved data depends on the *allow_inconsistent_reads* setting:
+ * *false* (consistent reads): Each individual read operation returns consistent data, but no consistency is guaranteed between reads. Reading the same table range twice may return different results.
+ * *true* (inconsistent reads): Even the data returned by a single read operation may be inconsistent.
+* *Stale Read-Only*: Read operations within a transaction may return results that are slightly out-of-date (lagging by fractions of a second). Each individual read returns consistent data, but no consistency between different reads is guaranteed.
+* *Snapshot Read-Only*: All the read operations within a transaction access the database snapshot. All the data reads are consistent. The snapshot is taken when the transaction begins, meaning the transaction sees all changes committed before it began.
-The transaction execution mode is specified in its settings when creating the transaction.
+The transaction execution mode is specified in its settings when creating the transaction. For {{ ydb-short-name }} SDK examples, see [{#T}](../../reference/ydb-sdk/recipes/tx-control.md).
## YQL language {#language-yql}
-The constructs implemented in YQL can be divided into two classes: [data definition language (DDL)](https://en.wikipedia.org/wiki/Data_definition_language) and [data manipulation language (DML)](https://en.wikipedia.org/wiki/Data_manipulation_language).
+Statements implemented in YQL can be divided into two classes: [Data Definition Language (DDL)](https://en.wikipedia.org/wiki/Data_definition_language) and [Data Manipulation Language (DML)](https://en.wikipedia.org/wiki/Data_manipulation_language).
For more information about supported YQL constructs, see the [YQL documentation](../../yql/reference/index.md).
Listed below are the features and limitations of YQL support in {{ ydb-short-name }}, which might not be obvious at first glance and are worth noting:
* Multi-statement transactions (transactions made up of a sequence of YQL statements) are supported. Transactions may interact with client software, or in other words, client interactions with the database might look as follows: `BEGIN; make a SELECT; analyze the SELECT results on the client side; ...; make an UPDATE; COMMIT`. We should note that if the transaction body is fully formed before accessing the database, it will be processed more efficiently.
-* {{ ydb-short-name }} does not support transactions that combine DDL and DML queries. The conventional notion [ACID]{% if lang == "en" %}(https://en.wikipedia.org/wiki/ACID){% endif %}{% if lang == "ru" %}(https://ru.wikipedia.org/wiki/ACID){% endif %} of a transactions is applicable specifically to DML queries, that is, queries that change data. DDL queries must be idempotent, meaning repeatable if an error occurs. If you need to manipulate a schema, each manipulation is transactional, while a set of manipulations is not.
-* The version of YQL in {{ ydb-short-name }} uses the [Optimistic Concurrency Control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) mechanism. Optimistic locking is applied to entities affected during a transaction. When the transaction is complete, the mechanism verifies that the locks have not been invalidated. For the user, locking optimism means that when transactions are competing with one another, the one that finishes first wins. Competing transactions fail with the `Transaction locks invalidated` error.
-* All changes made during the transaction accumulate in the database server memory and are committed when the transaction completes. If the locks are not invalidated, all the changes accumulated are committed atomically, but if at least one lock is invalidated, none of the changes are committed. The above model involves certain restrictions: changes made by a single transaction must fit inside available memory, and a transaction "doesn't see" its changes.
+* {{ ydb-short-name }} does not support transactions that combine DDL and DML queries. The conventional [ACID]{% if lang == "en" %}(https://en.wikipedia.org/wiki/ACID){% endif %}{% if lang == "ru" %}(https://ru.wikipedia.org/wiki/ACID){% endif %} notion of a transaction applies specifically to DML queries, that is, queries that change data. DDL queries must be idempotent, meaning repeatable if an error occurs. If you need to manipulate a schema, each manipulation is transactional, while a set of manipulations is not.
+* The YQL implementation used in {{ ydb-short-name }} employs the [Optimistic Concurrency Control](https://en.wikipedia.org/wiki/Optimistic_concurrency_control) mechanism. Entities affected during a transaction are locked optimistically. When the transaction completes, the mechanism verifies that the locks have not been invalidated. For the user, optimistic locking means that when transactions compete with one another, the one that finishes first wins. Competing transactions fail with the `Transaction locks invalidated` error.
+* All changes made during the transaction accumulate in the database server memory and are applied when the transaction completes. If the locks are not invalidated, all the changes accumulated are committed atomically; if at least one lock is invalidated, none of the changes are committed. The above model involves certain restrictions: changes made by a single transaction must fit inside available memory, and a transaction "doesn't see" its changes.
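The optimistic locking behavior described above can be modeled outside of {{ ydb-short-name }} as a version check at commit time. This is a toy illustration of the general technique, not the actual {{ ydb-short-name }} implementation; all names in it are invented for the sketch:

```python
class LockInvalidated(Exception):
    """Raised when a competing commit invalidated our optimistic lock."""

class Store:
    """Toy key-value store with optimistic concurrency control."""
    def __init__(self):
        self.data = {}      # key -> value
        self.versions = {}  # key -> commit counter

    def begin(self):
        return Tx(self)

class Tx:
    def __init__(self, store):
        self.store = store
        self.reads = {}   # key -> version observed at read time
        self.writes = {}  # buffered changes, applied only at commit

    def read(self, key):
        # Note: reads do not see this transaction's own buffered writes,
        # mirroring the "doesn't see its changes" restriction above.
        self.reads[key] = self.store.versions.get(key, 0)
        return self.store.data.get(key)

    def write(self, key, value):
        self.writes[key] = value  # accumulated in memory, not yet visible

    def commit(self):
        # Verify that no key we read was changed by a competing commit.
        for key, seen in self.reads.items():
            if self.store.versions.get(key, 0) != seen:
                raise LockInvalidated(f"Transaction locks invalidated: {key}")
        for key, value in self.writes.items():
            self.store.data[key] = value
            self.store.versions[key] = self.store.versions.get(key, 0) + 1

# Two competing transactions: the one that commits first wins.
store = Store()
store.data["balance"], store.versions["balance"] = 100, 1

tx1, tx2 = store.begin(), store.begin()
tx1.write("balance", tx1.read("balance") + 10)
tx2.write("balance", tx2.read("balance") - 10)
tx1.commit()  # finishes first and wins
err = None
try:
    tx2.commit()
except LockInvalidated as exc:
    err = exc
print(err)  # the competing transaction fails
```

Here the "lock" is just the per-key version observed at read time; a real implementation tracks ranges and handles distribution, but the user-visible outcome is the same: the loser must retry.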
A transaction should be structured so that the first part only reads data, while the second part only changes data. The query structure then looks as follows:
@@ -49,4 +50,3 @@ For more information about YQL support in {{ ydb-short-name }}, see the [YQL doc
A database table in {{ ydb-short-name }} can be sharded by the range of the primary key values. Different table shards can be served by different distributed database servers (including ones in different locations). They can also move independently between servers to enable rebalancing or ensure shard operability if servers or network equipment goes offline.
{{ ydb-short-name }} supports distributed transactions. Distributed transactions are transactions that affect more than one shard of one or more tables. They require more resources and take more time. While point reads and writes may take up to 10 ms in 99 percentile, distributed transactions typically take from 20 to 500 ms.
-
diff --git a/ydb/docs/en/core/concepts/cdc.md b/ydb/docs/en/core/concepts/cdc.md
index a9919484e58..b01822ef1f7 100644
--- a/ydb/docs/en/core/concepts/cdc.md
+++ b/ydb/docs/en/core/concepts/cdc.md
@@ -15,9 +15,9 @@ When adding, updating, or deleting a table row, CDC generates a change record by
* The number of topic partitions is fixed as of changefeed creation and remains unchanged (unlike tables, topics are not elastic).
* Changefeeds support records of the following types of operations:
* Updates
- * Deletes
+ * Erases
- Adding rows is a special case of updates, and a record of adding a row in a changefeed will look similar to an update record.
+ Adding a row is a special case of an update, so a record of adding a row in a changefeed looks similar to an update record.
## Virtual timestamps {#virtual-timestamps}
@@ -57,6 +57,8 @@ During the scanning process, depending on the table update frequency, you might
Depending on the [changefeed parameters](../yql/reference/syntax/alter_table.md#changefeed-options), the structure of a record may differ.
+### JSON format {#json-record-structure}
+
A [JSON](https://en.wikipedia.org/wiki/JSON) record has the following structure:
```json
@@ -72,9 +74,9 @@ A [JSON](https://en.wikipedia.org/wiki/JSON) record has the following structure:
* `key`: An array of primary key component values. Always present.
* `update`: Update flag. Present if a record matches the update operation. In `UPDATES` mode, it also contains the names and values of updated columns.
-* `erase`: Erase flag. Present if a record matches the delete operation.
-* `newImage`: Row snapshot that results from its change. Present in `NEW_IMAGE` and `NEW_AND_OLD_IMAGES` modes. Contains column names and values.
-* `oldImage`: Row snapshot before its change. Present in `OLD_IMAGE` and `NEW_AND_OLD_IMAGES` modes. Contains column names and values.
+* `erase`: Erase flag. Present if a record matches the erase operation.
+* `newImage`: Row snapshot after the change. Present in `NEW_IMAGE` and `NEW_AND_OLD_IMAGES` modes. Contains column names and values.
+* `oldImage`: Row snapshot before the change. Present in `OLD_IMAGE` and `NEW_AND_OLD_IMAGES` modes. Contains column names and values.
* `ts`: Virtual timestamp. Present if the `VIRTUAL_TIMESTAMPS` setting is enabled. Contains the value of the global coordinator time (`step`) and the unique transaction ID (`txId`).
> Sample record of an update in `UPDATES` mode:
@@ -129,13 +131,25 @@ A [JSON](https://en.wikipedia.org/wiki/JSON) record has the following structure:
{% note info %}
-* The same record may not contain the `update` and `erase` fields simultaneously, since these fields are operation flags (you can't update and erase a table row at the same time). However, each record contains one of these fields (any operation is either an update or erase).
+* The same record may not contain the `update` and `erase` fields simultaneously, since these fields are operation flags (you can't update and erase a table row at the same time). However, each record contains one of these fields (any operation is either an update or an erase).
* In `UPDATES` mode, the `update` field for update operations is an operation flag (update) and contains the names and values of updated columns.
* JSON object fields containing column names and values (`newImage`, `oldImage`, and `update` in `UPDATES` mode) *do not include* the columns that are primary key components.
* If a record contains the `erase` field (indicating that the record matches the erase operation), this is always an empty JSON object (`{}`).
{% endnote %}
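A consumer can rely on the flag rules above to classify incoming records. The sketch below is illustrative (the sample records are invented for the example, not taken from a real changefeed):

```python
import json

def classify(record: dict) -> str:
    """Classify a CDC JSON record as an update or an erase.

    Per the rules above, every record carries exactly one of the
    `update`/`erase` flags, and `erase` is always an empty object.
    """
    if "erase" in record:
        assert record["erase"] == {}  # erase flag is always an empty object
        return "erase"
    if "update" in record:
        return "update"
    raise ValueError("malformed record: neither update nor erase present")

# Illustrative records shaped like the structure described above.
update_rec = json.loads('{"key": [1], "update": {"text": "hello"}}')
erase_rec = json.loads('{"key": [2], "erase": {}}')

print(classify(update_rec))  # update
print(classify(erase_rec))   # erase
```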
+### Amazon DynamoDB-compatible JSON format {#dynamodb-streams-json-record-structure}
+
+For [Amazon DynamoDB](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Introduction.html)-compatible document tables, {{ ydb-short-name }} can generate change records in the [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.html)-compatible format.
+
+The record structure is the same as for [Amazon DynamoDB Streams](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_Record.html) records:
+* `awsRegion`: Contains the string passed in the `AWS_REGION` option when creating the changefeed.
+* `dynamodb`: [StreamRecord](https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_streams_StreamRecord.html).
+* `eventID`: Unique record ID.
+* `eventName`: `INSERT`, `MODIFY`, or `REMOVE`. `INSERT` events can only appear in the `NEW_AND_OLD_IMAGES` mode.
+* `eventSource`: Contains the `ydb:document-table` string.
+* `eventVersion`: Contains the `1.0` string.
+
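A consumer expecting this format can sanity-check the fixed fields listed above. The sample record below is an illustrative assumption shaped after the field descriptions, not captured output:

```python
# Illustrative DynamoDB Streams-compatible record; field values other than
# eventSource/eventVersion/eventName are placeholders for the sketch.
sample = {
    "awsRegion": "",  # value of the AWS_REGION changefeed option, if any
    "dynamodb": {"ApproximateCreationDateTime": 0, "Keys": {}},
    "eventID": "a1b2c3",
    "eventName": "MODIFY",
    "eventSource": "ydb:document-table",
    "eventVersion": "1.0",
}

def looks_like_ydb_stream_record(record: dict) -> bool:
    """Check the fixed fields described above."""
    return (
        record.get("eventSource") == "ydb:document-table"
        and record.get("eventVersion") == "1.0"
        and record.get("eventName") in {"INSERT", "MODIFY", "REMOVE"}
    )

print(looks_like_ydb_stream_record(sample))  # True
```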
## Record retention period {#retention-period}
By default, records are stored in the changefeed for 24 hours from the time they are sent. Depending on usage scenarios, the retention period can be reduced or increased up to 30 days.
@@ -152,7 +166,7 @@ To set up the record retention period, specify the [RETENTION_PERIOD](../yql/ref
## Creating and deleting a changefeed {#ddl}
-You can add a changefeed to an existing table or delete it using the [ADD CHANGEFEED and DROP CHANGEFEED](../yql/reference/syntax/alter_table.md#changefeed) directives of the YQL `ALTER TABLE` statement. When deleting a table, the changefeed added to it is also deleted.
+You can add a changefeed to an existing table or delete it using the [ADD CHANGEFEED and DROP CHANGEFEED](../yql/reference/syntax/alter_table.md#changefeed) directives of the YQL `ALTER TABLE` statement. When a table is deleted, its changefeeds are also deleted.
## CDC purpose and use {#best_practices}
diff --git a/ydb/docs/en/core/maintenance/embedded_monitoring/ydb_monitoring.md b/ydb/docs/en/core/maintenance/embedded_monitoring/ydb_monitoring.md
index 36edb667e07..6d3d1940479 100644
--- a/ydb/docs/en/core/maintenance/embedded_monitoring/ydb_monitoring.md
+++ b/ydb/docs/en/core/maintenance/embedded_monitoring/ydb_monitoring.md
@@ -28,7 +28,7 @@ Below is the cluster summary:
* **Storage**: The used/total storage space.
* **Versions**: The list of {{ ydb-short-name }} versions run on the cluster nodes.
-Next are the lists of [tenants](#tenant_list_page) and [nodes](#node_list_page) on the Tenants and Nodes tabs, respectively.
+Next, you will find lists of [tenants](#tenant_list_page) and [nodes](#node_list_page) on the Tenants and Nodes tabs, respectively.
### Tenant list {#tenant_list_page}
@@ -36,13 +36,13 @@ Next are the lists of [tenants](#tenant_list_page) and [nodes](#node_list_page)
The tenants list contains the following information on each tenant:
-* **Tenant**: The tenant's path.
-* **Name**: The tenant's name.
-* **Type**: The tenant's type.
-* **State**: The tenant's health.
+* **Tenant**: Tenant path.
+* **Name**: Tenant name.
+* **Type**: Tenant type.
+* **State**: Tenant health.
* **CPU**: CPU utilization by the tenant's nodes.
* **Memory**: RAM consumption by the tenant's nodes.
-* **Storage**: The estimated amount of data stored by the tenant.
+* **Storage**: Estimated amount of data stored by the tenant.
* **Pools usage**: CPU usage by the nodes broken down by the internal stream pools.
* **Tablets States**: Tablets running in the given tenant.
@@ -54,17 +54,17 @@ If you click on the tenant's path, you can go to the [tenant page](#tenant_page)
The list includes all nodes in the cluster. For each node, you can see:
-* **#**: The node ID.
+* **#**: Node ID.
* **Host**: The host running the node.
* **Endpoints**: The ports being listened to.
* **Version**: The {{ ydb-short-name }} version being run.
* **Uptime**: The node uptime.
* **Memory used**: The amount of RAM used.
* **Memory limit**: The RAM utilization limit set via cgroup.
-* **Pools usage**: CPU utilization broken down by the internal thread pools.
+* **Pools usage**: CPU utilization broken down by the internal stream pools.
* **Load average**: The average CPU utilization on the host.
-By clicking the host name, you can go to the [node page](#node_page).
+To open the [node page](#node_page), click the host name.
## Node page {#node_page}
@@ -77,24 +77,27 @@ http://<endpoint>:8765/monitoring/node/<node-id>/
Information about the node is presented in the following sections:
* **Pools**: CPU utilization broken down by the internal stream pools, with roughly the following pool functions:
- * **System**: The tasks of critical system components.
- * **User**: User tasks, queries executed by tablets.
- * **Batch**: Long-running background tasks.
- * **IO**: Blocking I/O operations.
- * **IC**: Networking operations.
- High pool utilization might degrade performance and increase the system response time.
+ * **System**: The tasks of critical system components.
+ * **User**: User tasks, queries executed by tablets.
+ * **Batch**: Long-running background tasks.
+ * **IO**: Blocking I/O operations.
+ * **IC**: Networking operations.
+
+ High pool utilization might degrade performance and increase the system response time.
* **Common info**: Basic information about the node:
- * **Version**: The {{ ydb-short-name }} version.
- * **Uptime**: The node uptime.
- * **DC**: The availability zone where the node resides.
- * **Rack**: The ID of the rack where the node resides.
+
+ * **Version**: The {{ ydb-short-name }} version.
+ * **Uptime**: The node uptime.
+ * **DC**: The availability zone where the node resides.
+ * **Rack**: The ID of the rack where the node resides.
* **Load average**: Average host CPU utilization for different time intervals:
- * 1 minute.
- * 5 minutes.
- * 15 minutes.
+
+ * 1 minute.
+ * 5 minutes.
+ * 15 minutes.
The node page has the Storage and Tablets tabs with a [list of storage groups](#node_storage_page) and a [list of tablets](#node_tablets_page), respectively.
@@ -127,7 +130,7 @@ Many {{ ydb-short-name }} components are implemented as tablets. The system can
In the upper part of the list of tablets, there's a big indicator for tablets running on the given node. It shows the ratio of fully launched and running tablets.
-Under the indicator, you can see a list of tablets, with each tablet represented by a small [color indicator](#colored_indicator) icon. When you hover over the indicator, the tablet summary is shown:
+Under the indicator, you can see a list of tablets, where each tablet is shown as a small [color indicator](#colored_indicator) icon. When you hover over the indicator, the tablet summary is shown:
* **Tablet**: The ID of the tablet.
* **NodeID**: The ID of the node where the tablet resides.
@@ -146,7 +149,7 @@ Like the previous pages, this page includes the tenant summary, but unlike the o
In the `Tenant Info` section, you can see the following information:
-* **Pools**: The total CPU utilization by the tenant nodes broken down by internal stream pools (for more information about pools, see the [tenant page](#tenant_page)).
+* **Pools**: Total CPU utilization by the tenant nodes broken down by internal stream pools (for more information about pools, see the [tenant page](#tenant_page)).
* **Metrics**: Data about tablet utilization for this tenant:
* **Memory**: The RAM utilized by tablets.
@@ -161,10 +164,10 @@ In the `Tenant Info` section, you can see the following information:
The tenant page also includes the following tabs:
* **HealthCheck**: The report regarding cluster issues, if any.
-* **Storage**: The [list of storage groups](#tenant_storage_page) that includes information about which VDisks reside on which nodes and block store volumes.
-* **Compute**: The [list of nodes](#tenant_compute_page), which includes the nodes and tablets running on them.
-* **Schema**: The [tenant's schema](#tenant_scheme) that lets you view tables, execute YQL queries, view a list of the slowest queries and the most loaded shards.
-* **Network**: The [health of the cluster network](#tenant_network).
+* **Storage**: [List of storage groups](#tenant_storage_page) showing the nodes and block store volumes hosting each VDisk.
+* **Compute**: [List of nodes](#tenant_compute_page) showing the nodes and tablets running on them.
+* **Schema**: [Tenant schema](#tenant_scheme) that enables you to view tables, execute YQL queries, view a list of the slowest queries and the most loaded shards.
+* **Network**: [Cluster network health](#tenant_network).
### List of storage groups in the tenant {#tenant_storage_page}
@@ -176,7 +179,7 @@ Here you can see a list of nodes belonging to the current tenant. If the tenant
Each node is represented by the following parameters:
-* **#**: The node ID.
+* **#**: Node ID.
* **Host**: The host running the node.
* **Uptime**: The node uptime.
* **Endpoints**: The ports being listened to.
@@ -211,6 +214,12 @@ Whenever you select a node, the right side of the screen shows details about the
Whenever you select the ID and Racks checkboxes, you can also see the IDs of nodes and their location in racks.
+## Monitoring static groups {#static-group}
+
+To perform a health check on a static group, go to the ![embedded-storage](../../_assets/embedded-storage.svg) **Storage** panel. By default, it shows a list of groups with issues.
+
+Enter `static` in the search bar. If the result is empty, no issues have been found in the static group. If the panel does show a **static** group, check the health of its VDisks. Green (no issues) and blue (VDisk replication in progress) indicators are acceptable. A red indicator signals an issue; hover over it to see a text description of the issue.
+
## Health indicators {#colored_indicator}
To the left of a component name, you might see a color indicating its health status.
@@ -223,4 +232,3 @@ The indicator colors have the following meaning:
* **Red**: There are critical problems, the component is down (or runs with limitations).
If a component includes other components, then in the absence of its own issues, the state is determined by aggregating the states of its parts.
-
diff --git a/ydb/docs/en/core/maintenance/manual/cluster_expansion.md b/ydb/docs/en/core/maintenance/manual/cluster_expansion.md
index 34c4ee2dd21..41b8faf4248 100644
--- a/ydb/docs/en/core/maintenance/manual/cluster_expansion.md
+++ b/ydb/docs/en/core/maintenance/manual/cluster_expansion.md
@@ -1,10 +1,10 @@
-# Cluster extension
+# Expanding a cluster
You can expand a {{ ydb-short-name }} cluster by adding new nodes to its configuration. Below is the list of actions for expanding a {{ ydb-short-name }} cluster installed manually on VM instances or physical servers. In the Kubernetes environment, clusters are expanded by adjusting the {{ ydb-short-name }} controller settings for Kubernetes.
When expanding your {{ ydb-short-name }} cluster, you do not have to pause user access to databases. When the cluster is expanded, its components are restarted to apply the updated configurations. This means that any transactions that were in progress at the time of expansion may need to be executed again on the cluster. The transactions are rerun automatically because the applications leverage the {{ ydb-short-name }} SDK features for error control and transaction rerun.
-## Preparing new servers
+## Preparing new servers {#add-host}
If you deploy new static or dynamic nodes of the cluster on new servers added to the expanded {{ ydb-short-name }} cluster, on each new server, you need to install the {{ ydb-short-name }} software according to the procedures described in the [cluster deployment instructions](../../deploy/manual/deploy-ydb-on-premises.md). Among other things, you need to:
@@ -15,17 +15,17 @@ If you deploy new static or dynamic nodes of the cluster on new servers added to
The TLS certificates used on the new servers must meet the [requirements for filling out the fields](../../deploy/manual/deploy-ydb-on-premises.md#tls-certificates) and be signed by the same trusted certification authority that signed the certificates for the existing servers of the expanded {{ ydb-short-name }} cluster.
-## Adding dynamic nodes
+## Adding dynamic nodes {#add-dynamic-node}
By adding dynamic nodes, you can expand the available computing resources (CPU cores and RAM) needed for your {{ ydb-short-name }} cluster to process user queries.
-To add a dynamic node to the cluster, run the process that serves this node, passing to it, in the command line parameters, the name of the served database and the addresses of any three static nodes of the {{ ydb-short-name }} cluster, as shown in the [cluster deployment instructions](../../deploy/manual/deploy-ydb-on-premises.md#start-dynnode).
+To add a dynamic node to the cluster, start the process that serves this node, passing it the name of the database it serves and the addresses of any three static nodes of the {{ ydb-short-name }} cluster as command-line options, as shown in the [cluster deployment instructions](../../deploy/manual/deploy-ydb-on-premises.md#start-dynnode).
Once you have added the dynamic node to the cluster, the information about it becomes available on the [cluster monitoring page in the built-in UI](../embedded_monitoring/ydb_monitoring.md).
To remove a dynamic node from the cluster, stop the process on the dynamic node.
-## Adding static nodes
+## Adding static nodes {#add-static-node}
By adding static nodes, you can increase the throughput of your I/O operations and increase the available storage capacity in your {{ ydb-short-name }} cluster.
@@ -35,7 +35,7 @@ To add static nodes to the cluster, perform the following steps:
1. Edit the [cluster's configuration file](../../deploy/manual/deploy-ydb-on-premises.md#config):
   * Add the descriptions of the added nodes (in the `hosts` section) and of the disks they use (in the `host_configs` section) to the configuration.
- * Use the `storage_config_generation: K` parameter to set the ID of the configuration update at the top level, where `K` is the integer update ID (for the initial config, `K=0` or omitted; for the first expansion, `K=1`; for the second expansion, `K=2`; and so on).
+ * Use the `storage_config_generation: K` option to set the ID of the configuration update at the top level, where `K` is the integer update ID (for the initial config, `K=0` or omitted; for the first expansion, `K=1`; for the second expansion, `K=2`; and so on).
1. Copy the updated cluster's configuration file to all the existing and added servers in the cluster, overwriting the old version of the configuration file.
@@ -54,7 +54,7 @@ To add static nodes to the cluster, perform the following steps:
--user root auth get-token --force >token-file
```
- The command example above uses the following parameters:
+ The command example above uses the following options:
* `node1.ydb.tech`: The FQDN of any server hosting the cluster's static nodes.
* `2135`: Port number of the gRPCs service for the static nodes.
* `ca.crt`: Name of the file with the certificate authority certificate.
@@ -72,7 +72,7 @@ To add static nodes to the cluster, perform the following steps:
echo $?
```
- The command example above uses the following parameters:
+ The command example above uses the following options:
* `ydbd-token-file`: File name of the previously issued authentication token.
* `2135`: Port number of the gRPCs service for the static nodes.
* `ca.crt`: Name of the file with the certificate authority certificate.
@@ -92,7 +92,7 @@ To add static nodes to the cluster, perform the following steps:
echo $?
```
- The command example above uses the following parameters:
+ The command example above uses the following options:
* `ydbd-token-file`: File name of the previously issued authentication token.
* `2135`: Port number of the gRPCs service for the static nodes.
* `ca.crt`: Name of the file with the certificate authority certificate.
diff --git a/ydb/docs/en/core/maintenance/manual/index.md b/ydb/docs/en/core/maintenance/manual/index.md
index d30d2788839..3e2ed00044f 100644
--- a/ydb/docs/en/core/maintenance/manual/index.md
+++ b/ydb/docs/en/core/maintenance/manual/index.md
@@ -4,19 +4,21 @@ Managing a cluster's disk subsystem includes the following actions:
* Editing the cluster configuration:
- * [{#T}](cluster_expansion.md).
- * [{#T}](adding_storage_groups.md).
+ * [{#T}](cluster_expansion.md)
+ * [{#T}](adding_storage_groups.md)
+ * [{#T}](../../administration/state-storage-move.md)
+ * [{#T}](../../administration/static-group-move.md)
* Maintenance:
- * [{#T}](node_restarting.md).
- * [{#T}](scrubbing.md).
- * [{#T}](selfheal.md).
- * [{#T}](../../administration/decommissioning.md).
- * [{#T}](moving_vdisks.md).
+ * [{#T}](node_restarting.md)
+ * [{#T}](scrubbing.md)
+ * [{#T}](selfheal.md)
+ * [{#T}](../../administration/decommissioning.md)
+ * [{#T}](moving_vdisks.md)
* Troubleshooting:
- * [{#T}](failure_model.md).
- * [{#T}](balancing_load.md).
- * [{#T}](disk_end_space.md).
+ * [{#T}](failure_model.md)
+ * [{#T}](balancing_load.md)
+ * [{#T}](disk_end_space.md)
diff --git a/ydb/docs/en/core/maintenance/manual/toc_i.yaml b/ydb/docs/en/core/maintenance/manual/toc_i.yaml
index 9eed2863468..2bfb8fc52b9 100644
--- a/ydb/docs/en/core/maintenance/manual/toc_i.yaml
+++ b/ydb/docs/en/core/maintenance/manual/toc_i.yaml
@@ -1,14 +1,22 @@
items:
-- name: "{{ ydb-short-name }} DSTool"
+- name: "{{ ydb-short-name }} DSTool utility"
items:
- name: Overview
href: ../../administration/ydb-dstool-overview.md
- name: Installation
href: ../../administration/ydb-dstool-setup.md
-- name: Cluster extension
+ - name: Global options
+ href: ../../administration/ydb-dstool-global-options.md
+ - name: device list
+ href: ../../administration/ydb-dstool-device-list.md
+- name: Expanding a cluster
href: cluster_expansion.md
- name: Adding storage groups
href: adding_storage_groups.md
+- name: Moving State Storage
+ href: ../../administration/state-storage-move.md
+- name: Moving a static group
+ href: ../../administration/static-group-move.md
- name: Safe restart and shutdown of nodes
href: node_restarting.md
- name: Enabling/disabling Scrubbing
diff --git a/ydb/docs/en/core/reference/ydb-cli/commands/workload/_includes/index.md b/ydb/docs/en/core/reference/ydb-cli/commands/workload/_includes/index.md
index af05665a890..ac2313e308d 100644
--- a/ydb/docs/en/core/reference/ydb-cli/commands/workload/_includes/index.md
+++ b/ydb/docs/en/core/reference/ydb-cli/commands/workload/_includes/index.md
@@ -8,7 +8,7 @@ General format of the command:
{{ ydb-cli }} [global options...] workload [subcommands...]
```
-* `global options`: [Global parameters](../../../commands/global-options.md).
+* `global options`: [Global options](../../../commands/global-options.md).
* `subcommands`: The [subcommands](#subcomands).
See the description of the command to run the data load:
diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md
index bf01388fef5..5ceb05a9152 100644
--- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md
+++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md
@@ -16,22 +16,23 @@ General format of the command:
{% include [conn_options_ref.md](../../commands/_includes/conn_options_ref.md) %}
-## Parameters of the subcommand {#options}
+## Subcommand options {#options}
-### Required parameters {#required}
+### Required options {#required}
* `-p, --path STRING`: A path to the table in the database.
* `--input-file STRING`: A path to the imported file in the local file system.
-### Additional parameters {#optional}
+### Additional options {#optional}
* `--skip-rows NUM`: A number of rows from the beginning of the file that will be skipped at import. The default value is `0`.
* `--header`: Use this option if the first row (excluding the rows skipped by `--skip-rows`) includes names of data columns to be mapped to table columns. If the header row is missing, the data is mapped according to the order in the table schema.
* `--delimiter STRING`: The data column delimiter character. You can't use the tabulation character as a delimiter in this option. For tab-delimited import, use the `import file tsv` subcommand. Default value: `,`.
* `--null-value STRING`: The value to be imported as `NULL`. Default value: `""`.
* `--batch-bytes VAL`: Split the imported file into batches of the specified size. If a row doesn't fit into the current batch completely, it's carried over to the next batch. Regardless of the batch size, each batch includes at least one row. Default value: `1 MiB`.
-* `--max-in-flight VAL`: The number of data batches imported in parallel. You can increase the value of this parameter to accelerate importation of large files. The default value is `100`.
-* `--newline-delimited`: This flag guarantees that there will be no line breaks in records. If this flag is set, and the data is loaded from a file, then different upload streams will process different parts of the source file. This way, you can ensure maximum performance when uploading sorted datasets to partitioned tables, by distributing the workload across all partitions.
+* `--max-in-flight VAL`: The number of data batches imported in parallel. You can increase this option value to import large files faster. The default value is `100`.
+* `--columns`: A list of data column names in the file, comma-separated for the `csv` format or tab-separated for the `tsv` format. If used together with the `--header` option, the names from this list replace the column names in the header row. If the number of columns in the list doesn't match the number of data columns in the file, an error is returned.
+* `--newline-delimited`: This flag guarantees that there will be no line breaks in records. If this flag is set, and the data is loaded from a file, then different upload streams will process different parts of the source file. This way you can distribute the workload across all partitions, ensuring the maximum performance when uploading sorted datasets to partitioned tables.
## Examples {#examples}
@@ -66,7 +67,7 @@ The following data will be imported:
┌──────────────┬───────────┬───────────────────────────────────────────────────────────┬──────────────────┐
| release_date | series_id | series_info | title |
├──────────────┼───────────┼───────────────────────────────────────────────────────────┼──────────────────┤
-| "2006-02-03" | 1 | "The IT Crowd is a British sitcom." | "The IT Crowd" |
+| "2006-02-03" | 1 | "The IT Crowd is a British sitcom." | "IT Crowd" |
├──────────────┼───────────┼───────────────────────────────────────────────────────────┼──────────────────┤
| "2014-04-06" | 2 | "Silicon Valley is an American comedy television series." | "Silicon Valley" |
└──────────────┴───────────┴───────────────────────────────────────────────────────────┴──────────────────┘
@@ -127,7 +128,7 @@ The following data will be imported:
┌──────────────┬───────────┬─────────────────────────────────────┬──────────────────┐
| release_date | series_id | series_info | title |
├──────────────┼───────────┼─────────────────────────────────────┼──────────────────┤
-| "2006-02-03" | 1 | "The IT Crowd is a British sitcom." | "The IT Crowd" |
+| "2006-02-03" | 1 | "The IT Crowd is a British sitcom." | "IT Crowd" |
├──────────────┼───────────┼─────────────────────────────────────┼──────────────────┤
| null | 2 | "" | "Silicon Valley" |
├──────────────┼───────────┼─────────────────────────────────────┼──────────────────┤
diff --git a/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml b/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml
index 30ace93d682..c8664a8c157 100644
--- a/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml
+++ b/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml
@@ -7,7 +7,7 @@ items:
href: commands.md
- name: Service commands
href: commands/service.md
- - name: Global parameters
+ - name: Global options
href: commands/global-options.md
- name: YDB CLI commands
items:
@@ -115,5 +115,5 @@ items:
href: workload-click-bench.md
- name: Key-Value load
href: workload-kv.md
- # - name: Topic load
- # href: workload-topic.md
+ - name: Topic load
+ href: workload-topic.md
diff --git a/ydb/docs/en/core/reference/ydb-cli/workload-topic.md b/ydb/docs/en/core/reference/ydb-cli/workload-topic.md
index 6186ac06914..da2794d38da 100644
--- a/ydb/docs/en/core/reference/ydb-cli/workload-topic.md
+++ b/ydb/docs/en/core/reference/ydb-cli/workload-topic.md
@@ -1,290 +1,252 @@
-# Topic workload
+# Topic load
-The workload simulates the publish-subscribe architectural pattern using [YDB topics](../../concepts/topic.md).
+Applies load to your {{ ydb-short-name }} [topics](../../concepts/topic.md), using them as message queues. You can use a variety of input parameters to simulate a production load: the number of messages, message size, target write rate, and the number of consumers and producers.
-The workload allow us to generate and consume high volumes of data through your YDB cluster in order to measure it's performance characteristics such as throughout and latency.
+As you apply load to your topic, the console displays the results (the number of written messages, message write rate, and others).
-The tests can be configured to match your real workloads. The number of messages, message sizes, target throughput, the number of producers and consumers can be adjusted.
+To generate load against your topic:
-Test outputs include the number of messages and the amount of data transferred including latencies in order to understand the real world performance characteristics of your cluster.
+1. [Initialize the load](#init).
+1. Run one of the available load types:
+ * [write](#run-write): Generate messages and write them to the topic asynchronously.
+ * [read](#run-read): Read messages from the topic asynchronously.
+ * [full](#run-full): Read and write messages asynchronously in parallel.
-## Types of load{#workload-types}
+{% include [ydb-cli-profile.md](../../_includes/ydb-cli-profile.md) %}
-The load test runs 3 types of load:
+## Initializing a load test {#init}
-* [write](#run-write) — generate messages and write them to a topic asynchronously;
-* [read](#run-read) — read messages from a topic asynchronously;
-* [full](#run-full) — simultaneously read and write messages.
-
-## Load initialization {#init}
-
-To get started, create a topic:
-
-```bash
-{{ ydb-cli }} workload topic init [init options...]
-```
-
-* `init options` — [Initialization options](#init-options).
-
-See the description of the command to init the data load:
+Before running the load, you need to initialize it. Initialization creates a topic named `workload-topic` with the specified options. To initialize the load, run the command:
```bash
-{{ ydb-cli }} workload topic init --help
+{{ ydb-cli }} [global options...] workload topic init [options...]
```
-### Available parameters {#init-options}
-
-Parameter name | Short name | Parameter description
----|---|---
-`--partitions <value>` | `-p <value>` | Number of partitions in the topic. Default: 128.
-`--consumers <value>` | `-c <value>` | Number of consumers in the topic. Default: 1.
-
-The `workload-topic` topic will be created with the specified numbers of partitions and consumers.
-
-### Load initialization example{#init-topic-examples}
-
-Creating a topic with 256 partitions and 2 consumers:
-
-```bash
-{{ ydb-cli }} workload topic init --partitions 256 --consumers 2
-```
-
-## Clean {#clean}
-
-When the workload is complete, you can delete the `workload-topic` topic:
-
-```bash
-{{ ydb-cli }} workload topic clean
-```
+* `global options`: [Global options](commands/global-options.md).
+* `options`: Subcommand options.
-### Clean example {#clean-topic-examples}
+Subcommand options:
-```bash
-{{ ydb-cli }} workload topic clean
-```
+| Option name | Option description |
+---|---
+| `--partitions`, `-p` | Number of topic partitions.<br>Default value: `128`. |
+| `--consumers`, `-c` | Number of topic consumers.<br>Default value: `1`. |
-## Running a load test {#run}
+> To create a topic with `256` partitions and `2` consumers, run this command:
+>
+> ```bash
+> {{ ydb-cli }} --profile quickstart workload topic init --partitions 256 --consumers 2
+> ```
-To run the load, execute the command:
+## Write load {#run-write}
-```bash
-{{ ydb-cli }} workload topic run [workload type...] [specific workload options...]
-```
+This load type generates and writes messages to the topic asynchronously.
-* `workload type` — [The types of workload](#workload-types).
-* `global workload options` — [The global options for all types of load.](#global-workload-options).
-* `specific workload options` — Options of a specific load type.
-
-See the description of the command to run the workload:
+General format of the command that generates the write load:
```bash
-{{ ydb-cli }} workload topic run --help
+{{ ydb-cli }} [global options...] workload topic run write [options...]
```
-### The global options for all types of load {#global-workload-options}
-
-Parameter name | Short name | Parameter description
----|---|---
-`--seconds <value>` | `-s <value>` | Duration of the test (seconds). Default: 10.
-`--window <value>` | `-w <value>` | Statistics collection window (seconds). Default: 1.
-`--quiet` | `-q` | Outputs only the total result.
-`--print-timestamp` | - | Print the time together with the statistics of each time window.
-
-## Write workload {#run-write}
-
-This load type generate messages and send them into a topic asynchronously.
+* `global options`: [Global options](commands/global-options.md).
+* `options`: Subcommand options.
-To run this type of load, execute the command:
-
-```bash
-{{ ydb-cli }} workload topic run write [global workload options...] [specific workload options...]
-```
-
-See the description of the command to run the write workload:
+View the description of the command that generates the write load:
```bash
{{ ydb-cli }} workload topic run write --help
```
-### Write workload options {#run-write-options}
-
-Parameter name | Short name | Parameter description
----|---|---
-`--threads <value>` | `-t <value>` | Number of producer threads. Default: `1`.
-`--message-size <value>` | `-m <value>` | Message size (bytes). It can be specified in KB, MB, GB using one of suffixes: `K`, `M`, `G`. Default: `10K`.
-`--message-rate <value>` | - | Total message rate for all producer threads (messages per second). 0 - no limit. Default: `0`.
-`--byte-rate <value>` | - | Total message rate for all producer threads (bytes per second). 0 - no limit. It can be specified in KB/s, MB/s, GB/s using one of suffixes: `K`, `M`, `G`. Default: `0`.
-`--codec <value>` | - | Codec used for message compression on the client before sending them to the server. Possible values: `RAW` (no compression), `GZIP`, and `ZSTD`. Compression causes higher CPU utilization on the client when reading and writing messages, but usually lets you reduce the volume of data transferred over the network and stored. When consumers read messages, they're automatically decompressed with the codec used when writing them, without specifying any special options. Default: `RAW`.
+Subcommand options:
-Note: The options `--byte-rate` и `--message-rate` are mutually exclusive.
-
-### Write workload example{#run-write-examples}
-
-Example of a command to create 100 producer threads with a target speed of 80 MB/s and a duration of 300 seconds:
-
-```bash
-{{ ydb-cli }} workload topic run write --threads 100 --seconds 300 --byte-rate 80M
-```
-
-### Write workload output {#run-write-output}
-
-During the process of work, both intermediate and total statistics are printed. Example output:
-
-```text
-Window Write speed Write time Inflight
-# msg/s MB/s P99(ms) max msg
-1 20 0 1079 72
-2 8025 78 1415 78
-3 7987 78 1431 79
-4 7888 77 1471 101
-5 8126 79 1815 116
-6 7018 68 1447 79
-7 8938 87 2511 159
-8 7055 68 1463 78
-9 7062 69 1455 79
-10 9912 96 3679 250
-Window Write speed Write time Inflight
-# msg/s MB/s P99(ms) max msg
-Total 7203 70 3023 250
-```
-
-Column name | Column description
+| Option name | Option description |
---|---
-`Window`|The time window counter.
-`Write speed`|Write speed (messages/s and Mb/s).
-`Write time`|99 percentile of message write time (ms).
-`Inflight`|The maximum count of inflight messages.
-
-## Read workload {#run-read}
-
-This load type read messages from a topic asynchronously.
-
-To run this type of load, execute the command:
-
-```bash
-{{ ydb-cli }} workload topic run read [global workload options...] [specific workload options...]
-```
-
-See the description of the command to run the read workload:
+| `--seconds`, `-s` | Test duration in seconds.<br>Default value: `10`. |
+| `--window`, `-w` | Statistics window in seconds.<br>Default value: `1`. |
+| `--quiet`, `-q` | Output only the final test result. |
+| `--print-timestamp` | Print the time together with the statistics of each time window. |
+| `--threads`, `-t` | Number of producer threads.<br>Default value: `1`. |
+| `--message-size`, `-m` | Message size in bytes. Use the `K`, `M`, or `G` suffix to set the size in KB, MB, or GB, respectively.<br>Default value: `10K`. |
+| `--message-rate` | Total target write rate in messages per second. Can't be used together with the `--byte-rate` option.<br>Default value: `0` (no limit). |
+| `--byte-rate` | Total target write rate in bytes per second. Can't be used together with the `--message-rate` option. Use the `K`, `M`, or `G` suffix to set the rate in KB/s, MB/s, or GB/s, respectively.<br>Default value: `0` (no limit). |
+| `--codec` | Codec used to compress messages on the client before sending them to the server.<br>Compression increases CPU usage on the client when writing and reading messages but usually reduces the amount of data stored and transmitted over the network. When consumers read messages, they automatically decompress them using the codec the messages were written with; no special options are needed.<br>Acceptable values: `RAW` (no compression, default), `GZIP`, `ZSTD`. |
+
+> To write data using `100` producer threads at the target rate of `80` MB/s for `10` seconds, run this command:
+>
+> ```bash
+> {{ ydb-cli }} --profile quickstart workload topic run write --threads 100 --byte-rate 80M
+> ```
+>
+> You will see statistics for in-progress time windows and final statistics when the test is complete:
+>
+> ```text
+> Window Write speed Write time Inflight
+> # msg/s MB/s P99(ms) max msg
+> 1 20 0 1079 72
+> 2 8025 78 1415 78
+> 3 7987 78 1431 79
+> 4 7888 77 1471 101
+> 5 8126 79 1815 116
+> 6 7018 68 1447 79
+> 7 8938 87 2511 159
+> 8 7055 68 1463 78
+> 9 7062 69 1455 79
+> 10 9912 96 3679 250
+> Window Write speed Write time Inflight
+> # msg/s MB/s P99(ms) max msg
+> Total 7203 70 3023 250
+> ```
+>
+> * `Window`: Sequence number of the statistics window.
+> * `Write speed`: Message write rate in messages per second and MB/s.
+> * `Write time`: 99th percentile of the message write time, in milliseconds.
+> * `Inflight`: Maximum number of messages awaiting commit across all partitions.
+
+## Read load {#run-read}
+
+This type of load reads messages from the topic asynchronously. To make sure that the topic includes messages, run the [write load](#run-write) before you start reading.
+
+General format of the command to generate the read load:
+
+```bash
+{{ ydb-cli }} [global options...] workload topic run read [options...]
+```
+
+* `global options`: [Global options](commands/global-options.md).
+* `options`: Subcommand options.
+
+View the description of the command to generate the read load:
```bash
{{ ydb-cli }} workload topic run read --help
```
-### Read workload options {#run-read-options}
-
-Parameter name | Short name | Parameter description
----|---|---
-`--consumers <value>` | `-c <value>` | Number of consumers in the topic. Default: `1`.
-`--threads <value>` | `-t <value>` | Number of consumer threads. Default: `1`.
-
-### Read workload example {#run-read-examples}
-
-Example of a command to create 2 consumers with 100 threads each:
-
-```bash
-{{ ydb-cli }} workload topic run read --consumers 2 --threads 100
-```
-
-### Read workload output {#run-read-output}
-
-During the process of work, both intermediate and total statistics are printed. Example output:
-
-```text
-Window Lag Lag time Read speed Full time
-# max msg P99(ms) msg/s MB/s P99(ms)
-1 0 0 48193 471 0
-2 30176 0 66578 650 0
-3 30176 0 68999 674 0
-4 30176 0 66907 653 0
-5 27835 0 67628 661 0
-6 30176 0 67938 664 0
-7 30176 0 71628 700 0
-8 20338 0 61367 599 0
-9 30176 0 61770 603 0
-10 30176 0 58291 569 0
-Window Lag Lag time Read speed Full time
-# max msg P99(ms) msg/s MB/s P99(ms)
-Total 30176 0 80267 784 0
-```
+Subcommand options:
-Column name | Column description
+| Option name | Option description |
---|---
-`Window`|The time window counter.
-`Lag`|The maximum lag between producers and consumers across all the partitions (messages).
-`Lag time`|99 percentile of message delay time (ms).
-`Read`|Read speed (messages/s and MB/s).
-`Full time`|99 percentile of the end-to-end message time, from writing by the producer to reading by the reader.
-
-## Full workload {#run-full}
-
-This load type both write and read messages asynchronously.
-
-To run this type of load, execute the command:
-
-```bash
-{{ ydb-cli }} workload topic run full [global workload options...] [specific workload options...]
-```
-
-This command is equivalent to running both read and write load workloads simultaneously.
-
-See the description of the command to run the full workload:
+| `--seconds`, `-s` | Test duration in seconds.<br>Default value: `10`. |
+| `--window`, `-w` | Statistics window in seconds.<br>Default value: `1`. |
+| `--quiet`, `-q` | Output only the final test result. |
+| `--print-timestamp` | Print the time together with the statistics of each time window. |
+| `--consumers`, `-c` | Number of consumers.<br>Default value: `1`. |
+| `--threads`, `-t` | Number of consumer threads.<br>Default value: `1`. |
+
+> To use `2` consumers to read data from the topic, with `100` threads per consumer, run the following command:
+>
+> ```bash
+> {{ ydb-cli }} --profile quickstart workload topic run read --consumers 2 --threads 100
+> ```
+>
+> You will see statistics for in-progress time windows and final statistics when the test is complete:
+>
+> ```text
+> Window Lag Lag time Read speed Full time
+> # max msg P99(ms) msg/s MB/s P99(ms)
+> 1 0 0 48193 471 0
+> 2 30176 0 66578 650 0
+> 3 30176 0 68999 674 0
+> 4 30176 0 66907 653 0
+> 5 27835 0 67628 661 0
+> 6 30176 0 67938 664 0
+> 7 30176 0 71628 700 0
+> 8 20338 0 61367 599 0
+> 9 30176 0 61770 603 0
+> 10 30176 0 58291 569 0
+> Window Lag Lag time Read speed Full time
+> # max msg P99(ms) msg/s MB/s P99(ms)
+> Total 30176 0 80267 784 0
+> ```
+>
+> * `Window`: Sequence number of the statistics window.
+> * `Lag`: Maximum consumer lag in the statistics window. Messages across all partitions are included.
+> * `Lag time`: 99th percentile of the message lag time in milliseconds.
+> * `Read`: Message read rate for the consumer (in messages per second and MB/s).
+> * `Full time`: 99th percentile of the full message processing time (from writing by the producer to reading by the consumer), in milliseconds.
+
+## Read and write load {#run-full}
+
+This load type reads messages from the topic and writes new messages to it asynchronously. This command is equivalent to running the read and write loads in parallel.
+
+General format of the command to generate the read and write load:
+
+```bash
+{{ ydb-cli }} [global options...] workload topic run full [options...]
+```
+
+* `global options`: [Global options](commands/global-options.md).
+* `options`: Subcommand options.
+
+View the description of the command to run the read and write load:
```bash
{{ ydb-cli }} workload topic run full --help
```
-### Full workload options {#run-full-options}
-
-Parameter name | Short name | Parameter description
----|---|---
-`--producer-threads <value>` | `-p <value>` | Number of producer threads. Default: `1`.
-`--message-size <value>` | `-m <value>` | Message size (bytes). It can be specified in KB, MB, GB using one of suffixes: `K`, `M`, `G`. Default: `10K`.
-`--message-rate <value>` | - | Total message rate for all producer threads (messages per second). 0 - no limit. Default: `0`.
-`--byte-rate <value>` | - | Total message rate for all producer threads (bytes per second). 0 - no limit. It can be specified in KB/s, MB/s, GB/s using one of suffixes: `K`, `M`, `G`. Default: `0`.
-`--codec <value>` | - | Codec used for message compression on the client before sending them to the server. Possible values: `RAW` (no compression), `GZIP`, and `ZSTD`. Compression causes higher CPU utilization on the client when reading and writing messages, but usually lets you reduce the volume of data transferred over the network and stored. When consumers read messages, they're automatically decompressed with the codec used when writing them, without specifying any special options. Default: `RAW`.
-`--consumers <value>` | `-c <value>` | Number of consumers in the topic. Default: `1`.
-`--threads <value>` | `-t <value>` | Number of consumer threads. Default: `1`.
-
-Note: The options `--byte-rate` и `--message-rate` are mutually exclusive.
-
-### Full workload example {#run-full-examples}
-
-Example of a command to create 100 producer threads, 2 consumers the 50 consumer thread,a target speed of 80 MB/s and a duration of 300 seconds:
-
-```bash
-{{ ydb-cli }} workload topic run full --producer-threads 100 --consumers 2 --consumer-threads 50 --byte-rate 80M --seconds 300
-```
-
-### Ful workload output {#run-full-output}
-
-During the process of work, both intermediate and total statistics are printed. Example output:
-
-```text
-Window Write speed Write time Inflight Lag Lag time Read speed Full time
-# msg/s MB/s P99(ms) max msg max msg P99(ms) msg/s MB/s P99(ms)
-1 39 0 1215 4 0 0 30703 300 29716
-2 1091 10 2143 8 2076 20607 40156 392 30941
-3 1552 15 2991 12 7224 21887 41040 401 31886
-4 1733 16 3711 15 10036 22783 38488 376 32577
-5 1900 18 4319 15 10668 23551 34784 340 33372
-6 2793 27 5247 21 9461 24575 33267 325 34893
-7 2904 28 6015 22 12150 25727 34423 336 35507
-8 2191 21 5087 21 12150 26623 29393 287 36407
-9 1952 19 2543 10 7627 27391 33284 325 37814
-10 1992 19 2655 9 10104 28671 29101 284 38797
-Window Write speed Write time Inflight Lag Lag time Read speed Full time
-# msg/s MB/s P99(ms) max msg max msg P99(ms) msg/s MB/s P99(ms)
-Total 1814 17 5247 22 12150 28671 44827 438 40252
-```
+Subcommand options:
-Column name | Column description
+| Option name | Option description |
---|---
-`Window`|The time window counter.
-`Write speed`|Write speed (messages/s and Mb/s).
-`Write time`|99 percentile of message write time (ms).
-`Inflight`|The maximum count of inflight messages.
-`Lag`|The maximum lag between producers and consumers across all the partitions (messages).
-`Lag time`|99 percentile of message delay time (ms).
-`Read`|Read speed (messages/s and MB/s).
-`Full time`|99 percentile of the end-to-end message time, from writing by the producer to reading by the reader.
+| `--seconds`, `-s` | Test duration in seconds.<br>Default value: `10`. |
+| `--window`, `-w` | Statistics window in seconds.<br>Default value: `1`. |
+| `--quiet`, `-q` | Output only the final test result. |
+| `--print-timestamp` | Print the time together with the statistics of each time window. |
+| `--producer-threads`, `-p` | Number of producer threads.<br>Default value: `1`. |
+| `--message-size`, `-m` | Message size in bytes. Use the `K`, `M`, or `G` suffix to set the size in KB, MB, or GB, respectively.<br>Default value: `10K`. |
+| `--message-rate` | Total target write rate in messages per second. Can't be used together with the `--byte-rate` option.<br>Default value: `0` (no limit). |
+| `--byte-rate` | Total target write rate in bytes per second. Can't be used together with the `--message-rate` option. Use the `K`, `M`, or `G` suffix to set the rate in KB/s, MB/s, or GB/s, respectively.<br>Default value: `0` (no limit). |
+| `--codec` | Codec used to compress messages on the client before sending them to the server.<br>Compression increases CPU usage on the client when writing and reading messages but usually reduces the amount of data stored and transmitted over the network. When consumers read messages, they automatically decompress them using the codec the messages were written with; no special options are needed.<br>Acceptable values: `RAW` (no compression, default), `GZIP`, `ZSTD`. |
+| `--consumers`, `-c` | Number of consumers.<br>Default value: `1`. |
+| `--threads`, `-t` | Number of consumer threads.<br>Default value: `1`. |
+
+> To read data using `2` consumers with `50` threads each and write data using `100` producer threads at the target rate of `80` MB/s for `10` seconds, run the following command:
+>
+> ```bash
+> {{ ydb-cli }} --profile quickstart workload topic run full --producer-threads 100 --consumers 2 --consumer-threads 50 --byte-rate 80M
+> ```
+>
+> You will see statistics for in-progress time windows and final statistics when the test is complete:
+>
+> ```text
+> Window Write speed Write time Inflight Lag Lag time Read speed Full time
+> # msg/s MB/s P99(ms) max msg max msg P99(ms) msg/s MB/s P99(ms)
+> 1 39 0 1215 4 0 0 30703 300 29716
+> 2 1091 10 2143 8 2076 20607 40156 392 30941
+> 3 1552 15 2991 12 7224 21887 41040 401 31886
+> 4 1733 16 3711 15 10036 22783 38488 376 32577
+> 5 1900 18 4319 15 10668 23551 34784 340 33372
+> 6 2793 27 5247 21 9461 24575 33267 325 34893
+> 7 2904 28 6015 22 12150 25727 34423 336 35507
+> 8 2191 21 5087 21 12150 26623 29393 287 36407
+> 9 1952 19 2543 10 7627 27391 33284 325 37814
+> 10 1992 19 2655 9 10104 28671 29101 284 38797
+> Window Write speed Write time Inflight Lag Lag time Read speed Full time
+> # msg/s MB/s P99(ms) max msg max msg P99(ms) msg/s MB/s P99(ms)
+> Total 1814 17 5247 22 12150 28671 44827 438 40252
+> ```
+>
+> * `Window`: Sequence number of the statistics window.
+> * `Write speed`: Message write rate in messages per second and MB/s.
+> * `Write time`: 99th percentile of the message write time in milliseconds.
+> * `Inflight`: Maximum number of messages awaiting commit across all partitions.
+> * `Lag`: Maximum number of messages awaiting reading, in the statistics window. Messages across all partitions are included.
+> * `Lag time`: 99th percentile of the message lag time in milliseconds.
+> * `Read`: Message read rate for the consumer (in messages per second and MB/s).
+> * `Full time`: 99th percentile of the full message processing time, from writing by the producer to reading by the consumer, in milliseconds.
+
+## Deleting a topic {#clean}
+
+When the work is complete, you can delete the test topic. General format of the topic deletion command:
+
+```bash
+{{ ydb-cli }} [global options...] workload topic clean [options...]
+```
+
+* `global options`: [Global options](commands/global-options.md).
+* `options`: Subcommand options.
+
+> To delete the `workload-topic` test topic, run the following command:
+>
+> ```bash
+> {{ ydb-cli }} --profile quickstart workload topic clean
+> ```
diff --git a/ydb/docs/en/core/reference/ydb-sdk/recipes/index.md b/ydb/docs/en/core/reference/ydb-sdk/recipes/index.md
index 6e0bfda6c72..92234f221f1 100644
--- a/ydb/docs/en/core/reference/ydb-sdk/recipes/index.md
+++ b/ydb/docs/en/core/reference/ydb-sdk/recipes/index.md
@@ -27,6 +27,11 @@ Table of contents:
- [Setting the session pool size](session-pool-limit.md)
- [Inserting data](upsert.md)
- [Bulk upsert of data](bulk-upsert.md)
+<!-- - [Setting the transaction execution mode](tx-control.md)
+ - [SerializableReadWrite](tx-control-serializable-read-write.md)
+ - [OnlineReadOnly](tx-control-online-read-only.md)
+ - [StaleReadOnly](tx-control-stale-read-only.md)
+ - [SnapshotReadOnly](tx-control-snapshot-read-only.md) -->
- [Troubleshooting](debug.md)
- [Enable logging](debug-logs.md)
- [Enable metrics in Prometheus](debug-prometheus.md)
diff --git a/ydb/docs/en/core/reference/ydb-sdk/recipes/toc_i.yaml b/ydb/docs/en/core/reference/ydb-sdk/recipes/toc_i.yaml
index 5aaa0fe252d..b8aee6dcaee 100644
--- a/ydb/docs/en/core/reference/ydb-sdk/recipes/toc_i.yaml
+++ b/ydb/docs/en/core/reference/ydb-sdk/recipes/toc_i.yaml
@@ -37,6 +37,8 @@ items:
href: upsert.md
- name: Bulk-upserting data
href: bulk-upsert.md
+- name: Setting up the transaction execution mode
+ href: tx-control.md
- name: Troubleshooting
items:
- name: Overview
diff --git a/ydb/docs/en/core/reference/ydb-sdk/recipes/tx-control.md b/ydb/docs/en/core/reference/ydb-sdk/recipes/tx-control.md
new file mode 100644
index 00000000000..e91a50b2451
--- /dev/null
+++ b/ydb/docs/en/core/reference/ydb-sdk/recipes/tx-control.md
@@ -0,0 +1,188 @@
+---
+title: "Overview of the code recipe for setting up the transaction execution mode in {{ ydb-short-name }}"
+description: "In this article, you will learn how to set up the transaction execution mode in different SDKs to execute queries against {{ ydb-short-name }}."
+---
+
+# Setting up the transaction execution mode
+
+To run your queries, first you need to specify the [transaction execution mode](../../../concepts/transactions.md#modes) in the {{ ydb-short-name }} SDK.
+
+Below are code examples that use the {{ ydb-short-name }} SDK built-in tools to create a *transaction execution mode* object.
+
+{% include [work in progress message](_includes/addition.md) %}
+
+## Serializable {#serializable}
+
+{% list tabs %}
+
+- Go (native)
+
+ ```go
+ package main
+
+ import (
+ "context"
+ "fmt"
+ "os"
+
+ "github.com/ydb-platform/ydb-go-sdk/v3"
+ "github.com/ydb-platform/ydb-go-sdk/v3/table"
+ )
+
+ func main() {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ db, err := ydb.Open(ctx,
+ os.Getenv("YDB_CONNECTION_STRING"),
+ ydb.WithAccessTokenCredentials(os.Getenv("YDB_TOKEN")),
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer db.Close(ctx)
+ txControl := table.TxControl(
+ table.BeginTx(table.WithSerializableReadWrite()),
+ table.CommitTx(),
+ )
+ err = db.Table().Do(ctx, func(ctx context.Context, s table.Session) error {
+ _, _, err := s.Execute(ctx, txControl, "SELECT 1", nil)
+ return err
+ })
+ if err != nil {
+ fmt.Printf("unexpected error: %v", err)
+ }
+ }
+ ```
+
+{% endlist %}
+
+## Online Read-Only {#online-read-only}
+
+{% list tabs %}
+
+- Go (native)
+
+ ```go
+ package main
+
+ import (
+ "context"
+ "fmt"
+ "os"
+
+ "github.com/ydb-platform/ydb-go-sdk/v3"
+ "github.com/ydb-platform/ydb-go-sdk/v3/table"
+ )
+
+ func main() {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ db, err := ydb.Open(ctx,
+ os.Getenv("YDB_CONNECTION_STRING"),
+ ydb.WithAccessTokenCredentials(os.Getenv("YDB_TOKEN")),
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer db.Close(ctx)
+ txControl := table.TxControl(
+ table.BeginTx(table.WithOnlineReadOnly(table.WithInconsistentReads())),
+ table.CommitTx(),
+ )
+ err = db.Table().Do(ctx, func(ctx context.Context, s table.Session) error {
+ _, _, err := s.Execute(ctx, txControl, "SELECT 1", nil)
+ return err
+ })
+ if err != nil {
+ fmt.Printf("unexpected error: %v", err)
+ }
+ }
+ ```
+
+{% endlist %}
+
+## Stale Read-Only {#stale-read-only}
+
+{% list tabs %}
+
+- Go (native)
+
+ ```go
+ package main
+
+ import (
+ "context"
+ "fmt"
+ "os"
+
+ "github.com/ydb-platform/ydb-go-sdk/v3"
+ "github.com/ydb-platform/ydb-go-sdk/v3/table"
+ )
+
+ func main() {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ db, err := ydb.Open(ctx,
+ os.Getenv("YDB_CONNECTION_STRING"),
+ ydb.WithAccessTokenCredentials(os.Getenv("YDB_TOKEN")),
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer db.Close(ctx)
+ txControl := table.TxControl(
+ table.BeginTx(table.WithStaleReadOnly()),
+ table.CommitTx(),
+ )
+ err = db.Table().Do(ctx, func(ctx context.Context, s table.Session) error {
+ _, _, err := s.Execute(ctx, txControl, "SELECT 1", nil)
+ return err
+ })
+ if err != nil {
+ fmt.Printf("unexpected error: %v", err)
+ }
+ }
+ ```
+
+{% endlist %}
+
+## Snapshot Read-Only {#snapshot-read-only}
+
+{% list tabs %}
+
+- Go (native)
+
+ ```go
+ package main
+
+ import (
+ "context"
+ "fmt"
+ "os"
+
+ "github.com/ydb-platform/ydb-go-sdk/v3"
+ "github.com/ydb-platform/ydb-go-sdk/v3/table"
+ )
+
+ func main() {
+ ctx, cancel := context.WithCancel(context.Background())
+ defer cancel()
+ db, err := ydb.Open(ctx,
+ os.Getenv("YDB_CONNECTION_STRING"),
+ ydb.WithAccessTokenCredentials(os.Getenv("YDB_TOKEN")),
+ )
+ if err != nil {
+ panic(err)
+ }
+ defer db.Close(ctx)
+ txControl := table.TxControl(
+ table.BeginTx(table.WithSnapshotReadOnly()),
+ table.CommitTx(),
+ )
+ err = db.Table().Do(ctx, func(ctx context.Context, s table.Session) error {
+ _, _, err := s.Execute(ctx, txControl, "SELECT 1", nil)
+ return err
+ })
+ if err != nil {
+ fmt.Printf("unexpected error: %v", err)
+ }
+ }
+ ```
+
+{% endlist %}
diff --git a/ydb/docs/en/core/yql/reference/yql-core/syntax/_includes/alter_table.md b/ydb/docs/en/core/yql/reference/yql-core/syntax/_includes/alter_table.md
index a6531267ff1..30a6bc213c3 100644
--- a/ydb/docs/en/core/yql/reference/yql-core/syntax/_includes/alter_table.md
+++ b/ydb/docs/en/core/yql/reference/yql-core/syntax/_includes/alter_table.md
@@ -81,10 +81,12 @@ ALTER TABLE `series` RENAME INDEX `title_index` TO `title_index_new`;
* `OLD_IMAGE`: Any column values before updates are written.
* `NEW_AND_OLD_IMAGES`: A combination of `NEW_IMAGE` and `OLD_IMAGE` modes. Any column values _prior to_ and _resulting from_ updates are written.
* `FORMAT`: Data write format.
- * `JSON`: The record structure is given on the [changefeed description](../../../../concepts/cdc#record-structure) page.
+ * `JSON`: Write data in [JSON](../../../../concepts/cdc#json-record-structure) format.
+ * `DYNAMODB_STREAMS_JSON`: Write data in the [JSON format compatible with Amazon DynamoDB Streams](../../../../concepts/cdc#dynamodb-streams-json-record-structure).
* `VIRTUAL_TIMESTAMPS`: Enabling/disabling [virtual timestamps](../../../../concepts/cdc#virtual-timestamps). Disabled by default.
* `RETENTION_PERIOD`: [Record retention period](../../../../concepts/cdc#retention-period). The value type is `Interval` and the default value is 24 hours (`Interval('PT24H')`).
* `INITIAL_SCAN`: Enables/disables [initial table scan](../../../../concepts/cdc#initial-scan). Disabled by default.
+* `AWS_REGION`: Value to be written to the `awsRegion` field. Used only with the `DYNAMODB_STREAMS_JSON` format.
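+
+For the `DYNAMODB_STREAMS_JSON` format, the changefeed declaration might look as follows (a sketch only; the table name `documents` and changefeed name `dynamodb_feed` are hypothetical, and this format assumes a DynamoDB-compatible document table):
+
+```yql
+ALTER TABLE `documents` ADD CHANGEFEED `dynamodb_feed` WITH (
+    FORMAT = 'DYNAMODB_STREAMS_JSON',
+    MODE = 'NEW_AND_OLD_IMAGES',
+    AWS_REGION = 'ru-central1'
+);
+```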
The code below adds a changefeed named `updates_feed`, where the values of updated table columns will be exported in JSON format:
diff --git a/ydb/docs/ru/core/changelog-server.md b/ydb/docs/ru/core/changelog-server.md
index 7a786107976..b82c539e8cf 100644
--- a/ydb/docs/ru/core/changelog-server.md
+++ b/ydb/docs/ru/core/changelog-server.md
@@ -14,7 +14,7 @@
* Улучшены форматы передачи данных между стадиями исполнения запроса, что ускорило SELECT на запросах с параметрами на 10%, на операциях записи — до 30%.
* Добавлено [автоматическое конфигурирование](deploy/configuration/config.md#autoconfig) пулов акторной системы в зависимости от их нагруженности. Это повышает производительность за счет более эффективного совместного использования ресурсов ЦПУ.
-* Оптимизирована логика применения предикатов — выполнение ограничений с использованием OR и IN с параметрами автоматический переносится на сторону DataShard.
+* Оптимизирована логика применения предикатов — выполнение ограничений с использованием OR и IN с параметрами автоматически переносится на сторону DataShard.
* Для сканирующих запросов реализована возможность эффективного поиска отдельных строк с использованием первичного ключа или вторичных индексов, что позволяет во многих случаях значительно улучшить производительность. Как и в обычных запросах, для использования вторичного индекса необходимо явно указать его имя в тексте запроса с использованием ключевого слова `VIEW`.
* Реализовано кеширование графа вычисления при выполнении запросов, что уменьшает потребление ЦПУ при его построении.
diff --git a/ydb/docs/ru/core/reference/ydb-cli/toc_i.yaml b/ydb/docs/ru/core/reference/ydb-cli/toc_i.yaml
index 14578eb41cd..416cdeaf2b1 100644
--- a/ydb/docs/ru/core/reference/ydb-cli/toc_i.yaml
+++ b/ydb/docs/ru/core/reference/ydb-cli/toc_i.yaml
@@ -115,5 +115,5 @@ items:
href: workload-click-bench.md
- name: Key-Value нагрузка
href: workload-kv.md
- # - name: Topic нагрузка
- # href: workload-topic.md
+ - name: Topic нагрузка
+ href: workload-topic.md
diff --git a/ydb/docs/ru/core/reference/ydb-cli/workload-topic.md b/ydb/docs/ru/core/reference/ydb-cli/workload-topic.md
index 373dedb8f4e..c0172cad619 100644
--- a/ydb/docs/ru/core/reference/ydb-cli/workload-topic.md
+++ b/ydb/docs/ru/core/reference/ydb-cli/workload-topic.md
@@ -67,8 +67,8 @@
`--print-timestamp` | Печатать время вместе со статистикой каждого временного окна.
`--threads`, `-t` | Количество потоков писателя.<br>Значение по умолчанию: `1`.
`--message-size`, `-m` | Размер сообщения в байтах. Возможно задание в КБ, МБ, ГБ путем добавления суффиксов `K`, `M`, `G` соответственно.<br>Значение по умолчанию: `10K`.
-`--message-rate` | Целевая суммарная скорость записи, сообщений в секунду. Исключает использование параметра `--message-rate`.<br>Значение по умолчанию: `0` (нет ограничения).
-`--byte-rate` | Целевая суммарная скорость записи, байт в секунду. Исключает использование параметра `--byte-rate`. Возможно задание в КБ/с, МБ/с, ГБ/с путем добавления суффиксов `K`,`M`,`G` соответственно.<br>Значение по умолчанию: `0` (нет ограничения).
+`--message-rate` | Целевая суммарная скорость записи, сообщений в секунду. Исключает использование параметра `--byte-rate`.<br>Значение по умолчанию: `0` (нет ограничения).
+`--byte-rate` | Целевая суммарная скорость записи, байт в секунду. Исключает использование параметра `--message-rate`. Возможно задание в КБ/с, МБ/с, ГБ/с путем добавления суффиксов `K`,`M`,`G` соответственно.<br>Значение по умолчанию: `0` (нет ограничения).
`--codec` | Кодек, используемый для сжатия сообщений на клиенте перед отправкой на сервер.<br>Сжатие увеличивает затраты CPU на клиенте при записи и чтении сообщений, но обычно позволяет уменьшить объем передаваемых по сети и хранимых данных. При последующем чтении сообщений подписчиками они автоматически разжимаются использованным при записи кодеком, не требуя указания каких-либо параметров.<br>Возможные значения: `RAW` - без сжатия (по умолчанию), `GZIP`, `ZSTD`.
>Чтобы записать в `100` потоков писателей с целевой скоростью `80` МБ/с в течение `10` секунд, выполните следующую команду:
@@ -77,7 +77,7 @@
>{{ ydb-cli }} --profile quickstart workload topic run write --threads 100 --byte-rate 80M
>```
>
->В процессе работы будет выводиться статистика по промежуточные временным окнам, а по окончании теста — итоговая статистика за все время работы:
+>В процессе работы будет выводиться статистика по промежуточным временным окнам, а по окончании теста — итоговая статистика за все время работы:
>
>```text
>Window Write speed Write time Inflight
@@ -138,7 +138,7 @@
>{{ ydb-cli }} --profile quickstart workload topic run read --consumers 2 --threads 100
>```
>
->В процессе работы будет выводиться статистика по промежуточные временным окнам, а по окончании теста — итоговая статистика за все время работы:
+>В процессе работы будет выводиться статистика по промежуточным временным окнам, а по окончании теста — итоговая статистика за все время работы:
>
>```text
>Window Lag Lag time Read speed Full time
@@ -194,18 +194,18 @@
`--producer-threads`, `-p` | Количество потоков писателя.<br>Значение по умолчанию: `1`.
`--message-size`, `-m` | Размер сообщения в байтах. Возможно задание в КБ, МБ, ГБ путем добавления суффиксов `K`, `M`, `G` соответственно.<br>Значение по умолчанию: `10K`.
`--message-rate` | Целевая суммарная скорость записи, сообщений в секунду. Исключает использование параметра `--byte-rate`.<br>Значение по умолчанию: `0` (нет ограничения).
-`--byte-rate` | Целевая суммарная скорость записи, байт в секунду. Исключает использование параметра `--byte-rate`. Возможно задание в КБ/с, МБ/с, ГБ/с путем добавления суффиксов `K`,`M`,`G` соответственно:<br>Значение по умолчанию: `0` (нет ограничения).
+`--byte-rate` | Целевая суммарная скорость записи, байт в секунду. Исключает использование параметра `--message-rate`. Возможно задание в КБ/с, МБ/с, ГБ/с путем добавления суффиксов `K`,`M`,`G` соответственно.<br>Значение по умолчанию: `0` (нет ограничения).
`--codec` | Кодек, используемый для сжатия сообщений на клиенте перед отправкой на сервер.<br>Сжатие увеличивает затраты CPU на клиенте при записи и чтении сообщений, но обычно позволяет уменьшить объем передаваемых по сети и хранимых данных. При последующем чтении сообщений подписчиками они автоматически разжимаются использованным при записи кодеком, не требуя указания каких-либо параметров.<br>Возможные значения: `RAW` - без сжатия (по умолчанию), `GZIP`, `ZSTD`.
`--consumers`, `-c` | Количество читателей.<br>Значение по умолчанию: `1`.
`--threads`, `-t` | Количество потоков читателя.<br>Значение по умолчанию: `1`.
->Пример команды чтения с помощью `2` читателей в `50` потоков и записи `100` писателей с целевой скоростью `80` МБ/с и длительностью `10` секунд:
+>Пример команды чтения с помощью `2` читателей в `50` потоков и записи `100` потоков писателей с целевой скоростью `80` МБ/с и длительностью `10` секунд:
>
>```bash
>{{ ydb-cli }} --profile quickstart workload topic run full --producer-threads 100 --consumers 2 --consumer-threads 50 --byte-rate 80M
>```
>
->В процессе работы будет выводиться статистика по промежуточные временным окнам, а по окончании теста — итоговая статистика за все время работы:
+>В процессе работы будет выводиться статистика по промежуточным временным окнам, а по окончании теста — итоговая статистика за все время работы:
>
>```text
>Window Write speed Write time Inflight Lag Lag time Read speed Full time