| author | alextarazanov <alextarazanov@yandex-team.com> | 2023-05-10 09:23:00 +0300 |
| --- | --- | --- |
| committer | alextarazanov <alextarazanov@yandex-team.com> | 2023-05-10 09:23:00 +0300 |
| commit | 2ca5ef9e572873a85f16f77864931a2d9c11f9df (patch) | |
| tree | 95ae6dfb7e41b63d13cf325d1ac2496125ea9132 | |
| parent | b1aa832a023d9f11abb3e6a70aba278f2848283f (diff) | |
| download | ydb-2ca5ef9e572873a85f16f77864931a2d9c11f9df.tar.gz | |
Translations
47 files changed, 928 insertions, 443 deletions
diff --git a/ydb/docs/en/core/_includes/create-tables.md b/ydb/docs/en/core/_includes/create-tables.md new file mode 100644 index 0000000000..65b2fac3a8 --- /dev/null +++ b/ydb/docs/en/core/_includes/create-tables.md @@ -0,0 +1,27 @@ +```yql +CREATE TABLE series ( + series_id Uint64 NOT NULL, + title Utf8, + series_info Utf8, + release_date Date, + PRIMARY KEY (series_id) +); + +CREATE TABLE seasons ( + series_id Uint64, + season_id Uint64, + title Utf8, + first_aired Date, + last_aired Date, + PRIMARY KEY (series_id, season_id) +); + +CREATE TABLE episodes ( + series_id Uint64, + season_id Uint64, + episode_id Uint64, + title Utf8, + air_date Date, + PRIMARY KEY (series_id, season_id, episode_id) +); +``` diff --git a/ydb/docs/en/core/_includes/delete.md b/ydb/docs/en/core/_includes/delete.md new file mode 100644 index 0000000000..a9867ecd41 --- /dev/null +++ b/ydb/docs/en/core/_includes/delete.md @@ -0,0 +1,9 @@ +```yql +DELETE +FROM episodes +WHERE + series_id = 2 + AND season_id = 1 + AND episode_id = 2 +; +``` diff --git a/ydb/docs/en/core/_includes/select.md b/ydb/docs/en/core/_includes/select.md new file mode 100644 index 0000000000..174a96cf0a --- /dev/null +++ b/ydb/docs/en/core/_includes/select.md @@ -0,0 +1,7 @@ +```yql +SELECT + series_id, + title AS series_title, + release_date +FROM series; +``` diff --git a/ydb/docs/en/core/_includes/upsert.md b/ydb/docs/en/core/_includes/upsert.md new file mode 100644 index 0000000000..0eefe96c6a --- /dev/null +++ b/ydb/docs/en/core/_includes/upsert.md @@ -0,0 +1,32 @@ +```yql +UPSERT INTO series (series_id, title, release_date, series_info) +VALUES + ( + 1, + "IT Crowd", + Date("2006-02-03"), + "The IT Crowd is a British sitcom produced by Channel 4, written by Graham Linehan, produced by Ash Atalla and starring Chris O'Dowd, Richard Ayoade, Katherine Parkinson, and Matt Berry."), + ( + 2, + "Silicon Valley", + Date("2014-04-06"), + "Silicon Valley is an American comedy television series created by Mike Judge, John Altschuler and Dave Krinsky. The series focuses on five young men who founded a startup company in Silicon Valley." + ) + ; + +UPSERT INTO seasons (series_id, season_id, title, first_aired, last_aired) +VALUES + (1, 1, "Season 1", Date("2006-02-03"), Date("2006-03-03")), + (1, 2, "Season 2", Date("2007-08-24"), Date("2007-09-28")), + (2, 1, "Season 1", Date("2014-04-06"), Date("2014-06-01")), + (2, 2, "Season 2", Date("2015-04-12"), Date("2015-06-14")) +; + +UPSERT INTO episodes (series_id, season_id, episode_id, title, air_date) +VALUES + (1, 1, 1, "Yesterday's Jam", Date("2006-02-03")), + (1, 1, 2, "Calamity Jen", Date("2006-02-03")), + (2, 1, 1, "Minimum Viable Product", Date("2014-04-06")), + (2, 1, 2, "The Cap Table", Date("2014-04-13")) +; +``` diff --git a/ydb/docs/en/core/administration/quickstart.md b/ydb/docs/en/core/administration/quickstart.md new file mode 100644 index 0000000000..d98f527308 --- /dev/null +++ b/ydb/docs/en/core/administration/quickstart.md @@ -0,0 +1,346 @@ +# Getting started + +In this guide, you will [deploy](#install) a single-node local [{{ ydb-short-name }} cluster](../concepts/databases.md#cluster) and [execute](#queries) simple queries against your [database](../concepts/databases.md#database). + +## Deploy a {{ ydb-short-name }} cluster {#install} + +To deploy your {{ ydb-short-name }} cluster, use an archive with an executable or a Docker image. + +{% list tabs %} + +- Bin + + {% note info %} + + Currently, only a Linux build is supported. We'll add builds for Windows and macOS later. 
+ + {% endnote %} + + 1. Create a working directory and change to it: + + ```bash + mkdir ~/ydbd && cd ~/ydbd + ``` + + 1. Download and run the installation script: + + ```bash + curl https://binaries.ydb.tech/local_scripts/install.sh | bash + ``` + + This will download and unpack the archive including the `idbd` executable, libraries, configuration files, and scripts needed to start and stop the cluster. + + 1. Start the cluster in one of the following storage modes: + + * In-memory data: + + ```bash + ./start.sh ram + ``` + + When data is stored in-memory, it is lost when the cluster is stopped. + + * Data on disk: + + ```bash + ./start.sh disk + ``` + + The first time you run the script, an 80GB `ydb.data` file will be created in the working directory. Make sure there's enough disk space to create it. + + Result: + + ```text + Starting storage process... + Initializing storage ... + Registering database ... + Starting database process... + + Database started. Connection options for YDB CLI: + + -e grpc://localhost:2136 -d /Root/test + ``` + +- Docker + + 1. Pull the current version of the Docker image: + + ```bash + docker pull {{ ydb_local_docker_image }}:{{ ydb_local_docker_image_tag }} + ``` + + Make sure that the pull operation was successful: + + ```bash + docker image list | grep {{ ydb_local_docker_image }} + ``` + + Result: + + ```text + cr.yandex/yc/yandex-docker-local-ydb latest c37f967f80d8 6 weeks ago 978MB + ``` + + 1. Run the Docker container: + + ```bash + docker run -d --rm --name ydb-local -h localhost \ + -p 2135:2135 -p 2136:2136 -p 8765:8765 \ + -v $(pwd)/ydb_certs:/ydb_certs -v $(pwd)/ydb_data:/ydb_data \ + -e YDB_DEFAULT_LOG_LEVEL=NOTICE \ + -e GRPC_TLS_PORT=2135 -e GRPC_PORT=2136 -e MON_PORT=8765 \ + {{ ydb_local_docker_image}}:{{ ydb_local_docker_image_tag }} + ``` + + If the container starts successfully, you'll see the container's ID. The container might take a few minutes to initialize. The database will not be available until the initialization completes. + +{% endlist %} + +## Connect to the DB {#connect} + +To connect to the YDB database, use the cluster's Embedded UI or the [{{ ydb-short-name }} CLI command-line interface](../reference/ydb-cli/index.md). + +{% list tabs %} + +- YDB UI + + 1. In your browser, open the page: + + ```http + http://localhost:8765 + ``` + + 1. Under **Database list**, select the database: + + * `/Root/test`: If you used an executable to deploy your cluster. + * `/local`: If you deployed your cluster from a Docker image. + +- YDB CLI + + 1. Install the {{ ydb-short-name }} CLI: + + * For Linux or macOS: + + ```bash + curl -sSL https://storage.yandexcloud.net/yandexcloud-ydb/install.sh | bash + ``` + + {% note info %} + + The script will update the `PATH` variable only if you run it in the bash or zsh command shell. If you run the script in a different shell, add the CLI path to the `PATH` variable yourself. + + {% endnote %} + + * For Windows: + + ```cmd + @"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -Command "iex ((New-Object System.Net.WebClient).DownloadString('https://storage.yandexcloud.net/yandexcloud-ydb/install.ps1'))" + ``` + + Specify whether to add the executable file path to the `PATH` environment variable: + + ```text + Add ydb installation dir to your PATH? [Y/n] + ``` + + {% note info %} + + Some {{ ydb-short-name }} CLI commands may use Unicode characters in their results. 
If these characters aren't displayed correctly in the Windows console, switch the encoding to UTF-8: + + ```cmd + chcp 65001 + ``` + + {% endnote %} + + To update the environment variables, restart the command shell session. + + 1. Save the DB connection parameters in the [{{ ydb-short-name }} CLI profile](../reference/ydb-cli/profile/index.md): + + ```bash + ydb config profile create quickstart --endpoint grpc://localhost:2136 --database <path_database> + ``` + + * `path_database`: Database path. Specify one of these values: + + * `/Root/test`: If you used an executable to deploy your cluster. + * `/local`: If you deployed your cluster from a Docker image. + + 1. Check your database connection: + + ```bash + ydb --profile quickstart scheme ls + ``` + + Result: + + ```text + .sys_health .sys + ``` + +{% endlist %} + +## Run queries against the database {#queries} + +Use the cluster's YDB Embedded UI or the [{{ ydb-short-name }} CLI](../reference/ydb-cli/index.md) to execute queries against the database. + +{% list tabs %} + +- YDB UI + + 1. Create tables in the database: + + Enter the query text under **Query**: + + {% include [create-tables](../_includes/create-tables.md) %} + + Click **Run Script**. + + 1. Populate the resulting tables with data: + + Enter the query text under **Query**: + + {% include [upsert](../_includes/upsert.md) %} + + Click **Run Script**. + + 1. Select data from the `series` table: + + Enter the query text under **Query**: + + {% include [upsert](../_includes/select.md) %} + + Click **Run Script**. + + You'll see the query result below: + + ```text + series_id series_title release_date + 1 IT Crowd 13182 + 2 Silicon Valley 16166 + ``` + + 1. Delete data from the `episodes` table. + + Enter the query text under **Query**: + + {% include [upsert](../_includes/delete.md) %} + + Click **Run Script**. + +- YDB CLI + + 1. Create tables in the database: + + Write the query text to the `create-table.sql` file: + + {% include [create-tables](../_includes/create-tables.md) %} + + Run the following query: + + ```bash + ydb --profile quickstart yql --file create-table.sql + ``` + + 1. Populate the resulting tables with data: + + Write the query text to the `upsert.sql` file: + + {% include [upsert](../_includes/upsert.md) %} + + Run the following query: + + ```bash + ydb --profile quickstart yql --file upsert.sql + ``` + + 1. Select data from the `series` table: + + Write the query text to the `select.sql` file: + + {% include [select](../_includes/select.md) %} + + Run the following query: + + ```bash + ydb --profile quickstart yql --file select.sql + ``` + + Result: + + ```text + ┌───────────┬──────────────────┬──────────────┐ + | series_id | series_title | release_date | + ├───────────┼──────────────────┼──────────────┤ + | 1 | "IT Crowd" | "2006-02-03" | + ├───────────┼──────────────────┼──────────────┤ + | 2 | "Silicon Valley" | "2014-04-06" | + └───────────┴──────────────────┴──────────────┘ + ``` + + 1. Delete data from the `episodes` table. 
+ + Write the query text to the `delete.sql` file: + + {% include [delete](../_includes/delete.md) %} + + Run the following query: + + ```bash + ydb --profile quickstart yql --file delete.sql + ``` + + View the `episodes` table: + + ```bash + ydb --profile quickstart yql --script "SELECT * FROM episodes;" + ``` + + Result: + + ```text + ┌──────────────┬────────────┬───────────┬───────────┬──────────────────────────┐ + | air_date | episode_id | season_id | series_id | title | + ├──────────────┼────────────┼───────────┼───────────┼──────────────────────────┤ + | "2006-02-03" | 1 | 1 | 1 | "Yesterday's Jam" | + ├──────────────┼────────────┼───────────┼───────────┼──────────────────────────┤ + | "2006-02-03" | 2 | 1 | 1 | "Calamity Jen" | + ├──────────────┼────────────┼───────────┼───────────┼──────────────────────────┤ + | "2014-04-06" | 1 | 1 | 2 | "Minimum Viable Product" | + └──────────────┴────────────┴───────────┴───────────┴──────────────────────────┘ + ``` + + You've deleted The Cap Table series row from the table. + +{% endlist %} + +## Stop the cluster {#stop} + +Stop the {{ ydb-short-name }} cluster when done: + +{% list tabs %} + +- Bin + + To stop your cluster, change to the `~/ydbd` directory, then run this command: + + ```bash + ./stop.sh + ``` + +- Docker + + To stop the Docker container with the cluster, run this command: + + ```bash + docker kill ydb-local + ``` + +{% endlist %} + +## What's next {#advanced} + +* Read about [{{ ydb-short-name }} concepts](../concepts/index.md). +* Learn more about these and other methods of [{{ ydb-short-name }} deployment](../deploy/index.md). +* Find out how to access your {{ ydb-short-name }} databases over the [SDK](../reference/ydb-sdk/index.md). +* Learn more about the [YQL](../yql/reference/index.md) query language. diff --git a/ydb/docs/en/core/deploy/toc_i.yaml b/ydb/docs/en/core/deploy/toc_i.yaml index 2cc76d971d..b10ed782e1 100644 --- a/ydb/docs/en/core/deploy/toc_i.yaml +++ b/ydb/docs/en/core/deploy/toc_i.yaml @@ -3,6 +3,8 @@ items: include: { mode: link, path: orchestrated/toc_p.yaml } - name: VM / Baremetal href: manual/deploy-ydb-on-premises.md +- name: Deploying a single-node cluster + include: { mode: link, path: ../getting_started/self_hosted/toc_p.yaml } - name: Configuration href: configuration/config.md - name: BlobStorage production configurations diff --git a/ydb/docs/en/core/getting_started/_includes/cli.md b/ydb/docs/en/core/getting_started/_includes/cli.md index caa11df783..4d5bcf2ec7 100644 --- a/ydb/docs/en/core/getting_started/_includes/cli.md +++ b/ydb/docs/en/core/getting_started/_includes/cli.md @@ -2,7 +2,7 @@ ## Prerequisites {#prerequisites} -To run commands via the CLI, you will need database connection settings you can get when [creating](../create_db.md) a database: +To run commands via the CLI, you will need database connection settings you can retrieve when [creating](../create_db.md) a connection: * [Endpoint](../../concepts/connect.md#endpoint) * [Database name](../../concepts/connect.md#database) @@ -34,17 +34,17 @@ ydb ... ``` -All the features of the {{ ydb-short-name }} CLI built-in help are described in [Built-in help](../../reference/ydb-cli/commands/service.md#help) of the {{ ydb-short-name }} CLI reference. +All the features of the {{ ydb-short-name }} built-in help are described in [Built-in help](../../reference/ydb-cli/commands/service.md#help) of the {{ ydb-short-name }} CLI reference. 
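For instance, the built-in help can be called either for the {{ ydb-short-name }} CLI as a whole or for a single command by adding the `--help` option. A minimal sketch, assuming the CLI is already installed and available on your `PATH` as `{{ ydb-cli }}`:

```bash
# Top-level help: global options and the list of available commands
{{ ydb-cli }} --help

# Help for one specific command, for example the schema listing command
{{ ydb-cli }} scheme ls --help
```
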
## Check the connection {#ping} {#scheme-ls} -To test connection, you can use the command for [listing objects](../../reference/ydb-cli/commands/scheme-ls.md) in the database, `scheme ls`: +To check the connection, use the [object list get](../../reference/ydb-cli/commands/scheme-ls.md) command in the `scheme ls` database: ```bash {{ ydb-cli }} -e <endpoint> -d <database> scheme ls ``` -If the command is successful, a list of objects in the database is shown in response. If you haven't created anything in the database yet, the output will only contain the `.sys` and `.sys_health` system directories with [diagnostic views of YDB](../../troubleshooting/system_views_db.md). +If the command is successful, a list of objects in the database is shown in response. If you haven't created anything in the database yet, the output will only contain the `.sys` and `.sys_health` system directories with [diagnostic representations of YDB](../../troubleshooting/system_views_db.md). {% include [cli/ls_examples.md](cli/ls_examples.md) %} @@ -52,53 +52,53 @@ If the command is successful, a list of objects in the database is shown in resp To avoid specifying connection parameters every time you call the YDB CLI, use the [profile](../../reference/ydb-cli/profile/index.md). Creating the profile described below will also let you copy subsequent commands through the clipboard without editing them regardless of which database you're using to complete the "Getting started" scenario. -[Create the profile](../../reference/ydb-cli/profile/create.md) `db1` using the following command: +[Create](../../reference/ydb-cli/profile/create.md) the `quickstart` profile using the following command: ```bash -{{ ydb-cli }} config profile create db1 -e <endpoint> -d <database> +{{ ydb-cli }} config profile create quickstart -e <endpoint> -d <database> ``` Use the values checked at the [previous step](#ping) as parameters. 
For example, to create a connection profile to a local YDB database created using the self-hosted deployment scenario [in Docker](../self_hosted/ydb_docker.md), run the following command: ```bash -{{ ydb-cli }} config profile create db1 -e grpc://localhost:2136 -d /local +{{ ydb-cli }} config profile create quickstart -e grpc://localhost:2136 -d /local ``` Check that the profile is OK with the `scheme ls` command: ```bash -{{ ydb-cli }} -p db1 scheme ls +{{ ydb-cli }} -p quickstart scheme ls ``` ## Executing an YQL script {#yql} -The {{ ydb-short-name }} CLI `yql` command lets you execute any command (both DDL and DML) in [YQL](../../yql/reference/index.md), an SQL dialect supported by {{ ydb-short-name }}: +The {{ ydb-short-name }} CLI `yql` command lets you execute any command (both DDL and DML) in [YQL](../../yql/reference/index.md), a SQL dialect supported by {{ ydb-short-name }}: ```bash {{ ydb-cli }} -p <profile_name> yql -s <yql_request> ``` -For example: +e.g.: * Creating a table: - ```bash - {{ ydb-cli }} -p db1 yql -s "create table t1( id uint64, primary key(id))" - ``` + ```bash + {{ ydb-cli }} -p quickstart yql -s "create table t1( id uint64, primary key(id))" + ``` * Adding a record: - ```bash - {{ ydb-cli }} -p db1 yql -s "insert into t1(id) values (1)" - ``` + ```bash + {{ ydb-cli }} -p quickstart yql -s "insert into t1(id) values (1)" + ``` * Data selects: - ```bash - {{ ydb-cli }} -p db1 yql -s "select * from t1" - ``` + ```bash + {{ ydb-cli }} -p quickstart yql -s "select * from t1" + ``` -If you get the `Profile db1 does not exist` error, that means you neglected to create a profile in the [previous step](#profile). +If you get the `Profile quickstart does not exist` error, this means that you failed to create a profile during the [previous step](#profile). ## Specialized CLI commands {#ydb-api} @@ -106,6 +106,6 @@ Executing commands via `ydb yql` is a nice and easy way to get started. However, The YDB CLI supports individual commands with complete sets of options for any existing YDB API. For a full list of commands, see the [YDB CLI reference](../../reference/ydb-cli/index.md). -## Next step {#next} +## Learn more about YDB {#next} -Go to [YQL - Getting started](../yql.md).
\ No newline at end of file +Proceed to the [YQL - Getting started](../yql.md) article to learn more about YDB. diff --git a/ydb/docs/en/core/getting_started/_includes/yql.md b/ydb/docs/en/core/getting_started/_includes/yql.md index c2c39bcd84..299a712f20 100644 --- a/ydb/docs/en/core/getting_started/_includes/yql.md +++ b/ydb/docs/en/core/getting_started/_includes/yql.md @@ -18,22 +18,23 @@ In {{ ydb-short-name }}, you can make YQL queries to a database using: * [{{ ydb-short-name }} CLI](#cli) -* [{{ ydb-short-name }} SDK](../sdk.md) +* [{{ ydb-short-name }} SDK](../../reference/ydb-sdk/index.md) {% include [yql/ui_execute.md](yql/ui_execute.md) %} ### {{ ydb-short-name }} CLI {#cli} -To enable script execution using the {{ ydb-short-name }} CLI, do the following: +To execute scripts using the {{ ydb-short-name }} CLI, first do the following: -* [Install the CLI](../cli.md#install). -* Define and check [DB connection parameters](../cli#scheme-ls). -* [Create a `db1` profile](../cli.md#profile) configured to connect to your database. +1. [Install the {{ ydb-short-name }} CLI](../../reference/ydb-cli/install.md). +1. [Create a profile](../../reference/ydb-cli/profile/create.md) configured to connect to your database. -Save the text of the scripts below to a file. Name it `script.yql` to be able to run the statements given in the examples by simply copying them through the clipboard. Next, run `{{ ydb-cli }} yql` indicating the use of the `db1` profile and reading the script from the `script.yql` file: +{% include [ydb-cli-profile.md](../../_includes/ydb-cli-profile.md) %} + +Save the text of the scripts below to a file. Name it `script.yql` to be able to run the statements given in the examples by simply copying them through the clipboard. Next, run the `{{ ydb-cli }} yql` command with the `quickstart` profile, reading the script from the `script.yql` file: ```bash -{{ ydb-cli }} --profile db1 yql -f script.yql +{{ ydb-cli }} --profile quickstart yql -f script.yql ``` ## Working with a data schema {#ddl} @@ -82,16 +83,46 @@ For a description of everything you can do when working with tables, review the To execute the script via the {{ ydb-short-name }} CLI, follow the instructions given under [Executing YQL queries in the {{ ydb-short-name }} CLI](#cli) in this article. +### Creating a column-oriented table {#create-olap-table} + +A table with the specified columns is created [using the YQL `CREATE TABLE` command](../../yql/reference/syntax/create_table.md). Make sure the primary key and partitioning key are defined in the table. The data types that are acceptable in analytical tables are specified in [Supported data types in column-oriented tables](../../concepts/datamodel/table.md#olap-data-types). + +Make sure to use the `NOT NULL` constraint when defining the primary key columns. The other columns are optional by default and may contain `NULL`. {{ ydb-short-name }} does not support `FOREIGN KEY` limits. 
+ +To build a series directory, create a table named `views` by running the following script: + +```sql +CREATE TABLE views ( + series_id Uint64 NOT NULL, + season_id Uint64, + viewed_at Timestamp NOT NULL, + person_id Uint64 NOT NULL, + PRIMARY KEY (viewed_at, series_id, person_id) +) +PARTITION BY HASH(viewed_at, series_id) +WITH ( + STORE = COLUMN, + AUTO_PARTITIONING_MIN_PARTITIONS_COUNT = 10 +) +``` + +For a description of everything you can do when working with tables, review the relevant sections of the YQL documentation: + +* [CREATE TABLE](../../yql/reference/syntax/create_table.md): Create a table and define its initial parameters. +* [DROP TABLE](../../yql/reference/syntax/drop_table.md): Delete a table. + +To execute the script via the {{ ydb-short-name }} CLI, follow the instructions given under [Executing YQL queries in the {{ ydb-short-name }} CLI](#cli) in this article. + ### Getting a list of existing DB tables {#scheme-ls} Check that the tables are actually created in the database. {% include [yql/ui_scheme_ls.md](yql/ui_scheme_ls.md) %} -To get a list of existing DB tables via the {{ ydb-short-name }} CLI, make sure that the prerequisites under [Executing YQL scripts in the {{ ydb-short-name }} CLI](#cli) are complete and run the [`scheme ls` command](../cli.md#ping): +To get a list of existing DB tables via the {{ ydb-short-name }} CLI, make sure you have met the prerequisites under [Executing YQL scripts in the {{ ydb-short-name }} CLI](#cli), then run the `scheme ls` command: ```bash -{{ ydb-cli }} --profile db1 scheme ls +{{ ydb-cli }} --profile quickstart scheme ls ``` ## Operations with data {#dml} @@ -100,9 +131,9 @@ Commands for running YQL queries and scripts in the YDB CLI and the web interfac ### UPSERT: Adding data {#upsert} -The most efficient way to add data to {{ ydb-short-name }} is through the [`UPSERT`](../../yql/reference/syntax/upsert_into.md) statement. It inserts new data by primary keys regardless of whether data by these keys previously existed in the table. As a result, unlike regular `INSERT` and `UPDATE`, it does not require a data pre-fetch on the server to verify that a key is unique. When working with {{ ydb-short-name }}, always consider `UPSERT` as the main way to add data and only use other statements when absolutely necessary. +The most efficient way to add data to {{ ydb-short-name }} is through the [`UPSERT` command](../../yql/reference/syntax/upsert_into.md). It inserts new data by primary keys regardless of whether data by these keys previously existed in the table. As a result, unlike regular `INSERT`and `UPDATE`, it does not require a data pre-fetch from the server to verify that a key is unique before it runs. When working with {{ ydb-short-name }}, always consider `UPSERT` as the main way to add data and only use other statements when absolutely necessary. -All statements that write data to {{ ydb-short-name }} support working with both subqueries and multiple entries passed directly in a query. +All commands that write data to {{ ydb-short-name }} support working with both samples and multiple logs passed directly in a query. Let's add data to the previously created tables: @@ -178,7 +209,7 @@ To learn more about the commands for selecting data, see the YQL reference: ### Parameterized queries {#param} -Transactional applications working with a database are characterized by the execution of multiple similar queries that only differ in parameters. 
Like most databases, {{ ydb-short-name }} will work more efficiently if you define variable parameters and their types and then initiate the execution of a query by passing the parameter values separately from its text. +Transactional applications working with a database are characterized by the execution of multiple similar queries that only differ in parameters. Like most databases, {{ ydb-short-name }} will work more efficiently if you define updateable parameters and their types and then initiate the execution of a query by passing the parameter values separately from its text. To define parameters in the text of a YQL query, use the [DECLARE](../../yql/reference/syntax/declare.md). @@ -201,7 +232,7 @@ WHERE sa.series_id = $seriesId AND sa.season_id = $seasonId; To make a parameterized select query, make sure the prerequisites of the [Executing YQL scripts in the {{ ydb-short-name }} CLI](#cli) section of this article are met, then run: ```bash -{{ ydb-cli }} --profile db1 yql -f script.yql -p '$seriesId=1' -p '$seasonId=1' +{{ ydb-cli }} --profile quickstart yql -f script.yql -p '$seriesId=1' -p '$seasonId=1' ``` For a full description of the ways to pass parameters, see the [{{ ydb-short-name }} CLI reference](../../reference/ydb-cli/index.md). @@ -209,7 +240,3 @@ For a full description of the ways to pass parameters, see the [{{ ydb-short-nam ## YQL tutorial {#tutorial} You can learn more about YQL use cases by completing tasks from the [YQL tutorial](../../yql/tutorial/index.md). - -## Next step {#next} - -Go to [YDB SDK - Getting started](../sdk.md). diff --git a/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_docker/04_request.md b/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_docker/04_request.md index ea135712bd..0c180a4f02 100644 --- a/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_docker/04_request.md +++ b/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_docker/04_request.md @@ -1,6 +1,6 @@ ## Making queries {#request} -Install the YDB CLI and execute queries as described in [YDB CLI - Getting started](../../../cli.md), using the endpoint and database location specified at the beginning of this article. For example: +[Install](../../../../reference/ydb-cli/install.md) the YDB CLI and run a query, for example: ```bash ydb -e grpc://localhost:2136 -d /local scheme ls @@ -12,7 +12,7 @@ To ensure a connection using TLS is successful, add the name of the file with th ydb -e grpcs://localhost:2135 --ca-file ydb_certs/ca.pem -d /local scheme ls ``` -A precompiled version of the [YDB CLI](../../../../reference/ydb-cli/index.md) is also available within the image: +A pre-built [YDB CLI](../../../../reference/ydb-cli/index.md) version is also available within the image: ```bash docker exec <container_id> /ydb -e grpc://localhost:2136 -d /local scheme ls @@ -20,5 +20,4 @@ docker exec <container_id> /ydb -e grpc://localhost:2136 -d /local scheme ls , where -`<container_id>`: The container ID output when you [start](#start) it. - +`<container_id>`: Container ID that is output when you [start](#start) the container.
\ No newline at end of file diff --git a/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_local.md b/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_local.md index b8e1cd619e..2e34f87f54 100644 --- a/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_local.md +++ b/ydb/docs/en/core/getting_started/self_hosted/_includes/ydb_local.md @@ -4,13 +4,13 @@ This section describes how to deploy a local single-node {{ ydb-short-name }} cl ## Connection parameters {#conn} -As a result of completing the steps below, you'll get a YDB database running on a local machine that you can connect to using the following: +As a result of completing the steps described below, you'll get a YDB database running on your local machine, which you can connect to using the following parameters: - [Endpoint](../../../concepts/connect.md#endpoint): `grpc://localhost:2136` - [DB path](../../../concepts/connect.md#database): `/Root/test` - [Authentication](../../../concepts/auth.md): Anonymous (no authentication) -## Installation {#install} +## Installing {#install} Create a working directory. In this directory, run a script to download an archive with the `ydbd` executable file and libraries required for using {{ ydb-short-name }}, as well as a set of scripts and auxiliary files to start and stop a server: @@ -22,7 +22,7 @@ curl https://binaries.ydb.tech/local_scripts/install.sh | bash ## Starting {#start} -The local YDB server can be started in two modes: +You can start a local YDB server with a disk or in-memory storage: {% list tabs %} @@ -64,7 +64,7 @@ To stop the server, run the following command in the working directory: ## Making queries via the YDB CLI {#cli} -[Install the YDB CLI](../../../reference/ydb-cli/install.md) and make queries as described in [YDB CLI - Getting started](../../cli.md). To do this, use the endpoint and DB path specified [in the beginning of this article](#conn). For example: +[Install](../../../reference/ydb-cli/install.md) the YDB CLI and run a query, for example: ```bash ydb -e grpc://localhost:2136 -d /Root/test scheme ls diff --git a/ydb/docs/en/core/index.yaml b/ydb/docs/en/core/index.yaml index f23341e15a..4360315ba2 100644 --- a/ydb/docs/en/core/index.yaml +++ b/ydb/docs/en/core/index.yaml @@ -8,8 +8,8 @@ meta: title: YDB links: - title: Getting started - description: Creating a database, connecting to it, setting up access permissions, and performing basic operations with it - href: getting_started/ + description: Deploy your cluster and perform basic operations with data + href: administration/quickstart - title: Concepts description: How YDB works, its features and available usage modes href: concepts/ @@ -31,6 +31,3 @@ links: - title: Managing a cluster description: Configuring, maintaining, monitoring, and performing diagnostics of YDB clusters href: cluster/ - - title: Useful links - description: Links to various resources related to YDB - href: getting_started/useful_links
\ No newline at end of file diff --git a/ydb/docs/en/core/reference/ydb-cli/_includes/commands.md b/ydb/docs/en/core/reference/ydb-cli/_includes/commands.md index 456c38027d..3fc402a1a6 100644 --- a/ydb/docs/en/core/reference/ydb-cli/_includes/commands.md +++ b/ydb/docs/en/core/reference/ydb-cli/_includes/commands.md @@ -6,7 +6,7 @@ General syntax for calling {{ ydb-short-name }} CLI commands: {{ ydb-cli }} [global options] <command> [<subcommand> ...] [command options] ``` -, where: +where: - `{{ ydb-cli}}` is the command to run the {{ ydb-short-name }}CLI from the OS command line. - `[global options]` are [global options](../commands/global-options.md) that are common for all {{ ydb-short-name }} CLI commands. @@ -35,10 +35,10 @@ Any command can be run from the command line with the `--help` option to get hel | [import file tsv](../export_import/import-file.md) | Importing data from a TSV file | | [import s3](../export_import/s3_import.md) | Importing data from S3 storage | | [init](../profile/create.md) | Initializing the CLI, creating a [profile](../profile/index.md) | -| [operation cancel](../operation-cancel.md) | Aborting long running operations | -| [operation forget](../operation-forget.md) | Deleting long running operations from the list | -| [operation get](../operation-get.md) | Status of long running operations | -| [operation list](../operation-list.md) | List of long running operations | +| [operation cancel](../operation-cancel.md) | Aborting long-running operations | +| [operation forget](../operation-forget.md) | Deleting long-running operations from the list | +| [operation get](../operation-get.md) | Status of long-running operations | +| [operation list](../operation-list.md) | List of long-running operations | | [scheme describe](../commands/scheme-describe.md) | Description of a data schema object | | [scheme ls](../commands/scheme-ls.md) | List of data schema objects | | [scheme mkdir](../commands/dir.md#mkdir) | Creating a directory | @@ -71,10 +71,12 @@ Any command can be run from the command line with the `--help` option to get hel | [topic drop](../topic-drop.md) | Deleting a topic | | [topic consumer add](../topic-consumer-add.md) | Adding a consumer to a topic | | [topic consumer drop](../topic-consumer-drop.md) | Deleting a consumer from a topic | +| [topic consumer offset commit](../topic-consumer-offset-commit.md) | Saving a consumer offset | | [topic read](../topic-read.md) | Reading messages from a topic | | [topic write](../topic-write.md) | Writing messages to a topic | {% if ydb-cli == "ydb" %} [update](../commands/service.md) | Update the {{ ydb-short-name }} CLI [version](../commands/service.md) | Output details about the {{ ydb-short-name }} CLI version {% endif %} -[workload](../commands/workload/index.md) | Generate the yql workload | Execute a YQL script (with streaming support) +[workload](../commands/workload/index.md) | Generate the workload +[yql](../yql.md) | Execute a YQL script (with streaming support) diff --git a/ydb/docs/en/core/reference/ydb-cli/_includes/index.md b/ydb/docs/en/core/reference/ydb-cli/_includes/index.md index ba93a68ba0..9126db280b 100644 --- a/ydb/docs/en/core/reference/ydb-cli/_includes/index.md +++ b/ydb/docs/en/core/reference/ydb-cli/_includes/index.md @@ -2,11 +2,7 @@ The {{ ydb-short-name }} CLI provides software for managing your data in {{ ydb-short-name }}. -To learn how to use the {{ ydb-short-name }} CLI, see [Installing the {{ ydb-short-name }} CLI](../install.md). 
- -When connecting to a database, you need to authenticate. To connect for the first time, you can use the quick recipe from the [Authentication](../../../getting_started/auth.md) section under "Getting started". - -Full information about defining DB connection and authentication parameters is given in the [Connecting to a database and authenticating with the {{ ydb-short-name }} CLI](../connect.md) article in this section. +To use the {{ ydb-short-name }} CLI, first [install](../install.md) it and then set up the [connection and authentication](../connect.md). For a full description of {{ ydb-short-name }} CLI commands, see the following articles of this section: diff --git a/ydb/docs/en/core/reference/ydb-cli/commands/_includes/dir.md b/ydb/docs/en/core/reference/ydb-cli/commands/_includes/dir.md index 78bd08ae24..c783b2cb21 100644 --- a/ydb/docs/en/core/reference/ydb-cli/commands/_includes/dir.md +++ b/ydb/docs/en/core/reference/ydb-cli/commands/_includes/dir.md @@ -32,13 +32,13 @@ Examples: - Creating a directory at the database root ```bash - {{ ydb-cli }} --profile db1 scheme mkdir dir1 + {{ ydb-cli }} --profile quickstart scheme mkdir dir1 ``` - Creating directories at the specified path from the database root ```bash - {{ ydb-cli }} --profile db1 scheme mkdir dir1/dir2/dir3 + {{ ydb-cli }} --profile quickstart scheme mkdir dir1/dir2/dir3 ``` ## Deleting a directory {#rmdir} @@ -116,13 +116,13 @@ EPathTypeDir, path state: EPathStateNoChanges, alive children: <count> In all CLI commands to which the object name is passed by the parameter, it can be specified with a directory, for example, in [`scheme describe`](../scheme-describe.md): ```bash -{{ ydb-cli }} --profile db1 scheme describe dir1/table_a +{{ ydb-cli }} --profile quickstart scheme describe dir1/table_a ``` The [`scheme ls`](../scheme-ls.md) command supports passing the path to the directory as a parameter: ```bash -{{ ydb-cli }} --profile db1 scheme ls dir1/dir2 +{{ ydb-cli }} --profile quickstart scheme ls dir1/dir2 ``` ## Using directories in YQL {#yql} diff --git a/ydb/docs/en/core/reference/ydb-cli/commands/_includes/secondary_index.md b/ydb/docs/en/core/reference/ydb-cli/commands/_includes/secondary_index.md index 91e64899c2..809be3a496 100644 --- a/ydb/docs/en/core/reference/ydb-cli/commands/_includes/secondary_index.md +++ b/ydb/docs/en/core/reference/ydb-cli/commands/_includes/secondary_index.md @@ -43,14 +43,14 @@ To retrieve the status of all index-building operations, use `operation list bui {% include [ydb-cli-profile.md](../../../../_includes/ydb-cli-profile.md) %} -Adding a synchronous index built on the `air_date` column to the `episodes` table [created previously](../../../../getting_started/yql.md): +Adding a synchronous index built on the `air_date` column to the `episodes` table [created previously]({{ quickstart-path }}): ```bash {{ ydb-cli }} -p quickstart table index add global-sync episodes \ --index-name idx_aired --columns air_date ``` -Adding to the [previously created](../../../../getting_started/yql.md) `series` table an asynchronous index built on the `release_date` and `title` columns, copying to the index the `series_info` column value: +Adding to the [previously created]({{ quickstart-path }}) `series` table an asynchronous index built on the `release_date` and `title` columns, copying to the index the `series_info` column value: ```bash {{ ydb-cli }} -p quickstart table index add global-async series \ diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/file_structure.md 
b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/file_structure.md index f240be1f78..36ff17ec65 100644 --- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/file_structure.md +++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/file_structure.md @@ -1,21 +1,21 @@ -# File structure of data export +# File structure of an export -The file structure described below is used for exporting data both to the file system and S3-compatible object storage. When using S3, file path is written to the object key, while export directory is a key prefix. +The file structure outlined below is used to export data both to the file system and an S3-compatible object storage. When working with S3, the file path is added to the object key, and the key's prefix specifies the export directory. ## Directories {#dir} -Each directory in a database corresponds to a directory in the file structure. The directory hierarchy in the file structure corresponds the directory hierarchy in the database. If some DB directory contains no objects (neither tables nor subdirectories), this directory's file structure contains a single zero size file named `empty_dir`. +Each database directory has a counterpart directory in the file structure. The directory hierarchy in the file structure matches the directory hierarchy in the database. If a certain database directory includes no items (neither tables nor subdirectories), the first structure of such a directory includes one file of zero size named `empty_dir`. ## Tables {#tables} -Each DB table also has a corresponding same-name directory in the file structure directory hierarchy, which contains: +For each table in the database, there's a same-name directory in the file structure's directory hierarchy that includes: -- The `scheme.pb` file with information about the table structure and its parameters in [text protobuf](https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.text_format) format. -- One or more `data_XX.csv` files with data in `CSV` format, where `XX` is the file sequence number. Data export starts with the `data_00.csv` file. Each subsequent file is created once the size of the current file exceeds 100 MB. +- The `scheme.pb` file describing the table structure and parameters in the [text protobuf](https://developers.google.com/protocol-buffers/docs/reference/cpp/google.protobuf.text_format) format +- One or more `data_XX.csv` files with the table data in `csv` format, where `XX` is the file's sequence number. The export starts with the `data_00.csv` file, with a next file created whenever the current file exceeds 100 MB. -## Data files {#datafiles} +## Files with data {#datafiles} -Data is stored in `.csv` files, one file line per table entry, without a row with column headers. URL-encoded format is used for string representation. For example, a file line for a table with uint64 and utf8 columns containing the number 1 and the string "Привет" ("Hello" in Russian), respectively, looks like this: +The format of data files is `.csv`, where each row corresponds to a record in the table (except the row with column headings). The urlencoded format is used for rows. 
For example, the file row for the table with the uint64 and utf8 columns that includes the number 1 and the Russian string "Привет" (translates to English as "Hi"), would look like this: ``` 1,"%D0%9F%D1%80%D0%B8%D0%B2%D0%B5%D1%82" @@ -23,7 +23,7 @@ Data is stored in `.csv` files, one file line per table entry, without a row wit ## Example {#example} -When exporting tables created within a tutorial when [Getting started with YQL](../../../../getting_started/yql.md#create-table) in the "Getting started" section, the following file structure is created: +When you export the tables created under [{#T}]({{ quickstart-path }}) in Getting started, the system will create the following file structure: ``` ├── episodes @@ -37,9 +37,9 @@ When exporting tables created within a tutorial when [Getting started with YQL]( └── scheme.pb ``` -File `series/scheme.pb` contents: +Contents of the `series/scheme.pb` file: -``` +``` columns { name: "series_id" type { diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md index 26ba7d85ff..bf01388fef 100644 --- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md +++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/import-file.md @@ -1,6 +1,6 @@ # Importing data from a file to an existing table -With the `import file` subcommand, you can import data from [CSV]{% if lang == "ru" %}(https://ru.wikipedia.org/wiki/CSV){% endif %}{% if lang == "en" %}(https://en.wikipedia.org/wiki/Comma-separated_values){% endif %} or [TSV]{% if lang == "ru" %}(https://ru.wikipedia.org/wiki/TSV){% endif %}{% if lang == "en" %}(https://en.wikipedia.org/wiki/Tab-separated_values){% endif %} files to an existing table. +With the `import file` command, you can import data from [CSV]{% if lang == "ru" %}(https://ru.wikipedia.org/wiki/CSV){% endif %}{% if lang == "en" %}(https://en.wikipedia.org/wiki/Comma-separated_values){% endif %} or [TSV]{% if lang == "ru" %}(https://ru.wikipedia.org/wiki/TSV){% endif %}{% if lang == "en" %}(https://en.wikipedia.org/wiki/Tab-separated_values){% endif %} files to an existing table. The command implements the `BulkUpsert` method, which ensures high efficiency of multi-row bulk upserts with no atomicity guarantees. The upsert process is split into multiple independent parallel transactions, each covering a single partition. When completed successfully, it guarantees that all data is upserted. @@ -27,17 +27,17 @@ General format of the command: * `--skip-rows NUM`: A number of rows from the beginning of the file that will be skipped at import. The default value is `0`. * `--header`: Use this option if the first row (excluding the rows skipped by `--skip-rows`) includes names of data columns to be mapped to table columns. If the header row is missing, the data is mapped according to the order in the table schema. -* `--delimiter STRING`: The data column delimiter character. You cannot use the tabulation character as a delimiter in this option. For tab-delimited import, use the `import file tsv` subcommand. Default value: `,`. +* `--delimiter STRING`: The data column delimiter character. You can't use the tabulation character as a delimiter in this option. For tab-delimited import, use the `import file tsv` subcommand. Default value: `,`. * `--null-value STRING`: The value to be imported as `NULL`. Default value: `""`. * `--batch-bytes VAL`: Split the imported file into batches of specified sizes. 
If a row fails to fit into a batch completely, it's discarded and added to the next batch. Whatever the batch size is, the batch must include at least one row. Default value: `1 MiB`. * `--max-in-flight VAL`: The number of data batches imported in parallel. You can increase the value of this parameter to accelerate importation of large files. The default value is `100`. -* `--newline-delimited` — a flag which guarantees that there are no newline characters inside records. If the flag is set, and import is performed from a file, then different import threads work with the different parts of a source file. This allows to provide maximized performance when loading sorted datasets into partitioned tables, as load is distributed across all partitions. +* `--newline-delimited`: This flag guarantees that there will be no line breaks in records. If this flag is set, and the data is loaded from a file, then different upload streams will process different parts of the source file. This way, you can ensure maximum performance when uploading sorted datasets to partitioned tables, by distributing the workload across all partitions. ## Examples {#examples} {% include [ydb-cli-profile.md](../../../../_includes/ydb-cli-profile.md) %} -Before performing the examples, [create a table](../../../../getting_started/yql.md#create-table) named `series`. +Before performing the examples, [create a table]({{ quickstart-path }}) named `series`. ### Import file {#simple} diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_export.md b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_export.md index 7574f4a778..4b0e2fd790 100644 --- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_export.md +++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_export.md @@ -1,157 +1,158 @@ -# Exporting data to S3-compatible storage
-
-The `export s3` command starts exporting data and information on the server side about data schema objects to S3-compatible storage, in the format described under [File structure](../file_structure.md):
-
-```bash
-{{ ydb-cli }} [connection options] export s3 [options]
-```
-
-{% include [conn_options_ref.md](../../commands/_includes/conn_options_ref.md) %}
-
-## Command line parameters {#pars}
-
-`[options]`: Command parameters:
-
-### S3 connection parameters {#s3-conn}
-
-To run the command to export data to S3 storage, specify the [S3 connection parameters](../s3_conn.md). Since data is exported by the YDB server asynchronously, the specified endpoint must be available to establish a connection on the server side.
-
-### List of exported items {#items}
-
-`--item STRING`: Description of the item to export. You can specify the `--item` parameter multiple times if you need to export multiple items. `STRING` is set in `<property>=<value>,...` format with the following mandatory properties:
-- `source`, `src`, or `s`: Path to the exported directory or table, `.` indicates the DB root directory. If you specify a directory, all of its items whose names do not start with a dot and, recursively, all subdirectories whose names do not start with a dot are exported.
-- `destination`, `dst`, or `d`: Path (key prefix) in S3 storage to store exported items.
-
-`--exclude STRING`: Template ([PCRE](https://www.pcre.org/original/doc/html/pcrepattern.html)) to exclude paths from export. Specify this parameter multiple times for different templates.
-
-### Additional parameters {#aux}
-
-| Parameter | Description |
---- | ---
-| `--description STRING` | Operation text description saved to the history of operations. |
-| `--retries NUM` | Number of export retries to be made by the server.</br>Defaults to `10`. |
-| `--compression STRING` | Compress exported data.</br>If the default compression level is used for the [Zstandard](https://en.wikipedia.org/wiki/Zstd) algorithm, data can be compressed by 5-10 times. Compressing data uses the CPU and may affect the speed of performing other DB operations.</br>Possible values:</br><ul><li>`zstd`: Compression using the Zstandard algorithm with the default compression level (`3`).</li><li>`zstd-N`: Compression using the Zstandard algorithm, where `N` stands for the compression level (`1` — `22`).</li></ul> |
-| `--format STRING` | Result format.</br>Possible values:</br><ul><li>`pretty`: Human-readable format (default).</li><li>`proto-json-base64`: [Protocol Buffers](https://en.wikipedia.org/wiki/Protocol_Buffers) in [JSON](https://en.wikipedia.org/wiki/JSON) format, binary strings are [Base64](https://en.wikipedia.org/wiki/Base64)-encoded.</li></ul> |
-
-## Running the export command {#exec}
-
-### Export result {#result}
-
-If successful, the `export s3` command outputs summary information about the enqueued operation to export data to S3, in the format specified in the `--format` option. The export itself is performed by the server asynchronously. The output summary shows the operation ID that you can use later to check the operation status and perform actions on it:
-
-- In the default `pretty` output mode, the operation ID is displayed in the id field with semigraphics formatting:
-
- ```
- ┌───────────────────────────────────────────┬───────┬─────...
- | id | ready | stat...
- ├───────────────────────────────────────────┼───────┼─────...
- | ydb://export/6?id=281474976788395&kind=s3 | true | SUCC...
- ├╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴...
- | StorageClass: NOT_SET
- | Items:
- ...
- ```
-
-- In the proto-json-base64 output mode, the operation ID is in the "id" attribute:
-
- ```
- {"id":"ydb://export/6?id=281474976788395&kind=s3","ready":true, ... }
- ```
-
-### Export status {#status}
-
-Data is exported in the background. To find out the export status and progress, use the `operation get` command with the operation ID **enclosed in quotation marks** and passed as a command parameter. For example:
-
-```bash
-{{ ydb-cli }} -p db1 operation get "ydb://export/6?id=281474976788395&kind=s3"
-```
-
-The `operation get` output format is also set by the `--format` option.
-
-Although the operation ID is in URL format, there is no guarantee that it is maintained in the future. It should only be interpreted as a string.
-
-You can track the export progress by changes in the "progress" attribute:
-
-- In the default `pretty` output mode, an export operation that completed successfully is displayed as "Done" in the `progress` field with semigraphics formatting:
-
- ```
- ┌───── ... ──┬───────┬─────────┬──────────┬─...
- | id | ready | status | progress | ...
- ├──────... ──┼───────┼─────────┼──────────┼─...
- | ydb:/... | true | SUCCESS | Done | ...
- ├╴╴╴╴╴ ... ╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴╴┴╴...
- ...
- ```
-
-- In the proto-json-base64 output mode, the completed export operation is indicated with the `PROGRESS_DONE` value of the `progress` attribute:
-
- ```
- {"id":"ydb://...", ...,"progress":"PROGRESS_DONE",... }
- ```
-
-### Completing the export operation {#forget}
-
-When running the export operation, a directory named `export_*` is created in the root directory, where `*` is the numeric part of the export ID. This directory stores tables with a consistent snapshot of exported data as of the export start time.
-
-Once the export is done, use the `operation forget` command to make sure the export is completed: the operation is removed from the list of operations and all files created for it are deleted:
-
-```bash
-{{ ydb-cli }} -p db1 operation forget "ydb://export/6?id=281474976788395&kind=s3"
-```
-
-### List of export operations {#list}
-
-To get a list of export operations, run the `operation list export/s3` command:
-
-```bash
-{{ ydb-cli }} -p db1 operation list export/s3
-```
-
-The `operation list` output format is also set by the `--format` option.
-
-## Examples {#examples}
-
-{% include [example_db1.md](../../_includes/example_db1.md) %}
-
-### Exporting a database {#example-full-db}
-
-Exporting all DB objects whose names do not start with a dot and that are not stored in directories whose names start with a dot to the `export1` directory in `mybucket` using the S3 authentication parameters from environment variables or the `~/.aws/credentials` file:
-
-```
-ydb -p db1 export s3 \
- --s3-endpoint storage.yandexcloud.net --bucket mybucket \
- --item src=.,dst=export1
-```
-
-### Exporting multiple directories {#example-specific-dirs}
-
-Exporting items from DB directories named dir1 and dir2 to the `export1` directory in `mybucket` using the explicitly set S3 authentication parameters:
-
-```
-ydb -p db1 export s3 \
- --s3-endpoint storage.yandexcloud.net --bucket mybucket \
- --access-key VJGSOScgs-5kDGeo2hO9 --secret-key fZ_VB1Wi5-fdKSqH6074a7w0J4X0 \
- --item src=dir1,dst=export1/dir1 --item src=dir2,dst=export1/dir2
-```
-
-### Getting operation IDs {#example-list-oneline}
-
-To get a list of export operation IDs in a format suitable for handling in bash scripts, use the [jq](https://stedolan.github.io/jq/download/) utility:
-
-```bash
-{{ ydb-cli }} -p db1 operation list export/s3 --format proto-json-base64 | jq -r ".operations[].id"
-```
-
-You'll get an output where each new line shows an operation's ID. For example:
-
-```
-ydb://export/6?id=281474976789577&kind=s3
-ydb://export/6?id=281474976789526&kind=s3
-ydb://export/6?id=281474976788779&kind=s3
-```
-
-You can use these IDs, for example, to run a loop to end all the current operations:
-
-```bash
-{{ ydb-cli }} -p db1 operation list export/s3 --format proto-json-base64 | jq -r ".operations[].id" | while read line; do {{ ydb-cli }} -p db1 operation forget $line;done
-```
+# Exporting data to S3-compatible storage + +The `export s3` command starts exporting data and information on the server side about data schema objects to S3-compatible storage, in the format described under [File structure](../file_structure.md): + +```bash +{{ ydb-cli }} [connection options] export s3 [options] +``` + +{% include [conn_options_ref.md](../../commands/_includes/conn_options_ref.md) %} + +## Command line parameters {#pars} + +`[options]`: Command parameters: + +### S3 connection parameters {#s3-conn} + +To run the command to export data to S3 storage, specify the [S3 connection parameters](../s3_conn.md). Since data is exported by the YDB server asynchronously, the specified endpoint must be available to establish a connection on the server side. + +### List of exported items {#items} + +`--item STRING`: Description of the item to export. You can specify the `--item` parameter multiple times if you need to export multiple items. `STRING` is set in `<property>=<value>,...` format with the following mandatory properties: +- `source`, `src`, or `s`: Path to the exported directory or table, `.` indicates the DB root directory. If you specify a directory, all of its items whose names do not start with a dot and, recursively, all subdirectories whose names do not start with a dot are exported. +- `destination`, `dst`, or `d`: Path (key prefix) in S3 storage to store exported items. + +`--exclude STRING`: Template ([PCRE](https://www.pcre.org/original/doc/html/pcrepattern.html)) to exclude paths from export. Specify this parameter multiple times for different templates. + +### Additional parameters {#aux} + +| Parameter | Description | +--- | --- +| `--description STRING` | Operation text description saved to the history of operations. | +| `--retries NUM` | Number of export retries to be made by the server.</br>Defaults to `10`. | +| `--compression STRING` | Compress exported data.</br>If the default compression level is used for the [Zstandard](https://en.wikipedia.org/wiki/Zstandard) algorithm, data can be compressed by 5-10 times. Compressing data uses the CPU and may affect the speed of performing other DB operations.</br>Possible values:</br><ul><li>`zstd`: Compression using the Zstandard algorithm with the default compression level (`3`).</li><li>`zstd-N`: Compression using the Zstandard algorithm, where `N` stands for the compression level (`1` — `22`).</li></ul> | +| `--format STRING` | Result format.</br>Possible values:</br><ul><li>`pretty`: Human-readable format (default).</li><li>`proto-json-base64`: [Protocol Buffers](https://en.wikipedia.org/wiki/Protocol_Buffers) in [JSON](https://en.wikipedia.org/wiki/JSON) format, binary strings are [Base64](https://en.wikipedia.org/wiki/Base64)-encoded.</li></ul> | + +## Running the export command {#exec} + +### Export result {#result} + +If successful, the `export s3` command prints summary information about the enqueued operation to export data to S3, in the format specified in the `--format` option. The export itself is performed by the server asynchronously. The summary shows the operation ID that you can use later to check the operation status and perform actions on it: + +- In the default `pretty` mode, the operation ID is displayed in the id field with semigraphics formatting: + + ``` + ┌───────────────────────────────────────────┬───────┬─────... + | id | ready | stat... + ├───────────────────────────────────────────┼───────┼─────... + | ydb://export/6?id=281474976788395&kind=s3 | true | SUCC... 
+ ├╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴... + | StorageClass: NOT_SET + | Items: + ... + ``` + +- In the proto-json-base64 mode, the operation ID is in the "id" attribute: + + ``` + {"id":"ydb://export/6?id=281474976788395&kind=s3","ready":true, ... } + ``` + +### Export status {#status} + +Data is exported in the background. To find out the export status and progress, use the `operation get` command with the operation ID **enclosed in quotation marks** and passed as a command parameter. For example: + +```bash +{{ ydb-cli }} -p quickstart operation get "ydb://export/6?id=281474976788395&kind=s3" +``` + +The `operation get` format is also set by the `--format` option. + +Although the operation ID is in URL format, there is no guarantee that it is maintained in the future. It should only be interpreted as a string. + +You can track the export progress by changes in the "progress" attribute: + +- In the default `pretty` mode, successfully completed export operations are displayed as "Done" in the `progress` field with semigraphics formatting: + + ``` + ┌───── ... ──┬───────┬─────────┬──────────┬─... + | id | ready | status | progress | ... + ├──────... ──┼───────┼─────────┼──────────┼─... + | ydb:/... | true | SUCCESS | Done | ... + ├╴╴╴╴╴ ... ╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴╴┴╴... + ... + ``` + +- In the proto-json-base64 mode, the completed export operation is indicated with the `PROGRESS_DONE` value of the `progress` attribute: + + ``` + {"id":"ydb://...", ...,"progress":"PROGRESS_DONE",... } + ``` + +### Completing the export operation {#forget} + +When running the export operation, a directory named `export_*` is created in the root directory, where `*` is the numeric part of the export ID. This directory stores tables with a consistent snapshot of exported data as of the export start time. + +Once the export is done, use the `operation forget` command to make sure the export is completed: the operation is removed from the list of operations and all files created for it are deleted: + +```bash +{{ ydb-cli }} -p quickstart operation forget "ydb://export/6?id=281474976788395&kind=s3" +``` + +### List of export operations {#list} + +To get a list of export operations, run the `operation list export/s3` command: + +```bash +{{ ydb-cli }} -p quickstart operation list export/s3 +``` + +The `operation list` format is also set by the `--format` option. 
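The filtering and compression parameters described above can be combined in a single call. A minimal sketch (not part of the original examples), assuming the `quickstart` profile and a bucket named `mybucket`: everything except paths containing `tmp` is exported and compressed with Zstandard at level 10.

```bash
# Export the whole database, skip any path that contains "tmp",
# and compress the exported data with zstd at compression level 10.
{{ ydb-cli }} -p quickstart export s3 \
  --s3-endpoint storage.yandexcloud.net --bucket mybucket \
  --item src=.,dst=export_zstd \
  --exclude ".*tmp.*" \
  --compression zstd-10 \
  --description "weekly compressed export"
```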
+ +## Examples {#examples} + +{% include [ydb-cli-profile.md](../../../../_includes/ydb-cli-profile.md) %} + +### Exporting a database {#example-full-db} + +Exporting all DB objects whose names do not start with a dot and that are not stored in directories whose names start with a dot to the `export1` directory in `mybucket` using the S3 authentication parameters from environment variables or the `~/.aws/credentials` file: + +``` +ydb -p quickstart export s3 \ + --s3-endpoint storage.yandexcloud.net --bucket mybucket \ + --item src=.,dst=export1 +``` + +### Exporting multiple directories {#example-specific-dirs} + +Exporting items from DB directories named dir1 and dir2 to the `export1` directory in `mybucket` using the explicitly set S3 authentication parameters: + +``` +ydb -p quickstart export s3 \ + --s3-endpoint storage.yandexcloud.net --bucket mybucket \ + --access-key VJGSOScgs-5kDGeo2hO9 --secret-key fZ_VB1Wi5-fdKSqH6074a7w0J4X0 \ + --item src=dir1,dst=export1/dir1 --item src=dir2,dst=export1/dir2 +``` + +### Getting operation IDs {#example-list-oneline} + +To get a list of export operation IDs in a format suitable for handling in bash scripts, use the [jq](https://stedolan.github.io/jq/download/) utility: + +```bash +{{ ydb-cli }} -p quickstart operation list export/s3 --format proto-json-base64 | jq -r ".operations[].id" +``` + +You'll get a result where each new line shows an operation's ID. For example: + +``` +ydb://export/6?id=281474976789577&kind=s3 +ydb://export/6?id=281474976789526&kind=s3 +ydb://export/6?id=281474976788779&kind=s3 +``` + +You can use these IDs, for example, to run a loop to end all the current operations: + +```bash +{{ ydb-cli }} -p quickstart operation list export/s3 --format proto-json-base64 | jq -r ".operations[].id" | while read line; do {{ ydb-cli }} -p quickstart operation forget $line;done +``` + diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_import.md b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_import.md index de244a53c5..9aba155be3 100644 --- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_import.md +++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/s3_import.md @@ -1,6 +1,6 @@ -# Importing data from S3-compatible storage +# Importing data from an S3 compatible storage -Running the `import s3` command starts, on the server side, importing data and information about data schema objects from S3-compatible storage in the format described in the [File structure](../file_structure.md) article: +The `import s3` command starts, on the server side, the process of importing data and schema object details from an S3-compatible storage, in the format described in the [File structure](../file_structure.md) section: ```bash {{ ydb-cli }} [connection options] import s3 [options] @@ -8,9 +8,9 @@ Running the `import s3` command starts, on the server side, importing data and i {% include [conn_options_ref.md](../../commands/_includes/conn_options_ref.md) %} -Unlike [`tools restore`](../tools_restore.md), the `import s3` command always creates entire objects, meaning that none of the objects being imported (neither directories nor tables) should exist for the command to run successfully. +As opposed to the [`tools restore` command](../tools_restore.md), the `import s3` command always creates objects in entirety, so none of the imported objects (directories or tables) should already exist. 
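Because the command fails if any destination object already exists, it can help to check the target path before starting the import. A minimal sketch, assuming the `quickstart` profile used in the examples below and a hypothetical `dir1` destination:

```bash
# Check whether dir1 already exists in the database; if it does,
# an import with dst=dir1 would fail, so choose another destination.
{{ ydb-cli }} -p quickstart scheme describe dir1
```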
-If you need to import additional data from S3 to existing tables, you can copy the S3 contents to the file system (for example, using [S3cmd](https://s3tools.org/s3cmd)) and run the [`tools restore`](../tools_restore.md) command. +If you need to import additional data from S3 into tables that already exist, you can copy the S3 contents to the file system (for example, using [S3cmd](https://s3tools.org/s3cmd)) and then use the [`tools restore`](../tools_restore.md) command. ## Command line parameters {#pars} @@ -18,84 +18,83 @@ If you need to import additional data from S3 to existing tables, you can copy t ### S3 connection parameters {#s3-conn} -To run the command to import data from S3, make sure to specify the [S3 connection parameters](../s3_conn.md). Since data import is performed asynchronously by the YDB server, the specified endpoint must be available to establish a server-side connection. +To run the command to import data from an S3 storage, specify the [S3 connection parameters](../s3_conn.md). As data is imported by the YDB server asynchronously, the specified endpoint must be available so that a connection can be established from the server side. ### List of imported objects {#items} -`--item STRING`: Description of the object to import. The `--item` parameter can be specified several times if you need to import multiple objects. The `STRING` format is `<property>=<value>,...`, with the following properties required: - -- `source`, `src`, or `s`: Path to S3 (key prefix) specifying the directory or table to import. -- `destination`, `dst`, or`d`: Path to the DB that will store the imported directory or table. The final element of the path must not exist. All directories specified in the path will be created if they don't exist. +`--item STRING`: Description of the item to import. You can specify the `--item` parameter multiple times if you need to import multiple items. `STRING` is set in `<property>=<value>,...` format with the following mandatory properties: +- `source`, `src`, or `s` is the path (key prefix) in S3 that hosts the imported directory or table +- `destination`, `dst`, or `d` is the database path to host the imported directory or table. The destination of the path must not exist. All the directories along the path will be created if missing. ### Additional parameters {#aux} -`--description STRING`: Operation text description stored in the history of operations. `--retries NUM`: Number of import retries the server will make. Defaults to 10. -`--format STRING`: Result output format. - +`--description STRING`: A text description of the operation saved in the operation history. +`--retries NUM`: The number of import retries to be made by the server. The default value is 10. +`--format STRING`: The format of the results. - `pretty`: Human-readable format (default). -- `proto-json-base64`: Protobuf that supports JSON values encoded as binary strings using base64 encoding. +- `proto-json-base64`: Protobuf in JSON format, binary strings are Base64-encoded. -## Importing data {#exec} +## Importing {#exec} -### Import result {#result} +### Import result {#result} -If successful , the `import s3` command outputs summary information about the enqueued operation for importing data from S3 in the format specified in the `--format` option. The actual import operation is performed by the server asynchronously.
The summary displays the operation ID that can be used later to check the status and actions with the operation: +If successful, the `import s3` command prints summary information about the enqueued operation to import data from S3 in the format specified in the `--format` option. The import itself is performed by the server asynchronously. The summary shows the operation ID that you can use later to check the operation status and perform actions on it: -- In the `pretty` output mode used by default, the operation identifier is output in the id field with semigraphics formatting: +- In the default `pretty` mode, the operation ID is displayed in the id field with semigraphics formatting: - ``` - ┌───────────────────────────────────────────┬───────┬─────... - | id | ready | stat... - ├───────────────────────────────────────────┼───────┼─────... - | ydb://import/8?id=281474976788395&kind=s3 | true | SUCC... - ├╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴... - | Items: - ... - ``` + ``` + ┌───────────────────────────────────────────┬───────┬─────... + | id | ready | stat... + ├───────────────────────────────────────────┼───────┼─────... + | ydb://import/8?id=281474976788395&kind=s3 | true | SUCC... + ├╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴... + | Items: + ... + ``` -- In proto-json-base64 output mode, the ID is in the "id" attribute: +- In the proto-json-base64 mode, the operation ID is in the "id" attribute: - ``` - {"id":"ydb://export/8?id=281474976788395&kind=s3","ready":true, ... } - ``` + ``` + {"id":"ydb://export/8?id=281474976788395&kind=s3","ready":true, ... } + ``` ### Import status {#status} -Data is imported in the background. You can get information about the status and progress of the import operation by running the `operation get` command with the **quoted** operation ID passed as the command parameter. For example: +Data is imported in the background. To get information on import status, use the `operation get` command with the operation ID **enclosed in quotation marks** and passed as a command parameter. For example: ```bash -{{ ydb-cli }} -p db1 operation get "ydb://import/8?id=281474976788395&kind=s3" +{{ ydb-cli }} -p quickstart operation get "ydb://import/8?id=281474976788395&kind=s3" ``` -The format of the `operation get` command output is also specified in the `--format` option. +The `operation get` format is also set by the `--format` option. -Although the operation ID format is URL, there is no guarantee that it's retained later. It should only be interpreted as a string. +Although the operation ID is in URL format, there is no guarantee that it is maintained in the future. It should only be interpreted as a string. -You can track the completion of the import operation by changes in the "progress" attribute: +You can track the import by changes in the "progress" attribute: -- In the `pretty` output mode used by default, a successful operation is indicated by the "Done" value in the `progress` field with semigraphics formatting: +- In the default `pretty` mode, successfully completed export operations are displayed as "Done" in the `progress` field with semigraphics formatting: - ``` - ┌───── ... ──┬───────┬─────────┬──────────┬─... - | id | ready | status | progress | ... - ├──────... ──┼───────┼─────────┼──────────┼─... - | ydb:/... | true | SUCCESS | Done | ... - ├╴╴╴╴╴ ... ╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴╴┴╴... - ... - ``` + ``` + ┌───── ... ──┬───────┬─────────┬──────────┬─... + | id | ready | status | progress | ... + ├──────... 
──┼───────┼─────────┼──────────┼─... + | ydb:/... | true | SUCCESS | Done | ... + ├╴╴╴╴╴ ... ╴╴┴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴┴╴╴╴╴╴╴╴╴╴╴┴╴... + ... + ``` -- In proto-json-base64 output mode, a completed operation is indicated by the `PROGRESS_DONE` value of the `progress` attribute: +- In the proto-json-base64 mode, the completed export operation is indicated with the `PROGRESS_DONE` value of the `progress` attribute: - ``` - {"id":"ydb://...", ...,"progress":"PROGRESS_DONE",... } - ``` + ``` + {"id":"ydb://...", ...,"progress":"PROGRESS_DONE",... } + ``` -### Ending the import operation {#forget} +### Completing the import operation {#forget} -Once the data is imported, use the `operation forget` command to make sure the import operation is removed from the list of operations: +When the import is complete, use `operation forget` to delete the import from the operation list: ```bash -{{ ydb-cli }} -p db1 operation forget "ydb://import/8?id=281474976788395&kind=s3" +{{ ydb-cli }} -p quickstart operation forget "ydb://import/8?id=281474976788395&kind=s3" ``` ### List of import operations {#list} @@ -103,31 +102,31 @@ Once the data is imported, use the `operation forget` command to make sure the i To get a list of import operations, run the `operation list import/s3` command: ```bash -{{ ydb-cli }} -p db1 operation list import/s3 +{{ ydb-cli }} -p quickstart operation list import/s3 ``` -The format of the `operation list` command output is also specified in the `--format` option. +The `operation list` format is also set by the `--format` option. ## Examples {#examples} -{% include [example_db1.md](../../_includes/example_db1.md) %} +{% include [ydb-cli-profile.md](../../../../_includes/ydb-cli-profile.md) %} -### Importing data to the DB root {#example-full-db} +### Importing to the database root {#example-full-db} -Importing the contents of the `export1` directory in the `mybucket` bucket to the root of the database, using S3 authentication parameters from environment variables or the `~/.aws/credentials` file: +Importing to the database root the contents of the `export1` directory in the `mybucket` bucket using the S3 authentication parameters taken from the environment variables or the `~/.aws/credentials` file: ``` -ydb -p db1 import s3 \ +ydb -p quickstart import s3 \ --s3-endpoint storage.yandexcloud.net --bucket mybucket \ --item src=export1,dst=. 
``` ### Importing multiple directories {#example-specific-dirs} -Importing objects from the dir1 and dir2 directories of the `mybucket` S3 bucket to the same-name DB directories using explicitly specified authentication parameters in S3: +Importing items from the dir1 and dir2 directories in the `mybucket` S3 bucket to the same-name database directories using explicitly specified S3 authentication parameters: ``` -ydb -p db1 import s3 \ +ydb -p quickstart import s3 \ --s3-endpoint storage.yandexcloud.net --bucket mybucket \ --access-key VJGSOScgs-5kDGeo2hO9 --secret-key fZ_VB1Wi5-fdKSqH6074a7w0J4X0 \ --item src=export/dir1,dst=dir1 --item src=export/dir2,dst=dir2 @@ -135,13 +134,13 @@ ydb -p db1 import s3 \ ### Getting operation IDs {#example-list-oneline} -To get a list of import operation IDs in a format that is convenient for processing in bash scripts, use [jq](https://stedolan.github.io/jq/download/): +To get a list of import operation IDs in a bash-friendly format, use the [jq](https://stedolan.github.io/jq/download/) utility: ```bash -{{ ydb-cli }} -p db1 operation list import/s3 --format proto-json-base64 | jq -r ".operations[].id" +{{ ydb-cli }} -p quickstart operation list import/s3 --format proto-json-base64 | jq -r ".operations[].id" ``` -You'll get an output where each new line contains the operation ID. For example: +You'll get a result where each new line shows an operation's ID. For example: ``` ydb://import/8?id=281474976789577&kind=s3 @@ -149,9 +148,9 @@ ydb://import/8?id=281474976789526&kind=s3 ydb://import/8?id=281474976788779&kind=s3 ``` -These IDs can be used, for example, to run a loop that will end all current operations: +You can use these IDs, for example, to run a loop to end all the current operations: ```bash -{{ ydb-cli }} -p db1 operation list import/s3 --format proto-json-base64 | jq -r ".operations[].id" | while read line; do {{ ydb-cli }} -p db1 operation forget $line;done +{{ ydb-cli }} -p quickstart operation list import/s3 --format proto-json-base64 | jq -r ".operations[].id" | while read line; do {{ ydb-cli }} -p quickstart operation forget $line;done ``` diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_dump.md b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_dump.md index fa64fcc774..7b4f03bf89 100644 --- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_dump.md +++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_dump.md @@ -1,6 +1,6 @@ # Exporting data to the file system -The `tools dump` command dumps data and information about data schema objects to the client file system in the format described in the [File structure](../file_structure.md) article: +The `tools dump` command dumps the database data and objects schema to the client file system, in the format described in the [File system](../file_structure.md): ```bash {{ ydb-cli }} [connection options] tools dump [options] @@ -10,44 +10,44 @@ The `tools dump` command dumps data and information about data schema objects to `[options]`: Command parameters: -`-p PATH` or `--path PATH`: Path to the DB directory whose objects are to be dumped or path to the table. By default, the DB root directory. The following will be dumped: all subdirectories whose names do not begin with a dot and tables whose names do not begin with a dot inside these subdirectories. To dump such tables or the contents of such directories, you can explicitly specify their names in this parameter. 
+`-p PATH` or `--path PATH`: Path to the database directory with objects or a path to the table to be dumped. The root database directory is used by default. The dump includes all subdirectories whose names don't begin with a dot and the tables in them whose names don't begin with a dot. To dump such tables or the contents of such directories, you can specify their names explicitly in this parameter. -`-o PATH` or `--output PATH`: Path to the directory in the client file system that data should be dumped to. If the specified directory doesn't exist, it will be created. Anyway, the entire path to it must exist. If the specified directory does exist, it must be empty. If the parameter is not specified, a directory with a name in `backup_YYYYDDMMTHHMMSS` format will be created in the current directory, where YYYYDDMM indicates the date and HHMMSS the export start time. +`-o PATH` or `--output PATH`: Path to the directory in the client file system to dump the data to. If such a directory doesn't exist, it will be created. The entire path to it must already exist, however. If the specified directory exists, it must be empty. If the parameter is omitted, a directory with the name `backup_YYYYDDMMTHHMMSS` will be created in the current directory, with YYYYDDMM being the date and HHMMSS: the time when the dump began. -`--exclude STRING`: Pattern ([PCRE](https://www.pcre.org/original/doc/html/pcrepattern.html)) for excluding paths from the export destination. This parameter can be specified several times for different patterns. +`--exclude STRING`: Template ([PCRE](https://www.pcre.org/original/doc/html/pcrepattern.html)) to exclude paths from export. Specify this parameter multiple times for different templates. -`--scheme-only`: Only dump information about data schema objects and no data. +`--scheme-only`: Dump only the details about the database schema objects, without dumping their data -`--consistency-level VAL`: Consistency level. Possible options: +`--consistency-level VAL`: The consistency level. Possible options: +- `database`: A fully consistent dump, with one snapshot taken before starting dumping. Applied by default. +- `table`: Consistency within each dumped table, taking individual independent snapshots for each table dumped. Might run faster and have a smaller effect on the current workload processing in the database. -- `database`: Fully consistent export with a single snapshot taken before starting the export operation. Applied by default. -- `table`: Consistency within each table being dumped with separate independent snapshots taken for each such table. It can run faster and have less impact on handling the current DB load. +`--avoid-copy`: Do not create a snapshot before dumping. The consistency snapshot taken by default might be inapplicable in some cases (for example, for tables with external blobs). -`--avoid-copy`: Do not create a dump snapshot. The snapshot used by default to ensure consistency may not be applicable in some cases (such as for tables with external blobs). - -`--save-partial-result`: Do not delete the result of a partially completed dump. If this option is not enabled, the result will be deleted in case an error occurs when dumping data. +`--save-partial-result`: Don't delete the result of partial dumping. Without this option, the dumps that terminated with an error are deleted. 
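The dump options can be combined. A minimal sketch, assuming the `quickstart` profile and a hypothetical naming convention where paths containing `staging` should be skipped:

```bash
# Dump dir1 to a local directory, skip any path that contains "staging",
# and take an independent snapshot per table instead of one database snapshot.
{{ ydb-cli }} --profile quickstart tools dump \
  -p dir1 -o ~/backup_dir1 \
  --exclude ".*staging.*" \
  --consistency-level table
```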
## Examples -{% include [example_db1.md](../../_includes/example_db1.md) %} +{% include [ydb-cli-profile.md](../../../../_includes/ydb-cli-profile.md) %} ### Exporting a database -With a directory named `backup_...` automatically created in the current directory: +With automatic creation of the `backup_...` directory In the current directory: ``` -{{ ydb-cli }} --profile db1 tools dump +{{ ydb-cli }} --profile quickstart tools dump ``` -To the specified directory: +To a specific directory: ``` -{{ ydb-cli }} --profile db1 tools dump -o ~/backup_db1 +{{ ydb-cli }} --profile quickstart tools dump -o ~/backup_quickstart ``` -### Exporting the structure of tables in the specified DB directory and its subdirectories +### Dumping the table structure within a specified database directory (including subdirectories) ``` -{{ ydb-cli }} --profile db1 tools dump -p dir1 --scheme-only +{{ ydb-cli }} --profile quickstart tools dump -p dir1 --scheme-only ``` + diff --git a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_restore.md b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_restore.md index 45a4a80286..7955c5dc1b 100644 --- a/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_restore.md +++ b/ydb/docs/en/core/reference/ydb-cli/export_import/_includes/tools_restore.md @@ -1,6 +1,6 @@ # Importing data from the file system -The `tools restore` command creates data schema objects in the DB and imports to them the data from the file system that was previously dumped there with the `tools dump` command or prepared manually following the rules described in the [File structure](../file_structure.md) article: +The `tools restore` command creates the items of the database schema in the database, and populates them with the data previously exported there with the `tools dump` command or prepared manually as per the rules from the [File structure](../file_structure.md) article: ```bash {{ ydb-cli }} [connection options] tools restore -p PATH -i PATH [options] @@ -8,83 +8,81 @@ The `tools restore` command creates data schema objects in the DB and imports to {% include [conn_options_ref.md](../../commands/_includes/conn_options_ref.md) %} -If a table already exists in the database, no changes are made to its schema. This may prevent the data import operation from being performed if some columns of the imported files are missing in the DB table or have an incorrect type. +If the table already exists in the database, no changes will be made to its schema. If some columns present in the imported files are missing in the database or have mismatching types, this may lead to the data import operation failing. -Data import to DB tables is performed using the [YQL `REPLACE`](../../../../yql/reference/syntax/replace_into.md) statement. If before the start of the import operation, the table contained any entries, those of them whose keys exist in the imported files are replaced with data from the files. The entries with keys that are missing in the imported files remain unchanged. +To import data to the table, use the [YQL `REPLACE` command](../../../../yql/reference/syntax/replace_into.md). If the table included any records before the import, the records whose keys are present in the imported files are replaced by the data from the file. The records whose keys are absent in the imported files aren't affected. ## Required parameters {#mandatory} -`-p PATH` or `--path PATH`: Path to the DB directory that data will be imported to. 
To import data to the root directory, specify `.`. Any missing directories specified in the path will be created. +`-p PATH` or `--path PATH`: Path to the database directory the data will be imported to. To import data to the root directory, specify `.`. All the missing directories along the path will be created. -`-i PATH` or `--input PATH`: Path to the directory in the client file system that data will be imported from. +`-i PATH` or `--input PATH`: Path to the directory in the client system the data will be imported from. ## Optional parameters {#optional} -`[options]`: Optional command parameters: +`[options]`: Optional parameters of the command: -`--restore-data VAL`: Data import flag, 1 (yes) or 0 (no), defaults to 1. If 0, the import operation will only create schema objects without data being restored to them. If there is no data in the file system (only the data schema is exported), it doesn't matter if you change the flag value. +`--restore-data VAL`: Enables/disables data import, 1 (yes) or 0 (no), defaults to 1. If set to 0, the import only creates items in the schema without populating them with data. If there's no data in the file system (only the schema has been exported), it doesn't make sense to change this option. -`--restore-indexes VAL`: Index import flag, 1 (yes) or 0 (no), defaults to 1. If 0, when running the import operation, secondary indexes will neither be registered in the data schema nor populated with data. +`--restore-indexes VAL`: Enables/disables import of indexes, 1 (yes) or 0 (no), defaults to 1. If set to 0, the import won't either register secondary indexes in the data schema or populate them with data. -`--dry-run`: Mode for checking if the data schema in the DB and file system match without making any changes to the DB, 1 (yes) or 0 (no), defaults to 0. If this mode is enabled, it is checked that: +`--dry-run`: Matching the data schemas in the database and file system without updating the database, 1 (yes) or 0 (no), defaults to 0. When enabled, the system checks that: +- All tables in the file system are present in the database +- These items are based on the same schema, both in the file system and in the database -- All tables in the file system are present in the DB. -- The object data schema in the DB and file system match. +`--save-partial-result`: Save the partial import result. If disabled, an import error results in reverting to the database state before the import. -`--save-partial-result`: Save the result of an incomplete import operation. If this option isn't enabled, in the event of an error during the import operation, the state of the DB is restored to the point before the operation is started. +### Workload restriction parameters {#limiters} -### Load limit parameters {#limiters} +Using the below parameters, you can limit the import workload against the database. -The following parameters let you limit the load on the DB generated by data import processes. +{% note warning "Attention!" %} -{% note warning "Attention" %} - -Some of the parameters listed below have valid default values. This means that even if none of them is specified in the `tools restore` command call, the load will still be limited. +Some of the below parameters have default values. This means that the workload will be limited even if none of them is mentioned in `tools restore`. {% endnote %} -`--bandwidth VAL`: Limits the amount of data that can be imported per second, defaults to 0 (not set). 
`VAL` stands for the data volume that is specified as a prefixed number like 2MiB. -`--rps VAL`: Limit on the number of requests per second for importing data packets to the DB, defaults to 30. -`--in-flight VAL`: Limit on the number of concurrently executed requests, defaults to 10. -`--upload-batch-rows VAL`: Limit on the number of rows in the imported batch, defaults to 0 (unlimited). `VAL` stands for the amount of rows, specified as a number with an optional decimal prefix, such as 1K. -`--upload-batch-bytes VAL`: Limit on the size of an imported batch, defaults to 512KB. `VAL` stands for the data volume that is specified as a prefixed number like 1MiB. -`--upload-batch-rus VAL`: Only applies to Serverless databases, limits the use of Request Units (RU) per import of a single batch, defaults 30 RUs. The batch size is selected for the specified value. `VAL` stands for the amount of RUs, specified as a number with an optional decimal prefix, such as 100 or 1K. +`--bandwidth VAL`: Limits the amount of data uploaded per second, defaults to 0 (not set). `VAL` specifies the data amount with a unit, for example, 2MiB. +`--rps VAL`: Limits the number of queries used to upload batches to the database per second, the default value is 30. +`--in-flight VAL`: Limits the number of queries that can be run in parallel, the default value is 10. +`--upload-batch-rows VAL`: Limits the number of records in the uploaded batch, the default value is 0 (unlimited). `VAL` determines the number of records and is set as a number with an optional unit, for example, 1K. +`--upload-batch-bytes VAL`: Limits the batch of uploaded data, the default value is 512KB. `VAL` specifies the data amount with a unit, for example, 1MiB. +`--upload-batch-rus VAL`: Applies only to Serverless databases to limit Request Units (RU) that can be consumed to upload one batch, defaults to 30 RU. The batch size is selected to match the specified value. `VAL` determines the number of RU and is set as a number with an optional unit, for example, 100 or 1K. ## Examples {#examples} -{% include [example_db1.md](../../_includes/example_db1.md) %} +{% include [ydb-cli-profile.md](../../../../_includes/ydb-cli-profile.md) %} -### Importing data to the DB root +### Importing to the database root -From the current directory of the file system: +From the current file system directory: ``` -{{ ydb-cli }} -p db1 tools restore -p . -i . +{{ ydb-cli }} -p quickstart tools restore -p . -i . ``` -From the specified directory of the file system: +From the specified file system directory: ``` -{{ ydb-cli }} -p db1 tools restore -p . -i ~/backup_db1 +{{ ydb-cli }} -p quickstart tools restore -p . -i ~/backup_quickstart ``` -### Importing data to the specified DB directory +### Uploading data to the specified directory in the database -From the current directory of the file system: +From the current file system directory: ``` -{{ ydb-cli }} -p db1 tools restore -p dir1/dir2 -i . +{{ ydb-cli }} -p quickstart tools restore -p dir1/dir2 -i .
``` -From the specified directory of the file system: +From the specified file system directory: ``` -{{ ydb-cli }} -p db1 tools restore -p dir1/dir2 -i ~/backup_db1 +{{ ydb-cli }} -p quickstart tools restore -p dir1/dir2 -i ~/backup_quickstart ``` -Checking if the data schema in the DB and file system match: +Matching schemas between the database and file system: ``` -{{ ydb-cli }} -p db1 tools restore -p dir1/dir2 -i ~/backup_db1 --dry-run +{{ ydb-cli }} -p quickstart tools restore -p dir1/dir2 -i ~/backup_quickstart --dry-run ``` - diff --git a/ydb/docs/en/core/reference/ydb-cli/operation-cancel.md b/ydb/docs/en/core/reference/ydb-cli/operation-cancel.md index cd1d0d75f3..13d4426586 100644 --- a/ydb/docs/en/core/reference/ydb-cli/operation-cancel.md +++ b/ydb/docs/en/core/reference/ydb-cli/operation-cancel.md @@ -1,6 +1,6 @@ -# Canceling long running operations -Use the `ydb operation cancel` subcommand to cancel the specified long running operation. Only an incomplete operation can be canceled. +# Canceling long-running operations +Use the `ydb operation cancel` subcommand to cancel the specified long-running operation. Only an incomplete operation can be canceled. General format of the command: @@ -10,9 +10,9 @@ General format of the command: * `global options`: [Global parameters](commands/global-options.md). * `options`: [Parameters of the subcommand](#options). -* `id`: The ID of the long running operation. The ID contains characters that can be interpreted by your command shell. If necessary, use shielding, for example, `'<id>'` for bash. +* `id`: The ID of the long-running operation. The ID contains characters that can be interpreted by your command shell. If necessary, escape it, for example, `'<id>'` for bash. -View a description of the command to obtain the status of a long running operation: +View a description of the command to obtain the status of a long-running operation: ```bash {{ ydb-cli }} operation cancel --help diff --git a/ydb/docs/en/core/reference/ydb-cli/operation-get.md b/ydb/docs/en/core/reference/ydb-cli/operation-get.md index 273693b47e..9a3a15ba0e 100644 --- a/ydb/docs/en/core/reference/ydb-cli/operation-get.md +++ b/ydb/docs/en/core/reference/ydb-cli/operation-get.md @@ -1,6 +1,6 @@ -# Obtaining the status of long running operations +# Obtaining the status of long-running operations -Use the `ydb operation get` subcommand to obtain the status of the specified long running operation. +Use the `ydb operation get` subcommand to obtain the status of the specified long-running operation. General format of the command: @@ -10,9 +10,9 @@ General format of the command: * `global options`: [Global parameters](commands/global-options.md). * `options`: [Parameters of the subcommand](#options). -* `id`: The ID of the long running operation. The ID contains characters that can be interpreted by your command shell. If necessary, use shielding, for example, `'<id>'` for bash. +* `id`: The ID of the long-running operation. The ID contains characters that can be interpreted by your command shell. If necessary, escape it, for example, `'<id>'` for bash.
-View a description of the command to obtain the status of a long running operation: +View a description of the command to obtain the status of a long-running operation: ```bash {{ ydb-cli }} operation get --help @@ -28,10 +28,10 @@ View a description of the command to obtain the status of a long running operati {% include [ydb-cli-profile](../../_includes/ydb-cli-profile.md) %} -Obtain the status of the long running operation with the `ydb://buildindex/7?id=281489389055514` ID: +Obtain the status of the long-running operation with the `ydb://buildindex/7?id=281489389055514` ID: ```bash -ydb -p db1 operation get \ +ydb -p quickstart operation get \ 'ydb://buildindex/7?id=281489389055514' ``` diff --git a/ydb/docs/en/core/reference/ydb-cli/operation-list.md b/ydb/docs/en/core/reference/ydb-cli/operation-list.md index 798e13a1c8..3075f34631 100644 --- a/ydb/docs/en/core/reference/ydb-cli/operation-list.md +++ b/ydb/docs/en/core/reference/ydb-cli/operation-list.md @@ -1,6 +1,6 @@ -# Getting a list of long running operations +# Getting a list of long-running operations -Use the `ydb operation list` subcommand to get a list of long running operations of the specified type. +Use the `ydb operation list` subcommand to get a list of long-running operations of the specified type. General format of the command: @@ -15,7 +15,7 @@ General format of the command: * `export/s3`: The export operations. * `import/s3`: The import operations. -View a description of the command to get a list of long running operations: +View a description of the command to get a list of long-running operations: ```bash {{ ydb-cli }} operation list --help @@ -33,10 +33,10 @@ View a description of the command to get a list of long running operations: {% include [ydb-cli-profile](../../_includes/ydb-cli-profile.md) %} -Get a list of long running build index operations for the `series` table: +Get a list of long-running build index operations for the `series` table: ```bash -ydb -p db1 operation list \ +ydb -p quickstart operation list \ buildindex ``` diff --git a/ydb/docs/en/core/reference/ydb-cli/profile/_includes/create.md b/ydb/docs/en/core/reference/ydb-cli/profile/_includes/create.md index 5273de0e27..a75afbffb6 100644 --- a/ydb/docs/en/core/reference/ydb-cli/profile/_includes/create.md +++ b/ydb/docs/en/core/reference/ydb-cli/profile/_includes/create.md @@ -58,18 +58,18 @@ The profile will update with the parameters entered on the command line. Any pro ### Examples {#create-cmdline-examples} -#### Creating a profile for connecting to a test database {#quickstart} +#### Creating a profile to connect to a test database {#quickstart} -You can use the `quickstart` profile to connect to a database in a single-node cluster {{ ydb-short-name }}: +To connect to a DB in a single-node {{ ydb-short-name }} cluster, you can use the `quickstart` profile: ```bash ydb config profile create quickstart --endpoint grpc://localhost:2136 --database <path_database> ``` -* `path_database`: the database path. Specify one of the values: +* `path_database`: Database path. Specify one of these values: - * `/Root/test`: If you deployed the cluster using an executable file. - * `/local`: If you used a Docker image. + * `/Root/test`: If you used an executable to deploy your cluster. + * `/local`: If you deployed your cluster from a Docker image. 
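Once the profile is created, you can check that it connects by running any short command with it, for example listing the database root (a sketch assuming the `quickstart` profile created above):

```bash
# List the root of the database using the newly created profile.
{{ ydb-cli }} -p quickstart scheme ls
```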
#### Creating a profile from previous connection settings {#cmdline-example-from-explicit} @@ -173,7 +173,7 @@ Next, you'll be prompted to sequentially perform the following actions with each Creating a new `mydb1` profile: -1. Run the command: +1. Run this command: ```bash {{ ydb-cli }} config profile create mydb1 @@ -211,8 +211,6 @@ Creating a new `mydb1` profile: Please enter your numeric choice: ``` - If you are not sure what authentication mode to choose, use the recipe from [Authentication](../../../../getting_started/auth.md) under "Getting started". - All the available authentication methods are described in [{#T}](../../../../concepts/auth.md). The set of methods and text of the hints may differ from those given in this example. If the method you choose involves specifying an additional parameter, you'll be prompted to enter it. For example, if you select `4` (Use service account key file): diff --git a/ydb/docs/en/core/reference/ydb-cli/profile/_includes/index.md b/ydb/docs/en/core/reference/ydb-cli/profile/_includes/index.md index 0ff2a930a4..61e22e4b00 100644 --- a/ydb/docs/en/core/reference/ydb-cli/profile/_includes/index.md +++ b/ydb/docs/en/core/reference/ydb-cli/profile/_includes/index.md @@ -3,20 +3,18 @@ A profile is a named set of DB connection parameters stored in a configuration file in the local file system. With profiles, you can reuse data about DB location and authentication parameters, making a CLI call much shorter: - Calling the `scheme ls` command without a profile: - - ```bash - {{ ydb-cli }} \ - -e grpsc://some.host.in.some.domain:2136 \ - -d /some_long_identifier1/some_long_identifier2/database_name \ - --yc-token-file ~/secrets/token_database1 \ - scheme ls - ``` + ```bash + {{ ydb-cli }} \ + -e grpsc://some.host.in.some.domain:2136 \ + -d /some_long_identifier1/some_long_identifier2/database_name \ + --yc-token-file ~/secrets/token_database1 \ + scheme ls + ``` - Calling the same `scheme ls` command using a profile: - - ```bash - {{ ydb-cli }} -p db1 scheme ls - ``` + ```bash + {{ ydb-cli }} -p quickstart scheme ls + ``` ## Profile management commands {#commands} @@ -31,4 +29,3 @@ A profile is a named set of DB connection parameters stored in a configuration f Profiles are stored locally in a file named `~/ydb/config/config.yaml`. {% include [location_overlay.md](location_overlay.md) %} - diff --git a/ydb/docs/en/core/reference/ydb-cli/table-drop.md b/ydb/docs/en/core/reference/ydb-cli/table-drop.md index f17f1f5093..8f7df9486b 100644 --- a/ydb/docs/en/core/reference/ydb-cli/table-drop.md +++ b/ydb/docs/en/core/reference/ydb-cli/table-drop.md @@ -31,5 +31,5 @@ To view a description of the table delete command: To delete the table `series`: ```bash -{{ ydb-cli }} -p db1 table drop series -```
\ No newline at end of file +{{ ydb-cli }} -p quickstart table drop series +``` diff --git a/ydb/docs/en/core/reference/ydb-cli/table-ttl-reset.md b/ydb/docs/en/core/reference/ydb-cli/table-ttl-reset.md index ddd28ffc2c..85231f2b9c 100644 --- a/ydb/docs/en/core/reference/ydb-cli/table-ttl-reset.md +++ b/ydb/docs/en/core/reference/ydb-cli/table-ttl-reset.md @@ -1,6 +1,6 @@ # Resetting TTL parameters -Use the `table ttl reset` subcommand to disable [TTL](../../concepts/ttl.md) for the specified table. +Use the `table ttl reset` subcommand to reset [TTL](../../concepts/ttl.md) for the specified table. General format of the command: @@ -12,7 +12,7 @@ General format of the command: * `options`: [Parameters of the subcommand](#options). * `table path`: The table path. -View a description of the TTL reset command: +View the description of the TTL reset command: ```bash {{ ydb-cli }} table ttl reset --help @@ -22,9 +22,9 @@ View a description of the TTL reset command: {% include [ydb-cli-profile](../../_includes/ydb-cli-profile.md) %} -Disable TTL for the `series` table: +Reset TTL for the `series` table: ```bash -{{ ydb-cli }} -p db1 table ttl reset \ +{{ ydb-cli }} -p quickstart table ttl reset \ series ``` diff --git a/ydb/docs/en/core/reference/ydb-cli/table-ttl-set.md b/ydb/docs/en/core/reference/ydb-cli/table-ttl-set.md index c2c10ac049..89b14a5c50 100644 --- a/ydb/docs/en/core/reference/ydb-cli/table-ttl-set.md +++ b/ydb/docs/en/core/reference/ydb-cli/table-ttl-set.md @@ -22,7 +22,7 @@ View a description of the TTL set command: | Name | Description | ---|--- -| `--column` | The name of the column that will be used to calculate the lifetime of the rows. The column must have the [numeric](../../yql/reference/types/primitive.md#numeric) or [date and time](../../yql/reference/types/primitive.md#datetime) type.<br>In case of the numeric type, the value will be interpreted as the time elapsed since the beginning of the [Unix epoch](https://ru.wikipedia.org/wiki/Unix-время). Measurement units must be specified in the `--unit` parameter. | +| `--column` | The name of the column that will be used to calculate the lifetime of the rows. The column must have the [numeric](../../yql/reference/types/primitive.md#numeric) or [date and time](../../yql/reference/types/primitive.md#datetime) type.<br>In case of the numeric type, the value will be interpreted as the time elapsed since the beginning of the [Unix epoch](https://en.wikipedia.org/wiki/Unix_time). Measurement units must be specified in the `--unit` parameter. | | `--expire-after` | Additional time before deleting that must elapse after the lifetime of the row has expired. Specified in seconds.<br>The default value is `0`. | | `--unit` | The value measurement units of the column specified in the `--column` parameter. It is mandatory if the column has the [numeric](../../yql/reference/types/primitive.md#numeric) type.<br>Possible values:<ul><li>`seconds (s, sec)`: Seconds.</li><li>`milliseconds (ms, msec)`: Milliseconds.</li><li>`microseconds (us, usec)`: Microseconds.</li><li>`nanoseconds (ns, nsec)`: Nanoseconds.</li></ul> | | `--run-interval` | The interval for running the operation to delete rows with expired TTL. Specified in seconds. The default database settings do not allow an interval of less than 15 minutes (900 seconds).<br>The default value is `3600`. 
| @@ -31,12 +31,12 @@ View a description of the TTL set command: {% include [ydb-cli-profile](../../_includes/ydb-cli-profile.md) %} -Set TTL fro the `series` table +Set TTL for the `series` table ```bash -{{ ydb-cli }} -p db1 table ttl set \ +{{ ydb-cli }} -p quickstart table ttl set \ --column createtime \ --expire-after 3600 \ --run-interval 1200 \ series -```
\ No newline at end of file +``` diff --git a/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml b/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml index 9e93f653cd..0d58ea2206 100644 --- a/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml +++ b/ydb/docs/en/core/reference/ydb-cli/toc_i.yaml @@ -59,6 +59,8 @@ items: href: topic-consumer-add.md - name: Deleting a topic consumer href: topic-consumer-drop.md + - name: Saving a consumer offset + href: topic-consumer-offset-commit.md - name: Reading messages from a topic href: topic-read.md - name: Writing messages to a topic @@ -114,4 +116,4 @@ items: - name: Key-Value load href: workload-kv.md - name: Topic load - href: workload-topic.md + href: workload-topic.md diff --git a/ydb/docs/en/core/reference/ydb-cli/tools-copy.md b/ydb/docs/en/core/reference/ydb-cli/tools-copy.md index cc20069196..1b57000862 100644 --- a/ydb/docs/en/core/reference/ydb-cli/tools-copy.md +++ b/ydb/docs/en/core/reference/ydb-cli/tools-copy.md @@ -31,23 +31,23 @@ View a description of the command to copy a table: Create the `backup` folder in the DB: ```bash -{{ ydb-cli }} -p db1 scheme mkdir backup +{{ ydb-cli }} -p quickstart scheme mkdir backup ``` Copy the `series` table to a table called `series-v1`, the `seasons` table to a table called `seasons-v1`, and `episodes` to `episodes-v1` in the `backup` folder: ```bash -{{ ydb-cli }} -p db1 tools copy --item destination=backup/series-v1,source=series --item destination=backup/seasons-v1,source=seasons --item destination=backup/episodes-v1,source=episodes +{{ ydb-cli }} -p quickstart tools copy --item destination=backup/series-v1,source=series --item destination=backup/seasons-v1,source=seasons --item destination=backup/episodes-v1,source=episodes ``` View the listing of objects in the `backup` folder: ```bash -{{ ydb-cli }} -p db1 scheme ls backup +{{ ydb-cli }} -p quickstart scheme ls backup ``` Result: ```text episodes-v1 seasons-v1 series-v1 -```
\ No newline at end of file +``` diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-alter.md b/ydb/docs/en/core/reference/ydb-cli/topic-alter.md index c190e9db22..1745c91b33 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-alter.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-alter.md @@ -38,7 +38,7 @@ The command changes the values of parameters specified in the command line. The Add a partition and the `lzop` compression method to the [previously created](topic-create.md) topic: ```bash -{{ ydb-cli }} -p db1 topic alter \ +{{ ydb-cli }} -p quickstart topic alter \ --partitions-count 3 \ --supported-codecs raw,gzip,lzop \ my-topic @@ -47,7 +47,7 @@ Add a partition and the `lzop` compression method to the [previously created](to Make sure that the topic parameters have been updated: ```bash -{{ ydb-cli }} -p db1 scheme describe my-topic +{{ ydb-cli }} -p quickstart scheme describe my-topic ``` Result: diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-consumer-add.md b/ydb/docs/en/core/reference/ydb-cli/topic-consumer-add.md index 6e19accddc..ceb2965dcb 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-consumer-add.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-consumer-add.md @@ -32,7 +32,7 @@ View the description of the add consumer command: Create a consumer with the `my-consumer` name for the [previously created](topic-create.md) `my-topic` topic. Consumption will start as soon as the first message is received after August 15, 2022 13:00:00 GMT: ```bash -{{ ydb-cli }} -p db1 topic consumer add \ +{{ ydb-cli }} -p quickstart topic consumer add \ --consumer my-consumer \ --starting-message-timestamp 1660568400 \ my-topic @@ -41,7 +41,7 @@ Create a consumer with the `my-consumer` name for the [previously created](topic Make sure the consumer was created: ```bash -{{ ydb-cli }} -p db1 scheme describe my-topic +{{ ydb-cli }} -p quickstart scheme describe my-topic ``` Result: diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-consumer-offset-commit.md b/ydb/docs/en/core/reference/ydb-cli/topic-consumer-offset-commit.md new file mode 100644 index 0000000000..c83a67e5c2 --- /dev/null +++ b/ydb/docs/en/core/reference/ydb-cli/topic-consumer-offset-commit.md @@ -0,0 +1,43 @@ +# Saving a consumer offset + +Each topic consumer has a [consumer offset](../../concepts/topic.md#consumer-offset). + +You can use the `topic consumer offset commit` command to save the consumer offset for the consumer that you [added](topic-consumer-add.md). + +General format of the command: + +```bash +{{ ydb-cli }} [global options...] topic consumer offset commit [options...] <topic-path> +``` + +* `global options`: [Global parameters](commands/global-options.md). +* `options`: [Parameters of the subcommand](#options). +* `topic-path`: Topic path. + +Viewing the command description: + +```bash +{{ ydb-cli }} topic consumer offset commit --help +``` + +## Parameters of the subcommand {#options} + +| Name | Description | +---|--- +| `--consumer <value>` | Consumer name. | +| `--partition <value>` | Partition number. | +| `--offset <value>` | Offset value that you want to set. 
| + +## Examples {#examples} + +{% include [ydb-cli-profile](../../_includes/ydb-cli-profile.md) %} + +For `my-consumer`, set the offset of 123456789 in `my-topic` and partition `1`: + +```bash +{{ ydb-cli }} -p db1 topic consumer offset commit \ + --consumer my-consumer \ + --partition 1 \ + --offset 123456789 \ + my-topic +``` diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-create.md b/ydb/docs/en/core/reference/ydb-cli/topic-create.md index b9f63a5aea..4ffef33a43 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-create.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-create.md @@ -37,7 +37,7 @@ View the description of the create topic command: Create a topic with 2 partitions, `RAW` and `GZIP` compression methods, message retention time of 2 hours, and the `my-topic` path: ```bash -{{ ydb-cli }} -p db1 topic create \ +{{ ydb-cli }} -p quickstart topic create \ --partitions-count 2 \ --supported-codecs raw,gzip \ --retention-period-hours 2 \ @@ -47,7 +47,7 @@ Create a topic with 2 partitions, `RAW` and `GZIP` compression methods, message View parameters of the created topic: ```bash -{{ ydb-cli }} -p db1 scheme describe my-topic +{{ ydb-cli }} -p quickstart scheme describe my-topic ``` Result: diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-drop.md b/ydb/docs/en/core/reference/ydb-cli/topic-drop.md index e3bce7884f..9f3ddea4d3 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-drop.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-drop.md @@ -30,5 +30,5 @@ View the description of the delete topic command: Delete the [previously created](topic-create.md) topic: ```bash -{{ ydb-cli }} -p db1 topic drop my-topic +{{ ydb-cli }} -p quickstart topic drop my-topic ``` diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-overview.md b/ydb/docs/en/core/reference/ydb-cli/topic-overview.md index f6cb2d5038..38057f5f6c 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-overview.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-overview.md @@ -7,5 +7,6 @@ Using {{ ydb-short-name }} CLI commands, you can perform the following operation * [{#T}](topic-drop.md). * [{#T}](topic-consumer-add.md). * [{#T}](topic-consumer-drop.md). +* [{#T}](topic-consumer-offset-commit.md). * [{#T}](topic-read.md). * [{#T}](topic-write.md). diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-read.md b/ydb/docs/en/core/reference/ydb-cli/topic-read.md index b7fd21b50f..7537b1763a 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-read.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-read.md @@ -84,32 +84,32 @@ In all the examples below, a topic named `topic1` and a consumer named `c1` are * Reading a single message with output to the terminal: If the topic doesn't contain new messages for this consumer, the command terminates with no data output: ```bash - {{ ydb-cli }} -p db1 topic read topic1 -c c1 + {{ ydb-cli }} -p quickstart topic read topic1 -c c1 ``` * Waiting for and reading a single message written to a file named `message.bin`. The command keeps running until new messages appear in the topic for this consumer. However, you can terminate it with `Ctrl+C`: ```bash - {{ ydb-cli }} -p db1 topic read topic1 -c c1 -w -f message.bin + {{ ydb-cli }} -p quickstart topic read topic1 -c c1 -w -f message.bin ``` * Viewing information about messages waiting to be handled by the consumer without committing them. 
Up to 10 first messages are output: ```bash - {{ ydb-cli }} -p db1 topic read topic1 -c c1 --format pretty --commit false + {{ ydb-cli }} -p quickstart topic read topic1 -c c1 --format pretty --commit false ``` * Output messages to the terminal as they appear, using newline delimiter characters and transforming messages into Base64. The command will be running until you terminate it with `Ctrl+C`: ```bash - {{ ydb-cli }} -p db1 topic read topic1 -c c1 -w --format newline-delimited --transform base64 + {{ ydb-cli }} -p quickstart topic read topic1 -c c1 -w --format newline-delimited --transform base64 ``` * Track when new messages with the `ERROR` text appear in the topic and output them to the terminal once they arrive: ```bash - {{ ydb-cli }} -p db1 topic read topic1 -c c1 --format newline-delimited -w | grep ERROR + {{ ydb-cli }} -p quickstart topic read topic1 -c c1 --format newline-delimited -w | grep ERROR ``` * Receive another non-empty batch of no more than 150 messages transformed into base64, delimited with newline characters, and written to the `batch.txt` file: ```bash - {{ ydb-cli }} -p db1 topic read topic1 -c c1 \ + {{ ydb-cli }} -p quickstart topic read topic1 -c c1 \ --format newline-delimited -w --limit 150 \ --transform base64 -f batch.txt ``` diff --git a/ydb/docs/en/core/reference/ydb-cli/topic-write.md b/ydb/docs/en/core/reference/ydb-cli/topic-write.md index 449545e84c..c277e6706f 100644 --- a/ydb/docs/en/core/reference/ydb-cli/topic-write.md +++ b/ydb/docs/en/core/reference/ydb-cli/topic-write.md @@ -49,22 +49,22 @@ All the examples given below use a topic named `topic1`. * Writing a terminal input to a single message Once the command is run, you can type any multi-line text and press `Ctrl+D` to input it. ```bash - {{ ydb-cli }} -p db1 topic write topic1 + {{ ydb-cli }} -p quickstart topic write topic1 ``` * Writing the contents of the `message.bin` file to a single message compressed with the GZIP codec ```bash - {{ ydb-cli }} -p db1 topic write topic1 -f message.bin --codec GZIP + {{ ydb-cli }} -p quickstart topic write topic1 -f message.bin --codec GZIP ``` * Writing the contents of the `example.txt` file delimited into messages line by line ```bash - {{ ydb-cli }} -p db1 topic write topic1 -f example.txt --format newline-delimited + {{ ydb-cli }} -p quickstart topic write topic1 -f example.txt --format newline-delimited ``` * Writing a resource downloaded via HTTP and delimited into messages with tab characters ```bash - curl http://example.com/resource | {{ ydb-cli }} -p db1 topic write topic1 --delimiter "\t" + curl http://example.com/resource | {{ ydb-cli }} -p quickstart topic write topic1 --delimiter "\t" ``` * [Examples of YDB CLI command integration](topic-pipeline.md) diff --git a/ydb/docs/en/core/reference/ydb-sdk/example/go/_includes/run_custom.md b/ydb/docs/en/core/reference/ydb-sdk/example/go/_includes/run_custom.md index 96e0982f2d..8d82e690bc 100644 --- a/ydb/docs/en/core/reference/ydb-sdk/example/go/_includes/run_custom.md +++ b/ydb/docs/en/core/reference/ydb-sdk/example/go/_includes/run_custom.md @@ -22,5 +22,3 @@ For example: ( export YDB_ACCESS_TOKEN_CREDENTIALS="t1.9euelZqOnJuJlc..." && cd ydb-go-examples && \ go run ./basic -ydb="grpcs://ydb.example.com:2135?database=/somepath/somelocation" ) ``` - -{% include [../../_includes/pars_from_profile_hint.md](../../_includes/pars_from_profile_hint.md) %}
\ No newline at end of file diff --git a/ydb/docs/en/core/reference/ydb-sdk/example/java/_includes/run_custom.md b/ydb/docs/en/core/reference/ydb-sdk/example/java/_includes/run_custom.md index f97f8c87f1..3e9c58f840 100644 --- a/ydb/docs/en/core/reference/ydb-sdk/example/java/_includes/run_custom.md +++ b/ydb/docs/en/core/reference/ydb-sdk/example/java/_includes/run_custom.md @@ -22,5 +22,3 @@ For example: ( cd ydb-java-examples/basic_example/target && \ YDB_ACCESS_TOKEN_CREDENTIALS="t1.9euelZqOnJuJlc..." java -jar ydb-basic-example.jar grpcs://ydb.example.com:2135?database=/somepath/somelocation) ``` - -{% include [../../_includes/pars_from_profile_hint.md](../../_includes/pars_from_profile_hint.md) %} diff --git a/ydb/docs/en/core/reference/ydb-sdk/example/python/_includes/run_custom.md b/ydb/docs/en/core/reference/ydb-sdk/example/python/_includes/run_custom.md index db3f073ac7..feea41d9c9 100644 --- a/ydb/docs/en/core/reference/ydb-sdk/example/python/_includes/run_custom.md +++ b/ydb/docs/en/core/reference/ydb-sdk/example/python/_includes/run_custom.md @@ -21,5 +21,3 @@ For example: YDB_ACCESS_TOKEN_CREDENTIALS="t1.9euelZqOnJuJlc..." \ python3 ydb-python-sdk/examples/basic_example_v1/ -e grpcs://ydb.example.com:2135 -d /path/db ) ``` - -{% include [../../_includes/pars_from_profile_hint.md](../../_includes/pars_from_profile_hint.md) %} diff --git a/ydb/docs/en/core/toc_i.yaml b/ydb/docs/en/core/toc_i.yaml index 082d992cbc..80c66a5eb7 100644 --- a/ydb/docs/en/core/toc_i.yaml +++ b/ydb/docs/en/core/toc_i.yaml @@ -3,8 +3,10 @@ items: - name: Contents href: index.yaml - name: Getting started - include: { mode: link, path: getting_started/toc_p.yaml } - + href: administration/quickstart.md +# - name: Getting started +# hidden: true +# include: { mode: link, path: getting_started/toc_p.yaml } # Main - { name: Concepts, include: { mode: link, path: concepts/toc_p.yaml } } - { name: Tutorials, include: { mode: link, path: operations/toc_p.yaml } } diff --git a/ydb/docs/en/core/yql/reference/_includes/index/intro.md b/ydb/docs/en/core/yql/reference/_includes/index/intro.md index a1cd5dcef1..6a83b74293 100644 --- a/ydb/docs/en/core/yql/reference/_includes/index/intro.md +++ b/ydb/docs/en/core/yql/reference/_includes/index/intro.md @@ -1,3 +1,12 @@ +--- +title: What is YQL? YDB Query Language overview +description: YQL (YDB Query Language) is a universal declarative query language for data storage and processing systems, a dialect of SQL. You can get started with YQL in the web interface after you create a database. +keywords: + - yql + - what is yql + - YDB Query Language +--- + # YQL - Overview *YQL* (YDB Query Language) is a universal declarative query language for YDB, a dialect of SQL. YQL has been natively designed for large distributed databases, and therefore has a number of differences from the SQL standard. 
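As a quick illustration, a YQL query can be sent through the {{ ydb-short-name }} CLI. A minimal sketch, assuming a configured `quickstart` profile (the profile name is a placeholder):

```bash
# Run a one-line YQL script and print its result.
{{ ydb-cli }} -p quickstart yql -s 'SELECT 1 AS answer;'
```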
@@ -10,12 +19,8 @@ YDB tools support interfaces for sending YQL queries and receiving their executi - [YDB SDK](../../../../reference/ydb-sdk/index.md) This documentation section contains the YQL reference that includes the sections: - - [Data types](../../types/index.md) with a description of data types used in YQL - [Syntax](../../syntax/index.md) with a full list of YQL commands - [Built-in functions](../../builtins/index.md) with a description of the available built-in functions -You can also take a tutorial to get familiar with the basic YQL commands, in the section - -- [YQL tutorial](../../../tutorial/index.md) - +You can also take a tutorial to get familiar with the basic YQL commands, in the [YQL tutorial](../../../tutorial/index.md) section. diff --git a/ydb/docs/en/core/yql/toc_i.yaml b/ydb/docs/en/core/yql/toc_i.yaml index f6779a637f..77ef1da7ca 100644 --- a/ydb/docs/en/core/yql/toc_i.yaml +++ b/ydb/docs/en/core/yql/toc_i.yaml @@ -1,6 +1,8 @@ items: - name: Overview href: reference/index.md +- name: Getting started with YQL + href: ../getting_started/yql.md - include: { mode: link, path: reference/toc_i.yaml } - name: YQL tutorial include: { mode: link, path: tutorial/toc_i.yaml }
\ No newline at end of file diff --git a/ydb/docs/ru/core/reference/ydb-cli/export_import/_includes/file_structure.md b/ydb/docs/ru/core/reference/ydb-cli/export_import/_includes/file_structure.md index bf4d73d13b..edb569cc9b 100644 --- a/ydb/docs/ru/core/reference/ydb-cli/export_import/_includes/file_structure.md +++ b/ydb/docs/ru/core/reference/ydb-cli/export_import/_includes/file_structure.md @@ -1,4 +1,3 @@ - # Файловая структура выгрузки Описанная ниже файловая структура применяется для выгрузки как в файловую систему, так и в S3-совместимое объектное хранилище. При работе с S3 в ключ объекта записывается путь к файлу, а директория выгрузки является префиксом ключа. diff --git a/ydb/docs/ru/core/reference/ydb-cli/topic-consumer-offset-commit.md b/ydb/docs/ru/core/reference/ydb-cli/topic-consumer-offset-commit.md index 60a95d759e..f383cf84e7 100644 --- a/ydb/docs/ru/core/reference/ydb-cli/topic-consumer-offset-commit.md +++ b/ydb/docs/ru/core/reference/ydb-cli/topic-consumer-offset-commit.md @@ -1,6 +1,6 @@ # Сохранение позиции чтения -Каждый писатель топика обладает [позицией чтения](../../concepts/topic.md#consumer-offset) +Каждый писатель топика обладает [позицией чтения](../../concepts/topic.md#consumer-offset). С помощью команды `topic consumer offset commit` можно сохранить позицию чтения [добавленного ранее](topic-consumer-add.md) читателя. |