Commit 076bed9

nhilth-vngcloudgitbook-bot authored and committed
GITBOOK-736: No subject
1 parent a7c4072 commit 076bed9

13 files changed

Lines changed: 45 additions & 94 deletions
(7 binary image files changed: 444 KB, 18.6 KB, 15.6 KB, 34 KB, 13.6 KB, 12.5 KB, 33.4 KB)

English/vdb/kafka-cluster-kds/README.md

Lines changed: 10 additions & 12 deletions

````diff
@@ -2,7 +2,7 @@
 
 Kafka Cluster DB is a new service on the vDB platform, providing a powerful and flexible Kafka server cluster to manage real-time event streaming. With Kafka Cluster DB, you can easily build large-scale data processing applications, messaging systems, and centralized logging with high scalability, data durability, and outstanding performance.
 
-<figure><img src="../../.gitbook/assets/image (2) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/Kafka-Cluster-Database.jpg" alt=""><figcaption></figcaption></figure>
 
 ## Features <a href="#tinh-nang-moi" id="tinh-nang-moi"></a>
 
@@ -32,14 +32,12 @@ Kafka Cluster DB is a new service on the vDB platform, providing a powerful and
 
 Comparison between Kafka Cluster DB Managed Service and Traditional Kafka Cluster (self-managed)
 
-| **Criteria** | **Kafka Cluster DB Managed Service** | **Traditional Kafka Cluster** |
-| --------------------------- | -------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
-| **Cluster management** | The service provider (GreenNode vDB) is responsible for managing, maintaining, upgrading and monitoring the Kafka cluster. | Users self-manage the entire Kafka cluster, including installation, configuration, maintenance, upgrades and monitoring. |
-| **Configuration and deployment** | Easy to configure and deploy through web interface or API. | Requires in-depth Kafka knowledge and system administration skills for installation and configuration. |
-| **Scaling** | Easy to scale by adding resources through the management interface. | Requires manual scaling process, which can be complex and time-consuming. |
-| **Monitoring and troubleshooting** | The service provider (GreenNode vDB) provides monitoring tools and troubleshooting support. | You self-monitor and troubleshoot, requiring expertise and experience. |
-| **Cost** | Typically lower long-term costs due to savings on infrastructure investment, operational costs and personnel costs. | Higher long-term costs due to the need to invest in infrastructure, operations and personnel. |
-| **Flexibility** | May be limited in customization and control compared to self-managed clusters. | Allows full customization and control of the Kafka cluster. |
-| **Suitable for** | Businesses that want to focus on application development and don't want to invest heavily in infrastructure management. | Businesses with strong technical teams that want full system control and can handle incidents themselves. |
-
-&#x20;
+| **Criteria**                       | **Kafka Cluster DB Managed Service**                                                                                         | **Traditional Kafka Cluster**                                                                                              |
+| ---------------------------------- | ---------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------- |
+| **Cluster management**             | The service provider (GreenNode vDB) is responsible for managing, maintaining, upgrading and monitoring the Kafka cluster.  | Users self-manage the entire Kafka cluster, including installation, configuration, maintenance, upgrades and monitoring.  |
+| **Configuration and deployment**   | Easy to configure and deploy through web interface or API.                                                                   | Requires in-depth Kafka knowledge and system administration skills for installation and configuration.                    |
+| **Scaling**                        | Easy to scale by adding resources through the management interface.                                                          | Requires manual scaling process, which can be complex and time-consuming.                                                 |
+| **Monitoring and troubleshooting** | The service provider (GreenNode vDB) provides monitoring tools and troubleshooting support.                                  | You self-monitor and troubleshoot, requiring expertise and experience.                                                    |
+| **Cost**                           | Typically lower long-term costs due to savings on infrastructure investment, operational costs and personnel costs.          | Higher long-term costs due to the need to invest in infrastructure, operations and personnel.                             |
+| **Flexibility**                    | May be limited in customization and control compared to self-managed clusters.                                               | Allows full customization and control of the Kafka cluster.                                                               |
+| **Suitable for**                   | Businesses that want to focus on application development and don't want to invest heavily in infrastructure management.      | Businesses with strong technical teams that want full system control and can handle incidents themselves.                 |
````

English/vdb/kafka-cluster-kds/bat-dau-voi-kafka-cluster.md

Lines changed: 25 additions & 21 deletions

````diff
@@ -11,29 +11,32 @@ Refer to the GreenNode login guide [here](../../identity-and-access-management-i
 
 ## Step 2: Create a Kafka Cluster
 
-1. Click the "Create Kafka Cluster" button.&#x20;
+1. Click the "Create Kafka Cluster" button.
+
+<figure><img src="../../.gitbook/assets/step2-create-kafka.png" alt=""><figcaption></figcaption></figure>
 
-<figure><img src="../../.gitbook/assets/image (744).png" alt="" width="228"><figcaption></figcaption></figure>
 2. Fill in the following information:
-* **Name:** Name your Kafka cluster for easy identification and management.
-* **Kafka Version:** Choose the Kafka version that suits your needs. Different versions may have different features and performance.
-* **Flavor (CPU, RAM):** Select the hardware configuration (CPU and RAM) for brokers in the Kafka cluster. This configuration affects the processing capability and performance of the cluster.
-* **Number of Brokers:** Determine the number of brokers in the Kafka cluster. The number of brokers affects the fault tolerance and scalability of the cluster.
-* **Storage per Broker (IOPS and Capacity):** Choose the storage capacity and IOPS (Input/Output Operations Per Second) for each broker. This affects the data storage capacity and read/write performance of the cluster.
-* **Network (VPC, Subnet):** Select the virtual network (VPC) and subnet where the Kafka cluster will be deployed. When initialized, the Kafka cluster will be in private mode (accessible only from within the VPC). After successful initialization, you can enable public access if needed.
-* **Access Method Control (mTLS, SASL):** Choose the authentication and authorization method for clients connecting to the Kafka cluster. mTLS uses client and server certificates, SASL uses username and password.
-* **Encryption Mode:** Choose the data encryption mode. By default, encryption in transit (data encrypted when transmitted between client and broker) and within cluster (data encrypted when stored on brokers) are enabled.
-* **Config Group:** Select the configuration group to apply to the Kafka cluster. The configuration group contains detailed settings for Kafka operations.
-3. Click the "Create" button to start the Kafka cluster creation process.&#x20;
-
-<figure><img src="../../.gitbook/assets/image (745).png" alt=""><figcaption></figcaption></figure>
+
+* **Name:** Name your Kafka cluster for easy identification and management.
+* **Kafka Version:** Choose the Kafka version that suits your needs. Different versions may have different features and performance.
+* **Flavor (CPU, RAM):** Select the hardware configuration (CPU and RAM) for brokers in the Kafka cluster. This configuration affects the processing capability and performance of the cluster.
+* **Number of Brokers:** Determine the number of brokers in the Kafka cluster. The number of brokers affects the fault tolerance and scalability of the cluster.
+* **Storage per Broker (IOPS and Capacity):** Choose the storage capacity and IOPS (Input/Output Operations Per Second) for each broker. This affects the data storage capacity and read/write performance of the cluster.
+* **Network (VPC, Subnet):** Select the virtual network (VPC) and subnet where the Kafka cluster will be deployed. When initialized, the Kafka cluster will be in private mode (accessible only from within the VPC). After successful initialization, you can enable public access if needed.
+* **Access Method Control (mTLS, SASL):** Choose the authentication and authorization method for clients connecting to the Kafka cluster. mTLS uses client and server certificates, SASL uses username and password.
+* **Encryption Mode:** Choose the data encryption mode. By default, encryption in transit (data encrypted when transmitted between client and broker) and within cluster (data encrypted when stored on brokers) are enabled.
+* **Config Group:** Select the configuration group to apply to the Kafka cluster. The configuration group contains detailed settings for Kafka operations.
+
+3. Click the "Create" button to start the Kafka cluster creation process.
+
+<figure><img src="../../.gitbook/assets/step2-after-click-create-kafka.png" alt=""><figcaption></figcaption></figure>
 
 ## Step 3: Create a Topic
 
 1. After the Kafka cluster is successfully created, access the management page of that Kafka cluster.
 2. Find and click the "Create Topic" section.
 
-<figure><img src="../../.gitbook/assets/image (748).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/step3-create-topic.png" alt=""><figcaption></figcaption></figure>
 
 3. Fill in the following information:
 
@@ -48,14 +51,14 @@ Refer to the GreenNode login guide [here](../../identity-and-access-management-i
 
 1. In the Kafka cluster management page, find and click the "Create Kafka User" section.
 
-<figure><img src="../../.gitbook/assets/image (749).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/step5-topic-user.png" alt=""><figcaption></figcaption></figure>
 
-2. Fill in the following information:
+1. Fill in the following information:
 
 * **Name:** Name the Kafka user.
 * **Permissions:** In the permission section, click "Add Permission" to select Produce (write data to topic) and Consume (read data from topic) permissions for each topic this user needs to access.
 
-<figure><img src="../../.gitbook/assets/image (750).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/step4-permission-button.png" alt=""><figcaption></figcaption></figure>
 
 * **Access Method:** Choose the access method for the Kafka user (mTLS or SASL) depending on the method enabled for the Kafka cluster.
 
@@ -89,9 +92,10 @@ wget https://archive.apache.org/dist/kafka/3.7.0/kafka_2.13-3.7.0.tgz
 tar -xzf kafka_2.13-3.7.0.tgz
 ```
 
-5. Next, download the TLS certificates to access the Kafka cluster.&#x20;
+5. Next, download the TLS certificates to access the Kafka cluster.
+
+<figure><img src="../../.gitbook/assets/step5-topic-user.png" alt=""><figcaption></figcaption></figure>
 
-<figure><img src="../../.gitbook/assets/image (746).png" alt=""><figcaption></figcaption></figure>
 6. On the newly initialized server, upload and extract the TLS certificates and unzip using the command below:
 
 Note: The User ID will be the directory name after extracting the downloaded certificate
@@ -119,7 +123,7 @@ You need to allow this server to access the Kafka cluster as a private client by
 
 Note: Port 9094 for mTLS and 9096 for SASL
 
-<figure><img src="../../.gitbook/assets/image (751).png" alt=""><figcaption></figcaption></figure>
+<figure><img src="../../.gitbook/assets/step5-port.png" alt=""><figcaption></figcaption></figure>
 
 8. Produce message to Kafka topic
 
````
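The client-side setup described in the guide above (certificates extracted into a directory named after the User ID, mTLS on port 9094 or SASL on port 9096) can be sketched as a standard Kafka client properties file. This is only a sketch: the directory name, keystore/truststore file names, and passwords below are hypothetical placeholders, not values from the guide — substitute whatever your downloaded certificate archive actually contains.

```shell
# Hypothetical values -- replace with the directory created when your
# certificate archive is extracted and the passwords it ships with.
CERT_DIR="./my-user-id"
mkdir -p "$CERT_DIR"

# Standard Kafka client settings for an mTLS (SSL) connection on port 9094.
# For SASL on port 9096 you would instead set security.protocol=SASL_SSL
# plus a username/password via sasl.jaas.config.
cat > client-ssl.properties <<EOF
security.protocol=SSL
ssl.truststore.location=${CERT_DIR}/truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=${CERT_DIR}/keystore.jks
ssl.keystore.password=changeit
EOF

wc -l < client-ssl.properties   # 5 settings written
```

With the Kafka 3.7.0 CLI extracted earlier, a test message could then be produced with something like `kafka_2.13-3.7.0/bin/kafka-console-producer.sh --bootstrap-server <cluster-endpoint>:9094 --topic <topic-name> --producer.config client-ssl.properties`, where the endpoint and topic come from your own cluster.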

English/vdb/opensearch-cluster-database-ods/bat-dau-voi-opensearch-cluster/day-du-lieu-hoac-logs-tu-server-vao-mot-opensearch-cluster-da-khoi-tao.md

Lines changed: 6 additions & 9 deletions

````diff
@@ -1,10 +1,8 @@
-# Push data or event logs from Logstash into an initialized OpenSearch Cluster
+# Push data or event logs from Logstash into an OpenSearch Cluster
 
 ## Prerequisites
 
-Suppose you have successfully initialized an OpenSearch Cluster with the following parameters:&#x20;
-
-<figure><img src="../../../.gitbook/assets/opensearch5.png" alt=""><figcaption></figcaption></figure>
+Suppose you have successfully initialized an OpenSearch Cluster with the following parameters:
 
 Next, proceed to push sample data into OpenSearch Dashboards or push event logs from Logstash into OpenSearch.
 
@@ -33,16 +31,15 @@ curl -H "Content-Type: application/json" -X PUT "https://<<OpenSearch_ReceiveLog
 
 You can get the `OpenSearch_ReceiveLogs_Endpoint` information from the vDB Portal and replace `<<Master_User_Password>>` with the master account password you previously created.
 
-Example:&#x20;
+Example:
 
 ```bash
 # 2. Create index and data.
 curl -H "Content-Type: application/json" -X PUT "https://open-search-dem-53461-5cfxl-hcm03.vdb-opensearch.vngcloud.vn:9200/ecommerce" -k -H "Authorization: Basic $(echo -n 'master-user:123456789aA@' | base64)" --data-binary "@ecommerce-field_mappings.json"
 curl -H "Content-Type: application/json" -X PUT "https://open-search-dem-53461-5cfxl-hcm03.vdb-opensearch.vngcloud.vn:9200/ecommerce/_bulk" -k -H "Authorization: Basic $(echo -n 'master-user:123456789aA@' | base64)" --data-binary "@ecommerce.ndjson"
 ```
 
-[\
-](https://liemnt5-cidr-11430-2ue3z-hcm03.vdb-opensearch.vngcloud.tech)The result will display as follows:&#x20;
+[<br>](https://liemnt5-cidr-11430-2ue3z-hcm03.vdb-opensearch.vngcloud.tech)The result will display as follows:
 
 ```bash
 curl -H "Content-Type: application/json" -X PUT "https://open-search-dem-53461-5cfxl-hcm03.vdb-opensearch.vngcloud.vn:9200/ecommerce" -k -H "Authorization: Basic $(echo -n 'master-user:123456789aA@' | base64)" --data-binary "@ecommerce-field_mappings.json"
@@ -56,7 +53,7 @@ curl -H "Content-Type: application/json" -X PUT "https://open-search-dem-53461-5
 **Step 3: Check data on OpenSearch Dashboards**
 
 1. Access and log in to **OpenSearch Dashboards**
-2. Go to **Management**, select **Dashboard Management**&#x20;
+2. Go to **Management**, select **Dashboard Management**
 
 <figure><img src="../../../.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
 
@@ -158,7 +155,7 @@ If logs appear, it means Logstash has successfully sent data to OpenSearch.
 #### **Step 5: View logs on OpenSearch Dashboards**
 
 1. Access and log in to **OpenSearch Dashboards**
-2. Go to **Management**, select **Dashboard Management**&#x20;
+2. Go to **Management**, select **Dashboard Management**
 
 <figure><img src="../../../.gitbook/assets/image (7) (1) (1) (1) (1) (1) (1) (1) (1) (1) (1).png" alt=""><figcaption></figcaption></figure>
 
````
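A brief aside on the `Authorization` header used throughout the curl commands in this guide: it is plain HTTP Basic auth, i.e. base64 of `user:password`. A minimal sketch using the sample `master-user:123456789aA@` credentials from the guide's own example (replace them with your real master account):

```shell
# Build the HTTP Basic auth value exactly as the guide's curl commands
# do inline with $(echo -n '<user>:<password>' | base64).
AUTH=$(echo -n 'master-user:123456789aA@' | base64)
echo "Authorization: Basic $AUTH"
# prints: Authorization: Basic bWFzdGVyLXVzZXI6MTIzNDU2Nzg5YUFA
```

Computing the value once and reusing `$AUTH` keeps the long command lines readable, e.g. a hypothetical check that the bulk load succeeded: `curl -k -H "Authorization: Basic $AUTH" "https://<<OpenSearch_ReceiveLogs_Endpoint>>:9200/ecommerce/_count"`.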
