Fix multifaucet DB exec in CI by replacing docker_container_exec with… #736
vivekgsharma wants to merge 30 commits into v1.0-dev from
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the settings.
📝 Walkthrough
Replaced multifaucet docker_container_exec tasks with shell-based `docker exec` commands.

Changes
Sequence Diagram(s)sequenceDiagram
participant Workflow as GitHub Actions
participant Repo as dash-network-configs
participant Generator as config generator
participant Provisioner as ./bin/deploy (Terraform/Ansible)
participant Git as Git remote
Workflow->>Repo: clone dash-network-configs
Workflow->>Repo: check for `NETWORK_NAME` files (*.yml, *.tfvars, *.inventory)
alt all files present
Workflow->>Workflow: set resume_mode = true\ncopy existing files into networks/
Workflow->>Provisioner: run ./bin/deploy -p -f (provisioning-only)
else none present
Workflow->>Generator: generate configs
Generator->>Workflow: produced files (yml, tfvars, inventory)
Workflow->>Provisioner: run ./bin/deploy -f (full deploy)
else some present
Repo-->>Workflow: fail workflow (partial config detected)
end
Workflow->>Git: attempt push configs (use write-key if provided)
alt push succeeds
Git-->>Workflow: push OK
else push fails
Workflow-->>Workflow: emit warning / exit gracefully
end
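The three-way branch in the diagram (resume, full deploy, or fail on partial configs) can be sketched in shell. The file extensions and `resume_mode` flag come from the diagram; the `decide_mode` function name and fixed count of three files are assumptions for illustration, not the actual workflow code:

```shell
#!/bin/sh
# decide_mode <config_dir> <network_name>
# Prints resume_mode=true/false, or fails when only some config files exist.
decide_mode() {
  dir=$1
  name=$2
  found=0
  for ext in yml tfvars inventory; do
    if [ -f "$dir/$name.$ext" ]; then
      found=$((found + 1))
    fi
  done
  if [ "$found" -eq 3 ]; then
    echo "resume_mode=true"     # all files present: provisioning-only ./bin/deploy -p -f
  elif [ "$found" -eq 0 ]; then
    echo "resume_mode=false"    # none present: generate configs, run full ./bin/deploy -f
  else
    echo "partial config detected ($found of 3 files)" >&2
    return 1                    # some present: fail the workflow
  fi
}
```

The key property is that a partial set of configs is treated as an error rather than silently falling back to a fresh deploy, which matches the "fail workflow (partial config detected)" arm of the diagram.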
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Actionable comments posted: 2
🧹 Nitpick comments (1)
ansible/roles/multifaucet/tasks/main.yml (1)
47-50: Consider adding `changed_when: false` for idempotent SQL tasks.
The `CREATE TABLE IF NOT EXISTS` SQL is idempotent, but without `changed_when: false`, Ansible will report the task as "changed" on every run. This applies to the subsequent `INSERT IGNORE` tasks as well.

♻️ Proposed fix
  - name: Create promo codes table
    ansible.builtin.shell: |
      docker exec multifaucet-db-1 mysql -h localhost -D multifaucet -u multifaucet --password="multifaucet" \
        -e 'CREATE TABLE IF NOT EXISTS faucet_promo_codes (id int NOT NULL, code varchar(255), minimum_payout int, maximum_payout int, PRIMARY KEY (id));'
+   changed_when: false

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ansible/roles/multifaucet/tasks/main.yml` around lines 47 - 50, The SQL tasks (e.g., the "Create promo codes table" task that runs the 'CREATE TABLE IF NOT EXISTS faucet_promo_codes' command and the follow-up tasks that run 'INSERT IGNORE') are idempotent but will always report changed; update these tasks to include changed_when: false so Ansible does not mark them as changed on every run. Locate the task named "Create promo codes table" and other tasks executing INSERT IGNORE against faucet_promo_codes and add changed_when: false to each task definition.
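Put together, the resolved task might read as follows. The container name, credentials, and SQL are taken from the review diff; `changed_when: false` is the only addition, and this is a sketch rather than the merged code:

```yaml
- name: Create promo codes table
  ansible.builtin.shell: |
    docker exec multifaucet-db-1 mysql -h localhost -D multifaucet -u multifaucet --password="multifaucet" \
      -e 'CREATE TABLE IF NOT EXISTS faucet_promo_codes (id int NOT NULL, code varchar(255), minimum_payout int, maximum_payout int, PRIMARY KEY (id));'
  changed_when: false
```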
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@ansible/roles/multifaucet/tasks/main.yml`:
- Around line 52-55: The "Load MN promo code" ansible task's single-line SQL
exceeds the 160-char limit; split the SQL string across multiple lines inside
the ansible.builtin.shell block so no line is longer than 160 chars. Edit the
task named "Load MN promo code" and replace the long -e 'INSERT ...' single line
with a multiline SQL (preserve the docker exec ... -e '<SQL>' structure), e.g.
put the INSERT clause, the VALUES clause, and each value on separate lines while
keeping the masternode_collaterals.mn and {{ masternode_collaterals.mn | int +
10 }} variables intact and correctly quoted/templated so YAML indentation and
shell quoting remain valid.
- Around line 57-61: The shell task "Load HPMN promo code" contains a single
long -e SQL string that exceeds the 160-char limit; split the SQL argument
across multiple lines inside the existing ansible.builtin.shell block so no
single source line is >160 chars, keeping the docker exec invocation and Jinja
vars (masternode_collaterals.hpmn) intact and preserving proper quoting/escaping
for the -e argument and the INSERT IGNORE statement.
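One way to satisfy the 160-character limit the two comments above describe is to let the SQL statement span lines inside the single-quoted `-e` argument, since mysql accepts multi-line statements and the shell preserves newlines inside quotes. This keeps the `docker exec ... -e '<SQL>'` structure intact. The promo-code id and code string below are illustrative placeholders, not values from the PR:

```yaml
- name: Load MN promo code
  ansible.builtin.shell: |
    docker exec multifaucet-db-1 mysql -h localhost -D multifaucet -u multifaucet --password="multifaucet" \
      -e 'INSERT IGNORE INTO faucet_promo_codes
            (id, code, minimum_payout, maximum_payout)
          VALUES
            (1, "mn", {{ masternode_collaterals.mn }}, {{ masternode_collaterals.mn | int + 10 }});'
```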
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a4aca3fd-09f1-4b2b-8737-be9d111ea648
📒 Files selected for processing (1)
ansible/roles/multifaucet/tasks/main.yml
Actionable comments posted: 1
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 6a4ed640-6b28-437b-ab9e-b70dc2037f76
📒 Files selected for processing (1)
ansible/roles/mn_init/tasks/main.yml
- name: Build masternode names list from mnlist
  ansible.builtin.set_fact:
    mn_names: >-
      {{
        (mnlist | dict2items | map(attribute='key') | list)
        if (mnlist is mapping)
        else []
      }}
Fail fast on invalid mnlist types instead of silently no-oping.
Line 48 defaults to [] for a non-mapping mnlist, which can make this role pass while doing nothing if the input shape is wrong. Since mnlist is expected to be a mapping (see ansible/deploy.yml), this should be treated as a configuration error.
Proposed fix

+ - name: Validate mnlist input type
+   ansible.builtin.assert:
+     that:
+       - mnlist is mapping
+     fail_msg: "mnlist must be a mapping of masternode_name -> config object."
+   when: mnlist is defined and (mnlist | length > 0)
  - name: Build masternode names list from mnlist
    ansible.builtin.set_fact:
-     mn_names: >-
-       {{
-         (mnlist | dict2items | map(attribute='key') | list)
-         if (mnlist is mapping)
-         else []
-       }}
+     mn_names: "{{ (mnlist | default({}) | dict2items | map(attribute='key') | list) }}"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- name: Validate mnlist input type
  ansible.builtin.assert:
    that:
      - mnlist is mapping
    fail_msg: "mnlist must be a mapping of masternode_name -> config object."
  when: mnlist is defined and (mnlist | length > 0)
- name: Build masternode names list from mnlist
  ansible.builtin.set_fact:
    mn_names: "{{ (mnlist | default({}) | dict2items | map(attribute='key') | list) }}"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@ansible/roles/mn_init/tasks/main.yml` around lines 42 - 50, The task "Build
masternode names list from mnlist" currently sets mn_names to [] when mnlist is
not a mapping; change it to validate mnlist and fail fast: if mnlist is a
mapping keep the existing expression to build mn_names (using mnlist |
dict2items | map(attribute='key') | list), otherwise call ansible.builtin.fail
with a clear error message indicating mnlist must be a mapping so the playbook
errors on bad input instead of silently producing an empty mn_names.
Actionable comments posted: 1
🧹 Nitpick comments (1)
.github/workflows/create-devnet.yml (1)
328-331: Consider restricting config push to successful deployments.
With `if: always()`, this step runs even when the deploy step fails. Combined with the soft-fail `exit 0` on push errors, partially-created or corrupted configs from a failed deployment could be committed to the config repo. While lines 348-351 handle the "no files" case, config files may exist but be in an invalid state if the deployment failed mid-way (e.g., terraform applied but ansible failed).
Option 1: Only push on success (safest)
  - name: Push configs to dash-network-configs
-   if: always()
+   if: success()
    env:

Option 2: Add explicit condition excluding resume mode failures

  - name: Push configs to dash-network-configs
-   if: always()
+   if: success() || (failure() && steps.existing_configs.outputs.resume_mode != 'true')
    env:

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/create-devnet.yml around lines 328 - 331, The workflow step "Push configs to dash-network-configs" currently uses "if: always()", which lets it run even when deployment failed; change its condition to run only after a successful deploy by replacing "if: always()" with a success predicate such as "if: ${{ success() }}" or, if your deploy job/step is named (e.g. steps.deploy), use "if: ${{ steps.deploy.outcome == 'success' }}" to explicitly require the deploy succeeded (or add additional checks to exclude resume-mode failures) so configs are pushed only on successful deployments.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e37ba8ff-ba8f-4113-af18-6ebb88a93c36
📒 Files selected for processing (2)
.github/workflows/create-devnet.yml
ansible/roles/mn_protx_config/tasks/main.yml
if [[ -z "$CURRENT_MAIN_DOMAIN" ]]; then
  echo "Setting main_domain to $DEFAULT_MAIN_DOMAIN..."
  if grep -q '^main_domain\s*=' "$TFVARS_FILE"; then
    sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
  else
    echo "main_domain = \"$DEFAULT_MAIN_DOMAIN\"" >> "$TFVARS_FILE"
  fi
fi
Regex character class mismatch may cause incorrect tfvars updates.
Line 282 uses grep -oP (Perl regex) where \s* matches whitespace. However, lines 287-288 use basic grep -q and sed without -E/-P flags, where \s is treated literally, not as a whitespace class.
If the tfvars file contains main_domain = "" (with spaces around =), line 282 will detect it as empty, but line 287's grep won't match ^main_domain\s*=, causing line 290 to append a duplicate main_domain key instead of updating the existing one.
Proposed fix: use consistent regex syntax
-   if grep -q '^main_domain\s*=' "$TFVARS_FILE"; then
-     sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
+   if grep -qE '^main_domain[[:space:]]*=' "$TFVARS_FILE"; then
+     sed -i -E "s|^main_domain[[:space:]]*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"

📝 Committable suggestion
if [[ -z "$CURRENT_MAIN_DOMAIN" ]]; then
  echo "Setting main_domain to $DEFAULT_MAIN_DOMAIN..."
  if grep -qE '^main_domain[[:space:]]*=' "$TFVARS_FILE"; then
    sed -i -E "s|^main_domain[[:space:]]*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
  else
    echo "main_domain = \"$DEFAULT_MAIN_DOMAIN\"" >> "$TFVARS_FILE"
  fi
fi
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/create-devnet.yml around lines 285 - 292, The grep/sed
regexes for updating TFVARS_FILE are inconsistent: the earlier check uses
Perl-style \s while the later grep -q and sed use basic regex where \s is
literal, causing misses and duplicate keys (symbols: CURRENT_MAIN_DOMAIN,
DEFAULT_MAIN_DOMAIN, TFVARS_FILE). Fix by using a consistent POSIX regex (e.g.,
replace \s* with [[:space:]]* in both the grep and sed expressions) or enable
extended/perl regex (use grep -E/-P and sed -E) so the existence check and the
sed substitution both match lines like `main_domain = ""` and update instead of
appending. Ensure the same pattern is used in the grep -q
'^main_domain[[:space:]]*=' and sed substitution for ^main_domain[[:space:]]*=
so duplicates are avoided.
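As background for the regex comments above: `\s` in grep and sed is a GNU extension rather than part of POSIX regular expressions, so whether it matches whitespace depends on the implementation, while the `[[:space:]]` bracket expression is defined by POSIX and behaves the same everywhere. A self-contained check of the portable pattern (the line content and replacement domain are placeholders):

```shell
# A tfvars-style line with spaces around '=' (hypothetical content).
line='main_domain = ""'

# The POSIX character class matches the key regardless of grep flavor.
printf '%s\n' "$line" | grep -qE '^main_domain[[:space:]]*=' && echo "matched"

# The same class drives the rewrite from the proposed fix.
printf '%s\n' "$line" |
  sed -E 's|^main_domain[[:space:]]*=.*|main_domain = "devnet.example.net"|'
```

Using the same `[[:space:]]*` pattern in both the `grep -qE` existence check and the `sed -E` substitution guarantees the two can never disagree, which is what prevents the duplicate-key append.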
♻️ Duplicate comments (1)
.github/workflows/create-devnet.yml (1)
292-296: ⚠️ Potential issue | 🟡 Minor
Regex character class inconsistency may cause duplicate `main_domain` entries.
Line 287 uses Perl regex (`-oP`) where `\s*` correctly matches whitespace, but lines 292-293 use basic grep/sed where `\s` is treated as a literal backslash-s character. If the tfvars file contains `main_domain = ""` with spaces around `=`, the detection will find it empty but the update pattern won't match, causing a duplicate key to be appended.

🔧 Proposed fix using POSIX character classes

-   if grep -q '^main_domain\s*=' "$TFVARS_FILE"; then
-     sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"
+   if grep -qE '^main_domain[[:space:]]*=' "$TFVARS_FILE"; then
+     sed -i -E "s|^main_domain[[:space:]]*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" "$TFVARS_FILE"

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/create-devnet.yml around lines 292 - 296, The grep/sed block that updates TFVARS_FILE uses grep -q '^main_domain\s*=' and sed -i "s|^main_domain\\s*=.*|main_domain = \"$DEFAULT_MAIN_DOMAIN\"|" which mixes Perl-style \s with basic grep/sed, causing the sed match to fail and append a duplicate; update the sed (and grep if desired) to use POSIX whitespace classes (e.g. [[:space:]]*) so both detection and replacement consistently match variants like `main_domain = ""` — modify the grep -q and the sed -i patterns to use [[:space:]]* (or switch both to a tool that supports \s) while keeping TFVARS_FILE and DEFAULT_MAIN_DOMAIN unchanged.
🧹 Nitpick comments (3)
ansible/roles/mn_find_collateral/tasks/main.yml (1)
32-37: Consider cleaning up the informal comment.
The comment "Wow does this work?" on line 32 reads as leftover debug/discovery commentary. If this dynamic fact-setting pattern is now established and tested, consider replacing it with a clarifying comment or removing it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@ansible/roles/mn_find_collateral/tasks/main.yml` around lines 32 - 37, The informal comment "Wow does this work?" above the "Set global masternode facts" task is leftover debugging chatter; remove or replace it with a concise, clarifying comment explaining the purpose of the task (e.g., "Export per-masternode collateral facts for later use") so its intent is clear; keep the dynamic fact names (collateral_ok_{{masternode_name | replace("-","_")}}, collateral_txid_{{masternode_name | replace("-","_")}}, collateral_vout_{{masternode_name | replace("-","_")}}) unchanged.

.github/workflows/create-devnet.yml (2)
186: Consider adding error handling for the git clone operation.
If the clone fails (network issues, authentication problems, repo unavailable), the subsequent file checks will fail with confusing error messages. Adding explicit error handling would make troubleshooting easier.
💡 Proposed improvement
- git clone git@github.com:dashpay/dash-network-configs.git /tmp/dash-network-configs-source
+ if ! git clone git@github.com:dashpay/dash-network-configs.git /tmp/dash-network-configs-source; then
+   echo "::error::Failed to clone dash-network-configs repository"
+   exit 1
+ fi

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/create-devnet.yml at line 186, Add explicit error handling around the git clone command so failures produce a clear, actionable message instead of causing downstream file-check errors; specifically, after the git clone git@github.com:dashpay/dash-network-configs.git /tmp/dash-network-configs-source step detect non-zero exit status (or check for the existence of /tmp/dash-network-configs-source/.git) and fail the job with an informative error message indicating clone failure and likely causes (network/auth/repo unavailable), and optionally retry or print git error output to aid debugging.
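If the optional retry mentioned in the prompt is wanted, a small generic wrapper keeps the step readable. This is a sketch: the `retry` name, three-attempt count, and one-second delay are arbitrary choices, shown with the unchanged clone command from the workflow:

```shell
# retry <max_attempts> <command...>: rerun a command until it succeeds,
# failing loudly instead of letting later steps hit confusing errors.
retry() {
  max=$1
  shift
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max" ]; then
      echo "::error::command failed after $max attempts: $*" >&2
      return 1
    fi
    echo "attempt $attempt failed; retrying..." >&2
    attempt=$((attempt + 1))
    sleep 1
  done
}

# Usage in the workflow step (repo URL and destination unchanged):
# retry 3 git clone git@github.com:dashpay/dash-network-configs.git /tmp/dash-network-configs-source
```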
369-377: Silent push failure may go unnoticed in workflow results.
Both push attempts use `exit 0` on failure, meaning the workflow will report success even if configs weren't pushed. While the `::warning::` annotations help, they're easy to miss in the Actions UI. Consider whether a failed push should be more visible or require explicit acknowledgment.
If the current behavior is intentional (configs push is best-effort), this is fine. Otherwise, you might want to set an output variable or use `exit 1` to make failures more visible.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In @.github/workflows/create-devnet.yml around lines 369 - 377, The workflow currently swallows git push failures by using "exit 0" in both branches after GIT_SSH_COMMAND='ssh -i ~/.ssh/id_ed25519_write -o StrictHostKeyChecking=no' git push and the fallback git push; update this so failures are surfaced: either change the "exit 0" to "exit 1" to fail the job on push error, or instead set a workflow output/step variable (e.g., set-output or echo "::set-output name=config_push_failed::true") and add a later step that requires explicit acknowledgment; modify the two push error handlers accordingly so the chosen behavior is applied to both the GIT_SSH_COMMAND push and the fallback git push.
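On the output-variable option: note that the `::set-output` workflow command mentioned in the prompt has been deprecated by GitHub Actions; the current mechanism is appending `name=value` lines to the file named by `$GITHUB_OUTPUT`. A sketch of recording the failure instead of swallowing it (`config_push_failed` comes from the review comment; `false` stands in for a failing `git push`):

```shell
# Outside of Actions, point GITHUB_OUTPUT at a scratch file so the sketch runs anywhere.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"

if ! false; then  # 'false' simulates the push failing
  echo "::warning::Failed to push configs to dash-network-configs" >&2
  echo "config_push_failed=true" >> "$GITHUB_OUTPUT"
fi

# A later step could then gate on: steps.<step_id>.outputs.config_push_failed
cat "$GITHUB_OUTPUT"
```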
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9ce3ecf2-28b9-4eeb-9d94-8a4fbc2b707d
📒 Files selected for processing (2)
.github/workflows/create-devnet.yml
ansible/roles/mn_find_collateral/tasks/main.yml
Force-pushed from 90d0922 to bf599ed
Testing devnet creation automation using CI
Summary by CodeRabbit
Refactor
Bug Fixes
Chores