[New Integration] Kafka OpenTelemetry Input Package #18344
Conversation
Vale Linting Results

Summary: 1 warning, 1 suggestion found

⚠️ Warnings (1)
| File | Line | Rule | Message |
|---|---|---|---|
| packages/kafka_input_otel/docs/README.md | 36 | Elastic.Latinisms | Latin terms and abbreviations are a common source of confusion. Use 'for example' instead of 'e.g'. |
💡 Suggestions (1)
| File | Line | Rule | Message |
|---|---|---|---|
| packages/kafka_input_otel/docs/README.md | 21 | Elastic.Wordiness | Consider using 'because' instead of 'since'. |
The Vale linter checks documentation changes against the Elastic Docs style guide.
To use Vale locally or report issues, refer to Elastic style guide for Vale.
@@ -0,0 +1,84 @@
receivers:
🟡 Medium input/input.yml.hbs:1
When none of logs_topic, metrics_topic, or traces_topic are configured, the template produces an invalid OpenTelemetry Collector configuration with empty receivers: and pipelines: sections. The Collector fails at startup because both sections require at least one entry. Consider adding a top-level guard (e.g., wrap everything in {{#if}} checking at least one topic) or making one topic required in the manifest.
…umer group vars

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Force-pushed from 73491dc to 908a051
statements:
  - set(attributes["data_stream.type"], "metrics")
  - set(attributes["data_stream.dataset"], "kafkareceiver")
  - set(attributes["data_stream.namespace"], "ep")
🟢 Low policy/test-default.expected:48
The transform/componentid-1 processor at lines 36-48 configures log_statements and metric_statements to set data_stream attributes, but is missing trace_statements. The traces pipeline at lines 129-136 uses this same transform processor, so traces pass through without having their data_stream.type, data_stream.dataset, and data_stream.namespace attributes set. This causes traces to be indexed without the required routing attributes, potentially resulting in documents not reaching the traces-*-* data stream.
- - set(attributes["data_stream.namespace"], "ep")
+ - set(attributes["data_stream.namespace"], "ep")
+ trace_statements:
+ - context: span
+ statements:
+ - set(attributes["data_stream.type"], "traces")
+ - set(attributes["data_stream.dataset"], "kafkareceiver")
+ - set(attributes["data_stream.namespace"], "ep")
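Putting the suggestion together, the full processor would look roughly like this. The `log` and `datapoint` contexts are assumptions, since only the metric `statements` block and the proposed `trace_statements` appear in the diff; `span` comes from the suggestion itself:

```yaml
processors:
  transform/componentid-1:
    log_statements:
      - context: log          # assumed context
        statements:
          - set(attributes["data_stream.type"], "logs")
          - set(attributes["data_stream.dataset"], "kafkareceiver")
          - set(attributes["data_stream.namespace"], "ep")
    metric_statements:
      - context: datapoint    # assumed context
        statements:
          - set(attributes["data_stream.type"], "metrics")
          - set(attributes["data_stream.dataset"], "kafkareceiver")
          - set(attributes["data_stream.namespace"], "ep")
    trace_statements:
      - context: span
        statements:
          - set(attributes["data_stream.type"], "traces")
          - set(attributes["data_stream.dataset"], "kafkareceiver")
          - set(attributes["data_stream.namespace"], "ep")
```

With all three statement groups present, every signal routed through this shared processor carries the `data_stream.*` attributes the exporter needs for data stream routing.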
💔 Build Failed
Failed CI Steps
cc @zmoog
Summary

This is a POC to explore using the OTel Kafka receiver as a transport layer for ingesting telemetry from Azure Event Hub.

- New input package (`kafka_input_otel`) that uses the `kafkareceiver` to consume OTLP-encoded logs, metrics, and traces from Kafka-compatible brokers, including Azure Event Hub
- `dynamic_signal_types: true`
- `otlp_json`/`otlp_proto` encodings only (EDOT kafkareceiver constraint)
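For context, a minimal sketch of a `kafkareceiver` configuration pointed at an Azure Event Hub Kafka endpoint. The broker address, topic name, and connection string are placeholders, and exact field support depends on the EDOT collector build:

```yaml
receivers:
  kafka:
    brokers: ["<namespace>.servicebus.windows.net:9093"]  # placeholder namespace
    topic: logs-topic          # placeholder Event Hub name
    encoding: otlp_proto       # one of the two supported encodings
    group_id: $Default
    auth:
      sasl:
        mechanism: PLAIN
        username: "$ConnectionString"        # Event Hubs Kafka convention
        password: "<event-hub-connection-string>"
      tls: {}
```

Event Hubs exposes a Kafka-compatible endpoint on port 9093 with SASL PLAIN auth, which is why a plain Kafka receiver can consume from it.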
Key design decisions

- Consumer group defaults to `$Default`, with optional per-signal overrides.

Known limitations

- Only the `otlp_json` and `otlp_proto` encodings are supported (EDOT restriction)

Test plan

- `elastic-package lint` passes
- `elastic-package build` succeeds

🤖 Generated with Claude Code