OpenTelemetry Deployment and Integration Guide #
This guide provides detailed instructions for deploying OpenTelemetry in your Kubernetes cluster to instrument applications and forward telemetry data to StackBooster. The setup includes:
- Configuring the OpenTelemetry Collector
- Creating an Instrumentation resource to enable auto-instrumentation
- Configuring Prometheus integration
- Instrumenting various application types
Prerequisites #
- kubectl configured to access your cluster
- The OpenTelemetry Operator installed in your cluster (it provides the OpenTelemetryCollector and Instrumentation custom resources used below)
- Prometheus already configured and sending metrics to StackBooster servers
1. Deploy OpenTelemetry Collector and Instrumentation #
Create a file named otel-config.yaml with the following content. In the Instrumentation resource near the end, replace <<namespace name>> with the namespace of the applications you want to instrument:
apiVersion: opentelemetry.io/v1beta1
kind: OpenTelemetryCollector
metadata:
  name: stackbooster-otel
  namespace: stackbooster
spec:
  mode: deployment
  resources:
    limits:
      cpu: 1
      memory: 2Gi
    requests:
      cpu: 200m
      memory: 400Mi
  podAnnotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8889"
    prometheus.io/path: "/metrics"
  config:
    receivers:
      otlp:
        protocols:
          grpc:
            endpoint: 0.0.0.0:4317
            max_recv_msg_size_mib: 24
          http:
            endpoint: 0.0.0.0:4318
            max_request_body_size: 25165824
      jaeger:
        protocols:
          grpc:
            endpoint: 0.0.0.0:14250
          thrift_http:
            endpoint: 0.0.0.0:14268
    processors:
      batch:
        send_batch_size: 10000
        timeout: 10s
      memory_limiter:
        check_interval: 1s
        limit_percentage: 80
        spike_limit_percentage: 25
    exporters:
      debug:
        verbosity: detailed
      prometheus:
        endpoint: 0.0.0.0:8889
        namespace: stackbooster
    connectors:
      spanmetrics:
        histogram:
          explicit:
            buckets: [1, 2, 5, 10, 25, 50, 75, 100, 250, 500, 750, 1000]
        # ADD YOUR CUSTOM DIMENSIONS HERE
        dimensions:
          - name: operation
          - name: k8s.namespace.name
          - name: k8s.pod.name
          - name: k8s.service.name
        dimensions_cache_size: 1000
        metrics_flush_interval: 15s
    service:
      pipelines:
        traces:
          receivers: [otlp, jaeger]
          processors: [memory_limiter, batch]
          exporters: [debug, spanmetrics]
        metrics/spanmetrics:
          receivers: [spanmetrics]
          processors: [memory_limiter, batch]
          exporters: [prometheus]
        metrics:
          receivers: [otlp]
          processors: [memory_limiter, batch]
          exporters: [debug, prometheus]
      telemetry:
        metrics:
          readers:
            - pull:
                exporter:
                  prometheus:
                    host: 0.0.0.0
                    port: 8888
---
apiVersion: v1
kind: Service
metadata:
  name: otel-unified-collector
  namespace: stackbooster
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8889"
    prometheus.io/path: "/metrics"
spec:
  selector:
    app.kubernetes.io/component: opentelemetry-collector
  ports:
    - name: otlp-http
      port: 4318
      targetPort: 4318
    - name: otlp-grpc
      port: 4317
      targetPort: 4317
    - name: prometheus-metrics
      port: 8889
      targetPort: 8889
---
apiVersion: opentelemetry.io/v1alpha1
kind: Instrumentation
metadata:
  name: default-instrumentation
  namespace: <<namespace name>>
spec:
  propagators:
    - tracecontext
    - baggage
    - b3
  sampler:
    type: parentbased_traceidratio
    argument: "1"
  java:
    env:
      # The fieldRef variables are defined before OTEL_RESOURCE_ATTRIBUTES,
      # because $(VAR) expansion only resolves previously defined variables.
      - name: K8S_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: K8S_NAMESPACE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: OTEL_SERVICE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app.kubernetes.io/name']
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "grpc"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://otel-unified-collector.stackbooster:4317"
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "k8s.namespace.name=$(K8S_NAMESPACE_NAME),k8s.pod.name=$(K8S_POD_NAME),k8s.service.name=$(OTEL_SERVICE_NAME)"
  nodejs:
    env:
      - name: K8S_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: K8S_NAMESPACE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: OTEL_SERVICE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app.kubernetes.io/name']
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "http/protobuf"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://otel-unified-collector.stackbooster:4318"
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "k8s.namespace.name=$(K8S_NAMESPACE_NAME),k8s.pod.name=$(K8S_POD_NAME),k8s.service.name=$(OTEL_SERVICE_NAME)"
  python:
    env:
      - name: K8S_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: K8S_NAMESPACE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: OTEL_SERVICE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app.kubernetes.io/name']
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "http/protobuf"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://otel-unified-collector.stackbooster:4318"
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "k8s.namespace.name=$(K8S_NAMESPACE_NAME),k8s.pod.name=$(K8S_POD_NAME),k8s.service.name=$(OTEL_SERVICE_NAME)"
  dotnet:
    env:
      - name: K8S_POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: K8S_NAMESPACE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: OTEL_SERVICE_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.labels['app.kubernetes.io/name']
      - name: OTEL_EXPORTER_OTLP_PROTOCOL
        value: "grpc"
      - name: OTEL_EXPORTER_OTLP_ENDPOINT
        value: "http://otel-unified-collector.stackbooster:4317"
      - name: OTEL_RESOURCE_ATTRIBUTES
        value: "k8s.namespace.name=$(K8S_NAMESPACE_NAME),k8s.pod.name=$(K8S_POD_NAME),k8s.service.name=$(OTEL_SERVICE_NAME)"
Apply the configuration:
kubectl apply -f otel-config.yaml
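Note that the Instrumentation above samples every trace (argument: "1" in the parentbased_traceidratio sampler). In higher-traffic clusters you may want a lower ratio; a sketch of the same sampler keeping roughly a quarter of new traces (the exact ratio is your choice):

```yaml
sampler:
  type: parentbased_traceidratio
  argument: "0.25"  # sample ~25% of new root traces; child spans follow the parent's decision
```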
2. Instrumenting Applications #
Node.js Applications #
Add the following annotation to your Pod or Deployment specification:
annotations:
  instrumentation.opentelemetry.io/inject-nodejs: "true"
Example Deployment snippet:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodejs-app
  # ...
spec:
  template:
    metadata:
      annotations:
        instrumentation.opentelemetry.io/inject-nodejs: "true"
    # ...
Java Applications #
Add the following annotation to your Pod or Deployment specification:
annotations:
  instrumentation.opentelemetry.io/inject-java: "true"
Python Applications #
Add the following annotation to your Pod or Deployment specification:
annotations:
  instrumentation.opentelemetry.io/inject-python: "true"
.NET Applications #
Add the following annotation to your Pod or Deployment specification:
annotations:
  instrumentation.opentelemetry.io/inject-dotnet: "true"
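Besides "true" and "false", the inject annotations also accept the name of a specific Instrumentation resource, optionally qualified by namespace. This is useful when more than one Instrumentation exists in the cluster. A sketch, where my-apps is a hypothetical namespace holding the default-instrumentation resource from this guide:

```yaml
annotations:
  # use the Instrumentation named default-instrumentation from the my-apps namespace
  instrumentation.opentelemetry.io/inject-java: "my-apps/default-instrumentation"
```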
Go Applications #
Go auto-instrumentation requires elevated privileges; the required configuration will be provided separately.
3. Verification #
To verify that your applications are correctly instrumented and sending data to StackBooster:
- Check that the OpenTelemetry Collector is running:
kubectl get pods -n stackbooster
- Verify the collector logs to ensure data is being received:
kubectl logs -n stackbooster deployment/stackbooster-otel-collector
- Verify that your Prometheus is scraping the OpenTelemetry metrics.
- Check the StackBooster dashboard to confirm metrics are being received.
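If your Prometheus uses annotation-based discovery rather than the Prometheus Operator, the prometheus.io/* annotations on the collector Service are honored by a kubernetes_sd_configs job along these lines (a sketch; the job name and exact relabel rules depend on your existing Prometheus configuration):

```yaml
scrape_configs:
  - job_name: "kubernetes-service-endpoints"
    kubernetes_sd_configs:
      - role: endpoints
    relabel_configs:
      # keep only endpoints whose Service is annotated prometheus.io/scrape: "true"
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
      # honor the prometheus.io/port annotation (8889 on the collector Service)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
```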
Troubleshooting #
Common Issues #
1. Pods not being instrumented: #
- Verify that the annotation is correctly applied
- Check the OpenTelemetry operator logs
- Ensure your application matches the supported versions
2. Metrics not appearing in StackBooster: #
- Verify the Prometheus ServiceMonitor configuration
- Check that the OpenTelemetry Collector is properly configured
- Verify network connectivity to the StackBooster servers
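With the Prometheus Operator, a ServiceMonitor targeting the collector Service defined in section 1 could look like the following. This is a sketch: the release label must match your Prometheus serviceMonitorSelector, and the selector assumes the collector Service carries the app.kubernetes.io/component label.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: stackbooster-otel
  namespace: stackbooster
  labels:
    release: prometheus  # must match your Prometheus serviceMonitorSelector
spec:
  selector:
    matchLabels:
      app.kubernetes.io/component: opentelemetry-collector
  endpoints:
    - port: prometheus-metrics  # the 8889 port on the collector Service
      path: /metrics
      interval: 30s
```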
3. High resource consumption: #
- Adjust the resource limits in the OpenTelemetryCollector configuration
- Consider adjusting the batch processor settings
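Smaller, more frequent batches reduce the collector's peak memory usage at the cost of more export calls. A sketch of tighter batch processor settings (the values are illustrative, not tuned recommendations):

```yaml
processors:
  batch:
    send_batch_size: 2000      # flush after 2000 items instead of 10000
    send_batch_max_size: 3000  # hard cap on a single batch
    timeout: 5s                # or flush after 5 s, whichever comes first
```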
Support #
For additional assistance with StackBooster integration, please contact our support team at support@stackbooster.com.