Add alloy

2025-12-03 15:29:37 +01:00
parent 0912f7866e
commit 29b2adfb55
19 changed files with 473 additions and 0 deletions

grafana_alloy/Dockerfile Normal file

@@ -0,0 +1,34 @@
ARG BUILD_FROM
FROM $BUILD_FROM
ARG \
    BUILD_ARCH \
    BUILD_VERSION \
    GRAFANA_ALLOY_VERSION

LABEL \
    io.hass.version=${BUILD_VERSION} \
    io.hass.type="addon" \
    io.hass.arch="${BUILD_ARCH}"

RUN apt-get update && \
    apt-get install -y --no-install-recommends \
        unzip \
        gettext-base \
        curl && \
    rm -rf /var/lib/apt/lists/* && \
    apt clean && \
    ARCH="${BUILD_ARCH}" && \
    if [ "${BUILD_ARCH}" = "aarch64" ]; then ARCH="arm64"; fi && \
    curl -J -L -o /tmp/alloy.zip "https://github.com/grafana/alloy/releases/download/v${GRAFANA_ALLOY_VERSION}/alloy-linux-${ARCH}.zip" && \
    cd /tmp && \
    unzip alloy.zip && \
    mv alloy-linux-${ARCH} /usr/local/bin/alloy && \
    chmod +x /usr/local/bin/alloy && \
    rm -rf /tmp/alloy*
COPY rootfs /
RUN chmod +x /run.sh /etc/cont-init.d/alloy_setup.sh /etc/services.d/alloy/run
ENTRYPOINT []
CMD ["/run.sh"]

grafana_alloy/README.md Normal file

@@ -0,0 +1,84 @@
# Grafana Alloy
[Grafana Alloy](https://grafana.com/docs/alloy) combines the strengths of the leading collectors into one place. Whether observing applications, infrastructure, or both, Grafana Alloy can collect, process, and export telemetry signals to scale and future-proof your observability approach.
Currently, this add-on supports the following components (a sketch of the pipeline they form is shown below):
- [prometheus.scrape](https://grafana.com/docs/alloy/latest/reference/components/prometheus/prometheus.scrape/) - Scrapes metrics from the enabled exporters and forwards them to the configured Prometheus remote-write endpoint.
- [prometheus.exporter.unix](https://grafana.com/docs/alloy/latest/reference/components/prometheus/prometheus.exporter.unix/) - Uses the [node_exporter](https://github.com/prometheus/node_exporter) to expose Home Assistant hardware and OS metrics for \*nix-based systems.
- [prometheus.exporter.process](https://grafana.com/docs/alloy/latest/reference/components/prometheus/prometheus.exporter.process/) - Enables [process_exporter](https://github.com/ncabatoff/process-exporter) to collect Home Assistant process stats from /proc.
- [loki.write](https://grafana.com/docs/alloy/latest/reference/components/loki/loki.write/) - Sends logs to a Loki instance.
- [loki.source.journal](https://grafana.com/docs/alloy/latest/reference/components/loki/loki.source.journal/) - Collects Home Assistant journal logs and forwards them to Loki.
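
For orientation, the snippet below is a minimal sketch of the kind of pipeline the add-on wires together from these components. It is illustrative only; the real configuration is generated at start-up from the add-on options, and the Prometheus and Loki URLs shown are just the default placeholder endpoints from the add-on configuration.

```alloy
// Illustrative sketch only; the add-on generates the real config from its
// options, and the endpoint URLs below are the default placeholders.

// Collect node_exporter-style host metrics.
prometheus.exporter.unix "node_exporter" { }

// Scrape the exporter and hand the samples to the remote-write component.
prometheus.scrape "unix" {
  targets         = prometheus.exporter.unix.node_exporter.targets
  forward_to      = [prometheus.remote_write.default.receiver]
  scrape_interval = "15s"
}

// Push scraped samples to a Prometheus remote-write endpoint.
prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}

// Tail the systemd journal and ship entries to Loki.
loki.source.journal "read" {
  forward_to = [loki.write.endpoint.receiver]
  path       = "/var/log/journal"
}

loki.write "endpoint" {
  endpoint {
    url = "http://loki:3100/api/v1/push"
  }
}
```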
## Installation
1. Add [repository](https://github.com/wymangr/hassos-addons) to Home Assistant.
1. Search for "Grafana Alloy" in the Home Assistant add-on store and install it.
1. Disable "Protection mode" in the add-on panel. (Optional, [see below for more details](#protection-mode))
1. Update configuration on the add-on "Configuration" Tab. See options below.
1. Start the add-on.
1. Check the `Logs` to confirm the add-on started successfully.
1. You can also open the Grafana Alloy web UI at `http://<homeassistant_ip>:12345` in your browser.
## Protection Mode
Disabling protection mode is optional; however, there are a few things I found that don't work without disabling it. Most of the limitations are around host processes. Per the Home Assistant docs: _"Allow the container to run on the host PID namespace. Works only for not protected add-ons."_
Note: these are just the limitations I found; there may be other incorrect or missing metrics.
**Only disable protection mode if you know what you are doing, need it, AND trust the source of this add-on.** Always review the code of an add-on before disabling protection mode.
### Limitations:
**prometheus.exporter.process**
- If Protection mode is enabled, the only process that will be collected is the one for Alloy. There will be no metrics for host processes.
**prometheus.exporter.unix**
- Process related metrics won't display any host process information with protection mode enabled.
- Disk metrics will only show mount data for the Alloy add-on, no host mount data will be collected with protection mode enabled.
**loki.source.journal**
No limitations that I found.
## Configuration
| Config                       | Description                                                                                 | Default value                        | Required                     |
| ---------------------------- | ------------------------------------------------------------------------------------------- | ------------------------------------ | ---------------------------- |
| `enable_prometheus`          | Enable sending metrics to Prometheus. If enabled, `prometheus_write_endpoint` is required.  | true                                 | No                           |
| `prometheus_write_endpoint`  | Full URL to send metrics to.                                                                | http://prometheus:9090/api/v1/write  | If `enable_prometheus`=true  |
| `enable_unix_component`      | Enables the prometheus.exporter.unix component to collect node_exporter metrics.           | true                                 | No                           |
| `enable_process_component`   | Enables the prometheus.exporter.process component to collect process_exporter metrics.     | true                                 | No                           |
| `prometheus_scrape_interval` | How frequently to scrape the targets of this scrape configuration (15s, 30s or 60s).       | 15s                                  | No                           |
| `servername_tag`             | Value for the `servername` label added to metrics and logs.                                 | HomeAssistant                        | No                           |
| `instance_tag`               | Overwrite the default metric "instance" tag.                                                |                                      | No                           |
| `enable_loki`                | Enable sending logs to Loki. If enabled, `loki_endpoint` is required.                       | false                                | No                           |
| `loki_endpoint`              | Full Loki URL to send logs to.                                                              | http://loki:3100/api/v1/push         | If `enable_loki`=true        |
| `enable_loki_syslog`         | Listens for syslog messages over UDP or TCP connections and forwards them to Loki.         | false                                | No                           |
| `override_config`            | If enabled, all other options are ignored and you can supply your own Alloy config.         | false                                | No                           |
| `override_config_path`       | Path to the override Alloy config file. The HA config directory is mounted at /config.      | /config/alloy/example.alloy          | If `override_config`=true    |
If `override_config` is true and a valid Alloy config file is supplied in `override_config_path`, all other options will be ignored.
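
An override file is simply a complete Alloy configuration. As a rough, hypothetical sketch (the file path matches the default `override_config_path`, and the Prometheus URL is a placeholder to replace with your own endpoint):

```alloy
// Hypothetical /config/alloy/example.alloy; only read when override_config
// is enabled. Replace the URL with your own Prometheus remote-write endpoint.
prometheus.exporter.self "alloy" { }

prometheus.scrape "self" {
  targets    = prometheus.exporter.self.alloy.targets
  forward_to = [prometheus.remote_write.default.receiver]
}

prometheus.remote_write "default" {
  endpoint {
    url = "http://prometheus:9090/api/v1/write"
  }
}
```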
## Support
- Tested on `aarch64` and `amd64`.
## Todo
- [x] Add more customization options (enable/disable components, scrape_interval, etc.)
- [ ] Add GitHub workflows
- [ ] Build and publish a Docker image so users don't have to build the image on every install
- [x] Verify all permissions added to `config.yaml` are required and remove unneeded ones
## Example Data
The [Node Exporter Full](https://grafana.com/grafana/dashboards/1860-node-exporter-full/) dashboard:
![prometheus.exporter.unix Example](images/prometheus.exporter.unix.png)
The [System Processes Metrics](https://grafana.com/grafana/dashboards/8378-system-processes-metrics/) dashboard:
![prometheus.exporter.process Example](images/prometheus.exporter.process.png)
![Loki Log Example](images/loki.png)


@@ -0,0 +1,68 @@
#include <tunables/global>

profile grafana_alloy flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  # Capabilities
  file,
  signal (send) set=(kill,term,int,hup,cont),

  # S6-Overlay
  /init ix,
  /bin/** ix,
  /usr/bin/** ix,
  /run/{s6,s6-rc*,service}/** ix,
  /package/** ix,
  /command/** ix,
  /etc/services.d/** rwix,
  /etc/cont-init.d/** rwix,
  /etc/cont-finish.d/** rwix,
  /run/{,**} rwk,
  /dev/tty rw,

  # Bashio
  /usr/lib/bashio/** ix,
  /tmp/** rwk,

  # Access to options.json and other files within your addon
  /data/** rw,

  # Start new profile for service
  /usr/local/bin/alloy cx -> alloy,

  profile alloy flags=(attach_disconnected,mediate_deleted) {
    #include <abstractions/base>

    ptrace (trace,read),

    # Receive signals from S6-Overlay
    signal (receive) peer=*_grafana_alloy,

    # Access to options.json and other files within your addon
    /data/** rw,

    # Access to mapped volumes specified in config.json
    /share/** rw,

    # Temp files (Loki)
    /tmp/.positions.* rw,

    # certificates
    /etc/ssl/certs/{,**} r,
    /usr/share/ca-certificates/{,**} r,

    # Access required for service functionality
    /usr/local/bin/alloy rm,
    /config/** rw,
    /etc/alloy/config.alloy r,
    /var/log/journal/{,**} r,
    /etc/nsswitch.conf r,
    /proc/{,**} r,
    /sys/** r,
    /etc/hosts r,
    /etc/resolv.conf r,
    /bin/bash rix,
    /bin/echo ix,
    /etc/passwd r,
    /dev/tty rw,
  }
}

grafana_alloy/build.yaml Normal file

@@ -0,0 +1,6 @@
---
build_from:
  aarch64: ghcr.io/hassio-addons/debian-base:9.1.0
  amd64: ghcr.io/hassio-addons/debian-base:9.1.0
args:
  GRAFANA_ALLOY_VERSION: 1.12.0

grafana_alloy/config.yaml Normal file

@@ -0,0 +1,50 @@
---
name: "Grafana Alloy"
description: "Grafana Alloy"
version: "1.12.0"
slug: "grafana_alloy"
arch:
  - aarch64
  - amd64
  - armv7
  - armhf
ports:
  12345/tcp:
  5514/udp:
  5601/tcp:
ports_description:
  12345/tcp: Alloy web server
  5514/udp: Alloy UDP syslog
  5601/tcp: Alloy TCP syslog
journald: true
host_network: true
host_pid: true
map:
  - type: homeassistant_config
    path: /config
options:
  enable_prometheus: true
  prometheus_write_endpoint: http://prometheus:9090/api/v1/write
  enable_unix_component: true
  enable_process_component: true
  prometheus_scrape_interval: 15s
  servername_tag: "HomeAssistant"
  enable_loki: false
  loki_endpoint: http://loki:3100/api/v1/push
  enable_loki_syslog: false
  override_config: false
  override_config_path: "/config/alloy/example.alloy"
schema:
  enable_prometheus: bool
  prometheus_write_endpoint: str?
  enable_unix_component: bool
  enable_process_component: bool
  prometheus_scrape_interval: list(15s|30s|60s)
  servername_tag: str?
  instance_tag: str?
  enable_loki: bool
  loki_endpoint: str?
  enable_loki_syslog: bool
  override_config: bool
  override_config_path: str?

grafana_alloy/icon.png Normal file (binary, 56 KiB, not shown)

Three further binary image files (not shown; 392 KiB, 379 KiB, 287 KiB)

grafana_alloy/logo.png Normal file (binary, 56 KiB, not shown)


@@ -0,0 +1,10 @@
$PROMETHEUS_CONFIG
$UNIX_CONFIG
$PROCESS_CONFIG
$ALLOY_CONFIG
$LOKI_CONFIG


@@ -0,0 +1,160 @@
#!/usr/bin/env bashio

readonly CONFIG_DIR=/etc/alloy
readonly CONFIG_FILE="${CONFIG_DIR}/config.alloy"
readonly CONFIG_TEMPLATE="${CONFIG_DIR}/config.alloy.template"

if bashio::config.true 'override_config'; then
  if bashio::config.is_empty 'override_config_path'; then
    bashio::config.require 'override_config_path' "Config override is Enabled, must set override_config_path"
  fi
else
  # Add Prometheus Write Endpoint
  if bashio::config.true 'enable_prometheus'; then
    bashio::config.require 'prometheus_write_endpoint' "You need to supply Prometheus write endpoint"

    EXTERNAL_LABELS=""
    RELABEL_CONFIG=""

    # Prometheus Write Endpoint
    if bashio::config.has_value 'prometheus_write_endpoint'; then
      PROMETHEUS_ENDPOINT="$(bashio::config "prometheus_write_endpoint")"
    fi

    # Servername External Label
    if bashio::config.has_value 'servername_tag'; then
      EXTERNAL_LABELS="
  external_labels = {
    \"servername\" = \"$(bashio::config "servername_tag")\",
  }"
    fi

    # Relabel "instance" tag if configured
    if bashio::config.has_value 'instance_tag'; then
      RELABEL_CONFIG="
    write_relabel_config {
      action = \"replace\"
      source_labels = [\"instance\"]
      target_label = \"instance\"
      replacement = \"$(bashio::config "instance_tag")\"
    }"
    fi

    export PROMETHEUS_CONFIG="
prometheus.remote_write \"default\" {
  endpoint {
    url = \"$PROMETHEUS_ENDPOINT\"
    metadata_config {
      send_interval = \"$(bashio::config "prometheus_scrape_interval")\"
    }
    $RELABEL_CONFIG
  }
  $EXTERNAL_LABELS
}"

    ## Enable prometheus.exporter.unix
    if bashio::config.true 'enable_unix_component'; then
      export UNIX_CONFIG="
prometheus.exporter.unix \"node_exporter\" { }
prometheus.scrape \"unix\" {
  targets = prometheus.exporter.unix.node_exporter.targets
  forward_to = [prometheus.remote_write.default.receiver]
  scrape_interval = \"$(bashio::config "prometheus_scrape_interval")\"
}"
    fi

    ## Enable prometheus.exporter.process
    if bashio::config.true 'enable_process_component'; then
      export PROCESS_CONFIG="
prometheus.exporter.process \"process_exporter\" {
  matcher {
    name = \"{{.Comm}}\"
    cmdline = [\".+\"]
  }
}
prometheus.scrape \"process\" {
  targets = prometheus.exporter.process.process_exporter.targets
  forward_to = [prometheus.remote_write.default.receiver]
  scrape_interval = \"$(bashio::config "prometheus_scrape_interval")\"
}"
    fi

    export ALLOY_CONFIG="
prometheus.exporter.self \"alloy\" { }
prometheus.scrape \"self\" {
  targets = prometheus.exporter.self.alloy.targets
  forward_to = [prometheus.remote_write.default.receiver]
  scrape_interval = \"$(bashio::config "prometheus_scrape_interval")\"
}"
  fi

  # Add Loki to config if endpoint is supplied
  if bashio::config.true 'enable_loki'; then
    bashio::config.require 'loki_endpoint' "You need to supply Loki endpoint"

    if bashio::config.has_value 'servername_tag'; then
      labels="{component = \"loki.source.journal\", servername = \"$(bashio::config "servername_tag")\"}"
    else
      labels="{component = \"loki.source.journal\"}"
    fi

    if bashio::config.true 'enable_loki_syslog'; then
      syslog_config="
loki.source.syslog \"syslog\" {
  listener {
    address = \"0.0.0.0:5601\"
    labels = { component = \"loki.source.syslog\", protocol = \"tcp\" }
  }
  listener {
    address = \"0.0.0.0:5514\"
    protocol = \"udp\"
    labels = { component = \"loki.source.syslog\", protocol = \"udp\" }
  }
  forward_to = [loki.write.endpoint.receiver]
}"
    else
      syslog_config=""
    fi

    export LOKI_CONFIG="
loki.relabel \"journal\" {
  forward_to = []
  rule {
    source_labels = [\"__journal__systemd_unit\"]
    target_label = \"unit\"
  }
  rule {
    source_labels = [\"__journal__hostname\"]
    target_label = \"nodename\"
  }
  rule {
    source_labels = [\"__journal_syslog_identifier\"]
    target_label = \"syslog_identifier\"
  }
  rule {
    source_labels = [\"__journal_container_name\"]
    target_label = \"container_name\"
  }
  rule {
    action = \"drop\"
    source_labels = [\"syslog_identifier\"]
    regex = \"audit\"
  }
}
loki.source.journal \"read\" {
  forward_to = [loki.write.endpoint.receiver]
  relabel_rules = loki.relabel.journal.rules
  labels = $labels
  path = \"/var/log/journal\"
}
$syslog_config
loki.write \"endpoint\" {
  endpoint {
    url = \"$(bashio::config "loki_endpoint")\"
  }
}"
  fi

  envsubst < "${CONFIG_TEMPLATE}" > "${CONFIG_FILE}"
fi


@@ -0,0 +1,18 @@
#!/usr/bin/env bashio
# ==============================================================================
# Home Assistant Community Add-on: Grafana Alloy
# Runs Grafana Alloy
# ==============================================================================
OVERRIDE_CONFIG=$(bashio::config 'override_config_path')

if bashio::config.false 'override_config'; then
  CONFIG_FILE=/etc/alloy/config.alloy
else
  CONFIG_FILE=$OVERRIDE_CONFIG
fi

bashio::log.info "Starting Grafana Alloy with ${CONFIG_FILE}"
bashio::log.info "$(cat "${CONFIG_FILE}")"

# Run Alloy
exec /usr/local/bin/alloy run --server.http.listen-addr=0.0.0.0:12345 --disable-reporting --storage.path=/data "${CONFIG_FILE}"

grafana_alloy/rootfs/run.sh Executable file

@@ -0,0 +1,5 @@
#!/usr/bin/env bashio
/etc/cont-init.d/alloy_setup.sh
exec /etc/services.d/alloy/run


@@ -0,0 +1,38 @@
---
configuration:
  enable_prometheus:
    name: Enable Prometheus Metrics
    description: Enable sending metrics to Prometheus. If enabled, prometheus_write_endpoint is required
  prometheus_write_endpoint:
    name: Prometheus Write Endpoint
    description: Full URL to send metrics to.
  enable_unix_component:
    name: Enable Unix System Metrics
    description: Enables the prometheus.exporter.unix component to collect node_exporter metrics
  enable_process_component:
    name: Enable Process Metrics
    description: Enables the prometheus.exporter.process component to collect process_exporter metrics
  prometheus_scrape_interval:
    name: Prometheus Scrape Interval
    description: How frequently to scrape the targets of this scrape configuration
  servername_tag:
    name: Servername Tag
    description: servername tag value
  instance_tag:
    name: Instance Tag
    description: Overwrite the default metric "instance" tag
  enable_loki:
    name: Enable Loki
    description: Enable sending logs to Loki. If enabled, loki_endpoint is required
  loki_endpoint:
    name: Loki Endpoint
    description: Full Loki URL to send logs to
  override_config:
    name: Override Config
    description: If enabled, all other options will be ignored and you can supply your own Alloy config
  override_config_path:
    name: Override Config Path
    description: Path to the override Alloy config file
  enable_loki_syslog:
    name: Enable Loki Syslog
    description: Listens for syslog messages over UDP or TCP connections and forwards them to Loki