Using Telegraf to Feed JSON Data From an API Into Influx

Introduction

Every once in a while, I like to get my hands dirty and play around with technology. Part of the reason is that I learn best by doing. Another part is that I love technology and need to prove to myself every once in a while that “I’ve still got it.”

My employer recently launched a metrics platform that ingests data in Influx Line Protocol format, stores the data, and provides a SaaS UI to visualize the data. We built our own collector for SNMP and Streaming Telemetry (gNMI) data but do not yet support grabbing JSON data via an API call. I was reading through the Telegraf documentation and realized it should be relatively easy to configure Telegraf to do that. It just so happens I have a Google Wifi setup in my home that exposes an API with some interesting data. In this blog, I will document how I got this all working. While it does use my employer’s commercial SaaS offering to store and visualize the data, the collection method would work with any system that can ingest data in Influx Line Protocol format, including an InfluxDB database with a Grafana frontend.

Docker Setup

I am a big fan of installing software using Docker so that I can keep my host clean as I play around with things. I also like to use docker-compose instead of docker run commands so that I can easily upgrade the containers. For Telegraf I used the following docker-compose.yml file:

---
version: '3.3'
services:
  telegraf:
    container_name: telegraf
    image: 'docker.io/telegraf:latest'
    environment:
      - KENTIK_API_ENDPOINT="https://grpc.api.kentik.com/kmetrics/v202207/metrics/api/v2/write?bucket=&org=&precision=ns"
      - KENTIK_API_TOKEN=(REDACTED)
      - KENTIK_API_EMAIL=(REDACTED)
    volumes:
      - '/home/jryburn/telegraf:/etc/telegraf'
    restart: unless-stopped
    network_mode: host

Telegraf Configuration

Once I got the container configured, I built a telegraf.conf file to collect the data from the API endpoint on the Google Wifi using the HTTP input with the json_v2 data format. Each json_v2.tag entry puts a value into the tag set when it is exported to Influx; each json_v2.field entry puts a value into the field set. Once all the data we want to collect is defined, we define our outputs. The Influx output is pretty simple: I configured the http output to use the influx data format and added some custom header fields to authenticate the API call to my Influx endpoint. I am also outputting the data to a file to make troubleshooting easier.

# Define the inputs that telegraf is going to collect
[global_tags]
  device_name = "basement-ap"
  location = "(REDACTED)"
  vendor = "Google"
  description = "Google Wifi"

[[inputs.http]]
  urls = ["http://192.168.86.1/api/v1/status"]
  data_format = "json_v2"
  ## Exclude url and host items from tags
  tagexclude = ["url", "host"]

  [[inputs.http.json_v2]]
    measurement_name = "/system" # A string that will become the new measurement name
    [[inputs.http.json_v2.tag]]
      path = "wan.localIpAddress" # A string with valid GJSON path syntax
      type = "string" # A string specifying the type (int,uint,float,string,bool)
      rename = "device_ip" # A string with a new name for the tag key
    [[inputs.http.json_v2.tag]]
      path = "system.hardwareId" # A string with valid GJSON path syntax
      type = "string" # A string specifying the type (int,uint,float,string,bool)
      rename = "hardware-id" # A string with a new name for the tag key
    [[inputs.http.json_v2.tag]]
      path = "software.softwareVersion" # A string with valid GJSON path syntax
      type = "string" # A string specifying the type (int,uint,float,string,bool)
      rename = "software-version" # A string with a new name for the tag key
    [[inputs.http.json_v2.tag]]
      path = "system.modelId" # A string with valid GJSON path syntax
      type = "string" # A string specifying the type (int,uint,float,string,bool)
      rename = "model" # A string with a new name for the tag key
    [[inputs.http.json_v2.field]]
      path = "system.uptime" # A string with valid GJSON path syntax
      type = "int" # A string specifying the type (int,uint,float,string,bool)

# A plugin that stores metrics in a file
[[outputs.file]]
  ## Files to write to; "stdout" is a specially handled file.
  files = ["stdout", "/etc/telegraf/metrics.out"]
  data_format = "influx" # Data format to output.
  influx_sort_fields = false

# A plugin that can transmit metrics over HTTP
[[outputs.http]]
  ## URL is the address to send metrics to
  url = "${KENTIK_API_ENDPOINT}" # Will need API email and token in the header
  data_format = "influx" # Data format to output.
  influx_sort_fields = false

  ## Additional HTTP headers
  [outputs.http.headers]
    X-CH-Auth-Email = "${KENTIK_API_EMAIL}" # Kentik user email address
    X-CH-Auth-API-Token = "${KENTIK_API_TOKEN}" # Kentik API key
    Content-Type = "application/influx" # Make sure the http session uses influx
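To picture what the outputs.http section produces on the wire, here is a minimal Python sketch that builds (but does not send) the equivalent HTTP request. The credential values and the sample metric line are placeholders, not real data; this is an illustration of the header layout, not Telegraf's actual implementation.

```python
import urllib.request

# Placeholder values standing in for the real environment variables.
KENTIK_API_ENDPOINT = "https://grpc.api.kentik.com/kmetrics/v202207/metrics/api/v2/write?bucket=&org=&precision=ns"
KENTIK_API_EMAIL = "user@example.com"  # placeholder
KENTIK_API_TOKEN = "secret-token"      # placeholder

# A sample Influx Line Protocol record for the request body.
line = "/system,device_name=basement-ap uptime=794184i 1708881351000000000"

# Build (but do not send) the POST request with the same custom headers
# that the [outputs.http.headers] section configures.
req = urllib.request.Request(
    KENTIK_API_ENDPOINT,
    data=line.encode(),
    headers={
        "X-CH-Auth-Email": KENTIK_API_EMAIL,
        "X-CH-Auth-API-Token": KENTIK_API_TOKEN,
        "Content-Type": "application/influx",
    },
    method="POST",
)
print(req.get_method(), req.get_header("Content-type"))
```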

This is what the JSON payload looks like when I curl my AP. It should make it clearer which fields I configured Telegraf to look for.

{
  "dns": {
    "mode": "automatic",
    "servers": [
      "192.168.1.254"
    ]
  },
  "setupState": "GWIFI_OOBE_COMPLETE",
  "software": {
    "blockingUpdate": 1,
    "softwareVersion": "14150.376.32",
    "updateChannel": "stable-channel",
    "updateNewVersion": "0.0.0.0",
    "updateProgress": 0.0,
    "updateRequired": false,
    "updateStatus": "idle"
  },
  "system": {
    "countryCode": "us",
    "groupRole": "root",
    "hardwareId": "GALE C2E-A2A-A3A-A4A-E5Q",
    "lan0Link": true,
    "ledAnimation": "CONNECTED",
    "ledIntensity": 83,
    "modelId": "ACjYe",
    "oobeDetailedStatus": "JOIN_AND_REGISTRATION_STAGE_DEVICE_ONLINE",
    "uptime": 794184
  },
  "vorlonInfo": {
    "migrationMode": "vorlon_all"
  },
  "wan": {
    "captivePortal": false,
    "ethernetLink": true,
    "gatewayIpAddress": "x.x.x.1",
    "invalidCredentials": false,
    "ipAddress": true,
    "ipMethod": "dhcp",
    "ipPrefixLength": 22,
    "leaseDurationSeconds": 600,
    "localIpAddress": "x.x.x.x",
    "nameServers": [
      "192.168.1.254"
    ],
    "online": true,
    "pppoeDetected": false,
    "vlanScanAttemptCount": 0,
    "vlanScanComplete": true
  }
}
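To sanity-check which values the json_v2 paths pull out of that payload, here is a minimal Python sketch. The `lookup` helper is an illustrative stand-in for GJSON path evaluation (the real syntax supports far more than dotted keys), and the payload is trimmed to just the keys the config references.

```python
import json

# Payload trimmed to the keys the telegraf.conf references
# (values copied from the sample above; the IP stays redacted).
payload = json.loads("""
{
  "software": {"softwareVersion": "14150.376.32"},
  "system": {"hardwareId": "GALE C2E-A2A-A3A-A4A-E5Q", "modelId": "ACjYe", "uptime": 794184},
  "wan": {"localIpAddress": "x.x.x.x"}
}
""")

def lookup(doc, path):
    """Follow a dotted path; an illustrative stand-in for GJSON evaluation."""
    for key in path.split("."):
        doc = doc[key]
    return doc

# The tag and field paths from the telegraf.conf, with their renames applied.
tags = {
    "device_ip": lookup(payload, "wan.localIpAddress"),
    "hardware-id": lookup(payload, "system.hardwareId"),
    "software-version": lookup(payload, "software.softwareVersion"),
    "model": lookup(payload, "system.modelId"),
}
fields = {"uptime": lookup(payload, "system.uptime")}
print(tags, fields)
```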

Once I start the Docker container, I can tail the logs using docker logs -f telegraf and watch Telegraf load its configuration and start collecting metrics.

2024-02-25T17:15:44Z I! Starting Telegraf 1.29.4 brought to you by InfluxData the makers of InfluxDB
2024-02-25T17:15:44Z I! Available plugins: 241 inputs, 9 aggregators, 30 processors, 24 parsers, 60 outputs, 6 secret-stores
2024-02-25T17:15:44Z I! Loaded inputs: http
2024-02-25T17:15:44Z I! Loaded aggregators:
2024-02-25T17:15:44Z I! Loaded processors:
2024-02-25T17:15:44Z I! Loaded secretstores:
2024-02-25T17:15:44Z I! Loaded outputs: file http
2024-02-25T17:15:44Z I! Tags enabled: description=Google Wifi device_name=basement-ap host=docklands location=(REDACTED) vendor=Google
2024-02-25T17:15:44Z I! [agent] Config: Interval:10s, Quiet:false, Hostname:"docklands", Flush Interval:10s
/system,description=Google\ Wifi,device_ip=x.x.x.x,device_name=basement-ap,location=(REDACTED),model=ACjYe,os-version=14150.376.32,serial-number=GALE\ C2E-A2A-A3A-A4A-E5Q,vendor=Google uptime-sec=2801i 1708881351000000000
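That last log line is the metric rendered in Influx Line Protocol. As a rough illustration of how such a line is assembled, here is a minimal Python sketch of the formatting and tag-value escaping; `to_line_protocol` is a hypothetical helper for this post, not part of Telegraf.

```python
# Tag values must backslash-escape commas, equals signs, and spaces,
# which is why the output above shows "Google\ Wifi".
def escape_tag(value: str) -> str:
    return value.replace(",", r"\,").replace("=", r"\=").replace(" ", r"\ ")

def to_line_protocol(measurement, tags, fields, ts_ns):
    # Tags are sorted for a stable, canonical ordering.
    tag_str = ",".join(f"{k}={escape_tag(str(v))}" for k, v in sorted(tags.items()))
    # Integer fields carry an "i" suffix, which is why uptime appears as "2801i".
    field_str = ",".join(
        f"{k}={v}i" if isinstance(v, int) else f'{k}="{v}"'
        for k, v in fields.items()
    )
    return f"{measurement},{tag_str} {field_str} {ts_ns}"

line = to_line_protocol(
    "/system",
    {"device_name": "basement-ap", "vendor": "Google", "description": "Google Wifi"},
    {"uptime-sec": 2801},
    1708881351000000000,
)
print(line)
```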

Now I hop over to the UI where I am sending the Influx data, and I can see the metrics arriving there as well.

Conclusion

There is a lot more that could be done with this setup. Many Wi-Fi access points and SD-WAN controllers expose a rich set of data via their APIs but do not support exporting that data via SNMP or Streaming Telemetry. By configuring Telegraf to collect that data and export it in Influx format, you can graph and monitor the metrics from those APIs in the same UI where you monitor your SNMP and Streaming Telemetry data. I hope this blog was helpful. Happy Networking!
