Reading sensor data

This guide covers the endpoints you use to read sensor readings from your sites. The API returns one value per 15-minute bucket, with no further server-side smoothing or gap filling. Buckets without any underlying data are omitted from the response.

Aggregation

Sensors at different sites emit at different cadences (1, 5, 10, or 15 minutes depending on the gateway configuration). We normalise everything to a 15-minute grid by averaging the raw samples within each bucket, so consumers always see the same time resolution.
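
The server-side bucketing is equivalent to a plain mean over each 15-minute window. A minimal sketch with pandas, using invented 5-minute raw samples (the API never exposes raw samples; this only illustrates the averaging):

```python
import pandas as pd

# Hypothetical raw samples at a 5-minute cadence; the API applies the
# equivalent averaging server-side before returning data.
raw = pd.Series(
    [600.0, 610.0, 620.0],
    index=pd.to_datetime(
        ["2026-01-15T10:00:00Z", "2026-01-15T10:05:00Z", "2026-01-15T10:10:00Z"]
    ),
)

# Average the raw samples within each 15-minute bucket.
bucketed = raw.resample("15min").mean()
# → one bucket at 10:00:00Z with value 610.0
```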

Available endpoints

Sensor metadata and sensor readings are separate resources:

  • Metadata: GET /v1/connect/sensors/{id} returns the sensor's identifiers, type, and the measurement channels it reports (each with its own unit). Call this once per sensor and cache the result.
  • Readings: GET /v1/connect/sensors/{id}/readings returns the time series. Units are advertised by the metadata endpoint and do not appear on individual rows.

Metadata response

{
  "id": "c57e75b1-9860-4ea5-a7cf-9d04704974a4",
  "external_id": "capteur-1",
  "connection_point_id": "6d570914-485b-4cd8-8c93-ada58293e20b",
  "measurements": [{ "key": "irradiance", "unit": "W/m2" }]
}

measurements is a list because a single physical sensor can carry multiple channels (for example irradiance alongside temperature). Pick the one you want on the readings endpoint with the measurement query parameter. It is optional when the sensor reports only one channel, required otherwise.
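
One way to derive the measurement parameter from cached metadata. pick_measurement and its preferred default are illustrative application-side helpers, not part of the API:

```python
def pick_measurement(metadata, preferred="irradiance"):
    """Choose the `measurement` query parameter from cached sensor metadata.

    `metadata` is the parsed JSON from GET /v1/connect/sensors/{id};
    `preferred` is an application-level default, not an API concept.
    """
    keys = [m["key"] for m in metadata["measurements"]]
    if len(keys) == 1:
        # Single-channel sensor: `measurement` is optional, but passing the
        # key explicitly is harmless and keeps the call site uniform.
        return keys[0]
    if preferred in keys:
        return preferred
    raise ValueError(f"sensor reports {keys}, none matches {preferred!r}")
```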

Query parameters for readings

Parameter    Type    Required   Description
start        string  Yes        Start of the window, ISO 8601 with timezone (Z or offset). Inclusive.
end          string  Yes        End of the window, ISO 8601 with timezone (Z or offset). Exclusive.
measurement  string  See note   Which of the sensor's measurements[].key channels to return. Optional when the sensor reports a single channel; required when it reports more than one.
page         number  No         1-based page number. Defaults to 1.
page_size    number  No         Number of readings per page. Defaults to 1000, maximum 10000.

Readings are always returned in ascending order by timestamp.

Example request

curl -sS \
-H "Authorization: Bearer $TOKEN" \
"https://api.powernaut.io/v1/connect/sensors/$SENSOR_ID/readings?start=2026-01-15T00:00:00Z&end=2026-01-16T00:00:00Z&measurement=irradiance"
Timezone and UTC

We accept an ISO 8601 start and end with either UTC (Z) or an offset (e.g. +01:00). Response timestamps are always returned in UTC.
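
A window expressed in a local offset can be built with the standard library and serialised in the accepted ISO 8601 form. The CET (UTC+1) offset below is only an example:

```python
from datetime import datetime, timedelta, timezone

# Build a one-day window in local CET time (UTC+1, example offset).
cet = timezone(timedelta(hours=1))
start = datetime(2026, 1, 15, 0, 0, tzinfo=cet)
end = start + timedelta(days=1)

# isoformat() on a tz-aware datetime emits the offset the API accepts.
params = {"start": start.isoformat(), "end": end.isoformat()}
# → {'start': '2026-01-15T00:00:00+01:00', 'end': '2026-01-16T00:00:00+01:00'}
```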

Readings response format

{
  "data": [
    {
      "timestamp": "2026-01-15T10:00:00Z",
      "value": 612.4
    },
    {
      "timestamp": "2026-01-15T10:15:00Z",
      "value": 618.7
    }
  ],
  "total": 70080
}

Field meanings:

  • timestamp is the start of the 15-minute bucket (ISO 8601 UTC). The bucket covers [timestamp, timestamp + 15min).
  • value is the average of the raw samples the sensor emitted within the bucket. The unit comes from the sensor's metadata (measurements[i].unit) and is the same for every row in a response.
  • total is the total number of buckets across all pages for the requested window.

Missing data behaviour

If no usable samples landed in a 15-minute bucket, that bucket is omitted from the response. There are no null rows.

For most real sites buckets are contiguous, but downtime or transmission gaps will leave holes. If your analysis needs a fixed grid, reindex client side. With pandas:

import pandas as pd

df = pd.DataFrame(response["data"])
df["timestamp"] = pd.to_datetime(df["timestamp"], utc=True)
series = df.set_index("timestamp")["value"].asfreq("15min")
# Missing buckets are now present as NaN.

Pagination

A full year of 15-minute data is roughly 35,000 readings per sensor, so most real queries will span multiple pages. Iterate until you have collected total rows:

import pandas as pd
import requests

def fetch_readings(sensor_id, start, end, token, page_size=10_000):
    rows = []
    page = 1
    while True:
        resp = requests.get(
            f"https://api.powernaut.io/v1/connect/sensors/{sensor_id}/readings",
            params={"start": start, "end": end, "page": page, "page_size": page_size},
            headers={"Authorization": f"Bearer {token}"},
            timeout=60,
        ).json()
        rows.extend(resp["data"])
        if len(rows) >= resp["total"]:
            break
        page += 1
    return pd.DataFrame(rows)

For very long windows (multi-year), prefer splitting the window rather than paging through tens of thousands of rows in one loop. Month-sized slices usually give a good tradeoff between request count and payload size.
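
A sketch of that splitting, assuming month-aligned slices; month_slices is a hypothetical helper, not part of the API. Because end is exclusive, consecutive slices can share a boundary without double-counting buckets:

```python
import pandas as pd

def month_slices(start, end):
    """Split an ISO 8601 window into month-aligned [slice_start, slice_end) pairs."""
    start, end = pd.Timestamp(start), pd.Timestamp(end)
    # Month starts strictly inside the window become the split points.
    edges = [e for e in pd.date_range(start, end, freq="MS") if start < e < end]
    points = [start, *edges, end]
    return [(a.isoformat(), b.isoformat()) for a, b in zip(points, points[1:])]

slices = month_slices("2026-01-15T00:00:00Z", "2026-03-10T00:00:00Z")
# → three slices: Jan 15–Feb 1, Feb 1–Mar 1, Mar 1–Mar 10
```

Each pair can then be passed as start/end to the readings endpoint in turn.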

Error handling

Status  Error                 When
400     Bad Request           start or end missing, malformed, or start >= end.
403     Forbidden             Your API credentials lack the sensor-read scope.
404     Not Found             The sensor_id does not exist or is not visible to your partner.
422     Unprocessable Entity  page_size exceeds the maximum, or page is negative.

Example error body:

{
  "error": "Bad Request",
  "message": "`start` must be earlier than `end`.",
  "code": "INVALID_TIME_WINDOW"
}
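
One pattern for surfacing these errors, kept separate from the HTTP call so it is easy to test; check_response is an illustrative helper that assumes the error body shape shown above:

```python
def check_response(status_code, body):
    """Return the parsed body for successful responses, raise otherwise.

    `body` is the parsed JSON; error bodies carry `error`, `message`
    and a machine-readable `code`.
    """
    if status_code >= 400:
        # Include the stable `code` so callers can branch on it if needed.
        raise RuntimeError(f"{status_code} {body.get('code')}: {body.get('message')}")
    return body
```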

Best practices

Fetch metadata once, data many times

GET /v1/connect/sensors/{id} returns the sensor's measurement channels (with their units), its external_id and its connection point association. These are stable, so cache them in your application and avoid re-requesting them on every data pull.

Pick a sensible page_size

10000 is the maximum and minimises round trips for bulk loads. For interactive dashboards where latency matters more than throughput, 1000 is usually a better fit.

Align your window to 15 minutes

Because readings are bucketed into 15-minute intervals, a window whose edges fall mid-bucket (e.g. a start of 10:07:00) still returns every bucket it intersects. Aligning start and end to :00, :15, :30 or :45 produces predictable row counts.
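
Flooring a timestamp to the start of its containing bucket is a one-liner with the standard library; floor_to_bucket is an illustrative helper:

```python
from datetime import datetime, timezone

def floor_to_bucket(ts):
    """Floor a tz-aware datetime to the start of its 15-minute bucket."""
    return ts.replace(minute=ts.minute - ts.minute % 15, second=0, microsecond=0)

aligned = floor_to_bucket(datetime(2026, 1, 15, 10, 7, 23, tzinfo=timezone.utc))
# → 2026-01-15 10:00:00+00:00
```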

Do not rely on evenly spaced timestamps

Missing intervals are omitted. Always treat the response as an irregular time series and reindex if your downstream analysis assumes a fixed frequency.