Working with timestamps
Timestamps represent the "period up to"
Our devices collect data over a 5–30 second period for Short Energy, and a 5 minute period for Long Energy. The data for the previous period (e.g. 30s or 5m) is aggregated on the device and transmitted to our servers at the end of the period. The timestamp represents the data that is collected in that prior period—or 'in arrears'.
In this sense, the timestamp for a data packet from our API can be considered as representing 'the past X seconds up to the timestamp'.
e.g. a Long Energy packet with the timestamp 1712448000 (midnight UTC on 7 Apr 2024) contains the aggregated data from 6 Apr 2024 @ 23:55:01 to 7 Apr @ 00:00:00 (UTC).
API requests are based on timestamp
When you request data from our API, we return the data with timestamps >= the requested time, aggregated to the granularity period specified.
For example, if you request Long Energy data for 1712448000 with 5 minute granularity, you will get data with the timestamp 1712448000 and above.
This means that the data returned is actually what was recorded for the previous 5 minute period up to that timestamp.
Granularity sets the aggregation period for the data. The aggregation is based on the collected 5 minute data and its timestamps.
Thus, if you request Long Energy data for 1712448000 with 15 minute granularity, the system will aggregate the records with timestamps from 1712448000 up to (but not including) 1712448000 + 15 minutes.
This means that the 15 minute Long Energy record with the timestamp 1712448000 is an aggregation of timestamps 1712448000 (00:00), 1712448300 (00:05), and 1712448600 (00:10). Technically, then, this is the data from 23:55:01 the prior day, through to 00:14:59 on the current day, as sketched below.
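To make that concrete, here is a minimal sketch (in Python; component_timestamps is a hypothetical helper, not part of our API) that lists the 5 minute timestamps rolled up into a single 15 minute record:
```python
FIVE_MINUTES = 300  # Long Energy base granularity, in seconds

def component_timestamps(record_ts: int, granularity_s: int) -> list[int]:
    # The record stamped record_ts aggregates the 5 minute records with
    # timestamps from record_ts up to (but not including) record_ts + granularity
    return list(range(record_ts, record_ts + granularity_s, FIVE_MINUTES))

print(component_timestamps(1712448000, 900))
# [1712448000, 1712448300, 1712448600] i.e. 00:00, 00:05 and 00:10 UTC
```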
Please consider this if you need to report on the basis of when the data was actually recorded on device.
For example, if you need the data that was collected between 00:00:01 and 00:15:00, you will need to:
- Use 5 minute data for your reporting
- Offset your timestamps by +5 minutes to get the actuals corresponding to the period (see the sketch after this list)
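As a minimal sketch of that offset, assuming you want the recording period as UTC date-times (recording_period is a hypothetical helper, not part of our API):
```python
from datetime import datetime, timedelta, timezone

FIVE_MINUTES = timedelta(minutes=5)

def recording_period(record_ts: int) -> tuple[datetime, datetime]:
    # The API timestamp marks the end of the period, so the data was
    # actually recorded in the 5 minutes leading up to it
    end = datetime.fromtimestamp(record_ts, tz=timezone.utc)
    return end - FIVE_MINUTES, end

start, end = recording_period(1712448000)
print(f"recorded between {start:%d %b %H:%M:%S} and {end:%d %b %H:%M:%S} UTC")
# recorded between 06 Apr 23:55:00 and 07 Apr 00:00:00 UTC
```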
Working with Unix epoch timestamps and timezones
The Wattwatchers API uses Unix timestamps for passing date-time values to the API (e.g. via query string parameters) and for data that is returned via the API (e.g. the timestamp attribute of energy data).
We use seconds-based timestamps, as integers, without a milliseconds component.
This timestamp format is timezone agnostic, which is one of the reasons we use it. It provides a consistent method of representing time regardless of timezone and Daylight Savings rules.
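As a quick illustration (Python 3.9+ standard library), the same instant produces the same Unix timestamp no matter which timezone it is expressed in:
```python
from datetime import datetime
from zoneinfo import ZoneInfo

# The same instant, expressed in two different timezones ...
utc = datetime(2019, 4, 1, 13, 0, 0, tzinfo=ZoneInfo("UTC"))
sydney = datetime(2019, 4, 2, 0, 0, 0, tzinfo=ZoneInfo("Australia/Sydney"))  # AEDT, UTC+11

# ... yields exactly the same seconds-based Unix timestamp
print(int(utc.timestamp()), int(sydney.timestamp()))  # 1554123600 1554123600
```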
Converting to Unix timestamp
Most programming languages and common date-time libraries provide an easy way to convert from native/timezone aware datetime objects to Unix time.
Examples:
```javascript
// Native JavaScript: convert a Date to a seconds-based Unix timestamp
let yourDate = new Date('2012-08-10');
let timestamp = Math.floor(yourDate.getTime() / 1000);
```
Similarly, the moment.js library provides a method for returning the Unix timestamp:
```javascript
let yourDate = moment();
let timestamp = yourDate.unix();
```
In Python, the standard library's datetime module provides the equivalent:
```python
from datetime import datetime

now = datetime.now()
timestamp = int(datetime.timestamp(now))  # truncate to whole seconds
```
And again, popular Python date-time libraries, such as pendulum, provide options for easily getting the Unix time:
```python
import pendulum

dt = pendulum.now()
timestamp = dt.int_timestamp
```
Typically, when a programming language or library provides a timezone aware date-time object, the language/library will automatically convert this value to the correct Unix timestamp when calling the relevant method/property.
This means you can work with your native or library-based date-time objects as you need, and then do the conversion at the last minute, just before calling our API.
For us mere humans, there are a number of handy online tools, such as the Epoch Converter, for translating timestamps to and from the appropriate UTC or local time—e.g. when you are trying to troubleshoot debug output or manually interacting with the API via tools such as Postman.
Working with Timezones and Daylight Savings Time (DST)
Whether you need to consider timezones when interacting with the Wattwatchers API will depend on your specific application.
If you are just polling data (i.e. retrieving the latest data from a device on a rolling basis) at granularities of less than an hour (e.g. 5m, 15m or 30m), you shouldn't need to worry about timezones.
If your application is timezone aware—e.g. you provide a user interface to your end user presenting data in their timezone—you may need to consider timezones in three ways:
1. fromTs and toTs may need to be adjusted
For example, for Australia/Sydney, which applies an hour offset in Daylight Saving Time, you may need to adjust your fromTs or toTs to reflect the target timezone if the dates selected cross the Daylight Savings boundary.
For example: You want to retrieve 14 days' worth of data in the 15m granularity for the period spanning 1 Apr 2019 to 14 Apr 2019. This timeframe crosses the Daylight Savings time boundary, so you'd need to adjust the fromTs and toTs accordingly. In this case fromTs=1554037200 and toTs=1555250399.
By comparison, if you were to do this query for the same date-time, but in UTC, the values would be fromTs=1554076800 and toTs=1555286399.
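A minimal sketch of computing these values with Python's standard zoneinfo module (available from Python 3.9), letting the timezone database resolve the DST offset on each side of the boundary:
```python
from datetime import datetime
from zoneinfo import ZoneInfo

SYDNEY = ZoneInfo("Australia/Sydney")

# Midnight at the start of 1 Apr 2019 falls in AEDT (UTC+11) ...
from_ts = int(datetime(2019, 4, 1, 0, 0, 0, tzinfo=SYDNEY).timestamp())
# ... while the last second of 14 Apr 2019 falls in AEST (UTC+10)
to_ts = int(datetime(2019, 4, 14, 23, 59, 59, tzinfo=SYDNEY).timestamp())

print(f"fromTs={from_ts}&toTs={to_ts}")  # fromTs=1554037200&toTs=1555250399
```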
2. Different number of items returned
As the Daylight Savings crossover may add or subtract a period of time (e.g. 1 hour in the case of the Australia/Sydney timezone), the data returned will reflect this crossover.
For example: You want to retrieve 14 days' worth of data in the hour granularity for the period spanning 1 Apr 2019 to 14 Apr 2019. This timeframe crosses the Daylight Savings time boundary.
The query for this would look like:
?fromTs=1554037200&toTs=1555250399&granularity=hour&timezone=Australia/Sydney
Running this query will return 337 items.
But if you perform the same query for a period that does not cross the DST boundary, say 1 Mar 2019 (fromTs=1551358800) to 14 Mar 2019 (toTs=1552568399), you will get 336 items. So what's going on?
Because the first query crosses the Daylight Savings boundary (exiting Daylight Savings in this case), an extra hour is added, and thus an extra entry is returned in the result.
Conversely, if we run a similar query when entering a Daylight Savings period, for example 1 Oct 2018 (fromTs=1538316000) to 14 Oct 2018 (toTs=1539521999), we will get 335 entries (1 less hour).
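To sanity-check these counts, here is a small sketch (expected_hourly_items is a hypothetical helper, not part of our API) that predicts the number of hourly entries for a local-time range:
```python
from datetime import datetime
from zoneinfo import ZoneInfo

SYDNEY = ZoneInfo("Australia/Sydney")

def expected_hourly_items(start: datetime, end: datetime) -> int:
    # Subtracting timezone aware datetimes gives the absolute (UTC) elapsed
    # time, so a DST crossover adds or removes an hour automatically
    return int((end - start).total_seconds()) // 3600 + 1

print(expected_hourly_items(
    datetime(2019, 4, 1, tzinfo=SYDNEY),
    datetime(2019, 4, 14, 23, 59, 59, tzinfo=SYDNEY),
))  # 337 (exiting DST adds an hour)
```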
3. Non-standard duration returned
When you make a query with a granularity greater than the Daylight Savings offset (e.g. 1 hour in the case of Australia/Sydney), the duration of the aggregated data will adjust to accommodate the Daylight Savings offset.
Extending the example above: You want to retrieve 14 days' worth of data in the day granularity for the period spanning 1 Apr 2019 to 14 Apr 2019. This timeframe crosses the Daylight Savings time boundary, and the granularity is greater than the Daylight Savings offset (1 hour).
This will result in the query:
?fromTs=1554037200&toTs=1555250399&granularity=day&timezone=Australia/Sydney
Provided the device has a full recording of data intervals in the day, this will result in all but one entry having a duration of 86400, which corresponds to 24 hours, in seconds.
However, the entry corresponding to 7 Apr 2019 (which encompasses the DST boundary) will have a duration of 90000, which corresponds to 25 hours.
And carrying the example to its conclusion, the same granularity for the period 1 Oct 2018 to 14 Oct 2018 results in a duration of 82800, which corresponds to 23 hours, for the entry that encompasses the DST boundary (7 Oct 2018).
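A short sketch of why these durations arise: measure a local calendar day from wall-clock midnight to the next wall-clock midnight, then compare the two in absolute time (Python 3.9+ standard library):
```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

SYDNEY = ZoneInfo("Australia/Sydney")

def day_duration_seconds(year: int, month: int, day: int) -> int:
    start = datetime(year, month, day, tzinfo=SYDNEY)
    end = start + timedelta(days=1)  # next wall-clock midnight
    # Converting both midnights to absolute time exposes the DST offset
    return int(end.timestamp() - start.timestamp())

print(day_duration_seconds(2019, 4, 1))   # 86400 (a normal 24 hour day)
print(day_duration_seconds(2019, 4, 7))   # 90000 (25 hours, exiting DST)
print(day_duration_seconds(2018, 10, 7))  # 82800 (23 hours, entering DST)
```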