The Wattwatchers API uses Unix timestamps for passing date-time values to the API (e.g. via query string parameters) and for data that is returned via the API (e.g. the timestamp attribute of energy data).

We use seconds-based timestamps, as integers, without a milliseconds component.

This timestamp format is timezone agnostic, which is one of the reasons we use it. It provides a consistent method of representing time regardless of timezone and Daylight Savings rules.

Converting to Unix timestamp

Most programming languages and common date-time libraries provide an easy way to convert from native/timezone aware datetime objects to Unix time.

For example, in Javascript:

let yourDate = new Date('2012-08-10'); // use an ISO 8601 date string for reliable parsing
let timestamp = Math.floor(yourDate.getTime() / 1000);

Similarly, the moment.js library provides a method for returning the Unix timestamp:

let yourDate = moment();
let timestamp = yourDate.unix();

Another example, this time in Python 3:

from datetime import datetime
now = datetime.now()
timestamp = int(now.timestamp())  # truncate to whole seconds, as the API expects integers

And again, popular Python date-time libraries, such as pendulum, provide options for easily getting the Unix time:

import pendulum
dt = pendulum.now()
timestamp = dt.int_timestamp

Typically, when a programming language or library provides a timezone-aware date-time object, the language/library will automatically convert this value to the correct Unix timestamp when calling the relevant method/property.

This means you can work with your native or library-based date-time objects as you need, and then do the conversion at the last minute, just before calling our API.
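To illustrate this pattern, here is a minimal Python sketch (using the standard-library zoneinfo module, available from Python 3.9) that works with a timezone-aware datetime throughout and only converts to an integer Unix timestamp at the last step. The specific date and timezone are just example values:

```python
from datetime import datetime
from zoneinfo import ZoneInfo  # standard library from Python 3.9

# Work with a timezone-aware datetime throughout your application...
local = datetime(2019, 4, 1, 9, 30, tzinfo=ZoneInfo("Australia/Sydney"))

# ...and convert to an integer Unix timestamp only when building the API request
from_ts = int(local.timestamp())
print(from_ts)  # 1554071400 (i.e. 2019-03-31T22:30:00Z)
```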

For us mere humans, there are a number of handy online tools, such as the Epoch Converter, for translating timestamps to and from the appropriate UTC or local time—e.g. when you are trying to troubleshoot debug output or manually interacting with the API via tools such as Postman.

Working with Timezones and Daylight Savings Time (DST)

Whether you need to consider timezones when interacting with the Wattwatchers API will depend on your specific application.

If you are just polling data—i.e. retrieving the latest data from a device on a rolling basis—at granularities less than an hour (e.g. 5m, 15m or 30m), you shouldn't need to worry about timezones.

If your application is timezone aware—e.g. you provide a user interface to your end user presenting data in their timezone—you may need to consider timezones in three ways:

1. fromTs and toTs may need to be adjusted

For example, Australia/Sydney applies a one-hour offset during Daylight Savings, so you may need to adjust your fromTs or toTs to reflect the target timezone if the selected dates cross the Daylight Savings boundary.

For example: You want to retrieve 14 days' worth of data in the 15m granularity for the period spanning 1 Apr 2019 to 14 Apr 2019. This timeframe crosses the Daylight Savings time boundary, so you'd need to adjust the fromTs and toTs accordingly. In this case fromTs=1554037200 and toTs=1555250399.

By comparison, if you were to do this query for the same date-time, but in UTC, the values would be fromTs=1554076800 and toTs=1555286399.
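Both pairs of values can be reproduced with Python's standard-library zoneinfo module—a sketch for checking your own conversions, not an official client:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

syd = ZoneInfo("Australia/Sydney")

# Midnight 1 Apr to 23:59:59 14 Apr, interpreted in Australia/Sydney
from_ts = int(datetime(2019, 4, 1, 0, 0, 0, tzinfo=syd).timestamp())
to_ts = int(datetime(2019, 4, 14, 23, 59, 59, tzinfo=syd).timestamp())
print(from_ts, to_ts)  # 1554037200 1555250399

# The same wall-clock times interpreted as UTC give different values
utc_from = int(datetime(2019, 4, 1, 0, 0, 0, tzinfo=timezone.utc).timestamp())
utc_to = int(datetime(2019, 4, 14, 23, 59, 59, tzinfo=timezone.utc).timestamp())
print(utc_from, utc_to)  # 1554076800 1555286399
```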

2. Different number of items returned

As the Daylight Savings crossover may add or subtract a period of time (e.g. 1 hour in the case of the Australia/Sydney timezone), the data returned will reflect this crossover.

For example: You want to retrieve 14 days' worth of data in the hour granularity for the period spanning 1 Apr 2019 to 14 Apr 2019. This timeframe crosses the Daylight Savings time boundary.

The query for this would look like:
?fromTs=1554037200&toTs=1555250399&granularity=hour&timezone=Australia/Sydney

Running this query will return 337 items.

But if you perform the same query for a period that does not cross the DST boundary, say 1 Mar 2019 (fromTs=1551358800) to 14 Mar 2019 (toTs=1552568399), you will get 336 items. So what's going on?

Because the first query crosses the Daylight Savings boundary (exiting Daylight Savings, in this case), an extra hour is added, and thus an extra entry is returned in the result.

Conversely, if we run a similar query when entering a Daylight Savings period, for example 1 Oct 2018 (fromTs=1538316000) to 14 Oct 2018 (toTs=1539521999), we will get 335 entries (one hour less).
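These item counts fall out of simple arithmetic on the inclusive timestamp windows. A quick Python check (hour_items is a helper name of our own, not an API call):

```python
def hour_items(from_ts: int, to_ts: int) -> int:
    """Number of hour-granularity items in an inclusive [from_ts, to_ts] window."""
    # toTs is inclusive, so the window spans (to_ts + 1 - from_ts) seconds
    return (to_ts + 1 - from_ts) // 3600

print(hour_items(1554037200, 1555250399))  # 337 -- 1-14 Apr 2019, exits DST
print(hour_items(1551358800, 1552568399))  # 336 -- 1-14 Mar 2019, no crossing
print(hour_items(1538316000, 1539521999))  # 335 -- 1-14 Oct 2018, enters DST
```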

3. Non-standard duration returned

When you make a query with a granularity greater than the Daylight Savings offset (e.g. 1 hour in the case of Australia/Sydney), the aggregated data will be adjusted to accommodate the Daylight Savings offset.

Extending the example above: You want to retrieve 14 days' worth of data in the day granularity for the period spanning 1 Apr 2019 to 14 Apr 2019. This timeframe crosses the Daylight Savings time boundary, and the granularity is greater than the Daylight Savings offset (1 hour).

This will result in the query:
?fromTs=1554037200&toTs=1555250399&granularity=day&timezone=Australia/Sydney

Provided the device has a full recording of data intervals in the day, this will result in all but one entry having a duration of 86400, which corresponds to 24 hours, in seconds.

However, the entry corresponding to 7 Apr 2019 (which encompasses the DST boundary) will have a duration of 90000, which corresponds to 25 hours.

And carrying the example to its conclusion, the same granularity for the period 1 Oct 2018 to 14 Oct 2018 results in the DST-crossing entry having a duration of 82800, which corresponds to 23 hours.
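The day durations can be sketched in Python by measuring the seconds between successive local midnights in Australia/Sydney (day_durations is a helper name of our own, not part of the API):

```python
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo

syd = ZoneInfo("Australia/Sydney")

def day_durations(year: int, month: int, first_day: int, days: int) -> list[int]:
    """Seconds between successive local midnights in Australia/Sydney."""
    start = datetime(year, month, first_day, tzinfo=syd)
    out = []
    for _ in range(days):
        # aware datetime + timedelta does wall-clock arithmetic,
        # so this is midnight of the next local day
        nxt = start + timedelta(days=1)
        out.append(int(nxt.timestamp() - start.timestamp()))
        start = nxt
    return out

apr = day_durations(2019, 4, 1, 14)
print(apr[6])   # 90000 -- 7 Apr 2019, the 25-hour day when DST ends
oct_ = day_durations(2018, 10, 1, 14)
print(oct_[6])  # 82800 -- 7 Oct 2018, the 23-hour day when DST begins
```

Every other entry in both lists is the standard 86400 seconds.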