Practically any BMS that can talk BACnet, OPC or Web Services can communicate directly with BuildingIQ’s built-in drivers. Fortunately, this covers 80–90% of the buildings we review and avoids a significant upfront cost. Outside of those protocols we can implement third-party translation devices, which cover almost 99% of the remaining systems. However, it is worth noting that many of these older, legacy systems may have networks that are already constrained; even if a third-party translation device can ‘talk’ to the system, it may be next to impossible to interface to more than a handful of points.
By far the biggest component of a BuildingIQ installation is the provision of the three relatively simple components required to be in place before we get started. These components are (1) internet connection, (2) BMS connection and (3) metering connection. Depending on the IT requirements and the state of the controls network, this can take anywhere from no time at all (if they already exist) to 3-6 months in the worst case.
Now, with those building blocks in place, the rest of the process is quite quick with only two components requiring man-hours. Firstly, the physical hardware is installed onsite by plugging into power and network(s). This can take as little as 5 minutes. The next step of mapping and configuration can take some more time and is generally the toughest and most labor-intensive part of any systems integration project. We have invested heavily in a range of tools to support the speed and reliability of this process and can complete the mapping and configuration process on a mid-sized building in as little as half a day. That’s correct – whereas a typical systems integration project can take many man-days or even weeks, our mapping and configuration process is generally completed in an order of magnitude less time and can even be done remotely from anywhere in the world.
BuildingIQ supports the creation of virtual points and virtual meters. Automating the workaround of failed sensors is not something we have currently implemented; however, through our monitoring, notifications and error rejection processes we currently cover most scenarios that can cause disruption.
Focusing on the air handler allows us to keep our point count low and be as kind as possible to a wide range of network capacities while still exerting significant influence over the system. Thinking about the way control commands ‘cascade’ through a control system, the air handler is able to influence downstream behavior at the thermostat (VAV) as well as upstream to central plant. For example, consider the impact on central plant if each of the air handlers ramps the supply temperature setpoint up or down by a few degrees: the chilled water or hot water valves will respond to meet the new setpoint and, in turn, the central plant will respond to the new load presented to it. In this way, BuildingIQ can have deep influence over the loading of the plant and therefore help push the loads to the most appropriate operating point without actually taking direct control of the central plant.
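To make the cascade concrete, here is a minimal, purely illustrative sketch (not BuildingIQ code); the proportional valve model, the 12 °C gap and the 50 kW-per-valve figure are all hypothetical round numbers:

```python
# Illustrative sketch: how a supply air temperature setpoint change at the
# air handler cascades to central plant load. All gains and temperatures
# below are hypothetical round numbers, not real site values.

def chilled_water_valve(supply_setpoint_c: float, return_air_c: float = 24.0) -> float:
    """Crude proportional model: the wider the gap between return air and
    the supply setpoint, the further the chilled water valve opens (0-1)."""
    gap = max(return_air_c - supply_setpoint_c, 0.0)
    return min(gap / 12.0, 1.0)  # assumed fully open at a 12 C gap

def plant_load_kw(valve_positions: list[float], kw_per_valve: float = 50.0) -> float:
    """Central plant load as the sum of valve demands (assumed linear)."""
    return sum(v * kw_per_valve for v in valve_positions)

# Four air handlers at a tight 13 C supply setpoint vs. relaxed to 15 C:
tight = [chilled_water_valve(13.0) for _ in range(4)]
relaxed = [chilled_water_valve(15.0) for _ in range(4)]
print(plant_load_kw(tight), plant_load_kw(relaxed))
```

Relaxing the setpoint by two degrees visibly reduces the load presented to the plant, which is the lever the optimization exploits.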
The model of the building is essentially a parameterized system of differential equations that describe plant dynamics, heat transfer characteristics, occupant heat loads and a wide range of other dynamics that are unique to each building. By feeding the building data into these equations, the model fitting algorithm finds the parameters that minimize the error between actual data and the model estimate. The algorithm used to fit parameters is essentially the same “cost function” approach used in the subsequent optimization process: instead of adjusting model parameters to minimize model error, the optimization adjusts space temperature targets to minimize 24-hour operating costs.
The model search for optimal parameters is done each night. The optimization search for ideal target temperatures over the next 24 hours is conducted every 3-4 hours, thereby creating a continuously updating “sliding window” of target temperatures.
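The fitting step can be sketched in miniature. The real model is a large system of differential equations; the toy below has a single hypothetical parameter `k` (how fast indoor temperature relaxes toward outdoor temperature) and finds it by minimizing squared error against synthetic data, the same cost-function idea described above:

```python
import numpy as np

# Hypothetical one-parameter building model: indoor temperature relaxes
# toward outdoor temperature at rate `k` (a stand-in for the full system
# of differential equations described above).
def simulate(k: float, t0: float, outdoor: np.ndarray) -> np.ndarray:
    temps = [t0]
    for t_out in outdoor:
        temps.append(temps[-1] + k * (t_out - temps[-1]))
    return np.array(temps[1:])

rng = np.random.default_rng(0)
outdoor = 20 + 8 * np.sin(np.linspace(0, 2 * np.pi, 96))  # one synthetic day
true_k = 0.15
measured = simulate(true_k, 22.0, outdoor) + rng.normal(0, 0.05, 96)

# "Cost function" fit: pick the k that minimizes squared model error.
candidates = np.linspace(0.01, 0.5, 200)
errors = [np.mean((simulate(k, 22.0, outdoor) - measured) ** 2) for k in candidates]
best_k = candidates[int(np.argmin(errors))]
print(round(best_k, 2))  # recovers a value close to the true 0.15
```

The production system searches a far larger parameter space with more capable methods, but the shape of the problem is the same: propose parameters, score the error, keep the best.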
The “cost function” mentioned above uses an advanced gradient descent methodology. This approach explores the vast search space by sampling results for proposed solutions, trying new solutions and tracking the improvement those changes produce. The algorithm gradually zeroes in on the ideal solution whilst also avoiding “local minima”, a trap to which many earlier algorithms, such as the Hartman Loop, can be susceptible. Local minima occur when the algorithm thinks it has reached the optimal solution but, in reality, it has only found a small “pocket” in the multi-dimensional space where a small move in any direction would make the result worse. Modern algorithms are able to continue searching in random areas of the space outside the local minimum so as to avoid this phenomenon as much as possible.
The optimization process stops when improvements to the cost function result fall below the configured tolerance.
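A hedged sketch of the idea, using a toy one-dimensional cost function with a deliberate local “pocket”: plain descent stops when the improvement drops below a tolerance, and random restarts across the search space guard against getting trapped. Everything here (the function, the learning rate, the tolerance) is illustrative, not the production algorithm:

```python
import random

# Toy cost function with two minima: a shallow "pocket" near x = +1 and the
# true minimum near x = -1. Plain descent from a bad start gets trapped.
def cost(x: float) -> float:
    return (x * x - 1.0) ** 2 + 0.3 * x

def descend(x: float, lr: float = 0.01, tol: float = 1e-9) -> float:
    """Gradient descent that stops once the improvement falls below `tol`,
    mirroring the configured-tolerance stopping rule described above."""
    prev = cost(x)
    while True:
        grad = (cost(x + 1e-6) - cost(x - 1e-6)) / 2e-6  # numeric gradient
        x -= lr * grad
        cur = cost(x)
        if prev - cur < tol:
            return x
        prev = cur

# Random restarts across the search space guard against local minima.
random.seed(1)
starts = [random.uniform(-2.0, 2.0) for _ in range(8)]
best = min((descend(s) for s in starts), key=cost)
print(round(best, 2))  # the restart strategy lands in the global minimum near -1
```

A single descent from a start near +1 would settle in the shallow pocket; taking the best result over several random starts is what escapes it.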
The algorithmic approach implemented by BuildingIQ is a significant departure from traditional functional descriptions, sequences of operations and generally hard-coded logic. That said, a variety of information is used to inform the process and constrain the optimization. This information includes comfort constraints, schedules and the client preference between energy savings and dollar savings. At a lower level within the site appliance, BuildingIQ is also able to implement a range of additional controls constraints that are developed in conjunction with the site’s operators. These controls constraints capture a range of important site limitations and may include measures such as ensuring that a certain static pressure range is never commanded or that supply temperatures are kept within a certain range at certain temperatures.
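As a purely hypothetical illustration of how such constraints might be applied, the sketch below clamps a proposed setpoint trajectory to a comfort band and a ramp-rate limit; the numeric bands are invented for the example:

```python
# Hypothetical example of applying comfort and controls constraints to a
# proposed setpoint trajectory before it is issued. The bands below are
# illustrative values, not actual site configuration.
COMFORT_BAND = (21.0, 24.5)     # allowed zone target range, deg C
MAX_STEP_PER_INTERVAL = 0.5     # controls constraint: limit setpoint moves

def constrain(proposed: list[float], current: float) -> list[float]:
    out = []
    for target in proposed:
        # clamp to the comfort band
        target = min(max(target, COMFORT_BAND[0]), COMFORT_BAND[1])
        # ramp-rate limit relative to the previously issued value
        target = min(max(target, current - MAX_STEP_PER_INTERVAL),
                     current + MAX_STEP_PER_INTERVAL)
        out.append(target)
        current = target
    return out

print(constrain([20.0, 26.0, 23.0], current=22.0))  # [21.5, 22.0, 22.5]
```

The optimizer is free to propose aggressive trajectories; the constraint layer guarantees nothing outside the agreed limits ever reaches the BMS.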
While we are talking about sequences, it is also worth reiterating the complementary nature of BuildingIQ optimization and other ‘tuning’ type work conducted at site. Consider a well ‘tuned’ chiller plant where the staging has been refined over time and perhaps a range of measures and upgrades have also been performed. Looking at this scenario, the work conducted to date could be looked at as “delivering the required load as efficiently as possible” whereas BuildingIQ takes on the role of “presenting the best possible load to that plant at the best possible time”. This approach extends the investment made to date and ensures that we avoid inefficient part load conditions as much as possible. The existing tuning might do its best to deliver against a 2% load efficiently, but isn’t it better to drive the system toward its “sweet spot” or, at the very least, take some load away from the central plant for as long as possible until a more substantial load can be presented?
The graphs that we looked at during the demo are near-real-time with a maximum typical delay of around 1 minute. Note that we also use a “change of value” type approach so some points may be a little older if they haven’t been updated for an hour or two.
To learn more about what is covered in this section we recommend you read the following White Papers:
Metering, Tariffs, M&V
We utilize a range of selectable algorithms that build mathematical models of the relationship between consumption and not only temperature but also humidity/enthalpy, building schedule, occupancy and a range of other possible model inputs that vary based on input availability and the selected algorithm. One of the most interesting approaches is called the “Support Vector Machine” – more details of which are available in our M&V White Paper.
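As a minimal sketch of the baseline idea (not the production SVM models), the example below regresses synthetic consumption data on a cooling-degree temperature term and an occupancy flag using ordinary least squares:

```python
import numpy as np

# Minimal weather-corrected baseline sketch. The production models are
# richer (humidity/enthalpy, occupancy, SVM-based algorithms); this only
# shows the shape of the idea: regress consumption on weather and schedule
# inputs, then predict a baseline. All data here is synthetic.
rng = np.random.default_rng(42)
temp = rng.uniform(10, 35, 500)                    # outdoor temperature, deg C
occupied = (rng.uniform(0, 1, 500) > 0.3).astype(float)
kwh = 120 + 6.0 * np.maximum(temp - 18, 0) + 80 * occupied \
      + rng.normal(0, 10, 500)                     # synthetic metered load

# Design matrix: intercept, cooling-degree term, schedule flag.
X = np.column_stack([np.ones_like(temp), np.maximum(temp - 18, 0), occupied])
coef, *_ = np.linalg.lstsq(X, kwh, rcond=None)
baseline = X @ coef
print(np.round(coef, 1))  # roughly recovers the true [120, 6, 80] structure
```

Swapping the least-squares fit for a support vector regression changes the machinery but not the workflow: inputs in, weather-corrected baseline out.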
To calculate the savings, we can replicate not only the utility tariff but also the structure of most 3rd party supplier contracts. For instance, a “flat rate plus demand charge” from a 3rd party supplier can be readily implemented within BuildingIQ Configuration; however, it may not be possible to configure some less traditional purchasing structures.
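A “flat rate plus demand charge” structure of the kind mentioned can be sketched as follows; the rates and the 15-minute interval assumption are hypothetical:

```python
# Sketch of a "flat rate plus demand charge" supplier structure as described
# above; the rates below are hypothetical, not any real contract.
FLAT_RATE = 0.12        # $ per kWh
DEMAND_CHARGE = 15.00   # $ per kW of monthly peak demand

def monthly_bill(interval_kwh: list[float], interval_hours: float = 0.25) -> float:
    """Bill from 15-minute interval reads: energy at the flat rate plus a
    demand charge on the highest average demand in any interval."""
    energy_cost = sum(interval_kwh) * FLAT_RATE
    peak_kw = max(kwh / interval_hours for kwh in interval_kwh)
    return round(energy_cost + peak_kw * DEMAND_CHARGE, 2)

print(monthly_bill([100.0, 120.0, 90.0]))  # the 480 kW peak dominates the bill
```

The demand-charge term is why shaping *when* load occurs, not just how much, shows up directly in the calculated savings.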
The core of BuildingIQ’s optimization and M&V systems has been designed to operate with only the utility data and to build a weather-corrected baseline on the previous years’ data. This approach can significantly reduce upfront costs and time delays while also improving accuracy through the use of utility-quality systems. A statistician may object at this point and question whether the component of HVAC savings within the total utility data is “within the range of uncertainty in the baseline model”, which we admit is a good point. To that end, the system is built to utilize more targeted sub-meters if they are already available or if the customer would like to go to the expense and disruption of installing additional HVAC sub-meters so as to provide greater certainty. That investment is at the customer’s discretion, and history would dictate that most customers will accept some uncertainty and prefer to put funds into upgrades that directly deliver savings.
It is also important to mention that in the absence of historical data, BuildingIQ can utilize a process of selected online and offline periods to populate the baseline. Since the integration is done at a network level, no legacy programming is overwritten by BuildingIQ. This allows us to easily switch on and off to test the impact of the system – something not readily done with a six-figure plant upgrade. Needless to say, it is a very powerful proof point when showing a customer their data matching the baseline when we are switched off and, conversely, showing loads below baseline when we are in operation.
Yes – we interface to interval reads through a variety of methods. In some instances we implement a traditional pulse collection device; however, more and more utilities are making data available in far ‘friendlier’ formats such as daily emails, FTP and web services. We are also investigating solutions that can talk with smart meters through mesh technologies such as Zigbee.
Our preferred method for collecting truly dynamic pricing is through a web services integration with the utility or retailer. It is exciting to note that we are seeing some markets develop dynamic pricing structures based on readily accessible information, such as the temperature at the nearest major airport – this allows the dynamic tariff to be readily calculated without a more complex integration with the energy supplier.
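A tariff keyed to published airport temperature could look like the hypothetical step function below; the thresholds and prices are invented purely for illustration:

```python
# Hypothetical dynamic tariff of the kind described: the price steps up with
# the temperature reported at the nearest major airport. The thresholds and
# rates below are illustrative only, not any real market's structure.
def dynamic_rate(airport_temp_c: float) -> float:
    """$ per kWh as a simple function of the published airport temperature."""
    if airport_temp_c >= 35.0:
        return 0.45   # extreme-heat event pricing
    if airport_temp_c >= 30.0:
        return 0.25   # elevated pricing
    return 0.10       # base rate

print([dynamic_rate(t) for t in (22.0, 31.5, 36.0)])  # [0.1, 0.25, 0.45]
```

Because the input is a public weather observation rather than a supplier feed, the tariff can be computed locally with no bespoke integration.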
Using the standard measure of R-squared (R²), we typically see values in the range of roughly 0.8 to 0.9 at the hourly level. Where this gets interesting, and in order to make an apples-to-apples comparison, is when we roll the hourly model up to a daily level. By doing this, even a relatively low R² of around 0.7 at the hourly level will typically equate to a daily R² of over 0.97.
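The hourly-to-daily effect can be demonstrated on synthetic data: hour-to-hour noise partially cancels when summed to daily totals, so the same predictions score a higher R² at the daily level:

```python
import numpy as np

# Demonstrates the "hourly model rolled up to daily" effect described above.
# All data here is synthetic; the noise level is an arbitrary choice.
def r_squared(actual: np.ndarray, predicted: np.ndarray) -> float:
    ss_res = np.sum((actual - predicted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1 - ss_res / ss_tot

rng = np.random.default_rng(7)
days, hours = 60, 24
signal = 50 + 20 * np.sin(np.linspace(0, 12 * np.pi, days * hours))  # true load
predicted = signal.copy()                          # a "perfect" hourly model...
actual = signal + rng.normal(0, 8, days * hours)   # ...scored against noisy meter data

hourly_r2 = r_squared(actual, predicted)
daily_r2 = r_squared(actual.reshape(days, hours).sum(axis=1),
                     predicted.reshape(days, hours).sum(axis=1))
print(round(hourly_r2, 2), round(daily_r2, 2))  # daily R2 exceeds hourly R2
```

Nothing about the model changed between the two scores; only the aggregation level did, which is why the daily figure is the fairer apples-to-apples comparison against daily billing data.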
Our guiding benchmark is the site’s performance against itself and the key measure for this is the M&V results. Other benchmarking approaches give a nice snapshot of relative performance for city, state and national comparison however they generally provide limited insight into what to actually do about poor performance. By comparison, detailed M&V provides deep insight into performance at an hour-by-hour level.
To learn more about what is covered in this section we recommend you read the following White Papers: