
Randy Cambell knows: sometimes halfway is worse than nothing at all.
The Heat Network Code of Practice is likely to become intertwined with building regs. In particular, heat networks that comply with the Code could be treated more favourably under SAP.
But as I highlighted in the last post, we’ve got a problem: there’s currently no such thing as a Code-compliant heat network. For SAP to reference the Code, some form of Code compliance regime will be required.
DECC has said it wants to keep any such regime light touch, which seems reasonable. But, as I hope to describe in this post, a light touch regime could greatly damage the heat market. In other words, the wrong compliance regime would be worse than no regime at all.
To start with, I want to declare that I’m a strong supporter of the Code of Practice. It’s the first document in the UK that lays out firm, sensible requirements for heat networks. Crucially, it provides a basis for clients to demand better networks, even if they’re not experts themselves. So even leaving SAP aside, some way to prove compliance with the Code is absolutely necessary.
But what might this proof look like? Here are three examples:
- One strategy would involve self-certification, where project teams give assurance that they’ve followed the Code at each stage. This assurance could also be accompanied by an evidence pack.
- Under a more involved approach, third party assessors might inspect statements and evidence packs and provide assurance that a project complies.
- A stricter compliance regime would include clear performance targets. Using measured data, project teams would have to validate actual performance against these targets to comply with the Code.
Some might suggest a combination of approaches, starting with an easier regime and tightening up over time.
While opinions differ on how compliance should work, there’s one thing I think everyone can agree on: if you actually follow the Code, you should get a network that’s significantly better than current practice.
This discussion forces us to ask: what happens if compliance, whether light touch or heavy-handed, doesn’t result in better networks? What if it’s possible for projects to achieve compliance without meeting some of the Code’s more challenging but essential requirements?
If this happens, any observer will say it must be due to a fault of the technology or flaws in the Code: the project team did everything they were supposed to – it was a Code-compliant scheme after all – but it ended up pants. Heat networks (or the Code) just don’t work.
So whatever compliance regime we choose, it must include the key ingredients that result in better networks. It’s no use easing into it. From the start, the regime must be strict enough to stop poor projects achieving compliance, otherwise we severely undermine the credibility of heat networks, or the Code, or both.
This doesn’t preclude a light touch approach, as long as by light you mean focused on the essentials. So what’s essential?
Borrowing from ITIL (a nerdy but hugely valuable framework for IT services), if you want to improve anything you need to take the following steps:
- Define the objective
- Assess the baseline
- Set clear, measurable targets
- Design and execute a solution to achieve the targets
- Check targets have been achieved
We can lift the answer to point 1 out of the foreword to the Code: our objective is to design, build and operate heat networks to a high quality to deliver customer satisfaction. For point 2, our baseline is this: in the absence of improved practice and scrutiny, heat network performance is likely to be poor (e.g. high return temps, high network losses and high cost of heat).
Point 3 is where Code compliance comes in. We need clear targets that support our objective. While these targets should include some softer binary requirements (e.g. will the scheme follow Heat Trust principles of customer protection?), they should mainly be performance targets with hard numbers attached and a clear means of measuring and verifying each one.
The list of targets can be kept short, keeping teams focused on the bits that matter most. Our aim here is not to create admin but to make it clear from the start what the team will have to achieve, and then hold them to it.
Point 4 is self-explanatory.
At point 5, the team measures against each of the targets from step 3. If they pass, the client gets what they wanted: a good network – oh, and the project is Code-compliant too. If they don’t pass, the team have got to put it right or fail to comply.
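As a rough sketch of what that step-5 pass/fail check could look like in practice, here's a minimal example. The KPI names and target values are entirely illustrative, not taken from the Code:

```python
# Hypothetical compliance check for step 5: compare measured KPIs
# against the targets set in step 3. All names and numbers below
# are illustrative, not from the Code of Practice.

TARGETS = {
    "network_losses_kwh_per_dwelling_pa": 1500,  # maximum allowed
    "avg_return_temp_c": 40,                     # maximum allowed
}

def check_compliance(measured: dict) -> list:
    """Return (kpi, measured_value, target, passed) for each target."""
    results = []
    for kpi, target in TARGETS.items():
        value = measured[kpi]
        results.append((kpi, value, target, value <= target))
    return results

measured = {
    "network_losses_kwh_per_dwelling_pa": 1600,
    "avg_return_temp_c": 38,
}

for kpi, value, target, passed in check_compliance(measured):
    print(f"{kpi}: {value} (target <= {target}) -> "
          f"{'PASS' if passed else 'FAIL'}")
```

The point is only that each target is a hard number with a yes/no outcome: the team either puts a failing KPI right, or the scheme doesn't comply.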
A lot rides on those targets. They could determine whether a contractor achieves practical completion and gets paid. They could influence SAP scores and building regs compliance.
With so much at stake, you can see why CIBSE and others might baulk at the prospect of the Code being used in this way. But unfortunately, the alternative (a looser approach that extends compliance to poorly performing projects) is bound to severely damage the credibility of heat networks and the Code, and bring on a policy backlash.
Either we’re confident that the Code will result in better networks, in which case we should make sure that its essential principles are enforced as part of compliance… or we’re not, in which case we should change the Code.
Do you really think BRE will favour code compliant networks over other networks?
Previously BRE assumed 10% distribution losses for heat networks – based on absolutely no evidence whatsoever – and got a lot of egg on their face as a result.
Do you think BRE will do the same again – assuming that code compliant networks are more efficient than other networks again based on absolutely no evidence whatsoever – and risk a lot of egg on their face all over again?
I strongly suspect they’ll be much more wary of making baseless modelling assumptions going forwards.
For operators of good quality networks that weren’t for some reason built to code, there’s still the PCDB route to proving the actual performance of your network regardless of origin:
http://www.ncm-pcdb.org.uk/sap/page.jsp?id=19
http://www.ncm-pcdb.org.uk/sap/pcdbdetails.jsp?pid=45&id=210000&type=501
The code of practice (3.5.4 – maximum distribution losses 15%) ought to end badly designed primary networks (up to the building). If it's designed right, and the buyer is spending £millions and therefore supervising the energy centre and buried pipe details, it'll probably be commissioned right too.
The code won’t prevent poor quality secondary networks (within the building) though, as there’s no upper limit* set on internal distribution losses in the code.
Let’s hope that either:
- BRE understand the difference between the network up to the building (well covered by the code of practice) and the building itself (largely outside the scope of the code) when discriminating between code-compliant and non-code networks as far as distribution loss factors are concerned; or
- BRE set the “deemed performance” so low that everybody has to go the PCDB route and declare the actual losses or full modelling results, including losses up to the *dwelling* rather than the *building* that could contain the dwelling.
*In review I had suggested 30% as a maximum and 15% as best practice (on an annualised basis) but drawing the system boundary (do you stop at the pipe, the HIU, or the DHW tank and any recirculation DHW loops?) was controversial. There was also a concern that the 82/71C contingent within the CIBSE membership would take up arms against you rather than coming with you if the target is so far from what they do today that they believe it to be unachievable.
Re-wording the code to replace *building* with *dwelling* or similar terminology for *final customer* may also work. Say a maximum 30% losses up to the *final customer* rather than 15% losses on the primary network and unlimited losses between primary network and the final customer. It’s those risers and corridors that are the hard part.
Casey, Marko,
In my view it’s probably not possible to define a calculation method for the likely heat loss from a network that is detailed enough to be accurate, yet simple enough to be legislated – just as it is not sensible to try to accurately calculate the annual heat losses from a building due to air infiltration.
The planning conditions still wield a lot of influence at the post-construction, pre-occupation stage. If the planners required an as-built SAP update before occupation, and if an in-situ test of the heat network losses was required as part of SAP, it could give a much-improved estimate of efficiency in use. This is just like air tightness testing, which we know has dramatically improved build quality.
I’ve a few thoughts on what this test would look like. Have you considered how it could be done?
Bertie
Bertie, Marko, Casey,
It’s entirely possible to put in a test regime at the post-construction, pre-occupation stage that is meaningful and yet simple enough to work in practice.
The key is to have a performance framework that accurately reflects underlying system performance, with a set of KPIs that are able to differentiate between “good” and “bad” – or in this case, “compliant”.
Marko, while I can appreciate the sentiment around 15% or 30% losses respectively, I think that % efficiency is a red herring in the case of network performance and should be avoided. Saying that one network has losses of 15% and another has losses of 30% doesn’t really tell us anything about the underlying performance of those two networks. As it happens, I did a presentation on this topic earlier this week, where I put up an example of a “bad” network, which everyone could agree was bad, then a “good” network that everyone could agree was good. I then showed that the % losses from the “bad” network were much lower than from the “good” one (“bad” often also equates to high usage, which brings the % losses down, since the absolute losses are broadly fixed).
Losses are all about kWh/dwelling losses pa, or W/dwelling real time.
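The two units here are interchangeable: an annual per-dwelling figure converts directly into an equivalent continuous load. A quick sketch (the 1,600 kWh figure is just an illustrative value):

```python
# Convert annual per-dwelling losses (kWh/dwelling pa) into an
# equivalent continuous load (W/dwelling). There are 8,760 hours
# in a (non-leap) year.

def kwh_pa_to_watts(kwh_per_annum: float) -> float:
    return kwh_per_annum * 1000 / 8760

# e.g. losses of 1,600 kWh/dwelling pa correspond to a continuous
# draw of roughly 183 W per dwelling, day and night, all year.
print(round(kwh_pa_to_watts(1600)))  # -> 183
```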
So, how do we go about measuring performance and checking compliance?
Simplistically, heat networks break down into three parts: generation, distribution and consumption. The KPIs for managing and assessing these are (or should be) quite different.
For generation, we want to look at plant room efficiency (or ultimately for the operator £/kWh of heat generated). This is easy to quantify and for most new heat networks (read: gas boilers in the basement of a multi-storey residential development) can be measured simply by having two meters: a gas meter and a heat meter on the boundary of the plant room. There are other lower level KPIs that we might then want to look at, but fundamentally, that’s what counts.
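The two-meter calculation really is that simple. A sketch, with illustrative meter readings:

```python
# Plant-room efficiency from the two meters described above: a gas
# meter at the inlet and a heat meter at the plant-room boundary.
# The readings below are made up for illustration.

def plant_room_efficiency(heat_out_kwh: float, gas_in_kwh: float) -> float:
    """Heat delivered at the plant-room boundary per unit of gas burned."""
    return heat_out_kwh / gas_in_kwh

# e.g. 800,000 kWh of heat generated from 1,000,000 kWh of gas
print(f"{plant_room_efficiency(800_000, 1_000_000):.0%}")  # -> 80%
```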
For consumption/demand, we want to look at kWh/dwelling pa. This, however, is much more of a build (losses from fabric of building, flow restrictors on taps, etc) and behaviour (e.g. meters) issue, so I won’t go into this here. However, kWh/dwelling does obviously have a pretty big impact on cost of heat pa, which is ultimately what we want to be focusing on at a system level.
Then we come to distribution.
Marko, I agree entirely with your point that definition is important and that there are all sorts of ways to cut the cake – i.e. where does the network end and the dwelling start? If we have a network with an HIU and indirect supply (cue the howls of rage from certain quarters) then I think we should be defining the “distribution network” as being everything between the plant room boundary (read: plant room meter) and the HIU (more specifically, at the meter on the HIU).
We (FairHeat) have put quite a lot of work into developing a performance framework for monitoring network performance. Ultimately, I would argue that the correct measure of network performance is kWh losses/dwelling per annum. Provided that metering systems are in place (sigh) calculating losses between plant room meter and dwelling meters is relatively straightforward.
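Given that metering is in place, the per-dwelling losses calculation is a straightforward subtraction. A sketch with invented numbers:

```python
# Network losses per dwelling: heat into the network at the
# plant-room meter, minus heat delivered at the dwelling meters,
# divided by the number of dwellings. All figures illustrative.

def losses_per_dwelling(plant_meter_kwh: float,
                        dwelling_meter_kwh: list) -> float:
    """kWh losses/dwelling per annum between plant room and dwellings."""
    delivered = sum(dwelling_meter_kwh)
    return (plant_meter_kwh - delivered) / len(dwelling_meter_kwh)

# 100 dwellings, 620,000 kWh into the network, 460,000 kWh delivered
dwellings = [4600.0] * 100
print(losses_per_dwelling(620_000, dwellings))  # -> 1600.0
```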
There is then a cascade of different KPIs within the performance framework, which measure different aspects of network performance and tie fairly neatly back to the ADE CIBSE Heat Network Code of Practice (bringing this back to Casey’s compliance question).
So, average network flow temperature, average network return temperature, % bypass flows etc. are all things that can be calculated from meter reads. These can then all be referenced against minimum standards within the Code and ideally (hands clasped in fervent prayer) against the specification for the scheme.
We would argue that dwelling level performance should also be measured for every dwelling and checked for compliance, as poor performance at a dwelling level will kill the network – and it takes a remarkably small number of poorly performing HIUs to destroy performance (0.8m3 of let-by at 78°C doesn’t help your return temps…).
The best measure of the impact of an individual dwelling on your network is the volume weighted average return temperature for that dwelling. We have been putting together a practical testing regime for use post-construction, but pre-occupation and are trialing this as part of a DECC sponsored project.
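For anyone unfamiliar with the metric, VWART is simply the return temperature averaged over the volume drawn, so a small volume of hot let-by drags the figure up disproportionately. A sketch (the interval values are illustrative):

```python
# Volume-weighted average return temperature (VWART) for a dwelling,
# from interval meter reads. Each read is (volume drawn in the
# interval, average return temperature over that interval).
# Example values are illustrative.

def vwart(reads: list) -> float:
    """reads: list of (volume_m3, return_temp_c) tuples."""
    total_volume = sum(v for v, _ in reads)
    return sum(v * t for v, t in reads) / total_volume

# A dwelling mostly returning cool water, plus a small let-by
# period returning hot water, sees its average dragged up:
reads = [(0.9, 30.0), (0.1, 78.0)]
print(vwart(reads))  # -> 34.8
```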
VWART also figures heavily in a testing regime for HIUs that we have developed as part of the same project. We are currently finishing up testing (at SP in Sweden) for 5 HIUs from leading manufacturers, so will have data back on what volume weighted average return temperatures should be expected for given HIUs. This means that onsite performance can be checked against reference benchmarks and anomalous performance identified. It will also mean that “compliance” can be assessed as part of the procurement process.
We will be publishing the results from that testing in April (more on that in a separate post).
Coming back to Casey’s original point, I am also concerned that taking a light touch approach would risk killing the HN Code of Practice. If “compliance” turns into a box ticking exercise then we are likely to end up getting schemes that are deemed to be “Code compliant” but which have rubbish performance. That would kill the Code.
My view is that we shouldn’t be too prescriptive on exactly how people implement the code, but we should be cast iron on what constitutes compliance from a performance perspective. So I’m all for point 3. on Casey’s ITIL based 5 step process: “3. Set clear, measurable targets”
Given the data that Guru now has access to, I’m sure that it wouldn’t be a large stretch for CIBSE to come up with realistic targets based on empirical data.
Once we have those, putting together a regime to test for compliance wouldn’t be difficult – and being “compliant” would actually mean something…
Gareth
(Oh damn, did I just do a Marko! 😉)
Losses at kWh/dwelling/year are a good metric for comparing the performance of as-built heat networks serving similar layouts.
They’re still unfair for comparing networks serving low density/low energy housing with high density/high energy housing – you might want to consider kWh/dwelling/km/year.
I would NOT use kWh/dwelling/year (or kWh/dwelling/km/year) if I was a planner deciding whether or not a heat network is appropriate, or for comparing heat networks with alternative heating systems. Whether you use generation efficiency and % distribution losses (the current approach) or generation efficiency and fixed distribution losses (the proposed approach) is much of a muchness at this point, because you’re comparing that particular scheme with non heat-network alternatives.
Using fixed distribution losses has more basis in reality and I’d support this, but best of luck getting this into BREDEM/SAP. Targeting 30% losses – for new builds that all have similar heat loads and are all built to a similar density – is much the same thing and easier to shove through the existing frameworks.
(Did Marko just propose accepting a political compromise? What’s got into me?)
VWART – The Danes worked out a long time ago that volumetric weighted average return temperature is a very simple proxy for a multitude of heat network sins. Higher end heat meters report this by default, and best practice heat network operators charge users according to volumetric average return temperature in addition to total kWh (often with a charge for peak kWh too, to incentivise sensible load profiles).
The Odense scheme – billed on volume only – effectively did the same thing using only a clockwork water meter. Your supply is at X degC. The more heat you can pull out of each litre through lowering return temperatures, the better for you – the charge for each litre is the same.
The real fun comes with sub-KPIs I think. Yes my VWART is high, lovely, but why is my VWART so high? Split it by space heating mode, DHW tapping mode, DHW keep-hot mode, network keep-hot mode, and accidental let-by/open bypasses. Compare against theoretical VWART for space heating using those emitters and a given weather profile, theoretical DHW tapping returns using that plate/tank coil and a given tapping profile, and theoretical keep-hot requirements. (I think I copied you guys on the suggestions to BRE on this one)
Metering for it is challenging though: the instrumentation normally specified can’t discriminate between these regimes, and the theoretical comparisons bear little resemblance to users’ behaviour.
An expert will take one look at the system, grasp a few flow and return pipes by hand, and stick a screwdriver on the odd valve to instantly know why the VWART is high. You can’t use that opinion in court though, and the finance director might rightly be skeptical. This is the true value of instrumentation I think.
Seems pointless until DECC takes an interest in anything other than electric heat pumps, as promoted beyond all reasonable bounds by its ex-Chief Scientist.
Most heat networks I hear of have awful losses, in kWh/y.dwelling, or kWh/m.y, due to systems of the past being laid to supply what we assume are the buildings of the future.
The most worthwhile action might be to replicate a system like Lystrup in the UK. Or such a system with a flow temp. of 65, not 55 degC.
Bear in mind though that if the heat is CHP it would otherwise be thrown away so the CO2 emissions from 15-20% heat main losses are less than from electricity network losses of 12% (the average for the low voltage distribution system at 415 V.)
Marko,
Agreed that single standards for kWh/dwelling can’t be applied from high-density developments to low-density ones. Indeed, I should have made it clear that what I am referring to as a “network” (using the Heat Network Metering & Billing Regulations terminology) is communal heating networks and heating networks within buildings connected to district heating networks. So, my definition of “network” should have referred to the meter at the plant room boundary or the building level meter (at the entry point to the building).
With regards to fixed distribution losses vs. % losses, although I understand the rationale for a political compromise, I wouldn’t roll over just yet.
Two issues with the % losses approach:
1. It doesn’t reflect underlying performance dynamics and may lead to counter-productive outcomes; and
2. Measurability at the post-construction, pre-occupation stage.
With regards to the first point, I mentioned the Heat Networks Metering & Billing Regulations above and they provide a neat illustration of the problem.
For the cost effectiveness test under the HNM&B regs, it is assumed that installing meters will reduce benchmark demand by 20% per annum. Using the % losses approach, installing end-point meters will therefore result in increased system losses.
As an example, if we take the estimated heat losses from DECC’s Heat Estimator tool, then a flat in SE England will have a demand of 6,218kWh per annum. Let’s say system losses are 1,600 kWh/dwelling pa (no comment). That gives 26% system losses – so under your 30% target. Now, say that we plan to install meters. Demand is therefore projected to drop 20% to 4,974kWh pa. Network losses remain at 1,600kWh/dwelling pa, but “increase” to 32%… which is now above the target. So we don’t install meters?
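The arithmetic above can be checked in a few lines – the same fixed losses expressed as a percentage of demand, before and after metering cuts demand by 20%:

```python
# Reproducing the worked example above: fixed losses of
# 1,600 kWh/dwelling pa as a percentage of demand, before and
# after metering cuts demand by the assumed 20%.

LOSSES = 1600   # kWh/dwelling pa, fixed
demand = 6218   # kWh/dwelling pa, DECC Heat Estimator figure

before = LOSSES / demand
after = LOSSES / (demand * 0.8)  # metering assumed to cut demand 20%

print(f"before metering: {before:.0%}")  # -> 26%
print(f"after metering:  {after:.0%}")   # -> 32%
```

Nothing physical has changed about the network, yet the scheme has flipped from “compliant” to “non-compliant” – which is exactly the counter-productive outcome a % losses target invites.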
And the Passivhaus scheme that Bertie Dixon’s folks at Max Fordham are designing for Camden? Are we going to hold them to a % losses target?
It’s a related issue on the measurability point – what demand figure do we use? When the building is empty, demand will be low, so… do we use the actual % losses we measure? In which case it will never pass. Or do we use some notional demand figure? Then, when the building is occupied, consumption is going to be way lower than the numbers commonly used in modelling, so there will always be a discrepancy between design and reality (as demand will always be lower than forecast). As a result, if we use a % losses approach, measured % losses will always be worse than forecast, and we will have “compliant” schemes where the % losses are way higher than predicted – which will undermine the credibility of the approach.
Or we can stick to our guns and go the fixed kWh/dwelling pa (W/dwelling) route. In which case: (a) we will be able to directly measure losses and assess compliance at the post-construction, pre-occupation stage; and (b) we should have a reasonable expectation that forecast and reality will be in the same vicinity.
Very quick side note on your VWART point. Agree totally that it’s when you get down to VWARTs for DHW, space heating and standby that life gets really interesting. The output from the HIU tests we are carrying out in Sweden will include individual VWARTs for each of these elements.
These can then be tested as part of the post-construction, pre-occupation stage compliance tests. As you say, this will quickly identify where performance is falling down, whether it be poor balancing of rads, incorrect pump settings, DHW set point being too high, faulty valve etc.
On the disaggregation point, you are also correct; it is difficult to separate out space heating from DHW and standby when there is just a single stream of meter reads. However, it’s not impossible. This is an issue that Casey and his team (including a math PhD) have been working on for the past year as part of the DECC project. The good news is that they have cracked it, and the system being released at the end of March will report VWART and the various sub-VWARTs on a per dwelling basis.