The Limits of Cap and Trade

This blog is my pragmatic view of environmental issues and, to be honest, goes way down into the weeds because most environmental issues are not simple. Spoiler alert: this one is the worst yet.

The reason for this blog post is to document the possibility of a “bad thing” in New York that has a reasonable chance of occurring in late August and September of 2018, and possibly as soon as late this summer. I am worried about compliance, and a potential threat to electric system operations, under the Cross State Air Pollution Rule (CSAPR) NOx Ozone Season trading program. If the feces get entangled in the impeller, remember you heard it here before it happened, so you will know that the agencies were told their plans were risky. Unfortunately, in order to describe the “bad thing” you need some background information that may put you to sleep.

Before proceeding, a disclaimer. Before retirement from the electric generating industry, I was actively analyzing air quality regulations that could affect company operations. The opinions expressed in this post do not reflect the position of any of my previous employers or any other company I have been associated with; these comments are mine alone.

Background

First off, you ought to know about trading programs. EPA does a good job describing the fundamentals of cap and trade. What you need to know about this pollution control approach is that there are two components: the cap and tradable allowances for the pollutant covered. The cap sets a limit on total regional emissions that must be met over a trading season, such as a year or the ozone season from May through September. The cap is set at a level such that the pollutant of interest will be reduced enough to improve air quality to the appropriate standard. Setting the cap level correctly is critically important: too high and the environmental objectives won’t be met; too low and the market mechanism won’t work. Emissions must be measured accurately and transparently because affected sources have to surrender an allowance for every ton of pollution emitted. EPA’s Acid Rain Program is the poster child for a successful cap and trade program because greater-than-required reductions occurred earlier than expected and at much lower cost than projected.

The key to cap and trade success is that sources with the most cost-effective control options install those controls, limit their emissions to less than their allocations, and trade their excess allowances to sources facing more expensive options. The result is that the cap is met in the most cost-effective manner. My concern with cap and trade programs in general, and this one in particular, is that for the market to work somebody has to be able to over-control. If the cap is set so low that no sources have options to over-control, then no excess allowances are generated and no one has anything to trade. In the worst case, affected sources will run only until they have no more allowances and then will have to shut down.

Environmental NGOs have argued that cap and trade programs do not guarantee that every source reduces its emissions, so they claim the approach is unfair because some sources will not lower emissions and local air quality will not improve everywhere. However, there are national ambient air quality standards that cannot be exceeded for any pollutant with a pronounced local impact, so no source should be over an emissions limit that causes those problems. Moreover, the pollutants covered in cap and trade programs are related to regional problems, such as acid rain and ozone, where the local effects are small. Nonetheless, due to a court settlement, the CSAPR rules include a limitation on state emissions that restricts interstate trading to prevent this, in my opinion, non-existent problem.

Updates to the CSAPR were proposed in 2015 and finalized in 2016 that included changes to address this problem. The feature the CSAPR update rule added for this concern is called the compliance assurance mechanism. In addition to the cap, a second level was set to limit interstate transfers. For each state in the trading program, the state’s allowance budget plus a newly defined variability limit constitutes that state’s assurance level. Each state’s assurance level takes into account the inherent year-to-year variability in the state’s baseline emissions. The intent is that emissions in a state can exceed the budget due to natural variability (e.g., hot weather making units run more), but sources in the state cannot rely on out-of-state allowances for routine compliance. In 2017, or any later year, if a state’s total emissions are greater than the sum of the state’s budget and variability limit, the assurance provisions are triggered. In that case, EPA’s rationale is that the state is using more allowances than inherent variability requires and is therefore relying on interstate transfers for compliance. When this provision is triggered, EPA determines which facilities exceeded their individual assurance levels and requires them to surrender additional allowances equal to three times the excess over the assurance level.
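As a rough sketch of the assurance-provision arithmetic described above (a minimal illustration with hypothetical facility numbers; the 21% ozone-season variability limit is the figure cited later in this post):

```python
def assurance_level(budget_tons, variability_limit=0.21):
    """State (or facility) assurance level: budget plus the variability limit."""
    return budget_tons * (1 + variability_limit)

def extra_surrender(emissions, level, state_triggered):
    """Allowances surrendered beyond the normal one-per-ton obligation.
    If the state as a whole exceeds its assurance level, each facility
    over its own level surrenders three extra allowances per excess ton."""
    if not state_triggered:
        return 0
    return 3 * max(0, emissions - level)

# Hypothetical facility: 1,000-ton assurance level, 1,100 tons emitted,
# in a state that tripped its assurance level.
print(extra_surrender(1100, 1000, True))    # 300 extra allowances
print(extra_surrender(1100, 1000, False))   # 0 -- state stayed under its level
```

The three-for-one surrender is what makes the uncertainty discussed later so consequential: a facility cannot know its own exposure without knowing the state total.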

Because the ozone limit has been ratcheted down over the years, there still are many areas that do not attain the current national ambient air quality standard for ozone. The CSAPR NOx Ozone Season trading program is specifically designed to reduce the interstate ozone transport that contributes to that problem. Note that this is the fifth round of NOx reduction programs for New York. As a result, the easy, cheap, and quick NOx control options have already been implemented. It is recognized that pollution control costs increase exponentially as control efficiency increases, so any further reductions will be expensive and probably cannot be implemented quickly.

Because the cap level is so important, I need to explain how EPA determines the cap size. I could easily double the length of this post, and surely put to sleep anyone who has read this far, if I were to explain in detail how EPA set this cap. Briefly, then: EPA uses the production cost model Integrated Planning Model (IPM) to analyze the impacts of air quality policies. This is a massive model that purports to estimate how the entire United States utility sector will react to changes in air quality regulations. In order to do that it has to model not only generator operations, fuel costs, and control equipment strategies, but also the transmission system. However, the transmission component has been critically flawed when it comes to New York. In particular, the largest load center in the state, New York City, is mostly on islands; transmission into the City is limited, so there are limits on how much electricity can be imported. In order to model the entire United States, IPM over-simplifies the New York transmission grid. As a result, EPA’s IPM modeling projects that the least-cost solution is simply to generate power elsewhere, significantly under-estimates the amount of power that has to be generated in the City and on Long Island (and the resulting emissions necessary to keep the lights on), and sets a cap too low to accommodate the New York City constraint.

The New York allocations from EPA in the draft CSAPR update rule had the same flaw as previous programs because of this shortcoming. I was responsible for some comments on the draft and we had some success: the final rule changed the New York allocation for a different reason and raised the final allocation. In 2016 the New York NOx Ozone Season budget was 10,157 allowances. Even with the additional allowances, the final CSAPR 2017 NOx Ozone Season budget is only 5,135 allowances, which is close to a 50% reduction. Actual 2016 NOx ozone season emissions in New York were 6,521 tons, a 64% reduction from the start of the last New York NOx Ozone Season program in 2008. On the face of it, then, if ozone season emissions in 2017 are the same as in 2016, there will be a shortfall of 1,386 tons, or roughly 20%.
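The shortfall arithmetic works out as follows (simply restating the numbers in the preceding paragraph):

```python
budget_2017 = 5135       # final CSAPR 2017 NOx Ozone Season budget for NY (allowances)
emissions_2016 = 6521    # actual 2016 ozone season NOx emissions in NY (tons)

shortfall = emissions_2016 - budget_2017
print(shortfall)                                 # 1386 tons
print(round(100 * shortfall / emissions_2016))   # 21 -- roughly a 20% shortfall
```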

There is another complication. EPA allows banking, i.e., unused allowances are carried forward and can be used in later years. However, the final CSAPR update rule included a reduction in the allowance banks. New York affected sources argued, in vain, that because we had already made significant reductions due to other state initiatives, it would be unfair to discount the banked allowances that were earned as a result of those control investments. EPA calculated that there was a bank of 350,000 allowances in the affected states at the end of 2016 and argued that a bank of that size would have precluded additional reduction investments until it was drawn down considerably, so they promulgated a reduction of the total bank to 1.5 times the sum of the aggregated variability limits. The resulting across-the-board three-to-one discount, with no consideration of individual interim state actions, was a major hit to New York compliance strategies. If historical emissions remain constant, the affected New York sources have a bank of only 3,060 allowances to cover the annual shortfall of 1,386 tons.
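To put the post-discount bank in perspective, a quick calculation (assuming, per the figures above, that emissions hold at their 2016 level):

```python
bank = 3060                       # NY bank after the across-the-board discount
annual_shortfall = 6521 - 5135    # 1,386 tons per ozone season at 2016 emission rates

seasons_covered = bank / annual_shortfall
print(round(seasons_covered, 1))  # 2.2 -- the bank is gone in roughly two warm seasons
```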

The EPA allocations are to the state, and each state has the right to determine how those allowances are distributed to the affected sources. In order to account for new sources, the New York Department of Environmental Conservation sets aside 5% of the total allocation for any new sources that come on line during the year. Previously, any unused set-aside allowances were eventually returned to the affected sources. Unfortunately, the Cuomo Administration also had plans for the New York allocations. After the “success” of the Regional Greenhouse Gas Initiative CO2 allowance auctions, a revenue stream created outside the legislative appropriation process, the Administration got wind of these allowances and thought it could do the same thing. However, auctioning this kind of allowance is a whole different ball game, and the State did not try to auction all the allowances. Instead it siphoned off 10% of the allowances to the Energy Efficiency and Renewable Energy Technology (EERET) account and required that any unused new source set-aside allowances also go to EERET. So instead of receiving the full allocation of 5,135 allowances, the affected sources were allocated only 4,362.

Affected sources in New York begged the State for the right of first refusal to buy the allowances that were skimmed off, but the language in the rule specified sale on the “open market”. Consequently the State refused to incorporate that request into the sale and, to add insult to injury, specified that all the allowances had to be purchased in one batch. The New York 2017 allowances went to Louisiana and the 2018 allowances to Texas, where, because of the size of those state budgets, they amount to a fraction of the variability limit and will most likely be used there. As best as I can tell, the sale of the 2017 EERET allowances must have netted over $280,000.
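The allocation arithmetic can be reconstructed approximately from the percentages above (the 10% EERET and 5% set-aside shares are from the text; treating the reported ~$280,000 in proceeds as covering the 10% EERET share alone gives only a rough, lower-bound implied price):

```python
state_budget = 5135
eeret = round(state_budget * 0.10)        # 10% siphoned to EERET: ~514 allowances
set_aside = round(state_budget * 0.05)    # 5% new source set-aside: ~257 allowances

source_allocation = state_budget - eeret - set_aside
print(source_allocation)                  # 4364, close to the 4,362 actually allocated

implied_price = 280_000 / eeret           # rough $/allowance from the reported proceeds
print(round(implied_price))               # ~545 dollars per allowance
```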

The final consideration in this tale of an obscure air quality compliance issue is the size of the allowance bank. Academics and environmental NGOs cannot abide large margins between allowances and emissions and, in the case of the RGGI allowance margin, are arguing that the margin should be very small. Their rationale is that if allowances are scarce, sources that need them to run will have to buy them at higher cost, which in the case of RGGI will increase the cost of carbon and eventually influence behavior. On the pragmatic side of affected-source compliance, however, there are advantages to a comfortable allowance margin. Without delving even deeper into the mire of allowance compliance, there can be regulatory and financial implications if an emissions monitoring error that increases reported emissions is discovered after the compliance reconciliation deadline and the affected source does not have enough allowances in its account to cover the difference. Environmental staff associated with emissions monitoring generally recommend keeping at least a 5% buffer in the allowance bank for that contingency. Furthermore, EPA acknowledges that there is inherent year-to-year variability in emissions, as reflected in the CSAPR variability limit of 21%, so companies that provide power to the public like to have banks available to cover operational variations. In my opinion an allowance bank under 5% is very risky, and I would recommend a minimum of 25% to cover operational and monitoring contingencies. The key point is that, except in rare instances, this issue has been only a theoretical problem at most companies for almost all cap and trade programs.

After I completed the draft of this post I found a recent report on the CSAPR Ozone Season allowance market that may be of interest.

The “Bad Thing”

My congratulations if you have made it this far.

My particular concern is New York compliance with the CSAPR NOx Ozone Season limit. To date no New York cap and trade program has had to deal with a constrained market, and I vaguely recall only one instance of a constrained market in any cap and trade program.

Because there is only one update of emissions during the ozone season (at the end of July, when the May and June data are submitted), facilities will not necessarily know whether the state has triggered its assurance level, with its requirement to surrender additional allowances at the end of the ozone season. The result is that facilities will be reluctant to exceed their assurance levels because they will not know whether they need allowances to cover just the excess, or three times the excess because the state exceeded its assurance level. There is another aspect of this issue that should not be ignored: electric generating companies have very strong compliance policies and are very reluctant to give even the perception that they have exceeded their emission limits. It is possible company policies will limit emissions to the assurance level and no higher.

My scenario for a bad thing is that New York will be unable to meaningfully reduce NOx emissions further in the near term. If the next couple of summers are warm, the current allowance bank will be drawn down to de minimis levels. The ultimate problem with a cap and trade program is that if allowances are not available, the only compliance option is to not run. There is little question in my mind that CSAPR allowances will be available somewhere, but that may not be enough to prevent localized operational disruption due to allowance compliance uncertainties. The cumulative effect of the EPA constraints on interstate trading, the uncertainty over emissions status relative to the compliance assurance mechanism, and the lower-than-appropriate cap on New York emissions, exacerbated by the Cuomo administration’s unwillingness to give New York affected sources the opportunity to purchase the allowances taken by the State, means that New York affected sources could easily be in uncharted territory. It is not clear how they will react, but risking a compliance penalty is not in their best interests.

So my perfect storm worst case scenario is two warm summers that push the state close to the compliance assurance limit and reduce the New York allowance bank for one or more affected-source companies to low levels after the June emissions data become known in early August (it takes a month for the data to get reported). Companies with few allowances remaining find they cannot purchase enough on the market to cover their emissions and possible assurance penalties, or find that prices are so high they do not believe they can recover the cost of purchasing allowances, so they get to the point where they simply have to tell the system operator that their units cannot run. This would precipitate a controversy at best and, in a far less likely worst case, could even threaten grid reliability. I don’t think the last possibility is very likely, but I do think that putting system reliability at risk because of regulatory decisions by EPA and New York State that ignored industry recommendations is possible.


How Much for the Paris Climate Agreement?

For those of you who are worried that Trump’s decision to pull out of the Paris Climate Agreement is a bad thing, I would ask you to consider two pragmatic questions: how much was US participation going to change future temperature, and how much would it have cost?

Bjorn Lomborg used the standard MAGICC climate model to determine how much future temperature would change due to the United States “Nationally Determined Contribution”. He found that the full US promise for the COP21 climate conference in Paris would reduce temperature rise by 0.031°C. Mike Hulme posted an estimate of 0.3°C for the difference. For the record, my opinion of the MAGICC model is that it is too sensitive to CO2, so I think the impact would be even less than the lower bound. In any event, atmospheric temperature is reported to the nearest whole degree Fahrenheit in the US, which is 0.55°C. In other words, the range of temperature change expected from our Paris commitment is below the reporting limit, so the change could never be observed.

So how much was it going to cost? If you bothered to listen to the Trump speech on his decision he talked about the Green Climate Fund. If you have any doubts about the decision look up the briefing note by Climate Focus. I found this quote particularly interesting:

“The Paris Decision, serving as guidance for the implementation of the Paris Agreement and pre-2020 action, ‘strongly urges developed country Parties to scale up their level of financial support, with a concrete roadmap to achieve the goal of jointly providing USD 100 billion annually by 2020 for mitigation and adaptation’ (para 115). The Decision furthermore mentions that prior to 2025 the COP shall set a new ‘collective quantified goal from a floor of USD 100 billion per year’ (para 54). The reason both quantitative targets are missing from the actual Agreement is a pragmatic one – in doing so the COP has enabled the US President to adopt the Agreement as ‘sole-executive agreement’ under US law, without the requirement for the US Senate to approve.”

Think about this paragraph. The Paris agreement wanted to start financial support for mitigation and adaptation at $100 billion per year, with plans for going even higher in the future. Clearly all the countries on the receiving end of this largesse are in favor of the agreement, and clearly the United States was expected to provide the largest chunk of that money. It is particularly telling that the agreement was crafted without quantitative targets so it could be adopted as a ‘sole-executive agreement’ under US law, without the requirement for the US Senate to approve. If it had included quantitative targets, US voters would have found out how much money the United States was supposed to dole out when the treaty was debated in the Senate.

In summary we were supposed to pay out billions and billions for an agreement that would not have measurably changed global warming. How is getting out of that a bad thing?

Pragmatic Environmentalist of New York Principle 6: Iron Law of Climate

This is a background post for one of the principles that I believe define pragmatic environmentalists. Other principles are listed at the end of this post.

Roger Pielke, Jr has defined the “iron law” as follows: While people are often willing to pay some price for achieving climate objectives, that willingness has its limits.

Dr. Pielke calls this the iron law of climate, but it applies to all environmental objectives, so it is closely related to Pragmatic Environmentalist Principle 4: we can do almost anything but we cannot do everything. The fact is that reducing more and more risk costs increasingly more money. Eventually the cost becomes too much to bear, and people will stop supporting it even if it reduces risk.

For example, consider the proposal by Delucchi and Jacobsen to get 100% of our energy from wind, water, and solar. The author of the A Chemist in Langley blog is a pragmatic environmentalist who has posted frequently on the fossil free future proposal by Delucchi and Jacobsen and other similar initiatives. In the context of this principle he specifically observed that “It places tight, and poorly supported, restrictions on a number of important baseline clean energy technologies and in doing so results in a proposal that is ruinously expensive.”

I agree that the proposal for 100% renewables is technologically possible, but most of society would not support it simply because of the enormous costs. Because renewable sources are intermittent and diffuse, the electric energy system would have to be overhauled to include storage for the intermittency and a vastly different transmission system to address the diffuse sources. Dr. Jesse Jenkins at the Energy Collective blog points out the difficulty of relying on renewable energy for more than 40% of energy supplies. While the installed cost of renewables might approach conventional sources, the real concern is that all the other aspects necessary to maintain the electrical grid have to be addressed, and those costs are overlooked by many advocates.

Pragmatic Environmentalist of New York Principles

Principle 1: Environmental Issues are Binary: In almost all environmental issues there are two sides. Pragmatic environmentalism is all about balancing the risks and benefits of the two sides of the issue. In order to do that you have to show your work.

Principle 2: Sound Bite Environmental Issue Descriptions: Sound bite descriptions in the media necessarily only tell one side of the story. As a result they frequently are misleading, are not nuanced, or flat out wrong.

Principle 3: Baloney Asymmetry Principle: Alberto Brandolini: “The amount of energy necessary to refute BS is an order of magnitude bigger than to produce it.”

Principle 4: We can do almost anything we want, but we can’t do everything: Environmental initiatives often are presented simply as things we should do but do not consider that in order to implement those initiatives tradeoffs are required simply because the resources available are finite.

Principle 5: Observation on Environmental Issue Stakeholders: The more vociferous/louder the claims made by a stakeholder the more likely that the stakeholder is guilty of the same thing.

Replacement Power for Indian Point – Energy Storage

In two earlier posts I addressed the potential replacement of New York’s Indian Point nuclear station power: using new projects that are licensed or under construction, as suggested by the Governor, and the alternative use of renewables and energy efficiency, as proposed by environmental organizations. The latest proposal, commissioned by the New York Battery and Energy Storage Technology Consortium, claims that energy storage, along with a portfolio of other clean energy sources, can replace Indian Point. This post examines that proposal.

I have been following New York State (NYS) energy policy for a long time. Before retirement from a non-regulated generating company, I was actively analyzing air quality regulations that could affect company operations, and those regulations were often indirectly or directly tied to NYS energy policy. The opinions expressed in this post do not reflect the position of any of my previous employers or any other company I have been associated with; these comments are mine alone. I am motivated to write these posts on energy policy because the majority of what you hear in public is, in my opinion, overly optimistic about the viability of new technologies and rarely portrays costs realistically.

In January 2017 New York’s Governor Andrew Cuomo announced the premature closure of the Indian Point Energy Center, located 25 miles north of New York City. Cuomo claims that Indian Point produces 2,000 megawatts of electrical power and that “more than enough replacement power to replace this capacity will be available by 2021”. Previously I showed that while the capacity can be replaced by projects that are either under construction or licensed, insinuations elsewhere that the replacement will be air pollution free are not correct. Shortly after his announcement, environmental organizations proposed using renewables and energy efficiency exclusively, but I showed that while that is technically possible, the realistic cost of replacing all the capabilities of Indian Point with those resources makes it an impractical option.

Indian Point Energy Storage Replacement Option

The New York Battery and Energy Storage Technology Consortium hired Strategen Consulting to evaluate the use of energy storage as a potential replacement for Indian Point. The study presentation is a slide show that includes some excellent graphics showing the mix of generation capacity in downstate New York. In addition to the retirement of Indian Point, the study explains that a couple of other issues could exacerbate the capacity problem. It points out that attaining the new ozone ambient air quality standard has triggered a NYS process to address old peaking turbines in New York City, and that the City of New York is requiring that all use of #6 fuel oil be phased out. As a result, the study claims there could be a capacity shortfall of greater than 1,000 MW by 2023.

The primary goal of this post is to discuss the energy storage proposal, but let me quickly address the peakers and the #6 fuel oil phase-out. The process for determining what to do with the peaking turbines has just begun, and while the NYS Department of Environmental Conservation has shown that those peakers contributed to ozone in 2011, there are indications that the relationship is weakening, to the point where additional modeling is necessary to determine what is happening today as opposed to five years ago. The phase-out of #6 fuel oil is not a major technological problem, so capacity would be affected only if the switch-over costs are too high, and I have seen no indication that any of the affected facilities will shut down because of this requirement. Both issues are valid concerns, but the regulatory process is at too early a stage to claim that the 1,000 MW shortfall is anything more than a low-probability outcome.

Battery Storage Replacement

Until this issue came up I had not paid much attention to battery storage, other than recognizing that in order to make intermittent renewable power dispatchable you have to have storage. In my mind, that simply equated to building enough batteries to store the renewable energy for when it is needed. Not surprisingly, it turns out to be more complicated than that. Late last year PG&E reported on the results of a battery storage demonstration project that described how the batteries were used on the grid and how they were paid to operate. The project participated in the day-ahead energy market, which is used to procure the majority of supply to meet that day’s predicted electric load. The California ISO also has a real-time energy market, and the battery system provided services for short-term fluctuations from the day-ahead forecast. In addition to the energy markets, batteries can provide the ancillary services of frequency regulation and spinning reserves, and the demonstration project participated in those markets as well. Note that the Energy Storage Association has a longer list of battery technology applications.

The rosy projection for the use of energy storage depicted in the Strategen report is at odds with the results of PG&E’s 18-month trial of 6 MW of electricity storage on the grid at two sites. For example, the report notes that it takes more energy to charge the batteries than the batteries discharge. There is also a significant discrepancy between the installed costs of the trial and those assumed for the proposed Indian Point replacement. The trial report states that the “fully installed cost of the 2 MW / 14 MWh Vaca BESS was approximately $11,000,000, which equates to $783/kWh or $5,500/kW”. The installed cost assumed by Strategen is $1,600/kW, nearly three and a half times lower than the actual trial cost, presuming these numbers are apples to apples.
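The trial’s unit costs, and the gap to the Strategen assumption, follow directly from the quoted figures (the small difference from the quoted $783/kWh presumably reflects rounding in the report):

```python
installed_cost = 11_000_000   # Vaca BESS fully installed cost ($), from the trial report
power_kw = 2_000              # 2 MW
energy_kwh = 14_000           # 14 MWh

print(round(installed_cost / energy_kwh))   # 786 $/kWh (report rounds to 783)
print(installed_cost / power_kw)            # 5500.0 $/kW

strategen_assumption = 1600                 # $/kW assumed in the Strategen study
print(round((installed_cost / power_kw) / strategen_assumption, 1))  # 3.4x higher
```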

Todd Kiefer described the results of the CAISO battery storage trial on the T&D World blog. Mr. Kiefer summarizes the report as a cautionary tale showing that batteries are not cost-effective:

“The report included two external studies that found that cost of battery storage must come down to about $800/kW to achieve economic break-even.  However that number has two false assumptions baked in: a 20-year service life and only 15-minutes of storage capacity.  To aggressively dispatch the batteries as was done in the trial to maximize revenue requires at least 30 minutes of storage capacity and would consume the 4,500-cycle service life within 10 years.  With these adjustments, the real break-even cost is approximately $200/kW.  Indeed, $197/kW is the estimate PG&E itself empirically found to be the break-even cost for a typical month in 2015.  This is a factor of 27 cheaper than the Vaca system cost of $5,500/kw.”
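The adjustment in the quote works out as follows (a sketch of the implied arithmetic; the two factor-of-two corrections are the ones Mr. Kiefer names):

```python
naive_breakeven = 800      # $/kW, assumes a 20-year life and 15 minutes of storage

life_correction = 20 / 10      # 4,500-cycle life is consumed in ~10 years, not 20
storage_correction = 30 / 15   # aggressive dispatch needs 30 minutes of storage, not 15

adjusted = naive_breakeven / (life_correction * storage_correction)
print(adjusted)   # 200.0 $/kW, in line with PG&E's empirically found $197/kW
```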

Even assuming that the Strategen cost estimate of $1,600/kW is achievable, the break-even costs are much lower.

The Strategen report indicates that energy storage could be used to replace the peaking turbines at risk because of the ozone attainment requirements. Mr. Kiefer explains that shaving peaks through energy arbitrage is an obvious use of batteries but notes:

“This time-shifting of generation to match consumption peaks involves techniques such as peak shaving and load leveling; these are easy to envision and model and optimize when looking at yesterday’s load and price curves, but very difficult to do in real-time when the load and price are varying stochastically and neither the height nor timing of the actual load peak can be known or recognized till well after the fact.  In practice, energy arbitrage only generated enough revenue to barely cover operating expenses.”

The basis of the Strategen report is that energy storage can replace traditional generating units such as Indian Point or the peaking turbines. The emphasis of that claim is capacity, but that neglects all the other services those generating units provide to the grid. Mr. Kiefer notes that the most lucrative use of batteries is frequency regulation, the response to momentary mismatches between load and generation. In the current CAISO market frequency regulation is the highest-value product, yet the project was not cost-effective even for that application. Importantly, if the battery owner wants to maximize revenue by optimizing the system for frequency regulation, that precludes using the system to shave the peak, because the batteries are maintained close to 50% charge and stand ready to charge or discharge rapidly to damp out momentary dips and spikes in grid frequency that mark mismatches between generation and load.

Mr. Kiefer describes a surprising finding that is directly applicable to the Indian Point replacement situation.

“The wholesale electricity price varied so much by geographic location on the California grid that often it was not economical for the two battery arrays to store surplus power being generated by wind or solar farms.  California now has enough “renewable” energy capacity that it can produce negative locational marginal price (LMP) in the vicinity of the wind and solar farms.  However, these low prices do not necessarily propagate as far as the electricity storage sites.  This is often blamed on “grid congestion” as if to say it is a shortcoming of the pre-existing grid, but in reality this bottlenecking is a predictable consequence of adding large capacities of remote, diffuse, and uncontrollably intermittent generators at the fringes of the grid far from the load centers that consume their power.  If batteries are to be used for energy arbitrage, they would be optimally co-located at the fringes with the wind or solar farms.  However, if they are to be used for frequency regulation, they are better located near the loads in cities and industrial centers.  Since the revenue stream of the latter is much more attractive than the former, it is likely that the utilities would prefer downtown rather than desert locations for assets they own.  That leaves solar and wind developers to install storage at their sites.”

In New York State the majority of the renewable sites are far from New York City. This suggests that batteries replacing the ancillary services will have to be near the City, but batteries used for peak shaving will have to be located near the renewable facilities.

The Strategen report claims low costs but only “through long-term contracting arrangements (e.g. 10-20 years)”.  In his cost analysis of the report, Mr. Kiefer notes that “To aggressively dispatch the batteries as was done in the trial to maximize revenue requires at least 30 minutes of storage capacity and would consume the 4,500-cycle service life within 10 years.” Obviously a long-term contract longer than the expected service life is not a good deal.

Conclusion

The experience of the California demonstration project suggests that costs are a major concern with respect to this proposal.  Moreover, because wind and solar are diffuse, collecting that energy and delivering it where it is needed requires the transmission grid, so the grid support services that Indian Point provides should not be ignored. Batteries can provide many of those support services, in some cases better than a generating unit, but providing all of them with batteries will require more than the number needed simply to replace the capacity. How many more batteries are needed is unknown, how the batteries will best work within the system needs to be determined, and how long the battery systems will survive while providing different services is another key element in a cost comparison. Integrating small amounts is not an issue, but there is a point at which the incompatible nature of those resources compared to traditional generating plants like Indian Point has to be resolved, and that also increases costs significantly.  The failure of the Cuomo Administration to admit, much less address, the grid issues is disappointing.

Unintended Consequences

I was working on this post at the same time that Planning Engineer posted on renewable resources and the importance of generation diversity at Climate Etc. A couple of related and relevant comments there brought up some unintended consequences of the widespread use of batteries, which I reproduce below.  These comments show that the perception that batteries are unencumbered by additional environmental impacts is incorrect.

Albert Hopfer works in the battery design and manufacture field and notes:

We in this business (the biggest of that business) are struggling with cell suppliers to get the cells and cell types needed for our products. Lithium does not grow on trees; it exists and is produced as either rock ore or brine. Brine has become the only profitable choice. Brine requires 9-12 months to “sun dry” to produce the necessary oxides for lithium battery grade product.

That being said, you can imagine how scarce lithium would become and the change in cost and selling price. Gasoline auto sales in the US in 2016 were at the 14 million level; electric cars next to zero in comparison. Yet, today, EVs and other new large format battery needs are exhausting supplies of lithium. Lithium is also used in glass and other product production.

If the US grid and transport systems became dependent on storage using wind and solar the US would immediately be dependent on foreign Lithium since the US reserves are minor compared to places like South America etc. We do not want to go there.

M Anderson has been “mapping, analyzing, and helping change complex global infrastructure networks, on the ground, in more than 40 nations for decades. (Autos, energy, water, food, mobile communications, waste, etc).” He explained that rooftop solar and local battery storage have several hidden “landmines” that can only be seen by looking at their total life cycle from mining to recycling and waste.

The first landmine is that the “Lithium batteries in Teslas – and in emerging home batteries – contain 1200-2,000 pounds of traditional, recyclable, materials AND lithium that cannot be easily recycled. At current useful lives, these batteries will be coming into the waste-recycling streams at volume in about 7-12 years.”

The second landmine is the “17-25 year recycling periodicity of the massive “e-waste” in modern solar panels.”

He goes on to explain that one of the unique, negative, environmental effects of the widespread use of lithium batteries is that each new product made requires new mining of lithium and other materials and emphasizes the point that in the case of electric vehicles the difference in recycling potential with respect to existing vehicles means that “EVERY new EV made requires more new mining and material production than any of the existing 1.2 billion petrol-fired vehicles on Earth.”

As a result the environmental effects of mining Lithium will be exacerbated because so much more will be required.

 

RGGI as the Electric Sector Compliance Tool to Achieve 2030 State Climate Targets

This is another in a series of posts on the Regional Greenhouse Gas Initiative (RGGI). The program includes periodic reviews to consider program successes, impacts, and design elements. In the current program review process one of the big issues is whether the cap on CO2 emissions should continue to decrease after 2020, when the current program ends. Not surprisingly, many environmental organizations advocate continued reductions based on reductions made to date and cite a report prepared by Synapse Energy Economics entitled “The RGGI Opportunity 2.0, RGGI as the Electric Sector Compliance Tool to Achieve 2030 State Climate Targets” (hereinafter the “Synapse Study”).  In previous posts I have looked at emission reductions; this post looks at the claims made in the Synapse Study.

I have been involved in the RGGI program process since its inception. In the final years before my retirement I analyzed air quality regulations that could affect electric generating company operations. The opinions expressed in this post do not reflect the position of any of my previous employers or any other company I have been associated with; these comments are mine alone. I am motivated to write these posts on RGGI because the majority of the stakeholder opinions expressed at meetings and in submitted comments are, in my opinion, very naïve about the actual burden of implementing their preferred alternatives, overly optimistic about the potential value of continued RGGI reductions, and ignore the potential for serious consequences if things don’t work out as planned.

The Synapse Study was commissioned by the Sierra Club, Pace Energy and Climate Center and Chesapeake Climate Action Network. The introduction to the Synapse report states that it “…builds upon Synapse’s prior analysis of emission reductions in the electric and transportation sectors by additionally analyzing emission reductions in the building sector to create a robust least-cost buildout of compliance with RGGI states’ 2030 climate goals.” During the stakeholder process associated with the current RGGI program review a number of organizations have endorsed the implementation of a steeply declining RGGI emissions cap between 2020 and 2030, primarily based on the assertions found in the Synapse Study. Specifically, two comments posted to the Program Review record on June 29, 2016 from a total of 23 environmental and health organizations contained the following statement:

“Throughout the program review, numerous groups have requested that the RGGI states model a scenario with capped emissions declining by 5% of the 2020 baseline each year through the modeling horizon, resulting in a 2030 cap level of 39 million tons. This request is based on the dual justification that A) such a trajectory is consistent with RGGI emissions reductions to date, and B) multisector analysis has demonstrated that this is the most cost-effective pathway to achieving the RGGI states’ 2030 economy-wide GHG commitments”.

In my previous posts I have pointed out that the emissions trajectory to date is not indicative of further reductions because emissions from the primary source of past reductions (coal and residual oil generation) are already close to de minimis levels. This post addresses the “multi-sector analysis” prepared by Synapse for future reductions. I focus on the viability of the measures proposed to reduce CO2 emissions and whether all the necessary implementation costs have been included in the ultimate cost-effectiveness evaluation.

Based on the following analysis of various statements, I conclude this study does not make a strong case. I did not try to cover every proposed aspect of the multi-sector analysis. Instead I evaluated four components. The Synapse study text is shown in italics; my comments in regular text.

Half of Emission Reductions Come from the Electric Sector (p. 7, Synapse Study)

Electric-sector efficiency and renewables are responsible for nearly half of the additional required reductions in 2030. Figure 5 presents emission reductions in the electric sector for the baseline and 40 percent emission reduction policy scenarios. In the 40 percent emission reduction scenario, Northeast states’ electric sector emissions are capped at 39 million short tons in 2030 compared to the currently mandated RGGI cap of 78 million short tons in 2020.

The latest EIA data indicates that 2014 electric sector emissions (Table 1) were 86.1 million tons compared to the 78 million ton current cap. Note that coal and residual oil made up 34.6 million tons of the total sector emissions and that natural gas emissions were 51.5 million tons. Since 1990 most of the electric sector emission reductions have been as the result of coal and residual oil reductions primarily due to retirements and changes in operations driven by economics and not necessarily RGGI. Natural gas use went up as it displaced the other two fuels.

These data suggest that the future reductions necessary to meet the current 2020 cap of 78 million tons are achievable. However, an additional 39 million ton reduction is necessary to meet the proposed 2030 cap. In a previous post I showed the bounds for CO2 reductions that could be attributed to RGGI investments to date. The upper bound comes from an econometric model that estimates that emissions would have been 24 percent higher (31.9 million tons) without the program. RGGI estimates that emissions would have been 17% higher (22.6 million tons) without the program. If you assume that all the savings in fossil fuel use earned by RGGI investments displaced only natural gas rather than the historical fuel mix (as the RGGI estimate assumed), then emissions would have been only 5% higher (4.2 million tons). My point is that future reductions will have to come as the result of RGGI and state programs, not fuel economics, and depending on how you calculate the impact of RGGI programs to date this could be relatively easy or not. Given that future reductions will come from displacing natural gas, I believe it will be more difficult than presumed by this study.

Efficiency, Wind, and Solar Drive Down Electric-Sector Emissions (p. 8, Synapse Study)

Under the 40 percent emission reduction scenario new, lower RGGI caps drive deeper, more wide-spread changes in the RGGI states’ electric system. Figure 6 reports the impact of these measures in terms of generation by resource. Coal, oil, and natural gas-fired generation are replaced by efficiency and renewables. Note that electric sector generation is lower in the 40 percent emission reduction scenario than in the RGGI baseline even though substantial generation is needed to power electric vehicles and heat pumps: savings from energy efficiency outweigh additional electricity sold to owners of electric vehicles and heat pumps.

 Renewables supply one-half of the RGGI region’s electric generation in 2030 (p. iv, Synapse Study). Adding 50,000 gigawatt-hours of new wind and solar in the 40 percent emission reduction scenario results in a future where half of all electricity generation comes from renewable resources in 2030, compared to just 30 percent in the baseline RGGI scenario.

The Synapse study neglects a major aspect of the electric system in its assumption that renewables can replace coal, oil, and natural gas to the extent proposed. The electric power system is very complex and must operate within narrow parameters while balancing loads and resources and supporting synchronism. In countries like Germany, it has only been possible to develop an aggressive level of renewable generation because Germany is able to rely on neighboring countries’ conventional facilities in the grid for load support. Synapse has presumed that renewables in the RGGI system can provide the necessary ancillary support but has not shown that they can provide all the important services provided by central power stations. For example:

Conventional rotating machinery such as coal, nuclear, and gas plants, as well as hydro generation, provides a lot of support to the system. This includes reactive power (vars), inertia, regulation of the system frequency, and the capability to ramp up and down as the load varies. Most renewable resources lack these important capabilities and are only intermittently available (i.e., not dispatchable). Unlike conventional generators that rotate at constant speed, wind turbines rotate at variable speeds, so their rotational energy offers no support to the system.

Some, but not all, of the disadvantages of solar and wind energy can be mitigated at extra cost through electronic and mechanical means. When these resources make up only a small percentage of the generation on the system, overall system stability is not significantly impacted. Stated another way, when the overall system is robust enough, utilities can allow a small percentage of solar to “lean” on the system and still provide a stable source of electricity. As the penetration of solar and wind energy increases, system robustness will degrade and reliability will be compromised without costly improvements. Such additional costs are not generally applied to the evaluation of renewable resources at this time, and it certainly appears that the Synapse study has failed to take any of these costs and issues into account.

As noted above, the German grid relies on its neighbors to provide a wide range of support services. It may not be possible for the RGGI electrical systems to support the Synapse study’s presumed high penetration of renewable power, and the provision of those services is not incorporated in the study’s cost projections. For the Synapse projections to work in a real-world scenario, the RGGI grid operators will also end up relying on neighboring power systems to provide this support, thus promoting “emissions leakage”.

Energy Efficiency Savings Are One-Third of Total Emission Reductions (p. 9, Synapse Study)

 Efficiency measures will continue to lower consumers’ bills. Applying Massachusetts’ expected electric energy efficiency savings in terms of percent of sales—based on their current three-year plan—to all RGGI states lowers electric sales by 11 percent by 2030. These efficiency savings have been determined to be cost effective in Massachusetts.

This presumption does not account for the current state of energy efficiency in the other RGGI states. If a state is presently more efficient than Massachusetts, it is inappropriate to assume that the same rate of efficiency savings is possible, because the easier energy efficiency measures have already been implemented.

WalletHub analyzed Energy Efficiency RGGI State Rankings using data from the U.S. Census Bureau, the National Climatic Data Center, the U.S. Energy Information Administration and the Federal Highway Administration. Their conclusions are highlighted below:

“To identify the most energy-efficient states, WalletHub analyzed data for 48 states based on two key dimensions, including “home-energy efficiency” and “car-energy efficiency.” We obtained the former by calculating the ratio between the total residential energy consumption and annual degree days. For the latter, we divided the annual vehicle miles driven by gallons of gasoline consumed. Each dimension was weighted proportionally to reflect national consumption patterns.

In order to obtain the final ranking, we attributed a score between 0 and 100 to correspond with the value of each dimension. We then calculated the weighted sum of the scores and used the overall score to rank the states. Together, the points attributed to the two major categories add up to 100 points.

Home-Energy Efficiency – Total Points: 55

Home-Energy Efficiency = Total Residential Energy Consumption per Capita / Degree-Days

Car-Energy Efficiency – Total Points: 45

Car-Energy Efficiency = Annual Vehicle Miles Driven / Gallons of Gasoline Consumed
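The quoted methodology boils down to a two-term weighted sum. As a minimal sketch (the example inputs are hypothetical illustrations, not WalletHub's actual state data):

```python
# Sketch of the WalletHub weighting described above: two dimension scores,
# each normalized to a 0-100 scale, weighted 55/45 and summed.
# The example inputs are hypothetical, not WalletHub's actual data.

def overall_score(home_score, car_score):
    """Weighted sum of the two dimension scores (each on a 0-100 scale)."""
    return (55 * home_score + 45 * car_score) / 100

# Hypothetical state scoring 80 on home efficiency and 60 on car efficiency:
print(overall_score(80, 60))  # 71.0
```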

The WalletHub 2015 Energy Efficiency RGGI State Rankings are listed in Table 2. Four states are more efficient than Massachusetts, and New York and Vermont are markedly more efficient. Therefore, the presumption that New York and Vermont will be able to reduce emissions by 11%, the same as the Massachusetts expected electrical energy efficiency savings level, is difficult to justify and appears to be unfounded.

1.3 Million Electric Heat Pumps Replace Oil Heaters (p. 11, Synapse Study)

In 2015, over 4 million families in the RGGI region were still heating their homes with oil. By 2030, this number is expected to shrink to 3 million households in the RGGI baseline scenario as households move to more efficient forms of heating. These oil furnaces and boilers would release 20.4 million short tons of CO2 into the atmosphere in 2030.

The 40 percent emission reduction scenario shifts 1.3 million of the remaining 3 million households from oil to air-source heat pumps by 2030 (see Figure 9). Heat pumps are appliances that use electricity to absorb heat energy in cold areas (i.e., outside) and transfer it to indoor areas. Heat pumps have the advantage of being able to work in reverse—not only can they provide heating in winter months, but they take the place of a central air conditioning systems in the summer months. Heat pump technology has existed for decades, and these units are commonplace in Europe and Asia, but high-performing systems that function well in cold-weather climates as in many of the Northeast states have just recently begun to make inroads in the United States. By shifting heating consumption from inefficient, high-emitting oil boilers and furnaces to highly efficient heat pumps, 9 million short tons of CO2 are avoided.

Despite the Synapse disclaimer, heat pumps are at a disadvantage in cold climates like New York and the other RGGI states; there are physical issues: “An air-source heat pump works well as long as temperatures are above freezing. Below that temperature, less heat is available, and the pump may have to rely on its supplemental heating coil to warm your home. This coil uses electricity to heat and will increase heating costs.”

Traditionally, heat pumps have not enjoyed a wide level of penetration in housing markets because consumers are primarily interested in the cost of ongoing operation. Furthermore, there do not appear to be any incentives in place that would lead to a wide-scale shift from conventional oil- and gas-fired boilers to heat pumps. Absent some regulatory requirement or financial incentive program, the Synapse study assumption that 1.3 million conventional furnaces will be replaced with heat pumps over a fifteen-year period has no basis in fact and appears to be a highly unlikely scenario.

Ten Million Electric Vehicles Offset 28 Million Short Tons of CO2 (p. 12, Synapse Study)

The 40 percent emission reduction scenario adds 10 million battery electric vehicles in the nine RGGI states by 2030, above what is currently in place and expected in the baseline forecast (see Figure 10). The stock of electric vehicles in the RGGI baseline is based on the Energy Information Administration’s 2015 projections and reaches 46,000 vehicles in the RGGI region in 2030. In contrast, Synapse’s 40 percent emission reduction scenario assumes that one-third of the RGGI region’s light-duty vehicles run on electricity by 2030 based on the Federal Highway Administration’s projection of the potential for electric vehicle adoption. These new electric vehicles reduce total RGGI state emissions by 28 million short tons of CO2 in 2030.

The Energy Information Administration’s Annual Energy Outlook 2016 includes tables with projections of future vehicle stocks. The vehicular data are categorized by region, not state, so it was not possible to reproduce the Synapse Study numbers. Table 40, Light-Duty Vehicle Stock by Technology Type, notes that, nationwide, there are 340,481 light-duty cars in the 100-mile and 200-mile electric vehicle classes in 2016 and predicts that in 2030 there will be 3,534,097 such vehicles in the reference case and 3,542,276 in the reference case without the Clean Power Plan.

In New York there are about nine million light-duty vehicles registered. The Synapse study claims that it is possible to replace one third of the 8.7 million gas-powered vehicles with electric vehicles, which equates to 2.9 million electric vehicles in New York by 2030. That would be 82% of the EIA projected national total for 2030.  To reach the total presumed in the Synapse study, over 190,000 electric vehicles per year would have to be sold. At the current time, there are no regulatory structures and insufficient financial incentives in place to support this massive level of electric vehicle penetration.
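A quick sanity check of that arithmetic (the vehicle counts are from the text above; the 15-year ramp from 2015 to 2030 is my assumption):

```python
# Back-of-envelope check of the New York electric vehicle numbers.
ny_gas_vehicles = 8_700_000    # gas-powered light-duty vehicles in New York
eia_2030_stock = 3_534_097     # EIA reference-case EV stock projection, 2030

ny_evs_needed = ny_gas_vehicles / 3   # one third electrified per Synapse
share_of_eia = ny_evs_needed / eia_2030_stock
sales_per_year = ny_evs_needed / 15   # assumed 15-year ramp, 2015-2030

print(round(ny_evs_needed))    # 2900000
print(round(share_of_eia, 2))  # 0.82 -> 82% of the EIA national total
print(round(sales_per_year))   # 193333 -> over 190,000 per year
```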

Presuming for a moment that it would be possible to effect such a significant change in the driving habits of New Yorkers, the Synapse study has not taken into consideration the significant infrastructure requirements for the 2.9 million electric vehicles projected. One of the greatest impediments to the further development of the electric vehicle market is that charging stations in public places have not yet been implemented on a widespread basis, nor has a satisfactory cost model been developed for financing such a massive infrastructure build-out. Clearly, if one of every three cars parked on a New York City street is to be an electric vehicle, significant and costly changes to the existing electrical systems must come about. The Synapse study has not taken those costs into account in its presumption that an unprecedented change in the vehicle market will lead to the emission reductions proposed.

Summary

I looked at four components of the Synapse study: electric sector, energy efficiency, home heating, and electric vehicles. Synapse claims 103 million tons of CO2 emission reductions when these recommendations are implemented. Below I offer alternative estimates of the tons saved based on the preceding evaluation.

In the Synapse electric sector analysis (section 2.4, p. 7), half of the emission reductions, or 39 million tons, come from the electric sector. The big unknown is how much renewable generation will displace natural gas usage; it was argued above that as little as 4.2 million tons of reductions occurred because of RGGI itself. Synapse assumes that the state renewable portfolio standards will be implemented but does not explain how. For example, they note that:

For New York, in addition to modeling the existing RPS (approximately 24 percent of retail electric sales by 2015), we modeled an additional 3,000 MW of utility-scale photovoltaic (PV) solar added by 2023 and an additional 1,600 MW of wind added by 2029, in line with the New York State Energy Research and Development Authority’s (NYSERDA) projections for capacity that will come online as a result of the NY-Sun and Large-Scale Renewables programs

The missing piece is how much generation that presumed additional capacity will produce and how much natural gas generation it will displace. It is not enough to assume a straight ratio, because the ancillary services provided by traditional power plants would not be included. Synapse did not account for electric system reliability issues in its projected penetration of renewables into the RGGI states’ power system. For an upper bound estimate of potential reductions I used the 2014 EIA data and assumed that petroleum products are already at their de minimis level, that coal goes to zero, and that natural gas stays the same (i.e., the replacement of coal by natural gas equals the reduction by renewables). Using those assumptions there are only 22.6 million tons of savings.

The Synapse energy efficiency projection (section 2.6, p. 9) accounts for one third of total emission reductions, or 27 million tons. I think that Synapse overestimated the emissions savings available from energy efficiency programs absent additional policy and/or regulatory structures.  Their analysis presumes that the Massachusetts rate of energy efficiency improvement can be applied to four states that are already more efficient, which is inappropriate. I assumed that the expected energy improvement of 11% in those four states would be only half of that, scaled the reductions as a function of residential and commercial end use, and determined that instead of 17 million tons of electric energy efficiency savings there would only be 12.1 million tons. The remaining 10 million tons of Synapse energy efficiency savings come from gas energy. They simply assumed, without much justification, that it could be improved by 1% per year, but I accept their guess.

For home heating (section 2.7, p. 11) Synapse projects that heat pumps will replace 1.3 million of the 3 million home heating furnaces still on oil, saving 9 million tons of CO2. Their analysis does not address the physical constraints of heat pumps in freezing weather, and Synapse has not provided a plausible mechanism to bring about its projected transition to electrically-powered heat pumps. Rather than their high-bound assumption that 43% of the furnaces will be converted to heat pumps, I think a realistic lower bound for conversion is 10%, so the CO2 savings would be 2.1 million tons.
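The 2.1 million ton figure follows from scaling the Synapse savings by the ratio of conversion rates, a simple proportion (the 10% rate is my assumption, as stated above):

```python
# Scaling the Synapse heat pump savings to a 10% conversion rate.
synapse_savings = 9.0                 # million tons CO2 at 1.3M of 3M homes
synapse_rate = 1_300_000 / 3_000_000  # ~43% of oil-heated homes converted
my_rate = 0.10                        # assumed realistic lower bound

my_savings = synapse_savings * my_rate / synapse_rate
print(round(my_savings, 1))  # 2.1 million tons
```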

The Synapse electric vehicle scenario (section 2.8, p. 12) projects that there will be 10 million electric cars in RGGI by 2030, while EIA projects only 46,000 more electric vehicles by 2030. That implies some kind of incentive program to get RGGI drivers to buy 9,954,000 electric vehicles, and even if the incentive is only $1,000 per vehicle, that is over $9 billion. Synapse failed to appreciate the complexities and costs associated with a massive conversion of driving preferences to electric vehicles. In the real world a more realistic estimate would be that RGGI incentivizes an order of magnitude more electric vehicles (460,000) than EIA projects, and the CO2 savings would be 1.3 million tons.
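The incentive arithmetic is straightforward (the $1,000-per-vehicle figure is the illustrative assumption from the text):

```python
# Cost of incentivizing the EV gap between the Synapse scenario and EIA.
synapse_evs = 10_000_000   # EVs in the Synapse 40% scenario by 2030
eia_evs = 46_000           # EIA baseline projection for the RGGI region

extra_evs = synapse_evs - eia_evs
cost = extra_evs * 1_000   # even a modest $1,000-per-vehicle incentive

print(extra_evs)  # 9954000
print(cost)       # 9954000000 -> over $9 billion
```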

Synapse claims 103 million tons of CO2 emission reductions when their recommendations for these four components are implemented. However, based on my evaluation I expect only 51.6 million tons of reductions from these four components.

RGGI Containment Reserves

This is another in a series of posts on the Regional Greenhouse Gas Initiative. Previous posts have looked at how the program has been working from the viewpoint of an outsider. This post is a technical discussion of two components of the system currently being discussed in the 2016 Program Design process: the Cost Containment Reserve (CCR) and the proposed Emissions Containment Reserve (ECR).

I have been involved in the RGGI program process since its inception. Before retirement from a non-regulated generating company, I was actively analyzing air quality regulations that could affect company operations and was responsible for the emissions data used for compliance. The opinions expressed in this post do not reflect the position of any of my previous employers or any other company I have been associated with; these comments are mine alone. I am motivated to write these posts on RGGI because the majority of the stakeholder opinions expressed at meetings and in submitted comments are, in my opinion, overly optimistic about the potential value of continued RGGI reductions and ignore the potential for serious consequences if things don’t work out as planned.

Overview of Containment Reserves

The RGGI website has an overview of the cap and auction system that includes a description of the CCR. One of the original stakeholder concerns was cost. In order to put a limit on the cost of allowances, the CCR was established to add allowances to the program when the cost exceeds a threshold. The theory is that adding allowances will reduce the allowance cost. The trigger price for adding 5,000,000 allowances in 2014 or 10,000,000 allowances thereafter was $4 in 2014, $6 in 2015, $8 in 2016, and $10 in 2017, rising by 2.5 percent each year thereafter. The CCR has been triggered twice, in the first quarter of 2014 and the third quarter of 2015, adding 15,000,000 allowances to the system.
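The trigger price schedule described above amounts to fixed steps through 2017 followed by a 2.5% annual escalation; a minimal sketch:

```python
# CCR trigger price: fixed steps through 2017, then 2.5% annual escalation.
def ccr_trigger_price(year):
    fixed = {2014: 4.0, 2015: 6.0, 2016: 8.0, 2017: 10.0}
    if year in fixed:
        return fixed[year]
    return 10.0 * 1.025 ** (year - 2017)

print(ccr_trigger_price(2017))            # 10.0
print(round(ccr_trigger_price(2020), 2))  # 10.77
```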

One of the primary stakeholder topics of the RGGI 2016 program review is future reductions in the cap. As it stands now, the cap declines by 2.5% per year until 2020 and remains at that level. Proponents of future reductions claim that past performance suggests the cap can continue to decline. The ECR was proposed as a possible solution to the cap adjustments. In particular, the ECR decreases the number of allowances auctioned if the price gets too low. The theory is that if costs are too low there is a surplus of allowances and the cap can be lowered accordingly. It seems to me that it could be used as the mechanism to adjust the cap in the future in addition to its price control aspect.

RGGI Allowance Status

The status of the cap and its implications for compliance and cost need to be addressed in the context of these containment reserves. Another stakeholder topic in the 2016 Program Design Review is the appropriate size of the allowance bank. The allowance bank is the surplus of allowances over and above the compliance requirements. In the first two compliance periods of RGGI the number of authorized allowances far exceeded actual emissions. As a result the bank of allowances was so large that the RGGI states made interim adjustments to subsequent auctions to lower the number of allowances available.

The appropriate size of the bank is controversial. Advocates for more reductions want a smaller bank so that reductions occur sooner. Proponents of higher allowance prices want to reduce the size of the bank because fewer available allowances should drive the price up. However, there are reasons that the bank should not get too small. Emissions are directly proportional to operating times which are strongly related to weather-related demand. Affected sources want to have sufficient banked allowances in their accounts to be able to supply power in periods of increased demand. In addition, companies prefer to have a margin in order to address monitoring problems. Ultimately, if insufficient allowances are available then affected sources will not be able to operate.

In that context, consider the RGGI allowance status today. In the first compliance period (2009-2011), 429,381,635 allowances were auctioned, sold, or awarded and CO2 emissions were 382,075,544 tons, so the margin between available allowances and emissions (the allowance bank) at the end of the compliance period was 47,306,091 allowances. The allowance bank at the end of the second compliance period was 120,175,954 allowances. At the end of 2016 the allowance bank, defined as the difference between the allowances auctioned, sold, or awarded and the RGGI allowances retired, was 173,105,751. Note, however, that there is a difference between the 2015-2016 allowances retired and emissions: RGGI retires 50% of the compliance obligation at the end of each year instead of waiting until the end of the compliance period to retire all the emissions. If all the emissions are withdrawn, the allowance bank at the end of 2016 is 90,446,582 allowances. For comparison purposes, 2016 total emissions were 80,624,392 tons.
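The bank arithmetic above can be verified directly (all figures are from the text; the small rounding difference in the last line is in the source data):

```python
# Reproducing the allowance bank arithmetic from the text.
auctioned_cp1 = 429_381_635  # allowances auctioned, sold, or awarded, 2009-2011
emissions_cp1 = 382_075_544  # CO2 emissions, 2009-2011 (tons)
bank_cp1 = auctioned_cp1 - emissions_cp1
print(bank_cp1)  # 47306091

bank_2016_retired = 173_105_751  # end-2016 bank counting only retired allowances
bank_2016_full = 90_446_582      # end-2016 bank if all emissions are withdrawn
unretired = bank_2016_retired - bank_2016_full
print(unretired)  # 82659169 -> the ~82.7 million tons not yet retired
```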

There is an affected source (compliance entity) concern with the allowance bank trends. The RGGI allowance database is not as open and transparent as EPA's allowance databases for its programs: EPA provides ownership information on all allowances but RGGI does not. The RGGI market monitoring reports provide the only breakdown of allowances held by compliance entities, and even that is only the percentage of the total bank they hold. At the end of the first compliance period, 97%, or 45,886,908, of the banked allowances were owned by entities that needed them to comply with the requirements of the program. At the end of the second compliance period the compliance entity share was down to 81%, or 97,342,522 allowances. At the end of 2016 the compliance entity share was only 54%, or 93,477,106 allowances. However, remember that RGGI only counts retired allowances, so at the end of 2016 the compliance entities still had to cover 82,659,170 tons of emissions with their share. Consequently the true compliance entity share of the allowance bank is only 12%, or 10,817,936 allowances, and that concerns the compliance entities.
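The 12% figure follows directly from the numbers cited above, and it is worth showing the steps:

```python
# Compliance entity share arithmetic, using the figures cited above.
bank_2016 = 173_105_751            # total bank, retired-allowance basis
compliance_share = 0.54            # market monitor: 54% held by compliance entities
compliance_held = round(bank_2016 * compliance_share)   # 93,477,106 allowances

obligation_remaining = 82_659_170  # 2015-2016 tons not yet retired
true_surplus = compliance_held - obligation_remaining   # 10,817,936 allowances

adjusted_bank = bank_2016 - obligation_remaining        # emissions-basis bank
print(f"true surplus: {true_surplus:,}")
print(f"share of adjusted bank: {true_surplus / adjusted_bank:.0%}")  # 12%
```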

In particular, the trend shows that compliance entities will have to go to non-compliance entities to obtain enough allowances to operate. This problem will be exacerbated if RGGI cuts the cap and/or makes adjustments to the allowance bank. In my former life I was responsible for trading program compliance tracking and had input into corporate compliance strategy. Environmental staff at all the corporations I have dealt with consider compliance the highest priority, and the only way to ensure it is to emit no more than the allowances in hand. Theoretically, you could generate first and then go to the market to cover what you emitted but, especially in a constrained market, there is a compliance risk. RGGI is evolving toward the ultimate constrained market and there are two serious potential consequences. Firstly, affected sources could be forced to purchase allowances from an entity that knows they have a compliance obligation and can demand an exorbitant price. This has two downsides: the windfall will not be invested the way the RGGI auction proceeds are (i.e., there is no societal benefit to those higher-priced allowances), and eventually the cost will show up on ratepayer bills[1].

Secondly, a company could choose not to run because it does not have the allowances, and that will affect the power system badly; in the worst case it could even affect reliability. Were it up to me, because environmental compliance is number one, I would advise the second option.

Emission Containment Reserve

I think the ECR is an elegant solution to the question of whether the cap should continue to decline. Ideally, the RGGI cap should create an allowance market where the price is within the acceptable range determined by the RGGI states. The Cost Containment Reserve (CCR) prevents the price from getting too high; the ECR works on the lower end of the range to keep the price from getting too low. As an alternative to a declining cap, the ECR determines the appropriate cap level by withdrawing allowances from the market until the price rises above the low threshold of the target range. As proposed, the withdrawn allowances are put into a reserve, but it could be set up so that they displace allowances in the CCR, and once more than 10,000,000 allowances have been displaced, allowances would be withdrawn from the cap itself. I want to emphasize, however, that my support of this approach is in lieu of a specified declining cap.
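A minimal sketch of that lower-bound mechanism may help. The bids, trigger price, step size, and withdrawal limit below are all hypothetical illustrations, not proposed RGGI values:

```python
def clearing_price(bids, supply):
    """Price of the marginal accepted bid in a uniform-price auction.
    bids: list of (price, quantity) pairs."""
    sold = 0
    for price, qty in sorted(bids, reverse=True):
        sold += qty
        if sold >= supply:
            return price
    return min(p for p, _ in bids)  # undersubscribed: lowest bid sets the price

def apply_ecr(bids, supply, trigger, max_withdrawal, step=1_000_000):
    """Withhold allowances from sale, step by step, until the clearing
    price rises to the ECR trigger or the withdrawal limit is reached."""
    withdrawn = 0
    while clearing_price(bids, supply) < trigger and withdrawn < max_withdrawal:
        supply -= step
        withdrawn += step
    return supply, withdrawn, clearing_price(bids, supply)

# Hypothetical demand curve: ($/allowance, quantity bid at that price).
bids = [(5.0, 10_000_000), (4.0, 10_000_000),
        (3.0, 10_000_000), (2.0, 10_000_000)]
supply, withdrawn, price = apply_ecr(bids, 35_000_000,
                                     trigger=3.0, max_withdrawal=10_000_000)
print(supply, withdrawn, price)  # 30000000 5000000 3.0
```

With this demand curve, 35 million allowances would clear at $2.00, below the trigger, so the mechanism withholds 5 million allowances and the price clears at exactly the $3.00 floor.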

In my opinion future reductions will not come as easily as past ones. The majority of the reductions in the RGGI region to date have occurred because of coal unit retirements and cutbacks in the use of residual oil, both driven by the economics of low natural gas prices. However, most of the coal facilities have already retired and residual oil emissions are about as low as they can go, so future reductions will have to displace more economic natural gas. The ECR approach addresses this uncertainty: if further reductions are easily obtained, then the ECR will lower the cap; if not, then the RGGI program is not put in danger of failing to meet an artificial cap.

Unfortunately RGGI seems to be headed toward a mandated declining cap. The last round of proposed policy cases did not even consider the possibility that further reductions may not be available, or may come slower than expected, primarily because all scenarios assume compliance with proposed state programs. While it is appropriate to include a scenario in which the state programs that propose to increase renewable power come to fruition, the fact is that those programs will necessarily increase costs. In my opinion a scenario that considers only the economics of conversion should also be included, to address the possibility that the political winds could change. In addition, a timing component should be included: industry has had to deal with licensing delays for years and there is no reason to expect that the infrastructure necessary for the required renewable deployments will be immune.

Finally, the issue of the compliance entity share of allowances relative to a declining cap should be addressed. The unintended consequence of further reductions is to exacerbate the cost implications and increase reliability risks. Ultimately, society will pay those costs, and I predict that the blame will fall on the generating companies and not on those who recklessly advocate more and more reductions. The existing program has worked well. The CCR has kept allowance costs from rising too much, and the ECR appears to be a viable mechanism not only to keep prices from falling too low but also to reduce the cap based on what is actually happening to the generating mix.

[1] There are those who would applaud increasing costs to the generating companies. To them I say that electricity is an essential driver of the health and welfare of our society. As with all essentials, electricity should be as reliable, secure, and cost-effective as possible. It is no less important than food and water. Somehow, someway those increased costs will impact society, most likely falling hardest on those who can least afford them.

Murphy Editorial “EPA Chief is wrong on the greenhouse gas effect”

On April 18, 2017 the Syracuse Post Standard published a featured editorial by Dr. Cornelius Murphy, Jr., “EPA Chief is wrong on the greenhouse gas effect”. I was given the opportunity to submit a rebuttal but was asked to keep it to the same length. That presents a problem because of the Baloney Asymmetry Principle, the third of my pragmatic environmentalist principles: the amount of information necessary to refute baloney is an order of magnitude bigger than the amount needed to produce it. This post rebuts his arguments.

Dr. Murphy’s editorial is an example of the straw man fallacy prominent amongst the critics of the current EPA. He describes the science behind the greenhouse effect and claims that Administrator Pruitt disagrees with those facts, to support his claim that Pruitt must not be allowed to provide direction and policy for CO2 mitigation. The Catastrophic Anthropogenic Global Warming (CAGW) hypothesis espoused by Dr. Murphy claims that mankind’s emissions of greenhouse gases are responsible for the recent observed warming of the globe and, unless stopped soon, will have catastrophic impacts on the planet. This post addresses the catastrophic component of global warming which, I believe, is not obvious by simply “looking around” as Murphy suggests.

Robust scientific theories and hypotheses rely on a combination of both empirical and correlative evidence. In the case of a theory that cannot be directly tested through a controlled experiment, we have to rely on long term observations and comparison of projections based on the theory against the observations. Empirical observations and correlative evidence for the CAGW hypothesis are not as obvious as Murphy implies.

I have no issues with Dr. Murphy’s description of the greenhouse effect. The basic greenhouse gas theory is not controversial. Carbon dioxide is a greenhouse gas.  It retards radiative cooling.  All other factors held equal, increasing the atmospheric concentration of CO2 will lead to a somewhat higher atmospheric temperature.  It is not controversial that CO2 has risen in the last century or that at least half of the increase was due to mankind. It is also obvious that average temperatures are increasing over that same period. Dr. Murphy said that Administrator Pruitt “doesn’t think that CO2 is responsible for heating our planet”, but I don’t think Mr. Pruitt would dispute any of the aforementioned facts.

However, those facts do not necessarily lead to catastrophe, and there is a healthy debate on most policy-relevant aspects of global warming. In the first place, the warming predicted from greenhouse gases holds all other factors equal, but all other things are never held equal in meteorology. With all else equal, doubling the concentration of atmospheric CO2 from its pre-industrial level would reduce outgoing infrared radiation by about 4 watts per square meter and the temperature of the atmosphere would increase about 1.2 deg C. Note that about half of this warming has already occurred, so clearly some of the observed warming is caused by this effect.
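Those two numbers come from standard back-of-the-envelope physics. The simplified forcing expression (5.35 times the log of the concentration ratio) and the roughly 3.2 W/m2 per degree no-feedback response are textbook values, not taken from the editorial:

```python
import math

def co2_forcing(concentration_ratio):
    """Simplified CO2 radiative forcing in W/m2 (Myhre et al. form)."""
    return 5.35 * math.log(concentration_ratio)

delta_f = co2_forcing(2.0)   # doubling pre-industrial CO2: ~3.7 W/m2 ("about 4")
planck_response = 3.2        # W/m2 per deg C, no-feedback response
delta_t = delta_f / planck_response

print(f"forcing: {delta_f:.1f} W/m2, warming: {delta_t:.1f} deg C")  # ~1.2 deg C
```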

The most recent warming period started before the recent rise in CO2, and there have been other warm periods of the same magnitude as the current one in the last two thousand years when anthropogenic CO2 was not the driver. At a minimum, the CAGW theory has to explain why the causes of the warming between 1900 and 1940, which is of the same order of magnitude as the current warming, are not playing a role now.

Dr. Murphy says that Administrator Pruitt should look at what is happening around him and cites several examples: “We have wild extremes in temperatures but annual average global temperatures continuing to rise. The temperatures of our Great Lakes are 6 degrees above average, extreme weather events challenge us all too frequently, and we experience mega droughts globally on a regular basis.” I address those points below.

I do not dispute that annual average global temperatures continue to rise. However, to me, if there is a valid concern about rising temperatures, then we should be able to find evidence that heat waves are increasing. The EPA climate change indicators web page for high and low temperatures lists several parameters associated with temperature. The heat wave index shows an overwhelming spike in the 1930s but no suggestion of a recent trend. There is a trend in the graph of the area with unusually hot temperatures, but I wonder how the development of urban heat islands was addressed in that data, so I am skeptical. I see nothing happening to warrant alarm.

His only quantitative claim is that “the temperatures of our Great Lakes are six degrees above average”, but that claim does not withstand scrutiny. According to EPA’s Climate Change Indicators web page on Great Lakes temperatures: “Since 1995, average surface water temperatures have increased slightly for each of the Great Lakes”, but that is nowhere near six degrees. The web site Great Lakes Statistics lists current temperatures relative to a period of record starting in 1992, and all five lakes are currently less than two degrees above the mid-April average.

Dr. Murphy says extreme weather events challenge us all too frequently, insinuating that they are getting worse, but, again, looking at the data indicates no cause for alarm. In recent testimony before the House of Representatives, Dr. Roger A. Pielke, Jr. addressed trends of extreme events in the United States. He noted that global weather-related disaster losses as a percentage of global GDP have been trending down since 1990; that there is no trend in hurricane landfall frequency or intensity; that the IPCC noted no evidence of a trend for floods; that US flood impacts are going down; and that there is low confidence in observed trends for hail or tornadoes.

Dr. Murphy says that we experience mega droughts globally on a regular basis, but Dr. Pielke quotes the IPCC: “there is low confidence in detection and attribution of changes in drought over global land areas since the mid-20th century”. If the Intergovernmental Panel on Climate Change concludes there are no trends in droughts, then the only way to interpret the regular basis comment is that this has been the case in the past and continues today. His statement is not wrong, but upon inspection it is not cause for alarm either.

Because it is impossible to run a controlled experiment on Earth’s climate (there is no control planet), the only way to “test” the CAGW hypothesis is through models. If the CAGW hypothesis is valid, then the models should demonstrate predictive skill. However, the models are not predicting temperatures well enough to meet that standard: they predict a sensitivity to CO2 two to three times greater than that supported by observations. Dr. Curry’s summary of the global climate models makes five points about the use of these models for this purpose:

  1. GCMs have not been subject to the rigorous verification and validation that is the norm for engineering and regulatory science.
  2. There are valid concerns about a fundamental lack of predictability in the complex nonlinear climate system.
  3. There are numerous arguments supporting the conclusion that climate models are not fit for the purpose of identifying with high confidence the proportion of the 20th century warming that was human-caused as opposed to natural.
  4. There is growing evidence that climate models predict too much warming from increased atmospheric carbon dioxide.
  5. The climate model simulation results for the 21st century reported by the Intergovernmental Panel on Climate Change (IPCC) do not include key elements of climate variability, and hence are not useful as projections for how the 21st century climate will actually evolve.

Finally, Dr. Murphy notes that Pruitt is not a scientist but an attorney. Although Dr. Murphy is a chemist and not a meteorologist like me, I don’t believe that a person’s background necessarily means much. Look at the evidence yourself: when you check the numbers and claims as I did, you can determine whether or not to believe whoever is making them. In this case I find little support for Dr. Murphy’s claims, but readers should decide for themselves.

I have not found sufficient evidence to convince me that CO2 mitigation efforts are appropriate at this time. While it is very likely that human activities are the cause of at least some of the warming over the past 150 years, the question is how much. There is no robust statistical correlation to indicate that CO2 is the primary driver, and the failure of the climate models outlined above demonstrates that the CAGW hypothesis is flawed.

I conclude that our children and grandchildren are not in imminent danger from CAGW and would be better served by investments to make society more resilient to observed extreme weather rather than by trying to mitigate CO2 emissions to prevent the speculative weather projected by flawed models. I believe Administrator Pruitt’s agenda to rein in the ill-conceived CO2 mitigation programs of the Obama Administration is appropriate. On the other hand, I do not agree with any plans to cut the climate monitoring and observing programs at EPA and elsewhere, and I support research into all the causes of climate change, not just anthropogenic causes. Ultimately, until a cheaper alternative to fossil fuels is available, society will continue to use them because of their tremendous benefits. If you believe that society should stop using fossil fuels, then research and development for alternatives is appropriate.