SOME THOUGHTS ABOUT POWER GRIDS


After cold weather messed with Texas, how worried should we be?

BEFORE I WAS asked to come develop a cybersecurity degree program at UA Little Rock, I worked in the power industry for 15 years, starting as a programmer right out of college. Pretty quickly I moved over to cybersecurity and was director of cybersecurity and critical infrastructure, which covered generation facilities, substations, and control centers.

The North American Electric Reliability Corporation, known as NERC, creates cybersecurity standards carrying fines of up to a million dollars a day for violations, and I spent a good portion of my power industry career on the committee that writes those standards. While my focus was on cybersecurity, through that work I got to see a lot of different operational environments across the U.S., giving me a better understanding of the power grid, its strengths, and some of its constraints. Prior to coming to academia, I also worked with Dr. Carolina Cruz out of the Emerging Analytics Center and Dr. Alan Mantooth at the SEEDS Center in Fayetteville. SEEDS, which is an acronym for Secure Evolvable Energy Delivery Systems, is one of the Department of Energy’s R&D centers. I was the industry chair, doing research on power grid cybersecurity issues.

I would say that most people don’t realize just how resilient the overall grid is. The majority of Americans interact with the grid at the local utility level, so they don’t know what goes on behind the scenes to keep the engine up and running. It’s a huge machine, one of the largest ever built, and every generator, for the most part, is synchronized. So there’s a lot of inertia built into that machine to generate power, but there are also a lot of options and contingency plans, both for the market and for reliability.

Many modern-day operations of the power grid were shaped by the 2003 Northeast blackout, which began when a software glitch in the alarm system at a power company’s control room in Akron, Ohio, failed to alert operators that they needed to redistribute load after overloaded transmission lines came into contact with foliage. The result was a cascading blackout across the Northeastern U.S. and into Canada.

That incident heightened awareness of just how fragile the machine can be. But it also became an opportunity to institute changes, and now we have organizations around the U.S. monitoring the grid 24/7. They have all sorts of contingencies in place, and the fact that we haven’t had another huge outage in almost two decades is a testament to how well it works.

In terms of our own region, the situation in Arkansas is very different from that of Texas. It’s all a matter of engineering, and since Texas operates its own power grid, that grid is affected by Texas engineering decisions. In designing any power plant, there are different constraints that need to be considered, including temperature constraints on getting gas to the plant. The question is always, “Do we invest in those upgrades?” If I’m in the Texas climate, then maybe I don’t winterize my plant. But if I’m farther north, absolutely I do.

Arkansas, on the other hand, is part of the Eastern Interconnection, one of the two main electrical grids in the North American power transmission grid, so we have more options in terms of power generation, and we certainly have less load to worry about than they do in Texas.

*

ALL OF THAT said, there are things we could and should be doing going forward. While the power grid is one of the most resilient infrastructure systems ever built, there’s no real guarantee of uptime. Yes, the ability of grid operators to perform coordinated “rolling blackouts” demonstrates the depth of contingency planning built into the system, but a prolonged outage such as the one in Texas shows that failures can last beyond our worst-case contingency plans.

In Texas the system still worked, because they didn’t have a full-scale grid collapse. There were reports that they were a few seconds away from it, but it didn’t happen. If it had collapsed, every breaker throughout the entire region would’ve tripped off, and they would’ve had to send people out there to manually turn everything back on. That in itself would be a real challenge to the operators, because they would have to balance load and generation as the power comes back up. So as power comes to one neighborhood, they would have to make sure enough generation is available before bringing on the next neighborhood. It’s a very big deal if the power grid does go into full collapse.
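To make that balancing act concrete, here is a toy sketch of restoring load block by block, bringing on more generation before each new “neighborhood” is picked up. Every number and name in it is made up for illustration; real black-start procedures are far more involved.

```python
# Toy sketch (not a real grid-restoration tool): restore load block by block,
# only picking up the next "neighborhood" once enough generation is online.
# All figures are made-up assumptions for illustration.

neighborhoods_mw = [12, 8, 15, 20, 10]   # load of each neighborhood, in megawatts (assumed)
generators_mw = [25, 30, 20]             # capacity of each generator as it comes online (assumed)
reserve_margin = 1.10                    # keep ~10% headroom before adding more load

online_capacity = 0.0
served_load = 0.0
gen_iter = iter(generators_mw)

for load in neighborhoods_mw:
    # Bring on more generation until the next block of load can be carried with margin.
    while (served_load + load) * reserve_margin > online_capacity:
        try:
            online_capacity += next(gen_iter)
            print(f"Generator online, total capacity now {online_capacity:.0f} MW")
        except StopIteration:
            raise RuntimeError("Not enough generation to restore the remaining load")
    served_load += load
    print(f"Restored {load} MW neighborhood; serving {served_load:.0f} MW "
          f"of {online_capacity:.0f} MW available")
```

The point of the sketch is the ordering: generation has to lead load the whole way up, which is why a full restart takes so long.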

Data centers have the advantage of geographic diversity, and most are designed to operate for at least a week on power supplied by diesel generators. But those generators must be regularly tested, coordinated with UPS (uninterruptible power supply) systems, and eventually refueled. It’s a difficult and costly process. Once you’re off commercial power and you’re keeping those uninterruptible power supply batteries charged, it’s a complicated task to sustain that for very long. It also puts a lot of stress on the system.
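To get a feel for why refueling becomes the hard part, here is a rough back-of-the-envelope sketch. Every figure in it is an assumed number for illustration, not data from any real facility.

```python
# Back-of-the-envelope sketch of why "run on diesel for a week" becomes a refueling problem.
# Every figure here is a made-up assumption, not data from any real facility.

generator_load_kw = 2000          # assumed average load carried by the generators
fuel_burn_gal_per_kwh = 0.07      # rough assumed diesel burn rate
tank_capacity_gal = 10_000        # assumed on-site fuel storage

gallons_per_hour = generator_load_kw * fuel_burn_gal_per_kwh
hours_on_tank = tank_capacity_gal / gallons_per_hour

print(f"Burn rate: {gallons_per_hour:.0f} gallons/hour")
print(f"One tank lasts about {hours_on_tank:.0f} hours ({hours_on_tank / 24:.1f} days)")
print(f"A full week needs roughly {gallons_per_hour * 24 * 7:,.0f} gallons delivered")
```

With these assumed numbers the on-site tank runs dry in about three days, so keeping a site up for a week means fuel trucks arriving during the same regional emergency that took the power out.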

From my viewpoint as an IT and cybersecurity professional, 24/7 regional power availability isn’t the primary problem to solve for computing infrastructure. There’s more resiliency in Cloud computing, because in the case of a major data center failure, your server infrastructure can pick up at another location. The Cloud keeps a single regional catastrophe from taking you down.

So that’s part of the answer. It doesn’t help us with heat and cold and all the other contingencies, but if we’re only talking about IT, then the Cloud reduces our dependency on strictly regional computing resources.
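Here is a minimal sketch of that regional-failover idea. The region names and the health-check endpoint are hypothetical, and a real deployment would lean on a cloud provider’s load balancing, DNS failover, or replicated services rather than hand-rolled code.

```python
# Minimal sketch of regional failover: serve from the first healthy region.
# Region names and the /healthz endpoints are hypothetical examples.

import urllib.request

REGIONS = [
    ("us-south-1", "https://app.us-south-1.example.com/healthz"),  # primary (hypothetical)
    ("us-east-2", "https://app.us-east-2.example.com/healthz"),    # standby (hypothetical)
]

def check_health(url: str, timeout: float = 3.0) -> bool:
    """Return True if the region's health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        return False

def pick_active_region() -> str:
    """Route to the first healthy region; a regional outage just shifts traffic."""
    for name, health_url in REGIONS:
        if check_health(health_url):
            return name
    raise RuntimeError("No healthy region available")

if __name__ == "__main__":
    print("Serving from:", pick_active_region())
```

The design choice is simply that the computing workload isn’t tied to any one region’s power, which is the resiliency the Cloud buys you.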

And yet rapid changes in our culture are posing challenges that I don’t think we’ve quite grappled with. I’m talking about IoT, the Internet of Things.

Twenty-five years ago, if there’d been a two-week power outage in Texas, things would have felt a lot different. Back then, most people didn’t need to get on a computer to actually perform work. But we’re fast becoming dependent on automation and Internet connectivity, and as we deploy more and more IoT equipment and devices, we’re creating a computing model that’s very different from what we currently have. It’s even different from Cloud computing, where we have our critical computing resources in a dedicated facility with multiple different options for resiliency.

With IoT, we get the convenience of computing everywhere, which is great. But all these things require power, and it’s not a problem we can solve with batteries—there are just too many “things” to power. So we need to figure out how to keep all these devices going in a way that doesn’t solely rely on commercial power.

Dependency on power is definitely a big deal, and as we go further into this computing model—from manufacturing to city services to automated driving—we need to figure out what happens when something like a Texas prolonged outage occurs, and whether or not we can sustain that.

It’s an engineering problem, and as cool as today’s technology is, we have to ask ourselves what engineering constraints we need to be planning for, and are they resilient enough to be fully automated? Because a lot of times what happens with automation is, we tend to get lazy and just go, “Okay, we’re done with that. We don’t have to do that task anymore.” But when it fails, we either have to go back to performing it manually, or we need to have a backup plan in place.

*

I TEND TO think about the power grid mostly from the perspective of cybersecurity, of course. We as a society rely so much on electricity that any adversarial nation-state is bound to go after the power grid. Once you breach it, you have lots of options. That’s why the Russians are trying to get into our systems: not because they want to wreak havoc like a terrorist, but because they want options.

Cyberattacks are different, and they can impact the grid, given the right scenario, but it’s much more difficult than it may seem. The way infrastructures operate in the U.S., each generator, each generation facility, each control center is segmented from the others. There are some connections, but they don’t share a common cyber connection, which makes it very difficult to attack a lot of different resources at once.

And if you go after a single resource in the grid, you don’t get a lot of bang for your buck, because there are so many different options. If you take out a substation, the load can be back-fed from another substation. If you take out a generation facility, you would have to take out several at the same time to bring about grid instability.

So here’s my bottom line: Don’t believe everything you see in a Jason Bourne movie.

 


Philip Dale Huff, Ph.D., is an assistant professor at the Emerging Analytics Center, UA Little Rock.
Original post from our partner, the Arkansas Center for Data Sciences:
www.acds.co/post/guest-column-march-2021
