Frontier Grid Tech - How Robotics, Cloud Infrastructure and SaaS are Transforming the Power Sector

Leo Trudel | October 2022

Technology is transforming the power sector. Successive waves of innovation have enabled utilities to improve their operations, customer interactions and overall performance. As we have explored in this series, the Grid Tech market is set to expand significantly in the coming decade and innovative technology deployments are ramping up. In this piece, we turn our attention to some of the larger technology trends that are impacting other sectors and beginning to take hold in the utilities space. For a full background on this Grid Tech series, download our report, “The Decade of Deployment.”

To highlight these trends in context, our Grid Tech matrix illustrates, at the highest level, recent developments. We are increasingly seeing these technologies embedded in solution sets, each moving at a different pace and with a different impact. As power companies develop multi-year technology roadmaps, understanding these trends and how they may provide opportunity will serve them well. For example, AI/ML technologies now feature in over 10% of the use cases we are tracking, with significant improvements to traditional utility applications. Overall, this is a unique moment in the power sector, where ambitious climate goals can be accelerated by deploying mature Industry 4.0 technologies that have found applications in other industries. To further illustrate these trends, in this piece we will explore three areas and how adoption of robotics, cloud infrastructure, and software-as-a-service (SaaS) is impacting the sector.

Robotics – Timely and Efficient Automation  

Robotic applications are presenting an efficiency opportunity for the energy sector. Utilities are testing a suite of mechanical and generally mobile devices that can perform repetitive tasks and dynamic movements. Traditional utilities have relied on manual labor to perform most of their O&M, but in a bid to bring down costs and generate operational improvements, utilities are slowly introducing robotics to address their labor requirements more effectively. In addition to performing tasks, robots are particularly effective at capturing data which is useful for software applications that leverage AI and machine learning capabilities.

Across the sector today, we are seeing four primary robot types take hold, colloquially known as drones, subs, climbers, and dogs. Each type is differentiated by its mobility, which determines the range of interactions each model can perform.

In addition to these devices, we are also seeing specialty robots emerge that are purpose-built to perform a single task or specific motion. Applications include transmission line robots that roll along transmission lines, transmission insulator robots that climb electrical insulators, underground cable robots designed to maneuver through confined spaces, and transformer robots that swim through transformer oil.

With improvements in battery technology, advanced computing, and communication infrastructure, robots have become much more mobile and capable of performing nuanced, situation-based work. Utilities have little use for the stationary robots used on assembly lines, but, as asset-heavy organizations that require millions of physical interactions and datapoint captures every day, utilities can leverage mobile robots to unlock value. There are dozens of potential use cases that could provide value to electric utilities across generation, transmission, and distribution. Broadly speaking, we are seeing four sweeping categories that account for robot-based use cases in utilities, as highlighted by the graphic below.

For a robot to perform work or collect data, it must be equipped with one or more payloads. A payload is the sensing equipment carried by a robot that allows it to interact with its environment. Payloads can be broken down into three categories: information collection, specimen collection, and complex physical interaction. Information collection is by far the most common of the three and includes sensors such as LiDAR or those that can detect visible and nonvisible light spectra. Specimen collection includes instrumentation that can capture physical samples. Complex physical interaction payloads include robotic arms that can perform dynamic tasks, such as opening doors.

Utilities stand to benefit significantly from a dedicated robotics focus at this stage. When building out business cases, robotics-based use cases can support a wide breadth of business objectives:

  • Revenue generation (e.g., using drones to create digital twins of transmission towers and AI to inventory existing attachments and identify available space for leasing new ones).

  • Reduced expenses through better O&M (e.g., using drones to image transmission infrastructure and AI to perform asset inspections).

  • Worker safety (e.g., using a dog robot to handle hazardous nuclear waste or inspect a high-voltage substation).

  • Information retention (better data capture in environments of high employee turnover).

  • Organizational evolution (infrastructure must become more complex and digitally enabled to accommodate the energy transition, and robotics can add significant value if deployed correctly).

Cloud Infrastructure – Enabling a More Dynamic Grid Architecture

Cloud infrastructure, for the scope of this article, refers to remote server farms that host cloud-based software, or software-as-a-service. For practical purposes, cloud infrastructure is differentiated from “on-prem” servers in that the latter can communicate using closed protocols, whereas cloud servers are accessible over the internet. For most utilities, core IT system infrastructure is owned and managed in-house, and utilities have been reluctant to fully migrate to the cloud for two primary reasons. First, security concerns make utility executives hesitant to move sensitive data off private networks; in general, utility executives and cybersecurity professionals perceive that containing all sensitive and non-sensitive data to local networks reduces their chances of being hacked. Second, by and large, utilities cannot rate-base cloud expenditures, but they can pass owned IT infrastructure costs to their customers. Although paying for cloud infrastructure is substantially less expensive than owning and operating proprietary server stacks, some utilities are hesitant to migrate to the cloud because it would impact their margins. Traditional regulation is evolving around cloud infrastructure, as we highlighted in an earlier piece, with examples across NY, PA, and AL, and we expect this trend to broaden in the medium term, making it easier for utilities to justify cloud migration.

Tangentially, utilities are in the early days of integrating edge processing capabilities. Edge processors are not cloud services, as they are managed in-house and have very limited functionality; however, they are distributed across asset infrastructure (as opposed to residing with on-prem servers) and often communicate with cloud services using open protocols, making them theoretically more vulnerable to hacking. Edge processors perform basic computations and/or data reduction work. This allows deployed applications to make faster decisions, both by processing data onsite and by using less bandwidth for applications that require server processing, which reduces latency.
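As a rough illustration of the data reduction work described above, the sketch below condenses a window of raw sensor readings into a single compact record before anything is sent upstream. It is a minimal sketch only; the field names, the voltage readings, and the alarm threshold are all assumptions, not details from any real utility deployment.

```python
from statistics import mean

def edge_reduce(readings, threshold=240.0):
    """Summarize a window of raw voltage readings into one record.

    Instead of streaming every sample to a cloud service, an edge
    processor can transmit one summary per window, plus any samples
    that breach an alarm threshold (illustrative values only).
    """
    return {
        "count": len(readings),
        "mean": mean(readings),
        "max": max(readings),
        "min": min(readings),
        "alarms": [r for r in readings if r > threshold],
    }

# One window of hypothetical readings: five samples collapse into
# one summary record, cutting bandwidth while preserving the alarm.
window = [229.8, 230.1, 230.4, 241.2, 230.0]
print(edge_reduce(window))
```

The summary travels upstream in place of the raw stream, which is what lets cloud-hosted applications react faster while consuming less bandwidth.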

However, as we look over the next decade, utilities will increasingly move applications to distributed processing paradigms, including both cloud and edge. Three quarters of US utilities already have some applications in the cloud, though most of their sensitive data continues to be hosted on-prem and constrained to private networks. As the regulatory structure evolves, so will the nature, speed, and type of applications that utilities bring online.

Any application that uses software can be deployed to a cloud server. The key advantages of cloud servers are driven by economies of scale. Dedicated server farms tend to be more efficiently run than servers managed by IT professionals with numerous other responsibilities to their organization. Server farms are also able to amortize their equipment more efficiently through containerization, the practice of partitioning physical servers into many virtual servers. This is useful because different software applications require different server environments to run correctly, and non-containerized servers generally support only one environment at a time. Virtual servers, or containers, can support numerous environments on a single server and thus a wider variety of applications at one time. Server farms can also remain at full capacity around the clock: as one client reaches the end of their workday and starts scaling back their server usage, server farms can replace that client’s containers with another’s in a different time zone who is scaling up for the workday. As a result, servers on server farms run much closer to full capacity and are used more efficiently.
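The time-zone argument above reduces to simple arithmetic. The sketch below uses two hypothetical clients with workdays offset by twelve hours; all demand numbers are invented purely to show why a shared farm needs less hardware than two dedicated deployments.

```python
# Hourly compute demand (arbitrary units) for two hypothetical clients
# whose workdays are offset by 12 hours -- e.g., one in the US and one
# in Asia-Pacific. Numbers are illustrative only.
client_a = [10 if 8 <= h < 18 else 1 for h in range(24)]
client_b = [10 if 8 <= (h + 12) % 24 < 18 else 1 for h in range(24)]

def required_capacity(*demand_curves):
    """Peak of the summed hourly demand: the capacity a shared farm
    must provision when both clients' containers share its servers."""
    return max(sum(hour) for hour in zip(*demand_curves))

# Dedicated servers: each client provisions for its own peak.
dedicated = max(client_a) + max(client_b)
# Shared farm: one client's idle hours absorb the other's busy hours,
# so the combined peak is far below the sum of individual peaks.
shared = required_capacity(client_a, client_b)
print(dedicated, shared)
```

Because the two demand curves rarely peak together, the shared farm serves the same workloads with roughly half the provisioned capacity, which is the efficiency the article attributes to around-the-clock container packing.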

Contrast this with a proprietary server scheme, where servers are likely to run well below their 100% threshold during work hours and at a small fraction of total capacity outside of them. Additionally, in-house IT teams face the added challenge of managing situations where computing requirements exceed capacity: at a server farm, admins can simply spin up new containers on different servers to keep things running smoothly, while a utility would have to contend with added latency, system crashes, and reboots. Finally, server farm hardware and software are more likely to be kept up to date. For these and other reasons, cloud services run far more efficiently than locally maintained server stacks, which reduces the relative cost of cloud vs. on-prem.

In addition to cost advantages, cloud services provide utilities with more computing options and a more robust set of software capabilities. Most software developers are cloud-native, and vendors are increasingly reluctant to deploy to local servers, as discussed in the following section. Usage of cloud infrastructure in utilities is growing, and it will grow significantly if regulatory bodies allow it to be added to utilities’ rate bases. In terms of impact on the Grid Tech landscape, cloud technologies will serve as a critical component for enabling the Grid Tech capabilities that the energy transition requires, such as Internet of Things (IoT), Artificial Intelligence (AI), and Extended Reality (XR).

Software-as-a-Service (SaaS) – Better Value

Software-as-a-Service encompasses essentially all cloud software, which is generally monetized through monthly or annual subscriptions. While the utility industry has yet to migrate from local servers to SaaS solutions in a meaningful way, most other industries have, and the software community has adjusted its pricing models accordingly. As SaaS began to take hold in the early 2000s, pricing remained consistent with traditional models, with revenue adhering to a per-seat structure. Over the last decade, SaaS companies have become better at making their products stickier, and pricing models are trending toward value-based subscriptions as opposed to per-user pricing schemes.

This emerging shift in how software is priced is based in part on capabilities unique to SaaS: unlike locally installed software, which gives vendors essentially no ability to monitor customer utilization, all computing in SaaS occurs on the vendor side and can be easily tracked. Vendors can run analytics to estimate not only how often their products are being used, but also which users are using them, which use cases are popular, and how much value customers are likely deriving.
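The vendor-side telemetry described above can be as simple as rolling an event log up into per-feature and per-user counts. The sketch below is a hypothetical illustration; the user names, feature names, and event format are all invented.

```python
from collections import Counter

# Hypothetical vendor-side event log: (user, feature) pairs, capturable
# because all computation happens on the vendor's servers.
events = [
    ("alice", "outage_map"), ("alice", "outage_map"),
    ("bob", "load_forecast"), ("alice", "load_forecast"),
    ("carol", "outage_map"),
]

def usage_summary(events):
    """Roll raw events up into per-feature counts and the active user
    set -- the kind of signal that feeds a value-based pricing model."""
    by_feature = Counter(feature for _, feature in events)
    active_users = {user for user, _ in events}
    return by_feature, active_users

features, users = usage_summary(events)
print(features.most_common(1))  # the most-used feature and its count
print(len(users))               # number of active users
```

From aggregates like these, a vendor can estimate which capabilities drive renewal and price the subscription against that delivered value rather than against seat counts.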

A second driver of value-based pricing is the SaaS business model. SaaS companies generally have low per-user costs; most can spin up new client instances very quickly, and the marginal cost of adding new customers is extremely low, with gross margins frequently exceeding 90%. The primary cost drivers for SaaS companies are software development, customer acquisition, and, in some cases, installation (the cost of which is oftentimes charged upfront to the client). For SaaS firms to cover their overhead and customer acquisition costs, they need to ensure that most of their existing customers renew their contracts and expand their annual spend. Value-based pricing is a common solution: by ensuring that customers are always getting value, SaaS firms reduce the risk of losing customers and having to reinvest in customer acquisition.
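The margin and renewal logic above reduces to back-of-the-envelope arithmetic. The sketch below uses invented numbers (not figures from the article) to show how high gross margins and low churn interact with customer acquisition cost.

```python
def saas_unit_economics(arr_per_customer, cogs_per_customer, cac, churn_rate):
    """Back-of-the-envelope SaaS metrics: gross margin, years to pay
    back customer acquisition cost, and customer lifetime value.
    All inputs are illustrative assumptions."""
    gross_profit = arr_per_customer - cogs_per_customer
    gross_margin = gross_profit / arr_per_customer
    payback_years = cac / gross_profit
    # Expected customer lifetime is roughly 1 / churn_rate years.
    lifetime_value = gross_profit / churn_rate
    return gross_margin, payback_years, lifetime_value

margin, payback, ltv = saas_unit_economics(
    arr_per_customer=100_000,  # annual recurring revenue per customer
    cogs_per_customer=8_000,   # hosting/support cost to serve them
    cac=120_000,               # cost to acquire one customer
    churn_rate=0.10,           # 10% of customers lost per year
)
print(f"{margin:.0%}")  # 92% -- consistent with the >90% margins cited
```

With these assumed numbers, acquisition cost takes more than a year of gross profit to recover, which is exactly why retention and expansion, reinforced by value-based pricing, dominate the economics.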

Utilities are developing their cloud and SaaS strategies at present. While certain applications such as ADMS may well remain on-prem in the near term, we expect to see growing SaaS adoption in VPP, Mobile Workforce Management, and other analytics applications. As such, utilities have a unique opportunity to take advantage of the customer-friendly pricing models that a growing number of SaaS companies provide. By doing so, they capitalize on an opportunity not only to migrate to the cloud, but also to ensure that their software subscriptions provide continuous value at scale, improving their margins and investment payback time. Usage of value-based pricing is expected to increase as SaaS companies improve their understanding of how utilities derive value from their software products.

Managing Frontier Technology Complexity

Over the course of the next decade, electric utilities will increasingly turn toward digital solutions that provide automation and decision support. Use of robotics, advanced digital capabilities, cloud infrastructure, and SaaS will carry the industry from its analog roots toward new digital frontiers. Additional trends we expect to mature this decade include the adoption of distributed ledgers, quantum computing, extended reality (XR), better customer integration, and orchestration of different types of grid assets, to name a few. These changes will drive ROI for utilities while ensuring reliability for customers and improved environmental stewardship.

It is critical for the power sector to identify the most valuable technology trends, create high-value partnerships, and deliver at scale. The first step in this process is identifying commercially available solutions and viable technology partners. Readers can access more information on the Grid Tech market and the Grid Tech 150 in our research hub and dive deeper in our complementary Grid Tech report, The Decade of Deployment.
