Real-time Machine Learning at the edge of power systems

The Monash Smart Energy City project provides a lens into the Clayton campus of Monash University, where distributed energy resources (DERs) appear as if they were instruments in a research laboratory. The project has transformed the campus into a research environment for experiments on the “smarts” that coordinate DERs, bridging the gap between operational practice and research.

An outcome of the project is the Distributed and Intelligent Power Systems (DIPS) platform (see our recent announcement). DIPS brokers the large volume of data from sensors in a dynamic power environment, enabling predictive analysis and archiving for later use. 

This blog is about the pilot DIPS experiment, which employs machine learning (ML) to forecast and estimate grid conditions for improved control and operational actions. Since machine learning can detect complex and abstract features directly from data without human biases, we wanted to explore a range of ML approaches to find the one most suitable for forecasting at the edge of critical power systems.

Figure 1: DIPS architecture with ML use case

ML-influenced forecasting at the power edge 

How frequently do we need to watch an AC power line so that we can react to stability or power quality problems? Our PMUs measure the electrical signals at a specific location on a power line at the microsecond scale, aggregating measurements into reports around one hundred times a second. It is worth geeking out, just for a moment, to appreciate the scale of data that the paradigm shift from base-load to two-way power will cause. If we were to measure and store every PMU measurement as JSON, this would require roughly 400 MB per channel of each sensor per day. Since we work with three phases for both voltage and current, this means six channels, or 2.4 GB for every sensor every day. This mounts up quickly. The scalable approach is to keep the data out near the PMU, centralising only the data that is important for future research and for accurate monitoring of grid stability.
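The arithmetic behind these figures is simple to sanity-check. As a back-of-envelope sketch (the ~46 bytes per JSON-encoded report is an assumed average chosen to match the figures above; real report sizes will vary):

```python
# Back-of-envelope check of the PMU data volumes quoted above.
REPORTS_PER_SECOND = 100            # PMU reporting rate
SECONDS_PER_DAY = 24 * 60 * 60
BYTES_PER_JSON_REPORT = 46          # assumed average size of one JSON-encoded measurement

bytes_per_channel_per_day = REPORTS_PER_SECOND * SECONDS_PER_DAY * BYTES_PER_JSON_REPORT
channels = 6                        # three phases x (voltage + current)
bytes_per_sensor_per_day = bytes_per_channel_per_day * channels

print(f"{bytes_per_channel_per_day / 1e6:.0f} MB per channel per day")   # ~397 MB
print(f"{bytes_per_sensor_per_day / 1e9:.2f} GB per sensor per day")     # ~2.38 GB
```

A hundred such sensors would centralise roughly a quarter of a terabyte per day, which is exactly the deluge the edge-first design avoids.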

Thus we feed the data generated by a PMU directly into an ML algorithm, which requires an industrial computer attached directly to the PMU. We call this a DIPS Edge device. From a middleware or cloud perspective, DIPS orchestrates work and data between DIPS Edge devices (the purposefully decentralised facets) and the DIPS Cloud (the purposefully centralised facets). Traditional power systems are centralised, whilst the web 3.0 era of technology will be decentralised. We need to take society, including our researchers and their partners, on the journey from one to the other and ultimately help the industry find the sweet spot.

The purpose of the ML algorithm is to convert the raw PMU data into information for forecasting. Our approach communicates the reduced but quality-assured forecasting data to the DIPS Cloud for archiving purposes, thus eliminating the data deluge problem! Forecasting involves projecting the (electrical) signal into the future. When we see a significant deviation between the forecast and reality, it may indicate an anomaly or a change in the grid.
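The forecast-versus-reality comparison can be sketched in a few lines. This is an illustrative toy, not the DIPS implementation; the relative threshold of 5% is a hypothetical choice:

```python
import numpy as np

def deviation_anomalies(observed, forecast, threshold=0.05):
    """Flag samples where the forecast deviates significantly from reality.

    Deviation is measured relative to the RMS of the observed signal, and
    samples exceeding the (hypothetical) threshold are flagged as anomalies.
    """
    observed = np.asarray(observed, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    scale = np.sqrt(np.mean(observed ** 2))          # RMS as a reference scale
    deviation = np.abs(observed - forecast) / scale
    return deviation > threshold

# A steady signal with one large excursion: only the excursion is flagged.
observed = np.array([1.00, 1.01, 0.99, 1.50, 1.00])
forecast = np.array([1.00, 1.00, 1.00, 1.00, 1.00])
print(deviation_anomalies(observed, forecast))       # [False False False True False]
```

In production the interesting part is what happens next: the flagged windows, rather than every raw sample, are what justify shipping data to the cloud.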

Figure 2: DIPS ML forecasting architecture

The model uses convolutions to analyse time and frequency features at different resolutions. These features, along with other statistical characteristics, are used to create the forecast at each resolution.
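To make the multi-resolution idea concrete, here is a minimal sketch. The production model learns its convolution kernels; the fixed moving-average kernels and window sizes below are purely illustrative stand-ins:

```python
import numpy as np

def multi_resolution_features(signal, windows=(4, 16, 64)):
    """Extract simple statistics from a signal smoothed at several resolutions.

    Each window size corresponds to one 'resolution': small windows keep
    fast features, large windows expose slow trends.
    """
    signal = np.asarray(signal, dtype=float)
    features = {}
    for w in windows:
        kernel = np.ones(w) / w                          # fixed smoothing kernel
        smoothed = np.convolve(signal, kernel, mode="valid")
        features[w] = {"mean": smoothed.mean(), "std": smoothed.std()}
    return features

t = np.linspace(0, 1, 500)
signal = np.sin(2 * np.pi * 50 * t)                      # 50 Hz tone as a stand-in
feats = multi_resolution_features(signal)
```

A learned model replaces the averaging kernels with trained filters and feeds the per-resolution features into a forecasting head, but the structure is the same.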

In terms of accuracy, our pilot’s forecasting achieves ~1.3% Error Vector Magnitude (EVM) for current signals and ~0.78% EVM for voltage signals, which is encouraging given the PMU itself exhibits ~1.6% EVM.
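For readers unfamiliar with the metric, EVM is the RMS of the error vector relative to the RMS of the reference signal, expressed as a percentage. This is the standard textbook definition; the exact normalisation used in the pilot may differ:

```python
import numpy as np

def evm_percent(reference, measured):
    """Error Vector Magnitude: RMS error relative to RMS reference, in percent."""
    reference = np.asarray(reference, dtype=complex)
    measured = np.asarray(measured, dtype=complex)
    error_rms = np.sqrt(np.mean(np.abs(measured - reference) ** 2))
    reference_rms = np.sqrt(np.mean(np.abs(reference) ** 2))
    return 100 * error_rms / reference_rms

# A forecast off by 1% everywhere yields an EVM of 1%.
ref = np.exp(1j * np.linspace(0, 2 * np.pi, 100))        # unit-magnitude phasor sweep
print(evm_percent(ref, ref * 1.01))                      # ≈ 1.0
```

On this scale, a forecast EVM (~0.78% for voltage) comparable to the PMU's own measurement error (~1.6%) means the model is about as trustworthy as the sensor itself.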

Figure 3: To understand grid stability we need to observe current and voltage magnitude and angle. For simplicity, here we show only the current magnitude at a point in time. The observed current magnitude is blue and the model’s forecast is green. During training, the model also aims to reproduce the original signal (backwards in time) which is shown in grey.

In addition to decentralisation, we introduced a DevOps methodology for developing and deploying ML models. This includes establishing a digital twin that mirrors the production environment in a synthetic/virtual world. Production and the digital twin share the same CI/CD process, thus translating web-scale (digital) engineering into the energy/building space.

AUTHORS
Mitchell Hargreaves, Junior Deep Learning Engineer, Data Futures Institute
Dr Ayesha Sadiq, Research Software Specialist, Monash eResearch Centre
Sharnelle Lai, Marketing Officer, Monash eResearch Centre
Dr Steve Quenette, Deputy Director, Monash eResearch Centre
Dr Reza Razzaghi, Senior Lecturer, Department of Electrical and Computer Systems Engineering