
PhD Preliminary Oral Exam – Shashini De Silva

Towards Secure Data Analytics for Power Grid

We develop secure data analytics to mitigate the impact of PMU data falsification on machine learning algorithms in power systems. We study two phases of this problem: (i) when the mathematical model for the adversarial data is known, and (ii) when certain underlying characteristics of the vulnerability levels of the data entries are known.

In the first phase, we present a sparse error correction framework to treat PMU measurements that are potentially corrupted by a GPS spoofing attack. Here we exploit the sparse nature of a GPS spoofing attack, namely that only a small fraction of PMUs are affected. We first present attack identifiability conditions (in terms of network topology, PMU locations, and the number of spoofed PMUs) under which data manipulation by the spoofing attack is identifiable. These conditions have important implications for how the locations of PMUs affect their resilience to GPS spoofing attacks. Furthermore, to correct spoofed PMU data effectively, we present a sparse error correction approach in which computation tasks are decomposed across smaller zones to ensure scalability. We demonstrate the efficacy of the proposed approach through numerical simulations with the RTS-96 and IEEE 300-bus test cases.

In the second phase, we consider the problem of making the classifier design resilient to test data falsification when vulnerability characteristics of the data entries are known a priori. In the literature, a few countermeasures have been proposed to defend machine learning algorithms against test data falsification, but a common assumption therein is that all feature entries of test data are equally vulnerable to falsification. However, the vulnerability levels of data entries can differ significantly depending on how data creation and transmission procedures are secured. In our work, we present an attack-cost-aware adversarial learning framework that accounts for the (potentially inhomogeneous) vulnerability characteristics of test data entries when designing an attack-resilient classifier. We aim to leverage this cost-aware learning framework to defend PMU-based machine learning rules in the power grid against PMU data falsification.
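To illustrate the idea of exploiting attack sparsity, the sketch below shows one common way to pose sparse error correction, assuming a linearized measurement model z = Hx + a + w in which the attack vector a is nonzero only on the channels of spoofed PMUs. This is a minimal illustration, not the formulation presented in the exam; the matrix H, the vector names, the problem sizes, and the noise tolerance eps are all hypothetical placeholders.

```python
# Minimal sketch of L1-based sparse error correction (illustrative only).
# Assumed model: z = H @ x + a + w, with a sparse (few spoofed PMU channels).
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)

n_states, n_meas = 10, 40                 # hypothetical system dimensions
H = rng.standard_normal((n_meas, n_states))  # placeholder measurement matrix
x_true = rng.standard_normal(n_states)

# Sparse spoofing error: only a few measurement channels are corrupted.
a_true = np.zeros(n_meas)
a_true[rng.choice(n_meas, size=3, replace=False)] = 5.0 * rng.standard_normal(3)

z = H @ x_true + a_true + 0.01 * rng.standard_normal(n_meas)

# Jointly estimate the state x and the sparse attack vector a by minimizing
# the L1 norm of a subject to a residual (noise) budget.
x = cp.Variable(n_states)
a = cp.Variable(n_meas)
eps = 0.05 * np.sqrt(n_meas)              # assumed noise tolerance
prob = cp.Problem(cp.Minimize(cp.norm1(a)),
                  [cp.norm(z - H @ x - a, 2) <= eps])
prob.solve()

spoofed = np.flatnonzero(np.abs(a.value) > 1e-3)
print("identified spoofed channels:", spoofed)
print("state estimation error:", np.linalg.norm(x.value - x_true))
```

The L1 objective promotes a sparse estimate of the attack vector, so the corrupted channels can be identified and the corresponding measurements corrected; when and how well this works depends on the identifiability conditions (topology, PMU placement, number of spoofed PMUs) discussed in the abstract.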

Major Advisor: Jinsub Kim
Minor Advisor: Xiaoli Fern
Committee: Raviv Raich
Committee: Eduardo Cotilla-Sanchez
GCR: Leonard Coop

Wednesday, June 3, 2:00pm to 4:00pm

Virtual Event
Event Type: Lecture or Presentation
Event Topic: Research
Website: https://oregonstate.zoom.us/j/9387151...
Organization: Electrical Engineering and Computer Science
Contact Name: Dakota Nelson
Contact Email: eecs-gradinfo@oregonstate.edu
