Power Load Classification and Prediction System Based on Deep Learning Algorithms

基于深度学习算法的电力负载分类与预测系统

Documentation Languages

English | Simplified Chinese | Tiếng Việt

This document was translated by OpenAI's GPT-5 model and is for reference only. Please refer to the Simplified Chinese version as the authoritative source.

Project Overview

      Driving energy conservation and emission reduction through the digital economy, and improving efficiency through high-tech innovation, have become major trends across industries and households. However, electricity is still used inefficiently in China, and the problem urgently needs to be addressed. According to the China Energy Conservation Product Certification Center, the standby power consumption of household appliances in an average urban family causes considerable energy loss every day; the standby consumption of the nation's 400 million televisions amounts to 2.92 billion kilowatt-hours per year, equivalent to one-third of the annual electricity generation of the Daya Bay Nuclear Power Plant. The need to discover and curb electricity waste through power monitoring is therefore increasingly evident, and non-intrusive monitoring is particularly important among these technologies. Our team has developed a Power Load Classification and Prediction System based on Non-Intrusive Technology, which uses artificial intelligence to help households and small businesses reduce electricity expenses and promote energy conservation.

      Based on the above background, we designed a power load classification and prediction system tailored for household users and small-scale enterprises. The classification function informs users in real time about the operational status of each electrical appliance, while the prediction function forecasts short-term energy consumption. Whether for homes or factories, such an intelligent system is indispensable for monitoring appliance operation and identifying unnecessary power usage. This not only helps save energy but also reduces electricity costs. Moreover, short-term power consumption forecasting allows enterprises to plan power usage more efficiently — especially under tiered electricity pricing schemes.

      After logging into the system, household users can view the real-time power consumption waveform, the total electricity consumed in the past 50 minutes, and the appliances currently running as determined by the model (e.g., “water heater,” “computer,” or combinations of multiple devices). Users can also view the AI model’s 20-minute future consumption prediction (the forecasted power curve).

Competitor Analysis

      As power systems grow in scale, existing intrusive load monitoring products can no longer meet demand and are being phased out. Current products on the market mainly employ traditional non-intrusive algorithms based on Decision Trees, Random Forests, and Support Vector Machines (SVM), or ensemble learning combinations thereof. Common load forecasting algorithms include Regression Analysis, ARIMA, and deep learning-based LSTM models. However, their prediction accuracy remains unsatisfactory and lacks specificity for power load scenarios — these methods merely address generic time-series classification or forecasting tasks without considering the unique properties of power data.

The following figure compares the predicted power curves obtained under identical training and testing datasets using the ARIMA algorithm, the LSTM algorithm, and our proposed system:

System Design Concept

  • AI Model: A cascaded model structure divides the task into two sequential modules: classification/decomposition and prediction. The classification module uses a deep fully connected neural network to process time-windowed features and recognize appliances; the decomposition task is handled as classification over appliance combinations. The prediction module, inspired by generative language models, is built around the Transformer architecture. The two deep neural networks are cascaded, connected through the appliance-type labels that the classifier passes to the prediction model.

  • System Architecture: The architecture follows a cloud-collaborative pattern of distributed applications and centralized computing, suitable for modern supercomputing platforms. It consists of edge devices (power meters), distributed backend and database systems, AI models deployed at the supercomputing center for classification and prediction, and a WeChat Mini Program as the client interface. This design helps alleviate latency issues caused by high concurrency in large-scale user requests.

  • Frontend and Backend Design: The frontend is developed using the WeChat Developer Tools, and waveform visualization is implemented via the ECharts line chart component. The frontend periodically polls the backend for updates. For performance, the backend service is built in C++17 with a single-Reactor multithreaded architecture. The server core includes modules for logging, thread pool management, I/O multiplexing, HTTP handling, buffer management, and blocking queues. HTTP requests are read using scatter-read techniques and parsed efficiently via finite-state machines and regular expressions.

Model workflow and system macro architecture (figures)

Model Details

  • Classification and Decomposition Model:
    Feature Engineering: The input data is first windowed and grouped based on power values. The number of samples in each group within a window is counted to form a frequency distribution. Each frequency vector is normalized by window length to generate feature vectors, which are then compressed using PCA.
    Model Design: A Fully Connected Neural Network (FNN) takes the feature vector for each window as input and outputs the one-hot encoded appliance category. A 3-layer FNN (28, 14, 7) is used for single-appliance classification, and another 3-layer FNN (84, 42, 21) handles power-label decomposition (21 appliance combinations).
    We compared the FNN with a GRU model and found their maximum accuracies to be 91% and 90%, respectively. The GRU, however, is more expensive at inference time because it models dependencies between windows. Treating each time window as independent lets the FNN focus solely on the features within that window and pass the resulting label to the prediction model's embedding layer, while the prediction model captures the temporal dependencies; each component plays its own role.
    For these reasons, we selected the fully connected architecture. The sketch below summarizes the feature pipeline, and the classification model structure is illustrated in the figure that follows:
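
A minimal sketch of the feature engineering described above. It assumes equal-width binning of the power values (the bin count is an assumption) and reuses the window length, step, and PCA dimensionality that appear in the training-script parameters later in this document; the layer activations in the classifier are also assumptions.

import numpy as np
from sklearn.decomposition import PCA
from tensorflow import keras

def window_histogram_features(power, window_len=800, step=800, n_bins=100, max_value=3000000):
    """Slice a 1-D power series into windows and turn each window into a
    frequency distribution over power-value bins, normalized by window length."""
    bins = np.linspace(0, max_value, n_bins + 1)
    features = []
    for start in range(0, len(power) - window_len + 1, step):
        window = power[start:start + window_len]
        counts, _ = np.histogram(window, bins=bins)
        features.append(counts / window_len)             # normalize by window length
    return np.asarray(features)

# Example usage (power is a 1-D NumPy array of sampled power values):
# features = PCA(n_components=67).fit_transform(window_histogram_features(power))

# The 3-layer fully connected classifier (28, 14, 7) for single-appliance
# recognition; layer sizes follow the text, activations are assumptions.
classifier = keras.Sequential([
    keras.Input(shape=(67,)),                            # PCA-compressed feature vector
    keras.layers.Dense(28, activation="relu"),
    keras.layers.Dense(14, activation="relu"),
    keras.layers.Dense(7, activation="softmax"),         # one-hot appliance category
])
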



  • Power Load Prediction Model:
    For time-series forecasting of electric power, the model processes the data as follows: first, frame segmentation is performed, and then each frame is windowed. Each time window produces a feature vector, and multiple feature vectors form a feature matrix. These vectors consist of transient features (high-frequency variations) and steady-state features (low-frequency stability).
    Transient features are derived from FFT amplitudes and phases, excluding the DC component since its mean information is already contained in steady-state features.
    Steady-state features include maximum, minimum, mean, RMS, and crest factor values — guided by electrical theory. Appliance category embeddings are incorporated into the steady-state vector through an Embedding layer.
    Inspired by generative language models, the feature matrix is treated as a “sentence embedding matrix.” It passes through 5–6 Transformer Blocks (we selected 6) followed by attention-based averaging to produce the prediction vector (analogous to token embeddings in language models). Finally, the transient part of the prediction vector (in frequency domain) undergoes inverse FFT to generate the time-series prediction of power consumption.
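
To make the per-window feature construction concrete, here is a minimal sketch following the description above; the concatenation order, the helper name, and the exact composition of the steady-state vector are illustrative assumptions rather than the project's actual code.

import numpy as np

def prediction_window_feature(window, category_embedding):
    """One feature vector per time window: FFT amplitudes and phases with the
    DC component excluded (transient part), steady-state statistics, and the
    appliance-category embedding produced by the Embedding layer."""
    spectrum = np.fft.rfft(window)[1:]                   # drop the DC bin
    transient = np.concatenate([np.abs(spectrum), np.angle(spectrum)])
    rms = np.sqrt(np.mean(window ** 2))
    # The text lists max, min, mean, RMS, and crest factor; the actual model uses a
    # 7-dimensional steady-state vector (static_vector_len=7 in the training script).
    steady = np.array([window.max(), window.min(), window.mean(),
                       rms, window.max() / rms])         # crest factor = peak / RMS
    return np.concatenate([transient, steady, category_embedding])

# Stacking the vectors of all windows in a frame row-wise gives the
# "sentence-like" feature matrix consumed by the Transformer blocks:
# feature_matrix = np.stack([prediction_window_feature(w, emb) for w in windows])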

The power prediction model structure is illustrated below:

Model Performance on Experimental Dataset

The dataset was collected from real electrical devices, totaling 2 million minutes of data with 1-minute granularity. The prediction task shown below uses 100 minutes of data to forecast the next 20 minutes. All data are normalized.
The dataset involves proprietary enterprise data and is not open-sourced.

Deployment and Usage

  • Client:
    The project uses a WeChat Mini Program as the frontend. Simply open the WeChat_client directory in the WeChat Developer Tool, then either publish it or package it with third-party tools.

  • Distributed Backend (Client Service):
    The backend module is developed in C++17 for Linux systems (non-cross-platform). Use CMake to build the executable:

mkdir build
cd build
cmake ..
make

The generated executable webserver will appear in the build directory. Ensure that the powerload.db SQLite database file is in the same directory. If C++ deployment is inconvenient, a Python Flask backend can be used instead, functionally equivalent though slightly less efficient (/Python_backend/app.py).

  • Distributed Backend (Data Processing) and AI Model Server: This component resides in the Python_backend directory. It provides two options for backend-AI communication:

    • HTTP-based: app-backend.py and app-ai.py
    • RabbitMQ-based: app-backend-mq.py and app-ai-mq.py

    In either case, start the AI model server first, then the distributed backend. The binary model files for the classification and prediction models are located in the classify and prediction directories, respectively. A minimal sketch of the RabbitMQ hand-off follows.
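
The sketch below shows the RabbitMQ option on the AI-server side as a plain RPC-style consumer using the pika client; the queue name, payload layout, and connection settings are illustrative assumptions and do not reflect the actual scripts.

import json
import pika

QUEUE = "powerload_requests"                             # hypothetical queue name

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue=QUEUE)

def on_request(ch, method, properties, body):
    samples = json.loads(body)                           # power samples from the backend
    result = {"labels": [], "forecast": []}              # fill with model inference output
    ch.basic_publish(exchange="",
                     routing_key=properties.reply_to,
                     properties=pika.BasicProperties(correlation_id=properties.correlation_id),
                     body=json.dumps(result))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue=QUEUE, on_message_callback=on_request)
channel.start_consuming()                                # start this before the backend
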
  • Model Training: Ensure that the dataset folder exists under Python_backend, containing multiple CSV files named after appliance labels or combinations. Execute the following commands to train the classification/decomposition model (~3 min on CPU) and the prediction model (~2 hrs on CPU):

python ./classify/Classification_Train_v6.3.py
python ./classify/prediction.py

You may modify hyperparameters according to your dataset. In prediction.py, the following code sections may need adjustment:

# Located in main function
batches, labels_batched, categories, category2filename_dict = data_process("./dataset")
model = train(batches, labels_batched, categories, category2filename_dict, num_heads=27, num_blocks=6, \
        lr=0.00004, epochs=13, static_vector_len=7, total_number_categories=21)
torch.save(model, "model-large.pt")

# Located in data_process function
data = get_data(os.path.join(directory, filename + ".csv"), skip_header=1, usecol=4)

In Classification_Train_v6.3.py, modify parameters as needed:

# Located near file start
dc = r'../dataset'                                         # directory containing the training CSV files
model_save_path = r'Classification-Model-v6.3-2.2.keras'   # where the trained Keras model is saved
matrix_save_path = r'Classification_feature-2.2.mat'       # saved feature matrix
dict_save_path = r"label2index_dict.pkl"                   # label-to-index mapping
pca_save_path = r"pca.pkl"                                 # fitted PCA transformer
testRate = 0.15                                            # fraction of samples held out for testing
frameLength = 800                                          # window (frame) length in samples
step = 800                                                 # stride between consecutive windows
max_value = 3000000                                        # maximum expected power value
eps = 75
lamb = 0.001
pca_n_components = 67                                      # feature dimensionality after PCA
es_patience = 3                                            # early-stopping patience (in epochs)
power_column = -1                                          # index of the power column in each CSV

Client GUI Examples

The following screenshots show four main interfaces: login, user homepage, real-time waveform and classification/decomposition, and power prediction waveform.

Project Demo Video

PLDA-Video.mp4


Copyright Notice

Copyright © 2024. All rights reserved by the R&D group of the project “Power Load Classification and Prediction System Based on Deep Learning Algorithms,” Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences).

  • Group Members:

    • DU Yu (Chinese: 杜宇; Vietnamese: ĐỖ Vũ; Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences), No.202103180009)
    • JIANG Chuan (Chinese: 姜川; Vietnamese: KHƯƠNG Xuyên; Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences), No.202103180020)
    • LI Xiaoyu (Chinese: 李晓语; Vietnamese: LÝ Hiểu Ngữ; Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences), No.202103180001)
    • LI Qinglong (Chinese: 李庆隆; Vietnamese: LÝ Khánh Long; Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences), No.202103180027)
    • ZHANG Yiwen (Chinese: 张一雯; Vietnamese: TRƯƠNG Nhất Văn; Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences), No.202103180051)
  • Advisors:

    • JIA Ruixiang (Chinese: 贾瑞祥; Vietnamese: GIẢ Thụy Tường; Lecturer, Department of Software Engineering, Faculty of Computer Science & Technology, Qilu University of Technology (Shandong Academy of Sciences))
    • CHEN Jing (Chinese: 陈静; Vietnamese: TRẦN Tĩnh; Shandong Computer Center (National Supercomputing Center in Jinan))

This project is open-sourced under our custom license agreement. Before obtaining the source code by any means, please read and understand the LICENSE thoroughly.

  • Additional Notes:

    • This project participated in the 17th China Collegiate Computing Competition (4C2024), Artificial Intelligence Practice Track.
    • The project LOGO was generated using the CogView AI art tool by Zhipu Qingyan (Chinese: 智谱清言), then modified for use. LOGO Meaning: The metallic ring represents a power meter, symbolizing our non-intrusive monitoring approach. The internal stripes signify power waveform data and also resemble city skyscrapers, representing how our system serves urban power management. The wooden background symbolizes sustainable development and carbon reduction.

Special Acknowledgement

  • Qilu University of Technology (Shandong Academy of Sciences) (Chinese: 齐鲁工业大学(山东省科学院); Vietnamese: Đại học Công nghiệp Tề Lỗ (Viện Khoa học tỉnh Sơn Đông))

  • Faculty of Computer Science and Technology, National Supercomputing Center in Jinan (Chinese: 计算机科学与技术学部,国家超级计算济南中心; Vietnamese: Học bộ Khoa học và Kỹ thuật Máy tính, Trung tâm Tính toán Siêu Máy tính Quốc gia Tế Nam)


  • Developer Association of Qilu University of Technology (Shandong Academy of Sciences) (Chinese: 齐鲁工业大学(山东省科学院)开发者协会; Vietnamese: Hiệp hội Nhà phát triển Đại học Công nghiệp Tề Lỗ (Viện Khoa học tỉnh Sơn Đông))
