References

[1]

Bingqing Chen, Weiran Yao, Jonathan Francis, and Mario Bergés. Learning a distributed control scheme for demand flexibility in thermostatically controlled loads. In 2020 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), 1–7. 2020. doi:10.1109/SmartGridComm47815.2020.9302954.

[2]

Davide Deltetto. Data-driven coordinated building cluster energy management to enhance energy efficiency, comfort and grid stability. PhD thesis, Politecnico di Torino, 2020.

[3]

Davide Deltetto, Davide Coraci, Giuseppe Pinto, Marco Savino Piscitelli, and Alfonso Capozzoli. Exploring the potentialities of deep reinforcement learning for incentive-based demand response in a cluster of small commercial buildings. Energies, 2021. doi:10.3390/en14102933.

[4]

Gauraang Dhamankar, Jose R. Vazquez-Canteli, and Zoltan Nagy. Benchmarking Multi-Agent Deep Reinforcement Learning Algorithms on a Building Energy Demand Coordination Task. RLEM 2020 - Proceedings of the 1st International Workshop on Reinforcement Learning for Energy Management in Buildings and Cities, pages 15–19, 2020. doi:10.1145/3427773.3427870.

[5]

Ruben Glatt, Felipe Leno da Silva, Braden Soper, William A. Dawson, Edward Rusu, and Ryan A. Goldhahn. Collaborative energy demand response with decentralized actor and centralized critic. In Proceedings of the 8th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, 333–337. New York, NY, USA, November 2021. ACM. URL: https://dl.acm.org/doi/10.1145/3486611.3488732, doi:10.1145/3486611.3488732.

[6]

Anjukan Kathirgamanathan, Kacper Twardowski, Eleni Mangina, and Donal P. Finn. A Centralised Soft Actor Critic Deep Reinforcement Learning Approach to District Demand Side Management through CityLearn. In Proceedings of the 1st International Workshop on Reinforcement Learning for Energy Management in Buildings & Cities, 11–14. New York, NY, USA, November 2020. ACM. URL: https://dl.acm.org/doi/10.1145/3427773.3427869, doi:10.1145/3427773.3427869.

[7]

Fazel Khayatian, Zoltán Nagy, and Andrew Bollinger. Using generative adversarial networks to evaluate robustness of reinforcement learning agents against uncertainties. Energy and Buildings, 251:111334, 2021. URL: https://www.sciencedirect.com/science/article/pii/S0378778821006186, doi:10.1016/j.enbuild.2021.111334.

[8]

Gyorgy Zoltan Nagy. The CityLearn Challenge 2021. 2021. URL: https://doi.org/10.18738/T8/Q2EIQC, doi:10.18738/T8/Q2EIQC.

[9]

Zoltan Nagy, José R. Vázquez-Canteli, Sourav Dey, and Gregor Henze. The CityLearn Challenge 2021. In Proceedings of the 8th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, BuildSys '21, 218–219. New York, NY, USA, 2021. Association for Computing Machinery. URL: https://doi.org/10.1145/3486611.3492226, doi:10.1145/3486611.3492226.

[10]

Kingsley Nweye, Max Langtry, Ruchi Choudhary, and Gyorgy Zoltan Nagy. The CityLearn Challenge 2023 Dataset. 2024. URL: https://doi.org/10.18738/T8/SXFWTI, doi:10.18738/T8/SXFWTI.

[11]

Kingsley Nweye, Bo Liu, Peter Stone, and Zoltan Nagy. Real-world challenges for multi-agent reinforcement learning in grid-interactive buildings. Energy and AI, 10:100202, 2022. URL: https://www.sciencedirect.com/science/article/pii/S2666546822000489, doi:10.1016/j.egyai.2022.100202.

[12]

Kingsley Nweye, Siva Sankaranarayanan, and Zoltan Nagy. MERLIN: multi-agent offline and transfer learning for occupant-centric energy flexible operation of grid-interactive communities using smart meter data and CityLearn. 2023. URL: https://arxiv.org/abs/2301.01148, doi:10.48550/ARXIV.2301.01148.

[13]

Kingsley Nweye, Siva Sankaranarayanan, and Zoltan Nagy. MERLIN: Multi-agent offline and transfer learning for occupant-centric operation of grid-interactive communities. Applied Energy, 346:121323, September 2023. URL: https://www.sciencedirect.com/science/article/pii/S0306261923006876, doi:10.1016/j.apenergy.2023.121323.

[14]

Kingsley Nweye, Siva Sankaranarayanan, and Gyorgy Zoltan Nagy. The CityLearn Challenge 2022. 2023. URL: https://doi.org/10.18738/T8/0YLJ6Q, doi:10.18738/T8/0YLJ6Q.

[15]

Kingsley E Nweye, Allen Wu, Hyun Park, Yara Almilaify, and Zoltan Nagy. CityLearn: a tutorial on reinforcement learning control for grid-interactive efficient buildings and communities. In ICLR 2023 Workshop on Tackling Climate Change with Machine Learning. 2023. URL: https://www.climatechange.ai/papers/iclr2023/2.

[16]

Aisling Pigott, Constance Crozier, Kyri Baker, and Zoltan Nagy. GridLearn: multiagent reinforcement learning for grid-aware building energy management. Electric Power Systems Research, 213:108521, 2022. URL: https://www.sciencedirect.com/science/article/pii/S0378779622006320, doi:10.1016/j.epsr.2022.108521.

[17]

Giuseppe Pinto, Davide Deltetto, and Alfonso Capozzoli. Data-driven district energy management with surrogate models and deep reinforcement learning. Applied Energy, 304:117642, 2021. URL: https://www.sciencedirect.com/science/article/pii/S0306261921010096, doi:10.1016/j.apenergy.2021.117642.

[18]

Giuseppe Pinto, Anjukan Kathirgamanathan, Eleni Mangina, Donal P. Finn, and Alfonso Capozzoli. Enhancing energy management in grid-interactive buildings: a comparison among cooperative and coordinated architectures. Applied Energy, 310:118497, 2022. URL: https://www.sciencedirect.com/science/article/pii/S0306261921017128, doi:10.1016/j.apenergy.2021.118497.

[19]

Giuseppe Pinto, Marco Savino Piscitelli, José Ramón Vázquez-Canteli, Zoltán Nagy, and Alfonso Capozzoli. Coordinated energy management for a cluster of buildings through deep reinforcement learning. Energy, 2021. doi:10.1016/j.energy.2021.120725.

[20]

Rongjun Qin, Songyi Gao, Xingyuan Zhang, Zhen Xu, Shengkai Huang, Zewen Li, Weinan Zhang, and Yang Yu. NeoRL: a near real-world benchmark for offline reinforcement learning. 2021. URL: https://arxiv.org/abs/2102.00714, doi:10.48550/ARXIV.2102.00714.

[21]

Yude Qin, Ji Ke, Biao Wang, and Gennady Fedorovich Filaretov. Energy optimization for regional buildings based on distributed reinforcement learning. Sustainable Cities and Society, 78:103625, 2022.

[22]

Filip Tolovski. Advancing renewable electricity consumption with reinforcement learning. 2020. arXiv:2003.04310.

[23]

José R. Vázquez-Canteli, Sourav Dey, Gregor Henze, and Zoltan Nagy. The CityLearn Challenge 2020. In Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, BuildSys '20, 320–321. New York, NY, USA, 2020. Association for Computing Machinery. URL: https://doi.org/10.1145/3408308.3431122, doi:10.1145/3408308.3431122.

[24]

José R. Vázquez-Canteli, Jérôme Kämpf, Gregor Henze, and Zoltan Nagy. CityLearn v1.0: an OpenAI Gym environment for demand response with deep reinforcement learning. In Proceedings of the 6th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, BuildSys '19, 356–357. New York, NY, USA, 2019. Association for Computing Machinery. URL: https://doi.org/10.1145/3360322.3360998, doi:10.1145/3360322.3360998.

[25]

Jose Vazquez-Canteli and Zoltan Nagy. The CityLearn Challenge 2020. 2020. URL: https://doi.org/10.18738/T8/ZQKK6E, doi:10.18738/T8/ZQKK6E.

[26]

Jose R Vazquez-Canteli, Sourav Dey, Gregor Henze, and Zoltan Nagy. CityLearn: standardizing research in multi-agent reinforcement learning for demand response and urban energy management. 2020. URL: https://arxiv.org/abs/2012.10504, doi:10.48550/ARXIV.2012.10504.

[27]

Jose R. Vazquez-Canteli, Gregor Henze, and Zoltan Nagy. MARLISA: multi-agent reinforcement learning with iterative sequential action selection for load shaping of grid-interactive connected buildings. In Proceedings of the 7th ACM International Conference on Systems for Energy-Efficient Buildings, Cities, and Transportation, BuildSys '20, 170–179. New York, NY, USA, 2020. Association for Computing Machinery. URL: https://doi.org/10.1145/3408308.3427604, doi:10.1145/3408308.3427604.

[28]

José R. Vázquez-Canteli and Zoltán Nagy. Reinforcement learning for demand response: a review of algorithms and modeling techniques. Applied Energy, 235:1072–1089, 2019. URL: https://www.sciencedirect.com/science/article/pii/S0306261918317082, doi:10.1016/j.apenergy.2018.11.002.

[29]

Cheng Yang, Jihai Zhang, Fangquan Lin, Li Wang, Wei Jiang, and Hanwei Zhang. Combining forecasting and multi-agent reinforcement learning techniques on power grid scheduling task. In 2023 IEEE 2nd International Conference on Electrical Engineering, Big Data and Algorithms (EEBDA), 1576–1580. 2023. doi:10.1109/EEBDA56825.2023.10090669.

[30]

Huiliang Zhang, Di Wu, and Benoit Boulet. Metaems: a meta reinforcement learning-based control framework for building energy management system. arXiv preprint arXiv:2210.12590, 2022.

[31]

Sicheng Zhan, Yue Lei, and Adrian Chong. Comparing model predictive control and reinforcement learning for the optimal operation of building-PV-battery systems. E3S Web of Conf., 396:04018, 2023. URL: https://doi.org/10.1051/e3sconf/202339604018, doi:10.1051/e3sconf/202339604018.