A multi-agent reinforcement learning approach for investigating and optimising peer-to-peer prosumer energy markets. (15th March 2023)
- Record Type:
- Journal Article
- Title:
- A multi-agent reinforcement learning approach for investigating and optimising peer-to-peer prosumer energy markets. (15th March 2023)
- Main Title:
- A multi-agent reinforcement learning approach for investigating and optimising peer-to-peer prosumer energy markets
- Authors:
- May, Ross
Huang, Pei
- Abstract:
- Current power grid infrastructure was not designed with climate change in mind, and, therefore, its stability, especially at peak demand periods, has been compromised. Furthermore, in light of the current UN Intergovernmental Panel on Climate Change reports concerning global warming and the goal of the 2015 Paris climate agreement to constrain the global temperature increase to within 1.5–2 °C above pre-industrial levels, urgent sociotechnical measures need to be taken. Together, Smart Microgrid and renewable energy technology have been proposed as a possible solution to help mitigate global warming and grid instability. Within this context, well-managed demand-side flexibility is crucial for efficiently utilising on-site solar energy. To this end, a well-designed dynamic pricing mechanism can organise the actors within such a system to enable the efficient trade of on-site energy, thereby contributing to the decarbonisation and grid-security goals alluded to above. However, designing such a mechanism in an economic setting as complex and dynamic as the one above often leads to computationally intractable solutions. To overcome this problem, in this work, we use multi-agent reinforcement learning (MARL) alongside Foundation – an open-source economic simulation framework built by Salesforce Research – to design a dynamic price policy. By incorporating a peer-to-peer (P2P) community of prosumers with heterogeneous demand/supply profiles and battery storage into Foundation, our results from data-driven simulations show that MARL, when compared with a baseline fixed price signal, can learn a dynamic price signal that achieves both a lower community electricity cost and a higher community self-sufficiency. Furthermore, emergent social–economic behaviours, such as price elasticity, and community coordination leading to high grid feed-in during periods of overall excess photovoltaic (PV) supply and, conversely, high community trading during overall low PV supply, have also been identified. Our proposed approach can be used by practitioners to aid them in designing P2P energy trading markets.
- Highlights:
- Multi-agent reinforcement learning has been used for optimising the energy-sharing market.
The developed strategy respects data privacy and requires no data sharing between prosumers.
Emergent social–economic behaviour such as price elasticity has been observed.
The learned dynamic price policy outperforms a benchmark fixed price strategy.
Compared to the fixed price strategy, community net profit increased by 28.64%.
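The abstract's central comparison – a dynamic price signal that, by steering demand-side flexibility, beats a flat tariff on community electricity cost – can be illustrated with a toy calculation. The sketch below is not the authors' Foundation/MARL implementation; all demand figures, tariffs, and the `shift_flexible` heuristic are invented purely to show the mechanism.

```python
# Toy sketch (not the paper's Foundation/MARL implementation): compare the
# community's grid-import cost under a flat tariff vs a time-of-use price
# signal, where the dynamic signal lets flexible demand shift toward the
# cheap midday PV-surplus hours. All profiles and prices are invented.

def community_cost(net_demand_kwh, price_per_kwh):
    """Cost of net grid imports; surplus steps (<= 0) earn nothing here."""
    return sum(max(d, 0.0) * p for d, p in zip(net_demand_kwh, price_per_kwh))

def shift_flexible(net_demand_kwh, price_per_kwh, flexible_kwh):
    """Crude stand-in for demand-side flexibility: move up to
    `flexible_kwh` of import from the priciest import hour to the
    cheapest hour of the horizon."""
    shifted = list(net_demand_kwh)
    import_hours = [t for t, d in enumerate(shifted) if d > 0]
    hi = max(import_hours, key=lambda t: price_per_kwh[t])
    lo = min(range(len(shifted)), key=lambda t: price_per_kwh[t])
    moved = min(flexible_kwh, shifted[hi])
    shifted[hi] -= moved
    shifted[lo] += moved
    return shifted

# Hypothetical 6-step net demand (kWh): midday PV surplus is negative.
net_demand = [2.0, 1.0, -3.0, -1.0, 4.0, 5.0]
fixed_price = [0.30] * 6                              # flat tariff
dynamic_price = [0.25, 0.25, 0.10, 0.10, 0.35, 0.35]  # cheap at PV peak

cost_fixed = community_cost(net_demand, fixed_price)
cost_dynamic = community_cost(
    shift_flexible(net_demand, dynamic_price, flexible_kwh=3.0),
    dynamic_price,
)
print(f"fixed: {cost_fixed:.2f}, dynamic+shift: {cost_dynamic:.2f}")
```

In the paper itself the price signal is not hand-tuned as above but learned by a planner agent via proximal policy optimisation, with prosumer agents responding to it; this sketch only shows why a well-placed time-varying price can lower the community's cost relative to a fixed one.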
- Is Part Of:
- Applied energy. Volume 334 (2023)
- Journal:
- Applied energy
- Issue:
- Volume 334 (2023)
- Issue Display:
- Volume 334 (2023)
- Year:
- 2023
- Volume:
- 334
- Issue Sort Value:
- 2023-0334-2023-0000
- Page Start:
- Page End:
- Publication Date:
- 2023-03-15
- Subjects:
- Peer-to-peer market -- Community-based market -- Dynamic pricing -- Multi-agent systems -- Multi-agent reinforcement learning -- Proximal Policy Optimisation
Power (Mechanics) -- Periodicals
Energy conservation -- Periodicals
Energy conversion -- Periodicals
621.042
- Journal URLs:
- http://www.sciencedirect.com/science/journal/03062619
http://www.elsevier.com/journals
- DOI:
- 10.1016/j.apenergy.2023.120705
- Languages:
- English
- ISSNs:
- 0306-2619
- Deposit Type:
- Legal deposit
- View Content:
- Available online (eLD content is only available in our Reading Rooms)
- Physical Locations:
- British Library DSC - 1572.300000
British Library DSC - BLDSS-3PM
British Library HMNTS - ELD Digital store
- Ingest File:
- 25682.xml