Blockchain Second Layer Solutions: State Channels vs Side Chains

Blockchain is a fast-growing disruptive technology designed to improve credibility in record keeping and transactions. Its role in building trust and verifiable accountability makes it an essential service for modern dynamic transactions.


The blockchain provides a way to store verified, tamperproof data that is accessible anywhere in the world at any time. The blockchain is essentially an immutable trusted database that can be referenced when handling disputes, authenticating transactions, proving ownership and more.

The Blockchain and How it Works

Looking at blockchain technology only in terms of its connection to Bitcoin is a somewhat limited view. This notion was emphasized by Jaspreet Bindra, the former Senior Vice President – Digital Transformation of Mahindra Group in India. In his words, defining blockchain as the technology behind Bitcoin or Cryptocurrency, or Ether is like explaining the internet (solely) as the technology behind emails.


In simple terms, the blockchain is a tamperproof decentralized digital ledger that keeps a permanent record of a wide variety of verified transactions and data. These include information on property ownership, business mergers, federal documents, shares, stocks and many more.

Conventional transactions need trusted third parties to verify the information presented by the traders. These third parties include banks, financial institutions, credit review boards and governmental agencies. You need to verify the authenticity of documents, ownership, identity and monetary status of the traders before making a deal. These verification processes can be costly and time-consuming.

In the blockchain, every transaction or related data is verified and recorded in an individual block. The block is then permanently linked to any previous similar transaction and corresponding ledgers. The links are characterized by complex cryptography that is unique to the users involved and the specific transaction. Each block is linked and validated by the previous one, saving time and money spent on conventional due diligence.

Also, blockchain data is decentralized. A conventional ledger can only be stored in one location at a time, perhaps in a vault or safety deposit box. However, data in the blockchain is stored in multiple ledgers that are updated simultaneously, adding another layer of security against hackers.

To successfully tamper with any blockchain entry, a hacker would have to alter the entire chain. They would also need to edit the ledgers of everyone else on the network in question.
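The chain-of-hashes idea behind this tamper resistance can be illustrated with a minimal sketch (all names here are illustrative, not any real blockchain's API):

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents, which include the previous block's hash."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def build_chain(records):
    """Link each record to its predecessor by embedding the previous hash."""
    chain, prev = [], "0" * 64
    for data in records:
        block = {"data": data, "prev": prev}
        prev = block_hash(block)
        chain.append(block)
    return chain

def is_valid(chain) -> bool:
    """Re-derive every link; editing any block breaks all links after it."""
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:
            return False
        prev = block_hash(block)
    return True
```

Altering a single block's data changes its hash, so every later block's stored `prev` link no longer matches — which is why a hacker would have to rewrite the whole chain, on every copy of the ledger.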

Defining Second Layer Blockchain Solutions

Blockchain technology is yet to scale up and dominate the world markets. While its potential is recognized globally, blockchain technology has been held back by its inherent limitations. The fundamental challenge that limits blockchain scalability today is the speed of its transactions.

The verification of blockchain transactions takes time and a lot of computational power. Yet, these processes are part of what sets the blockchain apart from conventional transactions.

These speed limitations have hindered the integration of blockchain technology with faster mainstream transactions. At its core, the Bitcoin blockchain can only handle about five transactions per second (TPS) while Ethereum handles 10–15. This is a stark contrast to Visa, which can handle up to 24,000 TPS.

Second Layer Blockchain Solutions were developed to accelerate the completion of blockchain transactions. They are a type of secondary framework built on pre-existing blockchain systems. Second layer systems take sets of transactions and compute them outside the main blockchain (off-chain). This reduces the load on the main chain, freeing up computational power and resources for other functions.

By isolating sets of transactions off-chain, the second layer solutions can increase the number of transactions the blockchain can handle in a day. This system is an essential component of scaling up the blockchain to compete with conventional systems like Visa.
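The batching idea can be sketched in a few lines: many off-chain transfers collapse into one net settlement per party, and only those net changes need to touch the main chain (a toy illustration, not any specific protocol):

```python
from collections import defaultdict

def net_settlement(transfers):
    """Collapse many off-chain transfers into one net balance change per party.

    `transfers` is a list of (sender, receiver, amount) tuples; only the
    resulting net deltas would be committed to the main chain.
    """
    delta = defaultdict(int)
    for sender, receiver, amount in transfers:
        delta[sender] -= amount
        delta[receiver] += amount
    # Drop parties whose transfers cancel out entirely.
    return {who: d for who, d in delta.items() if d != 0}
```

Three transfers between two parties reduce to a single settlement each, so the main chain verifies one entry instead of three.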

Types of Second Layer Blockchain Solutions

Second layer blockchain solutions are a series of intricate protocols designed to enhance the operation of the blockchain. They are designed with elaborate algorithms and systems to increase transaction speed, verification and security. This article highlights the general idea and operation of the two principal Second Layer block solutions, State Channels and Side Chains, in simple terms.

1. What Is A State Channel?

A State Channel is a blockchain second layer solution that allows a group of participants to perform an unlimited number of private transactions off-chain. Unlike conventional on-chain transactions, the state channel transactions are not made public. They are only visible to participants on the channel. Only the initial and final state of the transactions is recorded in the main blockchain.

State channels suit groups who need to make frequent exchanges between themselves while still maintaining a blockchain ledger. Recording multiple small transfers directly on the blockchain is cumbersome because each transaction needs to be verified and confirmed by miners. This can slow down the fast-paced exchanges that state channel participants need.


State channels enable groups to perform secure, fast and low-cost transactions using blockchain technology. The state channel solutions in use today hold the promise of high scalability with some capable of doing thousands of transactions per second.

How State Channels work

With State Channels, the participants rely on mutual agreements that are signed with their blockchain encryption signatures for verification. The participants create a smart contract elaborating the state of their transactions before going off-chain.

While off-chain, the participants can perform as many transactions as they desire without depending on miners’ verifications. They also don’t require the formation of new blocks per transaction.

Once the transactions are complete, the participants mutually sign a close-out transaction. Close-out transactions are unique in that they are recorded in a new block on-chain. To continue transacting after a close-out transaction, the state channel participants need to re-open the state channel with a unique encryption signature.
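The open, update, close lifecycle can be sketched as follows. This is a toy two-party channel; HMAC stands in for the real public-key signatures a production channel would use, and all names are illustrative:

```python
import hashlib
import hmac

class StateChannel:
    """Toy two-party payment channel; HMAC stands in for real signatures."""

    def __init__(self, deposits, keys):
        self.balances = dict(deposits)   # opening state, recorded on-chain
        self.nonce = 0                   # counts co-signed state updates
        self.keys = keys                 # per-participant signing keys
        self.closed = False

    def _sign(self, who, state: bytes) -> str:
        return hmac.new(self.keys[who], state, hashlib.sha256).hexdigest()

    def pay(self, sender, receiver, amount):
        """Off-chain update: both parties co-sign the new state."""
        assert not self.closed and self.balances[sender] >= amount
        self.balances[sender] -= amount
        self.balances[receiver] += amount
        self.nonce += 1
        state = repr((sorted(self.balances.items()), self.nonce)).encode()
        # Each participant signs the latest state, replacing the previous one.
        self.signatures = {p: self._sign(p, state) for p in self.balances}

    def close(self):
        """Close-out: the final state is what gets written back on-chain."""
        self.closed = True
        return dict(self.balances), self.nonce
```

Only the opening deposits and the final balances returned by `close()` would ever appear on-chain; the intermediate `pay()` calls stay private between the participants.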

State Channel Security

A state channel is verified by its participants and their mutual smart contract. Yet, once the parties have finished their transactions off-chain, the final state is recorded in a new block on-chain. This way, the transactions can be done faster off-chain and secured permanently on-chain.

The smart contract design secures transactions within the state channel. It also acts as the ‘Judge’ between the participants. Smart contract designs vary.

The underlying state channel security mechanism requires all participants to sign off on each transaction. Each transaction bearing the participants' digital signatures overwrites the previous one, preventing one participant from altering the final state of transactions in the absence of their counterpart.

Some channels use a timer which updates or locks the on-chain state of the transactions automatically. Once the timer runs out, it automatically issues a close-out transaction and updates the main chain, closing the state channel based on the last verified transaction. Any new attempt to unlock the state channel creates new encryption and restarts the timer.
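The "latest co-signed state wins" rule at the heart of this mechanism can be sketched as a dispute resolver (a simplification; a real channel contract also verifies the signatures on each submitted state):

```python
def settle_dispute(submitted_states, challenge_period_expired: bool):
    """Pick the closing state: the co-signed state with the highest nonce wins.

    `submitted_states` is a list of (nonce, balances) pairs that participants
    published on-chain during the challenge period.
    """
    if not challenge_period_expired:
        raise RuntimeError("channel still in its challenge period")
    nonce, balances = max(submitted_states, key=lambda s: s[0])
    return balances
```

Because only the highest-nonce state counts, publishing an outdated state achieves nothing: the counterpart simply submits the newer one before the timer expires.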

Examples of popular state channel projects

1. Celer Network

2. The Lightning Network

3. Trinity

4. Raiden Network

5. Liquidity Network

2. What Is A Side Chain?

Sidechains are smaller blockchains that run parallel to the main blockchain or main chain. They act like branches of the main chain. During operation, they transfer assets to and from the main chain to reduce congestion and facilitate scalability. Carrying out your transactions on a side chain can significantly increase the blockchain’s TPS. 

How side chains Work

Sidechains have a similar structure and operational mechanism to the blockchain (main chain). Unlike state channels, every transaction in a side chain is recorded and forms a new block. Yet, sidechain blocks can be verified faster because they require fewer verifications and less distributed consensus than the main chain.


The sidechain is linked to the main chain via a two-way peg that allows the transfer of assets between the two chains. Assets are transferred at a predetermined rate such that the blockchain is consistently updated on the state of transactions on the side chain.

Performing transactions on sidechains eases the computational burden and congestion of the main chain, allowing participants to carry out faster transactions. Sidechains are permanent and not limited to a set group of users. They also facilitate cryptocurrency interchangeability.
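The two-way peg can be pictured as a simple lock-and-mint accounting rule: assets locked on the main chain are minted 1:1 on the sidechain, and burned there to unlock them again (a toy sketch, not any specific peg design):

```python
class TwoWayPeg:
    """Toy two-way peg between a main chain and a sidechain."""

    def __init__(self):
        self.locked_on_main = 0     # assets held by the peg on the main chain
        self.side_supply = {}       # sidechain balances, minted 1:1

    def peg_in(self, user, amount):
        """Lock assets on the main chain, mint them on the sidechain."""
        self.locked_on_main += amount
        self.side_supply[user] = self.side_supply.get(user, 0) + amount

    def peg_out(self, user, amount):
        """Burn sidechain assets, releasing the locked main-chain assets."""
        assert self.side_supply.get(user, 0) >= amount
        self.side_supply[user] -= amount
        self.locked_on_main -= amount
```

The invariant that the locked main-chain amount always equals the total sidechain supply is what keeps the pegged asset redeemable.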

Sidechain Security

The blockchain’s main selling point is the security of your data. Yet, the security processes are time-consuming and costly. Increasing the speed of transactions often results in simplifying the main chain security processes.

A secure sidechain assures users of faster yet safer transactions by periodically securing or backing up its transactions on the main chain. This is the same idea behind the use of a two-way peg to consistently transfer assets between the primary and side chains.

Each sidechain is independent of the main chain, meaning it has its own miners and dedicated computational power. If a sidechain's security is compromised, the main chain's protection is unaffected, and vice versa.

Some sidechains enlist federation groups to act as a go-between when transferring assets to and from the main chain. Although this adds a layer of security, it also increases the waiting period before a participant can actively perform transactions on the sidechain.

Examples of Popular Sidechain Solutions

1. Plasma

2. Rootstock (RSK)

3. Matic

4. Liquid

5. Alpha


The energy industry is embracing blockchain technology as a unique way to record, track and manage transactions in the electricity market. The decentralization of electricity generation in Europe opened doors to a variety of challenges. The grid is now connected to multiple varied electricity producers, mini-grids and renewable energy resources. 

Many of the technical electricity generation and distribution challenges are being addressed by the smart grid and other innovative solutions. Yet, the financial dynamics and accountability challenges are more intricate.

The blockchain provides a faster, more efficient and tamperproof way to track electricity generation and consumption. This effectively increases the security, speed and accuracy of payments issued to energy producers. This is especially valuable in markets where the electricity price varies with demand.

Using the blockchain second layer solutions to manage the financial side of alternative energy transactions can enhance smart grid dynamics. It can allow customers, mini-grids and utility-scale renewable energy projects to interact freely and in real-time. It can also accelerate the process of generating green certificates.

Recording production data on the blockchain can simplify the process of verification of electricity generation levels. It means that independent grid tie electricity producers such as rooftop solar, community solar and other projects can get paid faster and more accurately.


CO2 Emission In Different Regions

In September 2019, the Global Climate Change Strike saw about 7.6 million people in 185 countries take to the streets. People from all over the world joined forces to protest against prevailing climate injustices that continue to propagate global warming. More than half of the protesters were in Europe. 


In 2015, the leaders of 196 countries came together in Paris to map practical steps to curb global warming. The result was the negotiation and formation of the Paris Agreement, designed to limit the increase of global temperatures to below 2°C.

The Paris Agreement was followed by the establishment of national plans and targets, primarily to reduce CO2 emission levels. Yet, in the four years since the Paris Agreement was formed, CO2 emissions have continued to rise. At the current rate, 2019 will reach near-record high CO2 emission levels.

What are CO2 Emissions and why are they a Problem?

The increased concentration of Greenhouse Gases (GHG) in the atmosphere is what leads to global warming. These gases trap heat energy within the earth's atmosphere by preventing thermal radiation from escaping into space, raising the earth's temperature through a phenomenon known as the Greenhouse effect.

Four gases in the atmosphere are classified as Greenhouse Gases: Carbon Dioxide (CO2), Methane (CH4), Nitrous Oxide (N2O) and Fluorinated Gases, which include Hydrofluorocarbons (HFC) and Chlorofluorocarbons (CFC) among others. Ozone (O3) and water vapour are also potent GHGs. However, we usually overlook water vapour, since its increase is driven by rising global temperatures rather than by direct human activity.

GHGs have always been part of our atmosphere. However, their levels have been rising critically since the beginning of the industrial revolution. At lower concentrations, GHGs are useful for regulating the earth's average temperature. Without GHGs, NASA estimates that the earth's average temperature could drop to −18°C, a drastic shift from its current level of about 14°C. A 32-degree drop in average temperatures would threaten the existence of life on earth.


Carbon Dioxide represents the lion’s share of GHGs. In 2015, the Center For Climate And Energy Solutions reported that CO2 represented 76% of the GHG in the Atmosphere. By 2017, Carbon Dioxide represented about 82% of the GHGs. This data shows that CO2 emissions are the most significant drivers of the greenhouse effect and global warming. As such, most of the policies aimed at controlling global warming are designed to reduce CO2 emissions. 

How Are CO2 Emissions Generated

Traditionally, regular activities such as respiration, burning firewood and decay of organic matter were the primary producers of CO2. Trees and plants absorb and regulate CO2 levels naturally. The plants and trees retain the carbon in CO2 and release oxygen back into the atmosphere. As such, we refer to forests and large vegetated areas as carbon sinks. 

Burning fossil fuels is the primary contributor to the generation of excessive amounts of CO2. Fossil fuels are made up of decayed, compressed organic matter that has built up over thousands of years. This organic matter was trapped safely within the earth's crust until humans discovered its energy potential. In an instant, burning fossil fuels releases carbon that accumulated over millennia.

In the 1800s, people began burning fossil fuels to strengthen industrial developments. As the industrial revolution picked up speed, so did the rate of CO2 generated. The 1850s marked the first time that the CO2 levels did not revert to their previously balanced levels. This change was because humans began producing CO2 faster than it could be absorbed. Since then, global CO2 levels have continued to rise in response to the extensive use of Coal, Oil and Gas.

Global CO2 Ranking

Carbon dioxide emissions characteristically have a long lifetime. Once emitted, CO2 can continue absorbing heat in the atmosphere for more than 1,000 years. GHGs tend to diffuse in the atmosphere and are not concentrated above the regions where they were generated. 

There are several ways to evaluate the regional impact of CO2 emissions on the climate. In this article, we will consider the following three methods to assess regional CO2 emissions:

  1. Current CO2 emissions 
  2. Per capita CO2 emissions
  3. Cumulative CO2 emissions

Current CO2 Emissions

In 2017, a total of 36.153 GtCO2 was generated in the world. The top three countries, China, the USA and India, caused about 48% of these emissions. The list below shows the top five CO2 producers in 2017 and their emission levels in Gigatonnes (Gt).

  • China 9.839Gt
  • USA 5.270Gt
  • India 2.467Gt
  • Russia 1.693Gt
  • Japan 1.205Gt

China overtook the USA as the leading producer of CO2 emissions globally in 2006. The country has consistently generated more CO2 emissions than the USA and EU-28 combined since 2011. In 2017, the EU-28 countries collectively produced 3.544 GtCO2.


The bulk of China’s carbon emissions come from its coal-fired power plants. The upsurge in China’s CO2 emission levels is a result of the country’s rapid industrialization; it has taken about 35 years for China to rise from an agrarian to an industrial society.

From a Regional viewpoint, Asia generated about 46.7% of the global carbon emissions in 2017. North America came in second with 17.5% while Europe collectively produced 15.7%. The list below shows the top five regions according to the total CO2 emissions in 2017.

  • Asia 16.918Gt
  • North America 6.333Gt
  • Europe 5.693Gt
  • Middle East 2.672Gt
  • Africa 1.332Gt

Per Capita CO2 Emission

Classifying the carbon emissions of a region based on political boundaries only tells one side of the story. A region’s CO2 emissions responsibility is better represented as a function of its population; that is, dividing the total CO2 emissions by the population of the area to find the data per capita. This system of measurement shows the amount of CO2 emissions attributed to each individual.

Despite its high total emission levels, China ranks 52nd globally with 7 tCO2 per capita, because China is home to close to 20% of the world’s population. Each person in the EU-28 region generates roughly the same, at 7 tCO2 per capita.

Holding the 11th position, the USA produces 16 tCO2 per capita. Of the top 3 CO2 producers, the USA has the highest CO2 levels per capita. The third highest producer of CO2 in 2017, India, is ranked 133rd with 1.8 tCO2 per capita.  
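The per-capita figures above follow from simple division. In the sketch below, the population numbers are rough 2017 estimates added for illustration; only the emission totals come from the text:

```python
def per_capita(total_emissions_gt: float, population: float) -> float:
    """Tonnes of CO2 per person: total in Gt times 1e9, divided by population."""
    return total_emissions_gt * 1e9 / population

# Rough 2017 populations (illustrative assumptions, not from the source).
china = per_capita(9.839, 1.41e9)   # roughly 7 t per person
usa = per_capita(5.270, 0.325e9)    # roughly 16 t per person
```

This is why China's enormous total translates into a modest per-capita figure, while the USA's smaller total yields more than double China's per-person emissions.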

The highest-ranked country under this system of classification is Qatar, with 49 tCO2 per capita. The second is Curacao, followed by Trinidad and Tobago. Below is a list of the top five CO2 producers per capita according to the 2017 global carbon atlas reports.

  • Qatar 49t
  • Curacao 39t
  • Trinidad and Tobago 30t
  • Kuwait 25t
  • United Arab Emirates 25t

Cumulative CO2 Emissions

To get a clearer idea of each country’s actual contribution to global warming, it helps to look at the bigger picture. CO2 emissions continuously absorb energy for more than 1,000 years after being emitted in the atmosphere. With this in mind, CO2 emissions from as far back as 1,000 AD should still have an impact on our current climate. 


In effect, it is worth evaluating countries based on their lifetime contribution of CO2 emissions. Gauging a country’s current emissions on the backdrop of their lifetime emissions can draw a clearer picture of its global warming responsibility.

Carbon emissions data can be evaluated back to 1750 when humans started burning fossil fuels. Historical CO2 emissions are estimated based on the fossil fuel production data of each region. 

The data shows that the USA is the all-time leading contributor to CO2 emissions in the world with 397Gt. China is second, and the former Soviet Union is third. The list below shows the top five all-time CO2 generators in the world and their lifetime carbon emissions.

  • USA 397.157Gt
  • China 213.843Gt
  • Former USSR 179.966Gt
  • Germany 89.661Gt
  • UK 77.761Gt


When assessing the regional impact of CO2 emissions, it is valuable to have a broad view. While CO2 emissions have a global impact, they are caused by regional and local actions. 

The transport and energy sectors represented 29% and 28% of the carbon emissions generated in 2017. Industry was responsible for about 22%, while commercial and residential areas generated 12%. The agricultural sector of the global economy generated only 9% of the 2017 carbon emissions.

Many countries are adopting cleaner fuels, electric and hybrid vehicles, and more efficient transportation systems. These nations are well on their way to reducing their carbon emissions sustainably. Integration of renewable energy systems in power generation has also made a significant impact on the carbon emissions of leading developed countries such as the UK. However, there is a need to consider ramping up carbon-negative solutions if countries are to meet the Paris Agreement targets.


The Past, The Present, And The Promising Future of EVs

A careful look into the history of Electric Vehicles reveals quite a story. A documentary titled “Who Killed the Electric Car?” was even made in 2006 to chronicle the life of Electric Vehicles.

As a solution provider for renewable energy and smart energy management, this is our perspective on the story.

“IN THE BEGINNING” – A Little About The Past 

Once upon a time, Electric Vehicles (EV) dominated the automobile market; in fact, they once outsold Internal Combustion Engine (ICE) vehicles in the United States. Unfortunately, for the critical reasons explained below, the fossil fuel engine came to dominate the cars on the road.

In the early 1900s, the following events occurred that led to a tremendous decline in the use of EVs.

  • There was an increase in the exploration of crude oil, which led to petrol being cheaper and more accessible. So it was more comfortable and affordable to operate a petrol engine vehicle.
  • For a time, the petrol engine required a hand crank to start. But after the invention of the electric starter in the early 1910s, you could start an ICE without that effort.
  • The government constructed more roads. Hence, there was a need to cover more distance and the EVs at that time were limited by how far they could travel.
  • There was a significant improvement in Ford’s production process that led to the boom of ICE vehicles, making them more affordable.
  • Also, technological innovations led to vast improvements in the design of the Internal Combustion Engine.
  • The invention of the muffler in the late 1800s reduced the unbearable noise produced by ICEs.

All these led to a deep plunge in the use of electric vehicles all over the world. Even though the electric-powered engine was still adopted by other means of transportation (for example, the rail transport system), almost all the cars on our roads were running on fossil fuel.

How Did We Get To This Present State in The Adoption of EVs?

A few decades after electric vehicles faded into oblivion, concern sparked about the environmental impact of fossil fuels and fears that crude oil was depleting. This concern led to research and development into alternative energy sources.

However, that scare wasn’t enough to drive the production of EVs around the world until about three decades ago. In the early 1990s, tighter environmental policies came into place. The United States’ Clean Air Act Amendment of 1990 and the Energy Policy Act of 1992 gave rise to a renewed interest in electric vehicles by automobile manufacturers.

Within a couple of years, General Motors and Toyota responded with new models of hybrid and electric vehicles that paved the way for other EVs by Tesla, Chrysler, Ford, Nissan and many more.

Source: IEA analysis based on country submissions, complemented by ACEA (2019); EAFO (2019); EV Volumes (2019); Marklines (2019); OICA (2019).

By the end of 2018, the total number of EVs was estimated to exceed 5.1 million. Compared to the overall global estimate of over 1 billion vehicles, EVs represent less than 1%. This leads us to the conclusion that we have a long way to go in the fight for a sustainable environment.

What Then Does The Future Hold?

There have been many forecasts predicting the prospects of electric vehicles around the world. However, given the present statistics, it’s fair to say that any significant development will depend on our commitment to sustaining our environment and on stricter government policies that encourage the adoption of more EVs.

If we judge from the performance of a few countries, there is a glimmer of hope. But the remaining nations need to catch up with this movement towards efficient, environmentally friendly transportation.

How Promising is The Future For EVs?

Cost of Production

The currently high cost of producing EVs is a significant factor hindering their global spread. Today, owning an EV is a luxury; a typical one can cost well over $30,000. If EVs can be made affordable for the middle class, there will be more of them on our roads.

Also, in the production of electric vehicles, the cost of manufacturing the batteries carries a significant percentage of the total cost. This has been a considerable challenge and has attracted a lot of research and development. For instance, there is a global quest to reduce the cost of producing lighter, higher-capacity lithium batteries, and Toyota is working to have solid-state batteries in its EVs by 2020.

The future looks bright for EVs, as more vehicle manufacturers like Tesla, Volkswagen, Hyundai, and KIA are taking initiatives to produce electric cars that are affordable and travel long ranges. In addition, research shows that the cost of battery packs is expected to fall significantly in the near future. This takes us to the second factor:

Government Incentives and Climate Policies

This is arguably the most prominent factor that encourages the adoption of EVs all over the world. Many countries need to be commended on their efforts in this regard, as they have set the pace in supporting the manufacturing, sales, and ownership of EVs according to their climate policies stipulated for a sustainable environment. 

Countries like Norway, Switzerland, China, India, the United Kingdom, the United States and Japan, to mention a few, have put incentives in place for EVs. These include tax reductions and exemptions, subsidies, bonus-malus schemes, exemption from road tax, parking fees and toll fees, the building of recharge points and many more. Full details of the government incentives are available for most countries.

While many governments are still inactive in this regard, the countries with current incentives also need to improve their support. For instance, there could be incentives that support manufacturing companies by cutting down the cost incurred in the production of EVs, or more investment in research and development. All of this would facilitate a wider spread of EVs around the world.

As it stands, among several other factors, these two are the most vital to the global diffusion of electric vehicles.

In conclusion

It’s important to acknowledge that there are opposing arguments, based on the following concerns:

  1. Having more electric vehicles will lead to a massive increase in the demand for energy. This will directly impact the distribution utilities and cause more challenges in effectively managing a balanced grid.
  2. The process of manufacturing the batteries generates toxic chemicals that are harmful to the environment. As a result of this, humans, animals, plants, and other organisms are at a high risk of toxic exposure.

However, there are solutions/measures in place to address these issues. For instance, there are smart grid management systems that are designed to optimize the demand for energy from your electric car. It is called V2G (Vehicle-to-Grid), and it works by selling energy from your EV to the grid during peak hours and recharging your vehicle during the off-peak hours – since your car is usually parked more than 70% of the time. Also, there are enhanced safety measures put in place for our interaction with the lithium-ion batteries.
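The V2G idea of charging off-peak and selling back at peak can be sketched as a greedy schedule against an hourly price curve. Everything here — function name, parameters, numbers — is illustrative, not a real V2G interface:

```python
def v2g_plan(prices, battery_kwh, reserve_kwh, rate_kwh):
    """Toy V2G schedule: buy energy in the cheapest hours until the battery
    is full, then sell down to a driving reserve in the single priciest hour.
    `plan[h]` is kWh bought (positive) or sold (negative) in hour h."""
    order = sorted(range(len(prices)), key=prices.__getitem__)
    plan = [0.0] * len(prices)
    stored = reserve_kwh
    # Charge during the cheapest off-peak hours, limited by the hourly rate.
    for h in order:
        buy = min(rate_kwh, battery_kwh - stored)
        if buy <= 0:
            break
        plan[h], stored = buy, stored + buy
    # Sell surplus above the driving reserve in the priciest hour.
    peak = order[-1]
    sell = min(rate_kwh, stored - reserve_kwh)
    if sell > 0 and plan[peak] == 0:
        plan[peak], stored = -sell, stored - sell
    return plan
```

Positive entries are kWh bought from the grid in cheap hours; the negative entry is energy sold back during the priciest hour, always keeping a driving reserve in the battery.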


Trustless coordination mechanism for smart grid energy markets, a game theoretic approach

Increasing the amount of installed renewable energy sources such as solar and wind is an essential step towards the decarbonization of the energy sector.

From a technical point of view, however, the stochastic nature of distributed energy resources (DER) causes operational challenges. Among them, unbalance between production and consumption, overvoltage and overload of grid components are the most common ones.

As DER penetration increases, it is becoming clear that incentive strategies such as Net Energy Metering (NEM) are threatening utilities, since NEM doesn’t reward prosumers for synchronizing their energy production and demand.

In order to reduce congestion, distribution system operators (DSOs) currently use a simple indirect method consisting of a bi-level energy tariff, i.e. the price of buying energy from the grid is higher than the price of selling energy to the grid. This encourages individual prosumers to increase their self-consumption. However, it is inefficient in regulating the aggregated power profile of all prosumers.

Utilities and governments believe that better grid management can be achieved by making the distribution grid ‘smarter’, and they are currently directing massive amounts of investment towards this vision.

As I explained in my previous post on the need for decentralized architectures for new energy markets, the common view of the scientific community is that a smarter grid requires more communication between generators and consumers, adopting near real-time markets and dynamic prices, which can steer users’ consumption towards periods in which DER energy production is higher, or increase their production during high demand. For example, a modification of NEM that allows prosumers to export energy from their batteries during the evening peak of demand has recently been proposed in California.

But as flexibility will be offered at different levels and will provide a number of services, from voltage control for the DSOs to control energy for the transmission system operators (TSOs), it is important to make sure that these services do not interfere with each other. So far, a comprehensive approach towards the actuation of flexibility as a system-wide leitmotiv, taking into account the effect of demand response (DR) at all grid levels, is lacking.

In order to optimally exploit prosumers’ flexibility, new communication protocols are needed, which coupled with a sensing infrastructure (smart meters), can be used to safely steer aggregated demand in the distribution grid, up to the transmission grid.

The problem of coordinating dispatchable generators is well known to system operators and has been studied extensively in the literature. When grid constraints are not taken into account, this is known as economic dispatch, and consists in minimizing the generation cost of a group of power plants. When operational constraints are considered, the problem increases in complexity, due to the power flow equations governing currents and voltages in the electric grid. Nevertheless, several approaches are known for solving this problem, a.k.a. optimal power flow (OPF), using approximations and convex formulations of the underlying physics. OPF is usually solved in a centralized way by an independent system operator (ISO). However, when the number of generators increases, as in the case of DERs, the overall problem grows in complexity but can still be solved effectively by decomposing it among generators.
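For quadratic generation costs and no grid constraints, economic dispatch even has a closed form: at the optimum, every unit runs at the same marginal cost λ. A minimal sketch (generator limits are ignored here, so it is an illustration rather than a production dispatcher):

```python
def economic_dispatch(gens, demand):
    """Dispatch generators with costs C_i(p) = a*p^2 + b*p to meet `demand`.

    At the optimum all units share one marginal cost lambda, so each output is
    p_i = (lambda - b_i) / (2 * a_i), with lambda fixed by the demand balance.
    `gens` is a list of (a, b) coefficient pairs.
    """
    inv = sum(1.0 / (2 * a) for a, b in gens)
    lam = (demand + sum(b / (2 * a) for a, b in gens)) / inv
    return [(lam - b) / (2 * a) for a, b in gens], lam
```

Two identical units split the load evenly; cheaper units (smaller a, b) take a larger share, and the outputs always sum to the demand.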

The decomposition has two other main advantages over a centralized solution, apart from allowing faster computation. The first is that generators do not have to disclose all their private information in order for the problem to be solved correctly, allowing competition among the different generators. The second is that the computation has no single point of failure.

In this direction, we have recently proposed a multilevel hierarchical control scheme which can be used to coordinate large groups of prosumers located at different voltage levels of the distribution grid, taking into account grid constraints. The difference between power generators and prosumers is that the latter cannot control when power is generated, but can shift deferrable loads such as heat pumps, electric vehicles, boilers and batteries.


Fig.1 Hierarchical nature of the electric grid. The grid is divided into different voltage levels (low, medium, high), each of which is operated by different entities (DSOs, TSOs). Coordination can be achieved by sending messages with a forward/backward strategy following this structure.

The idea is that prosumers in the distribution grid can be coordinated solely by means of a price signal sent by their parent node in the hierarchical structure, an aggregator. This allows the problem to be solved using a forward-backward communication protocol. In the forward pass, each aggregator receives a reference price from its parent node and sends it downwards, together with its own reference price, to its children nodes (prosumers or aggregators) located at a lower level of the hierarchy. This mechanism propagates through all the nodes until the terminal nodes (or leaves) are reached. Prosumers at leaf nodes solve their optimization problems as soon as the overall price signal reaches them. In the backward pass, prosumers send their solutions to their parents, which collect them and send the aggregated solution upwards.
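As a toy illustration, the forward-backward pass can be sketched as a recursion over the tree. The class names, the linear price-response model of the prosumers and all numeric values below are our own illustrative assumptions, not the actual algorithm:

```python
# Sketch of the forward-backward protocol on a toy aggregator tree.
# The linear price-response model is an illustrative assumption.

class Prosumer:
    def __init__(self, flexibility):
        self.flexibility = flexibility  # how strongly demand reacts to price

    def solve(self, price):
        # Local problem: shift demand proportionally to the price signal.
        return -self.flexibility * price  # kW deviation from baseline


class Aggregator:
    def __init__(self, children, markup=0.0):
        self.children = children  # prosumers or other aggregators
        self.markup = markup      # aggregator's own reference-price contribution

    def solve(self, price):
        # Forward pass: add own reference price and send it downwards.
        # Backward pass: collect and aggregate the children's solutions.
        total_price = price + self.markup
        return sum(child.solve(total_price) for child in self.children)


# Two LV groups under one MV-level aggregator (cf. Fig. 1).
lv_a = Aggregator([Prosumer(1.0), Prosumer(2.0)], markup=0.01)
lv_b = Aggregator([Prosumer(0.5)], markup=0.02)
root = Aggregator([lv_a, lv_b])

aggregate_response = root.solve(price=0.10)  # root price signal, CHF/kWh
print(aggregate_response)
```

Each node only ever exchanges a price (downwards) and an aggregated power (upwards) with its neighbors, mirroring the message flow of Fig. 1.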

Apart from this intuitive coordination protocol, the proposed algorithm has other favorable properties. One is that prosumers only need to share information on their energy production and consumption with a single aggregator, while keeping all other parameters and information private; this is possible thanks to the decomposition of the control problem. Another is that the algorithm exploits parallel computation of the prosumer-specific problems, ensuring minimal communication overhead.

However, being able to coordinate prosumers is not enough.

The main difference between the OPF and the DR problem is that the latter involves self-interested agents, which cannot be trusted a priori by the ISO. This implies that if an agent finds it profitable (in terms of its own economic utility), it will solve a different optimization problem from the one prescribed by the ISO. For this reason, some aspects of DR formulations are better described through a game-theoretic framework.

Furthermore, several studies have focused on the case in which grid constraints are enforced by DSOs directly modifying voltage angles at buses. Although this is a reasonable solution concept, the ongoing shift of generation from the high voltage network to the low voltage network suggests that in the future prosumers, and not DSOs, could be in charge of regulating voltages and mitigating power peaks.

With this in mind, we focused on analyzing the decomposed OPF using game theory and mechanism design, which study the behavior and outcomes of a set of agents, each trying to maximize its own utility $latex u(x_i,x_{-i})&s=1$, which depends on its own actions $latex x_i &s=1$ and on the actions of the other agents $latex x_{-i}&s=1$, under a given ‘mechanism’. The whole field of mechanism design tries to escape from the Gibbard–Satterthwaite theorem, which is perhaps best understood through one of its corollaries:

If a strict voting rule has at least 3 possible outcomes, it is non-manipulable if and only if it is dictatorial.

It turns out that the only way to escape this impossibility result is to adopt monetary transfers. As such, our mechanism must define both an allocation rule and a taxation (or reward) rule. In this way, the overall value seen by an agent equals its own utility augmented by the taxation/remuneration imposed by the mechanism:

$latex v_i (x_i,x_{-i})= u_i(x_i,x_{-i}) + c_i(x_i,x_{-i}) &s=1$

However, monetary transfers are as powerful as they are perilous. When designing taxes and incentives, one should always keep two things in mind:

  • Designing the wrong incentives can result in spectacular failures, as we learned from an anecdotal misuse of incentives from British colonial history known as the cobra effect
  • If there is a way to fool the mechanism, self-serving prosumers will almost surely find it. “Know that some people will do everything they can to game the system, finding ways to win that you never could have imagined” — Steven D. Levitt

A widely adopted solution concept, used to rule out most strategic behaviors by agents (though not equivalent to a strategyproof mechanism), is the ex-post Nash Equilibrium (NE), or simply equilibrium, which is reached when the following set of problems is jointly minimized:

$latex \begin{aligned} \min_{x_i \in \mathcal{X}_i} & \quad v_i(x_i, x_{-i}) \quad \forall i \in \{N\} \\ \text{s.t.} & \quad Ax\leq b \end{aligned} &s=1$

where $latex x_i \in \mathcal{X}_i &s=1$ means that the agents’ actions are constrained to lie in the set $latex \mathcal{X}_i &s=1$, which could include for example the prosumer’s maximum battery capacity or the maximum power at which the prosumer can draw energy from the grid. The linear inequality in the second row represents the grid constraints, which are a function of the actions of all the prosumers, $latex x = [x_i]_{i=1}^N &s=1$, where N is the number of prosumers we are considering.

Rational agents will always try to reach a NE, since in this situation they cannot improve their values given that the other prosumers do not change their actions.

Using basic optimization notions, the above set of problems can be reformulated using the KKT conditions, which under some mild assumptions ensure that the prosumers’ problems are solved optimally. Briefly, we augment each prosumer’s objective function with a first-order approximation of the coupling constraints, through a Lagrangian multiplier $latex \lambda_i$, and with the indicator function encoding its own constraints:

$latex \tilde{v}_i (x_i,x_{-i}) = v_i (x_i,x_{-i}) + \lambda_i^T (Ax-b) + \mathcal{I}_{\mathcal{X}_i}(x_i) &s=1$

The KKT conditions now read

$latex \begin{aligned} 0 & \in \partial_{x_i} v_i(x_i,x_{-i}) + \mathrm{N}_{\mathcal{X}_i} + A_i^T\lambda \\ 0 & \leq \lambda \perp -(Ax-b) \geq 0 \end{aligned} &s=1$

where $latex \mathrm{N}_{\mathcal{X}_i}&s=1$ is the normal cone operator, which is the sub-differential of the indicator function.

Loosely speaking, the Nash equilibrium is not always a reasonable solution concept, because multiple equilibria usually exist. For this reason, equilibrium refinement concepts are usually applied, in which most of the equilibria are discarded a priori. The variational NE (VNE) is one such refinement: in a VNE, the price of the shared constraints paid by each agent is the same. This has the nice economic interpretation that all agents pay the same price for the common good (the grid). Note that we already set all Lagrangian multipliers equal, $latex \lambda_i = \lambda \quad \forall i \in \{N\}&s=1$, in writing the KKT conditions.

One of the nice properties of the VNE is that, for well-behaved problems, the equilibrium is unique. Being unique, and with a reasonable economic outcome (price fairness), rational prosumers will agree to converge to it, since at the equilibrium no one is better off changing their own actions while the other prosumers’ actions are fixed. It turns out that a minor modification of the parallelized strategy we adopted to solve the multilevel hierarchical OPF can be used to reach the VNE.
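A minimal sketch shows how a primal-dual iteration with a single shared multiplier (the common price of the VNE) can reach the equilibrium in a toy quadratic game. The utilities, step size and capacity constraint are illustrative assumptions, not the actual hierarchical algorithm:

```python
# Toy variational Nash equilibrium via a primal-dual iteration.
# Two prosumers minimize (x_i - d_i)^2 subject to the shared grid
# constraint x_1 + x_2 <= b, with one common multiplier lam (VNE).

d = [3.0, 2.0]   # each prosumer's preferred consumption, kWh
b = 4.0          # shared grid capacity: x1 + x2 <= b
lam = 0.0        # common Lagrangian multiplier (same price for everyone)
alpha = 0.5      # dual step size

for _ in range(200):
    # Best response: each agent minimizes (x_i - d_i)^2 + lam * x_i locally.
    x = [di - lam / 2.0 for di in d]
    # Dual ascent on the coupling constraint, projected onto lam >= 0.
    lam = max(0.0, lam + alpha * (sum(x) - b))

print(x, lam)  # converges to x = [2.5, 1.5], lam = 1.0
```

At the fixed point the capacity is exactly saturated and both agents face the same shadow price for the shared grid, which is precisely the price-fairness property of the VNE.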

On top of all this, new business models must be put in place to reward prosumers for their flexibility. In fact, rational agents will not participate in the market if the energy price they pay is higher than what they pay their current energy retailer. One such business model is the aforementioned Californian proposal to enable NEM with the energy injected by electrical batteries.

Another possible use case is the creation of a self-consumption community, in which a group of prosumers in the same LV grid pays only at the point of common coupling with the DSO’s grid (which could be, e.g., the LV/MV transformer in figure 1). In this way, if the group of prosumers is heterogeneous (someone is producing energy while someone else is consuming), the overall cost they pay as a community will always be less than what they would have paid as single prosumers, at the DSO’s loss. But if this economic surplus drives the prosumers to take care of power quality in the LV/MV grid, the DSO could benefit from this business model, delegating part of its grid-regulation duties to them.
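A small numeric sketch makes the community surplus explicit: netting happens before the DSO tariff is applied. The tariffs and energy profiles below are illustrative assumptions:

```python
# Why a self-consumption community pays less at the point of common
# coupling: production and consumption are netted before billing.
# Tariffs are illustrative assumptions, in cts/kWh.

BUY, SELL = 20.0, 5.0  # buy from / sell to the DSO

def bill(net_kwh):
    # Positive net = consumption (pay BUY), negative = injection (earn SELL).
    return net_kwh * BUY if net_kwh > 0 else net_kwh * SELL

# Hourly net energy of three prosumers: one producing, two consuming (kWh).
prosumers = [-3.0, 2.0, 2.0]

individual = sum(bill(p) for p in prosumers)  # each billed separately: 65.0 cts
community = bill(sum(prosumers))              # billed once at the coupling point: 20.0 cts
print(individual, community)
```

The 45 cts difference is the community surplus, earned at the DSO’s loss, exactly as described above.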

How does blockchain fit in? Synchronizing thousands of entities connected to different grid levels is a technically hard task. Blockchain technology can be used as a trustless distributed database for creating and managing energy communities of prosumers willing to participate in flexibility markets. On top of the blockchain, off-chain payment channels can be used to keep track of the energy consumed and produced by prosumers and to disburse payments in a secure and seamless way.

Different business models are possible, and different technical solutions as well. But we think that in the distribution grid, the economic value lies in shifting the power production and consumption of the prosumers, enabling a truly smarter grid.

At Hive Power we are enabling the creation of energy sharing communities where all participants are guaranteed to benefit from the participation, reaching at the same time a technical and financial optimum for the whole community.



Reimagining A Cryptocurrency Energy Economy with Hive Power

We’ve always claimed that our vision at Hive Power is to create a world that shares energy to ensure a better and brighter future. One of the ways we are doing just that, is by bringing solutions to a range of challenges being faced not just in the energy market, but also across blockchain technologies and projects. We thought we would take a moment to explore some of the key problems cryptocurrency energy initiatives are currently facing and how Hive Power is reimagining better ways forward.

Problem 1: The energy market needs to evolve towards a more sustainable, economic future.

As the energy market has begun to shift away from larger power plants to smaller decentralized sources, a major gap exists in the development of exchange models that can help keep energy usage sustainable, while also lowering prices for consumers.

The Hive Power Solution: While the energy market is certainly facing an array of challenges, we created a turnkey solution utilizing Ethereum-based smart contracts to mitigate the most pressing energy industry issues.

We designed the Hive Token (HVT) to allow users to create and manage what we call “Hives”: distributed energy market platforms that can be implemented using Ethereum-based smart contracts. Just think: if you or your neighbor had an excess of energy and the ability to share this usable power with another individual in your community, it could strategically reduce costs for all involved, optimize consumption and create a more sustainable model for future demands on the grid. For those already looking to control their own energy future, registering for our ICO might be your solution.

Problem 2: Lack of research and expertise across cryptocurrency teams and leadership  

Blockchain projects are often led by teams that may understand cryptocurrency, but have very limited expertise in the business sector in which they intend to operate. This has left buyers, enthusiasts and users with quickly dwindling value. The failures of a large number of projects can often be attributed to a lack of research and expertise at all levels.

The Hive Power Team Difference: The team at Hive Power has always been proud to be part of some of the world’s most well-regarded and sophisticated research institutions exploring the energy industry. In fact, our startup was incubated at SUPSI (the University of Applied Sciences and Arts of Southern Switzerland) and developed on deep knowledge and expertise earned over years of academic and corporate research. From our CEO’s career as asset manager for European solar parks to our COO Davide still leading the energy systems research sector at SUPSI today, we maintain an in-depth knowledge of the challenges we face and the solutions we can offer. A detailed guide to both can be found in our white paper.

Problem 3:  Conceptual projects lacking real-world business initiatives.

A wide variety of cryptocurrency projects have dynamic-sounding solutions to some of the world’s most pressing social and financial problems, but no real idea of how to achieve them, nor the operational capability to see them through.

The Hive Power Way: Hive Power is more than a concept or a sales pitch coupled with a flashy white paper. We have developed our technology prototype and are already working with world-acclaimed partners who span the corporate, public and academic sectors. It is these strong relationships that have already helped to move our project from its idea phase into being actionable. It’s not only that partners like Landis+Gyr and the Swiss Government’s center for energy research believe in and support the work that we are doing, but that they are actually helping us hand-in-hand to enter the market successfully. Have a great idea about who else we should partner with next? Bring it up in our Telegram community and join the conversation.

Make sure to stay tuned for “Reimagining a Cryptocurrency Energy Economy Part 2”, where we will go into depth on the core technologies we are working on, how they are being implemented and how we differentiate ourselves from our competitors.


Why the energy market needs decentralized architectures

One of the reasons for the high popularity of cryptocurrencies is that they allow a decentralized economic system. This point is unanimously considered pivotal in the crypto world, where the worst insult for a project is calling it ‘centralized’. However, the definition of decentralization is often poorly understood. Even worse, a recent study shows that cryptocurrencies are not as decentralized as one might think, considering that the top four miners in Bitcoin and the top three miners in Ethereum control more than 50% of the hash rate.

Decentralization is often explained in terms of the communication architecture of a network, and a distinction is usually made between decentralized and distributed networks. This very famous picture explains the differences eloquently:

Differences between centralized, decentralized and distributed architectures

The picture is somewhat self-explanatory, but we can try to give a tentative definition of the three architectures:

  • centralized architecture: all information passes through a single node
  • decentralized architecture: not all information passes through a single node
  • distributed architecture: nodes communicate only with their neighbors

This (very personal) definition makes distributed architectures a subclass of decentralized architectures.
This picture was first published in 1964 by Paul Baran, a pioneer in the field of computer networks. When Baran published his work, he was considering distributed networks as a way to increase the resilience of the national communication infrastructure in the event of a possible atomic war:

…it can be shown that highly survivable system structures can be built, even in the thermonuclear era

— Baran, Paul. “On distributed communications networks.” IEEE transactions on Communications Systems 12.1 (1964): 1–9.

From this point of view, in which the communication network is operated by a trusted entity (the government) and the architecture’s only purpose is to guarantee communication, there is no further need to consider decentralization under other aspects. In his post on the meaning of decentralization, Vitalik Buterin explains why, for cryptocurrencies and distributed ledger technologies, the definition of a system’s degree of decentralization needs to be expanded.

Briefly speaking, three key aspects can be considered:

  • architectural decentralization: this coincides with the definition given by Baran. The system should be geographically dispersed in order to be robust against malicious attacks. The implicit assumption is that the attacker’s costs are sublinear: destroying a very big central unit is cheaper than destroying thousands of smaller, dispersed communication units.
  • political decentralization: decisions concerning the protocols running the network should be made by several individuals/organizations with no common interest in manipulating the network. This aspect becomes important once we leave the ‘us’ against ‘them’ mindset, in which the network operator is trusted and the system must only be protected against outsiders’ attacks. Note that in the case of cryptocurrencies, cartel formation is completely expected — Vlad Zamfir. The History of Casper — Chapter 4.
  • logical decentralization: this concerns the ‘state’ of the system, where ‘state’ means all the data available through the network. The BitTorrent platform is logically decentralized, since the network’s data is stored by the peers, each of them storing only part of the whole available data. The Ethereum network, and DLTs in general, are logically centralized, since it is desirable that all peers see a coherent (the same) network state at any time. This coherency comes at the cost of highly redundant data structures and is subject to the CAP theorem: if the system gets partitioned, only one property between consistency and availability can be guaranteed.

Now that we have a good grasp on the meaning of, and problems connected to, (de)centralized networks, we can start to discuss decentralized architectures for energy markets.

More and more renewables are being installed in the distribution grid, especially photovoltaics. Solar panels are highly stochastic, volatile energy sources. This volatility calls for increased flexibility of demand and creates a sweet spot for the creation of energy communities implementing local energy markets. These markets will have three kinds of participants:

  • Local producers, who can sell their excess energy at higher prices
  • Consumers with flexible loads (e.g. water heaters, heat pumps, EV chargers), who can get a discount for load shifting
  • Battery owners, who can sell their storage capability

These three actors could have different motivations for participating in such an energy community, among which:

  • A reduction of their electricity bills thanks to the increased self-consumption of the community. The energy communities will also be able to sell their flexibility to distribution system operators, generating additional profits for their members.
  • An increase in the share of locally produced clean energy that they consume.

Let us focus on the specific problem of the choice of the architecture for such a market design.

First of all, we don’t have to start from scratch, the electrical grid is also a network with its own architecture. The existing Infrastructure is massive. It was developed by billions of dollars and can be divided essentially into:

  • Electrical transmission and distribution network (cables, transformers, capacitor banks, FACTS, etc…)
  • Communication infrastructure for metering and control (PLC, optical fibers, etc.)

The network architecture of the electric power grid is also peculiar. In particular, it is divided into different voltage levels. The choice of the voltage level is a function of:

  • The distance to be covered
  • The amount of power that needs to be transported

Per unit of length, high voltage lines are more expensive than medium and low voltage ones, but the amount of power they can transport and the distance they can cover are much higher, thanks to lower losses. Indeed, raising the voltage by a factor of 10 reduces the current by a corresponding factor of 10 and therefore the RI² losses by a factor of 100.
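A quick numeric check of this loss scaling, with an illustrative line resistance and transported power:

```python
# Back-of-the-envelope check: raising the voltage 10x at constant power
# cuts the line current 10x and the resistive losses R*I^2 by 100x.

P = 1e6    # transported power, W
R = 0.5    # line resistance, ohm (illustrative value)

losses = {}
for V in (10e3, 100e3):     # 10 kV vs 100 kV
    I = P / V               # line current, A
    losses[V] = R * I**2    # resistive losses, W
print(losses)  # {10000.0: 5000.0, 100000.0: 50.0}
```

The 100 kV line dissipates 50 W instead of 5 kW for the same power transfer, illustrating the factor-100 reduction.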

In distribution systems, once the electricity approaches the point of consumption, the voltage is gradually decreased, reducing the cost of the lines and the possible dangers in case of a short circuit.

Example of transmission grid in southern Switzerland. Red, green, yellow and blue lines represent 380, 220, 150 and 50 kV lines, respectively. Source: AET.

In the above figure, we can see the high voltage (HV) and medium voltage (MV) lines in the region around Hive Power’s headquarters, ranging from 380 kV (red lines) to 50 kV (blue lines).

Example of LV distribution grid, showing a typical radial structure. Source: IEEE.

The low voltage (LV) network, like the one shown in the above figure, is much more ubiquitous. In the most common case, the topology of the low voltage network is radial, meaning it has a simple tree-like structure.

This structure naturally partitions the system. From the physical point of view, the effects of an LV network on the upper MV level can be taken into account solely by means of the total power at the transformer. In other words, it is not required to know the power consumption of all the buildings in the LV network at a given time to effectively control the MV level, nor is it required that a prosumer located in LV network A knows about all the energy produced or consumed by all the prosumers in LV network B to effectively exchange energy.
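In code, this decoupling amounts to a simple aggregation per transformer; the network names and power values below are illustrative:

```python
# Physical decoupling in a radial grid: the MV level only needs the
# aggregate power at each LV transformer, not per-building detail.
# Layout and power values are illustrative assumptions.

lv_networks = {
    "LV_A": [1.2, -0.8, 0.5, 2.1],   # per-prosumer net power, kW
    "LV_B": [-3.0, 0.7, 1.1],
}

# All the MV level sees: one number per transformer.
transformer_power = {name: round(sum(p), 6) for name, p in lv_networks.items()}
print(transformer_power)  # {'LV_A': 3.0, 'LV_B': -1.2}
```

Whatever happens inside LV_A, the MV controller only ever observes the 3.0 kW at its transformer, which is exactly why aggregated information suffices at upper levels.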

And now a very important point:

“The mechanism design of new energy markets must explicitly consider the effect of traded energy on the electrical grid. The energy prices must reflect the state of the grid.”

This point is essential for understanding how we view the market and its communication infrastructure. In the presence of distributed generation from renewable energy, e.g. PV, the power production gets highly synchronized. This synchronization is a possible hazard for the electrical grid, since it can overload electrical lines. Furthermore, power production from renewable energies is highly volatile, influencing the local power quality.

New energy markets are in charge of mitigating the effect of an increasing penetration of renewable generators in the electric grid. The common view among the scientific community is that this could be done by means of demand response programs, in which the energy price is changed dynamically based on the state of the grid.

These considerations lead us to analyze another sub-class of decentralized architectures, which is a very good candidate for decentralized energy markets: hierarchical structures. Hierarchical structures are essentially tree-like structures, in which each node can be a terminal or a branching node. Terminal nodes are the ‘leaves’ of the tree, and have no downwards connections. In our energy markets, terminal nodes are single prosumers.

A tree like structure, in which blue hexagons represent terminal nodes and red hexagons branching nodes. The orange hexagons gather nodes with the same parent nodes into groups.

The picture above depicts an example of a hierarchical structure, in which blue terminal nodes with the same parent node are gathered together in a group. Note that this structure is somewhat fractal: a group can be treated as a single terminal node when seen from the upper level.

Back to the energy markets and decentralized systems!

How does this architecture fit into the aforementioned classification, and why does it make sense for decentralized energy markets? Let’s reconsider each point one by one:

  • architecture: the architecture is geographically decentralized, but not fully distributed. That is, not all nodes are equally important from the point of view of an attacker who would like to make the whole system unavailable. Note, however, that the same is true of the electric system. Furthermore, and more importantly, remember that the groups are decoupled, physically and logically. This means that if, for some reason, communication with the root node (the one located at the top level) is lost, prosumers in the communities at the lower levels can still effectively trade energy among peers in the same community.
  • political: the hierarchical architecture makes the branching nodes pivotal for the energy market to work. This empowers the owners of the branching nodes with respect to simple prosumers. To mitigate this issue, we can introduce a governance system regulated by smart contracts.
  • logical: the hierarchical architecture does not influence logical (de)centralization per se. Remember, however, that the system we want to operate is decoupled, and physical effects can be taken into account by means of aggregated power at upper levels. That is, both energy trading and grid control are possible if information is aggregated at each branch of the structure. This aggregation both avoids unnecessary information flows and preserves the privacy of the prosumers: only aggregated information about energy consumption is available at higher levels of the structure; furthermore, even prosumers belonging to the same group only have aggregated information about each other.

Hierarchical structures also have another peculiar aspect, strictly related to mechanism design, a field of economics and game theory that aims at building market rules that induce a desired effect on the market equilibria. For instance, the Casper protocol of Ethereum is seen by its creators as the result of applying mechanism design to cryptoeconomics.

Designing a market that turns competition into cooperation

One of the most celebrated results of mechanism design is the revelation principle, which states that:

If the market is incentive compatible, we can restrict the study to the situation in which each participant is willing to disclose its private information.

This means that no agent would have an incentive to lie about its power forecasts or about the expected utility of using a certain amount of energy. Let’s work through an example to clarify the implications.

Assume that each market player adopts an optimal strategy (in terms of outcomes) given his private information. A player could lie in reporting his private information if he finds some advantage in doing so. For example, imagine we have designed a market in which prosumers pay a price proportional to their consumption and, if they consume more than the average, they pay an additional fee. If prosumer A declares he is going to consume a lot of energy in the next market period, the other prosumers could increase their consumption plans, believing themselves to be under-the-average consumers. In the next step, prosumer A consumes much less than he had previously declared, and there is no time for the other prosumers to adjust to the updated information. As a result, A is now an under-the-average consumer. What has happened is that A, by lying about his private information, has avoided the risk of paying the additional fee, to the detriment of the others.
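A toy simulation of this fee mechanism shows the manipulation at work; the price, fee and consumption figures are illustrative assumptions:

```python
# Toy illustration of the manipulation described above: a flat fee for
# above-average consumption can be gamed by misreporting plans.
# All numbers are illustrative assumptions.

PRICE, FEE = 0.2, 5.0  # CHF/kWh and flat above-average fee, CHF

def settle(consumptions):
    # Bill = proportional price, plus the fee for above-average consumers.
    avg = sum(consumptions) / len(consumptions)
    return [c * PRICE + (FEE if c > avg else 0.0) for c in consumptions]

# Honest round: A plans 10 kWh, B and C plan 6 kWh each. A is above average.
honest = settle([10.0, 6.0, 6.0])

# A declares 20 kWh; B and C, believing they are well under the average,
# raise their plans to 9 kWh. A then actually consumes only 8 kWh.
gamed = settle([8.0, 9.0, 9.0])

print(honest[0], gamed[0])  # A's bill: 7.0 honest vs 1.6 after lying
```

By lying, A dodges the fee entirely, while B and C, lured above the new average, now pay it in his place.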

How can this be avoided? In the simplest form, prosumers (leaf nodes) can agree to communicate their private information to a super-partes entity (their parent node), which plays the optimal strategy for them. Finding the optimal strategy generally involves solving a pre-defined optimization problem. The important thing is that each prosumer has previously agreed on how this optimal strategy is found, and that all of them consider the super-partes entity trustworthy. In this case, prosumers have no interest in lying, since doing so would, by definition, result in a payoff reduction!

In view of the above-mentioned benefits, at Hive Power we decided to design our distributed energy market platform making use of aggregators. Of course, these aggregators should either be trusted or, even better, auditable.

In my next posts I will discuss how we will:

  • model the market in a dynamic and stochastic setting
  • take into account grid constraints
  • preserve user privacy

I will also discuss alternative solutions for the intra-group communication. Stay tuned!



Demo Hive: Our First Successful Implementation of a Blockchain-based Energy Market

As a consequence of the foreseen significant increase in stochastic generation in the electrical grid, the need for flexibility and coordination on the demand side is expected to rise. Decentralized energy markets are among the most promising solutions for boosting coordination between production and consumption, by allowing even small actors to capitalize on their flexibility. The main purpose of Hive Power is to develop a blockchain-based platform to support groups of prosumers that want to create their own energy market. The core element of this framework is the so-called Hive, i.e. an implementation of an energy market based on blockchain technology (see our white paper for detailed information about the Hive Power platform).

This article describes Demo Hive, the first testbed developed by our team and presented during the Energy Startup Day 2017 in Zurich, Switzerland, on November 30th 2017. The demo is a simple but meaningful instance of a hive; it consists of a producer and a consumer, the so-called workers. A third element is the QUEEN, whose aim is to manage the interaction between the workers and the external grid and to track the measurements related to the power consumed/produced by the workers. The producer, hereafter named SOLAR, simulates a photovoltaic plant with a nominal power of 5 kWp. The other worker (LOAD) generates data about load consumption. Fig. 1 shows the demo testbed.

Fig1: The Demo Hive testbed

Essentially, the main hardware components of Demo Hive are:

  • Two SmartPIs, one for each worker. This device consists of an acquisition board for the electrical measurements (voltages and currents) connected to a Raspberry Pi 3. In Fig. 1 the two workers are the black boxes at the bottom.
  • A Raspberry Pi 3 providing the QUEEN functionalities.
  • A 5G router providing Internet connectivity and a WLAN inside the testbed.

Energy tokenization:

One of the most meaningful aims of Demo Hive is to tokenize the produced/consumed energy and save the related information on a blockchain. For that reason, an ERC20-compliant smart contract was deployed on the Ropsten network to create a demo token, called DHT, with the following fixed value:

  • 1 DHT = 1 cts = 0.01 CHF

The basic idea of Demo Hive is that LOAD owns a certain amount of DHTs and sends part of them to the producers (typically SOLAR, but also the external grid through QUEEN) to buy energy. This aspect is described in detail below.

Operation mode:

A set of applications, partly developed by Hive Power, runs on the aforementioned devices to operate the Demo Hive platform. This article describes only the main behavior of the demo testbed, without explaining all the code in detail. The following image shows the software interactions inside the demo and with the Ropsten network.

Fig 2: Demo Hive software interactions

As written in our whitepaper, the real Hive platform will periodically save data about the tokenized energy on a blockchain. This is quite inconvenient in a demo testbed, because the period can be too long. For that reason, the demo software considers virtual days with a duration of just 10 minutes: the SOLAR worker produces in 10 minutes the same energy a real plant would produce in 24 hours. Similarly, the power measurements, which in a real application are performed off-chain and usually acquired every 15 minutes, are measured every 5 seconds in Demo Hive. As shown in Fig. 2, during the 10-minute virtual day the power measurements are saved by the workers in QUEEN (black arrows) in an InfluxDB database, a time-series-oriented DBMS commonly used in monitoring applications. When the simulated day ends, the workers’ energies are calculated and tokenized in DHTs considering the following static tariffs.

  • Buy on grid: 20 cts/kWh
  • Sell on grid: 5 cts/kWh
  • Buy in the Hive: 10 cts/kWh
  • Sell in the Hive: 10 cts/kWh
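The 5-second measurements of a virtual day can be turned into the energies to tokenize with a simple numerical integration. The sketch below is only illustrative, with names of our own choosing (the actual demo code is not published in this article):

```python
# Sketch: integrate the 5-second power samples of a 10-minute virtual day
# into energy. Sample format and interval are taken from the article;
# the function itself is illustrative, not the real Demo Hive code.
def energy_kwh(power_samples_w, interval_s=5.0):
    """Rectangular integration of power samples (W) into energy (kWh)."""
    joules = sum(p * interval_s for p in power_samples_w)
    return joules / 3.6e6  # 1 kWh = 3.6e6 J

# A 10-minute virtual day sampled every 5 s yields 120 samples.
samples = [2000.0] * 120  # constant 2 kW for 10 minutes
print(energy_kwh(samples))  # ≈ 0.333 kWh (2 kW for 600 s = 1/3 kWh)
```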

Note that the LOAD and SOLAR workers can only buy and sell energy, respectively, while QUEEN, managing the interface with the grid, is allowed to perform both operations. At the end of a simulated day, a tokenization algorithm tries to maximize the hive autarky using the following rules (see also Fig. 2):

If E_LOAD > E_SOLAR: LOAD buys E_SOLAR from SOLAR (10 cts/kWh) and E_LOAD − E_SOLAR from QUEEN (20 cts/kWh)
If E_SOLAR > E_LOAD: SOLAR sells E_LOAD to LOAD (10 cts/kWh) and E_SOLAR − E_LOAD to QUEEN (5 cts/kWh)

In practice, the workers exchange all the available energy inside the hive, exploiting the more convenient Hive tariffs.
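The settlement rules can be sketched in a few lines of Python. This is a minimal illustration with our own names (the real Demo Hive implementation is not published here); tariffs are in cts/kWh as listed above:

```python
# Sketch of the autarky settlement: the hive-internal exchange is maximized,
# the residual is bought from / sold to the grid through QUEEN.
BUY_GRID, SELL_GRID, HIVE_TARIFF = 20, 5, 10  # cts/kWh

def settle(e_load, e_solar):
    """Return the DHT transfers (1 DHT = 1 ct) for one virtual day."""
    local = min(e_load, e_solar)  # energy exchanged inside the hive
    transfers = {"LOAD->SOLAR": local * HIVE_TARIFF}
    if e_load > e_solar:
        # Residual demand is bought from the grid through QUEEN.
        transfers["LOAD->QUEEN"] = (e_load - e_solar) * BUY_GRID
    else:
        # Surplus production is sold to the grid through QUEEN.
        transfers["QUEEN->SOLAR"] = (e_solar - e_load) * SELL_GRID
    return transfers
```

For example, `settle(35.6, 50.8)` (a sunny day with surplus production) yields a 356 DHT hive payment from LOAD to SOLAR plus a 76 DHT grid payment from QUEEN to SOLAR.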

The energies are thus tokenized in DHTs and the related tokens (as written before, 1 DHT = 1 ct) sent by the buyers (LOAD or QUEEN) to the sellers (SOLAR or QUEEN) according to the algorithm above. In Fig. 2 these operations are represented by the red and light blue arrows. The DHT transfers are then saved on the Ropsten blockchain: on each demo device a geth client maintains a node synchronized with the Ethereum testnet. To minimize the required disk space, the geth instances run the Ethereum light client protocol. The Ropsten accounts of the components are reported below:
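Since the DHT contract is ERC20-compliant, each transfer is ultimately an ERC20 `transfer(address, uint256)` call. As a standard-library-only illustration (the demo's real sending code, typically a web3 library call, is not shown in the article), here is how the data field of such a transaction is encoded; the recipient address is a placeholder:

```python
# Illustration: ABI encoding of an ERC20 transfer(address,uint256) call,
# the kind of transaction used to move DHTs on Ropsten.
# 0xa9059cbb is the standard ERC20 `transfer` function selector.
TRANSFER_SELECTOR = bytes.fromhex("a9059cbb")

def erc20_transfer_calldata(to_address: str, amount: int) -> bytes:
    """Selector + 32-byte recipient word + 32-byte amount word."""
    hex_part = to_address[2:] if to_address.startswith("0x") else to_address
    to_word = bytes.fromhex(hex_part).rjust(32, b"\x00")
    amount_word = amount.to_bytes(32, "big")
    return TRANSFER_SELECTOR + to_word + amount_word

# e.g. paying 356 DHT (3.56 CHF) to a placeholder address:
data = erc20_transfer_calldata("0x" + "11" * 20, 356)
print(len(data))  # 68 bytes: 4 (selector) + 32 + 32
```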

Simulation results:

As explained above, the Demo Hive testbed simulates "virtual" days lasting 10 minutes. During a single day, the produced/consumed power of the two workers is saved every 5 seconds. At the end of the day (i.e. after 10 minutes) the related energies are calculated, tokenized and saved on the Ropsten network. To cover both cases of the autarky algorithm (solar production greater than and smaller than load consumption), the following power profiles are used for the workers:

  • SOLAR: two profiles are considered. The first (named CLEAR below) has a significant production, corresponding to a day without clouds; the second (named CLOUDY) has a poor production, simulating an overcast day. The two profiles alternate continuously over the simulated days, i.e. after a CLEAR day there is a CLOUDY one, and so on.
  • LOAD: a single typical profile is used as a baseline, and every day a noise is added to it. As a consequence, the resulting profiles over the simulated days are always similar, but never equal.
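The LOAD rule above can be sketched as follows; the baseline values and the 10% noise level are hypothetical, chosen only to illustrate the idea:

```python
# Sketch: derive each virtual day's LOAD profile as the same baseline plus
# random noise, so days are similar but never equal. Baseline and noise
# level are illustrative, not the demo's real values.
import random

BASELINE_W = [800, 900, 1500, 2200, 1800, 1200]  # coarse daily shape, in W

def daily_load_profile(noise_frac=0.1, rng=random):
    """Perturb each baseline sample by up to +/- noise_frac."""
    return [p * (1 + rng.uniform(-noise_frac, noise_frac)) for p in BASELINE_W]
```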

Fig. 3 shows an example of two simulated days; the difference between the CLEAR and CLOUDY cases is easy to see.

Fig 3: Profiles of two simulated days (light blue: SOLAR, dark yellow: LOAD)

The profiles shown in Fig. 3 were recorded during the Energy Startup Day 2017. In the first profile (CLEAR), the SOLAR production exceeds the LOAD consumption. As a consequence, all the energy needed by LOAD is bought locally in the hive from the SOLAR producer at the convenient Hive tariff (10 cts/kWh), while the remaining production not bought by LOAD is sold by SOLAR to the grid at the less convenient tariff (5 cts/kWh). In this way the local energy exchange is maximized and both workers save or earn money thanks to the Hive tariffs.

In the second case (CLOUDY profile), the production cannot cover all the consumption, so LOAD has to buy part of the needed energy from the grid, paying 20 cts/kWh.

At the end of the simulated day, the savings/profits are tokenized and the related DHTs are distributed by the consumer (e.g. LOAD in the CLOUDY case) to the producers (e.g. SOLAR and QUEEN in the CLOUDY case) to pay for the used energy. The following list reports the energy profits/costs in DHTs, comparing Demo Hive against a business-as-usual (BAU) situation in which the hive market does not exist (i.e. only the grid tariffs of 20/5 cts/kWh to buy/sell energy are available).

  • Solar revenues:
12:00-12:10 (CLEAR):
HIVE = 432 DHT
BAU = 254 DHT
12:10-12:20 (CLOUDY):
HIVE = 135 DHT
BAU = 68 DHT
  • Load costs:
12:00-12:10 (CLEAR):
HIVE = 356 DHT
BAU = 713 DHT
12:10-12:20 (CLOUDY):
HIVE = 590 DHT
BAU = 725 DHT
HIVE - BAU = -123 DHT

It is easy to see that the money saved/earned by LOAD/SOLAR is much higher during the CLEAR day, since the solar production covers all the energy needed inside the hive. The precise amounts are:

  • LOAD saves 3.57 CHF during CLEAR days
  • LOAD saves 1.23 CHF during CLOUDY days
  • SOLAR earns 1.78 CHF during CLEAR days
  • SOLAR earns 0.67 CHF during CLOUDY days
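As a sanity check, the CLEAR-day figures can be reproduced from the tariffs. The article does not state the day's energies; the values below are back-solved from the reported DHT amounts and are therefore our inference, not published data:

```python
# Cross-check of the CLEAR-day results using the tariffs in cts/kWh.
# The energies are back-solved from the reported DHTs (our inference).
E_LOAD, E_SOLAR = 35.6, 50.8  # kWh (inferred)

hive_load_cost = E_LOAD * 10                           # all bought in the hive
bau_load_cost = E_LOAD * 20                            # all bought on the grid
hive_solar_rev = E_LOAD * 10 + (E_SOLAR - E_LOAD) * 5  # hive sale + grid surplus
bau_solar_rev = E_SOLAR * 5                            # all sold to the grid

print(round(hive_load_cost), round(bau_load_cost))  # 356 712 (article: 356 / 713)
print(round(hive_solar_rev), round(bau_solar_rev))  # 432 254 (article: 432 / 254)
```

The 1 DHT difference on the BAU load cost (712 vs 713) is presumably a rounding effect in the measured energies.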

The following URLs report the details of the Ropsten transactions related to the simulated days.

Next steps:

The Demo Hive testbed implements a very simple hive. It is a significant starting point for the development of the complete framework, but several improvements are still needed. The following list reports the most important features still to be developed.

  • Prototype of a "blockchain-ready" meter: the SmartPi device is based on a Raspberry Pi 3 board, a great hardware platform for prototyping and initial tests but not designed for easy integration into an industrial product. To develop the blockchain meter that our framework naturally requires, Hive Power plans to evaluate more industrial-oriented hardware platforms and use them to replace the SmartPi devices.
  • Power profiles: currently the worker profiles are quite similar across the 10-minute "simulated days": the SOLAR production strictly alternates between clear and overcast days, while for LOAD a noise is added every simulated day to the same predefined profile. For a more realistic situation, new profiles have to be considered (e.g. two different LOAD profiles, one for workdays and one for the weekend).
  • State channels: in our demo testbed, the power measurements are acquired every 5 seconds and the related data saved in a database running on QUEEN. For a fully decentralized approach, our idea is to handle power data using state channels, avoiding the local database.
  • More workers: To have a more realistic simulation of a Hive energy market, the number of workers should be increased.
  • Prosumer/storage worker: since Demo Hive currently contains only a consumer (LOAD) and a producer (SOLAR), it will be meaningful to introduce prosumer and storage workers to obtain a complete market. Note that with storage systems it would also be possible to implement load-shifting algorithms to maximize the cost savings.
  • Dynamic tariffs: Demo Hive uses only static tariffs for buying and selling energy. Clearly, this is not realistic, so a dynamic tariff system has to be implemented.
  • World conquering: …is coming 🙂

At Hive Power we are working hard on our demo testbed to continuously improve it and add more functionalities.

Key links:


Installing Raiden on a Raspberry Pi 3

The integration of IoT and Ethereum is emerging as a powerful solution for data management on the blockchain. Unfortunately, at present the speed and storage requirements of a typical IoT application exceed the capabilities of the public Ethereum blockchain, for two main reasons. The first is the block time, currently too long to track IoT data. The second is the gas cost: each transaction on Ethereum has a cost, so the total gas paid for all the transactions would be extremely expensive. As a consequence, an interface between the "fast" world of IoT and the "slow but decentralized" world of Ethereum is currently needed.

Raiden is a framework for the fast management of transactions. Built as an off-chain solution, it provides a fast exchange of data among the Raiden nodes using state channels, avoiding the long response times and the gas costs of on-chain transactions. On the other hand, the opening and closing of each Raiden state channel are tracked on the blockchain (currently only on the Ropsten network) together with the related initial and final balances.

This article explains the procedure to install Raiden on a Raspberry Pi 3. I chose this well-known hardware platform because it is widely used for IoT applications. It is assumed that Raspbian Jessie 8 is the operating system running on the Raspberry Pi 3.

Raspberry Pi 3 used for the Raiden installation

The installation steps can be summarized into the following list. For each point an explanation is reported together with the related bash commands.

  • Step1: Installation of libraries and tools required by the following steps.
# sudo apt-get install geth pip cmake libboost-all-dev
  • Step2: Creation of a temporary swap file to avoid memory overflows. A Raspberry Pi 3 has 1 GB of RAM, of which ~20% is used by Raspbian and other processes. The remaining RAM is not sufficient for the following compilation steps, so swap space is needed; in my tests without swap I always ran into memory overflows. In the example below I created a swap file of 0.5 GB, enough for the compilations. Note that after the two following steps the swap file can be deleted and the space returned to the root partition.
# sudo dd if=/dev/zero of=/swap bs=1M count=512
# sudo mkswap /swap
# sudo swapon /swap
  • Step3: Installation from source of the tool Z3, required by the solc compiler. On the Raspberry Pi this process is very long, taking several hours.
# mkdir ~/software
# cd ~/software
# wget
# unzip
# cd z3-master
# python scripts/
# cd build
# make
# sudo make install
  • Step4: Installation from source of the Solidity compiler (solc). This software is required by Raiden but, unfortunately, no binary package is currently available for the Raspberry Pi 3 hardware architecture (armv7). This is why the compilations are needed and, indeed, the main reason for this article. Computationally this step also takes a meaningful amount of time, although less than Step3.
# cd ~/software
# git clone --recursive
# cd solidity-0.x.y
# scripts/
# scripts/
  • Step5: Installation of Raiden. The final step is much simpler than Step3 and Step4: since Raiden is a set of Python scripts, no compilation is required.
# cd ~/software
# git clone
# cd raiden-x.y
# sudo pip install --upgrade -r requirements.txt
# sudo python develop

Now all the tools are properly installed and we can start using them. Currently Raiden is available only on the Ropsten network, so the first step is to start geth in light mode (the only mode I managed to run on a Raspberry Pi) and sync it to the test network.

# geth --testnet --light --v5disc --cache 1024 --rpc --rpcport 8545 --rpcaddr --rpccorsdomain "*" --rpcapi "eth,net,web3" console

Once the node is synced, a Ropsten account has to be created and an amount of at least 0.1 ETH assigned to it. After that, we can start Raiden as shown below:

# raiden --keystore-path  ~/.ethereum/testnet/keystore

To check whether Raiden has been successfully installed, query its REST interface with the following command and inspect the result:

# curl -X GET

If you get a similar response, Raiden is working on your Raspberry Pi 3. Have fun!

At Hive Power we are using blockchain-enabled embedded devices to create energy sharing communities where all participants are guaranteed to benefit from the participation.

Join Hive Power Telegram chat:

Learn more about Hive Power:

Can blockchain boost the distribution of renewables?

In recent years the world has seen a remarkable adoption of renewable energy sources, mainly solar and wind. Initially, the rapid advancement of these technologies was driven by strong incentive programs, first in Europe and then in the rest of the world; in parallel, technical improvements and the economies of scale of manufacturers, mainly Asian, have allowed a rapid decline in the cost of renewable electricity.

Today, in different parts of the world, renewable energy is already cheaper than fossil fuels, but in several cases its adoption can be stalled by uncertain self-consumption rates. With current tariff schemes, a very important factor in calculating the financial return of a new photovoltaic plant is the percentage of self-consumption, because of the big difference between the price of energy purchased from and sold to the local network: without significant consumption during the sunny hours, most of the energy produced is injected into the grid. This mismatch can make solar uneconomical.
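A hypothetical worked example makes the point concrete. The prices below are illustrative (chosen to echo the 20/5 cts/kWh grid tariffs used in our demo) and the function is our own sketch:

```python
# Illustrative only: how the self-consumption rate drives the yearly value
# of a PV plant. Prices and production figure are hypothetical.
BUY, SELL = 0.20, 0.05  # CHF/kWh bought from / sold to the grid

def annual_pv_value(production_kwh, self_consumption_rate):
    """Yearly value of PV production: avoided purchases + feed-in revenue."""
    self_used = production_kwh * self_consumption_rate
    exported = production_kwh - self_used
    return self_used * BUY + exported * SELL

# Same 5000 kWh/year plant, very different returns:
print(annual_pv_value(5000, 0.3))  # low self-consumption
print(annual_pv_value(5000, 0.7))  # high self-consumption
```

With these numbers, raising self-consumption from 30% to 70% increases the yearly value from 475 to 775 CHF, which is exactly why uncertain self-consumption rates make the investment hard to evaluate.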

Example of mismatch between solar energy production and house’s demand

New energy exchange models

To change this scenario, new energy exchange models are needed that encourage users to use energy more rationally at the local level, introducing a simple and cheap billing scheme. The natural candidate for optimizing decentralized energy exchange processes is the blockchain: a decentralized technology, based on distributed databases, that allows users to sign contracts and make payments at negligible marginal cost, with high reliability and without a central body. Blockchain technology and new business models that stimulate local energy exchange (for example between neighbours or business entities in the same district) can facilitate the diffusion of photovoltaic plants and optimize the distribution and use of batteries, which will have a prominent role in the coming years.

A number of start-ups and big utilities are moving towards this new scenario, some with proprietary systems that somewhat contradict the distributed market logic, others trying to create an open platform that can integrate third-party components. These models also need to fit different use cases, such as microgrids, self-consumption communities, condominiums and low-voltage district grids.

An automated and reliable future

Additional factors that will encourage the use of smart energy management technologies at the local level are the physical constraints of the power grid. The copper cables connecting residential, commercial and industrial areas could face issues in the future, causing a new wave of renovation costs for the electric grids. To prevent this regrettable future, new market models should be designed that encourage users to use (and store) energy in ways that help the power grid keep voltage and frequency within safe ranges. All these mechanisms will be driven by artificial intelligence algorithms, so no effort will be required from the user: these new smart tools will operate in what many already call the "machine economy".

At Hive Power we are enabling the creation of energy sharing communities where all participants are guaranteed to benefit from the participation, reaching at the same time a technical and financial optimum for the whole community.
