Lorenzo Vallecchi is an Italy-based renewable energy specialist with over 20 years of experience in operations, consulting & journalism in Italy, Canada, North Africa & the Middle East. In this guest article, Lorenzo explains how he foresees that energy services offered by private Bitcoin and AI activities will cause a blurring of boundaries and “hybridization” between industries and moral codes, public and private realms, for-profit and not-for-profit activities.
A new hybrid sector
Energy, technology, and Bitcoin present both similarities and differences as distinct sectors. My interest lies in exploring the emerging common ground among these three areas — specifically, renewables, artificial intelligence (AI), and Bitcoin mining. Bitcoin mining and AI data centers sit at the intersection of the new technology and renewable energy sectors. Compared to traditional tech and energy industries, these fields appear to be forming a new hybrid sector, one that increasingly incorporates characteristics from both worlds.
One possible metaphor to describe this hybrid sector is that of supercritical fluids. Above a critical temperature and pressure, fluids (gases and liquids) enter a distinct state of matter known as “supercritical.” In this state, the fluid is neither fully a liquid nor a gas but exhibits properties of both: it occupies more volume than a liquid but less than a gas. Similarly, in a context of increasing “heat” and “pressure” in the social, economic, political, environmental and climate realms, Bitcoin and AI data centers can be seen as supercritical fluids.
They are less capital-, energy-, and infrastructure-intensive than the traditional energy sector, yet more so than the traditional tech sector. I will briefly examine what such hybridization means for this potential new sector and the three pillars that support it. Furthermore, I will argue that the increasing convergence of money, energy, and knowledge not only signals but also actively contributes to the upcoming hybridization of moral codes, the blurring of public and private spheres, and the overlapping of for-profit and not-for-profit activities.
Tech is becoming capital intensive
Using some of the largest tech companies as a reference point, we can observe that the collective capital expenditures (CAPEX) of Meta, Google, and Microsoft have roughly tripled over the last five years.
“CAPEX is not only growing larger, but the rate of growth is set to accelerate this year as they invest in the AI boom. Combined CAPEX at MSFT, GOOG, and META is set to grow around 70% in 2024. As a percentage of sales, CAPEX will grow from 13% of sales in 2023 to around 20% in 2024,” according to John Huber, Managing Partner of Saber Capital Management.
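As a quick consistency check, the growth rates implied by these rounded figures can be derived in a short Python sketch. The inputs come from the quotes above, not from company filings, so treat the outputs as back-of-the-envelope numbers:

```python
# Back-of-the-envelope checks on the CAPEX figures quoted above.
# Inputs are the rounded numbers from the text, not company filings.

capex_multiple_5y = 3.0    # combined CAPEX roughly tripled over five years
cagr = capex_multiple_5y ** (1 / 5) - 1
print(f"Implied 5-year CAPEX CAGR: {cagr:.1%}")   # ~24.6%

# Huber's 2024 figures: CAPEX +70%, rising from 13% to 20% of sales.
# For both ratios to hold simultaneously, sales must grow accordingly.
implied_sales_growth = 1.70 * 0.13 / 0.20 - 1
print(f"Sales growth implied by those ratios: {implied_sales_growth:.1%}")   # ~10.5%
```

In other words, a 70% CAPEX jump alongside a 13%-to-20% shift in the CAPEX-to-sales ratio is consistent with sales growing only about 10% in the same year: spending is outrunning revenue by design.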
On the one hand, this trend is positive and expected for companies nurturing a new business. They are simply investing in their future and continued success. “But what is eye-catching about the big tech companies is the sheer size of this growth in dollar terms and the increasing rate of change: CAPEX is now over three times depreciation expenses,” says Huber.
Until recently, tech companies had very little physical capital, meaning they had fewer assets to depreciate and amortize on their balance sheets. However, with the advent of AI and Bitcoin, physical equipment in the form of GPUs and ASICs, their periodic replacement, and the power and data infrastructure needed to run them have taken on a much more central role in tech corporations’ business structures and priorities. As these companies’ CAPEX continues to increase at an accelerating rate, depreciation expenses for equipment will also rise, potentially impacting their earnings and free cash flow.
Meta, Google, and Microsoft collectively have roughly three-quarters of a trillion dollars of capital invested. It might become much more challenging for them over the next decade to achieve returns on capital as high as they have in the past. While tech companies will likely continue generating better returns than public utilities, they will face, for the first time in their history, the frictions of infrastructure development and the challenges of hardware upgrades — issues typically reserved for more traditional brick-and-mortar, steel-and-concrete businesses.
Tech companies may manage to balance their significantly larger capital expenses, or even offset them, with even higher growth in sales and earnings, maintaining the free cash flow necessary for new investments in capital assets. However, this outcome is not guaranteed. The issue is not whether tech companies can outpace capital expense growth with an even higher rate of profit growth. The point is that the playing field for AI-based tech companies is evolving compared to the one on which traditional tech companies have thrived for the last couple of decades. In response to this new CAPEX-heavy environment, tech companies themselves are changing, mimicking the evolutionary dynamics of the natural world.
In other words, tech companies’ business models used to resemble the bodies of rock climbers — lean, light, strong, and nimble. Now, they increasingly resemble the bodies of shot-put throwers or heavyweight boxers — bulky, heavy, strong, and relatively slower. This places AI companies closer to Bitcoin mining companies, which have always been capital-intensive in their shorter history.
“There is a capital question, that is, at what point does it stop being worth putting the capital in, but I actually think before we hit that, you are going to run into energy constraints,” Meta’s Mark Zuckerberg recently said in an interview on Dwarkesh Patel’s podcast, referring to the physical constraints of building new power infrastructure and their lengthy permitting processes.
Tech is becoming energy intensive
“Power demand from generative AI will increase at an average annual rate of 70% through 2027, mostly due to the growth of data centers,” according to a note by Morgan Stanley Research.
“As the pace of efficiency gains in electricity use slows and the AI revolution gathers momentum, Goldman Sachs Research estimates that data center power demand will grow by 160% by 2030,” according to a recent article by the investment bank.
“Electricity consumption from data centers, artificial intelligence (AI), and the cryptocurrency sector could double by 2026… After globally consuming an estimated 460 terawatt-hours (TWh) in 2022, data centers’ total electricity consumption could exceed 1,000 TWh in 2026,” according to a recent report by the International Energy Agency (IEA).
“The power needs of data centers are expected to grow… from between 3 and 4 percent of total US power demand today to between 11 and 12 percent in 2030,” with consumption increasing to 606 TWh and demand jumping from 25 GW in 2024 to more than 80 GW in 2030, according to a recent McKinsey report.
Although traditional data centers, AI, and crypto-mining accounted for less than 2% of global power consumption in 2022, most observers believe that AI-based tech companies and Bitcoin miners are becoming much more energy-intensive than in the past.
It should be noted that previous forecasts of energy consumption by emerging technologies, such as the Internet and smartphones, have been significantly exaggerated. For example, in 2007, the US Environmental Protection Agency projected that energy usage in the US by data centers and related technologies would double between 2005 and 2010. However, increased efficiencies in technology, computing, and devices — such as the widespread adoption of LED lighting — combined with reduced demand due to the 2008–2009 recession and the offshoring of some energy-intensive industries, prevented a sharp rise in electricity consumption.
A study conducted in 2011 found that the electric load from data centers and computing resources in the US increased by only 36% between 2005 and 2010. Although this growth was considerable, it was roughly a third of the growth initially feared. Similar and much more alarmist warnings about the risk of Bitcoin mining consuming most of the world’s energy have been thoroughly disproven.
“In 2023, our total data center electricity consumption grew by 17%,” Google reported in its Environmental Report 2024. Google’s total electricity consumption for its data centers was 25.3 TWh in 2023, compared to 15.1 TWh in 2019, which Google uses as its base year for decarbonization calculations.
As a rough but relevant proxy data point for the tech industry, that represents a 67.5% increase in power consumption over four years, a simple (non-compounded) average of about 16.9% per year. While this is a substantial increase, it is nowhere near the 70% annual growth rate mentioned by Morgan Stanley Research, and still below the 22.8% annual growth rate implied by Goldman Sachs’s prediction. Google’s annual power consumption growth is also significantly lower than the roughly 30% annual increase implied by the IEA’s 2026 scenario, although the IEA’s figure also includes crypto mining.
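Both the simple annual average and its compounded equivalent can be reproduced directly from the two reported data points, a distinction worth keeping in mind when comparing against the banks’ forecast rates:

```python
# Reproducing the growth figures from Google's reported consumption
# (Environmental Report 2024, as quoted above).

twh_2019, twh_2023 = 15.1, 25.3
years = 4

total_growth = twh_2023 / twh_2019 - 1               # ~67.5%
simple_average = total_growth / years                # ~16.9%, the figure cited above
cagr = (twh_2023 / twh_2019) ** (1 / years) - 1      # ~13.8%, compounded equivalent

print(f"Total growth 2019-2023: {total_growth:.1%}")
print(f"Simple annual average:  {simple_average:.1%}")
print(f"Compound annual rate:   {cagr:.1%}")
```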
Regardless, even if current predictions for the growth of energy consumption in this new sector should be taken with a large grain of salt, it is prudent to assume that AI and Bitcoin data centers will indeed increase the energy intensity of the traditional tech sector.
As with their new capital profile, tech companies will likely continue generating better returns than traditional brick-and-mortar businesses. However, they will have to contend, for the first time in their history, with power access and infrastructure bottlenecks similar to those typically faced by energy-intensive industries such as steel or cement. Power infrastructure does not scale up at the same speed as power demand for AI or Bitcoin mining.
Growing coupling of AI and Bitcoin mining
The increasing capital and energy intensity of the tech sector is narrowing the gap between traditional tech activities and Bitcoin data centers, which historically have been, and remain, both capital- and energy-intensive. From an infrastructural point of view, this potential convergence is starting to be seen in the cross-investments that Bitcoin mining and AI businesses are making in each other.
For example, Bitcoin miner Core Scientific recently announced a 12-year, $3.5-billion, 200-megawatt (MW) deal with cloud provider CoreWeave to supply infrastructure for machine learning and other AI-related activities. Coatue Management, which is also a shareholder in CoreWeave, has invested $150 million in cryptocurrency miner Hut 8 to build AI-related infrastructure.
Bitcoin miner TeraWulf is pursuing large-scale AI ventures, including a 2 MW AI/High Performance Computing (HPC) infrastructure pilot project and a 20 MW colocation AI/Bitcoin mining pilot project.
Bitcoin miner IREN has also diversified into AI cloud services with 816 NVIDIA H100 GPUs, plus an additional 504 GPUs contracted with Poolside AI, while also serving other customers in the reserved and on-demand market. The company reported a 21% revenue increase in its June 2024 investor update, reflecting higher utilization of its GPU fleet.
A Morgan Stanley Research note titled “Powering GenAI: Assessing the Crypto-to-Data Center Conversion Opportunity” argues that “given the focus on de-bottlenecking the power grid to enable data center growth, we assess the conversion of crypto mining sites and find significant potential benefits.”
Demand response from data centers’ standpoint
“Demand response” refers to an adjustment in power demand by large users who are paid to reduce their energy consumption during times of grid stress to help maintain grid balance. These programs have existed for decades, supporting grid reliability and reducing the need for “peaker plants” — power plants that are often fossil-fuel intensive and are used during peak demand periods.
From an energy supply perspective, a large demand response capability is equivalent to suddenly adding significant generation capacity to the power grid, enhancing its resilience and flexibility.
“Demand-side flexibility is important for both avoiding additional emissions and maintaining grid reliability, especially in light of the expected rapid rise in demand from data centers over the next several years. Bitcoin mining is not only capable of providing this flexibility, but is actually doing so,” according to a recent Bitcoin Policy Institute (BPI) report.
AI data centers can serve a similar, though more nuanced, demand response purpose. This nuance arises from both economic and operational factors.
Economically, “profit margins for AI compute, at least presently, are much higher than profit margins for Bitcoin mining. While the newest Bitcoin mining machines generate only $0.17-$0.20 per kilowatt-hour (kWh) of revenue, Nvidia GPUs generate $3-$5 per kWh, a 17-to-25-fold difference,” the BPI report states. This suggests that AI companies have less economic incentive to power down their machines for demand response compensation than Bitcoin miners do.
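The effect of this margin gap on the curtailment decision can be made concrete with a minimal sketch. The revenue figures are the per-kWh numbers quoted from the BPI report; the demand response payment is a hypothetical illustration, not a market quote:

```python
# Minimal sketch of the curtailment decision: power down when the demand
# response payment exceeds the compute revenue forgone. Revenue figures are
# the per-kWh numbers quoted from the BPI report; the DR payment is a
# hypothetical illustration, not a market quote.

def should_curtail(revenue_per_kwh: float, dr_payment_per_kwh: float) -> bool:
    """Curtail when the payment beats the opportunity cost of lost revenue."""
    return dr_payment_per_kwh > revenue_per_kwh

dr_payment = 0.50   # $/kWh, hypothetical compensation during a grid event

for rig, revenue in [("Bitcoin ASIC", 0.20), ("Nvidia GPU (AI)", 4.00)]:
    verdict = "curtail" if should_curtail(revenue, dr_payment) else "keep running"
    print(f"{rig}: ${revenue:.2f}/kWh revenue vs ${dr_payment:.2f}/kWh payment -> {verdict}")
```

At any plausible compensation level, the miner powers down long before the AI operator would.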
Another economic difference between Bitcoin mining and AI concerns the cost-effectiveness of computational power. “Developing 1 MW of installed power for mining costs much less in terms of capital than developing 1 MW for AI,” said Francesca Failoni, co-founder of the Italy-based miner Alps Blockchain, in a recent podcast. Consequently, AI data centers tend to contract for less energy, because developing additional capacity costs hundreds of millions of dollars, unlike in Bitcoin mining. This is why mining companies can secure large energy contracts, ranging from 100 to 200 MW and potentially scaling even higher to the gigawatt (GW) level in the not-so-distant future. Hence Morgan Stanley’s suggestion that miners’ already installed and interconnected electric capacity could be acquired by AI companies to power their activities.
From an operational standpoint, computational environments vary. Besides the technical need to cool its hardware, Bitcoin mining has a single, focused workload: finding and adding a new valid block to the Bitcoin blockchain approximately every 10 minutes. AI, however, handles various workloads and users, requiring either online, individual, real-time computation and outputs or delay-tolerant batch computation that can be processed offline and postponed according to different deadlines, as determined by the AI company.
Delay-tolerant batch computation is typically used for training AI models, such as Large Language Models (LLMs) and Large Vision Models (LVMs). This process involves feeding large amounts of data into a machine-learning system, where algorithms sort, cluster, and analyze the data over days or weeks to develop the capability to generate complex text or create realistic images based on simple prompts from the end user. The user’s request for a refined output is called an “inference,” which requires online, interactive, real-time computation each time a person makes an individual request.
There are also differences in the racking and power distribution units used by Bitcoin mining and AI, according to IREN’s co-founder and co-CEO Daniel Roberts. “About 80% of global data centers operate with rack densities of 5kW or less (Uptime Institute Report). NVIDIA H100 GPU reference architecture [for AI] is for 40–45kW rack density. IREN operates >70kW rack densities,” he noted in a recent post on X.
“The biggest challenge for traditional data centers is managing these workloads, particularly cooling and density (i.e., a data center is essentially a building with thousands of heaters running 24/7),” he added.
“Currently, both training and inference primarily use GPUs, which have extremely high power density. By density, we mean that traditional data centers have an average density of 5–10 kilowatts per rack, whereas AI servers now require 60 or more kilowatts per rack,” according to the BPI paper.
While Bitcoin mining’s singular operation can be powered up or down at will without disrupting its medium-to-long-term goals, the key challenge for AI activities is efficiently and fairly allocating power curtailment across different workloads.
“No single class of workloads can adjust enough power to align datacenter demand with fluctuations in energy supply and carbon intensity. When most of the datacenter’s power is attributed to online workloads, modulating only delay-tolerant, batch workloads would be insufficient for sustainability and would incur prohibitive performance losses,” according to a 2023 paper by researchers at Meta, the University of Pennsylvania, and Harvard University.
For instance, 30–40% of Google’s workloads are delay-tolerant with a 24-hour deadline, and 20–30% of Meta’s workloads are delay-tolerant with varying deadlines. Although 70% of Microsoft’s Azure workloads are labeled delay-tolerant, the degree of tolerance is unspecified, according to the study, published on Cornell University’s arXiv research-sharing platform.
“Without a thorough analysis of performance implications and a commitment to fairly distributing demand response among diverse workloads, realizing demand response in hyperscale data centers remains a lofty ambition,” warn the study’s authors from Meta, the University of Pennsylvania, and Harvard.
Since demand response by AI data centers is more complex than by Bitcoin miners, researchers and operators are dedicating considerable time to studying and fine-tuning the interplay between AI and demand response. My aim here is not to explore all possible ways to efficiently allocate demand response services to different AI workloads. The key point is that there are approaches and models AI companies can adopt to optimize power usage and curtailments across various AI activities, as the cited paper illustrates.
The rationale for AI companies participating in demand response services is also strengthened when considering not only their economic incentives or disincentives but also climate-related factors, such as their own carbon-reduction targets, government-mandated goals, or carbon taxes, all of which can carry significant economic weight.
Bitcoin’s demand response capabilities, though relatively new, have likely been explored and tested more thoroughly than AI’s. However, it’s important to note that demand response by AI data centers is not just a theoretical concept studied by academic researchers without practical application.
Less than a year ago, Google announced that they “developed and piloted a new way to reduce our data centers’ electricity consumption when there is high stress on the local power grid, by shifting some non-urgent compute tasks to other times and locations, without impacting the Google services you use every day.”
In response to energy shortages and grid reliability challenges, Google implemented various measures across its data centers in Europe, Asia, and the U.S. during 2022 and 2023. In Europe, amid natural gas shortages and high energy prices, the company scheduled daily power reductions at data centers in several countries to support electricity demand reduction and energy security efforts. In Taiwan, the company participated in a grid reliability program by reducing power usage at its data centers during peak summer periods, helping to manage the island’s constrained grid. In the U.S., similar power consumption reductions were made during extreme weather events to assist local grids in maintaining reliability and managing increased power demand.
Addressing the economic incentives for demand response, Google stated that “realizing the full potential of demand response as a reliable, cost-effective, and clean resource for the grid will require better frameworks to incentivize large energy users to operate more flexibly. We look forward to working with other large energy users, our grid partners, and policymakers to incorporate greater demand-side flexibility that will help grids become more efficient, cost-effective, and clean.”
Demand response from grid operators’ standpoint
The addition of intermittent renewable energy sources to the power generation mix and the increased intensity and frequency of extreme weather events have raised serious concerns about grid reliability. As a result, demand response has had to evolve to enable a more dynamic and responsive grid that can meet the challenges of this new energy and climate landscape.
The experiences of independent system operators (ISOs) or regional transmission organizations (RTOs) like ERCOT in Texas demonstrate that Bitcoin mining can play a significant role in demand response initiatives managed by grid operators. When a grid emergency occurs — typically due to high demand, extreme weather, or transmission issues — ERCOT dispatches demand response partners to reduce energy usage within a short window, usually 10 or 30 minutes.
In this evolving context, traditional demand response methods are being replaced by more sophisticated, flexible, decentralized, and automated approaches:
- Manual curtailment of large industrial loads has transitioned to automated and remote curtailment of both small and large loads.
- Long response lead times, which used to range from several hours to a day, have been replaced by systems that can respond in real-time or within minutes.
- Capacity-only programs focused on grid-wide constraints have evolved into multiple program types. These address both system-wide and localized issues, offering different compensations and on-bill savings for peak avoidance.
- Single value streams, where each asset contributed to only one specific aspect of grid stability, have shifted to flexible value streams. Now, any asset can participate in multiple markets and contribute to various value streams.
Demand response programs offered by grid operators, in which companies can participate, can be categorized as follows:
- Economic Programs: These respond to price spikes, paying companies to reduce load during high-price periods, thus helping to stabilize energy prices.
- Capacity Programs: Designed to alleviate grid-wide constraints, these programs have longer notification windows and fewer events.
- Ancillary Programs: These provide services like frequency regulation with very short response times. They offer higher payments but require greater operational flexibility.
- Utility Programs: Managed by individual utilities to address localized needs, often in partnership with demand response companies.
- Peak Shaving and Price Avoidance: Aimed at reducing energy usage during high-price periods and peak demand times to minimize electricity bills.
In summary, today’s grid operators need a variety of flexible resources that can be called upon to support both widespread and localized issues. They must plan not only for anticipated capacity issues but also be able to address sudden drops in supply or spikes in demand quickly. Additionally, they need to maintain a constant grid frequency and respond to deviations in real-time to prevent equipment damage. These measures help keep the lights on and prevent blackouts and brownouts while representing multiple value streams for participating companies.
Participation in demand response services can now be tailored more flexibly than in the past to cover whatever percentage of operations each company can sustain or feels comfortable with. It could be as limited as a bikini or as comprehensive as a full wetsuit.
Different types of demand response programs increase the likelihood that not only Bitcoin mining, but also more variable AI activities can find the right fit for their diverse operating profiles and company needs. It is now common practice for grid operators to work with each demand response participant to identify their specific energy reduction potential in each market, creating tailored strategies for different participants.
In ERCOT’s case, for instance, demand response partners specify the hours they are available to participate by bidding into ERCOT’s day-ahead energy market. When a grid imbalance occurs, participants reduce energy consumption automatically via an on-site relay, installed and maintained by the local Distribution System Operator (DSO). The mutual goal is to maximize value while minimizing the impact on participants’ operations. Data analysis and optimization algorithms are employed to provide a clear picture of demand response participation and management from the perspectives of both grid operators and participating companies.
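To make the mechanics concrete, here is a stylized Python sketch of that flow: a participant bids its available hours into the day-ahead market, and an on-site relay sheds load when the operator signals an event. The participants, numbers, and interface are invented for illustration; this is not ERCOT’s actual market or telemetry interface:

```python
# Stylized sketch of the dispatch flow described above. All names and
# numbers are illustrative, not ERCOT's actual market or telemetry interface.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    load_mw: float
    curtailable_mw: float
    available_hours: set  # hours of the day bid into the day-ahead market

    def metered_load(self, hour: int, grid_event: bool) -> float:
        """Load drawn when the on-site relay receives (or not) an event signal."""
        if grid_event and hour in self.available_hours:
            return self.load_mw - self.curtailable_mw
        return self.load_mw

miner = Participant("BTC miner", load_mw=150.0, curtailable_mw=140.0,
                    available_hours=set(range(24)))      # flexible around the clock
ai_dc = Participant("AI data center", load_mw=150.0, curtailable_mw=7.5,
                    available_hours=set(range(12, 20)))  # ~5% of load, afternoon only

for p in (miner, ai_dc):
    print(f"{p.name}: {p.metered_load(hour=17, grid_event=True):.1f} MW during a 5 p.m. event")
```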
As long as each detailed energy reduction plan is well-calibrated to meet the needs of both grid operators and demand response service providers, it doesn’t make a significant difference whether a company is only able to curtail 2%, 20%, or 90% of its consumption for 1/8, 1/4, or 3/4 of the time.
How much can AI data centers “demand respond”?
Let’s consider AI’s main division of computational labor between model training and inference requests by end users. Contrary to what one might expect based on the different time sensitivities of delay-tolerant and interactive workloads, real-time inference workloads may offer just as much demand response potential as delay-tolerant training workloads, if not more.
The two main AI activities have different sources of “legroom” or “headroom” for their potential contribution to demand response. Model training’s flexibility is predictably derived from an excess of time available to complete a task, while inference’s flexibility comes from an operational surplus of power that typically characterizes these tasks and could potentially be freed up for short periods.
This second source of potential flexibility, related to inference, is connected to power “oversubscription” by data centers. In this context, oversubscription means that the total rated (peak) power of the installed IT equipment exceeds the power the facility’s infrastructure can actually deliver, or the contractual power limit agreed upon with the utility provider.
Power oversubscription is possible due to the statistically low likelihood of simultaneous peak power operations across multiple servers. For instance, researchers found that over six months, a group of 5,000 servers at Google never exceeded 72% of their aggregate peak power, according to data reported by Future Generation Computer Systems. In other words, data center operators often install more servers than the power infrastructure can support at full load, assuming that not all equipment will run at maximum capacity simultaneously. This strategy allows them to maximize the use of available space and resources while potentially reducing costs.
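The arithmetic behind this strategy is straightforward: if aggregate load never exceeds 72% of the fleet’s summed peak, as in the Google data above, an operator can in principle host roughly 1/0.72, or about 39%, more nameplate IT capacity than the facility can deliver, minus whatever safety buffer it keeps. A minimal sketch, with an assumed buffer:

```python
# How much extra IT equipment does a 72% observed peak allow? Based on the
# Google figure reported by Future Generation Computer Systems; the safety
# buffer is an assumption of mine, not from the study.

facility_capacity_mw = 100.0     # illustrative provisioned power
observed_peak_fraction = 0.72    # fleet never exceeded 72% of summed nameplate peak
safety_buffer = 0.05             # hypothetical margin an operator might keep

nameplate_mw = facility_capacity_mw / (observed_peak_fraction + safety_buffer)
print(f"Nameplate IT power hosted under {facility_capacity_mw:.0f} MW: "
      f"{nameplate_mw:.0f} MW (+{nameplate_mw / facility_capacity_mw - 1:.0%})")
```

With a 5-point buffer, the result lands at roughly 30% oversubscription, in line with the figure the study cited below proposes.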
On the other hand, “LLM training clusters incur massive and coordinated power peaks due to large-scale synchronous training jobs. Hence, they significantly strain the data center power delivery infrastructure and offer very little headroom (about 3%) for power oversubscription. In contrast, despite high peak power utilization at the server level, LLM inference clusters offer substantial power headroom (about 21%) at the cluster level, making them excellent candidates for power oversubscription,” according to a recent study on power management opportunities for large language models in the cloud by the Association for Computing Machinery.
Training has a higher peak and average power draw compared to inference. Training also experiences large swings in power consumption over short periods, up to 37.5% of the provisioned power capacity within 2 seconds, whereas inference only incurs changes of up to 9%. Furthermore, inference power consumption shows a daytime pattern since it is an interactive workload; however, over the course of a few seconds, its power usage remains relatively stable compared to training, as noted in the study.
“These differences imply that training tends to put much higher strain on the cluster power delivery infrastructure compared to inference,” the study authors say.
It’s worth noting that this study focuses on power management optimization not in the context of demand response, but for greater operational efficiency in AI data centers. It thus proposes an oversubscription approach that would allow for “deploying 30% more servers in existing clusters with minimal performance loss.” For my own purposes, it seems plausible to argue that the extra power headroom associated with inference workloads could serve as a proxy measure of operational flexibility and be included in the potential demand response power budget of AI data centers.
Industry representatives in the AI sector were not immediately available for comment on how they are planning or structuring their demand response services. As an alternative to firsthand information, we can use the evidence observed by the authors of the Association for Computing Machinery paper and the breakdown between delay-tolerant and real-time computing mentioned for Meta and Google in the arXiv study as a basis for making an educated guess about what demand response for AI data centers might look like.
If 30–40% of Google’s workloads are delay-tolerant with a 24-hour deadline and 20–30% of Meta’s are delay-tolerant with varying deadlines, it can be inferred that 60–70% of Google’s workloads are real-time, interactive inferences, and that 70–80% of Meta’s workloads are also inference-based. If the power oversubscription headroom afforded by inference workloads is around 20%, as indicated by the Association for Computing Machinery study (21%) and broadly consistent with the Future Generation Computer Systems data, then an inference-related headroom equal to 12–14% of Google’s total computational load could at least in part be dedicated to demand response services over a 24-hour period. In Meta’s case, the equivalent figure would be 14–16%.
If one considers that about 3 percentage points of that headroom could be claimed by the more power-constrained batch computing, the oversubscription possibly available for demand response would be anywhere between 9 and 13% of data centers’ total computational power. An overall headroom of about 10% of power oversubscription by AI workloads being available for demand response could be a realistic starting point.
We then have to consider other sources of power consumption besides computation. Computational workloads represent about 40% of the total electricity demand of a data center, with cooling requirements to maintain stable processing efficiency making up roughly another 40%. The remaining 20% of consumption comes from other associated IT equipment, according to the IEA.
Assuming that cooling and secondary IT equipment consumption cannot be reduced during demand response events, leaving 60% of total consumption unchanged, the potential reduction of computational workloads for demand response would account for 3.6–5.2% of the total electric consumption of an AI data center. If we assume that further efficiencies can be found to reduce the consumption of cooling and secondary IT equipment during demand response events, we could take the higher end of the range and posit that about 5% of the total electric consumption of AI data centers could be made available for demand response.
This back-of-the-napkin calculation applies to capacity demand response programs, which make unused raw power available to the grid. A second angle follows from the oversubscription itself. Assume a 120% oversubscription of computational workloads, a 40% share of consumption dedicated to computation, and, for simplicity, consider only the inference side of AI. As inference workloads approach roughly 83% of the servers’ maximum energy consumption, they also approach the full compute budget within the contracted capacity (83% of 120% ≈ 99.6%), which corresponds to the 40% of total contractual power reserved for computation. Operating near that ceiling, AI companies could also provide other demand response programs, such as reducing loads during high prices, providing frequency regulation services, or peak shaving during emergency events.
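For clarity, the whole chain of assumptions can be written out in a few lines of Python. The inputs are the estimates already cited (the arXiv workload splits, the roughly 20% inference headroom, a 3-point claim by training, and the IEA 40/40/20 breakdown); the output is the same order-of-magnitude guess, not a measurement:

```python
# Writing out the back-of-the-napkin chain above. Inputs are the cited
# estimates: the arXiv workload splits, ~20% inference oversubscription
# headroom (ACM), ~3 points claimed by training, and the IEA 40/40/20
# consumption breakdown. The output is an order-of-magnitude guess.

def dr_share_of_total_load(inference_share: float,
                           headroom: float = 0.20,
                           training_claim: float = 0.03,
                           compute_share: float = 0.40) -> float:
    """Fraction of a facility's total draw plausibly available for DR."""
    compute_headroom = inference_share * headroom - training_claim
    return compute_headroom * compute_share

low, high = dr_share_of_total_load(0.60), dr_share_of_total_load(0.80)
print(f"DR share of total facility load: {low:.1%} to {high:.1%}")  # 3.6% to 5.2%

# At current facility scales, ~5% is already tens of megawatts:
for facility_mw in (200, 1_000):
    print(f"{facility_mw} MW facility -> ~{0.05 * facility_mw:.0f} MW of flexibility")
```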
Even if model training is more power-constrained than interactive inference workloads, the time flexibility that batch computing can afford is also a potential source of demand response services. A number of complex power-control nuances must be considered in all these areas, but none should be impossible to address and resolve.
As new data centers operate in the low hundreds of MW range, and people like Mark Zuckerberg can already envision gigawatt-level facilities, a 5% contribution of data centers’ overall capacity to demand response services could add dozens of MW of power flexibility and energy resilience to the electric grid. If the oversubscription headroom turned out to be closer to 30% than 20% of the nameplate capacity, AI’s demand response capabilities could increase further.
Currently, profit margins for AI activities are much higher than those for Bitcoin mining, so demand response services need to become economically more appealing for AI companies through a combination of favorable market design and regulatory tools.
As distributed energy resources, both large and small, increasingly become essential components of the electric grid, their strategic infrastructural role should be more directly recognized in relation to their contribution to power generation and grid flexibility. In terms of market design and regulatory tools, new energy-intensive and capital-intensive industries like Bitcoin mining and AI could be more deeply involved through a mix of better incentives and mandated participation in demand response services.
What could it mean for this new hybrid sector?
I have written about the potential integration of the electricity system and Bitcoin mining on other occasions in the past. Utilities, renewable energy project developers and grid operators have a real incentive to mesh the electricity and Bitcoin sectors into an Internet of networks. Now, AI workloads are also blending into both Bitcoin and renewable energy infrastructure and operations, raising the possibility that three separate industries will slowly coalesce into one new hybrid sector.
This blending may be actual, with Bitcoin and AI activities (and power operators) providing coordinated energy services under the same physical roof, or virtual, with operators running separate Bitcoin, AI and energy activities in different sites, providing coordinated demand response and other services with the same logic as a virtual power plant.
Looking at business models, operations and price formation, the marginal costs of Bitcoin mining and AI activities do not tend to zero, as renewables’ do.
If miners want to increase their hash rate, they can do so only by adding mining equipment and increasing power consumption; or, if their existing fleet is not already running at capacity, they can crank it up and simply pay for more electricity. AI companies will face similar constraints as they become more capital- and energy-intensive, even if their profit margins are now higher than Bitcoin miners’. One way or another, both their marginal costs trend higher in the short term if they want to produce more. On the other hand, computing hardware will become more energy efficient over time, so that each unit of output will require less energy to produce. Electricity itself can become cheaper if operators are able to tap into stranded energy, use behind-the-meter sources of electricity at power generation sites, or apply AI-based capabilities to make energy generation and consumption more efficient.
The end result of these dynamics is that, in the long term, the price of a unit of hash computation for Bitcoin, or of a token for AI training and inference, will probably level off around its cost of “production,” just like traditional commodities. As with minerals and agricultural commodities, this nascent sector will develop markets for the wholesale trading of hashing and AI token commodities. This is already happening for Bitcoin hashing, with instruments akin to traditional futures contracts and the like. AI operators, to the best of my knowledge, still don’t have the option to buy and sell tokens or token-services for AI modeling or inference processes in a regulated or an over-the-counter market. But the AI segment already features “wholesalers” that offer cloud services for other companies. It might be just a matter of time before a market develops for trading AI computational services.
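To illustrate the commodity logic, here is a stylized long-run unit-cost model for hashing; the same structure would apply to the cost per AI token (energy per token times electricity price, plus amortized hardware). All numbers are hypothetical, not quotes from any operator or vendor:

```python
# Stylized long-run unit cost for hashing as a commodity; in the long run the
# market price would tend toward this "cost of production." All numbers are
# hypothetical, not quotes from any operator or hardware vendor.

def cost_per_petahash(power_kw: float, hashrate_ths: float,
                      electricity_usd_per_kwh: float,
                      capex_usd: float, lifetime_hours: float) -> float:
    """USD cost of producing one petahash of computation."""
    ph_per_hour = hashrate_ths * 3_600 / 1_000   # TH/s -> petahashes per hour
    hourly_cost = power_kw * electricity_usd_per_kwh + capex_usd / lifetime_hours
    return hourly_cost / ph_per_hour

# Hypothetical rig: 3.5 kW, 200 TH/s, $0.05/kWh, $4,000 amortized over ~4 years.
print(f"${cost_per_petahash(3.5, 200, 0.05, 4_000, 35_000):.6f} per PH")
```

Cheaper electricity or more efficient hardware lowers this floor; competition then pushes the market price toward it.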
A hybrid Bitcoin-AI-energy sector should prove to be more flexible, resilient, profitable and stronger than the three of them as separate entities. Diversification and vertical integration are key elements. When Bitcoin mining goes through less profitable phases, AI activities can supplement revenues of integrated operators, and vice versa. Both Bitcoin miners and AI activities can help energy players and grid operators develop new power infrastructure and run it smoothly once it’s built and connected. AI will inform all activities not just on the demand side as a workload, but as a productivity and optimization tool on the supply side of any service or product on offer.
The different pricing structures of the three separate sectors could remain distinct within the various functions of this new hybrid sector, providing a diversified and potentially more resilient business model. Alternatively, these different pricing structures could converge toward a common standard, primarily based on a pay-for-service model. This latter scenario would avoid the pitfalls of ad-based “free” services from the social media era.
All of this will not occur in a regulatory vacuum. On the contrary, regulations, market designs, and various forms of state subsidies — or the absence thereof — will play a significant role in shaping this new sector. Its hybrid nature will attract regulatory attention from energy, financial, and tech watchdogs, which could lead to a gridlock of conflicting laws and rules. If regulators want to avoid this, they should develop new, multi-layered capabilities and adopt a multidisciplinary approach to their work. This, of course, assumes good faith, an evidence-based approach to regulations, and as neutral a stance as one can expect from public institutions that are increasingly partisan and politically driven. On the other hand, integrating Bitcoin mining, AI activities, and energy services into a more unified infrastructure could make it more challenging and costly — both economically and politically — for regulators to intervene with a heavy-handed, ideological approach.
A two-dimensional hybridization
As the technology sector becomes increasingly energy- and capital-intensive, with ever-larger data centers owned by major companies, AI is simultaneously expanding tech’s capabilities in the opposite direction. AI can further decentralize operations by enabling individuals to use AI agents to carry out tasks and plans that, until recently, would have required multiple company staff members.
“Solopreneurs are now a thing. A relatively smart individual with no programming experience can now build things on their own,” investor Nic Carter recently commented in a long post on X, discussing the financial performance of AI chip manufacturer Nvidia.
An intermediate step in this shift from large to small could be facilitated by edge data centers — smaller, widely distributed facilities that bring computing closer to local communities. This approach can more easily bypass the interconnection bottlenecks that mega-data centers create for the grid, particularly in dense urban areas. While this would alleviate some infrastructure constraints, it would not fundamentally change the dominance of large corporations in the sector.
The trend of simultaneously scaling up to large data centers while also moving toward distributed capabilities is not new. We have seen this pattern before, with the transition from mainframe computers to personal computers and then, in the opposite direction, with the growth of connected server farms. This simultaneous scaling up and down mirrors developments in the energy sector. Historically, a few large plants were solely responsible for electricity production, but they are now being complemented by millions of small domestic, residential, industrial, and commercial solar energy systems. Conversely, the same solar technology is being scaled up to gigawatt-level projects — a trend that is also being replicated by battery storage solutions, both at large and small scales.
This indicates that hybridization is occurring not just across industries but also along dimensional scales. Different dimensions do not blend in the same way that sectors do. Mixing ingredients like eggs, oil, and lemons — representing different “sectors” — can create a new product like mayonnaise, a hybrid “sector.” However, combining large-scale tuna fishing with shore-based sea bass angling does not yield a qualitatively different “sector.” Instead, it provides opportunities for both large and small players to effectively utilize a sector’s technologies and benefit from its opportunities.
This kind of dimensional hybridization tends to blur the traditional boundaries between public and private realms, or at least sets the stage for such a blurring. Traditionally, especially in Europe, large, expensive initiatives with significant impacts on basic needs — such as healthcare, education, infrastructure, energy, transportation, water, defense, law enforcement, judicial systems, etc. — have been more public than private, and more non-profit than for-profit, and this continues to be largely the case. Meanwhile, the production of more personalized goods and services — like clothing, books, music, hairstyling, dining, electronics, jewelry, sports, etc. — has traditionally been and continues to be more private and profit-driven than public and non-profit. While there are always meaningful exceptions to this rule, the separation between the public sector providing non-profit essential services and the private sector providing profitable non-essential services remains relevant.
The new hybrid sector formed by the convergence of renewables, Bitcoin mining, and AI-based capabilities for handling vast data points in complex systems is helping to blur the lines between public and private, and between profit and non-profit, in the areas of finance, energy, and governance. Traditional public sectors that over time have been privatized, or where public and private enterprises coexist, tend to be heavily regulated, like the energy sector or education. Hybridization across scales may lead to a further blending of public and private functions. In these and other foundational areas, individuals now have more powerful tools than ever before to act independently or in federated ways, less reliant on the state or other centralized entities — akin to a supercritical fluid that embodies different dimensions of the social construct and its underlying physical infrastructure.
Practically speaking, this means that more segments of the supply chain for public services could be offered privately, and more segments of traditionally private services could enter the public realm, including voluntary, non-profit activities and the commons — a broad set of natural and cultural resources shared by many.
This does not imply that the state and other centralized corporate or public entities will suddenly be replaced by solopreneurs, self-sovereign individuals, virtual communes, federated city-states, or any other new forms of organization. However, this evolving scenario challenges one of the fundamental principles that have shaped my worldview: Jane Jacobs’ “Systems of Survival.”
A new system of survival?
“Systems of Survival” is an eminently readable and clear book written by the late urbanist and activist Jane Jacobs in 1992 as a Platonic dialogue, to highlight the distinct but equally necessary moral foundations of commerce and politics, as its subtitle suggests.
In just over 200 pages, Jacobs argues that the essence of social life can be distilled into two sets of underlying rules that govern two separate realms — commerce and politics — providing the framework for our actions. Both realms, with their respective sets of rules, contribute equally to defining us as a species, forming the foundation for our survival and remarkable development on Earth.
I will not delve into the specifics of Jacobs’ Systems of Survival and the two moral “syndromes” at its core, as her book is still well worth reading to fully appreciate her insights. What I want to emphasize here is a basic consequence she derives from these two distinct sets of guiding principles: long-term prosperity and social harmony flourish when each realm operates by its own rules, while conflict and discord arise when we attempt to apply the rules of commerce to the political realm or political rules to the commercial realm. According to Jacobs, moral codes that are effective and necessary in one realm do not work, are counterproductive, or even destructive when applied to the other.
This functional distinction of moral codes has long provided a useful framework for understanding what works and what does not in the world. However, the potential emergence of a new hybrid realm — one that crosses industry sectors and dimensional scales — suggests that the explanatory power of Jane Jacobs’ model might start to diminish.
The validity of her logic (like that of any heuristic) is largely grounded in the types of physical, natural, social, cultural, economic, financial, and energy “infrastructure” that underpin the realities of individuals and larger groupings. During the second half of the last century and into the first decade of the current one, these infrastructures provided a relatively coherent, resilient, and stable base, at least for the more developed parts of the world. Although the latest iterations of these infrastructures still exist and continue to provide the basic bricks of our lives, cracks have been forming along all the edifices of society. Social norms, trade relationships, and political arrangements are all ultimately based on trust and mutual acceptance of shared rules. These norms, relationships, and arrangements behave somewhat like tectonic plates — slow-moving in terms of human time, but always in motion along or against one another. It seems that the pressure is building, and temperatures are rising, making it possible that the fault lines separating these infrastructures will shift, breaking and reconfiguring the tectonic plates into a new arrangement. The distinctions between public and private, for-profit and not-for-profit, may start to fade, giving way to a new moral blend, achieving a different balance, and forming new hybrid patterns within newly defined realms. In this context, other traditional distinctions, such as labor and capital, may also blur.
For example:
- One might assume that the advent of AI will favor capital over labor, as AI could render many jobs redundant, concentrating wealth and power in the hands of a few AI super-companies. However, AI cannot function without data — trillions of fresh data points originating from countless individual interactions across various sectors and spheres. Most of these data points begin and end with individuals, who could claim rightful ownership of their data as raw material or even semi-finished products. Many people may lose their jobs in the short term, before new types of jobs are created within the AI economy, but to the extent that individuals can own and control their data, they could become partial owners of AI’s most valuable form of capital, while continuing to be “laborers,” employees, entrepreneurs, or recipients of some form of universal basic income (UBI), if AI’s impact on the economy and social fabric proves to be extreme. Some might argue that workers are already minor shareholders of capital through whatever corporate stocks they hold in their pension funds or as retail investors. But in this case, exposure to a sector’s profits and capital is indirect, stemming from savings as an outsider rather than from direct participation in companies’ operations along the supply chain of an industry. As originators and owners of data, individuals and organizations would become insiders in the AI industry, active participants on both the supply and demand sides. In a new hybrid reality, where individuals would take on more entrepreneurial risks and rewards tied to data ownership and management, a fundamental “capitalist-labourer” overlap may become less of an oxymoron than today. I wouldn’t be overly surprised if the potential for UBI were somehow tied to people’s control over and origination of data.
- Another potential twist in the traditional division between for-profit and not-for-profit activities could emerge in the context of Bitcoin mining. The Bitcoin subsidies that miners receive — newly “minted” bitcoin rewards for adding a valid block to the blockchain — are set to halve every four years, until they cease entirely around 2140 (the schedule itself is sketched in code after this list). As these rewards diminish, two main scenarios could unfold:
- Large, corporate Bitcoin miners survive or even thrive. Transaction fees could compensate for decreasing subsidies in a hyperbitcoinized economy. If this happens, assuming a fixed block size, fees paid to include a transaction in a block would become increasingly unaffordable for the vast majority of people. The Bitcoin blockchain could then resemble a decentralized version of the Fedwire system, with most transactions conducted and recorded on secondary networks. This would necessitate countless nodes to facilitate transactions on these secondary layers, each generating a small fee. While this could be considered a business, as more nodes spread and become integrated with or connected to devices like solar inverters, batteries, vehicles, heaters, etc. their revenue-generating role may be overshadowed by their essential function in supporting the strategic money-energy infrastructure within a new internet of networks. The value of nodes would lie more in preserving and protecting the individual capital and social value embodied in Bitcoin than in the operational revenue from each layer-2 transaction. In this sense, much of the transaction validation and routing could become akin to ecosystem services, or natural capital, providing for human well-being and quality of life, similar to how nature offers food, natural pollination, clean air and water, waste decomposition, and flood control. Thus, a significant portion of transaction validation and routing might effectively become a not-for-profit, “natural” ecosystem activity.
- Large, corporate Bitcoin miners fail to compensate waning subsidies with increased transaction fees. This could lead to the gradual disappearance of many miners, either because hyperbitcoinization does not occur, leaving space only for the largest and most efficient operators, or because bitcoinization does not go far enough, again allowing only the most efficient miners to survive. In this scenario, the security of the Bitcoin network could be increasingly compromised by the dwindling number of large miners and the growing centralization of block creation under a few operators. While this outcome cannot be ruled out, it would likely encourage a not-for-profit approach. If most large miners or even all of them were to become unprofitable and progressively disappear, it is not inconceivable that a multitude of small, retail, domestic miners, built into or connected with increasingly electrified, digitally connected, AI-enabled devices, would step in to replace the role of large miners in order to protect the primary blockchain and preserve the value of its main Bitcoin asset across different network layers. Once again, small nodes would provide services more akin to ecosystem services than for-profit activities.
- A final example of the potential blurring of traditional lines can be seen in the social media sphere, where the evolution is moving from centralized, for-profit platforms like X (formerly Twitter), LinkedIn, or Instagram to decentralized, not-for-profit, commons-style services such as those provided by Nostr. Nostr’s underlying philosophy is to empower individuals to own, control, and carry their digital identity, content, and followers from one social media platform to another. “Nostr is a public domain communication protocol that stands for ‘Notes and Other Stuff Transmitted by Relays,’ and it’s emerging as a credible way to add that kind of interoperability between ecosystems,” investor and strategist Lyn Alden said in a recent article. “In short, it’s an open-source protocol that enables decentralized social media, but also a lot more. It’s a simple set of foundational building blocks that, if widely adopted, could gradually reshape ‘the Web’ as we know it. Instead of a separate set of siloed social ecosystems, we could gravitate toward a more interoperable set of ecosystems, with more of the power dispersed to content creators and their audience, and away from middlemen corporations.”
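For reference, the subsidy schedule driving the two Bitcoin mining scenarios above can be sketched in a few lines, mirroring in spirit how Bitcoin’s consensus code computes it: the subsidy is an integer number of satoshis, halved by a bit-shift every 210,000-block epoch (the calendar mapping below is approximate):

```python
# Bitcoin's block subsidy schedule: halves every 210,000 blocks (~4 years)
# until it rounds down to zero, around 2140.

INITIAL_SUBSIDY_SATS = 50 * 100_000_000   # 50 BTC, in satoshis
HALVING_INTERVAL = 210_000                # blocks per epoch (~4 years)

def block_subsidy_sats(height: int) -> int:
    """Block subsidy at a given height, mirroring Bitcoin Core's integer halving."""
    epoch = height // HALVING_INTERVAL
    if epoch >= 64:            # subsidy fully shifted out of existence
        return 0
    return INITIAL_SUBSIDY_SATS >> epoch

for epoch in (0, 1, 4, 10, 32, 33):
    year = 2009 + 4 * epoch    # rough calendar mapping, one epoch ~ four years
    btc = block_subsidy_sats(epoch * HALVING_INTERVAL) / 1e8
    print(f"epoch {epoch:>2} (~{year}): {btc:.8f} BTC per block")
```

By epoch 33 the integer subsidy rounds to zero, which is where transaction fees must carry the entire security budget.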
These examples illustrate possible shifts from large-to-small, private-to-public, and for-profit to not-for-profit/commons trajectories within the socioeconomic landscape undergoing hybridization. AI applications, the decentralized renewable energy paradigm supported by Bitcoin mining, and AI data centers could also encourage opposite dynamics, from small to large, from public to private, and from unprofitable to profitable. A possible early example of this opposite trajectory may be virtual power plants (VPPs):
- AI-enabled, blockchain-based VPPs can now aggregate and manage a large number of small-scale renewable energy generators, energy storage devices, and flexible loads like Bitcoin mining operations, effectively turning them into one big VPP. This can enable smaller energy producers to fuse into a larger entity, participating in the wholesale electricity market and ancillary power markets much like typical large utilities and operators, as the sketch below illustrates. Decentralized, blockchain-based platforms, like Power Ledger, already allow for peer-to-peer energy trading, enabling small-scale renewable energy producers to sell their excess power directly to consumers or businesses without traditional intermediaries. This could lead small, fragmented, unprofitable, individual entities to merge into large, profitable, decentralized energy markets that compete with traditional utilities at similar scales.
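A minimal sketch of that aggregation logic, assuming invented resource types and sizes; no actual platform API (Power Ledger’s or otherwise) is used here:

```python
# Minimal sketch of VPP-style aggregation: many small resources pooled into
# one dispatchable capacity. Resources and sizes are invented for
# illustration; no real platform or operator interface is used.

from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    flexible_kw: float   # power the resource can shed or inject on request

# Ten thousand small participants, three resource types each (illustrative).
fleet = [
    Resource("rooftop solar + battery", 5.0),
    Resource("EV charger", 7.0),
    Resource("home Bitcoin miner", 3.0),
] * 10_000

vpp_capacity_mw = sum(r.flexible_kw for r in fleet) / 1_000
print(f"Aggregated dispatchable capacity: {vpp_capacity_mw:.0f} MW")   # 150 MW
```

Pooled this way, household-scale flexibility reaches the same order of magnitude as a utility-scale plant, which is what lets the aggregate bid into wholesale and ancillary markets.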
Overall, a new hybrid sector comprising renewable energy, Bitcoin mining, and AI computing may contribute to the development of new types of physical, social, cultural, economic, financial, and energy “infrastructure,” leading to a new hybrid, supercritical configuration of society with newly redefined moral codes at its core — a new system of survival better suited to the increasingly distributed, decentralized, polarized, flexible, and fragmented contours of society and its economy.
This evolving new plumbing of society could help dismantle and rebalance two stereotypes: that of selfish, greedy entrepreneurs being mainly interested in private gains; and that of a vampire-like public sector being mainly interested in its resource-sucking self-preservation. What lies at the core of these stereotypes could be recombined into a new model where both hybridized camps are perceived and act as better stewards of social harmony and economic growth.
The simultaneous productivity gains in some sectors and among some workers, and the rising unemployment in others, could also be so intense as to help reverse the stigma laid upon “welfare bums”, blurring class and work-ethic divisions. We could all find ourselves being made suddenly redundant, while also being new engines of the AI economy through data origination and control.
Would this still fuzzy setup be able to balance all its elements effectively, establish new conditions for a more harmonious society, foster a more prosperous economy, ensure a fairer distribution of wealth, and create a more sustainable relationship with nature? If the erosion of trust, the environment, and the climate continues, our only choice may be not just to rearrange the cards on the table, but to rearrange the table itself, allowing more participants to join and share its offerings. The early signs suggest that reality may be heading toward some kind of hybrid, supercritical future.
Find Lorenzo on Medium