A Technical Overview of Polkadot's JAM Protocol

Advanced · Sep 14, 2024
This article offers a technical analysis of Polkadot’s newly proposed JAM protocol, helping readers understand how JAM introduces a new level of scalability to the Polkadot ecosystem.

The following is a detailed explanation of Polkadot1, Polkadot2, and how they evolved into JAM. This article is aimed at technical readers, especially those who may not be deeply familiar with Polkadot but have some knowledge of blockchain systems and are likely acquainted with technologies from other ecosystems.
I believe this article serves as a good precursor to reading the JAM Graypaper. (For more details, see: https://graypaper.com/)

Background knowledge

This article assumes the reader is familiar with the following concepts:

Introduction: Polkadot1

Let’s first revisit the most innovative features of Polkadot1.

Social Aspects:

Technical Aspects:

Sharded Execution: Key Points

Throughout this article, we are discussing a Layer 1 network that hosts Layer 2 “blockchain” networks, as Polkadot and Ethereum do. The terms Layer 2 and parachain can therefore be used interchangeably.

The core issue of blockchain scalability can be framed as: There is a set of validators that, using the crypto-economics of proof-of-stake, ensures that the execution of certain code is trustworthy. By default, these validators need to re-execute all the work of one another. As long as we enforce that all validators always re-execute everything, the entire system remains non-scalable.

Please note that, as long as the principle of absolute re-execution remains unchanged, increasing the number of validators in this model does not actually improve the system’s throughput.
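This note can be captured in a back-of-envelope model. The sketch below (all numbers and function names are made up for illustration) shows why throughput is flat in the validator count under full re-execution, but grows with it once execution is sharded into fixed-size cores:

```rust
// Back-of-envelope model: under full re-execution, every validator redoes all
// work, so adding validators adds security but no throughput. With execution
// sharding, n validators form n/k cores of size k, and cores work in parallel.

/// Monolithic chain: throughput is bounded by one validator's capacity,
/// regardless of how many validators there are.
fn monolithic_throughput(_n_validators: u64, per_validator_capacity: u64) -> u64 {
    per_validator_capacity
}

/// Sharded chain: each core of `core_size` validators processes its own
/// Layer 2 blocks in parallel with the other cores.
fn sharded_throughput(n_validators: u64, core_size: u64, per_validator_capacity: u64) -> u64 {
    (n_validators / core_size) * per_validator_capacity
}

fn main() {
    // Going from 100 to 1000 validators changes nothing for the monolithic chain...
    assert_eq!(monolithic_throughput(100, 10), monolithic_throughput(1000, 10));
    // ...but multiplies throughput for the sharded one (more cores of 5 validators).
    assert_eq!(sharded_throughput(100, 5, 10), 200);
    assert_eq!(sharded_throughput(1000, 5, 10), 2000);
    println!("sharded throughput scales with validator count; monolithic does not");
}
```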

Consider a monolithic blockchain (as opposed to a sharded one): all network validators process inputs (i.e., blocks) one by one. In such a system, if the Layer 1 wishes to host more Layer 2s, then all validators must re-execute all Layer 2s’ work. Clearly, this method does not scale.

Optimistic rollups address this issue by re-executing only when fraud is claimed (fraud proofs). SNARK-based rollups address it by leveraging the fact that verifying a SNARK proof is significantly cheaper than generating one, so all validators can efficiently verify SNARK proofs. For more details, refer to the “Appendix: Scalability Landscape.”

A straightforward approach to sharding is to divide the validator set into smaller subsets and have each subset re-execute Layer 2 blocks. What is the problem with this approach? We are sharding not only the execution but also the economic security of the network: such a Layer 2 has lower security than the Layer 1, and its security decreases further as we divide the validator set into more shards.
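To see why security degrades, consider the probability that a randomly drawn committee contains a malicious majority. The sketch below computes this hypergeometric tail directly; the validator counts are illustrative, not protocol parameters:

```rust
// Probability that a random committee of `committee` validators, drawn from
// `n` validators of which `malicious` are adversarial, contains a malicious
// majority. Computed as a hypergeometric tail in log-space for stability.

/// ln C(n, k), computed as a running sum to avoid overflow.
fn ln_choose(n: u64, k: u64) -> f64 {
    (0..k).map(|i| ((n - i) as f64).ln() - ((i + 1) as f64).ln()).sum()
}

fn capture_probability(n: u64, malicious: u64, committee: u64) -> f64 {
    let mut p = 0.0;
    // Sum P(exactly m malicious members) for every malicious-majority m.
    for m in (committee / 2 + 1)..=committee.min(malicious) {
        if committee - m > n - malicious { continue; }
        p += (ln_choose(malicious, m) + ln_choose(n - malicious, committee - m)
              - ln_choose(n, committee)).exp();
    }
    p
}

fn main() {
    // 1000 validators, a third of them malicious.
    let small = capture_probability(1000, 333, 20);
    let large = capture_probability(1000, 333, 500);
    println!("committee of 20: {small:.4e}, committee of 500: {large:.4e}");
    // Smaller committees are dramatically easier to capture.
    assert!(small > large);
}
```

With a third of 1000 validators malicious, a committee of 500 is essentially impossible to capture, while a committee of 20 is captured a few percent of the time. ELVES exists precisely to get small-committee throughput without this small-committee risk.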

Unlike optimistic rollups, which avoid the cost of re-execution whenever possible, Polkadot was designed with sharded execution in mind. It allows a subset of validators to re-execute Layer 2 blocks while providing enough cryptoeconomic evidence to the entire network that the Layer 2 block is as secure as if the full validator set had re-executed it. This is achieved through a novel (and recently formalized) mechanism known as ELVES. (For more details, see: https://eprint.iacr.org/2024/961)

In short, ELVES can be seen as a “cynical rollups” mechanism: through several rounds in which validators actively query other validators about whether a given Layer 2 block is valid, the block’s validity is confirmed with high probability. In case of any dispute, the full validator set is quickly involved. Polkadot co-founder Rob Habermeier explained this in detail in an article. (For more details, see: https://polkadot.com/blog/polkadot-v1-0-sharding-and-economic-security#approval-checking-and-finality)

ELVES enable Polkadot to possess both sharded execution and shared security, two properties that were previously thought to be mutually exclusive. This is the primary technical achievement of Polkadot1 in scalability.

Now, let’s move forward with the “Core” analogy. A sharded execution blockchain is much like a CPU: just as a CPU can have multiple cores executing instructions in parallel, Polkadot can process Layer 2 blocks in parallel. This is why Layer 2 on Polkadot is called a parachain, and the environment where smaller validator subsets re-execute a single Layer 2 block is called a “core.” Each core can be abstracted as “a group of cooperating validators.”

You can think of a monolithic blockchain as processing a single block at a time, whereas Polkadot processes both a relay chain block and a parachain block for each core in the same time period.

Heterogeneity

So far, we’ve only discussed scalability and sharded execution offered by Polkadot. It’s important to note that each of Polkadot’s shards is, in fact, a completely different application. This is achieved through the metaprotocol stored as bytecode: a protocol that stores the definition of the blockchain itself as bytecode in its state. In Polkadot 1.0, WASM is used as the preferred bytecode, while in JAM, PVM/RISC-V is adopted.

This is why Polkadot is referred to as a heterogeneous sharded blockchain. (For more details, see: https://x.com/kianenigma/status/1790763921600606259) Each Layer 2 is a completely different application.

Polkadot2

One important aspect of Polkadot2 is making the use of cores more flexible. In the original Polkadot model, core leasing periods ranged from 6 months to 2 years, which suited resource-rich enterprises but was less feasible for smaller teams. The feature that allows Polkadot cores to be used more flexibly is called “Agile Coretime.” (For more details, see: https://polkadot.com/agile-coretime) In this mode, core lease terms can be as short as a single block or as long as a month, with a price cap for those wishing to lease for longer periods.

The other features of Polkadot 2 are gradually being revealed as our discussion progresses, so there’s no need to elaborate on them further here.

In-Core vs On-Chain Operations

To understand JAM, it’s important to first grasp what happens when a Layer 2 block enters the Polkadot core. The following is a simplified explanation.

Recall that a core consists mainly of a set of validators. So when we say “data is sent to the core,” it means the data is passed to this set of validators.

  1. A Layer 2 block, along with part of the state of that Layer 2, is sent to the core. This data contains all the information needed to execute the Layer 2 block.

  2. A portion of the core validators will re-execute the Layer 2 block and continue with tasks related to consensus.

  3. These core validators make the re-execution data available to validators outside the core. Under the ELVES rules, those validators may later decide to re-execute the Layer 2 block themselves, and they need this data to do so.

It’s important to note that, so far, all these operations are happening outside the main Polkadot block and state transition function. Everything occurs within the core and the data availability layer.

  4. Finally, a small portion of the latest Layer 2 state becomes visible on Polkadot’s main relay chain. Unlike the previous operations, this step is much cheaper than re-executing the Layer 2 block, and it affects Polkadot’s main state: it is visible in a Polkadot block and is executed by all Polkadot validators.
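The steps above can be pictured as a pipeline. Everything below is an illustrative model; the type and function names are invented and are not JAM’s or Polkadot’s actual APIs:

```rust
/// What is sent to the core: a Layer 2 block plus the slice of state needed
/// to execute it (step 1). Names here are invented for illustration.
struct CoreInput { l2_block: Vec<u8>, witness_state: Vec<u8> }

/// Evidence produced by in-core re-execution (step 2).
struct Evidence { trace: Vec<u8> }

/// The only thing that touches the relay chain's main state (final step).
struct Commitment { root: u64 }

/// Steps 1-2: in-core. Only the core's validator subset runs this.
fn in_core_execute(input: &CoreInput) -> Evidence {
    let mut trace = input.witness_state.clone();
    trace.extend_from_slice(&input.l2_block);
    Evidence { trace }
}

/// Step 3: publish the evidence so validators outside the core *can* re-check.
fn publish_to_da(ev: &Evidence) -> Vec<u8> {
    ev.trace.clone()
}

/// Final step: on-chain. Cheap for everyone; only a small commitment is stored.
fn accumulate_on_chain(ev: &Evidence) -> Commitment {
    let root = ev.trace.iter()
        .fold(0u64, |acc, &b| acc.wrapping_mul(31).wrapping_add(b as u64));
    Commitment { root }
}

fn main() {
    let input = CoreInput { l2_block: vec![1, 2, 3], witness_state: vec![9] };
    let ev = in_core_execute(&input);
    let da_copy = publish_to_da(&ev);
    let c = accumulate_on_chain(&ev);
    // Re-execution from the DA copy yields the same commitment (determinism).
    assert_eq!(accumulate_on_chain(&Evidence { trace: da_copy }).root, c.root);
    println!("commitment root = {}", c.root);
}
```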

From this, we can explore a few key operations that Polkadot is performing:

  • From Step 1, we can conclude that Polkadot has introduced a new type of execution, different from the traditional blockchain state transition function. Normally, when all network validators perform a task, the main blockchain state gets updated; this is what we call on-chain execution, and it is what happens in the final step. What happens inside the core, however, is different. This new form of blockchain computation is referred to as in-core execution.
  • From Step 3, we infer that Polkadot has a native Data Availability (DA) layer, which Layer 2s automatically use to ensure that the evidence of their execution remains available for a certain period. Note, however, that the format of the data blocks posted to the DA layer is fixed: they contain only the evidence required for Layer 2 re-execution, and the parachain code does not read data back from the DA layer.

Understanding this forms the foundation for grasping JAM. Here’s a summary:

  • In-Core Execution: Refers to operations that take place inside the core. These operations are rich, scalable, and secured through cryptoeconomics and ELVES, offering the same security as on-chain execution.
  • On-Chain Execution: Refers to operations executed by all validators. These are economically guaranteed to be secure by default, but they are more costly and restricted since everyone performs all tasks.
  • Data Availability: Refers to Polkadot validators’ ability to guarantee the availability of certain data for a period of time and provide it to other validators.

JAM

With this understanding, we can now introduce JAM.

JAM is a new protocol inspired by Polkadot’s design and fully compatible with it, aiming to replace the Polkadot relay chain and make core usage fully decentralized and unrestricted.

Built on Polkadot 2, JAM strives to make the deployment of Layer 2s on the core more accessible, offering even more flexibility than the agile-coretime model.

  • Polkadot 2 allows Layer 2 deployment on the core to be more dynamic.
  • JAM aims to allow any application to be deployed on Polkadot’s core, even if those applications aren’t structured like blockchains or Layer 2s.

This is achieved mainly by exposing the three core concepts discussed earlier to developers: on-chain execution, in-core execution, and the DA layer.

In other words, in JAM, developers can:

  • Fully program both in-core and on-chain tasks.
  • Read from and write to Polkadot’s DA layer with arbitrary data.

This forms the basic overview of JAM’s goals. Needless to say, much of this has been simplified, and the protocol is likely to evolve further.

With this foundational understanding, we can now delve into some of the specifics of JAM in the following chapters.

Services and Work Items

In JAM, what were previously referred to as Layer 2s or parachains are now called Services, and what were previously referred to as blocks or transactions are now called Work Items or Work Packages. Specifically, a work item belongs to a service, and a work package is a collection of work items. These terms are intentionally broad to cover use cases beyond blockchain/Layer 2 scenarios.

A service is described by three entry points, two of which are fn refine() and fn accumulate(). The former describes what the service does during in-core execution, and the latter describes what it does during on-chain execution.
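As a rough mental model, the two entry points can be sketched as a trait. The actual interface is defined in the Graypaper and differs in types and detail; the trait signatures and the `Counter` service below are invented for illustration:

```rust
/// Hypothetical sketch of a service's two main entry points.
trait Service {
    /// In-core ("Join"): stateless, parallel, does the heavy lifting.
    fn refine(&self, work_item: &[u8]) -> Vec<u8>;
    /// On-chain ("Accumulate"): folds refined outputs into the JAM state.
    fn accumulate(&mut self, refined_outputs: &[Vec<u8>]);
}

/// Toy service that counts bytes across its work items.
struct Counter { total: u64 }

impl Service for Counter {
    fn refine(&self, work_item: &[u8]) -> Vec<u8> {
        // Stand-in for heavy in-core computation.
        (work_item.len() as u64).to_le_bytes().to_vec()
    }
    fn accumulate(&mut self, refined_outputs: &[Vec<u8>]) {
        for out in refined_outputs {
            let mut buf = [0u8; 8];
            buf.copy_from_slice(out);
            self.total += u64::from_le_bytes(buf);
        }
    }
}

fn main() {
    let mut svc = Counter { total: 0 };
    // Two work items, refined in parallel on cores, then accumulated on-chain.
    let refined = vec![svc.refine(b"abc"), svc.refine(b"defg")];
    svc.accumulate(&refined);
    assert_eq!(svc.total, 7);
}
```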

Finally, the names of these entry points are the reason the protocol is called JAM (Join Accumulate Machine). Join refers to fn refine(), which is the phase where all Polkadot cores process a large volume of work in parallel across different services. After data is processed, it moves to the next stage. Accumulate refers to the process of accumulating all of these results into the main JAM state, which happens during the on-chain execution phase.

Work items can precisely specify the code they execute in-core and on-chain, as well as how, if, and from where they read or write content in the Distributed Data Lake.

Semi-Consistency

Reviewing the existing documentation on XCM (Polkadot’s chosen language for parachain communication), all communication is asynchronous: once a message is sent, you cannot wait for its response. Asynchronous communication is a symptom of inconsistency in the system, and one of the primary downsides of permanently sharded systems such as Polkadot 1, Polkadot 2, and Ethereum’s existing Layer 2 ecosystem.

However, as described in Section 2.4 of the Graypaper, a fully consistent system that remains synchronous for all its tenants can only scale to a certain degree without sacrificing universality, accessibility, or resilience.

  • Synchronous ≈ Consistency || Asynchronous ≈ Inconsistency

This is where JAM stands out: by introducing several features, JAM achieves a novel intermediate state known as a semi-consistent system. In this system, subsystems that communicate frequently can create a consistent environment with one another, without forcing the entire system to remain consistent. This was best described by Dr. Gavin Wood, the author of the Graypaper, in an interview: https://www.youtube.com/watch?t=1378&v=O3kRAVBTkfs&embeds_referring_euri=https%3A%2F%2Fblog.kianenigma.nl%2F&source_ve_path=OTY3MTQ

Another way to understand this is by viewing Polkadot/JAM as a sharded system, where the boundaries between these shards are fluid and dynamically determined.

Polkadot has always been sharded and fully heterogeneous.

Now, it is not only sharded and heterogeneous, but these shard boundaries can be flexibly defined, which is what Gavin Wood refers to as a “semi-consistent” system in his tweets and the Graypaper. (please see: https://x.com/gavofyork?ref_src=twsrc%5Etfw, https://graypaper.com/)

Several features make this semi-consistent state possible:

  1. Stateless, parallel in-core execution, in which a service can interact synchronously only with the other services in the same core and the same block, combined with on-chain execution, in which a service can access the results of all services across all cores.
  2. JAM does not enforce any specific service scheduling. Services with frequent communication can provide economic incentives to their schedulers to create work packages containing these frequently communicating services. This allows these services to run within the same core, making their interactions appear synchronous, even though they are distributed.
  3. Additionally, JAM services can access the DA layer and use it as a temporary yet extremely cost-effective data layer. Once data is placed in the DA, it eventually propagates to all cores but is immediately available within the same core. Therefore, JAM services can achieve a higher degree of data access by scheduling themselves within the same core across consecutive blocks.
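Point 2 above can be modelled as follows. Two services packed into the same work package (and hence the same core and block) can interact synchronously; otherwise the call is deferred to a later block. The types and the scheduling rule are invented for illustration:

```rust
/// Outcome of one service calling another in this toy model.
#[derive(Debug, PartialEq)]
enum CallOutcome {
    Sync(u64),            // answered within the same core and block
    DeferredToNextBlock,  // message queued; the caller resumes later
}

/// A work package, reduced to the set of services scheduled into one core.
struct WorkPackage { services_in_core: Vec<&'static str> }

fn call_service(pkg: &WorkPackage, callee: &str, arg: u64) -> CallOutcome {
    if pkg.services_in_core.contains(&callee) {
        CallOutcome::Sync(arg + 1)  // stand-in for the callee's actual logic
    } else {
        CallOutcome::DeferredToNextBlock
    }
}

fn main() {
    // A scheduler, nudged by economic incentives, packed A and B together
    // but left C on another core.
    let pkg = WorkPackage { services_in_core: vec!["A", "B"] };
    assert_eq!(call_service(&pkg, "B", 41), CallOutcome::Sync(42));
    assert_eq!(call_service(&pkg, "C", 41), CallOutcome::DeferredToNextBlock);
}
```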

It is important to note that while these capabilities are possible within JAM, they are not enforced at the protocol level. Consequently, some interfaces are theoretically asynchronous but can function synchronously in practice due to sophisticated abstractions and incentives. CorePlay, which will be discussed in the next section, is an example of this phenomenon.

CorePlay

This section introduces CorePlay, an experimental concept in the JAM environment that can be described as a new smart contract programming model. As of the time of writing, CorePlay has not been fully defined and remains a speculative idea.

To understand CorePlay, we first need to introduce the virtual machine (VM) chosen by JAM: the PVM.

PVM

PVM is a key detail in both JAM and CorePlay. The lower-level details of PVM are beyond the scope of this document and are best explained by domain experts in the Graypaper. However, for this explanation, we will highlight a few key attributes of PVM:

  • Efficient metering
  • The ability to pause and resume execution

The latter is especially crucial for CorePlay.

CorePlay is an example of how JAM’s flexible primitives can be used to create a synchronous and scalable smart contract environment with a highly flexible programming interface. CorePlay proposes that actor-based smart contracts be deployed directly on JAM cores, allowing them to benefit from synchronous programming interfaces. Developers can write smart contracts as if they were simple fn main() functions, using expressions like let result = other_coreplay_actor(data).await? to communicate. If other_coreplay_actor is on the same JAM core in the same block, this call is synchronous. If it’s on another core, the actor will be paused and resumed in a subsequent JAM block. This is made possible by JAM services, their flexible scheduling, and PVM’s capabilities.
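The pause/resume behaviour that PVM enables can be pictured as an explicit state machine: the `.await` point is where the actor is frozen, and delivery of the reply is where it resumes. Everything below is an illustrative model, not CorePlay’s actual interface:

```rust
/// An actor's lifecycle in this toy model.
enum Actor {
    /// Paused at `other_coreplay_actor(data).await`, waiting for a reply.
    AwaitingReply { data: u64 },
    /// Its `fn main()` ran to completion.
    Done { result: u64 },
}

impl Actor {
    /// Deliver the reply. If the callee was on the same core in the same
    /// block, this happens immediately; otherwise in a later JAM block.
    fn resume(self, reply: u64) -> Actor {
        match self {
            Actor::AwaitingReply { data } => Actor::Done { result: data + reply },
            finished => finished,  // already done; nothing to resume
        }
    }
}

fn main() {
    // The actor calls out with data = 40 and pauses...
    let paused = Actor::AwaitingReply { data: 40 };
    // ...and is resumed when the reply (2) arrives, same block or a later one.
    match paused.resume(2) {
        Actor::Done { result } => assert_eq!(result, 42),
        _ => unreachable!(),
    }
}
```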

CoreChains Service

Finally, let’s summarize the primary reason JAM is fully compatible with Polkadot. Polkadot’s flagship product is its agile-coretime parachains, which continue in JAM. The earliest deployed services in JAM will likely be referred to as CoreChains or Parachains, enabling existing Polkadot-2-style parachains to run on JAM.

Further services can be deployed on JAM, and the existing CoreChains service will be able to communicate with them. Polkadot’s current product remains intact; JAM simply opens new doors for existing parachain teams.

Appendix: Data Sharding

Most of this document discusses scalability from the perspective of execution sharding. However, we can also examine this issue from a data sharding standpoint. Interestingly, we find this is similar to the semi-consistent model mentioned earlier. In principle, a fully consistent system is superior but unscalable, while a fully inconsistent system scales well but is suboptimal. JAM, with its semi-consistent model, introduces a new possibility.

  • Fully Consistent Systems: These are platforms where everything is synchronized, such as Solana or those exclusively deployed on Ethereum Layer 1. All application data is stored on-chain and easily accessible by all other applications. This is ideal from a programmability standpoint but not scalable.
  • Inconsistent Systems: Application data is stored off Layer 1 or in different, isolated shards. This is highly scalable but performs poorly in terms of composability. Polkadot and Ethereum’s rollup models fall into this category.

JAM offers something beyond these two options: it allows developers to publish arbitrary data to the JAM DA layer, which serves as a middle ground between on-chain and off-chain data. New applications can be built that leverage the DA layer for most of their data, while only persisting absolutely critical data to the JAM state.

Appendix: Scalability Landscape

This section revisits our perspective on blockchain scalability, which is also discussed in the Graypaper, though presented here in a more concise form.

Blockchain scalability largely follows traditional methods from distributed systems: vertical scaling and horizontal scaling.

Vertical scaling is what platforms like Solana focus on, maximizing throughput by optimizing both code and hardware to their limits.

Horizontal scaling is the strategy adopted by Ethereum and Polkadot: reducing the workload that each participant needs to handle. In traditional distributed systems, this is achieved by adding more replica machines. In blockchain, the “computer” is the entire network of validators. By distributing tasks among them (as ELVES does) or optimistically reducing their responsibilities (as in Optimistic Rollups), we decrease the workload for the entire validator set, thus achieving horizontal scaling.

In blockchain, horizontal scaling can be likened to “reducing the number of machines that need to perform all operations.”

In summary:

  1. Vertical scaling: High-performance hardware + optimization of monolithic blockchains.
  2. Horizontal scaling:
    • Optimistic Rollups
    • SNARK-based Rollups
    • ELVES: Polkadot’s Cynical Rollups

Appendix: Same Hardware, Kernel Upgrade

This section is based on Rob Habermeier’s analogy from his Sub0 2023 talk, “Polkadot: Kernel/Userland” (see: https://www.youtube.com/watch?v=15aXYvVMxlw), which presents JAM as an upgrade to Polkadot: a kernel update on the same hardware.

In a typical computer, we can divide the entire stack into three parts:

  1. Hardware
  2. Kernel
  3. User Space

In Polkadot, the hardware—the core infrastructure providing computation and data availability—has always been the cores, as previously mentioned.

In Polkadot, the kernel has so far consisted of two main parts:

  1. The Parachains Protocol: a fixed, opinionated way of utilizing the cores.
  2. A set of low-level functionalities, such as the DOT token and its transferability, staking, governance, etc.

Both of these exist in Polkadot’s Relay Chain.

User space applications, on the other hand, are the parachains themselves, their native tokens, and anything built on top of them.


Polkadot has long envisioned moving more core functionality to its primary users, the parachains. This is precisely the goal of the Minimal Relay RFC.

This means that the Polkadot Relay Chain would only handle providing the parachain protocol, thereby reducing the kernel space to some extent.

Once this architecture is implemented, it will be easier to visualize what the JAM migration will look like. JAM will significantly reduce Polkadot’s kernel space, making it more versatile. Additionally, the Parachains protocol will move to user space, as it is one of the few ways to build applications on the same core (hardware) and kernel (JAM).

This also reinforces why JAM is a replacement for the Polkadot Relay Chain, not for parachains.

In other words, we can view the JAM migration as a kernel upgrade. The underlying hardware remains unchanged, and much of the old kernel’s content is moved to user space to simplify the system.

Disclaimer:

  1. This article is reprinted from [Polkadot Ecological Research Institute], All copyrights belong to the original author [Polkadot Ecological Research Institute]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.

A Technical Overview of Polkadot's JAM Protocol

AdvancedSep 14, 2024
This article offers a technical analysis of Polkadot’s newly proposed JAM protocol, helping readers understand how JAM introduces a new level of scalability to the Polkadot ecosystem.
A Technical Overview of Polkadot's JAM Protocol

The following is a detailed explanation of Polkadot1, Polkadot2, and how they evolved into JAM. (For more details, see: https://www.navalmanack.com/almanack-of-naval-ravikant/how-to-think-clearly). This article is aimed at technical readers, especially those who may not be deeply familiar with Polkadot but have some knowledge of blockchain systems and are likely acquainted with technologies from other ecosystems.
I believe this article serves as a good precursor to reading the JAM Gray Paper. (For more details, see: https://graypaper.com/)

Background knowledge

This article assumes the reader is familiar with the following concepts:

Introduction: Polkadot1

Let’s first revisit the most innovative features of Polkadot1.

Social Aspects:

Technical Aspects:

Sharded Execution: Key Points

Currently, we are discussing a Layer 1 network that hosts other Layer 2 “blockchain” networks, similar to Polkadot and Ethereum. Therefore, the terms Layer 2 and Parachain can be used interchangeably.

The core issue of blockchain scalability can be framed as: There is a set of validators that, using the crypto-economics of proof-of-stake, ensures that the execution of certain code is trustworthy. By default, these validators need to re-execute all the work of one another. As long as we enforce that all validators always re-execute everything, the entire system remains non-scalable.

Please note that, as long as the principle of absolute re-execution remains unchanged, increasing the number of validators in this model does not actually improve the system’s throughput.

This demonstrates a monolithic blockchain (as opposed to a sharded one). All network validators process inputs (i.e., blocks) one by one. In such a system, if Layer 1 wishes to host more Layer 2s, then all validators must re-execute all Layer 2s’ work. Clearly, this method does not scale.

Optimistic rollups address this issue by only re-executing (fraud proofs) when fraud is claimed. SNARK-based Rollups address this issue by leveraging the fact that the cost of validating SNARK proofs is significantly lower than the cost of generating them, thereby allowing all validators to efficiently verify SNARK proofs. For more details, refer to the “Appendix: Scalability Space Diagram.”

A straightforward solution for sharding is to divide the validator set into smaller subsets and have these smaller subsets re-execute Layer2 blocks. What are the problems with this approach? We are essentially sharding both the execution and economic security of the network. Such a Layer2 solution has lower security compared to Layer1, and its security decreases further as we divide the validator set into more shards.

Unlike optimistic rollups, where re-execution costs cannot always be avoided, Polkadot was designed with sharded execution in mind. It allows a portion of validators to re-execute Layer 2 blocks while providing enough cryptoeconomic evidence to the entire network to prove that the Layer 2 block is as secure as if the full validator set had re-executed it. This is achieved through a novel (and recently formalized) mechanism known as ELVES. (For more details, see: https://eprint.iacr.org/2024/961)

In short, ELVES can be seen as a “suspicious rollups” mechanism. Through several rounds of validators actively querying other validators on whether a given Layer 2 block is valid, we can confirm with high probability the block’s validity. In case of any dispute, the full validator set is quickly involved. Polkadot co-founder Rob Habermeier explained this in detail in an article. (For more details, see: https://polkadot.com/blog/polkadot-v1-0-sharding-and-economic-security#approval-checking-and-finality)

ELVES enable Polkadot to possess both sharded execution and shared security, two properties that were previously thought to be mutually exclusive. This is the primary technical achievement of Polkadot1 in scalability.

Now, let’s move forward with the “Core” analogy. A sharded execution blockchain is much like a CPU: just as a CPU can have multiple cores executing instructions in parallel, Polkadot can process Layer 2 blocks in parallel. This is why Layer 2 on Polkadot is called a parachain, and the environment where smaller validator subsets re-execute a single Layer 2 block is called a “core.” Each core can be abstracted as “a group of cooperating validators.”

You can think of a monolithic blockchain as processing a single block at a time, whereas Polkadot processes both a relay chain block and a parachain block for each core in the same time period.

Heterogeneity

So far, we’ve only discussed scalability and sharded execution offered by Polkadot. It’s important to note that each of Polkadot’s shards is, in fact, a completely different application. This is achieved through the metaprotocol stored as bytecode: a protocol that stores the definition of the blockchain itself as bytecode in its state. In Polkadot 1.0, WASM is used as the preferred bytecode, while in JAM, PVM/RISC-V is adopted.

This is why Polkadot is referred to as a heterogeneous sharded blockchain. (For more details, see: https://x.com/kianenigma/status/1790763921600606259) Each Layer 2 is a completely different application.

Polkadot2

One important aspect of Polkadot2 is making the use of cores more flexible. In the original Polkadot model, core leasing periods ranged from 6 months to 2 years, which suited resource-rich enterprises but was less feasible for smaller teams. The feature that allows Polkadot cores to be used more flexibly is called “Agile Coretime.” (For more details, see: https://polkadot.com/agile-coretime) In this mode, core lease terms can be as short as a single block or as long as a month, with a price cap for those wishing to lease for longer periods.

The other features of Polkadot 2 are gradually being revealed as our discussion progresses, so there’s no need to elaborate on them further here.

In-Core vs On-Chain Operations

To understand JAM, it’s important to first grasp what happens when a Layer 2 block enters the Polkadot core. The following is a simplified explanation.

Recall that a core consists mainly of a set of validators. So when we say “data is sent to the core,” it means the data is passed to this set of validators.

  1. A Layer 2 block, along with part of the state of that Layer 2, is sent to the core. This data contains all the information needed to execute the Layer 2 block.

  2. A portion of the core validators will re-execute the Layer 2 block and continue with tasks related to consensus.

  3. These core validators provide the re-executed data to other validators outside the core. According to the ELVES rules, these validators may decide whether or not to re-execute the Layer 2 block, needing this data to do so.

It’s important to note that, so far, all these operations are happening outside the main Polkadot block and state transition function. Everything occurs within the core and the data availability layer.

  1. Finally, a small portion of the latest Layer 2 state becomes visible on Polkadot’s main relay chain. Unlike previous operations, this step is much cheaper than re-executing the Layer 2 block, and it affects Polkadot’s main state. It is visible in a Polkadot block and is executed by all Polkadot validators.

From this, we can explore a few key operations that Polkadot is performing:

  • From Step 1, we can conclude that Polkadot has introduced a new type of execution, different from traditional blockchain state transition functions. Normally, when all network validators perform a task, the main blockchain state gets updated. This is what we call on-chain execution, and it’s what happens in Step 3. However, what happens inside the core (Step 1) is different. This new form of blockchain computation is referred to as in-core execution.
  • From Step 2, we infer that Polkadot has a native Data Availability (DA) layer, and Layer 2s automatically use it to ensure that the evidence of their execution remains available for a certain period. However, the data blocks that can be posted to the DA layer are fixed, containing only the evidence required for Layer 2 re-execution. Furthermore, the parachain code does not read the DA layer data.

Understanding this forms the foundation for grasping JAM. Here’s a summary:

  • In-Core Execution: Refers to operations that take place inside the core. These operations are rich, scalable, and secured through cryptoeconomics and ELVES, offering the same security as on-chain execution.
  • On-Chain Execution: Refers to operations executed by all validators. These are economically guaranteed to be secure by default, but they are more costly and restricted since everyone performs all tasks.
  • Data Availability: Refers to Polkadot validators’ ability to guarantee the availability of certain data for a period of time and provide it to other validators.

JAM

With this understanding, we can now introduce JAM.

JAM is a new protocol inspired by Polkadot’s design and fully compatible with it, aiming to replace the Polkadot relay chain and make core usage fully decentralized and unrestricted.

Built on Polkadot 2, JAM strives to make the deployment of Layer 2s on the core more accessible, offering even more flexibility than the agile-coretime model.

  • Polkadot 2 allows Layer 2 deployment on the core to be more dynamic.
  • JAM aims to allow any application to be deployed on Polkadot’s core, even if those applications aren’t structured like blockchains or Layer 2s.

This is achieved mainly by exposing the three core concepts discussed earlier to developers: on-chain execution, in-core execution, and the DA layer.

In other words, in JAM, developers can:

  • Fully program both in-core and on-chain tasks.
  • Read from and write to Polkadot’s DA layer with arbitrary data.

This forms the basic overview of JAM’s goals. Needless to say, much of this has been simplified, and the protocol is likely to evolve further.

With this foundational understanding, we can now delve into some of the specifics of JAM in the following chapters.

Services and Work Items

In JAM, what were previously referred to as Layer 2s or parachains are now called Services, and what were previously referred to as blocks or transactions are now called Work Items or Work Packages. Specifically, a work item belongs to a service, and a work package is a collection of work items. These terms are intentionally broad to cover use cases beyond blockchain/Layer 2 scenarios.

A service is described by three entry points, two of which are fn refine() and fn accumulate(). The former describes what the service does during in-core execution, and the latter describes what it does during on-chain execution.

Finally, the names of these entry points are the reason the protocol is called JAM (Join Accumulate Machine). Join refers to fn refine(), which is the phase where all Polkadot cores process a large volume of work in parallel across different services. After data is processed, it moves to the next stage. Accumulate refers to the process of accumulating all of these results into the main JAM state, which happens during the on-chain execution phase.

Work items can precisely specify the code they execute in-core and on-chain, as well as how, if, and from where they read or write content in the Distributed Data Lake.
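A service's shape can be sketched as a Rust trait. This is a simplified, hypothetical interface (the real entry-point signatures are defined in the Graypaper); the `Service` trait, `WorkItem` struct, and the toy `ByteCounter` service below are all invented for illustration.

```rust
/// A work item: an opaque payload addressed to one service.
struct WorkItem {
    payload: Vec<u8>,
}

trait Service {
    /// In-core execution: stateless, parallel, heavy computation that
    /// turns a work item into a small result.
    fn refine(&self, item: WorkItem) -> Vec<u8>;

    /// On-chain execution: integrates refined results into the
    /// service's state, visible to all validators.
    fn accumulate(&mut self, results: Vec<Vec<u8>>);
}

/// A toy service that counts the bytes it has processed.
struct ByteCounter {
    total: usize,
}

impl Service for ByteCounter {
    fn refine(&self, item: WorkItem) -> Vec<u8> {
        // The "heavy" step: here, just measure the payload.
        (item.payload.len() as u64).to_le_bytes().to_vec()
    }

    fn accumulate(&mut self, results: Vec<Vec<u8>>) {
        // The cheap step: fold each 8-byte result into state.
        for r in results {
            let mut buf = [0u8; 8];
            buf.copy_from_slice(&r);
            self.total += u64::from_le_bytes(buf) as usize;
        }
    }
}
```

A blockchain-style service would make `refine` verify a Layer 2 block and `accumulate` record the resulting state root, but nothing in the interface forces that interpretation.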

Semi-Consistency

Reviewing the existing documentation on XCM (Polkadot’s chosen language for parachain communication), all communication is asynchronous: once a message is sent, you cannot wait for its response. Asynchronous communication is a symptom of inconsistency in the system, and one of the primary downsides of permanently sharded systems such as Polkadot 1, Polkadot 2, and Ethereum’s existing Layer 2 ecosystem.

However, as described in Section 2.4 of the Graypaper, a fully consistent system that remains synchronous for all its tenants can only scale to a certain degree without sacrificing universality, accessibility, or resilience.

  • Synchronous ≈ Consistency || Asynchronous ≈ Inconsistency

This is where JAM stands out: by introducing several features, JAM achieves a novel intermediate state known as a semi-consistent system. In this system, subsystems that communicate frequently can create a consistent environment with one another, without forcing the entire system to remain consistent. This was best described by Dr. Gavin Wood, the author of the Graypaper, in an interview: https://www.youtube.com/watch?v=O3kRAVBTkfs&t=1378

Another way to understand this is by viewing Polkadot/JAM as a sharded system, where the boundaries between these shards are fluid and dynamically determined.

Polkadot has always been sharded and fully heterogeneous.

Now, it is not only sharded and heterogeneous, but these shard boundaries can be flexibly defined, which is what Gavin Wood calls a “semi-consistent” system in his posts and the Graypaper. (please see: https://x.com/gavofyork, https://graypaper.com/)

Several features make this semi-consistent state possible:

  1. Access to stateless, parallel in-core execution, where a service can interact synchronously only with the other services scheduled in the same core and block, combined with on-chain execution, where a service can access the results of all services across all cores.
  2. JAM does not enforce any specific service scheduling. Services with frequent communication can provide economic incentives to their schedulers to create work packages containing these frequently communicating services. This allows these services to run within the same core, making their interactions appear synchronous, even though they are distributed.
  3. Additionally, JAM services can access the DA layer and use it as a temporary yet extremely cost-effective data layer. Once data is placed in the DA, it eventually propagates to all cores but is immediately available within the same core. Therefore, JAM services can achieve a higher degree of data access by scheduling themselves within the same core across consecutive blocks.
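Point 2 above can be made concrete with a toy scheduler. The sketch below is purely illustrative (JAM specifies no such scheduler; `WorkPackage`, `schedule`, and the affinity list are assumptions of this example): work items from services that frequently communicate are greedily grouped into the same work package, so that within one core and block their interaction is effectively synchronous.

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct WorkItem {
    service_id: u32,
    payload: Vec<u8>,
}

/// A work package: the unit of work a core executes in one block.
struct WorkPackage {
    items: Vec<WorkItem>,
}

/// Group items so that each pair of services in `affinity` lands in the
/// same package where possible (greedy, not transitive across chains of
/// pairs; real incentives and scheduling would be far richer).
fn schedule(items: Vec<WorkItem>, affinity: &[(u32, u32)]) -> Vec<WorkPackage> {
    // Services named together in an affinity pair share a group id.
    let mut group: HashMap<u32, u32> = HashMap::new();
    for &(a, b) in affinity {
        let g = group
            .get(&a)
            .copied()
            .or_else(|| group.get(&b).copied())
            .unwrap_or(a);
        group.insert(a, g);
        group.insert(b, g);
    }
    // One work package per group of services.
    let mut buckets: HashMap<u32, Vec<WorkItem>> = HashMap::new();
    for item in items {
        let g = group.get(&item.service_id).copied().unwrap_or(item.service_id);
        buckets.entry(g).or_default().push(item);
    }
    buckets.into_values().map(|items| WorkPackage { items }).collect()
}
```

With an affinity between services 1 and 2, their items end up in one package (and hence one core), while an unrelated service 3 lands in its own package.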

It is important to note that while these capabilities are possible within JAM, they are not enforced at the protocol level. Consequently, some interfaces are theoretically asynchronous but can function synchronously in practice due to sophisticated abstractions and incentives. CorePlay, which will be discussed in the next section, is an example of this phenomenon.

CorePlay

This section introduces CorePlay, an experimental concept in the JAM environment that can be described as a new smart contract programming model. As of the time of writing, CorePlay has not been fully defined and remains a speculative idea.

To understand CorePlay, we first need to introduce the virtual machine (VM) chosen by JAM: the PVM.

PVM

PVM is a key detail in both JAM and CorePlay. The lower-level details of PVM are beyond the scope of this document and are best explained by domain experts in the Graypaper. However, for this explanation, we will highlight a few key attributes of PVM:

  • Efficient metering
  • The ability to pause and resume execution

The latter is especially crucial for CorePlay.
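These two properties can be illustrated with a toy interpreter. This is emphatically not the PVM (whose instruction set, gas model, and suspension mechanism are specified in the Graypaper); `Machine`, `Outcome`, and the one-unit-per-instruction gas rule are assumptions of this sketch. The point is that every step is metered against a budget, and when the budget runs out, the whole machine state is returned so execution can resume later, for example in a subsequent block.

```rust
/// Result of running the toy machine for a bounded amount of gas.
enum Outcome {
    Finished(i64),
    /// Out of gas: the full machine state is handed back so execution
    /// can continue later from exactly where it stopped.
    Paused(Machine),
}

#[derive(Clone)]
struct Machine {
    pc: usize,
    acc: i64,
    program: Vec<i64>, // each "instruction" just adds to the accumulator
}

fn run(mut m: Machine, mut gas: u64) -> Outcome {
    while m.pc < m.program.len() {
        if gas == 0 {
            return Outcome::Paused(m); // pause; resumable later
        }
        m.acc += m.program[m.pc];
        m.pc += 1;
        gas -= 1; // flat cost per instruction: cheap, predictable metering
    }
    Outcome::Finished(m.acc)
}
```

Running the program `[1, 2, 3]` with a gas budget of 2 pauses after two instructions; feeding the returned state back into `run` with more gas completes the computation.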

CorePlay is an example of how JAM’s flexible primitives can be used to create a synchronous and scalable smart contract environment with a highly flexible programming interface. CorePlay proposes that actor-based smart contracts be deployed directly on JAM cores, allowing them to benefit from synchronous programming interfaces. Developers can write smart contracts as if they were simple fn main() functions, using expressions like let result = other_coreplay_actor(data).await? to communicate. If other_coreplay_actor is on the same JAM core in the same block, this call is synchronous. If it’s on another core, the actor will be paused and resumed in a subsequent JAM block. This is made possible by JAM services, their flexible scheduling, and PVM’s capabilities.
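The caller's view of such an `.await` can be sketched as an explicit state machine. Everything here is hypothetical (CorePlay itself is still speculative, and `Actor`, `Call`, and `call_actor` are invented for this sketch): if the callee is co-scheduled on the same core in the same block, the call returns a result immediately; otherwise the caller is suspended, to be resumed in a later block via the PVM's pause/resume capability.

```rust
/// A CorePlay-style actor, identified here only by the core it is
/// currently scheduled on.
struct Actor {
    core: u32,
}

enum Call {
    /// Callee was co-scheduled on the same core: result is immediate,
    /// i.e. the call behaves synchronously.
    Ready(u64),
    /// Callee is elsewhere: the caller is suspended and resumed in a
    /// subsequent block once the result is available.
    Suspended,
}

fn call_actor(caller: &Actor, callee: &Actor, data: u64) -> Call {
    if caller.core == callee.core {
        Call::Ready(data * 2) // stand-in for the callee's computation
    } else {
        Call::Suspended
    }
}
```

The appeal of CorePlay is that developers never write this state machine by hand: the `fn main()`-style code stays linear, and the suspension points are where the runtime snapshots and later restores the PVM state.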

CoreChains Service

Finally, let’s summarize the primary reason JAM is fully compatible with Polkadot. Polkadot’s flagship product is its agile-coretime parachains, which continue in JAM. The earliest deployed services in JAM will likely be referred to as CoreChains or Parachains, enabling existing Polkadot-2-style parachains to run on JAM.

Further services can be deployed on JAM, and the existing CoreChains service can communicate with them. However, Polkadot’s current products will remain robust, simply opening new doors for existing parachain teams.

Appendix: Data Sharding

Most of this document discusses scalability from the perspective of execution sharding. However, we can also examine this issue from a data sharding standpoint. Interestingly, we find this is similar to the semi-consistent model mentioned earlier. In principle, a fully consistent system is superior but unscalable, while a fully inconsistent system scales well but is suboptimal. JAM, with its semi-consistent model, introduces a new possibility.

  • Fully Consistent Systems: These are platforms where everything is synchronized, such as Solana, or applications deployed exclusively on Ethereum Layer 1. All application data is stored on-chain and is easily accessible to all other applications. This is ideal for programmability but not scalable.
  • Inconsistent Systems: Application data is stored off Layer 1 or in different, isolated shards. This is highly scalable but performs poorly in terms of composability. Polkadot and Ethereum’s rollup models fall into this category.

JAM offers something beyond these two options: it allows developers to publish arbitrary data to the JAM DA layer, which serves as a middle ground between on-chain and off-chain data. New applications can be built that leverage the DA layer for most of their data, while only persisting absolutely critical data to the JAM state.
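The resulting application pattern can be sketched as two tiers of storage. This is a hypothetical model (the `App` struct and its fields are invented, and the blob length stands in for what would realistically be a cryptographic commitment): bulk data goes to the cheap, temporary DA layer, while only a small summary is persisted in permanent on-chain state.

```rust
use std::collections::HashMap;

#[derive(Default)]
struct App {
    /// On-chain state: small, permanent, expensive.
    state: HashMap<String, u64>,
    /// DA-layer stand-in: large blobs, guaranteed available only for
    /// a limited period.
    da: Vec<Vec<u8>>,
}

impl App {
    fn store(&mut self, key: &str, blob: Vec<u8>) {
        // Persist only a cheap summary in state (here the blob length,
        // standing in for a commitment such as a hash); the bulk data
        // itself is published to the DA layer.
        self.state.insert(key.to_string(), blob.len() as u64);
        self.da.push(blob);
    }
}
```

An application following this pattern pays on-chain prices only for the few bytes it truly needs forever, while still being able to prove, for the DA window, what the full data was.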

Appendix: Scalability Landscape

This section revisits our perspective on blockchain scalability, which is also discussed in the Graypaper, though presented here in a more concise form.

Blockchain scalability largely follows traditional methods from distributed systems: vertical scaling and horizontal scaling.

Vertical scaling is what platforms like Solana focus on, maximizing throughput by optimizing both code and hardware to their limits.

Horizontal scaling is the strategy adopted by Ethereum and Polkadot: reducing the workload that each participant needs to handle. In traditional distributed systems, this is achieved by adding more replica machines. In blockchain, the “computer” is the entire network of validators. By distributing tasks among them (as ELVES does) or optimistically reducing their responsibilities (as in Optimistic Rollups), we decrease the workload for the entire validator set, thus achieving horizontal scaling.

In blockchain, horizontal scaling can be likened to “reducing the number of machines that need to perform all operations.”

In summary:

  1. Vertical scaling: High-performance hardware + optimization of monolithic blockchains.
  2. Horizontal scaling:
    • Optimistic Rollups
    • SNARK-based Rollups
    • ELVES: Polkadot’s Cynical Rollups

Appendix: Same Hardware, Kernel Upgrade

This section is based on Rob Habermeier’s analogy from his Sub0 2023 talk, “Polkadot: Kernel/Userland” (see: https://www.youtube.com/watch?v=15aXYvVMxlw), which presents JAM as an upgrade to Polkadot: a kernel update on the same hardware.

In a typical computer, we can divide the entire stack into three parts:

  1. Hardware
  2. Kernel
  3. User Space

In Polkadot, the hardware—the core infrastructure providing computation and data availability—has always been the cores, as previously mentioned.

In Polkadot, the kernel has so far consisted of two main parts:

  1. The Parachains Protocol: a fixed, opinionated way of utilizing the cores.
  2. A set of low-level functionalities, such as the DOT token and its transferability, staking, governance, etc.

Both of these exist in Polkadot’s Relay Chain.

User space applications, on the other hand, are the parachains themselves, their native tokens, and anything built on top of them.

Polkadot has long envisioned moving more core functionalities to its primary users—parachains. This is precisely the goal of the Minimal Relay RFC. (For more details, see: Minimal Relay RFC)

This means that the Polkadot Relay Chain would only handle providing the parachain protocol, thereby reducing the kernel space to some extent.

Once this architecture is implemented, it will be easier to visualize what the JAM migration will look like. JAM will significantly reduce Polkadot’s kernel space, making it more versatile. Additionally, the Parachains protocol will move to user space, as it is one of the few ways to build applications on the same core (hardware) and kernel (JAM).

This also reinforces why JAM is a replacement for the Polkadot Relay Chain, not for parachains.

In other words, we can view the JAM migration as a kernel upgrade. The underlying hardware remains unchanged, and much of the old kernel’s content is moved to user space to simplify the system.

Disclaimer:

  1. This article is reprinted from [Polkadot Ecological Research Institute]; all copyrights belong to the original author [Polkadot Ecological Research Institute]. If there are objections to this reprint, please contact the Gate Learn team, and they will handle it promptly.
  2. Liability Disclaimer: The views and opinions expressed in this article are solely those of the author and do not constitute any investment advice.
  3. Translations of the article into other languages are done by the Gate Learn team. Unless mentioned, copying, distributing, or plagiarizing the translated articles is prohibited.