
Abstract

As the world transforms to a tokenized economy, there is a need for an efficient, scalable protocol for typical web, enterprise, and IoT applications. 0chain provides a zero-cost, fastest-finality, infinitely scalable blockchain for web and IoT applications, essentially providing a zero-cost decentralized cloud. 0chain enables current DApps to move their off-chain code and data onto our decentralized compute and storage platform. Its self-forking feature enables different verticals and applications to fine-tune their needs and create separate chains without worrying about the integrity of the blockchain. Unlike a traditional cloud subscription model, DApps need to hold 0chain tokens to use the blockchain, more like a bank for a free scalable cloud; as more applications use our network, 0chain will grow in its intrinsic value and integrity.

1 Motivation

1.1 Scaling issue

Conventional blockchain technology does not scale and has a high economic cost of consensus, which makes it difficult to use for IoT devices and micro-transactions. IoT devices and micro-transactions typically send a lot of data, so the cumulative cost of such transactions would be too high for a business to make use of that data. Take, for example, transaction fees: Bitcoin averages $2.25 per transaction, and Ethereum around $0.41. This may be fine for high-value transactions, but for a single IoT device transmitting every minute, it would cost about $215k annually on Ethereum, unless we register most transactions off-chain and only record some values on the blockchain periodically. Additionally, the number of transactions per second that can be executed on Bitcoin is about 3, and on Ethereum between 5 and 15, far short of what we need.
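The ~$215k figure follows directly from the quoted average Ethereum fee; a quick back-of-the-envelope check:

```python
# Annual on-chain cost for one IoT device that sends a transaction
# every minute, at the quoted average Ethereum fee of $0.41.
MINUTES_PER_YEAR = 60 * 24 * 365          # 525,600 transactions per year
FEE_PER_TX_USD = 0.41                     # average Ethereum fee from the text

annual_cost = MINUTES_PER_YEAR * FEE_PER_TX_USD
print(f"${annual_cost:,.0f} per device per year")   # $215,496, i.e. ~$215k
```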
Consider an IoT application with an installation of 6M sensors, each transmitting every minute; we need a blockchain that can accommodate at least 100k transactions per second, something that none of the current blockchains supports today.

1.2 Energy waste

Traditional blockchain technology such as Bitcoin and its derivatives uses work-oriented schemes (proof-of-work) to build consensus and advance a block. This scheme wastes energy resources and needs specialized computing power. Indeed, the hashing-power requirement is so large today that only a few pools mine the bulk of the blocks. This practical economic effect runs against the original purpose of decentralization.

1.3 Resource scaling issue

A more recent blockchain technology, Ethereum, has incorporated scripts within transactions that use compute, memory, storage, and bandwidth resources. While the flexibility of a Turing-complete smart contract enables new applications, it complicates the mining process and puts a strain on resources. This led to charging fees (gas) to force contract developers to restrict contract compute and storage usage. Hence, most applications today have their computations architected to be off-chain, because on-chain computations are too slow and expensive. Depending on the implementation of the code, the gas cost varies from $0.05 to $3, and is also tied to the value of Ether. As the Ether token appreciates, the gas cost increases. Indeed, for just one IoT device, the gas cost could easily be $215k annually, depending on the number of smart contracts being used to convert raw data to calibrated visual and analytic sets. The only way to speed up Ethereum is to have sidechains or off-chain transactions that occasionally peg back to the main chain. While this may be a band-aid for current transactions, it will be very difficult for any IoT or web application to work in this scenario.
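The 100k-tps requirement from the sensor example at the start of this section is the same sort of simple arithmetic:

```python
# Required throughput for the deployment described in Section 1.1:
# 6 million sensors, each sending one transaction per minute.
SENSORS = 6_000_000
TXS_PER_SENSOR_PER_SECOND = 1 / 60        # one transaction per minute

required_tps = SENSORS * TXS_PER_SENSOR_PER_SECOND
print(f"{required_tps:,.0f} tps required")   # 100,000 tps
```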
In the future, Ethereum is expected to adopt Proof-of-Stake, which should address its scaling problems, but Casper and Plasma have a complicated design and the economic incentives of a hybrid Proof-of-Stake and Proof-of-Work protocol with fraud proofs between the two chains.

1.4 Forking issue

Several prominent blockchains (Bitcoin, Ethereum) have gone through the forking process, and this period is destabilizing because of uncertainties over the integrity of the forked chain, miner economic incentives, and user demand. Forks happen because of the need to change code in ways that cannot be done with a minor upgrade, and are necessary to meet application requirements that were not thought of in the initial design. An additional reason for a fork is to reverse a malicious transaction that has taken place because of an implementation flaw (the DAO). Bitcoin and Bitcoin Cash went through a volatile period after a hard-fork event, where the latter's token value fluctuated between $400 and $1000 within a day.

1.5 Inflation & Volatility

Both Bitcoin and Ethereum have a very high inflation rate of mining, although it does reduce over time. Bitcoin started out at 100% before settling to its current 4% inflation rate. Ethereum's current inflation rate is about 14% but is expected to reduce after a future hard fork. Even then, there is too much reward going toward the miners; in fact, Bitcoin miners have earned $2 billion since its inception. This miner economy is not efficient and would hamper the growth of truly decentralized applications, which desire a protocol with a fair computing and consensus price and lower energy use. Both Bitcoin and Ethereum also have a history of very high volatility. Bitcoin lost 30% of its value within 48 hours of Jamie Dimon's comments, and Ethereum lost 20% after fake news surfaced about Vitalik's car crash.
Our protocol grows its intrinsic value as the utility of the applications on our network increases over time.

2 Multi-Dimensional Blockchain

2.1 Multiple Dimensions

We propose a novel blockchain that solves the problems of cost, scalability, fork instability, and high inflation. Our blockchain is n-dimensional, with multiple chains based on different forkable parameters detailed later in Section 3, and with incentives for the consensus, computing, and storage entities (miners, sharders, and blobbers) to scale the blockchain with a high level of integrity and security. See Fig. 1. The miners generate and validate a block, sharders store the blocks, and blobbers store unstructured data. With multiple chains under one native token, we enable multiple verticals to be satisfied with forkable parameters, without the need for a new blockchain. The sharders help reduce the compute, memory, storage, and bandwidth requirements of the underlying hardware implementation, and allow for fast indexing and access of data and code. Note that the definition of sharding here is different from that described in Ethereum and Zilliqa (https://www.zilliqa.com), where sharding implies different consensus sets. In our case, sharding is defined as splitting the chain for manageable storage and access. The blobbers help reduce the cost and complexity of storing content for web and IoT applications. The purpose of having multiple blockchains is for a single protocol to be applicable to various verticals, and to decouple the value of the underlying token from its utility for mining, sharding, and blobbing activities.

Fig. 1: n-dimensional protocol scalability with chains, miners, shards, and blobs: M primary miners, N secondary miners, W bench miners; P primary shards, S secondary shards, W bench shards; Q primary blobs, S secondary blobs, W bench blobs; D (data) chains; C (code) chains.

2.2 Code and Data Chains

There are protocols today that have the concept of sidechains for better speed and scalability.
In the case of Blockstream, the sidechains are pegged to the Bitcoin network, so merged mining can be used for verification of the blocks on the sidechains to prevent hash attacks on a new set of miners. In a similar vein, Plasma's concept is expected to enable micro-transactions on its off-chain and periodically use fraud proofs to peg the states back to the Ethereum network. These changes to the legacy Bitcoin and Ethereum may help patch up transaction scalability but seem too complex, unstructured, and expensive for a web application that needs to scale deterministically at a low cost. As shown in Fig. 2, we present two blockchains forked from the genesis block on the 0chain network. The purpose of having two chains at the very onset of the network is to separate transactions into stateful and stateless buckets. The separation is easier for development, as it conforms to the MVC (Model-View-Controller) architecture used by most enterprise-grade applications, where the model represents data or the database, the controller embodies methods that work with data and change states, and the view typifies visualization of the data by the client. The data-code separation places different memory requirements on the miner's infrastructure. A stateful chain needs all of its states to be in memory to facilitate changes to those states. A stateless chain can be placed on SSD or disk depending on the frequency of access. There is hardly any memory requirement to process a data transaction, as there is no need to know the previous state. An IoT data set, Oracles (real events represented on the blockchain), or published content are examples of 'stateless' data that have no memory or coding requirement. Such transactions can be processed much faster and kept on SSD or disk after the block is mined.
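The stateful/stateless split above can be illustrated with a minimal sketch; the type and function names here are hypothetical, since the paper does not specify an implementation:

```python
# Hypothetical sketch of the data/code (stateless/stateful) split:
# a stateless data transaction needs no prior state and can live on
# SSD or disk once mined, while a stateful code transaction needs its
# states held in memory for fast execution.
from dataclasses import dataclass

@dataclass
class Transaction:
    payload: bytes
    is_code: bool   # True -> 'stateful' code chain, False -> 'stateless' data chain

def storage_tier(tx: Transaction) -> str:
    """Pick where the transaction's state lives, per the MVC-style split."""
    return "memory" if tx.is_code else "ssd_or_disk"

sensor_reading = Transaction(payload=b'{"temp": 21.4}', is_code=False)
micro_service = Transaction(payload=b"\x60\x60\x60", is_code=True)

print(storage_tier(sensor_reading))   # ssd_or_disk
print(storage_tier(micro_service))    # memory
```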
And so, the block time for such a chain can be set shorter than for a 'stateful' chain. The miner incentives for a data chain are expected to be less than for the code chain, because the infrastructure cost is much lower.

Fig. 2: Dual-chain blockchain protocol. The genesis block splits the blockchain into a data (model) chain (IoT, Oracle, data, stateless) and a code (controller) chain (smart contracts, code, micro-services, stateful).

A micro-transaction, such as paying for coffee, or a set of micro-services that converts raw IoT data to calibrated data, or to different datasets such as hourly and daily averages, are examples of 'stateful' code: they need states, and the code needs to be loaded in memory for faster execution. For a large enterprise application, which may have thousands of micro-service calls, it makes sense to have all the states and byte-code in memory to achieve a result faster than if it were constrained by the disk I/O of SSDs. One can conceivably use a larger block time on a code chain for applications that require a larger processing time to generate an output.

2.3 Self-forking Multiple Chains

Fig. 3 further depicts the self-forked chain dimension of the blockchain. The self-forking process involves a 2/3 majority of stakeholder votes to approve a fork proposal. After such a fork has been established, an initial set of miners is chosen for the new chain. For a transaction to be sent to this chain, an easy implementation is to address the transaction to a specific chain address. If there is no address, the transaction defaults to the genesis chain (which is a code chain). Miners on each chain ignore transactions in their buffer queue that are not addressed to them. Depending on whether data or code is sent to a code or data chain, the miner treats it as such: byte code sent to a data chain is treated as just a piece of data, while if sent to the code chain, it is executed as code.
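The chain-addressing rule just described can be sketched as follows; the field names follow the {from, to, data/code, chain} transaction layout of Fig. 3, but the function names and the genesis address value are assumptions for illustration:

```python
# Hypothetical sketch of chain-addressed transaction routing: a
# transaction carries an optional 'chain' field; with no address it
# defaults to the genesis (code) chain, and a miner drops transactions
# not addressed to its own chain.
GENESIS_CHAIN = "0x0"

def route(tx: dict) -> str:
    """Return the chain a transaction belongs to (genesis if unaddressed)."""
    return tx.get("chain") or GENESIS_CHAIN

def accepts(miner_chain: str, tx: dict) -> bool:
    """A miner ignores transactions not meant for its chain."""
    return route(tx) == miner_chain

tx = {"from": "0x0000", "data": "0x000"}   # no 'chain' field
print(route(tx))                            # 0x0 (defaults to genesis)
print(accepts("0x1", tx))                   # False: another chain's miner drops it
```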
Fig. 3: Self-forking chain.

2.4 Miner MxN dimension

Fig. 4 shows the miner dimension of the blockchain in a typical DPoS (delegated proof-of-stake) configuration, where M miners are delegated by stakeholders through a voting process. As is typical in a DPoS scheme, one miner produces a block while the others verify it. In order to make the consensus deterministic, instead of probabilistic (as in Bitcoin, Ethereum, and others), we looked at several algorithms, such as PBFT and Paxos, to address this Byzantine problem. We have drawn inspiration from Paxos and DPoS schemes, and have innovated around the ability to produce a deterministic consensus within a round of generating a block in order to confirm it, achieving the fastest finality of any blockchain, where a round is defined as the progression of the block through M miners. The miners are then shuffled in a random order by a scheme detailed later in the paper. In our scheme, we add an additional group of miners that backs up the M set of primary miners, forming an MxN set, where M are the designated primary miners and N are the secondary miners. The purpose of the backup miners is to prevent malicious transactions, DDoS attacks, withholding, and censorship by primary miners. Additionally, if a primary miner is offline or has unusually high latency, backup miners are able to advance a block to the network. Out of the N blocks generated during the block-production slot, if n/(2n+1) have the same Block Hash, then that block is selected to be added to the chain, where n is the number of total miners in the active set that need to agree to confirm a block. Typically, the N set would be small; otherwise it would be similar to proof-of-work, where all the miners are generating blocks. This architecture results in dynamic decentralization, contrary to the traditional static decentralization of Ethereum, Zilliqa, and others.
In this scenario, as our bench miner pool grows, we do not need a big miner set, because the probability of an attack dwindles without the need to engage a large miner set.

If the set of miners is kept small, then clients can conduct a fast and easy validation of the last m blocks produced to determine whether a transaction has been processed. SPV (simple payment verification), sometimes referred to as light-client validation, can easily be done by syncing with one of the miner nodes, compared to proof-of-work or naive proof-of-stake. In the latter cases, all the nodes are producing blocks and uncles, and there is no way to verify that a node is malicious and has the finalized blocks, other than conducting a proper Merkle validation at the light client or completely syncing with the node. The way it is done now for Ethereum is to download the block headers and use a distributed hash table of trie nodes to verify a transaction, check an account balance, validate a block, or monitor an event. In our MxN set, only a few miners exist, so by connecting to the miners, the client can sync up much faster to a node; once the client establishes the node to be honest, it can query the node for specific transactions, account balances, block validation, or events. To prove a node honest, one verifies the signatures of the mined blocks, or one can compare the Block Hashes of the latest blocks from the M miners and determine whether they are consistent. (The Block Hash is defined as a collection of all hashed transactions.)

[Fig. 3 labels: a data chain and future chains approved by stakeholders fork from the genesis block; a transaction takes the form Tx Data/Code: {from: 0x0000, to: optional, data/code: {0x000}, chain: 0x0}; depending on which chain it is sent to, the payload is treated as data or binary code, and a miner ignores a transaction not meant for its chain.]
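The honest-node check just described reduces to a consistency comparison; a minimal sketch, assuming the light client has already fetched the latest Block Hash from each of the M primary miners (the function name is hypothetical):

```python
# Sketch of the light-client honest-node check: fetch the latest Block
# Hash from each of the M primary miners and accept the view only if
# they all agree.
def consistent_view(latest_hashes: list[str]) -> bool:
    """True if all M queried miners report the same latest Block Hash."""
    return len(set(latest_hashes)) == 1

# Example: three primary miners queried by the light client.
print(consistent_view(["abc1", "abc1", "abc1"]))   # True: safe to query further
print(consistent_view(["abc1", "abc1", "beef"]))   # False: views diverge
```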
Fig. 4: Miner dimension of the blockchain, consisting of primary, secondary, and bench miners.

2.5 Shuffling scheme

The shuffling of the MxN set is critical, as it determines the proper decentralization process; otherwise attackers can hone in on one miner, or be the miner that generates the random seed. The idea of generating a random seed is inspired by the method proposed by Tezos. Each MxN miner generates a hash of a random number in one cycle and posts it on the data chain; in the next cycle, the miners reveal their random numbers, and the resulting seed is generated from the combined random numbers of the miners. The seed is used to deterministically map to a new MxN set, such that at least one member of the MxN set is dropped in favor of another from the W bench set. In the next cycle, the miners use the random seed to determine whether they are in the active set, and whether they are in the primary or secondary category. In this way, a priori knowledge of miner status is avoided, to prevent a focused attack on a specific miner. The whole process of generating a shuffled MxN set with a new member from the bench is recorded and can be verified by anyone.
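The commit-reveal cycle above can be sketched as follows. The concrete choices here (SHA-256 for the commitments, hashing the concatenated reveals into the seed, and a seeded Fisher-Yates shuffle) are illustrative assumptions; the paper specifies only the commit/reveal structure and the bench swap:

```python
# Sketch of the commit-reveal seed generation and MxN shuffle.
import hashlib
import random

def commit(r: int) -> str:
    """Cycle 1: each miner posts the hash of its random number on the data chain."""
    return hashlib.sha256(str(r).encode()).hexdigest()

def seed_from_reveals(commits: list[str], reveals: list[int]) -> int:
    """Cycle 2: verify each reveal against its commitment, then combine into a seed."""
    assert all(commit(r) == c for c, r in zip(commits, reveals)), "bad reveal"
    joined = "".join(str(r) for r in reveals).encode()
    return int.from_bytes(hashlib.sha256(joined).digest(), "big")

def shuffle_active_set(active: list[str], bench: list[str], seed: int) -> list[str]:
    """Deterministically reshuffle, swapping one active member for a bench member."""
    rng = random.Random(seed)               # same seed -> same result, so anyone can verify
    new_set = active[:]
    new_set[rng.randrange(len(new_set))] = bench[rng.randrange(len(bench))]
    rng.shuffle(new_set)
    return new_set

reveals = [42, 7, 1999]
commits = [commit(r) for r in reveals]
seed = seed_from_reveals(commits, reveals)
print(shuffle_active_set(["m1", "m2", "m3"], ["b1", "b2"], seed))
```

Because the shuffle is driven only by the published seed, any observer can recompute the new MxN set and verify the recorded outcome.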
