Oleg Andreev



Software designer with a focus on user experience and security.

You may start with my selection of articles on Bitcoin.

Translations of some articles into Russian.



Product architect at Chain.

Author of Gitbox version control app.

Author of CoreBitcoin, a Bitcoin toolkit for Objective-C.

Author of BTCRuby, a Bitcoin toolkit for Ruby.

Former lead dev of FunGolf GPS, the best golfer's personal assistant.



I am happy to give you an interview or provide you with a consultation.
I am very interested in innovative ways to secure property and personal interactions: all the way from cryptography to user interfaces. I am not interested in trading, mining or building exchanges.

This blog enlightens people thanks to your generous donations: 1TipsuQ7CSqfQsjA9KU5jarSB1AnrVLLo

How SegWit makes security better

Some people on the internet underappreciate how important Bitcoin's transaction malleability problem is. Fixing it is one of the things the Segregated Witness upgrade is going to do.

Let's take a look at the Bitcoin Covenants paper from 2016, which introduces a "Vault" feature. I will briefly explain how Vaults work and how you would implement them in today's Bitcoin without SegWit, and with SegWit in two weeks.

First of all, what is a Vault? A slightly simplified version looks like this: you move funds to a special address V ("Vault") from which you can only move them to address W ("initiate a withdrawal process") and to no other address. There are two paths from W. Path 1 allows moving funds from W anywhere after a 24-hour delay since the initiation of the withdrawal (that is, since the transaction V->W was published), using only an active key A. Path 2 allows moving funds anywhere without a delay, but requires both the active key A and a recovery key R (or multiple recovery keys).

Here’s a diagram:

initial deposit: $ -------> V

initiate withdrawal: V -------> W, using key A

unlock after delay: W --(a)--> *, using key A & 24h delay

recover funds: W --(b)--> *, using keys A & R, no delay

The theory behind the Vault is that regular keys are convenient to use (they are always ready), but since they are more vulnerable, we require publication of a "withdrawal attempt" with a grace period. If the withdrawal was legitimate, the user simply has to wait for the payment to go through. If it was not, the user has time to notice the attempt and dig up the recovery keys: maybe ask friends to co-sign a transaction, or retrieve a printed copy from a deep hole in the backyard. The time delay could be proportional to the amount of funds in question. Small amounts won't use any vault, medium amounts would use a 24-hour delay, and long-term savings would use a 72-hour delay plus a multisig recovery setup with trusted friends and family.

Today Bitcoin does not have the CheckOutputVerify function suggested in the Covenants paper. To simulate it for the Vault case we can use a temporary key V for the Vault address and a pre-signed transaction V->W. Key V is destroyed after address V is funded. Then a withdrawal can only be initiated by publishing the transaction V->W. Address W can use an already existing feature, CheckSequenceVerify, which enforces a relative timelock (e.g. "24 hours since publication"), plus If/Else branches to allow a recovery path without a delay.

If we did not have transaction malleability, we would simply do the following:

  1. Create a temporary key V.
  2. Create and pre-sign transaction T1 that pays to V.
  3. Create and pre-sign transaction T2 that pays from V to W.
  4. Delete key V.
  5. Publish transaction T1.
  6. Store transaction T2 encrypted with an active key A.
  7. To make a withdrawal: publish transaction T2, pre-sign a delayed spending from transaction T2 into a given address, publish later when timelock is over.
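The pre-signing trick in the steps above can be sketched with a hash-based one-time signature, which needs nothing beyond a hashing library. This is a toy model, not the actual Bitcoin machinery: a Lamport one-time key stands in for the real key V, and the helper names (`keygen`, `sign`, `verify`) are illustrative.

```python
import hashlib
import os

def H(b):
    return hashlib.sha256(b).digest()

def keygen():
    # Lamport one-time key: 256 pairs of secrets, public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(a), H(b)) for a, b in sk]
    return sk, pk

def bits(msg):
    d = int.from_bytes(H(msg), "big")
    return [(d >> i) & 1 for i in range(256)]

def sign(sk, msg):
    # Reveal one secret per message bit.
    return [sk[i][b] for i, b in enumerate(bits(msg))]

def verify(pk, msg, sig):
    return all(H(s) == pk[i][b] for i, (s, b) in enumerate(zip(sig, bits(msg))))

sk_v, pk_v = keygen()       # step 1: temporary key V
t2 = b"vault V -> W"        # step 3: the only allowed spend, transaction T2
sig_t2 = sign(sk_v, t2)     # pre-sign T2
sk_v = None                 # step 4: destroy the private key

assert verify(pk_v, t2, sig_t2)                 # T2 is still spendable
assert not verify(pk_v, b"V -> thief", sig_t2)  # nothing else ever will be
```

Once the private key is gone, the pre-signed T2 is the only transaction the "network" will ever accept from V, which is exactly the covenant-like behavior the steps describe.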

If we do have to deal with transaction malleability, we cannot do steps 1-6 at once. We would have to store the temporary key V for much longer, persisted on disk (instead of in RAM for a millisecond), until transaction T1 is well-confirmed and the risk of a chain reorganization containing a mutated version of it is very low.

What's worse, we would introduce a new vulnerability. For regular payments, a reorg may cancel the payment, which the sender can retry. If you send to yourself, this is not a problem. But if you erase the temporary key and the pre-signed transaction T2 is the only way to recover your funds, a reorg can lock you out forever. So you have to wait a significantly long time before a non-trivial amount of bitcoins is securely locked by transaction T1.

While you wait, key V has to be stored and backed up. If it is backed up, you need to protect it with a very strong password; but the whole premise of the scheme is to allow storing the active key A with a mediocre password, because strong passwords do not really exist. So if key V leaks while you wait, you lose all protections: the attacker does not have to use your specially-crafted transaction T2, but can simply sign anything with key V directly.

So you need to protect V more strongly than you would protect A. You can achieve that with the same multisig setup you would use for the recovery key(s) R: ask your friends to act as a collective additional key to your temporary key V to pre-sign transaction T2 (using my blind signature scheme to maintain privacy). But now your friends are no longer just a recovery mechanism (which you may never have to use); you have to ask them every time you move money into a vault.

As a result, your wallet has to be much more complicated, with a lot of moving parts, more user interactivity, and additional security assumptions instead of doing one obvious thing with one push of a button.

Segregated Witness fixes transaction malleability and makes highly desirable security schemes possible. This is much more important than a minor increase in transaction throughput, because Bitcoin is stored most of the time and only occasionally changes hands.

ELI5: How digital signatures actually work

Let's start with whole numbers.

We will use lower-case letters for numbers. Here is a number:

a = 42

We will also use points on some elliptic curve.

Points are simply pairs of very large numbers that satisfy an elliptic curve equation.

We will use upper-case letters for points. Here is a point:

A = (4, 68)

Elliptic curves allow a special kind of arithmetic on points. Two points can be "added" to produce another, seemingly random, point on the elliptic curve:

C = A + B

A point can be added to itself several times:

D = C + C + C

When one point is added many times, we will say it is "multiplied by a number":

D = 3·C

It turns out that if you add point A to itself a lot of times (in other words, multiply it by a large enough number) and get another point B, it will be hard to figure out what that number was, given only the original point A and the resulting point B.

"Hard" means that to find that number of additions, you cannot simply "divide" B by A, but have to enumerate many possible numbers x and check whether x·A produces B. So if x is very large, larger than the number of atoms in the universe, checking all possibilities will take too long to bother. At the same time, if one knows the correct x, computing x·A is pretty fast. We will use this asymmetry heavily in the discussion below.
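This asymmetry can be felt even in a toy setting. The sketch below uses multiplication modulo a small prime in place of curve points, so exponentiation plays the role of "multiplying a point by a number"; the numbers are illustrative and far too small to be secure.

```python
# Modular exponentiation stands in for "x·A" on a curve: computing it
# is fast, inverting it (the discrete logarithm) requires brute force.
p = 65537            # a small prime; 3 generates the whole group mod p
B = 3                # the shared base ("point")
x = 54_321           # the secret multiplier

X = pow(B, x, p)     # fast: square-and-multiply, ~log2(x) steps

def brute_force_log(B, X, p):
    """Recover x by trying every candidate number of 'additions' in turn."""
    acc, n = 1, 0
    while acc != X:
        acc = (acc * B) % p
        n += 1
    return n

assert brute_force_log(B, X, p) == x   # feasible only because p is tiny
```

With a real 256-bit group the forward direction stays just as fast, while the brute-force loop would outlive the universe.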

Now we have enough building blocks to play a game.

Our players will be Alice (the prover) and Bob (the verifier).

Alice will need to prove that she knows some secret number x in a way that Bob does not learn that number.

Bob will need to verify that Alice indeed knows the number x and did not cheat.

Before we start, we need a base point on the elliptic curve. It does not matter which point exactly, only that it’s a common point for both players.

B := some common point

Then, Alice chooses a number x in a way that’s unpredictable to the verifier:

x := random number

Alice converts her secret number to a point by adding the base point B to itself x times and producing new point X:

X := x·B

Alice sends point X to Bob and they can start the game. Bob knows points B and X, but due to asymmetry of point arithmetic, cannot find out number x efficiently.

Bob now asks Alice to prove the knowledge of such x that X is a result of adding B to itself x times.

Let's start with a naïve approach: what if Alice simply sends the number x to Bob? Bob can then compute x·B and check that it equals the X received earlier. The proof succeeds, but now one more person knows the secret x, which is not very useful.

To improve the scheme, we will blind the number x so that the verifier does not learn it. Alice will choose an additional number r, equally unpredictable to Bob:

r := random number

Then, Alice will add numbers x and r together:

s := x + r

Note that r should be secret as well, otherwise x can be calculated from s and r by a simple subtraction.

Finally, Alice converts r to a point, just like she converted x earlier, by adding base point B to itself r times:

R := r·B

Alice sends the number s and point R to Bob.

The number s here represents a blinded secret, or, in other words, “secret x hidden in some noise r”. Point R is used to tell Bob enough about the noise to verify the proof, but not reveal the actual amount of noise.

Bob adds X (received before the start of the game) and R together:

S := X + R

Then, Bob converts s to the point:

S’ := s·B

And, finally, checks that S equals S’.
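This first (flawed) blinding scheme can be played out in a toy group: multiplication modulo a small prime stands in for point addition, so "r·B" becomes modular exponentiation and "X + R" becomes a product. All values are illustrative.

```python
import secrets

p, B = 65537, 3          # toy multiplicative group instead of a curve
q = p - 1                # exponents ("numbers") live modulo the group order

x = secrets.randbelow(q)     # Alice's secret
X = pow(B, x, p)             # X := x·B  (here: B**x mod p)

r = secrets.randbelow(q)     # Alice's noise
s = (x + r) % q              # s := x + r
R = pow(B, r, p)             # R := r·B

# Bob's check: S := X + R must equal S' := s·B.
S = X * R % p                # "point addition" becomes multiplication
S_prime = pow(B, s, p)
assert S == S_prime          # the honest proof verifies
```

The check passes for an honest Alice; the next paragraphs show why this version is nevertheless broken.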

This scheme protects the secrecy of x because from number s and points X and R it is very hard to extract underlying numbers x and r.

Unfortunately, this scheme allows Alice to cheat.

Alice knows that Bob will be adding point X to her temporary point R and then asking for the underlying number s. If Alice does not actually know the number x, she can subtract point X while constructing her point R:

R := r·B – X

When Bob adds R and X together, X will get cancelled out and only r·B will remain:

S := X + R

S := X + r·B – X

S := r·B

Alice then sets the blinded number s to r without any knowledge of x and satisfies Bob’s check that s·B == R + X.
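The cheat is easy to demonstrate in the same kind of toy group (multiplication modulo a small prime in place of point addition, a modular inverse in place of point subtraction). Here Alice passes Bob's check for a public point X whose secret number she never knew; the specific numbers are illustrative.

```python
import secrets

p, B = 65537, 3          # toy group: multiplication stands in for point addition
q = p - 1

X = 31337                # a "public point" whose secret number nobody knows

r = secrets.randbelow(q)

# R := r·B − X  becomes  R := B**r * X**(-1) in the multiplicative group.
R = pow(B, r, p) * pow(X, -1, p) % p   # pow(X, -1, p): modular inverse (Python 3.8+)
s = r                    # Alice uses her noise alone as the "blinded secret"

# Bob's check of s against S := X + R still passes: X cancels out.
assert pow(B, s, p) == X * R % p
```

The assertion succeeds even though no one in this script knows the discrete logarithm of X, which is exactly the forgery described above.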

To prevent Alice from cancelling out X, Bob will use a trick. Instead of adding X to R one time, Bob will add X many times. We will call this number e:

S := e·X + R

More importantly, Bob will choose the number e only after Alice sends her point R.

Previously Alice was choosing random numbers (x and r), but now it is Bob’s turn.

The number e does not have to be secret forever: it only has to stay unknown to Alice until she has chosen her number r and sent the point R.

e must be unpredictable enough to Alice so that she cannot preemptively cancel out e·X from R like she cancelled out X in the previous example.

So the updated scheme looks like this:

Alice chooses random number r and sends point r·B to Bob.

Bob chooses random number e and sends it to Alice.

Alice knows that Bob will add X to R e times, so she has to add x the same number of times:

s := e·x + r

The original secret x is blinded by the temporary secret r as usual, but it is also multiplied by the number e, whose sole purpose is to prevent Alice from cancelling X out of R.

Alice sends s to Bob.

Bob checks that s is really an underlying number for a composite point made of R and X:

s·B == R + e·X

Bob is now certain that Alice could not have come up with the number s other than by knowing both numbers r and x, because X and R were fixed before e was given to Alice.

Notice that up until the introduction of the anti-cheating device e, Bob was not sending anything back to Alice. The protocol became interactive only when Bob required Alice to commit to the point R (which contains the noise) before telling her the random number e.

As a result, instead of one move (Alice sending a proof to Bob), the protocol became a three-move one: Alice sends Bob R, Bob challenges Alice with e, and Alice sends back the final proof s.

Because that three-move structure looks like the Greek letter sigma (Σ), the protocol is called a "sigma protocol".

To make it more practical and suitable for digital signatures, we should make it non-interactive.

Alice should be able to produce a proof that can be checked by Bob or anyone else without interacting with Alice again, but maintaining the three-move structure that prevents cheating, yet protects secret numbers.

As it turns out, we can use a hash function to make sure e is not predictable until R is specified. If Alice uses a hash function deemed secure by Bob, then Bob can be perfectly satisfied with e computed pseudo-randomly as follows:

e = Hash( R )

If Alice uses such an e, she will not be able to use it to cancel out e·X from R. She would have to try many possible numbers e, computing R := r·B – e·X for each, until the hash function returned that same e as output for that R.

If the hash function returns large enough numbers and mixes its input well, such a process would take an impossibly long time.

Which means Alice will not be able to cancel out e·X from her noise commitment R, and will be forced to calculate the number s correctly using both secrets x and r:

s := e·x + r

The full protocol would look like this:

Alice chooses random r, computes R := r·B, uses hash function to get a random “challenge” e := Hash( R ), and computes s := e·x + r.

Alice sends point R and number s to Bob who verifies that s·B equals R + e·X. In fact, not only Bob, but anyone else in the world can independently verify that proof.

Finally, to make a signature out of the proof, Alice needs to customize hash function with a message that she is signing. This is needed to make sure that a signature for one message cannot be reused for another message.

This customization is typically done by providing not only R to the hash function, but also the message itself:

e := Hash(R, message)

A good hash function returns a different output if even one bit of the message changes, making the already computed s invalid. A different e means that X must be added to R a different number of times, so s must be adjusted by e times the secret x, and only knowledge of x allows doing that; this is equivalent to legitimately signing a different message.

And this is how Schnorr signatures work, e.g. the EdDSA standard described in RFC 8032.

PS. A compact outline of the protocol:

Setup:

  x := random number     (aka private key)
  X := x·B               (aka public key)

Sign:

  r := random number      (aka nonce)
  R := r·B                (aka commitment)
  e := Hash(R, message)   (aka challenge)
  s := r + e·x            (aka response)
  return (R,s)            (aka signature)

Verify:

  receive (R,s)
  e  := Hash(R, message)
  S1 := R + e·X
  S2 := s·B
  return OK if S1 equals S2
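The outline above can be turned into a tiny working sketch. A multiplicative group modulo a large prime stands in for the elliptic curve (so "s·B" becomes modular exponentiation and "R + e·X" becomes a product); this is a toy for illustration, not a secure implementation, and the helper names are my own.

```python
import hashlib
import secrets

p = 2**127 - 1      # a Mersenne prime; its multiplicative group replaces the curve
B = 3               # shared base ("point")
q = p - 1           # exponents ("numbers") are reduced mod the group order

def Hash(R, message):
    digest = hashlib.sha256(f"{R}|{message}".encode()).digest()
    return int.from_bytes(digest, "big") % q

def keygen():
    x = secrets.randbelow(q)        # private key
    return x, pow(B, x, p)          # (x, X := x·B)

def sign(x, message):
    r = secrets.randbelow(q)        # nonce
    R = pow(B, r, p)                # commitment R := r·B
    e = Hash(R, message)            # challenge
    s = (r + e * x) % q             # response s := r + e·x
    return R, s

def verify(X, message, signature):
    R, s = signature
    e = Hash(R, message)
    # s·B == R + e·X becomes B**s == R * X**e (mod p)
    return pow(B, s, p) == R * pow(X, e, p) % p

x, X = keygen()
sig = sign(x, "pay Bob 1 coin")
assert verify(X, "pay Bob 1 coin", sig)
assert not verify(X, "pay Bob 9 coins", sig)   # new message, new challenge e
```

The two assertions mirror the discussion: the signature verifies for the signed message, and changing even the message's amount produces a different challenge e that the old response s cannot satisfy.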

Miners should be hubs on Lightning Network

Some miners are worried about fees. What if blocks remain small and most transactions are cleared on the Lightning Network? Miners will earn very little while the block reward is quickly coming to an end. An ignorant answer is "let's just raise the block size indefinitely". A slightly less ignorant answer is "fees will be higher, and BTC will be worth more, so don't you worry". What if the actual answer is that miners could be the best Lightning Network hubs?

Here's how it could work. The Lightning Network scales Bitcoin payments by compressing chains of transactions into a single transaction, protected by mutual security deposits on both sides of each channel and some clever smart-contract conditions that make cheating more expensive than playing by the rules. Security deposits lock up money and put a natural limit on how much value can be transferred through how few hops. The more fees a node wants to earn, the more money it has to lock up in order to service more peers.

Why are miners in a privileged position to profit from the Lightning Network? They receive large amounts of coins (reward + fees) that are unspendable for 100 blocks (≈16 hours, the so-called "coin maturity"). That means that, unlike any other bitcoin holder, who can put bitcoins to better use at any time, miners cannot use their coins for over 100 blocks. A miner could use these coins to open many payment channels with interested users (who would pay their own deposits in separate transactions, for free).

Miners, by virtue of having access to large amounts of funds, could open fat payment channels between each other to connect their users through a much smaller number of hops, making payments cheaper and faster for users while capturing a larger chunk of off-chain transaction fees.

The per-block reward and on-chain fees would end up locked for longer than the 100-block interval, which is even healthier for the network: all miners become motivated to extend a single chain not for a few days, but for months!

Therefore, problem solved and everyone’s happy.

  1. No hard forks are necessary.
  2. Large per-block rewards put to good use.
  3. Coin maturity increased by 10-100x significantly reducing risk of blockchain fork.
  4. Users get channel opening for free.
  5. Users get faster and cheaper LN payments by having lower number of hops.
  6. Power-hungry miners remain in minority and stop pushing stupid consensus changes.
  7. /r/btc goes apeshit.

Fork wars

After some miners declared a hard-fork war on Bitcoin to give miners power over the block size, an anonymous Bitcoin developer known as shaolinfry declared a soft-fork war in response (the so-called "user-activated soft fork", or UASF), aimed at forcing miners to respect segregated witness rules.

It’s interesting to compare consequences of possible outcomes of each war.

If either of these proposals fails to get off the ground (that is, the majority of miners ignore the HF, or the majority of the economy does not enforce segwit), nothing really changes.

If either proposal leads to a harsh split, the outcome is the same for both: the majority of hashrate diverges from the rules enforced by the economic majority, uncertainty leads to a price drop, and (unless things get corrected in one direction or the other) Bitcoin is doomed, the experiment is over, and everyone can go home.

So what happens when either of proposals actually succeeds?

In the first case, if miners convince stakeholders that Bitcoin Unlimited is the way to go, stakeholders would effectively grant miners the requested powers and accept the cost of the risks and difficulties of adopting hard forks.

However, if the UASF succeeds, that is, users convince miners to enforce additional rules (or at least not to interfere with them), then miners would accept the limits of their power and agree to the users' priority in decision-making around consensus.

The win of a hard fork would demonstrate that Bitcoin is governed by producers of proof of work, and majority of users would simply delegate all “checks and balances” to a minority of users who run mining pools.

The win of a UASF would demonstrate that the role of miners remains restricted and the rules of the protocol are decided by the whole community of users, including miners.

Assets are the new cryptographic primitive

Computer science, and applied cryptography in particular, has a hierarchy of building blocks, where higher-order blocks are composed of lower-order ones.

Roughly, the hierarchy looks like this:

  1. Charge and current in electric circuits
  2. Bits
  3. Bytes
  4. Words
  5. Data structures
  6. Permutations: block ciphers, hash functions
  7. Big numbers
  8. Self-authenticated data structures (e.g. hash-trees)
  9. Symmetric encryption and authentication
  10. Public key cryptography: digital signatures, shared secrets, asymmetric encryption.
  11. Certificates and chains of trust (e.g. X.509, PGP web of trust)
  12. Timestamped append-only logs (e.g. Certificate Transparency)

Blockchain protocols are made of these building blocks in order to offer a new kind of a building block: the digital asset.

Digital assets simplify and expand some schemes that struggle with lower-level primitives such as digital signatures and certificates.

In money: digital assets are bearer instruments that can be exchanged between parties that do not trust each other, while signatures only facilitate point-to-point exchange between trusting parties.

In supply chains: digital assets represent certificates of acceptance, enabling end-to-end security for each participant in the supply chain, automating provenance and improving the security of payments. E.g. a payment can be locked by the condition that a particular set of certificates is produced, instead of being deferred to a third-party escrow that increases the surface of vulnerability.

In consumer payments: digital assets are used to represent not only payment instruments (cash, rewards, loyalty points), but also receipts and sometimes products themselves (tickets and prepaid cards).

In things: digital assets represent access tokens to devices running tamper-resistant computers that can be efficiently delegated, used as a collateral, bought and sold. E.g. lockboxes, vending machines and cars.

What about smart contracts? Aren't those the next higher-order primitive? Not quite. Smart contracts use a formal language to describe context-specific policy, so their impact depends on that context. Smart contracts inside a public key infrastructure (e.g. certificates) enable more sophisticated signing rules, but only within the limitations and assumptions of such an infrastructure. Smart contracts that control digital assets take advantage of their bearer-instrument nature, secured by the entire blockchain network, which acts as a very slow but very secure computer. Smart contracts are important, but they play a supporting role in systems built on top of digital assets.

Whenever you wonder how a blockchain protocol could help with a given problem, reframe the question in terms of digital assets. If there is something that can be defined as a digitally transferable thing and that would benefit from automation and improved security of transfers, then you have a reason to consider a blockchain as part of your design. If not, then a blockchain is probably not what you need: it would be either irrelevant (e.g. health records on a blockchain) or grossly inefficient (e.g. an arbitrary computation environment).

What does cryptoanarchy mean

The word "anarchy" simply means rejection of the notion that any human being (or group thereof) can be a kind and wise despot ruling all other human beings. However, many people think "anarchy" means absence of order, lack of ethics and complete apocalypse.

"Cryptoanarchy" means a practical path to protecting human beings against wannabe despots (that is, protecting humans against each other).

Why is it "crypto"? Let's be more specific about what we want to achieve:

  1. Organizations that do what you like should be protected against censorship and shutdown. If you and others want to participate in them, you should have cheap and safe means to do so in a hostile environment.
  2. Organizations that do what you don't like should not be able to extract support from you. They should have to rely on voluntary support from their participants and waste as many resources as possible to gain any involuntary support.

In other words, we need asymmetric security for society: cooperation should be cheap, intervention should be expensive.

"Crypto" comes into play because most cryptographic primitives and protocols are built around the idea of asymmetric security: cheap to use, expensive to break.

It turns out that personal communications, money and financial markets are all digital and can be secured via cryptography. Identities can be obfuscated to protect against physical attacks. Money can be decentralized, and therefore censorship-resistant, and also cheap to secure. Entire financial markets can be distributed and adaptive, supporting complex economic relationships even when some parts of the economy are under political pressure.

Implementing these two points does not automatically guarantee happiness to everyone. But it certainly improves individual liberty and gives room for many people to figure out how they want to live without being locked into a vision of a narrow group of psychopaths.

End-to-end security

Security of the internet gets attention from two angles: security of client-server connections (think TLS) and end-to-end encryption for personal communications (PGP and modern messaging services). These two approaches often seem independent in general and complementary only in the narrow scope of specialized applications.

That's the problem. Even when we eliminate obsolete ciphersuites, require Certificate Transparency everywhere, shrink the number of all-mighty root CAs, and even implement post-quantum crypto throughout TLS, that won't make arbitrary connections much more secure. TLS is only able to guarantee that you are talking privately to a computer that has a magic number in its memory. And since that computer is willing to talk to any random computer on the entire internet, and is probably running far more than 100 lines of code on an insane 30-year-old Stack of Outdated Assumptions, that magic number may have already leaked to some of the other computers you might be talking to right now.

What is needed is an end-to-end security throughout the internet. Not only for the person-to-person messaging, but any other website. Just like a Git repository keeps track of every line of text attributed to a concrete author, every piece of information should be a part of a hierarchical structure where each element is signed off and/or encrypted by a responsible individual.

When I read an article by N. in a newspaper, I don't want a vague promise that the newspaper's sysadmin manages the TLS certificate securely. I want cryptographic signatures from the author, the editor and the publisher, so I can see that the content I'm reading is exactly what the parties involved signed off on. Then even if the server is hopelessly compromised, an attacker will not be able to forge individual authors' writings.

When I go to an insurance website, I want all the data end-to-end encrypted and signed by my doctors and the other parties directly involved in the process, not just the connection secured by some TLS certificate that no one knows when was, or will be, compromised.

When I go to my bank, not only my personal transactions, but also any instructions to my banker should be signed by me and the messages from the bankers should be signed by them.

Every piece of information should be end-to-end secure. On a small level, this means a piece of javascript being linked via its cryptographic hash instead of a foreign CDN address. On a big level, it means all pieces of information being signed off by the end parties responsible for them.
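The "javascript linked via its cryptographic hash" idea exists on the web today as Subresource Integrity (SRI). A minimal sketch of computing an SRI value for a script; the file contents and CDN URL are made up for illustration:

```python
import base64
import hashlib

# Compute a Subresource Integrity (SRI) value for a script, so a page
# can pin the file by its content hash rather than by trusting the CDN.
script = b'console.log("hello");'   # illustrative file contents

digest = hashlib.sha384(script).digest()
integrity = "sha384-" + base64.b64encode(digest).decode()

# The page would then reference the script roughly like this:
#   <script src="https://cdn.example.com/app.js"
#           integrity="sha384-..." crossorigin="anonymous"></script>
print(integrity)
```

If the CDN serves even one changed byte, the browser recomputes the hash, sees a mismatch, and refuses to run the script.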

On one hand, it sounds like Project Xanadu squared: an even bolder and more unrealistic version of the "better web" than the famous 54-year-old vaporware.

On the other hand, black markets and the financial industry are moving in the direction of an end-to-end security revolution. Bitcoin gives individuals radical levels of end-to-end monetary security: people not only invest in it and build innovative, long-overdue services, but also lose their minds and participate in all sorts of scammy ventures that can exist only because of the security Bitcoin provides.

Recently, the financial industry began to take notice and started figuring out how to bring similar end-to-end security to its own activity. My teammates at Chain and I produced the Chain Protocol, which lays a foundation for end-to-end security for financial assets. What we have released so far is just a small step, but hopefully in the right direction. It is aimed at helping the industry disentangle the mess that is the network of clearing houses, brokers, correspondent banks and myriad other intermediaries. As with Bitcoin, traditional assets can be controlled directly, without a thick layer of previously unavoidable promises, fragile relationships and point-to-point reconciliations.

Bitcoin has spawned renewed interest in cryptography, which led us to end-to-end messaging and blockchain protocols for digital assets. Hopefully the trend will continue and we will obtain end-to-end security for all other kinds of activity on the internet.

Time and money

Time is not easily measured in money. It’s rather money that can be measured in time.

Time is not fungible; money is. Yesterday's missed opportunity does not necessarily come back today, even if you have all day. Everyone has their own money, but the timeline is shared by all of us.

You can buy some time with money, but it will be different time, not the time you want back. The good thing, though, is that you can buy money with time, and that will be the same money.

It turns out it's more accurate to say "money is time" than "time is money": making money always takes time, while some time cannot be bought back with any amount of money.

Original vision of Bitcoin

Some people feel bad about Bitcoin being harder to scale than any successful centralized system such as Myspace or Altavista. They often claim "I signed up for a P2P Electronic Cash System, not a settlement layer", which is a way of saying that Satoshi envisioned something other than what we have today.

I'd like to challenge this argument, even though I realize it is absolutely irrelevant: whatever Satoshi thought he was doing, the existence and evolution of Bitcoin is not subject to anyone's wishful thinking, but to humankind's ability to actually improve it.

So Satoshi called Bitcoin an “electronic cash system”. What does that mean?

First of all, "cash" means something other than "quick settlement". It primarily means a bearer instrument, as opposed to a contract with a third party providing credit (as with credit cards, for instance). When accepting "cash" instead of a credit card, I am somewhat protected against reversal of the transaction by a third party, the credit card company. But how exactly am I protected? It turns out there is another third party involved: a centrally controlled mint (e.g. a central bank) that provides difficult-to-counterfeit notes and uses a police force (subsidized by taxes) to discover and eliminate counterfeiters. So instead of two third parties (the CB and the CC company), cash leaves only one (the CB) in our threat model. The CB also adds a risk of currency debasement: if you receive 0.10% of the total currency today, tomorrow it may turn out to be just 0.09%. You are essentially paying rent on money with little assurance of the stability of that rent. Also note the somewhat hidden cost of tax-subsidized minting and law enforcement to protect the authenticity of the money.

Let's scroll back a few hundred years to the age of silver and gold coins. "Cash" was more decentralized: gold is gold no matter what face is printed on it. But why have faces on gold coins at all? Elementary, Watson: because it's a huge pain in the arse to verify a coin on the spot. So central mints were used to provide hard-to-counterfeit stamps that allow quicker verification of a coin's validity. Mints were still a source of debasement risk, but at least some independent verification was possible and debasement could not be done overnight (as the saying goes, Rome was not debased in a day).

So even precious-metal coins are not better than paper cash (if they were, paper would never have taken off in the first place): they seem to be decentralized, but the related costs are so high that to make them useful we still need centralized authorities built around them.

Is this the kind of cash Satoshi attempted to turn into electronic form? Let's read bitcoin.pdf from the very beginning:

A purely peer-to-peer version of electronic cash would allow online payments to be sent directly from one party to another without going through a financial institution.

Note that the Mint (or Central Bank) involved in coinage and printing paper bills is as much a financial institution as your bank or Visa.

If one is to create a truly decentralized bearer instrument, one must make both issuance and authenticity checks decentralized, not simply the transfer mechanism. And the only workable way that we know so far is to collectively (as a civilization) continuously build a proof-of-work chain of transfers authenticated via public key cryptography.

Folks who focus on payments without discussing issuance and the corresponding holding costs (risks of debasement) are like grasshoppers who spend before saving, so when winter comes they all go to the ants begging for a share of what the ants saved. This has happened throughout history: first, ants abstain from unnecessary spending in order to save some food for later, then ants are forced to abstain even more by all the stupid and hungry grasshoppers who come to take their savings. A truly decentralized cash would prevent even the most badass grasshoppers from issuing and debasing their own currency, and the asymmetric security of public key cryptography gives all ants, no matter how poor or rich, equally cheap protection against even the strongest of grasshoppers.

But let's get back to our “original vision of Bitcoin”. We now clearly see that it is not fair to compare Bitcoin's performance to that of Visa's electronic payments (large throughput, but a lot of trusted third parties and risks of reversal and censorship), or even to paper bills or minted coins.

So how does Bitcoin compare to fully decentralized gold bullion, then, the best-known decentralized money before Bitcoin? How many transactions a day can naked chunks of gold settle around the world? How quick is each payment verification? How do costs scale with different payment amounts, from the smallest to the largest? We will leave answering these questions as an exercise to the reader and jump right to the conclusion:

Decentralized physical cash sucks in all ways imaginable compared to Bitcoin. Bitcoin is faster, cheaper and safer than any other forms of decentralized cash that ever existed.

In addition, if you take Bitcoin and build payment layers on top of it by relaxing some underlying security requirements, you will still get better electronic cash than today's paper cash: faster, easier to verify, better protected against debasement, etc.

Satoshi was building a base layer for electronic cash by eliminating trusted third parties as a requirement. He succeeded. Everything else is simply an optimization. If some optimizations relax the security requirements of Bitcoin (e.g. need some level of centralization), then they do not belong to Bitcoin, but to additional layers around Bitcoin.

Bitcoin is designed to be free from intervention as in “fuck you”.


Discuss: Reddit, HN.

Problem with Proof of Stake and “coin voting” in general

The problem with “voting by coins” is that most coins do not vote. This leaves a small fraction of the UTXO set to actually vote, which is neither representative nor stable: anyone willing to risk using idle keys to a large stash of coins can dramatically affect the voting outcome.

Most coins are locked up well “under the mattress” with multisig, time locks and possibly even HSM-controlled keys. Also, owners of long-term stashes do not want their pubkeys exposed from under their hashes, in order to stay better protected against quantum computing development in the long term.

In other words, most coins that matter cannot and will not vote.

This leaves only the least important coins to perform voting. Obviously, the result of such voting will be worthless.

UPDATE: it is possible for people to annotate output scripts with a dummy “voting hash” that commits to a separate pubkey, intended only for voting and stored elsewhere. But then the security of the voting keys is not equivalent to the security of the bitcoin keys, which is what we wanted to begin with: that voters map perfectly to actual bitcoin holders.
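A toy calculation with purely hypothetical numbers illustrates the skew: when most of the supply sits in cold storage and cannot (or will not) vote, even a modest holder who briefly unlocks idle coins dominates the result.

```python
# Hypothetical numbers, for illustration only: with most stake locked away,
# a small active holder can dominate a coin-weighted vote.

total_supply = 1_000_000   # total coins in circulation
cold_storage = 950_000     # locked under multisig/timelocks/HSMs: cannot vote
honest_voters = 30_000     # coins whose owners actually bother to vote
opportunist = 20_000       # one holder who briefly unlocks idle coins to vote

votes_cast = honest_voters + opportunist

print(f"Participation: {votes_cast / total_supply:.1%} of supply")       # 5.0%
print(f"Opportunist's share of the vote: {opportunist / votes_cast:.0%}")  # 40%
```

With 5% participation, a holder of just 2% of the supply swings 40% of the vote — which is the non-representativeness the paragraph above describes.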

Craig Wright is a brilliant troll

First, it is very easy to prove that you are Satoshi:

  1. Take the key from genesis block or from transaction paying to Hal Finney.
  2. Sign a message that includes your meatspace name, your relation to Bitcoin and a recent timestamp in the form of a recent block hash (that’s how you prove that the message could not have been fabricated a long time ago and/or with careful choice of contents).
  3. Publish the message and the signature anywhere.

No one has done that yet.
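The message-construction part of step 2 can be sketched in a few lines. Note the block hash below is a placeholder (not a real block), and the actual ECDSA signing over secp256k1 is omitted — it requires the genesis-block private key, which only Satoshi holds.

```python
import hashlib

# Sketch of step 2: build the message and the digest that would be signed.
# The block hash is a placeholder; in practice you would paste the hash of
# a recent Bitcoin block so the message provably post-dates that block.

recent_block_hash = "00" * 32  # placeholder, NOT a real block hash

message = (
    "I am <meatspace name>, the author of Bitcoin.\n"
    "I control the key from the genesis block.\n"
    f"Recent block: {recent_block_hash}\n"
)

# Bitcoin conventionally uses double-SHA256 as the signing digest:
digest = hashlib.sha256(hashlib.sha256(message.encode()).digest()).hexdigest()
print(digest)  # this is what gets signed with the genesis key and published
```

Anyone can then verify the signature against the well-known genesis pubkey and check that the embedded block hash is recent.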

Second, it’s quite easy to prove that all claims by Mr. Wright regarding his link to Satoshi are either irrelevant (such as general knowledge of how Bitcoin works) or outright fabricated (such as the backdated PGP key demonstrated last year, or a signature copy-pasted from one of Satoshi’s transactions).

Third, it’s easy to see how mass media and some prominent voices in the Bitcoin space are turning the “burden of proof” upside down. Some express doubts, but still prefer to trust (!) and believe (!!) even after being educated about the invalidity of the presented “evidence”. They still wait for a “better proof” coming from Mr. Wright.

In this story Mr. Wright brilliantly demonstrated who should not be trusted anymore on any matter in finance, cryptography or Bitcoin. Oh, what the heck, who is not to be trusted, period.

PS. This video is just perfect: https://www.youtube.com/watch?v=H2euMNmsb_s

wat

Why I think hysteria about block size is market manipulation by big buyers

Yesterday I was bored and tweeted that people flood reddit and blogs with concerns about the block limit in order to buy as many coins as possible before July’s halving, which will trigger a huge price increase and expose to the whole world how important and valuable Bitcoin has become.

Seriously speaking, I don’t see another explanation for the seemingly inconsistent behaviour on the part of some people than either outright stupidity or participation in a mild short-term conspiracy aimed at suppressing the price until the next mining reward drop.

1. If someone’s business model is really at stake, they’d be coding real scalability solutions rather than debating opinions and appealing to authority.

I can understand how respectable Bitcoin businesses such as online exchanges and payment processors earn fees from users’ activity. They obviously would like to process as many transactions as possible in order to earn as much commission as possible. Nothing wrong with that. However, if that’s really the case, then these companies should really invest in a better codebase, improved block propagation techniques, better wallets etc. — in order to be able to say “hey, we’ve improved the overall infrastructure and now we can raise the stupid limit”.

However, the only people who actually fix the infrastructure are those who care about long-term value of Bitcoin which is self-consistent and does not need any conspiracy theory to explain.

2. Some people point to ETH pumping as evidence that people sell BTC for ETH.

This is total bullshit. Ethereum is much harder to scale. Dumping BTC for ETH because of scaling concerns makes no sense.

3. Some think that miners’ hashrate should decide hard fork matters, yet do not like a miner-enforced soft fork that improves Bitcoin in multiple ways.

If miners “should” decide some matters, wouldn’t it be easier to just implement whatever you want using their existing powers (soft forks) rather than demanding that they have more power?

These inconsistent arguments can be explained either by total stupidity, by a big conspiracy theory (“USG wants to sabotage Bitcoin”) or by a small conspiracy theory (“Bitcoin is going to eat the world in a few months and we need to win some time to improve our position in it ahead of Chinese/Russians/Americans”).


Discussion on Reddit

Bitcoin Maximalism

Ok, here’s a rant in favor of so-called “bitcoin maximalism”.

TL;DR: Bitcoin will win the “cryptographic gold” title and every other altcoin imaginable will die. All fancy features like higher capacity, smart contracts etc. will be bolted on top of Bitcoin as long as it’s safe to do so, with any excess demand satisfied by commercial blockchain networks, separate layers and protocols on the side and on top of Bitcoin.

Why am I so sure? Let’s bust some myths.

“Bitcoin must scale to accommodate more users and more transactions, otherwise it will be dumped for another system”

If another system demonstrates how it can offer the same level of safety as Bitcoin (e.g. not being highly centralized and vulnerable to opinions and politics) while allowing higher capacity, it will immediately be adopted by Bitcoin via a soft or hard fork with full support from major holders. That will be much less risky than replaying 7 years of market price discovery. We have already seen examples of bugfixes and improvements smoothly deployed via soft forks.

“Bitcoin must support fancy features like Ethereum does in order to not lose to ETH”

If stakeholders were seriously considering this, they’d rather hard-fork into Aethereum, preserving all their balances, than buy into a corporate offering, which Ethereum is and aspires to become to an even greater extent.

Also, Ethereum is much, much harder to scale, and harder to upgrade to better privacy options, than BTC. So if Bitcoin cannot survive because “it does not scale”, then Ethereum surely cannot either.

“If the miners adopt a hard fork to boost capacity, Big Holders will be required to follow the larger hashrate”

No. Big Holders tolerate the existing mining cartel only as long as it behaves. The mining cartel knows very well that Big Holders are those who give BTC the value that’s converted into the cartel’s daily earnings, and that these holdings are well-protected by tons of irreversible proof of work. Should the mining cartel decide to play dirty, a different proof-of-work algorithm will be adopted (still cheaper than buying into a completely new blockchain) and someone else will get paid for mining all blocks after block N. Coins will be immediately dumped on the legacy chain and safely kept on the new chain with a different PoW.

But most importantly, and above all these specific issues, there’s one fundamental property of Bitcoin:

Should there be a precedent of the market abandoning one consensus in favor of another without exhausting all attempts to maintain it, that would become eternal proof that such a consensus is not safe long-term and can be sabotaged an infinite number of times to satisfy the politics du jour.

And that’s the main reason why Bitcoin will not go away after the multi-billion dollar capitalization achieved over 7 years of expensive market activity. If miners want to stay in the game, Bitcoin will be indefinitely extended with soft forks to address real concerns (those that put on-chain value at risk, not somebody’s business model). And if miners decide to fool around, they’ll be hard-forked out of the game, not the other way around. In the worst case, a bad precedent hurting stakeholders will trigger a nuclear war: everyone will lose money and all decentralized blockchain experiments will be considered irredeemably failed.

None of the above is due to specific design decisions. Bitcoin is, first of all, the civilization’s consensus, no matter how beautiful, ugly, efficient or inefficient it is. Should we prove just once that we can’t reach consensus, we will not deserve a second chance.


Discussion on Reddit

How segregated witness is not the same as bumping block size limit

“Segregated witness” (“segwit”) is a proposed feature that fixes transaction malleability, enables smooth script upgrades, and roughly doubles Bitcoin’s capacity by moving signature scripts out of the transaction inputs into a separate data structure committed to the block via a new rule compatible with older nodes.

Bumping the block size limit is a hard fork: first, everyone must agree to follow the new rules; then everyone willing to verify a payment to themselves has to download and verify bigger blocks. So a minority of less-powerful miners and/or recipients is out of luck: they have to beef up their bandwidth and CPU resources or disconnect from the network. This is how a “hard fork” works.

Segregated witness, among other things, increases the capacity of blocks without forcing everyone to validate bigger blocks. If you expect old-style transactions, you can still validate 1 MB base blocks as you always did. However, if you wish to accept payments via segwit transactions, you have two options: 1) validate the additional data (that is, load and validate all the segregated signature scripts that do not fit into the base blocks); or 2) trust the majority of miners to validate it for you, in which case you validate only the base blocks, ignoring segwit data, or even just use SPV proofs.
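To get a feel for the capacity numbers, here is a sketch of the block accounting segwit introduces (the “block weight” formula from BIP 141, shown here as an illustration, not as consensus code):

```python
# BIP 141 block weight: weight = 3 * base_size + total_size,
# capped at 4,000,000 weight units. Old nodes only ever download
# the base (stripped) block, which stays within the old 1 MB limit.

MAX_WEIGHT = 4_000_000

def block_weight(base_size: int, witness_size: int) -> int:
    """Weight of a block whose stripped part is base_size bytes
    and whose witness data adds witness_size bytes."""
    total_size = base_size + witness_size
    return 3 * base_size + total_size

# A purely old-style block: 1 MB of base data, no witness data.
print(block_weight(1_000_000, 0))        # -> 4000000 (exactly at the cap)

# A segwit-heavy block: 800 KB base + 800 KB witness also hits the cap,
# yet old nodes only see the 800 KB base part.
print(block_weight(800_000, 800_000))    # -> 4000000
```

The asymmetric weighting (base bytes count 4x, witness bytes 1x) is what lets total capacity grow while the base block stays within the limit that non-upgraded nodes enforce.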

Segregated witness can only be used safely if the super-majority of miners enforce it. This can be done in two ways: validating segwit transactions according to the new rules, or not mining segwit transactions yourself and only trusting other miners to mine segwit transactions correctly (see below on why it’s not a huge security hole).

If you are a miner with sufficient resources, you can fully enforce and validate segwit transactions at the expense of larger consumed bandwidth and higher CPU consumption.

If you receive payments and have sufficient resources, you can accept both old-style and segwit transactions doing full validation yourself (at the expense of higher consumed bandwidth and higher CPU consumption).

If you wish to receive only old-style transactions, you can safely ignore all extra overhead of segwit transactions.

If your resources are very constrained, you can opt into accepting old-style transactions at the old costs and use SPV proofs (trusting miners) to validate segwit payments. If you can only afford old-style validation, you may choose to support segwit transactions for lower-value payments and require old-style transactions for higher-value payments.

If you are mining with constrained resources, you may resort to not mining segwit transactions at all and trusting other miners to validate segwit transactions (if any) correctly. You can validate old-style transactions at no extra cost. Why can you trust others not to mess with you? It’s easy. Imagine some miner with 20% hashrate directs half of their hashrate (10% of the total) to create blocks with invalid segwit data. They make you lose 10% of your earnings, but they themselves lose 50% of their income, because half of their blocks are invalid. The cost of attacking a constrained minority of miners is hugely asymmetric: a large-scale attack makes the attacker run out of money much faster than the victim.
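The arithmetic of that asymmetry can be checked in a few lines (using the same 20%/50% numbers as the example above; the loss model is deliberately simplified):

```python
# Simplified loss model for the attack described in the text.

attacker_hashrate = 0.20   # attacker controls 20% of the total hashrate
attack_fraction = 0.50     # half of it mines blocks with invalid segwit data

# Share of all network blocks that are invalid:
invalid_share = attacker_hashrate * attack_fraction   # 0.10

# A non-validating victim miner loses earnings roughly in proportion
# to the invalid blocks it mistakenly builds on:
victim_loss = invalid_share          # ~10% of the victim's income

# The attacker forfeits the full reward of every invalid block they mine:
attacker_loss = attack_fraction      # 50% of the attacker's own income

print(f"victim loses ~{victim_loss:.0%} of income")
print(f"attacker loses {attacker_loss:.0%} of their own income")
```

Scaling the attack up only worsens the ratio for the attacker: doubling `attack_fraction` doubles both losses, but the attacker is always burning a 5x larger share of their own income than the damage inflicted per victim.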

As a result, segwit allows scaling Bitcoin’s capacity in an opt-in way. Those who want to take advantage of the extra capacity need to expend extra resources, but those who do not want to use the feature (no matter how small that minority is) do not need to expend any extra resources at all. Therefore, the censorship-resistance property of Bitcoin remains unchanged.

Why Bitcoin is called Bitcoin

— Would you like to know why it is called “Bitcoin”?

Jane touched her glasses to show she was preparing for one of those lengthy and passionate discussions. She sipped her orange juice and continued without waiting for an answer.

— The person who came closest to creating Bitcoin was Nick Szabo. Have you read his pieces on bit gold, secure property titles and smart contracts?

— I’ve heard of bit gold. It was a precursor of Bitcoin that did not take off, right?

— Not quite. Nick never proposed a specific protocol or algorithm, only an overview. Bit gold was just an open-ended idea. It was not clear how exactly such bit gold “coins” should be generated in a trustless manner, or how their ownership could be verified. Also, in his proposal the coins were not fungible: their value depended on scarcity defined by the complexity of each coin’s proof of work. There were a few other problems. Nick identified the need for a secure title registry, but never proposed a concrete protocol to make it work on a global scale.

— So what ingredient was missing then?

Mike started feeling impatient. It was not the first time he had been drawn into a conversation filled with the words “trustless”, “ledger” or “coins”. He prepared to listen, for the hundredth time, to the mechanics of Bitcoin: signatures, hashing and all that.

— Ha! There was none.

Mike looked genuinely puzzled.

— Look, Nick actually laid down all the ideas necessary for a functional system: proof of work for scarcity, need for secure decentralized title registry, smart contracts. All pieces of the puzzle were there, just not arranged as needed.

Jane’s eyes sparkled and she made a dramatic pause.

— Enlighten me :)

— What if you make scarce not the bit gold coins themselves, but the entire title registry? And make it so scarce that there can only be one, which automatically solves the synchronization problem. Individual coins then become perfectly fungible because they all (eventually) share the same proof of work. And since the proof of work gets stale over time and we need to add new transactions, we can timestamp new transactions with extra proof of work, thus maintaining the scarcity by piling all the proofs of work into one giant proof. Issuance of new units follows naturally: some programmed amount can be allocated for each batch of proof of work.

— Impressive. Does that mean that Nick is Satoshi?

— I’m not sure. Satoshi did not mention Nick Szabo’s writings at all. Either Nick naively tried to hide his relation to Bitcoin, or it was someone inspired by Nick who tried to direct attention to him.

— Or it is still Nick and he tries to make us think precisely that :)

— Either way, Bitcoin is clearly a result of studying Nick Szabo’s work which was incomplete without this tiny, but powerful unifying idea.

— You promised to tell me why it is called “Bitcoin”.

— Don’t you see it already? The ledger, blockchain, is just a single coin of bit gold with scarcity maintained by a growing proof of work. Hence “bit coin”, singular.

— Whoa. And this coin records its own history of ownership in itself. Fascinating! Sounds like science fiction.

— It gets better! There are a few other interesting things that become evident from that perspective.

— I’m all ears.

— It’s getting late now. Let’s continue next time.