Oleg Andreev



Software designer with a focus on user experience and security.

You may start with my selection of articles on Bitcoin.

Translations of some articles into Russian.



Product architect at Chain.

Author of Gitbox version control app.

Author of CoreBitcoin, a Bitcoin toolkit for Objective-C.

Author of BTCRuby, a Bitcoin toolkit for Ruby.

Former lead dev of FunGolf GPS, the best golfer's personal assistant.



I am happy to give you an interview or provide you with a consultation.
I am very interested in innovative ways to secure property and personal interactions: all the way from cryptography to user interfaces. I am not interested in trading, mining or building exchanges.

This blog enlightens people thanks to your generous donations: 1TipsuQ7CSqfQsjA9KU5jarSB1AnrVLLo

Getting there from here

Inspired by the c2.com article You Can’t Get There From Here.

Every incremental development process suffers from increasing module coupling, by definition. Smaller steps give you the flexibility to turn around the current point in the process, but not to jump out of it. An incremental process reaches a local optimum: the best solution to a problem you are not solving today. But that is not the real issue (at the very least, you can sell it to someone else). The issue is that high coupling prevents you from moving incrementally away from the local optimum. The only way out is to take the independent components that are suitable for the new task, jump out of the current point, and start a new process based on those components. The efficiency of this jump is measured by the total relevance of all these components.

In other words, we need some insurance that a critical amount of investment (a month of work, $100K, etc.) is not thrown away as a whole. To achieve this we should keep the work split into small, distinct pieces, each of acceptably low cost.

It is usually recommended to refactor the code in order to extract abstract entities and generalize their APIs. However, that is a stupid game played in the same playground: a single project directory tree with 1000 files in it.

Let’s take a look at the principle behind search-tree balancing: each node should have some optimal number of children. If a node has too many children, we end up with a linear scan inside the node. If nodes have too few, the tree degenerates into a linked list and we end up with a linear scan through that instead.
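That trade-off can be sketched with a toy cost model (an assumption for illustration, not taken from any particular implementation): looking up one of n keys costs a linear scan of b children at each of the log_b(n) levels. Both extremes, b = 1 (a linked list) and b = n (one flat node), degenerate into a full linear scan:

```ruby
# Toy cost model: finding 1 of n keys in a tree with branching
# factor b takes a linear scan of b children at each of the
# log_b(n) levels. (Assumed model, for illustration only.)
def lookup_cost(n, b)
  return n.to_f if b <= 1      # degenerate case: a linked list
  Math.log(n, b) * b           # levels * scans per level
end

n = 100_000
[1, 3, 17, 1024, n].each do |b|
  printf("b = %-6d cost = %.1f\n", b, lookup_cost(n, b))
end
```

The cost bottoms out at a small-to-moderate branching factor, while both extremes cost the full n scans.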

Our asset is the code. Evaluating the code efficiently requires keeping it in good shape. That could mean the following:
- N lines of code per method
- M public methods per class/module (+ M private)
- F modules/files per folder
- L levels of folders per library/dependency
- D libraries/dependencies per product/another library.

Each figure is an average. You can have one 10*N-line method as long as ten N/10-line methods balance it out. The ultimate goal is to have at most L*F*M*N lines of code per program (and at most M*N lines per class).

The figures could be something like this: N=7, M=7, F=17, L=3, D=7.
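Plugging those figures into the article’s own formulas, as a back-of-the-envelope check:

```ruby
# Budgets implied by the figures N=7, M=7, F=17, L=3.
N, M, F, L = 7, 7, 17, 3

lines_per_class   = M * N          # 7 methods * 7 lines => 49
lines_per_program = L * F * M * N  # => 2499

# The figures are averages: one 10*N-line method is acceptable
# as long as ten N/10-line methods balance it, keeping the mean at N.
methods  = [10 * N] + [N / 10.0] * 10
avg_size = methods.sum / methods.size.to_f  # => 7.0
```

So under these figures a whole program stays under roughly 2,500 lines, and a class under about 50.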

The idea is to limit the amount of code you work with. Doing so pushes you to extract the least coupled parts out of the project, making them more valuable individually and giving more focus to the essentials.

This implies a slightly different mindset compared with traditional refactoring. You do not look for a way to restructure the program just to make it cleaner: you look for a way to keep as little code as possible by extracting the least relevant code into separate external modules.