“Bits do not naturally have Colour. Colour, in this sense, is not part of the natural universe. Most importantly, you cannot look at bits and observe what Colour they are. I encountered an amusing example of bit Colour recently: one of my friends was talking about how he’d performed John Cage’s famous silent musical composition 4'33" for MP3. Okay, we said (paraphrasing the conversation here), so you took an appropriate-sized file of zeroes out of /dev/zero and compressed that with an MP3 compressor? No, no, he said. If I did that, it wouldn’t really be 4'33", because to perform the composition you have to make the silence in a certain way, according to the rules laid down by the composer. It’s not just four minutes and thirty-three seconds of any old silence.”
“So in summary, OOM-safety is wrong:
- Because it increases your code size by 30%–40%
- You’re trying to be more catholic than the pope, since various system services you build on and interface with aren’t OOM-safe anyway
- You are trying to solve the wrong problem. Real OOM will be signalled via SIGKILL, not malloc() returning NULL.
- You are trying to solve the wrong problem. Make sure your app never loses data, not only when malloc() returns NULL.
- You can barely test the OOM codepaths”
(via nem)
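The “never lose data” point above, translated into Ruby as a minimal sketch (the file name and helper are hypothetical): instead of guarding every allocation, make persistence crash-safe, e.g. write to a temporary file and rename it into place, so a SIGKILL at any moment leaves either the old file or the new one intact.

# Crash-safe save: if the process is OOM-killed at any point,
# the target path holds either the complete old contents or the
# complete new ones, never a half-written file.
def save_atomically(path, data)
  tmp = "#{path}.tmp"
  File.open(tmp, "w") do |f|
    f.write(data)
    f.fsync                # flush to disk before renaming
  end
  File.rename(tmp, path)   # rename(2) is atomic on POSIX filesystems
end

save_atomically("notes.txt", "draft contents")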
Start with algorithms
— I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well can you come up with an interface that will let them work.
— Can I summarize your thinking as “find the [generic] data structure inside an algorithm” instead of “find the [virtual] algorithms inside an object”?
— Yes. Always start with algorithms.
Usually you start by deciding what components your application consists of, then you write some code to glue them together. Later, you face a change in the requirements and start “fixing” the object model with scotch tape. When you run out of tape, you finally redesign your object model to fit the algorithm. If you focus on the algorithms instead of the data structures from the start, you’ll spend less time (re)writing the code.
OOP, however, is orthogonal to this idea. Objects still encapsulate code (algorithms) and data, and requirements are expressed through the object’s interface. The only difference is that you should design objects from the algorithm’s perspective, not from abstract data relations. This is the same reason relational databases should be normalized, tuples should have as few fields as possible, an object should do only one job, and so on.
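To make this concrete, here is a minimal Ruby sketch of the algorithm-first direction (binary_search and Playlist are hypothetical names, not from the interview): write the algorithm against the weakest assumptions it needs, and those assumptions become the interface.

# The algorithm comes first. Binary search only needs a size,
# random access by index, and elements comparable with <=>.
def binary_search(container, target)
  low, high = 0, container.size - 1
  while low <= high
    mid = (low + high) / 2
    case container[mid] <=> target
    when 0  then return mid
    when -1 then low  = mid + 1
    else         high = mid - 1
    end
  end
  nil
end

# The interface falls out of the algorithm: anything exposing
# #size and #[] with comparable elements is searchable, so a
# Playlist only has to promise those two methods.
class Playlist
  def initialize(tracks)
    @tracks = tracks.sort
  end

  def size
    @tracks.size
  end

  def [](index)
    @tracks[index]
  end
end

binary_search([1, 3, 5, 8, 13], 8)            # => 3
binary_search(Playlist.new(%w[a c e]), "c")   # => 1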
Kaganov on security
A very good post (Google translation) on airport security improvements and security strategies in general.
object.or { default_value }
Little helper to deal with nils, empty strings and arrays.
class ::Object
  # By default, nothing is blank.
  def blank?; false end

  # Returns self when present; otherwise the default (or the block's value).
  def or(default = nil)
    blank? ? (block_given? ? yield : default) : self
  end

  # Yields self when present; returns nil for blank values.
  def and
    blank? ? nil : yield(self)
  end
end

class ::FalseClass
  def blank?; true end
end

class ::NilClass
  def blank?; true end
end

class ::Array
  # An array of nothing but nils counts as blank.
  def blank?; compact.empty? end
end

class ::String
  # A whitespace-only string counts as blank.
  def blank?; strip.empty? end
end

class ::Hash
  def blank?; values.empty? end
end
Examples:
" ".or "Untitled" # => "Untitled"
" ".or { calculate_value } # => "42"
[nil].or { ["apple", "orange"] } # => ["apple", "orange"]
"data".and {|data| Wrapper.new(data) } # => wrapper
" ".and { ... } # => nil
I would also suggest treating 2+ spaces as one or more tabs to avoid tab vs. spaces debates. See also my article on DSSV.
In college computer science classes, we learn all about b*trees and linked lists and sorting algorithms and a ton of crap that I honestly have never, ever used, in 25 years of professional programming. (Except hash tables. Learn those. You’ll use them!)
What I do write – every day, every hour – are heuristics that try to understand and intuit what the user is telling me, without her having to learn my language.
The field of computer interaction is still in its infancy. Computers are too hard to use, they require us to waste our brains learning too many things that aren’t REAL knowledge, they’re just stupid computer conventions.
On Heuristics and Human Factors by Wil Shipley. Thanks to @groue for the link.
“If you’re familiar with how Objective-C objects are declared […] blocks are Objective-C objects. This may not seem strange in Objective-C but the reality is that even in pure C or C++, blocks are still Objective-C objects and the runtime support for blocks handles the retain/release/copy behaviors for the block in an Objective-C messaging manner.”
by Matt Gallagher
Thanks to Pierlo for the link.
DRY and evolution
When you’re about to implement a feature similar to one you already have, there’s a huge temptation to refactor and abstract the existing code right away. Sometimes you even have a perfect idea of how it should be done.
Nevertheless, Don’t Do That.
Take an existing class, copy it, rename it, and update it to meet your needs. Test, tweak, test again. You will see clearly how it differs from the original code. Don’t rush to extract the common code; let yourself test and tweak some more. Of course, don’t let it stay unDRY for long: the copies may become hard to refactor later, once you’ve forgotten what you were actually doing.
In other words, let your code evolve in a natural way, as Darwin described. Replicate and mutate the code: you will see the better solution among the options. Then, according to the Rule of Survival of the Fittest, delete everything not worth living.
In many cases this technique helps you avoid wasting time fitting code into wrong abstractions built on pure imagination.
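A minimal sketch of that workflow in Ruby (CsvExporter and JsonExporter are hypothetical examples): first the shameless copy, then, once both variants have settled, the extraction of only what actually stayed common.

require "json"

# Step 1: copy the existing class, rename it, tweak until it passes
# its own tests. The duplication makes the real differences visible.
class CsvExporter
  def export(records)
    header = records.first.keys.join(",")
    rows   = records.map { |r| r.values.join(",") }
    ([header] + rows).join("\n")
  end
end

class JsonExporter   # began as a verbatim copy of CsvExporter
  def export(records)
    JSON.generate(records)
  end
end

# Step 2, later: only the #export entry point survived in both
# copies, so that is all the shared abstraction should promise.
module Exporter
  def self.for(format)
    { csv: CsvExporter, json: JsonExporter }.fetch(format).new
  end
end

Exporter.for(:csv).export([{ id: 1, name: "a" }])   # => "id,name\n1,a"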
[slides] Google: Designs, Lessons and Advice from Building Large Distributed Systems