Tages Anzeiger redesign suggestion.
“Bits do not naturally have Colour. Colour, in this sense, is not part of the natural universe. Most importantly, you cannot look at bits and observe what Colour they are. I encountered an amusing example of bit Colour recently: one of my friends was talking about how he’d performed John Cage’s famous silent musical composition 4'33" for MP3. Okay, we said, (paraphrasing the conversation here) so you took an appropriate-sized file of zeroes out of /dev/zero and compressed that with an MP3 compressor? No, no, he said. If I did that, it wouldn’t really be 4'33" because to perform the composition, you have to make the silence in a certain way, according to the rules laid down by the composer. It’s not just four minutes and thirty-three seconds of any old silence.”
“So in summary, OOM-safety is wrong:
- Because it increases your code size by 30%-40%
- You’re trying to be more catholic than the pope, since various system services you build on and interface with aren’t OOM-safe anyway
- You are trying to solve the wrong problem. Real OOM will be signalled via SIGKILL, not malloc() returning NULL.
- You are trying to solve the wrong problem. Make sure your app never loses
data, not only when malloc() returns NULL
- You can barely test the OOM codepaths”
— I find OOP methodologically wrong. It starts with classes. It is as if mathematicians would start with axioms. You do not start with axioms - you start with proofs. Only when you have found a bunch of related proofs, can you come up with axioms. You end with axioms. The same thing is true in programming: you have to start with interesting algorithms. Only when you understand them well, can you come up with an interface that will let them work.
— Can I summarize your thinking as “find the [generic] data structure inside an algorithm” instead of “find the [virtual] algorithms inside an object”?
— Yes. Always start with algorithms.
Usually you start by deciding what components your application consists of, then you write some code to glue them together. Later, you face a change in the requirements and start “fixing” the object model with scotch tape. When you run out of tape, you finally redesign your object model to fit the algorithm. If, instead, you focus on the algorithm rather than on data structures, you’ll spend less time (re)writing the code.
OOP, however, is orthogonal to this idea. Objects still encapsulate code (algorithms) and data (requirements). Requirements are set through the object’s interface. The only difference is that you should design objects from the algorithm’s perspective, not from abstract data relations. This is why relational databases should be normalized, tuples should have as few fields as possible, an object should do only one job, etc.
A very good post (Google translation) on airport security improvements and on security strategies in general.
class ::Object
def blank?; false end
def or(default = nil)
blank? ? (block_given? ? yield : default) : self
end
def and
blank? ? nil : yield(self)
end
end
class ::FalseClass
def blank?; true end
end
class ::NilClass
def blank?; true end
end
class ::Array
def blank?; compact.empty? end
end
class ::String
def blank?; strip.empty? end
end
class ::Hash
def blank?; values.empty? end
end
Examples:
" ".or "Untitled" # => "Untitled"
" ".or { calculate_value } # => "42"
[nil].or { ["apple", "orange"] } # => ["apple", "orange"]
"data".and {|data| Wrapper.new(data) } # => wrapper
" ".and { ... } # => nil
I would also suggest treating 2+ spaces as one or more tabs to avoid tab vs. spaces debates. See also my article on DSSV.
In college computer science classes, we learn all about b*trees and linked lists and sorting algorithms and a ton of crap that I honestly have never, ever used, in 25 years of professional programming. (Except hash tables. Learn those. You’ll use them!)
What I do write – every day, every hour – are heuristics that try to understand and intuit what the user is telling me, without her having to learn my language.
The field of computer interaction is still in its infancy. Computers are too hard to use, they require us to waste our brains learning too many things that aren’t REAL knowledge, they’re just stupid computer conventions.
” — On Heuristics and Human Factors by Wil Shipley. Thanks to @groue for the link.
Thanks to Pierlo for the link.
“If you’re familiar with how Objective-C objects are declared […] blocks are Objective-C objects. This may not seem strange in Objective-C but the reality is that even in pure C or C++, blocks are still Objective-C objects and the runtime support for blocks handles the retain/release/copy behaviors for the block in an Objective-C messaging manner.”
by Matt Gallagher
When you are about to implement a feature similar to one you already have, there’s a huge temptation to refactor and abstract the existing code right away. Sometimes you even have a perfect idea of how it should be done.
Nevertheless, Don’t Do That.
Take an existing class, copy it, rename it and update it to meet your needs. Test, tweak, test again. You will see clearly how it differs from the original code. Don’t rush to extract the common code; let yourself test/tweak more. Of course, don’t let it stay unDRY for long: the copies may become hard to refactor later, when you forget what you were actually doing.
In other words, you should let your code evolve in a natural way, as Darwin prescribed. Replicate and mutate the code: you will see the better solution among the options. Then, according to the Rule Of Survival Of The Fittest, delete everything not worth living.
In many, many cases this technique helps to avoid wasting time on fitting code into wrong abstractions built on pure imagination.
[Blames himself for the mistakes of the past.]
“You, as a programmer, should be programming EVERY LINE as defensively as possible. You should assume that every other method in the program is out to kill you, and they are intentionally passing you corrupted buffers and pointers, and it’s your job to handle all that crap and make the best of it. (And, if you can repair it, do so, and if not, raise an exception so you can catch it in beta testing, instead of silently failing and/or corrupting data.)”
@wilshipley
“Here are five things that will kill your startup before software security does:
— Slowness
— Poor graphic design
— XML
— The RIAA
— Product Marketing Managers
The graveyards in this town are littered with the corpses of startups that pinned their hopes on advanced security. Better engineers than you have tried and failed. Theo de Raadt coordinated the first large-scale security codebase audit. His reward: Only two remote holes in the default install!”
This is an evolving document describing sRuby, a subset of Ruby that can be easily compiled to fast low-level code. The purpose of developing sRuby is to use it to implement a Ruby virtual machine. However, we anticipate that it can be used to write Ruby extensions that need to bridge the gap between Ruby and a low-level language (C) in an easy and portable way.
Robert Feldt
April 4, 2001
Suppose you have this kind of user interface:

An efficient rendering algorithm would be the following:
1. A cell should render its content manually: that is, not using multiple subviews, but a single contentView which is redrawn programmatically in the drawInRect: method.
2. Each time a cell is rendered, it checks the cache of downloaded images. If the image is not present, the cell schedules a request to download it. The cell should remember the request URL so it can be notified when the data is ready.
3. Download requests go into a FILO queue serviced by an async NSURLConnection delegate. The async API uses the main thread, which is notified when the data is ready.
4. The queue size should be limited by the maximum number of visible cells. The most recently requested images go to the end of the queue, while the head of the queue is truncated each time a new request is added. This way, scheduled requests are dropped when their cells become invisible.
5. The download callback should look up [tableView visibleCells] to find the exact cell which requested the downloaded image. The cell could be different from the one that started the request (remember cell recycling!). You cannot just call [tableView reloadData]: it performs poorly when a lot of images are loaded while you scroll.
6. In addition to the downloaded-images cache, there should be a cache of pre-rendered images: each time a cell draws itself and finds the downloaded image, it should rescale and crop it. The rescaled image should be put into the cache, otherwise scrolling won’t be smooth. Of course, if the downloaded image already has the needed size, this step can be omitted. See UIGraphicsBeginImageContext.
Toolkit summary:
— download requests queue with async i/o (no NSOperations, no background threads)
— manual cell rendering (no subviews)
— individual cell update (do not reloadData)
— pre-rendered images cache
— discard pending downloads for invisible cells
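The bounded FILO queue from step 4 can be sketched like this (a minimal illustration in Ruby rather than Objective-C; RequestQueue is a made-up name, and the real queue would hold request objects rather than bare URL strings):

```ruby
class RequestQueue
  def initialize(limit)
    @limit = limit   # maximum number of visible cells
    @urls  = []
  end

  # The most recent request goes to the tail; re-requesting a URL
  # moves it back to the tail instead of duplicating it.
  def push(url)
    @urls.delete(url)
    @urls << url
    @urls.shift if @urls.size > @limit  # truncate the head when full
  end

  # Serve the most recently requested image first (FILO):
  # those belong to the cells currently on screen.
  def pop
    @urls.pop
  end
end

q = RequestQueue.new(3)
%w[a b c d].each { |u| q.push(u) }  # "a" is dropped: its cell scrolled away
q.pop  # => "d"
```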
“Tracing and reference counting are uniformly viewed as being fundamentally different approaches to garbage collection that possess very distinct performance properties. We have implemented high-performance collectors of both types, and in the process observed that the more we optimized them, the more similarly they behaved — that they seem to share some deep structure.” David F. Bacon et al.
Great article explaining multithreading issues on multiprocessor systems.
Reference-counting is traditionally considered unsuitable for multi-processor systems. According to conventional wisdom, the update of reference slots and reference-counts requires atomic or synchronized operations. In this work we demonstrate this is not the case by presenting a novel reference-counting algorithm suitable for a multi-processor system that does not require any synchronized operation in its write barrier (not even a compare-and-swap type of synchronization). A second novelty of this algorithm is that it allows eliminating a large fraction of the reference-count updates, thus, drastically reducing the reference-counting traditional overhead. This paper includes a full proof of the algorithm showing that it is safe (does not reclaim live objects) and live (eventually reclaims all unreachable objects).
1. Some GCs do pointer recognition in the arbitrary data array (e.g. Boehm-Demers-Weiser GC); this is not necessary if GC should track objects of the known structure (e.g. Steve Dekorte’s GC)
2. If we track object references only, there’s no need to fight fragmentation of the GC-managed heap: all entries are of the same size.
3. After coding Obj-C for a while, I’ve noticed that the only issue which should be resolved by some kind of garbage collector is circular references. Retained properties and autorelease pools already help to avoid manual retain/release calls. That is: an allocation should always be followed by autorelease, and all the properties should be nullified on deallocation (this could be done automatically).
I wonder if it is possible to use a simple reference-counting mechanism with a simple resolution for referential cycles: thinking that way, we can imagine a very simple and efficient garbage collector.
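As a toy illustration of why plain reference counting needs that cycle resolution (a hypothetical Ruby sketch, not Objective-C’s actual retain/release machinery): two objects retaining each other never drop to a count of zero, so neither is ever reclaimed.

```ruby
class Counted
  attr_reader :count
  attr_accessor :ref

  def initialize
    @count = 0
  end

  def retain
    @count += 1
    self
  end

  # Returns true when the count hits zero, i.e. "reclaim me".
  def release
    @count -= 1
    @count.zero?
  end
end

a = Counted.new.retain  # owned by a local variable
b = Counted.new.retain
a.ref = b.retain        # a -> b
b.ref = a.retain        # b -> a: a cycle

# Drop the local ownership; the cycle keeps both counts at 1 forever:
a.release  # => false
b.release  # => false — neither object is reclaimed without cycle collection
```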
References:
1. Minimizing Reference Count Updating with Deferred and Anchored Pointers for Functional Data Structures by Henry Baker
2. Concurrent Cycle Collection in Reference Counted Systems by David F. Bacon and V.T. Rajan
1. When using a sandbox user account, do not sign in via Preferences; just sign out, and sign in when your app asks you to.
2. The transaction receipt should be Base64-encoded before being sent to Apple’s verification service. This is not mentioned in the docs.
3. Subscriptions should not be restored using the restore API. Do not even try: Apple suggests sending the relevant data from your application server.
4. The title/description you set in iTunes Connect do not really matter if you supply products from the server. Only the product id and the price tier matter. Everything else you supply yourself.
5. Do not forget to sort the products elsewhere, since it is impossible to sort them on the iTunes Connect site.
6. The transaction receipt should be verified at Apple’s server for two reasons: 1) secure validation; 2) it is the only way to get the transaction details.
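A minimal sketch of tip 2 in Ruby (the receipt bytes are fake; the endpoint URLs are Apple’s verifyReceipt services): the raw receipt is Base64-encoded and wrapped in a JSON body before being POSTed.

```ruby
require 'base64'
require 'json'

# Fake stand-in for the raw receipt bytes from the transaction.
raw_receipt = "binary receipt bytes from the transaction"

# Apple expects the receipt Base64-encoded under the "receipt-data" key.
payload = JSON.generate("receipt-data" => Base64.strict_encode64(raw_receipt))

# POST `payload` to https://sandbox.itunes.apple.com/verifyReceipt while
# testing (https://buy.itunes.apple.com/verifyReceipt in production) and
# check that the JSON response has "status" == 0.
```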
“That is what happens when a conventional generational garbage collector is used for a program whose object lifetimes resemble the radioactive decay model. The collector simply assumes that young objects have a shorter life expectancy than old objects, and concentrates its effort on collecting the generations that contain the most recently allocated objects. In the radioactive decay model, these are the objects that have had the least amount of time in which to decay, so the generations in which they reside contain an unusually low percentage of garbage. For the radioactive decay model, therefore, a conventional generational collector will perform worse than a similar non-generational collector.”
Paper by William D. Clinger and Lars T. Hansen (PostScript)
The major feature of a dynamic language is interactivity. With Smalltalk you may run the program and inspect/change it at runtime. This implies some GUI for the VM with a built-in (crappy) text editor: you don’t edit files, you edit objects now.
This does not sound very comfortable, for several reasons. First, you would always want to have a “canonical” state of your application which is not affected by runtime mutations: that is, plain text files stored under some version control. Next, you would like to use a different text editor or GUI, which is much simpler to achieve when you operate on plain files instead of a fancy VM/language-specific API.
How do we combine the interactivity of Smalltalk with text-file editing? Let’s take the purest OO language ever designed: Io.
1. Each file contains an expression.
2. The only way to load the file is to evaluate it in the context of some object: object doFile("file.io"). The return value is the result of the expression in the file.
3. We may have a convention that some files return a prototype object: the object which is used as a prototype for other objects created in runtime.
4. To load “prototype object” we use a special loader object which would track the file-to-object mapping: Article := Loader load("article.io")
5. Loader monitors the filesystem and when some file is changed, it loads it into another object and replaces the prototype with that object: Article become(load("article.io"))
6. At that point, all articles in the system suddenly have another version of Article proto.
You have to follow some safety rules. For instance, a proto’s descendants should not modify the proto or rely on such modifications.
Of course, this method still does not allow you to change/inspect any object in the system. For this to work you may put a breakpoint message somewhere and use a debugger after the proto is reloaded and VM stepped on that message. Or wire some Smalltalk-like GUI to your app.
Simple proto-based reloading helps development a lot and, in contrast to class-loading schemes with a full app reload, works faster and covers the full range of source code, including all libraries. The Rails dependency system does not reload gems, but does a pretty good job with constant reloading; all the usual Ruby/Rails issues with global data apply.
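The Loader/become idea can be sketched in Ruby, simulating files with source strings (Proto, Loader and become here are hypothetical stand-ins for the Io objects above; Io’s real become() swaps an object wholesale, which we imitate by swapping the method table in place):

```ruby
class Proto
  def initialize(methods)
    @methods = methods  # a Hash of name => proc
  end

  # Io's become(): swap this object's behavior in place, so every
  # holder of a reference to the proto sees the new version.
  def become(other)
    @methods = other.instance_variable_get(:@methods)
  end

  def call(name, *args)
    @methods.fetch(name).call(*args)
  end
end

class Loader
  # The "file" is simulated with a source string that evaluates
  # to a Hash of procs.
  def self.load(source)
    Proto.new(eval(source))
  end
end

Article = Loader.load("{ title: -> { 'v1' } }")
Article.call(:title)  # => "v1"

# The loader notices the "file" changed and swaps the behavior in place:
Article.become(Loader.load("{ title: -> { 'v2' } }"))
Article.call(:title)  # => "v2" — every holder of Article sees the new proto
```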
Now, working with Objective-C, where nil eats messages, I have realized that the code is more elegant, but it takes a significant amount of time to debug. You create if/else branches and breakpoints to trace the nil, then you fix the bug which causes it and erase the conditional code. You get your elegant code back and wait for another issue to arise later.
“Essentially, Haas Unica came about as a result of analysing the original version of Helvetica, its variants (as they were in 1980) and similar faces and seeking to improve them - to produce the ultimate archetypal sans serif face. A single face to unite them all, if you like. ”
See also: From Helvetica to Haas Unica (flickr set)
The paper discusses how thread-oriented programming is more efficient (in terms of performance and development cost) than event-oriented programming.
My personal observation is that cooperative multitasking (based on coroutines and fibers) requires less, and easier-to-read, code compared to evented callback-based code.
The Objective-C syntax is poisoned with nested square brackets:
[[[Class alloc] initWithApple:a andOrange:o] autorelease];
First, let’s move the opening bracket after the name of the receiver:
Class[alloc][initWithApple:a andOrange:o][autorelease];
You may agree that this is much easier to write now. However, at this point we lose compatibility with ANSI C (think buffer[index]).
Let’s omit brackets for messages without arguments and use a space as a delimiter:
Class alloc [initWithApple:a andOrange:o] autorelease;
At this point we may get compatibility with ANSI C back by making the grammar non-context-free (the parser should recognize that a[b:c] cannot be used for index operations).
You can implement exactly that syntax in Io using the standard language features.
Stylesheet and javascript URLs and content should be controlled by application code. Putting static files into the public folder is so nineties.
Before starting work on a distinct feature, you create a branch:
$ git checkout -b myfeature
You write code, create fast commits, merge in master, rewrite code etc.
$ git checkout master
$ git merge myfeature --squash
Now you have merged all the changes into the working tree, but not committed them to the master branch (because of the --squash option).
You may git add some files to produce nice commits as described in the previous article.
These rules are designed for easy code review using “git log -p”. This command shows the history of commits with patches.
1. A commit message should include a task reference (the ticket/case number in the bug tracker, a wiki URL, etc.). If there’s no reference, then the change must be really trivial or include refactoring only.
2. A commit represents an atomic working patch. No “WIP” commits with undefined behavior are allowed. In your private branches you can do whatever you want, but when merging to master, you must aggregate commits into a set of working patches. If you don’t, a single feature ends up spread among 30 commits, with arbitrary code being written and erased between the start and the end.
3. A commit should be small. You should split a big commit into a few independent ones, with the safer commits first. A good example: you have fixed some performance issue. First, commit a benchmark which shows the previous performance, then commit the updated code. This makes it possible to test the previous code against the newer benchmark without manipulating the code by hand.
Rule 2 tells you not to pollute master branch with tons of WIP commits and rule 3 tells you to squash WIP commits wisely: do not put everything in a huge patch.
It is much easier to follow these rules when you look at what others do with the code, using git log each time you pull updates.
“There are two basic types of method: ones that return an object other than self, and ones that cause an effect. […]
As a general philosophy, it’s better to try and make your methods one type or the other. Try to avoid methods that both do something and return a meaningful object. Sometimes it will be unavoidable, but less often than you might expect.”
“But Apple require that this app be paid, not free, in order for us to offer In App Purchase. So let’s look at that again: the same user downloads the app for $0.99 assuming it’s a one-time payment, then launches the app to find that he only gets 30 days of service for the $0.99 he just paid. Furious, he leaves one-star reviews all over the place, even though we went to great lengths in the iTunes description to spell out the exact nature of the subscription and its costs (but no one actually ever reads that stuff).”