Bitcoin is a peer-to-peer digital currency. It does not depend on any particular organization or person, and it is not backed by any commodity like gold or silver. Bitcoin is the name for both the currency and the protocol of storage and exchange. Just like dollars or gold, Bitcoin does not have much direct use value: it is valued subjectively according to one's ability to exchange it for goods.
This FAQ complements the bigger Bitcoin FAQ: https://en.bitcoin.it/wiki/FAQ. You may start here and then proceed to the Bitcoin Wiki for more details.
If you have already heard of Bitcoin mining and exchange, or would like to know more about them, see "Who is interested in Bitcoin?" below.
Bitcoin is designed to be a faster, cheaper and more secure currency. It is fast because verification of transfers is completely automated and does not involve human supervision. Security is achieved by having every participant do the verification himself using well-known cryptographic methods. Bitcoin is designed to prevent double-spending, stealing and creating money out of nothing. The original software source code is open and available to everyone for review and improvement.
Bitcoins do not exist as distinct items of information. They only appear as records in a global transaction history that is stored and synchronized between all participating computers. Transactions are grouped into blocks that are cryptographically signed in such a way that they are computationally hard to produce. This scheme guarantees that no one can revert a transaction or double-spend bitcoins.
To own and spend bitcoins, each participant only needs an address and a corresponding secret key. The key allows its owner to send bitcoins from that address. To receive bitcoins no key is needed; you only need to give the sender your address.
A person may have unlimited addresses and keys. A collection of keys is called a wallet.
Keys are used to sign new transactions in order to verify the ownership of the address. Then every client in the network can verify that the signature is valid and that the entire chain of transactions is done by actual holders of their keys. Therefore, one may steal bitcoins only by stealing secret keys.
Bitcoins are not created upfront and distributed to some privileged persons. Instead, they are given as a reward to anyone for verifying and securing transactions. Transactions are secured by being put into blocks that are computationally expensive to generate. People who create blocks are called miners.
The reward for creating a block is contained in the first transaction of that block, which sends 25 BTC from nowhere to any address chosen by the creator of the block (the reward was 50 BTC before December 2012).
The reward is halved approximately every 4 years until a total of 21 million bitcoins is generated around the year 2140. More than 10 million bitcoins are in circulation already. Every participating computer checks that the reward is generated at the correct rate and has the correct value. See the chart here:
The minimum amount of bitcoins that can be transferred is 0.00000001 BTC. This gives more than 2,000 trillion smallest units (21,000,000 × 100,000,000 = 2.1 × 10^15). If everybody finds it useful in the future, the format can be changed to allow even smaller values.
No. Bitcoins are not printed; they are earned. In a way, all 21 million bitcoins already exist. You may obtain them through an exchange or by validating and securing transactions. The rate at which new blocks are created is kept more-or-less stable by the protocol, so everybody can accurately account for changes in the money supply.
The total supply is designed to be fixed in order to avoid undermining the value of Bitcoin in favor of less inflationary instruments (e.g. physical gold). At the same time, bitcoins are introduced gradually to motivate early adopters to build a secure and efficient network.
There are normally three reasons why people get interested in bitcoins: mining, speculation on the exchange rate, and using Bitcoin to pay for goods and services.
An “average” person can safely ignore the first two reasons.
Mining, the process of creating blocks of transactions, was possible on a home computer some time ago, but now it is profitable only with custom-designed hardware. The Bitcoin network adjusts the difficulty of mining to keep the rate of block creation constant (6 blocks per hour). As more people throw their resources into mining, the process becomes more expensive.
Speculation on currency exchange is also very competitive and does not significantly differ from any stock market.
Therefore, this FAQ focuses on the third reason: using Bitcoin in exchange for goods and services.
Bitcoin is a very young currency, launched in 2009, but it already covers a surprisingly wide variety of goods and services. You can pay for personal services and buy digital and physical goods: books, games, movies, etc. So far you cannot buy groceries, but some coffee shops and restaurants already accept bitcoins.
There are wallet apps for computers and smartphones. There are several ways to buy bitcoins offline or in physical form. Several companies are developing processing services and debit cards. Right now Bitcoin is not always convenient or easy to use, but the trend towards more and better services is very strong.
See a list of places where one can buy, earn or spend bitcoins:
You need software or a web service in order to manage your wallet and make transactions. A wallet is a collection of private keys (like passwords, but much longer); it does not contain any bitcoins itself. Each Bitcoin address has a corresponding private key that allows you to send money from that address. Addresses and keys are free to create, and anyone can have as many of them as they want. To increase privacy, it is recommended to use a new address for each transaction. Popular Bitcoin software does that for you automatically.
Unlike Visa, MasterCard or PayPal, all Bitcoin transactions are final and cannot be reversed. A chargeback can thus only be performed through the good will of the seller. On the other hand, Bitcoin transactions do not only express transfers of funds; they can also express complex contractual agreements. For instance, one can create a transaction between a seller, a buyer and a mediator. If the seller and the buyer agree on a transaction, the mediator cannot cancel it. But if there is a conflict, the mediator may side with either the buyer or the seller to decide who receives the money. In this way Bitcoin provides much stronger protection against fraud without requiring trust in the mediator. This idea may be extended to a larger number of participants to facilitate collective fundraising or insurance.
See the discussion here:
Bitcoin is certainly in a "grey area". So far no attempt has been made to penalize bitcoin users. However, activities that are illegal with other currencies (fraud, money laundering, illegal purchases, etc.) are illegal with Bitcoin as well. Since some central banks may see Bitcoin as a competitor that undermines their control over the money supply, one may expect laws affecting Bitcoin in the future.
Bitcoin requires access to the internet and special software to create and verify transactions. To stop people from using Bitcoin, one would have to suppress communication channels. Bitcoin faces the same risk as any other internet protocol: being filtered or denied by internet service providers. However, there is no single organization that could be shut down to cause major disruption in the network. For example, if a popular currency exchange is closed, one can always use another exchange service or even trade in person. In a sense, Bitcoin is as difficult to shut down as BitTorrent.
The value of Bitcoin (and of all other goods, for that matter) is purely subjective and depends on each individual valuation. Of course, the valuations may be aggregated and averaged, but they all stand on the shaky ground of each individual's decision to buy or abstain from buying. The same applies to dollars, gold, oil and groceries.
There is no objective value of Bitcoin, but there are several common reasons why people use it. First, every day Bitcoin proves itself as a robust registry of money ownership: nobody can revert transactions, freeze accounts or take somebody else's money. Second, it provides better privacy than modern banking. Third, there is no risk that one day the supply of bitcoins suddenly increases and your savings lose their value.
Yes. Early adopters took the risk of spending their time and energy on a project, which turned out to be useful for the people who joined later. The more confidence people have in the network, the more they are willing to invest in it, thus increasing the Bitcoin price.
No. Bitcoin does not promise any dividends. There is no central issuer, and anyone who generates bitcoins makes the process more expensive for himself and the other miners, but at the same time increases the reliability of Bitcoin for everyone.
Just like any other currency or stock, Bitcoin is subject to speculative bubbles and bursts. Part of its value is based on the willingness of users to spend and receive it, while the other part is based on the anticipation of an increase or decrease of that willingness. If that anticipation grows too much, Bitcoin may quickly gain in value until no one wants to buy at that price anymore. Then people will sell until the price goes down to a "normal" level. These speculative spikes should get smaller as the market grows and each individual's share of bitcoins decreases.
Some people are spending their energy printing metal coins with sophisticated patterns to make forgery more difficult. This activity is useless only if nobody wants to buy or use these coins.
Transactions are secured by putting them into blocks that are computationally expensive to generate. One has to spend time and electricity to verify and secure transactions in order to prevent double-spending and the illegitimate creation of money. Bitcoins are supplied as a reward to those who spend their resources to keep the network secure while it is young and growing. Money is not added because some amount of electricity is spent; rather, electricity is spent because people demand that much security and quality from the network. The automatically adjusted difficulty ensures that the amount of power spent is determined by the current demand for bitcoins, no more, no less.
By design, every transaction may include a fee in order to be included in a block. Right now this fee is usually zero for big enough transactions and insignificantly small for tiny transactions (in order to prevent spam). When the block reward gets smaller, these fees will become the main motivation for generating blocks.
Blocks appear at a constant rate (6 blocks per hour) and every block has a limited size (1 MB). Today the typical block size is 50-200 KB. When the rate of transactions increases, they will start competing for a place in a block. This will in turn increase the average fee. The protocol may be changed in the future to allow bigger blocks.
If you are not generating blocks, you will not spend much electricity. To store bitcoins you only need a wallet with secret keys. To transfer bitcoins you need an application that synchronizes transactions with the rest of the network. To do both you may use an app for your computer or a mobile phone, or a web service.
No. The payment is sent by relaying a signed transaction to the network. All you need to do is give the other person one of your addresses to send bitcoins to. To verify the payment, you can check the transaction status on a block explorer website or a similar service. A digital signature is required only for spending bitcoins, not for receiving them.
Even if you use a debit card with an escrow service that holds your keys, you will still benefit from the more competitive and non-inflationary nature of bitcoins. You may keep most of your savings on your personal computer, or transfer them easily and at low cost to any escrow in any country. Every escrow service in the world will have to compete with all the others and with those who hold their bitcoins themselves.
Transactions are secured by being included in a block. Blocks are generated approximately every 10 minutes. Including the time to propagate a transaction through the network, today it usually takes about 15 minutes to verify inclusion in a block. For better security, one can wait until more blocks are added after the block with the transaction.
Transactions are grouped into blocks and each block contains the signature of the previous block, thus making up a chain of blocks.
The security of the system is based on the computational difficulty of generating blocks parallel to the main chain. The more blocks are created after the block containing your transaction, the harder it is to fork the chain and make the transaction invalid. Therefore, no transaction is ever 100% confirmed. Instead, there is a confirmation number: the number of blocks built on top of the transaction. Zero confirmations means that the transaction is not yet included in any block (unconfirmed). One confirmation means that the transaction is included in a block and there are no blocks after it yet.
Today, for small transactions, one or two confirmations (10-20 minutes) are considered enough. For bigger transactions it is recommended to wait for at least six confirmations (about 1 hour). One known exception is the 120 confirmations required by the protocol before newly generated bitcoins can be spent. This is because miners (those who create blocks) have most of the computing power in the network and must have extra incentive to play fairly and generate blocks in the main chain without attempting to double-spend their rewards.
Each block has a cryptographically signed reference to the previous block (its parent). This way blocks form a chain. It is perfectly possible to have two blocks referencing the same parent block (the chain is forked). In this case we can think of two parallel chains diverging at some point. The main chain is by definition the chain of blocks with the maximum total difficulty.
Whenever miners accidentally generate parallel blocks, only one of them ends up as part of the main chain. If more blocks are later built on top of the other block, then that block and all the blocks after it become part of the main chain instead.
The block reward and transaction fees are valid only for blocks in the main chain. This motivates miners to build on top of the main chain and avoid creating parallel blocks. Otherwise, it is simply a waste of time and electricity when a block is abandoned by the network.
The transactions that are not in the main chain are not lost. All valid blocks (including the abandoned ones) are distributed among participants in the network.
When it becomes evident that some block will never again become a part of the main chain, miners will treat the transactions in that block as unconfirmed and include them in their new blocks. This means that they collect the fees from these transactions, while the owner of the abandoned block receives neither the block reward nor the transaction fees.
For the person who made the transaction this means an extra delay in the transaction confirmation (typically 10-20 minutes).
Bitcoin is not anonymous, but rather pseudonymous. All transactions, addresses and amounts are visible to everyone. But every address is just a random number and is not associated with an identity unless its owner deliberately reveals it. If someone reveals that they own a particular address, then everyone will be able to see the chain of transactions involving that address. Addresses are free to create, and it is recommended to create a new address for each transaction. This makes it hard to track how many bitcoins one has or where they are sent to or received from.
To further increase privacy one may use "laundering" services. These services randomly exchange bitcoins between all their users in order to make it more difficult to trace their source. In jurisdictions that prohibit money laundering, some people use online casinos as a plausible way to clear the trail of the money, at the cost of roughly 10% of the amount lost in gambling. But if you are not doing anything illegal, the usual level of anonymity provided by changing addresses should be enough.
Miners create blocks. To create a block one needs to assemble a file containing unconfirmed transactions (those not yet included in any other block), add a timestamp, a reference to the latest block, and a transaction sending the block reward (currently 25 BTC) from nowhere to any address. Then the miner needs to compute a signature for the block (which is basically a very long number). This signature is called a hash, and the process of computing it is called hashing.
Computing a single hash takes very little time. But for a block to be valid, the value of its hash must be smaller than some target number. The hash function is designed to be hard to reverse: you cannot easily find file contents that will produce a desired hash. You must alter the contents of the file and hash it again and again until you get a small enough number. In the case of Bitcoin, there is a field in the block called "nonce" which may contain an arbitrary number. Miners increment that number each time they compute a hash, until they find a hash small enough to be accepted by other clients. This may take a lot of computing resources, depending on how small the target hash value is. The smaller the value, the smaller the probability of finding a valid hash on each attempt.
There is no guarantee that you need to spend a certain amount of time to find a hash. You may find it quickly or not find it at all. But on average, finding a small enough block hash takes time. This constitutes protection against the creation of a parallel chain: to fork the chain you would need to spend more resources than the people who created the original blocks.
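For illustration only, here is a minimal sketch of that search loop in Objective-C. The function name OAFindNonce is made up, and the scheme is heavily simplified: real Bitcoin hashes a fixed 80-byte block header twice with SHA-256 and compares it against a compact target, not a count of leading zero bytes. CC_SHA256 comes from Apple's CommonCrypto.
#import <Foundation/Foundation.h>
#import <CommonCrypto/CommonDigest.h>
// Tries successive nonce values until the hash of (contents + nonce) is "small enough".
BOOL OAFindNonce(NSData *blockContents, NSUInteger zeroBytesRequired, uint32_t *foundNonce)
{
    for (uint32_t nonce = 0; nonce < UINT32_MAX; nonce++) {
        NSMutableData *candidate = [blockContents mutableCopy];
        [candidate appendBytes:&nonce length:sizeof(nonce)];
        unsigned char hash[CC_SHA256_DIGEST_LENGTH];
        CC_SHA256(candidate.bytes, (CC_LONG)candidate.length, hash);
        // "Smaller than the target" is approximated here as N leading zero bytes.
        BOOL smallEnough = YES;
        for (NSUInteger i = 0; i < zeroBytesRequired; i++) {
            if (hash[i] != 0) { smallEnough = NO; break; }
        }
        if (smallEnough) { *foundNonce = nonce; return YES; }
    }
    return NO; // nonce space exhausted; a real miner would change the block contents and retry
}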
Here are some parameters of the Bitcoin chain. They may be different for alternative currencies based on the Bitcoin software (like Namecoin).
Points #5 and #6 imply that the total number of bitcoins will not exceed 21 million.
It is a limitation of the transaction format (the amount is a 64-bit number). This can be changed in the future if people need to send smaller amounts.
The 10-minute interval is designed to give enough time for new blocks to propagate to other miners and let them start computing from the new point as soon as possible. If the interval were too short, miners would frequently create new blocks with the same parent block, which would waste electricity and network bandwidth and delay transaction confirmations. If it were too long, transactions would take longer to get confirmed.
The block size is limited to allow smoother propagation through the network, for the same reason the 10-minute interval was chosen. If blocks were allowed to be 100 MB in size, they would be transferred more slowly, potentially leading to many abandoned blocks and a decrease in overall efficiency.
Today a typical block is 50-200 KB, which leaves a lot of room for growth. In the future it is possible to increase the block size when networks get faster. Decreasing the time interval would not change much, because the security of transactions depends on the actual time elapsed, not on the number of blocks.
The difficulty of mining is adjusted every 2016 blocks (approximately every two weeks). This gives miners enough time to adjust their hardware, but at the same time prevents blocks from being created too quickly as the total computational power grows.
The initial reward of 50 BTC is purely arbitrary. If it were 500 BTC, nothing would change in the market structure; only the nominal prices would change by a factor of 10.
According to the Austrian theory of money, any money supply is "good" in the sense that differences in money supply are purely nominal. If everyone suddenly woke up with twice as much money in their wallet, it would not change anything in the world, since money has almost no direct use value. What matters are the relative differences in amounts.
If Bitcoin allowed unlimited mining, it would allow a perpetual shift of wealth from productive uses to miners. As a limited commodity, Bitcoin itself does not encourage any particular type of work. By being neutral, it appeals to non-miners more than it would otherwise.
Mining rewards are decreasing (instead of being constant) to motivate earlier miners to secure the network while it is young and more vulnerable.
The reward changes every 210,000 blocks (about four years) to ensure optimal growth of the network. If the interval were too short, all the bitcoins would be generated too quickly, before a wide network could be created. If it were too long, it would effectively decrease the reward of the early adopters, making the network more vulnerable.
The protocol is a list of rules that every client must follow in order to validate transactions and have their transactions validated by others. Hence, if you change the rules for yourself, other clients will simply reject your transactions and you probably will not be able to accept theirs. This makes it hard to change the protocol.
If there is a change that a vast majority of clients find useful, then it is possible to publicly agree that starting with block number X, the new rules will apply. This gives everyone a certain amount of time to update their software.
Please send your questions and comments here: oleganza@gmail.com
Twitter: @oleganza
If you like this FAQ, you may donate 0.1 BTC on this address: 1TipsuQ7CSqfQsjA9KU5jarSB1AnrVLLo.
Scrypt uses a very big vector of pseudorandom bit strings. A straightforward implementation generates the vector once and uses it to generate keys. To parallelize the computation efficiently, one must either spend a lot of memory or compute the elements on the fly. The trick is that the whole vector is itself very expensive to compute, which makes it generally more efficient to perform the computation on a single CPU using a single vector.
The message-eating nil is a built-in or optional feature of some programming languages that lets you silently ignore messages sent to a nil (or null) object. Objective-C does this. In Smalltalk, Ruby and some other languages you can add this behaviour at runtime.
Why is it useful? For instance, accessing person.address.street_name will simply yield nil if either person, address or street_name is nil. Another example is iterating over nil as if it were an empty list, without checking whether it is nil.
Some like message-eating nil because it saves a lot of boring code and helps avoid some silly crashes. Others dislike the feature on the grounds that it hides errors and makes it more difficult to reason about all the code paths.
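Here is a small Objective-C sketch of the difference, using hypothetical Person and Address classes (the names are made up for illustration):
// Hypothetical model classes.
@interface Address : NSObject
@property(retain, nonatomic) NSString *streetName;
@end
@interface Person : NSObject
@property(retain, nonatomic) Address *address;
@end
// With message-eating nil: if person or person.address is nil,
// the whole chain simply evaluates to nil. No guards, no crash.
NSString *StreetOf(Person *person)
{
    return person.address.streetName;
}
// Without the feature, the same access needs explicit checks:
NSString *StreetOfChecked(Person *person)
{
    if (person != nil && person.address != nil) {
        return person.address.streetName;
    }
    return nil;
}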
However, here I present a definitive answer to the question whether your next programming language should or should not support message-eating nil.
Nil should be message-eating.
Here is why: when you switch from a language without message-eating nil to one that has it, you only spend a week or two adapting to the new style and being puzzled from time to time about where the hell the data went missing. After a longer period of time, you will change your style and find it useful and easy to program with this feature. But when you switch from such a language to one without message-eating nil, you will notice just how many useless if/then conditions are being added to your code. And when you forget to add one somewhere, you will get silly crashes in production code. Silly because you already know the nil would have been handled had it been allowed to propagate.
Please, allow nil to eat messages.
PS. If you think of adding message eating to NilClass in Ruby, remember that metaprogramming can be dangerous.
Xcode 4.4 supports subscript syntax and allows migrating code from [arr objectAtIndex:0] to arr[0]. A few tweaks are needed, however:
1. While the OS X 10.8 SDK already contains objectAtIndexedSubscript: and objectForKeyedSubscript:, the iOS 5.1 SDK does not. To make the compiler happy, you should add this:
#if __IPHONE_OS_VERSION_MAX_ALLOWED < 60000
@interface NSDictionary(IGSubscripts)
- (id)objectForKeyedSubscript:(id)key;
@end
@interface NSMutableDictionary(IGSubscripts)
- (void)setObject:(id)obj forKeyedSubscript:(id)key;
@end
@interface NSArray(IGSubscripts)
- (id)objectAtIndexedSubscript:(NSUInteger)idx;
@end
@interface NSMutableArray(IGSubscripts)
- (void)setObject:(id)obj atIndexedSubscript:(NSUInteger)idx;
@end
#endif
2. To deploy the new syntax back to iOS 5 and iOS 4, you need ARCLite. Somehow ARC itself works without extra configuration, but subscripting requires an explicit linker flag:
In project settings add to Other Linker Flags: “-fobjc-arc”
3. If you use Edit -> Refactor -> Convert to Modern Objective-C Syntax, it will replace -[NSLocale objectForKey:] with square-bracket syntax, which is not supported by NSLocale. This is the only bug I have found in the automatic translation. NSCache and NSUserDefaults are not touched by the migrator.
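For reference, with the category declarations above in place, the migrated code looks roughly like this (values are illustrative):
NSArray *names = @[@"Alice", @"Bob"];
NSString *first = names[0];                      // calls -objectAtIndexedSubscript:
NSMutableDictionary *ages = [NSMutableDictionary dictionary];
ages[@"Alice"] = @30;                            // calls -setObject:forKeyedSubscript:
NSNumber *age = ages[@"Alice"];                  // calls -objectForKeyedSubscript: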
The problem is that it is not supported by anybody except for rare libertarian thinkers. Here is my (very inaccurate) demonstration of common types of mindsets showing why that is so.
Entrepreneurs, by the very definition of their job, have to deal with whatever situation is at hand and not waste time discussing political philosophy. They need to get the job done within the existing framework, whether it is a monarchy, a democracy or socialism. A theory of justice is not good guidance, especially if it goes against the existing rules. An entrepreneur tends to be efficient first, moral second; otherwise a more efficient one will take his place. The entrepreneur's perspective is solely that of his own enterprise and its profitability. If it is not, another one, more focused and more efficient, will quickly come along to win the customers.
Politicians by definition oppose libertarianism. Politicians fight for power, their piece of the pie. While the entrepreneur tries to maneuver within the existing rules as efficiently as possible in order to create his product, the politician is interested in changing or creating rules according to his own ideas of what is good and bad. (Sure enough, a single person may combine both roles, but it is useful to analyze them separately.)
Regular wage earners do not fight for power like politicians, nor do they build their own enterprises and products. They focus on their own work and life and prefer a stable income. Those of them who are interested in any social philosophy are not going to like libertarianism very much, for it does not promise them anything in particular. Every politician, left or right, promises safety, stable prices and free stuff, but only a libertarian will promise you that you are going to earn what you deserve, no less, no more.
Who remains, then? Those who are not starving, who have the time and cultural background to study things, and who have no desire or skills to fight for power or build a particular enterprise, are in the most favorable position to start learning about libertarianism. And still many of them will not be convinced, at least for the reasons outlined above.
Here are the implications of this realization. First of all, there is no threat (or hope) that the libertarian movement will suddenly make a big impact. Second, those who have any interest in libertarianism have to admit that they will not attract many supporters, due to the very nature of the theory. When people say that libertarians "define their reality in their own heads", it is a sign that they are not interested a priori. There is little hope that people will "get interested" if you repeat your idea over and over again. It just pisses them off. Look at the people around you: everybody is interested in their own benefits (material or psychic).
A socialist who promises a particular policy and particular effects based on carefully chosen historical data points is far more efficient at convincing a random person than a libertarian who carefully analyzes the nature of all human action and then comes up with something vague and unimpressive like "everybody will be able to freely pursue their own happiness". To believe the socialist you just need to be convinced by some data points and concrete goals. But to believe the libertarian you need to study all that stuff yourself, because on the surface it looks either "crazy" or "simplistic". Not many people have the time and energy to even try.
People argue that Apple has plenty of room to grow by pointing at its market share in terms of units, which is very small (less than 10% among mobile phones). However, the only correct way to predict growth is by looking at revenue. And not only Apple's revenue, but also the general money supply and the distribution of revenue across all other industries and competitors.
First of all, units do not tell you much. Very different categories of products and prices hide behind the units. A person who buys a $50 phone does not usually consider buying an iPhone for $500. Or he may consider buying an iPhone instead of a cheap phone, a handheld game console, a wristwatch, a calculator and a flight to visit his parents. We cannot predict anything here in terms of units because we cannot compare things in aggregate; only individuals can compare the importance of particular units for themselves.
Fortunately, our economy already has an efficient instrument: money. We use money to exchange units of any good imaginable. Each person allocates his money to various needs according to his personal subjective valuation. First he allocates money to the most urgent need, then to the less urgent, and so forth. So one person might prefer paying for a wedding ring instead of an iPhone this month, while another will decide to pay for a Samsung phone instead of going to a hairdresser. These seemingly incomparable things are comparable only through the money allocated to them. When masses of consumers increasingly prefer one product over all other ways to spend their money, you get a nicely growing revenue chart.
This means that revenue is an indicator of how much value all consumers put on your product. What is revenue share, then? First, it depends on how you define the market, that is, which products you compare your product with. In the case of the iPhone, it could be the mobile phone market: all the money people spend on mobile phones.
Today Apple has about an 8% unit share and a 40% revenue share in mobile phones. This tells us that so far people spend 40% of their "phone budget" on iPhones and 60% on other phones.
So how do we know how iPhone sales will behave in the future? Revenue share is not enough to decide, as the "phone budget" may grow or shrink compared to other goods. We need additional charts for all the other goods people spend money on. If we see that people allocate less to gaming devices (and other specialized gadgets replaceable by mobile apps), we may guess that they will spend more on an iPhone as a replacement. But if at the same time they spend more money on food, water and guns, it might mean they are preparing for bad times and iPhone sales won't grow much.
So Apple's room for growth today is determined by how much money people are willing to give them compared to all other possibilities. Thus, we need to understand the whole market: sales of related gadgets, rent prices, migration, inflation, etc. And 8% of mobile phone units tells us a lot less than a 40% share of mobile phone market revenue together with the general mobile phone market growth (the total amount of money spent on all phones).
So if in two years people allocate twice as much money to phones in general, and within that budget double the share they spend on Apple phones (from 40% to 80%), Apple will get an 80% revenue share and four times bigger revenue.
What about profits? Apple is said to take about 75% of the industry's profits. This does not tell us much about the future direction of growth, but rather about the speed and precision of that growth (or decline). Profits show the efficiency of the company. Today Apple is more efficient than others, so it gets more money back to reinvest into production. That means it has a stronger influence on the market than other companies, but it still does not tell us which direction it will go: that is up to consumers and to Apple's shareholders and management to decide.
What about "every phone will be a smartphone"? Well, if 80% of people still prefer to pay very little for their phones (which are becoming smarter over time) and Apple still keeps a high price for the iPhone, the unit share will not change dramatically. On the other hand, if Apple invents an iPhone that replaces your car and your wife at the current price, its unit share will increase significantly, as people will rush to pick a $500 iPhone instead of a car or a wife if it can replace both.
John Gruber writes:
I’ve always thought Apple’s cash hoard was about freedom. That cash meant — and means — that they don’t have to answer to anyone. […] Apple can’t control its stock price; that’s in the hands of investors. But it can control how much cash it keeps in reserve. If investors sour (or the market crashes) and the stock price dips, Apple could take itself private.
This quote is a good example of a common misconception about who really owns a company. It is, in fact, the shareholders who own and control the company. They are simply happy to delegate that control to the current board of directors for as long as it is doing great. But when it isn't, it will quickly be replaced and the shares will be sold to less pessimistic people.
About freedom: it only makes sense to speak about freedom where some coercion takes place, like government regulations and taxes or Somali piracy. Otherwise, it is all about mutually beneficial partnerships. More cash helps in negotiating things, but it does not buy any more freedom from any of the partners.
Apple is not even an acting entity. It is a mode of acting of a group of people: shareholders, directors, employees. The company is used only to pool investments and limit the liability of the owners and employees. And the real owners of Apple's assets are, of course, the shareholders.
Therefore, Apple cannot get "free" from "investors" by buying itself and going private. Tim Cook and others may buy Apple shares with their own money, but Apple's cash does not belong to them at all.
Today shareholders are happy with what Tim Cook is doing. If tomorrow he announces a really stupid way to use the company's cash, the owners will either sell their shares or install another board of directors and CEO.
What can Apple really do? They can invest in startups, people, factories, etc. They can buy the government to liberate themselves from regulations and taxes.
They can also pay dividends, but that is not a smart thing to do. There are only two ways to spend dividends: consume them or reinvest them in some other companies. While Apple is the biggest and fastest-growing company, it makes very little sense to invest dividends into something else or to consume them. See also: Dustin Curtis on the subject.
It is quite self-evident that many developers think good tools "attract" developers to a platform and increase its business prospects. But that is of course bullshit.
Microsoft, Apple and Google are not in the business of selling developer tools. They sell their actual products to actual customers and optimize their production process to make those products better.
Improvement of a developer tool is not a function of your (the developer's) satisfaction and productivity alone. It is a function of your productivity and whatever design choices the company makes about their actual products. So you are only one part of the equation. And normally the smaller one.
If the iPhone is memory- and CPU-constrained, Apple is free to decide not to use garbage collection and thus make developers less productive. They might lose a non-paying developer (who potentially would have written a killer app), but they gain real paying customers.
Now, from the developer's perspective: developers are in the business of making products for people, not coding for fun. So they need a platform with demand for their products. (Those who code for fun do not affect business decisions anyway.) Tool quality here is just a production cost like many other costs. If the tool is so abysmal that its productivity cost consumes all the profits, then the platform won't attract the developer. But if it is good enough, it will of course be used, provided the platform brings income to the developer. In other words, the primary force is customer demand for the vendor's and third parties' products. Every other factor is secondary.
Here’s a quite recent quote:
“But for now, Objective-C remains difficult to approach; only the appeal of writing hit iOS apps seems to be driving its popularity.”
Actually, the only purpose of Objective-C is to write hit apps (and the only purpose of apps is to satisfy paying customers). And if it remains "difficult to approach" for you, then you in particular do not envision a particular app worth writing given the current cost of mastering the tool.
Some people are asking Apple to fix Radar. They demand a better UI, the ability to open and comment on bugs, and integration with Xcode.
This is such bullshit. Developers do not need Radar. Apple needs it. And they make it good enough for themselves, not for third-party developers. If the UI sucks and they get 10 times fewer bug reports than people would love to file, it must be something they are okay with.
Personally, I never cared much about Radar. If I noticed an annoying issue worth filing, I would file a bug in a minute and be done with it. I don't care about browsing existing issues and figuring out whether there is already a duplicate. It's not my job, after all. Apple's engineers know better which bug contains new information and which is a pure duplicate. They deal with multitudes of product versions and different devices. I just have a couple of devices in some particular configurations.
Radar is a black hole. It would be more comfortable to get a quick response like "yes, we care, stay tuned". But what for? Apple has told us many, many times: please file radars, we keep track of all of them and nothing goes unnoticed. Do you really need this statement to be repeated for every request? I'm happy with the "fire and forget" method: I spend minimal time "managing bugs" and Apple somehow manages to fix the problems over time. They won't tell you their roadmap anyway, so what feedback do you want after all?
Imagine Apple allowed discussing radars in public. Instead of many individual "votes" carefully filed by developers and classified as duplicates by Apple engineers, there would be fewer individual bugs, covered with less informative comments like "me too" and "+1". Essentially, that would mean that not Apple, but the most active users are classifying the issues, which makes Apple less efficient at figuring out its priorities. And the "most active developers" are absolutely not the same as the "most paying customers".
Better UI and Xcode integration? Apple just needs a UI suitable for their own comfort, no more. If they made the UI very slick and fast, many more people would file the same issues and Apple engineers would have to sort through a much bigger pile of duplicate bugs. And do not forget that every feature is a responsibility. Do they really need to constantly spend more time on a fancy bug reporting UI when the existing one works just fine?
Conclusion: file bugs if you wish and forget about them. Once submitted, it is Apple's job to deal with them. If you want to participate beyond that (that is, fix the bugs), then you already know what to do.
Following the discussion on Hacker News about how MS had the resources and liberty to experiment with their developer tools, yet the new Visual Studio 11 is committee-designed and based on the same crufty UI which is many years old now.
Both Apple and Microsoft are free to throw away old problematic UI and rethink some parts from scratch. If anybody is hurt in the process, it is not their bottom line. Microsoft sells Excel and Apple sells computers; they don't have to prove anything to anybody with their developer tools, except to themselves. And they have all the resources and expertise for that, of course.
However, there is a difference in approaches. Apple, compared to others, is not afraid to break things while executing ideas they think are great. For instance, when Xcode 4 was released, it integrated Interface Builder very neatly into the main window, but broke support for third-party UI components. So while some people now have to type a bit of boring code to set up those components, many others are enjoying the unified workflow. Also, the first builds of Xcode 4 were very slow. The Xcode team found a way to represent tons of useful information in only three panels with very little cognitive load, but didn't take the time to optimize performance. It was super-useful and super-annoying at first. But that was a non-trivial decision on their part.
When Apple released Final Cut Pro X with an awesome new design, increased productivity and a 70% price drop, they omitted some very important features. They eventually added the missing stuff, but before that they got a huge shitstorm from customers for not having everything they needed from the beginning.
As with Xcode 4 and FCP X, nobody was forced (even in a weak sense of the word) to upgrade. Xcode 3 worked well (I was using it on a slow notebook), and FCP 7 was not killed either. What Apple had was basically the courage for a "release early, release often" type of operation. Why did they make these particular compromises and not others? It was important for them to make a great new design and try it out as soon as possible at full scale. There is absolutely nothing interesting in performance optimization and bug fixes. You already know it will be awesome if you do them.
When you are designing, you are taking the risk of deciding for others, and you don't know if you are right until you show it. It also means you cannot ship a half-designed product. The reaction to an unfinished (that is, not thought-through) design is skewed: it does not help you understand whether you got it right. It is not a problem if the product is slow and buggy; those things do not obscure (or at least should not obscure) the vision of how the product works.
Since you have to spend time designing every aspect of the product down to the last detail to get sensible feedback, you don't have much time left to resolve less relevant issues. You are already quite late, for that matter. And if your decisions need improvement, it is better to know that before you start optimizing them.
So in the end, every new Apple product has what they consider a finished design with new ideas, but with rough edges like crashes, performance issues or some less relevant features omitted to be reworked later. Those who have time to polish the secondary aspects of their products are not pushing forward hard enough.
If you like the post, follow @oleganza on Twitter and buy my well-designed version control app: Gitbox. It’s super-smart, fast, and, of course, sometimes buggy ;-)
If you define some variables in project settings that are included in Info.plist (usually the build version), you may notice that Info.plist is not always up to date. This is because when you change the variable in project settings, Info.plist itself is not modified, so Xcode may skip its processing.
If you try to add a Run Script phase with "touch $PROJECT_DIR/Info.plist", it won't help much, because it always runs after the processing of Info.plist. At best, Info.plist will be up to date every other build. It is very confusing, to say the least.
How to fix:
1. Add a new target “Other” — “Aggregate” with a name “TouchInfoPlist”
2. Add Run Script phase with this line: touch $PROJECT_DIR/Info.plist
3. Go to your actual product targets, select the Build Phases tab and add a dependency on TouchInfoPlist.
4. Edit your schemes and remove the "TouchInfoPlist" scheme. You'll never need to run it directly.
Enjoy an always up-to-date Info.plist.
Apple has a lot of great documentation: from the very basic guides and tutorials down to particular API references. But for a very long time newcomers were wondering: where should I start?
Now you know the answer:
For iOS: Start Developing iOS Apps Today
For OS X: Your First Mac App
The iOS guide is even better: it shows you all the necessary areas from beginning to end, giving relevant links at each step. Just start with the first page and move on.
Yesterday you had many great ways to create and digitally distribute your content. First, you could make a website with the latest gorgeous web technologies, perfectly accessible with every modern browser on all major computer platforms. Or create an ebook based on the open EPUB standard, also supported by all major reading software and devices. A third option was to make a movie (encoded in a couple of widely supported video formats, from FLV to H.264 and Ogg). Finally, you could write a native app for Windows, Mac, iOS, Android, etc.
Every platform vendor (Microsoft, Apple, Google, Facebook, Adobe and others) works hard every day to give you better ways to create and communicate with people.
Today Apple released three apps: iBooks 2, iBooks Author and iTunes U. Each app is available for free. iBooks now runs interactive books created with iBooks Author, and iTunes U integrates books and apps with video materials in a very useful UI.
So today you have one more wonderful option in addition to those listed above. Yet some people start screaming about "lock-in", about how Apple does not care enough about education to make everything open, about the many evils of proprietary formats, proprietary apps, proprietary operating systems and proprietary devices.
These people are fantasizing about a world where everything is right and cheap, and every good is available in an infinite number of options. And at least one of those options is exactly right for them (and, for fairness' sake, there must also be a right one for every other person too).
Guess what: this is the world we are living in. Everybody has her own idea of the perfect world order, and as long as we are not enslaving or destroying ourselves, our world indeed moves towards better ways to live a life. Nothing is 100% right for you, but the same is true for everybody else. And this is why we all work together every day to make ourselves happier.
Today’s Apple announcement is just one more achievement of human civilization, in addition to iPad, Android, Windows XP, World Wide Web, printing press and alphabet. Choose what you like in any combination and go do something great with it.
I have a single-core chip in an iPhone 4 and an app with OpenGL rendering controlled by touch events.
This morning the app was rendering graphics on the main thread. 90% of CPU time was spent on graphics, 10% on gesture recognition and related computations. Overall CPU utilization was about 30%.
Those 30% were noticeable: touch events were processed with delays, and the frame rate was low and not very stable.
In order to make the app snappier, I moved the OpenGL rendering onto a separate serial dispatch queue. Now the event loop was much less loaded, and I expected an overall improvement: not a higher framerate, but a more stable one, with more accurate touch recognition.
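The change itself was trivial; roughly something like this (the names here are illustrative, not the actual app's code):
// Created once: a serial queue dedicated to rendering.
dispatch_queue_t renderQueue = dispatch_queue_create("renderQueue", DISPATCH_QUEUE_SERIAL);
// Called from the touch handler on the main thread:
dispatch_async(renderQueue, ^{
    [renderer drawFrame]; // OpenGL work now runs off the main thread
});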
In reality, the rendering was indeed slightly smoother, but touches were still delayed.
The profiler now showed that 70% of CPU time was spent on graphics (in the background thread) and 30% on gesture recognition. Also, overall CPU usage increased to about 50%.
In terms of the raw performance of the algorithms nothing changed at all. The threading code that was added is a simple dispatch_async() call consuming almost no time.
Now, I have a theory that explains this. Since the main event loop became 90% less loaded after moving graphics to a background thread, it was able to process more touch events per second. The additional gesture recognition computations increased the load on the single CPU core, making graphics rendering slower than expected.
As a result, the framerate did not improve but became more stable, and touches did not get much smoother because of the increased pressure on the CPU and the main thread.
It turns out that rearranging stuff on a single core does not really help unless it is accompanied by actual performance improvements.
// Concatenates two blocks into one block that calls block1, then block2.
// If either block is nil, the other one is returned as-is. (Manual reference counting.)
void(^OABlockConcat(void(^block1)(), void(^block2)()))()
{
    // Copy the blocks to the heap so they survive beyond the caller's stack frame.
    block1 = [[block1 copy] autorelease];
    block2 = [[block2 copy] autorelease];

    if (!block1) return block2;
    if (!block2) return block1;

    void(^block3)() = ^{
        block1();
        block2();
    };
    return [[block3 copy] autorelease];
}
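A possible use, for illustration:
void(^combined)() = OABlockConcat(^{ NSLog(@"first"); },
                                  ^{ NSLog(@"second"); });
if (combined) combined(); // logs "first", then "second"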
1. Download “Default Apps” Preference Pane: http://www.macupdate.com/app/mac/14618/rcdefaultapp
2. Select “URLs”
3. Select the URL scheme, e.g. "github-mac".
4. Change the app.

In May 2010 I started experimenting with a UI for staging and committing to a Git repository, because it is not easy to "git rm" a bunch of files that have already been moved to the trash (tab completion does not help). Also, I was tired of typing "gs" ("git status") all the time just to see that everything is the way I want. Soon I realized I could make the app much more useful if I could integrate history cleanly without too many additional elements. I spent a couple of days thinking about it and realized that the "stage" is a special case of a commit. I quickly came up with a two-pane window: history on the left and changes on the right. The stage was a special item at the top of the history. Since then a lot of good design decisions came quickly. Branches went into drop-down menus in the toolbar, with a pull/push button between them.
Gitbox 0.9 looked very simple.
Quickly, everybody at our Pierlis office started using it. Staging, committing, pulling and pushing became much more efficient than in Terminal or any other GUI, starting from version 0.1. It was my first Mac app (I had already done some iOS projects for our customers); I loved it, but I was still shy about releasing it. In June at WWDC10 my boss told me once more that "real artists ship", and I shipped Gitbox version 0.9 for free. It looked clumsy and I really worried that nobody would like it.
But it was a success. A lot of people got interested in the concept of a "one-button" Git client. Never before had a version control app, especially for Git, been expected to be "simple" or "minimalistic" yet useful. A lot of people dismissed it for its lack of features. But many fell in love with it from day one and continued using it daily. For many it became the only way to work with Git repos without exploding their brains.
I got quite serious about it. I decided to polish it, add several more important features, streamline the UI (mostly by adding a sidebar for the repos), and released it at the end of November 2010 for $40, with a free version limited to 3 repositories (and no time limit).
It was a success again. Over a couple of days it made about $3000, which was totally unexpected, as I was merely hoping to upgrade my notebook. Over the following months it steadily brought in a noticeable profit that helped pay for my wedding and honeymoon later in 2011.
I was amazed that something I designed, wrote, productized and marketed was actually appreciated by consumers. I even appreciated that keygens were released within a week after each new version.
I developed a long list of features and tweaks, re-architected the whole app several times (hello, responder chain and GCD!), and tried to keep focus and regularly deliver updates with fixes, improvements and wonderful new features. It is still a "weekend" project, so I don't have a lot of time to work on it. But what matters more is to keep going and improve it every day, at least a little bit.
Gitbox is still lacking some interesting things like a built-in diff viewer, line-by-line staging, a tree view or submodules. Those will come soon. But many more important things are already done: a very responsive UI, instant full-history search (even by diff contents), undo for common operations like commit, pull and push (with more to be added in later updates), ubiquitous drag and drop, and powerful keyboard shortcuts. Also, a lot of stuff that would cripple and complicate the UI was deliberately left out. Some power features were delayed until the right place for them was found.
I consider Gitbox the right UI for Git version control. The core design principles have worked very well so far and have allowed me to extend it to more and more new capabilities. The UI is still, as in v1.0, very simple and clean, but it is able to do much more, and it will do even more in the future.
Thank you all.

Some people would love to use proportional fonts for programming code, but blame languages and text editing software for not being ready for them yet. The problem is that most proportional fonts (e.g. Helvetica, Lucida Grande) make punctuation characters too narrow, making many statements hard to read.
Compare:
Monospaced font: if ([link isKindOfClass:[NSURL class]])
Proportional font: if ([link isKindOfClass:[NSURL class]])
Programming languages usually contain a lot of little syntactic features that are important to read and notice. One extra pair of parentheses or a semicolon may change the whole meaning of a line of code. And, unlike in real text, you cannot afford a single typo.
If monospaced fonts look clunky, that’s because the programming languages normally require equal attention to every single character.
When you need to do CPU-heavy work, there is a nice pattern using GCD: jump to a background queue to do the work, and then back to the caller's queue to report the results. Do not forget to retain the caller's queue, because it may be deallocated while you are doing the background work. Although the compiler inserts retains for the ObjC objects referenced within blocks, queues are declared as opaque C structs (dispatch_queue_t), so they won't be retained automatically.
dispatch_queue_t callerQueue = dispatch_get_current_queue();
dispatch_retain(callerQueue);
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    // Do the work in the other thread...
    // Example: NSArray* items = parseJSON(data);
    dispatch_async(callerQueue, ^{
        // Report results to the caller...
        // Example: [self didParseItems:items];
        dispatch_release(callerQueue);
    });
});
Ruby is very similar to Objective-C because both share a lot with Smalltalk. However, Ruby is a messy language on many levels. Many consider this mostly a feature, not a bug, but it does make interoperability with Objective-C less pleasant.
1. Ruby has a messy syntax. It inherits features from Perl, Bash, Python and C-like languages and mixes them together without any clean distinction. There are always many ways to do the same thing, and it is never obvious which one is best. With Objective-C it is much easier to write a lot of boring and obvious code than with Ruby. Of course, this property appeals to coders who love funny things in their code, but it does not play well with people who want to ship huge, complex applications and systems on time.
2. Ruby has a messy standard library. Many things in the standard library are not well thought out, not consistent with other things, redundant, or lacking essential functionality. Things like File vs. FileUtils and Date/Time/DateTime plus the ActiveSupport extensions demonstrate horrible inconsistencies in some core areas.
3. Ruby has a messy culture. Ruby coders *love* smart tricks and hate boring code. Everybody wants to DRY all the code, for better or worse, or pretend to be a cool meta-programming Lisp hacker (not the best professional quality in itself). People are not careful about keeping their libraries as humble as possible. Many people are okay with injecting methods into core classes, doing a lot of metaprogramming tricks, and being proud of it. All this leads to maintenance issues and is not compatible with the Objective-C culture.
These three things make Ruby incompatible with Objective-C and the Cocoa frameworks.
It is important to add that none of this is unique to Ruby; it applies to pretty much any hobby technology, particularly one backed by an open source community.
Apple clearly puts a lot of effort into making the LLVM infrastructure and Objective-C a powerful general-purpose toolkit for themselves. If they miss some feature, they are more likely to add it to LLVM and/or ObjC rather than spin off additional separate projects. They already have WebKit and JavaScript, and that covers a lot of what they want to do. And they are bold enough and smart enough to improve both technologies whenever they want or need to.
Forget about MacRuby, MacPython and other languages in that respect; they are clearly interesting only to people who focus more on typing code than on shipping products.
I would love to have the following feature in Objective-C:
@property(retain, auto) id something;
Where the “auto” attribute would mean:
1) automatic @synthesize (already present in modern Clang compiler)
2) “nonatomic” if not stated otherwise
3) automatic release (if “retain” or “copy” is used) and nullifying of the instance variable in dealloc
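A minimal sketch of the boilerplate this hypothetical attribute would save (assuming manual reference counting; the property name is just an example):

@property(nonatomic, retain) id something;  // “auto” would imply “nonatomic”…
@synthesize something;                      // …and the @synthesize…

- (void) dealloc
{
    [something release];  // …and this release (because “retain” is used)
    something = nil;      // …and nullifying the instance variable
    [super dealloc];
}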
Xcode 4 is slowly replacing Xcode 3 for my projects. The benefits are becoming more important than the problems. So here is my list of things I like and dislike about Xcode 4:
Like:
1. Tabs
2. “Open Quickly” with fuzzy search by filename (cmd+shift+O), like cmd+T in Textmate.
3. Jump bar
4. Smarter autocompletion
In the end, tabs and the jump bar turned out to be reason enough to use Xcode 4 despite all of its annoyances. On a complex project, window management consumes a lot of time.
Dislike:
1. The major problem with Xcode 4 is performance: switching between tabs, opening sidebars, autocompletion — all happen with a noticeable delay even on an iMac with an i7 processor and 8 GB of RAM. Bonus track for the latest MacBook Air: sometimes Xcode 4 eats 150-200% of CPU doing something (whether or not the debugger is launched). And when the debugger is connected to the iOS Simulator, the fans just never stop. I hope this will be resolved in the next updates.
2. No retina display mode yet in Interface Builder (but IB 4 is slow as hell, so I use IB 3 instead anyway).
3. Stupid (== “too smart”) assistant editor. When I choose another file manually, it is sometimes reset to some counterpart. And Open Quickly always opens in the left pane. Assistant editor has no value to me until these issues are fixed.
4. LLVM and LLDB. Turns out the latest LLVM compiler has subtle bugs with nested blocks (sometimes self becomes an incorrect pointer), and LLDB does not display values for properties declared without corresponding ivars.
There are some things about Xcode 4 I don’t really care about:
1. Git support. I have Gitbox anyway, which offers a complete set of operations (committing from Xcode is a nightmare and pushing is simply not possible). Surprisingly, I’ve never used blame or the in-editor diff. Those features sound useful, but in practice they are rarely needed and feel sluggish.
2. IB drag-and-drop to source code to create actions and outlets. Xcode is slow and produces some awful code which needs cleanup anyway. Same thing goes for code snippets that take more time to find/drag/edit than to simply type the stuff.
3. iTunes-like status panel. I usually collapse the toolbar anyway, so I don’t see all those distracting animations.
When you develop some interesting software, you try to make the architecture as simple, boring and flexible as possible. When you think hard about user interaction or work around some weird system integration, the code should get out of the way. Whatever language you choose, you try to stick to some limited set of features and use them predictably and consistently. Ninja tricks are interesting on their own, but better not mixed into an actual product.
As the complexity and amount of stuff to be designed and improved grows, you become more conservative about any language or technology you use. All of a sudden, discussions about C++ template complexities, brace styles etc. look completely strange and irrelevant.
Your code is so simple, you wonder why you should write it at all. Most of the typing is spent on defining objects, giving names and connecting things together. Every interesting algorithm is carefully packaged out of the evolving structure and is rarely touched again.
The file system tree does not feel expressive enough. You need some sort of rectangles on the screen to lay out the components and hide some parts within others, for goodness’ sake.
You become even more productive if some of those rectangles provide immediate feedback, e.g. user interface elements in the Interface Builder.
The future of software development is not the language or syntax, but the interactive tools with instant feedback and sophisticated data organization.
Why are people so obsessed with the NFC buzzword?
The only safe and understandable way to conduct a payment with a phone is with a protocol like this:
1. The shop sends a payment request to your bank (via the shop’s bank or directly).
2. Your bank pings your phone and waits for confirmation.
3. You take your phone out and confirm the payment. You can do this securely over Wi-Fi, 3G, EDGE, Bluetooth, NFC and any other communication technology which lets you speak TCP/IP and finally (after going through all the proxies and routers) connect to the Internet.
4. When the bank gets the confirmation, it acknowledges the transaction and tells the shop about it.
5. The shop issues a ticket and you walk away.
This protocol is safe (unlike modern credit card processing) because you never trust someone else’s device (you trust only your phone) and you never give away any secret information (like a credit card number or a PIN code).
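A purely hypothetical sketch of steps 2-3 from the phone’s side (every name here is invented for illustration): the bank pushes a confirmation request, the user approves it on the phone, and the answer goes back to the bank over whatever connection is available.

- (void) bank:(OABankConnection*)bank didRequestConfirmation:(OAPaymentRequest*)request
{
    // Show the amount and the merchant to the user; never show or send any secrets.
    [self askUserToConfirmAmount:request.amount
                        merchant:request.merchantName
                      completion:^(BOOL approved){
        [bank sendConfirmation:approved forRequest:request]; // goes back over any TCP/IP link
    }];
}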
The only tricky thing here is how to give the shop your banking ID (which can be a phone number), so they can send it to your bank which will contact you for confirmation. This can be done in many different ways:
0. Tell the ID to the shop assistant: simple to understand, but it needs remembering and typing in. Since we need a phone to confirm the payment anyway, we don’t even count this option.
1. Show a barcode on the phone’s screen and scan it. You need to launch a payment app anyway, so why not display your bank ID as a barcode on its first screen so it can simply be scanned.
2. Use NFC to announce your ID to the shop. This is like barcode scanning, but without the optics. It has its issues, though. If there are many devices nearby, the receiver may confuse your device with someone else’s, or recognize yours more slowly. Everybody knows how slow Bluetooth is.
3. Do it in reverse with NFC: the shop publishes your bill and (if you are lucky) only your phone will see it, so you can send it to your bank.
To me, the most usable ways to conduct payments are #1 and #2. And #1 seems to be simpler and faster (but feels “lo-tech”).
The bottom line is: NFC is not a requirement for payments with a phone. Any communications tech that connects to your bank will do, and there are different ways to announce your banking ID.
I also hope that phone payments won’t be done the same way credit card processing is: by giving away secret codes and trusting the shop to confirm the transaction.
Gitbox went on sale in November using the old-school method: download a free version from gitboxapp.com, then upgrade to the paid version by buying a license.
Today Gitbox is available on the Mac App Store as well. What does this mean for you?
If you have already purchased Gitbox, you don’t need to “connect” it to the App Store. First, it is impossible to do for free: you’d have to buy it again, from Apple. Second, you won’t miss much. Gitbox is a single-version application: there is no “lite”, “full”, “appstore” or “non-appstore” variant. The functionality is all the same. (The only difference is that the App Store binary has different autoupdating and license-checking mechanisms.) Gitbox already provides automatic updates for free. There is one nice feature of the non-App Store purchase: updates can be released within minutes instead of a week.
Note that the App Store marks the app as “installed” if it sees it on disk, even if it was never downloaded from Apple. If you want to purchase it from Apple (maybe you have not purchased a license yet), drop the app in the Trash and restart the App Store: the purchase button will become available. Your preferences won’t be affected.
So how do you decide where to purchase the app? Both distribution channels are great: the App Store is more controlled but sometimes much more convenient; the other is more flexible but less integrated into the OS. I believe it is important to keep both options available to you, but I want to avoid any confusion. So here is my policy:
1. Prices and discounts will always be the same and synchronized for both stores. The app is the same, hence the price is the same.
2. I will do my best to release big updates synchronously on both stores. I usually don’t release more often than once a week or two, so it is quite possible to adjust to the App Store review delays.
3. In case of security updates or critical bug fixes, I will post an update immediately, even if the App Store does not publish it as quickly as I do on my website.
Enjoy Gitbox and buy it where you like. You will get the same support and love everywhere.
How do you add a view (spinner, text field, button etc.) into the cells of NSTableView or NSOutlineView?
Simple:
1. Keep a reference to the view in your NSCell.
2. In drawInteriorWithFrame:inView: you should create the view if needed and add it to the controlView if needed. The controlView is provided as the second argument to this method.
3. Position the view according to the cellFrame (first argument to the drawing method).
4. Do not forget to retain or nullify the view in the copyWithZone: method. Remember that copy and copyWithZone: copy instance variables as-is, without retaining object pointers when you might need that.
Correction on January 6th, 2011:
There is no point in keeping a reference to the view in the cell. After the cell is drawn it is often deallocated immediately, so the view would stay visible forever with nothing left to manage it. You need to keep the reference to the view in some external, non-volatile object: a view, a view controller, or a model.
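A minimal sketch of steps 2-3 with the correction applied (the class and property names are hypothetical; the spinner is owned by an external controller, the cell only positions it):

@interface OASpinnerCell : NSCell
@property(assign) NSProgressIndicator* spinner; // owned by an external controller, not by the cell
@end

@implementation OASpinnerCell
@synthesize spinner;
- (void) drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView*)controlView
{
    if (spinner.superview != controlView)
    {
        [controlView addSubview:spinner]; // add once; controlView is the second argument
    }
    [spinner setFrame:NSInsetRect(cellFrame, 2, 2)]; // position according to cellFrame
    [super drawInteriorWithFrame:cellFrame inView:controlView];
}
@end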
I’m happy to announce that Gitbox has reached its first major milestone: the first commercial release. It is a great version control app for working with Git repositories. Instead of cutting down the powerful but complicated concepts of Git, Gitbox embraces them with a truly elegant user interface. Many people actually start using branches in Git thanks to Gitbox.
Download Gitbox 1.0 now. Use coupon GITBOXNOV before December 1st to get a 30% discount.
Since the last preview version, a lot of things have changed. I have worked out a strong vision of what kind of product I want to create. As a part of it, I have redesigned the user interface and reengineered the underpinnings. Now all the repositories live inside a single window and the app itself is running on Grand Central Dispatch (GCD) on Snow Leopard. Translation: Gitbox is faster and easier to use.
A couple of thoughts on licensing policy: usually commercial software comes in two flavors, full and trial. Here’s the problem: when I download a trial version it is usually limited to 14-30 days of free use. I may try the software for a couple of minutes, then put it aside and forget about it until I have a real need for it (or some very handy feature is released). When I come back to a newer version, it turns out I cannot try it any longer!
Gitbox does not do that. You may try it right now and for as long as you want. You also have all the features available with only one fair limitation: only one repository opened at a time. Why is it fair? Because if you don’t find Gitbox useful enough to pack it with all your repositories and use it every day, I don’t want your money. Instead, I would be happy to listen to you and make it better.
When you do buy a license, you get more than you paid for. First, all updates are free (some really cool features are coming soon). Second, you may use the app on all your machines without any sort of spyware, activation, etc. The only limitation is that the license is for personal use. If you want to buy Gitbox for a group, you should buy an appropriate number of individual licenses. Contact me if you’d like to get a discount in that case.
I will release new features and incremental design improvements regularly in the form of free software updates. As the app becomes more powerful and better designed, the price is likely to rise. Since the updates are free, this idea should convince you to buy a license early at a lower price ;-)
I’m very thankful to my family, colleagues at Pierlis and all the folks who were using preview versions and giving a lot of priceless feedback.
Let’s get it started.
In a software business, functionality is an asset, but code is a liability. The less code needs your attention, the lower your costs and risks.
OOP is all about making stuff work, packaging it into an object with as small an interface as possible, and building other stuff around it without going back and tinkering with that package. Note to Java people: it does _not_ mean the object should fit everything. It should fit at least a single task and be reliable at that task. The point is reliability, not reusability.
This concept is called “encapsulation”. It is not a way to make the code nice. It is a way to minimize your costs and risks and finally ship.
“All normal operations on a binary search tree are combined with one basic operation, called splaying. Splaying the tree for a certain element rearranges the tree so that the element is placed at the root of the tree.
A top-down algorithm can combine the search and the tree reorganization into a single phase.”
http://en.wikipedia.org/wiki/Splay_tree
The splay tree modifies itself every time it is searched, becoming more and more efficiently organized over time.
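A small sketch of the self-adjusting idea in plain C. This is the simplified move-to-root heuristic, not the real zig-zig/zig-zag splay that gives the amortized O(log n) bound, but it shows how a search reshapes the tree:

typedef struct Node { int key; struct Node *left, *right; } Node;

// Rotate the left child up into the parent’s place.
static Node* rotateRight(Node* p) { Node* c = p->left;  p->left  = c->right; c->right = p; return c; }
// Rotate the right child up into the parent’s place.
static Node* rotateLeft(Node* p)  { Node* c = p->right; p->right = c->left;  c->left  = p; return c; }

// Search for a key and rotate the found node (or the last node on the search
// path) one level closer to the root as the recursion unwinds.
Node* splaySearch(Node* root, int key)
{
    if (root == NULL || root->key == key) return root;
    if (key < root->key) {
        if (root->left == NULL) return root;        // not found: stop here
        root->left = splaySearch(root->left, key);  // bubble it up inside the left subtree
        return rotateRight(root);                   // then rotate it above this node
    } else {
        if (root->right == NULL) return root;
        root->right = splaySearch(root->right, key);
        return rotateLeft(root);
    }
}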
Every day as a software developer you have to invent some abstractions. Simply speaking, you have to decide where to put the new code. After you decide this, you write more code and repeat the process. Sometimes the earlier decisions need to be changed and you refactor the existing code. Now you decide where to put the old code.
I really need a hint. The OOP folks teach us to model the real world. Just look at the problem domain, they say, and you will see where the things belong. It works great until you hit some system-specific pure abstractions and there is no natural metaphor to help you.
Try another approach. Since the initial question is where to put the code, and refactoring is about moving the code around, why not make the code itself easily movable? How about making the code copy-paste friendly?
The first idea that comes to mind is to wrap it in an object. Yes, it might solve the problem. But at what cost? Creating an object means defining an interface (class, protocol, whatever), which creates another entity in the program and eats part of your brain. Not always a good idea when you are already stuck figuring out the best place for just ten lines of code.
When you are trying to solve a problem, do not hurry to create another one. Relax, put the code somewhere it is easy to move from, and make it depend on the nearby code as little as possible. Usually you do so by putting the dependent data in local variables. You can later turn them into function arguments or object properties.
When you make the code movable, you can (sic!) move it around and isolate it more and more over time. Maybe 5 minutes later you will discover you don’t need it at all. Or that it should be simplified and moved into a function. Or that it should gain more functionality and become an object. Or that it should be split into two different tasks. All of these questions become much easier to answer when you keep the code simple, stupid, light and isolated just enough. Just enough to copy and paste it.
Early approaches to concurrency
When machines were big and slow, there was no concurrency in software. Machines got faster and people figured out how to make multiple processes run together. Concurrent processes proved to be extremely useful and the idea was carried further to per-process threads. Concurrency was useful because it powered graphical interactive applications and networking systems. And those were becoming more and more popular and more advanced.
For some tasks, concurrent processes and threads presented very difficult challenges. Threads participate in preemptive multitasking, that is, a system where threads are forcibly switched by the kernel every N milliseconds. At the same time, threads have shared access to files, system interfaces and in-process memory. Threads do not know when they are about to be switched out by the system, which makes it difficult to safely acquire and release control over shared resources. As a partial solution, different sorts of locks were invented to make multi-threaded programs safe, but those didn’t make the work any easier.
Typical code in a multi-threaded environment:
prepareData();
lock(firstResource);
startFirstOperation();
unlock(firstResource);
prepareMoreData();
lock(secondResource);
startSecondOperation();
unlock(secondResource);
finish();
Modern concurrency
The next approach to concurrency was based on the realization that the problem with shared resources lies in the very definition of “shared”. What if you create a resource with strictly ordered access to it? Sounds counter-intuitive: how can this be concurrent? Turns out, if you design the interface like a message box (that is, only one process reads it and nobody blocks waiting for a response), you may build many such resources and they will work concurrently and safely. This idea was implemented in many sorts of interfaces: Unix sockets, higher-level message queues and application event loops. Finally, it found its way into programming languages.
Probably the most widespread programming language today, JavaScript, features function objects that capture the execution state for later execution. This greatly simplifies writing highly concurrent networking programs. In fact, a typical JavaScript program runs on a single thread, and yet it can control many concurrent processes.
Mac OS X 10.6 (Snow Leopard) features a built-in global thread management mechanism and language-level blocks that make writing concurrent programs as easy as in JavaScript, while taking advantage of any number of available processing cores and threads. It is called Grand Central Dispatch (GCD), and what it does is perfectly described by the “message box” metaphor. For every shared resource you wish to access in a concurrent and non-blocking way, you assign a single queue. You access the resource in a block which sits in the queue. When the block is executed, it has exclusive access to the resource without blocking anybody else. To access another resource with the results of the execution, you post another block to another queue. The same design is possible without blocks (or “closures”), but it turns out to be more tedious and limiting, resulting in less concurrent, slower or unstable programs.
Modern concurrent code looks like this:
prepareData();
startFirstOperation(^{
    prepareMoreData();
    startSecondOperation(^{
        finish();
    });
});
Every call with a block starts some task on another thread or at a later time. The block-based API has two major benefits: the block has access to the lexically local data, and it executes on the proper thread. That is, it eliminates the need for explicit locks, or for moving and storing local data explicitly just to make it available on the proper thread.
Think of it this way: every block of code inside the curly brackets is executed in parallel with the code it was created in.
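A minimal sketch of the “message box” idea (the queue labels and the functions in the comments are hypothetical): each shared resource gets its own serial queue, every access is a block posted to that queue, and results travel to another resource by posting another block.

dispatch_queue_t dbQueue  = dispatch_queue_create("com.example.db",  NULL); // serial queue guarding the database
dispatch_queue_t logQueue = dispatch_queue_create("com.example.log", NULL); // serial queue guarding the log file

dispatch_async(dbQueue, ^{
    // Exclusive access to the database here; nobody blocks waiting for us.
    // Example: saveRecord(record);
    dispatch_async(logQueue, ^{
        // Report the result by posting another block to another queue.
        // Example: appendLine(@"record saved");
    });
});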
Future directions
The upcoming generation of software is already, or will be, written this way. But the block-based approach still isn’t perfect. You have to manage queues and blocks explicitly. Some experimental languages and systems already have transparent support for “continuations”: the code looks linear, as in the blocking style, but the process jumps between different contexts and never blocks any threads:
prepareData();
startFirstOperation();
prepareMoreData();
startSecondOperation();
finish();
This is much more natural and looks like the naïve approach we started with and fixed with locks. However, to make it work concurrently we have to learn from GCD and take it to the next level.
When you start some operation which works on a different resource and can take some time, instead of wrapping the rest of your code within a block, you put the current procedure into a paused state and let the other procedure resume it later.
Imagine that instead of discrete blocks of code, the kernel manages continuously executing routines. These routines look very much like threads, with an important exception: each routine gives up execution voluntarily. This is called cooperative multitasking, and such routines are called coroutines. Still, each routine can be assigned to a thread just like a block, or be rescheduled from one thread to another on demand. So we retain the advantage of multi-processing systems.
Example: you have a web application which does many operations with shared resources: it reads/writes to a database, communicates with another application over the network, reads/writes to the disk, and finally streams some data to the client. All the operations should usually be ordered for each request, but you don’t want to make a thread wait each time you have some relatively long-running operation. Also, it is not efficient to run multiple preemptive threads: there is a cost to switching threads and you get all sorts of trouble with random race conditions. GCD and blocks help for the most part, but if you use them for every single operation on a shared resource, you will get enormously deep nested code. Remember: even writing to a log means accessing a shared file system, which had better be asynchronous.
15 years later
Today, a lot of trivial operations like writing to a disk or accessing a local database do not deserve asynchronous interfaces. They seem fast enough, and you can still throw more threads or CPU at them to make some things faster. However, coroutines will make even these trivial tasks asynchronous and sometimes a little bit faster. So why is that important anyway?
Coroutines are important because every shared resource will get its own independent, isolated coroutine. That means every resource will have not only private data and private functionality, but also a private right to execution. The whole resource will be encapsulated as well as any networking server. The file system, every file, every socket, every external device, every process and every component of an application will have a coroutine and complete control over when to execute and not execute. This will mean that there is no need for shared memory and a central processor. The whole RAM+CPU tandem can be replaced with a GPU-like system of hundreds of tiny processors with private memory banks. Memory access will become much faster and the kernel will not need to waste energy switching threads and processes.
A single design change which makes programming easier will make a shift to a much, much more efficient architecture possible. It won’t be just faster, it will be efficient: while servers could be 100 times more productive, personal devices could be 10 times faster while consuming 10 times less energy.
30 years later
By the time operating systems support coroutines and a truly multi-processor architecture, new applications will emerge with capabilities we can only dream about. Things like data mining, massive graphics processing and machine learning currently live mostly in huge data centers. Twenty years later this will be as ubiquitous as 3D games on phones are today. These tasks will require more memory. Finally, storage will be merged with RAM and the processor, and processing huge amounts of data will become much more efficient.
Given such a great advance in technology, humanity will define its unpredictably unique way to educate and entertain itself. As we get closer to that time, it will become more clear what is going to be next.
There are two very different kinds of information visualizations. And I don’t have pies and bars in mind.
The first kind is for presenting knowledge. You have already discovered some interesting facts and now need to present them clearly to the reader. Your task is to design the most appropriate form for the data, so that the knowledge becomes obvious. (Of course, you should not be dishonest and lie to the reader through perspective distortions or by shifting the zero point.) Sometimes you may even drop the figures and just show the data geometrically. The charts should have little noise: no lines, no ticks, no labels. Curves can be smoothed and some nice 3D effects can be applied. Present as little data as necessary. Prefer geometric objects to tables.
The second kind is for discovering knowledge. You have raw data and no particular idea about what could be interesting about it. In that case you need a format which lets you discover the knowledge. Compared to the first kind of visualization, here you might want more additional information, most probably an interactive table or graph to play with. Add some extra hints: a grid, absolute and relative values, confidence intervals, etc. Of course, this form should be much more honest and complete than a presentation of the first kind. No special effects, no smoothed curves. Prefer tables and simple charts to fancy geometric objects.
When presenting data, the first thing to do is to decide what kind of problem you are solving. If you are presenting raw data, make it easy to work with and to find the knowledge in. If you have already found the knowledge, present it in the clearest form.
1. Breaking an article into multiple pages.
A page is a physical limitation of the paper medium. Sometimes the text does not fit and you have to push the last paragraph onto another page. On the screen you have plenty of vertical space and there is no excuse to cut off the reader’s context.
2. A lot of iPad newspaper apps simulate multi-column layout. They shouldn’t.
The purpose of a multi-column layout is to make articles’ layout more flexible on a big newspaper page. On a wide page you can fit a couple of articles and an ad. But the screen is not that wide.
Narrow columns also require a small font size, which is a problem on a display with a resolution under 300 dpi.
Narrow columns require manually tuned hyphenation and sometimes font width adjustment. Books require this as well, but a narrower column looks even worse without it. Unfortunately, digital media today does not provide it.
If the column does not fit the screen, you constantly have to scroll down and up when reading a page: down when finishing the first column and up to proceed to the next one.
You can scroll and zoom the page on screen. If you make a single scrollable and zoomable column, you don’t need to provide font size controls or worry about how much of the content is visible. The reader can choose the most comfortable page size for herself.
3. A lot of people use footnotes on the web. This is horrendous: you have to leave the current line and scroll down. And even if you scroll by clicking a footnote number, you then have to scroll back. And even if there is a link from the footnote back (like in a Wikipedia article), the browser doesn’t scroll exactly to the position you were at before.
On the screen you have plenty of vertical space. And if you don’t use multiple columns (which you should not), you have some space on the side. That means you may put notes in a block of smaller type right below the paragraph, or on the side.
Summary
Do not break articles into pages. Do not break text into columns. Make the text column scrollable and zoomable. Put footnotes immediately under the paragraph, or on the side.
There are some predictions or wishlists floating in the tubes regarding an anticipated update to Mac OS X. Some of them are more probable, some less and some are just plain crazy. Let me give you my predictions and some commentary.
1. The next cat name is likely to be “Lion”. This is based entirely on a single picture from the invitation and is also the least interesting prediction. I don’t think it is going to be the “last” release in any sense.
2. The merge with iOS. First, Mac OS X already has some UI features borrowed from iOS: navigation buttons in Dock stacks, iPhoto and iTunes. There will be more of them. Maybe scrollview will be updated with more flat scrollbars, maybe some bouncing will appear (and if so, it will be off by default for the existing software).
No way there will be a touch-controllable UI for existing applications. The apps are not designed at all for multi-touch and the size of a finger. Even if Windows 7 supports this, there’s no reason for Apple to follow the same path. However, taking into account the dual-mode touch screen patent, it seems more probable that a Mac OS X machine might be transformed into an iOS device on demand. But Apple does not favor dual-mode UIs: they just create confusion for users and developers. Front Row is a rare example of a second UI mode (transforming the Mac into a focused media player). But iOS is considered a more or less full-featured environment with a far richer user interface than Front Row and at least as rich as Mac OS X. It is very unlikely that the iMac or MacBook will get two personalities which compete with each other and cooperate badly, producing huge confusion.
So, believing in a strong movement towards touch UI everywhere, we may expect not a dual mode, but a per-window fusion of iOS apps into Mac OS X. This has its issues too: file sharing is still not as smooth as what we expect on a desktop OS, and the iPad screen in portrait mode does not fit in the MacBook screen. And again, if you can touch and drag the iOS window, why not touch and drag other windows? And if you can touch and drag all the windows, why not touch all the buttons? And then the screen should be oriented horizontally, just like the keyboard or trackpad today. This is not easy to solve.
So UIKit multi-touch will eventually show up in some version of Mac OS X, but it is not as easy as some may believe. A less improbable prediction: Mac OS X will get a very conservative, slow introduction of the touchscreen with emphasized limitations to minimize confusion as much as possible.
3. An App Store for Mac OS X. This is a really good idea in its pure form, but once again it has some conceptual difficulties. Apple will not lock down Mac OS X as they did with iOS, so the store will compete with other distribution channels and Apple may be forced to lower their 30% cut. At the very same time they would have to retain an approval process to filter out crappy software they would be selling. Developers who are not happy with the commission and the approval process will distribute their apps on their own. But this is very hard to debate because there’s still no third-party app store for the Mac, so the place seems vacant. Or it is vacant because no one has been able to build a viable store business yet. Anyway, Apple is the most likely company to succeed at this, and if executed perfectly, it will attract a lot of developers, make both them and Apple much more money, and drag the Mac even further up the market share race.
4. Resolution independence (making the UI 1.25-1.5 times bigger). The Mac OS X team has been working on resolution independence for more than 4 years already. And still, on Snow Leopard the implementation is buggy and nowhere near “beta” status. The conceptual problem here is that this technology is aimed at scales of 1.25 and 1.5, not 2.0 like on the iPhone. And that is not as simple as multiplying everything by two. I guess displays with 2x higher resolution (for MacBooks at least) will become affordable before the 1.25 scale is fixed for all the shipping apps.
Oh, and do not forget that the “retina display” approach does not make things bigger for people with poor vision, it makes them sharper. Sharper text is somewhat easier to see, but not as easy as 1.5x bigger text. Apple may realize that system-wide smooth resolution scaling is not worth tinkering with and that full-screen zoom is enough for solving vision problems. My bet is on retina displays, with the old resolution independence framework being put on the shelf.
5. Finder improvements. Some folks dream about a tabbed Finder. The problem is that the file system is hard enough already. Adding tabs just complicates the look of the Finder and makes the file system even scarier. Even if tabs find their way into the Finder, they will be disabled by default. Just like tabs were disabled in the earlier versions of Safari.
What would be really cool is a merge of Dock Stacks with Quick Look and a merge of Quick Look with other apps. This is pure speculation. Have you noticed how easy it is to jump through the folders in a Dock Stack? The buttons are big, and once you find the file you want, the window disappears automatically. Quick Look also disappears easily. The Finder, on the other hand, creates clutter: you have too many individual Finder windows all over the desktop. Tabs do not remove the clutter, they just organize it. Maybe what we need is not to organize it manually, but to have something like a “recent folders” list and jump through it using Quick Look.
How many times have you started a movie in Quick Look and played it long enough to forget that it is not a stand-alone player? And then you do something in the Finder and the movie disappears! Take a look at iCal: if you open an event, a popover window appears with the details. This window behaves much like Quick Look: do something else and it disappears. But if you move it a little bit, it transforms into a stand-alone panel which sticks on the screen until you close it. The same idea could apply to Quick Look. It would be super useful to transform a folder preview into a Finder window, a movie preview into a QuickTime window, etc.
6. iChat with FaceTime, iCal like in iPad, iLife, iWork updates: this all is possible. The question is the timing: maybe not all of that will be tomorrow, but only some. I don’t expect super-cool features here, but more like an evolution and improvements.
7. Macs won’t support Blu-ray drives. I haven’t heard about Blu-ray from any of the people I know. Those who really need it may buy an external drive.
8. There won’t be NTFS mounts or a built-in VM for Windows. Not because there is a fight with Microsoft; Apple simply doesn’t have time for features most people don’t need. Boot Camp was an important thing in 2006 to bring in more customers. Nowadays Apple does not mention “switching” anymore. There are already plenty of ways to communicate with Windows, both built-in and supplied by third parties.
9. Mac OS X distributed as a free software update. Recently Apple lobbied for an accounting rules change to be able to distribute free iOS updates for non-subsidized products like the iPod touch and iPad. This makes the platform more vibrant and many more devices stay up to date. By making the Mac OS X update free, Apple can accelerate adoption of their technologies and bring better and more exciting applications to the Mac.
Edit: forgot to add that a lot of goodies from UIKit, MapKit, EventKit etc. might well be ported to the Mac APIs. The NSTableView might learn about recyclable views from UITableView.
Don’t tie an external resource’s lifetime (for instance, a file descriptor’s) to the object’s lifetime. Never start any process in a constructor/initializer. Have dedicated “start” and “stop” methods (or “open”/“close”) for managing the resource.
Don’t mix data construction and performing a procedure. Whenever you have a method which takes 10 arguments, it is time to create a separate object just for that procedure. Give it 10 properties and a “start” method. Later you’ll be able to add more configuration options and alternative invocation APIs to this object in a clean way.
In general, when you begin understanding OOP, you tend to treat everything as an object. A data structure, a complex procedure, a network connection, a physical device — all become objects. The trick is, while you encapsulate all those things into objects, you shouldn’t confuse the *object* with the *thing* it manages. The object is a manager, a driver for the thing, but not the thing itself. An object may have a language-oriented API and a thing-oriented API. Don’t mix them. The first API lets you create, initialize, inspect and destroy the object. It must have no impact on the thing. To manipulate the thing, you write thing-specific methods.
Quick test for compliance: you should be able to instantiate a valid object, set its properties in any order, inspect it and destroy it without any effect on the other objects and things around.
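A minimal sketch of both rules (all names here are hypothetical): construction and configuration have no side effects, and the external resource is managed only by start/stop.

@interface OADownload : NSObject
@property(nonatomic, copy) NSURL* url;
@property(nonatomic, copy) NSString* targetPath;
// ...more configuration properties can be added later without breaking callers
- (void) start; // opens the connection and the file descriptor
- (void) stop;  // closes them; safe to call at any time
@end

// Usage: instantiate, set properties in any order, inspect, then start.
OADownload* download = [[OADownload alloc] init];
download.url = [NSURL URLWithString:@"http://example.com/file.zip"];
download.targetPath = @"/tmp/file.zip";
[download start];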
Gitbox is a nice little interface for Git. I wrote it primarily for myself and my friends to optimize everyday operations. Go download the app from the website and come back here for details.
Gitbox displays each repository in a little separate window. Each window has 3 parts: branches toolbar, the history and the stage.
The toolbar makes branch operations a lot simpler. You always see which branch you are on and which remote branch is associated with it. The branch menus let you check out an existing branch, a new branch, a remote branch or a tag.
The history shows commits for both the current local branch and remote branch. Commits which are not pushed yet have a little green icon, so you will never forget to push your changes. Commits on the remote branch which are not yet merged (pulled) are marked with a blue icon and a light-grey text. And you may switch remote and local branches to compare them.
The stage shows the changes in the working directory. You may stage and unstage a change using checkboxes, or just select the changes you want to commit and press the “Commit” button.
Gitbox updates the stage status every time you focus its window and fetches remote branch commits while in the background.
Gitbox will be free for a limited amount of time. Prices and conditions will be announced later. Check for updates regularly!
Follow Gitbox updates on twitter @gitboxupdates.
Please send questions, bugs and suggestions to my email address: oleganza@gmail.com (or twitter)
“Crash-only programs crash safely and recover quickly. There is only one way to stop such software – by crashing it – and only one way to bring it up – by initiating recovery.”
“It is impractical to build a system that is guaranteed to never crash, even in the case of carrier class phone switches or high end mainframe systems. Since crashes are unavoidable, software must be at least as well prepared for a crash as it is for a clean shutdown. But then – in the spirit of Occam’s Razor – if software is crash-safe, why support additional, non-crash mechanisms for shutting down?”
iA writes about the over-realistic design of iPad apps. I guess it’s only the beginning: a way to catch attention. As with Mac OS X, Apple and other developers will gradually remove unnecessary pieces as people get more familiar with the device.
— Never do any work that you can get someone else to do for you
— Avoid responsibility
— Postpone decisions
— Managers don’t do any real work
— Premature optimization leaves everyone unsatisfied
— Try not to care
— Just do it!
— It’s not a good example if it doesn’t work
— Steal everything you can from your parents
— Cover your ass
“Each object will do a seemingly insignificant amount of work, but somehow they add up to something much larger. You can end up tracing through the system, looking for the place where a certain calculation happens, only to realize that the calculation has been done and you just didn’t notice it happening.”
When you buy one, the first thing you see is the “Connect to iTunes” screen. You need some “big” computer to start using it. If I’d like to buy one for my grandma, who does not have and cannot use a modern desktop computer, I have no problem doing the initial setup with my MacBook.
The only way to back up your data is, again, to connect to iTunes. Most of the apps keep their data on a server (by the way, I hate when people say “in the cloud”), but you still have photos, notes and documents on the device. I don’t know whether MobileMe and iWork.com actually back up the data or just share selected files, but they could easily be extended to do just that later, when more people try to use the iPad as a primary device, not just as a “node in a digital hub”. Right now, MobileMe offers e-mail hosting which can also synchronize notes. But the current version of iPhone OS does not offer notes sync using an e-mail account (while Mail.app on Mac OS X does).
If my grandma has a problem with her iPad, she might lose her pictures and notes. However, my grandma in particular is not going to take many photographs or notes, so that is not much of a problem.
As of now, the only obstacle to making the iPad the only computer in the house is getting rid of the big-brother iTunes requirement by replacing it with an internet service that does the very same thing. I bet Apple is moving towards making iTunes a 100% capable web application in addition to the desktop version.
Monzy — Kill dash nine
Coder Girl
IE is being mean to me
Write in C
Zed Shaw — Matz Can’t Patch (show text)
“I could hardly believe how beautiful and wonderful the idea of LISP was [McCarthy 1960]. I say it this way because LISP had not only been around enough to get some honest barnacles, but worse, there were deep flaws in its logical foundations. By this, I mean that the pure language was supposed to be based on functions, but its most important components — such as lambda expressions, quotes, and conds — were not functions at all, and instead are called special forms.
Landin and others had been able to get quotes and conds in terms of lambda by tricks that were variously clever and useful, but the flaw remained in the jewel. In the practical language things were better. There were not just EXPRs (which evaluated their arguments), but FEXPRs (which did not). My next question was, why on earth call it a functional language? Why not just base everything on FEXPRs and force evaluation on the receiving side when needed?
I could never get a good answer, but the question was very helpful when it came time to invent Smalltalk, because this started a line of thought that said “take the hardest and most profound thing you need to do, make it great, and then build every easier thing out of it”. That was the promise of LISP and the lure of lambda — needed was a better “hardest and most profound” thing. Objects should be it.”
— Alan Kay, The Early History of Smalltalk (1969)
Tony Albrecht, Technical Consultant at Sony, tells a story about memory performance and object-oriented programming style.
If you read the story carefully, you will notice that the performance problem was actually solved by writing class-specific allocators (that keep objects of the same kind in a contiguous array) plus doing the recursive algorithm in two passes.
Tony knows very well what happens at the hardware level, but he is not good at object-oriented programming. Let’s see what is wrong with his code.
In the beginning, their code was not well organized: instead of multiple distinct objects with small independent data structures, they had fat nodes with matrices, vectors and other structures built in. Because of that, the various kinds of data were interleaved and didn’t play well with the cache.
First optimization: they made the code object-oriented by factoring big data structures like Matrix and Vector out of the Node. Then they used encapsulation (a fundamental principle of OOP) to provide custom allocators with contiguous memory zones. This is only possible in properly object-oriented code, where objects of one kind do not interfere with objects of another kind other than via explicit messages. So you can optimize the memory layout for Matrices, provided their behavior is not changed, and you will not break some other part of the code. OOP helped gain 35% in performance.
Second optimization: they split the recursive update procedure into two phases to avoid going bottom-up and adding the WBS (world bounding sphere) more than once per parent node. This would save about 25% of CPU time (assuming a binary tree and no overhead on leaves). But they actually got about a 50-60% increase because they used a contiguous allocator for nodes like they did with matrices and vectors.
This is all understandable. But there are two design decisions which are not justified:
1. In the first optimization Tony claimed that “excessive encapsulation is BAD” (slide 20) and thus decided to pull raw pointers to the arrays of matrices and vectors out of their respective nodes, into the loop which iterates over the nodes (slide 89):
for (int k = 0; k < innerSize; k++, wmat++, mat++, bs++, wbs++)
{
    *wmat = (*parentTransform)*(*mat);
    *wbs = bs->Transform(wmat);
}
Do you see those wmat, mat, bs, wbs pointers? These are private things pulled out of the node objects under the claim that “excessive encapsulation is BAD”. Now the object does not control its data, and once you’d like to add another special-effects matrix over the node, you’ll have to learn not only the Node class, but the entire rendering codebase!
This is how it should be done actually:
for (int k = 0; k < innerSize; k++)
{
    children[k]->updateWithParentTransform(*parentTransform);
}
Where updateWithParentTransform does the job involving wmat, mat, wbs and bs and gives you a guarantee that this is the single file where these variables are accessed directly.
Also note that this method will be perfectly inlined by a C++ compiler or a smart dynamic Smalltalk/Self/JVM system, so the resulting code will do the same operations and memory accesses as the manually inlined code with “naked” private pointers.
2. The second claim is to “Make the processing global rather than local” (slide 73). This is also awfully wrong. Tony suggests splitting the tree of nodes into arrays of nodes sorted by level. It is not only inflexible (or requires quite complicated algorithms to maintain the invariant), but is also pointless.
We already have those class-specific contiguous allocators which put nodes close to each other. We have already extracted the huge data structures from the nodes, so that we may keep a lot of nodes in just a fraction of the L2 cache while the rest of it is used for matrix operations. And we have already split up the algorithm so that the parent’s boundary is not updated too often. But still, he claims some performance gain from the fact that nodes are traversed not recursively but linearly, using quite a brittle memory layout.
There is no point in that, since node objects are so small that most of the data you need to update children using the parent’s transformation matrix is already in the cache. And for cached data it does not matter how it is positioned: the access time is constant.
Not only did he trade nothing for more complicated code, he also made it harder to move from a single CPU to multiple CPUs (say, a GPU): only recursive algorithms and encapsulation give you an option to parallelize the computation. By flattening the algorithms and breaking encapsulation, Tony cut off his way to scale the performance horizontally (or, equally, made it harder for an automatically parallelizing compiler to do its job).
It is very important to know how things work at the low level, but it is also important to know how to encapsulate low-level complexity and free your mind for greater things.
Update: Tony replied showing that I’m not entirely right. (March 18, 2010)
In the last two months I had an opportunity to build two versions of the same application: one for iPhone and one for Android. Both applications are basically navigation/tableview-based browsers for an existing French website. Absolutely nothing extraordinary about the whole thing, but it is interesting how similar features can be accomplished on competing platforms.
The Good
First of all, you don’t have to register or pay a fee in order to start developing and testing on a device. Also, you may work on Windows and Linux, but I have not tried that out.
There’s a very flexible layout framework which allows you to position elements relative to each other, to the parent and to the content. You may wrap a layout around its content, but also tell it to fill N% of the free space, or fill the whole parent width (or height). Android layouts are much more flexible and still simple compared to Cocoa and even HTML+CSS. Even though the Eclipse IB-like plugin sucks, the XML layout language is easy to learn and is not hard to type.
Layouts seem to be simpler and more lightweight than iPhone views: on the iPhone I have to render the tableview cell by hand (all those tedious pixel calculations: paddings, margins and conditional layout depending on missing text) to maintain smooth scrolling; on Android a straightforward XML layout for a cell was enough. This is a real time-saver.
Resolution-independent metrics are very powerful: you have regular points and pixels (device-dependent), device-independent points (dips) and device- and textscale-independent points (sips). Elements with dip-dimensions will be sized equally on different screens and elements with sip-dimensions will scale according to user preferences.
The Bad
The first bad thing about Android is Java. Though it is not the language or the VM; it is the way people write Java code. They make it fundamentally complicated in every single place. My application does not have any sophisticated features. It is damn simple. And the iPhone has simple tools for that. Android and Java have tools to build a space shuttle.
Every single thing in Java (and I’m speaking about both the legacy java.lang.* and the modern android.* APIs) makes you use a couple of classes and a couple of interfaces. The classes themselves usually inherit 4 levels of superclasses and countless interfaces. For the sake of code reuse, the core functionality you use on a single object is usually split across several classes, which makes you switch between various doc pages an enormous number of times. This puts great pressure on the developer’s brain: in the process of building a 5-minute feature you have to load your head with almost useless hierarchical structures.
Java developers would say that on the other hand you get a rich reusable toolkit. In fact, a simple thing like network connectivity (parse a URL, download asynchronously over HTTP, set and read HTTP headers) could not be done using a single doc. On the iPhone I’ve built a handy little OAHTTPQueue around just a couple of Cocoa classes: NSURL, NSURLRequest and NSURLConnection. I was learning Objective-C and Cocoa from scratch and it took just a couple of hours to implement a decent queue. When I switched to Android I already knew what I was going to build and how it should work. But it took almost three days to get through 3 (!) huge independent packages: android.net.*, org.apache.http.* and java.net.*. Each package had some useful bits on its own, but none was easy to pick up and build something with right away. None contained a simple asynchronous API. Finally, I ended up taking a single-thread executor from java.util.concurrent and using a blocking HTTP API from org.apache.http. The other options were about as high-level as writing to a socket by hand. The devil of Java is very well illustrated by the Apache HTTP library: not only does it have tons of classes and interfaces, these classes are scattered across 10 sub-packages. In Cocoa you can do all the same things with about 20 NSURL* classes, using 3-4 (!) of them 90% of the time.
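For comparison, a minimal sketch (not the actual OAHTTPQueue; the receivedData property and startNextRequest method are hypothetical) of an asynchronous download in Cocoa using only NSURLRequest and NSURLConnection:

- (void) startRequest
{
    NSURLRequest* request = [NSURLRequest requestWithURL:[NSURL URLWithString:@"http://example.com/data.json"]];
    [NSURLConnection connectionWithRequest:request delegate:self];
}

// NSURLConnection delegate methods:
- (void) connection:(NSURLConnection*)connection didReceiveData:(NSData*)data
{
    [self.receivedData appendData:data]; // receivedData: a hypothetical NSMutableData property
}

- (void) connectionDidFinishLoading:(NSURLConnection*)connection
{
    [self startNextRequest]; // hypothetical: pop the next request off the queue
}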
On average, for each Cocoa class there are 10 classes and interfaces in Android providing about the same functionality. In other words, Cocoa is 10 times more productive than Java.
Android lacks good programming guidelines. With the amount of options Java throws at you, guidelines are absolutely a must.
It seems Google does not care much about the phone. I got lots of stupid errors in MapView. It also lacks built-in pins, annotation pop-ups and callbacks for region-change events! I had to implement pins and pop-ups myself. And without useful callbacks, there’s a repeating timer which polls MapView properties and triggers a callback after the user stops dragging/zooming the view.
The Ugly
The UI is slow and does not feel right. Scrolling momentum is not natural, every screen opens with a noticeable delay, maps are slow and some screens are 16-bit. (Hello, it is 2010 already!)
An Android device has physical buttons. Only 2 of them wake the device from sleep: “menu” and “hang up”. The others are no-ops. Very confusing.
Every application has a hidden menu. It pops up when you press “menu”, which is a physical button. And then you have to tap an on-screen button. And to go back from wherever you are, you have to press a physical button again.
Android is an over-engineered and under-designed platform. It has an interesting multitasking model with a stack of activities, but it fails completely when it comes to the actual UI. There are physical “back” and “home” buttons. Pressing “back” removes the current activity from the stack. Pressing “home” pushes the home screen on top of the stack. All running activities remain in memory. And when you tap an application icon on the home screen, it opens the top-most activity of that application.
There are 3 lists of applications: on the home screen, on the “placard” which slides over the home screen, and somewhere in the Settings (where you have to go in order to remove an app). When you remove an app from the home screen it is not clear that you are not erasing it completely (the same issue exists with the Dock on the Mac).
I gave an HTC Tattoo phone to several people around me: everyone got confused by the navigation.
The End
Android UI is ugly, slow and complicated. Google is happy to put its apps and a search box in the system, but they are not interested in phone sales. Mobile carriers are interested in having an iPhone competitor, but they do not produce anything. Finally, manufacturers from China do not care about UI and global market strategy, they just produce devices.
Apple, on the other hand, is interested in both device sales and app sales. And they care about the whole chain: from hardware to end-user services.
Android seems to be dying already.
The average web-based shop works like this: there’s a front and a back. The front is sexy, the back is not. One group of people goes to the front to buy stuff, another group goes to the back to manage that stuff. These groups intersect only when those who go to the back pretend to be buyers. But still, there’s a huge distinction.
On the other hand, a real shop also has a front and a back. But those who sell stuff do not stay in the warehouse all day long turning knobs to rearrange stuff on the shelves. They actually go to the shop floor to put things in place, see how they look and how customers explore them. In other words, a physical shop is more WYSIWYG than a web-based one.
My suggestion is to outsource as few functions as possible to the warehouse pages. The shop owner should have control over what is presented to people. He should be able to immediately update prices and titles, rearrange things, and see some basic stats which help decide how well something sells.