Oleg Andreev

Software designer with focus on user experience and security.
You may start with my selection of articles on Bitcoin.
Translations of some of these articles into Russian.
Product architect at Chain.
Author of Gitbox version control app.
Author of CoreBitcoin, a Bitcoin toolkit for Objective-C.
Author of BTCRuby, a Bitcoin toolkit for Ruby.
Former lead dev of FunGolf GPS, the best golfer's personal assistant.
I am happy to give you an interview or provide you with a consultation.
I am very interested in innovative ways to secure property and personal interactions: all the way from cryptography to user interfaces. I am not interested in trading, mining or building exchanges.
This blog enlightens people thanks to your generous donations: 1TipsuQ7CSqfQsjA9KU5jarSB1AnrVLLo
NFC and payments with a phone
Why are people so obsessed with the NFC buzzword?
The only safe and understandable way to conduct a payment with a phone is with a protocol like this:
1. The shop sends a payment request to your bank (via the shop's bank or directly).
2. Your bank pings your phone and waits for confirmation.
3. You take out your phone and confirm the payment. You can do this securely over Wi-Fi, 3G, EDGE, Bluetooth, NFC or any other communication technology that lets you speak TCP/IP and finally (after going through all the proxies and routers) connect to the Internet.
4. When the bank gets the confirmation, it acknowledges the transaction and tells the shop about it.
5. The shop issues a receipt and you walk away.
This protocol is safe (unlike modern credit card processing) because you never trust someone else's device (you trust only your phone) and you never give away any secret information (like a credit card number or a PIN code).
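To make step 3 concrete, here is a minimal phone-side sketch in Objective-C. Everything in it is hypothetical: BankSession, PaymentRequest and the method names are made up for illustration, not an existing framework.

// Hypothetical phone-side handler: the bank reaches the phone over any
// TCP/IP channel; the phone only approves or rejects the request, and no
// secret (card number, PIN) ever leaves the device.
- (void)bankSession:(BankSession *)session didReceivePaymentRequest:(PaymentRequest *)request
{
    NSString *message = [NSString stringWithFormat:@"%@ asks for %@ %@",
                         request.merchantName, request.amount, request.currency];
    [self presentConfirmationWithMessage:message completion:^(BOOL approved) {
        if (approved) {
            [session approvePaymentRequest:request];   // the bank then acknowledges to the shop
        } else {
            [session rejectPaymentRequest:request];
        }
    }];
}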
The only tricky part is how to give the shop your banking ID (which could be a phone number) so they can send it to your bank, which will then contact you for confirmation. This can be done in many different ways:
0. Tell the ID to the shop assistant: simple to understand, but it has to be remembered and typed in. Since we need a phone to confirm the payment anyway, we don't even count this option.
1. Show a barcode on the phone's screen and let the shop scan it. You need to launch a payment app anyway, so why not display your banking ID as a barcode on its first screen?
2. Use NFC to announce your ID to the shop. This is like barcode scanning, but without the optical part. It has its issues, though: if there are many devices nearby, the receiver may confuse your device with someone else's, or recognize yours more slowly. Everybody knows how slow Bluetooth is.
3. Do it in reverse with NFC: the shop publishes your bill and (if you are lucky) only your phone sees it, so you can send it to your bank.
To me, the most usable ways to conduct payments are #1 and #2. And #1 seems to be simpler and faster (but feels “lo-tech”).
The bottom line: NFC is not a requirement for paying with a phone. Any communications technology that connects you to your bank will do, and there are several ways to announce your banking ID.
I also hope that phone payments won't be done the same way credit card processing is done today: by giving away secret codes and trusting the shop to confirm the transaction.
Mac App Store and external distribution
Gitbox went on sale in November using the old-school method: download a free version from gitboxapp.com, then upgrade to the paid version by buying a license.
Today Gitbox is available on the Mac App Store as well. What does this mean for you?
If you have already purchased Gitbox, you don't need to “connect” it to the App Store. First, it is impossible to do for free: you would have to buy it again, from Apple. Second, you won't miss much. Gitbox is a single-version application: there is no “lite”, “full”, “App Store” or “non-App Store” variant. The functionality is all the same. (The only difference is that the App Store binary has different auto-updating and license-checking mechanisms.) Gitbox already provides automatic updates for free. There is one nice feature of the non-App Store purchase: updates can be released within minutes instead of a week.
Note that the App Store marks the app as “installed” if it sees it on disk, even if it was never downloaded from Apple. If you want to purchase it from Apple (perhaps you have not yet bought a license), move the app to the Trash and restart the App Store: the purchase button will become available. Your preferences won't be affected.
So how do you decide where to purchase the app? Both distribution channels are great: the App Store is more controlled but often more convenient; the other is more flexible but less integrated into the OS. I believe it is important to keep both options available to you, but I want to avoid any confusion. So here is my policy:
1. Prices and discounts will always be the same and synchronized for both stores. The app is the same, hence the price is the same.
2. I will do my best to release big updates simultaneously in both stores. I usually don't release more often than once every week or two, so it is quite possible to accommodate the App Store review delays.
3. For security updates or critical bug fixes, I will post an update immediately, even if the App Store does not publish it as quickly as I do on my website.
Enjoy Gitbox and buy it where you like. You will get the same support and love everywhere.
Adding custom views to NSCells
How do you add a view (spinner, text field, button etc.) into the cells of NSTableView or NSOutlineView?
Simple:
1. Keep a reference to the view in your NSCell.
2. In drawInteriorWithFrame:inView:, create the view if needed and add it to the controlView if needed. The controlView is provided as the second argument to this method.
3. Position the view according to the cellFrame (the first argument to the drawing method).
4. Do not forget to retain or nil out the view in the copyWithZone: method. Remember that copy and copyWithZone: copy instance variables as-is, without retaining object pointers when you might need that.
Correction on January 6th, 2011:
There is no point in keeping a reference to the view in the cell. After the cell is drawn it is often deallocated immediately, so there would be nothing left to remove the view and it would stay visible forever. Keep the reference to the view in some external, longer-lived object instead: a view, a view controller, or a model.
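Here is a minimal sketch of the corrected approach: a longer-lived controller owns the spinner and hands the cell a non-retained reference. The class name, the property and the spinner are illustrative assumptions, not code from the original post.

#import <Cocoa/Cocoa.h>

@interface MyProgressCell : NSCell
// Non-retained: the view is owned by a view controller or another long-lived object.
@property(nonatomic, assign) NSProgressIndicator *spinner;
@end

@implementation MyProgressCell
@synthesize spinner;

- (void)drawInteriorWithFrame:(NSRect)cellFrame inView:(NSView *)controlView
{
    // Add the externally-owned view to the controlView on first draw.
    if (spinner && [spinner superview] != controlView) {
        [controlView addSubview:spinner];
        [spinner startAnimation:nil];
    }
    // Position the view according to the cellFrame.
    [spinner setFrame:NSInsetRect(cellFrame, 2.0, 2.0)];
    [super drawInteriorWithFrame:cellFrame inView:controlView];
}
@end

Because the reference is not retained, the default copyWithZone: behavior of NSCell is harmless here.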
Gitbox 1.0 released
I’m happy to announce that Gitbox has reached its first major milestone: its first commercial release. It is a great version control app for working with Git repositories. Instead of cutting down Git's powerful but complicated concepts, Gitbox embraces them with a truly elegant user interface. Many people start actually using branches in Git thanks to Gitbox.
Download Gitbox 1.0 now. Use coupon GITBOXNOV before December 1st to get 30% discount.
Since the last preview version, a lot of things have changed. I have worked out a strong vision of what kind of product I want to create. As a part of it, I have redesigned the user interface and reengineered the underpinnings. Now all the repositories live inside a single window and the app itself is running on Grand Central Dispatch (GCD) on Snow Leopard. Translation: Gitbox is faster and easier to use.
A couple of thoughts on licensing policy. Usually commercial software comes in two flavors: a full version and a trial. Here's the problem: when I download a trial version, it is usually limited to 14-30 days of free use. I may try the software for a couple of minutes, then put it aside and forget about it until I have a real need for it (or some very handy feature is released). When I come back for the newer version, it turns out I cannot try it any longer!
Gitbox does not do that. You may try it right now and for as long as you want. You also have all the features available, with only one fair limitation: only one repository open at a time. Why is it fair? Because if you don't find Gitbox useful enough to pack it with all your repositories and use it every day, I don't want your money. Instead, I would be happy to listen to you and make it better.
When you do buy a license, you get more than you paid for. First, all updates are free (some really cool features are coming soon). Second, you may use the app on all your machines without any sort of spyware, activation, etc. The only limitation is that the license is for personal use. If you want to buy Gitbox for a group, you should buy an appropriate number of individual licenses. Contact me if you'd like a discount in that case.
I will release new features and incremental design improvements regularly in the form of free software updates. As the app becomes more powerful and better designed, the price is likely to rise. Since the updates are free, this should convince you to buy a license early at a lower price ;-)
I’m very thankful to my family, colleagues at Pierlis and all the folks who were using preview versions and giving a lot of priceless feedback.
Let’s get it started.
OOP and business
In a software business, functionality is an asset, but code is a liability. The less code needs your attention, the lower your costs and risks.
OOP is all about making stuff work, packaging it into an object with as small an interface as possible, and building other stuff around it without going back and tinkering with that package. Note to Java people: it does _not_ mean the object should fit everything. It should fit at least a single task and be reliable at that task. The point is reliability, not reusability.
This concept is called “encapsulation”. It is not a way to make the code nice. It is a way to minimize your costs and risks and finally ship.
Splay tree
“All normal operations on a binary search tree are combined with one basic operation, called splaying. Splaying the tree for a certain element rearranges the tree so that the element is placed at the root of the tree.
A top-down algorithm can combine the search and the tree reorganization into a single phase.”
http://en.wikipedia.org/wiki/Splay_tree
The splay tree modifies itself every time it is searched, becoming more efficiently organized over time.
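For reference, here is a compact sketch of the top-down splay step in plain C, modeled on the classic public-domain implementation; the node structure and integer keys are assumptions for illustration.

typedef struct node {
    int key;
    struct node *left, *right;
} node;

/* Top-down splay: search for key and reorganize the tree so that the last
   node visited (the key itself, if present) becomes the new root. */
static node *splay(int key, node *t) {
    node N, *l, *r, *y;
    if (t == NULL) return t;
    N.left = N.right = NULL;
    l = r = &N;
    for (;;) {
        if (key < t->key) {
            if (t->left == NULL) break;
            if (key < t->left->key) {        /* rotate right */
                y = t->left;
                t->left = y->right;
                y->right = t;
                t = y;
                if (t->left == NULL) break;
            }
            r->left = t;                     /* link right */
            r = t;
            t = t->left;
        } else if (key > t->key) {
            if (t->right == NULL) break;
            if (key > t->right->key) {       /* rotate left */
                y = t->right;
                t->right = y->left;
                y->left = t;
                t = y;
                if (t->right == NULL) break;
            }
            l->right = t;                    /* link left */
            l = t;
            t = t->right;
        } else {
            break;
        }
    }
    l->right = t->left;                      /* assemble */
    r->left = t->right;
    t->left = N.right;
    t->right = N.left;
    return t;
}

Every lookup calls splay(), so frequently accessed keys drift toward the root and subsequent searches for them get cheaper.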
Copy-pastable code
Every day as a software developer you have to invent abstractions. Simply put, you have to decide where to put the new code. After you decide, you write more code and repeat the process. Sometimes the earlier decisions need to change and you refactor the existing code. Now you decide where to put the old code.
I really need a hint. The OOP folks teach us to model the real world. Just look at the problem domain, they say, and you will see where things belong. That works great until you hit some system-specific pure abstractions for which there is no natural metaphor to help you.
Try another approach. Since the initial question is where to put the code, and refactoring is about moving the code around, why not make the code itself easily movable? How about making the code copy-paste friendly?
The first idea that comes to mind is to wrap it in an object. Yes, it might solve the problem. But at what cost? Creating an object means defining an interface (class, protocol, whatever), which creates another entity in the program and eats a part of your brain. Not always a good idea when you are already stuck figuring out the best place for just ten lines of code.
When you are trying to solve a problem, do not hurry to create another one. Relax, put the code somewhere it is easy to move from, and make it depend on the nearby code as little as possible. Usually you do this by putting the dependent data in local variables. You can later turn them into function arguments or object properties.
When you make the code movable, you can (sic!) move it around and isolate it more and more over time. Maybe five minutes later you will discover you don't need it at all. Or that it should be simplified and moved into a function. Or that it should gain more functionality and become an object. Or that it should be split into two different tasks. All of these questions become much easier to answer when you keep the code simple, stupid, light and isolated just enough. Just enough to copy and paste it.
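A tiny, contrived illustration (the data and names are made up, and it assumes Foundation): the snippet depends only on its own local variables, so it can be pasted anywhere, later turned into a function taking names and query as arguments, or grown into an object with matching properties.

// Self-contained: everything the snippet needs is declared locally,
// so moving it around is a plain copy-paste.
NSArray *names  = [NSArray arrayWithObjects:@"master", @"develop", @"feature/ui", nil];
NSString *query = @"feature";
NSMutableArray *matches = [NSMutableArray array];
for (NSString *name in names) {
    if ([name rangeOfString:query].location != NSNotFound) {
        [matches addObject:name];
    }
}
NSLog(@"Branches matching %@: %@", query, matches);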
A history of concurrent software design
Early approaches to concurrency
When machines were big and slow, there was no concurrency in software. Machines got faster and people figured out how to make multiple processes run together. Concurrent processes proved extremely useful, and the idea was carried further to per-process threads. Concurrency was useful because it powered graphical interactive applications and networking systems, which were becoming more popular and more advanced.
For some tasks, concurrent processes and threads presented very difficult challenges. Threads participate in preemptive multitasking, that is, a system where threads are force-switched by the kernel every N milliseconds. At the same time, threads have shared access to files, system interfaces and in-process memory. Threads do not know when they are about to be switched out by the system, which makes it difficult to safely acquire and release control over shared resources. As a partial solution, different sorts of locks were invented to make multi-threaded programs safe, but they didn't make the work any easier.
Typical code in a multi-threaded environment:
prepareData();
lock(firstResource);
startFirstOperation();
unlock(firstResource);
prepareMoreData();
lock(secondResource);
startSecondOperation();
unlock(secondResource);
finish();
Modern concurrency
The next approach to concurrency was based on the realization that the problem of shared resources lies in the very definition of “shared”. What if you create a resource with strictly ordered access to it? It sounds counter-intuitive: how can this be concurrent? It turns out that if you design the interface like a message box (only one process reads it, and nobody blocks waiting for a response), you may build many such resources and they will work concurrently and safely. This idea was implemented in many kinds of interfaces: Unix sockets, higher-level message queues and application event loops. Finally, it found its way into programming languages.
Probably the most widespread programming language today, JavaScript, features function objects that capture their surrounding state for later execution. This greatly simplifies writing highly concurrent networking programs. In fact, a typical JavaScript program runs on a single thread, and yet it can control many concurrent processes.
Mac OS X 10.6 (Snow Leopard) features a built-in global thread-management mechanism and language-level blocks that make writing concurrent programs as easy as in JavaScript, while taking advantage of any number of available processing cores and threads. It is called Grand Central Dispatch (GCD), and what it does is perfectly described by the “message box” metaphor. For every shared resource you wish to access in a concurrent, non-blocking way, you assign a single queue. You access the resource in a block which sits in that queue. When the block is executed, it has exclusive access to the resource without blocking anybody else. To access another resource with the results of the execution, you post another block to another queue. The same design is possible without blocks (or “closures”), but it turns out to be more tedious and limiting, resulting in less concurrent, slower or unstable programs.
Modern concurrent code looks like this:
prepareData();
startFirstOperation(^{
    prepareMoreData();
    startSecondOperation(^{
        finish();
    });
});
Every call with a block starts some task on another thread or at a later time. The block-based API has two major benefits: the block has access to lexically local data, and it executes on the proper thread. That is, it eliminates the need for explicit locks, or for moving and storing local data explicitly just to make it available on the proper thread.
Think of it this way: every block of code inside the curly brackets is executed in parallel with the code it was created in.
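A minimal sketch of the queue-per-resource idea with GCD as it shipped in Snow Leopard. It is a fragment meant to sit inside a method; the queue labels and the shared log/index objects are made-up placeholders.

#import <Foundation/Foundation.h>
#import <dispatch/dispatch.h>

// One serial queue per shared resource: blocks on the same queue run one at a
// time, so a block has exclusive access to its resource without explicit locks.
dispatch_queue_t logQueue   = dispatch_queue_create("com.example.log",   NULL);
dispatch_queue_t indexQueue = dispatch_queue_create("com.example.index", NULL);
NSMutableString *log   = [NSMutableString string];
NSMutableSet    *index = [NSMutableSet set];

dispatch_async(logQueue, ^{
    [log appendString:@"request started\n"];   // exclusive access to the log
    dispatch_async(indexQueue, ^{
        [index addObject:@"request"];          // hand follow-up work to another resource's queue
    });
});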
Future directions
The upcoming generation of software already is, or will be, written this way. But the block-based approach still isn't perfect: you have to manage queues and blocks explicitly. Some experimental languages and systems already have transparent support for “continuations”: the code looks linear, as if it were blocking, but the process jumps between different contexts and never blocks any threads:
prepareData();
startFirstOperation();
prepareMoreData();
startSecondOperation();
finish();
This is much more natural and looks like the naïve approach we started with and then fixed with locks. However, to make it work concurrently we have to take the lessons of GCD to the next level.
When you start an operation which works on a different resource and can take some time, instead of wrapping the rest of your code in a block, you put the current procedure into a paused state and let the other procedure resume it later.
Imagine that instead of discrete blocks of code, the kernel manages continuously executing routines. These routines look very much like threads, with an important exception: each routine gives up execution voluntarily. This is called cooperative multitasking, and such routines are called coroutines. Still, each routine can be assigned to a thread just like a block, or be rescheduled from one thread to another on demand, so we retain the advantages of multi-processing systems.
Example: you have a web application which does many operations with shared resources: it reads from and writes to a database, communicates with another application over the network, reads from and writes to the disk, and finally streams some data to the client. All these operations should usually be ordered for each request, but you don't want to make a thread wait each time you have a relatively long-running operation. It is also not efficient to run many preemptive threads: there is a cost to switching threads, and you get all sorts of trouble with random race conditions. GCD and blocks help for the most part, but if you use them for every single operation on a shared resource, you will get enormously deep nested code. Remember: even writing to a log means accessing a shared file system, which had better be asynchronous.
15 years later
Today, a lot of trivial operations like writing to a disk or accessing a local database don't deserve asynchronous interfaces. They seem fast enough, and you can still throw more threads or CPU at a problem to make some things faster. However, coroutines will make even these trivial tasks asynchronous, and sometimes a little bit faster. So why is that important anyway?
Coroutines are important because every shared resource will get its own independent, isolated coroutine. That means every resource will have not only private data and private functionality, but also a private right to execute. The whole resource will be encapsulated as well as any networking server. The file system, every file, every socket, every external device, every process and every component of an application will have a coroutine and complete control over when to execute and when not to. This means there will be no need for shared memory and a central processor. The whole RAM+CPU tandem can be replaced with a GPU-like system of hundreds of tiny processors with private memory banks. Memory access will become much faster, and the kernel will not need to waste energy switching threads and processes.
A single design change which makes programming easier will make the shift to a much more efficient architecture possible. It won't be just faster, it will be efficient: servers could be 100 times more productive, while personal devices could be 10 times faster while consuming 10 times less energy.
30 years later
By the time operating systems support coroutines and a truly multi-processor architecture, new applications will emerge with capabilities we can only dream about. Things like data mining, massive graphics processing and machine learning mostly live in huge data centers today. Twenty years from now they will be as ubiquitous as 3D games on phones are today. These tasks will require more memory. Finally, common storage will be merged with RAM and the processor, and processing huge amounts of data will become much more efficient.
Given such a great advance in technology, humanity will find its own unpredictably unique way to educate and entertain itself. As we get closer to that time, it will become clearer what comes next.
Two kinds of charts
There are two very different kinds of information visualization, and I don't mean pies versus bars.
The first kind is for presenting knowledge. You have already discovered some interesting facts and now need to present them clearly to the reader. Your task is to design the most appropriate form for the data, so that the knowledge becomes obvious. (Of course, you should not be dishonest and lie to the reader with perspective distortions or a shifted zero point.) Sometimes you may even drop the figures and just show the data geometrically. These charts should have little noise: no grid lines, no ticks, no labels. Curves can be smooth and some nice 3D effects can be applied. Present as little data as necessary. Prefer geometric shapes to tables.
The second kind is for discovering knowledge. You have raw data and no particular idea of what might be interesting about it. In that case you need a format which lets you discover the knowledge. Compared to the first kind of visualization, here you want more supporting information, most likely an interactive table or graph to play with. Add extra hints: a grid, absolute and relative values, confidence intervals, etc. Of course, this form should be much more honest and complete than a presentation of the first kind. No special effects, no smooth curves. Prefer tables and simple charts to fancy geometric shapes.
When presenting data, the first thing to do is decide which kind of problem you are solving. If you are presenting raw data, make it easy to work with and to find the knowledge in. If you have already found the knowledge, present it in the clearest form.
Printed book idioms to be avoided on screen
1. Breaking an article into multiple pages.
The page is a physical limitation of the paper medium. Sometimes the text does not fit and you have to push the last paragraph onto another page. On screen you have plenty of vertical space, and there is no excuse for cutting the reader's context.
2. A lot of iPad newspaper apps simulate multi-column layout. They shouldn’t.
The purpose of a multi-column layout is to make articles’ layout more flexible on a big newspaper page. On a wide page you can fit a couple of articles and an ad. But the screen is not that wide.
Narrow columns also require a small font size, which is a problem on displays with resolutions under 300 dpi.
Narrow columns require manually tuned hyphenation and sometimes adjustments to font width. Books require this too, but a narrower column looks even worse without it. Unfortunately, digital media today does not get this treatment.
If the column does not fit the screen, you constantly have to scroll down and up when reading a page: down when finishing the first column and up to proceed to the next one.
You can scroll and zoom the page on screen. If you make a single scrollable and zoomable column, you don't need to provide font size controls or worry about how much of the content is visible. The reader can choose the most comfortable page size for herself.
3. A lot of people use footnotes on the web. This is horrendous: you have to leave the current line and scroll down. Even if you jump by clicking the footnote number, you then have to scroll back. And even if the footnote links back (like in a Wikipedia article), the browser doesn't scroll exactly to the position you were at before.
On screen you have plenty of vertical space. And if you don't use multiple columns (which you should not), you have some space on the side. That means you can put notes in a block of smaller text right below the paragraph, or on the side.
Summary
Do not break articles into pages. Do not break text into columns. Make the text column scrollable and zoomable. Put footnotes immediately under the paragraph, or on the side.
Mac OS X Lion predictions
There are some predictions or wishlists floating in the tubes regarding an anticipated update to Mac OS X. Some of them are more probable, some less and some are just plain crazy. Let me give you my predictions and some commentary.
1. The next cat name is likely to be “Lion”. This is based entirely on a single picture from the invitation, and it is also the least interesting prediction. I don't think it is going to be the “last” release in any sense.
2. The merge with iOS. First, Mac OS X already has some UI features borrowed from iOS: navigation buttons in Dock stacks, iPhoto and iTunes. There will be more of them. Maybe scroll views will be updated with flatter scrollbars, maybe some bouncing will appear (and if so, it will be off by default for existing software).
No way will there be a touch-controllable UI for existing applications. The apps are not designed at all for multi-touch and the size of a finger. Even if Windows 7 supports this, there's no reason for Apple to follow the same path. However, taking into account the dual-mode touch screen patent, it seems more probable that a Mac could be transformed into an iOS device on demand. But Apple does not favor dual-mode UIs: they just create confusion for users and developers. Front Row is a rare example of a second UI mode (transforming the Mac into a focused media player). But iOS is considered a more or less full-featured environment with a far richer user interface than Front Row, and at least as rich as Mac OS X. It is very unlikely that the iMac or MacBook will get two personalities which compete with each other and cooperate badly, producing huge confusion.
So, believing in a strong movement towards touch UI everywhere, we may expect not a dual mode, but a per-window fusion of iOS apps into Mac OS X. This has its issues too: file sharing is still not as smooth as what we expect on a desktop OS, and the iPad screen in portrait mode does not fit in the MacBook screen. And again, if you can touch and drag the iOS window, why not touch and drag other windows? And if you can touch and drag all the windows, why not touch all the buttons? And then the screen would have to be oriented horizontally, just like the keyboard or trackpad today. This is not easy to solve.
So UIKit multi-touch will eventually show up in some version of Mac OS X, but it is not as easy as some may believe. The least improbable prediction: Mac OS X will get a very conservative, slow introduction of the touchscreen, with emphasized limitations to minimize confusion as much as possible.
3. An App Store for Mac OS X. This is a really good idea in its pure form, but once again it has some conceptual difficulties. Apple will not lock down Mac OS X as they did with iOS, so the store will compete with other distribution channels and Apple may be forced to lower its 30% cut. At the same time they would have to retain an approval process to filter out crappy software. Developers who are not happy with the commission and the approval process will distribute their apps on their own. But this is hard to debate because there is still no third-party app store for the Mac, so the place seems vacant. Or it is vacant because no one has managed to build a viable store business yet. Anyway, Apple is the most likely company to succeed at this, and if it is executed well, it will attract a lot of developers, make them and Apple much more money, and drag the Mac even further ahead in the market-share race.
4. Resolution independence (making the UI 1.25-1.5 times bigger). The Mac OS X team has been working on resolution independence for more than four years already. And still, on Snow Leopard the implementation is buggy and nowhere near “beta” status. The conceptual problem is that this technology is aimed at scales of 1.25 and 1.5, not 2.0 as on the iPhone, and that is not as simple as multiplying everything by two. I guess displays with 2x higher resolution (for MacBooks at least) will become affordable before the 1.25 scale is fixed for all shipping apps.
Oh, and do not forget that the “retina display” approach does not make things bigger for people with poor vision, it makes them sharper. Sharper text is somewhat easier to see, but not as easy as text 1.5x bigger. Apple may decide that system-wide smooth resolution scaling is not worth tinkering with and that full-screen zoom is enough to address vision problems. My bet is on retina displays, with the old resolution-independence framework put on the shelf.
5. Finder improvements. Some folks dream about a tabbed Finder. The problem is that the file system is hard enough already. Adding tabs just complicates the look of the Finder and makes the file system even scarier. Even if tabs find their way into the Finder, they will be disabled by default, just like tabs were disabled in earlier versions of Safari.
What would be really cool is a merge of Dock Stacks with Quick Look, and a merge of Quick Look with other apps. This is pure speculation. Have you noticed how easy it is to jump through folders in a Dock Stack? The buttons are big, and once you find the file you want, the window disappears automatically. Quick Look also disappears easily. The Finder, on the other hand, creates clutter: you end up with too many individual Finder windows all over the desktop. Tabs do not remove the clutter, they just organize it. Maybe what we need is not to organize it manually, but something like a “recent folders” list that we can jump through using Quick Look.
How many times have you started a movie in Quick Look and played it so long that you forgot it is not a stand-alone player? Then you do something in the Finder and the movie disappears! Take a look at iCal: if you open an event, a popover window appears with the details. This window behaves much like Quick Look: do something else and it disappears. But if you move it a little bit, it transforms into a stand-alone panel which sticks to the screen until you close it. The same idea could apply to Quick Look. It would be super useful to transform a folder preview into a Finder window, a movie preview into a QuickTime window, and so on.
6. iChat with FaceTime, iCal like on the iPad, iLife and iWork updates: all of this is possible. The question is timing: maybe not all of it will arrive tomorrow, only some. I don't expect super-cool features here, more of an evolution and incremental improvements.
7. Macs won't support Blu-ray drives. I haven't heard about Blu-ray from any of the people I know. Those who really need it can buy an external drive.
8. There won't be NTFS mounts or a built-in VM for Windows. Not because there is a fight with Microsoft; Apple simply doesn't have time for features most people don't need. Boot Camp was an important thing in 2006 to bring in more customers. Nowadays Apple does not mention “switching” anymore. There are already plenty of ways to interoperate with Windows, both built-in and supplied by third parties.
9. Mac OS X distributed as a free software update. Recently Apple lobbied for an accounting-rules change to be able to distribute free iOS updates for non-subsidized products like the iPod touch and iPad. This makes the platform more vibrant, and many more devices stay up to date. By making Mac OS X updates free, Apple could accelerate adoption of its technologies and bring better and more exciting applications to the Mac.
Edit: forgot to add that a lot of goodies from UIKit, MapKit, EventKit etc. might well be ported to the Mac APIs. The NSTableView might learn about recyclable views from UITableView.
Don’t mix concerns: objects and things
Don't tie an external resource's lifetime to the object's lifetime (file descriptors, for instance). Never start any process in a constructor/initializer. Have dedicated “start” and “stop” (or “open”/“close”) methods for managing the resource.
Don't mix constructing data with performing a procedure. Whenever you have a method which takes 10 arguments, it is time to create a separate object just for that procedure. Give it 10 properties and a “start” method. Later you'll be able to add more configuration options and alternative invocation APIs to this object in a clean way.
In general, when you begin to understand OOP, you tend to treat everything as an object. A data structure, a complex procedure, a network connection, a physical device: all become objects. The trick is that while you encapsulate all those things into objects, you shouldn't confuse the *object* with the *thing* it manages. The object is a manager, a driver for the thing, but not the thing itself. An object may have a language-oriented API and a thing-oriented API. Don't mix them. The first API lets you create, initialize, inspect and destroy the object. It must have no impact on the thing. To manipulate the thing, you write thing-specific methods.
A quick test for compliance: you should be able to instantiate a valid object, set its properties in any order, inspect it and destroy it without any effect on the other objects and things around.
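A minimal sketch of these rules, using a hypothetical LogFile class built on NSFileHandle (manual reference counting, as in 2010); the names are illustrative. The initializer and properties touch nothing on disk; only open and close manage the file.

#import <Foundation/Foundation.h>

// Hypothetical LogFile: the object is a driver for the file, not the file itself.
@interface LogFile : NSObject {
    NSString *path;
    NSFileHandle *handle;   // the "thing" being managed
}
@property(nonatomic, copy) NSString *path;   // language-oriented: configure freely, no side effects
- (void)open;                                // thing-oriented: acquire the file
- (void)writeLine:(NSString *)line;
- (void)close;                               // thing-oriented: release the file
@end

@implementation LogFile
@synthesize path;

- (void)open {
    NSFileManager *fm = [NSFileManager defaultManager];
    if (![fm fileExistsAtPath:path]) {
        [fm createFileAtPath:path contents:[NSData data] attributes:nil];
    }
    handle = [[NSFileHandle fileHandleForWritingAtPath:path] retain];
    [handle seekToEndOfFile];
}

- (void)writeLine:(NSString *)line {
    [handle writeData:[[line stringByAppendingString:@"\n"]
                       dataUsingEncoding:NSUTF8StringEncoding]];
}

- (void)close {
    [handle closeFile];
    [handle release];
    handle = nil;
}

- (void)dealloc {
    // Release our own references. Per the rule above, the caller is expected
    // to call -close explicitly rather than rely on deallocation.
    [handle release];
    [path release];
    [super dealloc];
}
@end

The quick test holds: you can alloc/init a LogFile, set its path, inspect it and release it without the file system ever being touched.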
Gitbox
Gitbox is a nice little interface for Git. I wrote it primarily for myself and my friends to optimize everyday operations. Go download the app from the website and come back here for details.
Gitbox displays each repository in its own little window. Each window has three parts: the branches toolbar, the history and the stage.
The toolbar makes branch operations a lot simpler. You always see which branch you are on and which remote branch is associated with it. The branch menus let you check out an existing branch, a new branch, a remote branch or a tag.
The history shows commits for both the current local branch and the remote branch. Commits which are not yet pushed have a little green icon, so you will never forget to push your changes. Commits on the remote branch which are not yet merged (pulled) are marked with a blue icon and light-grey text. You can also switch remote and local branches to compare them.
The stage shows the changes in the working directory. You can stage and unstage a change using checkboxes, or just select the changes you want to commit and press the “Commit” button.
Gitbox updates the stage status every time you focus its window and fetches remote branch commits in the background.
Gitbox will be free for a limited amount of time. Prices and conditions will be announced later. Check for updates regularly!
Follow Gitbox updates on twitter @gitboxupdates.
Please send questions, bugs and suggestions to my email address: oleganza@gmail.com (or twitter)
“Crash-only programs crash safely and recover quickly. There is only one way to stop such software – by crashing it – and only one way to bring it up – by initiating recovery.”
“It is impractical to build a system that is guaranteed to never crash, even in the case of carrier class phone switches or high end mainframe systems. Since crashes are unavoidable, software must be at least as well prepared for a crash as it is for a clean shutdown. But then – in the spirit of Occam’s Razor – if software is crash-safe, why support additional, non-crash mechanisms for shutting down?”
