iovm idea: “activatable” should be a slot property rather than a value property. This would allow passing methods around and accessing them directly by slot name in all contexts. The current implementation makes you use the getSlot(slotName) syntax to avoid activation.
Excellent write-up about the “message eating” Nil.
There’s no “srand” function in AS3; that is, you cannot seed the random number generator.
And when you get the very same results each time the swf is loaded, you go looking for a workaround.
If you need true randomness in an event-driven application, you can do this:
setInterval(Math.random, 10)
Flash Player will then run Math.random() in the background, producing different random values at different points in time.
In a regular priority queue, an entry with priority N is taken before all entries with priority M (given N > M).
Sometimes, however, a 3-priority entry should not beat one hundred 1-priority items. It seems natural to introduce a doubly-linked list of weighted nodes, where a newly inserted node can move in front of a limited number of nodes whose total weight is less than the weight of the new node.
Illustration. Given a stream of nodes with weights:
[1a, 2, 1b, 3, 1c, 1d, 4, 1e]
weighted queue intermediate states would be:
[1a]
[2, 1a]
[2, 1a, 1b]
[2, 3, 1a, 1b]
[2, 3, 1a, 1b, 1c]
[2, 3, 1a, 1b, 1c, 1d]
[2, 3, 1a, 4, 1b, 1c, 1d]
[2, 3, 1a, 4, 1b, 1c, 1d, 1e]
In such a queue, priority N means skipping up to N-1 items of priority 1.
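The insertion rule above can be sketched like this (a hypothetical WeightedQueue class: a new node moves forward from the back as long as the total weight of the nodes it has passed stays below its own weight):

```ruby
# Sketch of the weighted queue described above. Items are [weight, label]
# pairs; a newly pushed item walks forward past trailing items while the
# cumulative weight of the items it has passed is less than its own weight.
class WeightedQueue
  def initialize
    @items = []
  end

  def push(weight, label)
    pos = @items.size
    passed = 0
    while pos > 0 && passed + @items[pos - 1][0] < weight
      passed += @items[pos - 1][0]
      pos -= 1
    end
    @items.insert(pos, [weight, label])
  end

  def to_a
    @items.map { |_w, label| label }
  end
end

q = WeightedQueue.new
[[1, "1a"], [2, "2"], [1, "1b"], [3, "3"],
 [1, "1c"], [1, "1d"], [4, "4"], [1, "1e"]].each { |w, l| q.push(w, l) }
q.to_a  # => ["2", "3", "1a", "4", "1b", "1c", "1d", "1e"]
```

Replaying the stream from the illustration reproduces the final state shown above.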
Observation: office becomes nicer when everybody leaves :-)
// I dedicate all this code, all my work, to my wife, Darlene, who will
// have to support me and our three children and the dog once it gets
// released into the public.
— Stack Overflow: What is the best comment in source code you have ever encountered?
Say you want to keep some auxiliary info inside your git repository: tickets, post-commit/post-receive hooks, wiki pages, etc. Storing them in a folder might not be a good idea: you’d probably want to have the same content across all branches. It is natural to keep such data in a separate branch.
Given that, you can create your “tickets” branch simply by checking out a new branch, removing all the code, adding the initial files and committing. This works great until you get bored with the irrelevant history in the tail of the git log. It is rather easy to disconnect your branch from the old history: just take the latest tree id, create an orphan commit (that is, one without parent commits) and reset the branch to this commit.
# emit tree id for the latest commit
$ git log -1 --pretty=format:%T
# emit new commit id
$ echo "initial commit" | git commit-tree <tree-id>
# reset current branch to this commit id
$ git reset --hard <commit-id>
Put it in a single bash command:
$ git reset --hard $(echo "initial commit" | git commit-tree $(git log -1 --pretty=format:%T))
Dangling branches are great for keeping metadata of any sort: .git/config files, tickets, hooks, documentation.
PS. Since you can store hooks inside the repository itself, you can have a self-contained deployment system like Capistrano without any additional tools installed on the server. Hooks can even update themselves on each post-receive before the actual deployment recipes are run. This allows you to specify all the dependencies in the source repository and even set them up with a single “git push” command. The only manual setup you have to do initially is to clone the local repository inside the .git/hooks folder (yes, inside itself) and check out the hooks branch appropriate for your environment. Ain’t that sweet?
Nice article showing the efficiency and inefficiency of TraceMonkey. A must-read.
FriendFeed stores all entities with all their properties in a single table and uses separate tables for specific indexes. After retrieving entities from an index, the application reapplies the query to fight data inconsistencies. Eventually, a “cleaner” process updates the indexes with the actual data. This strategy greatly reduces administration effort (indexes can be created or updated asynchronously) and makes latency 2x lower.
(Thanks Application Error for the link)
DIAGNOSTICS
You don't exist. Go away!
The passwd(5) gecos field couldn't be read
Your parents must have hated you!
The passwd(5) gecos field is longer than a giant static buffer.
Your sysadmin must hate you!
The passwd(5) name field is longer than a giant static buffer.
def movie_events_grouped_by_titles_and_theaters
  Event.all.inject({}) do |titles, event|
    ((titles[event.title] ||= {})[event.theater] ||= []) << event
    titles
  end
end
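The same idiom in a self-contained form, with a Struct standing in for the ActiveRecord Event model (the titles and theaters here are made up):

```ruby
# Struct stand-in for the Event model, just for illustration.
Event = Struct.new(:title, :theater)

events = [
  Event.new("Alien", "Rio"), Event.new("Alien", "Odeon"),
  Event.new("Brazil", "Rio"), Event.new("Alien", "Rio")
]

# Build a nested hash: title => { theater => [events] }
grouped = events.inject({}) do |titles, event|
  ((titles[event.title] ||= {})[event.theater] ||= []) << event
  titles
end

grouped["Alien"]["Rio"].size  # => 2
```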
(my response to the mailing list discussion; in Russian)
[user]
name = Oleg Andreev
email = oleganza@gmail.com
[apply]
whitespace = strip
[diff]
color = auto
rename = copy
[pager]
color = true
# this one is very cool:
# green means "to be committed"
# red means "not to be committed"
[status]
color = auto
Must read.
Today I have received a letter:
Hello Oleg,
I’m a Io newbie. I was watching some of your sample code on Github (loved funnyFactorial ;-) when I discovered your “learning french” subdir. I’m french and would be pleased to answer / comment / whatever about that language (not so human).
Github offers you an HTTP server for static data at http://yourname.github.com. Publishing is easy: just push content to git@yourname.github.com. (It has been around for two months already, how could I miss it!)
$ git rev-list -n 1 HEAD <path/to/folder>
Returns the latest commit that modified the given path. This is useful for finding out whether anything has recently changed in a particular folder.
A young hacker looks at the figures: “2 hours for feature Foo, 4 hours for feature Bar”. He feels that kind of pressure: “I have to make it! I have to type faster, think faster, test faster.”
This is an awful feeling. So here’s a (possible) solution: try to think of time as money you are investing. Tell yourself how much time of your life you would invest into this piece of #$@^ (of course, take your rate/salary into account). Now it looks like you scored feature Foo at just 2 hours: it isn’t worth 4 hours or more. Spend 10-15 minutes planning how to spend that much time, and do your best. If some trouble strikes and you’re out of time, just give up. Move on to another feature and let this one be discussed at the weekly meeting, when there’s time to schedule the next iteration.
If the client wants a fixed price for the software, you will not have any additional time. In that case, either do a dirty job or work all night. You decide.
At this very moment I’m attending a meeting at The Big Company in France. There are six French folks around me speaking English instead of French. The only reason for that is me: I don’t speak French. It’s a bit hard for everyone to speak and understand English, and initially I was a little ashamed of that. But soon I realized that the difficulty of speaking English makes everyone focus on the essentials and prevents wasting everyone’s time on nonsense. Sweet.
And when they drop back to French, the discussion becomes complete nonsense.
See also the slides (in the form of Factor plain text source code)
Apparently, there’s no conceptual problem with cloning a subdirectory like:
$ git clone git@server:project.git/docs/howto
You just need to keep track of the tree objects referencing “/”, “/docs” and “/docs/howto”, and fetch no objects except children of the “/docs/howto” tree.
There’s a problem in Ruby: what if your application requires two libraries, and both of them require incompatible versions of a third library?
API designers who are smart enough create a namespace for each major version (MyLib::API1, MyLib::API2, etc.) so that you can have multiple versions of the same code at runtime.
There’s a better solution, however. The Io language does not make you modify global state: source code can be loaded into any object. This means that you don’t have to pollute library code with version-based namespaces, but you are still able to load as many instances of the library as you want. Just make sure you keep them in your private objects.
Dreams come true:
MyParser := Package load("XMLParser", "e1fc39a02d786")
You definitely should try these.
Exceptions are meant to be… ehm… exceptional. Exceptions are thrown when the code cannot be executed further. They are meant to be thrown right at the point where something went wrong and passed up the stack until the program terminates. The programmer should only provide an “ensure” block (in Ruby; “finally” in Java) to clean up. The programmer should never use a “catch”/“rescue” block. Never.
There’s one little thing, however.
Sometimes you run your program and get silly exceptions like “connection lost” or “division by zero”. You become unhappy about it and decide to implement an interface to deal with such errors. For example, when the connection is lost you could show a message or do something smart (depending on the purpose of your program, of course).
But please remember: never ever catch exceptions you don’t know about (no “rescue nil” or “rescue => e”!). You should be very picky about what you are catching. An uncaught exception simply pops up in a system message or a log entry, so you can learn about it. But a silently caught exception might hide some nasty error from your eyes, and you wouldn’t be able to see in a stack trace what had happened a few milliseconds before.
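A minimal sketch of this rule; fetch_data and its IOError are stand-ins for a real network call:

```ruby
# Stand-in for a network call that loses its connection.
def fetch_data
  raise IOError, "connection lost"
end

def fetch_with_recovery
  fetch_data
rescue IOError => e   # a specific class only -- never a bare `rescue`
  "showing reconnect dialog (#{e.message})"
end

fetch_with_recovery  # => "showing reconnect dialog (connection lost)"
```

Any other exception class raised inside fetch_data would propagate untouched, with its stack trace intact.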
Both Ben and Yehuda are wrong.
They both use messy metaprogramming where Ruby already has a nice solution: a chain of modules and the super method.
If the base functionality is provided in a module it looks like this:
module BaseFeatures
  def hello
    "Hello, world!"
  end
end
module AnyGreetingPlugin
  def hello(arg = "world")
    # super() with explicit empty parens: BaseFeatures#hello takes no arguments
    super().sub(/world/, arg)
  end
end
class MyConfiguration
  include BaseFeatures
  include AnyGreetingPlugin
  include AnotherPlugin
end
If your base functionality is packed in a class rather than in a module, no problem: the solution is pretty much the same:
class MyConfiguration < BaseFeatures
  include AnyGreetingPlugin
  include AnotherPlugin
end
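A minimal runnable demo of the module chain. AnotherPlugin is left undefined in the text, so here it is filled in with a made-up behavior (upcasing the greeting) purely for illustration:

```ruby
module BaseFeatures
  def hello
    "Hello, world!"
  end
end

module AnyGreetingPlugin
  def hello(arg = "world")
    # super() with empty parens: BaseFeatures#hello takes no arguments
    super().sub(/world/, arg)
  end
end

# Hypothetical second plugin, invented for this demo.
module AnotherPlugin
  def hello(arg = "world")
    super(arg).upcase
  end
end

class MyConfiguration
  include BaseFeatures
  include AnyGreetingPlugin
  include AnotherPlugin
end

MyConfiguration.new.hello("Ruby")  # => "HELLO, RUBY!"
```

The call travels down the ancestor chain (AnotherPlugin, then AnyGreetingPlugin, then BaseFeatures), each module refining the result of super.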
Now let me respond to each point from Ben’s article:
1. “The method names don’t properly describe what is going on”. The module name describes what particular functionality it adds to the methods it contains.
2. “The base method shouldn’t care what other modules are doing, modularity is broken”. That’s not the case when you use regular modules.
3. “Logic is not 100% contained”. Logic is 100% contained: no magical macros anywhere.
4. “Promotes messy code”. Again, nothing here even seems messy.
5. “Exposes a documentation flaw”. When you think in modules, it is easy to separate concerns and understand how every little aspect of functionality affects everything else. You don’t have to speak any language other than Ruby. You think of module chains and message passing: no structure is created dynamically where it is not necessary. The only thing you have to do is describe in the documentation what each particular class or module is supposed to do. Then provide examples of default configurations (where some modules are included in a single class) to make the picture complete. Respect the language. Keep it simple, stupid.
class User < ActiveRecord::Base; end
class Artist < User; end
class Investor < User; end
I don’t understand why this would be a very bad idea? All the users are stored in the same table as they have a lot of attributes and not much differ…
This starts with the naming of a user. As I wrote you recently, the “User” name completely hides “role” from you, so it seems natural to put all the roles into the User model. However, huge models tend to become harder and harder to modify and understand.
If you think of it that way:
- Person - holds authentication info
- Artist - holds info about music and albums
- Investor - holds info about artists and finance
the following becomes easy to play with:
- Person has many Artists (say, i can create several accounts for a number of my bands)
- Artist info can be edited by a group of People (my band members would like to update the news page/wiki/whatever)
- I (as a person) can represent several investors, or none at all.
- Investor can manage a number of artists, and/or a single artist can have several investors.
The reason to separate models by task is the very same as the reason to separate objects from the top-level global Object class into more specific classes.
Speaking scientifically, it is just about “normalization” of a relational database.
If you have duplicating attributes, you have three equally good options (depending on your situation):
1. Mix them in using a module (e.g. “EntityWithTitleAndDescription”) if the duplication is just a coincidence and not a big deal (the module merely puts the duplication into a single file to keep things cleaner).
2. Implement a separate model and associate it with the appropriate models (e.g. “Account” could mediate between Person and Project to manage wikis/pages/documents/artists/etc., avoiding hardcore polymorphism between Person and Project). This is the case in a Basecamp-like app, where people have individual data as well as data shared by a group (project).
3. Leave the duplication as is: the Coincidence pattern.
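Option 1 can be sketched like this (the module and attribute names are made up for illustration, matching the hypothetical “EntityWithTitleAndDescription” above):

```ruby
# A plain module holding the coincidentally duplicated attributes.
module EntityWithTitleAndDescription
  attr_accessor :title, :description
end

class Artist
  include EntityWithTitleAndDescription
end

class Investor
  include EntityWithTitleAndDescription
end

a = Artist.new
a.title = "The Bends"
a.title  # => "The Bends"
```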
Sometimes you have to have STI, but I believe this is not such a case. E.g. I have PublicFolder and Inbox subclasses of a Folder class because they are a little special per se, not by their association to other folders.
A process consists of a number of phases. Each phase provides feedback on its performance.
Instead of defining some performance threshold for each phase at which to start optimization and asking ourselves “when should we start optimizing this?”, we should rather ask “which phase should be optimized now?”. That is, we should collect all the feedback, sort it, and start optimizing the most important phases first. Naturally, we end the process when we are no longer getting any visible performance gains.
This strategy can be applied to dynamic programming language runtime as well as to any other controllable process.
At each callsite (the source code point where method dispatch happens) we can measure:
1) the number of calls to a method
2) the time spent inside the method
3) the time spent in the callsite (total time across all methods called at that point)
Time can be measured in bytecode instructions, machine instructions or microseconds.
Let’s look at the possible situations:
In real code, we don’t often meet frequently called very slow methods: those usually come from bad design and cannot be efficiently optimized at runtime. But this chart helps us define a metric for “hot spot” identification: the place in the code where we start runtime optimization.
Such a “hotspot” metric would be callsite time * number of calls. The higher this number, the higher the priority that should be given to the callsite at the optimization phase.
Why don’t we just start with a top-level method? If we started from the very top, we would spend an enormous amount of time optimizing the whole program instead of the really interesting parts.
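The metric can be sketched like this (the callsite names and numbers are made up, and time units are arbitrary):

```ruby
# Each callsite records how many times it dispatched and how much total
# time was spent there; the hotspot score is callsite_time * calls.
Callsite = Struct.new(:name, :calls, :callsite_time) do
  def hotspot_score
    callsite_time * calls
  end
end

sites = [
  Callsite.new("parse_header", 10_000, 2),    # score 20_000
  Callsite.new("render_page",  50,     300),  # score 15_000
  Callsite.new("log_line",     100_000, 1)    # score 100_000
]

# Optimize in descending score order.
sites.sort_by { |s| -s.hotspot_score }.map(&:name)
# => ["log_line", "parse_header", "render_page"]
```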
Now I can easily bring my favorite aliases and commands to different environments: MacBooks, Linux and FreeBSD servers.
Ruby symbols should just be immutable strings, to avoid all that mess with string/symbol keys in option hashes. Just treat every literal string as an immutable one. You could even keep the nice colon syntax to save some keystrokes.
So, basically:
String.should < ImmutableString
"symbol".should == :symbol
The parser would still fill a table of symbols when it encounters literal strings in the source code, so the unique number is just the string’s object_id.
When you compare two strings, you compare ids first; if they are not the same, you proceed with byte comparison. No need to convert :symbol.to_s.
How do you make a string mutable? Just #dup or #clone it.
"symbol".dup.class.should == String
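Today’s Ruby already hints at this model with frozen strings: #freeze makes a String immutable, and #dup hands back an ordinary mutable copy, much like the "symbol".dup idea above:

```ruby
s = "symbol".freeze
s.frozen?        # => true

copy = s.dup     # dup does not copy the frozen state
copy.frozen?     # => false
copy << "!"      # the copy is a regular mutable String again
copy             # => "symbol!"
```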