Thursday, August 20, 2009

Aperture: scripting takes you where Automator doesn't

Like some unfortunate people, I've suffered from data loss, mainly through distraction and a trigger-happy finger on the delete key. So I've embraced backup strategies, and my latest acquisition was a Western Digital MyBook World Edition with a 1TB hard disk inside. I keep it connected to my router, so it's always available from a wireless laptop in the house. I'm happy and comfortable with the setup, so the next step is, no surprise, to figure out what I'm going to put there. And, most importantly, how I'm going to do it.


My first use case was my photography collection. I've recently started using Apple's Aperture to handle my pretty basic photography sets. I don't own an SLR, yet, but I have a Canon PowerShot S50, which lets me configure the camera manually to take some interesting photos. And it shoots RAW (albeit in Canon's proprietary CRW format, which can be batch-converted to DNG using Adobe's freeware converter). So after my photo workflow I end up with a hierarchy of folders (for project types like "Trips and Events", "Experimentations", "Personal", "Work", etc.), inside which I have projects.


What I wanted to achieve was a threefold backup: save the project itself (so that I may open it in other Aperture libraries, both on my MacBook Pro and on my desktop Mac); save the masters, as raw files; and save the versions, as JPEGs. I figured I'd try Apple's little Marvin, the Automator, to help me do it easily. I was excited to see that there were some export actions available for use in workflows! That, however, proved to be insufficient, as there is no action to get the currently selected project. This was the first AppleScript I used in the workflow, to do just that and place the result in a variable to pass on later. The code looks something like this:

tell application "Aperture"
	-- identify the project of the currently selected image(s)
	set x to selection
	set x to item 1 of x
	tell library 1
		set ap_proj to (get value of other tag "MasterProject" of x)
	end tell
end tell

I was pleased with the result, as the MasterProject tag of the currently selected image gives us the project id, not its name (which is what we would get if we selected from the entire project list in Aperture). But then I found two drawbacks: there's no way to export a project bundle directly via an action, and there's no way to pass the project name as input to the export versions/masters actions, as they require a list of images. So after a visit to the Aperture dictionary in Script Editor (just open the application with it, and browse around), I found out I could export a project bundle via AppleScript (great), and produce a list of the images in it (also great). But not everything is that easy: not only do the export actions dislike accepting input from a "Run AppleScript" action (they require a variable to be set and then retrieved, for them to receive data flow), but when the script is saved (or exported to a plugin), the flow is broken and the export actions cease to work!

I ended up giving up on the entire workflow, and scripted the whole thing. I only kept the Automator workflow file so that I could easily add stuff like Growl warnings. You can download it from here and give it a try. Oh, and if you know an easy way to find out the folder a project is in, to put the bundle in that directory, please leave a comment or tweet me!
With this script (with a small adaptation) it's also easy for me to loop through all projects and perform a complete backup of the library. This sure beats the workflow-actions way, as no amount of variables and loops can make you run through a list of projects. For these reasons, I believe Automator needs a 3.0 update. Maybe Marvin could take some antidepressants, and leave the paranoia behind? :)
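For the curious, the all-projects loop looks something like the sketch below. I'm quoting from memory, so treat the export line as an assumption and check Aperture's scripting dictionary for the exact command name and its options; the destination path is obviously made up too.

```applescript
tell application "Aperture"
	tell library 1
		repeat with p in (get every project)
			-- hypothetical export call: the real verb and its
			-- parameters live in Aperture's scripting dictionary
			export p to POSIX file "/Volumes/MyBook/ApertureBackup"
		end repeat
	end tell
end tell
```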

Monday, June 15, 2009

Mouse-driven love to buyers on Firefox


A while ago I was amazed by the truly great work that the Mozilla Labs team did on their project Ubiquity. If I ever thought about a web-based operating system, this all-in-one launcher is everything I'd ever need: fast, useful and easy to extend. Right now development is moving towards better integration with "normal" browser usage, melting its UI into the Awesome Bar.

There is, however, a huge piece of the user pie that isn't that fond of writing, or of using the keyboard at all, unless absolutely required. I've heard about some prototypes of a mouse-driven Ubiquity, but as far as I know that idea never took off. There's still time, though, and all this extra Firefox functionality is pretty much in its early development stages.

So I was half surprised and half amazed to watch two of the masterminds behind Ubiquity (Aza Raskin and Atul Varma) release another revolutionary project from their labs: Mozilla Jetpack. Jetpack is a way to extend the browser using a set of JavaScript instructions. The main motivation is that up until now we had to grasp the XUL doctrine just to make a popup message greeting the user, and it's easily seen that the number of web developers who have already tamed some JavaScript is growing by the minute.

Wait a bit, I was babbling something about mouse-lovers getting some attention, and I was just getting carried away... That's true: Jetpack's architecture allows certain JavaScript objects/functions to interact with certain browser features (statusbar, menus, making HTTP requests, etc.). One of the newcomers to its recent feature list is called the Slidebar. It's just like a Firefox sidebar, but it stays hidden unless we hover over a small arrow in the top-left corner; then it slides out and shows us icons for each slidebar jetpack we have installed. The first slidebar I saw was the Wikipedia one, allowing users to select some text, click on the Wikipedia icon, and see the mobile Wikipedia page for the selection in the slidebar. Pretty sleek, especially if you don't want to fire up Ubiquity, type "wi" and watch the preview window assume the "wikipedia it" completion and show the results inline. And I know there are people like that :)

I quickly thought to myself that most reference websites would be much easier to access there (as opposed to, e.g., the search bar, where I find it clumsy to switch between search engines). But I also thought it would bring an advantage over Ubiquity in one simple aspect: user input. We're confined to a single sentence in Ubiquity. And despite it being oh-so-cool to parse natural language and extract the right meaning out of it (because it is!), some things take more than one sentence to say clearly, and there's no way around it.

I thought about the command I made for Ubiquity to add financial transactions to Buxfer. The syntax I accept is "BUXFER-SPEND <amount> in <description>", where <description> may be a set of words of arbitrary length. My problem with this is twofold. First, it's not easy to extend it with other parameters (like the date of the transaction, its tag(s), ...). Second, a technical issue makes the command keep presenting one completion suggestion per word of the description (it has to do with it not being the direct object, I think, and writing a manual parser is *not* fun to develop on a sunny afternoon!). And that just sucks...

So my idea was to take advantage of a slidebar to show a form, already filled with the text selected in the browser. The form has focus on an amount text field, and a submit button (extra fields are easy to add later; just ask me if you need any particular data to be posted, and I'll look at it). I took my Ubiquity command code off the shelf, and used most of it to connect to Buxfer (using its API). Besides the command itself, I had to use a different message display mechanism. I was thinking about giving Yip a try, but it turns out the notification system in Jetpack is perfect as it is: Growl and toaster goodness for everyone (I don't know about Linux; feel free to show me screenshots with its standard notifications!). Sorry Abi, I'll keep Yip for web apps, for now :)


One thing that slowed me down a bit was finding out that the Utils object we have in Ubiquity is not in the Jetpack API. But that's pretty much OK, since we have the jQuery framework at our disposal. This way, instead of using the JSON decoding mechanism I used in Ubiquity, I used jQuery's ajax() and getJSON() functions to get native objects instead of JSON strings. And all is good in the world!
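To illustrate the difference, here's a small sketch. Only the decoding part is the point; the Buxfer URL and the token parameter in the commented call are hypothetical, just to show the shape of getJSON().

```javascript
// By hand (what Utils.decodeJson did for me in Ubiquity):
// the API hands you a string, and you parse it yourself.
var raw = '{"status": "OK", "balance": 42.5}';
var manual = JSON.parse(raw);
console.log(manual.balance); // 42.5

// With jQuery, getJSON() hands the callback a native object directly
// (sketch only; the URL and parameters here are made up):
// $.getJSON("https://www.buxfer.com/api/transactions.json", { token: t },
//           function (data) { /* data is already a JavaScript object */ });
```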

Some conclusions from this fun experiment: great frameworks allow building great features easily and, more importantly, they make programming them fun. Just having Bespin there for anyone to give it a quick test-drive is enough to capture a youngster's attention (I just keep sending the text back to my Emacs buffer using the It's All Text! addon). So I give 10/10 to the Jetpack/Ubiquity efforts (taking into account their early state of development, obviously). Also, I ask those projects' developers to unify their efforts as much as they can. Ultimately, I think an Ubiquity command should be one of the ways a Jetpack can extend the browser!

Without further ado, you can find my command, Buxfer Jetpack here. It should "just work" if you have Firefox with Jetpack installed. If you run into any issues, tell me, so that I may learn what they are and how to fix them. And prevent their repetition!

It's running late, but this has sure been fun to do. The best part? I may be using this jetpack myself, complementing Ubiquity's command. And that means I'll be feeding myself my own dog food.

Monday, June 01, 2009

ELS 2009 - my impressions

I've finally got to attend an international Lisp meeting — one more item crossed off the checklist!

The European Lisp Symposium 2009 was held in Milan, Italy, during 27-29 May. The program featured some interesting presentations, and while I'm not going to give a complete overview of them all, in this post I'll go over the bits that stuck in my head the most. This means: don't feel offended or ignored if your work (or the work you were expecting to read about) isn't mentioned here. It probably just means my mindset isn't properly configured, yet :)

The first day of the conference was just the reception, where I got to meet great people. Over the course of the conference I got to meet people I had already read about in the web-o-sphere, such as Stelian Ionescu, Scott McKay, Mark Tarver, Nikodemus Siivola, Pascal Costanza and Christophe Rhodes. I also got to know some nice people I didn't know of, including Didier Verna, Jim Newton, Claus Brod and Edward Dodge. António Leitão, my PhD supervisor and program committee chair, introduced me to Marc Feeley, of Termite and Gambit Scheme, who told me about a Gambit package, JSS (a multithreaded JavaScript compiler), which I must try when I get back home. Most of all, despite some ideological divergences between some of the participants, the mood was quite pleasant and joyful. After all, Lisp is supposed to bring happiness to those who use it.

The second day started with a talk by Scott McKay on his life's experiences, and mistakes. He has grown a pretty strong aversion to incremental code fixes for fundamental problems. Pretty much everything he's done has made him believe that full rewrites give us the opportunity to think better about the design, and that we cannot simplify code by extending it. He is also concerned about the current "big thing": concurrency. Many languages are addressing it, and Common Lisp falls further behind each day that passes. Clojure tackled it, but it carries a known set of burdens. Scott recognizes the great one-man effort, but wonders whether it's easier to improve Clojure or to design a new Lisp and get it right from the start. This could rapidly solve, for instance, Common Lisp's namespace problem (instead of having symbols, packages, labels, macros, symbol-macros, etc., etc. (Pascal counted 9 different namespaces), we could have one single namespace). The same goes for the type system: *everything* should inherit from the same root object type.
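To make the namespace point concrete, here's a quick standard Common Lisp illustration (my example, not Scott's): the same symbol can name several unrelated things at once, each living in its own namespace.

```lisp
(defvar foo 42)          ; FOO as a variable
(defun foo () 'hello)    ; FOO as a function
(defclass foo () ())     ; FOO as a class

(foo)   ; => HELLO   (looked up in the function namespace)
foo     ; => 42      (looked up in the variable namespace)
```

In a single-namespace Lisp, the second and third definitions would simply clobber the first.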

Mark Tarver takes an almost similar position, but he "strayed away" from Common Lisp and went looking for answers in both Scheme and Python. Because he couldn't find them, he designed Qi, a language that is supposed to offer what he believes to be the best language features.

I found the presentation on hygienic macros for the unhygienic world quite motivating. Using a few helper macros, Pascal Costanza demoed how we can emulate the former in Common Lisp, which is quite useful, especially if you come from the Scheme world. All at the expense of a few extra language constructs, but not too distracting ones.
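I don't have Pascal's actual helper macros at hand, so here's the classic by-hand version of the hygiene problem and its standard GENSYM fix, just to show what's at stake:

```lisp
;; Unhygienic: the macro's own binding of COUNT can shadow a
;; user's variable of the same name inside BODY.
(defmacro repeat-bad (n &body body)
  `(let ((count 0))
     (loop while (< count ,n)
           do (progn ,@body (incf count)))))

;; Hand-made hygiene: a fresh, uninterned symbol from GENSYM
;; cannot capture anything in the user's code.
(defmacro repeat-ok (n &body body)
  (let ((count (gensym "COUNT")))
    `(let ((,count 0))
       (loop while (< ,count ,n)
             do (progn ,@body (incf ,count))))))
```

Pascal's contribution is, roughly, packaging this kind of discipline so you don't have to write it by hand every time.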

Charlotte Herzeel presented an interesting approach to implementing Software Transactional Memory (STM): she implemented a (limited, not feature-complete) Scheme interpreter within Common Lisp. Doing this gave her easy access to each memory access point (she only has to handle cell and vector accesses). The thing is, by having her own interpreter, she easily tapped into the code segments where data access is done, via reflection. So it becomes easy to experiment with STM algorithms.

I found out the industry is not sticking to widespread CL implementations. At least three different companies (mostly related to the CAD/graphics world, but I think it's a coincidence) rely on SKILL, or some adaptation of it. SKILL is a small footprint Lisp-like scripting language, and that makes it appealing for many industrial domains. However, it appears to be quite obsolete, undocumented and unsupported. It was nice, though, to see some successful uses of Lisp.

I was kind of disappointed not to see Kent Pitman; I believe he canceled at the last minute. I was also looking forward to seeing some Clozure CL people there; I'd like to hear what they have in mind for Cocoa and iPhone development. They already have a great Objective-C bridge, but with Common Lisp things ought to be much easier, especially for producing competitive small products (there's a big niche for small, simple applications on Mac OS, due to Apple's design principles).

Marco Antoniotti was a great host in Milan. He organized a great banquet at Osteria del Treno, where we all got to share some more insightful comments about the entire conference and more. The day after that, Marco put on the cicerone cap and took everyone still in Milan to a great Futurismo exhibit, where we got to learn some amazing bits of Italian cultural heritage (definitely not my area of expertise, but it was quite interesting trying to interpret some works and sell my ideas to the guide, despite failing most of the time :) ). That afternoon we visited the Duomo, the massive cathedral in the center of Milan. The most impressive thing for me was realizing that the wow-factor was even greater from inside it (and on its roof) than from the outside!

All in all, it was a memorable experience, and everything went great (I wasn't even bothered by the wireless access difficulties that so many people were forced to endure, since I had my eduroam credentials already set up on my machine!). Next year it'll be in Lisbon, so I'll be attending too. Meanwhile, I look forward to going to Italy again in July for the 6th European Lisp Workshop (within ECOOP).

Thursday, March 05, 2009

Ubiquity in Portugal's Portuguese

I've been keeping up with some of the stir around Ubiquity development, and one aspect that caught my attention was the adaptation of that amazing interpreter to multiple languages. I've commented before on Aza Raskin's blog about the implications of using Ubiquity in native Portuguese. Mozilla's Mitcho Erlewine and Felipe Gomes have brought the issue up again. This time things appear to be moving along in Mozilla Labs for the Portuguese language in Ubiquity, which is great! But please, hear me out first on a few small — but relevant — points that need to be considered. In the remainder of this post I'd like to clear them up. Note that these are personal opinions; I'm not a linguistics expert, only an interested software engineer.

Felipe's entire post is clear and understandable. But his ideas work for the Brazilian variant of the Portuguese language. Being a native Portuguese speaker (and while not holding any kind of grudge against non-native Portuguese), it's somehow odd (even repulsive, pardon me!) to read some of the Brazilian syntax — it just looks badly written. One simple example is the "gimme a map" command, which Felipe translates to Brazilian Portuguese as "me dê um mapa"; a native Portuguese speaker would translate it as "dê-me um mapa". But this simple example exposes other cultural divergences, like the friendly/respectful tone. The English expression "give me" can be translated as "dá-me" or "dê-me" (friendly/respectful, in that order). And the same goes for the command verb "map", which would be translated as "mapeie/mapeia".

Felipe offers the great suggestion of detecting the word's root, e.g., "map" in "mapeie". And this will work for almost every verb. Except some crucial ones, which are irregular! (I honestly hate grammar irregularities...) Take the verb "to go" (which, e.g., can be used in a command to go to a specific page/tab/section/<somethingelse>). So one could say "go to gmail". In European Portuguese, Felipe's suggestion fails in that the same verb translates to "ir", yet is used in commands such as "vai para o gmail" or "vá para o gmail". Note that not only can these commands not be easily traced back to their infinitive form ("ir"), but the two possible tones also differ (the accented "a" in "vá" is very much relevant).
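To make the failure concrete, here's a toy sketch of the root-matching idea (the function is mine, not anything from Ubiquity's parser):

```javascript
// Naive root detection: does the conjugated form start with the stem?
function matchesRoot(conjugated, stem) {
  return conjugated.indexOf(stem) === 0;
}

console.log(matchesRoot("mapeie", "map")); // true  -- regular verb, the root survives
console.log(matchesRoot("vai", "ir"));     // false -- irregular "ir": no shared root
console.log(matchesRoot("vá", "ir"));      // false -- and the accent matters too
```

No amount of stemming rescues "vai"/"vá" back to "ir"; that mapping has to live in an explicit table.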

I'm really sorry to be a spoil-sport, but this just won't work that simply. Obviously, the set of irregular verbs is finite, and we could add the correct usages to a table of syntactic rules. I expect the final code to be optimizable to a feasible speed of interpretation. But I'm not so sure that these rules can be, as Mitcho so desires, reduced to a common denominator across all languages (again, don't get me wrong, I'd love that to be possible; I just don't have a useful knowledge base of other languages to believe it feasible).

And this is why I'm so fond of using some webapps' names as commands, even in Portuguese. "twitter <message>", "google <query>" or "wikipedia <expression>" will be understood perfectly in Portugal (and Brazil, for that matter). I know, Ubiquity's future is about natural language interpretation, whatever idiom you may prefer. And that's really something special. But I can't help but feel that every language is either going to require a complex interpreter suited to itself, or suffer from an aggravated syntactic-sugar syndrome.

Even if every point I've covered can be countered somehow, please remember, in future blog posts/projects/etc., to identify which Portuguese idiom you are referring to when you discuss grammar rules. Again, it's not a matter of any kind of discrimination, but one of correctness, and everybody deserves that.

Thanks for reading!

Wednesday, January 28, 2009

Lisp macros, eval and packages

Today I spent about an hour tracking down the source of a bug in the code for my PhD prototype. To make a long story short, I need a class definition to be stored, and loaded afterwards. And I was using Gary King's time-saver metatilities:defclass* to reduce the required coding. The problem appeared when I wanted to load such a definition in a generic method: I was evaluating the definition expression - one of the acceptable uses of `eval', correct me if I'm wrong, is to interpret something that was just read (in this case, from a string serialized by other servers). The code in question was something like this:

(defmacro load-data (name attributes)
  "Defines a domain concept, using metatilities:defclass* syntax."
  `(progn
     (metatilities:defclass* ,name (data-item)
       ,attributes
       (:metaclass profiled-metaclass)
       (:name-prefix ,(format nil "~(~A~)" name)))
     (info "Data type loaded: ~A" ',name)))

(defmethod load-definition ((def data-item-definition))
  (eval (macroexpand `(load-data ,(definition-name def)
                                 ,(data-item-definition-attributes def)))))

What I got as a result was that the auxiliary accessor methods created by defclass* weren't being interned in my package, but in CL-USER. Hence, lots and lots of compiler warnings and subsequent errors when trying to use those accessors. I fiddled around and tried to see if I could remove the eval, but the solution, as obvious as I can now see it is, turned out quite simple: just place your evaluated expression after a form that sets the package. The fixed method is the following:

(defmethod load-definition ((def data-item-definition))
  (eval `(progn (in-package :dshow)
                ,(macroexpand `(load-data ,(definition-name def)
                                          ,(data-item-definition-attributes def))))))
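An equivalent fix, I believe, is to bind *PACKAGE* around the evaluation instead of splicing an IN-PACKAGE form into the evaluated code. I haven't tested this variant, but since the accessor symbols are interned during the expansion of defclass*, which only happens inside eval, the binding should be in effect at the right moment:

```lisp
(defmethod load-definition ((def data-item-definition))
  ;; Bind the current package for the duration of the evaluation,
  ;; so the accessors generated by defclass* are interned in
  ;; :DSHOW rather than in whatever package happens to be current.
  (let ((*package* (find-package :dshow)))
    (eval `(load-data ,(definition-name def)
                      ,(data-item-definition-attributes def)))))
```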

Hope you can see this before spending too much time on a similar bug!

Monday, January 26, 2009

Quick Emacs nicety

I can’t remember whose configuration I got this piece of code from. But it makes my lispy screen a little bit cuter, by turning every “(lambda” into the Greek character λ. I tested it on Aquamacs and on GNU Emacs for Windows, and it works. If anyone knows the original author of the idea, send him/her my thanks :)

;; Make lambdas appear as the character λ (saves space!!)
(defun pretty-lambdas ()
  (font-lock-add-keywords
   nil `(("(\\(lambda\\>\\)"
          (0 (progn (compose-region (match-beginning 1) (match-end 1)
                                    ,(make-char 'greek-iso8859-7 107))
                    nil))))))
(add-hook 'lisp-mode-hook 'pretty-lambdas)
(add-hook 'emacs-lisp-mode-hook 'pretty-lambdas)

Friday, January 02, 2009

On iPhone SDK and its pretty icons on tab bar items

After having made some experimentation around the iPhone SDK, I have now a simple - but useful - native application. I focused on the functionality, and up until now, I had a tab-bar with no icons. But that's not the Apple-way to present things. iPhone apps are pretty, and have simple but elegant icons with a gray tone that turn blue when selected. So i looked upon a library/bundle for some of those standard buttons, and found... none! Ok, no worries, let's find out how to make them. Wait. There isn't a specification for them! Yes, it's a shame, but Apple's iPhone SDK documentation is still a bit unfinished. The good news is that what is done, is easy to read and helpful. I ended up looking at other apps resources, found out they were 32x32 png images, and figured they had to have at least an alpha channel outside the image itself. So I fired Gimp, made a plain brush doodle and removed the background white, saved and tested. And it simply works! All colors are turned into the desired gray tone, and automatically swap to a smooth blue with a top glare when selected. Conclusion: the trick is to experiment. Chances are that what you want to do is *that* simple!