Saturday, October 25, 2008

Milky love for lispers

I've just updated my "Remember The Milk"-talking lisp library. Now you can access their API with a simple CLOS-based object model.

You can fetch the latest version by direct download, SVN, or asdf-install from the project page, where I've listed the important updates:

Recent Changes

* Added an object bridge to make the API even more integrated with the language.

* Added methods and functions that make it easier to manage a local copy of the RTM user data.

Future Work

* Improve synchronization and bandwidth usage by fetching only recently updated data;

* Provide offline support for the local data.

Sunday, October 12, 2008

Having a cow aiding getting your things done

I have finally gotten around to upgrading my Remember The Milk (RTM) usage to something more GTD-ish (for David Allen’s “Getting Things Done” approach). I already knew about RTM’s organizing abilities. Succinctly, we can have tasks in our inbox, in lists (hard to move between, usability-wise), and have them assigned one or more tags, easily exchangeable and searchable. What I was looking for was a way to have my tasks split into contexts (like calls to make, errands to run, things to do at the computer, things to see, read or listen to). Also, I need to see what I can do next, what I have to do someday, and what I must hold pending, waiting for something or someone.

With these two axes for plotting my tasks, I can add stuff to RTM as it comes up in my life (via many different routes, like their website, Twitter, Quicksilver, a Dashboard widget, the iPhone...), and it will land in the Inbox. At the start of the morning, I process that list, placing each task into a proper context list (I prefix those with a @, to identify them at a glance). I also tag each task with a “next”, “someday” or “waiting” tag. The final visual output is clean enough for me to feel organized:

Let’s discuss the webpage hacks first. They’re simple enough, but important for me. The default tag strip wouldn’t do it for me, so I got this amazing Greasemonkey script, called A Bit Better RTM. Three simple features that make all the difference!

Step one, projects. I followed many of the suggestions at the RTM forum and created a list called Projects, where I place project titles (prefixed by a dot, for visual identification) and tag them with a “p-PROJECTNAME” tag. This way, I can simply click that tag and access all of a project’s tasks (from one or more contexts). It’s really handy.

Step two, smart lists. These are simply saved searches, and we have a decent set of search operators to use. Here are some I have set up:
->Next: (tag:next AND NOT dueAfter:"1 week of today") OR dueBefore:"1 week of today"
->Someday: tag:someday
->Waiting: tag:waiting
->Tickler: NOT due:never
PhD: tag:PhD OR tagContains:phd
Fun: tag:entertainment OR tag:game OR tag:movie OR tag:night
unTagged: isTagged:false OR (NOT (tagContains:"next" OR tagContains:"someday" OR tagContains:"waiting"))

Tickler is a must-see every day, to keep myself from forgetting due tasks. PhD looks for tags like p-phd-thesis, thus catching all projects related to my Ph.D. unTagged is a reminder tool: it often happens that I move a task from the Inbox into a given list and forget to tag it (all tasks should have a next/someday/waiting tag on them).

Don’t forget that tasks can easily be assigned a priority with a 1-4 key press. Also, I have locations set up so that they appear in the task cloud. This way I can look at all the calls I have to make at the office, for instance.

So with this I feel almost at ease. But I’m kind of a control freak, and I like to add tasks in the easiest way I can. After having played with Firefox’s Ubiquity to write Twixfer, I started jotting down a quick way to add a task to RTM. The initial Ubiquity command was usable, right until it popped up a new tab with an RTM task properties form. I didn’t need that. But someone felt the same, and spared me the effort: rtm-v2 is a command set that logs in and gives proper feedback on your RTM lists and tasks. If you’re into speedy task jotting, it’s worth a try.

Those are my RTM tips for today. Do you do it like me? Have a better system? Drop me a comment!

Sunday, September 07, 2008

Love Buxfer. Use Twitter. So why not Twixfer?

Today I sat down to find a different use for JavaScript. I browsed through my everyday task list, found a couple of candidates, and targeted Buxfer transaction creation. I'm also growing fond of Ubiquity, for the Firefox browser (it is now the most important reason I'm not moving to Google's Chrome, but I digress). This post, despite being about a Buxfer contribution, will be more about the Ubiquity development process I underwent.

So let me explain what I aimed for. Ubiquity lets you enter text commands, with auto-completion, in a HUD over the Firefox window. I wanted to do something similar to what Bruno Lopes did with his BuxferSubmit Dashboard widget (for Mac OS X) - plain simple addition of a Buxfer transaction, with a description and an amount. All I have to do (after having Ubiquity installed and running) is set up a JavaScript file with a piece of code I'll explain hereafter.

The heart of a Ubiquity command is defined with the function call CmdUtils.CreateCommand(argument-object). This is, in fact, the only call you have to make in that file to set things up. So what's in the argument literal object? The basic attributes, like the name of the command, its description, or the icon that identifies it (quite useful!). But a few arguments deserve special attention:

  • execute. This must be a function with no arguments that is responsible for doing whatever the command is supposed to do after <return> is pressed.
  • preview. This is the function that is called at each key press on the Ubiquity HUD, to compute the string that appears to help the user find out what the command is going to do.
  • takes. A description of the words the command receives, and a type for them. Each type can be specified by hand, but several are already provided, like noun_arb_text for arbitrary text or noun_type_date for a date string.
  • modifiers. These are words that offer secondary semantics to the command, and are specified just like the “takes” argument elements. Think of examples like “list-birthdays” for today's birthdays and “list-birthdays for next week”. The modifier “for” would be assigned to “next week”, and a special type can be set up to parse that string and compute the date from today.
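A minimal sketch of such an argument object may help. CmdUtils and the noun types only exist inside the Ubiquity extension, so they are stubbed here, and the command itself ("echo-text") is made up for illustration - it is not one of the built-ins:

```javascript
// CmdUtils and noun types exist only inside the Ubiquity extension;
// stubbed here so the argument object's shape can be inspected standalone.
const CmdUtils = { CreateCommand: (spec) => spec };
const noun_arb_text = {}; // stand-in for Ubiquity's arbitrary-text noun type

const echoCommand = CmdUtils.CreateCommand({
  name: "echo-text", // illustrative command name, not a real built-in
  description: "Echoes its input in the preview pane.",
  takes: { "text to echo": noun_arb_text },
  // preview is recomputed on every key press in the HUD:
  preview: function (previewBlock, input) {
    previewBlock.innerHTML = "Will echo: " + input.text;
  },
  // execute runs once, when <return> is pressed:
  execute: function (input) {
    console.log("echo: " + input.text);
  },
});
```

Inside Firefox, Ubiquity itself calls preview and execute for you; the stubs above just make the object's shape visible.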

To find examples of how to use CreateCommand, load the following URI in Firefox: chrome://ubiquity/content/builtincmds.js (obviously only after having Ubiquity up and running).

That's about it. What I found out was that the preview functionality can often be more useful than the execution of the command itself! Think about a currency conversion, or the calculation of an algebraic expression. I don't want anything performed upon <return>, except perhaps having the result copied to the clipboard. What I really want is to have the result appear in the HUD and change dynamically as I keep filling in the command text.

Let's move on to the implementation itself. One thing I found out quickly was that Buxfer's API needs a username and password to be able to post a transaction. That would be the case in most web applications I know of. But this API doesn't offer, for now, any means to open a page asking for credentials - nor would I want that, since I'm typing a textual command precisely to keep the input to a minimum! Ubiquity doesn't have thorough documentation yet, and browsing the JavaScript XUL files within the chrome “filesystem” of a Firefox add-on is not something I like to call user-friendly. Why is this important? I thought the command could take a peek into Firefox's saved passwords and use Buxfer's API without the user having to care a bit about it. But then again, an exception would have to be made for first-timers to be able to use the command… I was about to set up a proxy web service on Google's App Engine when I remembered two happy coincidences:

  • Buxfer allows Twitter messages to be sent with the same syntax as those we can send from an API (or from a cell phone).
  • Twitter has a nice API that does exactly what I was aiming for: it points its requests at their site, and if you're not logged in, it asks for your credentials using HTTP Basic Authentication. This way, Firefox asks you if it may remember the password, and actually does it :)
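The Basic Authentication handshake itself is simple enough to sketch (a generic illustration of the mechanism, not tied to any particular Twitter endpoint): the client sends an Authorization header carrying the base64 encoding of "username:password", which is what the browser offers to remember for you.

```javascript
// Builds the Authorization header that HTTP Basic Authentication expects:
// the word "Basic" followed by base64("username:password").
function basicAuthHeader(username, password) {
  const token = Buffer.from(username + ":" + password).toString("base64");
  return "Basic " + token;
}

// Example: basicAuthHeader("user", "pass") yields "Basic dXNlcjpwYXNz",
// which would be attached to each request as its Authorization header.
```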

The result: I ended up with two commands, one using the Buxfer API and another using Twitter's. But since Twitter's is the most user-friendly (easier and faster to use), I'm switching my transaction input to the brand new Twixfer (it's the weekend; my imagination needs some more rest to come up with a better name!). Feel free to take a peek at the homepage. The source consists of a couple of files, so I thought a Google Code project would be overkill… But if this thing starts to get any sort of user input or contributions, I'll gladly move it there. For now, leave comments at the Twixfer site. As usual, feel free to leave feedback about anything.

[EDIT: Today I had a tip from a gentle soul about retrieving passwords from Firefox's session, so the Buxfer API is usable by itself now. Check the same link and update your command, if needed. It has had lots of cleaning up and code refactoring done. Nevertheless, it was fun to toy around with the Twixfer concept!]

Tuesday, September 02, 2008

Getting Things Done - the flow-chart

Here's David Allen's flow-chart for Getting Things Done (GTD). Tough to use, but it has nice results!

Monday, April 14, 2008

Moooing away in Common Lisp

I've recently found myself needing a task manager that could do some bulk operations (to be more precise, I wanted to copy all of my tasks to a device with no built-in synchronization mechanism for them). I happen to use the great Remember The Milk Web application (I previously worked with Emacs Planner, but I had to drop it in favor of a more mobile version of my life).

Enter Common Lisp. Perhaps the fastest way to do some scripting, to my mind, but I had yet to write a library that consumes a Web-based API. It turned out to be quite clean, and in about 5 hours of work I had the whole API grokked, coded, tested and hosted on Google Code here. (I looked into other hosting options, but found Google's approach much more featureful for the fast start I was looking for. If this gains enough momentum I can look into it again, with more time - the main advantage would be asdf-installability, I suppose.)

I already had experience making project "skeletons" (asdf project file, package declaration, etc.), so the hardest things I had to do were look into the web API call mechanism and into MD5 digest computation. If you don't need the detailed description, the answers are, respectively, Drakma and Ironclad. (I also use cl-json to parse RTM's responses into a Lisp-friendly format.)

To call a Web-based method, we need to make an HTTP request and be able to read the response. Using Drakma (asdf-installable and loaded in the usual way), all I had to do was something like:

(http-request rtm-api-endpoint :method :post :parameters '(("param-name" . "param-value")))

With some parameter abstractions, RTM's API was easily tamed. A call to one of their methods is now made with the following function:

(defun rtm-api-call-method (method &optional key-value-pairs
                            &key with-timeline with-authentication (format "json"))
  "Calls `METHOD'.
 - Optionally passes pairs of strings in `KEY-VALUE-PAIRS', in the form of an
association list of (\"name\" . \"value\") pairs.
 - Keyword `WITH-TIMELINE', if not null, allows the method to be called within
the current timeline (`*CURRENT-TIMELINE*').
 - Keyword `WITH-AUTHENTICATION', if not null, allows the method call to be
authenticated with a valid `*RTM-API-TOKEN*'.
 - Keyword `FORMAT' is one of \"json\" (the default value) or \"rest\", and
specifies the server reply format."
  (declare (special *current-timeline*))
  (let* ((parameters `(("api_key"    . ,rtm-api-key)
                       ("method"     . ,method)
                       ("format"     . ,format)
                       ,@(when with-timeline
                           `(("timeline" . ,*current-timeline*)))
                       ,@(when with-authentication
                           `(("auth_token" . ,*rtm-api-token*)))
                       ,@key-value-pairs))
         (api-sig (compute-rtm-api-sig parameters)))
    (multiple-value-bind (result)
        (http-request rtm-api-endpoint
                      :method :post
                      :parameters `(("api_sig" . ,api-sig)
                                    ,@parameters))
      (let* ((response (json-bind (rsp) result rsp))
             (stat (assoc :stat response)))
        (cond
          ((string= (cdr stat) "ok")
           (rest response))
          ((string= (cdr stat) "fail")
           (let ((err-info (cdr (assoc :err response))))
             (error "RTM error code ~a: ~a~%"
                    (cdr (assoc :code err-info))
                    (cdr (assoc :msg err-info))))))))))
With this (rather big, for the sake of readability, or mayhap due to my lack of experience in mental pretty printing) function, I can define a call to an RTM method simply by coding:

(defun rtm-api-tasks-complete (list-id taskseries-id task-id)
  (rtm-api-call-method "rtm.tasks.complete"
                       `(("list_id"       . ,list-id)
                         ("taskseries_id" . ,taskseries-id)
                         ("task_id"       . ,task-id))
                       :with-authentication t
                       :with-timeline t))
The only call in the function above whose inner workings cannot be easily guessed is compute-rtm-api-sig. This is the function that performs the algorithm required by RTM to sign all parameters and guarantee the authenticity of each request (along with a masked API key). Basically, it does some mumbo-jumbo concatenation with each parameter's name and value, and then computes the MD5 hash of the result. Easy, right? But I'm a lazy programmer. That, and I truly believe there's no point in reinventing the wheel, so I looked for digest functions in Lisp and found Ironclad, a collection of most of these functions in Lisp. So producing an md5 function was as easy as the following:

(defun md5 (string)
  "MD5 uses ironclad to encode `STRING' into a hexadecimal digest string."
  (ironclad:byte-array-to-hex-string
   (ironclad:digest-sequence :md5 (ironclad:ascii-string-to-byte-array string))))

And that's about it. The rest was toying around; the API works perfectly (kudos to the RTM dev team!), and I was thrilled to have my tasks exported in a jiffy. If you want more info, feel free to snoop through the code (it's quite easy to grasp, I think, and relatively small) on the Google Code site. So now, what would you, gentle reader, do with this API? Drop me some comments with your ideas.

Until the next time!

Thursday, March 13, 2008

Cocoa development, using a Lisp bridge

Let's get a bit away from my PhD for a while. In my spare time, I've been looking at some graphics libraries for making applications. In the past I've looked for operating-system portability for those apps; I've also delved into the Web world for interfaces. Right now, I've settled for making pretty-looking GUIs. Add to that my new acquisition - a MacBook Pro, my first Mac ever! - and I obviously had to try to make a Mac OS Cocoa application. They're fashionable, they're usable, they mingle with the rest of the desktop manager, and I was just tempted to use Lisp to make something like that.

Fortunately, this wasn't so hard, as OpenMCL has a great step-by-step tutorial (to be taken with a few glasses of water and with the Cocoa / Interface Builder tutorials made available by Apple). The tutorial gives detailed instructions on how to build the interface with the WYSIWYG tool that comes with Xcode. Then it tells you how to set up the connection points between the interface and the application code. These are called outlets, for the variables in the code that represent a given object (so that you may, for instance, retrieve the text in a text field), and actions, for the functions that are called when something happens in the interface (e.g., a user click on a button). After creating the necessary objects (basically you must make them belong to a specific metaclass in order for them to be exported as an Objective-C class), all you have to do is call a function that binds together the compiled code and the interface files (in the nib format) and places a nice application bundle in the directory of your choice. It can't get any easier, you think. And yes, you may code interface changes in your Lisp code. But it's much easier to follow the Interface Builder guidelines and helpers... You just can't go wrong!

You may find the tutorial at the OpenMCL wiki:

Friday, February 01, 2008

User interface paradigms for data search and offline data access

I've stumbled upon this interesting and illustrated article on data lookup user interface design patterns (on both web and desktop applications, finally we seem to be starting to forget there's a difference!):

Here are two design paradigms for handling large amounts of data, not to be confused (or combined) as web design meets desktop in rich Internet applications.

[From Seek or Show: Two Design Paradigms for Lots of Data « Theresaneil’s Weblog]

The post describes two main groups of patterns, based on the way the user chooses what he/she wants to search for (pictures linked from the article mentioned above):

  • The Seek Paradigm - The user has to input search criteria in order to access some search results.
  • The Show Paradigm - The user is presented with several search results (or categories), and he may drill-down the search by clicking on the desired categories or selecting certain results to eliminate unwanted items.

This got me thinking in terms of offline work, and I can easily classify both approaches as orthogonal to the online vs. offline decision. It is merely a user-interface option, which is a good thing. To understand this, picture a lookup for book titles in a library. To go offline, one would have the client application fetch them all from the server, thus making both kinds of interface patterns possible without contacting the server. Conversely, if we wanted to know how many clients had already rented a book (assuming the algorithm implied a run through each client's details to look for the rental - I know, it's not the smartest way to do that, but it's only an illustrative example!), the client wouldn't be able to fetch all the other clients' details, hence not being able to use either of the search paradigms.

So the effort that has to be done, from the application logic design point of view, is related to how the services (those accessed by the user interface) access the information stored in the database. Let's draw some pseudo-code* for the book rental example above (the second example).

  • Approach 1: The all-powerful service. This is the kind of service we would have written up until a few months ago, considering any sort of offline execution to be an academic dream or an edgy, unfeasible, mostly unsuccessful experiment. Basically, it assumes that it is being executed with super-user privileges, hence able to access all stored data at any time with no constraints whatsoever.
  • Approach 2: The helpful, aware and conscious service. This is the service that knows its code may be executed in many places, and so it tries to use the stored data as sparingly as it can.

Using approach 1 we could code a service like this:

(define-service get-number-of-book-rentals (book)
  (let ((all-clients (get-storage-data "clients"))
        (result 0))
    (dolist (client all-clients)
      (set result (+ result
                     (client-book-rentals client book))))
    result))
As you can see, while this would work perfectly on the server, the client would never be able to do the (get-storage-data "clients") call, given its lack of permission to fetch all clients into the client-side storage. This is the sort of service that must be thought over and rewritten - or, better yet, automatically translated into something more useful!

Using approach 2, we would instead code something along the lines of:

(define-service get-number-of-book-rentals
    (:inputs (book)
     :storage-data (books-rent-by-all-clients (get-total-rentals-from-clients)))
  (let ((number-of-books 0))
    (dolist (client-rentals (filter books-rent-by-all-clients
                                    :field "book"
                                    :value book))
      (set number-of-books (+ number-of-books client-rentals)))
    number-of-books))

Here it's clear that the service knows what data it is going to need from storage. With this information, the server can fetch it for the client when the client wants to disconnect, and the client is then able to perform the same functionality without having to run through the clients (read: without having access to all clients' personal details). Relevant, noteworthy changes between the examples:

  • the service has to be declared with more information (like the storage-data requirements);
  • the storage-data requirements code is always performed on the server. The client counterpart consists of a lookup in the client-side storage for the previously fetched information. This way there is no risk of the information not being available;
  • if the service is run online, it also runs equally well, with no performance penalty;
  • there must be either a mentality change or a tool to translate services written with approach 1 into approach 2!
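The prefetch-then-run-locally flow described above can be sketched generically. Everything here is illustrative - the names (prefetchForOffline, storageData, body) are mine, not from any real framework: the server evaluates the declared storage-data requirements before disconnection, and the client later runs the service body against that prefetched snapshot alone.

```javascript
// Hypothetical sketch of the offline-aware service flow (all names invented).
// The server runs each declared storage-data query with full access and
// hands the resulting snapshot to the client before it disconnects.
function prefetchForOffline(service, storage) {
  const snapshot = {};
  for (const [name, query] of Object.entries(service.storageData)) {
    snapshot[name] = query(storage); // runs server-side, with full access
  }
  return snapshot;
}

const getNumberOfBookRentals = {
  storageData: {
    // total rentals per book title, computed from the full clients table
    rentalsByBook: (storage) =>
      storage.clients
        .flatMap((c) => c.rentals)
        .reduce((acc, b) => ((acc[b] = (acc[b] || 0) + 1), acc), {}),
  },
  // the service body only touches the snapshot, never the raw storage
  body: (snapshot, book) => snapshot.rentalsByBook[book] || 0,
};

// Server side, before the client disconnects:
const serverStorage = {
  clients: [{ rentals: ["Dune", "Emma"] }, { rentals: ["Dune"] }],
};
const snapshot = prefetchForOffline(getNumberOfBookRentals, serverStorage);

// Client side, offline, with no access to other clients' details:
// getNumberOfBookRentals.body(snapshot, "Dune") → 2
```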

I'm already laying out some code to achieve something like this. Expect news in the following week or two.

*Disclaimer: despite being in a Lisp pseudo-dialect, all the examples could exist in the language of your choice. It's really just a matter of personal preference!

Thursday, January 10, 2008

The search for functionality dependency

Having had my PhD thesis proposal (previously presented here) approved by my university, it's time to move along to the next step: thorough description of the solution I propose, followed by its implementation.

The first important aspect I need to address is the grouping of functionalities that may be executed by a given user while offline. The ideal situation would be to pick up a specification of a Web application and automatically produce a graph of the available functionalities, with their dependency hierarchy. From there, the developer (or even the project engineer) could inspect what would be desirable to perform offline, according to the requirements artifacts. Since we're also aiming for ease of use, it would be nice to see, on the produced tree, the characteristics of each functionality, namely those related to access-control permissions and data dependencies. This way we could gray out the available functionalities that could never be performed offline (e.g., we may want to force every functionality requiring a specific role to be accessed online only). By imposing restrictions, one could easily make a map of the needed functionalities. It seems almost too easy. What we're forgetting is how we're going to get the dependency graph. Yes, I intended to get it via a workflow graph. But let's face it, most Web applications are built without a prior workflow specification artifact ever seeing the light of day. Specifically, my case study doesn't have one (yet, anyway). So how shall I do this?

  • Write the aforementioned workflow specification by hand. This seems a lot of work to ask of the project development team, especially since all they want is some functionality (they don't know exactly what, or how much, at a low level; only the high-level use-case names are written in the requests, usually). It's also not possible to write it automatically, since it needs domain-specific information, usually more than what is already present (albeit very scattered) in the existing application. So this is a no-go.
  • Code walking. If we inspect each user-interface action and the accessed data and services, we could obviously draw a graph. This, however, poses other problems. Are all programmers doing the right thing? It's not that uncommon to write services that call other services without needing them, just to see some futile or temporary information, and forget about it later on. I know that doesn't happen in a perfect world, but "real" and "perfect" don't always go together! There is also a show-stopping issue with this approach: we would have to start the "walk" somewhere, so we would need to know all the entry-point functionality calls, along with the list of services itself.
  • User experience tracing. This is quite interesting in theory: if we ask users in several roles to perform their tasks, we can trace the execution flow perfectly by reading the logs alone. But we have no means to guarantee that users will access all the functionalities they are entitled to perform during our tracing period. That is only possible in a closed, controlled experiment, with users following scripts. But that kills the main purpose - those scripts are as hard to write as the workflow description itself, so it's of no use to us.
  • Functionality reification. By re-engineering the application so that functionality calls are represented as objects, we can obtain the immediate dependency graph automatically, by scanning all user interfaces and listing the calls triggered by buttons, links or other events (XHR-based, for instance).

I'm going to put my efforts into this last one. There are other advantages to this approach, which is being studied by some colleagues of mine at INESC-ID and in the Fenix development team. The main ones are related to the presentation mechanism it allows - the organization of the Web application interface can be composed with such objects. I'll get into more details afterwards.

As usual, I appreciate any comments, tips or any other kind of experience sharing from you, dear reader!