

The framework is responsible for:

finding modules
verifying module integrity
loading modules
running modules
integrating module results into the database
safely unloading modules

The functionality is encapsulated in two functions. Upon opening a database, the framework is loaded for the first time and the discovery loop is entered. This loop reads all files in the plug-in subdirectory of the home directory. It then tries to load each file and verifies its integrity by calling a standard function. This somewhat rudimentary test should suffice to accept or reject any file encountered. More importantly, it should be impossible to crash the system by loading an incorrect file. Each file that has been identified as an atomsnet plug-in is added to the opened database under a special key.
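The discovery loop described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the directory name and the name of the standard verification function (here `atomsnet_verify`) are assumptions.

```python
import importlib.util
from pathlib import Path

VERIFY_FUNCTION = "atomsnet_verify"   # hypothetical name of the standard integrity check

def discover_plugins(plugin_dir):
    """Scan the plug-in directory, try to load each file, and keep only
    those that pass the integrity check."""
    accepted = []
    for path in sorted(Path(plugin_dir).glob("*.py")):
        try:
            spec = importlib.util.spec_from_file_location(path.stem, path)
            module = importlib.util.module_from_spec(spec)
            spec.loader.exec_module(module)
        except Exception:
            # Loading an incorrect file must never crash the system.
            continue
        verify = getattr(module, VERIFY_FUNCTION, None)
        if callable(verify) and verify():
            accepted.append(module)
    return accepted
```

Files that fail to load, or that load but lack the verification function, are silently rejected; only accepted modules would then be registered in the database under the special key.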

When indexed, files are normally added to the document as XML elements called resources. These elements are placed in the tree-like structure according to their metadata. Plug-ins are added in largely the same way, but are distinguished by the name of their element. This mixing of data and code is discussed in the next paragraph. After initialization, all plug-ins are released and unloaded from the execution environment.
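The distinction between resources and plug-ins can be illustrated with a small tree. The element and attribute names used here are assumptions for illustration only; the real atomsnet schema may differ.

```python
import xml.etree.ElementTree as ET

# Hypothetical element names: resources and plug-ins share one tree
# and are told apart purely by their element name.
tree = ET.Element("filesystem")
ET.SubElement(tree, "resource", name="report.txt")   # an indexed file
ET.SubElement(tree, "plugin", name="mime-type")      # a code entry

def plugins_in(node):
    """Return the plug-in entries among a node's children."""
    return [child for child in node if child.tag == "plugin"]
```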

When a user drops a file, the actual indexing takes place. This is where the mixing of plug-ins and resources in the datatree becomes useful. When the indexing process starts, no previous data exists, so a general plug-in residing at the root of the datatree is called. This plug-in is guaranteed to return a result, since it relies solely on filesystem information. After this bootstrap step, a recursive process starts in which the framework searches the tree for an applicable plug-in. The search starts from the location where a resource was last added or altered and continues up the tree, so that all ancestor nodes are visited. Whenever a node containing a plug-in is encountered, that plug-in is called for the current file.
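The ancestor walk can be sketched as follows. This is a simplified model under assumed names: the node structure and plug-in interface are illustrative, not the actual atomsnet data structures.

```python
class Node:
    """Minimal datatree node; structure and names are illustrative only."""
    def __init__(self, name, parent=None, plugin=None):
        self.name, self.parent, self.plugin = name, parent, plugin

def index_file(start_node, file_path):
    """Walk from the node where the resource was last added or altered
    up to the root, invoking every plug-in encountered on the way."""
    invoked = []
    node = start_node
    while node is not None:
        if node.plugin is not None:
            node.plugin(file_path)   # run this plug-in on the current file
            invoked.append(node.name)
        node = node.parent
    return invoked
```

For example, with a plug-in at the root and another at a leaf, indexing from the leaf invokes the leaf's plug-in first and the root's last.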

A problem that may occur with this recursive process is that a loop in the dataflow can be created. This problem has not been dealt with explicitly in the test program, since too few plug-ins currently exist to create such a loop. It should, of course, be dealt with in real-life products where more plug-ins are used.
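One common remedy, sketched below under assumed names, is to refuse to run the same plug-in twice for one file. The `get_triggered` callback, which maps a plug-in to the plug-ins its output triggers, is a hypothetical interface chosen for illustration.

```python
def run_plugins(initial, get_triggered):
    """Run plug-ins starting from `initial`, skipping any plug-in that
    already ran for this file; this breaks dataflow cycles."""
    seen = set()
    pending = [initial]
    order = []
    while pending:
        name = pending.pop(0)
        if name in seen:
            continue          # cycle detected: skip the repeated call
        seen.add(name)
        order.append(name)
        pending.extend(get_triggered(name))
    return order
```

Even when two plug-ins trigger each other, the run terminates after each has executed once.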

The rationale behind the recursive indexing process is that recently discovered information helps in selecting new, appropriate plug-ins. This is at least as efficient as calling all plug-ins, while giving results at least as good as calling a preselected subset.

Consider a small example: a user wants to index an MP3 file. In the standard package the following process takes place. The bootstrap plug-in writes filesystem data and then calls the MIME-type plug-in, because that plug-in is entered at the root of the filesystem subtree. The filesystem plug-in is not called again, since the recursive plug-in search stops one level above this node (a somewhat dirty hack). The MIME-type plug-in enters information under the audio/mpeg entry of the file-type subtree, from where the MP3-specific plug-in is called. Finally, the Google plug-in is called, because it is entered at the root of the MIME-type subtree. New subtrees can thus be added to the system by creating an appropriate plug-in, and subtrees can be removed by removing specific plug-ins. This flexibility is the main strong point of the plug-in based system. The next section deals with coding a plug-in.
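The MP3 walk-through can be condensed into a sketch of the ancestor search. The registry below follows the names used in the example above, not an actual atomsnet configuration, and it ignores the stop-one-level exception for simplicity.

```python
# Hypothetical registry mapping datatree locations to plug-ins.
PLUGINS = {
    ("filesystem",):              "mime-type",      # root of filesystem subtree
    ("file-type",):               "google",         # root of file-type subtree
    ("file-type", "audio/mpeg"):  "mp3-specific",   # audio/mpeg entry
}

def plugins_on_path(location):
    """All plug-ins on the path from `location` up to its subtree root,
    nearest first, as found by the recursive ancestor search."""
    found = []
    for depth in range(len(location), 0, -1):
        plugin = PLUGINS.get(location[:depth])
        if plugin:
            found.append(plugin)
    return found
```

For the MP3 file, the bootstrap plug-in's entry under the filesystem root triggers the MIME-type plug-in; the MIME-type plug-in's entry under audio/mpeg then triggers both the MP3-specific and the Google plug-ins.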
