{ Hi! I'm Mike }
I'm a core developer with The Horde Project and a founding partner of Horde LLC - the company behind the world's most flexible groupware platform. This is my personal blog full of random thoughts about development and life in general.
August 13, 2016

METAR/TAF support in Horde_Service_Weather

Back in the Horde 3 days, Horde used PEAR_Services_Weather to obtain weather.com's data for the weather portal block. When weather.com discontinued its free weather API, sometime in the Horde 4 release cycle, we created Horde_Service_Weather to interface with a host of other weather APIs.

However, if you relied on METAR and TAF weather data - via the "METAR" portal block - you were still stuck using the older PEAR library, since Horde had no support for this type of data. With the release of PHP 7 this became even more problematic, as the PEAR_Services_Weather library does not support PHP 7.

Enter Horde_Service_Weather_Metar, the latest addition to the Horde_Service_Weather library. It adds support for decoding METAR and TAF data from either a remote web server or a local file. While it is not a 100% drop-in replacement for the PEAR library, it is close. For the end user, this is pretty much all you need to know: your METAR weather data will still be available (and the portal block will look prettier). If you are a developer and are interested in the specifics, read on.

Two APIs

Since it is part of the Horde_Service_Weather library, the new driver supports the same API as the other weather drivers. However, METAR and TAF data are not really the same type of weather data as a traditional 3 or 5 day forecast; they are designed for aviation purposes. For example, whereas in the other weather drivers each "period" represents one day, in the METAR driver the forecast periods (and even the number of periods) are not pre-defined. Each period covers only a few hours, with the entire forecast usually spanning only 24 hours. TAF data also contains information not found in a typical consumer weather forecast - such as the type/amount/height of each cloud layer. Given this, we provide an additional method to obtain the more detailed data.

// Note that below we use the global $injector to obtain object instances.
// If not using the $injector, substitute your own instances in place of the $injector call.
$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter')
);
$weather = new Horde_Service_Weather_Metar($params);

// METAR (Current conditions)
$current = $weather->getCurrentConditions('KPHL');
$data = $current->getRawData();

// Current TAF
$forecast = $weather->getForecast('KPHL');
$data = $forecast->getRawData();

The Horde_Service_Weather_Current_Metar::getRawData() method returns all the parsed METAR properties. We use the same key names as the PEAR_Services_Weather library, so this data can be used directly in place of the PEAR library's data.

Likewise, the Horde_Service_Weather_Forecast_Taf::getRawData() method returns the same data structure as the PEAR_Services_Weather_Metar::getForecast() method.
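
For illustration, here is a minimal sketch of consuming that raw data. The key names shown below ('temperature', 'humidity') follow the PEAR library's naming conventions and are examples only - dump the full array to see everything a given report provides:

// Sketch only: the exact keys available depend on the report.
$current = $weather->getCurrentConditions('KPHL');
$data = $current->getRawData();

// Inspect every property that was parsed out of the raw METAR:
print_r($data);

// Or pick out individual values, guarding against optional fields:
if (isset($data['temperature'])) {
    echo 'Temperature: ' . $data['temperature'] . "\n";
}
if (isset($data['humidity'])) {
    echo 'Humidity: ' . $data['humidity'] . "\n";
}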

As mentioned, the "normal" API is still supported and you are free to use it. However, the data available through it is limited. For example, you can still iterate over the forecast periods, but the information in each period is limited to the properties of the Horde_Service_Weather_Period_Base object. Also note that TAF periods only contain weather information that differs from the main forecast section, so period objects may not contain all of the typically expected information. This is why it's best to use the getRawData() methods described above for METAR/TAF data.

// Note that below we use the global $injector to obtain object instances.
// If not using the $injector, substitute your own instances in place of the $injector call.
$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter')
);
$weather = new Horde_Service_Weather_Metar($params);

// Current TAF
$forecast = $weather->getForecast('KPHL');
foreach ($forecast as $period) {
   $humidity = $period->humidity;
   // etc...
}

Local or Remote

Horde_Service_Weather_Metar supports obtaining the data from a remote HTTP service or from a local file for maximum flexibility. By default, it tries the NOAA links. If you want to use a different service, you can provide the path in the constructor's parameters.

$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter'),
    'metar_path' => 'http://example.com/metar',
    'taf_path' => 'http://example.com/taf'
);
$weather = new Horde_Service_Weather_Metar($params);

Or for those that may already sync weather data, or have their own weather observation systems, you can point to a local file:

$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter'),
    'metar_path' => '/var/weather/metar',
    'taf_path' => '/var/weather/taf'
);
$weather = new Horde_Service_Weather_Metar($params);

You can even mix and match the two - if you have a local METAR file but not a local TAF file for example.
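
A sketch combining the two previous examples - a local METAR file alongside a remote TAF URL (the paths below are placeholders):

$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter'),
    // Local METAR data, remote TAF data.
    'metar_path' => '/var/weather/metar',
    'taf_path' => 'http://example.com/taf'
);
$weather = new Horde_Service_Weather_Metar($params);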

Location Database

The PEAR library also included a script that would build a local database of airport weather reporting locations. This has been ported to Horde_Service_Weather as a migration script. Running the migration in the normal way automatically downloads the latest airport data file and builds the necessary tables. See the horde-db-migrate tool for more information on running migrations.

Use of the database isn't mandatory; if the 'db' parameter is not passed to the constructor, it will not be used. However, the database is required for searching location names and performing autocompletion of location names. For example, the following code will search the location database for any locations (either the location name or the ICAO airport code) beginning with $location.

$weather = new Horde_Service_Weather_Metar(array(
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => $conf['weather']['params']['lifetime'],
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'db' => $injector->getInstance('Horde_Db_Adapter')
));

$locations = $weather->autocompleteLocation($location);

Portal Block

We have also improved Horde's METAR weather portal block by tweaking the layout a bit and adding the ability to dynamically change the location using an ajax autocompleter - just like the existing weather block.

Why keep both weather blocks in Horde? As mentioned, METAR and TAF data are specialized data designed for aviation use. They don't fit neatly into a traditional weather forecast layout, containing more detailed information and shorter forecast periods. Providing a specific block for this data allows the full data set to be represented without overly complicating the "normal" weather portal display.

March 18, 2016

Vagrant Images for Horde Testing

As a developer with the Horde Project, I spend most of my development time plugging away on our bleeding edge code - code that lives on the master branch of our Git repository. However, debugging our code and testing fixes on the stable FRAMEWORK_5_2 branch presents issues. It's currently all but impossible to run this branch directly from a Git checkout, so it's often necessary to quickly set up a new test environment from our PEAR packages. In fact, even with our master branch, there is a multitude of different configurations, backends, PHP versions, etc. that need testing.

I've found that what works best for my workflow is utilizing Vagrant images to quickly bring up new VMs with specific configurations for testing. Since I've accumulated a good number of Vagrant images, I thought it would be a good idea to throw them up on my personal GitHub account. These images all create functional Horde installs running with various backends and PHP versions.

There are images based on current Git master and images based on current stable. There is even an image that downloads and compiles current PHP master (currently 7.1-dev) for testing. A thank you to both Michael Slusarz for the original Vagrant image I based these on, and to Jan Schneider for cleaning up the Vagrant configuration.

The images can be found at https://github.com/mrubinsk/horde-dev-vagrant

A final note of warning: these are meant to be throw-away instances for testing and development use. Of course, it wouldn't be too hard to change the configurations to be more appropriate for production use.

May 12, 2012

Shared SQL Authentication with Horde and Dovecot Part 2

In part 1 of this series, we saw how to configure Dovecot to use a simple SQL table as the user and passwd database. We also saw that it was easy to use the existing shadow passwords. This part will focus on setting up Postfix to use Dovecot's user information to determine where to deliver incoming mail, and finally, how to configure Horde to authenticate against the same data. This will also allow Horde to actually manage your mail users.

First up is Postfix. It requires just a few changes in main.cf:

# Since we are using ONLY virtual domain accounts, mydestination should be localhost.
# The domain will be handled by the virtual configuration.
mydestination = localhost


# Tell postfix where to find the virtual mailboxes:
virtual_mailbox_base = /var/vmail

# Tell it what domains are virtual (we only have one so no need for a map)
virtual_mailbox_domains = example.com

# Tell Postfix where to find the user/mailbox map. In this case, we point to a configuration file
# for a mysql based map.
virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf

# The same uid settings as in Dovecot:
virtual_minimum_uid = 150
virtual_uid_maps = static:150
virtual_gid_maps = static:8

# Also, be sure you have this to tell Postfix where to find dovecot's authentication socket.
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

Now, for the mysql_virtual_mailbox_maps.cf file:

user = vmail
password = dbpassword
hosts = 127.0.0.1
dbname = mail

# Again, since we are only hosting a single domain, we can hard code it
# in the query for simplicity. %u is the user part (username) of the lookup key;
# %s would be the full incoming address (username@example.com). Ending the
# result in a '/' signifies that it is Maildir format.
query = SELECT 'example.com/%u/' FROM mailbox WHERE uid = '%u'

That's all there is to it. Postfix will now deliver incoming email to the appropriate user's inbox.

At this point, we have a working email server using SQL auth. Now, let's get Horde configured to use it as well. For this, head over to Horde's administration UI and select the main Horde configuration, then the "Auth" tab. From here, select the "SQL Authentication with custom queries" driver. You will then be presented with fields to fill out, both for connecting to the database containing the data and for entering the various queries. For this example, we are using UNIX sockets to connect to the database. The following is the resulting section of the conf.php file; you can use the array keys to determine which fields they go in on the administrative UI.

$conf['auth']['params']['socket'] = '/var/run/mysqld/mysqld.sock';
$conf['auth']['params']['protocol'] = 'unix';
$conf['auth']['params']['username'] = 'vmail';
$conf['auth']['params']['password'] = 'dbpasswd';
$conf['auth']['params']['database'] = 'mail';
$conf['auth']['params']['query_auth'] = 'SELECT * FROM mailbox WHERE uid = \L AND pwd = \P';
$conf['auth']['params']['query_add'] = 'INSERT INTO mailbox (uid,pwd) VALUES (\L, \P)';
$conf['auth']['params']['query_getpw'] = 'SELECT pwd FROM mailbox WHERE uid = \L';
$conf['auth']['params']['query_update'] = 'UPDATE mailbox SET uid = \L, pwd = \P WHERE uid = \O';
$conf['auth']['params']['query_resetpassword'] = 'UPDATE mailbox SET pwd = \P WHERE uid = \L';
$conf['auth']['params']['query_remove'] = 'DELETE FROM mailbox WHERE uid = \L';
$conf['auth']['params']['query_list'] = 'SELECT uid FROM mailbox';
$conf['auth']['params']['query_exists'] = 'SELECT 1 FROM mailbox WHERE uid = \L';
$conf['auth']['params']['encryption'] = 'crypt-sha512';
$conf['auth']['params']['show_encryption'] = false;

There are a few things to take note of. First, as mentioned in the UI, \P, \L and \O are (respectively) the already encrypted password, the username, and the old username. Since we are using existing shadow passwords, the encryption is set to crypt-sha512. This is why we need both an authentication query and a password query: we need to load the password first to get the salt so we can verify the user-provided password. Also, when the expansions are made, they are already quoted, so do not enclose \P, \L, or \O in quotes when entering the queries.
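
To illustrate the salt issue with plain PHP (a sketch only - this is not Horde's actual verification code): crypt-style hashes embed the salt and algorithm identifier in the stored string, so verifying a candidate password requires loading the stored hash first:

// Sketch only - not Horde's internals. $stored stands in for the value
// loaded via query_getpw; the hash shown is a placeholder.
$stored = '$6$somesalt$xxxxxxxxxxxxxxxxxxxx';
$candidate = 'password-the-user-typed';

// crypt() reuses the '$6$...' prefix (algorithm + salt) from $stored,
// so its output is directly comparable to the stored hash.
if (crypt($candidate, $stored) === $stored) {
    echo "Authenticated\n";
}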

The final part of this is to change IMP to use horde's authentication data - in imp/config/backends.local.php:

<?php
$servers['imap']['hordeauth'] = true;

That's it. We now have a fully functional mail server, with Horde able to add/remove/edit the mail accounts, while end users can continue to use their existing passwords. As an added bonus, it is now trivial to set up the Horde application Passwd to allow users to change their passwords.

May 12, 2012

Shared SQL Authentication with Horde and Dovecot Part 1

I recently had the opportunity to reconfigure a mail server for a client. This client wanted an existing Dovecot/Postfix setup to be moved to use Virtual Mailbox Domains and SQL authentication. The existing setup was a typical out of the box install utilizing system accounts as the mail user base. The driving factor in this was to be able to use Horde to not only authenticate against the mail server, but to also have Horde be able to manage mailbox users.

The requirements were pretty simple, so these steps are fairly simple as well. For starters, this server is only hosting a single domain, so there is no need to track different domains in Dovecot's virtual setup. Also, since these servers were already set up and functional, this article will skip steps like setting up TLS and the like. I have done setups like this before, but not since the Horde 3 days, so it might be helpful for others to see what needed to be done.

This article will show how to setup the Dovecot portion of things. The next article will show Postfix, followed by Horde.

The first thing I did, since this was an existing mail system, was to install Horde 4 to be sure that any requirements for Horde were already met on the server - specifically, that Horde would have no problems communicating with the IMAP server. Next, it was time to configure Dovecot to use SQL maps for the mailboxes. There are a lot of HOWTOs out there about setting up Dovecot from scratch to do this, and most of them are overly complex for what was needed in this case. First, I created a mail database. Unlike all the other tutorials out there, I only had to create a single table in the database. Since we are only hosting a single domain, and the location of the user mailboxes will be easily calculable, this table only holds usernames and passwords:

CREATE TABLE `mailbox` (
  `uid` varchar(255) NOT NULL DEFAULT '',
  `pwd` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`uid`)
);

Next, it's time to configure Dovecot. The things that needed to be changed in /etc/dovecot.conf:

# Put mailboxes in /var/vmail/{domain}/{username}
mail_location = maildir:/var/vmail/%d/%u

# Limit the uid/gid that can login
first_valid_uid = 150
last_valid_uid = 150


auth default {
# .
# .
  passdb sql {
    # Location of the SQL configuration
    args = /etc/dovecot/dovecot-sql.conf
  } 
  
  userdb sql {
    args = /etc/dovecot/dovecot-sql.conf
  }

  # It's possible to export the authentication interface to other programs:
  socket listen {
    master {
      # Master socket provides access to userdb information. It's typically
      # used to give Dovecot's local delivery agent access to userdb so it
      # can find mailbox locations.
      path = /var/run/dovecot/auth-master
      mode = 0600
      # Default user/group is the one who started dovecot-auth (root)
      user = vmail
      group = mail
    }
    client {
      # The client socket is generally safe to export to everyone. Typical use
      # is to export it to your SMTP server so it can do SMTP AUTH lookups
      # using it.
      #path = /var/run/dovecot/auth-client
      path = /var/spool/postfix/private/auth
      mode = 0660
      user = postfix
      group = postfix
    }
  }
}

These changes tell Dovecot where to find the configuration to use SQL for the user and passwd databases, and to export an authentication socket that Postfix can use. We will take care of both of these things later. First, we need to create the vmail user that we told the authentication socket to use and give it the required uid that we specified. While we are at it, let's also create the directory to hold the virtual mailboxes.

useradd -r -u 150 -g mail -d /var/vmail -s /sbin/nologin -c "Virtual mailbox" vmail
mkdir /var/vmail
chmod 770 /var/vmail/
chown vmail:mail /var/vmail/

Now for /etc/dovecot-sql.conf. On some distros, this file will already exist; we just need to tweak it for our situation.

#Database driver
driver = mysql

# Connect string for the database containing the mailbox table.
connect = host=localhost dbname=mail user=vmail password=thedbpasswd

# Since we want to migrate the existing users over from system accounts
# using shadow passwords, we use the CRYPT function.
default_pass_scheme = CRYPT

# The query needed to get the user/password. %n contains only the user part of user@example.com
password_query = SELECT uid as user, pwd as password FROM mailbox WHERE uid = '%n'

# The user query. Since we are only hosting a single domain, it can be hardcoded here.
# This simplifies the DB table and queries. Notice we also always return a static uid and gid
# that match the vmail user we created. This causes the vmail system user to be the user 
# used to read the mailbox data.
user_query = SELECT '/var/vmail/example.com/%n' as home, 'maildir:/var/vmail/example.com/%n' as mail, 150 as uid, 8 as gid FROM mailbox WHERE uid = '%n'

Now, remember that we are moving existing Maildir accounts that currently use shadow passwords. This client's system only had a few existing accounts, so I just manually added the entries into the mailbox table we created above. The passwords were just copy/pasted from the system's shadow file:

INSERT INTO mailbox (uid, pwd) VALUES('userone', '$6$xxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxx');

Take note of the password: it's a SHA-512 crypt hash (as indicated by the $6$ prefix), and the salt is contained in the string itself. This will be important to know when configuring Horde later.

Now that the user accounts are set up in the table, we can copy any existing mailboxes over to the new location. In this case, we have Maildir data located in each user's ~/Maildir. This is fairly simple: copy the directory and change ownership:

cp -r ~userone/Maildir /var/vmail/example.com/userone
cd /var/vmail/example.com
chown -R vmail:mail userone

At this point, the users can now access their mailboxes just as before. Nothing will look different from the user's point of view. In the next part, we will configure Postfix to know where to deliver incoming mail.

December 27, 2011

A Look Back, A Look Ahead

Back in March I wrote about what I planned to focus on once the Horde 4 release process was complete. Well, here we are 9 months later, and my personal roadmap has gotten a bit clouded in my head. So I thought, with this being the end of the year, that it was a good time to take stock and organize what I plan to focus on in the months ahead.

It's always nice to look back and see how well one stuck to the plan, and to take a minute to enjoy one's accomplishments. While there are still things left outstanding on that list, given how little time I have had lately to contribute, I am happy with what I managed to complete.

The work on Ansel that was necessary for a Horde 4 release, including a complete rewrite of the geotagging support, was completed. This has led to lots of improvements in Horde's mapping library. I also pushed out an alpha release of the iPhoto and Aperture export plugins, which can be downloaded from Ansel's download page. It felt really good to finally get those things off my todo list and out the door. There are still lots of enhancement requests waiting, though.

The Hermes Ajax interface, while not feature complete, is functional for day-to-day time entry. The main missing piece is the search/report functionality, and I hope to complete that in the next few months. Unfortunately, the mobile interface for Hermes has not yet been started. I'll hopefully get to it one of these days, but there always seems to be something else of more importance for me. It likely won't be finished by the time Hermes for Horde 4 is released.

We had our annual Hackathon this past November in Boston, and LOTS of great work was done by all of our team members. Personally - in addition to eating Lucky Burgers and attempting to juggle - I mostly focused on completing the new Service_Weather library, adding basic tag navigation in Trean now that Chuck has migrated Trean away from shares to tags, and a bunch of other small bug fixes. It was wonderful to see everybody in person again; a great time was had by all! I'm already itching for our next get together.

The ActiveSync library has also received a considerable amount of work in the last few months: support for additional devices, improved recurrence series/exception support, and revamped timezone support, to name a few. Going forward, I'm looking at implementing the minimum amount of email support required to properly support meeting invitation requests and responses on the device. Once that is figured out, we'll see how much more work full email support would be (well, "full" support for what EAS 2.5 allows, anyway). There is also talk about implementing a more recent version of the EAS protocol - at least 12.1 - which would give us the ability not only to sync more efficiently, but to sync with Apple's iCal application as well. Stay tuned!

Kronolith has a number of missing features that haven't been implemented/ported from the traditional view yet. Some years ago, I added support for resource management. At the time, the AJAX view was not released, nor was it even fully functional, so the resource features were only added to the traditional view. These need to be ported to the dynamic view, along with better support for recurrence series editing.

All in all, another busy, but fun, year of Horde development is ahead!

December 23, 2011

Service_Weather for Developers Part 1

As promised in my last post, here is a basic run down on using Horde_Service_Weather in your own projects.

First, make sure you have the package installed. As of this writing, the latest available packaged release is 1.0.0RC2.

If you have not yet installed any Horde 4 packages, you will need to set up PEAR for Horde 4 (this is not meant to be a HOWTO on installing Horde; see the Horde install docs for more information).

# If you have not yet discovered Horde's PEAR channel:
pear channel-discover pear.horde.org

# Install the package:
pear install horde/Horde_Service_Weather

Next, we need to decide on the actual weather data provider. I recommend using Wunderground, as it is, by far, the most complete of the available choices. It requires registration for at least a free developer's account. Once you have your API key, you can create the weather object:

// Parameters for all driver types
// Note that below we use the global $injector to get the HttpClient and Cache instances.
// If not using the $injector, substitute your own instances in place of the $injector call.
$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600
);
$params['apikey'] = 'yourAPIKey';
$weather = new Horde_Service_Weather_WeatherUnderground($params);

Of course, if you choose to use, e.g., Google instead of Wunderground, just create the appropriate object:

// Google returns already localized strings,
// just pass it your language code.
$params['language'] = 'en';
$weather = new Horde_Service_Weather_Google($params);

Now we have our weather object, connected to the desired backend data provider. Let's fetch some weather information:

// Set the desired units
// Defaults to Horde_Service_Weather::UNITS_STANDARD
$weather->units = Horde_Service_Weather::UNITS_METRIC;

// Get current conditions.
// The location identifier can take a wide range of formats.
$conditions = $weather->getCurrentConditions('boston,ma');

// Unit labels
$units = $weather->getUnits();

// Basic condition description:
// e.g., "Sunny" or "Partly Cloudy" etc.
echo $conditions->condition;

// Current temp
echo $conditions->temp . $units['temp'];

Of course, lots of other properties are available. Check the documentation for details. Now, let's get a forecast:

// Get a 5 day forecast.
$forecast = $weather->getForecast('boston,ma', Horde_Service_Weather::FORECAST_5DAY);

// Each forecast result contains a collection of "Period" objects:
foreach ($forecast as $period) {
    echo 'Date: ' . (string)$period->date;  // Horde_Date object
    echo 'Hi: ' . $period->high . $units['temp'];
    // Display other properties etc...
}

// If you want just a specific period:
$periodOne = $forecast->getForecastDay(0);

// Total snow accumulation for the day:
echo $periodOne->snow_total . $units['snow'];

Again, check the documentation for details on available properties.

In the next installment, we'll look at validating locations, searching locations and using a location autocompleter.

Have fun!

December 22, 2011

Service_Weather

With the recent discontinuation of The Weather Channel's public API access, Horde was left without a data feed for weather information other than the aviation style METAR/TAF reports. Weather information has historically been used in two places in Horde: the WeatherDotCom portal block, and the Timeobjects module, where we export weather information to other applications - like Kronolith, Horde's calendaring application.

After an audit of available (and free) weather services, I settled on the following three as suitable alternatives to TWC's dead data feed.

Weather Underground: Of the three providers we decided to support, this one provides the most detailed data. You must sign up for an account for your Horde install. There is a free "developer" account option, though it has relatively low usage limits, which may be a problem if you have a large user base. We of course cache every request to help get the most out of those limits. They also offer very reasonable paid options.

World Weather Online: Another free service that provides a fair amount of data, though it's not as detailed as Weather Underground. Free account required, with higher limits than Weather Underground.

Google: Google does not provide an official weather API, but they do have an API interface that is used internally for Google's weather portal block. The data provided is not very detailed, but if you are looking for a provider that does not require any registration, this might be a solution for you. No registration and no known limits - though the API is unofficial, so keep that in mind.

It's worth noting that the biggest thing missing, even from Weather Underground's feed, is the day/night forecast style. They provide an hourly forecast, but no simple day/night forecast; the non-hourly forecast data is provided as a single set of conditions for the entire day. Another fairly well known provider, AccuWeather, appears to provide this (and fairly detailed data as well), but sadly, they have informed me that they no longer provide free data feeds - even for FOSS projects. Also, before anyone asks: yes, I did look at Yahoo's weather feed. It is an RSS feed, which in and of itself is not a problem, but it provides only very basic data, for only a day or two in the future... not enough for our needs.

The end result of all this is the new Horde_Service_Weather library, a new Weather block, and support for the new weather drivers in the Timeobjects application for exporting the weather to applications like Kronolith. As a side effect, the weather support in Horde has, IMO, been greatly improved. The weather portal block code received a much needed overhaul, including the ability to dynamically change the location being displayed directly from the portal screen, along with autocompletion of the locations.

At the time of this writing, Service_Weather is in Beta, and available via Horde's PEAR server. The new weather block is included in the most recent Horde release, and the latest Timeobjects release contains support for the new code as well.

For developers interested in learning how to use Service_Weather in their own applications - look forward to a blog entry in the near future detailing the usage.

December 5, 2011

Ansel exporter plugins available

I first wrote about my efforts to build an iPhoto export plugin for uploading images directly to Ansel back in November 2008. Three entire years ago. I wrote about my progress again in 2009, along with some screen shots. Since then, I've rewritten it twice and ported it to Aperture. I've been using these plugins myself as part of my workflow ever since.

Both Ansel versions 1.x and 2.x are supported by these uploaders. All metadata is retained during export, including keywords. You can create new galleries directly from the plugin, as well as browse a gallery's thumbnails so you can see what images have been previously uploaded. You may configure multiple Ansel servers as well.

I've finally gotten around to fulfilling my promise to publish a binary installer for these, so that users don't have to build them from scratch in XCode. You can now download these directly from Ansel's download page. Please keep in mind these are alpha-level releases. Feel free to report any issues you have to the Ansel mailing list, or open a bug report at http://bugs.horde.org.

August 14, 2011

git case sensitivity madness

I do most of my development work on my MacBook. The Mac, by default, uses a case preserving, but insensitive, filesystem. This is, by far, my biggest gripe about the OS. Combine this with Git and it leads to a lot of havoc, since Git is case sensitive. Since Horde uses Git, this can bite anyone who develops on a similar filesystem. It was a huge issue back when we were refactoring like mad for Horde 4, since there was a lot of file renaming going on.

Nowadays, I rarely run into this issue, but it does still crop up from time to time (like today!), usually during a merge. I used to jump through all kinds of hoops involving setting/unsetting the ignore-case switch in Git's config (a very bad idea). Now, I've found the following to be a much better way of dealing with it.

$ git pull --rebase

remote: Counting objects: 203, done.
remote: Compressing objects: 100% (93/93), done.
remote: Total 143 (delta 81), reused 81 (delta 39)
Receiving objects: 100% (143/143), 49.49 KiB, done.
Resolving deltas: 100% (81/81), completed with 21 local objects.
First, rewinding head to replay your work on top of it...
error: The following untracked working tree files would be overwritten by checkout:
	passwd/lib/Driver/Adsi.php

Please move or remove them before you can switch branches.
Aborting

# As you can see, the file is present, but with the wrong case.
$ ls passwd/lib/Driver/
adsi.php

# To fix, we have to git mv the file to a different name,
# then git mv it back to the correct name
$ git mv adsi.php adsi.phpX
$ git mv adsi.phpX Adsi.php
$ git commit

# once all are taken care of, pull and rebase.
# At this point there might be conflicts,
# but you can skip them during the rebase process.
$ git pull --rebase

CONFLICT (rename/delete): Rename passwd/lib/Driver/ldap.php->passwd/lib/Driver/Ldap.php in Fix case sensitivity issues on mac during merge and deleted in HEAD
CONFLICT (rename/delete): Rename whups/lib/Driver/sql.php->whups/lib/Driver/Sql.php in Fix case sensitivity issues on mac during merge and deleted in HEAD
Failed to merge in the changes.
Patch failed at 0001 Fix case sensitivity issues on mac during merge

When you have resolved this problem run "git rebase --continue".
If you would prefer to skip this patch, instead run "git rebase --skip".
To restore the original branch and stop rebasing run "git rebase --abort".

$ git rebase --skip

Hope this helps!

June 16, 2011

The Horde PrettyAutocompleter - Part One

Kronolith 3 includes new tagging features, including an autocomplete feature for adding new tags to events and calendars. Horde has had autocompletion code for ages - autocompletion of email addresses in IMP, for example. In Kronolith, we wanted a more dynamic and fresh interface for tags to go along with the brand new dynamic interface. The result was the PrettyAutocompleter widget. In this entry, I'll explain how it's implemented in Kronolith and how you can adapt it for use in other applications.

The PrettyAutocompleter is a standalone JavaScript widget that is not limited to tags; in fact, it's also used for attendees in Kronolith's meeting scheduling interface. It's part of the Horde_Core package and lives in prettyautocomplete.js. It extends the Autocompleter object defined in autocomplete.js. You don't have to worry about including any of these files on your own, though; the Imple object (see below) takes care of all of this for you.

There are two main components to the autocompleter: the UI and the supporting backend code. First, let's look at the HTML required. The following is the minimum needed to set up an autocompleter.

<input id="kronolithEventTags" name="tags" />
<span id="kronolithEventTags_loading_img" style="display:none;"><img src="spinner.gif" /></span>

Now that we have the HTML set up, let's hook it up both to the javascript that transforms it into the PrettyAutocompleter and to the backend so it can retrieve the autocomplete choices. For this we use a Horde_Core_Ajax_Imple object. These objects connect UI elements to Ajax actions. Each Horde application can define its own Imples by adding the classes to application/lib/Ajax/Imple. In the case of Kronolith, we connect it with code similar to this:

$injector->getInstance('Horde_Core_Factory_Imple')->create(
    array('kronolith', 'TagAutoCompleter'),
    array(
        // The name to give the (auto-generated) element that acts as the
        // pseudo textarea.
        'box' => 'kronolithEventACBox',

        // Make it spiffy
        'pretty' => true,

        // The dom id of the existing element to turn into a tag autocompleter
        'triggerId' => 'kronolithEventTags',

        // A variable to assign the autocompleter object to
        'var' => 'eventTagAc'
    )
);

This code transforms the kronolithEventTags element in the above HTML into a PrettyAutocompleter. It also attaches the autocompleter to an Ajax action for retrieving the autocomplete choices. The Imple object that does this is kronolith/lib/Ajax/Imple/TagAutoCompleter.php and it looks like this:

class Kronolith_Ajax_Imple_TagAutoCompleter extends Horde_Core_Ajax_Imple_AutoCompleter
{
    /**
     * Attach the Imple object to a javascript event.
     * If the 'pretty' parameter is empty then we want a
     * traditional autocompleter, otherwise we get a spiffy pretty one.
     *
     * @param array $js_params  See
     *                          Horde_Core_Ajax_Imple_AutoCompleter::_attach().
     *
     * @return array  See Horde_Core_Ajax_Imple_AutoCompleter::_attach().
     */
    protected function _attach($js_params)
    {
        $js_params['indicator'] = $this->_params['triggerId'] . '_loading_img';

        $ret = array(
            'params' => $js_params
        );

        if (empty($this->_params['pretty'])) {
            $ret['ajax'] = 'TagAutoCompleter';
        } else {
            $ret['pretty'] = 'TagAutoCompleter';
        }

        if (!empty($this->_params['var'])) {
            $ret['var'] = $this->_params['var'];
        }

        return $ret;
    }

    /**
     * Method to obtain autocomplete choices.
     *
     * @param array $args  Arguments passed from the Ajax action. The 'input'
     *                     parameter contains the text fragment to
     *                     autocomplete on.
     *
     * @return array  Returns an array of possible choices.
     */
    public function handle($args, $post)
    {
        // Avoid errors if 'input' isn't set and short-circuit empty searches.
        if (empty($args['input']) ||
            !($input = Horde_Util::getFormData($args['input']))) {
            return array();
        }

        $tagger = Kronolith::getTagger();
        return array_values($tagger->listTags($input));
    }

}

Note that there is very little functionality in this class. The bulk of the work is handled by the class it extends - the more general Horde_Core_Ajax_Imple_AutoCompleter. The Kronolith_Ajax_Imple_TagAutoCompleter::handle() method is automatically called via Ajax by the autocompleter to fetch the choices and display them in the UI. The $tagger variable is a Kronolith_Tagger object and contains Kronolith specific code for interacting with Horde's Content system. All you need to know right now is that the $tagger->listTags() call provides an array of tag names that start with the text in $input.

The only thing left to do is to initialize the autocompleter. This builds the DOM structure for all the required elements and applies all the styling needed. In Kronolith, this is done when the dynamic interface is loaded (the element remains hidden in Kronolith, however, until the event detail form is displayed).

eventTagAc.init();

The reset() method is used to clear the values from the autocompleter and optionally assign new values. For example, in Kronolith when an existing event is loaded for editing, we prepopulate the autocompleter with that event's existing tags:

// Reset the autocompleter and clear all tags
eventTagAc.reset();

// Reset autocompleter, and prepopulate with two tags.
eventTagAc.reset(['tagOne', 'tagTwo']);

Getting the tags out of the autocompleter to save them is also very easy. However, before actually getting the value out of the autocompleter, we need to make sure that all of its input is processed. Since the autocompleter uses a comma to trigger adding a new tag to the list of displayed tags, it's possible for the user to type a new tag in the input area, then press save before adding a comma. We must make sure this last tag is added. This is done by the shutdown() method. Once we are ready to get the value, we can simply access it via the id or name we gave to the initial input element. For example, we gave the input element a dom id of kronolithEventTags and a name of tags. So, to get the current tag value, we can just:

eventTagAc.shutdown();
var tags = $F('kronolithEventTags');

This will be the comma delimited list of tag names.
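
On the server side, one simple way to turn that submitted value back into individual tags might look like this (a sketch, not Kronolith's actual save path):

// Sketch only: split the comma-delimited 'tags' form value into an array,
// trimming whitespace and dropping empty entries.
$value = Horde_Util::getFormData('tags');
$tags = array_filter(array_map('trim', explode(',', $value)));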

The following is a bare-bones example script pulling everything together. Obviously the onload and onclick handlers would normally be written in a less obtrusive way, but for a quick and dirty example this is fine. I've written it as if it were part of the Kronolith application, so if you want to actually test it out and play with the code, just drop it into the root Kronolith directory with a name like example.php. It will pull existing Kronolith tag data for the autocompletion, but will obviously not write any data back out.

<?php
/**
 * Autocomplete example
 */

// Setup the application
require_once dirname(__FILE__) . '/lib/Application.php';
Horde_Registry::appInit('kronolith');

// Attach the autocompleter to the ajax action.
// @see Kronolith_Ajax_Imple_TagAutoCompleter
$injector->getInstance('Horde_Core_Factory_Imple')->create(
    array('kronolith', 'TagAutoCompleter'),
    array(
        // The name to give the (auto-generated) element that acts as the
        // pseudo textarea.
        'box' => 'kronolithEventACBox',
        // Make it spiffy
        'pretty' => true,
        // The dom id of the existing element to turn into a tag autocompleter
        'triggerId' => 'kronolithEventTags',

        // A variable to assign the autocompleter object to
        'var' => 'eventTagAc'
    )
);
?>
<head>
 <title>Autocomplete Example</title>
<?php
Horde::includeStylesheetFiles();
Horde::includeScriptFiles();
Horde::outputInlineScript();
?>
<body onload="eventTagAc.init()">
  <div class="kronolithDialogInfo"><?php echo _("To make it easier to find, you can enter comma separated tags related to the event subject.") ?></div>
  <input id="kronolithEventTags" name="tags" />
  <span id="kronolithEventTags_loading_img" style="display:none;"><?php echo Horde::img('loading.gif', _("Loading...")) ?></span>
  <br />
  <a href="#" onclick="eventTagAc.shutdown();alert($F('kronolithEventTags'));">Show me the tags</a>.<br />
  <a href="#" onclick="eventTagAc.reset();">Reset the autocompleter</a><br />
  <a href="#" onclick="eventTagAc.reset(['Personal', 'Fun']);">Reset, with prepopulated tags</a>.
</body>


For the next installment, we'll look at the Previously Used Tags functionality in Kronolith's interface. This is not part of the general PrettyAutocompleter code, but is easy to implement.

May 13, 2011

Building a custom Horde_Block

Anyone who has used Horde at all should know what a Horde_Block is. These are the individual bits of content that are displayed on the "portal" page in Horde. Things like the Mail Summary, Calendar Summary etc...

If you have a custom application, or even just need some standalone content presented as a Horde_Block, it's fairly easy to put one together. For this example, let's assume you have some custom content you want to display as a block, but that it's not tied to any Horde application (similar to the WeatherDotCom block). First order of business is to create a new PHP file for your block. The easiest way to do that is to copy the Example.php file from skeleton/lib/Block and edit it appropriately. The content of that file when you are done should be similar to:

<?php
/**
 * @package Horde
 */
class Horde_Block_Foo extends Horde_Core_Block
{
    /**
     */
    public function __construct($app, $params = array())
    {
        parent::__construct($app, $params);

        $this->_name = _("Foo Summary");
    }

    /**
     */
    protected function _params()
    {
        return array(
            'color' => array(
                'type' => 'text',
                'name' => _("Color"),
                'default' => '#ff0000'
            )
        );
    }

    /**
     */
    protected function _title()
    {
        return _("Some special Foo content");
    }

    /**
     */
    protected function _content()
    {
        $html  = '<table width="100" height="100" bgcolor="%s">';
        $html .= '<tr><td>&nbsp;</td></tr>';
        $html .= '</table>';

        return sprintf($html, $this->_params['color']);
    }

}

Note the name of the class is Horde_Block_Foo; this file should be saved as horde/lib/Block/Foo.php. If the block were to be called "Bar" instead, the class name would be Horde_Block_Bar and it would be saved as horde/lib/Block/Bar.php - you get the idea.

The main method you are interested in is the _content() method. This is where the block content is generated. The HTML for the block should be built as a string and returned from this method. If you want to be able to configure anything about the block, you should add to the _params() method. In this example, a text value named "color", with a default value of '#ff0000', is available. As shown in the example, to obtain the value of a setting, you use $this->_params['setting_name']. There are other types of values available as well. For example, if you wanted to provide a select list of choices instead, you could do something like:

array(
    'units' => array(
        'type' => 'enum',
        'name' => _("Units"),
        'default' => 'standard',
        'values' => array(
            'standard' => _("Standard"),
            'metric' => _("Metric")
        )
    ),
)

This provides a select list named "Units" and allows either "Standard" or "Metric" as choices.

After adding this file and the content you want it to show, it will be available as a block for your users to add to their portals the next time they log in.

March 23, 2011

Personal Roadmap

With the release of the first RCs for Horde 4 and the final release looming less than 2 weeks away, I thought it a great time to start looking ahead at my personal roadmap for Horde 4 development.

The entire Horde team has been pretty much exclusively working on resolving final roadblockers and reworking the release process these last few months. As much fun as it is getting ready for this milestone release, I'm also a bit excited about being able to get back to some projects that have been patiently taking the back seat while the initial Horde 4 release was being prepared.

Some things I'm excited to get back to in the months ahead:

Ansel, the Horde photo gallery application, needs some significant changes to keep up with the recent changes in the Horde_Share library. These *must* be done before Ansel can be released with the next Horde 4 release, so this is likely where I will focus immediately following the release. Additionally, Ansel needs to move away from the Google Maps based geolocation features and use the new Horde_Map functionality in Horde 4. This would provide the ability to use any number of different mapping backends while changing nothing but a configuration setting. I might even make this a per-gallery setting, so pictures taken while hiking could, theoretically, be placed on a hiking trail map, while pictures taken while sightseeing could be placed on a traditional map, for instance. I'll also hopefully finally get to some of the dozen or so enhancement requests waiting on our ticket tracker!

Hermes is getting an Ajax AND mobile interface (partially sponsored by Alkaloid Networks - thanks Ben!), and I'm pretty excited about working on this. I'd also like to expand on some of our other existing mobile interfaces, like Kronolith and Nag. I also have a bunch of other itch-scratching to do in Nag.

I've been wanting more integration points for our Twitter and Facebook support for a while now. We already have basic clients for these two social networking services, as well as some existing integration such as a Turba driver for Facebook contacts, a TimeObjects driver for Facebook events, birthdays etc... but we really need to add things like auto posting to the user's Twitter/Facebook stream after publishing a new blog post in Jonah, or new photos have been added to an Ansel gallery.

Jonah: I'd like to finally get this application to the point where it can be released. I, along with a number of the other Horde devs, have been using Jonah to power our personal blogs since way before Horde 4 work even started, and it's about time this thing got released. Thanks to Ian Roth for contributing a number of patches on GitHub related to getting it more in-line with Horde 4 code.

Add to these a slew of existing enhancement requests, some articles in various stages of completion, the normal bug fixes and support requests that crop up, and some personal coding projects, and I'll have enough to fill my development time for the foreseeable future. Now, all I need is a Horde_Time::create() method to find the time to do all this...

February 9, 2011

Rebuilding CVS

Recent CVS packages provided by Ubuntu include a patch that changes the format the date is written out in. Apparently, this is to preserve backward compatibility with old hook scripts. We discovered this some time ago, while preparing patches for our releases. The problem manifested itself as the packaged tarball having different diffs against the previous version than the official upgrade patches we provided.

We fixed this by repackaging the CVS package we install on our CVS server. I was reminded of this issue recently, when it became necessary to rebuild the host that we run CVS on... of course, I couldn't find the binary package that I had built, so it became necessary to rebuild it again. The following is a summary of what had to be done, in case anyone finds it useful.

mkdir my_cvs
cd my_cvs

# if not already installed:
sudo apt-get install build-essential fakeroot dpkg-dev devscripts

# get the cvs source package
apt-get source cvs

# make sure we can build it
sudo apt-get build-dep cvs

# unpack the package
dpkg-source -x cvs_1.12.13-12ubuntu1.dsc

# remove the offending patch
rm cvs-1.12.13/debian/patches/67_date_format_option

# update the version information/changelog
cd cvs-1.12.13
dch -i

# rebuild
debuild -us -uc

The new package will be located in the my_cvs directory...

March 28, 2010

Initial support for ActiveSync added to git master

Work on ActiveSync support for Horde has reached a milestone of sorts. The initial codebase has been merged into the master branch of our Git repository.

The work is not yet production-ready, but has proven to be fairly usable on the devices I am able to test with. There are some basic instructions and other information available on the ActiveSync wiki page.

If you feel adventurous, please feel free to try it out - just make sure to back up all your data first! If you are able to test on a device not already listed in the wiki, please drop us a note on the dev@lists.horde.org mailing list so we know how things went.

January 16, 2010

Git cherry goodness

With the recent flurry of development activity over at the Horde Project came an increase in the number of topic branches in our git repository. One of the problems with longer-lived topic branches is how to keep the topic branch in sync with changes going on in the master branch.

The best way to do this is to rebase the topic branch from master and then rebase your tracking branch against the remote topic branch.  The problem with doing this is that if you are using a post-receive hook to send out commit announcement emails, pushing your changes back to the server will result in duplicate commit messages being sent for the changes that were rebased from origin.  This makes it difficult to zero in on what the actual relevant change in the topic branch is.

After some digging around in the git documentation and some hacking on the script we use for commit messages, we have come up with a good-enough solution. The git cherry command can examine the change sets between two branches and detect whether a specific commit on one branch is present on the other. What makes this so useful is that it doesn't compare commit ids, since these will be different; it actually looks at the diff of each changeset to see if they are the same change.

You can look at the full post-receive-email script we use by looking at our horde-support repository, but the basic idea is to call git cherry like so:

git cherry master [topic branch] 

and then iterate over the results, ignoring any commit ids that are prefixed with a '-'.
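
To sketch that filtering idea in PHP (the real hook is a modified post-receive script, not this code; the branch name below is a placeholder):

// Sketch only: announce just the commits git cherry reports as unique
// to the topic branch. Commits already present on master are prefixed
// with '-', commits unique to the topic branch with '+'.
$lines = array();
exec('git cherry master mytopicbranch', $lines);
foreach ($lines as $line) {
    if (strpos($line, '+') === 0) {
        $sha = trim(substr($line, 1));
        // ...send the commit announcement email for $sha...
    }
}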

So now, my typical development session might go something like this (some commands added for clarity):

# Switch to master branch
git checkout master
git pull --rebase

# Resolve, code, hack... 
git add
git commit
git push

# Pull any new changes
git pull --rebase 

# Switch to my ActiveSync topic branch 
git checkout ActiveSync 

# Rebase it to keep it in sync with master 
git rebase origin/master 
git add
git commit

# rebase it against remote
git pull --rebase
git push

January 16, 2010

Momentum

There's been a flurry of recent "infrastructure" activity over at The Horde Project. One such activity has consolidated the two main code repositories. The horde-hatchery repository has been imported, along with all the history, into the main horde repository. Many thanks to Michael Slusarz for getting this done.

Along with this consolidation come some improvements to the process of setting up a Horde development environment. There is still ongoing discussion as to what the final solution will be, and, as with any big change, there are still some things to be worked out, but hopefully this lowers the barrier for interested developers to get involved in Horde development. See http://horde.org/source/git.php for more information.

Additionally, the scripts that automatically generate the documentation at http://dev.horde.org have been tweaked a bit. We now generate documentation for all code in our git repository, as well as all the code in the stable, H3 line. We are now also generating development snapshots of the main-line H4 applications and libraries.

Some might also notice performance improvements when accessing our services such as bug tracking, source code browsing or our wiki. These services were recently migrated to a more capable server. Thanks to Ben Klang and Alkaloid Networks for the server space.

September 27, 2009

Ansel, Kronolith, and more...

Wow, it's been almost 4 months since my last entry on June 10th. Time flies... especially when you are busy. In the interest of keeping people informed, here are some of the new things I've been working on with regard to The Horde Project, with an indication as to which version of Horde the work applies to:

Horde_Service_Twitter (H4 Only)

Since starting to use Twitter, I figured it would be helpful to have my Twitter timeline appear in Horde, since that's what is usually loaded in my browser. Following my typical NIH rule when it comes to Horde, the result is the new Horde_Service_Twitter library and the twitter_timeline block for Horde's portal. Horde_Service_Twitter supports authentication to Twitter via both the standard HTTP authentication method and OAuth, the latter making use of the Horde_Oauth library. The portal block allows you to publish a new tweet, shows the most recent tweets by the people you are following, and allows you to reply to a displayed tweet.

The addition of Horde_Service_Twitter, along with Horde_Service_Facebook, adds some exciting possibilities for integration points with other Horde applications. Horde already has some address book and calendar integration with Facebook, but other possibilities include things like automatically posting a notification to Twitter or Facebook when a set of new images are uploaded to Ansel, or maybe when a new blog post is published with Jonah.

Ansel (H3 and H4)

Ansel has gotten a fair amount of work recently and is ready for a 1.1 release. The most obvious change is full support for geo-tagging features. Ansel has always been able to read and display an image's meta data... but up until now you couldn't do much with any of the location data. Now, Ansel will recognize GPS coordinates in the meta data and display small thumbnails of those images in an embedded Google Map. There are various locations throughout Ansel where you can view these maps. You can also add location data to images that do not contain it, as well as edit any existing location data. Full support for reverse geocoding means you can (re)tag an image either by typing a textual name for the location (such as Philadelphia, PA) or by typing in actual GPS lat/lng coordinates. Of course, you can also (re)tag an image simply by browsing the Google Map and clicking where you want the image to be located.

Ansel's bleeding edge code has officially moved out of Horde's CVS repository and into the git repository, horde-hatchery. A fair amount of refactoring and internal improvements have already been done in getting Ansel and Horde_Image ready for Horde 4. Among these changes is better support for image meta data, with a new driver based on ExifTool. This allows recognition of not only EXIF tags, but also IPTC and XMP data.

iPhoto/Aperture Export Plug-Ins (H3 and H4)

Related to the Ansel application are new export plug-ins for both of Apple's image management applications, iPhoto and Aperture. Currently available via Horde's horde-hatchery git repository, these plug-ins allow you to upload your images directly to an Ansel server from within iPhoto or Aperture. All meta data is retained when uploaded, including keywords that were added using Aperture or iPhoto. You are able to create new galleries from the plug-in's interface, browse thumbnails of existing Ansel galleries (to see what images you have previously uploaded), and choose whether the images should be resized (and to what size) before uploading. Both plug-ins support configuring multiple Ansel servers if you happen to have access to different installations.

Even though these live in horde-hatchery, they will work with both Ansel 1.x and the bleeding edge Ansel code that lives in the hatchery. The iPhoto exporter supports iPhoto '08 and later, and the Aperture exporter is written for Aperture 2.1 or later. Both require OS X 10.5 or later. They should run on either PPC or Intel hardware, but have only been tested on Intel. Currently they are available only as source (which can easily be compiled using XCode), but a development build should be available shortly.

Kronolith (H4 only)

I've been tasked with adding support for resource scheduling to Kronolith, and the work is mostly complete. Resources may be invited to events by the event organizer using the existing attendees interface. Resources can be set up to automatically determine whether they are available and respond to the request on their own. There is also support for resource groups. Resource groups are just a grouping of similar resources. When a group is invited to a meeting, the first available resource from that group will accept the invitation. For example, suppose you have 10 projectors available and it doesn't really matter which projector is used for a meeting. Instead of going through all the projectors to see which one is available, you can just invite the projector group to the event. The first projector that is available during the meeting time will accept the invitation.

June 11, 2009

Weather forecasts in Kronolith

During some recent quality time spent with my schedule (read: "trying to figure out how to add more time to a day"), I had an a-ha moment. Why am I switching back and forth between weather data and my calendar to see the weather for a day of interest in my calendar? Why can't it just display in the calendar? We already have some code in Horde that interfaces with the weather.com API (thanks to PEAR's Services_Weather package), so why not provide the weather data via the listTimeObjects API so Kronolith can pick it up?

The first step along this path was to create a new "mini" application - an application that does nothing other than expose data via the listTimeObjects API. This resulted in the lightweight TimeObjects application that now lives in the horde-hatchery git repository. This does put another level of abstraction between the weather data and Kronolith, but what I didn't want to do was start a trend of having to write a new Kronolith Event driver for every new type of time data that might be desired. With TimeObjects, all that is needed is to drop a new driver into TimeObjects' lib/Driver/ directory and it will be picked up and exposed via the API.

With the addition of the new TimeObjects application, Kronolith can now display forecast data for up to the next 5 days directly in the calendar view. The high/low temperature and the general conditions for each day are displayed, with a tooltip popup showing more detail. Currently, for this to work, you will need a contact in Turba that is marked as your own and contains enough of your location information to satisfy weather.com's service. Horde will also need to be configured with the weather.com API keys... just like the weather.com block requires. A future addition will be to allow choosing (multiple?) locations via a Google map in Horde's preferences.

Also included in TimeObjects is a driver for exposing Facebook Events via the listTimeObjects API. 

Since the listTimeObjects API really isn't documented anywhere other than our source code, a little introduction may be in order.  If you are not interested in Horde internals, you can skip to the end.

The API allows any Horde application to expose its data as events to be displayed in Kronolith. For example, via this API, Turba can provide the data needed to display contact birthdays and anniversaries in Kronolith, and Nag can display task due dates, etc. For this to work, an application needs to expose two methods via its own API: listTimeObjectCategories(), which returns the categories of time objects available (birthday, anniversary etc.), and listTimeObjects(), which returns the actual data. The data returned includes information such as the start and end times, a title, a description, an icon, and a link. For more information I will direct you to the phpdoc at http://dev.horde.org.
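
To give a feel for the shape of the data, here is a bare sketch of what an application's implementation might look like (simplified and hypothetical - the actual registry wiring and method signatures may differ):

// Hypothetical, simplified example of the two API methods.
class Example_Api
{
    public function listTimeObjectCategories()
    {
        // Category key => human readable name.
        return array('weather' => _("Weather"));
    }

    public function listTimeObjects($categories, $start, $end)
    {
        // Each entry describes one event-like object for Kronolith.
        return array(
            array(
                'id' => 'weather20090611',
                'title' => _("Sunny 75/54"),
                'description' => _("Clear skies throughout the day."),
                'start' => '2009-06-11T00:00:00',
                'end' => '2009-06-12T00:00:00',
            ),
        );
    }
}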

As always, a warning that the TimeObjects code is Horde 4 only, and as such is not considered stable.

June 3, 2009

New Horde Shops Open

There are now two new Horde merchandise shops open. These Spreadshirt shops are in addition to the existing CafePress shop we have.

The shop at horde.spreadshirt.net is for our European customers, as there will be no additional taxes or customs charges. The other shop is at horde.spreadshirt.com and is for our USA customers. Both shops offer flock-printed polo shirts and T-shirts, as well as some other cool stuff!

May 22, 2009

Your Facebook Stream in Horde

Keeping up with Facebook's Open Stream API, Horde just got a new Horde Block: the Facebook Stream Block. With this block you can view your stream feed (filtered by any of the same filters available on your Facebook Home page), add a "like" directly from the block, update your Facebook status, and see how many new notifications you have. This block replaces the previous Facebook Summary block that I wrote about previously.