METAR/TAF support in Horde_Service_Weather

Back in the Horde 3 days, the weather portal block was powered by PEAR_Services_Weather, which obtained its data from Weather.com. When Weather.com discontinued its free weather API sometime in the Horde 4 release cycle, we created Horde_Service_Weather to interface with a host of other weather APIs.

However, if you relied on METAR and TAF weather data - via the "METAR" portal block - you were still stuck using the older PEAR library, since Horde_Service_Weather did not support this type of data. With the release of PHP 7 this has become even more problematic, since PEAR_Services_Weather does not support PHP 7.

Enter Horde_Service_Weather_Metar, the latest addition to the Horde_Service_Weather library. It adds support for decoding METAR and TAF data from either a remote web server or a local file. While it is not a 100% drop-in replacement for the PEAR library, it is close. For the end user, this is pretty much all you need to know - your METAR weather data will still be available (and the portal block will look prettier). If you are a developer and are interested in the specifics, read on.

Two APIs

As part of the Horde_Service_Weather library, the new driver supports the same API as the other weather drivers. However, METAR and TAF data are not really the same type of weather data as a traditional 3 or 5 day forecast; they are designed for aviation purposes. For example, whereas in the other weather drivers each "period" represents one day, in the METAR driver the forecast periods (and even the number of periods) are not pre-defined. Each period covers only a few hours, with the entire forecast usually spanning about 24 hours. TAF data also contains information not typically found in a consumer weather forecast, such as the type/amount/height of each cloud layer. Given this, we provide an additional method to obtain the more detailed data.

// Note that below we use the global $injector to obtain object instances.
// If not using the $injector, substitute your own instances in place of the $injector call.
$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter')
);
$weather = new Horde_Service_Weather_Metar($params);

// METAR (Current conditions)
$current = $weather->getCurrentConditions('KPHL');
$data = $current->getRawData();

// Current TAF
$forecast = $weather->getForecast('KPHL');
$data = $forecast->getRawData();

The Horde_Service_Weather_Current_Metar::getRawData() method returns all the parsed METAR properties. We use the same key names as the PEAR_Services_Weather library, so this data can be used directly in place of the PEAR library's data.

Likewise, the Horde_Service_Weather_Forecast_Taf::getRawData() method returns the same data structure as the PEAR_Services_Weather_Metar::getForecast() method.
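To illustrate, here is a minimal sketch of consuming the raw data. Since the key names mirror PEAR_Services_Weather's, the specific keys shown ('temperature', 'humidity') are assumptions based on that compatibility; consult the PEAR library's documentation for the full list.

```php
<?php
// Sketch: reading a few parsed METAR properties from getRawData().
// The key names below are illustrative - they follow the
// PEAR_Services_Weather naming convention.
$current = $weather->getCurrentConditions('KPHL');
$data = $current->getRawData();

if (isset($data['temperature'])) {
    printf("Temperature: %s\n", $data['temperature']);
}
if (isset($data['humidity'])) {
    printf("Humidity: %s\n", $data['humidity']);
}
```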

As mentioned, the "normal" API is still supported and you are free to use it. However, the data available through it is limited. You can still iterate over the forecast periods, but the information in each period is restricted to the properties of the Horde_Service_Weather_Period_Base object. Also note that TAF periods only contain weather information that differs from the main forecast section, so a period object may lack information you would otherwise expect. This is why it's best to use the getRawData() methods described above for METAR/TAF data.

// Note that below we use the global $injector to obtain object instances.
// If not using the $injector, substitute your own instances in place of the $injector call.
$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter')
);
$weather = new Horde_Service_Weather_Metar($params);

// Current TAF
$forecast = $weather->getForecast('KPHL');
foreach ($forecast as $period) {
    $humidity = $period->humidity;
    // etc...
}

Local or Remote

Horde_Service_Weather_Metar supports obtaining the data from a remote HTTP service or from a local file, for maximum flexibility. By default, it uses the NOAA URLs. If you want to use a different service, you can provide the paths in the constructor's parameters.

$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter'),
    'metar_path' => 'http://example.com/metar',
    'taf_path' => 'http://example.com/taf'
);
$weather = new Horde_Service_Weather_Metar($params);

Or, if you already sync weather data or have your own weather observation system, you can point to a local file:

$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter'),
    'metar_path' => '/var/weather/metar',
    'taf_path' => '/var/weather/taf'
);
$weather = new Horde_Service_Weather_Metar($params);

You can even mix and match the two - if you have a local METAR file but not a local TAF file for example.
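As a sketch of that mixed setup (the paths and URL below are illustrative, matching the earlier examples):

```php
<?php
// Sketch: a local METAR file combined with a remote TAF feed.
$params = array(
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => 3600,
    'db' => $injector->getInstance('Horde_Db_Adapter'),
    'metar_path' => '/var/weather/metar',   // local file
    'taf_path' => 'http://example.com/taf'  // remote service
);
$weather = new Horde_Service_Weather_Metar($params);
```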

Location Database

The PEAR library also had a script that would build a local database of airport weather reporting locations. This was ported to Horde_Service_Weather as a migration script. Running the migration in the normal way automatically downloads the latest airport data file and builds the necessary tables. See the horde-db-migrate tool for more information on running migrations.
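For example, a typical invocation might look like the following; the exact argument form is an assumption, so check horde-db-migrate's help output on your install:

```shell
# Run the Horde_Service_Weather migrations. This downloads the airport
# data file and creates the location tables. (Invocation shown is the
# usual pattern; verify against `horde-db-migrate --help`.)
horde-db-migrate Horde_Service_Weather
```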

Use of the database isn't mandatory; if the 'db' parameter is not passed to the constructor, it will not be used. However, the database is required for searching location names and performing autocompletion of location names. For example, the following code will search the location database for any locations (matching either the location name or the ICAO airport code) beginning with $location.

$weather = new Horde_Service_Weather_Metar(array(
    'cache' => $injector->getInstance('Horde_Cache'),
    'cache_lifetime' => $conf['weather']['params']['lifetime'],
    'http_client' => $injector->createInstance('Horde_Core_Factory_HttpClient')->create(),
    'db' => $injector->getInstance('Horde_Db_Adapter')
));

$locations = $weather->autocompleteLocation($location);

Portal Block

We have also improved Horde's METAR weather portal block by tweaking the layout a bit and by adding the ability to dynamically change the location using an ajax autocompleter - just like the existing weather block.

Why keep both weather blocks in Horde? As mentioned, METAR and TAF data are specialized data designed for aviation use. It doesn't fit neatly into a traditional weather forecast layout. It contains more detailed information and shorter forecast periods. Providing a specific block for this data allows the full data set to be represented without overly complicating the "normal" weather portal display.

Vagrant Images for Horde Testing

As a developer with the Horde Project, I spend most of my development time plugging away on our bleeding-edge code - code that lives on the master branch of our Git repository. However, debugging our code and testing fixes on the stable FRAMEWORK_5_2 branch presents issues. It's currently all but impossible to run this branch directly from a Git checkout, so it's often necessary to quickly set up a new test environment from our PEAR packages. In fact, even with our master branch, there is a multitude of different configurations, backends, PHP versions, etc. that need testing.

I've found that what works best for my workflow is using Vagrant images to quickly bring up new VMs with specific configurations for testing. Since I've accumulated a good number of Vagrant images, I thought it would be a good idea to publish them on my personal GitHub account. These images all create functional Horde installs running with various backends and PHP versions.

There are images based on current Git master and images based on current stable. There is even an image that downloads and compiles current PHP master (currently 7.1-dev) for testing. Thanks to both Michael Slusarz for the original Vagrant image I based these on, and to Jan Schneider for cleaning up the Vagrant configuration.

The images can be found at https://github.com/mrubinsk/horde-dev-vagrant.
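Getting started follows the usual Vagrant workflow; the image directory name below is a placeholder, so substitute whichever image you want to test:

```shell
# Clone the repository and bring up one of the images.
git clone https://github.com/mrubinsk/horde-dev-vagrant.git
cd horde-dev-vagrant/<image-directory>   # pick the image you need
vagrant up                                # provision and boot the VM
vagrant ssh                               # log in to the running instance
```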

Final note of warning: These are meant to be throw-away instances for testing and development use. Of course, it wouldn't be too hard to change the configurations to be more appropriate for production use.

Shared SQL Authentication with Horde and Dovecot Part 2

In part 1 of this series, we saw how to configure Dovecot to use a simple SQL table as the user and passwd database, and that it was easy to reuse the existing shadow passwords. This part will focus on setting up Postfix to use Dovecot's user information to determine where to deliver incoming mail, and on configuring Horde to authenticate against the same data. This will also allow Horde to manage your mail users.

On the Postfix side, it requires just a few changes in main.cf:

# Since we are using ONLY virtual domain accounts, mydestination should be localhost.
# The domain will be handled by the virtual configuration.
mydestination = localhost


# Tell postfix where to find the virtual mailboxes:
virtual_mailbox_base = /var/vmail

# Tell it what domains are virtual (we only have one so no need for a map)
virtual_mailbox_domains = example.com

# Tell Postfix where to find the user/mailbox map. In this case, we point to a configuration file
# for a mysql based map.
virtual_mailbox_maps = mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf

# The same uid settings as in Dovecot:
virtual_minimum_uid = 150
virtual_uid_maps = static:150
virtual_gid_maps = static:8

# Also, be sure you have this to tell Postfix where to find dovecot's authentication socket.
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth

Now, for the mysql_virtual_mailbox_maps.cf file:

user = vmail
password = dbpassword
hosts = 127.0.0.1
dbname = mail

# Again, since we are only hosting a single domain, we can hard code it
# in the query for simplicity. %u is the user part of the incoming address
# (username@example.com), which matches the bare usernames stored in the
# table. Ending the result in a '/' signifies that it is Maildir format.
query = SELECT 'example.com/%u/' FROM mailbox WHERE uid = '%u'

That's all there is to it. Postfix will now deliver incoming email to the appropriate user's inbox.
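You can sanity-check the map before relying on it, using Postfix's postmap tool to perform a single lookup with the same configuration Postfix itself will use (the address below is illustrative):

```shell
# Query the mysql map directly. On success this prints the mailbox
# path for the given address, e.g. example.com/userone/
postmap -q userone@example.com mysql:/etc/postfix/mysql_virtual_mailbox_maps.cf
```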

At this point, we have a working email server using SQL auth. Now, let's get Horde configured to use it also. For this, head over to Horde's administration UI and select the main Horde configuration. Select the "Auth" tab. From here, select the "SQL Authentication with custom queries" driver. You will then be presented with fields for connecting to the database containing the data and for entering the various queries. For this example, we are using UNIX sockets to connect to the database. The following is the resulting section of the conf.php file. You can use the array keys to determine which fields they go in on the administrative UI.

$conf['auth']['params']['socket'] = '/var/run/mysqld/mysqld.sock';
$conf['auth']['params']['protocol'] = 'unix';
$conf['auth']['params']['username'] = 'vmail';
$conf['auth']['params']['password'] = 'dbpasswd';
$conf['auth']['params']['database'] = 'mail';
$conf['auth']['params']['query_auth'] = 'SELECT * FROM mailbox WHERE uid = \L AND pwd = \P';
$conf['auth']['params']['query_add'] = 'INSERT INTO mailbox (uid,pwd) VALUES (\L, \P)';
$conf['auth']['params']['query_getpw'] = 'SELECT pwd FROM mailbox WHERE uid = \L';
$conf['auth']['params']['query_update'] = 'UPDATE mailbox SET uid = \L, pwd = \P WHERE uid = \O';
$conf['auth']['params']['query_resetpassword'] = 'UPDATE mailbox SET pwd = \P WHERE uid = \L';
$conf['auth']['params']['query_remove'] = 'DELETE FROM mailbox WHERE uid = \L';
$conf['auth']['params']['query_list'] = 'SELECT uid FROM mailbox';
$conf['auth']['params']['query_exists'] = 'SELECT 1 FROM mailbox WHERE uid = \L';
$conf['auth']['params']['encryption'] = 'crypt-sha512';
$conf['auth']['params']['show_encryption'] = false;

There are a few things to take note of. First, as mentioned in the UI, \P, \L, and \O are (respectively) the already-encrypted password, the username, and the old username. Since we are using existing shadow passwords, the encryption is set to crypt-sha512. This is why we need both an authentication query and a password query: we need to load the stored password first to get the salt so we can verify the user-provided password. Also, the expansions are already quoted when they are made, so do not enclose \P, \L, or \O in quotes when entering the queries.
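To make the salt point concrete, here is a minimal sketch of the salted-crypt check this amounts to; the stored hash and candidate password are illustrative values:

```php
<?php
// The salt is embedded in the stored hash ($6$ marks SHA-512 crypt),
// so the stored hash must be fetched first (the query_getpw step) and
// passed as the salt argument when crypting the candidate password.
$stored = '$6$somesalt$...';            // value returned by query_getpw (illustrative)
$candidate = 'user-entered-password';

if (hash_equals($stored, crypt($candidate, $stored))) {
    // Password matches; the user is authenticated.
}
```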

The final part is to change IMP to use Horde's authentication data - in imp/config/backends.local.php:

<?php
$servers['imap']['hordeauth'] = true;

That's it. We now have a fully functional mail server, with Horde able to add/remove/edit the mail accounts, while end users can continue to use their existing passwords. As an added bonus, it is now trivial to set up the Horde password application, Passwd, to allow users to change their passwords.

Shared SQL Authentication with Horde and Dovecot Part 1

I recently had the opportunity to reconfigure a mail server for a client. This client wanted an existing Dovecot/Postfix setup to be moved to use Virtual Mailbox Domains and SQL authentication. The existing setup was a typical out of the box install utilizing system accounts as the mail user base. The driving factor in this was to be able to use Horde to not only authenticate against the mail server, but to also have Horde be able to manage mailbox users.

The requirements for this were pretty simple, so these steps are fairly simplistic. For starters, this server is only hosting a single domain, so there is no need to track different domains in Dovecot's virtual setup. Also, since these servers were already set up and functional, this article will skip steps like configuring TLS. I have done setups like this before, but not since the Horde 3 days, so it might be helpful for others to see what needed to be done.

This article will show how to setup the Dovecot portion of things. The next article will show Postfix, followed by Horde.

The first thing I did, since this was an existing mail system, was to install Horde 4 to be sure that any requirements for Horde were already met on the server - specifically, that Horde would have no problems communicating with the IMAP server. Next, it was time to configure Dovecot to use SQL maps for the mailboxes. There are a lot of HOWTOs out there about setting up Dovecot from scratch to do this, and most are overly complex for what was needed in this case. First, I created a mail database. Unlike in the other tutorials out there, I only had to create a single table. Since we are only hosting a single domain, and the location of the user mailboxes is easily calculable, this table only holds usernames and passwords:

CREATE TABLE `mailbox` (
  `uid` varchar(255) NOT NULL DEFAULT '',
  `pwd` varchar(255) NOT NULL DEFAULT '',
  PRIMARY KEY (`uid`)
);

Next, it's time to configure Dovecot. The things that needed to be changed in /etc/dovecot.conf:

# Put mailboxes in /var/vmail/{domain}/{username}
mail_location = maildir:/var/vmail/%d/%u

# Limit the uid/gid that can login
first_valid_uid = 150
last_valid_uid = 150


auth default {
# .
# .
  passdb sql {
    # Location of the SQL configuration
    args = /etc/dovecot/dovecot-sql.conf
  } 
  
  userdb sql {
    args = /etc/dovecot/dovecot-sql.conf
  }

  # It's possible to export the authentication interface to other programs:
  socket listen {
    master {
      # Master socket provides access to userdb information. It's typically
      # used to give Dovecot's local delivery agent access to userdb so it
      # can find mailbox locations.
      path = /var/run/dovecot/auth-master
      mode = 0600
      # Default user/group is the one who started dovecot-auth (root)
      user = vmail
      group = mail
    }
    client {
      # The client socket is generally safe to export to everyone. Typical use
      # is to export it to your SMTP server so it can do SMTP AUTH lookups
      # using it.
      #path = /var/run/dovecot/auth-client
      path = /var/spool/postfix/private/auth
      mode = 0660
      user = postfix
      group = postfix
    }
  }
}

These changes tell Dovecot where to find the configuration to use SQL for the user and passwd databases, and to export an authentication socket that Postfix can use. We will take care of both of these things later. First, we need to create the vmail user that we told the authentication socket to use and give it the uid that we specified. While we are at it, let's also create the directory to hold the virtual mailboxes.

useradd -r -u 150 -g mail -d /var/vmail -s /sbin/nologin -c "Virtual mailbox" vmail
mkdir /var/vmail
chmod 770 /var/vmail/
chown vmail:mail /var/vmail/

Now for /etc/dovecot-sql.conf. On some distros this file will already exist; we just need to tweak it for our situation.

#Database driver
driver = mysql

# Connect string for the database containing the mailbox table.
connect = host=localhost dbname=mail user=vmail password=thedbpasswd

# Since we want to migrate the existing users over from system accounts
# using shadow passwords, we use the CRYPT function.
default_pass_scheme = CRYPT

# The query needed to get the user/password. %n contains only the user part of user@example.com.
password_query = SELECT uid as user, pwd as password FROM mailbox WHERE uid = '%n'

# The user query. Since we are only hosting a single domain, it can be hardcoded here.
# This simplifies the DB table and queries. Notice we also always return a static uid and gid
# that match the vmail user we created. This causes the vmail system user to be the user 
# used to read the mailbox data.
user_query = SELECT '/var/vmail/example.com/%n' as home, 'maildir:/var/vmail/example.com/%n' as mail, 150 as uid, 8 as gid FROM mailbox WHERE uid = '%n'

Now, remember that we are moving existing Maildir accounts that currently use shadow passwords. This client's system only had a few existing accounts, so I just manually added the entries into the mailbox table we created above. The passwords were just copy/pasted from the system's shadow file:

INSERT INTO mailbox (uid, pwd) VALUES('userone', '$6$xxxxxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxx');

Take note of the password. It's a SHA-512 crypt hash (as indicated by the $6$ prefix). The salt is contained in the string itself. This will be important to know when configuring Horde later.

Now that the user accounts are set up in the table, we can copy any existing mailboxes over to the new location. In this case, the Maildir data lives in each user's ~/Maildir. This is fairly simple: copy the directory and change the ownership:

cp -r ~userone/Maildir /var/vmail/example.com/userone
cd /var/vmail/example.com
chown -R vmail:mail userone

At this point, the users can now access their mailboxes just as before. Nothing will look different from the user's point of view. In the next part, we will configure Postfix to know where to deliver incoming mail.