Wednesday, December 12, 2012

Find the nearest location by latitude and longitude in MySQL using the Haversine formula

If you need to find the nearest point of interest by latitude and longitude in MySQL, you can use the Haversine formula. Following is an example that assumes you have a table named "destinations" which contains, amongst other things, the latitude and longitude of those destinations. A real-world example would be a table listing hotels and their lat/long coordinates, which would enable one to write a query returning the hotels nearest to a given set of lat/long coordinates. (The distance provided is in miles. To get the distance in kilometers, replace 3959 with 6371.)

SET @my_lat = 35.229205 ; -- the source latitude
SET @my_lon = -114.26811 ; -- the source longitude

SELECT *,
  ( 3959 * acos( cos( radians( @my_lat ) )
      * cos( radians( destinations.latitude ) )
      * cos( radians( destinations.longitude ) - radians( @my_lon ) )
      + sin( radians( @my_lat ) )
      * sin( radians( destinations.latitude ) ) ) ) AS distance
-- table containing targets to compare distance
FROM destinations
ORDER BY distance LIMIT 1 ;

Full example at SQL Fiddle --->!2/abba1/4/0 ( PLEASE DONATE )

I recently needed to use this to find the official timezone for a city closest to a person based on their latitude, longitude coordinates in MySQL, so as not to have to depend on look-ups to a web service. For anyone interested, city-timezone data can be downloaded from the website and loaded as the 'destinations' table.


Tuesday, December 11, 2012

Load timezone data by city and country into MySQL

If you ever need to look up timezone data in your MySQL database, you know that you need to find a good source for that data and load it into your database.

This is how I did it:
  1. Download timezone information by cities from this site: (this example uses the CSV format - download that file)
  2. Unzip the file and note the location (full path) of the "cities.txt" file.
  3. Use the following SQL to create a table to hold the data:
    CREATE TABLE `timezone_cities` (
      `city` varchar(100) NOT NULL,
      `timezone` varchar(250) NOT NULL,
      `country` varchar(45) NOT NULL,
      `latitude` double NOT NULL,
      `longitude` double NOT NULL,
      `timezone_name` varchar(200) NOT NULL
    );
  4. Adapt and run the following SQL statement in MySQL to load the data:
    LOAD DATA LOCAL INFILE '/Users/nerd/Downloads/cities_csv/cities.txt'
    INTO TABLE timezone_cities ;

Make sure that when you run the LOAD DATA query, that there are no errors or warnings.
Note that I did not optimize the design of the table (field sizes) to match the data ( DIY ).
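If you would rather inspect or pre-process the file before loading it, the parse step can be sketched in Python. Note the assumptions: a tab delimiter and a column order matching the table above — verify both against your actual download:

```python
import csv

def parse_cities(lines, delimiter="\t"):
    """Parse city/timezone records into tuples matching the
    timezone_cities columns (delimiter and column order are assumptions
    here; check them against the downloaded file)."""
    rows = []
    for rec in csv.reader(lines, delimiter=delimiter):
        if len(rec) < 6:
            continue  # skip malformed or short lines
        city, tz, country, lat, lon, tz_name = rec[:6]
        rows.append((city, tz, country, float(lat), float(lon), tz_name))
    return rows
```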

Tuesday, November 20, 2012

Auto-Constructing DDL for a MySQL federated table from an existing table

For anyone looking for a tool to automatically generate a DDL statement for a federated table from the DDL statement of an existing table, here is the PHP script I use (note: this was only tested with MySQL 5.5).

Please see,512901,513020#msg-513020

 <?php
 // pass CLI args as key=value pairs into $_GET
 parse_str(implode('&', array_slice($argv, 1)), $_GET);
 $mysqli = new mysqli('target_host', 'login_id', 'password', 'db_name', 3306);
 if ($mysqli->connect_error) die("Connection error. Aborting\n");
 // this is the name of the server used in the CREATE SERVER statement
 $fedlink_server = $_GET['fedlink_server'] = "my_preconfigured_fedlink";
 $source_table = $_GET['source_table'];  // e.g. "accounts"
 $target_table = $_GET['target_table'];  // e.g. "fedlinked_accounts"
 if ((strlen($source_table) == 0) OR (strlen($target_table) == 0))
   die("Required values: source_table=?, target_table=?\n\n");
 echo "source_table=$source_table\n";
 echo "target_table=$target_table\n";
 $sql = "SHOW CREATE TABLE `$source_table`;";
 $mysqli_result = $mysqli->query($sql);
 if (!$mysqli_result) die("The query failed to return results\n");
 $row = $mysqli_result->fetch_all();
 $ddl = $row[0][1];
 $ddl = str_replace("''", "'", $ddl);
 $ddl_arr = preg_split("/\n/", $ddl);
 $ddl_end = $ddl_arr[sizeof($ddl_arr) - 1];
 $ddl_arr[0] = "CREATE TABLE `$target_table` (";
 // preserve the charset if the source table declared utf8
 $charset = (substr_count($ddl_end, "DEFAULT CHARSET=utf8") == 1)
     ? " CHARSET=utf8"
     : "";
 $ddl_arr[sizeof($ddl_arr) - 1]
   = ") ENGINE=FEDERATED $charset CONNECTION='$fedlink_server/$source_table';";
 echo join("\n", $ddl_arr);
Simply execute the script as follows (for example):

php -f the-script-in-a-file.php source_table=accounts target_table=linkedtable_account

NOTE: This script was written to suit MY needs and won't necessarily work for every situation.
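The string-rewriting core of the script is easy to port to other languages. Here is a rough Python sketch of the same transformation (the function name is mine, and it performs only the string manipulation, not the SHOW CREATE TABLE query):

```python
def federated_ddl(ddl, target_table, fedlink_server, source_table):
    """Rewrite the output of SHOW CREATE TABLE into FEDERATED-table DDL,
    mirroring the PHP script above: swap the table name on the first
    line and replace the engine/charset/connection on the last line."""
    lines = ddl.replace("''", "'").split("\n")
    lines[0] = f"CREATE TABLE `{target_table}` ("
    # preserve the charset if the source table declared utf8
    charset = " CHARSET=utf8" if "DEFAULT CHARSET=utf8" in lines[-1] else ""
    lines[-1] = (f") ENGINE=FEDERATED{charset} "
                 f"CONNECTION='{fedlink_server}/{source_table}';")
    return "\n".join(lines)
```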

Sunday, October 14, 2012

The Rise of Postgres

I have been advocating PostgreSQL for close to a decade now with little success. People just didn't seem to get it. It seems that it's finally catching on now. Various apps I use are now being deployed pro-PostgreSQL, even ones which in the past were pro-MySQL (Atlassian Confluence, for example). The tide is finally changing. Read the full article here, and another one here.

All I can say on this is that it's nice to be ahead of the curve, even when everyone thinks you're nuts. Unfortunately, that doesn't help you get hired in a place like Las Vegas.

Wednesday, September 26, 2012

TIP: How to save on web page hosting fees

As many of you know, web hosting companies charge a fee for hosting websites. Most small businesses just need an online brochure for their website. Nothing fancy, just a simple template to put images and text into. Such solutions usually cost about $5 per month. If a simple online brochure is all you need your website to be, then there's really no reason to spend that money. Did you know that Blogger can host your site at your own "www" URL and you won't have to pay for the hosting?
All you need to do after purchasing your domain, is:
  1. create a Blogger account for your business
  2. go to the settings for your account
  3. click "Add a custom domain", and follow the instructions

Probably the most complicated part for most people is that you will need to update the DNS settings for your domain so that web page requests to your "www" site point to your Blogger site. Don't worry about ads. You don't have to publish Google AdSense ads on your SMB site. I would imagine that many SMBs that use GoDaddy for hosting might be looking to have their site hosted elsewhere, after GoDaddy's service outage on Sept 10th.

Asterisk Dictate and the old Hangup Issue

I implemented a custom phone-based dictation solution using Asterisk PBX and the Dictate app, and noticed that anything in the dictation dial-plan after the Dictate command was never executed *if* the call was dropped or hung up by the caller. When researching the issue, I found various other references to people having this same problem:

  • (My solution could have saved someone $40K apparently!)

The most typical reason for adding other dial-plan entries after the Dictate call is to apply post-processing to the .raw files after the dictation call is complete. Note that ending the dictation in the normal manner, by using #,#, would exit the Dictate app gracefully, and therefore dial-plan entries after the Dictate call would be executed.

The Asterisk Dictate application has a bug [IMHO] which causes the Dictate application to continue after a caller has hung up the call or the call was dropped. The log below shows debug information where one can see that Asterisk receives the "SIP BYE" and proceeds to issue the "owner hangup", but immediately after that the log shows a warning: "WARNING[12065] file.c: Failed to write frame", followed by the Dictate app playing another sound file prompting to enter a new file name: "Playing 'dictate/enter_filename.ulaw'".
So it's evident that the Dictate app knows that the dictation has stopped, because it prompts for a new file name for a new dictation; however, it does not detect from Asterisk that the call has ended, which is why it prompts for a new dictation file name at all.

Log output:

 <!-- the call hang-up is detected by Asterisk here: -->  
 [2012-09-26 06:40:07] DEBUG[11441] chan_sip.c: **** Received BYE (8) - Command in SIP BYE  
 [2012-09-26 06:40:07] DEBUG[11441] netsock2.c: Splitting '' into...  
 [2012-09-26 06:40:07] DEBUG[11441] netsock2.c: '' and port '62724'.  
 [2012-09-26 06:40:07] DEBUG[11441] chan_sip.c: Setting SIP_ALREADYGONE on dialog B3L2U8Zl2YEmju4BEo0jwFk8nBTlKiwk  
 [2012-09-26 06:40:07] DEBUG[11441] res_rtp_asterisk.c: Setting RTCP address on RTP instance '0xb7704b18'  
 [2012-09-26 06:40:07] DEBUG[11441] chan_sip.c: Session timer stopped: 777 - B3L2U8Zl2YEmju4BEo0jwFk8nBTlKiwk  
 [2012-09-26 06:40:07] DEBUG[11441] chan_sip.c: Received bye, issuing owner hangup  
 [2012-09-26 06:40:07] DEBUG[11441] chan_sip.c: Trying to put 'SIP/2.0 200' onto UDP socket destined for  
 [2012-09-26 06:40:07] DEBUG[11452] manager.c: Examining event:  
 <!-- Dictate (I assume) tries to create a file handle for a new dictation here - even though the call has ended: -->  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Set channel SIP/999-00000004 to write format ulaw  
 [2012-09-26 06:40:07] WARNING[12065] file.c: Failed to write frame  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Scheduling timer at (0 requested / 0 actual) timer ticks per second  
 <!-- Dictate prompts for a new dictation filename to write data to here - even though the call has ended: -->  
 [2012-09-26 06:40:07] VERBOSE[12065] file.c: -- <SIP/999-00000004> Playing 'dictate/enter_filename.ulaw' (language 'en')  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Set channel SIP/999-00000004 to read format ulaw  
 [2012-09-26 06:40:07] DEBUG[12065] pbx.c: Extension _., priority 11 returned normally even though call was hung up  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Soft-Hanging up channel 'SIP/999-00000004'  
 <!-- Notice that Asterisk does answer on the hangup extension for the Dictate app after the call ends, -->  
 <!-- but I have noticed that adding dial-plan directives there (for post-processing) won't work either -->  
 [2012-09-26 06:40:07] DEBUG[12065] pbx.c: Launching 'Answer'  
 [2012-09-26 06:40:07] VERBOSE[12065] pbx.c: -- Executing [h@app-dictate-record-custom:1] Answer("SIP/999-00000004", "") in new stack  
 [2012-09-26 06:40:07] DEBUG[12065] pbx.c: Spawn extension (app-dictate-record-custom,h,1) exited non-zero on 'SIP/999-00000004'  
 [2012-09-26 06:40:07] VERBOSE[12065] pbx.c: == Spawn extension (app-dictate-record-custom, h, 1) exited non-zero on 'SIP/999-00000004'  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Scheduling timer at (0 requested / 0 actual) timer ticks per second  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Scheduling timer at (0 requested / 0 actual) timer ticks per second  
 [2012-09-26 06:40:07] DEBUG[12065] channel.c: Hanging up channel 'SIP/999-00000004'  
 [2012-09-26 06:40:07] DEBUG[12065] chan_sip.c: Hangup call SIP/999-00000004, SIP callid B3L2U8Zl2YEmju4BEo0jwFk8nBTlKiwk  
 [2012-09-26 06:40:07] DEBUG[12065] chan_sip.c: Updating call counter for incoming call  

Since the CDR is always correctly updated after the caller hangs up, I simply updated my own post-processing script to check the CDR records for dictations that are completed. Problem solved.
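For anyone taking the same CDR route, the selection step can be sketched like this (`lastapp` and `disposition` are standard Asterisk CDR field names, but check them against your own cdr table before relying on this):

```python
def completed_dictations(cdr_rows):
    """Select CDR rows for completed Dictate calls to post-process.
    cdr_rows is a list of dicts keyed by (assumed) CDR column names."""
    return [row for row in cdr_rows
            if row["lastapp"] == "Dictate"
            and row["disposition"] == "ANSWERED"]
```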

Thursday, September 20, 2012

Keep Unix password in sync with Atlassian Crucible password

I developed a little app stack using Atlassian Crucible and Postfix+Dovecot for mail. I'm not using LDAP or any other SSO solution, since that would be an overkill solution for the few users that will be using my app stack. To keep the users' Unix passwords (for mail access) in sync with Crucible, I simply need a utility to keep the passwords in sync. (BTW, Usermin would also be overkill.)

Since the users are already used to doing everything via a web GUI, I decided to keep the utility in a web page and use PHP for scripting the solution on the back-end.

There are 3 very simple parts to this solution. Firstly, we have a PHP function update the user's Confluence login via the SOAP service provided in Confluence, as follows:

 function change_confluence_password($user, $password, $new_password) {
   // login to the Confluence SOAP service
   $soapClient = new SoapClient("http://your-server.local/rpc/soap-axis/confluenceservice-v2?wsdl");
   try {
     $token = $soapClient->__soapCall("login", array('in0'=>$user, 'in1'=>$password));
     if (is_soap_fault($token)) { echo $token->faultstring; exit; }
   } catch (SoapFault $e) {
     echo $e->getMessage();
     return 0;
   }
   // change the password:
   // boolean changeMyPassword(String token, String oldPass, String newPass) - changes the current user's password
   $params = array('in0'=>$token, 'in1'=>$password, 'in2'=>$new_password);
   try {
     $result = $soapClient->__soapCall("changeMyPassword", $params);
   } catch (SoapFault $e) {
     echo $e->getMessage();
     return 0;
   }
   return $result;
 }

Next we have a PHP function to request a password change from my "rapchan" script. What is rapchan? rapchan is simply a little mechanism to Request A Password Change (rapchan). The solutions I have seen on the web for making a change to a password via PHP were not 100% satisfactory to me so I created a mechanism whereby a PHP script can request a password change and a cron job run as root can fulfill the request. Having the password change mechanism in the same PHP script as the request seems like a bad idea to me.

So how is the rapchan paradigm different? My solution involves having a script (PHP in this case) drop a special file in a special folder, which represents a request to have a password changed. For security purposes, that folder may not be readable by anyone except root; otherwise, requested password changes would be leaked. Furthermore, rapchan does not allow password change requests for root. No one in the sudo group can have their password changed via rapchan either, unless so configured (user beware).

How does it work? All the script (which requests a password change) needs to do, is drop a file into a certain folder. The filename must be formatted as [username][secret_file_suffix]. The "username" will be used to determine for what user we are requesting a password change. The contents of the file must contain a string on the first line which is what the new password should be.
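The file-naming convention above can be sketched in Python. The ".rap" suffix below is only an example value for the secret suffix, and the helper name is mine:

```python
import pathlib

def parse_request(path, secret_suffix=".rap"):
    """Parse one rapchan request file: the username comes from the
    filename, the new password from the first line of the contents.
    Returns None when the secret suffix does not match."""
    p = pathlib.Path(path)
    if not p.name.endswith(secret_suffix):
        return None
    user = p.name[:-len(secret_suffix)]
    password = p.read_text().splitlines()[0].strip()
    return user, password
```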

Some security considerations: The secret suffix is a small security measure. It requires that the calling script add the required suffix at the end of the file in order for the request to be honored. The more important security consideration is that the folder where the file is dropped into, belongs to the "rapchan" group; the requesting application must be in the "rapchan" group and the folder permissions must be 720. That way only applications executed as users in the rapchan group are allowed to actually request password changes by dropping requests in to that folder. The script which contains the configurations settings and executes the actual password change, must be owned by the root user. The last security measure for a PHP script, would be to use Zend Optimizer for some obfuscation of the PHP script so that the secret suffix is harder to divulge.

How do I set this up?
1) Create a group named rapchan
2) Create a folder owned by root:rapchan and set the permissions to 720
3) Drop the following script code into a file, which you schedule via cron to be executed as often as you need:
 #!/bin/bash
 ############# CONFIGURATION ##############
 QFOLDER="/etc/rapchan/queue"   # queue folder (owned root:rapchan, mode 720)
 SECRETSUFFIX=".rap"            # required filename suffix (example value)
 OKFORSUDO="no"                 # allow password changes for sudo users?
 ############## EXECUTION #################
 # get the permissions on the folder
 permissions_on_queue_folder="$(stat --format=%a $QFOLDER)"
 # check the queue for files to process
 for QFILE in $QFOLDER/*; do
   [ -e "$QFILE" ] || continue
   # make sure the folder has the proper permissions
   if [ "$permissions_on_queue_folder" != "720" ]; then
     # log the problem if it does not
     logger "rapchan error: skipping $QFILE. Queue folder permissions not 720"
   # make sure the file has the SECRETSUFFIX
   elif [[ $QFILE == *$SECRETSUFFIX ]]; then
     # get the username from the filepath
     USER="$(basename "$QFILE" "$SECRETSUFFIX")"
     # make sure there is a matching USER for the file, but not the root user
     if [ -n "$(getent passwd $USER)" ] && [ "$USER" != "root" ]; then
       # don't change passwords for sudo users unless allowed in config (above)
       if [ "$OKFORSUDO" == "no" ] && [[ "$(id $USER)" == *"(sudo)"* ]]; then
         logger "rapchan error: skipping $USER. User is in sudo group"
       else
         # change the password (first line of the file, trimmed)
         PASS="$(head -n 1 "$QFILE" | sed 's/ *$//g' | sed 's/^ *//g')"
         echo "$USER:$PASS" | /usr/sbin/chpasswd
       fi
     fi
   fi
   # remove the file from the queue
   rm -f "$QFILE"
 done

Finally, just make sure that whatever user your PHP script will be running under is in the rapchan group, and have your PHP script write the requested password change into the rapchan queue:
 function rapchan($user, $new_password) {
   file_put_contents("/etc/rapchan/queue/$user.rap", $new_password);
 }

There are some other obvious things you need to take care of, such as making sure that you don't change the Unix password unless the password change call to the Confluence web service was successful, but that level of detail is not covered here.


Sunday, September 16, 2012

OS X Mountain Lion Webserver SSL nightmare

I'm having an issue with the OS X web server that I'm trying to figure out with Apple. Apparently many other people have also been having SSL issues with it, since before Mountain Lion.

Read more:

Wednesday, August 22, 2012

Setting up a dial-plan for dictations in Asterisk

Setting up a dictation system in FreePBX is a snap ... IF you're familiar with Asterisk and FreePBX.
I spent 3 days toying around with it and finally set up an Asterisk-based system with no prior experience.

First though, let me give out due kudos to the guys over at for putting out a great product. Also check out their site for getting SIP trunking service. Since the creators of FreePBX are closely affiliated with it's no wonder that setting up your SIP account details into your FreePBX server is drop-dead-easy. Thank you guys!

So why was setting up an Asterisk dictation system so time consuming?

  • Well, for one thing, there is a lot of documentation to go through if you are a complete noob like me. I am a firm believer that one should read the documentation and ask questions if you're still stuck. The best book I have found to date for Asterisk is Practical Asterisk 1.4 and 1.6.
  • One of the opening quotes of that book states in part " much fun tinkering could be", which I whole-heartedly agree with. If you're a noob like I am, you're going to want to play with your FreePBX system for a few days to get a feel for all it can do.
  • For questions that you may post on various forums, don't always expect an accurate, fast, or courteous reply, but don't let that discourage you from posting questions either.
  • There are a lot of loose ends that one must string together in FreePBX to accomplish a complete dictation workflow. If you're new to the system, you're just going to have to get familiar with all the features and figure out what works best for your scenario. The reason there are so many loose ends is because it should be that way. There is no one-size-fits-all configuration for anything in Asterisk.
  • Also, I have noticed that when one asks for assistance on the forums with regard to using the Dictate feature, everyone seems to assume that the users are part of the local network. If you are setting up an Asterisk dictation system where the users are calling in from the PSTN, then you need to make that clear when you post your questions. It does make a difference when the users of your dictation system are not part of the local phone system.

That being said, my current solution simply involves having calls from an inbound route go to an IVR which I use as a "main menu" for the inbound calls and some other routing, until the user simply dials *34 to begin the dictation app. I would prefer if the dictation app started automatically, but I haven't quite figured that out yet.

The next tricky part is the configuration of the [app-dictate-record] dialplan. Using my routing model above, the ${AMPUSER} variable is null, and hence the default extension in that dialplan cannot work as provided by the FreePBX distro. The intention of the code as shipped is to have each dictation go into a folder named after the user's extension; if the user at extension 101 uses the dictation feature, their dictations go into folder 101. In my case, since the callers to my system come inbound over the PSTN, they are not authenticated as local users, hence ${AMPUSER} is null and all the dictations get dumped into the same folder. Also, the Dictate app asks the user to enter a file name, which is a really, really bad idea for my solution, because if a user reuses a filename, Dictate will overwrite the existing file - YIKES!
To solve all these issues, I overwrite the existing dictation dialplan with this one:

exten => *34,1,Answer
exten => *34,n,Macro(user-callerid,)
exten => *34,n,Noop(CallerID is ${AMPUSER})
exten => *34,n,Set(DICTENABLED=${DB(AMPUSER/${AMPUSER}/dictate/enabled)})
exten => *34,n,Set(CIDPRE=${CALLERID(name)})
exten => *34,n,Set(CIDPRE=${CIDPRE:0:2})
exten => *34,n,GotoIf($[$["x${DICTENABLED}"="x"]|$["x${DICTENABLED}"="xdisabled"]]?nodict:dictok)
exten => *34,n(nodict),Playback(feature-not-avail-line)
exten => *34,n,Hangup
exten => *34,n(dictok),Dictate(/var/lib/asterisk/sounds/dictate/${CIDPRE},${STRFTIME(${EPOCH},,%Y%m%d-%H:%M:%S)})
exten => *34,n,Macro(hangupcall,) 

I use the CIDPRE variable to capture the "CID Prefix" configured elsewhere in my routing model, and that corresponds with a specific user as selected through the IVR and I use an auto-generated date-time file name so that each filename will be unique. One of my main goals is to do as much of the configuring through the FreePBX GUI as possible. The code snippet above is the only thing I had to customize under the hood.

Done, now we're ready to begin testing!

This is what I eventually came up with:
 ;tmsoa custom 1.2
 exten => _.,1,Answer
 ;exten => _.,n,SayDigits(1.2)
 exten => _.,n,Macro(user-callerid,)
 exten => _.,n,Set(CIDPRE=${CALLERID(name)})          ;capture the CID info
 exten => _.,n,Set(CIDPRE=${CIDPRE:0:3})              ;get the first 3 characters of the CID prefix
 ;set a file name that matches the date + time, and then the paths
 exten => _.,n,Set(DICTFILENAME=${STRFTIME(${EPOCH},,%y%m%d-%H.%M.%S)})
 exten => _.,n,Set(DICTFILEPATH=/var/spool/asterisk/dictate)
 ;derive the raw/wav folders per CID prefix (this layout is my own convention)
 exten => _.,n,Set(DICTFILEPATHRAW=${DICTFILEPATH}/${CIDPRE}/raw)
 exten => _.,n,Set(DICTFILEPATHWAV=${DICTFILEPATH}/${CIDPRE}/wav)
 ;make sure the audio file directories exist
 exten => _.,n,System(mkdir -p ${DICTFILEPATHRAW})
 exten => _.,n,System(mkdir -p ${DICTFILEPATHWAV})
 exten => _.,n,System(mkdir -p ${DICTFILEPATHRAW}/.archive)
 ;authenticate with VM pwd in MB matching CID prefix
 exten => _.,n,VMAuthenticate(${CIDPRE}@default)
 ;record dictation to folder matching CID prefix with the file name in date-time format
 exten => _.,n,Dictate(${DICTFILEPATHRAW},${DICTFILENAME})

I use the "SayDigits(1.2)" on development hosts to indicate the version number of the dial plan I am working on. This gets commented out before deployment.

Tuesday, August 21, 2012

Convert Asterisk dictation .raw files to .wav

I recently implemented an Asterisk solution and made use of the Dictate application, which creates .raw audio files of the dictations. These files need to be encoded to WAV format, even after specifying in FreePBX that the output of dictated files should be WAV. When researching how to do the conversion, I found various examples of using the sox app to convert these files; however, none of them worked exactly as documented. Some documentation was incomplete, some was outdated, and often the forum threads which raised the question went unanswered.

For the version of FreePBX which I am using, I found the following command to work:
sox -t auto -w -s -r 8000 -c 1 {input_file}  {output_file}

An excellent way to increase the volume is to add the -v option with a multiplier to increase the volume by. This is especially helpful in our case, since we have a few doctors that like to dictate very softly.
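For batch conversions, it can help to build the sox command programmatically. Below is a hedged Python sketch; it uses the modern sox spellings `-b 16 -e signed` in place of the older `-w -s` shown above, so verify the flags against your sox version:

```python
def sox_command(raw_path, wav_path, volume=None):
    """Build a sox argument list converting one Asterisk Dictate .raw
    file (8 kHz, mono, 16-bit signed) to .wav, optionally boosting
    volume with -v. Run the result with subprocess.run(cmd, check=True)."""
    cmd = ["sox", "-t", "raw", "-r", "8000", "-e", "signed",
           "-b", "16", "-c", "1"]
    if volume is not None:
        cmd += ["-v", str(volume)]  # input volume multiplier
    cmd += [raw_path, wav_path]
    return cmd
```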

I hope this information is helpful to anyone else that has run into this issue.

Wednesday, August 8, 2012

To bubble exceptions or not to bubble exceptions. That is the question.

The phrase "exceptions must be truly exceptional" is thrown around a lot, but I find that many developers don't fully understand what that actually means or how to apply it. They only think they do.

For example, I had a developer argue that an invalid date format is not truly an exception, that it is "simply a MINOR user data entry issue, which could happen often", therefore there was no acceptable case for bubbling, or even throwing an exception for that. He said that having a service go down is a real example of an exception, because that would be a BIG DEAL, and that it rarely occurred.

His distinction was that a web service going down is "rare and exceptional" (true in our case, but it has happened more than once) and that it was a "BIG DEAL", hence the cost of bubbling an exception was acceptable because one would rarely have to pay that price, but at least you're paying the price for something that's EXCEPTIONALLY important and that a user data entry problem is not an exceptionally important problem in the system.
That kind of answer tells me that the developer defines "exceptional" only as something that's SUPERIOR, and hence in his mind the issue of a web service going down is superior in importance to a data entry issue. The developer was ignoring the fact that "exceptional" primarily means "rare".

I had him investigate our logs to see how often an invalid date format issue made it through all the layers of our app, and he found that it had happened only once, and it never made it past the first service layer.

Now consider the real "expense" between a service going down and a user data entry issue:
The aforementioned service which had an issue that had caused an exception to bubble was 4 layers down in the app stack, and it occurred for hundreds of transactions in the app because of all the users making requests to that area of the app all at the same time. That means that we had hundreds of exceptions bubbling through to the UI, through 4 layers of the app. Also, this happened at least twice in a twelve month period.

In the case of the invalid date issue, this occurred because the input validation did not work in the web app (because the user had JavaScript turned off). The next layer of the app received the bad data and, according to my policy, went no further, but instead bubbled up the exception (just one layer up to the UI). Only IF the first service layer missed that the data provided by the user was invalid would it ever have gotten to the 2nd layer, which would have to throw an exception to bubble up. Since it was very RARE for invalid data entry to get past the UI, and it had never before gotten past the first service layer, it was clear that the data entry error was more RARE than the issue of the web service going down; hence, in terms of defining exceptional as RARE, the data entry issue was more exceptional than the web service going down. In fact, this is true for ANY data entry errors in our apps (as it should be for anyone's apps).

So exceptions bubbled by a service going down occurred with much greater frequency, but it’s also important to note that it went through more layers of the app compared with the data entry error. Hence in terms of actual resource EXPENSE, the magnitude of resource cost on our servers for throwing exceptions for data entry errors, is entirely NEGLIGIBLE in this case when compared with the resource expense incurred for a web service going down.
His rebuttal was that an isolated data entry error is negligible compared to the IMPACT of a critical service not being available at all. His statement is true and irrefutable; however, that is not the point. In fact, that's the wrong way to look at the issue entirely.

Good quality standards must also dictate that at no point should a developer minimize the importance of the elementary components of an application (such as data entry), simply because their related issues are "low impact". When one minimizes the importance of elementary components such as data entry validation simply because they are "low impact", you end up with COMMON issues in your application for which you would never throw exceptions, because now you treat them as "common" and "low-impact". Essentially you'll end up with spaghetti code where you're coding to mitigate unexceptional issues wherever you might encounter them. This approach is unintuitive and highly counterproductive.

To me, anything which fails a basic quality standard must be treated with priority, because if you can’t even maintain good standards for the basics, then your whole app is rubbish. Therefore, I would argue that everything should be treated as an exception, simply because problems should be EQUALLY RARE at every level, and when any exception is rare, then expense is not an issue.

If someone you know is creating apps that have so many issues that they have to start worrying about the expense of all the exceptions all over the place in all the applications various layers, then it’s probably time to suggest they scrap their app and start over.

My point is this: if you treat even minor, common issues in your applications as exceptionally important then they will become exceptionally rare, in which case it would be valid and inexpensive to treat them as exceptions.
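To make the point concrete, here is a minimal Python sketch of that policy: the first layer validates and raises immediately on bad input, so deeper layers can assume clean data. The function and date format are illustrative only, not from any app discussed above:

```python
from datetime import datetime

def parse_entry_date(raw):
    """First-layer validation: an invalid date format is treated as an
    exception. Because the UI normally validates first, reaching this
    branch should be rare, so raising here costs effectively nothing."""
    try:
        return datetime.strptime(raw, "%Y-%m-%d")
    except ValueError as exc:
        # bubble one layer up to the UI, per the policy described above
        raise ValueError(f"invalid date format: {raw!r}") from exc
```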

Monday, July 30, 2012

Passing variables to SubReports in Telerik Reporting

Any time you have created a complex report with embedded sub-reports, you might have faced the need to pass a variable from data in the master report to a sub-report. I have seen this problem solved various ways, with differing degrees of unnecessary complexity.

The biggest mistake made in all the code I have seen is to assume that one must pass the variable through existing mechanisms provided by the Telerik Report class. I think this reply from Steve @ Telerik sums it up best: "Telerik Reports are standard .NET classes so you can abstract yourself from the fact that you're working with reports and achieve your requirement". In other words, use .NET and standard OOP principles without relying on being spoon-fed every feature your project requires.

One of the more interesting solutions I have seen implemented was this one, which was probably not intended as a solution for passing parameters from reports to sub-reports, but which the developer managed to get working anyhow.

Basically, the common approach people use is to pass the variable using the report parameters collection, which, while not wrong, simply takes more code than is necessary.

I find that the most graceful solution is possible when one does what Steve recommends: abstract yourself from the fact that you're working with reports and remember that you're working with standard .NET classes. So what would you do if you needed to get a variable into an instantiated class? Well, you could give the class a property, but even then you have to instantiate the class and then set the property. So the other solution is to add the variable as a parameter to the class constructor. "But when I create a report it provides the following code, where there is no parameter for my variable!" ... I have been told. What the developer was talking about was this constructor, created when you add a Telerik Report to a C# project:

        public AccountAllocationsSub()
        {
            InitializeComponent();
        }

True, that constructor has no parameters, but if you understand that you are just working with a standard .NET class, then there's no reason you can't create your own constructor which takes the parameter you want to pass. Consider this constructor:

       public AccountAllocationsSub(int transactionKey)
       {
            InitializeComponent(); // designer-generated setup
            this.sqlDataSource1.ConnectionString = SqlConnectionString;
            this.sqlDataSource1.Parameters["@TransactionKey"].Value = transactionKey;
            DataSource = sqlDataSource1;
       }

Now all you need to do is get the data from the master report when it's binding to the controls, and invoke your sub report with your custom constructor:

     private void TransactionGroupHeader_ItemDataBinding_1(object sender, EventArgs e)
     {
            Telerik.Reporting.Processing.GroupSection section =
                (sender as Telerik.Reporting.Processing.GroupSection);
            int transactionKey = 0;
            string strTransactionKey = section.DataObject["TransactionKey"].ToString();
            int.TryParse(strTransactionKey, out transactionKey);
            subReport1.ReportSource = new AccountAllocationsSub(transactionKey);
     }

The biggest advantage I have found with this method is that when I use the preview tab, it calls the standard parameterless constructor, which I can have populate the report based on the properties I set for the data object at design time, so the report previews nicely.
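To make the two-constructor pattern concrete, here is a hedged sketch of how the two constructors could live side by side (the class, data-source, and parameter names follow the example above; chaining the preview constructor to a made-up sample key like 12345 is just one illustrative way to wire it up):

```csharp
public partial class AccountAllocationsSub : Telerik.Reporting.Report
{
    // Parameterless constructor: invoked by the Visual Studio previewer.
    // Here it delegates to the parameterized constructor with a sample
    // key (12345 is an illustrative design-time value) so the preview
    // tab renders real-looking data.
    public AccountAllocationsSub() : this(12345)
    {
    }

    // Parameterized constructor: invoked by the master report at run time.
    public AccountAllocationsSub(int transactionKey)
    {
        InitializeComponent();
        this.sqlDataSource1.ConnectionString = SqlConnectionString;
        this.sqlDataSource1.Parameters["@TransactionKey"].Value = transactionKey;
        DataSource = sqlDataSource1;
    }
}
```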

Then at run time, when the sub-report is invoked from the master report, the custom constructor sets the data source based on the variable passed in from the master report.

With the solution developed previously by another developer on our team, there was no way to preview the data on the report. In fact, following the code from the sample he used, the data source was specifically neutered unless one was viewing the report at runtime. This was the code in the standard ctor (which is called by the previewer):

public MyTelerikMasterReport()
{
    // Required for Telerik Reporting designer support
    this.DataSource = null;
    if (System.ComponentModel.LicenseManager.UsageMode ==
        System.ComponentModel.LicenseUsageMode.Runtime)
    {
        this.SqlDataSource1.ConnectionString = ...etc
    }
}

Thursday, July 26, 2012

Getting control of your Telerik Reports

Recently I took over a reporting project from another developer. We basically had multiple reports that had to be ported to Telerik Reports (ver 2011 Q3). Upon inspecting what he had created, I noticed the following problems throughout the project:

  1. He implemented 4 different ways of connecting to the DB instead of one standardized method for all reports [no, there was no good reason for doing what he did, one standardized method would work for all in our situation].
  2. He implemented 3 different ways of retrieving data, using 3 different types of objects, instead of standardizing. One method used a standard .Net data object that would attempt to lock all the records in the DB while the report was loading, which created major headaches for reports that took a while to generate.
  3. He used in-line SQL in the DLL - yes, I know, there are different schools of thought on this subject, but my team has a specific policy that dictates that developers' applications only use stored procedures, and this is considered good practice in many circles.
  4. He created a query for each field in the report header that had to retrieve common header data.
  5. In every report, he had code for retrieving report footer data for stuffing into a text box in the footer of the report.
  6. He had no standard convention for naming parameters which were used in most reports. For example, most reports would require a "customerId" parameter, yet the variable would be coded as any one of the following (as an example): cust / customerID / custId / cusomterAccountKey / accountKey / customerAccountNumber -- I guess some people like variety.
To mitigate some of these issues I simply created a class named "StandardFormatReport" which inherits from Telerik.Reporting.Report:
public class StandardFormatReport : Telerik.Reporting.Report

To mitigate the first issue, I added a property to the StandardFormatReport which would return the one and only connection string the app is supposed to use according to the one and only approved method for retrieving connection strings:

        protected string SqlConnectionString
        {
            get { return Conf.GetDBConnectionString(); }
        }

To make it easy to audit how each report sets its data source, I added the following virtual function, which the developer must override in his report:

        protected virtual void SetDataSource()
        {
            throw new System.Exception("The report class which inherits from RRFS.Reports.StandardFormatReport must override SetDataSource()");
        }

All I would need to do in a code review process is look for where the developer would override the function to set the datasource, and make sure that he was connecting to the database correctly:

        protected override void SetDataSource()
        {
            this.sqlDataSource1.ConnectionString = this.SqlConnectionString;
        }

Lastly, the SetDataSource function is the logical place to set the report's parameters, which the developer had named every different way under the sun using strings. This drives me nuts, not just because of the lack of consistency, but also because the developer could easily typo the param name and the report would not work. I deplore using strings where constants should be used instead. To remedy this, I added the necessary constants, which define the standardized way to name params, to the StandardFormatReport class:

        protected static class StandardParams
        {
            public const string CollectionAccountKey = "@CustomerAccountId";
            public const string AssociationKey = "@CompanyId";
        }

Thus instead of the developer using the following code:

        protected override void SetDataSource()
        {
            this.sqlDataSource1.ConnectionString = this.SqlConnectionString;
            this.sqlDataSource1.Parameters.AddRange(new Telerik.Reporting.SqlDataSourceParameter[] {
                new Telerik.Reporting.SqlDataSourceParameter("@CollectionAcountKey", System.Data.DbType.Int32, _collectionKey)});
            this.DataSource = sqlDataSource1;
        }

[notice, I deliberately misspelled the parameter in this case, because it highlights the typo issue]

... he would have to use the following code:

        protected override void SetDataSource()
        {
            this.sqlDataSource1.ConnectionString = this.SqlConnectionString;
            this.sqlDataSource1.Parameters.AddRange(new Telerik.Reporting.SqlDataSourceParameter[] {
                new Telerik.Reporting.SqlDataSourceParameter(StandardParams.CollectionAccountKey, System.Data.DbType.Int32, _collectionKey)});
            this.DataSource = sqlDataSource1;
        }

You can't typo that and still get your app to compile.

Finally, for all the instances of reports that had certain repeated fields that the developer had coded into each report, I added the following function to the StandardFormatReport class:

        public void SetStandardFields()
        {
            Telerik.Reporting.TextBox tb = null;

            tb = FindTextBox("txtFooterDate");
            // Note: "MM" is the month specifier; lower-case "mm" would print minutes.
            if (tb != null) tb.Value = "Information as of " + DateTime.Now.ToString("MM/dd/yy");

            tb = FindTextBox("txtFooterNote");
            if (tb != null) tb.Value = GlobalFunctions.GetFooterNote();

            tb = FindTextBox("txtFooterAddress");
            if (tb != null) tb.Value = GlobalFunctions.GetFooterAddress();
        }

        // Items.Find returns an empty array when no item matches, so guard
        // against indexing [0] on an empty result.
        private Telerik.Reporting.TextBox FindTextBox(string name)
        {
            Telerik.Reporting.ReportItemBase[] found = this.Items.Find(name, true);
            return found.Length > 0 ? found[0] as Telerik.Reporting.TextBox : null;
        }

Then in the ctor for the StandardFormatReport, I simply add the call to SetStandardFields; if the text boxes exist with the correct names, they will be set, thus eliminating the need to have this coded in every single report -- standard DRY compliance.
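For what it's worth, here is a minimal sketch of that constructor, with one assumption worth flagging: in C#, a base-class constructor runs before the derived report's InitializeComponent(), so if the footer text boxes are created there, the call may instead need to be made from each derived report's constructor, after InitializeComponent():

```csharp
public StandardFormatReport()
{
    // Populates txtFooterDate / txtFooterNote / txtFooterAddress,
    // if the derived report contains text boxes with those names.
    SetStandardFields();
}
```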

There is more that one could do to put standard functionality into a class that derives from Telerik.Reporting.Report and enforce standardization through everyday OO programming, but I won't go into detail. The point is this: just because the Report Wizard creates a Telerik.Reporting.Report class for you in Visual Studio, it does not mean that you have to use it as is. It's still basically a C# class, subject to all the benefits that come with the OOP features C# offers. Since most reports for business applications need common functionality, it only makes sense to put all that functionality into one custom class that derives from Telerik.Reporting.Report, then have your developers create their reports by deriving from your custom class.

I would argue that if you have a large reporting project where just about every report needs typical elements (a common logo, same size, position, etc.), and you're repeating that code per report instead of inheriting from a class that already has it, then you're doing it wrong.

Monday, June 18, 2012

So you think you know Software Engineering?

Someone emailed me this. And I have to say I agree with it:
Common problems in the business world (from what I have experienced):
*everyone* thinks they are a developer/software designer
*everyone* thinks they could run IT and the software staff (and thinks they should!)
*everyone* thinks that innovative solutions can be scheduled, and thus can have predictable end dates.

I have to add to that however, that in my experience, none of these people can even do job #1, which is to say, document properly.

Thursday, May 31, 2012

Pervasive DI Update for version

The memory leaks which I reported months back are finally fixed in this release. Now we actually have something usable.

Apparently there were 2 separate memory leaks: one in Java code and another in native C code for the DJImport object. The Java memory leak was in the XML-DMS connector, which is what's used when you're pulling data from an XML data source.

Basically, what all this means is that if you had a project prior to this release that iterated over many XML files to read data from them, and if you were doing a couple of lookups using the DJImport object while processing the map that read the data from the XML files, you would have run into trouble.

During development, we were processing about 5,000 files, about 250 KB each, through the XML-DMS connector in a map. The map itself would use the DJImport object 7 different times per transformation. So our job was using the DJImport object 35,000 times in the main processing map and a few more times elsewhere.

The first run of the job takes about 70 to 80 minutes depending on resource conditions. Subsequent runs of the job take about 30 to 40 minutes because the map skips XML files it has already processed. After running the job just once, we would notice that 2GB or more of system RAM would be used up and not released. Running the job again would take up another 2GB+ of RAM, and so on, until the server ran out of RAM altogether. This is no longer the case after this release: once the job completes, the entire block of RAM that the job was using is returned to the OS.

There are still other bugs that I reported but received no feedback on, but at least we have something we can push to production now.

Monday, April 23, 2012

Ongoing Woes with Pervasive Data Integrator 10

UPDATE: 2012-05-08

Pervasive tracked me down to where I work (they have even been inspecting my LinkedIn profile) and complained to the company I work for expecting them to lean on me to remove my blog. Apparently they don't like that I have been blogging about the technical defects in their product and that I find their support to be sluggish (save for the fact that they have at least 1 great support person I was eventually able to track down, that's been very sympathetic to our Pervasive Data Integrator issues).

Fortunately I am employed by a magnificent company, run by a benevolent and just CEO who believes in and respects freedom of speech (provided it's fair and based on accurate data). Since everything I've blogged about these Pervasive issues is substantiated by emails and documentation I have no reason to remove my blog. The information is accurate as of when the blog was first posted. To be fair, I have also gone back and noted some things they have fixed.

That being said, I did however remove a non-technical post which I used to have on this page. It was simply a non-technical blog about me ranting over my horrendous support experiences with Pervasive. Out of deep respect for my employer, I will keep my posts about Pervasive's Data Integrator on technical matters.

Tuesday, April 10, 2012

Pervasive DI Update for version

Although not officially released yet, we got early access to version which is supposed to resolve a grave error in XML schema validation. Unfortunately that version does not resolve the bugs they said it resolves. Mind you, they gave us this version specifically because it was supposed to fix the bug.

To make matters worse, I'm finding that there are even more bugs. For example, you can't use the built-in LogMessage function from within your own UDF. You won't get any errors; it just silently fails without logging anything.

With this new version also, we have found that half the time, downloading process logs will give you some HTML mark-up (for wrapping some Flash code) instead of the actual logs you're trying to get to. As a result we've had to come up with yet another kludge - write our own logging mechanism.

What did they fix since the last time I blogged about this broken product? Well, you can actually get back to a running job in the Integration Manager and stop it if it's currently running. Unfortunately you still can't control jobs in the queue, i.e. there is no way to remove an individual job that has been queued. Not even removing it from the schedule will remove it from the queue. To make that issue worse, if you need to run a job immediately, it will be placed in the queue behind the jobs which have already been queued - regardless of when those other jobs in the queue are scheduled to run. So imagine this scenario: we have a job scheduled to run at 6PM and another at 9PM. Someone in the company decides that they want to run a 3rd job, normally scheduled for the end of the month, ... now. No harm will come from running it early; we just want to run it earlier than normal. So we click the "Run the integration immediately" button and expect it to run immediately, right?

Nope. Instead, for some odd reason, it queues the job. So I have to tell the supervisor who requested the ETL to hang tight till tomorrow, because PDI decided that "immediately" means "queued for later".

Tuesday, March 27, 2012

The Pervasive Data Integration Server is a Java server app which by default uses only 64 MB of memory. Increasing that value should resolve the 'Java Heap Space' errors that we have been getting from PDI. To implement this setting one must follow these steps:

  1. Locate the PDI installation folder and within that, a folder named "runtime"
  2. Locate a file within the "runtime" folder named "cosmos.ini"
  3. Open the "cosmos.ini" file in a text editor and locate a section named [UserInfo]
  4. Change the default memory allocation by adding a line under the [UserInfo] section such as "JvmArgs=-Xmx256m" (or some other valid memory value)
[UPDATE 5/18/12, the original tech provided us the wrong information. The correct information is]:
  1. Locate the PDI installation folder and within that folder is a folder named "etc" which you need to navigate to.
  2. Locate a text file named "com.pervasive.execution.worker.cfg" and open it with a text editor. There you should find the following line: ""
  3. Remove the hash (#) and add your own -Xmx JVM arg there.
Note: This will affect all your worker processes, so take care: if you have a host with 8GB of RAM and you set -Xmx to 4GB, each job you run can use up to 4GB.
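For reference, the -Xmx argument itself is a standard JVM flag rather than anything PDI-specific; the values below are purely illustrative:

```
-Xmx256m    maximum heap of 256 MB
-Xmx2g      maximum heap of 2 GB
```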

Implementing this setting may not be effective for your situation. After trying this fix, we have found that we are still plagued by JVM memory leaks in our main PDI project.

More general info about increasing memory for Java using Xmx parameter can be found online.

Thursday, March 22, 2012

Pervasive Data Integrator 10 Review

What can I say about PDI? So far it has been a complete disappointment. Pervasive has certainly made a huge step forward in embracing a web-based solution, in an RIA design using Adobe Flash and all the bells and whistles it has to offer. This comes at the expense of having an application with significant bugs in Chrome and Internet Explorer. When we received our training, the instructor not only faithfully stuck to using Firefox, but whenever we experienced significant bugs in the UI, his retort was simply that he was not experiencing any while using Firefox (hint, hint, use Firefox, OK?).

I found that the product was shipped with incomplete, inaccurate or out-of-date documentation. For example, their documentation states that when executing shell commands from EZ Script you simply use the Shell() function (as documented), however "If you run a .bat file, you cannot use the following syntax: Shell("file.bat"). Instead, use the following: Shell("cmd /c file.bat")". In fact the documentation completely leaves out the fact that even if you are not running a .bat file, you are still required to use "cmd /c" if you are using special characters such as "|" (pipe) to redirect stdout. That one cost me a few hours until I figured out on my own that, contrary to their documentation, "cmd /c" was probably needed in other situations also. A follow-up from Pervasive's tech support confirmed this. This may seem minor; however, when you're a complete noob to PDI 10, the documentation and its accuracy are all you have to rely on. Small issues can quickly become time burners.

Then we started building our projects and ran into some irritating bugs. For example, in the macro administration screen, when you define a macro that uses two or more single quotes in its value, you will notice that when you save the macro, it will automatically delete half your single quotes. Click save again and it will do the same thing, until you are left with just one solitary single quote. Obviously the internal mechanism which saves the macro value is not scrubbing for single quotes, which are used in SQL to delimit text. Tech support told us it was because we were running the server as an administrator, which is obviously not the issue. We were not running it with an admin account.

Schemas are not properly processed. We have a schema that has an element's cardinality set as 0 to unbounded. When that node has no data in the XML returned, it deserializes just fine using .Net WCF, but when PDI encounters the same XML it complains that it cannot find any instances of the element defined in the schema as having a minOccurs of 0 - well, duh! [UPDATE: This bug was fixed]

Then there are design flaws. Let's talk about job control. Going to the Integration Manager, you can see how many jobs are queued. Unfortunately it won't tell you what jobs are queued. It will tell you there are jobs running, but can't tell you accurately what jobs. Forget about having the ability to stop running jobs or remove them from the queue. Server Utilization in one place will show 100% while at the same time in another place it could show 65%. Manually run and then stop a job (if you are smart enough not to hose yourself by closing the browser or clicking away to another tab in the app [UPDATE: This bug was fixed]) from the Integration Manager and it will erroneously report that the job failed.

So imagine I'm in a situation where an exec wants to see a specific job run. I have to suspend all scheduled jobs, shut down the PDI server service and start it back up. Then I go to an Integration job and click the run button, only to notice that PDI is currently reporting that there are "3 Running" jobs right now. (a) I suspended all jobs and restarted the service; (b) we only have 1 execution engine in our license -- how can there be "3 Running" jobs?? The fact is I can't stop whatever these 3 jobs are and run the process the exec wants run right now. Suspending all processes and restarting the server service does nothing to stop these 3 jobs from running. Undeterred, I repeat the process: shut down the service and start it back up again, only now it reports there are 4 jobs running. I still don't believe the dashboard. I run my process, knowing that since there's only 1 exec engine license, my process will run only if in fact there aren't any other processes running - and behold, that is the case; my process ran because in fact there are no other jobs running. The report of "4 Running" jobs is erroneous. [UPDATE: This bug was fixed somewhere in the subsequent releases we received]

Once we got through all the gotchas and traps and finally had a working project, we found that we could only execute one job at a time. One would think that for that ridiculous price tag our license would actually allow us to run more than one process at a time. No. Before agreeing to a price with Pervasive, *make absolutely sure* that the person ordering the product and license knows to ask for as many execution engines as you need. Pervasive sales failed to mention that very important fact to us.

That being said, we can now execute individual jobs one at a time, provided that the system does not hang and need to be restarted, as we frequently experience in our case.

Before we paid one penny for the product, our PM asked the salesperson if we could get a demo version that we could install and try out. It's always a good idea to test drive a product before dropping the dough. Pervasive never provided it, and it's obvious why: the product would have failed my evaluation and we would not be where we are now. Unfortunately things did not turn out that way.

We're still in the process of turning this rotten turnip into something usable. Wish us luck. We're going to need it.

[UPDATE: After 3 months of waiting on memory leaks to get fixed, we sort of have a new PDI release (version ) that is usable (albeit unstable) for what we need it to do. I would like to give a special thanks to Jason from Pervasive for seemingly being the only person to take personal ownership of the memory leak issues I reported many moons ago. My only gripe is that 3 months is too long to wait for a bug fix when you're dead in the water - especially since they charged us $45,000 for this product. For the record, I was against the purchase of this product in the first place. There are free open-source tools on the market that work much better than Pervasive DI, and they come with really good community support. And since they're open source, if you're a programmer like I am, you can just fix any glitches yourself. Unfortunately there are some people who think that the more something costs, the better it is, which is part of the reason why we ended up with this expensive hunk-of-junk. i.e. this is what happens when "the powers that be" are charmed by sales people and totally ignore the justified technical objections of their architect to such a deeply flawed product]

Wednesday, March 7, 2012

Pervasive DI 10 "Shell" function with redirection characters

Although the Pervasive DI 10 documentation states that using the "cmd /c" directive is only necessary when executing a .bat file, it should be noted that when using special characters in a command, one must also use the "cmd /c" directive, even when calling EXEs.

EZ Script example:


' Does NOT work as expected - the redirection is not honored:
Shell("ipconfig > myIpAddress.txt")

' Works - "cmd /c" lets the command shell handle the redirection:
Shell("cmd /c ipconfig > myIpAddress.txt")

Apparently, this is because the Shell function (and the Application step) are subject to unexpected behavior when dealing with redirection characters such as | (pipe), < or >.
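My understanding (an inference, not something from Pervasive's documentation) is that redirection characters are features of the command shell, not of the program being launched, so the command has to be routed through cmd.exe for them to mean anything. The same principle applies outside EZ Script; for example, in .NET:

```csharp
using System.Diagnostics;

class Demo
{
    static void Main()
    {
        // Launching ipconfig directly would pass "> myIpAddress.txt" to
        // ipconfig as literal arguments; routing through "cmd /c" lets
        // the shell interpret the ">" redirection.
        Process.Start("cmd.exe", "/c ipconfig > myIpAddress.txt");
    }
}
```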

An example of code that uses 2 of these characters:

Shell("cmd /c fciv.exe data.xml | find "".xml"" > data.xml.md5")