Friday, August 11, 2017

Clean up Docker artifacts in dev environments

Remote dev environments that get continuous Docker deployments should have their images and volumes cleaned up regularly.

Pass the following commands over SSH to the remote server, for example from a Jenkins job:

 sudo -- bash -c 'docker volume rm $(docker volume ls -f dangling=true -q)' > /dev/null 2>&1  
 sudo -- bash -c 'docker images --quiet --filter=dangling=true | xargs --no-run-if-empty docker rmi' > /dev/null 2>&1  
 echo $?  

echo $? is used at the end because if the images are already cleaned up, the previous command may exit with a non-zero status indicating an error, which can safely be ignored. Running echo $? consumes that exit status, so the last command in the sequence exits 0 and Jenkins does not report the job as failed.
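
The exit-status trick can be demonstrated in isolation (a minimal sketch, using false as a stand-in for a cleanup command that fails when there is nothing to remove):

```shell
# 'false' exits with status 1; 'echo $?' prints that status and itself
# exits 0, so a Jenkins step ending with this sequence is not marked failed.
false
echo $?   # prints 1
```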

Thursday, August 10, 2017

Connect JasperServer to a Postgres Data Source with SSL

In response to questions such as this one:


In the JDBC connection string, just add this:


i.e. just setting ssl=true is not enough.
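
The elided parameter is most likely the PostgreSQL JDBC driver's sslfactory setting; a sketch of a full connection string (host, port, and database name are placeholders, and NonValidatingFactory skips server certificate verification):

```
jdbc:postgresql://dbhost:5432/mydb?ssl=true&sslfactory=org.postgresql.ssl.NonValidatingFactory
```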

And just to state the obvious, your Postgres server must support SSL connections.

Thursday, March 2, 2017

The bait and switch of open source

Great presentation by Katrina Owen. You need to sign up for a Safari Books account, but (at this time) it's free and does not require a credit card. If you don't want to give them your email, just use Mailinator or something.

Some good points from this talk:

  • Understand the difference between "issues" and "symptoms" in your product
  • When explaining your product, talk about its benefits, not its features
  • "Manage your energy rather than your time"

Wednesday, March 1, 2017

Boilerplate Java for AWS Lambda invoked from AWS API Gateway

import java.util.HashMap;
import java.util.Map;

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.LambdaLogger;

public class LambdaAPIGateway {

    public Map<String, Object> handleRequest(Map<String, Object> request, Context context) {
        LambdaLogger logger = context.getLogger();
        logger.log("Function version: " + context.getFunctionVersion() + "; ");
        logger.log("Event: " + request.toString());
        String bodyAsJson = "{\"data\":\"ok\"}";
        return Respond(200, bodyAsJson, context);
    }

    public static Map<String, Object> Respond(int httpStatus, String bodyAsJson, Context context) {
        // Response MUST be in a specific format with "headers", "statusCode" and "body" ONLY
        // Example: { "headers": {"Content-Type":"application/json"}, "body":"...", "statusCode":200 }
        // (under the heading Output Format of a Lambda Function for Proxy Integration)
        Map<String, Object> retval = new HashMap<>();
        Map<String, Object> headers = new HashMap<>();
        headers.put("Content-Type", "application/json");
        headers.put("x-request-id", context.getAwsRequestId());
        retval.put("headers", headers);
        retval.put("statusCode", httpStatus);
        retval.put("body", bodyAsJson);
        return retval;
    }
}

The AWS documentation provides a Java example which uses inputStream and outputStream and takes a lot more code:

I like this version better because it is simpler and shorter.

More comprehensive example here:

Wednesday, December 9, 2015

Using Prometheus with Java in a Jersey project

Step 1, add the dependency to your project. If you're using maven, add the dependency:
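
The Prometheus Java client coordinates are roughly as follows (the version shown is an assumption, so check for the current release; simpleclient_common supplies the TextFormat class used below):

```xml
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient</artifactId>
    <version>0.0.15</version>
</dependency>
<dependency>
    <groupId>io.prometheus</groupId>
    <artifactId>simpleclient_common</artifactId>
    <version>0.0.15</version>
</dependency>
```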


Step 2, create a class and register some metrics. Here I have a summary and some counters:

import io.prometheus.client.Counter;
import io.prometheus.client.Summary;

public class Metrics {

    // metric names below match the sample output at the end of this post
    public static final Summary requestLatency = Summary.build()
            .name("requests_latency_seconds")
            .help("Request latency in seconds.").register();

    public static final Counter requestFailures = Counter.build()
            .name("requests_failures_total")
            .help("Request failures.").register();

    public static final Counter requestsTotal = Counter.build()
            .name("requests_total")
            .help("Total requests.").register();

    public static final Counter uploadedFilesSucceeded = Counter.build()
            .name("upload_file_success")
            .help("Total files uploaded to S3.").register();

    public static final Counter tmpFilesNotCleared = Counter.build()
            .name("temp_files_not_cleared")
            .help("Total files that could not be removed from cache").register();
}


Step 3, set a class level variable in your main class to instantiate your Metrics class:

private final Metrics metrics = new Metrics();

Step 4, manipulate your metrics in your code as needed. For example you need to call the inc() function on your counters to increment them. You can add to the requestLatency metric as follows:

public String Something() {
    Summary.Timer timer = Metrics.requestLatency.startTimer();
    try {
        // do some work ...
        return "It works!";
    } catch (Exception e) {
        return e.getMessage();
    } finally {
        // record the elapsed time in the requestLatency summary
        timer.observeDuration();
    }
}

Step 5, add a web method to dish out your metrics. This is the part I did not like in other online examples (e.g. ...), since they use the metrics servlet provided by Prometheus, and all that servlet does is use the writer to gather the data for you; you end up registering and deploying an extra servlet within your code. Too much hassle. Just do what the Prometheus servlet does: load the registry and use the TextFormat class to write out your metrics. No need to deploy a servlet:

import java.io.StringWriter;
import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.exporter.common.TextFormat;

public String Metrics() {
    StringWriter writer = new StringWriter();
    try {
        TextFormat.write004(writer, CollectorRegistry.defaultRegistry.metricFamilySamples());
    } catch (Exception e) {
        return e.getMessage();
    }
    return writer.toString();
}


Here is some sample output from one of our apps using this technique:

# HELP requests_failures_total Request failures.
# TYPE requests_failures_total counter
requests_failures_total 0.0
# HELP temp_files_not_cleared Total files that could not be removed from cache
# TYPE temp_files_not_cleared counter
temp_files_not_cleared 0.0
# HELP requests_latency_seconds Request latency in seconds.
# TYPE requests_latency_seconds summary
requests_latency_seconds_count 0.25
requests_latency_seconds_sum 1.0
# HELP upload_file_success Total file uploaded to S3.
# TYPE upload_file_success counter
upload_file_success 25874588.0
# HELP requests_failures_total Request failures.
# TYPE requests_failures_total counter
requests_failures_total 2.0

Wednesday, November 25, 2015

Postgresql upsert example with CTE (before upsert support from 9.5)

 with heartbeat_data (source, time) as (
  values ('test', now())
 ),
 update_query as (
  update dvs_system.heartbeats
    set last_beat = heartbeat_data.time
  from heartbeat_data
  where source_key = heartbeat_data.source
  returning true as updated
 )
 insert into dvs_system.heartbeats (source_key, last_beat)
  select source, time
  from heartbeat_data
  where not exists (
   select 1 from update_query where updated = TRUE
  );


Monday, October 5, 2015

Postgres: Create a function to create a new logging table which inherits from another

When creating large logging tables, it's better to define the structure of the logging table once and then create tables that inherit from it to hold the data. Look-ups are faster when you are searching for log entries relevant to a single partitioned child table, and trimming the logs is easier because you can simply drop a child table when the time is appropriate.


 CREATE OR REPLACE FUNCTION app_system.create_request_log_partition(_date timestamp without time zone)
  RETURNS void AS
 $$
 DECLARE
   _table text := format('request_log_%s_q%s', date_part('year', _date), date_part('quarter', _date));
 BEGIN
   EXECUTE 'CREATE TABLE system.'|| _table ||'() INHERITS (system.access_log);'||E'\n'
         ||'CREATE INDEX '|| _table ||'_created_idx ON system.'|| _table ||'(created_at);'||E'\n'
         ||'CREATE INDEX '|| _table ||'_api_request_idx ON system.'|| _table ||'(api_request);'||E'\n'
         ||'GRANT INSERT, SELECT ON system.'|| _table ||' TO myapp;'||E'\n';
 END;
 $$
 LANGUAGE plpgsql VOLATILE
 COST 100;
 ALTER FUNCTION app_system.create_request_log_partition(timestamp without time zone) OWNER TO postgres;
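
Creating the partition for a given quarter is then a one-liner (a sketch, assuming the function above is installed):

```sql
SELECT app_system.create_request_log_partition(now()::timestamp);
```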