Author Archive

Attach payload to a detached PKCS#7 signature

If you generate signatures with a hardware token (for instance 3Skey), sending a large file to the token is impractical. Instead you send a hash (SHA-256), get back a detached PKCS#7 signature, and then need to re-attach the payload in Java code. For once this was easier to do with plain JCE code than with my favorite BouncyCastle provider; however, for really large files BC does provide the required streaming mechanism.
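The first step of that flow, hashing the large file locally so only the digest travels to the token, can be sketched as follows (a minimal Python sketch of the idea, not the post's actual Java code; the chunked read is the point):

```python
import hashlib

def file_digest(path, chunk_size=1024 * 1024):
    """Hash a large file in chunks so it never has to be loaded whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # read 1 MB at a time until EOF
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

The resulting digest is what gets sent to the token; the detached signature that comes back then has to be recombined with the original payload.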

Of course the best commands to use to help debug the code below are:

Verify pkcs#7 signature

#-noverify means do not verify the certificate chain; this verifies only the signature, not the originating certificate
openssl smime -inform DER -verify -noverify -in signature.p7s


openssl recipes

These last days I had to tinker with openssl a lot, so this is a short memory reminder of the parameters.

PKCS#7 manipulation

Verify pkcs#7 signature

#-noverify means do not verify the certificate chain; this verifies only the signature, not the originating certificate
openssl smime -inform DER -verify -noverify -in signature.p7s

Show the structure of the file (applies to all DER files)

#for debugging
openssl asn1parse -inform DER -i -in signature.p7s

Extract certificate and public key

openssl pkcs7 -inform DER -in signature.p7s -print_certs > certificate.crt
openssl x509 -in certificate.crt -noout -pubkey > pubKey.key

JKS certificate import

Export private key from jks keystore

#convert jks to pkcs#12 format
keytool -importkeystore -srckeystore myKeystore.jks -destkeystore myKeystore.p12 \
  -deststoretype PKCS12 -srcalias myAlias
#export private key (WARNING, manipulate with care)
openssl pkcs12 -in myKeystore.p12  -nodes -nocerts -out myKey.pem

Check .csr or .crt public key against a private key

This will generate the sha256 hash for the public key; compare manually. Very useful if you lost your key or are getting “No certificate matches private key”.

#generate hash for pubKey generated from privateKey
openssl pkey -in myPrivateKey.key -pubout -outform pem | sha256sum 
#generate hash for pubKey from cert
openssl x509 -in myCertificate.crt -pubkey -noout -outform pem | sha256sum 
#generate hash for pubKey from csr
openssl req -in myCSR.csr -pubkey -noout -outform pem | sha256sum

Convert p7b signed certificate response to jks keystore

This is very useful if you lost the jks keystore from which the original .csr was generated.

#export certs from signed certificate response to .pem
openssl pkcs7 -print_certs -in myStore.p7b -out certs.pem
#combine certs and key in pkcs#12 format
openssl pkcs12 -export -name server -in certs.pem -out myKeystore.p12 -inkey myPrivateKey.key
#convert pkcs#12 to jks
keytool -importkeystore -srcstoretype pkcs12 -srckeystore myKeystore.p12 -destkeystore myKeystore.jks

piBot – bot for monitoring temperature

This is a long-overdue project using a Raspberry Pi to monitor the temperature of a cabin I have been building for a very long time. I bought the hardware almost 4 years ago and only now reached the point where I could use it.

Motivation

Besides the geek motivation, the main practical motivation is to measure inside and outside temperature in order to estimate:

  • degree of insulation and weak points
  • min temperature in order to calculate needed anti-freeze mix for heating pipes
  • temperature monitoring for pump automation (TODO)
  • min temperature in order to start some electric heating (TODO)
  • accuracy of weather predictions for the location

Architecture

The architecture of the system is quite simple and has the following components:

  • the Raspberry Pi (model B), which uses several DS18B20 sensors to monitor temperature
  • the piBot (this project), which is basically a main loop with sensor and output plugins
  • the vpn client over 3G, to ensure connectivity in the absence of a fixed ip
  • the vpn server and db where data is stored
  • the visualisation, which uses Grafana with a pgsql backend

piBot

PiBot is a python main loop with a plugin mechanism for sensors and outputs (check it on GitHub). So far I have implemented:

  • sensor: ds18b20, which reads /sys/bus/w1/devices/%s/w1_slave data (sensor integration is present in Raspbian)
  • sensor: /sys/class/thermal/*/temp for CPU temperature
  • output: csv plain output
  • output: pgsql, postgresql output in a Grafana-friendly format
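The main loop with sensor and output plugins boils down to something like this (a simplified sketch, not the actual piBot code; the class names are made up):

```python
import time

class CpuTempSensor:
    """Stand-in for a sensor plugin; a real one would parse /sys/class/thermal/*/temp."""
    name = "cpu"

    def read(self):
        return 42.0

class CsvOutput:
    """Stand-in for an output plugin; a real one would append to a csv file."""
    def write(self, name, value, ts):
        print("%d,%s,%.2f" % (ts, name, value))

def main_loop(sensors, outputs, interval=60, iterations=None):
    """Poll every sensor, hand each reading to every output, then sleep."""
    n = 0
    while iterations is None or n < iterations:
        ts = int(time.time())
        for sensor in sensors:
            value = sensor.read()
            for output in outputs:
                output.write(sensor.name, value, ts)
        n += 1
        if iterations is None or n < iterations:
            time.sleep(interval)

# one polling round, no sleep
main_loop([CpuTempSensor()], [CsvOutput()], interval=0, iterations=1)
```

New sensors and outputs are then just new classes with a read() or write() method.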

Grafana output

piBot grafana output

An “obvious” improvement

It’s been a long time since I felt such satisfaction debugging something, so I decided to write about it.

Let’s assume that you need to store (cache) in memory a large object tree during some operations. In practice this happens because of some regulatory constraint: you end up having to parse a very large file and keep the resulting object tree. Effectively you have a single-entry cache: you parse the new object tree and keep it in memory for search and processing while the current object tree is in use.

public ObjectHandler getObjectHandler(Long id) throws Exception{
	if(cachedObjectHandler != null){
		if(cachedObjectHandler.getId().equals(id)){
			return cachedObjectHandler;
		}
	}
	//else
	cachedObjectHandler = parse(...);
	return cachedObjectHandler;
}

The code above is a simplified way to do it, no? Please note that the parse(…) function creates the object tree by parsing a stream, allocating a new object tree. In my particular case the object tree held a maximum of ~120k objects (~150 MB) and did some large xml parsing using StAX.

So what is wrong with the code above? Take a look at what a single change can do:

public ObjectHandler getObjectHandler(Long id) throws Exception{
	if(cachedObjectHandler != null){
		if(cachedObjectHandler.getId().equals(id)){
			return cachedObjectHandler;
		}
	}
	//else
	cachedObjectHandler = null;
	cachedObjectHandler = parse(...);
	return cachedObjectHandler;
}

Did we just halve the maximum needed memory? In the first version, since Java evaluates the right-hand side before the assignment, the new object tree is fully allocated by the parse function before it is assigned to cachedObjectHandler, so the old object tree only becomes eligible for gc once the new one is complete. With the null assignment, however, the old tree can be gc-ed while the new allocation takes place, if memory is needed.

As I said, a small change with a big smile.
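The same effect is easy to reproduce outside Java; here is a small Python sketch using tracemalloc (requires Python 3.9+ for reset_peak; the list stands in for the parsed tree):

```python
import tracemalloc

def parse():
    # ~8 MB list standing in for the parsed object tree
    return [0] * 1_000_000

def peak_of(reload):
    """Return peak traced memory during one cache reload."""
    tracemalloc.start()
    cache = [parse()]          # one-element "cache" holding the current tree
    tracemalloc.reset_peak()   # measure only the reload itself
    reload(cache)
    peak = tracemalloc.get_traced_memory()[1]
    tracemalloc.stop()
    return peak

def reload_keep(cache):
    cache[0] = parse()         # old tree stays referenced while the new one is built

def reload_drop(cache):
    cache[0] = None            # drop the old tree first, as in the second version
    cache[0] = parse()
```

On CPython, peak_of(reload_keep) comes out roughly twice peak_of(reload_drop), which is exactly the behaviour described above.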
 

Remove old kernels

for i in $(dpkg --list | grep linux-image | cut -c5-48 | grep -v $(uname -r) | grep -v linux-image-generic); do apt-get remove --purge -y $i; done

Java SAML2 + simplesamlphp

The use case is as follows: the java application (SP) must use simplesamlphp as an IdP. I tested two libraries; these are the required configs.

SimpleSAMLphp

Please note that the default Ubuntu (16.04.2) install of simplesamlphp (14.0) does not work with the installed PHP version (PHP 7) because of this bug, so I ended up installing everything from the provided tar.gz (14.14).

Onelogin

This is the first library I tested. To install it:

  • install maven; a recent version is required, it does not work with 3.0.5
  • export MAVEN_HOME=/usr/local/java/apache-maven-3.5.0
  • export PATH=$MAVEN_HOME/bin:$PATH
  • git clone https://github.com/onelogin/java-saml
  • cd java-saml
  • mvn package
  • download tomcat 7.0.78
  • install java-saml-toolkit-jspsample as an expanded war in this tomcat
  • tweak the files onelogin.saml.properties and the simplesamlphp config until it works. The key is to use the information from the IdP metadata (http://idp-domain/simplesamlphp/saml2/idp/metadata.php?output=xhtml) and transpose it into the properties file.

Pac4j

This is a more complex library. There is also a demo application for J2EE.

To clone, compile and run this demo, the sequence is straightforward:

  • git clone https://github.com/pac4j/j2e-pac4j-demo
  • cd j2e-pac4j-demo
  • mvn package
  • mvn jetty:run

The test with https://www.testshib.org works.

To configure it for SimpleSAMLphp, modify DemoConfigFactory.java:

final SAML2ClientConfiguration cfg = new SAML2ClientConfiguration("resource:samlKeystore.jks",
    "pac4j-demo-passwd",
    "pac4j-demo-passwd",
    "resource:idp-metadata.xml");
cfg.setMaximumAuthenticationLifetime(3600);
cfg.setServiceProviderEntityId("test.pac4j");
cfg.setServiceProviderMetadataPath(new File("sp-metadata.xml").getAbsolutePath());
final SAML2Client saml2Client = new SAML2Client(cfg);

The idp-metadata.xml file is the file from http://idp-domain/simplesamlphp/saml2/idp/metadata.php?output=xhtml wrapped in an additional EntitiesDescriptor element:

<md:EntitiesDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#"> 
 <md:EntityDescriptor entityID="http://idp-domain/simplesamlphp/saml2/idp/metadata.php">

However at this point the application gives a “fatal error”:

org.pac4j.saml.exceptions.SAMLException: Identity provider has no single sign on service available for the selected profileorg.opensaml.saml.saml2.metadata.impl.IDPSSODescriptorImpl@2d6719d3

The error seems to come from https://github.com/pac4j/pac4j/blob/master/pac4j-saml/src/main/java/org/pac4j/saml/context/SAML2MessageContext.java#L104, which left me with no clue about the problem. The only solution was to change the code a bit to see which binding is required.

Just cloning the main repository and trying to compile it with maven does not work. The error is:

[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM for org.pac4j:pac4j-couch:[unknown-version]: Could not find artifact org.pac4j:pac4j:pom:2.0.0-RC3-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 5, column 10
@ 
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR] 
[ERROR] The project org.pac4j:pac4j-couch:[unknown-version] (/phantom/java/pac4j/pac4j-couch/pom.xml) has 1 error
[ERROR] Non-resolvable parent POM for org.pac4j:pac4j-couch:[unknown-version]: Could not find artifact org.pac4j:pac4j:pom:2.0.0-RC3-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 5, column 10 -> [Help 2]
[ERROR] 
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] 
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1] http://cwiki.apache.org/confluence/display/MAVEN/ProjectBuildingException
[ERROR] [Help 2] http://cwiki.apache.org/confluence/display/MAVEN/UnresolvableModelException

The solution is to check out the 2.0.0 tag:

  • git clone https://github.com/pac4j/pac4j
  • git tag -l
  • git checkout tags/pac4j-2.0.0

After changing the code to print the name of the binding, the error becomes:

org.pac4j.saml.exceptions.SAMLException: Identity provider has no single sign on service available for the selected profileurn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST

The solution is to modify the metadata/saml20-idp-hosted.php file and add:

'SingleSignOnServiceBinding' => array('urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST'),
'SingleLogoutServiceBinding' => array('urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST'),

This will generate in the metadata the element whose absence caused the error:

<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"

At this point the SSO works. Of course, the entityID for the SP must be configured in metadata/saml20-sp-remote.php:

$metadata['diapason.test.pac4j'] = array(
 'AssertionConsumerService' => 'http://localhost:8080/callback?client_name=SAML2Client',

Simple pomodoro script

This is a very basic pomodoro script I am using to avoid staying in a fixed position for hours at a time:

#!/bin/bash
 
UNIT=5
UNIT_CNT=5
PAUSE=6
 
notify-send -i clock "Starting interval..."
 
for i in $(seq $UNIT_CNT); do
    sleep ${UNIT}m
    let c=$i*$UNIT
    notify-send -i clock "$c minutes"
done
 
(for i in $(seq $PAUSE); do let c=$PAUSE-$i+1; echo -n "Pause ${c}m"; echo -e '\f'; sleep 1m; done; echo -e '\f'; echo "Work";) | sm -

Simple hdmi activate script

This is a simple script I bound to ‘meta+F7’ to activate a second hdmi display I am using:

INTERNAL=eDP1
EXTERNAL=HDMI2
LOCK=/tmp/${EXTERNAL}.on
 
disper -l | grep -q $EXTERNAL
EXT_PRESENT=$? #capture grep's result here: the function definitions below reset $? to 0
 
function on {
    disper -e -d $INTERNAL,$EXTERNAL -r 1920x1080,1920x1080
    touch $LOCK
}
 
function off {
    disper -s -d $INTERNAL -r auto
    rm -f $LOCK
}
 
if [ $EXT_PRESENT -eq 1 ]; then #there is no EXTERNAL, run single display
    off
elif [ -f $LOCK ]; then
    off
else
    on
fi

 

The dark side of the force

I have been spending a lot of time lately working on a new javascript based interface. As with any js project we ended up with a lot of layers; even a simple numeric input goes through several of them.

The fun part is that, of course, we needed some functionality which did not exist in the kendo component: adding support for financial shortcuts (10k => 10,000).
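The shortcut expansion itself is a tiny bit of logic, independent of the widget; sketched here in Python (the suffix table is my assumption, not the actual implementation):

```python
# hypothetical suffix table for financial shortcuts
SUFFIXES = {"k": 1_000, "m": 1_000_000, "b": 1_000_000_000}

def expand_shortcut(text):
    """Expand financial shortcuts: '10k' -> 10000.0, '1.5m' -> 1500000.0."""
    text = text.strip().lower()
    if text and text[-1] in SUFFIXES:
        return float(text[:-1]) * SUFFIXES[text[-1]]
    return float(text)
```

The hard part was never the expansion but where to hook it into the widget's key handling, which is what the rest of this post is about.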

Since I still have an OOP-structured mind, with many years of Java patterns behind it, I thought: OK, I will create a component which extends the kendo one and re-wrap it.

This would look similar to:

$(function() {
 
    // wrap the widget in a closure. Not necessary in doc ready, but a good practice
    (function($) {
 
        // shorten references to variables. this is better for uglification 
        var kendo = window.kendo,
            ui = kendo.ui,
            NumericTextBox = ui.NumericTextBox;
 
        // create a new widget which extends the first custom widget
        var NumberInput = NumericTextBox.extend({
 
            // every widget has an init function
            init: function(element, options) {
                var that = this;
                NumericTextBox.fn.init.call(that, element, options);
 
            },
 
            _keypress: function(e) {
                //override here!!
            },
 
            options: {
                name: "NumberInput"
            }
        });
 
        // add this new widget to the UI namespace.
        ui.plugin(NumberInput);
 
    })(jQuery);
});

Well, this seemed ok and it worked, but it still requires a lot of work to wrap into a usable aurelia component which maps all the events and options: precision, step …

At this moment it struck me! This is javascript! BUHUHU, there is no need for all this. I could just do this in our wrapper:

    attached() {
        this.taskQueue.queueTask(() => {
            //assuming ak-numerictextbox="k-widget.bind:numericTextBox; in the template
            this.numericTextBox._orig_keypress = this.numericTextBox._keypress;
            this.numericTextBox._keypress = this._keypress;
        });
    }

Now I have switched to the dark side of the force! More details in this GitHub project.
 

 

A few thoughts about http fetch

Fetch is the “de facto” standard now when writing new javascript code. Even if it is not yet supported by all browsers, it is warmly recommended. There are numerous examples of its usage, but most of them seemed to miss the answers to my questions. These were:

  • how to properly do error handling, including having a default error handler which can be overridden
  • how to do an http POST
  • how to use it in a real-life application; one would expect to do a fetch for a login api and then have all other api calls work (i.e. have cookie support)

The following code (typescript) tries to answer the above questions:

function myFetch(path: string, params: Object, customCatch: boolean = false): Promise<string | null> {
    let requestParams: RequestInit = {
        mode: 'cors',
        cache: 'default',
        credentials: 'include' //this is REQUIRED to enable cookie management
    };

    requestParams.body = $.param(params); //generate a query string: 'param1=val&param2=val'
    requestParams.headers = {
        "Content-type": "application/x-www-form-urlencoded; charset=UTF-8" //this is REQUIRED for POST with this payload
    };
    requestParams.method = 'POST';

    return fetch(path, requestParams)
        .then(response => {
            if (response.ok) {
                return response.text();
            } else {
                throw (response);
            }
        })
        .catch(err => {
            //please note that "TypeError: Failed to fetch" is a very BAD error message. From the spec:
            //"A fetch() promise will reject with a TypeError when a network error is encountered,
            //although this usually means permission issues or similar"
            if (customCatch) {
                //this allows overriding the error handler with a custom function
                throw (err);
            } else {
                if (err instanceof Response) { //server error code, thrown from the else above
                    //handle this error
                } else { //this is a network error
                    //handle this error
                }
            }
            return null;
        });
}
 
//fetch with default error handling
myFetch('/api', {user: 'toto', task: 'info'})
    .then(
        response => {
            if (response != null) {
                //handle response
            }
        });

//fetch with custom error handling
myFetch('/api', {user: 'toto', task: 'info'}, true)
    .then(
        response => {
            if (response != null) {
                //handle response
            }
        })
    .catch(errResponse => {
        //handle errResponse (both error status and network error)
    });