Posts Tagged ‘tinker’

piBot – bot for monitoring temperature

This is a long-due project using a Raspberry Pi to monitor the temperature of a cabin I have been building for a very long time. I bought the hardware almost 4 years ago and only now arrived at the point where I could use it.


Besides the geek motivation, the main practical motivation is to measure inside and outside temperature in order to estimate:

  • degree of insulation and weak points
  • min temperature in order to calculate needed anti-freeze mix for heating pipes
  • temperature monitoring for pump automation (TODO)
  • min temperature in order to start some electric heating (TODO)
  • accuracy of weather predictions for the location
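For the anti-freeze estimate, the measured minimum temperature can be turned into a required glycol percentage by interpolating a small freezing-point table. A sketch with indicative ethylene-glycol figures only; check the datasheet of the actual product before mixing:

```python
# Approximate freezing points (degrees C) of ethylene-glycol/water mixes,
# by volume percentage. Indicative values only -- verify against the
# datasheet of the actual anti-freeze product.
FREEZE_POINTS = [(0, 0.0), (10, -3.0), (20, -8.0), (30, -15.0), (40, -24.0), (50, -36.0)]

def glycol_percent_for(min_temp_c):
    """Smallest glycol percentage whose freezing point is at or below the
    expected minimum temperature, linearly interpolated from the table."""
    if min_temp_c >= 0:
        return 0
    for (p0, t0), (p1, t1) in zip(FREEZE_POINTS, FREEZE_POINTS[1:]):
        if t1 <= min_temp_c:
            # interpolate within [p0, p1] to the exact crossing point
            return p0 + (p1 - p0) * (t0 - min_temp_c) / (t0 - t1)
    raise ValueError("minimum temperature below the tabulated range")
```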


The architecture of the system is quite simple and has the following components:

  • the Raspberry Pi model B, which uses several DS18B20 sensors to monitor temperature
  • the piBot (this project), which is basically a main loop with sensor and output plugins
  • the VPN client over 3G, to ensure connectivity in the absence of a fixed IP
  • the VPN server and DB where the data is stored
  • the visualisation, which uses Grafana with a pgsql backend


PiBot is a Python main loop with a plugin mechanism for sensors and outputs (check it on GitHub). Currently I have implemented:

  • sensor: ds18b20, which reads the /sys/bus/w1/devices/%s/w1_slave data (sensor integration is already present in Raspbian)
  • sensor: CPU temperature, read from /sys/class/thermal/*/temp
  • output: plain CSV output
  • output: PostgreSQL output in a Grafana-friendly format
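The ds18b20 plugin boils down to parsing the two-line w1_slave report exposed by the kernel driver. The real plugin is in the GitHub repo; this is a minimal sketch of the parsing step, with illustrative function names:

```python
def parse_w1_slave(raw):
    """Parse the two-line contents of /sys/bus/w1/devices/<id>/w1_slave.

    Returns the temperature in degrees C, or None when the CRC check
    failed (the driver ends the first line with YES/NO)."""
    lines = raw.strip().split("\n")
    if not lines[0].strip().endswith("YES"):
        return None                      # bad read; the caller should retry
    _, _, t = lines[1].partition("t=")   # e.g. "... t=20687" (millidegrees)
    return int(t) / 1000.0

def read_ds18b20(sensor_id):
    # how a sensor plugin would use it against the real sysfs path
    with open("/sys/bus/w1/devices/%s/w1_slave" % sensor_id) as f:
        return parse_w1_slave(f.read())
```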

Grafana output

piBot grafana output

Remove old kernels

for i in $(dpkg --list | grep linux-image | cut -c5-48 \
           | grep -v $(uname -r) | grep -v linux-image-generic); do
    apt-get remove --purge -y $i
done

Java SAML2 + simplesamlphp

The use case is as follows: a Java application (the SP) must use simplesamlphp as the IdP. I tested two libraries; these are the required configurations.


Please note that the default Ubuntu (16.04.2) install of simplesamlphp (14.0) does not work with the installed PHP version (PHP 7) because of this bug, so I ended up installing everything from the provided tar.gz (14.14).


This is the first library I tested. To install it:

  • install maven (a recent version is required; it does not work with 3.0.5)
  • export MAVEN_HOME=/usr/local/java/apache-maven-3.5.0
  • export PATH=$MAVEN_HOME/bin:$PATH
  • git clone
  • cd java-saml
  • mvn package
  • download tomcat 7.0.78
  • install java-saml-toolkit-jspsample as an expanded war in this tomcat
  • tweak the files and the simplesamlphp config until it works. The key is to use the information from the IdP metadata (http://idp-domain/simplesamlphp/saml2/idp/metadata.php?output=xhtml) and transpose it into the properties file.


This is a more complex library. There is also a demo application (j2e-pac4j-demo).

To clone, compile and run this demo, the sequence is straightforward:

  • git clone
  • cd j2e-pac4j-demo
  • mvn package
  • mvn jetty:run

The test works.

To configure it for simpleSAMLphp, modify:

final SAML2ClientConfiguration cfg = new SAML2ClientConfiguration("resource:samlKeystore.jks",
        …);
cfg.setServiceProviderMetadataPath(new File("sp-metadata.xml").getAbsolutePath());
final SAML2Client saml2Client = new SAML2Client(cfg);

The idp-metadata.xml file is the file from http://idp-domain/simplesamlphp/saml2/idp/metadata.php?output=xhtml wrapped in an additional EntitiesDescriptor element:

<md:EntitiesDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata" xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
 <md:EntityDescriptor entityID="http://idp-domain/simplesamlphp/saml2/idp/metadata.php">

However, at this point the application gives a “fatal error”:

org.pac4j.saml.exceptions.SAMLException: Identity provider has no single sign on service available for the selected profileorg.opensaml.saml.saml2.metadata.impl.IDPSSODescriptorImpl@2d6719d3

The error seems to come from here, so I was left with no clue about the cause. The only solution was to change the code a bit to see which binding is required.

Just cloning the main repository and trying to compile it with maven does not work. The error is:

[INFO] Scanning for projects...
[ERROR] [ERROR] Some problems were encountered while processing the POMs:
[FATAL] Non-resolvable parent POM for org.pac4j:pac4j-couch:[unknown-version]: Could not find artifact org.pac4j:pac4j:pom:2.0.0-RC3-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 5, column 10
[ERROR] The build could not read 1 project -> [Help 1]
[ERROR] The project org.pac4j:pac4j-couch:[unknown-version] (/phantom/java/pac4j/pac4j-couch/pom.xml) has 1 error
[ERROR] Non-resolvable parent POM for org.pac4j:pac4j-couch:[unknown-version]: Could not find artifact org.pac4j:pac4j:pom:2.0.0-RC3-SNAPSHOT and 'parent.relativePath' points at wrong local POM @ line 5, column 10 -> [Help 2]
[ERROR] To see the full stack trace of the errors, re-run Maven with the -e switch.
[ERROR] Re-run Maven using the -X switch to enable full debug logging.
[ERROR] For more information about the errors and possible solutions, please read the following articles:
[ERROR] [Help 1]
[ERROR] [Help 2]

The solution is to check out the 2.0.0 tag:

  • git clone
  • git tag -l
  • git checkout tags/pac4j-2.0.0

At this point I changed the code to print the name of the required binding:

org.pac4j.saml.exceptions.SAMLException: Identity provider has no single sign on service available for the selected profileurn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST

The solution is to modify the metadata/saml20-idp-hosted.php file and add:

'SingleSignOnServiceBinding' => array('urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST'),
'SingleLogoutServiceBinding' => array('urn:oasis:names:tc:SAML:2.0:bindings:HTTP-Redirect', 'urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST'),

This generates the

<md:SingleLogoutService Binding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST"

entry in the metadata, whose absence was causing the error above.

At this point the SSO works. Of course, the entityID of the SP must be configured in metadata/saml20-sp-remote.php:

$metadata['diapason.test.pac4j'] = array(
    'AssertionConsumerService' => 'http://localhost:8080/callback?client_name=SAML2Client',
);

Simple pomodoro script

This is a very basic pomodoro script I use to avoid sitting in a fixed position for hours at a time:

UNIT=25      # minutes per work unit (example values, adjust to taste)
UNIT_CNT=2   # work units per interval
PAUSE=5      # pause length, in minutes
notify-send -i clock "Starting interval..."
for i in $(seq $UNIT_CNT); do
    sleep ${UNIT}m
    let c=$i*$UNIT
    notify-send -i clock "$c minutes"
done
(for i in $(seq $PAUSE); do let c=$PAUSE-$i+1; echo -n "Pause ${c}m"; echo -e '\f'; sleep 1m; done; echo -e '\f'; echo "Work";) | sm -

Simple hdmi activate script

This is a simple script I bound to ‘meta+F7’ to activate the second HDMI display I am using:

INTERNAL=DFP-0   # display names as reported by `disper -l` (example values)
EXTERNAL=DFP-1
LOCK=/tmp/.hdmi-on
function on {
    disper -e -d $INTERNAL,$EXTERNAL -r 1920x1080,1920x1080
    touch $LOCK
}
function off {
    disper -s -d $INTERNAL -r auto
    rm -f $LOCK
}
if ! disper -l | grep -q $EXTERNAL; then # there is no EXTERNAL, run single display
    off
elif [ -f $LOCK ]; then # external already active, toggle it off
    off
else
    on
fi


Ubuntu 16.04

I’ve used Ubuntu since the Edgy days, after migrating from Gentoo. Things got better with each release, until they started getting worse, or until I started to expect not to have to fix and patch things every time. So now I don’t feel like giving any general impression, just a list of bugs:

Searching for signal

For the last few years, one of the tools I have used the most is a Huawei E587 modem. It’s a great little device which gave me a lot of freedom. Even though it is quite old, it outperforms, even without an external antenna, any smartphone I have used for tethering, especially my new Samsung Galaxy S5 Neo which, as an aside, has some of the poorest software I have ever seen; it reminds me of a circa-2000 Windows pre-installed on a laptop and filled with junkware.

However, as on many other devices, the reporting of signal strength is very simplistic. My goal was to identify the best spot for the external antenna, defined as the one with the best signal strength.
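A simple way to sharpen the simplistic reporting is to map the raw RSSI (in dBm) onto a fixed-width bar that is easy to compare between antenna positions. A sketch; `read_rssi_dbm` in the commented loop is a hypothetical fetch you would implement against your modem's status interface:

```python
def rssi_to_bars(dbm, lo=-113, hi=-51, width=20):
    """Render an RSSI reading (dBm) as a fixed-width bar.

    -113 dBm is the conventional GSM no-signal floor and -51 dBm the
    ceiling (the 0..31 ASU scale maps onto this range)."""
    dbm = max(lo, min(hi, dbm))
    filled = round((dbm - lo) / (hi - lo) * width)
    return "#" * filled + "-" * (width - filled)

# hypothetical polling loop -- implement read_rssi_dbm() against your
# modem's status page or API, then watch the bar while moving the antenna:
#
# while True:
#     print(rssi_to_bars(read_rssi_dbm()), end="\r")
#     time.sleep(1)
```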


Running chrome in docker with audio

The goal is to run google-chrome in a Docker container with audio support. I tried this while attempting to get skype.apk running in ARChon, since Skype for Linux no longer supports conferencing. Even if running Skype in ARChon did not seem to work, Chrome runs flawlessly with audio support via PulseAudio.

So here is the Dockerfile:

FROM ubuntu:14.04
RUN apt-get update && apt-get install -y wget pulseaudio && \
    echo "deb stable main" > /etc/apt/sources.list.d/google-chrome.list && \
    wget -q -O - | apt-key add - && \
    apt-get update && apt-get install -y google-chrome-stable
RUN rm -rf /var/cache/apt/archives/*
RUN useradd -m -s /bin/bash chrome
USER chrome
ENV PULSE_SERVER /home/chrome/pulse
ENTRYPOINT [ "google-chrome" ]

You can build your container using:

docker build -t len/chrome .

Then run it using:

docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME/Downloads/chrome:/home/chrome/Downloads -v /run/user/$UID/pulse/native:/home/chrome/pulse -v /dev/shm:/dev/shm --name chrome len/chrome


Ensure rPi connectivity

The problem: make sure I can connect to my Raspberry Pi B+ even if no network is available or the network changes.

The idea: set a static IP.

First some information:

  • running Raspbian 8.0 (cat /etc/issue)
  • there is no need for a crossover UTP cable; if you connect directly to the device you can use a normal cable
  • IP configuration is delegated from /etc/network/interfaces to the dhcpcd daemon; this is why eth0 is set to manual

I did not want to break the default config, just to ensure the device will always be reachable. So I added an aliased (virtual) interface with a fixed IP:


# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
auto lo
iface lo inet loopback
iface eth0 inet manual
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
    # example fixed address; pick one outside any DHCP range on your LAN
    address 192.168.0.200
    netmask 255.255.255.0
allow-hotplug wlan0
iface wlan0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
allow-hotplug wlan1
iface wlan1 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

This is my /etc/network/interfaces. You can now connect to the Pi directly with a normal UTP cable, or over the LAN, by setting an IP in the same subnet on your machine:

ifconfig eth0 up

Please note that if you are on Ubuntu with a NetworkManager-controlled interface, you might need to disable auto-control by editing /etc/NetworkManager/NetworkManager.conf (see the unmanaged-devices section).
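To verify from another machine that the Pi answers on its fixed alias, a quick TCP probe is enough. A sketch assuming SSH is listening on the default port; the host is whatever fixed IP you configured:

```python
import socket

def is_reachable(host, port=22, timeout=2.0):
    """Attempt a TCP connection; True when something answers on the port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:   # covers refused connections, timeouts and DNS failures
        return False

# e.g. is_reachable("192.168.0.200") once the alias interface is up
```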




From monolithic single platform apps, to “write once, run everywhere”, (back) to … ?

‘A “line-of-business application” is one set of critical computer applications perceived as vital to running an enterprise’, as Wikipedia defines it. It might not always include cutting-edge technical innovation, but it involves a lot of functional knowledge on business processes thus becoming critical for the well-being of an enterprise and, as a result, has a very long life span.

The first application of this type I worked on ran on Sun machines (OMG, how old I am! :). The architecture was simple and monolithic: the application ran on a single OS and connected directly to the DB. I think it is still in operation somewhere today. Trying to port it to NT at some point was more or less a disaster.

The second one I worked on had an HTML interface based heavily on tables and with very little JS, developed using Java and Hibernate (very new at that point). The multi-tier architecture involved a server and an HTML client running in a browser; deployment and upgrades of the client were instant, and the connectivity options were greatly improved. However, the interface was basic and standardized. It is still in operation today and, due to the simple HTML used, we stepped into browser-hell only a couple of times.

The third one was designed around 2005, with a Flex interface and a Java server side. At this point the code was ‘truly’ write once, run everywhere. The architecture was similar to the previous case, but the interface had no limits, with a great look and everything a “native” interface could bring. It ran flawlessly on all Flash-supporting browsers. It took a few years to develop, it takes up to a year to implement at a client site, and it will be buried much sooner than expected by the death of the “so much hated Flash” (even if some clients still run IE6).

Now I am looking for the set of technologies for the next application of this type. I will be working with these technologies hopefully for some good years, and I have hit a wall on the interface side. Instead of writing code for a “virtual machine” running on a variety of hosts (browsers), I am facing the possibility of writing a variety of code running on a variety of hosts. How is this anything but a technological regression? I know the arguments against the “write once, run everywhere” paradigm once used by Sun for its Java, but this is a damn GUI, nothing else.
Yes, there are subtle bugs and security issues, but how can you compare those with the security issues involved in having n versions of the same application? With something like GWT or Vaadin, you add the huge complexity of handling all the browsers to your application. Consider the security issues involved in patching many such applications instead of a single ‘vm’. Or maybe I should not even write a web-based application, and instead revert to a situation similar to the first case I described, with completely different branches for different platforms, written in different languages? How can this development effort be justified? Or, in order to ensure longevity and easy development, is the logical choice to pick a single platform and develop natively for it alone?