Posts Tagged ‘work’

piBot – bot for monitoring temperature

This is a long overdue project using a Raspberry Pi to monitor the temperature of a cabin I have been building for a very long time. I bought the hardware almost 4 years ago and only now got to the point where I could use it.

Motivation

Besides the geek appeal, the main practical motivation is to measure inside and outside temperatures in order to estimate:

  • degree of insulation and weak points
  • minimum temperature, in order to calculate the anti-freeze mix needed for the heating pipes
  • temperature monitoring for pump automation (TODO)
  • minimum temperature at which to start some electric heating (TODO)
  • accuracy of weather predictions for the location

Architecture

The architecture of the system is quite simple and has the following components:

  • the Raspberry Pi model B, which uses several DS18B20 sensors to monitor temperature
  • the piBot (this project), which is basically a main loop with sensor and output plugins
  • the VPN client over 3G, to ensure connectivity in the absence of a fixed IP
  • the VPN server and the database where data is stored
  • the visualisation layer, which uses Grafana with a PostgreSQL backend

piBot

piBot is a Python main loop with a plugin mechanism for sensors and outputs (check it out on GitHub). Currently I have implemented:

  • sensor: DS18B20, which reads the /sys/bus/w1/devices/%s/w1_slave data (the 1-Wire sensor integration is already present in Raspbian)
  • sensor: /sys/class/thermal/*/temp, for the CPU temperature
  • output: plain CSV
  • output: PostgreSQL, in a Grafana-friendly format

Grafana output

piBot Grafana output

An “obvious” improvement

It has been a long time since I felt such satisfaction debugging something, so I decided to write about it.

Let’s assume that you need to store (cache) a large object tree in memory during some operations. In practice this happens because of some regulatory constraints, so you end up having to parse a very large file and keep the resulting object tree. Effectively you have a single-entry cache: you parse the file, store the resulting object tree in memory, and search and process it for as long as that tree is the current one.

public ObjectHandler getObjectHandler(Long id) throws Exception{
	if(cachedObjectHandler != null){
		if(cachedObjectHandler.getId().equals(id)){
			return cachedObjectHandler;
		}
	}
	//else
	cachedObjectHandler = parse(...);
	return cachedObjectHandler;
}

The code above is a simplified way to do it, no? Please note that the parse(…) function allocates and builds a new object tree by parsing a stream. In my particular case the object tree held a maximum of around 120k objects (~150 MB) and was produced by parsing some large XML with StAX.
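For context only (this is not the project’s code): a parse(…) of this kind could look roughly like the StAX skeleton below, where the "record" element name and the flat list standing in for the real object tree are purely illustrative.

import java.io.InputStream;
import java.util.ArrayList;
import java.util.List;
import javax.xml.stream.XMLInputFactory;
import javax.xml.stream.XMLStreamConstants;
import javax.xml.stream.XMLStreamException;
import javax.xml.stream.XMLStreamReader;

// Illustrative sketch: stream a large XML document with StAX and collect the
// interesting elements into one big in-memory structure (here a simple list).
public class StaxTreeParser {

	public List<String> parse(InputStream in) throws XMLStreamException {
		List<String> tree = new ArrayList<>();
		XMLStreamReader reader = XMLInputFactory.newInstance().createXMLStreamReader(in);
		try {
			while (reader.hasNext()) {
				// "record" is a hypothetical element name, not from the original code
				if (reader.next() == XMLStreamConstants.START_ELEMENT
						&& "record".equals(reader.getLocalName())) {
					tree.add(reader.getElementText());
				}
			}
		} finally {
			reader.close();
		}
		return tree;
	}
}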

So what is wrong with the getObjectHandler code? Take a look at what a single change can do:

public ObjectHandler getObjectHandler(Long id) throws Exception{
	if(cachedObjectHandler != null){
		if(cachedObjectHandler.getId().equals(id)){
			return cachedObjectHandler;
		}
	}
	//else
	cachedObjectHandler = null;
	cachedObjectHandler = parse(...);
	return cachedObjectHandler;
}

Did we just halve the maximum memory needed? In the first version, since Java evaluates the right-hand side before performing the assignment, a new object tree is allocated by the parse function while the field still references the old one; only when parsing is done is the result assigned to cachedObjectHandler, finally allowing the old tree to be garbage collected. With the explicit null assignment, the old tree becomes unreachable and can be collected while the new allocation takes place, if memory is needed. In my case that is a peak of roughly one tree (~150 MB) instead of two (~300 MB).

As I said, a small change with a big smile.
 

Simple pomodoro script

This is a very basic pomodoro script I am using to avoid staying in a fixed position for hours at a time:

#!/bin/bash
 
UNIT=5
UNIT_CNT=5
PAUSE=6
 
notify-send -i clock "Starting interval..."
 
for i in $(seq $UNIT_CNT); do
    sleep ${UNIT}m
    let c=$i*$UNIT
    notify-send -i clock "$c minutes"
done
 
(for i in $(seq $PAUSE); do let c=$PAUSE-$i+1; echo -n "Pause ${c}m"; echo -e '\f'; sleep 1m; done; echo -e '\f'; echo "Work";) | sm -

The dark side of the force

I have been spending a lot of time lately working on a new JavaScript based interface. As with any JS project, we ended up with a lot of layers; even a simple numeric input goes through several of them.

The fun part is that, of course, we needed some functionality which did not exist in the Kendo component (adding support for financial shortcuts: 10k => 10,000).

Since I still have an OOP-structured mind, shaped by many years of Java patterns, I thought: OK, I will create a component which extends the Kendo one and re-wrap it.

This would look similar to:

$(function() {
 
    // wrap the widget in a closure. Not necessary in doc ready, but a good practice
    (function($) {
 
        // shorten references to variables. this is better for uglification 
        var kendo = window.kendo,
            ui = kendo.ui,
            NumericTextBox = ui.NumericTextBox;
 
        // create a new widget which extends the Kendo NumericTextBox
        var NumberInput = NumericTextBox.extend({
 
            // every widget has an init function
            init: function(element, options) {
                var that = this;
                NumericTextBox.fn.init.call(that, element, options);
 
            },
 
            _keypress: function(e) {
                //override here!!
            },
 
            options: {
                name: "NumberInput"
            }
        });
 
        // add this new widget to the UI namespace.
        ui.plugin(NumberInput);
 
    })(jQuery);
});

Well, this seemed OK and it worked OK, but it still requires a lot of work to wrap into a usable Aurelia component which maps all the events and options: precision, step, …

At this moment it struck me: this is JavaScript! BUHUHU, there is no need for all this. I could just do this in our wrapper:

    attached() {
        this.taskQueue.queueTask(() => {
            //assuming ak-numerictextbox="k-widget.bind:numericTextBox;" in the template
            this.numericTextBox._orig_keypress = this.numericTextBox._keypress;
            this.numericTextBox._keypress = this._keypress;
        });
    }

Now I have switched to the dark side of the force! More details in this GitHub project.
 

 

A few thoughts about http fetch

Fetch is the de facto standard now when writing new JavaScript code. Even if it is not yet supported by all browsers, it is warmly recommended. There are numerous usage examples, but after reading a lot of them, most seemed to miss the answers to my questions. These were:

  • how to properly do error handling, including having a default error handler which can be overridden
  • how to do an HTTP POST
  • how to use it in a real-life application, where one would expect to do a fetch against a login API and then have all other API calls work (i.e. have cookie support)

The following code (TypeScript) tries to answer the above questions:

function myFetch(path: string, params: Object, customCatch: boolean = false): Promise<string | null> {
    let requestParams: RequestInit = {
        mode: 'cors',
        cache: 'default',
        credentials: 'include' //this is REQUIRED to enable cookie management
    };
 
    requestParams.body = $.param(params); //generate a query string: 'param1=val&param2=val'
    requestParams.headers = {
        "Content-type": "application/x-www-form-urlencoded; charset=UTF-8" //this is REQUIRED for POST with this payload
    };
    requestParams.method = 'POST';
 
    return fetch(path, requestParams)
        .then(response => {
            if (response.ok) {
                return response.text();
            } else {
                throw (response);
            }
        })
        .catch(err => {
            //please note that "TypeError: Failed to fetch" is a very BAD error message. From the spec:
            //"A fetch() promise will reject with a TypeError when a network error is encountered, 
            //although this usually means permission issues or similar"
            if (customCatch) {
                //this allows to override error handler with a custom function
                throw (err);
            } else {
                if (err instanceof Response) { //server error code, thrown from else above
                    //handle this error
                } else { //this is network error
                    //handle this error
                }
            }
            return null;
        });
}
 
//fetch with default error handling
myFetch('/api', {user: 'toto', task: 'info'})
    .then(
        response => {
            if(response != null){
                //handle response
            }
        });
 
//fetch with custom error handling
myFetch('/api', {user: 'toto', task: 'info'}, true)
    .then(
        response => {
            if(response != null){
                //handle response
            }
        })
    .catch(errResponse => {
        //handle errResponse (both error status and network error)
    })

 

 

Read fast or die

I have spent a lot of time today trying to find and fix an issue which ended up being a fun discovery in the end.

The following Java error occurred when loading a PDF file from a URL stream:

java.io.IOException: missing CR
at sun.net.www.http.ChunkedInputStream.processRaw(ChunkedInputStream.java:405)
at sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:572)
at sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:609)
at sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:696)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3066)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3060)

This looked like a Java library error, and since the Java version was a bit old, the first idea was to replace the code with some Apache HttpClient based code to load the URL. This generated the following, very similar, error:

java.io.IOException: CRLF expected at end of chunk: 121/79
at org.apache.commons.httpclient.ChunkedInputStream.readCRLF(Unknown Source)
at org.apache.commons.httpclient.ChunkedInputStream.nextChunk(Unknown Source)
at org.apache.commons.httpclient.ChunkedInputStream.read(Unknown Source)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.commons.httpclient.AutoCloseInputStream.read(Unknown Source)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.commons.httpclient.AutoCloseInputStream.read(Unknown Source)

Since this was a Windows machine and the requests passed via localhost, another attempt was to use a different network interface. The result was the same.

After some searching I found a nice tool, RawCap (http://www.netresec.com/?page=RawCap), which does not require any installation and can capture even on localhost, producing a pcap-compatible file which can then be inspected in Wireshark.

The result was strange. Opening the capture in Wireshark on my machine showed [7540 bytes missing in capture file] in the TCP stream. This corresponded to a lot of packets: [TCP ZeroWindow], [TCP ZeroWindowProbe].


Since this was a VMware installation and I had previously had some trouble with VMware virtual switches, I assumed this was related to the network card configuration; however, it also happened on localhost.

After some more investigation I realized this was only happening when several requests were made in parallel. I confirmed this by looking at the code.

The code contained a worker pool. Each worker/thread constructed the URL it needed, opened it and returned an InputStream:

DataSource source = new URLDataSource(reportUrl);
return source.getInputStream();

However, all the results were handled in sequence. As such, while one URL's InputStream was being read, the server continued to send data on all the other requests, but this data was not read fast enough on the client side. As a result the TCP window did indeed drop to 0 and the strange errors occurred.

Of course, the solution was to fully read the data in each worker, along the lines of the sketch below.
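Not the original code, just a minimal sketch of that kind of fix (the class and method names are hypothetical; URLDataSource is the same javax.activation class used above): each worker drains its response completely while it still owns the connection and hands back an in-memory copy that can safely be processed later, in sequence.

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;
import javax.activation.URLDataSource;

// Hypothetical worker: read the whole HTTP response up front so the server is
// never left waiting on an unread connection (which is what drove the TCP window to 0).
public class ReportWorker {

	public InputStream fetchReport(URL reportUrl) throws IOException {
		URLDataSource source = new URLDataSource(reportUrl);
		try (InputStream in = source.getInputStream()) {
			ByteArrayOutputStream buffer = new ByteArrayOutputStream();
			byte[] chunk = new byte[8192];
			int n;
			while ((n = in.read(chunk)) != -1) {
				buffer.write(chunk, 0, n);
			}
			// hand back an in-memory stream; the network connection is already fully consumed
			return new ByteArrayInputStream(buffer.toByteArray());
		}
	}
}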

Oracle: drop all schema contents

Purpose: drop all of a schema's contents without dropping the user itself. Run it with SET SERVEROUTPUT ON so that any errors caught by the exception handler are still printed.

DECLARE
BEGIN
  FOR r1 IN ( SELECT 'DROP ' || object_type || ' ' || object_name || DECODE ( object_type, 'TABLE', ' CASCADE CONSTRAINTS PURGE' ) || DECODE ( object_type, 'TYPE', ' FORCE' ) AS v_sql
                FROM user_objects
               WHERE object_type IN ( 'TABLE', 'VIEW', 'PACKAGE', 'TYPE', 'PROCEDURE', 'FUNCTION', 'TRIGGER', 'SEQUENCE' )
               ORDER BY object_type,
                        object_name ) LOOP
    BEGIN
        EXECUTE IMMEDIATE r1.v_sql;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;
  END LOOP;
END;
/

xubuntu 14.04 – Trusty Tahr

There is not much to say about the new Xubuntu 14.04 Trusty Tahr, and that is a very, very good thing. I had quite some work to do when installing 12.04 on my Tuxedo laptop, but with 14.04 I managed to set up almost everything in about two hours on a Friday night. Here is a list of just a few small hiccups:

  • a non-blocking GRUB error while booting from my LVM root:
Error: diskfilter writes are not supported.
Press any key to continue...

This is described and solved in this bug report (it requires a modification of the GRUB scripts).

  • very small trouble with Oracle XE 11
  • some problems with google-earth
len@tux:~$ google-earth 
./googleearth-bin: error while loading shared libraries: libGLU.so.1: cannot open shared object file: No such file or directory

This is related to the missing i386 libraries and is well described in this post.

Update 05.05.2014: problems with Eclipse:

./eclipse.sh
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007fa9a426f2a1, pid=15491, tid=140369401485056
#
# JRE version: 6.0_27-b07
# Java VM: Java HotSpot(TM) 64-Bit Server VM (20.2-b06 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libsoup-2.4.so.1+0x6c2a1] short+0x11
#
# An error report file with more information is saved as:
# /phantom/now/java/eclipse-3.7-classic-64/hs_err_pid15491.log
#
# If you would like to submit a bug report, please visit:
# http://java.sun.com/webapps/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
*** NSPlugin Viewer *** ERROR: rpc_end_sync called when not in sync!

This is described here, but the fix did not work for me (I was on 3.7).

Oracle 11g release 2 XE on Ubuntu 14.04

There are many, many links, threads, bugs and discussions related to this, since the Oracle 11g installation is no longer the breeze it was with Oracle 10g, at least on Ubuntu. This is my short, minimal list of things to do to get Oracle running on Ubuntu 14.04 / 12.04.

Last updated 2014-05-01, install on 14.04
Last updated 2013-12-25, install on 12.04.3.

(more…)

Old laptop, broken charger, limited frequency

I have an old laptop with a broken charger. The laptop works on the charger, but it does not charge the battery. This should not be a problem, since the battery is broken as well. However, I noticed the laptop was very slow. After further investigation I noticed that the CPU max frequency was always equal to the lowest frequency, no matter the governor:

cat /sys/devices/system/cpu/cpu0/cpufreq/cpuinfo_max_freq

I installed the cpufreq utilities and tried setting the governor to performance; no luck. At first the governor kept reverting to ondemand, until I realized I had to disable the ondemand service (update-rc.d -f ondemand remove). Then, even with the performance governor, the frequency never got as high as it should. Finally, after a lot of research, I found the following lines which, added to /etc/rc.local, fixed everything:

echo 1 > /sys/module/processor/parameters/ignore_ppc
echo -n 2000000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_max_freq
echo -n 2000000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_max_freq

Of course, you should use your own maximum frequency, taken from /sys/devices/system/cpu/cpu1/cpufreq/scaling_available_frequencies.