Update NumericTextBox precision on the fly

As the currency selection changes, you naturally want to change the precision and format of a NumericTextBox which contains an amount in the given currency (damn JPY for not using the same precision as everyone else).

At first glance one might think this can be accomplished using computed properties and something like:

    @bindable precision:number = 2;
 
    @computedFrom('precision')
    get step():number {
        return Math.pow(10, - this.precision);
    }
 
    @computedFrom('precision')
    get format():string {
        return "n" + this.precision;
    }

<input type="number" ak-numerictextbox="k-value.two-way:value; k-spinners.one-way:spinners; k-format.one-way:format; k-decimals.one-way:precision; k-step.one-way:step"/>

However, this is not possible, and in this specific case there are no kendo functions to achieve it.

The only option, short of recreating the control each time the currency changes, is to use the setOptions method as described here:

    precisionChanged(newValue, oldValue){
        let step = Math.pow(10, -newValue);
        this.numericTextBox.setOptions({format: 'n' + newValue, decimals: newValue, step: step});
        // re-apply the value so the widget re-renders using the new format
        this.numericTextBox.value(this.value);
    }
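
The snippet assumes this.numericTextBox already holds the Kendo widget instance. One way to obtain it, assuming jQuery and a hypothetical ref="amountInput" attribute on the input element (the running gist may wire this differently), is the standard Kendo lookup:

    attached(){
        // 'amountInput' is a hypothetical ref="amountInput" on the <input>;
        // .data('kendoNumericTextBox') is the standard Kendo widget lookup
        this.numericTextBox = jQuery(this.amountInput).data('kendoNumericTextBox');
    }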

Please note that the value(value()) re-set is a very ugly hack. See the running gist.

 

Dispatch change events in custom components

This is the first post about Aurelia and the Kendo-UI-Bridge. It’s a brave new world for me.

The goal was to create a custom component (which wraps an ak-combobox) and dispatch a change event in a similar way to the standard components (e.g. ak-combobox, ak-datepicker, etc.).

<input ak-datepicker="k-value.two-way: model.date" k-on-change.delegate="onChange($event)"/>

For these components, the change event is dispatched after the model changes.

In my component code I had initially wrapped the k-on-change from the base component and re-dispatched it.
This, however, did not have the desired effect: the wrapping component emitted the event before the model was updated (before the binding was updated).

<dictionary-combo value.two-way="model.currency" k-on-change.delegate="onChange($event)"></dictionary-combo>
    onChange(event){
       event.stopPropagation();
       this.dispatch()
    }
 
    dispatch(){
      this.dispatchCustomEvent(this.element, "k-on-change", {value: this.value}, true);
    }
 
    dispatchCustomEvent(element:Element, type:string, detail:Object, bubbles:boolean){
        let newEvent;
        if (window.CustomEvent) {
            newEvent = new CustomEvent(type, {
                detail: detail,
                bubbles: bubbles
            });
        } else {
            // legacy fallback; initCustomEvent takes the detail directly
            newEvent = document.createEvent('CustomEvent');
            newEvent.initCustomEvent(type, bubbles, true, detail);
        }
        element.dispatchEvent(newEvent);
    }

The solution is to delay the event: stop propagating the initial event and add the dispatch of the new event to the Aurelia taskQueue. This ensures the event occurs after the binding has been updated:

    onChange(event){
       event.stopPropagation();
       console.log("dictionaryCombo.onChange: " + this.value);
       //wrong way: this.dispatch()
       this.taskQueue.queueMicroTask(() => this.dispatch());
    }
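
For reference, this.taskQueue and this.element come from Aurelia's dependency injection; a minimal sketch of the wiring (the gist may do it slightly differently):

    import {inject, TaskQueue} from 'aurelia-framework';
 
    @inject(Element, TaskQueue)
    export class DictionaryCombo {
        constructor(element, taskQueue) {
            this.element = element;     // the <dictionary-combo> host element
            this.taskQueue = taskQueue; // used to delay the re-dispatched event
        }
    }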

Refer to this gist for a complete example.

Please note that I found this thanks to the wonderful guys on the aurelia-kendo-ui gitter, to whom I remain grateful.
 

Parsing network stream into http request/response

The need was to convert the network stream into clear-text http requests/responses while doing some decoding of the response body. For instance:

request uri + queryString => response body

  1. Capture the stream – easy using tcpdump
  2. Filter the http stream – easy using wireshark with a tcp.port eq 80 filter
  3. Export the HTTP objects from the capture using wireshark File -> Export Objects -> HTTP. This works fine only for files; it does not work for POST requests. FAIL.
  4. Using tshark and a combination of -Tfields and -e parameters. Did not work easily enough, even if I suspect it could. FAIL.
  5. Using tcpflow: tcpflow -r test.pcapng -e http. This generates some nice flows but it had some disadvantages: requests and responses are in different files and are flow-sorted, not time-sorted. I think this can be overcome by writing a script which parses report.xml using something like this. FAIL.
  6. The final idea was based on pcap2har, which parses a .pcap file into a HAR. Some changes to main.py and voila:
# changes to pcap2har's main.py (Python 2); urlencode comes from urllib,
# decode() is our custom response-body decoder (defined elsewhere)
from urllib import urlencode

logging.info('Flows=%d. HTTP pairs=%d' % (len(session.flows), len(session.entries)))
 
for e in sorted(session.entries, key=lambda x: x.ts_start):
    if e.request.msg.method == 'GET':
        print 'GET', e.request.url
    elif e.request.msg.method == 'POST':
        print 'POST', e.request.url, urlencode({k: v[0] for k, v in e.request.query.items()})
    if e.response.mimeType == 'application/octet-stream':
        print decode(e.response.text, options.password)
    else:
        print 'unknown:', e.response.mimeType, e.response.raw_body_length
    print '\n'
 
# write the HAR file as in the stock main.py
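
The modified script is then invoked like stock pcap2har (invocation assumed from the project's usage; the exact arguments may differ):

python main.py capture.pcapng capture.har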

 

Ubuntu 16.04

I've used ubuntu since the Edgy days, after migrating from Gentoo. Things got better with each release, until they started getting worse, or until I started to expect not to have to fix and patch things each time. So now I don't feel like giving any impression, just a list of bugs:

Read fast or die

I have spent a lot of time today trying to find and fix an issue which turned out to be a fun discovery in the end.

The following java error occurred when loading a pdf file from a URL stream:

java.io.IOException: missing CR
at sun.net.www.http.ChunkedInputStream.processRaw(ChunkedInputStream.java:405)
at sun.net.www.http.ChunkedInputStream.readAheadBlocking(ChunkedInputStream.java:572)
at sun.net.www.http.ChunkedInputStream.readAhead(ChunkedInputStream.java:609)
at sun.net.www.http.ChunkedInputStream.read(ChunkedInputStream.java:696)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3066)
at sun.net.www.protocol.http.HttpURLConnection$HttpInputStream.read(HttpURLConnection.java:3060)

This looked like a java library error, since the java version was a bit old, so the first idea was to replace the code with some apache httpClient based code to load the URL. This generated the following, very similar, error:

java.io.IOException: CRLF expected at end of chunk: 121/79
at org.apache.commons.httpclient.ChunkedInputStream.readCRLF(Unknown Source)
at org.apache.commons.httpclient.ChunkedInputStream.nextChunk(Unknown Source)
at org.apache.commons.httpclient.ChunkedInputStream.read(Unknown Source)
at java.io.FilterInputStream.read(FilterInputStream.java:133)
at org.apache.commons.httpclient.AutoCloseInputStream.read(Unknown Source)
at java.io.FilterInputStream.read(FilterInputStream.java:107)
at org.apache.commons.httpclient.AutoCloseInputStream.read(Unknown Source)

Since this was a windows machine and the requests passed via localhost, another try was to use a different network interface. The result was similar.

After some searching I found a nice tool: http://www.netresec.com/?page=RawCap which does not require any install and can be used even on localhost to generate a pcap compatible file which can then be inspected in wireshark.
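
Capturing the loopback traffic is then a one-liner (arguments from RawCap's usage as I recall it; run it as administrator):

RawCap.exe 127.0.0.1 localhost.pcap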

The result was strange. Opening the capture in wireshark on my machine showed [7540 bytes missing in capture file] in the TCP stream. This corresponded to a lot of [TCP ZeroWindow] and [TCP ZeroWindowProbe] packets.


Since this was a VMWare install and I had previously had some trouble with vmware software switches, I assumed this was related to the network card config; however, it also happened on localhost.

After some more investigation I realized this was only happening when several requests were made in parallel. I confirmed this by looking at the code.

The code contained a worker pool. Each worker/thread constructed the URL, then returned an InputStream:

// each worker returned a live stream which was consumed only later
DataSource source = new URLDataSource(reportUrl);
return source.getInputStream();

However, all the results were handled in sequence. While one URL's InputStream was being read, the server continued to send data on all the other requests, but that data was not read on the client side fast enough. As a result, the TCP window indeed dropped to 0 and the strange errors occurred.

Of course the solution was to fully read the data in each worker.
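
A minimal sketch of that fix, using a hypothetical readFully helper (the original worker code is not shown in full here): each worker drains its stream into memory before handing the result over, so no connection is left half-read.

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.net.URL;

public class ReportWorker {
    // Hypothetical helper: read the whole response inside the worker so the
    // server is never stalled by an unread connection (no TCP ZeroWindow).
    static byte[] readFully(URL reportUrl) throws IOException {
        InputStream in = reportUrl.openStream();
        try {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buffer = new byte[8192];
            int n;
            while ((n = in.read(buffer)) != -1) {
                out.write(buffer, 0, n);
            }
            return out.toByteArray(); // workers now return bytes, not live streams
        } finally {
            in.close();
        }
    }
}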

Searching for signal

For the last few years, one of the tools I have used a lot is a Huawei E587 modem. It's a great little device which gave me a lot of freedom. Even if it is quite old, it outperforms, even without an external antenna, any smartphone I have used for tethering, especially my new Samsung Galaxy S5 Neo which, as a parenthesis, has some of the poorest software I have ever seen; it reminds me of a circa-2000 windows pre-installed on a laptop and filled with junkware.

However, as with many other devices, the reporting of signal strength is very simplistic. My goal was to be able to identify the best spot for the external antenna, defined by the best signal strength.


Running chrome in docker with audio

The goal is to run google-chrome in a docker container with audio support. I did this while trying to get skype.apk to run in ARChon, since skype for linux does not support conferencing anymore. Even if running skype in ARChon did not seem to work, chrome runs flawlessly with audio support via pulse.

So here is the Dockerfile:

FROM ubuntu:14.04
MAINTAINER len@len.ro
 
RUN apt-get update && apt-get install -y wget pulseaudio && echo "deb http://dl.google.com/linux/chrome/deb/ stable main" > /etc/apt/sources.list.d/google-chrome.list && wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | apt-key add - && apt-get update && apt-get install -y google-chrome-stable
 
RUN rm -rf /var/cache/apt/archives/*
 
RUN useradd -m -s /bin/bash chrome
 
USER chrome
ENV PULSE_SERVER /home/chrome/pulse
ENTRYPOINT [ "google-chrome" ]

You can build your container using:

docker build -t len/chrome .

Then run it using:

docker run -it -v /tmp/.X11-unix:/tmp/.X11-unix -e DISPLAY=$DISPLAY -v $HOME/Downloads/chrome:/home/chrome/Downloads -v /run/user/$UID/pulse/native:/home/chrome/pulse -v /dev/shm:/dev/shm --name chrome len/chrome
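
If chrome cannot open the display from inside the container, you may need to first allow local connections to the X server (my assumption; this step was not part of the original setup):

xhost +local: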

 

Oracle: drop all schema contents

Purpose: drop all schema contents without dropping the user.

DECLARE
BEGIN
  FOR r1 IN ( SELECT 'DROP ' || object_type || ' ' || object_name
                     || DECODE ( object_type, 'TABLE', ' CASCADE CONSTRAINTS PURGE' )
                     || DECODE ( object_type, 'TYPE', ' FORCE' ) AS v_sql
                FROM user_objects
               WHERE object_type IN ( 'TABLE', 'VIEW', 'PACKAGE', 'TYPE', 'PROCEDURE', 'FUNCTION', 'TRIGGER', 'SEQUENCE' )
               ORDER BY object_type, object_name ) LOOP
    BEGIN
      EXECUTE IMMEDIATE r1.v_sql;
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE(SQLERRM);
    END;
  END LOOP;
END;
/
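
Since failed drops are only reported via DBMS_OUTPUT, enable server output before running the block (assuming SQL*Plus or a compatible client):

SET SERVEROUTPUT ON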

Ensure rPi connectivity

The problem: make sure I can connect to my raspberry pi B+ even if no network is available or the network changes.

The idea: set a static IP.

First some information:

  • running raspbian 8.0 (cat /etc/issue)
  • there is no need for a crossover UTP cable; if you connect directly to the device you can use a normal cable
  • IP configuration is delegated from /etc/network/interfaces to the dhcpcd daemon; this is why eth0 is set to manual

I did not want to break the default config, just to ensure the device will be visible. So I added an aliased (virtual) interface with a fixed IP:

 

# Include files from /etc/network/interfaces.d:
source-directory /etc/network/interfaces.d
 
auto lo
iface lo inet loopback
 
iface eth0 inet manual
 
auto eth0:0
allow-hotplug eth0:0
iface eth0:0 inet static
address 10.13.0.201
netmask 255.255.255.0
network 10.13.0.0
 
allow-hotplug wlan0
iface wlan0 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
 
allow-hotplug wlan1
iface wlan1 inet manual
    wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf

This is my /etc/network/interfaces. You can now use a normal UTP cable to connect directly to the PI, or over the LAN, by setting an IP in the same subnet:

ifconfig eth0 10.13.0.1 up
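
On machines without ifconfig, the iproute2 equivalent (my addition, not part of the original setup) would be:

sudo ip addr add 10.13.0.1/24 dev eth0 && sudo ip link set eth0 up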

Please note that if you are on ubuntu and have a NetworkManager-controlled interface, you might need to disable auto-control by editing /etc/NetworkManager/NetworkManager.conf (see the unmanaged-devices section):

[main]
plugins=ifupdown,keyfile,ofono
dns=dnsmasq
 
[ifupdown]
managed=false
 
[keyfile]
unmanaged-devices=mac:xx:xx:xx:xx:xx:xx

 

 

From monolithic single platform apps, to “write once, run everywhere”, (back) to … ?

‘A “line-of-business application” is one set of critical computer applications perceived as vital to running an enterprise’, as Wikipedia defines it. It might not always include cutting-edge technical innovation, but it involves a lot of functional knowledge of business processes, thus becoming critical for the well-being of an enterprise and, as a result, having a very long life span.

The first application of this type I worked on ran on Sun machines (OMG, how old I am! :). The architecture was simple and monolithic: the application ran on only one OS and connected directly to the DB. I think it is still in operation today somewhere. Trying to port it to NT at some point was more or less a disaster.

The second one I worked on had an HTML interface based heavily on tables and rarely on JS, developed using java and hibernate (very new at that point). The multi-tier architecture involved a server and an HTML client which ran in a browser; deployment and upgrade of the client were instant, and the possibilities of connectivity were greatly improved. However, the interface was basic and standardized. It is still in operation today and, due to the simple HTML used, we stepped into browser-hell only a couple of times.

The third one was designed around 2005, with a Flex interface and java server-side. At this point the code was ‘truly’ write once, run everywhere. The architecture was similar to the one in the previous case, but the interface had no limits, with a great look and everything a “native” interface could bring. It ran flawlessly on all flash-supporting browsers. It took a few years to develop, it takes up to a year to implement at a client site, and it will be buried much sooner than expected by the death of the “so much hated flash” (even if some clients still run IE6).

Now I am looking for the set of technologies for the next application of this type. I will be working with these technologies hopefully for some good years, and I have hit a wall on the interface side. Instead of writing code for a “virtual machine” running on a variety of hosts (browsers), I am facing the possibility of writing a variety of code running on a variety of hosts. How is this anything but a technological regression? I know the arguments against the “write once, run everywhere” paradigm once used by Sun for its java, but this is a damn gui, nothing else.
Yes, there are subtle bugs and security issues, but how can you compare those with the security issues involved in having n versions of the same application? With something like GWT or Vaadin, you add the huge complexity of handling all the browsers in your application. Consider the security issues involved in patching many such applications instead of a single ‘vm’. Or maybe I should not even write a web based application at all and revert to a situation similar to the one from the first case I described, with completely different branches for different platforms, written in different languages? How can this development effort be justified? Or, in order to ensure longevity and easy development, is the logical choice to pick a single platform and develop natively for it alone?