OSGi, Vaadin and Apache Bean Validation (JSR303)

March 18, 2015

Today I integrated javax.validation into Vaadin. Vaadin offers built-in support for it. But since I am using OSGi, I had to do it a bit differently.

The idea was the following:

  1. register an instance of javax.validation.ValidatorFactory as an OSGi service
  2. write a Vaadin Validator that will consume this OSGi service

And it was pretty fast. It took me about 30 min and I was done:

javax.validation.ValidatorFactory as an OSGi service

Therefore I created a bundle org.lunifera.runtime.jsr303.validation, put an activator in it and registered the OSGi service:

public class Activator implements BundleActivator {

	private ServiceRegistration<ValidatorFactory> registry;

	@Override
	public void start(BundleContext context) throws Exception {
		// provide the bean validation factory
		ValidatorFactory avf = Validation
				.byProvider(ApacheValidationProvider.class).configure()
				.buildValidatorFactory();

		registry = context.registerService(ValidatorFactory.class, avf, null);
	}

	@Override
	public void stop(BundleContext context) throws Exception {
		if (registry != null) {
			registry.unregister();
			registry = null;
		}
	}

}

Prepare VaadinBeanValidator

Vaadin comes with a com.vaadin.data.Validator implementation for JSR303 (bean validation) called BeanValidator. Since it is licensed under the Apache License v2, I copied the validator and changed how the implementation gets access to the ValidatorFactory:

// access the OSGi service registry to get an instance of ValidatorFactory
BundleContext context = FrameworkUtil.getBundle(BeanValidationValidator.class)
		.getBundleContext();
ServiceReference<ValidatorFactory> ref = context
		.getServiceReference(ValidatorFactory.class);
if (ref != null) { // the service may not be registered (yet)
	javaxBeanValidatorFactory = context.getService(ref);
}

It was really fast and straightforward to combine Vaadin and OSGi for javax.validation.

Available by P2

If you would like to use the org.lunifera.runtime.jsr303.validation bundle, it is available from the Lunifera P2 repository.

The feature is called org.lunifera.runtime.jsr303.validation – Lunifera runtime: javax.validation ValidationFactory provider. And it is licensed under the Eclipse Public License v1 (EPL).

So feel free to use it…

Hacking OSGi’s bundle resolving

February 19, 2015

Last week, we ran into troubles when deploying our system via Equinox P2: All of a sudden, our DSLs simply didn’t work after the installation of the system into a new Eclipse – the DSL projects didn’t request the Xtext nature, and Eclipse didn’t open the appropriate editor without displaying any error message. Big showstopper!

A look in the log revealed the following:

org.eclipse.e4.core.di.InjectionException: java.lang.LinkageError: 
loader constraint violation: when resolving overridden method
"org.eclipse.xtext.xbase.ui.contentassist.XbaseProposalProvider.getProposalFactory
(Ljava/lang/String;Lorg/eclipse/xtext/ui/editor/contentassist/ContentAssistContext;)
Lcom/google/common/base/Function;" the class loader (instance of
org/eclipse/osgi/internal/loader/EquinoxClassLoader) of the current class,
org/eclipse/xtext/xbase/ui/contentassist/XbaseProposalProvider, and its
superclass loader (instance of org/eclipse/osgi/internal/loader/EquinoxClassLoader),
have different Class objects for the type com/google/common/base/Function
used in the signature ...

So the root of the problem was different versions of Google Guava (which contains com.google.common.base.Function). Of course, OSGi supports running multiple versions of bundles at once – which we do in our system since Sirius uses Guava 15 and Mylyn uses Guava 18. No problem so far!

The problem

The problem arose because Xtext is open to a wide range of Guava versions. Unfortunately, the Xtext bundles were not wired to one Guava version consistently, but randomly to one of the two (depending on the order of resolution). This introduced incompatibilities between Xtext bundles, which prevented the system from working.

The problem: Xtext bundles were wired to different Guava versions, causing class loader problems within the Xtext class-space.

Searching for a way

Now what can be done?

One possible solution for this would be employing the OSGi “uses”-directive (good explanation here): This directive in the bundle headers of multi-bundle projects can ensure that the class space is consistent. The problem with this solution was that we would have had to fork Xtext in order to add the “uses”-directives. Since we are not (yet) ready and willing to do that, we reported a bug with Xtext. The good news is that the helpful guys over there reduced the likelihood of inconsistent bundle wiring by removing some reexports. The bad news: There is no waterproof solution at hand that will work in any case.
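For illustration, a “uses” constraint is declared in the exporting bundle’s manifest. The bundle and package names below are made up for this sketch (they are not actual Xtext headers); the directive tells the resolver that any client of org.example.editor must be wired to the same com.google.common.base as the exporter:

```
Export-Package: org.example.editor;version="1.0.0";
 uses:="com.google.common.base"
Import-Package: com.google.common.base;version="[15.0,19.0)"
```

With such constraints in place, the resolver refuses wirings that would split the class space instead of producing a LinkageError at runtime.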

Maybe – just maybe – upgrading to Xtext 2.8 might have helped. With our complex system, this would have taken two weeks and forced the same migration on our customer. No good!

 

Breakthrough: Hacking OSGi

So back to the planning table: What we want is to make sure that all Xtext bundles end up with the same version of Guava. So if we could influence the OSGi resolving process, we’d be fine! But to do that, the code that influences the resolving process needs to be loaded first – a chicken and egg problem: Who makes sure that the code that determines bundle resolution order is the first to be fired up?

A closer look at the OSGi specification showed us that there is a mechanism in place already: The Resolver Hook Service makes it possible to influence the resolver’s decisions by writing a system extension to the OSGi framework.

After some research about the details of the Resolver Hook Service, we came up with a system bundle fragment that is called whenever a bundle wants its dependencies resolved. This fragment is given a list of possible candidates for the bundle wiring. Now we can kick out the older versions if a dependency can be resolved by more than one version of a bundle, so only the newest one remains. And voilà: All of Xtext ended up with Guava 18.
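Stripped of the OSGi types, the core of our hook’s decision logic is simple: among all candidate bundles that could satisfy a dependency, drop everything but the newest version. The sketch below shows that logic in plain Java (class and method names are ours for illustration; in the real hook this runs inside ResolverHook.filterMatches on the candidate capabilities):

```java
import java.util.Collection;
import java.util.Collections;
import java.util.List;

public class NewestVersionFilter {

	// Compare dotted version strings numerically, segment by segment
	// (e.g. "18.0.0" > "15.0.0" > "9.1"; missing segments count as 0).
	public static int compareVersions(String a, String b) {
		String[] as = a.split("\\."), bs = b.split("\\.");
		int len = Math.max(as.length, bs.length);
		for (int i = 0; i < len; i++) {
			int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
			int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
			if (ai != bi) return Integer.compare(ai, bi);
		}
		return 0;
	}

	// Given the versions of all candidate bundles for one dependency,
	// keep only the newest one - the rest are removed from the candidate
	// collection, so the resolver has no choice left.
	public static List<String> keepNewest(Collection<String> candidateVersions) {
		String newest = null;
		for (String v : candidateVersions) {
			if (newest == null || compareVersions(v, newest) > 0) newest = v;
		}
		return newest == null ? Collections.<String> emptyList()
				: Collections.singletonList(newest);
	}
}
```

Applied to our Guava situation, the candidates 15.0.0 and 18.0.0 collapse to 18.0.0 for every Xtext bundle, which is exactly the consistent wiring we needed.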

Our solution: Using the Resolver Hook Service, we can influence the decisions made by the Resolver and control which bundles are used to satisfy dependencies of other bundles.

 

Deploying the solution

We still were faced with one minor problem: OSGi needs to be made aware of the system extension fragment at startup. Locally, this is no problem: One can either add “osgi.framework.extensions=…” to the $ECLIPSE_HOME/configuration/config.ini, to the -vmargs section of $ECLIPSE_HOME/eclipse.ini or pass it as an argument to the VM (-Dosgi.framework.extensions=…).
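For example, the config.ini variant is a single property (the value below uses the symbolic name of our system extension bundle; the exact value depends on where the fragment is installed):

```
# $ECLIPSE_HOME/configuration/config.ini
osgi.framework.extensions=org.lunifera.runtime.systemextension
```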

But how to do this automatically during a P2 installation? Well, as Dennis Hübner put it:

p2.inf is your friend

Using a p2.inf located next to the feature.xml of the feature containing the bundle fragment, it is easy to update the $ECLIPSE_HOME/configuration/config.ini during the installation process. Yayy, it works!
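A sketch of what such a p2.inf can look like, using the Equinox “eclipse” touchpoint’s setProgramProperty instruction to write the property into config.ini (the property value is illustrative; check the p2 touchpoint instruction reference for the exact syntax your p2 version expects):

```
# p2.inf, located next to feature.xml
instructions.configure = \
  org.eclipse.equinox.p2.touchpoint.eclipse.setProgramProperty( \
    propName:osgi.framework.extensions, \
    propValue:org.lunifera.runtime.systemextension);
```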

 

Our code

The code we came up with is available under the EPL. It can be found in our lunifera-runtime repo on GitHub (development branch, relevant folders: org.lunifera.runtime.systemextension and org.lunifera.runtime.feature.resolverhooks).

 

Originally posted at Lunifera.com

 

Lunifera MQTT Xmas-Tree is online

December 1, 2014

With December just begun, we are happy to announce that the 2014 incarnation of our MQTT Xmas Tree is now online. Feel free to try it out here: The tree is standing in our office, and everybody can change the lights on it, move the Xmas star and have a tiny angel fly around ;-)

How did we do it? Well, it consists of three main parts – not counting the tree itself ;-)

  • a RaspberryPi that controls the LED band on the tree and the movement of the star and the Xmas angel. On this RaspberryPi, we have the Mihini framework running an MQTT client. The MQTT client ties together hardware (GPIO pins) and software (MQTT messages). In order to retrieve MQTT messages, the client uses the Lua implementation of Eclipse Paho. Messages containing valid Xmas Tree commands are then translated to the appropriate GPIO actions (controlling the LED band via an IR diode, powering the motor for the Xmas angel via a transistor and triggering an Arduino Uno that generates a PPM signal for the servo motor that moves the star).
  • a second RaspberryPi that has a webcam attached and serves a video stream via motion and apache2 (we loosely followed this great tutorial to get this running). With DDNS, this stream can be reached from the outside world.
  • a Vaadin Web UI featuring buttons that send MQTT messages with commands for our tree to our MQTT broker (to be picked up by the first RaspberryPi) and displaying the video stream so users can watch the effects of their actions.

Getting this contraption to work was great fun — a great way to spend one’s spare time. A nice team-building activity. And a perfect counterweight to tedious debugging sessions ;-)

Of course, we are going to open-source the tree command software on GitHub. By the way, here is an overview picture of the hardware we used for controlling the tree:

Christmastree-blog

 

Happy treeing and have a joyous holiday season!

The Lunifera Crew from Vienna

http://www.lunifera.com

Vaadin 7.3 – Valo, OSGi and e4

September 2, 2014

I got the chance to see a preview of Vaadin 7.3 a few days ago, and I am really impressed by the new features it brings.

Until now, I have worked with the Vaadin Reindeer theme and tried to customize it. But since I am a Java developer, I do not have particularly deep knowledge of CSS3 and had a hard time with it. That’s why I am really looking forward to Vaadin 7.3 and am going to upgrade my customer projects in the next few days. The new Valo theme is exactly what I have been trying to build myself: a responsive and highly customizable theme. There are many different styles, and most of them meet my objectives without my having to change anything in the CSS.

And the best thing about Vaadin 7.3 is that it comes with a high-end Sass compiler. Over the last few days I have been reading a lot about Sass, and it is a perfect match for Java developers. Using this very intuitive styling language, Vaadin 7.3 compiles the style information into proper CSS3. Really crazy… For me, Sass is something like a DSL for CSS3. Thus, I do not have to schedule my CSS training anymore — I just have to use Sass :D
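To make the “DSL for CSS3” point concrete, here is a tiny, made-up Sass snippet (the selector and variable names are illustrative, not Valo internals). Variables and nesting let you state intent once, and the compiler expands it into plain CSS3:

```scss
/* a color defined once, reused everywhere */
$brand-color: #197de1;

.login-panel {
  background: $brand-color;

  /* nesting: compiled to ".login-panel .caption { ... }" */
  .caption {
    color: lighten($brand-color, 40%);
  }
}
```

The lighten() call is one of Sass’s built-in color functions — something plain CSS3 has no shorthand for.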

OSGi and Vaaclipse

During the next days, I will “Run a first Vaadin 7.3 OSGi application”. And I am sure right now: it is a perfect match.

Running a Vaadin 7.3 OSGi application is also the basis for migrating the Vaaclipse project to Valo. The Vaaclipse project is a rendering layer that renders the e4 workbench with Vaadin. See http://semanticsoft.github.io/vaaclipse/.

For details about Vaadin 7.3 just click here.

I have also added two screenshots of the new theme:

Metro-Theme

Valo-1

Dark-Theme

Valo-2

 

Going to keep you informed…

Best, Florian

Last Sharky talk

September 1, 2014

We gave our first Sharky talk in Darmstadt, Germany, one year ago. Now, after nine more talks, we have decided to stop the project. We showed our Sharky in many different cities: Darmstadt, Vienna, San Francisco, Ludwigsburg, Mainz, Zurich and Munich.

Now we are looking for new project ideas, hopefully as good as the Sharky project was.

 

In this video you can see our last Sharky presentation at the IoT Meetup in Vienna (the video is in German).

www.youtube.com/embed/sL5ZLUTezHI?list=PLE3pymn7PXXR5qy1jqL2LQWpog0_4mm8X

See you soon,

Sharky team…

Sharky at EclipseCon Europe

November 2, 2013

Klemens and I were at EclipseCon Europe in Ludwigsburg and got the chance to demo Sharky there. When we arrived and saw the big room our talk was assigned to, I was speechless. It wasn’t a room, but rather a hall. Really impressive. I had never talked on such a big stage.

The talk was really nice; in fact, it was the funniest talk I have ever given. You already know that Sharky is a big wild one with a mind of his own. So this nasty fish refused to follow some of our commands. For instance, Sharky decided to fly higher and higher without any interest in coming back down to us. (Well, it was not Sharky’s fault: a loose cable blocked the diving mode.) So I sent out Klemens to catch Sharky again :D But how do you catch a Sharky flying at a height of 10 meters? Well, I still don’t know, but ask Klemens, because he managed it.

In the end, we could demo everything we had planned. And it seemed that the attendees really loved Sharky and his little accidental misbehaviour.

For me, it was one of the talks I will never forget. It was sooo much fun, and a lot of things happened to laugh about. Two people made a movie of the talk and I am looking forward to its release…

Here you can see a little movie by Benjamin Cabe. Jonas Helming volunteered to remote-control Sharky via a 3D sensor.

And here is an image taken during the preparation of Sharky before the talk, by my friend @ekkescorner:

Ece2013_SharkyPrepare

Thanks a lot to Jelena from the Eclipse Foundation. She helped a lot with preparing things…

Sharky – Jnect on BeagleBone with Eclipse Paho

October 14, 2013

Over the weekend, Klemens and I worked hard on a Jnect-M2M integration.

If you look at the image from “Sharky – his evolution progress“, you can see that OpenNI and NiTE run on Ubuntu hardware and not on a BeagleBone Black. The problem is that NiTE does not support ARM processors yet.

Ubuntu running OpenNI and NiTE

So we got the idea to track the coordinates of parts of the human body (joints) on the Ubuntu machine using OpenNI and to send the coordinates to an external M2M server running on a BeagleBone.

The information sent by the OpenNI Java wrapper to the M2M server (topic=skeleton) looks like this:

skeleton {
   joint = Left-Hand
   x = 123.45
   y = 211.77
   z = 86.78
}
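On the receiving side, a message in this ad-hoc format can be parsed with a few lines of Java. The sketch below is ours, not Jnect API (class and field names are illustrative):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SkeletonMessageParser {

	// Immutable holder for one tracked joint and its 3D coordinates.
	public static final class Joint {
		public final String name;
		public final double x, y, z;

		Joint(String name, double x, double y, double z) {
			this.name = name;
			this.x = x;
			this.y = y;
			this.z = z;
		}
	}

	// Matches "key = value" pairs like "joint = Left-Hand" or "x = 123.45".
	private static final Pattern FIELD = Pattern
			.compile("([\\w-]+)\\s*=\\s*([\\w.-]+)");

	public static Joint parse(String payload) {
		String joint = null;
		double x = 0, y = 0, z = 0;
		Matcher m = FIELD.matcher(payload);
		while (m.find()) {
			String key = m.group(1), value = m.group(2);
			switch (key) {
			case "joint": joint = value; break;
			case "x": x = Double.parseDouble(value); break;
			case "y": y = Double.parseDouble(value); break;
			case "z": z = Double.parseDouble(value); break;
			}
		}
		return new Joint(joint, x, y, z);
	}
}
```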

Jnect with M2M Client

We used the Jnect project and added M2M support to it. So Jnect no longer depends solely on the Microsoft Kinect library, but may also use an M2M connector to get skeleton information from the M2M server.

Jnect subscribes to the topic=skeleton on the M2M server and receives the coordinates of the joints tracked by OpenNI. With some glue code, it was simple to build the EMF body model defined by Jnect. Since Jnect also provides an API to register GestureDetectors, we could use it to add a LeftHandUpGestureDetector.

Finally, we installed the Jnect bundles in an Equinox OSGi runtime running on a BeagleBone Black.

What happens in detail

The ASUS 3D sensor is connected to the Ubuntu machine. Through a Java wrapper we implemented, the sensor’s pictures are fed to OpenNI, and OpenNI passes the skeleton information back to the wrapper. We put the resulting coordinates into a data structure and send them to the M2M server using Eclipse Paho.

The M2M server receives the messages and passes them on to the Jnect M2M client running on the BeagleBone Black.

Jnect parses that information and adjusts the EMF body model. Changes to the body model invoke the LeftHandUpGestureDetector. If the gesture is matched by the changes of the coordinates sent from OpenNI, a message is written to the console via System.out.

See details here

Sharky – his evolution progress

October 12, 2013

We have already demonstrated that Sharky can be properly controlled by a Vaadin web UI. Now we are going to help Sharky along in his natural evolution.

The main idea

We would like to use a natural interface to remote-control two sharkies at the same time. A 3D sensor observes the movements of the left and the right hand. Gestures of the left hand remote-control sharky-1, and gestures of the right hand control sharky-2.

The technical solution should look like the image below.

SharkyEvolutes

The sensor

So we bought an Xbox Kinect. The problem was that the Kinect SDK only supports Windows, and OpenNI dropped Linux support for license reasons. Again we explored the web and found the Asus Xtion Pro Live: a 3D sensor built for developers, with native OpenNI support. The sensor captures 3D images and sends them to OpenNI. NiTE, an OpenNI plugin, provides a Java API for skeleton and hand tracking. Our first idea was to install OpenNI and NiTE on a BeagleBone, but NiTE does not support ARM processors yet. So we adjusted our architecture again and installed OpenNI and NiTE on an x86 Ubuntu machine. Some Java glue code allows us to track the position of a hand in three dimensions. Since we are addicted to M2M technologies, we do not process that information further on the Ubuntu device, but send it via Eclipse Paho to an M2M server using the MQTT protocol.

M2M-Server

The M2M server (a Mosquitto server) is running on a BeagleBone Black and acts as a publish/subscribe broker. Clients can subscribe to topics and receive the messages sent to those topics. The Ubuntu device sends all messages to the “handinfo” topic on the M2M server.

Jnect Bodymodel

A very nice project called Jnect, provided by Jonas Helming and Maximilian Kögel (EclipseSource Munich), implements a body model based on EMF. It also supports gesture recognition; your own gesture handlers can be registered using extension points. The idea is to install Eclipse Equinox on an additional BeagleBone. Using Paho, this BeagleBone connects to the M2M server and subscribes to the topic “handinfo”, so every change of the human hands in any of the three dimensions is sent to it. With some glue code, the EMF-based body model is prepared.

In a next step, we have to add GestureHandlers. These are notified about changes in the body model and calculate whether a gesture was detected, for instance “left hand up”, “right hand down”, “hands clapped”, … The detected gestures are sent to the M2M server on the topic “gestures”.
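A gesture handler of the “left hand up” kind essentially compares successive coordinates against a threshold. Here is a simplified, Jnect-independent sketch (class name, threshold and the assumption that y grows upwards are ours, for illustration only):

```java
public class LeftHandUpDetector {

	private final double threshold; // minimum upward movement in sensor units
	private Double previousY;       // y coordinate of the last update, if any

	public LeftHandUpDetector(double threshold) {
		this.threshold = threshold;
	}

	// Called for every update of the left hand's y coordinate.
	// Returns true when the hand moved upwards by more than the threshold
	// since the previous update (assuming y grows upwards).
	public boolean onLeftHandMoved(double y) {
		boolean detected = previousY != null && (y - previousY) > threshold;
		previousY = y;
		return detected;
	}
}
```

A real handler would of course smooth the coordinate stream and look at a window of samples rather than a single delta, but the principle is the same.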

Sharky controller

These gestures are the base information for the Sharky controller. The Sharky controller is also installed on a BeagleBone Black and is based on Mihini. Using Lua, it connects to the M2M server and subscribes to the topic “gestures”.

So if the user raises their right hand, the SharkyController gets the information “right hand up”. The Sharky controller uses that information to calculate the required GPIO outputs. The GPIOs are connected to the remote control, and Sharky follows the commands given by the hand movements.

Planned commands – left hand controls sharky-1 and right hand controls sharky-2

  • hand left -> “sharky turn left”
  • hand right -> “sharky turn right”
  • hand up -> “sharky raise”
  • hand down -> “sharky dive”
  • hand towards the sensor -> “sharky get faster”
  • hand away from sensor -> “sharky get slower”
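The gesture-to-command mapping above is essentially a lookup table. In Java it could look like this (the class is a sketch of ours; the command strings are taken from the list above, with the gesture keys slightly normalized):

```java
import java.util.HashMap;
import java.util.Map;

public class GestureCommandMapper {

	private static final Map<String, String> COMMANDS = new HashMap<>();
	static {
		COMMANDS.put("hand left", "sharky turn left");
		COMMANDS.put("hand right", "sharky turn right");
		COMMANDS.put("hand up", "sharky raise");
		COMMANDS.put("hand down", "sharky dive");
		COMMANDS.put("hand towards sensor", "sharky get faster");
		COMMANDS.put("hand away from sensor", "sharky get slower");
	}

	// Returns the Sharky command for a detected gesture, or null if unknown.
	public static String commandFor(String gesture) {
		return COMMANDS.get(gesture);
	}
}
```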

So we have a lot of work ahead of us to implement everything properly. Let’s see what happens…

How to install the Mosquitto-M2M Server on a BeagleBoneBlack

October 3, 2013

As you might know, we are going to present our flying shark at this year’s EclipseCon Europe. Since we won’t have much time there for setting up our equipment (and clearing the stage after the show), we are currently working on scaling things down – literally: Instead of a laptop running the M2M/MQTT server, we are going to make use of a second BeagleBoneBlack (BBB). Of course, it would be possible to have the M2M server running on the same BBB as the Mihini framework that commands our sharky, but we decided to keep it on dedicated hardware. After all, the M2M server in a real-world application might sit on another continent, and it is our goal to demonstrate M2M communication over the net and not on localhost.

Long story short, today we have set up our second BBB with the Mosquitto M2M server running on top of an embedded Linux. Since we had positive experience with the armhf flavour of Debian on our first BBB (with the Mihini framework), we decided to use it on our second BBB as well. After all, Mosquitto is provided as a Debian package …

The installation process was pretty straightforward – an excellent how-to can be found here. After writing the live image to a microSD card, pressing the USER/BOOT button on the BBB while disconnecting and reconnecting power causes the BBB to boot from the microSD card. Note that this worked only when we powered the BBB by USB (and not by adapter).

Connecting the BBB to our router, it obtained an IP address and offered a rudimentary browser terminal:

connection

login

As described in the how-to, the user was “debian” with the password “temppwd”.

When logged in, we downloaded the most recent debian-armhf image from here to the /tmp folder:

wgetting

The next step was to flash this image to the internal eMMC storage of our BBB:

flashing

This took some five minutes. After successfully flashing the image, we shut down the BBB, removed the microSD card and restarted it. Voilà, it came up running an embedded Debian. The only noteworthy thing was the password for the default user: in contrast to the live image on the microSD card, this system uses “debian”.

What remained to be done was the installation of our M2M server. Since the Mosquitto broker is provided as a Debian package, this was as simple as

sudo apt-get install mosquitto

And voilà – Mosquitto is installed and running as a service on our BBB, listening on port 1883, the standard MQTT port:

portopen

Our next step will be to make the first BBB communicate with the second one and to subscribe to MQTT topics (and to publish of course). And after that we will set up a third BBB and use it as an additional source of commands (other than the web interface) … cool stuff coming up!

Sharky – at EclipseTestingDay

September 26, 2013

Today I returned from the Eclipse Testing Day in Darmstadt, organized by BREDEX. In conclusion, I have to say that the Testing Day is one of my favorite conferences. I enjoyed the day a lot.

The BREDEX team is very nice, the conference was organized properly, and the speakers were real experts in their fields.

Before going to the Testing Day I did not have any idea about mobile testing, since I am not involved in mobile development. I thought it might be very similar to common testing. But there are so many aspects to testing mobile devices. The talk about “energy testing”, i.e. rating apps by their energy consumption, was really enlightening. The insights into mobile fragmentation, the problems it causes and how to target mobile testing at various devices are something I will carefully remember for future projects.

I am looking forward to going to the Testing Day again next year.

As I mentioned in one of my last blog posts, we (the Sharky team) got the chance to give a keynote at the Eclipse Testing Day. It was my first keynote, so I was really excited. And it was a lot of fun to demo Sharky and to share our visions about M2M with other people. The following image shows Sharky flying during the keynote.

eclipseTestingDay_shark

So if you are looking for a really informative and fun testing conference, visit the Eclipse Testing Day!

