This year, I was given the opportunity to attend the French edition of EclipseCon. It was the first time I could attend an event of this kind, with keynotes, workshops, talks and demos on various topics, and informal conversations with other attendees and speakers.
Below is my feedback on my experience of EclipseCon France 2016.
From my point of view, IoT really was an important topic this year in Toulouse, most notably because of the presence of The Things Network, which had been invited by the Eclipse Foundation to lead a workshop and give the first keynote of the conference.
This Amsterdam-based team wants to federate people and communities all over the world around their worldwide network dedicated to connected devices, based on the LoRa technology.
So, what is LoRa? And what is The Things Network?
Taken from the LoRa Alliance:
LoRa is short for LoRaWAN™: Low Power Wide Area Network (LPWAN).
Without going into too much detail (which I don't master anyway), here's what I can say:
This technology is a particularly good fit as a network layer for small connected devices: it allows for localization and mobility of devices, keeps power consumption low, does not require a large pre-existing installation, and communication is bi-directional.
With a single antenna on top of a building in an urban area, or out in the fields, LoRa allows the connection of thousands of devices without loss, far more than a classic wireless gateway (WiFi, Bluetooth), while keeping energy consumption low and costing less than a 3G gateway.
A few examples:
LoRa competes with the Sigfox technology, which you're certainly aware of if you work or live in Toulouse like I do. Nonetheless, the two have different approaches: Sigfox's technology is proprietary and implies license costs, whereas the LoRa specification is free and open.
Some measurements made by The Things Network, as well as a few technical characteristics:
Devices connecting to a LoRa network can be sorted into three categories:
A) Uplink only: the device initiates the communication and the server can answer; B) The device and the network synchronize on scheduled receive windows for data exchange; C) The device is constantly listening for updates.
Of course, power consumption depends on the device's class.
The Things Network is a project born in Amsterdam; its goal is to build a worldwide, open, distributed network for IoT devices.
Following a crowdfunding campaign, the team started building a Web platform that allows devices to connect via brokers.
All the source code of The Things Network is open source and available on GitHub, in line with their commitment to enable broad adoption of these technologies.
In parallel, their business entity sells starter kits for education purposes as well as gateways, and takes part in workshops and trainings so that people can equip their homes, neighborhoods and towns and initiate a global coverage movement.
There are already communities around the world, mostly in Europe at this time. These communities were sometimes initiated by members of The Things Network team, who travel a lot to advocate for their project and the LoRa technology, and were sometimes created spontaneously by local people.
Everything’s in the title.
This workshop was intended for developers who are more familiar with backend technologies and wanted an introduction to the most famous front-end framework of the moment: AngularJS.
The workshop was articulated around an introduction to controllers, scopes, services and directives, based on a tiny example project.
As a full-stack developer, I think this workshop was well adapted to its audience, with an iterative process introducing new concepts one at a time on the tiny project.
The speakers chose TypeScript as the basis of their example, in order to keep their audience, more used to classical object-oriented architectures than to ECMAScript, in their comfort zone. My fellow attendees had the occasion to get their feet wet with a project architecture modeled around interfaces and implementations, with generic types and inheritance. On the other hand, they had to deal with the poor front-end development tooling of the Eclipse IDE.
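I didn't keep the workshop's code, but the kind of TypeScript structure they practiced looks roughly like this (the names here are my own invention, not the workshop's):

```typescript
// Sketch of the TypeScript idioms covered: an interface, a generic
// implementation, and a subclass adding behavior through inheritance.
interface Repository<T> {
  add(item: T): void;
  findAll(): T[];
}

class InMemoryRepository<T> implements Repository<T> {
  private items: T[] = [];
  add(item: T): void { this.items.push(item); }
  findAll(): T[] { return this.items.slice(); } // defensive copy
}

class Talk {
  constructor(public title: string, public speaker: string) {}
}

// Inheritance: a specialized repository with an extra query method.
class TalkRepository extends InMemoryRepository<Talk> {
  findBySpeaker(name: string): Talk[] {
    return this.findAll().filter(t => t.speaker === name);
  }
}

const repo = new TalkRepository();
repo.add(new Talk("Pipelines with Jenkins", "A. Speaker"));
console.log(repo.findBySpeaker("A. Speaker").length); // → 1
```

Familiar ground for anyone coming from Java: types, generics and interfaces behave much as they would expect, which is precisely why the speakers picked TypeScript over plain ECMAScript.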
Let’s talk about tooling.
This year we (the attendees) had a lot of choice regarding sessions about Eclipse tooling. Here is some feedback on the tools that were presented to me.
At the moment, JSDT 2.0 benefits from a new parser that is more powerful and robust than the previous one, and able to handle the ECMAScript 6 specification.
The other objectives center around the integration of package managers (npm / bower), task runners (grunt, gulp), support for Node.js, and additional tools for debugging and browser integration (Chrome).
For a few months now, I have been interested in Vagrant and Docker and have started to play with them, especially for development and integration environments. The idea of setting up, and sharing with teams and contributors, an immutable infrastructure and repeatable deployment processes is very exciting.
At the moment, the two plugins provide new “perspectives” to the Eclipse IDE, allowing you to do everything (well, almost everything) you can do on the command line:
I'm a little bit disappointed, even though I personally don't have an affinity for IDE integrations of command-line tools (I like my git separated from my IDE, for instance).
Anyway, it's worth mentioning that all the development on these plugins is the work of a small number of developers doing it for free, who, like everybody else, have to mow their lawns and fix their homes on weekends. So thanks guys, and keep up the good work.
I was very enthusiastic about attending this talk. I'm very interested in the possibility of managing build jobs in a pipeline-shaped way, and in “Continuous Delivery” and “Continuous Deployment” for that matter.
So what was this talk about? Mainly what I would describe as the orchestration, interruption and resilience of build jobs. Nothing less…
As the slides put it, what happens when you have fairly complicated build jobs, requiring operator input and possibly the ability to run in parallel?
Apart from creating multiple individual jobs that you can link or chain later on (never mind fail-fast and parallelism), there is no idiomatic way to do it.
This is the kind of problem that the “Jenkins Pipeline Plugin”, in reality a set of plugins, tries to solve. At its core is a DSL, the “Pipeline DSL”, which allows you to chain builds as steps and attach to each step a set of configuration options, such as parallelism.
It becomes possible, for example, to configure a few dozen look-alike jobs (with small variations) forming the basis (the dependencies) of a cascading build job, and to trigger the execution of all these builds in parallel before executing the next build job that depends on them, all while specifying that the complete build sequence should stop if any of the base builds fails (fail-fast).
For the record, the speaker showcased this example exactly, on a Docker Swarm build cluster provided by a cloud provider:
Don’t we all have one like that in our basement?
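From memory, the parallel plus fail-fast pattern described above looks roughly like this in the scripted Pipeline DSL (the branch names are made up for illustration):

```groovy
// Hypothetical sketch of parallel base builds with fail-fast.
def branches = [failFast: true] // abort all branches as soon as one fails
for (int i = 0; i < 3; i++) {
    def variant = "base-build-${i}" // local copy, captured by the closure
    branches[variant] = {
        node {
            // each look-alike base job runs on its own executor
            echo "Running ${variant}"
        }
    }
}
parallel branches // run all base builds concurrently
node {
    echo "Downstream build" // runs only if every base build succeeded
}
```

The `failFast: true` entry and the per-iteration local variable are the two details that are easy to get wrong when writing this by hand, which is part of what makes the DSL approach debatable.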
Anyway, I was really curious about the choice of a DSL instead of a fully declarative description of the build pipeline in configuration file(s). It's easy to envision how the orchestration of jobs and the description of each step could be expressed with simple data structures like maps and collections.
I did not get a clear answer, except that since most of the contributors are Java developers, a DSL (which really looks like Java, by the way) seemed like a natural choice.
I fully enjoyed my first time at a conference. The organization was perfect and the quality of the speakers very satisfying.
I will be pleased to come back next year for EclipseCon France 2017, and I recommend that any developer who has the opportunity to attend do so without hesitation.
All of the keynotes and talks were recorded on video and are available on the Eclipse Foundation's YouTube channel.